WorldWideScience

Sample records for facilitate improved estimations

  1. Improved Radiation Dosimetry/Risk Estimates to Facilitate Environmental Management of Plutonium-Contaminated Sites

    Energy Technology Data Exchange (ETDEWEB)

    Scott, Bobby R.; Tokarskaya, Zoya B.; Zhuntova, Galina V.; Osovets, Sergey V.; Syrchikov, Victor A.; Belyaeva, Zinaida D.

    2007-12-14

    This report summarizes 4 years of research achievements in this U.S. Department of Energy (DOE) Office of Science (BER) project. The research described was conducted by scientists and supporting staff at Lovelace Respiratory Research Institute (LRRI)/Lovelace Biomedical and Environmental Research Institute (LBERI) and the Southern Urals Biophysics Institute (SUBI). All project objectives and goals were achieved. A major focus was on obtaining improved cancer risk estimates for inhalation exposure to plutonium (Pu) isotopes in the workplace (DOE radiation workers) and the environment (public exposures to Pu-contaminated soil). A major finding was that low doses and dose rates of gamma rays can significantly suppress cancer induction by alpha radiation from inhaled Pu isotopes. The suppression relates to stimulation of the body's natural defenses, including immunity against cancer cells and selective apoptosis, which removes precancerous and other aberrant cells.

  2. Primary care quality improvement from a practice facilitator's perspective.

    Science.gov (United States)

    Liddy, Clare E; Blazhko, Valeriya; Dingwall, Molly; Singh, Jatinderpreet; Hogg, William E

    2014-02-03

    Practice facilitation has proven to be effective at improving care delivery. Practice facilitators are healthcare professionals who work with and support other healthcare providers. To the best of our knowledge, very few studies have explored the perspective of facilitators. The objective of this study was to gain insight into the barriers that facilitators face during the facilitation process and to identify approaches used to overcome these barriers to help practices move towards positive change. We conducted semi-structured interviews with four practice facilitators who worked with 84 primary care practices in Eastern Ontario, Canada over a period of five years (2007-2012). The transcripts were analyzed independently by three members of the research team using an open coding technique. A qualitative data analysis using immersion/crystallization technique was applied to interpret the interview transcripts. Common barriers identified by the facilitators included accessibility to the practice (e.g., difficulty scheduling meetings, short meetings), organizational behaviour (team organization, team conflicts, etc.), challenges with practice engagement (e.g., lack of interest, lack of trust), resistance to change, and competing priorities. To help practices move towards positive change the facilitators had to tailor their approach, integrate themselves, be persistent with practices, and exhibit flexibility. The consensus on redesigning and transforming primary care in North America and around the world is rapidly growing. Practice facilitation has been pivotal in materializing the transformation in the way primary care practices deliver care. This study provides an exclusive insight into facilitator approaches which will assist the design and implementation of small- and large-scale facilitation interventions.

  3. An Improved Cluster Richness Estimator

    Energy Technology Data Exchange (ETDEWEB)

    Rozo, Eduardo; /Ohio State U.; Rykoff, Eli S.; /UC, Santa Barbara; Koester, Benjamin P.; /Chicago U. /KICP, Chicago; McKay, Timothy; /Michigan U.; Hao, Jiangang; /Michigan U.; Evrard, August; /Michigan U.; Wechsler, Risa H.; /SLAC; Hansen, Sarah; /Chicago U. /KICP, Chicago; Sheldon, Erin; /New York U.; Johnston, David; /Houston U.; Becker, Matthew R.; /Chicago U. /KICP, Chicago; Annis, James T.; /Fermilab; Bleem, Lindsey; /Chicago U.; Scranton, Ryan; /Pittsburgh U.

    2009-08-03

    Minimizing the scatter between cluster mass and accessible observables is an important goal for cluster cosmology. In this work, we introduce a new matched filter richness estimator, and test its performance using the maxBCG cluster catalog. Our new estimator significantly reduces the variance in the $L_X$-richness relation, from $\sigma_{\ln L_X}^2 = (0.86 \pm 0.02)^2$ to $\sigma_{\ln L_X}^2 = (0.69 \pm 0.02)^2$. Relative to the maxBCG richness estimate, it also removes the strong redshift dependence of the richness scaling relations, and is significantly more robust to photometric and redshift errors. These improvements are largely due to our more sophisticated treatment of galaxy color data. We also demonstrate that the scatter in the $L_X$-richness relation depends on the aperture used to estimate cluster richness, and introduce a novel approach for optimizing said aperture which can be easily generalized to other mass tracers.
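
    As a rough illustration of the quantity being minimized, the sketch below fits a power law to synthetic richness and X-ray luminosity data and reads off the residual scatter in ln L_X. All numbers and variable names are invented for illustration; this is not the maxBCG pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cluster sample: a power-law relation between richness N
# and X-ray luminosity L_X, plus lognormal scatter.
n_clusters = 500
ln_richness = rng.uniform(np.log(10), np.log(100), n_clusters)
true_slope, true_norm, sigma_lnLx = 1.6, 0.5, 0.69
ln_Lx = true_norm + true_slope * ln_richness + rng.normal(0, sigma_lnLx, n_clusters)

# Fit ln L_X as a linear function of ln richness and measure the
# residual scatter -- the quantity the abstract reports as sigma_lnLx.
slope, intercept = np.polyfit(ln_richness, ln_Lx, 1)
residuals = ln_Lx - (intercept + slope * ln_richness)
print(f"estimated scatter sigma_lnLx = {residuals.std(ddof=2):.3f}")
```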

  4. An Improved Cluster Richness Estimator

    CERN Document Server

    Rozo, Eduardo; Koester, Benjamin P; McKay, Timothy; Hao, Jiangang; Evrard, August; Wechsler, Risa H; Hansen, Sarah; Sheldon, Erin; Johnston, David; Becker, Matthew; Annis, James; Bleem, Lindsey; Scranton, Ryan

    2008-01-01

    Minimizing the scatter between cluster mass and accessible observables is an important goal for cluster cosmology. In this work, we introduce a new matched filter richness estimator, and test its performance using the maxBCG cluster catalog. Our new estimator significantly reduces the variance in the $L_X$-richness relation, from $\sigma_{\ln L_X}^2 = (0.86 \pm 0.02)^2$ to $\sigma_{\ln L_X}^2 = (0.69 \pm 0.02)^2$. Relative to the maxBCG richness estimate, it also removes the strong redshift dependence of the richness scaling relations, and is significantly more robust to photometric and redshift errors. These improvements are largely due to our more sophisticated treatment of galaxy color data. We also demonstrate that the scatter in the $L_X$-richness relation depends on the aperture used to estimate cluster richness, and introduce a novel approach for optimizing said aperture which can be easily generalized to other mass tracers.

  5. Facilitation: A Novel Way to Improve Students' Well-being

    DEFF Research Database (Denmark)

    Adriansen, Hanne Kirstine Olesen; Madsen, Lene Møller

    2013-01-01

    In this article we analyze a project that used facilitation techniques, which are known from training in industry, to improve the study environment at a public research university in Denmark. In 2009, the project was initiated in one graduate program; and it has subsequently been modified and ins...

  6. Facilitation: a novel way to improve students' well-being

    DEFF Research Database (Denmark)

    Adriansen, Hanne Kirstine; Madsen, Lene Møller

    2013-01-01

    In this article we analyze a project that used facilitation techniques, which are known from training in industry, to improve the study environment at a public research university in Denmark. In 2009, the project was initiated in one graduate program; and it has subsequently been modified and ins...

  7. How Workflow Systems Facilitate Business Process Reengineering and Improvement

    Directory of Open Access Journals (Sweden)

    Mohamed El Khadiri

    2012-03-01

    This paper investigates the relationship between workflow systems and business process reengineering and improvement. The study is based on a real case study at the "Centre Régional d'Investissement" (CRI) of Marrakech, Morocco. The CRI is entrusted to coordinate various investment projects at the regional level. Our previous work has shown that a workflow system can be a basis for business process reengineering. However, for continuous process improvement, the system has been shown to be insufficient, as it fails to deal with the exceptions and problem resolutions that informal communications provide. However, when this system is augmented with an expanded corporate memory system that includes social tools, to capture informal communication and data, we are closer to a more complete system that facilitates business process reengineering and improvement.

  8. Improved KAM estimates for the Siegel radius

    Energy Technology Data Exchange (ETDEWEB)

    Liverani, C.; Turchetti, G.

    1986-12-01

    For the Siegel center problem the authors explore the possibility of improving the KAM estimates, with a view to possible extensions to Hamiltonian systems. The use of a suitable norm and explicit perturbative computations allow estimates to within a factor 2 of the Siegel radius for the quadratic map.

  9. Facilitating communication with patients for improved migraine outcomes.

    Science.gov (United States)

    Buse, Dawn C; Lipton, Richard B

    2008-06-01

    Effective communication is integral to good medical care. Medical professional groups, regulatory agencies, educators, researchers, and patients recognize its importance. Quality of medical communication is directly related to patient satisfaction, improvement in medication adherence, treatment compliance, other outcomes, decreased risk of malpractice, and increase in health care providers' levels of satisfaction. However, skill level and training remain problematic in this area. Fortunately, research has shown that medical communication skills can be successfully taught and acquired, and that improvement in communication skills improves outcomes. The American Migraine Communication Studies I and II evaluated the current state of health care provider-patient communication in headache care and tested a simple educational intervention. They found problematic issues but demonstrated that these areas could be improved. We review theoretical models of effective communication and discuss strategies for improving communication, including active listening, interviewing strategies, and methods for gathering information about headache-related impairment, mood, and quality of life.

  10. Facilitating Improved Writing among Students through Directed Peer Review

    Science.gov (United States)

    Crossman, Joanne M.; Kite, Stacey L.

    2012-01-01

    This study contributes to scant empirical investigation of peer critique of writing among heterogeneously grouped native and nonnative speakers of English, now commonplace in higher education. This mixed-methods study investigated the use of directed peer review to improve writing among graduate students, the majority of whom were nonnative…

  11. Data Fusion for Improved Respiration Rate Estimation

    Directory of Open Access Journals (Sweden)

    Gari D. Clifford

    2010-01-01

    We present an application of a modified Kalman-Filter (KF) framework for data fusion to the estimation of respiratory rate from multiple physiological sources which is robust to background noise. A novel index of the underlying signal quality of respiratory signals is presented and then used to modify the noise covariance matrix of the KF, which discounts the effect of noisy data. The signal quality index, together with the KF innovation sequence, is also used to weight multiple independent estimates of the respiratory rate from independent KFs. The approach is evaluated both on a realistic artificial ECG model (with real additive noise) and on real data taken from 30 subjects with overnight polysomnograms, containing ECG, respiration, and peripheral tonometry waveforms from which respiration rates were estimated. Results indicate that our automated voting system can outperform any individual respiration rate estimation technique at all levels of noise and respiration rates exhibited in our data. We also demonstrate that even the addition of a noisier extra signal leads to an improved estimate using our framework. Moreover, our simulations demonstrate that different ECG respiration extraction techniques have different error profiles with respect to the respiration rate, and therefore a respiration rate-related modification of any fusion algorithm may be appropriate.
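
    A minimal sketch of the fusion idea, assuming scalar random-walk Kalman filters, a made-up signal quality index (SQI), and a weighting rule of our own choosing; the paper's actual model and tuning differ.

```python
import numpy as np

def kf_update(x, P, z, Q, R):
    """One scalar Kalman step with a random-walk state model."""
    P = P + Q                      # predict
    K = P / (P + R)                # Kalman gain
    x = x + K * (z - x)            # correct with measurement z
    return x, P * (1 - K)

def fuse_respiration(measurements, sqi, Q=0.01, R0=1.0):
    """Run one KF per source; inflate R when quality is low,
    then fuse the filtered estimates with SQI-based weights."""
    n_sources, n_steps = measurements.shape
    x = measurements[:, 0].copy()
    P = np.ones(n_sources)
    fused = np.zeros(n_steps)
    for t in range(n_steps):
        for s in range(n_sources):
            R = R0 / max(sqi[s, t], 1e-3)   # poor quality => large R
            x[s], P[s] = kf_update(x[s], P[s], measurements[s, t], Q, R)
        w = sqi[:, t] / (P + 1e-9)          # weight by quality and certainty
        fused[t] = np.sum(w * x) / np.sum(w)
    return fused

# Toy usage: two noisy sensors tracking a ~15 breaths/min rate.
rng = np.random.default_rng(1)
truth = 15 + 0.5 * np.sin(np.linspace(0, 6, 200))
meas = np.vstack([truth + rng.normal(0, 0.5, 200),
                  truth + rng.normal(0, 2.0, 200)])
sqi = np.vstack([np.full(200, 0.9), np.full(200, 0.3)])
print(fuse_respiration(meas, sqi)[-5:])
```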

  12. Improving Voluntary Environmental Management Programs: Facilitating Learning and Adaptation

    Science.gov (United States)

    Genskow, Kenneth D.; Wood, Danielle M.

    2011-05-01

    Environmental planners and managers face unique challenges understanding and documenting the effectiveness of programs that rely on voluntary actions by private landowners. Programs, such as those aimed at reducing nonpoint source pollution or improving habitat, intend to reach those goals by persuading landowners to adopt behaviors and management practices consistent with environmental restoration and protection. Our purpose with this paper is to identify barriers for improving voluntary environmental management programs and ways to overcome them. We first draw upon insights regarding data, learning, and adaptation from the adaptive management and performance management literatures, describing three key issues: overcoming information constraints, structural limitations, and organizational culture. Although these lessons are applicable to a variety of voluntary environmental management programs, we then present the issues in the context of on-going research for nonpoint source water quality pollution. We end the discussion by highlighting important elements for advancing voluntary program efforts.

  13. Improving Emission Estimates With The Community Emissions Data System (CEDS)

    Science.gov (United States)

    Smith, S.; Hoesly, R. M.

    2016-12-01

    Inventory data is a key component of scientific and regulatory efforts focused on air pollution, climate and global change, and also a critical complement to observational emission efforts. The Community Emissions Data System (CEDS) project aims to provide consistent estimates of historical anthropogenic emissions using an open-source data system. The first product from this system was anthropogenic emissions over 1750-2014 of reactive gases, aerosols, and carbon dioxide, for use in CMIP6. These data are annually resolved, have monthly seasonality, were estimated at a moderately detailed level of 50+ sectors and 8 fuel types, and were mapped to spatial grids. CEDS combines bottom-up default emissions estimates that are calibrated to country-level inventories where these are deemed reliable. Outside of years where inventories are available, driver data and emission factors are extended using user-defined rules. The system is designed to facilitate annual updates (so the most recent inventory data is available). The software and most input data are being released as open source in order to provide access to assumptions, improve emission estimates, and allow access to fundamental emissions data for research purposes. We report on our efforts to expand the spatial resolution by estimating emission trends by state/province for large countries. This will allow spatial shifts in emissions over time to be better represented and make the data more useful for research such as that discussed in this session. As part of these improvements we will add support for use of regionally-specific emission proxies and point sources. A key focus of ongoing research is better quantification of emissions uncertainty. Our goal is consistent estimation of uncertainty over time, sector, and country. We will also report on results estimating the additional uncertainty associated with extending emissions data over recent years. http://www.globalchange.umd.edu/CEDS/
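
    The calibration step described above can be illustrated with a toy sketch: bottom-up sector emissions (activity times emission factor) are rescaled so the national total matches a reported inventory value. All figures and sector names below are invented.

```python
# Hypothetical sketch of a CEDS-style calibration step: bottom-up
# emissions are scaled so that the national total matches a country
# inventory in a year where one exists.
activity = {"power": 120.0, "industry": 80.0, "transport": 60.0}     # PJ
emission_factor = {"power": 0.9, "industry": 0.7, "transport": 1.1}  # kt/PJ

bottom_up = {s: activity[s] * emission_factor[s] for s in activity}
inventory_total = 205.0   # kt, reported national inventory (assumed)

scale = inventory_total / sum(bottom_up.values())
calibrated = {s: e * scale for s, e in bottom_up.items()}
print(scale, calibrated)
```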

  14. Improving lensing cluster mass estimate with flexion

    CERN Document Server

    Cardone, Vincenzo F; Er, Xinzhong; Maoli, Roberto; Scaramella, Roberto

    2016-01-01

    Gravitational lensing has long been considered a valuable tool to determine the total mass of galaxy clusters. The shear profile as inferred from the statistics of ellipticity of background galaxies allows one to probe the cluster intermediate and outer regions, thus determining the virial mass estimate. However, the mass sheet degeneracy and the need for a large number of background galaxies motivate the search for alternative tracers which can break the degeneracy among model parameters and hence improve the accuracy of the mass estimate. Lensing flexion, i.e. the third derivative of the lensing potential, has been suggested as a good answer to the above quest since it probes the details of the mass profile. We investigate here whether this is indeed the case, jointly using weak lensing, magnification and flexion. We use a Fisher matrix analysis to forecast the relative improvement in the mass accuracy for different assumptions on the shear and flexion signal - to - noise (S/N) ratio also varying t...
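
    A generic Fisher matrix forecast of the kind mentioned above, sketched for a toy two-parameter profile rather than the authors' lensing model; the observable, parameters, and noise levels are all assumptions.

```python
import numpy as np

def model_signal(theta, r):
    """Toy tangential-shear-like profile with amplitude M and slope a
    (a stand-in for the cluster lensing model, not the paper's)."""
    M, a = theta
    return M * r**(-a)

def fisher_matrix(theta, r, sigma):
    """F_ij = sum_k (dm_k/dtheta_i)(dm_k/dtheta_j) / sigma_k^2,
    with derivatives taken by central finite differences."""
    n = len(theta)
    derivs = []
    for i in range(n):
        h = 1e-5 * max(abs(theta[i]), 1.0)
        tp, tm = np.array(theta, float), np.array(theta, float)
        tp[i] += h
        tm[i] -= h
        derivs.append((model_signal(tp, r) - model_signal(tm, r)) / (2 * h))
    F = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            F[i, j] = np.sum(derivs[i] * derivs[j] / sigma**2)
    return F

r = np.linspace(0.2, 2.0, 20)          # radii (arbitrary units)
F = fisher_matrix([1.0, 0.8], r, sigma=np.full(20, 0.05))
errors = np.sqrt(np.diag(np.linalg.inv(F)))
print("forecast 1-sigma parameter errors:", errors)
```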

  15. Improving Lidar Turbulence Estimates for Wind Energy

    Energy Technology Data Exchange (ETDEWEB)

    Newman, Jennifer F.; Clifton, Andrew; Churchfield, Matthew J.; Klein, Petra

    2016-10-06

    Remote sensing devices (e.g., lidars) are quickly becoming a cost-effective and reliable alternative to meteorological towers for wind energy applications. Although lidars can measure mean wind speeds accurately, these devices measure different values of turbulence intensity (TI) than an instrument on a tower. In response to these issues, a lidar TI error reduction model was recently developed for commercially available lidars. The TI error model first applies physics-based corrections to the lidar measurements, then uses machine-learning techniques to further reduce errors in lidar TI estimates. The model was tested at two sites in the Southern Plains where vertically profiling lidars were collocated with meteorological towers. This presentation primarily focuses on the physics-based corrections, which include corrections for instrument noise, volume averaging, and variance contamination. As different factors affect TI under different stability conditions, the combination of physical corrections applied in L-TERRA changes depending on the atmospheric stability during each 10-minute time period. This stability-dependent version of L-TERRA performed well at both sites, reducing TI error and bringing lidar TI estimates closer to estimates from instruments on towers. However, there is still scatter evident in the lidar TI estimates, indicating that there are physics that are not being captured in the current version of L-TERRA. Two options are discussed for modeling the remainder of the TI error physics in L-TERRA: machine learning and lidar simulations. Lidar simulations appear to be a better approach, as they can help improve understanding of atmospheric effects on TI error and do not require a large training data set.
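
    A minimal sketch of the two-stage idea, assuming a single noise-variance correction as the "physics" step and plain linear regression standing in for the machine-learning step; L-TERRA's actual corrections and features are more elaborate.

```python
import numpy as np

def ti_physics_correction(u_var, u_mean, noise_var):
    """Remove instrument-noise variance from the measured wind-speed
    variance before forming turbulence intensity (TI = sigma_u / U).
    A minimal stand-in for a physics-based correction step."""
    corrected_var = np.maximum(u_var - noise_var, 0.0)
    return np.sqrt(corrected_var) / u_mean

# Hypothetical 10-minute records: lidar variance, mean speed, and the
# "true" tower TI the corrections should approach.
rng = np.random.default_rng(2)
u_mean = rng.uniform(5, 12, 300)
tower_ti = rng.uniform(0.05, 0.20, 300)
noise_var = 0.04
u_var = (tower_ti * u_mean) ** 2 + noise_var + rng.normal(0, 0.01, 300)

ti_phys = ti_physics_correction(u_var, u_mean, noise_var)

# "Machine-learning" residual step, reduced here to linear regression
# on simple features (mean speed, physics-corrected TI).
X = np.column_stack([np.ones_like(u_mean), u_mean, ti_phys])
coef, *_ = np.linalg.lstsq(X, tower_ti - ti_phys, rcond=None)
ti_final = ti_phys + X @ coef
print("RMSE before:", np.sqrt(np.mean((ti_phys - tower_ti) ** 2)))
print("RMSE after :", np.sqrt(np.mean((ti_final - tower_ti) ** 2)))
```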

  16. Improved moment scaling estimation for multifractal signals

    Directory of Open Access Journals (Sweden)

    D. Veneziano

    2009-11-01

    A fundamental problem in the analysis of multifractal processes is to estimate the scaling exponent K(q) of moments of different order q from data. Conventional estimators use the empirical moments $\hat{\mu}_r^q = \langle|\varepsilon_r(\tau)|^q\rangle$ of wavelet coefficients $\varepsilon_r(\tau)$, where $\tau$ is location and r is resolution. For stationary measures one usually considers "wavelets of order 0" (averages), whereas for functions with multifractal increments one must use wavelets of order at least 1. One obtains $\hat{K}(q)$ as the slope of $\log \hat{\mu}_r^q$ against $\log r$ over a range of r. Negative moments are sensitive to measurement noise and quantization. For them, one typically uses only the local maxima of $|\varepsilon_r(\tau)|$ (modulus maxima methods). For the positive moments, we modify the standard estimator $\hat{K}(q)$ to significantly reduce its variance at the expense of a modest increase in the bias. This is done by separately estimating K(q) from sub-records and averaging the results. For the negative moments, we show that the standard modulus maxima estimator is biased and, in the case of additive noise or quantization, is not applicable with wavelets of order 1 or higher. For these cases we propose alternative estimators. We also consider the fitting of parametric models of K(q) and show how, by splitting the record into sub-records as indicated above, the accuracy of standard methods can be significantly improved.
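
    A sketch of the estimator and the proposed variance-reduction step, using simple block averages in place of wavelet coefficients and synthetic data; function names and resolutions are illustrative only.

```python
import numpy as np

def khat(signal, qs, resolutions):
    """Estimate K(q) as the slope of log <|eps_r|^q> versus log r,
    where eps_r are block averages of the data at resolution r."""
    log_mu = np.empty((len(qs), len(resolutions)))
    for j, r in enumerate(resolutions):
        eps = signal[: len(signal) // r * r].reshape(-1, r).mean(axis=1)
        for i, q in enumerate(qs):
            log_mu[i, j] = np.log(np.mean(np.abs(eps) ** q))
    return np.array([np.polyfit(np.log(resolutions), row, 1)[0]
                     for row in log_mu])

def khat_split(signal, qs, resolutions, n_sub=8):
    """The paper's variance-reduction idea: estimate K(q) separately
    on sub-records and average the results."""
    subs = np.array_split(signal, n_sub)
    return np.mean([khat(s, qs, resolutions) for s in subs], axis=0)

# Toy data (iid lognormal; a real analysis would use multifractal data).
rng = np.random.default_rng(3)
x = rng.lognormal(0.0, 0.5, 2 ** 14)
qs = np.array([0.5, 1.0, 2.0, 3.0])
print(khat(x, qs, [1, 2, 4, 8, 16]))
print(khat_split(x, qs, [1, 2, 4, 8, 16]))
```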

  17. An improved estimation and focusing scheme for vector velocity estimation

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Munk, Peter

    1999-01-01

    The full blood velocity vector must be estimated in medical ultrasound to give a correct depiction of the blood flow. This can be done by introducing a transversely oscillating pulse-echo ultrasound field, which makes the received signal influenced by a transverse motion. Such an approach ... beamforming method. A modified autocorrelation approach employing fourth order moments of the input data is used for velocity estimation. The new estimator calculates the axial and lateral velocity component independently of each other. The estimation is optimized for differences in axial and lateral modulation periods in the ultrasound field by using a lag different from one in the estimation process, and noise artifacts are reduced by using averaging of RF samples. Furthermore, compensation for the axial velocity can be introduced, and the velocity estimation is done at a fixed depth in tissue...

  18. Enabling Continuous Quality Improvement in Practice: The Role and Contribution of Facilitation

    Science.gov (United States)

    Harvey, Gillian; Lynch, Elizabeth

    2017-01-01

    Facilitating the implementation of continuous quality improvement (CQI) is a complex undertaking. Numerous contextual factors at a local, organizational, and health system level can influence the trajectory and ultimate success of an improvement program. Some of these contextual factors are amenable to modification, others less so. As part of planning and implementing healthcare improvement, it is important to assess and build an understanding of contextual factors that might present barriers to or enablers of implementation. On the basis of this initial diagnosis, it should then be possible to design and implement the improvement intervention in a way that is responsive to contextual barriers and enablers, often described as “tailoring” the implementation approach. Having individuals in the active role of facilitators is proposed as an effective way of delivering a context-sensitive, tailored approach to implementing CQI. This paper presents an overview of the facilitator role in implementing CQI. Drawing on empirical evidence from the use of facilitator roles in healthcare, the type of skills and knowledge required will be considered, along with the type of facilitation strategies that can be employed in the implementation process. Evidence from both case studies and systematic reviews of facilitation will be reviewed and key lessons for developing and studying the role in the future identified. PMID:28275594

  19. Do Indonesian Children's Experiences with Large Currency Units Facilitate Magnitude Estimation of Long Temporal Periods?

    Science.gov (United States)

    Cheek, Kim A.

    2016-09-01

    Ideas about temporal (and spatial) scale impact students' understanding across science disciplines. Learners have difficulty comprehending the long time periods associated with natural processes because they have no referent for the magnitudes involved. When people have a good "feel" for quantity, they estimate cardinal number magnitude linearly. Magnitude estimation errors can be explained by confusion about the structure of the decimal number system, particularly in terms of how powers of ten are related to one another. Indonesian children regularly use large currency units. This study investigated if they estimate long time periods accurately and if they estimate those time periods the same way they estimate analogous currency units. Thirty-nine children from a private International Baccalaureate school estimated temporal magnitudes up to 10,000,000,000 years in a two-part study. Artifacts children created were compared to theoretical model predictions previously used in number magnitude estimation studies as reported by Landy et al. (Cognitive Science 37:775-799, 2013). Over one third estimated the magnitude of time periods up to 10,000,000,000 years linearly, exceeding what would be expected based upon prior research with children this age who lack daily experience with large quantities. About half treated successive powers of ten as a count sequence instead of multiplicatively related when estimating magnitudes of time periods. Children generally estimated the magnitudes of long time periods and familiar, analogous currency units the same way. Implications for ways to improve the teaching and learning of this crosscutting concept/overarching idea are discussed.

  20. Do Indonesian Children's Experiences with Large Currency Units Facilitate Magnitude Estimation of Long Temporal Periods?

    Science.gov (United States)

    Cheek, Kim A.

    2017-08-01

    Ideas about temporal (and spatial) scale impact students' understanding across science disciplines. Learners have difficulty comprehending the long time periods associated with natural processes because they have no referent for the magnitudes involved. When people have a good "feel" for quantity, they estimate cardinal number magnitude linearly. Magnitude estimation errors can be explained by confusion about the structure of the decimal number system, particularly in terms of how powers of ten are related to one another. Indonesian children regularly use large currency units. This study investigated if they estimate long time periods accurately and if they estimate those time periods the same way they estimate analogous currency units. Thirty-nine children from a private International Baccalaureate school estimated temporal magnitudes up to 10,000,000,000 years in a two-part study. Artifacts children created were compared to theoretical model predictions previously used in number magnitude estimation studies as reported by Landy et al. (Cognitive Science 37:775-799, 2013). Over one third estimated the magnitude of time periods up to 10,000,000,000 years linearly, exceeding what would be expected based upon prior research with children this age who lack daily experience with large quantities. About half treated successive powers of ten as a count sequence instead of multiplicatively related when estimating magnitudes of time periods. Children generally estimated the magnitudes of long time periods and familiar, analogous currency units the same way. Implications for ways to improve the teaching and learning of this crosscutting concept/overarching idea are discussed.

  1. Laser photogrammetry improves size and demographic estimates for whale sharks

    National Research Council Canada - National Science Library

    Rohner, Christoph A; Richardson, Anthony J; Prebble, Clare E M; Marshall, Andrea D; Bennett, Michael B; Weeks, Scarla J; Cliff, Geremy; Wintner, Sabine P; Pierce, Simon J

    2015-01-01

    .... We used laser photogrammetry at two aggregation sites to obtain more accurate size estimates of free-swimming whale sharks compared to visual estimates, allowing improved estimates of biological parameters...

  2. Two different strategies to facilitate involvement in healthcare improvements: a Swedish county council initiative.

    Science.gov (United States)

    Andersson, Ann-Christine; Idvall, Ewa; Perseius, Kent-Inge; Elg, Mattias

    2014-09-01

    From a management point of view, there are many different approaches from which to choose to engage staff members in initiatives to improve performance. The present study evaluated how two different types of improvement strategies facilitate and encourage involvement of different professional groups in health-care organizations. Design: Empirical data of two different types of strategies were collected within an improvement project in a County Council in Sweden. The data analysis was carried out through classifying the participants' profession, position, gender, and the organizational administration of which they were a part, in relation to their participation. Setting: An improvement project in a County Council in Sweden. Participants: Designed Improvement Processes consisted of n=105 teams and Intrapreneurship Projects of n=202 projects. Intervention(s): Two different types of improvement strategies, Designed Improvement Processes and Intrapreneurship Projects. Main Outcome Measure(s): How two different types of improvement strategies facilitate and encourage involvement of different professional groups in healthcare organizations. Results: Nurses were the largest group participating in both improvement initiatives. Physicians were also well represented, although they seemed to prefer the less structured Intrapreneurship Projects approach. Assistant nurses, being the second largest staff group, were poorly represented in both initiatives. This indicates that the benefits and support for one group may push another group aside. Managers need to give prerequisites and incentives for staff who do not participate in improvements to do so. Comparisons of different types of improvement initiatives are an underused research strategy that yields interesting and thoughtful results.

  3. Teachers' Perspectives and Suggestions for Improving Teacher Education to Facilitate Student Learning

    Science.gov (United States)

    Linkenhoker, Dina L.

    2012-01-01

    The purpose of this transcendental phenomenological study is to give teachers a voice to express their self-efficacy beliefs, their opinions about the content and the effectiveness of their teacher preparation programs to facilitate student learning, and to hear their suggestions for improving teacher education to enable future educators to…

  4. What impedes and what facilitates a quality improvement project for older hospitalized patients?

    NARCIS (Netherlands)

    IJkema, R.; Langelaan, M.; Steeg, L. van de; Wagner, C.

    2014-01-01

    Objective: To gain insight into which factors impede, and which facilitate, the implementation of a complex multi-component improvement initiative in hospitalized older patients. Design: A qualitative study based on semi-structured interviews. The three dimensions of Pettigrew and Whipp's theoretica

  5. Improving Distribution Resiliency with Microgrids and State and Parameter Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Tuffner, Francis K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Williams, Tess L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Schneider, Kevin P. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Elizondo, Marcelo A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sun, Yannan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Liu, Chen-Ching [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Xu, Yin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gourisetti, Sri Nikhil Gup [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-09-30

    Modern society relies on low-cost, reliable electrical power, both to maintain industry and to provide basic social services to the populace. When major disturbances occur, such as Hurricane Katrina or Hurricane Sandy, the nation's electrical infrastructure can experience significant outages. To help prevent the spread of these outages, as well as to facilitate faster restoration afterward, various aspects of power system resiliency must be improved. Two such approaches are breaking the system into smaller microgrid sections, and improving insight into operations so that failures or mis-operations are detected before they become critical. By breaking the system into smaller microgrid islands, power can be maintained in smaller areas where distributed generation and energy storage resources are still available but bulk power generation is no longer connected. Additionally, microgrid systems can maintain service to local pockets of customers when there has been extensive damage to the local distribution system. However, microgrids are grid-connected a majority of the time, and implementing and operating an islanded microgrid is much different from grid-connected operation. This report discusses work conducted by the Pacific Northwest National Laboratory that developed improvements for simulation tools to capture the characteristics of microgrids and how they can be used to develop new operational strategies. These operational strategies reduce the cost of microgrid operation and increase the reliability and resilience of the nation's electricity infrastructure. In addition to the ability to break the system into microgrids, improved observability into the state of the distribution grid can make the power system more resilient. State estimation on the transmission system already provides great insight into grid operations and detecting abnormal conditions by leveraging existing measurements. These transmission-level approaches are expanded to using
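
    As one concrete example of the state estimation mentioned above, the sketch below runs a textbook weighted-least-squares estimator on a toy linear measurement model with normalized-residual checking; the measurement matrix and noise levels are assumptions, not PNNL's implementation.

```python
import numpy as np

# A minimal weighted-least-squares state estimator of the kind used on
# transmission systems: z = H x + e, with weights from measurement
# variances. Linear (DC) approximation; purely illustrative.
H = np.array([[1.0, 0.0],
              [1.0, -1.0],
              [0.0, 1.0],
              [1.0, 1.0]])            # measurement model (assumed)
true_x = np.array([1.02, 0.97])       # "true" state, e.g. bus voltages
sigma = np.array([0.01, 0.02, 0.01, 0.05])

rng = np.random.default_rng(4)
z = H @ true_x + rng.normal(0, sigma)

W = np.diag(1.0 / sigma**2)
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)

# Detect abnormal conditions via normalized residuals.
residual = (z - H @ x_hat) / sigma
print("state estimate:", x_hat, "normalized residuals:", residual)
```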

  6. Assessing and Improving Student Organizations: Resources for Facilitators CD-ROM. The Assessing and Improving Student Organization (AISO) Program

    Science.gov (United States)

    Nolfi, Tricia; Ruben, Brent D.

    2010-01-01

    This companion to the "Guide for Students" and "Student Workbook" includes the complete set of PowerPoint slides, a PDF of the Facilitator's Guide in PPT (PowerPoint) slide show format, and PDFs of all scoring sheets, handouts and project planning guides needed for the AISO (Assessing and Improving Student Organization) process. The Assessing and…

  7. Facilitating participatory steps for planning and implementing low-cost improvements in small workplaces.

    Science.gov (United States)

    Kogi, Kazutaka

    2008-07-01

    In this paper, practical means of facilitating participatory steps taken in workplace improvement programs in small workplaces were reviewed. The reviewed programs included those organized by partners of our Asian inter-country network for small enterprises, construction sites, home workplaces and agricultural farms. Trainers who commonly acted as facilitators were found to play multiple roles in helping managers, workers and farmers take initiative and achieve immediate improvements. The participatory steps were more successfully facilitated when the trainers supported (a) building on local good practice, (b) focusing on a range of basic ergonomics principles, and (c) stepwise progress through feedback of achievements. The use of action-oriented toolkits comprising low-cost action checklists and group work guides was commonly helpful. The locally adjusted nature of the toolkits seemed essential. Trainers could thus help people build local initiative, plan and implement low-cost ideas through serial group work steps and confirm benefits in a stepwise manner. The review of the results suggested that a local network of trainers trained in the use of locally adjusted toolkits was vital for facilitating effective improvements in different small workplaces.

  8. Improved variance estimation along sample eigenvectors

    NARCIS (Netherlands)

    Hendrikse, Anne; Veldhuis, Raymond; Spreeuwers, Luuk

    2009-01-01

    Second order statistics estimates in the form of sample eigenvalues and sample eigenvectors give a suboptimal description of the population density. So far only attempts have been made to reduce the bias in the sample eigenvalues. However, because the sample eigenvectors differ from the population
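
    A small simulation of the bias referred to above, under the assumption of Gaussian data with a known diagonal population covariance; dimensions and spectra are arbitrary.

```python
import numpy as np

# Sample eigenvalues are a biased description of the population
# spectrum, especially when the sample size is small relative to the
# dimension: the largest ones are overestimated, the smallest ones
# underestimated.
rng = np.random.default_rng(5)
dim, n_samples, trials = 20, 40, 200
pop_eigs = np.linspace(1.0, 5.0, dim)          # assumed population spectrum

avg_sample_eigs = np.zeros(dim)
for _ in range(trials):
    X = rng.normal(size=(n_samples, dim)) * np.sqrt(pop_eigs)
    C = X.T @ X / n_samples
    avg_sample_eigs += np.sort(np.linalg.eigvalsh(C))
avg_sample_eigs /= trials

print("largest population eigenvalue  :", pop_eigs[-1])
print("largest sample eigenvalue (avg):", avg_sample_eigs[-1])
```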

  9. Evidence-based practice barriers and facilitators from a continuous quality improvement perspective: an integrative review.

    Science.gov (United States)

    Solomons, Nan M; Spross, Judith A

    2011-01-01

    The purpose of the present study is to examine the barriers and facilitators to evidence-based practice (EBP) using Shortell's framework for continuous quality improvement (CQI). EBP is typically undertaken to improve practice. Although there have been many studies focused on the barriers and facilitators to adopting EBP, these have not been tied explicitly to CQI frameworks. CINAHL, Academic Search Premier, Medline, Psych Info, ABI/Inform and LISTA databases were searched using the keywords: nurses, information literacy, access to information, sources of knowledge, decision making, research utilization, information seeking behaviour and nursing practice, evidence-based practice. Shortell's framework was used to organize the barriers and facilitators. Across the articles, the most common barriers were lack of time and lack of autonomy to change practice which falls within the strategic and cultural dimensions in Shortell's framework. Barriers and facilitators to EBP adoption occur at the individual and institutional levels. Solutions to the barriers need to be directed to the dimension where the barrier occurs, while recognizing that multidimensional approaches are essential to the success of overcoming these barriers. The findings of the present study can help nurses identify barriers and implement strategies to promote EBP as part of CQI. © 2010 The Authors. Journal compilation © 2010 Blackwell Publishing Ltd.

  10. Improving the teaching skills of residents as tutors/facilitators and addressing the shortage of faculty facilitators for PBL modules.

    Science.gov (United States)

    Jafri, Wasim; Mumtaz, Khalid; Burdick, William P; Morahan, Page S; Freeman, Rosslynne; Zehra, Tabassum

    2007-10-08

    Residents play an important role in the teaching of medical undergraduate students. Despite their importance in teaching undergraduates, they are not involved in any formal training in teaching and leadership skills. We aimed to compare the teaching skills of residents with those of faculty in facilitating small group Problem Based Learning (PBL) sessions. This quasi-experimental descriptive comparative research involved 5 postgraduate year 4 residents and five senior faculty members. The study was conducted with all phase III (final year) students rotating in Gastroenterology. The residents and faculty members received brief training of one month in facilitation and core principles of adult education. Different aspects of the teaching skills of residents and faculty were evaluated by students on a questionnaire (graded on a Likert scale from 1 to 10) assessing i) Knowledge Base-content Learning (KBL), ii) PBL, iii) Student Centered Learning (SCL) and iv) Group Skills (GS). There were 33 PBL teaching sessions in which 120 evaluation forms were filled; of these, 53% were filled for residents and 47% for the faculty group. The faculty showed statistically higher ratings in "KBL" (faculty 8.37 vs. residents 7.94; p = 0.02) and "GS" (faculty 8.06 vs. residents 7.68; p = 0.04). Differences in faculty and resident scores in "PBL" and "SCL" were not significant. The overall score of faculty facilitators, however, was significantly higher than that of resident facilitators (p = .05). 1) Residents are an effective supplement to faculty members for PBL; 2) additional facilitators for PBL sessions can be identified in an institution by involving residents in teacher training workshops.

  11. Improving the teaching skills of residents as tutors/facilitators and addressing the shortage of faculty facilitators for PBL modules

    Directory of Open Access Journals (Sweden)

    Morahan Page S

    2007-10-01

    Background: Residents play an important role in the teaching of medical undergraduate students. Despite their importance in teaching undergraduates, they are not involved in any formal training in teaching and leadership skills. We aimed to compare the teaching skills of residents with those of faculty in facilitating small group Problem Based Learning (PBL) sessions. Methods: This quasi-experimental descriptive comparative research involved 5 postgraduate year 4 residents and five senior faculty members. The study was conducted with all phase III (final year) students rotating in Gastroenterology. The residents and faculty members received brief training of one month in facilitation and core principles of adult education. Different aspects of the teaching skills of residents and faculty were evaluated by students on a questionnaire (graded on a Likert scale from 1 to 10) assessing i) Knowledge Base-content Learning (KBL), ii) PBL, iii) Student Centered Learning (SCL) and iv) Group Skills (GS). Results: There were 33 PBL teaching sessions in which 120 evaluation forms were filled; of these, 53% were filled for residents and 47% for the faculty group. The faculty showed statistically higher ratings in "KBL" (faculty 8.37 vs. residents 7.94; p = 0.02) and "GS" (faculty 8.06 vs. residents 7.68; p = 0.04). Differences in faculty and resident scores in "PBL" and "SCL" were not significant. The overall score of faculty facilitators, however, was significantly higher than that of resident facilitators (p = .05). Conclusion: 1) Residents are an effective supplement to faculty members for PBL; 2) additional facilitators for PBL sessions can be identified in an institution by involving residents in teacher training workshops.

  12. Improved linear least squares estimation using bounded data uncertainty

    KAUST Repository

    Ballal, Tarig

    2015-04-01

    This paper addresses the problem of linear least squares (LS) estimation of a vector x from linearly related observations. In spite of being unbiased, the original LS estimator suffers from high mean squared error, especially at low signal-to-noise ratios. The mean squared error (MSE) of the LS estimator can be improved by introducing some form of regularization based on certain constraints. We propose an improved LS (ILS) estimator that approximately minimizes the MSE, without imposing any constraints. To achieve this, we allow for perturbation in the measurement matrix. Then we utilize a bounded data uncertainty (BDU) framework to derive a simple iterative procedure to estimate the regularization parameter. Numerical results demonstrate that the proposed BDU-ILS estimator is superior to the original LS estimator, and it converges to the best linear estimator, the linear-minimum-mean-squared error estimator (LMMSE), when the elements of x are statistically white.
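
    A hedged sketch of an iteratively regularized LS estimator in the spirit of the BDU-ILS approach; the fixed-point update for the regularization parameter below is a plausible stand-in, not the paper's derived rule.

```python
import numpy as np

def ils_bdu_like(A, y, eta=0.1, n_iter=20):
    """Regularized LS with an iteratively updated regularization
    parameter. The update below ties lambda to a bound eta on the
    perturbation of A (a heuristic stand-in for the paper's rule)."""
    lam = 1e-3
    n = A.shape[1]
    for _ in range(n_iter):
        x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
        # Larger residual relative to the solution => more regularization.
        r = y - A @ x
        lam = eta * np.linalg.norm(r) / max(np.linalg.norm(x), 1e-12)
    return x

rng = np.random.default_rng(6)
A = rng.normal(size=(50, 10))
x_true = rng.normal(size=10)
y = A @ x_true + rng.normal(0, 2.0, 50)   # low SNR

x_ls = np.linalg.lstsq(A, y, rcond=None)[0]
x_ils = ils_bdu_like(A, y)
print("LS  MSE:", np.mean((x_ls - x_true) ** 2))
print("ILS MSE:", np.mean((x_ils - x_true) ** 2))
```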

  13. Improving lidar turbulence estimates for wind energy

    Science.gov (United States)

    Newman, J. F.; Clifton, A.; Churchfield, M. J.; Klein, P.

    2016-09-01

    Remote sensing devices (e.g., lidars) are quickly becoming a cost-effective and reliable alternative to meteorological towers for wind energy applications. Although lidars can measure mean wind speeds accurately, these devices measure different values of turbulence intensity (TI) than an instrument on a tower. In response to these issues, a lidar TI error reduction model was recently developed for commercially available lidars. The TI error model first applies physics-based corrections to the lidar measurements, then uses machine-learning techniques to further reduce errors in lidar TI estimates. The model was tested at two sites in the Southern Plains where vertically profiling lidars were collocated with meteorological towers. Results indicate that the model works well under stable conditions but cannot fully mitigate the effects of variance contamination under unstable conditions. To understand how variance contamination affects lidar TI estimates, a new set of equations was derived in previous work to characterize the actual variance measured by a lidar. Terms in these equations were quantified using a lidar simulator and modeled wind field, and the new equations were then implemented into the TI error model.

  14. Improved estimation in a non-Gaussian parametric regression

    CERN Document Server

    Pchelintsev, Evgeny

    2011-01-01

    The paper considers the problem of estimating the parameters in a continuous time regression model with a non-Gaussian noise of pulse type. The noise is specified by the Ornstein-Uhlenbeck process driven by a mixture of a Brownian motion and a compound Poisson process. Improved estimates of the unknown regression parameters, based on a special modification of the James-Stein procedure and having smaller quadratic risk than the usual least squares estimates, are proposed. The developed estimation scheme is applied for improved parameter estimation in the discrete time regression with autoregressive noise depending on unknown nuisance parameters.
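
    The underlying James-Stein mechanism can be demonstrated in a few lines: shrinking a least-squares-like estimate toward zero reduces total quadratic risk when several parameters are estimated at once. The Gaussian toy setting below is an assumption; the paper works with pulse-type non-Gaussian noise.

```python
import numpy as np

def james_stein(z, sigma2):
    """Positive-part James-Stein shrinkage toward zero."""
    p = len(z)
    shrink = max(0.0, 1.0 - (p - 2) * sigma2 / np.sum(z ** 2))
    return shrink * z

rng = np.random.default_rng(7)
theta = np.full(10, 0.5)                # true parameters (assumed)
risk_ls, risk_js, sigma2 = 0.0, 0.0, 1.0
for _ in range(2000):
    z = theta + rng.normal(0, np.sqrt(sigma2), 10)   # LS-like estimate
    risk_ls += np.sum((z - theta) ** 2)
    risk_js += np.sum((james_stein(z, sigma2) - theta) ** 2)
print("LS risk :", risk_ls / 2000)      # approx p = 10
print("JS risk :", risk_js / 2000)      # noticeably smaller
```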

  15. Improving care for people after stroke: how change was actively facilitated.

    Science.gov (United States)

    Bamford, David; Rothwell, Katy; Tyrrell, Pippa; Boaden, Ruth

    2013-01-01

    This paper aims to report on the approach to change used in the development of a tool to assess patient status six months after stroke (the Greater Manchester Stroke Assessment Tool: GM-SAT). The overall approach to change is based on the Promoting Action on Research Implementation in Health Services (PARiHS) Framework, which involves extensive stakeholder engagement before implementation. A key feature was the use of a facilitator without previous clinical experience. The active process of change involved a range of stakeholders--commissioners, patients and professionals--as well as review of published research evidence. The result of this process was the creation of the GM-SAT. The details of the decision processes within the tool included a range of perspectives; the process of localisation led commissioners to identify gaps in care provision as well as learning from others in terms of how services might be provided and organised. The facilitator role was key at all stages in bringing together the wide range of perspectives; the relatively neutral perceived status of the facilitator enabled resistance to change to be minimised. The output of this project, the GM-SAT, has the potential to significantly improve patients' physical, psychological and social outcomes and optimise their quality of life. This will be explored further in future phases of work. A structured process of change which included multiple stakeholder involvement throughout, localisation of approaches and a dedicated independent facilitator role was effective in achieving the development of a useful tool (GM-SAT).

  16. Estimating number density N_V – a comparison of an improved Saltykov estimator and the disector method

    Directory of Open Access Journals (Sweden)

    Ashot Davtian

    2011-05-01

    Two methods for the estimation of number per unit volume N_V of spherical particles are discussed: the (physical) disector (Sterio, 1984) and Saltykov's estimator (Saltykov, 1950; Fullman, 1953). A modification of Saltykov's estimator is proposed which reduces the variance. Formulae for bias and variance are given for both the disector and the improved Saltykov estimator for the case of randomly positioned particles. They enable the comparison of the two estimators with respect to their precision in terms of mean squared error.
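
    For reference, the standard physical disector count takes the form N_V = ΣQ⁻/(a·h·n); the toy sketch below applies it to invented counts (notation assumed from standard stereology, not copied from the paper).

```python
# Particles are counted when they appear in the reference section but
# not in the parallel look-up section (Q-); number density is the total
# count divided by the total disector volume (frame area a x height h
# per disector pair).
q_minus = [3, 5, 2, 4]        # counts from 4 disector pairs (toy data)
frame_area = 100.0 * 100.0    # counting-frame area a, in um^2
height = 5.0                  # section separation h, in um

n_v = sum(q_minus) / (frame_area * height * len(q_minus))
print(f"N_V = {n_v:.2e} particles per um^3")
```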

  17. Improving Sample Estimate Reliability and Validity with Linked Ego Networks

    CERN Document Server

    Lu, Xin

    2012-01-01

    Respondent-driven sampling (RDS) is currently widely used in public health, especially for the study of hard-to-access populations such as injecting drug users and men who have sex with men. The method works like a snowball sample but can, given that some assumptions are met, generate unbiased population estimates. However, recent studies have shown that traditional RDS estimators are likely to generate large variance and estimation error. To improve the performance of traditional estimators, we propose a method to generate estimates with ego network data collected by RDS. By simulating RDS processes on an empirical human social network with known population characteristics, we have shown that the precision of estimates on the composition of network link types is greatly improved with ego network data. The proposed estimator for population characteristics shows a clear advantage over traditional RDS estimators, and most importantly, the new method exhibits strong robustness to the recruitment preference of res...
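
    For context, the baseline that the proposed ego-network estimator improves on is often the degree-weighted RDS-II (Volz-Heckathorn) estimator, sketched below with invented data.

```python
import numpy as np

# RDS-II weights each respondent by the inverse of their reported
# network degree, correcting for the fact that high-degree people are
# more likely to be recruited into the chain-referral sample.
degrees = np.array([2, 8, 4, 10, 3, 6, 5, 12])      # reported degrees (toy)
has_trait = np.array([1, 0, 1, 0, 1, 1, 0, 0])      # trait indicator

w = 1.0 / degrees
p_hat = np.sum(w * has_trait) / np.sum(w)
print(f"estimated population proportion with trait: {p_hat:.3f}")
```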

  18. Improving Collective Estimations Using Resistance to Social Influence.

    Directory of Open Access Journals (Sweden)

    Gabriel Madirolas

    2015-11-01

    Groups can make precise collective estimations in cases like the weight of an object or the number of items in a volume. However, in other tasks, for example those requiring memory or mental calculation, subjects often give estimations with large deviations from factual values. Allowing members of the group to communicate their estimations has the additional perverse effect of shifting individual estimations even closer to the biased collective estimation. Here we show that this negative effect of social interactions can be turned into a method to improve collective estimations. We first obtained a statistical model of how humans change their estimation when receiving the estimates made by other individuals. Using existing experimental data, we confirmed its prediction that individuals use the weighted geometric mean of private and social estimations. We then used this result and the fact that each individual uses a different value of the social weight to devise a method that extracts the subgroups resisting social influence. We found that these subgroups of individuals resisting social influence can make very large improvements in group estimations. This is in contrast to methods using the confidence that each individual declares, for which we find no improvement in group estimations. Also, our proposed method does not need to use historical data to weight individuals by performance. These results show the benefits of using the individual characteristics of the members in a group to better extract collective wisdom.
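
    The fitted update rule is easy to state and simulate: a reported estimate is the weighted geometric mean x' = x_private^(1-w) · x_social^w. The sketch below assumes the weights w are known so the resistant subgroup can be read off directly; in the paper they are inferred from behavior.

```python
import numpy as np

def social_update(x_private, x_social, w):
    """Weighted geometric mean of private and social estimates."""
    return x_private ** (1 - w) * x_social ** w

rng = np.random.default_rng(8)
truth = 100.0
x_priv = truth * rng.lognormal(0.0, 0.4, 50)     # noisy private estimates
w = rng.uniform(0.0, 1.0, 50)                    # personal social weights
x_soc = np.full(50, 60.0)                        # biased social information
reported = social_update(x_priv, x_soc, w)

# Averaging only the subgroup resisting social influence (small w)
# recovers an estimate much closer to the truth.
resistant = reported[w < 0.2]
print("all reports (geo-mean)       :", np.exp(np.mean(np.log(reported))))
print("resistant subgroup (geo-mean):", np.exp(np.mean(np.log(resistant))))
```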

  19. Beamspace root estimator bank for DOA estimation with an improved threshold performance

    OpenAIRE

    Vasylyshyn, V. I.

    2014-01-01

    A beamspace root modification of the pseudorandom joint estimation strategy (PR-JES) using an outlier identification and cure technique is presented. This technique allows identifying and rectifying the outliers that can arise in direction-of-arrival (DOA) estimation by eigenstructure methods. It improves the performance of the beamspace root estimator bank in the threshold region of signal-to-noise ratio.

  20. The expression of glycerol facilitators from various yeast species improves growth on glycerol of Saccharomyces cerevisiae

    DEFF Research Database (Denmark)

    Klein, Mathias; Islam, Zia ul; Knudsen, Peter Boldsen;

    2016-01-01

    The expression of predicted glycerol facilitators (Fps1 homologues) from superior glycerol-utilizing yeast species such as Pachysolen tannophilus, Komagataella pastoris, Yarrowia lipolytica and Cyberlindnera jadinii significantly improves the growth performance on glycerol of the previously selected glycerol-consuming S. cerevisiae wild-type strain (CBS 6412-13A). The maximum specific growth rate increased from 0.13 up to 0.18 h−1 and a biomass yield coefficient of 0.56 gDW/gglycerol was observed. These results pave the way for exploiting the assets of glycerol in the production of fuels, chemicals and pharmaceuticals based...

  1. Laser photogrammetry improves size and demographic estimates for whale sharks

    OpenAIRE

    Rohner, Christoph A.; Richardson, Anthony J.; Prebble, Clare E.M.; Marshall, Andrea D.; Bennett, Michael B.; Weeks, Scarla J.; Geremy Cliff; Wintner, Sabine P.; Pierce, Simon J.

    2015-01-01

    Whale sharks Rhincodon typus are globally threatened, but a lack of biological and demographic information hampers an accurate assessment of their vulnerability to further decline or capacity to recover. We used laser photogrammetry at two aggregation sites to obtain more accurate size estimates of free-swimming whale sharks compared to visual estimates, allowing improved estimates of biological parameters. Individual whale sharks ranged from 432–917 cm total length (TL) (mean ± SD = 673 ± 11...
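
    The measurement itself reduces to simple proportionality: two parallel lasers a known distance apart project dots onto the shark, providing a scale bar in each photograph. The numbers below are invented for illustration.

```python
# Laser photogrammetry sketch: convert pixel measurements to real
# lengths using the known laser separation as the in-image scale bar.
laser_separation_cm = 50.0    # physical distance between laser dots
dots_px = 240.0               # pixel distance between the dots in image
body_px = 3200.0              # pixel length of the measured shark

scale_cm_per_px = laser_separation_cm / dots_px
total_length_cm = body_px * scale_cm_per_px
print(f"estimated total length: {total_length_cm:.0f} cm")
```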

  2. Colloid facilitated transport in fractured rocks: parameter estimation and comparison with experimental data.

    Energy Technology Data Exchange (ETDEWEB)

    Viswanathan, H. S. (Hari Selvi); Wolfsberg, A. V. (Andrew V.); Reimus, P. W. (Paul William); Ware, S. D. (Stuart D.); Lu, G. (Guoping)

    2003-01-01

    Colloid-facilitated migration of plutonium in fractured rock has been implicated in both field and laboratory studies. Other reactive radionuclides may also experience enhanced mobility due to groundwater colloids. Model prediction of this process is necessary for the assessment of contaminant boundaries in systems where radionuclides are already in the groundwater and for the performance assessment of potential repositories for radioactive waste. Therefore, a reactive transport model is developed and parameterized using results from controlled laboratory fracture column experiments. Silica, montmorillonite and clinoptilolite colloids are used in the experiments along with plutonium and tritium. The goal of the numerical model is to identify and parameterize the physical and chemical processes that affect the colloid-facilitated transport of plutonium in the fractures. The parameters used in this model are similar in form to those that might be used in a field-scale transport model.

  3. Key components of external facilitation in an acute stroke quality improvement collaborative in the Veterans Health Administration.

    Science.gov (United States)

    Bidassie, Balmatee; Williams, Linda S; Woodward-Hagg, Heather; Matthias, Marianne S; Damush, Teresa M

    2015-05-14

    Facilitation is a key component for successful implementation in several implementation frameworks; however, there is a paucity of research specifying this component. As part of a stroke quality improvement intervention in the Veterans Health Administration (VHA), facilitation plus data feedback was compared to data feedback alone in 11 VA medical facilities. The objective of this study was to elucidate the facilitation components of the stroke quality improvement. We conducted a secondary evaluation of external facilitation using semi-structured interviews. Five facilitators and two program directors were interviewed. Qualitative analysis was performed on transcribed interviews to gain an understanding of the role and activities of external facilitators during the on-site and telephone facilitation. Quantitative frequencies were calculated from the self-reported time spent in facilitation tasks by facilitators. The external facilitators saw their role as empowering the clinical teams to take ownership of the process changes at the clinical sites to improve their performance quality. To fulfill this role, they reported engaging in a number of core tasks during telephone and on-site visits, including: assessing the context in which the teams were currently operating, guiding the clinical teams through their planned changes and use of process improvement tools, identifying resources and making referrals, holding teams accountable for plan implementation with on-site visits, and providing support and encouragement to the teams. Time spent in facilitation activities changed across time from guiding change (early) to supporting efforts made by the clinical teams (later). Facilitation activity transitioned to more monitoring, problem solving, and intentional work to hand over the clinical improvement process to the site teams, with the coach's role being increasingly that of a more distant consultant. Overall, this study demonstrated that external facilitation is not

  4. Improved ice loss estimate of the northwestern Greenland ice sheet

    DEFF Research Database (Denmark)

    Kjeldsen, K. K.; Khan, Shfaqat Abbas; Wahr, J.;

    2013-01-01

    We estimate ice volume change rates in the northwest Greenland drainage basin during 2003–2009 using Ice, Cloud and land Elevation Satellite (ICESat) laser altimeter data. Elevation changes are often reported to be largest near the frontal portion of outlet glaciers. To improve the volume change estimate, we supplement the ICESat data with altimeter surveys from NASA's Airborne Topographic Mapper from 2002 to 2010 and NASA's Land, Vegetation and Ice Sensor from 2010. The Airborne data are mainly concentrated along the ice margin and thus have a significant impact on the estimate of the volume change. Our results show that adding Airborne Topographic Mapper and Land, Vegetation and Ice Sensor data to the ICESat data increases the catchment-wide estimate of ice volume loss by 11%, mainly due to an improved volume loss estimate along the ice sheet margin. Furthermore, our results show...

  5. Estimate of error bounds in the improved support vector regression

    Institute of Scientific and Technical Information of China (English)

    SUN Yanfeng; LIANG Yanchun; WU Chunguo; YANG Xiaowei; LEE Heow Pueh; LIN Wu Zhong

    2004-01-01

    An estimate of a generalization error bound of the improved support vector regression (SVR) is provided based on our previous work. The boundedness of the error of the improved SVR is proved when the algorithm is applied to function approximation.

  6. An Example of an Improvable Rao-Blackwell Improvement, Inefficient Maximum Likelihood Estimator, and Unbiased Generalized Bayes Estimator.

    Science.gov (United States)

    Galili, Tal; Meilijson, Isaac

    2016-01-02

    The Rao-Blackwell theorem offers a procedure for converting a crude unbiased estimator of a parameter θ into a "better" one, in fact unique and optimal if the improvement is based on a minimal sufficient statistic that is complete. In contrast, behind every minimal sufficient statistic that is not complete, there is an improvable Rao-Blackwell improvement. This is illustrated via a simple example based on the uniform distribution, in which a rather natural Rao-Blackwell improvement is uniformly improvable. Furthermore, in this example the maximum likelihood estimator is inefficient, and an unbiased generalized Bayes estimator performs exceptionally well. Counterexamples of this sort can be useful didactic tools for explaining the true nature of a methodology and possible consequences when some of the assumptions are violated. [Received December 2014. Revised September 2015.]

  7. An Example of an Improvable Rao–Blackwell Improvement, Inefficient Maximum Likelihood Estimator, and Unbiased Generalized Bayes Estimator

    Science.gov (United States)

    Galili, Tal; Meilijson, Isaac

    2016-01-01

    The Rao–Blackwell theorem offers a procedure for converting a crude unbiased estimator of a parameter θ into a “better” one, in fact unique and optimal if the improvement is based on a minimal sufficient statistic that is complete. In contrast, behind every minimal sufficient statistic that is not complete, there is an improvable Rao–Blackwell improvement. This is illustrated via a simple example based on the uniform distribution, in which a rather natural Rao–Blackwell improvement is uniformly improvable. Furthermore, in this example the maximum likelihood estimator is inefficient, and an unbiased generalized Bayes estimator performs exceptionally well. Counterexamples of this sort can be useful didactic tools for explaining the true nature of a methodology and possible consequences when some of the assumptions are violated. [Received December 2014. Revised September 2015.] PMID:27499547

  8. Conducting an audit to improve the facilitation of emergency maternal and newborn referral in northern Ghana.

    Science.gov (United States)

    Awoonor-Williams, John Koku; Bailey, Patricia E; Yeji, Francis; Adongo, Ayire Emmanuel; Baffoe, Peter; Williams, Afua; Mercer, Sarah

    2015-10-01

    Ghana Health Service conducted an audit to strengthen the referral system for pregnant or recently pregnant women and newborns in northern Ghana. The audit took place in 16 facilities with two 3-month cycles of data collection in 2011. Midwife-led teams tracked 446 referred women until they received definitive treatment. Between the two audit cycles, teams identified and implemented interventions to address gaps in referral services. During this time period, we observed important improvements in referral mechanisms, including a decrease in the dependence on taxis in favour of national or facility ambulances/vehicles; an increase in health workers escorting referrals to the appropriate receiving facility; greater use of referral slips and calling ahead to alert receiving facilities and higher feedback rates. As referral systems require attention from multiple levels of engagement, on the provider end we found that regional managers increasingly resolved staffing shortages; district management addressed the costliness and lack of transport and increased midwives' ability to communicate with pregnant women and drivers; and that facility staff increasingly adhered to guidelines and facilitating mechanisms. By conducting an audit of maternal and newborn referrals, the Ghana Health Service identified areas for improvement that service providers and management at multiple levels addressed, demonstrating a platform for problem solving that could be a model elsewhere.

  9. Improvement Schemes for Indoor Mobile Location Estimation: A Survey

    Directory of Open Access Journals (Sweden)

    Jianga Shang

    2015-01-01

    Location estimation is significant in mobile and ubiquitous computing systems. The complexity and smaller scale of the indoor environment have a great impact on location estimation. The key to location estimation lies in the representation and fusion of uncertain information from multiple sources. The improvement of location estimation is a complicated and comprehensive issue. A lot of research has been done to address this issue; however, existing research typically focuses on certain aspects of the problem and specific methods. This paper reviews mainstream schemes for improving indoor location estimation from multiple levels and perspectives, combining existing work and our own experience. Initially, we analyze the error sources of common indoor localization techniques and provide a multilayered conceptual framework of improvement schemes for location estimation. This is followed by a discussion of probabilistic methods for location estimation, including Bayes filters, Kalman filters, extended Kalman filters, sigma-point Kalman filters, particle filters, and hidden Markov models. Then, we investigate hybrid localization methods, including multimodal fingerprinting, triangulation fusing multiple measurements, combination of wireless positioning with pedestrian dead reckoning (PDR), and cooperative localization. Next, we focus on location determination approaches that fuse spatial contexts, namely, map matching, landmark fusion, and spatial model-aided methods. Finally, we present directions for future research.
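
    As a concrete illustration of the simplest probabilistic method surveyed above, the sketch below applies a constant-velocity Kalman filter to noisy 1-D position fixes. It is a minimal sketch, not code from the survey: the motion model, the noise levels, and the name kalman_1d are illustrative assumptions.

      import numpy as np

      def kalman_1d(z, dt=1.0, q=0.05, r=4.0):
          """Filter noisy 1-D position fixes z; q is the process-noise
          intensity and r the measurement-noise variance (both assumed)."""
          F = np.array([[1.0, dt], [0.0, 1.0]])       # constant-velocity model
          H = np.array([[1.0, 0.0]])                  # we observe position only
          Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                            [dt**2 / 2, dt]])         # discretized accel. noise
          R = np.array([[r]])
          x = np.array([[z[0]], [0.0]])               # start at the first fix
          P = np.eye(2) * 10.0
          out = []
          for zk in z:
              x, P = F @ x, F @ P @ F.T + Q           # predict
              S = H @ P @ H.T + R                     # innovation covariance
              K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
              x = x + K @ (np.array([[zk]]) - H @ x)  # update with measurement
              P = (np.eye(2) - K @ H) @ P
              out.append(x[0, 0])
          return np.array(out)

      rng = np.random.default_rng(0)
      truth = np.cumsum(np.full(100, 0.5))            # walking at 0.5 m/s
      fixes = truth + rng.normal(0, 2.0, 100)         # noisy Wi-Fi position fixes
      est = kalman_1d(fixes)
      print("raw RMSE:     ", np.sqrt(np.mean((fixes - truth) ** 2)))
      print("filtered RMSE:", np.sqrt(np.mean((est - truth) ** 2)))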

  10. Colloid facilitated transport in fractured rock : parameter estimation and comparison with experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Viswanathan, H. S. (Hari Selvi); Wolfsberg, A. V. (Andrew V.)

    2002-01-01

    Many contaminants in groundwater strongly interact with the immobile porous matrix, which retards their movement relative to groundwater flow. Colloidal particles, which are often present in groundwater, have a relatively small size and large specific surface area, which make it possible for them to also adsorb pollutants. The sorption of tracers to colloids may enhance their mobility in groundwater relative to the case where colloids are not present. A class of pollutants for which colloid-facilitated transport may be of particular significance is radioactive isotopes. A major reason why geologic repositories are considered suitable for the disposal of spent nuclear fuel is the strong affinity of many radionuclides to adsorb onto the porous matrix. Therefore, radionuclides accidentally released would be contained in the geological media by adsorption or filtration until sufficient decay takes place. However, the presence of colloids may enhance radionuclide mobility in the groundwater and reduce the efficiency of geologic media to act as a natural barrier.

  11. Evaluation of attention training and metacognitive facilitation to improve reading comprehension in aphasia.

    Science.gov (United States)

    Lee, Jaime B; Moore Sohlberg, McKay

    2013-05-01

    This pilot study investigated the impact of direct attention training combined with metacognitive facilitation on reading comprehension in individuals with aphasia. A single-subject, multiple baseline design was employed across 4 participants to evaluate potential changes in reading comprehension resulting from an 8-week intervention using Attention Process Training-3 (APT-3). The primary outcome measure was a maze reading task. Pre- and posttesting included attention and reading comprehension measures. Visual inspection of graphed performance data across conditions was used as the primary method of analysis. Treatment effect sizes were calculated for changes in reading comprehension probes from baseline to maintenance phases. Two of the study's 4 participants demonstrated improvements in maze reading, with corresponding effect sizes that were small in magnitude according to benchmarks for aphasia treatment research. All 4 participants made improvements on select standardized measures of attention. Interventions that include a metacognitive component with direct attention training may elicit improvements in participants' attention and allocation of resources. Maze passage reading is a repeated measure that appears sensitive to treatment-related changes in reading comprehension. Issues for future research related to measurement, candidacy, and clinical delivery are discussed.

  12. Improving terrain height estimates from RADARSAT interferometric measurements

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, P.A.; Eichel, P.H.; Calloway, T.M.

    1998-03-01

    The authors describe two methods of combining two-pass RADARSAT interferometric phase maps with existing DTED (digital terrain elevation data) to produce improved terrain height estimates. The first is a least-squares estimation procedure that fits the unwrapped phase data to a phase map computed from the DTED. The second is a filtering technique that combines the interferometric height map with the DTED map based on spatial frequency content. Both methods preserve the high fidelity of the interferometric data.
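
    The second (filtering) method lends itself to a compact sketch: keep the low spatial frequencies of the DTED and the high frequencies of the interferometric heights. The ideal low-pass filter, the cutoff value, and the function name below are illustrative assumptions, not the authors' implementation.

      import numpy as np

      def blend_heights(dted, insar, cutoff=0.05):
          """Fuse two co-registered height maps by spatial frequency content:
          low frequencies from the DTED, high frequencies from the
          interferometric map (cutoff in cycles/pixel, assumed)."""
          ny, nx = dted.shape
          fy = np.fft.fftfreq(ny)[:, None]
          fx = np.fft.fftfreq(nx)[None, :]
          lowpass = (np.sqrt(fx**2 + fy**2) <= cutoff).astype(float)
          spec = np.fft.fft2(dted) * lowpass + np.fft.fft2(insar) * (1 - lowpass)
          return np.real(np.fft.ifft2(spec))

      # toy check: DTED misses fine relief, InSAR heights carry it plus noise
      rng = np.random.default_rng(1)
      _, xx = np.mgrid[0:128, 0:128]
      truth = 50 * np.sin(xx / 40.0) + 5 * np.sin(xx / 2.0)
      dted = 50 * np.sin(xx / 40.0)
      insar = truth + rng.normal(0, 1.0, truth.shape)
      print(np.abs(blend_heights(dted, insar) - truth).mean(),
            np.abs(dted - truth).mean())              # fused error < DTED error

    A tapered (e.g. raised-cosine) transition band in place of the sharp cutoff would reduce ringing; the sharp filter keeps the sketch short.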

  13. Parameter Estimation for Improving Association Indicators in Binary Logistic Regression

    Directory of Open Access Journals (Sweden)

    Mahdi Bashiri

    2012-02-01

    The aim of this paper is the estimation of binary logistic regression parameters by maximizing the log-likelihood function, with improved association indicators. The parameter estimation steps are explained, measures of association are introduced, and their calculation is analyzed. Moreover, new association indicators based on membership degree level are proposed. Association measures reflect the number of success responses observed against failures in a certain number of independent Bernoulli experiments. In parameter estimation, the values of existing indicators are not sensitive to the parameter values, whereas the proposed indicators are sensitive to the estimated parameters during the iterative procedure. The innovation of this study is therefore a new association indicator for binary logistic regression that is more sensitive to the estimated parameters while maximizing the log-likelihood in the iterative procedure.
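
    For orientation, the estimation setting of this record, maximizing the binary-logistic log-likelihood, can be written in a few lines of Newton-Raphson (equivalently, iteratively reweighted least squares). This is a generic sketch of the standard estimation step only; the paper's membership-degree association indicators are not reproduced here.

      import numpy as np

      def logistic_mle(X, y, iters=25, tol=1e-8):
          """Newton-Raphson maximization of the binary-logistic log-likelihood.
          X is the (n, p) design matrix including an intercept column."""
          beta = np.zeros(X.shape[1])
          for _ in range(iters):
              p = 1.0 / (1.0 + np.exp(-X @ beta))           # fitted probabilities
              grad = X.T @ (y - p)                          # score vector
              hess = X.T @ (X * (p * (1 - p))[:, None])     # observed information
              step = np.linalg.solve(hess, grad)
              beta += step
              if np.max(np.abs(step)) < tol:
                  break
          return beta

      rng = np.random.default_rng(2)
      X = np.column_stack([np.ones(500), rng.normal(size=(500, 2))])
      b_true = np.array([-0.5, 1.2, -0.8])
      y = rng.binomial(1, 1 / (1 + np.exp(-X @ b_true)))
      print(logistic_mle(X, y))                             # close to b_true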

  14. Improved ice loss estimate of the northwestern Greenland ice sheet

    Science.gov (United States)

    Kjeldsen, Kristian K.; Khan, Shfaqat Abbas; Wahr, John; Korsgaard, Niels J.; Kjær, Kurt H.; Bjørk, Anders A.; Hurkmans, Ruud; Broeke, Michiel R.; Bamber, Jonathan L.; Angelen, Jan H.

    2013-02-01

    We estimate ice volume change rates in the northwest Greenland drainage basin during 2003-2009 using Ice, Cloud and land Elevation Satellite (ICESat) laser altimeter data. Elevation changes are often reported to be largest near the frontal portion of outlet glaciers. To improve the volume change estimate, we supplement the ICESat data with altimeter surveys from NASA's Airborne Topographic Mapper from 2002 to 2010 and NASA's Land, Vegetation and Ice Sensor from 2010. The Airborne data are mainly concentrated along the ice margin and thus have a significant impact on the estimate of the volume change. Our results show that adding Airborne Topographic Mapper and Land, Vegetation and Ice Sensor data to the ICESat data increases the catchment-wide estimate of ice volume loss by 11%, mainly due to an improved volume loss estimate along the ice sheet margin. Furthermore, our results show a significant acceleration in mass loss at elevations above 1200 m. Both the improved mass loss estimate along the ice sheet margin and the acceleration at higher elevations have implications for predictions of the elastic adjustment of the lithosphere caused by present-day ice mass changes. Our study shows that the use of ICESat data alone to predict elastic uplift rates biases the predicted rates by several millimeters per year at GPS locations along the northwestern coast.

  15. Improved Carrier Frequency Offset Estimation in OFDM Systems

    Institute of Scientific and Technical Information of China (English)

    LIU Xiao-ming; LIU Yuan-an

    2004-01-01

    A new carrier frequency offset estimation algorithm for Orthogonal Frequency Division Multiplexing (OFDM) systems is proposed. The proposed algorithm is an improvement of the Michele Morelli (M&M) algorithm. It also uses a training symbol which is divided into L>2 identical parts to estimate the carrier frequency offset, and the estimation range is ±L/2 times the subcarrier spacing, but the proposed Maximum Likelihood (ML) algorithm is more robust and computationally efficient than the M&M algorithm.
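
    This family of estimators can be illustrated by its single-lag special case: with L identical training parts, the phase of the lag-M autocorrelation is proportional to the offset, which gives the quoted ±L/2 subcarrier range. The sketch below is only this coarse single-correlation version with assumed signal parameters; the M&M and proposed ML estimators combine several lags for lower variance.

      import numpy as np

      def cfo_estimate(r, L):
          """Frequency-offset estimate (in subcarrier spacings) from a
          training symbol of L identical parts; valid for |offset| < L/2."""
          M = len(r) // L                              # samples per part
          corr = np.sum(r[M:] * np.conj(r[:-M]))       # lag-M autocorrelation
          return L * np.angle(corr) / (2 * np.pi)

      rng = np.random.default_rng(3)
      L, M = 4, 64
      part = rng.normal(size=M) + 1j * rng.normal(size=M)
      s = np.tile(part, L)                             # training symbol
      n = np.arange(L * M)
      r = s * np.exp(2j * np.pi * 1.3 * n / (L * M))   # true offset: 1.3
      r += 0.05 * (rng.normal(size=r.size) + 1j * rng.normal(size=r.size))
      print(cfo_estimate(r, L))                        # ~1.3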

  16. Improved Estimation and Interpretation of Correlations in Neural Circuits

    Science.gov (United States)

    Yatsenko, Dimitri; Josić, Krešimir; Ecker, Alexander S.; Froudarakis, Emmanouil; Cotton, R. James; Tolias, Andreas S.

    2015-01-01

    Ambitious projects aim to record the activity of ever larger and denser neuronal populations in vivo. Correlations in neural activity measured in such recordings can reveal important aspects of neural circuit organization. However, estimating and interpreting large correlation matrices is statistically challenging. Estimation can be improved by regularization, i.e. by imposing a structure on the estimate. The amount of improvement depends on how closely the assumed structure represents dependencies in the data. Therefore, the selection of the most efficient correlation matrix estimator for a given neural circuit must be determined empirically. Importantly, the identity and structure of the most efficient estimator informs about the types of dominant dependencies governing the system. We sought statistically efficient estimators of neural correlation matrices in recordings from large, dense groups of cortical neurons. Using fast 3D random-access laser scanning microscopy of calcium signals, we recorded the activity of nearly every neuron in volumes 200 μm wide and 100 μm deep (150–350 cells) in mouse visual cortex. We hypothesized that in these densely sampled recordings, the correlation matrix should be best modeled as the combination of a sparse graph of pairwise partial correlations representing local interactions and a low-rank component representing common fluctuations and external inputs. Indeed, in cross-validation tests, the covariance matrix estimator with this structure consistently outperformed other regularized estimators. The sparse component of the estimate defined a graph of interactions. These interactions reflected the physical distances and orientation tuning properties of cells: The density of positive ‘excitatory’ interactions decreased rapidly with geometric distances and with differences in orientation preference whereas negative ‘inhibitory’ interactions were less selective. Because of its superior performance, this
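
    The empirical selection procedure described here, fitting candidate covariance estimators and scoring them on held-out data, can be sketched with off-the-shelf estimators. The snippet assumes scikit-learn is available and uses the graphical lasso (sparse partial correlations) merely as a stand-in for the paper's sparse-plus-low-rank estimator, which is not available in standard libraries.

      import numpy as np
      from sklearn.covariance import EmpiricalCovariance, GraphicalLasso, LedoitWolf

      rng = np.random.default_rng(4)
      n, p = 400, 30
      A = rng.normal(size=(p, p)) * (rng.random((p, p)) < 0.1)  # sparse couplings
      cov = np.linalg.inv(A @ A.T + np.eye(p))                  # ground-truth covariance
      X = rng.multivariate_normal(np.zeros(p), cov, size=n)
      X_tr, X_te = X[:n // 2], X[n // 2:]

      # score = Gaussian log-likelihood of held-out data; higher is better
      for est in (EmpiricalCovariance(), LedoitWolf(), GraphicalLasso(alpha=0.05)):
          est.fit(X_tr)
          print(type(est).__name__, round(est.score(X_te), 2))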

  17. Improved estimation and interpretation of correlations in neural circuits.

    Directory of Open Access Journals (Sweden)

    Dimitri Yatsenko

    2015-03-01

    Ambitious projects aim to record the activity of ever larger and denser neuronal populations in vivo. Correlations in neural activity measured in such recordings can reveal important aspects of neural circuit organization. However, estimating and interpreting large correlation matrices is statistically challenging. Estimation can be improved by regularization, i.e. by imposing a structure on the estimate. The amount of improvement depends on how closely the assumed structure represents dependencies in the data. Therefore, the selection of the most efficient correlation matrix estimator for a given neural circuit must be determined empirically. Importantly, the identity and structure of the most efficient estimator informs about the types of dominant dependencies governing the system. We sought statistically efficient estimators of neural correlation matrices in recordings from large, dense groups of cortical neurons. Using fast 3D random-access laser scanning microscopy of calcium signals, we recorded the activity of nearly every neuron in volumes 200 μm wide and 100 μm deep (150-350 cells) in mouse visual cortex. We hypothesized that in these densely sampled recordings, the correlation matrix should be best modeled as the combination of a sparse graph of pairwise partial correlations representing local interactions and a low-rank component representing common fluctuations and external inputs. Indeed, in cross-validation tests, the covariance matrix estimator with this structure consistently outperformed other regularized estimators. The sparse component of the estimate defined a graph of interactions. These interactions reflected the physical distances and orientation tuning properties of cells: The density of positive 'excitatory' interactions decreased rapidly with geometric distances and with differences in orientation preference whereas negative 'inhibitory' interactions were less selective. Because of its superior performance, this

  18. An improved method for estimating the frequency correlation function

    KAUST Repository

    Chelli, Ali

    2012-04-01

    For time-invariant frequency-selective channels, the transfer function is a superposition of waves having different propagation delays and path gains. In order to estimate the frequency correlation function (FCF) of such channels, the frequency averaging technique can be utilized. The obtained FCF can be expressed as a sum of auto-terms (ATs) and cross-terms (CTs). The ATs are caused by the autocorrelation of individual path components. The CTs are due to the cross-correlation of different path components. These CTs have no physical meaning and lead to an estimation error. We propose a new estimation method aiming to improve the estimation accuracy of the FCF of a band-limited transfer function. The basic idea behind the proposed method is to introduce a kernel function aiming to reduce the CT effect, while preserving the ATs. In this way, we can improve the estimation of the FCF. The performance of the proposed method and the frequency averaging technique is analyzed using a synthetically generated transfer function. We show that the proposed method is more accurate than the frequency averaging technique. The accurate estimation of the FCF is crucial for system design. In fact, we can determine the coherence bandwidth from the FCF. Exact knowledge of the coherence bandwidth is beneficial in both the design and the optimization of frequency interleaving and pilot arrangement schemes. © 2012 IEEE.
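
    The baseline frequency-averaging estimator that the proposed kernel method improves on is straightforward to write down, and the coherence bandwidth can be read off the estimate. The two-path channel, the 90% coherence threshold, and the function name below are illustrative assumptions.

      import numpy as np

      def fcf_frequency_averaging(H, max_lag=None):
          """Estimate the FCF of a transfer function H[k] sampled on a uniform
          frequency grid: R[m] = mean_k H[k] * conj(H[k+m]), normalized so
          that R[0] = 1.  Cross-terms only partially average out, which is
          the estimation error the paper's kernel method targets."""
          K = len(H)
          max_lag = max_lag or K // 2
          R = np.array([np.mean(H[:K - m] * np.conj(H[m:])) for m in range(max_lag)])
          return R / R[0].real

      # synthetic two-path channel over a 20 MHz band
      f = np.linspace(0, 20e6, 2048)
      H = np.exp(-2j * np.pi * f * 0.2e-6) + 0.6 * np.exp(-2j * np.pi * f * 1.1e-6)
      R = fcf_frequency_averaging(H)
      Bc = np.argmax(np.abs(R) < 0.9) * (f[1] - f[0])   # 90% coherence bandwidth
      print(f"coherence bandwidth ~ {Bc / 1e3:.0f} kHz")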

  19. Improved ice loss estimate of the northwestern Greenland ice sheet

    NARCIS (Netherlands)

    Kjeldsen, K.K.; Khan, S.A.; van den Broeke, M.R.; van Angelen, J.H.

    2013-01-01

    We estimate ice volume change rates in the northwest Greenland drainage basin during 2003–2009 using Ice, Cloud and land Elevation Satellite (ICESat) laser altimeter data. Elevation changes are often reported to be largest near the frontal portion of outlet glaciers. To improve the volume change estimate, we supplement the ICESat data with altimeter surveys from NASA's Airborne Topographic Mapper from 2002 to 2010 and NASA's Land, Vegetation and Ice Sensor from 2010.

  20. Improving and Evaluating Nested Sampling Algorithm for Marginal Likelihood Estimation

    Science.gov (United States)

    Ye, M.; Zeng, X.; Wu, J.; Wang, D.; Liu, J.

    2016-12-01

    With the growing impacts of climate change and human activities on the water cycle, an increasing number of studies focus on the quantification of modeling uncertainty. Bayesian model averaging (BMA) provides a popular framework for quantifying conceptual model and parameter uncertainty. The ensemble prediction is generated by combining each plausible model's prediction, and each model is assigned a weight determined by the model's prior weight and marginal likelihood. Thus, the estimation of a model's marginal likelihood is crucial for reliable and accurate BMA prediction. The nested sampling estimator (NSE) is a newly proposed method for marginal likelihood estimation. NSE searches the parameter space gradually from low-likelihood to high-likelihood areas, and this evolution is carried out iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm is often used for local sampling. However, M-H is not an efficient sampling algorithm for high-dimensional or complicated parameter spaces. To improve the efficiency of NSE, it is natural to incorporate the robust and efficient DREAMzs sampling algorithm into the local sampling of NSE. The comparison results demonstrated that the improved NSE could speed up marginal likelihood estimation significantly. However, both the improved and the original NSE suffer from heavy instability. In addition, the heavy computational cost of the large number of model executions is overcome by using adaptive sparse grid surrogates.
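
    The shell-by-shell bookkeeping that nested sampling uses to accumulate the evidence is compact enough to sketch in full. Below, the constrained "local sampling" step is naive rejection from the prior, which is precisely the step DREAMzs is meant to make efficient; the toy problem, live-point count, and iteration budget are assumptions for illustration.

      import numpy as np

      def nested_sampling(loglike, prior_draw, n_live=200, n_iter=1200, seed=0):
          """Minimal nested-sampling estimate of the log marginal likelihood."""
          rng = np.random.default_rng(seed)
          live = [prior_draw(rng) for _ in range(n_live)]
          logL = np.array([loglike(t) for t in live])
          logZ, logX = -np.inf, 0.0                   # evidence, log prior volume
          for _ in range(n_iter):
              i = np.argmin(logL)                     # worst live point
              logw = logX + np.log1p(-np.exp(-1.0 / n_live))  # shell weight
              logZ = np.logaddexp(logZ, logw + logL[i])
              Lmin = logL[i]
              while True:                             # replace subject to L > Lmin
                  t = prior_draw(rng)                 # naive rejection sampling
                  lt = loglike(t)
                  if lt > Lmin:
                      live[i], logL[i] = t, lt
                      break
              logX -= 1.0 / n_live                    # deterministic shrinkage
          m = logL.max()                              # leftover live points
          return np.logaddexp(logZ, logX + m + np.log(np.mean(np.exp(logL - m))))

      # toy: 2-D Gaussian likelihood, uniform prior on [-5, 5]^2
      loglike = lambda t: -0.5 * np.sum(t**2) - np.log(2 * np.pi)
      prior_draw = lambda rng: rng.uniform(-5, 5, size=2)
      print(nested_sampling(loglike, prior_draw))     # ~ log(1/100) = -4.61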

  1. Improving quantum state estimation with mutually unbiased bases.

    Science.gov (United States)

    Adamson, R B A; Steinberg, A M

    2010-07-16

    When used in quantum state estimation, projections onto mutually unbiased bases have the ability to maximize information extraction per measurement and to minimize redundancy. We present the first experimental demonstration of quantum state tomography of two-qubit polarization states to take advantage of mutually unbiased bases. We demonstrate improved state estimation as compared to standard measurement strategies and discuss how this can be understood from the structure of the measurements we use. We experimentally compared our method to the standard state estimation method for three different states and observed that the infidelity was up to 1.84 ± 0.06 times lower using our technique than using standard state estimation methods.

  2. Improving energy expenditure estimation by using a triaxial accelerometer.

    Science.gov (United States)

    Chen, K Y; Sun, M

    1997-12-01

    In our study of 125 subjects (53 men and 72 women) over two 24-h periods, we validated energy expenditure (EE) estimated by a triaxial accelerometer (Tritrac-R3D) against a whole-room indirect calorimeter under close-to-normal living conditions. The estimated EE was correlated with the measured total EE for the 2 days (r = 0.925 and r = 0.855; P < 0.001). We fitted both a linear and a nonlinear model to predict EE from the acceleration components measured by the Tritrac. Predicted EE was significantly improved with both models in estimating total EE, total EE for physical activities, EE in low-intensity activities, minute-by-minute averaged relative difference, and minute-by-minute SEE (all statistically significant). By modeling the acceleration components directly, EE can be estimated with higher accuracy (averaged SEE = 0.418 W/kg) than with the Tritrac model.

  3. Using Supervised Learning to Improve Monte Carlo Integral Estimation

    CERN Document Server

    Tracey, Brendan; Alonso, Juan J

    2011-01-01

    Monte Carlo (MC) techniques are often used to estimate integrals of a multivariate function using randomly generated samples of the function. In light of the increasing interest in uncertainty quantification and robust design applications in aerospace engineering, the calculation of expected values of such functions (e.g. performance measures) becomes important. However, MC techniques often suffer from high variance and slow convergence as the number of samples increases. In this paper we present Stacked Monte Carlo (StackMC), a new method for post-processing an existing set of MC samples to improve the associated integral estimate. StackMC is based on the supervised learning techniques of fitting functions and cross validation. It should reduce the variance of any type of Monte Carlo integral estimate (simple sampling, importance sampling, quasi-Monte Carlo, MCMC, etc.) without adding bias. We report on an extensive set of experiments confirming that the StackMC estimate of an integral is more accurate than ...
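
    The fit-and-correct idea behind StackMC admits a compact sketch: fit a cheap surrogate whose integral is known in closed form, and use the Monte Carlo samples only to estimate the residual. The polynomial surrogate, fold count, and uniform [0, 1] domain below are assumptions for illustration; the paper's estimator is more general.

      import numpy as np

      def stacked_mc(f_vals, x, degree=3, n_folds=5):
          """Variance-reduced estimate of the integral of f over [0, 1].
          Folds keep the surrogate fit and the residual correction on
          disjoint data, so no bias is introduced."""
          idx = np.arange(len(x)) % n_folds
          est = 0.0
          for k in range(n_folds):
              train, test = idx != k, idx == k
              coef = np.polyfit(x[train], f_vals[train], degree)
              # exact integral of the fitted polynomial on [0, 1]
              g_int = sum(c / (degree - i + 1) for i, c in enumerate(coef))
              resid = f_vals[test] - np.polyval(coef, x[test])
              est += (g_int + resid.mean()) / n_folds
          return est

      rng = np.random.default_rng(5)
      x = rng.random(200)
      f = np.exp(x)                                   # true integral: e - 1
      print("plain MC:", f.mean(), " StackMC:", stacked_mc(f, x))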

  4. Practitioner's knowledge representation a pathway to improve software effort estimation

    CERN Document Server

    Mendes, Emilia

    2014-01-01

    The main goal of this book is to help organizations improve their effort estimates and effort estimation processes by providing a step-by-step methodology that takes them through the creation and validation of models that are based on their own knowledge and experience. Such models, once validated, can then be used to obtain predictions, carry out risk analyses, enhance their estimation processes for new projects and generally advance them as learning organizations.Emilia Mendes presents the Expert-Based Knowledge Engineering of Bayesian Networks (EKEBNs) methodology, which she has used and adapted during the course of several industry collaborations with different companies world-wide over more than 6 years. The book itself consists of two major parts: first, the methodology's foundations in knowledge management, effort estimation (with special emphasis on the intricacies of software and Web development) and Bayesian networks are detailed; then six industry case studies are presented which illustrate the pra...

  5. The art and science of cancer education and evaluation: toward facilitating improved patient outcomes.

    Science.gov (United States)

    Johnson, Lenora; Ousley, Anita; Swarz, Jeffrey; Bingham, Raymond J; Erickson, J Bianca; Ellis, Steven; Moody, Terra

    2011-03-01

    Cancer education is a constantly evolving field, as science continues to advance both our understanding of cancer and its effects on patients, families, and communities. Moving discoveries into practice expeditiously is paramount to improving cancer outcomes. The continuing education of cancer care professionals throughout their practice life is vital to facilitating the adoption of therapeutic innovations. Meanwhile, more general educational programs serve to keep cancer patients, their families, and the public informed of the latest findings in cancer research. The National Cancer Institute conducted an assessment of the current knowledge base for cancer education, which involved two literature reviews: one of the general literature on evaluating medical and health education efforts, and one of the preceding 5 years of the Journal of Cancer Education (JCE). These reviews explored a wide range of educational models and methodologies. In general, those that were most effective used multiple methodologies, interactive techniques, and multiple exposures over time. Less than one third of the articles in the JCE reported on a cancer education or communication product, and of these, only 70% had been evaluated for effectiveness. Recommendations to improve the evaluation of cancer education and the educational focus of the JCE are provided.

  6. Facilitating the improved management of waste in South Africa through a national waste information system.

    Science.gov (United States)

    Godfrey, Linda

    2008-01-01

    Developing a waste information system (WIS) for a country is about more than just collecting routine data on waste; it is about facilitating the improved management of waste by providing timely, reliable information to the relevant role-players. It is a means of addressing the waste governance challenges facing South Africa - challenges ranging from strategic waste management issues in national government to basic operational challenges in local government. The paper addresses two hypotheses. The first is that the identified needs of government can provide a platform from which to design a national WIS framework for a developing country such as South Africa, and the second is that the needs for waste information reflect greater, currently unfulfilled challenges in the sustainable management of waste. Through a participatory needs analysis process, it is shown that waste information is needed by all three spheres of government to support, amongst others, informed planning and decision-making, compliance monitoring and enforcement, community participation through public access to information, human, infrastructure and financial resource management, and policy development. These needs for waste information correspond closely with key waste management challenges currently facing the country. A shift in government's approach to waste, in line with national and international policy, is evident from identified current and future waste information needs. However, the need for information on landfilling remains entrenched within government, possibly due to the poor compliance of landfill sites in South Africa and the problems around the illegal disposal of both general and hazardous waste.

  7. Chitosan improves anti-biofilm efficacy of gentamicin through facilitating antibiotic penetration.

    Science.gov (United States)

    Mu, Haibo; Guo, Fan; Niu, Hong; Liu, Qianjin; Wang, Shunchun; Duan, Jinyou

    2014-12-03

    Antibiotic overuse is one of the major drivers in the generation of antibiotic-resistant "super bugs" that can potentially cause serious effects on health. In this study, we reported that the polycationic polysaccharide chitosan could improve the efficacy of a given antibiotic (gentamicin) in combating bacterial biofilms, the universal lifestyle of microbes. Short- or long-term treatment with the mixture of chitosan and gentamicin resulted in the dispersal of Listeria monocytogenes (L. monocytogenes) biofilms. In this combination, chitosan with a moderate molecular mass (~13 kDa) and a high N-deacetylation degree (~88% DD) elicited optimal anti-biofilm and bactericidal activity. Mechanistic insights indicated that chitosan facilitated the entry of gentamicin into the architecture of L. monocytogenes biofilms. Finally, we showed that this combination was also effective in eradicating biofilms built by two other Listeria species, Listeria welshimeri and Listeria innocua. Thus, our findings pointed out that chitosan supplementation might overcome the resistance of Listeria biofilms to gentamicin, which might help prevent gentamicin overuse when this specific antibiotic is recommended for combating Listeria biofilms.

  8. Chitosan Improves Anti-Biofilm Efficacy of Gentamicin through Facilitating Antibiotic Penetration

    Directory of Open Access Journals (Sweden)

    Haibo Mu

    2014-12-01

    Antibiotic overuse is one of the major drivers in the generation of antibiotic-resistant “super bugs” that can potentially cause serious effects on health. In this study, we reported that the polycationic polysaccharide chitosan could improve the efficacy of a given antibiotic (gentamicin) in combating bacterial biofilms, the universal lifestyle of microbes. Short- or long-term treatment with the mixture of chitosan and gentamicin resulted in the dispersal of Listeria monocytogenes (L. monocytogenes) biofilms. In this combination, chitosan with a moderate molecular mass (~13 kDa) and a high N-deacetylation degree (~88% DD) elicited optimal anti-biofilm and bactericidal activity. Mechanistic insights indicated that chitosan facilitated the entry of gentamicin into the architecture of L. monocytogenes biofilms. Finally, we showed that this combination was also effective in eradicating biofilms built by two other Listeria species, Listeria welshimeri and Listeria innocua. Thus, our findings pointed out that chitosan supplementation might overcome the resistance of Listeria biofilms to gentamicin, which might help prevent gentamicin overuse when this specific antibiotic is recommended for combating Listeria biofilms.

  9. Improved Differential Evolution Algorithm for Parameter Estimation to Improve the Production of Biochemical Pathway

    Directory of Open Access Journals (Sweden)

    Chuii Khim Chong

    2012-06-01

    This paper introduces an improved Differential Evolution algorithm (IDE) which aims at improving its performance in estimating the relevant parameters for metabolic pathway data, simulating the glycolysis pathway in yeast. Metabolic pathway data are expected to be of significant help in the development of efficient tools for kinetic modeling and parameter estimation platforms. Many computational algorithms face obstacles due to noisy data and the difficulty of estimating a myriad of system parameters, and they require long computational times. The proposed algorithm (IDE) is a hybrid of a Differential Evolution algorithm (DE) and a Kalman Filter (KF). The outcome of IDE is shown to be superior to that of a Genetic Algorithm (GA) and DE. Experimental results show that IDE yields estimated optimal kinetic parameter values, shorter computation times and increased accuracy of simulated results compared with the other estimation algorithms.
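
    A bare-bones DE/rand/1/bin loop shows the estimation machinery this record builds on; the Kalman-filter refinement that distinguishes IDE is not reproduced here. The toy kinetic model, the bounds, and the control parameters are assumptions for illustration.

      import numpy as np

      def de_fit(obj, bounds, pop=30, gens=200, F=0.8, CR=0.9, seed=0):
          """Minimal differential evolution (DE/rand/1/bin) minimizer."""
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds).T
          X = rng.uniform(lo, hi, size=(pop, len(lo)))
          fit = np.array([obj(x) for x in X])
          for _ in range(gens):
              for i in range(pop):
                  others = [j for j in range(pop) if j != i]
                  a, b, c = X[rng.choice(others, 3, replace=False)]
                  mutant = np.clip(a + F * (b - c), lo, hi)   # mutation
                  cross = rng.random(len(lo)) < CR            # crossover mask
                  cross[rng.integers(len(lo))] = True
                  trial = np.where(cross, mutant, X[i])
                  f_trial = obj(trial)
                  if f_trial < fit[i]:                        # greedy selection
                      X[i], fit[i] = trial, f_trial
          return X[np.argmin(fit)], fit.min()

      # toy estimation: recover (k1, k2) of y' = -k1*y + k2, y(0) = 1
      t = np.linspace(0, 5, 50)
      y_obs = 0.3 / 0.9 + (1 - 0.3 / 0.9) * np.exp(-0.9 * t)  # k1=0.9, k2=0.3
      model = lambda k: k[1] / k[0] + (1 - k[1] / k[0]) * np.exp(-k[0] * t)
      sse = lambda k: np.sum((model(k) - y_obs) ** 2)
      print(de_fit(sse, [(0.1, 5.0), (0.0, 2.0)]))            # ~(0.9, 0.3)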

  10. Performance Analysis of an Improved MUSIC DoA Estimator

    Science.gov (United States)

    Vallet, Pascal; Mestre, Xavier; Loubaton, Philippe

    2015-12-01

    This paper addresses the statistical performance of subspace DoA estimation using a sensor array, in the asymptotic regime where the number of samples and the number of sensors both converge to infinity at the same rate. Improved subspace DoA estimators (termed G-MUSIC) were derived in previous works and were shown to be consistent and asymptotically Gaussian distributed in the case where the number of sources and their DoAs remain fixed. In this case, which models widely spaced DoA scenarios, it is proved in the present paper that the traditional MUSIC method also provides consistent DoA estimates having the same asymptotic variances as the G-MUSIC estimates. The case of DoAs that are spaced on the order of a beamwidth, which models closely spaced sources, is also considered. It is shown that G-MUSIC estimates are still able to consistently separate the sources, while this is no longer the case for the MUSIC estimates. The asymptotic variances of G-MUSIC estimates are also evaluated.

  11. The Source Signature Estimator - System Improvements and Applications

    Energy Technology Data Exchange (ETDEWEB)

    Sabel, Per; Brink, Mundy; Eidsvig, Seija; Jensen, Lars

    1998-12-31

    This presentation relates briefly to the first part of the joint project on post-survey analysis of shot-by-shot based source signature estimation. The improvements of a Source Signature Estimator system are analysed. The notional source method can give suboptimal results when the real array geometry, i.e. the actual separations between the sub-arrays of an air gun array, is not input to the notional source algorithm. This constraint has been addressed herein and was implemented for the first time in the field in the summer of 1997. The second part of this study will show the potential advantages for interpretation when the signature estimates are applied in data processing. 5 refs., 1 fig.

  12. Improved Phasor Estimation Method for Dynamic Voltage Restorer Applications

    DEFF Research Database (Denmark)

    Ebrahimzadeh, Esmaeil; Farhangi, Shahrokh; Iman-Eini, Hossein;

    2015-01-01

    The dynamic voltage restorer (DVR) is a series compensator for distribution system applications, which protects sensitive loads against voltage sags by fast voltage injection. The DVR must estimate the magnitude and phase of the measured voltages to achieve the desired performance. This paper proposes a phasor parameter estimation algorithm based on a recursive variable and fixed data window least error squares (LES) method for the DVR control system. The proposed algorithm, in addition to decreasing the computational burden, improves the frequency response of the control scheme based on the fixed data window LES method. The DVR control system based on the proposed algorithm provides a better compromise between the estimation speed and accuracy of the voltage and current signals and can be implemented using a simple and low-cost processor. The results of the studies indicate...
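
    The fixed-data-window LES core that this record builds on amounts to fitting a sinusoid to a window of samples in the least-squares sense. The sketch below shows only that fixed-window step, with an assumed 50 Hz fundamental and sampling rate; the recursive variable-window logic of the proposed algorithm is not reproduced.

      import numpy as np

      def les_phasor(window, fs, f0=50.0):
          """Least-error-squares phasor estimate over one data window:
          fit v[n] ~ A cos(w n/fs) + B sin(w n/fs) and return the magnitude
          and phase of the fundamental."""
          n = np.arange(len(window))
          w = 2 * np.pi * f0
          M = np.column_stack([np.cos(w * n / fs), np.sin(w * n / fs)])
          (A, B), *_ = np.linalg.lstsq(M, window, rcond=None)
          return np.hypot(A, B), np.arctan2(-B, A)

      fs = 3200.0                                     # 64 samples per 50 Hz cycle
      n = np.arange(64)
      v = 0.7 * np.cos(2 * np.pi * 50.0 * n / fs + 0.5)   # sagged phase voltage
      mag, ph = les_phasor(v, fs)
      print(mag, ph)                                  # ~0.7 pu, ~0.5 rad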

  13. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    Directory of Open Access Journals (Sweden)

    Zhiwei Zhao

    2015-02-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation.
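
    A common way to quantify link correlation from synchronized packet traces is the conditional reception ratio, which the sketch below computes for two simulated links with shared shadowing. This is a generic illustration, not LACE itself; LACE's long-/short-term weighting could be layered on top, e.g. with an EWMA over trace windows.

      import numpy as np

      def link_correlation(rx_a, rx_b):
          """P(B receives | A receives) versus B's unconditional reception
          ratio; a large gap indicates correlated links."""
          rx_a, rx_b = np.asarray(rx_a, bool), np.asarray(rx_b, bool)
          cond = rx_b[rx_a].mean() if rx_a.any() else float("nan")
          return cond, rx_b.mean()

      rng = np.random.default_rng(6)
      shadow = rng.random(1000) < 0.8                 # shared channel state
      rx_a = shadow & (rng.random(1000) < 0.9)        # losses correlated via shadow
      rx_b = shadow & (rng.random(1000) < 0.9)
      cond, prr = link_correlation(rx_a, rx_b)
      print(f"P(B|A) = {cond:.2f} vs PRR(B) = {prr:.2f}")  # conditional >> marginal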

  14. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    Science.gov (United States)

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-01-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314

  15. Improving Empirical Approaches to Estimating Local Greenhouse Gas Emissions

    Science.gov (United States)

    Blackhurst, M.; Azevedo, I. L.; Lattanzi, A.

    2016-12-01

    Evidence increasingly indicates our changing climate will have significant global impacts on public health, economies, and ecosystems. As a result, local governments have become increasingly interested in climate change mitigation. In the U.S., cities and counties representing nearly 15% of the domestic population plan to reduce 300 million metric tons of greenhouse gases over the next 40 years (or approximately 1 ton per capita). Local governments estimate greenhouse gas emissions to establish mitigation goals and select supporting mitigation measures. However, current practices produce greenhouse gas estimates - also known as a "greenhouse gas inventory" - of empirical quality often insufficient for robust mitigation decision making. Namely, current mitigation planning uses sporadic, annual, and deterministic estimates disaggregated by broad end-use sector, obscuring the sources of emissions uncertainty, variability, and exogeneity that influence mitigation opportunities. As part of AGU's Thriving Earth Exchange, Ari Lattanzi of the City of Pittsburgh, PA recently partnered with Dr. Inez Lima Azevedo (Carnegie Mellon University) and Dr. Michael Blackhurst (University of Pittsburgh) to improve the empirical approach to characterizing Pittsburgh's greenhouse gas emissions. The project will produce first-order estimates of the underlying sources of uncertainty, variability, and exogeneity influencing Pittsburgh's greenhouse gases and discuss the implications for mitigation decision making. The results will enable local governments to collect more robust greenhouse gas inventories to better support their mitigation goals and improve measurement and verification efforts.

  16. Improved mirror position estimation using resonant quantum smoothing

    Energy Technology Data Exchange (ETDEWEB)

    Wheatley, Trevor A. [UNSW Australia, School of Engineering and Information Technology, Canberra, ACT (Australia); Australian Research Council, Centre for Quantum Computation and Communication Technology, Canberra (Australia); Tsang, Mankei [National University of Singapore, Department of Electrical and Computer Engineering, Singapore (Singapore); National University of Singapore, Department of Physics, Singapore (Singapore); Petersen, Ian R. [UNSW Australia, School of Engineering and Information Technology, Canberra, ACT (Australia); Huntington, Elanor H. [UNSW Australia, School of Engineering and Information Technology, Canberra, ACT (Australia); Australian Research Council, Centre for Quantum Computation and Communication Technology, Canberra (Australia); Australian National University, Research School of Engineering, College of Engineering and Computer Science, Canberra, ACT (Australia)

    2015-05-20

    Quantum parameter estimation, the ability to precisely obtain a classical value in a quantum system, is very important to many key quantum technologies. Many of these technologies rely on an optical probe, either coherent or squeezed states, to make a precise measurement of a parameter ultimately limited by quantum mechanics. We use this technique to theoretically model, simulate and validate by experiment the measurement and precise estimation of the position of a cavity mirror. In non-resonant systems, the estimation enhancement achieved by quantum smoothing over optimal filtering has not exceeded a factor of two, even when squeezed state probes were used. Using a coherent state probe, we show that applying quantum smoothing to a mechanically resonant structure driven by a resonant forcing function can result in significantly greater improvement in parameter estimation than in non-resonant systems. In this work, we show that it is possible to achieve a smoothing improvement by a factor in excess of three over optimal filtering. By using intra-cavity light as the probe we obtain finer precision than has been achieved with the equivalent quantum resources in free space. (orig.)
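
    The filtering-versus-smoothing gap at the heart of this record has a simple classical analogue: a fixed-interval (RTS) smoother uses future as well as past data and so beats the causal Kalman filter. The scalar random-walk model below is an assumed toy, not the paper's optomechanical system.

      import numpy as np

      def filter_and_smooth(z, q=0.01, r=1.0):
          """Kalman filter plus RTS smoother for a scalar random walk
          observed in noise (process variance q, measurement variance r)."""
          n = len(z)
          xf, Pf = np.empty(n), np.empty(n)           # filtered mean/variance
          xp, Pp = np.empty(n), np.empty(n)           # one-step predictions
          x, P = z[0], r
          for k in range(n):
              xp[k], Pp[k] = x, P + q                 # predict (identity dynamics)
              K = Pp[k] / (Pp[k] + r)                 # Kalman gain
              x = xp[k] + K * (z[k] - xp[k])
              P = (1 - K) * Pp[k]
              xf[k], Pf[k] = x, P
          xs = xf.copy()
          for k in range(n - 2, -1, -1):              # backward RTS pass
              G = Pf[k] / Pp[k + 1]
              xs[k] = xf[k] + G * (xs[k + 1] - xp[k + 1])
          return xf, xs

      rng = np.random.default_rng(7)
      truth = np.cumsum(rng.normal(0, 0.1, 500))      # slowly drifting position
      z = truth + rng.normal(0, 1.0, 500)
      xf, xs = filter_and_smooth(z)
      print("filter RMSE:  ", np.sqrt(np.mean((xf - truth) ** 2)))
      print("smoother RMSE:", np.sqrt(np.mean((xs - truth) ** 2)))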

  17. Improving the estimation of the tuberculosis burden in India.

    Science.gov (United States)

    Cowling, Krycia; Dandona, Rakhi; Dandona, Lalit

    2014-11-01

    Although India is considered to be the country with the greatest tuberculosis burden, estimates of the disease's incidence, prevalence and mortality in India rely on sparse data with substantial uncertainty. The relevant available data are less reliable than those from countries that have recently improved systems for case reporting or recently invested in national surveys of tuberculosis prevalence. We explored ways to improve the estimation of the tuberculosis burden in India. We focused on case notification data - among the most reliable data available - and ways to investigate the associated level of underreporting, as well as the need for a national tuberculosis prevalence survey. We discuss several recent developments - i.e. changes in national policies relating to tuberculosis, World Health Organization guidelines for the investigation of the disease, and a rapid diagnostic test - that should improve data collection for the estimation of the tuberculosis burden in India and elsewhere. We recommend the implementation of an inventory study in India to assess the underreporting of tuberculosis cases, as well as a national survey of tuberculosis prevalence. A national assessment of drug resistance in Indian strains of Mycobacterium tuberculosis should also be considered. The results of such studies will be vital for the accurate monitoring of tuberculosis control efforts in India and globally.

  18. Increasing fMRI sampling rate improves Granger causality estimates.

    Directory of Open Access Journals (Sweden)

    Fa-Hsuan Lin

    Estimation of causal interactions between brain areas is necessary for elucidating large-scale functional brain networks underlying behavior and cognition. Granger causality analysis of time series data can quantitatively estimate directional information flow between brain regions. Here, we show that such estimates are significantly improved when the temporal sampling rate of functional magnetic resonance imaging (fMRI) is increased 20-fold. Specifically, healthy volunteers performed a simple visuomotor task during blood oxygenation level dependent (BOLD) contrast based whole-head inverse imaging (InI). Granger causality analysis based on raw InI BOLD data sampled at 100-ms resolution detected the expected causal relations, whereas when the data were downsampled to the temporal resolution of 2 s typically used in echo-planar fMRI, the causality could not be detected. An additional control analysis, in which we SINC interpolated additional data points to the downsampled time series at 0.1-s intervals, confirmed that the improvements achieved with the real InI data were not explainable by the increased time-series length alone. We therefore conclude that the high temporal resolution of InI improves the Granger causality connectivity analysis of the human brain.
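
    Pairwise Granger causality reduces to comparing the residual variances of two autoregressions, which also makes the record's sampling-rate point easy to demonstrate: a one-sample coupling is visible at full rate but disappears after downsampling. The lag order, coupling strength, and downsampling factor below are illustrative assumptions.

      import numpy as np

      def granger(x, y, p=2):
          """Granger causality x -> y at lag order p:
          log(resid. var. of AR(p) on y alone / resid. var. with x lags)."""
          n = len(y)
          Y = y[p:]
          Ly = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
          Lx = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
          ones = np.ones((n - p, 1))
          var = []
          for D in (np.hstack([ones, Ly]), np.hstack([ones, Ly, Lx])):
              beta, *_ = np.linalg.lstsq(D, Y, rcond=None)
              var.append(np.var(Y - D @ beta))
          return np.log(var[0] / var[1])

      rng = np.random.default_rng(8)
      x = rng.normal(size=4000)
      y = 0.8 * np.roll(x, 1) + 0.5 * rng.normal(size=4000)
      y[0] = 0.0                                      # drop the wrapped sample
      print("full rate:  ", granger(x, y), granger(y, x))
      print("downsampled:", granger(x[::20], y[::20]), granger(y[::20], x[::20]))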

  19. Improving the accuracy of livestock distribution estimates through spatial interpolation

    Directory of Open Access Journals (Sweden)

    Ward Bryssinckx

    2012-11-01

    Animal distribution maps serve many purposes such as estimating transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps is highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because of averaging under- and over-estimates (e.g. when aggregating cattle number estimates from subcounty to district level, P < 0.009 based on a sample of 2,077 parishes using one-stage stratified samples). During aggregation, area-weighted mean values were assigned to higher administrative unit levels. However, when this step is preceded by a spatial interpolation to fill in missing values in non...
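
    One simple interpolator for the missing-value step mentioned above is inverse-distance weighting; the sketch assumes parish centroids on a unit square and a smooth synthetic cattle-density field, purely for illustration.

      import numpy as np

      def idw(xy_known, v_known, xy_query, power=2.0):
          """Inverse-distance-weighted interpolation of point values."""
          d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
          w = 1.0 / np.maximum(d, 1e-12) ** power     # avoid division by zero
          return (w @ v_known) / w.sum(axis=1)

      rng = np.random.default_rng(9)
      xy = rng.random((300, 2))                       # surveyed parish centroids
      field = lambda p: 500 + 400 * np.sin(3 * p[:, 0]) * np.cos(2 * p[:, 1])
      counts = field(xy) + rng.normal(0, 20, 300)     # noisy survey counts
      query = rng.random((5, 2))                      # unsurveyed parishes
      print(idw(xy, counts, query).round(0))
      print(field(query).round(0))                    # compare with the truth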

  20. Barriers and Facilitators to Implementing a Change Initiative in Long-Term Care Using the INTERACT® Quality Improvement Program.

    Science.gov (United States)

    Tappen, Ruth M; Wolf, David G; Rahemi, Zahra; Engstrom, Gabriella; Rojido, Carolina; Shutes, Jill M; Ouslander, Joseph G

    Implementation of major organizational change initiatives presents a challenge for long-term care leadership. Implementation of the INTERACT® (Interventions to Reduce Acute Care Transfers) quality improvement program, designed to improve the management of acute changes in condition and reduce unnecessary emergency department visits and hospitalizations of nursing home residents, serves as an example to illustrate the facilitators and barriers to major change in long-term care. As part of a larger study of the impact of INTERACT® on rates of emergency department visits and hospitalizations, staff of 71 nursing homes were called monthly to follow-up on their progress and discuss successful facilitating strategies and any challenges and barriers they encountered during the yearlong implementation period. Themes related to barriers and facilitators were identified. Six major barriers to implementation were identified: the magnitude and complexity of the change (35%), instability of facility leadership (27%), competing demands (40%), stakeholder resistance (49%), scarce resources (86%), and technical problems (31%). Six facilitating strategies were also reported: organization-wide involvement (68%), leadership support (41%), use of administrative authority (14%), adequate training (66%), persistence and oversight on the part of the champion (73%), and unfolding positive results (14%). Successful introduction of a complex change such as the INTERACT® quality improvement program in a long-term care facility requires attention to the facilitators and barriers identified in this report from those at the frontline.

  1. Requirements and standards facilitating quality improvement for reporting systems in gastrointestinal endoscopy: European Society of Gastrointestinal Endoscopy (ESGE) Position Statement.

    Science.gov (United States)

    Bretthauer, Michael; Aabakken, Lars; Dekker, Evelien; Kaminski, Michal F; Rösch, Thomas; Hultcrantz, Rolf; Suchanek, Stepan; Jover, Rodrigo; Kuipers, Ernst J; Bisschops, Raf; Spada, Cristiano; Valori, Roland; Domagk, Dirk; Rees, Colin; Rutter, Matthew D

    2016-03-01

    To develop standards for high quality in gastrointestinal (GI) endoscopy, the European Society of Gastrointestinal Endoscopy (ESGE) has established the ESGE Quality Improvement Committee. A prerequisite for quality assurance and improvement for all GI endoscopy procedures is state-of-the-art integrated digital reporting systems for standardized documentation of the procedures. The current paper describes the ESGE's viewpoints on the requirements for high-quality endoscopy reporting systems in GI endoscopy. Recommendations: (1) Endoscopy reporting systems must be electronic. (2) Endoscopy reporting systems should be integrated into hospitals' patient record systems. (3) Endoscopy reporting systems should include patient identifiers to facilitate data linkage to other data sources. (4) Endoscopy reporting systems shall restrict the use of free-text entry to a minimum, and be based mainly on structured data entry. (5) Separate entry of data for quality or research purposes is discouraged. Automatic data transfer for quality and research purposes must be facilitated. (6) Double entry of data by the endoscopist or associate personnel is discouraged. Available data from outside sources (administrative or medical) must be made available automatically. (7) Endoscopy reporting systems shall facilitate the inclusion of information on histopathology of detected lesions, patient satisfaction, adverse events, and surveillance recommendations. (8) Endoscopy reporting systems must facilitate easy data retrieval at any time in a universally compatible format. (9) Endoscopy reporting systems must include data fields for key performance indicators as defined by quality improvement committees. (10) Endoscopy reporting systems must facilitate changes in indicators and data entry fields as required by professional organizations.

  2. Improving Google Flu Trends estimates for the United States through transformation.

    Directory of Open Access Journals (Sweden)

    Leah J Martin

    Google Flu Trends (GFT) uses Internet search queries in an effort to provide early warning of increases in influenza-like illness (ILI). In the United States, GFT estimates the percentage of physician visits related to ILI (%ILINet) reported by the Centers for Disease Control and Prevention (CDC). However, during the 2012-13 influenza season, GFT overestimated %ILINet by an appreciable amount and estimated the peak in incidence three weeks late. Using data from 2010-14, we investigated the relationship between GFT estimates (%GFT) and %ILINet. Based on the relationship between the relative change in %GFT and the relative change in %ILINet, we transformed %GFT estimates to better correspond with %ILINet values. In 2010-13, our transformed %GFT estimates were within ±10% of %ILINet values for 17 of the 29 weeks that %ILINet was above the seasonal baseline value determined by the CDC; in contrast, the original %GFT estimates were within ±10% of %ILINet values for only two of these 29 weeks. Relative to the %ILINet peak in 2012-13, the peak in our transformed %GFT estimates was 2% lower and one week later, whereas the peak in the original %GFT estimates was 74% higher and three weeks later. The same transformation improved %GFT estimates using the recalibrated 2013 GFT model in early 2013-14. Our transformed %GFT estimates can be calculated approximately one week before %ILINet values are reported by the CDC, and the transformation equation was stable over the time period investigated (2010-13). We anticipate our results will facilitate future use of GFT.

  3. An Improved Channel Estimation Algorithm Based on Estimating Level Crossing Rate for the CDMA Receiver

    Institute of Scientific and Technical Information of China (English)

    MAZhangyong; YANYongqing; ZHAOChunming; YOUXiaohu

    2003-01-01

    In this paper, an improved channel estimation algorithm based on tracking the level crossing rate (LCR) of the fading is proposed for CDMA systems with a continuous pilot channel. By using a simple LCR estimator, the Doppler shift can be calculated approximately, so the observation length of the channel estimation can be adjusted dynamically. The procedure is presented, including the iterative algorithm for the time-varying channel. Moreover, computer simulation results show that the algorithm achieves a good tradeoff between noise compression capability and channel tracking performance.
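
    For Rayleigh fading, the envelope's level crossing rate at a normalized level rho is sqrt(2*pi)*fd*rho*exp(-rho^2), so counting crossings yields a Doppler estimate. The sum-of-sinusoids fading generator and the parameter values below are assumptions for illustration, not the paper's estimator.

      import numpy as np

      def jakes_fading(fd, fs, n, paths=32, seed=0):
          """Approximate Clarke/Jakes flat fading via a sum of sinusoids."""
          rng = np.random.default_rng(seed)
          t = np.arange(n) / fs
          theta = rng.uniform(0, 2 * np.pi, paths)    # arrival angles
          phi = rng.uniform(0, 2 * np.pi, paths)      # initial phases
          g = np.exp(1j * (2 * np.pi * fd * np.cos(theta)[:, None] * t + phi[:, None]))
          return g.sum(axis=0) / np.sqrt(paths)

      def doppler_from_lcr(env, fs, rho=1.0):
          """Invert the Rayleigh LCR formula to estimate the max Doppler."""
          level = rho * np.sqrt(np.mean(env ** 2))    # level relative to RMS
          up = (env[:-1] < level) & (env[1:] >= level)
          lcr = up.sum() / (len(env) / fs)            # up-crossings per second
          return lcr / (np.sqrt(2 * np.pi) * rho * np.exp(-rho ** 2))

      fs, fd = 10_000.0, 80.0                         # sample rate, true Doppler
      env = np.abs(jakes_fading(fd, fs, 100_000))
      print(doppler_from_lcr(env, fs))                # ~80 Hz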

  4. Tuning target selection algorithms to improve galaxy redshift estimates

    Science.gov (United States)

    Hoyle, Ben; Paech, Kerstin; Rau, Markus Michael; Seitz, Stella; Weller, Jochen

    2016-06-01

    We showcase machine learning (ML) inspired target selection algorithms to determine which of all potential targets should be selected first for spectroscopic follow-up. Efficient target selection can improve the ML redshift uncertainties as calculated on an independent sample, while requiring less targets to be observed. We compare seven different ML targeting algorithms with the Sloan Digital Sky Survey (SDSS) target order, and with a random targeting algorithm. The ML inspired algorithms are constructed iteratively by estimating which of the remaining target galaxies will be most difficult for the ML methods to accurately estimate redshifts using the previously observed data. This is performed by predicting the expected redshift error and redshift offset (or bias) of all of the remaining target galaxies. We find that the predicted values of bias and error are accurate to better than 10-30 per cent of the true values, even with only limited training sample sizes. We construct a hypothetical follow-up survey and find that some of the ML targeting algorithms are able to obtain the same redshift predictive power with 2-3 times less observing time, as compared to that of the SDSS, or random, target selection algorithms. The reduction in the required follow-up resources could allow for a change to the follow-up strategy, for example by obtaining deeper spectroscopy, which could improve ML redshift estimates for deeper test data.

  5. IMPROVED ESTIMATION OF FIBER LENGTH FROM 3-DIMENSIONAL IMAGES

    Directory of Open Access Journals (Sweden)

    Joachim Ohser

    2013-03-01

    Full Text Available A new method is presented for estimating the specific fiber length from 3D images of macroscopically homogeneous fiber systems. The method is based on a discrete version of the Crofton formula, where local knowledge from 3x3x3-pixel configurations of the image data is exploited. It is shown that the relative error resulting from the discretization of the outer integral of the Crofton formula amounts to at most 1.2%. The method is simple to implement, and both its runtime and its memory requirements are low. The estimation is significantly improved by considering 3x3x3-pixel configurations instead of the 2x2x2 configurations already studied in the literature.
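
    The paper's discrete Crofton estimator with 3x3x3 configurations is not reproduced in the record. As a rough point of comparison only, the classical stereological estimator L_V = 2 Q_A (fiber length per unit volume equals twice the number of fiber-plane intersections per unit area, assuming an isotropic fiber system) can be sketched as follows; this is a simplification, not the method of the paper.

```python
import numpy as np
from scipy import ndimage

def fiber_length_density(binary_volume, voxel_size=1.0):
    """Classical stereology: L_V = 2 * Q_A, averaging the number of
    fiber cross-sections per unit slice area over all three axes."""
    q_a = []
    for axis in range(3):
        vol = np.moveaxis(binary_volume, axis, 0)
        area = vol.shape[1] * vol.shape[2] * voxel_size**2
        for sl in vol:
            _, n = ndimage.label(sl)      # connected fiber cross-sections
            q_a.append(n / area)
    return 2.0 * float(np.mean(q_a))      # length per unit volume

# Toy usage: one straight fiber of length 50 voxels in a 50^3 volume
vol = np.zeros((50, 50, 50), dtype=bool)
vol[:, 25, 25] = True
# True L_V = 50 / 50**3 = 4e-4; a single straight fiber violates the
# isotropy assumption, so expect a biased estimate here.
print(fiber_length_density(vol))
```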

  6. Tuning target selection algorithms to improve galaxy redshift estimates

    CERN Document Server

    Hoyle, Ben; Rau, Markus Michael; Seitz, Stella; Weller, Jochen

    2015-01-01

    We showcase machine learning (ML) inspired target selection algorithms to determine which of all potential targets should be selected first for spectroscopic follow-up. Efficient target selection can improve the ML redshift uncertainties as calculated on an independent sample, while requiring fewer targets to be observed. We compare the ML targeting algorithms with the Sloan Digital Sky Survey (SDSS) target order, and with a random targeting algorithm. The ML inspired algorithms are constructed iteratively by estimating which of the remaining target galaxies will be most difficult for the machine learning methods to accurately estimate redshifts using the previously observed data. This is performed by predicting the expected redshift error and redshift offset (or bias) of all of the remaining target galaxies. We find that the predicted values of bias and error are accurate to better than 10-30% of the true values, even with only limited training sample sizes. We construct a hypothetical follow-up survey and fi...

  7. Improving gravitational-wave parameter estimation using Gaussian process regression

    CERN Document Server

    Moore, Christopher J; Chua, Alvin J K; Gair, Jonathan R

    2015-01-01

    Folding uncertainty in theoretical models into Bayesian parameter estimation is necessary in order to make reliable inferences. A general means of achieving this is by marginalising over model uncertainty using a prior distribution constructed using Gaussian process regression (GPR). Here, we apply this technique to (simulated) gravitational-wave signals from binary black holes that could be observed using advanced-era gravitational-wave detectors. Unless properly accounted for, uncertainty in the gravitational-wave templates could be the dominant source of error in studies of these systems. We explain our approach in detail and provide proofs of various features of the method, including the limiting behaviour for high signal-to-noise, where systematic model uncertainties dominate over noise errors. We find that the marginalised likelihood constructed via GPR offers a significant improvement in parameter estimation over the standard, uncorrected likelihood. We also examine the dependence of the method on the ...
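
    A toy illustration of the marginalization the abstract describes, reduced to one parameter and one data point: a GPR is trained on the difference between an "accurate" and an "approximate" waveform model, and the likelihood then uses the GPR mean to correct the cheap template and the GPR variance to inflate the noise variance. All model functions and numbers below are made up for illustration; this is not the authors' gravitational-wave pipeline.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy 1-D "waveform" models: h_acc is the expensive accurate model,
# h_app a cheap approximation whose error we learn with GPR.
h_acc = lambda theta: np.sin(3 * theta)
h_app = lambda theta: np.sin(3 * theta) * (1 - 0.05 * theta)

theta_train = np.linspace(0, 2, 15)[:, None]
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.5))
gpr.fit(theta_train, h_acc(theta_train.ravel()) - h_app(theta_train.ravel()))

def log_marginalised_likelihood(theta, d, sigma_n=0.1):
    """Gaussian likelihood with the GPR mean correcting the template and
    the GPR variance added to the noise variance (single data point)."""
    mu, std = gpr.predict(np.array([[theta]]), return_std=True)
    resid = d - (h_app(theta) + mu[0])
    var = sigma_n**2 + std[0]**2
    return -0.5 * (resid**2 / var + np.log(2 * np.pi * var))

d_obs = h_acc(1.3) + 0.05            # one noisy observation at theta = 1.3
grid = np.linspace(0, 2, 201)
print(grid[np.argmax([log_marginalised_likelihood(t, d_obs) for t in grid])])
```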

  8. Motion estimation based on an improved block matching technique

    Institute of Scientific and Technical Information of China (English)

    Tangfei Tao; Chongzhao Han; Yanqi Wu; Xin Kang

    2006-01-01

    An improved block-matching algorithm for fast motion estimation is proposed. The matching criterion is the sum of absolute differences (SAD). The basic idea is to obtain the best estimate of the motion vectors by optimizing the search process: the time-consuming computation of the matching evaluation between the current block and an ineligible candidate block is terminated as early as possible, and as many search positions as possible are eliminated from the search area. The performance of this algorithm is evaluated by theoretical analysis and compared with the full search algorithm (FSA). The simulation results demonstrate that the computational load of this algorithm is much less than that of FSA, while the motion vectors obtained by this algorithm are identical to those of FSA.
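
    The record does not give the exact termination criterion; the sketch below shows the common partial-distortion-elimination idea it is consistent with: accumulate the SAD row by row and abandon a candidate as soon as its partial sum already exceeds the best match found so far. The window size and block layout are illustrative choices.

```python
import numpy as np

def sad_with_early_exit(cur, ref, best_so_far):
    """Row-by-row SAD; abort once the partial sum exceeds the best
    match found so far (partial distortion elimination)."""
    sad = 0
    for r in range(cur.shape[0]):
        sad += np.abs(cur[r].astype(int) - ref[r].astype(int)).sum()
        if sad >= best_so_far:
            return None                      # ineligible candidate, terminated early
    return sad

def block_match(cur_block, ref_frame, top, left, search=7):
    """Exhaustive search window with early termination inside the SAD
    evaluation; returns the motion vector of the best candidate."""
    n = cur_block.shape[0]
    best, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= ref_frame.shape[0] - n and 0 <= x <= ref_frame.shape[1] - n:
                s = sad_with_early_exit(cur_block, ref_frame[y:y+n, x:x+n], best)
                if s is not None and s < best:
                    best, best_mv = s, (dy, dx)
    return best_mv

# Toy usage: a 16x16 block at (16,16) whose content moved from (18,15)
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur = ref[18:34, 15:31]
print(block_match(cur, ref, 16, 16))         # expect (2, -1)
```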

  9. Reporting systems in gastrointestinal endoscopy: Requirements and standards facilitating quality improvement: European Society of Gastrointestinal Endoscopy position statement.

    Science.gov (United States)

    Bretthauer, Michael; Aabakken, Lars; Dekker, Evelien; Kaminski, Michal F; Rösch, Thomas; Hultcrantz, Rolf; Suchanek, Stepan; Jover, Rodrigo; Kuipers, Ernst J; Bisschops, Raf; Spada, Cristiano; Valori, Roland; Domagk, Dirk; Rees, Colin; Rutter, Matthew D

    2016-04-01

    To develop standards for high quality of gastrointestinal endoscopy, the European Society of Gastrointestinal Endoscopy (ESGE) has established the ESGE Quality Improvement Committee. A prerequisite for quality assurance and improvement for all gastrointestinal endoscopy procedures is state-of-the-art integrated digital reporting systems for standardized documentation of the procedures. The current paper describes the ESGE's viewpoints on requirements for high-quality endoscopy reporting systems. The following recommendations are issued: Endoscopy reporting systems must be electronic. Endoscopy reporting systems should be integrated into hospital patient record systems. Endoscopy reporting systems should include patient identifiers to facilitate data linkage to other data sources. Endoscopy reporting systems shall restrict the use of free text entry to a minimum, and be based mainly on structured data entry. Separate entry of data for quality or research purposes is discouraged; automatic data transfer for quality and research purposes must be facilitated. Double entry of data by the endoscopist or associate personnel is discouraged; available data from outside sources (administrative or medical) must be made available automatically. Endoscopy reporting systems shall enable the inclusion of information on histopathology of detected lesions, patient satisfaction, adverse events, and surveillance recommendations. Endoscopy reporting systems must facilitate easy data retrieval at any time in a universally compatible format. Endoscopy reporting systems must include data fields for key performance indicators as defined by quality improvement committees. Endoscopy reporting systems must facilitate changes in indicators and data entry fields as required by professional organizations.

  10. A Teacher Training Model for Improving Social Facilitation in the Inclusive Program

    Science.gov (United States)

    Robinson, Suzanne; Myck-Wayne, Janice

    2016-01-01

    The twofold purpose of this article is to highlight the importance of fostering social competence within inclusive preschool programs and to describe a model for training teachers in research-based social facilitation strategies so as to promote social interaction between children with and without disabilities. This model was developed to address…

  11. A Breakthrough for Josh: How Use of an iPad Facilitated Reading Improvement

    Science.gov (United States)

    McClanahan, Barbara; Williams, Kristen; Kennedy, Ed; Tate, Susan

    2012-01-01

    As part of a diagnosis and tutoring project in an elementary education reading course, a pre-service teacher was encouraged to use an iPad as the vehicle for intervention strategies with a fifth grade struggling reader with Attention Deficit Hyperactivity Disorder. The device not only helped the student focus attention, it facilitated his becoming…

  12. Improving the Accuracy of Estimation of Climate Extremes

    Science.gov (United States)

    Zolina, Olga; Detemmerman, Valery; Trenberth, Kevin E.

    2010-12-01

    Workshop on Metrics and Methodologies of Estimation of Extreme Climate Events; Paris, France, 27-29 September 2010; Climate projections point toward more frequent and intense weather and climate extremes such as heat waves, droughts, and floods, in a warmer climate. These projections, together with recent extreme climate events, including flooding in Pakistan and the heat wave and wildfires in Russia, highlight the need for improved risk assessments to help decision makers and the public. But accurate analysis and prediction of risk of extreme climate events require new methodologies and information from diverse disciplines. A recent workshop sponsored by the World Climate Research Programme (WCRP) and hosted at United Nations Educational, Scientific and Cultural Organization (UNESCO) headquarters in France brought together, for the first time, a unique mix of climatologists, statisticians, meteorologists, oceanographers, social scientists, and risk managers (such as those from insurance companies) who sought ways to improve scientists' ability to characterize and predict climate extremes in a changing climate.

  13. Reducing measurement scale mismatch to improve surface energy flux estimation

    Science.gov (United States)

    Iwema, Joost; Rosolem, Rafael; Rahman, Mostaquimur; Blyth, Eleanor; Wagener, Thorsten

    2016-04-01

    Soil moisture importantly controls land surface processes such as energy and water partitioning. A good understanding of these controls is needed especially when recognizing the challenges in providing accurate hyper-resolution hydrometeorological simulations at sub-kilometre scales. Soil moisture controlling factors can, however, differ at distinct scales. In addition, some parameters in land surface models are still often prescribed based on observations obtained at another scale not necessarily employed by such models (e.g., soil properties obtained from lab samples used in regional simulations). To minimize such effects, parameters can be constrained with local data from Eddy-Covariance (EC) towers (i.e., latent and sensible heat fluxes) and Point Scale (PS) soil moisture observations (e.g., TDR). However, the measurement scales represented by EC and PS still differ substantially. Here we use the fact that Cosmic-Ray Neutron Sensors (CRNS) estimate soil moisture at a horizontal footprint similar to that of EC fluxes to help answer the following question: Does reduced observation scale mismatch yield better soil moisture - surface flux representation in land surface models? To answer this question we analysed soil moisture and surface flux measurements from twelve COSMOS-Ameriflux sites in the USA characterized by distinct climate, soils and vegetation types. We calibrated model parameters of the Joint UK Land Environment Simulator (JULES) against PS and CRNS soil moisture data, respectively. We analysed the improvement in soil moisture estimation compared to uncalibrated model simulations and then evaluated the degree of improvement in surface fluxes before and after the calibration experiments. Preliminary results suggest that a more accurate representation of soil moisture dynamics is achieved when calibrating against observed soil moisture, and further improvement is obtained with CRNS relative to PS. However, our results also suggest that a more accurate

  14. Laser photogrammetry improves size and demographic estimates for whale sharks

    Science.gov (United States)

    Richardson, Anthony J.; Prebble, Clare E.M.; Marshall, Andrea D.; Bennett, Michael B.; Weeks, Scarla J.; Cliff, Geremy; Wintner, Sabine P.; Pierce, Simon J.

    2015-01-01

    Whale sharks Rhincodon typus are globally threatened, but a lack of biological and demographic information hampers an accurate assessment of their vulnerability to further decline or capacity to recover. We used laser photogrammetry at two aggregation sites to obtain more accurate size estimates of free-swimming whale sharks compared to visual estimates, allowing improved estimates of biological parameters. Individual whale sharks ranged from 432–917 cm total length (TL) (mean ± SD = 673 ± 118.8 cm, N = 122) in southern Mozambique and from 420–990 cm TL (mean ± SD = 641 ± 133 cm, N = 46) in Tanzania. By combining measurements of stranded individuals with photogrammetry measurements of free-swimming sharks, we calculated length at 50% maturity for males in Mozambique at 916 cm TL. Repeat measurements of individual whale sharks measured over periods from 347–1,068 days yielded implausible growth rates, suggesting that the growth increment over this period was not large enough to be detected using laser photogrammetry, and that the method is best applied to estimating growth rates over longer (decadal) time periods. The sex ratio of both populations was biased towards males (74% in Mozambique, 89% in Tanzania), the majority of which were immature (98% in Mozambique, 94% in Tanzania). The population structure for these two aggregations was similar to most other documented whale shark aggregations around the world. Information on small (<400 cm) whale sharks, mature individuals, and females in this region is lacking, but necessary to inform conservation initiatives for this globally threatened species. PMID:25870776

  15. Laser photogrammetry improves size and demographic estimates for whale sharks

    Directory of Open Access Journals (Sweden)

    Christoph A. Rohner

    2015-04-01

    Full Text Available Whale sharks Rhincodon typus are globally threatened, but a lack of biological and demographic information hampers an accurate assessment of their vulnerability to further decline or capacity to recover. We used laser photogrammetry at two aggregation sites to obtain more accurate size estimates of free-swimming whale sharks compared to visual estimates, allowing improved estimates of biological parameters. Individual whale sharks ranged from 432–917 cm total length (TL) (mean ± SD = 673 ± 118.8 cm, N = 122) in southern Mozambique and from 420–990 cm TL (mean ± SD = 641 ± 133 cm, N = 46) in Tanzania. By combining measurements of stranded individuals with photogrammetry measurements of free-swimming sharks, we calculated length at 50% maturity for males in Mozambique at 916 cm TL. Repeat measurements of individual whale sharks measured over periods from 347–1,068 days yielded implausible growth rates, suggesting that the growth increment over this period was not large enough to be detected using laser photogrammetry, and that the method is best applied to estimating growth rates over longer (decadal) time periods. The sex ratio of both populations was biased towards males (74% in Mozambique, 89% in Tanzania), the majority of which were immature (98% in Mozambique, 94% in Tanzania). The population structure for these two aggregations was similar to most other documented whale shark aggregations around the world. Information on small (<400 cm) whale sharks, mature individuals, and females in this region is lacking, but necessary to inform conservation initiatives for this globally threatened species.

  16. Laser photogrammetry improves size and demographic estimates for whale sharks.

    Science.gov (United States)

    Rohner, Christoph A; Richardson, Anthony J; Prebble, Clare E M; Marshall, Andrea D; Bennett, Michael B; Weeks, Scarla J; Cliff, Geremy; Wintner, Sabine P; Pierce, Simon J

    2015-01-01

    Whale sharks Rhincodon typus are globally threatened, but a lack of biological and demographic information hampers an accurate assessment of their vulnerability to further decline or capacity to recover. We used laser photogrammetry at two aggregation sites to obtain more accurate size estimates of free-swimming whale sharks compared to visual estimates, allowing improved estimates of biological parameters. Individual whale sharks ranged from 432-917 cm total length (TL) (mean ± SD = 673 ± 118.8 cm, N = 122) in southern Mozambique and from 420-990 cm TL (mean ± SD = 641 ± 133 cm, N = 46) in Tanzania. By combining measurements of stranded individuals with photogrammetry measurements of free-swimming sharks, we calculated length at 50% maturity for males in Mozambique at 916 cm TL. Repeat measurements of individual whale sharks measured over periods from 347-1,068 days yielded implausible growth rates, suggesting that the growth increment over this period was not large enough to be detected using laser photogrammetry, and that the method is best applied to estimating growth rates over longer (decadal) time periods. The sex ratio of both populations was biased towards males (74% in Mozambique, 89% in Tanzania), the majority of which were immature (98% in Mozambique, 94% in Tanzania). The population structure for these two aggregations was similar to most other documented whale shark aggregations around the world. Information on small (<400 cm) whale sharks, mature individuals, and females in this region is lacking, but necessary to inform conservation initiatives for this globally threatened species.

  17. Towards Improved Snow Water Equivalent Estimation via GRACE Assimilation

    Science.gov (United States)

    Forman, Bart; Reichle, Rofl; Rodell, Matt

    2011-01-01

    Passive microwave (e.g. AMSR-E) and visible spectrum (e.g. MODIS) measurements of snow states have been used in conjunction with land surface models to better characterize snow pack states, most notably snow water equivalent (SWE). However, both types of measurements have limitations. AMSR-E, for example, suffers a loss of information in deep/wet snow packs. Similarly, MODIS suffers a loss of temporal correlation information beyond the initial accumulation and final ablation phases of the snow season. Gravimetric measurements, on the other hand, do not suffer from these limitations. In this study, gravimetric measurements from the Gravity Recovery and Climate Experiment (GRACE) mission are used in a land surface model data assimilation (DA) framework to better characterize SWE in the Mackenzie River basin located in northern Canada. Comparisons are made against independent, ground-based SWE observations, state-of-the-art modeled SWE estimates, and independent, ground-based river discharge observations. Preliminary results suggest improved SWE estimates, including improved timing of the subsequent ablation and runoff of the snow pack. Additionally, use of the DA procedure can add vertical and horizontal resolution to the coarse-scale GRACE measurements as well as effectively downscale the measurements in time. Such findings offer the potential for better understanding of the hydrologic cycle in snow-dominated basins located in remote regions of the globe where ground-based observation collection is difficult, if not impossible. This information could ultimately lead to improved freshwater resource management in communities dependent on snow melt as well as a reduction in the uncertainty of river discharge into the Arctic Ocean.

  18. Improved PPP ambiguity resolution by COES FCB estimation

    Science.gov (United States)

    Li, Yihe; Gao, Yang; Shi, Junbo

    2016-05-01

    Precise point positioning (PPP) integer ambiguity resolution is able to significantly improve the positioning accuracy with the correction of fractional cycle biases (FCBs), by shortening the time to first fix (TTFF) of ambiguities. When satellite orbit products are adopted to estimate the satellite FCB corrections, the narrow-lane (NL) FCB corrections will be contaminated by the orbit's line-of-sight (LOS) errors, which subsequently affect ambiguity resolution (AR) performance, as well as positioning accuracy. To effectively separate orbit errors from satellite FCBs, we propose a cascaded orbit error separation (COES) method for the PPP implementation. Instead of using only one direction-independent component as in previous studies, the improved satellite NL FCB corrections are modeled by one direction-independent component and three direction-dependent components per satellite in this study. More specifically, the direction-independent component assimilates actual FCBs, whereas the direction-dependent components assimilate the orbit errors. To evaluate the performance of the proposed method, GPS measurements from a regional and a global network are processed with the IGS Real-Time Service (RTS), IGS rapid (IGR) products and predicted orbits with >10 cm 3D root mean square (RMS) error. The improvements by the proposed FCB estimation method are validated in terms of ambiguity fractions after applying FCB corrections and positioning accuracy. The numerical results confirm that the FCBs obtained using the proposed method outperform those from the conventional method. The RMS of ambiguity fractions after applying FCB corrections is reduced by 13.2 %. The position RMSs in north, east and up directions are reduced by 30.0, 32.0 and 22.0 % on average.
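
    The COES decomposition is described only qualitatively in the record. Schematically, each satellite's NL fractional bias can be modeled as one direction-independent term plus the projection of three directional terms onto the station line of sight, and solved by least squares; the toy formulation below is an assumed illustration of that structure, not the authors' estimator.

```python
import numpy as np

def estimate_fcb(fractions, los_unit_vectors):
    """Fit frac_i = b + u_i . d for one satellite, where b absorbs the
    actual FCB and d = (dx, dy, dz) absorbs line-of-sight orbit error.
    fractions: (n,) fractional parts of float NL ambiguities (cycles)
    los_unit_vectors: (n, 3) receiver-to-satellite unit vectors."""
    A = np.hstack([np.ones((len(fractions), 1)), los_unit_vectors])
    x, *_ = np.linalg.lstsq(A, fractions, rcond=None)
    return x[0], x[1:]          # direction-independent FCB, directional terms

# Toy usage: synthetic fractions from b = 0.31 cycles plus an orbit error
rng = np.random.default_rng(2)
u = rng.normal(size=(40, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
frac = 0.31 + u @ np.array([0.02, -0.01, 0.03]) + 0.005 * rng.standard_normal(40)
b, d = estimate_fcb(frac, u)
print(round(b, 3), np.round(d, 3))
```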

  19. Collective learning, change and improvement in health care: trialling a facilitated learning initiative with general practice teams.

    Science.gov (United States)

    Bunniss, Suzanne; Gray, Francesca; Kelly, Diane

    2012-06-01

    Many patients, families, health care professionals and politicians desire quality improvement within the UK National Health Service. One way to achieve this change is for health care teams to work and learn together more effectively. This research aimed to design and trial a facilitated learning practice programme (LPP) to support general practice teams in fostering the characteristics of learning organizations. This is an action research study. Qualitative data were captured during and after the trial from 40 participants in two multi-professional general practice teams within different Scottish health boards. Data were gathered using observations, semi-structured interviews and written learning notes. Taking part in the LPP was a positive experience of learning together as a practice, and enhanced communication within the team was a particular outcome. External facilitation helped provide focus and reduce inter-professional barriers. Teams found working in small, mixed-role discussion groups particularly valuable in understanding each other's perspectives. The active learning style of the LPP could be daunting at times, but teams valued the chance to identify their own quality improvement goals. Teams introduced a number of changes to improve the quality of care within their practice as a result of their participation. This trial of the learning practice programme shows that, with facilitation and the appropriate input of resources, general practice teams can successfully apply learning organization principles to produce quality improvement outcomes. The study also demonstrates the value of action research in researching iterative change over time. © 2011 Blackwell Publishing Ltd.

  20. Improved Estimates of Air Pollutant Emissions from Biorefinery

    Energy Technology Data Exchange (ETDEWEB)

    Tan, Eric C. D.

    2015-11-13

    We have attempted to use a detailed kinetic modeling approach for improved estimation of combustion air pollutant emissions from a biorefinery. We have developed a preliminary detailed reaction mechanism for biomass combustion. Lignin is the only biomass component included in the current mechanism, and methane is used as the biogas surrogate. The model is capable of predicting the combustion emissions of greenhouse gases (CO2, N2O, CH4) and criteria air pollutants (NO, NO2, CO). The results are yet to be compared with experimental data. The current model is still in its early stages of development. Given the acknowledged complexity of biomass oxidation, as well as of the components in the feed to the combustor, the modeling approach and the chemistry set discussed here may undergo revision, extension, and further validation in the future.

  1. IMPROVING PROJECT SCHEDULE ESTIMATES USING HISTORICAL DATA AND SIMULATION

    Directory of Open Access Journals (Sweden)

    P.H. Meyer

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: Many projects are not completed on time or within the original budget. This is caused by uncertainty in project variables as well as the occurrence of risk events. A study was done to determine ways of measuring the risk in development projects executed by a mining company in South Africa. The main objective of the study was to determine whether historical project data would provide a more accurate means of estimating the total project duration. Original estimates and actual completion times for tasks of a number of projects were analysed and compared. The results of the study indicated that a more accurate total duration for a project could be obtained by making use of historical project data. The accuracy of estimates could be improved further by building a comprehensive project schedule database within a specific industry.

    AFRIKAANSE OPSOMMING (English translation): Many projects are not completed within the original schedule or budget. This is often caused by uncertainty about project variables and the occurrence of risks. A study was done to develop a method of measuring risk for development projects of a mining company in South Africa. The main goal of the study was to determine whether historical project data could be used to estimate a more accurate duration for a project. The estimated durations of tasks for a number of projects were analysed and compared with the actual durations. The results of the study showed that a more accurate total duration for the project could be obtained by making use of historical project data. The accuracy can be improved further by developing and maintaining a database of project schedules for a specific industry.
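
    A minimal sketch of the kind of approach the abstract describes: sample task durations from the historical distribution of actual-to-estimated ratios and simulate the total project duration. The serial task structure, the pooled-ratio model, and all numbers are illustrative assumptions, not the study's data.

```python
import numpy as np

def simulate_total_duration(estimates, historical_ratios, n_sims=10_000, seed=0):
    """Monte Carlo total-duration estimate for a serial chain of tasks.
    estimates: planned task durations (days)
    historical_ratios: observed actual/estimated ratios from past projects."""
    rng = np.random.default_rng(seed)
    ratios = rng.choice(historical_ratios, size=(n_sims, len(estimates)))
    totals = (ratios * np.asarray(estimates)).sum(axis=1)
    return np.percentile(totals, [50, 80, 95])   # P50/P80/P95 completion times

# Toy usage: five tasks, ratios pooled from earlier projects
hist = np.array([0.9, 1.0, 1.1, 1.2, 1.3, 1.6, 2.0])   # actual / estimate
print(simulate_total_duration([10, 5, 20, 8, 12], hist))
```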

  2. Development of an Automated Bone Mineral Density Software Application: Facilitation Radiologic Reporting and Improvement of Accuracy.

    Science.gov (United States)

    Tsai, I-Ta; Tsai, Meng-Yuan; Wu, Ming-Ting; Chen, Clement Kuen-Huang

    2016-06-01

    The conventional method of bone mineral density (BMD) report production by dictation and transcription is time consuming and prone to error. We developed an automated BMD reporting system based on the raw data from a dual energy X-ray absorptiometry (DXA) scanner to facilitate report generation. The automated BMD reporting system, a web application, digests the DXA's raw data and automatically generates preliminary reports. In Jan. 2014, 500 examinations were randomized into an automatic group (AG) and a manual group (MG), and the speed of report generation was compared. For evaluation of the accuracy and analysis of errors, 5120 examinations during Jan. 2013 and Dec. 2013 were enrolled retrospectively, and the content of the automatically generated reports (AR) was compared with the formal manual reports (MR). The average time spent for report generation in AG and in MG was 264 and 1452 s, respectively (p < 0.001). The concordance of T- and Z-scores in AR is 100 %. The overall accuracy of AR and MR is 98.8 and 93.7 %, respectively (p < 0.001). The mis-categorization rate in AR and MR is 0.039 and 0.273 %, respectively (p = 0.0013). Errors occurring in AR can be grouped into key-in errors by technicians and cases needing additional judgement. We constructed an efficient and reliable automated BMD reporting system. It facilitates current clinical service and potentially prevents human errors by technicians, transcriptionists, and radiologists.
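
    The categorization step that such automated reports must get right follows the standard WHO T-score criteria. The thresholds below are the WHO definitions; the function itself is an illustrative sketch, not the authors' application code.

```python
def who_category(t_score: float) -> str:
    """WHO classification of bone status from a DXA T-score."""
    if t_score >= -1.0:
        return "normal"
    if t_score > -2.5:
        return "osteopenia"          # low bone mass
    return "osteoporosis"

for t in (0.3, -1.7, -2.5, -3.1):
    print(t, who_category(t))
```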

  3. A robust and accurate center-frequency estimation (RACE) algorithm for improving motion estimation performance of SinMod on tagged cardiac MR images without known tagging parameters

    Science.gov (United States)

    Liu, Hong; Wang, Jie; Xu, Xiangyang; Song, Enmin; Wang, Qian; Jin, Renchao; Hung, Chih-Cheng; Fei, Baowei

    2015-01-01

    A robust and accurate center-frequency (CF) estimation (RACE) algorithm for improving the performance of the local sine-wave modeling (SinMod) method, which is a good motion estimation method for tagged cardiac magnetic resonance (MR) images, is proposed in this study. The RACE algorithm can automatically, effectively and efficiently produce a very appropriate CF estimate for the SinMod method, under the circumstance that the specified tagging parameters are unknown, on account of the following two key techniques: (1) the well-known mean-shift algorithm, which can provide accurate and rapid CF estimation; and (2) an original two-direction-combination strategy, which can further enhance the accuracy and robustness of CF estimation. Several other available CF estimation algorithms are included for comparison. Several validation approaches that can work on real data without ground truth are specially designed. Experimental results on in vivo human cardiac data demonstrate the significance of accurate CF estimation for SinMod, and validate the effectiveness of RACE in improving the motion estimation performance of SinMod. PMID:25087857
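
    The record names the mean-shift algorithm as the engine of the CF estimate. A generic 1-D weighted mean shift over a magnitude spectrum might look like the following; the Gaussian kernel, the bandwidth, and the use of FFT power as sample weights are illustrative assumptions, and the paper's two-direction-combination strategy is not reproduced here.

```python
import numpy as np

def mean_shift_peak(freqs, weights, start, bandwidth, iters=50, tol=1e-6):
    """1-D weighted mean shift with a Gaussian kernel: repeatedly move
    the estimate to the weighted local mean of nearby frequencies."""
    x = float(start)
    for _ in range(iters):
        k = weights * np.exp(-0.5 * ((freqs - x) / bandwidth) ** 2)
        x_new = float(np.sum(k * freqs) / np.sum(k))
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

# Toy usage: find the dominant spatial frequency of a 1-D tag-like signal
n = 256
t = np.arange(n)
signal = np.sin(2 * np.pi * 0.07 * t) + 0.2 * np.random.default_rng(3).standard_normal(n)
spec = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(n)
print(mean_shift_peak(freqs, spec, start=0.05, bandwidth=0.02))  # ~0.07
```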

  4. Reporting systems in gastrointestinal endoscopy: Requirements and standards facilitating quality improvement: European Society of Gastrointestinal Endoscopy position statement

    Science.gov (United States)

    Aabakken, Lars; Dekker, Evelien; Kaminski, Michal F; Rösch, Thomas; Hultcrantz, Rolf; Suchanek, Stepan; Jover, Rodrigo; Kuipers, Ernst J; Bisschops, Raf; Spada, Cristiano; Valori, Roland; Domagk, Dirk; Rees, Colin; Rutter, Matthew D

    2016-01-01

    To develop standards for high quality of gastrointestinal endoscopy, the European Society of Gastrointestinal Endoscopy (ESGE) has established the ESGE Quality Improvement Committee. A prerequisite for quality assurance and improvement for all gastrointestinal endoscopy procedures is state-of-the-art integrated digital reporting systems for standardized documentation of the procedures. The current paper describes the ESGE's viewpoints on requirements for high-quality endoscopy reporting systems. The following recommendations are issued: Endoscopy reporting systems must be electronic. Endoscopy reporting systems should be integrated into hospital patient record systems. Endoscopy reporting systems should include patient identifiers to facilitate data linkage to other data sources. Endoscopy reporting systems shall restrict the use of free text entry to a minimum, and be based mainly on structured data entry. Separate entry of data for quality or research purposes is discouraged; automatic data transfer for quality and research purposes must be facilitated. Double entry of data by the endoscopist or associate personnel is discouraged; available data from outside sources (administrative or medical) must be made available automatically. Endoscopy reporting systems shall enable the inclusion of information on histopathology of detected lesions, patient satisfaction, adverse events, and surveillance recommendations. Endoscopy reporting systems must facilitate easy data retrieval at any time in a universally compatible format. Endoscopy reporting systems must include data fields for key performance indicators as defined by quality improvement committees. Endoscopy reporting systems must facilitate changes in indicators and data entry fields as required by professional organizations. PMID:27087943

  5. Can sensory attention focused exercise facilitate the utilization of proprioception for improved balance control in PD?

    Science.gov (United States)

    Lefaivre, Shannon C; Almeida, Quincy J

    2015-02-01

    Impaired sensory processing in Parkinson's disease (PD) has been argued to contribute to balance deficits. Exercises aimed at improving sensory feedback and body awareness have the potential to ameliorate balance deficits in PD. Recently, PD SAFEx™, a sensory and attention focused rehabilitation program, has been shown to improve motor deficits in PD, although balance control has never been evaluated. The objective of this study was to measure the effects of PD SAFEx™ on balance control in PD. Twenty-one participants with mild to moderate idiopathic PD completed 12 weeks of PD SAFEx™ training (three times/week) in a group setting. Prior to training, participants completed a pre-assessment evaluating balance in accordance with an objective, computerized test of balance (modified clinical test of sensory integration and balance (m-CTSIB) and postural stability testing (PST)) protocols. The m-CTSIB was our primary outcome measure, which allowed assessment of balance in both eyes open and closed conditions, thus enabling evaluation of specific sensory contributions to balance improvement. At post-test, a significant interaction between time of assessment and vision condition (p=.014) demonstrated that all participants significantly improved balance control, specifically when eyes were closed. Balance control did not change from pre to post with eyes open. These results provide evidence that PD SAFEx™ is effective at improving the ability to utilize proprioceptive information, resulting in improved balance control in the absence of vision. Enhancing the ability to utilize proprioception for individuals with PD is an important intermediary to improving balance deficits.

  6. Developing a performance data suite to facilitate lean improvement in a chemotherapy day unit.

    Science.gov (United States)

    Lingaratnam, Senthil; Murray, Danielle; Carle, Amber; Kirsa, Sue W; Paterson, Rebecca; Rischin, Danny

    2013-07-01

    A multidisciplinary team from the Peter MacCallum Cancer Centre in Melbourne, Australia, developed a performance data suite to support a service improvement project based on lean manufacturing principles in its 19-chair chemotherapy day unit (CDU) and cytosuite chemotherapy production facility. The aims of the project were to reduce patient wait time and improve equity of access to the CDU. A project team consisting of a pharmacist and a CDU nurse supported the management team for 10 months in engaging staff and customers to identify waste in processes, analyze root causes, eliminate non-value-adding steps, reduce variation, and level workloads to improve quality and flow. Process mapping, staff and patient tracking and opinion surveys, medical record audits, and interrogation of electronic treatment records were undertaken. This project delivered a 38% reduction in median wait time on the day (from 32 to 20 minutes; P < .001). The lean improvement methodology provided a robust framework for improved understanding and management of complex system constraints within a CDU, resulting in improved access to treatment and reduced waiting times on the day.

  7. An Improved Estimator For Black-Scholes-Merton Implied Volatility

    NARCIS (Netherlands)

    W.G.P.M. Hallerbach (Winfried)

    2004-01-01

    We derive an estimator for Black-Scholes-Merton implied volatility that, when compared to the familiar Corrado & Miller [JBaF, 1996] estimator, has substantially higher approximation accuracy and extends over a wider region of moneyness.
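
    For context, the Corrado & Miller [JBaF, 1996] quadratic approximation that this paper improves upon can be sketched as below (S: spot, X: present value of the strike, C: call price, T: maturity). The formula is quoted from the literature from memory and should be checked against the original before use; the paper's own improved estimator is not reproduced in the record.

```python
import math

def corrado_miller_iv(C, S, X, T):
    """Corrado-Miller closed-form approximation to Black-Scholes implied
    volatility (X should be the discounted strike, X = K * exp(-r*T))."""
    m = C - (S - X) / 2.0                       # moneyness-adjusted price
    inner = m * m - (S - X) ** 2 / math.pi
    inner = max(inner, 0.0)                     # guard: can go negative away from ATM
    sigma_sqrt_T = math.sqrt(2.0 * math.pi) / (S + X) * (m + math.sqrt(inner))
    return sigma_sqrt_T / math.sqrt(T)

# Toy usage: a near-the-money call
print(corrado_miller_iv(C=4.0, S=100.0, X=98.0, T=0.5))   # roughly 0.10
```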

  8. Variance Clustering Improved Dynamic Conditional Correlation MGARCH Estimators

    OpenAIRE

    Gian Piero Aielli; Massimiliano Caporin

    2011-01-01

    It is well-known that the estimated GARCH dynamics exhibit common patterns. Starting from this fact we extend the Dynamic Conditional Correlation (DCC) model by allowing for a clustering structure of the univariate GARCH parameters. The model can be estimated in two steps, the first devoted to the clustering structure, and the second focusing on correlation parameters. Differently from the traditional two-step DCC estimation, we get large system feasibility of the joint estimation of the wh...

  9. Improved injection needles facilitate germline transformation of the buckeye butterfly Junonia coenia.

    Science.gov (United States)

    Beaudette, Kahlia; Hughes, Tia M; Marcus, Jeffrey M

    2014-01-01

    Germline transformation with transposon vectors is an important tool for insect genetics, but progress in developing transformation protocols for butterflies has been limited by high post-injection ova mortality. Here we present an improved glass injection needle design for injecting butterfly ova that increases survival in three Nymphalid butterfly species. Using the needles to genetically transform the common buckeye butterfly Junonia coenia, the hatch rate for injected Junonia ova was 21.7%, the transformation rate was 3%, and the overall experimental efficiency was 0.327%, a substantial improvement over previous results in other butterfly species. Improved needle design and a higher efficiency of transformation should permit the deployment of transposon-based genetic tools in a broad range of less fecund lepidopteran species.

  10. Estimation of air quality improvement at road and street intersections

    Energy Technology Data Exchange (ETDEWEB)

    Hoeglund, P.G. [Royal Inst. of Technology, Stockholm (Sweden). Traffic and Transport Planning

    1995-12-31

    It has always been very difficult to quantify the detrimental exhaust pollution related to traffic flow, especially at road and street intersections. Until now, model calculations have been developed mainly for the links between intersections. To remedy this situation, the author has developed a method of estimating emissions from motor vehicles at intersections on the micro level, as an aid to infrastructure design for improved environmental conditions. Very little is known about deceleration and acceleration patterns at road and street intersections, and few surveys have been done, in Sweden or elsewhere. Evidently, knowledge of deceleration and acceleration behaviour on the micro level has until now not been given priority. Traffic safety research has described drivers' deceleration and acceleration behaviour and vehicles' braking performance, but those results give deceleration data for extreme situations and are not useful for describing normal decelerations and accelerations at road and street intersections. Environment-related problems in traffic flow analysis now accentuate the need to study deceleration and acceleration behaviour in combination with alternative designs of road and street infrastructure. Vehicles differ greatly in the amount of exhaust pollution they emit while passing an intersection, depending on their speed levels and the associated deceleration and acceleration levels. (author)

  11. Data Mining to Facilitate Effective User Navigation and Improve Structure of a Website

    Directory of Open Access Journals (Sweden)

    Shweta Mohod,

    2014-08-01

    Full Text Available Nowadays it is very important for a company to have an active presence on the web in order to succeed in the electronic market. This requirement is fulfilled by an interactive company website. A website can be used to sell goods, maintain customer relationships, and promote products; by looking at customers' responses to the website, campaigns for future products and services can be planned. For this, the website should be interactive and should be used in the way the designer intended, so it is very important to study the navigational behaviour of customers. By analyzing the web logs, which record the navigational activity of customers, we can extract sequential patterns that help us determine whether users are pursuing the site's goals. Sequences are analyzed with WUM (Web Utilization Miner), which produces g-sequences (generalized sequences) and an aggregate tree as output. On the basis of the structure given by WUM, it is decided whether the site structure needs improvement. The improvements suggested should be minimal, while the site should still satisfy the business goal; a mathematical programming model is used to suggest minimal improvements to the site, and improvements that substantially change the structure of the site are avoided, as they may confuse existing users.

  12. A Qualitative Study Exploring Facilitators for Improved Health Behaviors and Health Behavior Programs: Mental Health Service Users’ Perspectives

    Directory of Open Access Journals (Sweden)

    Candida Graham

    2014-01-01

    Full Text Available Objective. Mental health service users experience high rates of cardiometabolic disorders and have a 20–25% shorter life expectancy than the general population from such disorders. Clinician-led health behavior programs have shown moderate improvements, for mental health service users, in managing aspects of cardiometabolic disorders. This study sought to potentially enhance health initiatives by exploring (1) facilitators that help mental health service users engage in better health behaviors and (2) the types of health programs mental health service users want to develop. Methods. A qualitative study utilizing focus groups was conducted with 37 mental health service users attending a psychosocial rehabilitation center in Northern British Columbia, Canada. Results. Four major facilitator themes were identified: (1) factors of empowerment, self-value, and personal growth; (2) the need for social support; (3) pragmatic aspects of motivation and planning; and (4) access. Participants believed that engaging with programs of physical activity, nutrition, creativity, and illness support would motivate them to live more healthily. Conclusions and Implications for Practice. Being able to contribute to health behavior programs, feeling valued and able to experience personal growth are vital factors to engage mental health service users in health programs. Clinicians and health care policy makers need to account for these considerations to improve the success of health improvement initiatives for this population.

  13. Improving Video Game Development: Facilitating Heterogeneous Team Collaboration through Flexible Software Processes

    Science.gov (United States)

    Musil, Juergen; Schweda, Angelika; Winkler, Dietmar; Biffl, Stefan

    Based on our observations of Austrian video game software development (VGSD) practices we identified a lack of systematic process/method support and inefficient collaboration between the various disciplines involved, i.e. engineers and artists. VGSD includes heterogeneous disciplines, e.g. creative arts, game/content design, and software. Nevertheless, improving team collaboration and process support is an ongoing challenge in enabling a comprehensive view on game development projects. Lessons learned from software engineering practices can help game developers to improve game development processes within a heterogeneous environment. Based on a state-of-the-practice survey in the Austrian games industry, this paper (a) presents first results with a focus on process/method support and (b) suggests a candidate flexible process approach based on Scrum to improve VGSD and team collaboration. Results showed (a) a trend towards highly flexible software processes involving various disciplines and (b) that the suggested flexible process approach was identified as feasible and useful for project application.

  14. Zero-tension lysimeters: An improved design to monitor colloid-facilitated contaminant transport in the vadose zone

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, M.L.; Scharf, R.L.; Shang, C.

    1995-04-24

    There is increasing evidence that mobile colloids facilitate the long-distance transport of contaminants. The mobility of fine particles and macromolecules has been linked to the movement of actinides, organic contaminants, and heavy metals through soil. Direct evidence for colloid mobility includes the presence of humic materials in deep aquifers as well as coatings of accumulated clay, organic matter, or sesquioxides on particle or aggregate surfaces in subsoil horizons of many soils. The potential for colloid-facilitated transport of contaminants from hazardous-waste sites requires adequate monitoring before, during, and after in-situ remediation treatments. Zero-tension lysimeters (ZTLs) are especially appropriate for sampling water as it moves through saturated soil, although some unsaturated flow events may be sampled as well. Because no ceramic barrier or fiberglass wick is involved to maintain tension on the water (as is the case with other lysimeters), particles suspended in the water as well as dissolved species may be sampled with ZTLs. In this report, a ZTL design is proposed that is more suitable for monitoring colloid-facilitated contaminant migration. The improved design consists of a cylinder made of polycarbonate or polytetrafluoroethylene (PTFE) that is placed below undisturbed soil material. In many soils, a hydraulically powered tube may be used to extract an undisturbed core of soil before placement of the lysimeter. In those cases, the design has significant advantages over conventional designs with respect to simplicity and speed of installation. Therefore, it will allow colloid-facilitated transport of contaminants to be monitored at more locations at a given site.

  15. Improved Rosetta Pedotransfer Estimation of Hydraulic Properties and Their Covariance

    Science.gov (United States)

    Zhang, Y.; Schaap, M. G.

    2014-12-01

    Quantitative knowledge of the soil hydraulic properties is necessary for most studies involving water flow and solute transport in the vadose zone. However, it is expensive, difficult, and time consuming to measure hydraulic properties directly. Pedotransfer functions (PTFs) have been widely used to forecast soil hydraulic parameters. Rosetta is one of many PTFs and is based on artificial neural network analysis coupled with the bootstrap sampling method. The model provides hierarchical PTFs for different levels of input data (H1-H5 models, with higher order models requiring more input variables). The original Rosetta model consists of separate PTFs for the four "van Genuchten" (VG) water retention parameters and saturated hydraulic conductivity (Ks) because different numbers of samples were available for these characteristics. In this study, we present an improved Rosetta pedotransfer function that uses a single model for all five parameters combined; these parameters are weighted for each sample individually using the covariance matrix obtained from the curve-fit of the VG parameters to the primary data. The optimal number of hidden nodes, the weights for saturated hydraulic conductivity and the water retention parameters in the neural network, and the number of bootstrap realizations were selected. Results show that the root mean square error (RMSE) for water retention decreased from 0.076 to 0.072 cm3/cm3 for the H2 model and from 0.044 to 0.039 cm3/cm3 for the H5 model. Mean errors, which indicate matric-potential-dependent bias, were also reduced significantly in the new model. The RMSE for Ks increased slightly (H2: 0.717 to 0.722; H5: 0.581 to 0.594); this increase is minimal and a result of using a single model for water retention and Ks. Despite this small increase the new model is recommended because of its improved estimation of water retention, and because it is now possible to calculate the full covariance matrix of soil water retention
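
    A schematic of a bootstrap neural-network pedotransfer function in the spirit of Rosetta: one multi-output network predicting all retention parameters and log10(Ks) together, with bootstrap replicates providing an uncertainty estimate. The input/output choices, network size, synthetic data, and the absence of the per-sample covariance weighting described above are simplifications; this is not the Rosetta code.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
# Placeholder data: inputs = sand, silt, clay fractions and bulk density;
# outputs = [theta_r, theta_s, log10(alpha), log10(n), log10(Ks)]
X = rng.uniform(size=(300, 4))
Y = np.column_stack([0.05 + 0.1 * X[:, 3], 0.35 + 0.2 * X[:, 2],
                     -2 + X[:, 0], 0.1 + 0.3 * X[:, 1], 1 - 2 * X[:, 2]])
Y += 0.02 * rng.standard_normal(Y.shape)

def bootstrap_ptf(X, Y, n_boot=20):
    """Train one multi-output network per bootstrap resample."""
    models = []
    for b in range(n_boot):
        idx = rng.integers(0, len(X), len(X))
        net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=b)
        net.fit(X[idx], Y[idx])
        models.append(net)
    return models

models = bootstrap_ptf(X, Y)
x_new = np.array([[0.4, 0.4, 0.2, 0.5]])
preds = np.array([m.predict(x_new)[0] for m in models])
print(preds.mean(axis=0), preds.std(axis=0))   # ensemble mean and spread
```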

  16. Improved measurements of RNA structure conservation with generalized centroid estimators

    Directory of Open Access Journals (Sweden)

    Yohei eOkada

    2011-08-01

    Full Text Available Identification of non-protein-coding RNAs (ncRNAs) in genomes is a crucial task for not only molecular cell biology but also bioinformatics. Secondary structures of ncRNAs are employed as a key feature of ncRNA analysis since biological functions of ncRNAs are deeply related to their secondary structures. Although the minimum free energy (MFE) structure of an RNA sequence is regarded as the most stable structure, MFE alone could not be an appropriate measure for identifying ncRNAs since the free energy is heavily biased by the nucleotide composition. Therefore, instead of MFE itself, several alternative measures for identifying ncRNAs have been proposed such as the structure conservation index (SCI) and the base pair distance (BPD), both of which employ MFE structures. However, these measurements are unfortunately not suitable for identifying ncRNAs in some cases including the genome-wide search and incur high false discovery rate. In this study, we propose improved measurements based on SCI and BPD, applying generalized centroid estimators to incorporate the robustness against low quality multiple alignments. Our experiments show that our proposed methods achieve higher accuracy than the original SCI and BPD for not only human-curated structural alignments but also low quality alignments produced by CLUSTAL W. Furthermore, the centroid-based SCI on CLUSTAL W alignments is more accurate than or comparable with that of the original SCI on structural alignments generated with RAF, a high quality structural aligner, which requires on average twice the computational time. We conclude that our methods are more suitable for genome-wide alignments, which are of low quality from the point of view of secondary structures, than the original SCI and BPD.

  17. Improved measurements of RNA structure conservation with generalized centroid estimators.

    Science.gov (United States)

    Okada, Yohei; Saito, Yutaka; Sato, Kengo; Sakakibara, Yasubumi

    2011-01-01

    Identification of non-protein-coding RNAs (ncRNAs) in genomes is a crucial task for not only molecular cell biology but also bioinformatics. Secondary structures of ncRNAs are employed as a key feature of ncRNA analysis since biological functions of ncRNAs are deeply related to their secondary structures. Although the minimum free energy (MFE) structure of an RNA sequence is regarded as the most stable structure, MFE alone could not be an appropriate measure for identifying ncRNAs since the free energy is heavily biased by the nucleotide composition. Therefore, instead of MFE itself, several alternative measures for identifying ncRNAs have been proposed such as the structure conservation index (SCI) and the base pair distance (BPD), both of which employ MFE structures. However, these measurements are unfortunately not suitable for identifying ncRNAs in some cases including the genome-wide search and incur high false discovery rate. In this study, we propose improved measurements based on SCI and BPD, applying generalized centroid estimators to incorporate the robustness against low quality multiple alignments. Our experiments show that our proposed methods achieve higher accuracy than the original SCI and BPD for not only human-curated structural alignments but also low quality alignments produced by CLUSTAL W. Furthermore, the centroid-based SCI on CLUSTAL W alignments is more accurate than or comparable with that of the original SCI on structural alignments generated with RAF, a high quality structural aligner, which requires on average twice the computational time. We conclude that our methods are more suitable for genome-wide alignments, which are of low quality from the point of view of secondary structures, than the original SCI and BPD.
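
    For reference, the baseline SCI that the paper improves upon is simple once folding energies are available: the consensus MFE of the alignment divided by the mean MFE of the individual sequences. The sketch below leaves the energy evaluations as placeholder numbers (in practice a folding engine such as RNAfold/RNAalifold from the ViennaRNA package would supply them); the centroid-estimator refinement proposed by the paper is not reproduced here.

```python
def sci(consensus_mfe: float, individual_mfes: list[float]) -> float:
    """Structure conservation index: E_consensus / mean(E_individual).
    Energies are negative, so SCI near 1 indicates a well-conserved
    common structure and SCI near 0 indicates no common structure."""
    mean_e = sum(individual_mfes) / len(individual_mfes)
    return consensus_mfe / mean_e

# Placeholder energies (kcal/mol) that a folding engine would compute
print(sci(-25.1, [-27.0, -26.2, -24.8]))   # ~0.97: highly conserved
```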

  18. Developing a monitoring method facilitating continual improvements in the sorting of waste at recycling centres.

    Science.gov (United States)

    Krook, Joakim; Eklund, Mats

    2010-01-01

    Beneficial use of waste relies on efficient systems for collection and separation. In Sweden, a bring system involving recycling centres for collection of bulky, electr(on)ic and hazardous waste has been introduced. A significant share of this waste is incorrectly sorted, causing downstream environmental implications. At present, however, there is a lack of affordable and accurate monitoring methods for providing the recycling centres with the necessary facts for improving the sorting of waste. The aim of this study was therefore to evaluate the usability of a simplified and potentially more suitable waste monitoring method for recycling centres. This method is based on standardised observations where the occurrence of incorrect sorting is monitored by taking digital pictures of the waste which then are analysed according to certain guidelines. The results show that the developed monitoring method could offer a resource-efficient and useful tool for proactive quality work at recycling centres, involving continuous efforts in developing and evaluating measures for improved sorting of waste. More research is however needed in order to determine to what extent the obtained results from the monitoring method are reliable.

  19. Social networks improve leaderless group navigation by facilitating long-distance communication

    Directory of Open Access Journals (Sweden)

    Nikolai W. F. BODE, A. Jamie WOOD, Daniel W. FRANKS

    2012-04-01

    Full Text Available Group navigation is of great importance for many animals, such as migrating flocks of birds or shoals of fish. One theory states that group membership can improve navigational accuracy compared to limited or less accurate individual navigational ability in groups without leaders (“Many-wrongs principle”). Here, we simulate leaderless group navigation that includes social connections as preferential interactions between individuals. Our results suggest that underlying social networks can reduce navigational errors of groups and increase group cohesion. We use network summary statistics, in particular network motifs, to study which characteristics of networks lead to these improvements. It is networks in which preferences between individuals are not clustered, but spread evenly across the group, that are advantageous in group navigation, by effectively enhancing long-distance information exchange within groups. We suggest that our work predicts a base-line for the type of social structure we might expect to find in group-living animals that navigate without leaders [Current Zoology 58 (2): 329–341, 2012].
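
    A toy simulation of the many-wrongs setting studied here: each individual holds a noisy estimate of the goal direction and, at every step, steers toward a blend of its own compass reading and the headings of its networked neighbours. The update rule, the parameter values, and the comparison networks are illustrative assumptions, not the authors' model.

```python
import numpy as np

def group_heading_error(adjacency, noise=0.6, weight=0.5, steps=200, seed=5):
    """Mean absolute deviation of final headings from the true goal
    direction (0 rad) for agents that blend their own noisy compass
    with the mean heading of their network neighbours."""
    rng = np.random.default_rng(seed)
    n = adjacency.shape[0]
    heading = rng.normal(0.0, noise, n)          # initial noisy guesses
    for _ in range(steps):
        compass = rng.normal(0.0, noise, n)      # fresh noisy goal estimate
        neigh = adjacency @ heading / np.maximum(adjacency.sum(axis=1), 1)
        heading = (1 - weight) * compass + weight * neigh
    return float(np.mean(np.abs(heading)))

n = 30
rng = np.random.default_rng(6)
solo = np.eye(n)                                  # no social interactions
random_net = (rng.uniform(size=(n, n)) < 0.2).astype(float)
np.fill_diagonal(random_net, 1)
print(group_heading_error(solo), group_heading_error(random_net))
# Expect the networked group to track the goal direction more accurately.
```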

  20. Social networks improve leaderless group navigation by facilitating long-distance communication

    Institute of Scientific and Technical Information of China (English)

    Nikolai W.F.BODE; A.Jamie WOOD; Daniel W.FRANKS

    2012-01-01

    Group navigation is of great importance for many animals, such as migrating flocks of birds or shoals of fish. One theory states that group membership can improve navigational accuracy compared to limited or less accurate individual navigational ability in groups without leaders ("Many-wrongs principle"). Here, we simulate leaderless group navigation that includes social connections as preferential interactions between individuals. Our results suggest that underlying social networks can reduce navigational errors of groups and increase group cohesion. We use network summary statistics, in particular network motifs, to study which characteristics of networks lead to these improvements. It is networks in which preferences between individuals are not clustered, but spread evenly across the group, that are advantageous in group navigation by effectively enhancing long-distance information exchange within groups. We suggest that our work predicts a base-line for the type of social structure we might expect to find in group-living animals that navigate without leaders [Current Zoology 58 (2): 329-341, 2012].

  1. Organizational coherence in health care organizations: conceptual guidance to facilitate quality improvement and organizational change.

    Science.gov (United States)

    McAlearney, Ann Scheck; Terris, Darcey; Hardacre, Jeanne; Spurgeon, Peter; Brown, Claire; Baumgart, Andre; Nyström, Monica E

    2014-01-01

    We sought to improve our understanding of how health care quality improvement (QI) methods and innovations could be efficiently and effectively translated between settings to reduce persistent gaps in health care quality both within and across countries. We aimed to examine whether we could identify a core set of organizational cultural attributes, independent of context and setting, which might be associated with success in implementing and sustaining QI systems in health care organizations. We convened an international group of investigators to explore the issues of organizational culture and QI in different health care contexts and settings. This group met in person 3 times and held a series of conference calls to discuss emerging ideas over 2 years. Investigators also conducted pilot studies in their home countries to examine the applicability of our conceptual model. We suggest that organizational coherence may be a critical element of QI efforts in health care organizations and propose that there are 3 key components of organizational coherence: (1) people, (2) processes, and (3) perspectives. Our work suggests that the concept of organizational coherence embraces both culture and context and can thus help guide both researchers and practitioners in efforts to enhance health care QI efforts, regardless of organizational type, location, or context.

  2. Improved Critical Eigenfunction Restriction Estimates on Riemannian Surfaces with Nonpositive Curvature

    Science.gov (United States)

    Xi, Yakun; Zhang, Cheng

    2016-07-01

    We show that one can obtain improved L^4 geodesic restriction estimates for eigenfunctions on compact Riemannian surfaces with nonpositive curvature. We achieve this by adapting Sogge's strategy in (Improved critical eigenfunction estimates on manifolds of nonpositive curvature, Preprint). We first combine the improved L^2 restriction estimate of Blair and Sogge (Concerning Toponogov's Theorem and logarithmic improvement of estimates of eigenfunctions, Preprint) and the classical improved L^∞ estimate of Bérard to obtain an improved weak-type L^4 restriction estimate. We then upgrade this weak estimate to a strong one by using the improved Lorentz space estimate of Bak and Seeger (Math Res Lett 18(4):767-781, 2011). This estimate improves the L^4 restriction estimate of Burq et al. (Duke Math J 138:445-486, 2007) and Hu (Forum Math 6:1021-1052, 2009) by a power of (log log λ)^{-1}. Moreover, in the case of compact hyperbolic surfaces, we obtain further improvements in terms of (log λ)^{-1} by applying the ideas from (Chen and Sogge, Commun Math Phys 329(3):435-459, 2014) and (Blair and Sogge, Concerning Toponogov's Theorem and logarithmic improvement of estimates of eigenfunctions, Preprint). We are able to compute explicitly various constants that appeared in (Chen and Sogge, Commun Math Phys 329(3):435-459, 2014), by proving detailed oscillatory integral estimates and lifting calculations to the universal cover H^2.

  3. From crossbreeding to biotechnology-facilitated improvement of banana and plantain.

    Science.gov (United States)

    Ortiz, Rodomiro; Swennen, Rony

    2014-01-01

    The annual harvest of banana and plantain (Musa spp.) is approximately 145 million tons worldwide. About 85% of this global production comes from small plots and kitchen or backyard gardens in the developing world, and only 15% goes to the export trade. Musa acuminata and Musa balbisiana are the ancestors of several hundred parthenocarpic Musa diploid and polyploid cultivars, which show multiple origins through inter- and intra-specific hybridizations from these two wild diploid species. Generating hybrids combining host plant resistance to pathogens and pests, short growth cycles and height, high fruit yield, parthenocarpy, and desired quality from the cultivars remains a challenge for Musa crossbreeding, which started about one century ago in Trinidad. The success of Musa crossbreeding depends on the production of true hybrid seeds in a crop known for its high levels of female sterility, particularly among polyploid cultivars. All banana export cultivars grown today are, however, selections from somatic mutants of the group Cavendish and have a very narrow genetic base, while smallholders in sub-Saharan Africa, tropical Asia and Latin America use some bred hybrids (mostly cooking types). Musa improvement goals need to shift to address emerging threats because of the changing climate. Innovative cell and molecular biology tools have the potential to enhance the pace and efficiency of genetic improvement in Musa. Micro-propagation has been successful for high throughput of clean planting materials, while in vitro seed germination assists in obtaining seedlings after inter-specific and across-ploidy hybridization. Flow cytometry protocols are used for checking ploidy among genebank accessions and breeding materials. DNA markers, the genetic maps based on them, and the recent sequencing of the banana genome offer means for gaining more insights into the genetics of the crops and for identifying genes that could accelerate Musa betterment. Likewise, DNA

  4. Sintered cadmium telluride nanocrystal photovoltaics: Improving chemistry to facilitate roll-to-roll fabrication

    Science.gov (United States)

    Kurley, James Matthew, III

    Recent interest in clean, renewable energy has increased importance on cost-effective and materials-efficient deposition methods. Solution-processed solar cells utilizing cadmium telluride nanocrystal inks offer a viable method for reducing cost, increasing materials effectiveness, and decreasing the need for fossil fuels in the near future. Initial work focused on developing a useful platform for testing new chemistries for solubilizing and depositing nanocrystal inks. Layer-by-layer deposition using a combination of spincoating, cadmium chloride treatment, and annealing created a photovoltaic-grade CdTe absorber layer. In conjunction with layer-by-layer deposition, a device architecture of ITO/CdTe/ZnO/Al was utilized to create power conversion efficiencies of over 12% with the help of current/light soaking. Detailed exploration of device geometry, capacitance measurements, and intensity- and temperature-dependent testing determined the ITO/CdTe interface required additional scrutiny. This initial investigation sparked three new avenues of research: create an Ohmic contact to CdTe, remove the cadmium chloride bath treatment, and create a roll-to-roll friendly process. Improved contact between ITO and CdTe was achieved by using a variety of materials already proven to create Ohmic contact to CdTe. While most of these materials were previously employed using standard approaches, solution-processed analogs were explored. The cadmium chloride bath treatment proved inconsistent, wasteful, and difficult to utilize quickly. It was removed by using trichlorocadmate-capped nanocrystals to combine the semiconductor with the required grain growth agent. To establish a roll-to-roll friendly process, the deposition method was improved, the heating source changed, and the cadmium chloride bath step removed. Spraycoating or doctor-blading the trichlorocadmate-capped nanocrystals followed by annealing with an IR lamp established a process that can deposit CdTe in a high throughput

  5. Facilitated Nurse Medication-Related Event Reporting to Improve Medication Management Quality and Safety in Intensive Care Units.

    Science.gov (United States)

    Xu, Jie; Reale, Carrie; Slagle, Jason M; Anders, Shilo; Shotwell, Matthew S; Dresselhaus, Timothy; Weinger, Matthew B

    Medication safety presents an ongoing challenge for nurses working in complex, fast-paced, intensive care unit (ICU) environments. Studying ICU nurses' medication management, especially medication-related events (MREs), provides an approach to analyze and improve medication safety and quality. The goal of this study was to explore the utility of facilitated MRE reporting in identifying system deficiencies and the relationship between MREs and nurses' work in the ICUs. We conducted 124 structured 4-hour observations of nurses in three different ICUs. Each observation included measurement of the nurse's moment-to-moment activity and self-reports of workload and negative mood. The observer then obtained MRE reports from the nurse using a structured tool. The MREs were analyzed by three experts. MREs were reported in 35% of observations. The 60 total MREs included four medication errors and seven adverse drug events. Of the 49 remaining MREs, 65% were associated with negative patient impact. Task/process deficiencies were the most common contributory factor for MREs. MRE occurrence was correlated with increased total task volume. MREs also correlated with increased workload, especially during night shifts. Most of these MREs would not be captured by traditional event reporting systems. Facilitated MRE reporting provides a robust information source about potential breakdowns in medication management safety and opportunities for system improvement.

  6. Improved variance estimation of maximum likelihood estimators in stable first-order dynamic regression models

    NARCIS (Netherlands)

    Kiviet, J.F.; Phillips, G.D.A.

    2014-01-01

    In dynamic regression models conditional maximum likelihood (least-squares) coefficient and variance estimators are biased. Using expansion techniques an approximation is obtained to the bias in variance estimation yielding a bias corrected variance estimator. This is achieved for both the standard

  7. Improving Estimation Accuracy of Aggregate Queries on Data Cubes

    Energy Technology Data Exchange (ETDEWEB)

    Pourabbas, Elaheh; Shoshani, Arie

    2008-08-15

    In this paper, we investigate the problem of estimation of a target database from summary databases derived from a base data cube. We show that such estimates can be derived by choosing a primary database which uses a proxy database to estimate the results. This technique is common in statistics, but an important issue we are addressing is the accuracy of these estimates. Specifically, given multiple primary and multiple proxy databases that share the same summary measure, the problem is how to select the primary and proxy databases that will generate the most accurate target database estimation possible. We propose an algorithmic approach for determining the steps to select or compute the source databases from multiple summary databases, which makes use of the principles of information entropy. We show that the source databases with the largest number of cells in common provide more accurate estimates. We prove that this is consistent with maximizing the entropy. We provide some experimental results on the accuracy of the target database estimation in order to verify our results.
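
    To make the proxy idea concrete, here is a minimal numpy sketch (not the paper's algorithm) of estimating a three-dimensional target cube from two summaries of a toy base cube: a primary summary supplies the totals and a proxy summary supplies the distribution over the missing dimension, with Shannon entropy shown as one way to characterize a candidate summary. All names and numbers are illustrative.

```python
import numpy as np

# Toy base cube over dimensions (region, product, year), known in this
# scenario only through its summaries.
rng = np.random.default_rng(0)
base = rng.integers(1, 100, size=(4, 5, 3)).astype(float)

# Summary databases derived from the base cube (marginal sums).
primary = base.sum(axis=2)          # (region, product)
proxy   = base.sum(axis=0)          # (product, year)

# Target: (region, product, year) estimated by distributing each
# primary cell over 'year' in proportion to the proxy.
weights = proxy / proxy.sum(axis=1, keepdims=True)      # P(year | product)
estimate = primary[:, :, None] * weights[None, :, :]

def entropy(table):
    """Shannon entropy of a summary table viewed as a distribution."""
    p = table.ravel() / table.sum()
    return -(p * np.log2(p)).sum()

print("entropy(primary) =", entropy(primary))
print("mean abs error   =", np.abs(estimate - base).mean())
```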

  8. How does sport psychology actually improve athletic performance? A framework to facilitate athletes' and coaches' understanding.

    Science.gov (United States)

    Gee, Chris J

    2010-09-01

    The popularity of sport psychology, both as an academic discipline and an applied practice, has grown substantially over the past two decades. Few within the realm of competitive athletics would argue with the importance of being mentally prepared prior to an athletic competition as well as the need to maintain that particular mindset during a competitive contest. Nevertheless, recent research has shown that many athletes, coaches, and sporting administrators are still quite reluctant to seek out the services of a qualified sport psychologist, even if they believe it could help. One of the primary reasons for this hesitation appears to be a lack of understanding about the process and the mechanisms by which these mental skills affect performance. Unlike the "harder sciences" of sport physiology and biochemistry where athletes can see the tangible results in themselves or other athletes (e.g., he or she lifted weights, developed larger muscles, and is now stronger/faster as a result), the unfamiliar and often esoteric nature of sport psychology appears to be impeding a large number of athletes from soliciting these important services. As such, the purpose of this article is to provide the reader with a simple framework depicting how mental skills training translates into improved within-competition performance. This framework is intended to help bridge the general "understanding gap" that is currently being reported by a large number of athletes and coaches, while also helping sport psychology practitioners sell their valuable services to individual athletes and teams.

  9. Electrotactile feedback improves performance and facilitates learning in the routine grasping task

    Directory of Open Access Journals (Sweden)

    Milica Isaković

    2016-06-01

    Full Text Available The aim of this study was to investigate the feasibility of electrotactile feedback in closed-loop training of force control during the routine grasping task. The feedback was provided using an array electrode and a simple six-level spatial coding, and the experiment was conducted in three amputee subjects. The psychometric tests confirmed that the subjects could perceive and interpret the electrotactile feedback with a high success rate. The subjects performed the routine grasping task comprising 4 blocks of 60 grasping trials. In each trial, the subjects employed feedforward control to close the hand and produce the desired grasping force (four levels). The first (baseline) and the last (validation) sessions were performed in open loop, while the second and the third sessions (training) included electrotactile feedback. The obtained results confirmed that using the feedback improved the accuracy and precision of the force control. In addition, the subjects performed significantly better in the validation vs. baseline session, suggesting that electrotactile feedback can be used for learning and training of myoelectric control.

  10. Improving reflectance estimation by BRDF-consistent region clustering

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Previous studies in reflectance estimation generally require prior segmentation of an image into regions of uniform reflectance. Due to measurement noise and limited sampling of the BRDF (bidirectional reflectance distribution function) directions, such reflectance estimates are not accurate. In this paper, we propose a novel method for reducing uncertainty in reflectance estimates by merging image regions that have consistent reflectance observations. Each image region acts as a reflectance subspace, so merging image regions results in subspace reduction. We propose a Bayesian segmentation framework to decrease reflectance uncertainty using novel merging criteria. Finally, a maximum likelihood reflectance estimate is made for each resulting image region. Experimental results verify the feasibility and superiority of this reflectance-oriented region-merging method.

  11. FEH Local: Improving flood estimates using historical data

    Directory of Open Access Journals (Sweden)

    Prosdocimi Ilaria

    2016-01-01

    Full Text Available The traditional approach to design flood estimation (for example, to derive the 100-year flood) is to apply a statistical model to time series of peak river flow measured by gauging stations. Such records are typically not very long; for example, in the UK only about 10% of the stations have records that are more than 50 years in length. A long-explored way to augment the data available from a gauging station is to derive information about historical flood events and paleo-floods, which can be obtained from careful exploration of archives, old newspapers, flood marks or other signs of past flooding that are still discernible in the catchment, and the history of settlements. The inclusion of historical data in flood frequency estimation has been shown to substantially reduce the uncertainty around the estimated design events and is likely to provide insight into the rarest events, which might have pre-dated the relatively short systematic records. Among other things, the FEH Local project funded by the Environment Agency aims to develop methods to easily incorporate historical information into the standard method of statistical flood frequency estimation in the UK. Different statistical estimation procedures are explored, namely maximum likelihood and partial probability weighted moments, and the strengths and weaknesses of each method are investigated. The project assesses the usefulness of historical data and aims to provide practitioners with useful guidelines to indicate in what circumstances the inclusion of historical data is likely to be beneficial in terms of reducing both the bias and the variability of the estimated flood frequency curves. The guidelines are based on the results of a large Monte Carlo simulation study, in which different estimation procedures and different data availability scenarios are studied. The study provides some indication of the situations under which different estimation procedures might give a better performance.
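
    As an illustration of the maximum-likelihood route, the sketch below fits a Gumbel distribution to a synthetic gauged record augmented with historical information, treating the h pre-gauge years as censored below a perception threshold except for the k known large floods. This is a standard way to write such a likelihood, not necessarily the exact FEH Local formulation; all numbers are invented.

```python
import numpy as np
from scipy import stats, optimize

# Systematic annual-maximum flows (m^3/s) and historical information:
# over h = 150 pre-gauge years, only the floods above a perception
# threshold are known; all other years are censored below it.
rng = np.random.default_rng(1)
systematic = stats.gumbel_r.rvs(loc=300, scale=80, size=40, random_state=rng)
historical = np.array([620.0, 580.0, 710.0])   # known large historical floods
h, x_perc = 150, 550.0                          # historical period, threshold

def neg_log_lik(theta):
    loc, scale = theta
    if scale <= 0:
        return np.inf
    d = stats.gumbel_r(loc=loc, scale=scale)
    ll = d.logpdf(systematic).sum()                 # gauged record
    ll += d.logpdf(historical).sum()                # known historical peaks
    ll += (h - len(historical)) * d.logcdf(x_perc)  # censored historical years
    return -ll

res = optimize.minimize(neg_log_lik, x0=[300.0, 80.0], method="Nelder-Mead")
loc, scale = res.x
q100 = stats.gumbel_r(loc=loc, scale=scale).ppf(1 - 1 / 100)
print(f"fitted loc={loc:.1f}, scale={scale:.1f}, 100-year flood ~ {q100:.0f} m^3/s")
```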

  12. Bayesian fusion algorithm for improved oscillometric blood pressure estimation.

    Science.gov (United States)

    Forouzanfar, Mohamad; Dajani, Hilmi R; Groza, Voicu Z; Bolic, Miodrag; Rajan, Sreeraman; Batkin, Izmail

    2016-11-01

    A variety of oscillometric algorithms have been recently proposed in the literature for estimation of blood pressure (BP). However, these algorithms possess specific strengths and weaknesses that should be taken into account before selecting the most appropriate one. In this paper, we propose a fusion method to exploit the advantages of the oscillometric algorithms and circumvent their limitations. The proposed fusion method is based on the computation of the weighted arithmetic mean of the oscillometric algorithms estimates, and the weights are obtained using a Bayesian approach by minimizing the mean square error. The proposed approach is used to fuse four different oscillometric blood pressure estimation algorithms. The performance of the proposed method is evaluated on a pilot dataset of 150 oscillometric recordings from 10 subjects. It is found that the mean error and standard deviation of error are reduced relative to the individual estimation algorithms by up to 7 mmHg and 3 mmHg in estimation of systolic pressure, respectively, and by up to 2 mmHg and 3 mmHg in estimation of diastolic pressure, respectively.
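
    The fusion step can be illustrated with inverse-variance weighting, one standard way to obtain MSE-minimizing weights for bias-corrected, roughly independent estimators; the paper's Bayesian derivation is more general. The sketch below fuses four synthetic oscillometric algorithms against a synthetic reference.

```python
import numpy as np

# Systolic BP estimates (mmHg) from four hypothetical oscillometric
# algorithms on training recordings with a reference measurement.
rng = np.random.default_rng(2)
reference = rng.normal(120, 10, size=150)
estimates = np.stack([reference + rng.normal(b, s, size=150)
                      for b, s in [(3, 6), (-2, 5), (5, 9), (0, 7)]])

# Learn the weights on training data: after bias correction, weights
# inversely proportional to each algorithm's error variance minimize
# the MSE of the weighted arithmetic mean.
bias = (estimates - reference).mean(axis=1, keepdims=True)
err_var = (estimates - reference).var(axis=1)
w = (1 / err_var) / (1 / err_var).sum()

fused = (w[:, None] * (estimates - bias)).sum(axis=0)
for name, est in [("best single", estimates[1] - bias[1]), ("fused", fused)]:
    print(name, "RMSE =", np.sqrt(((est - reference) ** 2).mean()).round(2))
```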

  13. Improved Recharge Estimation from Portable, Low-Cost Weather Stations.

    Science.gov (United States)

    Holländer, Hartmut M; Wang, Zijian; Assefa, Kibreab A; Woodbury, Allan D

    2016-03-01

    Groundwater recharge estimation is a critical quantity for sustainable groundwater management. The feasibility and robustness of recharge estimation was evaluated using physical-based modeling procedures, and data from a low-cost weather station with remote sensor techniques in Southern Abbotsford, British Columbia, Canada. Recharge was determined using the Richards-based vadose zone hydrological model, HYDRUS-1D. The required meteorological data were recorded with a HOBO(TM) weather station for a short observation period (about 1 year) and an existing weather station (Abbotsford A) for long-term study purpose (27 years). Undisturbed soil cores were taken at two locations in the vicinity of the HOBO(TM) weather station. The derived soil hydraulic parameters were used to characterize the soil in the numerical model. Model performance was evaluated using observed soil moisture and soil temperature data obtained from subsurface remote sensors. A rigorous sensitivity analysis was used to test the robustness of the model. Recharge during the short observation period was estimated at 863 and 816 mm. The mean annual recharge was estimated at 848 and 859 mm/year based on a time series of 27 years. The relative ratio of annual recharge-precipitation varied from 43% to 69%. From a monthly recharge perspective, the majority (80%) of recharge due to precipitation occurred during the hydrologic winter period. The comparison of the recharge estimates with other studies indicates a good agreement. Furthermore, this method is able to predict transient recharge estimates, and can provide a reasonable tool for estimates on nutrient leaching that is often controlled by strong precipitation events and rapid infiltration of water and nitrate into the soil.

  14. Benefits, Facilitators, Barriers, and Strategies to Improve Pesticide Protective Behaviors: Insights from Farmworkers in North Carolina Tobacco Fields.

    Science.gov (United States)

    Walton, AnnMarie Lee; LePrevost, Catherine E; Linnan, Laura; Sanchez-Birkhead, Ana; Mooney, Kathi

    2017-06-23

    Pesticide exposure is associated with deleterious health effects. Prior studies suggest Latino farmworkers perceive little control over their occupational health. Using the Health Belief Model as a theoretical guide, we explored the perceptions of Latino farmworkers working in tobacco in North Carolina (n = 72) about benefits and facilitators of pesticide protective behaviors as well as barriers, and strategies to overcome barriers to their use. Interviews were conducted with participants at farmworker housing during non-work time. Qualitative data were analyzed using ATLAS.ti. Farmworkers recognized pesticide protective behaviors as helping them to not get sick and stay healthy. Farmworkers perceived work experience as facilitating protective behaviors. Wetness in the field was the most commonly cited barrier to protective behavior use. To overcome this barrier, farmworkers suggested use of water-resistant outerwear, as well as packing a change of clothes for mid-day, with space and time to change provided by employers. Examination of the efficacy and feasibility of farmworkers' suggestions for addressing barriers is warranted. Training and behavior modeling by experienced peers may improve behavior adoption and perceived control.

  15. Maximum-likelihood fits to histograms for improved parameter estimation

    CERN Document Server

    Fowler, Joseph W

    2013-01-01

    Straightforward methods for adapting the familiar chi^2 statistic to histograms of discrete events and other Poisson distributed data generally yield biased estimates of the parameters of a model. The bias can be important even when the total number of events is large. For the case of estimating a microcalorimeter's energy resolution at 6 keV from the observed shape of the Mn K-alpha fluorescence spectrum, a poor choice of chi^2 can lead to biases of at least 10% in the estimated resolution when up to thousands of photons are observed. The best remedy is a Poisson maximum-likelihood fit, through a simple modification of the standard Levenberg-Marquardt algorithm for chi^2 minimization. Where the modification is not possible, another approach allows iterative approximation of the maximum-likelihood fit.
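
    A minimal illustration of the bias and its remedy: fit a Gaussian line to Poisson-distributed histogram counts by minimizing either the Poisson negative log-likelihood or a Neyman chi^2. For simplicity this sketch uses a generic Nelder-Mead minimizer rather than the modified Levenberg-Marquardt algorithm the paper describes; the data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

# Histogram of discrete events drawn from a Gaussian line.
rng = np.random.default_rng(3)
edges = np.linspace(-5, 5, 101)
centers = 0.5 * (edges[:-1] + edges[1:])
counts = np.histogram(rng.normal(0.0, 1.0, 2000), bins=edges)[0]

def model(theta):
    amp, mu, sigma = theta
    return amp * np.exp(-0.5 * ((centers - mu) / sigma) ** 2)

def poisson_nll(theta):
    """Poisson negative log-likelihood (Cash statistic, up to a constant)."""
    m = np.clip(model(theta), 1e-12, None)
    return (m - counts * np.log(m)).sum()

def neyman_chi2(theta):
    """chi^2 with observed counts as variance -- known to bias fits."""
    var = np.clip(counts, 1.0, None)
    return ((counts - model(theta)) ** 2 / var).sum()

start = [counts.max(), 0.1, 0.8]
for name, stat in [("Poisson ML", poisson_nll), ("Neyman chi^2", neyman_chi2)]:
    fit = minimize(stat, start, method="Nelder-Mead")
    print(f"{name:12s} sigma = {fit.x[2]:.3f}   (true 1.000)")
```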

  16. Improving methods estimation of the investment climate of the country

    Directory of Open Access Journals (Sweden)

    E. V. Ryabinin

    2016-01-01

    Investors need the most objective possible assessment of a country's investment climate in order to build their strategies for operating in its market. The article describes two methods for estimating the investment climate: a fundamental method and an expert method. Studies have shown that the fundamental method provides the most accurate and objective assessment, but not all investment-potential factors can be subjected to mathematical evaluation. The use of expert opinion is, in practice, complicated by the subjectivity of the experts, so it requires special care. Modern economic practice has shown that elements of the investment climate directly affect the investment decisions of companies. Improving the methodology for assessing the investment climate makes it possible to build the most suitable forms of cooperation between investors and the host country. Under today's political tensions, this path requires clear cooperation between actors at both the domestic and international levels. Such measures will help avoid destabilizing Russia's relations with foreign investors.

  17. Strategies to facilitate implementation and sustainability of large system transformations: a case study of a national program for improving quality of care for elderly people

    National Research Council Canada - National Science Library

    Nyström, Monica Elisabeth; Strehlenert, Helena; Hansson, Johan; Hasson, Henna

    2014-01-01

    .... The purpose of this study was to examine the characteristics of core activities and strategies to facilitate implementation and change of a national program aimed at improving life for the most ill...

  18. Improvement in air-sea flux estimates derived from satellite observations

    OpenAIRE

    Bentamy, Abderrahim; Grodsky, Semyon A.; Katsaros, Kristina; Mestas-nunez, Alberto M.; Blanke, Bruno; Desbiolles, Fabien

    2013-01-01

    A new method is developed to estimate daily turbulent air-sea fluxes over the global ocean on a 0.25 degree grid. The required surface wind speed (w10) and specific air humidity (q10) at 10 m height are both estimated from remotely sensed measurements. w10 is obtained from the SeaWinds scatterometer on board the QuikSCAT satellite. A new empirical model relating brightness temperatures (Tb) from the Special Sensor Microwave Imager (SSM/I) and q10 is developed. It is an extension of th...
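
    Turbulent fluxes then follow from standard bulk aerodynamic formulae. As a hedged illustration (generic transfer coefficient and invented inputs, not the paper's exact parameterization), the latent heat flux can be computed from w10 and q10 as:

```python
import numpy as np

# Bulk aerodynamic estimate of latent heat flux from satellite-derived
# 10 m wind speed (w10) and specific air humidity (q10).
rho_air = 1.2       # air density (kg m^-3)
L_v = 2.45e6        # latent heat of vaporization (J kg^-1)
C_E = 1.2e-3        # generic bulk transfer coefficient (illustrative)

w10 = np.array([5.0, 8.0, 12.0])          # wind speed (m s^-1)
q10 = np.array([12e-3, 14e-3, 16e-3])     # air specific humidity (kg kg^-1)
q_s = np.array([18e-3, 19e-3, 20e-3])     # saturation humidity at SST (kg kg^-1)

lhf = rho_air * L_v * C_E * w10 * (q_s - q10)   # W m^-2, positive upward
print("latent heat flux:", lhf.round(1), "W m^-2")
```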

  19. Facilitating improved road safety based on increased knowledge about driving behaviour and profiling sub-groups of drivers

    DEFF Research Database (Denmark)

    Martinussen, Laila Marianne

    The aim of the Ph.D. study presented in this thesis was to facilitate improved road safety through increased understanding of methods used to measure driving behaviour, and through increased knowledge about driving behaviour in sub-groups of drivers. More specifically, the usefulness of the Driver Behaviour Questionnaire (DBQ) within a Danish context was explored, sub-groups of drivers differing in their potential danger in traffic were identified, and the relationship between implicit attitudes towards safe and risky driving and self-reported driving behaviour was explored. The methods applied were… …with underlying mechanisms of lack of focus, emotional stress, recklessness and confusion, and hence it is highly important to further explore means to make drivers become more focused or attentive when driving, and to deal with emotional responses in traffic like impatience and frustration (Article 1). 2…

  20. Light-emitting conjugated polymers with microporous network architecture: interweaving scaffold promotes electronic conjugation, facilitates exciton migration, and improves luminescence.

    Science.gov (United States)

    Xu, Yanhong; Chen, Long; Guo, Zhaoqi; Nagai, Atsushi; Jiang, Donglin

    2011-11-09

    Herein we report a strategy for the design of highly luminescent conjugated polymers by restricting rotation of the polymer building blocks through a microporous network architecture. We demonstrate this concept using tetraphenylethene (TPE) as a building block to construct a light-emitting conjugated microporous polymer. The interlocked network successfully restricted the rotation of the phenyl units, which are the major cause of fluorescence deactivation in TPE, thus providing intrinsic luminescence activity for the polymers. We show positive "CMP effects": the network promotes π-conjugation, facilitates exciton migration, and improves luminescence activity. Although the monomer and linear polymer analogue in solvents are nonemissive, the network polymers are highly luminescent in various solvents and in the solid state. Because emission losses due to rotation are ubiquitous among small chromophores, this strategy can be generalized for the de novo design of light-emitting materials by integrating the chromophores into an interlocked network architecture.

  1. A degradable, bioactive, gelatinized alginate hydrogel to improve stem cell/growth factor delivery and facilitate healing after myocardial infarction.

    Science.gov (United States)

    Della Rocca, Domenico G; Willenberg, Bradley J; Ferreira, Leonardo F; Wate, Prateek S; Petersen, John W; Handberg, Eileen M; Zheng, Tong; Steindler, Dennis A; Terada, Naohiro; Batich, Christopher D; Byrne, Barry J; Pepine, Carl J

    2012-11-01

    Despite remarkable effectiveness of reperfusion and drug therapies to reduce morbidity and mortality following myocardial infarction (MI), many patients have debilitating symptoms and impaired left ventricular (LV) function highlighting the need for improved post-MI therapies. A promising concept currently under investigation is intramyocardial injection of high-water content, polymeric biomaterial gels (e.g., hydrogels) to modulate myocardial scar formation and LV adverse remodeling. We propose a degradable, bioactive hydrogel that forms a unique microstructure of continuous, parallel capillary-like channels (Capgel). We hypothesize that the innovative architecture and composition of Capgel can serve as a platform for endogenous cell recruitment and drug/cell delivery, therefore facilitating myocardial repair after MI.

  2. The utah beacon experience: integrating quality improvement, health information technology, and practice facilitation to improve diabetes outcomes in small health care facilities.

    Science.gov (United States)

    Tennison, Janet; Rajeev, Deepthi; Woolsey, Sarah; Black, Jeff; Oostema, Steven J; North, Christie

    2014-01-01

    The Utah Improving Care through Connectivity and Collaboration (IC3) Beacon community (2010-2013) was spearheaded by HealthInsight, a nonprofit, community-based organization. One of the main objectives of IC3 was to improve health care provided to patients with diabetes in three Utah counties, collaborating with 21 independent smaller clinics and two large health care enterprises. This paper will focus on the use of health information technology (HIT) and practice facilitation to develop and implement new care processes to improve clinic workflow and ultimately improve patients' diabetes outcomes at 21 participating smaller, independent clinics. Early in the project, we learned that most of the 21 clinics did not have the resources needed to successfully implement quality improvement (QI) initiatives. IC3 helped clinics effectively use data generated from their electronic health records (EHRs) to design and implement interventions to improve patients' diabetes outcomes. This close coupling of HIT, expert practice facilitation, and Learning Collaboratives was found to be especially valuable in clinics with limited resources. Through this process we learned that (1) an extensive readiness assessment improved clinic retention, (2) clinic champions were important for a successful collaboration, and (3) current EHR systems have limited functionality to assist in QI initiatives. In general, smaller, independent clinics lack knowledge and experience with QI and have limited HIT experience to improve patient care using electronic clinical data. Additionally, future projects like IC3 Beacon will be instrumental in changing clinic culture so that QI is integrated into routine workflow. Our efforts led to significant changes in how practice staff optimized their EHRs to manage and improve diabetes care, while establishing the framework for sustainability. Some of the IC3 Beacon practices are currently smoothly transitioning to new models of care such as Patient

  3. The use of absolute values improves performance of estimation formulae

    DEFF Research Database (Denmark)

    Redal-Baigorri, Belén; Rasmussen, Knud; Heaf, James Goya

    2013-01-01

    Estimation of Glomerular Filtration Rate (GFR) by equations such as Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) or Modification of Diet in Renal Disease (MDRD) is usually expressed as a Body Surface Area (BSA) indexed value (ml/min per 1.73 m²). This can have severe clinical conse...

  4. Barriers and facilitators of interventions for improving antiretroviral therapy adherence: a systematic review of global qualitative evidence

    Directory of Open Access Journals (Sweden)

    Qingyan Ma

    2016-10-01

    Full Text Available Introduction: Qualitative research on antiretroviral therapy (ART) adherence interventions can provide a deeper understanding of intervention facilitators and barriers. This systematic review aims to synthesize qualitative evidence of interventions for improving ART adherence and to inform patient-centred policymaking. Methods: We searched 19 databases to identify studies presenting primary qualitative data on the experiences, attitudes and acceptability of interventions to improve ART adherence among PLHIV and treatment providers. We used thematic synthesis to synthesize qualitative evidence and the CERQual (Confidence in the Evidence from Reviews of Qualitative Research) approach to assess the confidence of review findings. Results: Of 2982 references identified, a total of 31 studies from 17 countries were included. Twelve studies were conducted in high-income countries, 13 in middle-income countries and six in low-income countries. Study populations focused on adults living with HIV (21 studies, n=1025), children living with HIV (two studies, n=46), adolescents living with HIV (four studies, n=70) and pregnant women living with HIV (one study, n=79). Twenty-three studies examined PLHIV perspectives and 13 studies examined healthcare provider perspectives. We identified six themes related to types of interventions, including task shifting, education, mobile phone text messaging, directly observed therapy, medical professional outreach and complex interventions. We also identified five cross-cutting themes, including strengthening social relationships, ensuring confidentiality, empowerment of PLHIV, compensation and integrating religious beliefs into interventions. Our qualitative evidence suggests that strengthening PLHIV social relationships, PLHIV empowerment and developing culturally appropriate interventions may facilitate adherence interventions. Our study indicates that potential barriers are inadequate training and compensation for lay

  5. Improving estimates of riverine fresh water into the Mediterranean sea

    Science.gov (United States)

    Wang, Fuxing; Polcher, Jan

    2017-04-01

    Estimating the freshwater input from the continents into the Mediterranean sea is a difficult endeavor due to uncertainties from un-gauged rivers, human activities, and the measurement of water flow at river outlets. One approach to estimate the freshwater inflow into the Mediterranean sea is based on the observed flux (about 63% available) and a simple annual water balance for rivers without observations (ignoring human usage and other processes). This method is the basis of most water balance studies of the Mediterranean sea and oceanic modelling activities, but it only provides annual mean values under a very strong assumption. Another approach is to force a state-of-the-art land surface model (LSM) with bias-corrected atmospheric conditions. This method can estimate the total fresh water flowing into the Mediterranean at daily scale, but with all the caveats associated with models. We use data assimilation techniques to merge the model output (the ORCHIDEE LSM developed at Institut Pierre Simon Laplace) with observed river discharge from the Global Runoff Data Center (GRDC), correcting the modelled fluxes with observations over the entire basin. Over each sub-watershed, the GRDC data (if available) are applied to correct the model-simulated river discharge. This compensates for systematic model errors or missing processes and provides estimates of the riverine input into the sea at high temporal and spatial resolution. We will compare the freshwater inflow into the Mediterranean obtained here with the different approaches reported in previous papers. The new estimates will serve ocean modelling and water balance studies of the region.

  6. Efficacy of proprioceptive neuromuscular facilitation techniques versus traditional prosthetic training for improving ambulatory function in transtibial amputees

    Directory of Open Access Journals (Sweden)

    Pallavi Sahay, MPT

    2014-06-01

    Full Text Available The objective of this randomized controlled trial was to evaluate the efficacy of proprioceptive neuromuscular facilitation (PNF) techniques in comparison to traditional prosthetic training (TPT) in improving ambulatory function in transtibial amputees. Thirty study participants (19 men and 11 women) with unilateral transtibial amputation participated in the study. They were randomly allocated to either the traditional training group (i.e., TPT; n = 15) or the PNF training group (n = 15). The treatment in the TPT group consisted of weight-bearing, weight-shifting, balance, and gait exercises for 30 minutes daily for 10 treatment sessions. In the PNF group, the same activities were performed by employing PNF principles and techniques. The outcome measures were gait parameters (e.g., stride width, step length, and stride length) and the Locomotor Capabilities Index (LCI). The between-group comparisons at the end of the trial showed that the PNF group showed significant improvement in gait parameters and in the LCI, compared to the TPT group (p < 0.05). The results of the study suggested that prosthetic training based on proprioceptive feedback is more effective than the traditional prosthetic programme in improving ambulatory function.

  7. Estimation of time varying system parameters from ambient response using improved Particle-Kalman filter with correlated noise

    Science.gov (United States)

    Sen, Subhamoy; Crinière, Antoine; Mevel, Laurent; Cerou, Frederic; Dumoulin, Jean

    2017-04-01

    …within a PF environment that estimates the parameters. This facilitates employing a relatively inexpensive linear KF for the linear state estimation problem, while the costly PF is employed only for parameter estimation. Additionally, the proposed algorithm also handles systems for which the system and measurement noises are correlated, rather than uncorrelated as is commonly idealized in standard filtering algorithms. For mechanical systems under ambient vibration, for example, this happens when the acceleration response is taken as the measurement: the process and measurement noises in these system descriptions are then obviously correlated. For this, an improved expression for the Kalman gain is developed. Further, to enhance the consistency of particle-filtering-based parameter estimation involving a high-dimensional parameter space, a new temporal evolution strategy for the particles is defined. This strategy aims at preventing the solution from diverging (up to the point of no return) because of an isolated event of infeasible estimation, which is very likely, especially when dealing with a high-dimensional parameter space.
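
    The nesting of a Kalman filter inside a particle filter (often called Rao-Blackwellization) can be sketched for a toy scalar system with one unknown parameter: each particle carries a parameter hypothesis plus its own KF mean and variance, and the KF innovation likelihood weights the particles. This sketch assumes uncorrelated noises and omits the paper's improved Kalman gain and particle-evolution strategy.

```python
import numpy as np

rng = np.random.default_rng(4)

# Scalar system x_k = a x_{k-1} + w, y_k = x_k + v, with unknown a.
a_true, q, r, T = 0.85, 0.1, 0.5, 200
x, ys = 0.0, []
for _ in range(T):
    x = a_true * x + rng.normal(0, np.sqrt(q))
    ys.append(x + rng.normal(0, np.sqrt(r)))

n = 500                                   # particles over the parameter a
a = rng.uniform(0.0, 1.0, n)              # parameter particles
m = np.zeros(n)                           # KF mean per particle
P = np.ones(n)                            # KF variance per particle
logw = np.zeros(n)

for y in ys:
    # Kalman prediction and update, vectorized across particles.
    m_pred, P_pred = a * m, a**2 * P + q
    S = P_pred + r                        # innovation variance
    K = P_pred / S                        # Kalman gain
    logw += -0.5 * (np.log(2 * np.pi * S) + (y - m_pred) ** 2 / S)
    m, P = m_pred + K * (y - m_pred), (1 - K) * P_pred

    # Normalize and resample when the effective sample size collapses.
    w = np.exp(logw - logw.max()); w /= w.sum()
    if 1.0 / (w**2).sum() < n / 2:
        idx = rng.choice(n, n, p=w)
        a, m, P = a[idx], m[idx], P[idx]
        a += rng.normal(0, 0.01, n)       # small jitter keeps diversity
        logw = np.zeros(n)

w = np.exp(logw - logw.max()); w /= w.sum()
print("estimated a =", float((w * a).sum()), " (true 0.85)")
```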

  8. Improved estimate of the cross section for inverse beta decay

    CERN Document Server

    Ankowski, Artur M

    2016-01-01

    The hypothesis of the conserved vector current, relating the vector weak and isovector electromagnetic currents, plays a fundamental role in quantitative description of neutrino interactions. Despite being experimentally confirmed with great precision, it is not fully implemented in existing calculations of the cross section for inverse beta decay, the dominant mechanism of antineutrino scattering at energies below a few tens of MeV. In this article, I estimate the corresponding cross section and its uncertainty, ensuring conservation of the vector current. While converging to previous calculations at energies of several MeV, the obtained result is appreciably lower and predicts more directional positron production near the reaction threshold. These findings suggest that in the current estimate of the flux of geologically produced antineutrinos the 232Th and 238U components may be underestimated by 6.1 and 3.7%, respectively. The proposed search for light sterile neutrinos using a 144Ce–144Pr source is predi...

  9. Uncertainty Estimation Improves Energy Measurement and Verification Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Walter, Travis; Price, Phillip N.; Sohn, Michael D.

    2014-05-14

    Implementing energy conservation measures in buildings can reduce energy costs and environmental impacts, but such measures cost money to implement so intelligent investment strategies require the ability to quantify the energy savings by comparing actual energy used to how much energy would have been used in absence of the conservation measures (known as the baseline energy use). Methods exist for predicting baseline energy use, but a limitation of most statistical methods reported in the literature is inadequate quantification of the uncertainty in baseline energy use predictions. However, estimation of uncertainty is essential for weighing the risks of investing in retrofits. Most commercial buildings have, or soon will have, electricity meters capable of providing data at short time intervals. These data provide new opportunities to quantify uncertainty in baseline predictions, and to do so after shorter measurement durations than are traditionally used. In this paper, we show that uncertainty estimation provides greater measurement and verification (M&V) information and helps to overcome some of the difficulties with deciding how much data is needed to develop baseline models and to confirm energy savings. We also show that cross-validation is an effective method for computing uncertainty. In so doing, we extend a simple regression-based method of predicting energy use using short-interval meter data. We demonstrate the methods by predicting energy use in 17 real commercial buildings. We discuss the benefits of uncertainty estimates which can provide actionable decision making information for investing in energy conservation measures.
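
    The cross-validation idea can be shown with a toy change-point baseline model: out-of-sample residuals from k-fold cross-validation give an empirical estimate of baseline prediction uncertainty. The model form, balance point, and data below are all illustrative assumptions.

```python
import numpy as np

# Baseline model: predict energy use from outdoor temperature with a
# piecewise-linear (change-point) regression; quantify its prediction
# uncertainty by k-fold cross-validation (synthetic data).
rng = np.random.default_rng(5)
temp = rng.uniform(-5, 35, 2000)
energy = 50 + 3.0 * np.maximum(temp - 18, 0) + rng.normal(0, 5, 2000)

def fit_predict(t_train, e_train, t_test, balance_point=18.0):
    X = np.column_stack([np.ones_like(t_train),
                         np.maximum(t_train - balance_point, 0)])
    beta, *_ = np.linalg.lstsq(X, e_train, rcond=None)
    Xt = np.column_stack([np.ones_like(t_test),
                          np.maximum(t_test - balance_point, 0)])
    return Xt @ beta

# k-fold CV: out-of-sample residuals estimate baseline prediction error.
k, idx = 10, rng.permutation(len(temp))
cv_resid = []
for fold in np.array_split(idx, k):
    train = np.setdiff1d(idx, fold)
    pred = fit_predict(temp[train], energy[train], temp[fold])
    cv_resid.append(energy[fold] - pred)
cv_resid = np.concatenate(cv_resid)

print("CV RMSE per interval:", np.sqrt((cv_resid**2).mean()).round(2))
# Savings estimates over a reporting period inherit this uncertainty.
```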

  10. Recent Improvements in Estimating Convective and Stratiform Rainfall in Amazonia

    Science.gov (United States)

    Negri, Andrew J.

    1999-01-01

    In this paper we present results from the application of a satellite infrared (IR) technique for estimating rainfall over northern South America. Our main objectives are to examine the diurnal variability of rainfall and to investigate the relative contributions from the convective and stratiform components. We apply the technique of Anagnostou et al (1999). In simple functional form, the estimated rain area A_rain may be expressed as: A_rain = f(A_mode, T_mode), where T_mode is the mode temperature of a cloud defined by the 253 K threshold, and A_mode is the area encompassed by T_mode. The technique was trained by a regression between coincident microwave estimates from the Goddard Profiling (GPROF) algorithm (Kummerow et al, 1996) applied to SSM/I data and GOES IR (11 micron) observations. The apportionment of the rainfall into convective and stratiform components is based on the microwave technique described by Anagnostou and Kummerow (1997). The convective area from this technique was regressed against an IR structure parameter (the Convective Index) defined by Anagnostou et al (1999). Finally, rain rates are assigned to the A_mode area proportional to (253 - temperature), with different rates for the convective and stratiform components.
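
    The geometric quantities T_mode and A_mode can be computed from an IR brightness-temperature field as sketched below; the rain-area function f and the convective/stratiform split come from the regressions described above and are not reproduced here. The field is synthetic.

```python
import numpy as np
from scipy import ndimage

# Toy GOES-like IR brightness-temperature field (K); cold pixels = cloud.
rng = np.random.default_rng(6)
tb = 290 + rng.normal(0, 3, (200, 200))
yy, xx = np.mgrid[0:200, 0:200]
tb -= 60 * np.exp(-(((yy - 90) ** 2 + (xx - 110) ** 2) / (2 * 25.0**2)))

labels, n_clouds = ndimage.label(tb < 253.0)      # clouds at the 253 K threshold
for c in range(1, n_clouds + 1):
    cloud_tb = tb[labels == c]
    # Mode temperature from a 1 K histogram of the cloud's pixels.
    hist, edges = np.histogram(cloud_tb, bins=np.arange(200, 254, 1.0))
    t_mode = edges[np.argmax(hist)]
    a_mode = (cloud_tb <= t_mode).sum()           # pixels colder than the mode
    print(f"cloud {c}: T_mode={t_mode:.0f} K, A_mode={a_mode} px, "
          f"area={cloud_tb.size} px")
```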

  11. Improved estimation of radiated axions from cosmological axionic strings

    CERN Document Server

    Hiramatsu, Takashi; Sekiguchi, Toyokazu; Yamaguchi, Masahide; Yokoyama, Jun'ichi

    2010-01-01

    Cosmological evolution of an axionic string network is analyzed in terms of field-theoretic simulations in a box of 512^3 grids, the largest ever, using a new and more efficient identification scheme for global strings. The scaling parameter is found to be ξ = 0.87 ± 0.14, in agreement with previous results. The energy spectrum is calculated precisely using a pseudo power spectrum estimator, which significantly reduces the error in the mean reciprocal comoving momentum. The resultant constraint on the axion decay constant leads to f_a ≤ 3×10^11 GeV. We also discuss implications for the early Universe.

  12. On Distributed PV Hosting Capacity Estimation, Sensitivity Study, and Improvement

    Energy Technology Data Exchange (ETDEWEB)

    Ding, Fei; Mather, Barry

    2017-07-01

    This paper first studies the estimated distributed PV hosting capacities of seventeen utility distribution feeders using Monte Carlo simulation-based stochastic analysis, and then analyzes the sensitivity of PV hosting capacity to both feeder and photovoltaic system characteristics. Furthermore, an active distribution network management approach is proposed to maximize PV hosting capacity by optimally switching capacitors, adjusting voltage regulator taps, managing controllable branch switches, and controlling smart PV inverters. The approach is formulated as a mixed-integer nonlinear optimization problem, and a genetic algorithm is developed to obtain the solution. Multiple simulation cases are studied, and the effectiveness of the proposed approach in increasing PV hosting capacity is demonstrated.
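
    The Monte Carlo part can be caricatured as follows: sample random PV placements and sizes, screen each sample with a feeder voltage model, and summarize the feasible penetrations. The voltage model below is a deliberately crude stand-in; an actual study would run a full power-flow solver (e.g., OpenDSS) per sample.

```python
import numpy as np

rng = np.random.default_rng(7)
n_nodes, v_limit, r_pu = 20, 1.05, 0.002    # per-unit resistance per segment

def max_voltage(pv_kw):
    """Crude voltage-rise proxy on a uniform radial feeder (illustrative)."""
    load_kw = np.full(n_nodes, 25.0)
    net = pv_kw - load_kw                             # reverse flow raises voltage
    downstream = np.cumsum(net[::-1])[::-1] / 100.0   # rough per-unit power flow
    return (1.0 + r_pu * np.cumsum(downstream)).max()

hosting = []
for _ in range(2000):
    pv = np.zeros(n_nodes)
    sites = rng.choice(n_nodes, rng.integers(1, n_nodes), replace=False)
    pv[sites] = rng.uniform(0, 80, len(sites))        # kW per site
    if max_voltage(pv) <= v_limit:
        hosting.append(pv.sum())

print(f"estimated hosting capacity ~ {np.percentile(hosting, 95):.0f} kW "
      f"(95th pct of feasible penetrations over {len(hosting)} samples)")
```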

  13. Improving Mantel-Haenszel DIF Estimation through Bayesian Updating

    Science.gov (United States)

    Zwick, Rebecca; Ye, Lei; Isham, Steven

    2012-01-01

    This study demonstrates how the stability of Mantel-Haenszel (MH) DIF (differential item functioning) methods can be improved by integrating information across multiple test administrations using Bayesian updating (BU). The authors conducted a simulation that showed that this approach, which is based on earlier work by Zwick, Thayer, and Lewis,…
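
    The updating step itself is a conjugate normal calculation: treating each administration's MH log-odds DIF estimate as normal with its own standard error, the posterior after each administration is a precision-weighted average. A minimal sketch with invented numbers:

```python
import numpy as np

# Bayesian updating of a Mantel-Haenszel DIF statistic across test
# administrations: combine each administration's MH log-odds estimate
# with the running posterior by precision weighting.
prior_mean, prior_var = 0.0, 1.0**2        # diffuse prior on MH log-odds
admins = [(-0.40, 0.25), (-0.55, 0.30), (-0.35, 0.20)]  # (estimate, SE)

mean, var = prior_mean, prior_var
for est, se in admins:
    w_prior, w_data = 1 / var, 1 / se**2
    mean = (w_prior * mean + w_data * est) / (w_prior + w_data)
    var = 1 / (w_prior + w_data)
    print(f"after admin: DIF = {mean:.3f} +/- {np.sqrt(var):.3f}")
```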

  14. Covariance specification and estimation to improve top-down Green House Gas emission estimates

    Science.gov (United States)

    Ghosh, S.; Lopez-Coto, I.; Prasad, K.; Whetstone, J. R.

    2015-12-01

    The National Institute of Standards and Technology (NIST) operates the North-East Corridor (NEC) project and the Indianapolis Flux Experiment (INFLUX) in order to develop measurement methods to quantify sources of Greenhouse Gas (GHG) emissions, as well as their uncertainties, in urban domains using a top-down inversion method. Top-down inversion updates prior knowledge using observations in a Bayesian way. One primary consideration in a Bayesian inversion framework is the covariance structure of (1) the emission prior residuals and (2) the observation residuals (i.e., the difference between observations and model-predicted observations). These covariance matrices are respectively referred to as the prior covariance matrix and the model-data mismatch covariance matrix. It is known that the choice of these covariances can have a large effect on the estimates. The main objective of this work is to determine the impact of different covariance models on inversion estimates and their associated uncertainties in urban domains. We use a pseudo-data Bayesian inversion framework using footprints (i.e., sensitivities of tower measurements of GHGs to surface emissions) and emission priors (based on the Hestia project to quantify fossil-fuel emissions) to estimate posterior emissions using different covariance schemes. The posterior emission estimates and uncertainties are compared to the hypothetical truth. We find that, if we correctly specify spatial variability and spatio-temporal variability in the prior and model-data mismatch covariances respectively, then we can compute more accurate posterior estimates. We discuss a few covariance models that introduce space-time interacting mismatches, along with estimation of the involved parameters. We then compare several candidate prior spatial covariance models from the Matern covariance class and estimate their parameters with specified mismatches. We find that best-fitted prior covariances are not always best in recovering the truth. To achieve
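
    The role of the two covariances is visible in the standard linear-Gaussian update. The sketch below builds an exponential (Matern nu = 1/2) prior covariance over a toy emission grid, a diagonal model-data mismatch covariance, and random stand-in footprints, and computes the posterior; all dimensions and parameters are illustrative.

```python
import numpy as np

# Pseudo-data Bayesian inversion sketch:
#   x_hat = x0 + B H^T (H B H^T + R)^{-1} (y - H x0)
rng = np.random.default_rng(8)
n_cells, n_obs = 100, 30

coords = rng.uniform(0, 50, (n_cells, 2))                 # emission grid (km)
dist = np.linalg.norm(coords[:, None] - coords[None], axis=2)
B = 1.0**2 * np.exp(-dist / 10.0)      # exponential (Matern nu=1/2) prior cov

H = np.abs(rng.normal(0, 1, (n_obs, n_cells)))            # stand-in footprints
R = np.diag(np.full(n_obs, 0.5**2))                       # diagonal mismatch cov

x_true = rng.multivariate_normal(np.ones(n_cells), B)     # hypothetical truth
x0 = np.ones(n_cells)                                     # prior emissions
y = H @ x_true + rng.normal(0, 0.5, n_obs)                # pseudo-observations

S = H @ B @ H.T + R
K = B @ H.T @ np.linalg.inv(S)
x_hat = x0 + K @ (y - H @ x0)
A_post = B - K @ H @ B                                    # posterior covariance

print("prior RMSE:    ", np.sqrt(((x0 - x_true) ** 2).mean()).round(3))
print("posterior RMSE:", np.sqrt(((x_hat - x_true) ** 2).mean()).round(3))
```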

  15. Facilitated patient experience feedback can improve nursing care: a pilot study for a phase III cluster randomised controlled trial

    Science.gov (United States)

    2013-01-01

    …Control and Feedback Plus = 8.28 ± 7.2 (p = 0.02). Conclusions: This study provides preliminary evidence that facilitated patient feedback can improve patients' experiences such that a full trial is justified. These findings suggest that merely informing nurses of patient survey results in writing does not stimulate improvements, even if results are disaggregated by ward, but the addition of ward meetings had an important and significant impact. PMID:23826970

  16. Hawaii Clean Energy Initiative (HCEI) Scenario Analysis: Quantitative Estimates Used to Facilitate Working Group Discussions (2008-2010)

    Energy Technology Data Exchange (ETDEWEB)

    Braccio, R.; Finch, P.; Frazier, R.

    2012-03-01

    This report provides details on the Hawaii Clean Energy Initiative (HCEI) Scenario Analysis: to identify potential policy options and evaluate their impact on reaching the 70% HCEI goal; to present possible pathways to attain the goal based on currently available technology, with an eye to initiatives under way in Hawaii; and to provide an 'order-of-magnitude' cost estimate and a jump-start to action that would be adjusted with a better understanding of the technologies and market.

  17. Estimating Missing Features to Improve Multimedia Information Retrieval

    Energy Technology Data Exchange (ETDEWEB)

    Bagherjeiran, A; Love, N S; Kamath, C

    2006-09-28

    Retrieval in a multimedia database usually involves combining information from different modalities of data, such as text and images. However, all modalities of the data may not be available to form the query. The retrieval results from such a partial query are often less than satisfactory. In this paper, we present an approach to complete a partial query by estimating the missing features in the query. Our experiments with a database of images and their associated captions show that, with an initial text-only query, our completion method has similar performance to a full query with both image and text features. In addition, when we use relevance feedback, our approach outperforms the results obtained using a full query.
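
    One simple way to realize this completion (not necessarily the paper's estimator) is to regress the missing modality on the available one over the database and impute the query's missing features, as in the toy sketch below.

```python
import numpy as np

# Completing a partial query: the image half of a text+image query is
# missing, so estimate it from text features via ridge regression fitted
# on the database, then retrieve with the completed query (toy features).
rng = np.random.default_rng(9)
n_docs, d_text, d_img = 500, 20, 15
W_true = rng.normal(0, 1, (d_text, d_img))
text = rng.normal(0, 1, (n_docs, d_text))
img = text @ W_true + rng.normal(0, 0.3, (n_docs, d_img))  # correlated modalities

# Ridge fit: img ~ text (closed form), used to impute the missing modality.
lam = 1.0
W = np.linalg.solve(text.T @ text + lam * np.eye(d_text), text.T @ img)

q_text = rng.normal(0, 1, d_text)          # text-only query
q_img_est = q_text @ W                     # estimated missing image features

def retrieve(q_t, q_i):
    scores = text @ q_t + img @ q_i        # simple linear similarity
    return np.argsort(scores)[::-1][:10]

print("text-only top-10:", retrieve(q_t=q_text, q_i=np.zeros(d_img)))
print("completed top-10:", retrieve(q_t=q_text, q_i=q_img_est))
```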

  18. An improved model for estimating pesticide emissions for agricultural LCA

    DEFF Research Database (Denmark)

    Dijkman, Teunis Johannes; Birkved, Morten; Hauschild, Michael Zwicky

    2011-01-01

    Credible quantification of chemical emissions in the inventory phase of Life Cycle Assessment (LCA) is crucial since chemicals are the dominating cause of the human and ecotoxicity-related environmental impacts in Life Cycle Impact Assessment (LCIA). When applying LCA for the assessment of agricultural products, off-target pesticide emissions need to be quantified as accurately as possible because of the considerable toxicity effects associated with chemicals designed to have a high impact on biological organisms like, for example, insects or weed plants. PestLCI was developed to estimate the fractions of the applied pesticide that are emitted from a field to the surrounding environmental compartments: air, surface water, and ground water. However, the applicability of the model has been limited to 1 typical Danish soil type and 1 climatic profile obtained from the national Danish meteorological station

  19. Improved estimates of ocean heat content from 1960 to 2015.

    Science.gov (United States)

    Cheng, Lijing; Trenberth, Kevin E; Fasullo, John; Boyer, Tim; Abraham, John; Zhu, Jiang

    2017-03-01

    Earth's energy imbalance (EEI) drives the ongoing global warming and can best be assessed across the historical record (that is, since 1960) from ocean heat content (OHC) changes. An accurate assessment of OHC is a challenge, mainly because of insufficient and irregular data coverage. We provide updated OHC estimates with the goal of minimizing associated sampling error. We performed a subsample test, in which subsets of data during the data-rich Argo era are colocated with locations of earlier ocean observations, to quantify this error. Our results provide a new OHC estimate with an unbiased mean sampling error and with variability on decadal and multidecadal time scales (signal) that can be reliably distinguished from sampling error (noise) with signal-to-noise ratios higher than 3. The inferred integrated EEI is greater than that reported in previous assessments and is consistent with a reconstruction of the radiative imbalance at the top of atmosphere starting in 1985. We found that changes in OHC are relatively small before about 1980; since then, OHC has increased fairly steadily and, since 1990, has increasingly involved deeper layers of the ocean. In addition, OHC changes in six major oceans are reliable on decadal time scales. All ocean basins examined have experienced significant warming since 1998, with the greatest warming in the southern oceans, the tropical/subtropical Pacific Ocean, and the tropical/subtropical Atlantic Ocean. This new look at OHC and EEI changes over time provides greater confidence than previously possible, and the data sets produced are a valuable resource for further study.

  20. The improved 10th order QED expression for a_mu: new results and related estimates

    CERN Document Server

    Kataev, A L

    2006-01-01

    New estimates of the 10th order QED corrections to the muon anomalous magnetic moment are presented. The estimates include the information on definite improved 10th order QED contributions to a_mu calculated by Kinoshita and Nio. The final estimates are in good agreement with those given recently by Kinoshita.

  1. Participatory design facilitates Person Centred Nursing in service improvement with older people: a secondary directed content analysis.

    Science.gov (United States)

    Wolstenholme, Daniel; Ross, Helen; Cobb, Mark; Bowen, Simon

    2017-05-01

    To explore, using the example of a project working with older people in an outpatient setting in a large UK NHS teaching hospital, how the constructs of Person Centred Nursing are reflected in interviews with participants in a Co-design-led service improvement project. Person Centred Care and Person Centred Nursing are recognised terms in healthcare. Co-design (sometimes called participatory design) is an approach that seeks to involve all stakeholders in a creative process to deliver the best result, be this a product, technology or, in this case, a service. Co-design practice shares some of the underpinning philosophy of Person Centred Nursing and potentially has methods to aid in Person Centred Nursing implementation. The research design was a qualitative secondary directed content analysis. Seven interviews with nurses and older people who had participated in a Co-design-led improvement project in a large teaching hospital were transcribed and analysed. Two researchers analysed the transcripts for codes derived from McCormack & McCance's Person Centred Nursing Framework. The four most expressed codes were as follows: from the prerequisites, knowing self; from care processes, engagement, working with patients' beliefs and values, and shared decision-making; and from expected outcomes, involvement in care. This study describes the Co-design theory and practice that the participants responded to in the interviews and looks at how the Co-design activity facilitated elements of the Person Centred Nursing framework. This study adds to the rich literature about using emancipatory and transformational approaches to Person Centred Nursing development, and is the first study explicitly exploring the potential contribution of Co-design to this area. Methods from Co-design allow older people to contribute as equals in a practice development project, and Co-design methods can facilitate nursing staff to engage meaningfully with older participants and develop a shared

  2. The effect of construction cost estimating (CCE) software on job performance: An improvement plan

    Directory of Open Access Journals (Sweden)

    Mohd Mukelas M.F.

    2014-01-01

    Full Text Available This paper presents a comprehensive statistical study of the effect of construction cost estimating software features on estimating job performance. The objectives of this study are to identify cost estimating software features, analyze the significant relation of those features to job performance, explore the problems faced during implementation, and propose a plan to improve cost estimating software usage among contractors in Malaysia. The study statistically reveals four features of cost estimating software that significantly affect cost estimating job performance. These features were refined through interviews with a focus group of respondents to observe the actual problems encountered during implementation. Finally, the proposed improvement plan was validated by the focus group of respondents to enhance cost estimating software implementation among contractors in Malaysia.

  3. Improved Estimation of Population Mean Using Median and Coefficient of Variation of Auxiliary Variable

    Directory of Open Access Journals (Sweden)

    Subhash Kumar Yadav

    2014-01-01

    Full Text Available This manuscript deals with the estimation of the population mean of the study variable using an improved ratio-type estimator that utilizes the known median and coefficient of variation of an auxiliary variable. The expressions for the bias and mean square error (MSE) of the proposed estimator are obtained up to the first order of approximation. The optimum estimator is obtained for the optimum value of the estimator's constant, and its optimum properties are studied. It is shown that the proposed estimator is better than the existing ratio estimators in the literature. To justify the improvement of the proposed estimator over others, an empirical study is also carried out.
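
    The empirical comparison can be reproduced in outline by simulation. The sketch below uses one common template for median/CV-adjusted ratio estimators, t = ybar (Xbar Cx + Md)/(xbar Cx + Md); the paper's exact constants may differ, and the population is synthetic.

```python
import numpy as np

# Compare the classical ratio estimator of a population mean with a
# median/CV-adjusted ratio-type estimator under SRSWOR from a synthetic
# positively correlated population.
rng = np.random.default_rng(10)
N, n, reps = 5000, 50, 20000
x = rng.gamma(4.0, 2.0, N)
y = 5 + 1.5 * x + rng.normal(0, 2.0, N)

Xbar, Md, Cx, Ybar = x.mean(), np.median(x), x.std() / x.mean(), y.mean()

est_ratio, est_prop = [], []
for _ in range(reps):
    idx = rng.choice(N, n, replace=False)
    xb, yb = x[idx].mean(), y[idx].mean()
    est_ratio.append(yb * Xbar / xb)                         # classical ratio
    est_prop.append(yb * (Xbar * Cx + Md) / (xb * Cx + Md))  # adjusted form

for name, e in [("classical", est_ratio), ("median/CV", est_prop)]:
    e = np.array(e)
    print(f"{name:10s} bias={e.mean() - Ybar:+.4f}  "
          f"MSE={((e - Ybar) ** 2).mean():.4f}")
```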

  4. Improved Estimates of Moments and Winds from Radar Wind Profiler

    Energy Technology Data Exchange (ETDEWEB)

    Helmus, Jonathan [Argonne National Lab. (ANL), Argonne, IL (United States); Ghate, Virendra P. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-01-02

    The U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility (ACRF) operates nine radar wind profilers (RWPs) across its sites. These RWPs operate at 915 MHz or 1290 MHz and report the first three moments of the Doppler spectrum. The operational settings of the RWPs were modified in summer 2015 to use a single pulse-length setting for the wind mode and two pulse-length settings for the precipitation mode. The moments data collected during the wind mode are used to retrieve horizontal winds. The vendor-reported winds are available at variable time resolutions (10 min, 60 min, etc.) and contain a significant amount of contamination due to noise and clutter. In this data product we have recalculated the moments and the winds from the raw radar Doppler spectra and have made efforts to mitigate the contamination due to instrument noise in the wind estimates. Additionally, the moments and wind data have been reported in a harmonized layout identical for all locations and sites.

  5. Improving Multiyear Ice Concentration Estimates with Reanalysis Air Temperatures

    Science.gov (United States)

    Ye, Y.; Shokr, M.; Heygster, G.; Spreen, G.

    2015-12-01

    Multiyear ice (MYI) characteristics can be retrieved from passive or active microwave remote sensing observations. One of the algorithms that combines both types of observations to identify partial concentrations of ice types (including MYI) is Environment Canada's Ice Concentration Extractor (ECICE). However, cycles of warm and cold air temperatures trigger wet-refreeze cycles of the snow cover on the MYI surface. Under wet snow conditions, anomalous brightness temperatures and backscatter, similar to those of first-year ice (FYI), are observed. This leads to misidentification of MYI as FYI and hence a sudden decrease in the estimated MYI concentration. The purpose of this study is to introduce a correction scheme that restores the MYI concentration under such conditions. The correction is based on air temperature records and utilizes the fact that a warm spell in autumn lasts only a short period of time (a few days). It is applied to MYI concentration results from ECICE using an input of combined QuikSCAT and AMSR-E data acquired over the Arctic region in a series of autumn seasons from 2003 to 2008. The correction works well, replacing anomalous MYI concentrations with interpolated ones. For September of the six years, it introduces over 0.1×10⁶ km² of MYI area, except in 2005. Because warm air spells are regional in effect, the correction could be important in operational applications where small- and meso-scale ice concentrations are crucial.

  6. An improved method for nonlinear parameter estimation: a case study of the Rössler model

    Science.gov (United States)

    He, Wen-Ping; Wang, Liu; Jiang, Yun-Di; Wan, Shi-Quan

    2016-08-01

    Parameter estimation is an important research topic in nonlinear dynamics. Based on the evolutionary algorithm (EA), Wang et al. (2014) presented a new scheme for nonlinear parameter estimation, and numerical tests indicate that its estimation precision is satisfactory. However, the convergence rate of the EA is relatively slow when multiple unknown parameters in a multidimensional dynamical system are estimated simultaneously. To solve this problem, an improved method for parameter estimation of nonlinear dynamical equations is provided in the present paper. The main idea of the improved scheme is to use the known time series of all components of the dynamical equations to estimate the parameters of a single component at a time, instead of estimating all of the parameters in all of the components simultaneously; all of the parameters are thus estimated stage by stage. The performance of the improved method was tested on a classic chaotic system, the Rössler model. The numerical tests show that the amended parameter estimation scheme can greatly improve the searching efficiency and that there is a significant increase in the convergence rate of the EA, particularly for multiparameter estimation in multidimensional dynamical equations. Moreover, the results indicate that the accuracy of parameter estimation and the CPU time consumed by the presented method have no obvious dependence on the sample size.
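
    As a rough illustration of this stage-by-stage idea (a sketch, not the authors' exact scheme), the code below estimates the Rössler parameters one equation at a time with an evolutionary optimizer (SciPy's differential evolution), fitting derivative residuals computed from the full set of observed component series; the bounds, sampling step, and cost function are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

# Rossler system: dx/dt = -y - z, dy/dt = x + a*y, dz/dt = b + z*(x - c).
def rossler(t, state, a, b, c):
    x, y, z = state
    return [-y - z, x + a * y, b + z * (x - c)]

# Generate synthetic "observed" series with the textbook parameters.
true_params = (0.2, 0.2, 5.7)
t = np.linspace(0, 50, 5001)
sol = solve_ivp(rossler, (0, 50), [1.0, 1.0, 1.0], args=true_params,
                t_eval=t, rtol=1e-9, atol=1e-9)
x, y, z = sol.y
dy = np.gradient(y, t)   # numerical derivative of the y-component
dz = np.gradient(z, t)   # numerical derivative of the z-component

# Stage 1: only the y-equation contains 'a'; fit it alone.
res_a = differential_evolution(lambda p: np.mean((dy - (x + p[0] * y)) ** 2),
                               bounds=[(-1.0, 1.0)], seed=0)

# Stage 2: the z-equation contains 'b' and 'c'; fit them next,
# reusing all observed component series.
res_bc = differential_evolution(
    lambda p: np.mean((dz - (p[0] + z * (x - p[1]))) ** 2),
    bounds=[(0.0, 2.0), (0.0, 10.0)], seed=0)

print(res_a.x, res_bc.x)   # should approach a=0.2, b=0.2, c=5.7
```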

  7. An improved iron loss estimation for permanent magnet brushless machines

    CERN Document Server

    Fang, D

    1999-01-01

    This paper presents an improved approach for predicting iron losses in permanent magnet brushless machines. The new approach is based on the fundamental concept that eddy current losses are proportional to the square of the time rate of change of flux density. Expressions are derived for predicting hysteresis and eddy current losses in the stator teeth and yoke. The so-called anomalous or excess losses, caused by induced eddy current concentration around moving magnetic domain walls and neglected in conventional core loss calculations, are also included in the proposed approach. In addition, the model can account for stator skewing, if present. The core losses obtained from the proposed approach are compared with those measured on an existing PM motor at several operating speeds, showing very good agreement. (14 refs).
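
    For orientation, the classical three-term loss-separation decomposition consistent with this description (a textbook form, not the paper's exact tooth and yoke expressions) writes the core loss in terms of the flux density waveform B(t) over a period T:

```latex
% Classical loss separation (textbook form; the paper derives its own
% stator-tooth and yoke expressions from the same ingredients):
P_{Fe} \;=\; \underbrace{k_h\, f\, \hat{B}^{\alpha}}_{\text{hysteresis}}
\;+\; \underbrace{\frac{k_e}{T}\int_0^T \left(\frac{dB}{dt}\right)^{2} dt}_{\text{eddy current}}
\;+\; \underbrace{\frac{k_{exc}}{T}\int_0^T \left|\frac{dB}{dt}\right|^{3/2} dt}_{\text{excess (anomalous)}}
```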

  8. Solving Richards Equation for snow improves snowpack meltwater runoff estimations

    Directory of Open Access Journals (Sweden)

    N. Wever

    2013-06-01

    The runoff from the snow cover during spring snowmelt or rain-on-snow events is an important factor in the hydrological cycle. In this study, water transport schemes for a one-dimensional physically based snowpack model are compared to 14 years of lysimeter measurements at a high alpine site. The schemes include a simple bucket-type approach, an approximation of Richards Equation (RE), and the full RE. The results show that daily sums of runoff are strongly related to a positive energy balance of the snow cover; therefore, all water transport schemes show very similar performance in terms of Nash-Sutcliffe efficiency (NSE) coefficients (around 0.59) and r² values (around 0.77). The timing of the arrival of meltwater at the bottom of the snowpack in spring differed between the schemes: in the bucket-type and approximated-RE approaches in particular, meltwater release is slower than in the measurements. Overall, solving RE for the snow cover yields the best agreement between modelled and measured runoff. On sub-daily time scales, the water transport schemes behave very differently. Here, too, solving RE provides the best agreement between modelled and measured runoff in terms of the NSE coefficient (0.48), whereas the other water transport schemes lose any predictive power, mainly due to poor timing of meltwater release during the day. Accordingly, solving RE for the snow cover improves several aspects of modelling snow cover runoff, at an additional computational cost of about a factor of 1.5.
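
    For reference, the one-dimensional Richards Equation that such liquid-water transport schemes solve for the snowpack (written here in the standard mixed form with z positive upward; the paper's exact discretization is not given in the abstract) is:

```latex
% 1-D Richards Equation (mixed form) for vertical water transport,
% with volumetric liquid water content \theta, pressure head h,
% hydraulic conductivity K(\theta), and depth coordinate z (positive up):
\frac{\partial \theta}{\partial t}
  \;=\; \frac{\partial}{\partial z}\!\left[ K(\theta)\left( \frac{\partial h}{\partial z} + 1 \right) \right]
```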

  9. Novel angle estimation for bistatic MIMO radar using an improved MUSIC

    Science.gov (United States)

    Li, Jianfeng; Zhang, Xiaofei; Chen, Han

    2014-09-01

    In this article, we study the problem of angle estimation for bistatic multiple-input multiple-output (MIMO) radar and propose an improved multiple signal classification (MUSIC) algorithm for joint direction of departure (DOD) and direction of arrival (DOA) estimation. The proposed algorithm obtains initial angle estimates from the signal subspace and uses local one-dimensional peak searches to achieve joint estimation of DOD and DOA. Its angle estimation performance is better than that of the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm, and almost the same as that of two-dimensional MUSIC. Furthermore, the proposed algorithm is suitable for irregular array geometries, obtains automatically paired DOD and DOA estimates, and avoids two-dimensional peak searching. The simulation results verify the effectiveness and improvement of the algorithm.

  10. Improved DOA Estimation Algorithm with Sensor Array Perturbations for CDMA System

    Institute of Scientific and Technical Information of China (English)

    杨维; 程时昕

    2003-01-01

    An improved direction of arrival (DOA) estimation algorithm with sensor gain and phase uncertainties is presented for a synchronous code division multiple access (CDMA) system with a decorrelator. Through decorrelating processing, the DOAs of the desired users can be estimated independently and all other resolved signal interferences are eliminated. Emphasis is placed on applications in which sensor gain and phase are perturbed, as often happens in practice. It is shown that improved DOA estimation can be achieved for the decoupled signals by gain and phase pre-estimation procedures.

  11. Group-contribution+ (GC+) based estimation of properties of pure components: Improved property estimation and uncertainty analysis

    DEFF Research Database (Denmark)

    Hukkerikar, Amol; Sarup, Bent; Ten Kate, Antoon

    2012-01-01

    The aim of this work is to present revised and improved model parameters for group-contribution+ (GC+) models (combined group-contribution (GC) method and atom connectivity index (CI) method) employed for the estimation of pure component properties, together with covariance matrices to quantify the uncertainties (e.g. prediction errors in terms of 95% confidence intervals) in the estimated property values. This feature allows one to evaluate the effects of these uncertainties on product-process design, simulation and optimization calculations, contributing to better-informed and more reliable engineering solutions.

  12. Shrinkage estimation of the genomic relationship matrix can improve genomic estimated breeding values in the training set.

    Science.gov (United States)

    Müller, Dominik; Technow, Frank; Melchinger, Albrecht E

    2015-04-01

    We evaluated several methods for computing shrinkage estimates of the genomic relationship matrix and demonstrated their potential to enhance the reliability of genomic estimated breeding values (GEBVs) of training set individuals. In genomic prediction in plant breeding, the training set constitutes a large fraction of the total number of genotypes assayed and is itself subject to selection. The objective of our study was to investigate whether GEBVs of individuals in the training set can be enhanced by shrinkage estimation of the genomic relationship matrix. We simulated two different population types: a diversity panel of unrelated individuals and a biparental family of doubled haploid lines. For different training set sizes (50, 100, 200), numbers of markers (50, 100, 200, 500, 2,500), and heritabilities (0.25, 0.5, 0.75), shrinkage coefficients were computed by four different methods. Two of these methods are novel and based on measures of linkage disequilibrium (LD); the other two were previously described in the literature, one of which we extended. Our results showed that shrinkage estimation of the genomic relationship matrix can significantly improve the reliability of the GEBVs of training set individuals, especially for small numbers of markers. We demonstrate that the number of markers is the primary determinant of the optimum shrinkage coefficient maximizing reliability, and we recommend methods eligible for routine use in practical applications.
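
    As a hedged sketch of the general idea (shrinking a marker-based relationship matrix toward a structured target; the paper's four coefficient-estimation methods are not reproduced here), one simple linear shrinkage looks like this:

```python
import numpy as np

def genomic_relationship(M):
    """VanRaden-style genomic relationship matrix from a 0/1/2 marker
    matrix M (individuals x markers)."""
    p = M.mean(axis=0) / 2.0                      # allele frequencies
    Z = M - 2.0 * p                               # centre each marker by 2p
    return Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

def shrink(G, delta):
    """Linear shrinkage of G toward a simple target (here the identity;
    delta in [0, 1] would come from one of the methods compared in the
    paper, e.g. an LD-based estimate)."""
    target = np.eye(G.shape[0])
    return (1.0 - delta) * G + delta * target

rng = np.random.default_rng(1)
M = rng.integers(0, 3, size=(100, 50))            # 100 lines, 50 markers
G = genomic_relationship(M)
G_shrunk = shrink(G, delta=0.2)                   # illustrative coefficient
```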

  13. F-35 Sustainment: Need for Affordable Strategy, Greater Attention to Risks, and Improved Cost Estimates

    Science.gov (United States)

    2014-09-01

    F-35 Sustainment: Need for Affordable Strategy, Greater Attention to Risks, and Improved Cost Estimates. Report to the House of Representatives, September 2014 (only cover-page metadata survives in this record).

  14. Barriers and facilitators to evidence based care of type 2 diabetes patients : experiences of general practitioners participating to a quality improvement program

    NARCIS (Netherlands)

    Goderis, G.; Borgermans, L.D.A.; Mathieu, C.; Broeke, C. Van Den; Hannes, K.; Heyrman, J.; Grol, R.P.T.M.

    2009-01-01

    Objective: To evaluate the barriers and facilitators to high-quality diabetes care as experienced by general practitioners (GPs) who participated in an 18-month quality improvement program (QIP). This QIP was implemented to promote compliance with international guidelines. Methods: Twenty…

  15. Initial position estimation method for permanent magnet synchronous motor based on improved pulse voltage injection

    DEFF Research Database (Denmark)

    Wang, Z.; Lu, K.; Ye, Y.

    2011-01-01

    Owing to the saliency of the permanent magnet synchronous motor (PMSM), information on the rotor position is implied in the behaviour of the stator inductances due to the magnetic saturation effect. This research focused on initial rotor position estimation of the PMSM by injecting modulated pulse voltage vectors. The relationship between the inductance variations and voltage vector positions was studied, as was the effect of inductance variation on estimation accuracy. An improved five-pulse injection method is proposed to improve the estimation accuracy by choosing optimized voltage vectors…

  16. Improving Visual Acuity of Myopes through Operant Training: The Evaluation of Psychological and Physiological Mechanisms Facilitating Acuity Enhancement

    Science.gov (United States)

    1988-12-01

    Only fragments of this record survive. They describe a covert servo-controlled tracking optometer (Cornsweet & Crane, 1970) that made near-instantaneous feedback of monocular accommodative state possible, converting infrared optometer measures of accommodation into auditory tones that reflected the instantaneous refractive state of the eye. The fragments also note that the performance change might as well have been facilitated by blur interpretation, and mention the nonobtrusive incorporation of a covert tracking optometer.

  17. Improvements in Limb Kinetic Apraxia by Repetition of a Newly Designed Facilitation Exercise in a Patient with Corticobasal Degeneration

    Science.gov (United States)

    Kawahira, Kazumi; Noma, Tomokazu; Iiyama, Junichi; Etoh, Seiji; Ogata, Atsuko; Shimodozono, Megumi

    2009-01-01

    Corticobasal degeneration is a progressive neurological disorder characterized by a combination of parkinsonism and cortical dysfunction such as limb kinetic apraxia, alien limb phenomenon, and dementia. To study the effect of repetitive facilitation exercise (RFE) in a patient with corticobasal degeneration, we used a newly designed facilitation…

  18. Improved dichotomous search frequency offset estimator for burst-mode continuous phase modulation

    Institute of Scientific and Technical Information of China (English)

    翟文超; 李赞; 司江勃; 柏均

    2015-01-01

    A data-aided technique for carrier frequency offset estimation with continuous phase modulation (CPM) in burst-mode transmission is presented. The proposed technique first exploits a special pilot (training) sequence to form a sinusoidal waveform. Then, an improved dichotomous search frequency offset estimator is introduced to determine the frequency offset from the sinusoid. Theoretical analysis and simulation results indicate that the estimator is noteworthy in the following respects. First, it can operate independently of timing recovery. Second, it has a relatively low outlier threshold, i.e., the minimum signal-to-noise ratio (SNR) required to guarantee estimation accuracy. Finally, and most importantly, the estimator has reduced complexity compared to existing dichotomous search methods: it eliminates the need for the fast Fourier transform (FFT) and modulation removal, and exhibits a faster convergence rate without accuracy degradation.
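
    A minimal sketch of the underlying idea (a plain dichotomous search over the periodogram of a pilot-derived sinusoid; the paper's FFT-free refinements and pilot design are not reproduced, and the grid and bounds are assumptions):

```python
import numpy as np

def periodogram_value(s, f):
    """|DTFT|^2 of samples s at normalized frequency f (cycles/sample)."""
    n = np.arange(len(s))
    return np.abs(np.sum(s * np.exp(-2j * np.pi * f * n))) ** 2

def dichotomous_refine(s, f_lo, f_hi, iters=30):
    """Shrink [f_lo, f_hi] around the periodogram peak by comparing the
    midpoints of the two half-intervals (binary-search style)."""
    for _ in range(iters):
        mid = 0.5 * (f_lo + f_hi)
        if periodogram_value(s, 0.5 * (f_lo + mid)) > \
           periodogram_value(s, 0.5 * (mid + f_hi)):
            f_hi = mid
        else:
            f_lo = mid
    return 0.5 * (f_lo + f_hi)

def estimate_offset(s, f_max=0.05, grid=64, iters=25):
    """Coarse grid to bracket the main lobe, then dichotomous refinement."""
    freqs = np.linspace(0.0, f_max, grid)
    k = int(np.argmax([periodogram_value(s, f) for f in freqs]))
    step = f_max / (grid - 1)
    return dichotomous_refine(s, max(0.0, freqs[k] - step),
                              min(f_max, freqs[k] + step), iters)

# Synthetic pilot tone at normalized offset 0.0123 in noise.
rng = np.random.default_rng(0)
n = np.arange(256)
s = np.exp(2j * np.pi * 0.0123 * n) \
    + 0.1 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))
print(estimate_offset(s))   # close to 0.0123
```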

  19. Improving estimates of the prevalence of Female Genital Mutilation/Cutting among migrants in Western countries

    Directory of Open Access Journals (Sweden)

    Livia Elisa Ortensi

    2015-02-01

    Background: Female Genital Mutilation/Cutting (FGM/C) is an emerging topic in countries of immigration as a consequence of the increasing proportion of African women in overseas communities. Objective: While the prevalence of FGM/C is routinely measured in practicing countries, its prevalence in Western countries is substantially unknown, as no standardized estimation methods yet exist for countries of immigration. The aim of this paper is to present an improved method for indirect estimation of the prevalence of FGM/C among first-generation migrants, based on a migrant selection hypothesis. A criterion to assess the reliability of indirect estimates is also provided. Methods: The method is based on data from Demographic and Health Surveys (DHS) and Multiple Indicator Cluster Surveys (MICS). The migrant selection hypothesis is used to correct national prevalence estimates and obtain an improved estimate of prevalence among overseas communities. Results: Applying the selection hypothesis modifies national estimates, usually predicting a lower occurrence of FGM/C among immigrants than in their respective countries of origin. A comparison of direct and indirect estimates confirms that the method correctly predicts the direction of the variation in expected prevalence and satisfactorily approximates direct estimates. Conclusions: Given its wide applicability, this method is a useful instrument for estimating FGM/C occurrence among first-generation immigrants and for supporting policy in countries where information from ad hoc surveys is unavailable.
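
    The abstract does not give the estimator explicitly; in the simplest hedged reading, an indirect estimate of this kind aggregates origin-country prevalences over the migrant stock with a selection-correction factor (the notation below is an assumption, not the paper's):

```latex
% Indirect prevalence among first-generation migrants from origin
% countries c, with migrant stocks m_c, origin-country prevalences p_c,
% and a selection-correction factor s_c \le 1 (illustrative notation):
\hat{P} \;=\; \frac{\sum_{c} m_c\, s_c\, p_c}{\sum_{c} m_c}
```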

  20. Improving background estimation in events with multiple jets and at least one charged lepton

    CERN Document Server

    Ligtenberg, Cornelis

    2016-01-01

    The ATLAS experiment searches for SUSY signal events with many jets and at least one charged lepton. The backgrounds for these signals are not well described by Monte Carlo generators. For the dominant $t\bar{t}$ background, the performance of several generators is compared. For the $W+$jets background, the charge asymmetry method is investigated as a tool for background estimation. Opportunities to improve the estimates of both backgrounds are indicated.

  1. Improving the Estimation of Uncalibrated Fractional Phase Offsets for PPP Ambiguity Resolution

    OpenAIRE

    Xingxing Li; Zhang, X

    2012-01-01

    Integer ambiguity resolution in Precise Point Positioning (PPP) can shorten convergence time and significantly improve accuracy. Uncalibrated Fractional Offsets (UFOs) originating in the satellites destroy the integer nature of the carrier phase ambiguities observed at a single station. Several methods have been developed to estimate UFO information from a reference network for PPP ambiguity resolution. In this paper, we present a new approach for estimating Zero-Differenced (ZD) UFOs via float Z…

  2. A Parameter Estimation Method for Nonlinear Systems Based on Improved Boundary Chicken Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Shaolong Chen

    2016-01-01

    Parameter estimation is an important problem in nonlinear system modeling and control. By constructing an appropriate fitness function, parameter estimation of a system can be converted into a multidimensional parameter optimization problem. As a novel swarm intelligence algorithm, chicken swarm optimization (CSO) has attracted much attention owing to its good global convergence and robustness. In this paper, a method based on improved boundary chicken swarm optimization (IBCSO) is proposed for parameter estimation of nonlinear systems, demonstrated and tested on the Lorenz system and a coupled motor system. Furthermore, we analyze the influence of the time series on the estimation accuracy. Computer simulation results show that the method is feasible and performs well for parameter estimation of nonlinear systems.

  3. Gene expression during blow fly development: improving the precision of age estimates in forensic entomology.

    Science.gov (United States)

    Tarone, Aaron M; Foran, David R

    2011-01-01

    Forensic entomologists use size and developmental stage to estimate blow fly age, and from those, a postmortem interval. Since such estimates are generally accurate but often lack precision, particularly in the older developmental stages, alternative aging methods would be advantageous. Presented here is a means of incorporating developmentally regulated gene expression levels into traditional stage and size data, with a goal of more precisely estimating developmental age of immature Lucilia sericata. Generalized additive models of development showed improved statistical support compared to models that did not include gene expression data, resulting in an increase in estimate precision, especially for postfeeding third instars and pupae. The models were then used to make blind estimates of development for 86 immature L. sericata raised on rat carcasses. Overall, inclusion of gene expression data resulted in increased precision in aging blow flies.

  4. Inference about the tail of a distribution. Improvement on the Hill estimator

    CERN Document Server

    Nuyts, Jean

    2010-01-01

    The Hill estimator is often used to infer the power behavior in the tails of experimental distribution functions. This estimator is known to produce bad results in certain situations, which has led to the so-called Hill horror plots. In this brief note, we propose an improved estimator which is simple and coherent and often provides an efficient remedy in the bad situations, especially when the distribution decreases slowly, when the data are restricted by external cuts to lie within a finite domain, or even when the distribution is increasing.
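
    For context, the classical Hill estimator of the tail index, computed from the k largest order statistics X_{(1)} ≤ … ≤ X_{(n)}, is as follows (the note's improved variant is not reproduced in the abstract):

```latex
% Classical Hill estimator of the tail index \gamma (so the tail of the
% survival function behaves like x^{-1/\gamma}), using the k upper
% order statistics:
\hat{\gamma}_{k,n} \;=\; \frac{1}{k} \sum_{i=1}^{k} \ln X_{(n-i+1)} \;-\; \ln X_{(n-k)}
```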

  5. Adaptive OFDM Radar Waveform Design for Improved Micro-Doppler Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Satyabrata [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Center for Engineering Science Advanced Research, Computer Science and Mathematics Division

    2014-07-01

    Here we analyze the performance of a wideband orthogonal frequency division multiplexing (OFDM) signal in estimating the micro-Doppler frequency of a rotating target having multiple scattering centers. The use of a frequency-diverse OFDM signal enables us to independently analyze the micro-Doppler characteristics with respect to a set of orthogonal subcarrier frequencies. We characterize the accuracy of micro-Doppler frequency estimation by computing the Cramer-Rao bound (CRB) on the angular-velocity estimate of the target. Additionally, to improve the accuracy of the estimation procedure, we formulate and solve an optimization problem that minimizes the CRB on the angular-velocity estimate with respect to the OFDM spectral coefficients. We present several numerical examples to demonstrate the CRB variations with respect to the signal-to-noise ratio, the number of temporal samples, and the number of OFDM subcarriers. We also numerically analyze the improvement in estimation accuracy due to the adaptive waveform design. A grid-based maximum likelihood estimation technique is applied to evaluate the corresponding mean-squared error performance.

  6. Simple and Efficient Algorithm for Improving the MDL Estimator of the Number of Sources

    Directory of Open Access Journals (Sweden)

    Dayan A. Guimarães

    2014-10-01

    We propose a simple algorithm for improving the MDL (minimum description length) estimator of the number of source signals impinging on multiple sensors. The algorithm is based on the norms of vectors whose elements are the normalized, nonlinearly scaled eigenvalues of the received signal covariance matrix and the corresponding normalized indexes. These norms are used to discriminate the largest eigenvalues from the remaining ones, thus allowing the number of sources to be estimated. The MDL estimate is used as the input to the algorithm. Numerical results show that the norm-based improved MDL (iMDL) algorithm can achieve better performance than the MDL estimator alone. Comparisons are also made with the well-known AIC (Akaike information criterion) estimator and with a recently proposed estimator based on random matrix theory (RMT). It is shown that our algorithm can also outperform the AIC and RMT-based estimators in some situations.
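
    As a reference point, the baseline MDL estimator that the algorithm takes as input can be computed directly from the sample eigenvalues (the standard Wax-Kailath form; the proposed norm-based post-processing is not reproduced here, and the synthetic scenario below is an assumption):

```python
import numpy as np

def mdl_num_sources(eigvals, N):
    """Wax-Kailath MDL estimate of the number of sources from the p
    eigenvalues (sorted descending) of the sample covariance matrix,
    given N snapshots."""
    p = len(eigvals)
    mdl = np.empty(p)
    for k in range(p):
        tail = eigvals[k:]                     # the p-k smallest eigenvalues
        geo = np.exp(np.mean(np.log(tail)))    # geometric mean
        arith = np.mean(tail)                  # arithmetic mean
        mdl[k] = -N * (p - k) * np.log(geo / arith) \
                 + 0.5 * k * (2 * p - k) * np.log(N)
    return int(np.argmin(mdl))

# Two sources on a 6-sensor array, 200 snapshots (synthetic example).
rng = np.random.default_rng(0)
p, N, d = 6, 200, 2
A = rng.standard_normal((p, d)) + 1j * rng.standard_normal((p, d))
S = (rng.standard_normal((d, N)) + 1j * rng.standard_normal((d, N))) / np.sqrt(2)
X = A @ S + 0.1 * (rng.standard_normal((p, N)) + 1j * rng.standard_normal((p, N)))
eig = np.linalg.eigvalsh(X @ X.conj().T / N)[::-1]
print(mdl_num_sources(eig, N))   # expected: 2
```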

  7. Estimation of root zone storage capacity at the catchment scale using improved Mass Curve Technique

    Science.gov (United States)

    Zhao, Jie; Xu, Zongxue; Singh, Vijay P.

    2016-09-01

    The root zone storage capacity (Sr) greatly influences runoff generation, soil water movement, and vegetation growth and is hence an important variable for ecological and hydrological modelling. However, due to the great heterogeneity in soil texture and structure, there is presently no effective approach to monitor or estimate Sr at the catchment scale. To fill this gap, this study improves the Mass Curve Technique (MCT) by incorporating a snowmelt module for the estimation of Sr at the catchment scale in different climatic regions. The "range of perturbation" method was also used to generate scenarios for determining the sensitivity of the improved MCT-derived Sr to its influencing factors, after evaluating the plausibility of the Sr estimates. The results show that: (i) Sr estimates of different catchments varied greatly, from ~10 mm to ~200 mm, with changes in climatic conditions and underlying surface characteristics; (ii) the improved MCT is a simple but powerful tool for Sr estimation in different climatic regions of China, and incorporating more catchments into Sr comparisons can further improve our knowledge of the variability of Sr; (iii) variation in Sr values is an integrated consequence of variations in rainfall, snowmelt water, and evapotranspiration, with Sr most sensitive to variations in ecosystem evapotranspiration. Moreover, Sr values with a longer return period are more stable than those with a shorter return period when affected by fluctuations in the influencing factors.
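
    A minimal sketch of the mass-curve idea, i.e. the storage needed to bridge the largest cumulative gap between water demand and supply (the paper's snowmelt module and return-period analysis are omitted, and the variable names and toy series are assumptions):

```python
import numpy as np

def root_zone_storage_capacity(supply, demand):
    """Mass Curve Technique in its simplest form: Sr is the largest
    cumulative deficit of supply (rain + snowmelt, mm/step) against
    demand (evapotranspiration, mm/step)."""
    deficit = 0.0
    sr = 0.0
    for p, et in zip(supply, demand):
        deficit = max(0.0, deficit + et - p)  # storage being drawn down
        sr = max(sr, deficit)                 # deepest drawdown so far
    return sr

# Toy daily series: a dry summer spell embedded in a wet year.
rng = np.random.default_rng(0)
rain = rng.gamma(0.8, 4.0, size=365)
rain[150:230] *= 0.1                          # drought period
et = np.full(365, 2.5)
print(root_zone_storage_capacity(rain, et))   # Sr in mm
```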

  8. The Role of Satellite Imagery to Improve Pastureland Estimates in South America

    Science.gov (United States)

    Graesser, J.

    2015-12-01

    Agriculture has changed substantially across the globe over the past half century. While much work has been done to improve spatial-temporal estimates of agricultural change, we still know more about the extent of row-crop agriculture than about livestock-grazed land. The gap between cropland and pastureland estimates exists largely because it is challenging to distinguish natural from grazed grasslands from a remote sensing perspective. However, the impasse in pastureland estimation is set to break, with an increasing number of spaceborne sensors and freely available satellite data. The Landsat satellite archive in particular provides researchers with immense amounts of data to improve pastureland information. Here we focus on South America, where pastureland expansion has been scrutinized for the past few decades. We explore the challenges of estimating pastureland using temporal Landsat imagery and focus on key agricultural countries, regions, and ecosystems. We examine the suggested shift of pastureland from the Argentine Pampas to northern Argentina, and the mixing of small-scale and large-scale ranching in eastern Paraguay and how it could impact the Chaco forest to the west. Further, the Beni Savannahs of northern Bolivia and the Colombian Llanos, both grassland and savannah regions historically used for livestock grazing, have been hinted at as future areas for cropland expansion. There are certainly environmental concerns with pastureland expansion into forests; but what are the environmental implications when well-managed pasture systems are converted to intensive soybean or palm oil plantations? Tropical grazed grasslands are important habitats for biodiversity, and pasturelands can mitigate soil erosion when well managed. Thus, we must improve estimates of grazed land before we can make informed policy and conservation decisions. This talk presents insights into pastureland estimates in South America and discusses the feasibility of improving current…

  9. Subspace Leakage Analysis and Improved DOA Estimation With Small Sample Size

    Science.gov (United States)

    Shaghaghi, Mahdi; Vorobyov, Sergiy A.

    2015-06-01

    Classical methods of DOA estimation, such as the MUSIC algorithm, are based on estimating the signal and noise subspaces from the sample covariance matrix. For a small number of samples, such methods are subject to performance breakdown, as the sample covariance matrix can deviate substantially from the true covariance matrix. In this paper, the problem of DOA estimation performance breakdown is investigated. We consider the structure of the sample covariance matrix and the dynamics of the root-MUSIC algorithm. The performance breakdown in the threshold region is associated with subspace leakage, where some portion of the true signal subspace resides in the estimated noise subspace. The subspace leakage is derived theoretically, and we propose a two-step method that improves performance by modifying the sample covariance matrix so that the amount of subspace leakage is reduced. Furthermore, we describe a phenomenon, termed root-swap, that occurs in the root-MUSIC algorithm in the small-sample regime and degrades the performance of DOA estimation, and we propose a new method to alleviate this problem. Numerical examples and simulation results are given for uncorrelated and correlated sources to illustrate the improvement achieved by the proposed methods. Moreover, the proposed algorithms are combined with the pseudo-noise resampling method to further improve performance.

  10. Improved pulse transit time estimation by system identification analysis of proximal and distal arterial waveforms.

    Science.gov (United States)

    Xu, Da; Ryan, Kathy L; Rickards, Caroline A; Zhang, Guanqun; Convertino, Victor A; Mukkamala, Ramakrishna

    2011-10-01

    We investigated a system identification approach for potentially improved estimation of pulse transit time (PTT), a popular arterial stiffness marker. In this approach, proximal and distal arterial waveforms are measured and regarded, respectively, as the input and output of a system. Next, the system impulse response is identified from all samples of the measured input and output. Finally, the time delay of the impulse response is detected as the PTT estimate. Unlike conventional foot-to-foot detection techniques, this approach is designed to provide an artifact-robust estimate of the true PTT in the absence of wave reflection, and it is applicable to arbitrary types of arterial waveforms. We applied a parametric system identification technique to noninvasive impedance cardiography (ICG) and peripheral arterial blood pressure waveforms from 15 humans subjected to lower-body negative pressure. We assessed the technique through the correlation coefficient (r) between its 1/PTT estimates and measured diastolic pressure (DP) per subject, and the root mean squared error (RMSE) between the DP predicted from these estimates and the measured DP. The technique achieved average r and RMSE values of 0.81 ± 0.16 and 4.3 ± 1.3 mmHg. For comparison, the corresponding r value for a conventional technique was 0.59 ± 0.37, suggesting that the system identification approach can indeed improve PTT estimation.
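
    A hedged sketch of this identification idea, using a least-squares FIR fit between a proximal (input) and distal (output) waveform and reading PTT off the impulse-response delay (the paper's parametric model order, delay detector, and the synthetic waveforms below are assumptions):

```python
import numpy as np

def fir_impulse_response(u, y, order=100):
    """Least-squares FIR system identification: y[n] ~ sum_k h[k] u[n-k]."""
    rows = len(u) - order
    U = np.column_stack([u[order - k:order - k + rows] for k in range(order)])
    h, *_ = np.linalg.lstsq(U, y[order:], rcond=None)
    return h

fs = 500.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
proximal = np.sin(2 * np.pi * 1.2 * t) ** 8  # crude pulse-like input
delay = int(0.12 * fs)                       # true transit time: 120 ms
distal = np.roll(proximal, delay) \
         + 0.05 * np.random.default_rng(0).standard_normal(len(t))

h = fir_impulse_response(proximal, distal)
ptt = np.argmax(h) / fs                      # delay of the impulse response
print(f"estimated PTT = {ptt * 1e3:.1f} ms")  # close to 120 ms
```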

  11. An interactive web-based learning unit to facilitate and improve intrapartum nursing care of nursing students.

    Science.gov (United States)

    Gerdprasert, Sailom; Pruksacheva, Tassanee; Panijpan, Bhinyo; Ruenwongsa, Pintip

    2011-07-01

    First clinical exposures are stressful situations for nursing students, especially when practicing on the labour ward. The purpose of this study was to develop a web-based learning unit on intrapartum nursing care to facilitate students' acquisition of conceptual knowledge and performance skills. The unit integrated the 5E model and information technology with the lecture content. Eighty-four nursing students were recruited into the study. The control group received traditional teaching, while the experimental group was supplemented with the web-based learning unit on intrapartum nursing care. The results showed that students in the experimental group had significantly higher scores in conceptual knowledge and performance skills, and significantly lower scores in ignorance-related stress, compared to the control group. Students supplemented with the web-based course also showed a strongly positive attitude toward the new learning method.

  12. Combining satellite altimetry and gravimetry data to improve Antarctic mass balance and gia estimates

    NARCIS (Netherlands)

    Gunter, B.C.; Didova, O.; Riva, R.E.M.; van den Broeke, M.R.; Ligtenberg, S.R.M.; Lenaerts, J.T.M.; King, M.; Urban, T.

    2012-01-01

    This study explores an approach that simultaneously estimates Antarctic mass balance and glacial isostatic adjustment (GIA) through the combination of satellite gravity and altimetry data sets. The results improve upon previous efforts by incorporating reprocessed data sets over a longer period of time.

  14. Assimilation of active and passive microwave observations for improved estimates of soil moisture and crop growth

    Science.gov (United States)

    An Ensemble Kalman Filter-based data assimilation framework that links a crop growth model with active and passive (AP) microwave models was developed to improve estimates of soil moisture (SM) and vegetation biomass over a growing season of soybean. Complementarities in the AP observations were incorporated…

  15. Estimating Evapotranspiration from an Improved Two-Source Energy Balance Model Using ASTER Satellite Imagery

    Directory of Open Access Journals (Sweden)

    Qifeng Zhuang

    2015-11-01

    Reliably estimating the turbulent fluxes of latent and sensible heat at the Earth's surface by remote sensing is important for research on the terrestrial hydrological cycle. This paper presents a practical approach for mapping surface energy fluxes from an improved two-source energy balance (TSEB) model using Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) images. The original TSEB approach may overestimate latent heat flux under vegetative stress conditions, as has also been reported in recent research. We replaced the Priestley-Taylor equation used in the original TSEB model with one that uses plant moisture and temperature constraints based on the PT-JPL model to obtain a more accurate canopy latent heat flux for the model solution. The ASTER data and field observations employed in this study were collected over corn fields in arid regions of the Heihe Watershed Allied Telemetry Experimental Research (HiWATER) area, China. The results were validated against measurements from eddy covariance (EC) systems, and the surface energy flux estimates of the improved TSEB model agree well with the ground truth. A comparison of the results from the original and improved TSEB models indicates that the improved method estimates the sensible and latent heat fluxes more accurately, generating more precise daily evapotranspiration (ET) estimates under vegetative stress conditions.
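
    For orientation, the Priestley-Taylor expression for canopy latent heat flux used in the original TSEB, and a constrained PT-JPL-style variant of the kind described here (the constraint functions are placeholders; the paper's exact formulation is not given in the abstract):

```latex
% Priestley-Taylor canopy latent heat flux (original TSEB ingredient),
% with PT coefficient \alpha_{PT}, green fraction f_g, slope of the
% saturation vapour pressure curve \Delta, psychrometric constant
% \gamma, and canopy net radiation R_{n,c}:
LE_c = \alpha_{PT}\, f_g\, \frac{\Delta}{\Delta + \gamma}\, R_{n,c}
% PT-JPL-style modification: multiply by plant moisture and temperature
% constraints f_M, f_T \in [0,1] (illustrative form):
LE_c = \alpha_{PT}\, f_g\, f_M\, f_T\, \frac{\Delta}{\Delta + \gamma}\, R_{n,c}
```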

  16. Improving peak flow estimates in artificial neural network river flow models

    Science.gov (United States)

    Sudheer, K. P.; Nayak, P. C.; Ramasastri, K. S.

    2003-02-01

    In this paper, the concern of accuracy in peak estimation by artificial neural network (ANN) river flow models is discussed, and a suitable statistical procedure to obtain better estimates from these models is presented. The likely cause of the underestimation of peak flows is local variation in the function being mapped, arising from varying skewness in the data series; theoretical considerations of network functioning confirm this. It is envisaged that an appropriate data transformation will reduce these local variations, so that an ANN model built on the transformed series should perform better. This heuristic is illustrated and confirmed by many case studies, and the results suggest that model performance is significantly improved by data transformation. The model built on transformed data outperforms the model built on raw data in terms of various statistical performance indices, and the peak estimates in particular are significantly improved.

  17. Improving radar rainfall estimation by merging point rainfall measurements within a model combination framework

    Science.gov (United States)

    Hasan, Mohammad Mahadi; Sharma, Ashish; Mariethoz, Gregoire; Johnson, Fiona; Seed, Alan

    2016-11-01

    While the value of correcting raw radar rainfall estimates using simultaneous ground rainfall observations is well known, approaches that use the complete record of both gauge and radar measurements to provide improved rainfall estimates are much less common. We present here two new approaches for estimating radar rainfall that are designed to address known limitations in radar rainfall products by using a relatively long history of radar reflectivity and ground rainfall observations. The first of these two approaches is a radar rainfall estimation algorithm that is nonparametric by construction. Compared to the traditional gauge adjusted parametric relationship between reflectivity (Z) and ground rainfall (R), the suggested new approach is based on a nonparametric radar rainfall estimation method (NPR) derived using the conditional probability distribution of reflectivity and gauge rainfall. The NPR method is applied to the densely gauged Sydney Terrey Hills radar network, where it reduces the RMSE in rainfall estimates by 10%, with improvements observed at 90% of the gauges. The second of the two approaches is a method to merge radar and spatially interpolated gauge measurements. The two sources of information are combined using a dynamic combinatorial algorithm with weights that vary in both space and time. The weight for any specific period is calculated based on the error covariance matrix that is formulated from the radar and spatially interpolated rainfall errors of similar reflectivity periods in a cross-validation setting. The combination method reduces the RMSE by about 20% compared to the traditional Z-R relationship method, and improves estimates compared to spatially interpolated point measurements in sparsely gauged areas.

  18. The Use of Radar to Improve Rainfall Estimation over the Tennessee and San Joaquin River Valleys

    Science.gov (United States)

    Petersen, Walter A.; Gatlin, Patrick N.; Felix, Mariana; Carey, Lawrence D.

    2010-01-01

    This slide presentation provides an overview of the collaborative radar rainfall project between the Tennessee Valley Authority (TVA), the Von Braun Center for Science & Innovation (VCSI), NASA MSFC, and UAHuntsville. Two systems were used in this project: the Advanced Radar for Meteorological & Operational Research (ARMOR) Rainfall Estimation Processing System (AREPS), a demonstration of real-time radar rainfall using a research radar, and the NEXRAD Rainfall Estimation Processing System (NREPS). The objectives, methodology, some results and validation, operational experience, and lessons learned are reviewed. The presentation also covers another project using radar to improve rainfall estimation in the San Joaquin River Valley, California, part of an overall effort to develop an integrated tool to assist water management in the valley. That effort integrates several components: (1) radar precipitation estimates, (2) a distributed hydrologic model, and (3) snowfall and surface temperature/moisture measurements. NREPS was selected to provide the precipitation component.

  19. IMPROVED ROBUST H-INFINITY ESTIMATION FOR UNCERTAIN CONTINUOUS-TIME SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    Aiguo WU; Huafeng DONG; Guangren DUAN

    2007-01-01

    The design of full-order robust estimators is investigated for continuous-time polytopic uncertain systems. The main purpose is to obtain a stable linear estimator such that the estimation error system remains robustly stable with a prescribed H∞ attenuation level. First, a simple alternative proof is given for an improved LMI representation of H∞ performance proposed recently. Based on this performance criterion, which keeps the Lyapunov matrix out of the product of the system dynamic matrices, a sufficient condition for the existence of the robust estimator is provided in terms of linear matrix inequalities. It is shown that the proposed design strategy allows the use of parameter-dependent Lyapunov functions and hence is less conservative than earlier results. A numerical example is employed to illustrate the feasibility and advantage of the proposed design.

  20. Parameter estimation for chaotic systems based on improved boundary chicken swarm optimization

    Science.gov (United States)

    Chen, Shaolong; Yan, Renhuan

    2016-10-01

    Estimating unknown parameters of a chaotic system is a key problem in the field of chaos control and synchronization. By constructing an appropriate fitness function, parameter estimation of a chaotic system can be converted into a multidimensional parameter optimization problem. In this paper, a new method based on the improved boundary chicken swarm optimization (IBCSO) algorithm is proposed for parameter estimation of chaotic systems. To the best of our knowledge, there is no previously published work on chicken swarm optimization for parameter estimation of chaotic systems. Computer simulations based on the Lorenz system, and comparisons with chicken swarm optimization, particle swarm optimization, and a genetic algorithm, show the effectiveness and feasibility of the proposed method.

  1. Enhancing e-waste estimates: Improving data quality by multivariate Input–Output Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Feng, E-mail: fwang@unu.edu [Institute for Sustainability and Peace, United Nations University, Hermann-Ehler-Str. 10, 53113 Bonn (Germany); Design for Sustainability Lab, Faculty of Industrial Design Engineering, Delft University of Technology, Landbergstraat 15, 2628CE Delft (Netherlands); Huisman, Jaco [Institute for Sustainability and Peace, United Nations University, Hermann-Ehler-Str. 10, 53113 Bonn (Germany); Design for Sustainability Lab, Faculty of Industrial Design Engineering, Delft University of Technology, Landbergstraat 15, 2628CE Delft (Netherlands); Stevels, Ab [Design for Sustainability Lab, Faculty of Industrial Design Engineering, Delft University of Technology, Landbergstraat 15, 2628CE Delft (Netherlands); Baldé, Cornelis Peter [Institute for Sustainability and Peace, United Nations University, Hermann-Ehler-Str. 10, 53113 Bonn (Germany); Statistics Netherlands, Henri Faasdreef 312, 2492 JP Den Haag (Netherlands)

    2013-11-15

    Highlights: • A multivariate Input-Output Analysis method for e-waste estimates is proposed. • Applying multivariate analysis to consolidate data can enhance e-waste estimates. • We examine the influence of model selection and data quality on e-waste estimates. • Datasets of all e-waste related variables in a Dutch case study are provided. • Accurate modeling of time-variant lifespan distributions is critical for the estimates. - Abstract: Waste electrical and electronic equipment (or e-waste) is one of the fastest growing waste streams, encompassing a wide and increasing spectrum of products. Accurate estimation of e-waste generation is difficult, mainly due to a lack of high-quality data related to market and socio-economic dynamics. This paper addresses how to enhance e-waste estimates by providing techniques to increase data quality. An advanced, flexible, multivariate Input-Output Analysis (IOA) method is proposed. It links all three pillars of IOA (product sales, stock, and lifespan profiles) to construct mathematical relationships between the various data points. By applying this method, the data consolidation steps can generate more accurate time-series datasets from the available data pool, which in turn increases the reliability of e-waste estimates compared to approaches without data processing. A case study in the Netherlands is used to apply the advanced IOA model; as a result, complete datasets of all three variables for estimating all types of e-waste have been obtained for the first time. The results also demonstrate significant disparity between various estimation models, arising from the use of data under different conditions. This shows the importance of applying a multivariate approach and multiple sources to improve data quality for modelling, specifically using appropriate time-varying lifespan parameters. Following the case study, a roadmap with a procedural guideline is provided to enhance e-waste estimates.
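
    A hedged sketch of the core IOA relationship, generating waste from historical sales convolved with a lifespan (discard) distribution; the Weibull parameters and sales series below are illustrative, not the study's values:

```python
import numpy as np
from scipy.stats import weibull_min

def waste_generated(sales, shape, scale):
    """Waste in year t = sum over sale years tau of sales[tau] times the
    probability that a unit sold in tau is discarded in year t
    (Weibull lifespan, constant here; the paper argues for
    time-varying parameters)."""
    n = len(sales)
    ages = np.arange(n + 1)
    cdf = weibull_min.cdf(ages, shape, scale=scale)
    discard_prob = np.diff(cdf)        # discard probability per year of age
    waste = np.zeros(n)
    for tau in range(n):
        for t in range(tau, n):
            waste[t] += sales[tau] * discard_prob[t - tau]
    return waste

sales = np.array([100, 120, 150, 160, 170, 180, 190, 200], float)  # units/yr
print(waste_generated(sales, shape=2.0, scale=6.0))                # units/yr
```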

  2. Multiple data sources improve DNA-based mark-recapture population estimates of grizzly bears.

    Science.gov (United States)

    Boulanger, John; Kendall, Katherine C; Stetz, Jeffrey B; Roon, David A; Waits, Lisette P; Paetkau, David

    2008-04-01

    A fundamental challenge to estimating population size with mark-recapture methods is heterogeneous capture probabilities and subsequent bias of population estimates. Confronting this problem usually requires substantial sampling effort that can be difficult to achieve for some species, such as carnivores. We developed a methodology that uses two data sources to deal with heterogeneity and applied this to DNA mark-recapture data from grizzly bears (Ursus arctos). We improved population estimates by incorporating additional DNA "captures" of grizzly bears obtained by collecting hair from unbaited bear rub trees concurrently with baited, grid-based, hair snag sampling. We consider a Lincoln-Petersen estimator with hair snag captures as the initial session and rub tree captures as the recapture session and develop an estimator in program MARK that treats hair snag and rub tree samples as successive sessions. Using empirical data from a large-scale project in the greater Glacier National Park, Montana, USA, area and simulation modeling we evaluate these methods and compare the results to hair-snag-only estimates. Empirical results indicate that, compared with hair-snag-only data, the joint hair-snag-rub-tree methods produce similar but more precise estimates if capture and recapture rates are reasonably high for both methods. Simulation results suggest that estimators are potentially affected by correlation of capture probabilities between sample types in the presence of heterogeneity. Overall, closed population Huggins-Pledger estimators showed the highest precision and were most robust to sparse data, heterogeneity, and capture probability correlation among sampling types. Results also indicate that these estimators can be used when a segment of the population has zero capture probability for one of the methods. We propose that this general methodology may be useful for other species in which mark-recapture data are available from multiple sources.
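
    For reference, the Lincoln-Petersen estimator referred to here treats the hair-snag session as the marking occasion and the rub-tree session as the recapture occasion; with n₁ and n₂ animals detected in the two sessions and m₂ detected in both, the classical form and Chapman's bias-corrected variant (standard formulas, not taken verbatim from the paper) are:

```latex
% Classical Lincoln-Petersen abundance estimator:
\hat{N} = \frac{n_1\, n_2}{m_2}
% Chapman's bias-corrected variant, better behaved when m_2 is small:
\hat{N}_C = \frac{(n_1 + 1)(n_2 + 1)}{m_2 + 1} - 1
```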

  3. Improving the statistical detection of regulated genes from microarray data using intensity-based variance estimation

    Directory of Open Access Journals (Sweden)

    Natarajan Sripriya

    2004-02-01

    Background: Gene microarray technology provides the ability to study the regulation of thousands of genes simultaneously, but its potential is limited without an estimate of the statistical significance of the observed changes in gene expression. Due to the large number of genes being tested and the comparatively small number of array replicates (e.g., N = 3), standard statistical methods such as Student's t-test fail to produce reliable results. Two other statistical approaches commonly used to improve significance estimates are a penalized t-test and a Z-test using intensity-dependent variance estimates. Results: The performance of these approaches is compared using a dataset of 23 replicates, and a new implementation of the Z-test is introduced that pools variance estimates of genes with similar minimum intensity. Significance estimates based on 3 replicate arrays are calculated using each statistical technique, and their accuracy is evaluated by comparison to a reliable estimate based on the remaining 20 replicates. The reproducibility of each test statistic is evaluated by applying it to multiple independent sets of 3 replicate arrays. Two implementations of a Z-test using intensity-dependent variance produce more reproducible results than two implementations of a penalized t-test. Furthermore, the minimum-intensity-based Z-statistic demonstrates higher accuracy and higher or equal precision compared to all other statistical techniques tested. Conclusion: An intensity-based variance estimation technique provides one simple, effective approach to improving p-value estimates for differentially regulated genes derived from replicated microarray datasets. Implementations of the Z-test algorithms are available at http://vessels.bwh.harvard.edu/software/papers/bmcg2004.
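
    A minimal sketch of the intensity-pooled Z-test idea (pool the variance of genes with similar minimum intensity, then score each gene against that pooled variance); the window size, normalization, and synthetic data are assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.stats import norm

def intensity_pooled_z(control, treated, window=101):
    """Z-scores for log-ratios using variance pooled across genes of
    similar minimum intensity (control/treated: genes x replicates,
    log-scale intensities)."""
    ratio = treated.mean(axis=1) - control.mean(axis=1)
    per_gene_var = (np.var(treated, axis=1, ddof=1)
                    + np.var(control, axis=1, ddof=1))
    min_intensity = np.minimum(control.mean(axis=1), treated.mean(axis=1))
    order = np.argsort(min_intensity)
    pooled = np.empty_like(per_gene_var)
    half = window // 2
    for rank, g in enumerate(order):
        lo = max(0, rank - half)
        hi = min(len(order), rank + half + 1)
        pooled[g] = per_gene_var[order[lo:hi]].mean()  # local pooling
    z = ratio / np.sqrt(pooled / control.shape[1])
    p = 2 * norm.sf(np.abs(z))
    return z, p

rng = np.random.default_rng(0)
ctrl = rng.normal(8.0, 0.3, size=(5000, 3))   # log2 intensities
trt = ctrl + rng.normal(0.0, 0.3, size=(5000, 3))
trt[:50] += 1.0                                # 50 up-regulated genes
z, p = intensity_pooled_z(ctrl, trt)
```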

  4. The promise of multimedia technology for STI/HIV prevention: frameworks for understanding improved facilitator delivery and participant learning.

    Science.gov (United States)

    Khan, Maria R; Epperson, Matthew W; Gilbert, Louisa; Goddard, Dawn; Hunt, Timothy; Sarfo, Bright; El-Bassel, Nabila

    2012-10-01

    There is increasing excitement about multimedia sexually transmitted infection (STI) and HIV prevention interventions, yet there has been limited discussion of how use of multimedia technology may improve STI/HIV prevention efforts. The purpose of this paper is to describe the mechanisms through which multimedia technology may work to improve the delivery and uptake of intervention material. We present conceptual frameworks describing how multimedia technology may improve intervention delivery by increasing standardization and fidelity to the intervention material and the participant's ability to learn by improving attention, cognition, emotional engagement, skills-building, and uptake of sensitive material about sexual and drug risks. In addition, we describe how the non-multimedia behavioral STI/HIV prevention intervention, Project WORTH, was adapted into a multimedia format for women involved in the criminal justice system and provide examples of how multimedia activities can more effectively target key mediators of behavioral change in this intervention.

  5. Improved particle size estimation in digital holography via sign matched filtering.

    Science.gov (United States)

    Lu, Jiang; Shaw, Raymond A; Yang, Weidong

    2012-06-04

    A matched filter method is presented for obtaining improved particle size estimates from digital in-line holograms. The improvement is relative to conventional reconstruction and pixel-counting methods for particle size estimation, which are greatly limited by the CCD camera pixel size. The proposed method is based on iterative application of a sign matched filter in the Fourier domain, where "sign" means the matched filter takes values of ±1 depending on the sign of the angular spectrum of the particle aperture function. Using simulated data, the method is demonstrated to work for particle diameters several times the pixel size. Holograms of piezoelectrically generated water droplets taken in the laboratory show greatly improved particle size measurements. The method is robust to additive noise and can be applied to real holograms over a wide range of matched-filter particle sizes.

  6. Examining Spectral Reflectance Saturation in Landsat Imagery and Corresponding Solutions to Improve Forest Aboveground Biomass Estimation

    Directory of Open Access Journals (Sweden)

    Panpan Zhao

    2016-06-01

    The data saturation problem in Landsat imagery is well recognized and is regarded as an important factor in inaccurate forest aboveground biomass (AGB) estimation. However, no study has examined the saturation values for different vegetation types such as coniferous and broadleaf forests. The objective of this study is to estimate the saturation values in Landsat imagery for different vegetation types in a subtropical region and to explore approaches to improving forest AGB estimation. Landsat Thematic Mapper imagery, digital elevation model data, and field measurements in Zhejiang province of Eastern China were used. Correlation analysis and scatterplots were first used to examine specific spectral bands and their relationships with AGB. A spherical model was then used to quantitatively estimate the saturation value of AGB for each vegetation type. Stratification by vegetation type and/or slope aspect was used to assess the potential to improve AGB estimation by developing a specific estimation model for each category. Stepwise regression analysis based on Landsat spectral signatures and grey-level co-occurrence matrix (GLCM) textures was used to develop AGB estimation models for different scenarios: no stratification, and stratification by vegetation type, by slope aspect, or by their combination. The results indicate that pine forest and mixed forest have the highest AGB saturation values (159 and 152 Mg/ha, respectively), Chinese fir and broadleaf forest have lower saturation values (143 and 123 Mg/ha, respectively), and bamboo forest and shrub have the lowest saturation values (75 and 55 Mg/ha, respectively). Stratification by either vegetation type or slope aspect yielded smaller root mean squared errors (RMSEs) than no stratification, and the AGB estimation models based on stratification by both vegetation type and slope aspect provided the most accurate estimates.

  8. Fractional vegetation cover estimation based on an improved selective endmember spectral mixture model.

    Science.gov (United States)

    Li, Ying; Wang, Hong; Li, Xiao Bing

    2015-01-01

    Vegetation is an important part of the ecosystem, and estimating fractional vegetation cover is important for monitoring vegetation growth in a region. With Landsat TM images and HJ-1B images as data sources, an improved selective endmember linear spectral mixture model (SELSMM) was put forward in this research to estimate the fractional vegetation cover in the Huangfuchuan watershed in China. We compared the result with the vegetation coverage estimated with a linear spectral mixture model (LSMM) and conducted accuracy tests on the two results with field survey data to study the effectiveness of the different models in estimating vegetation coverage. Results indicated that: (1) the RMSE of the SELSMM estimate based on TM images is the lowest, at 0.044; the RMSEs of the LSMM estimate based on TM images, the SELSMM estimate based on HJ-1B images, and the LSMM estimate based on HJ-1B images are 0.052, 0.077 and 0.082, respectively, all higher than that of SELSMM based on TM images; (2) the R2 values of SELSMM based on TM images, LSMM based on TM images, SELSMM based on HJ-1B images, and LSMM based on HJ-1B images are 0.668, 0.531, 0.342 and 0.336, respectively. Among these models, SELSMM based on TM images has the highest estimation accuracy and also the highest correlation with measured vegetation coverage. Of the two methods tested, SELSMM is superior to LSMM in estimating vegetation coverage, and it is also better at unmixing mixed pixels of TM images than those of HJ-1B images. Overall, SELSMM based on TM images is comparatively accurate and reliable for regional fractional vegetation cover estimation.
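
    For readers unfamiliar with linear spectral mixture models, a minimal per-pixel unmixing step might look like the sketch below. The endmember spectra and the soft sum-to-one weighting are illustrative assumptions; the paper's SELSMM endmember-selection logic is not reproduced.

        import numpy as np
        from scipy.optimize import nnls

        def unmix(pixel, endmembers, w=100.0):
            # Non-negative LSMM unmixing with a softly enforced sum-to-one
            # constraint (a weighted row of ones appended to the system).
            E = np.vstack([endmembers, w * np.ones(endmembers.shape[1])])
            p = np.append(pixel, w)
            fractions, _ = nnls(E, p)
            return fractions

        # Illustrative 3-band, 2-endmember case (vegetation, bare soil)
        E = np.array([[0.05, 0.20],    # red
                      [0.45, 0.25],    # NIR
                      [0.20, 0.30]])   # SWIR
        pixel = 0.6 * E[:, 0] + 0.4 * E[:, 1]
        print("fractional vegetation cover:", unmix(pixel, E)[0])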

  9. An Improved Multicell MMSE Channel Estimation in a Massive MIMO System

    Directory of Open Access Journals (Sweden)

    Ke Li

    2014-01-01

    Massive MIMO is a promising technology for improving both spectrum efficiency and energy efficiency. The key problem limiting the throughput of a massive MIMO system is pilot contamination, caused by the nonorthogonality of the pilot sequences in different cells. Conventional channel estimation schemes cannot mitigate this problem effectively, and their computational complexity grows large in view of the large number of antennas employed in a massive MIMO system. Furthermore, channel estimation is usually carried out under ideal assumptions such as complete knowledge of large-scale fading. In this paper, a new channel estimation scheme is proposed that utilizes interference cancellation and joint processing. Highly interfering users in neighboring cells are identified based on the estimated large-scale fading and then included in the joint channel processing; this achieves a compromise between the effectiveness and efficiency of the channel estimation at a reasonable computational cost and leads to an improvement in overall system performance. Simulation results are provided to demonstrate the effectiveness of the proposed scheme.
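
    As a reference point for the conventional estimator this record improves upon, a single-cell LMMSE (MMSE) channel-estimation step can be written in a few lines. This is a generic sketch: the covariance and noise inputs are assumed known, and the paper's interference-cancellation and joint-processing stages are not shown.

        import numpy as np

        def lmmse_channel_estimate(h_ls, R, noise_var):
            # LMMSE refinement of a least-squares channel estimate:
            # h_hat = R (R + sigma^2 I)^(-1) h_ls. The covariance R encodes
            # the (assumed known) channel statistics.
            n = R.shape[0]
            return R @ np.linalg.solve(R + noise_var * np.eye(n), h_ls)

        # Illustrative use with a random positive semi-definite covariance
        rng = np.random.default_rng(0)
        A = rng.normal(size=(8, 8))
        R = A @ A.T / 8
        h_true = rng.multivariate_normal(np.zeros(8), R)
        h_ls = h_true + rng.normal(0.0, 0.3, 8)       # noisy LS estimate
        print(lmmse_channel_estimate(h_ls, R, 0.09))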

  10. DOA and Noncircular Phase Estimation of Noncircular Signal via an Improved Noncircular Rotational Invariance Propagator Method

    Directory of Open Access Journals (Sweden)

    Xueqiang Chen

    2015-01-01

    We consider the computationally efficient direction-of-arrival (DOA) and noncircular (NC) phase estimation problem for noncircular signals received by a uniform linear array. The key idea is to apply the noncircular propagator method (NC-PM), which requires neither eigenvalue decomposition (EVD) of the covariance matrix nor singular value decomposition (SVD) of the received data. The noncircular rotational invariance propagator method (NC-RI-PM) avoids spectral peak searching in PM and can obtain a closed-form solution for the DOA, so it has lower computational complexity. An improved NC-RI-PM algorithm for noncircular signals and uniform linear arrays is proposed to estimate the elevation angles and noncircular phases with automatic pairing. We reconstruct the extended array output by combining the array output with its conjugated counterpart. Our algorithm fully uses the extended array elements in the improved propagator matrix to estimate the elevation angles and noncircular phases by exploiting the rotational invariance property between subarrays. Compared with NC-RI-PM, the proposed algorithm has better angle estimation performance and much lower computational load. The computational complexity of the proposed algorithm is analyzed. We also derive the variance of the estimation error and the Cramer-Rao bound (CRB) for noncircular signals and uniform linear arrays. Finally, simulation results are presented to demonstrate the effectiveness of our algorithm.

  11. Vibration Suppression for Improving the Estimation of Kinematic Parameters on Industrial Robots

    Directory of Open Access Journals (Sweden)

    David Alejandro Elvira-Ortiz

    2016-01-01

    Vibration is present in every industrial system, including CNC machines and industrial robots. Moreover, the sensors used to estimate the angular position of an industrial robot joint are severely affected by vibration and can yield erroneous estimates. This paper proposes a methodology for improving the estimation of kinematic parameters on industrial robots through proper suppression of the vibration components present in the signals acquired from two primary sensors: an accelerometer and a gyroscope. A Kalman filter is responsible for filtering out spurious vibration. Additionally, a sensor fusion technique is used to merge information from both sensors and improve the results obtained with each sensor separately. The methodology is implemented in a proprietary hardware signal processor and tested on an ABB IRB 140 industrial robot, first by analyzing the motion profile of a single joint and then by estimating the path tracking of two welding tasks, one rectangular and one circular. Results from this work prove that the sensor fusion technique, accompanied by proper suppression of vibrations, delivers better estimation than other proposed techniques.
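
    A minimal sketch of the kind of gyroscope/accelerometer fusion described, assuming a 1-DOF joint and a standard angle-and-bias Kalman filter; the paper's actual filter design, tuning, and hardware implementation are not given in this record, so everything below is generic.

        import numpy as np

        class AngleKalman:
            # 1-DOF Kalman filter fusing a gyroscope rate with an
            # accelerometer tilt angle; the measurement update attenuates
            # the vibration contaminating the accelerometer reading.
            def __init__(self, q_angle=1e-4, q_bias=1e-6, r_acc=0.05):
                self.x = np.zeros(2)                 # state: [angle, gyro bias]
                self.P = np.eye(2)
                self.Q = np.diag([q_angle, q_bias])  # assumed noise levels
                self.R = r_acc

            def step(self, gyro_rate, acc_angle, dt):
                F = np.array([[1.0, -dt], [0.0, 1.0]])
                self.x = F @ self.x + np.array([dt * gyro_rate, 0.0])
                self.P = F @ self.P @ F.T + self.Q
                H = np.array([1.0, 0.0])
                K = self.P @ H / (H @ self.P @ H + self.R)
                self.x = self.x + K * (acc_angle - self.x[0])
                self.P = (np.eye(2) - np.outer(K, H)) @ self.P
                return self.x[0]                     # fused joint angle

        kf = AngleKalman()
        # angle = kf.step(gyro_rate=0.01, acc_angle=0.12, dt=0.005)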

  12. Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process

    Directory of Open Access Journals (Sweden)

    Janet L. Rachlow

    2013-08-01

    United States recovery plans contain biological information for species listed under the Endangered Species Act and specify recovery criteria that provide the basis for species recovery. The objective of our study was to evaluate whether recovery plans provide measures of uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species and recorded the following data: (1) whether a current population size was given, (2) whether a measure of uncertainty or variance was associated with current estimates of population size, and (3) whether a population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate, and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty, and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty than those for reptiles and amphibians. We suggest calculating minimum detectable differences to improve confidence when delisting endangered animals, and we identify incentives for individuals to get involved in recovery planning to improve access to quantitative data.

  13. Improving High-resolution Spatial Estimates of Precipitation in the Equatorial Americas

    Science.gov (United States)

    Verdin, A.; Rajagopalan, B.; Funk, C. C.

    2013-12-01

    Drought and flood management practices require accurate estimates of precipitation in space and time. However, data are sparse in regions with complicated terrain (such as the Equatorial Americas), often concentrated in valleys (where people farm), and of poor quality. Consequently, extreme precipitation events are poorly represented. Satellite-derived rainfall data are an attractive alternative in such regions and are widely used, though they too suffer from problems such as underestimation of extreme events (due to the dependency on retrieval algorithms) and the indirect relationship between satellite radiation observations and precipitation intensities. Thus, it seems appropriate to blend satellite-derived rainfall data of extensive spatial coverage with rain gauge data in order to provide a more robust estimate of precipitation. To this end, we offer three techniques to blend rain gauge data and the Climate Hazards group InfraRed Precipitation (CHIRP) satellite-derived precipitation estimate for Central America and Colombia. In the first two methods, the gauge data are assigned to the closest CHIRP grid point, where the error is defined as r = Yobs - Ysat. The spatial structure of r is then modeled using physiographic information (Easting, Northing, and Elevation) by two methods: (i) a traditional Cokriging approach whose variogram is calculated in Euclidean space and (ii) a nonparametric method based on local polynomial functional estimation. The models are used to estimate r at all grid points, which is then added to the CHIRP, thus creating an improved satellite estimate. We demonstrate these methods by applying them to pentadal and monthly total precipitation fields during 2009. The models' predictive abilities and their ability to capture extremes are investigated. These blending methods significantly improve upon the satellite-derived estimates and are also competitive in their ability to capture extreme precipitation. The above methods assume
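
    A compact sketch of the residual-blending idea (model r = Yobs - Ysat over physiographic coordinates, then add it back to the satellite field). Here a thin-plate-spline RBF interpolator stands in for the Cokriging and local-polynomial estimators used in the study; the function and argument names are ours.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        def blend(gauge_xyz, gauge_obs, sat_at_gauges, grid_xyz, sat_grid):
            # gauge_xyz: (n, 3) easting/northing/elevation of gauges;
            # grid_xyz:  (m, 3) same coordinates for the CHIRP grid.
            r = gauge_obs - sat_at_gauges                  # gauge-minus-satellite
            interp = RBFInterpolator(gauge_xyz, r, kernel='thin_plate_spline')
            return sat_grid + interp(grid_xyz)             # corrected field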

  14. Functionalization of graphene oxide nanostructures improves photoluminescence and facilitates their use as optical probes in preclinical imaging.

    Science.gov (United States)

    Prabhakar, Neeraj; Näreoja, Tuomas; von Haartman, Eva; Şen Karaman, Didem; Burikov, Sergey A; Dolenko, Tatiana A; Deguchi, Takahiro; Mamaeva, Veronika; Hänninen, Pekka E; Vlasov, Igor I; Shenderova, Olga A; Rosenholm, Jessica M

    2015-06-21

    Recently reported photoluminescent nanographene oxides (nGOs), i.e. nanographene oxidised with a sulfuric/nitric acid mixture (SNOx method), have tuneable photoluminescence and are scalable, simple, and fast to produce as optical probes. This material belongs to the vast class of photoluminescent carbon nanostructures, including carbon dots, nanodiamonds (NDs) and graphene quantum dots (GQDs), all of which demonstrate a variety of properties that are attractive for biomedical imaging, such as low toxicity and stable photoluminescence. In this study, the nGOs were organically surface-modified with poly(ethylene glycol)-poly(ethylene imine) (PEG-PEI) copolymers tagged with folic acid as the affinity ligand for cancer cells expressing folate receptors. The functionalization enhanced both the cellular uptake and the quantum efficiency of the photoluminescence as compared to non-modified nGOs. The nGOs exhibited an excitation-dependent photoluminescence that facilitated their detection with a wide range of microscope configurations. The functionalized nGOs were non-toxic, they were retained in the stained cell population over a period of 8 days, and they were distributed equally between daughter cells. We have evaluated their applicability in in vitro and in vivo (chicken embryo CAM) models to visualize and track migratory cancer cells. The good biocompatibility and easy detection of the functionalized nGOs suggest that they could address the limitations faced with quantum dots and organic fluorophores in long-term in vivo biomedical imaging.

  15. Respondent driven sampling: determinants of recruitment and a method to improve point estimation.

    Directory of Open Access Journals (Sweden)

    Nicky McCreesh

    INTRODUCTION: Respondent-driven sampling (RDS) is a variant of a link-tracing design intended to generate unbiased estimates of the composition of hidden populations; it typically involves giving participants several coupons to recruit their peers into the study. RDS may generate biased estimates if coupons are distributed non-randomly or if potential recruits present for interview non-randomly. We explore whether biases detected in an RDS study were due to either of these mechanisms, and propose and apply weights to reduce bias due to non-random presentation for interview. METHODS: Using data from the total population, and from the population to whom recruiters offered their coupons, we explored how age and socioeconomic status were associated with being offered a coupon and, if offered a coupon, with presenting for interview. Population proportions were estimated by weighting by the assumed inverse probabilities of being offered a coupon (as in existing RDS methods) and of presenting for interview if offered a coupon, by age and socioeconomic status group. RESULTS: Younger men were under-recruited primarily because they were less likely to be offered coupons. The under-recruitment of higher socioeconomic status men was due in part to them being less likely to present for interview. Consistent with these findings, weighting for non-random presentation for interview by age and socioeconomic status group greatly improved the estimate of the proportion of men in the lowest socioeconomic group, reducing the root-mean-squared error of RDS estimates of socioeconomic status by 38%, but had little effect on estimates for age. The weighting also improved estimates for tribe and religion (reducing root-mean-squared errors by 19-29%) but had little effect for sexual activity or HIV status. CONCLUSIONS: Data collected from recruiters on the characteristics of men to whom they offered coupons may be used to reduce bias in RDS studies. Further evaluation of
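
    The proposed correction reduces to weighting each respondent by the inverse of two probabilities. A toy sketch, with invented group labels and probabilities:

        import numpy as np

        def weighted_proportions(groups, p_offer, p_present):
            # Weight each respondent by the inverse of his assumed
            # probability of being offered a coupon and, given an offer,
            # of presenting for interview; then form weighted proportions.
            w = 1.0 / (np.asarray(p_offer) * np.asarray(p_present))
            groups = np.asarray(groups)
            return {g: w[groups == g].sum() / w.sum() for g in np.unique(groups)}

        # Invented probabilities: young men less likely to be offered coupons
        print(weighted_proportions(
            groups=["young", "young", "old", "old", "old"],
            p_offer=[0.2, 0.2, 0.5, 0.5, 0.5],
            p_present=[0.9, 0.9, 0.7, 0.7, 0.7]))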

  16. Improving the estimation of flow speed for laser speckle imaging with single exposure time.

    Science.gov (United States)

    Wang, Yang; Wen, Dong; Chen, Xiao; Huang, Qin; Chen, Ming; Lu, Jinling; Li, Pengcheng

    2017-01-01

    Laser speckle contrast imaging is a full-field imaging technique for measuring blood flow by mapping the speckle contrast with high spatial and temporal resolution. However, statically scattered light from stationary tissue seriously degrades the accuracy of flow speed estimation. In this Letter, we present a simple calibration approach to calculate the proportion of dynamically scattered light and correct for the effect of static scattering using a single exposure time. Both phantom and animal experimental results suggest that this calibration approach can improve the estimation of relative blood flow in the presence of static scattering.

  17. Does Ocean Color Data Assimilation Improve Estimates of Global Ocean Inorganic Carbon?

    Science.gov (United States)

    Gregg, Watson

    2012-01-01

    Ocean color data assimilation has been shown to dramatically improve chlorophyll abundances and distributions globally and regionally in the oceans. Chlorophyll is a proxy for phytoplankton biomass (which is explicitly defined in a model) and is related to the inorganic carbon cycle through the interactions of organic carbon (particulate and dissolved) and through primary production, where inorganic carbon is taken directly out of the system. Does ocean color data assimilation, whose effects on estimates of chlorophyll are demonstrable, trickle through the simulated ocean carbon system to produce improved estimates of inorganic carbon? Our emphasis here is dissolved inorganic carbon, pCO2, and the air-sea flux. We use a sequential data assimilation method that assimilates chlorophyll directly and indirectly changes nutrient concentrations in a multi-variate approach. The results are decidedly mixed. Dissolved inorganic carbon estimates from the assimilation model are not meaningfully different from free-run, or unassimilated, results, and comparisons with in situ data are similar. pCO2 estimates are generally worse after data assimilation, with global estimates diverging 6.4% from in situ data, while free-run estimates are only 4.7% higher. Basin correlations are, however, slightly improved: r increases from 0.78 to 0.79, with slope closer to unity at 0.94 compared to 0.86. In contrast, the air-sea flux of CO2 is noticeably improved after data assimilation. Global differences decline from -0.635 mol/m2/y (stronger model sink from the atmosphere) to -0.202 mol/m2/y. Basin correlations are slightly improved from r=0.77 to r=0.78, with slope closer to unity (from 0.93 to 0.99). The Equatorial Atlantic appears as a slight sink in the free-run but is correctly represented as a moderate source in the assimilation model. However, the assimilation model shows the Antarctic to be a source rather than a modest sink, and the North Indian basin is represented incorrectly as a sink

  18. Improved rapid magnitude estimation for a community-based, low-cost MEMS accelerometer network

    Science.gov (United States)

    Chung, Angela I.; Cochran, Elizabeth S.; Kaiser, Anna E.; Christensen, Carl M.; Yildirim, Battalgazi; Lawrence, Jesse F.

    2015-01-01

    Immediately following the Mw 7.2 Darfield, New Zealand, earthquake, over 180 Quake-Catcher Network (QCN) low-cost micro-electro-mechanical systems accelerometers were deployed in the Canterbury region. Using data recorded by this dense network from 2010 to 2013, we significantly improved the QCN rapid magnitude estimation relationship. The previous scaling relationship (Lawrence et al., 2014) did not accurately estimate the magnitudes of nearby events. The revised relationship estimates earthquake magnitudes within 1 magnitude unit of the GNS Science GeoNet earthquake catalog magnitudes for 99% of the events tested, within 0.5 magnitude units for 90% of the events, and within 0.25 magnitude units for 57% of the events. These magnitudes are reliably estimated within 3 s of the initial trigger recorded on at least seven stations. In this report, we present the methods used to calculate the new scaling relationship and demonstrate the accuracy of the revised magnitude estimates using a program that retrospectively estimates event magnitudes from archived data.

  19. Development of a mixed pixel filter for improved dimension estimation using AMCW laser scanner

    Science.gov (United States)

    Wang, Qian; Sohn, Hoon; Cheng, Jack C. P.

    2016-09-01

    Accurate dimension estimation is desired in many fields, but traditional dimension estimation methods are time-consuming and labor-intensive. In recent decades, 3D laser scanners have become popular for dimension estimation due to their high measurement speed and accuracy. Nonetheless, scan data obtained by amplitude-modulated continuous-wave (AMCW) laser scanners suffer from erroneous measurements called mixed pixels, which can degrade the accuracy of dimension estimation. This study develops a mixed pixel filter for improved dimension estimation using AMCW laser scanners. The distance measurement of mixed pixels is first formulated based on the working principle of laser scanners. Then, a mixed pixel filter that can minimize the classification errors between valid points and mixed pixels is developed. Validation experiments were conducted to verify the formulation of the distance measurement of mixed pixels and to examine the performance of the proposed filter. Experimental results show that, for a specimen with dimensions of 840 mm × 300 mm, the overall errors of the dimensions estimated after applying the proposed filter are 1.9 mm and 1.0 mm for two different scanning resolutions, respectively. These errors are much smaller than the errors (4.8 mm and 3.5 mm) obtained with the scanner's built-in filter.

  20. Improved Angular Velocity Estimation Using MEMS Sensors with Applications in Miniature Inertially Stabilized Platforms

    Institute of Scientific and Technical Information of China (English)

    ZHOU Xiaoyao; ZHANG Zhiyong; FAN Dapeng

    2011-01-01

    The performance of any inertially stabilized platform (ISP) is strongly related to the bandwidth and accuracy of the angular velocity signals. This paper discusses the development of an optimal state estimator for sensing inertial velocity using low-cost micro-electro-mechanical systems (MEMS) sensors. A low-bandwidth gyroscope is used along with two low-performance accelerometers to obtain the estimate. The gyroscope has its own limited dynamics and mainly contributes the low-frequency components of the estimate. The accelerometers have inherent biases and mainly contribute the high-frequency components. Extensive experimental results show that the state estimator can achieve high-performance signals over a wide range of velocities without drift in both the t- and s-domains. Furthermore, in application to miniature inertially stabilized platforms, the control characteristics present a significant improvement over existing methods. The method can also be applied to robotics, attitude estimation, and friction compensation.

  1. Improved estimates of Belgian private health expenditure can give important lessons to other OECD countries.

    Science.gov (United States)

    Calcoen, Piet; Moens, Dirk; Verlinden, Pieter; van de Ven, Wynand P M M; Pacolet, Jozef

    2015-03-01

    OECD Health Data are a well-known source of detailed information about health expenditure. These data enable us to analyze health policy issues over time and in comparison with other countries. However, the current official Belgian estimates of private expenditure (as published in the OECD Health Data) have proven unreliable. We distinguish four potential major sources of problems in estimating private health spending: interpretation of definitions, formulation of assumptions, missing or incomplete data, and incorrect data. Using alternative sources of billing information, we have reached more accurate estimates of private and out-of-pocket expenditure. For Belgium we found differences of more than 100% between our estimates and the official Belgian estimates of private health expenditure (as published in the OECD Health Data). For instance, according to OECD Health Data, private expenditure on hospitals in Belgium amounts to €3.1 billion, while according to our alternative calculations these expenses represent only €1.1 billion. Total private expenditure differs by only 1%, but this is mere coincidence. This exercise may be of interest to other OECD countries looking to improve their estimates of private expenditure on health.

  2. Improved streamflow recession parameter estimation with attention to calculation of -dQ/dt

    Science.gov (United States)

    Roques, Clément; Rupp, David E.; Selker, John S.

    2017-10-01

    The rate of streamflow recession can be used to assess the storage-outflow properties of source aquifers. A common method of analyzing streamflow recession is to plot the time rate of change in streamflow Q as a function of Q in log-log space. Theory predicts, for diagnostic recession regimes, a power law relationship -dQ/dt = aQ^b, where the recession coefficients a and b are functions of the hydraulic and geometric properties of the aquifer and of the boundary and initial conditions. Observational error reduces the accuracy of estimates of a and b, with estimates of the time derivative of the late-time recession (-dQ/dt) being particularly sensitive to observational error. Here we propose a method to improve the estimation of a and b, with particular focus on the estimation of -dQ/dt. Compared to previously published methods, we find greater robustness in estimates of -dQ/dt and the recession parameters, and less sensitivity to the methodological parameters employed. Previous methods result in up to 50 to 100% error when estimating the recession parameter b, while the proposed methodology produces errors below 5% in the cases analyzed.
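
    In the simplest setting, fitting -dQ/dt = aQ^b is a straight-line fit in log-log space. The sketch below uses a plain central difference for -dQ/dt, which is precisely the step the paper replaces with a more robust estimator, so this is a baseline rather than the proposed method.

        import numpy as np

        def recession_parameters(q, dt=1.0):
            # Fit -dQ/dt = a * Q^b by least squares in log-log space.
            dqdt = np.gradient(q, dt)
            mask = dqdt < 0                          # keep the recession limb only
            slope, intercept = np.polyfit(np.log(q[mask]), np.log(-dqdt[mask]), 1)
            return np.exp(intercept), slope          # a, b

        # Illustrative: Q(t) = (1 + t)^-2 satisfies -dQ/dt = 2 Q^1.5,
        # so the fit should return a near 2 and b near 1.5
        t = np.arange(0.0, 50.0, 1.0)
        q = (1.0 + t) ** -2
        print(recession_parameters(q))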

  3. New approaches to improve a WCDMA SIR estimator by employing different post-processing stages

    Directory of Open Access Journals (Sweden)

    Amnart Chaichoet

    2008-09-01

    For effective control of transmission power in WCDMA mobile systems, a good estimate of the signal-to-interference ratio (SIR) is needed. Conventionally, an adaptive SIR estimator employs a moving average (MA) filter (Yoon et al., 2002) to counter fading channel distortion. However, the resulting estimate tends to have high estimation error due to fluctuation in the channel variation. In this paper, an additional post-processing stage is proposed to improve the estimation accuracy by reducing the variation of the estimate. Four variations of the post-processing stage, namely (1) a moving average (MA) post-filter, (2) an exponential moving average (EMA) post-filter, (3) an IIR post-filter, and (4) a least-mean-squared (LMS) adaptive post-filter, are proposed, and their optimal performance in terms of root-mean-square error (RMSE) is compared by simulation. The results show the best comparable performance when the MA and LMS post-filters are used. However, the MA post-filter requires a lookup table of filter orders for optimal performance under different channel conditions, while the LMS post-filter can be used conveniently without a lookup table.
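
    Of the four post-filters compared, the EMA is the simplest to state. A sketch, with an assumed smoothing constant:

        import numpy as np

        def ema_postfilter(sir_est, alpha=0.1):
            # Exponential moving average post-filter: smooths the raw SIR
            # estimates to reduce fluctuation caused by channel variation.
            # alpha (an assumption here) trades tracking speed for smoothness.
            out = np.empty(len(sir_est))
            out[0] = sir_est[0]
            for k in range(1, len(sir_est)):
                out[k] = alpha * sir_est[k] + (1.0 - alpha) * out[k - 1]
            return out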

  4. Improved FRFT-based method for estimating the physical parameters from Newton's rings

    Science.gov (United States)

    Wu, Jin-Min; Lu, Ming-Feng; Tao, Ran; Zhang, Feng; Li, Yang

    2017-04-01

    Newton's rings are often encountered in interferometry, and by analyzing them we can estimate physical parameters such as the curvature radius and the rings' center. The fractional Fourier transform (FRFT) is capable of estimating these physical parameters from the rings despite noise and obstacles, but there is still a small deviation between the estimated coordinates of the rings' center and the actual values. The least-squares fitting method is popular for its accuracy, but it is easily affected by the initial values. Nevertheless, the estimated results from the FRFT readily meet the requirements on initial values. In this paper, the proposed method combines the advantages of the FRFT with those of the least-squares fitting method in analyzing Newton's rings fringe patterns. Its performance is assessed by analyzing simulated and actual Newton's rings images. The experimental results show that the proposed method is capable of estimating the parameters in the presence of noise and obstacles. Under the same conditions, the estimation results are better than those obtained with the original FRFT-based method, especially for the rings' center. Some applications are shown to illustrate that the improved FRFT-based method is an important technique for interferometric measurements.

  5. Improved tilt-depth method for fast estimation of top and bottom depths of magnetic bodies

    Science.gov (United States)

    Wang, Yan-Guo; Zhang, Jin; Ge, Kun-Peng; Chen, Xiao; Nie, Feng-Jun

    2016-06-01

    The tilt-depth method can be used for fast estimation of the top depth of magnetic bodies. However, it is unable to estimate bottom depths, and each of its inversion points has only a single solution. To resolve these weaknesses, this paper presents an improved tilt-depth method based on the magnetic anomaly expression of a vertical contact with finite depth extent, which can simultaneously estimate the top and bottom depths of magnetic bodies. In addition, multiple characteristic points are selected on the tilt angle map for joint computation to improve the reliability of the inversion solutions. Two- and three-dimensional model tests show that this improved tilt-depth method is effective in inverting the buried depths of body tops and bottoms, and has higher inversion precision for top depths than the conventional method. The improved method is then used to process aeromagnetic data over the Changling Fault Depression in the Songliao Basin; the inverted top depths agree more closely with the actual top depths of volcanic rocks in two nearby drilled wells than those from the conventional tilt-depth method.

  6. Functionalization of graphene oxide nanostructures improves photoluminescence and facilitates their use as optical probes in preclinical imaging

    Science.gov (United States)

    Prabhakar, Neeraj; Näreoja, Tuomas; von Haartman, Eva; Şen Karaman, Didem; Burikov, Sergey A.; Dolenko, Tatiana A.; Deguchi, Takahiro; Mamaeva, Veronika; Hänninen, Pekka E.; Vlasov, Igor I.; Shenderova, Olga A.; Rosenholm, Jessica M.

    2015-06-01

    Recently reported photoluminescent nanographene oxides (nGOs), i.e. nanographene oxidised with a sulfuric/nitric acid mixture (SNOx method), have tuneable photoluminescence and are scalable, simple, and fast to produce as optical probes. This material belongs to the vast class of photoluminescent carbon nanostructures, including carbon dots, nanodiamonds (NDs) and graphene quantum dots (GQDs), all of which demonstrate a variety of properties that are attractive for biomedical imaging, such as low toxicity and stable photoluminescence. In this study, the nGOs were organically surface-modified with poly(ethylene glycol)-poly(ethylene imine) (PEG-PEI) copolymers tagged with folic acid as the affinity ligand for cancer cells expressing folate receptors. The functionalization enhanced both the cellular uptake and the quantum efficiency of the photoluminescence as compared to non-modified nGOs. The nGOs exhibited an excitation-dependent photoluminescence that facilitated their detection with a wide range of microscope configurations. The functionalized nGOs were non-toxic, they were retained in the stained cell population over a period of 8 days, and they were distributed equally between daughter cells. We have evaluated their applicability in in vitro and in vivo (chicken embryo CAM) models to visualize and track migratory cancer cells. The good biocompatibility and easy detection of the functionalized nGOs suggest that they could address the limitations faced with quantum dots and organic fluorophores in long-term in vivo biomedical imaging.

  7. Phylogenetic and functional analysis of the Cation Diffusion Facilitator (CDF) family: improved signature and prediction of substrate specificity

    Directory of Open Access Journals (Sweden)

    Jeandroz Sylvain

    2007-04-01

    Background: The Cation Diffusion Facilitator (CDF) family is a ubiquitous family of heavy metal transporters. Much interest in this family has focused on implications for human health and bioremediation. In this work a broad phylogenetic study has been undertaken which, considered in the context of the functional characteristics of some fully characterised CDF transporters, has aimed at identifying molecular determinants of substrate selectivity and at suggesting metal specificity for newly identified CDF transporters. Results: Representative CDF members from all three kingdoms of life (Archaea, Eubacteria, Eukaryotes) were retrieved from genomic databases. Protein sequence alignment has allowed detection of a modified signature that can be used to identify new hypothetical CDF members. Phylogenetic reconstruction has classified the majority of CDF family members into three groups, each containing characterised members that share the same specificity towards the principally transported metal, i.e. Zn, Fe/Zn or Mn. The metal selectivity of newly identified CDF transporters can be inferred from their position in one of these groups. The function of some conserved amino acids was assessed by site-directed mutagenesis in the poplar Zn2+ transporter PtdMTP1 and compared with similar experiments performed on prokaryotic members. An essential structural role can be assigned to a widely conserved glycine residue, while aspartate and histidine residues, highly conserved in putative transmembrane domains, might be involved in metal transport. The potential role of group-conserved amino acid residues in metal specificity is discussed. Conclusion: In the present study phylogenetic and functional analyses have allowed the identification of three major substrate-specific CDF groups. The metal selectivity of newly identified CDF transporters can be inferred from their position in one of these groups. The modified signature sequence proposed in this work can be

  8. Improving adolescent social competence and behavior: a randomized trial of an 11-week equine facilitated learning prevention program.

    Science.gov (United States)

    Pendry, Patricia; Carr, Alexa M; Smith, Annelise N; Roeter, Stephanie M

    2014-08-01

    There is growing evidence that promoting social competence in youth is an effective strategy to prevent mental, emotional, and behavioral disorders in adulthood. Research suggests that programs delivered in collaboration with schools are particularly effective when they target social and emotional skill building, utilize an interactive instructional style, provide opportunities for youth participation and self-direction, and include explicit attempts to enhance youth social competence. A relatively new but popular approach that incorporates these characteristics is human animal interaction, which can be implemented in educational settings. We report the results from a randomized clinical trial examining the effects of an 11-week equine facilitated learning (EFL) program on the social competence and behavior of 5th-8th grade children. Children (N = 131) were recruited through referral by school counselors and school-based recruitment and then screened for low social competence. Researchers randomly assigned children to an experimental (n = 53) or waitlisted control group (n = 60). Children in the experimental group participated in an 11-week EFL program consisting of once-weekly, 90-min sessions of individual and team-focused activities, whereas children in the control group served as a wait-listed control and participated 16 weeks later. Parents of children in both groups rated child social competence at pretest and posttest. Three independent raters observed and reported children's positive and negative behavior using a validated checklist during each weekly session. Results indicated that program participation had a moderate treatment effect (d = .55) on social competence (p = .02) that was independent of pretest levels, age, gender, and referral status. Results showed that higher levels of program attendance predicted children's trajectories of observed positive (β = .500; p = .003) and negative behavior (β = -.062; p < .001) over the 11-week program.

  9. A Control Variate Method for Probabilistic Performance Assessment. Improved Estimates for Mean Performance Quantities of Interest

    Energy Technology Data Exchange (ETDEWEB)

    MacKinnon, Robert J.; Kuhlman, Kristopher L

    2016-05-01

    We present a method of control variates for calculating improved estimates of mean performance quantities of interest, E(PQI), computed from Monte Carlo probabilistic simulations. An example of a PQI is the concentration of a contaminant at a particular location in a problem domain computed from simulations of transport in porous media. To simplify the presentation, the method is described in the setting of a one-dimensional elliptical model problem involving a single uncertain parameter represented by a probability distribution. The approach can be easily implemented for more complex problems involving multiple uncertain parameters, and in particular for application to probabilistic performance assessment of deep geologic nuclear waste repository systems. Numerical results indicate the method can produce estimates of E(PQI) having superior accuracy on coarser meshes and reduce the number of simulations needed to achieve an acceptable estimate.
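
    The control-variate idea itself is compact: correct the Monte Carlo mean of Y with a correlated quantity X whose mean is known. A generic sketch (the surrogate X and its known mean are invented for illustration, not taken from the report):

        import numpy as np

        def control_variate_mean(y, x, mu_x):
            # Control-variate estimate of E[Y], using the variance-optimal
            # coefficient c = Cov(Y, X) / Var(X).
            c = np.cov(y, x)[0, 1] / np.var(x, ddof=1)
            return np.mean(y) - c * (np.mean(x) - mu_x)

        # Invented example: Y from an expensive model, X from a cheap surrogate
        rng = np.random.default_rng(1)
        x = rng.normal(2.0, 1.0, 500)             # surrogate with known mean 2.0
        y = 3.0 * x + rng.normal(0.0, 0.5, 500)   # correlated quantity of interest
        print(control_variate_mean(y, x, mu_x=2.0))   # near E[Y] = 6.0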

  10. Improving estimates of oil pollution to the sea from land-based sources.

    Science.gov (United States)

    Saito, Laurel; Rosen, Michael R; Roesner, Larry; Howard, Nalin

    2010-07-01

    This paper presents improvements to the calculation methods used in the National Research Council's (NRC) Oil in the Sea reports from 2003 and 1985 to estimate land-based contributions of petroleum hydrocarbons to the sea from North America. Using procedures similar to the 2003 report, but with more robust methods for handling non-detections, the estimated land-based contributions for 1977 and 2000 were over 50% lower than the best 1996 estimate in the NRC report. The largest loads were from the northeastern United States and the Gulf of Mexico region for both the 2003 report and the updated calculations. The calculations involved many sources of uncertainty, including a lack of available data, variable methods of measuring and reporting data, and variable methods of reporting values below detection limits. This updated analysis of land-based loads of petroleum hydrocarbons to the sea highlights the continued need for more monitoring and research on the inputs, fates, and effects of these sources.

  11. SOC Estimation of LiFePO4 Battery based on Improved Ah Integral Method

    Directory of Open Access Journals (Sweden)

    Zheng ZHU

    2013-07-01

    State of charge (SOC) is one of the most important status parameters of an energy storage system and can be used to predict the available mileage of an electric vehicle. The accuracy of SOC estimation plays a vital role in the usability and safety of the battery. To meet practical demands, a novel method to predict the SOC of a LiFePO4 battery is presented in this paper, which defines the correction coefficient separately for the two working conditions of charging and discharging. Based on factors such as coulombic efficiency, charge and discharge current, and temperature, an Ah integral SOC estimation method with two kinds of efficiency correction coefficients is established through extensive experimental study. Experiments show that the SOC estimation error is less than 5%. Compared with the original Ah method, the improved Ah method is more advantageous in accuracy and reliability.
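
    An Ah-integral SOC update with separate charge/discharge correction coefficients can be sketched as follows; the coefficient values, sign convention, and units are assumptions of ours, not the paper's calibrated, temperature-dependent values.

        def soc_ah_integral(soc0, currents, dt, capacity_ah,
                            eta_charge=0.98, eta_discharge=1.00):
            # Ah-integral SOC with separate correction coefficients for
            # charging and discharging. Convention: current > 0 means
            # discharge; dt in seconds, capacity in ampere-hours.
            soc = soc0
            for i in currents:
                eta = eta_discharge if i > 0 else eta_charge
                soc -= eta * i * dt / (3600.0 * capacity_ah)
            return soc

        # One hour of 10 A discharge on a 100 Ah cell: SOC drops by ~10%
        print(soc_ah_integral(0.9, [10.0] * 3600, dt=1.0, capacity_ah=100.0))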

  12. Pre- and post-processing filters for improvement of blood velocity estimation

    DEFF Research Database (Denmark)

    Schlaikjer, Malene; Jensen, Jørgen Arendt

    2000-01-01

    with different signal-to-noise ratios (SNR). The exact extent of the vessel and the true velocities are thereby known. Velocity estimates were obtained by employing Kasai's autocorrelator on the data. The post-processing filter was used on the computed 2D velocity map. An improvement of the RMS error...... velocity in the vessels. Post-processing is beneficial to obtain an image that minimizes the variation, and present the important information to the clinicians. Applying the theory of fluid mechanics introduces restrictions on the variations possible in a flow field. Neighboring estimates in time and space...... should be highly correlated, since transitions should occur smoothly. This idea is the basis of the algorithm developed in this study. From Bayesian image processing theory an a posteriori probability distribution for the velocity field is computed based on constraints on smoothness. An estimate...

  13. Angular velocity estimation based on star vector with improved current statistical model Kalman filter.

    Science.gov (United States)

    Zhang, Hao; Niu, Yanxiong; Lu, Jiazhen; Zhang, He

    2016-11-20

    Angular velocity information is a requisite for a spacecraft's guidance, navigation, and control system. In this paper, an approach for angular velocity estimation based merely on star vector measurements, with an improved current statistical model Kalman filter, is proposed. High-precision angular velocity estimation can be achieved under dynamic conditions. The amount of calculation is also reduced compared to a conventional Kalman filter. Different trajectories are simulated to test this approach, and experiments with real starry sky observations are implemented for further confirmation. The estimation accuracy is proved to be better than 10^-4 rad/s under various conditions. Both the simulation and the experiment demonstrate that the described approach is effective and shows excellent performance under both static and dynamic conditions.

  14. Optimization of Camera Arrangement Using Correspondence Field to Improve Depth Estimation.

    Science.gov (United States)

    Fu, Shichao; Safaei, Farzad; Li, Wanqing

    2017-04-18

    Stereo matching algorithms attempt to estimate depth from the images obtained by two cameras. In most cases, the arrangement of the cameras (their locations and orientations with respect to the scene) is determined based on human experience. In this paper, it is shown that the camera arrangement can be optimized using the concept of the correspondence field (CF) for better acquisition of depth. Specifically, the paper demonstrates the relationship between the correspondence field of a pair of cameras and depth estimation accuracy, and presents a method to optimize their arrangement based on the gradient of the CF. The experimental results show that a pair of cameras optimized by the proposed method can improve the accuracy of depth estimation by as much as 30% compared to conventional camera arrangements.

  15. The importance of crown dimensions to improve tropical tree biomass estimates.

    Science.gov (United States)

    Goodman, Rosa C; Phillips, Oliver L; Baker, Timothy R

    2014-06-01

    Tropical forests play a vital role in the global carbon cycle, but the amount of carbon they contain and its spatial distribution remain uncertain. Recent studies suggest that once tree height is accounted for in biomass calculations, in addition to diameter and wood density, carbon stock estimates are reduced in many areas. However, it is possible that larger crown sizes might offset the reduction in biomass estimates in some forests where tree heights are lower, because even comparatively short trees develop large, well-lit crowns in or above the forest canopy. While current allometric models and theory focus on diameter, wood density, and height, the influence of crown size and structure has not been well studied. To test the extent to which accounting for crown parameters can improve biomass estimates, we harvested and weighed 51 trees (11-169 cm diameter) in southwestern Amazonia, where no direct biomass measurements had been made. The trees in our study had nearly half of their total aboveground biomass in the branches (44% ± 2% [mean ± SE]), demonstrating the importance of accounting for tree crowns. Consistent with our predictions, key pantropical equations that include height, but do not account for crown dimensions, underestimated the summed biomass of all 51 trees by 11% to 14%, primarily due to substantial underestimates for many of the largest trees. In our models, including crown radius greatly improves performance and reduces error, especially for the largest trees. In addition, over the full data set, crown radius explained more variation in aboveground biomass (10.5%) than height (6.0%). Crown form is also important: trees with a monopodial architectural type are estimated to have 21-44% less mass than trees with other growth patterns. Our analysis suggests that accounting for crown allometry would substantially improve the accuracy of tropical estimates of tree biomass and its distribution in primary and degraded forests.
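
    A log-log regression with crown radius as an added predictor, of the general form the study argues for, can be fit directly; the exact model form and any back-transformation bias correction used in the paper may differ from this sketch.

        import numpy as np

        def fit_biomass_model(d, wd, h, cr, agb):
            # Allometry with crown radius as an extra predictor:
            # ln(AGB) = b0 + b1 ln(D) + b2 ln(WD) + b3 ln(H) + b4 ln(CR)
            X = np.column_stack([np.ones_like(d), np.log(d), np.log(wd),
                                 np.log(h), np.log(cr)])
            beta, *_ = np.linalg.lstsq(X, np.log(agb), rcond=None)
            return beta   # regression coefficients b0..b4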

  16. Soft Sensor of Vehicle State Estimation Based on the Kernel Principal Component and Improved Neural Network

    Directory of Open Access Journals (Sweden)

    Haorui Liu

    2016-01-01

    In car control systems, some key vehicle states are hard to measure directly and accurately when running on the road, and the cost of measurement is high as well. To address these problems, a vehicle state estimation method based on kernel principal component analysis and an improved Elman neural network is proposed. Combined with a nonlinear vehicle model with three degrees of freedom (3 DOF; longitudinal, lateral, and yaw motion), this paper applies the method to soft sensing of the vehicle states. The simulation results of a double lane change, tested by Matlab/SIMULINK cosimulation, prove the KPCA-IENN algorithm (kernel principal component analysis and improved Elman neural network) to be quick and precise in tracking the vehicle states within the nonlinear region. The method can meet the software performance requirements of vehicle state estimation in precision, tracking speed, noise suppression, and other aspects.

  17. The electronic image stabilization technology research based on improved optical-flow motion vector estimation

    Science.gov (United States)

    Wang, Chao; Ji, Ming; Zhang, Ying; Jiang, Wentao; Lu, Xiaoyan; Wang, Jiaoying; Yang, Heng

    2016-01-01

    Electronic image stabilization based on an improved optical-flow motion vector estimation technique can effectively correct abnormal image motion such as jitter and rotation. Firstly, ORB features are extracted from the image and a set of regions is built on these features. Secondly, the optical-flow vector is computed in the feature regions; to reduce the computational complexity, a multi-resolution pyramid strategy is used to calculate the motion vector of the frame. Finally, qualitative and quantitative analyses of the algorithm's effect are carried out. The results show that the proposed algorithm has better stability than image stabilization based on traditional optical-flow motion vector estimation.
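
    A minimal OpenCV sketch of the pipeline described (ORB features, pyramidal Lucas-Kanade optical flow, then a robust global-motion fit); the motion-compensation/warping stage is omitted, and the function name and parameter choices are ours, not the paper's.

        import cv2
        import numpy as np

        def frame_motion(prev_gray, curr_gray):
            # ORB features in the previous frame, tracked into the current
            # frame with pyramidal Lucas-Kanade optical flow, then a robust
            # similarity fit; the returned 2x3 matrix describes the jitter.
            orb = cv2.ORB_create(500)
            kps = orb.detect(prev_gray, None)
            pts = cv2.KeyPoint_convert(kps).reshape(-1, 1, 2).astype(np.float32)
            nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
            good = status.ravel() == 1
            M, _ = cv2.estimateAffinePartial2D(pts[good], nxt[good])
            return M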

  18. Accuracy Improvement of Zenith Tropospheric Delay Estimation Based on GPS Precise Point Positioning Algorithm

    Institute of Scientific and Technical Information of China (English)

    ZHU Qinglin; ZHAO Zhenwei; LIN Leke; WU Zhensen

    2010-01-01

    In precise point positioning (PPP), some systematic errors that cannot be accurately modeled remain in the GPS observations and inevitably degrade the precision of zenith tropospheric delay (ZTD) estimation. The stochastic models used in the GPS PPP mode are compared. The results show that the precision of PPP-derived ZTD can be clearly improved by selecting a suitable stochastic model for the GPS measurements. Low-elevation observations contain more tropospheric information, which can improve the estimation of ZTD. A new stochastic model based on the cosine square of the satellite elevation is presented. The results show that this elevation-based cosine square stochastic model performs better than previous stochastic models.

  19. Experimental verification of an interpolation algorithm for improved estimates of animal position.

    Science.gov (United States)

    Schell, Chad; Jaffe, Jules S

    2004-07-01

    This article presents experimental verification of an interpolation algorithm that was previously proposed in Jaffe [J. Acoust. Soc. Am. 105, 3168-3175 (1999)]. The goal of the algorithm is to improve estimates of both target position and target strength by minimizing a least-squares residual between noise-corrupted target measurement data and the output of a model of the sonar's amplitude response to a target at a set of known locations. Although this positional estimator was shown to be a maximum likelihood estimator, in principle, experimental verification was desired because of interest in understanding its true performance. Here, the accuracy of the algorithm is investigated by analyzing the correspondence between a target's true position and the algorithm's estimate. True target position was measured by precise translation of a small test target (bead) or from the analysis of images of fish from a coregistered optical imaging system. Results with the stationary spherical test bead in a high signal-to-noise environment indicate that a large increase in resolution is possible, while results with commercial aquarium fish indicate a smaller increase is obtainable. However, in both experiments the algorithm provides improved estimates of target position over those obtained by simply accepting the angular positions of the sonar beam with maximum output as target position. In addition, increased accuracy in target strength estimation is possible by considering the effects of the sonar beam patterns relative to the interpolated position. A benefit of the algorithm is that it can be applied "ex post facto" to existing data sets from commercial multibeam sonar systems when only the beam intensities have been stored after suitable calibration.

  20. Improved methods for GRACE-derived groundwater storage change estimation in large-scale agroecosystems

    Science.gov (United States)

    Brena, A.; Kendall, A. D.; Hyndman, D. W.

    2013-12-01

    Large-scale agroecosystems are major providers of agricultural commodities and an important component of the world's food supply. In agroecosystems that depend mainly on groundwater, it is well known that long-term sustainability can be at risk because of water management strategies and climatic trends. The water balance of groundwater-dependent agroecosystems such as the High Plains aquifer (HPA) is often dominated by pumping and irrigation, which enhance hydrological processes such as evapotranspiration, return flow, and recharge in cropland areas. This work provides and validates new quantitative groundwater estimation methods for the HPA that combine satellite-based estimates of terrestrial water storage (GRACE), hydrological data assimilation products (NLDAS-2), and in situ measurements of groundwater levels and irrigation rates. The combined data can be used to elucidate the controls of irrigation on the water balance components of agroecosystems, such as crop evapotranspiration, soil moisture deficit, and recharge. Our work covers a decade of continuous observations and model estimates from 2003 to 2013, which includes a significant drought since 2011. This study aims to: (1) test the sensitivity of groundwater storage to soil moisture and irrigation, (2) improve estimates of irrigation and soil moisture deficits, and (3) infer mean values of groundwater recharge across the HPA. The results show (1) significant improvements in GRACE-derived aquifer storage changes using methods that incorporate irrigation and soil moisture deficit data, (2) an acceptable correlation between the observed and estimated aquifer storage time series for the analyzed period, and (3) empirically estimated annual rates of groundwater recharge that are consistent with previous geochemical and modeling studies. We suggest testing these correction methods in other large-scale agroecosystems with intensive groundwater pumping and irrigation.

  1. Improving winter leaf area index estimation in coniferous forests and its significance in estimating the land surface albedo

    Science.gov (United States)

    Wang, Rong; Chen, Jing M.; Pavlic, Goran; Arain, Altaf

    2016-09-01

    Winter leaf area index (LAI) of evergreen coniferous forests exerts strong control on the interception of snow, snowmelt, and the energy balance. Simulating winter LAI and the associated winter processes in land surface models is challenging. Retrieving winter LAI from remote sensing data is difficult due to cloud contamination, poor illumination, lower solar elevation, and higher radiation reflection from the snow background. Underestimated winter LAI in evergreen coniferous forests is one of the major issues limiting the application of current remote sensing LAI products, and it has not been fully addressed in past studies. In this study, we used needle lifespan to correct winter LAI in a remote sensing product developed by the University of Toronto. For validation purposes, the corrected winter LAI was then used to calculate land surface albedo at five FLUXNET coniferous forest sites in Canada. The RMSE and bias values for the estimated albedo were 0.05 and 0.011, respectively, over all sites. The albedo map over coniferous forests across Canada produced with the corrected winter LAI showed much better agreement with the GLASS (Global LAnd Surface Satellites) albedo product than the one produced with the uncorrected winter LAI. The results reveal that the corrected winter LAI yields much greater accuracy in simulating land surface albedo, making the new LAI product an improvement over the original one. Our study will help increase the usability of remote sensing LAI products in land surface energy budget modeling.

  2. Multilayer soil model for improvement of soil moisture estimation using the small perturbation method

    Science.gov (United States)

    Song, Kaijun; Zhou, Xiaobing; Fan, Yong

    2009-12-01

    A multilayer soil model is presented for improved estimation of soil moisture content using the first-order small perturbation method (SPM) applied to measurements of the radar backscattering coefficient. The total reflection coefficient of natural bare soil, including the volume scattering contribution, is obtained using the multilayer model. The surface reflection terms in the SPM model are replaced by the total reflection coefficient of the multilayer soil surface when estimating soil moisture. The difference between the modified SPM model and the original SPM surface model is that the modified model includes both the surface scattering and the volumetric scattering of natural bare soil. Both the modified and original SPM models are tested in soil moisture retrievals using experimental microwave backscattering coefficient data from the literature. Results show that the mean square errors between the measured data and the values estimated by the modified SPM model over all samples are 5.2%, while the errors from the original SPM model are 8.4%. This indicates that the soil moisture estimation capability of the SPM model is improved when the surface reflection terms are replaced by the total reflection coefficients of the multilayer soil model over bare or very sparsely vegetated fields.

  3. Combining optimization methods with response spectra curve-fitting toward improved damping ratio estimation

    Science.gov (United States)

    Brewick, Patrick T.; Smyth, Andrew W.

    2016-12-01

    The authors have previously shown that many traditional approaches to operational modal analysis (OMA) struggle to properly identify the modal damping ratios for bridges under traffic loading due to the interference caused by the driving frequencies of the traffic loads. This paper presents a novel methodology for modal parameter estimation in OMA that overcomes the problems presented by driving frequencies and significantly improves the damping estimates. This methodology is based on finding the power spectral density (PSD) of a given modal coordinate, and then dividing the modal PSD into separate regions, left- and right-side spectra. The modal coordinates were found using a blind source separation (BSS) algorithm and a curve-fitting technique was developed that uses optimization to find the modal parameters that best fit each side spectra of the PSD. Specifically, a pattern-search optimization method was combined with a clustering analysis algorithm and together they were employed in a series of stages in order to improve the estimates of the modal damping ratios. This method was used to estimate the damping ratios from a simulated bridge model subjected to moving traffic loads. The results of this method were compared to other established OMA methods, such as Frequency Domain Decomposition (FDD) and BSS methods, and they were found to be more accurate and more reliable, even for modes that had their PSDs distorted or altered by driving frequencies.

  4. Non-invasively measured cardiac magnetic field maps improve the estimation of the current distribution

    OpenAIRE

    Kosch, Olaf; Steinhoff, Uwe; Trahms, Lutz; Trontelj, Zvonko; Jazbinšek, Vojko

    2015-01-01

    Comprehensive body surface potential mapping (BSPM) and magnetic field mapping (MFM) measurements have been carried out in order to improve the estimation of the current distribution generated by the human heart. Electric and magnetic fields and also the planar gradient of the magnetic field during the QRS complex were imaged as a time series of field maps. A model of the current distribution should explain the features of both BSPM and MFM. Simulated maps generated by a single dipole or a st...

  5. Improving the Estimations of VaR-GARCH Using Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    WANG Chun-feng; LI Gang

    2001-01-01

    In this paper, a genetic algorithm (GA) is put forward to improve the accuracy and robustness of parameter estimation in GARCH models, and the results are applied to calculate value at risk (VaR). Computational examples for the Dow Jones Index and an exchange rate are presented, and the results indicate that the GA-based VaR-GARCH model outperforms the conventional numerical method in computational robustness and accuracy.
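    A toy illustration of the idea: a real-coded genetic algorithm searching the GARCH(1,1) parameter space by minimizing the Gaussian negative log-likelihood, after which the fitted conditional variance gives a one-day VaR. The population size, operators and bounds are arbitrary choices, not those of the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def garch11_nll(params, r):
            # negative log-likelihood of a Gaussian GARCH(1,1):
            # h_t = w + a*r_{t-1}^2 + b*h_{t-1}
            w, a, b = params
            if w <= 0 or a < 0 or b < 0 or a + b >= 1:
                return np.inf
            h = np.empty_like(r)
            h[0] = r.var()
            for t in range(1, len(r)):
                h[t] = w + a * r[t - 1] ** 2 + b * h[t - 1]
            return 0.5 * np.sum(np.log(2 * np.pi * h) + r ** 2 / h)

        def ga_fit(r, pop_size=60, gens=80):
            lo, hi = np.array([1e-6, 0.0, 0.0]), np.array([0.2, 0.5, 0.98])
            pop = rng.uniform(lo, hi, size=(pop_size, 3))
            best, best_nll = pop[0], np.inf
            for _ in range(gens):
                nll = np.array([garch11_nll(p, r) for p in pop])
                if nll.min() < best_nll:
                    best_nll, best = nll.min(), pop[nll.argmin()].copy()
                i, j = rng.integers(0, pop_size, (2, pop_size))
                parents = np.where((nll[i] < nll[j])[:, None], pop[i], pop[j])  # tournament
                partners = parents[rng.permutation(pop_size)]
                alpha = rng.random((pop_size, 1))
                pop = alpha * parents + (1 - alpha) * partners      # arithmetic crossover
                pop += rng.normal(0.0, 0.02, pop.shape) * (rng.random(pop.shape) < 0.2)
                pop = np.clip(pop, lo, hi)
                pop[0] = best                                        # elitism
            return best

        # sketch of use: w, a, b = ga_fit(returns); then
        # h_next = w + a * returns[-1]**2 + b * h_last  (h_last from the filter pass)
        # and a one-day 95% VaR is 1.645 * sqrt(h_next) for zero-mean returns.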

  6. Improving estimation of microseismic focal mechanisms using a high-resolution velocity model

    Science.gov (United States)

    Chen, T.; Chen, Y.; Lin, Y.; Huang, L.

    2015-12-01

    Injection and migration of CO2 during geological carbon sequestration change the pore pressure and stress distribution in the reservoir. The change in stress may induce brittle failure on fractures, causing microseismic events. Focal mechanisms of induced microseismic events are useful for understanding stress evolution in the reservoir. Accurate estimation of microseismic focal mechanisms depends on the accuracy of the velocity model. In this work, we study the improvement in the estimation of microseismic focal mechanisms gained by using a high-resolution velocity model. We obtain the velocity model using a velocity inversion algorithm with a modified total-variation regularization scheme rather than the commonly used Tikhonov regularization technique. We demonstrate with synthetic microseismic data that the modified total-variation scheme improves velocity inversion, and that the improved velocity models enhance the accuracy of the estimated focal mechanisms of microseismic events. We apply the new methodology to microseismic data acquired at a CO2-EOR (enhanced oil recovery) site at Aneth, Utah.
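    The contrast with Tikhonov regularization can be sketched with a small, generic example: least-squares inversion with a smoothed total-variation penalty, which preserves sharp velocity contrasts that a smoothness penalty would blur. The 1D toy operator below is illustrative only; the paper inverts seismic data in higher dimensions with a different scheme.

        import numpy as np
        from scipy.optimize import minimize

        def tv_regularized_inversion(G, d, lam, eps=1e-6):
            # min_m ||G m - d||^2 + lam * sum_i sqrt((m_{i+1} - m_i)^2 + eps)
            # (eps smooths the TV term so gradient-based optimization applies)
            def cost(m):
                r = G @ m - d
                dm = np.diff(m)
                return r @ r + lam * np.sum(np.sqrt(dm ** 2 + eps))
            m0 = np.zeros(G.shape[1])
            return minimize(cost, m0, method="L-BFGS-B").x

        # toy 1D problem: recover a blocky model from cumulative-average data
        rng = np.random.default_rng(6)
        m_true = np.r_[np.ones(20), 2 * np.ones(20)]
        G = np.tril(np.ones((40, 40))) / 40.0
        d = G @ m_true + rng.normal(0, 0.01, 40)
        m_est = tv_regularized_inversion(G, d, lam=0.05)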

  7. Parameters Nonlinear Estimation of the Propulsion System Performance Seeking Control Using Improved PSO

    Directory of Open Access Journals (Sweden)

    Yin Dawei

    2010-12-01

    Full Text Available The estimation of aeroengine component deviation parameters (CDP) is an important part of aeronautical propulsion system performance-seeking control (PSC), which traditionally employs a linear Kalman filter based on a piecewise state variable model (SVM). However, the SVM is not easy to obtain, and linearizing the nonlinear model to derive the SVM introduces errors. Nonlinear parameter estimation based directly on the nonlinear aeroengine model is therefore introduced. The nonlinear estimation model is established from the aeroengine operation balance and the matching of measured and calculated values of the measurable parameters. The nonlinear estimation is recast as a problem of solving complex nonlinear equations, which is equivalent to an optimization problem. Time-varying inertia weight particle swarm optimization (PSO) with a constriction factor is employed to solve the problem in order to satisfy the requirements of precision and calculation speed. The simulation results for a given turbofan engine show that the improved PSO algorithm can estimate the CDP precisely with satisfactory convergence speed.
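    A compact sketch of the optimizer named in the abstract: particle swarm optimization with a linearly decreasing inertia weight and Clerc's constriction factor, applied here to a small nonlinear-equation residual standing in for the engine matching equations. The bounds, coefficients and demo residual are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(1)

        def pso_minimize(f, lo, hi, n_particles=40, iters=200):
            # PSO with linearly decreasing inertia weight (0.9 -> 0.4) and
            # Clerc's constriction factor chi = 0.729, c1 = c2 = 2.05
            lo, hi = np.asarray(lo, float), np.asarray(hi, float)
            x = rng.uniform(lo, hi, (n_particles, lo.size))
            v = np.zeros_like(x)
            pbest = x.copy()
            pbest_val = np.array([f(p) for p in x])
            g = pbest[pbest_val.argmin()].copy()
            chi, c1, c2 = 0.729, 2.05, 2.05
            for k in range(iters):
                w = 0.9 - 0.5 * k / (iters - 1)        # time-varying inertia weight
                r1, r2 = rng.random((2, n_particles, lo.size))
                v = chi * (w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x))
                x = np.clip(x + v, lo, hi)
                val = np.array([f(p) for p in x])
                better = val < pbest_val
                pbest[better], pbest_val[better] = x[better], val[better]
                g = pbest[pbest_val.argmin()].copy()
            return g, pbest_val.min()

        # stand-in for the matching equations: minimize the squared residual norm
        def residual(x):
            g1 = x[0] ** 2 + x[1] - 2.0
            g2 = x[0] + x[1] ** 2 - 2.0
            return g1 ** 2 + g2 ** 2

        print(pso_minimize(residual, [-3, -3], [3, 3]))  # -> near (1, 1), ~0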

  8. Improving MIMO-OFDM decision-directed channel estimation by utilizing error-correcting codes

    Directory of Open Access Journals (Sweden)

    P. Beinschob

    2009-05-01

    Full Text Available In this paper a decision-directed Multiple-Input Multiple-Output (MIMO) channel tracking algorithm is enhanced to raise the channel estimate accuracy. While decision-directed channel estimation (DDCE) is prone to error propagation, the enhancement employs channel decoding in the tracking process: a quantized block of symbols is checked for consistency via the channel decoder, possibly corrected, and then used. This yields more robust tracking of the channel in terms of bit error rate and improves the channel estimate under certain conditions.

    Equalization is performed to prove the feasibility of the obtained channel estimate; therefore, a combined signal consisting of data and pilot symbols is sent. Adaptive filters are applied to exploit correlations in the time, frequency and spatial domains. By using good error-correcting coding schemes such as Turbo codes or Low Density Parity Check (LDPC) codes, adequate channel estimates can be acquired even at low signal-to-noise ratios (SNR). The proposed algorithm, among two others, is applied for channel estimation and equalization, and the results are compared.

  9. Improving reservoir volumetric estimations in petroleum resource assessment using discovery process models

    Institute of Scientific and Technical Information of China (English)

    Chen Zhuoheng; Osadetz Kirk G.

    2009-01-01

    The reservoir volumetric approach represents a widely accepted, but flawed method of petroleum play resource calculation. In this paper, we propose a combination of techniques that can improve the applicability and quality of the resource estimation. These techniques include: 1) the use of the Multivariate Discovery Process model (MDP) to derive unbiased distribution parameters of reservoir volumetric variables and to reveal correlations among the variables; 2) the use of the Geo-anchored method to estimate simultaneously the number of oil and gas pools in the same play; and 3) the cross-validation of assessment results from different methods. These techniques are illustrated using an example of crude oil and natural gas resource assessment of the Sverdrup Basin, Canadian Archipelago. The example shows that when direct volumetric measurements of the untested prospects are not available, the MDP model can help derive unbiased estimates of the distribution parameters by using information from the discovered oil and gas accumulations. It also shows that an estimation of the number of oil and gas accumulations and associated size ranges from a discovery process model can provide an alternative and efficient approach when inadequate geological data hinder the estimation. Cross-examination of assessment results derived using different methods allows one to focus on and analyze the causes of the major differences, thus providing a more reliable assessment outcome.

  10. Improvement of Ocean State Estimation by Assimilating Mapped Argo Drift Data

    Directory of Open Access Journals (Sweden)

    Shuhei Masuda

    2014-01-01

    Full Text Available We investigated the impact of assimilating a mapped dataset of subsurface ocean currents into an ocean state estimation. We carried out two global ocean state estimations from 2000 to 2007 using the K7 four-dimensional variational data synthesis system, one of which included an additional map of climatological geostrophic currents estimated from the global set of Argo floats. We assessed the representativeness of the volume transport in the two exercises. The assimilation of Argo ocean current data at only one level, 1000 dbar depth, had subtle impacts on the estimated volume transports, which were strongest in the subtropical North Pacific. The corrections at 10°N, where the impact was most notable, arose through the nearly complete offset of wind stress curl by the data synthesis system in conjunction with the first mode baroclinic Rossby wave adjustment. Our results imply that subsurface current data can be effective for improving the estimation of global oceanic circulation by a data synthesis.

  11. Experimental and Analytical Studies on Improved Feedforward ML Estimation Based on LS-SVR

    Directory of Open Access Journals (Sweden)

    Xueqian Liu

    2013-01-01

    Full Text Available Maximum likelihood (ML) estimation is the most common and effective parameter estimation method. However, when dealing with small samples and low signal-to-noise ratio (SNR), threshold effects arise and estimation performance degrades greatly. It has been shown that the support vector machine (SVM) is suitable for small samples. Consequently, we exploit the linear relationship between the inputs and outputs of least squares support vector regression (LS-SVR) and treat the LS-SVR process as a time-varying linear filter that increases the input SNR of the received signals and decreases the threshold value of the mean square error (MSE) curve. Furthermore, taking single-tone sinusoidal frequency estimation as an example and integrating data analysis with experimental validation, we verify that, if the LS-SVR parameters are set appropriately, the LS-SVR process not only preserves the single-tone sinusoid and additive white Gaussian noise (AWGN) channel characteristics of the original signals, but also improves frequency estimation performance. In experimental simulations, the LS-SVR process is applied to two common and representative single-tone sinusoidal ML frequency estimation algorithms: the DFT-based frequency-domain periodogram (FDP) and the phase-based Kay algorithms. The threshold values of their MSE curves decrease by 0.3 dB and 1.2 dB, respectively, clearly demonstrating the advantage of the proposed algorithm.
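    The LS-SVR "time-varying linear filter" idea can be sketched as follows: with an RBF kernel, LS-SVR training reduces to a single linear solve, and the fitted function is used to denoise the received samples before a conventional ML frequency estimator is applied. The kernel width and regularization values below are arbitrary.

        import numpy as np

        def ls_svr_fit_predict(x, y, x_new, gamma=100.0, sigma=0.5):
            # standard LS-SVR dual system (Suykens):
            # [0  1^T    ] [b    ]   [0]
            # [1  K + I/g] [alpha] = [y]
            K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * sigma ** 2))
            n = len(x)
            A = np.zeros((n + 1, n + 1))
            A[0, 1:] = 1.0
            A[1:, 0] = 1.0
            A[1:, 1:] = K + np.eye(n) / gamma
            sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
            b, alpha = sol[0], sol[1:]
            K_new = np.exp(-(x_new[:, None] - x[None, :]) ** 2 / (2 * sigma ** 2))
            return K_new @ alpha + b

        # denoise a noisy sinusoid before periodogram-based ML frequency estimation
        t = np.linspace(0, 1, 200)
        noisy = np.sin(2 * np.pi * 5 * t) + np.random.default_rng(5).normal(0, 0.5, 200)
        smooth = ls_svr_fit_predict(t, noisy, t)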

  12. Improving riverine constituent concentration and flux estimation by accounting for antecedent discharge conditions

    Science.gov (United States)

    Zhang, Qian; Ball, William P.

    2017-04-01

    Regression-based approaches are often employed to estimate riverine constituent concentrations and fluxes from typically sparse concentration observations. One such approach is the recently developed WRTDS ("Weighted Regressions on Time, Discharge, and Season") method, which has been shown to provide more accurate estimates than prior approaches in a wide range of applications. Centered on WRTDS, this work aimed to develop improved models for constituent concentration and flux estimation by accounting for antecedent discharge conditions. Twelve modified models were developed and tested, each of which contains one additional flow variable that represents antecedent conditions and can be derived directly from the daily discharge record. High-resolution (~daily) data at nine diverse monitoring sites were used to evaluate the relative merits of the models for estimating six constituents: chloride (Cl), nitrate-plus-nitrite (NOx), total Kjeldahl nitrogen (TKN), total phosphorus (TP), soluble reactive phosphorus (SRP), and suspended sediment (SS). For each site-constituent combination, 30 concentration subsets were generated from the original data through Monte Carlo subsampling and then used to evaluate model performance. For the subsampling, three sampling strategies were adopted: (A) one random sample each month (12/year), (B) 12 random monthly samples plus an additional 8 random samples per year (20/year), and (C) flow-stratified sampling with 12 regular (non-storm) and 8 storm samples per year (20/year). Results reveal that estimation performance varies with both model choice and sampling strategy. In terms of model choice, the modified models show general improvement over the original model under all three sampling strategies. Major improvements were achieved for NOx by the long-term flow-anomaly model and for Cl by the ADF (average discounted flow) model and the short-term flow-anomaly model. Moderate improvements were achieved for SS, TP, and TKN
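    Two of the antecedent-flow regressors named in the abstract can be sketched directly from a daily discharge record: a trailing flow anomaly (trailing mean of log-discharge minus its long-term mean) and the average discounted flow (ADF), an exponentially discounted running mean. The exact window lengths and discount factors in the paper may differ; either series would then enter the WRTDS-style regression as one extra explanatory variable.

        import numpy as np

        def flow_anomaly(log_q, window):
            # trailing `window`-day mean of log-discharge minus the long-term mean
            kernel = np.ones(window) / window
            trailing = np.convolve(log_q, kernel, mode="full")[: len(log_q)]
            trailing[: window - 1] = np.nan  # incomplete windows
            return trailing - np.nanmean(log_q)

        def average_discounted_flow(q, delta=0.95):
            # ADF_t = (1 - delta) * sum_{k>=0} delta^k * Q_{t-k}, run recursively
            adf = np.empty(len(q))
            adf[0] = q[0]  # approximates the unobserved infinite history
            for t in range(1, len(q)):
                adf[t] = delta * adf[t - 1] + (1.0 - delta) * q[t]
            return adf

        # daily discharge -> candidate antecedent-condition regressors
        q = np.random.default_rng(7).lognormal(3.0, 0.6, 730)
        fa_long = flow_anomaly(np.log(q), window=365)
        adf = average_discounted_flow(q)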

  13. Promoting patient-centered care: a qualitative study of facilitators and barriers in healthcare organizations with a reputation for improving the patient experience.

    Science.gov (United States)

    Luxford, Karen; Safran, Dana Gelb; Delbanco, Tom

    2011-10-01

    To investigate organizational facilitators of and barriers to patient-centered care in US health care institutions renowned for improving the patient care experience. A qualitative study involving interviews of senior staff and patient representatives. Semi-structured interviews focused on organizational processes, senior leadership, work environment, measurement and feedback mechanisms, patient engagement, and information technology and access. Eight health care organizations across the USA with a reputation for successfully promoting patient-centered care. Forty individuals, including chief executives, quality directors, chief medical officers, administrative directors and patient committee representatives. Interviewees reported that several organizational attributes and processes are key facilitators for making care more patient-centered: (i) strong, committed senior leadership, (ii) clear communication of strategic vision, (iii) active engagement of patients and families throughout the institution, (iv) sustained focus on staff satisfaction, (v) active measurement and feedback reporting of patient experiences, (vi) adequate resourcing of care delivery redesign, (vii) staff capacity building, (viii) accountability and incentives, and (ix) a culture strongly supportive of change and learning. Interviewees reported that changing the organizational culture from a 'provider focus' to a 'patient focus', and the length of time it took to make that transition, were the principal barriers to transforming care delivery toward patient-centered care. Organizations that have succeeded in fostering patient-centered care have gone beyond mainstream frameworks for quality improvement based on clinical measurement and audit, and have adopted a strategic organizational approach to patient focus.

  14. Improved Forest Biomass and Carbon Estimations Using Texture Measures from WorldView-2 Satellite Data

    Directory of Open Access Journals (Sweden)

    Sandra Eckert

    2012-03-01

    Full Text Available Accurate estimation of aboveground biomass and carbon stock has gained importance in the context of the United Nations Framework Convention on Climate Change (UNFCCC) and the Kyoto Protocol. In order to develop improved forest stratum–specific aboveground biomass and carbon estimation models for humid rainforest in northeast Madagascar, this study analyzed texture measures derived from WorldView-2 satellite data. A forest inventory was conducted to develop stratum-specific allometric equations for dry biomass. On this basis, carbon was calculated by applying a conversion factor. After satellite data preprocessing, vegetation indices, principal components, and texture measures were calculated. The strength of their relationships with the stratum-specific plot data was analyzed using Pearson’s correlation. Biomass and carbon estimation models were developed by performing stepwise multiple linear regression. Pearson’s correlation coefficients revealed that (a) texture measures correlated more with biomass and carbon than spectral parameters, and (b) correlations were stronger for degraded forest than for non-degraded forest. For degraded forest, the texture measures of Correlation, Angular Second Moment, and Contrast, derived from the red band, contributed to the best estimation model, which explained 84% of the variability in the field data (relative RMSE = 6.8%). For non-degraded forest, the vegetation index EVI and the texture measures of Variance, Mean, and Correlation, derived from the newly introduced coastal blue band, both NIR bands, and the red band, contributed to the best model, which explained 81% of the variability in the field data (relative RMSE = 11.8%). These results indicate that estimation of tropical rainforest biomass/carbon, based on very high resolution satellite data, can be improved by (a) developing and applying forest stratum–specific models, and (b) including textural information in addition to spectral information.

  15. Improved methods to estimate the effective impervious area in urban catchments using rainfall-runoff data

    Science.gov (United States)

    Ebrahimian, Ali; Wilson, Bruce N.; Gulliver, John S.

    2016-05-01

    Impervious surfaces are useful indicators of urbanization impacts on water resources. Effective impervious area (EIA), the portion of total impervious area (TIA) that is hydraulically connected to the drainage system, is a better catchment parameter for determining actual urban runoff. Development of reliable methods for quantifying EIA rather than TIA is currently one of the knowledge gaps in rainfall-runoff modeling. The objective of this study is to improve the rainfall-runoff data analysis method for estimating the EIA fraction in urban catchments by eliminating the subjective part of the existing method and by reducing the uncertainty of EIA estimates. First, the theoretical framework is generalized using a general linear least squares model and a general criterion for categorizing runoff events. Issues with the existing method that reduce the precision of the EIA fraction estimates are then identified and discussed. Two improved methods, based on ordinary least squares (OLS) and weighted least squares (WLS) estimates, are proposed to address these issues. The proposed WLS method is then applied to eleven urban catchments in Europe, Canada, and Australia. The results are compared to map-measured directly connected impervious area (DCIA) and are shown to be consistent with the DCIA values. In addition, both improved methods are applied to nine urban catchments in Minnesota, USA. Both methods succeeded in removing the subjective component inherent in the current method's analysis of rainfall-runoff data. The WLS method is more robust than the OLS method and generates results that are different from, and more precise than, those of the OLS method in the presence of heteroscedastic residuals in our rainfall-runoff data.
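    The regression at the core of the method can be sketched as a weighted least-squares fit of event runoff depth against event rainfall depth, whose slope estimates the EIA fraction and whose intercept gives the initial loss. The event categorization step (separating impervious-only events from combined events), which is central to the published method, is omitted in this sketch.

        import numpy as np

        def eia_fraction_wls(rain, runoff, weights=None):
            # fit runoff = f_EIA * (rain - d): slope f_EIA is the effective
            # impervious fraction, d the initial-loss (depression storage) depth
            rain, runoff = np.asarray(rain, float), np.asarray(runoff, float)
            w = np.ones_like(rain) if weights is None else np.asarray(weights, float)
            X = np.column_stack([rain, np.ones_like(rain)])
            WX = X * w[:, None]
            # closed-form WLS: beta = (X' W X)^{-1} X' W y
            slope, intercept = np.linalg.solve(X.T @ WX, WX.T @ runoff)
            return slope, -intercept / slope

        # events (mm): runoff from ~30% connected impervious cover, 2 mm loss
        rain = np.array([5, 8, 12, 15, 20, 25, 30], float)
        runoff = 0.30 * (rain - 2.0) + np.random.default_rng(2).normal(0, 0.2, 7)
        print(eia_fraction_wls(rain, runoff))  # ~ (0.30, 2.0)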

  16. Improving North American gross primary production (GPP) estimates using atmospheric measurements of carbonyl sulfide (COS)

    Science.gov (United States)

    Chen, Huilin; Montzka, Steve; Andrews, Arlyn; Sweeney, Colm; Jacobson, Andy; Miller, Ben; Masarie, Ken; Jung, Martin; Gerbig, Christoph; Campbell, Elliott; Abu-Naser, Mohammad; Berry, Joe; Baker, Ian; Tans, Pieter

    2013-04-01

    Understanding the responses of gross primary production (GPP) to climate change is essential for improving our prediction of climate change. To this end, it is important to accurately partition the net ecosystem exchange of carbon into GPP and respiration. Recent studies suggest that carbonyl sulfide (COS) is a useful tracer for constraining GPP, based on the fact that COS and CO2 are taken up simultaneously by plants and that GPP and COS plant uptake are quantitatively correlated. We will present an assessment of North American GPP estimates from the Simple Biosphere (SiB) model, the Carnegie-Ames-Stanford Approach (CASA) model, and the MPI-BGC model through atmospheric transport simulations of COS in a receptor-oriented framework. The newly upgraded Hybrid Single Particle Lagrangian Integrated Trajectory model (HYSPLIT) will be employed to compute the influence functions, i.e. footprints, that link the surface fluxes to the concentration changes at the receptor observations. HYSPLIT is driven by the 3-hourly archived NAM 12 km meteorological data from NOAA NCEP. The background concentrations are calculated using empirical curtains along the west coast of North America, created by interpolating in time and space the observations at the NOAA/ESRL marine boundary layer stations and from aircraft vertical profiles. The plant uptake of COS is derived from the GPP estimates of the biospheric models. The soil uptake and anthropogenic emissions are from Kettle et al. 2002. In addition, we have developed a new soil flux map of COS based on observations of molecular hydrogen (H2), which shares a common soil uptake term but lacks a vegetative sink. We will also improve the GPP estimates by assimilating atmospheric observations of COS in the receptor-oriented framework, and then present an assessment of the improved GPP estimates against variations of climate variables such as temperature and precipitation.

  17. Barriers and facilitators to evidence based care of type 2 diabetes patients: experiences of general practitioners participating to a quality improvement program

    Directory of Open Access Journals (Sweden)

    Hannes Karen

    2009-07-01

    Full Text Available Objective To evaluate the barriers and facilitators to high-quality diabetes care as experienced by general practitioners (GPs) who participated in an 18-month quality improvement program (QIP). This QIP was implemented to promote compliance with international guidelines. Methods Twenty of the 120 GPs participating in the QIP underwent semi-structured interviews that focused on three questions: 'Which changes did you implement or observe in the quality of diabetes care during your participation in the QIP?', 'According to your experience, what induced these changes?' and 'What difficulties did you experience in making the changes?' Results Most GPs reported that enhanced knowledge, improved motivation, and a greater sense of responsibility were the key factors that led to greater compliance with diabetes care guidelines and consequent improvements in diabetes care. Other factors were improved communication with patients and consulting specialists, and reliance on diabetes nurse educators. Some GPs were reluctant to collaborate with specialists, and especially with diabetes educators and dieticians. Others blamed poor compliance with the guidelines on lack of time. Most interviewees reported that a considerable minority of patients were unwilling to change their lifestyles. Conclusion Qualitative research nested in an experimental trial may clarify the improvements that a QIP can bring about in general practice, provide insight into GPs' approach to diabetes care and reveal the program's limits. Implementation of a QIP encounters an array of cognitive, motivational, and relational obstacles that are embedded in the patient-healthcare provider relationship.

  18. Improving the Network Scale-Up Estimator: Incorporating Means of Sums, Recursive Back Estimation, and Sampling Weights

    Science.gov (United States)

    Habecker, Patrick; Dombrowski, Kirk; Khan, Bilal

    2015-01-01

    Researchers interested in studying populations that are difficult to reach through traditional survey methods can now draw on a range of methods to access these populations. Yet many of these methods are more expensive and difficult to implement than studies using conventional sampling frames and trusted sampling methods. The network scale-up method (NSUM) provides a middle ground for researchers who wish to estimate the size of a hidden population, but lack the resources to conduct a more specialized hidden population study. Through this method it is possible to generate population estimates for a wide variety of groups that are perhaps unwilling to self-identify as such (for example, users of illegal drugs or other stigmatized populations) via traditional survey tools such as telephone or mail surveys—by asking a representative sample to estimate the number of people they know who are members of such a “hidden” subpopulation. The original estimator is formulated to minimize the weight a single scaling variable can exert upon the estimates. We argue that this introduces hidden and difficult to predict biases, and instead propose a series of methodological advances on the traditional scale-up estimation procedure, including a new estimator. Additionally, we formalize the incorporation of sample weights into the network scale-up estimation process, and propose a recursive process of back estimation “trimming” to identify and remove poorly performing predictors from the estimation process. To demonstrate these suggestions we use data from a network scale-up mail survey conducted in Nebraska during 2014. We find that using the new estimator and recursive trimming process provides more accurate estimates, especially when used in conjunction with sampling weights. PMID:26630261
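    For reference, the classical scale-up estimator that the paper builds on, with survey weights added, looks as follows; the paper's "mean of sums" estimator and recursive back-estimation trimming are refinements beyond this sketch, and the toy numbers are invented.

        import numpy as np

        def network_scale_up(y_hidden, y_known, known_sizes, total_pop, weights=None):
            # y_hidden    : (n,) alters each respondent knows in the hidden group
            # y_known     : (n, K) alters known in K subpopulations of known size
            # known_sizes : (K,) sizes of those subpopulations
            # weights     : (n,) survey sampling weights (uniform if None)
            y_hidden = np.asarray(y_hidden, float)
            y_known = np.asarray(y_known, float)
            w = np.ones(len(y_hidden)) if weights is None else np.asarray(weights, float)
            # each respondent's personal network size from the known populations
            degree = y_known.sum(axis=1) / np.sum(known_sizes) * total_pop
            # classic Killworth ratio estimate of the hidden population size
            return total_pop * np.sum(w * y_hidden) / np.sum(w * degree)

        # toy survey: 4 respondents, 2 known subpopulations (sizes 5e4 and 2e4)
        est = network_scale_up([2, 0, 1, 3],
                               [[8, 3], [2, 1], [5, 2], [9, 4]],
                               known_sizes=[5e4, 2e4], total_pop=1.9e6,
                               weights=[1.2, 0.8, 1.0, 1.0])
        print(round(est))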

  19. Improving the Network Scale-Up Estimator: Incorporating Means of Sums, Recursive Back Estimation, and Sampling Weights.

    Directory of Open Access Journals (Sweden)

    Patrick Habecker

    Full Text Available Researchers interested in studying populations that are difficult to reach through traditional survey methods can now draw on a range of methods to access these populations. Yet many of these methods are more expensive and difficult to implement than studies using conventional sampling frames and trusted sampling methods. The network scale-up method (NSUM) provides a middle ground for researchers who wish to estimate the size of a hidden population, but lack the resources to conduct a more specialized hidden population study. Through this method it is possible to generate population estimates for a wide variety of groups that are perhaps unwilling to self-identify as such (for example, users of illegal drugs or other stigmatized populations) via traditional survey tools such as telephone or mail surveys--by asking a representative sample to estimate the number of people they know who are members of such a "hidden" subpopulation. The original estimator is formulated to minimize the weight a single scaling variable can exert upon the estimates. We argue that this introduces hidden and difficult to predict biases, and instead propose a series of methodological advances on the traditional scale-up estimation procedure, including a new estimator. Additionally, we formalize the incorporation of sample weights into the network scale-up estimation process, and propose a recursive process of back estimation "trimming" to identify and remove poorly performing predictors from the estimation process. To demonstrate these suggestions we use data from a network scale-up mail survey conducted in Nebraska during 2014. We find that using the new estimator and recursive trimming process provides more accurate estimates, especially when used in conjunction with sampling weights.

  20. Collaborative Project: Building improved optimized parameter estimation algorithms to improve methane and nitrogen fluxes in a climate model

    Energy Technology Data Exchange (ETDEWEB)

    Mahowald, Natalie [Cornell Univ., Ithaca, NY (United States)

    2016-11-29

    Soils in natural and managed ecosystems and wetlands are well-known sources of methane, nitrous oxide, and reactive nitrogen gases, but the magnitudes of gas flux to the atmosphere are still poorly constrained. Thus, the reasons for the large increases in atmospheric concentrations of methane and nitrous oxide since the preindustrial period are not well understood. Although methane and nitrous oxide are more potent greenhouse gases than carbon dioxide, their low atmospheric concentrations complicate empirical studies that could provide explanations. In addition to climate concerns, the emissions of reactive nitrogen gases from soils are important to the changing nitrogen balance in the earth system, are subject to human management, and may change substantially in the future. Improved modeling of the emission fluxes of these species from the land surface is therefore important. Currently, there are emission modules for methane and some nitrogen species in the Community Earth System Model's Community Land Model (CLM-ME/N); however, there are large uncertainties and problems in the simulations, resulting in coarse estimates. In this proposal, we seek to improve these emission modules by combining state-of-the-art process modules for emissions, available data, and new optimization methods. In earth science problems, we often have substantial data and knowledge of processes in disparate systems, and thus we need to combine data and a general process-level understanding into a model for projections of future climate that are as accurate as possible. The best methodologies for optimization of parameters in earth system models are still being developed. In this proposal we will develop and apply surrogate algorithms that a) were especially developed for computationally expensive simulations like the CLM-ME/N models; b) were (in the earlier surrogate optimization Stochastic RBF) demonstrated to perform very well on computationally expensive complex partial differential equations in

  1. An improved modal pushover analysis procedure for estimating seismic demands of structures

    Institute of Scientific and Technical Information of China (English)

    Mao Jianmeng; Zhai Changhai; Xie Lili

    2008-01-01

    The pushover analysis (POA) procedure is difficult to apply to high-rise buildings, as it cannot account for the contributions of higher modes. To overcome this limitation, a modal pushover analysis (MPA) procedure was proposed by Chopra et al. (2001). However, invariable lateral force distributions are still adopted in the MPA. In this paper, an improved MPA procedure is presented to estimate the seismic demands of structures, considering the redistribution of inertia forces after the structure yields. The improved procedure is verified with numerical examples of 5-, 9- and 22-story buildings. It is concluded that the improved MPA procedure is more accurate than either the POA procedure or the MPA procedure. In addition, the proposed procedure avoids a large computational effort by adopting a two-phase lateral force distribution.

  2. Improving regression-model-based streamwater constituent load estimates derived from serially correlated data

    Science.gov (United States)

    Aulenbach, Brent T.

    2013-10-01

    A regression-model-based approach is a commonly used, efficient method for estimating streamwater constituent load when there is a relationship between streamwater constituent concentration and continuous variables such as streamwater discharge, season and time. A subsetting experiment using a 30-year dataset of daily suspended sediment observations from the Mississippi River at Thebes, Illinois, was performed to determine the optimal sampling frequency, model calibration period length, and regression model methodology, as well as to determine the effect of serial correlation of model residuals on load estimate precision. Two regression-based methods were used to estimate streamwater loads: the Adjusted Maximum Likelihood Estimator (AMLE) and the composite method, a hybrid load estimation approach. While both methods accurately and precisely estimated loads at the model's calibration period time scale, precision was progressively worse at shorter reporting periods, from annual to monthly. Serial correlation in model residuals caused observed AMLE precision to be significantly worse than the model-calculated standard errors of prediction. The composite method effectively improved upon AMLE loads for shorter reporting periods, but required a sampling interval of 15 days or shorter when the serial correlations in the observed load residuals were greater than 0.15. AMLE precision was better at shorter sampling intervals and when using the shortest model calibration periods, such that the regression models better fit the temporal changes in the concentration-discharge relationship. The models with the largest errors typically had poor high-flow sampling coverage, resulting in unrepresentative models. Increasing sampling frequency and/or targeting high-flow sampling are more efficient approaches for ensuring sufficient sampling and avoiding poorly performing models than increasing the calibration period length.
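    The regression-model approach can be illustrated with a minimal log-log rating curve plus Duan's smearing retransformation bias correction; AMLE itself additionally handles censored observations and includes season and time terms, so this is a simplified stand-in, and the demo data are synthetic.

        import numpy as np

        def load_regression(q, conc, q_daily):
            # fit ln(C) = b0 + b1 ln(Q) on sampled days, then estimate daily loads
            x = np.column_stack([np.ones_like(q), np.log(q)])
            beta, *_ = np.linalg.lstsq(x, np.log(conc), rcond=None)
            resid = np.log(conc) - x @ beta
            smearing = np.mean(np.exp(resid))       # Duan (1983) bias correction
            conc_hat = np.exp(beta[0] + beta[1] * np.log(q_daily)) * smearing
            return conc_hat * q_daily               # daily load = conc * discharge

        # sampled (discharge, concentration) pairs -> loads on all days of a year
        rng = np.random.default_rng(3)
        q_s = rng.lognormal(3, 0.8, 50)
        c_s = 0.5 * q_s ** 0.4 * rng.lognormal(0, 0.3, 50)
        print(load_regression(q_s, c_s, q_daily=rng.lognormal(3, 0.8, 365)).sum())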

  3. When celibacy matters: incorporating non-breeders improves demographic parameter estimates.

    Directory of Open Access Journals (Sweden)

    Deborah Pardo

    Full Text Available In long-lived species only a fraction of a population breeds at a given time. Non-breeders can represent more than half of adult individuals, calling into doubt the relevance of estimating demographic parameters from breeders alone. Here we demonstrate the importance of considering observable non-breeders to estimate reliable demographic traits: survival, return, breeding, hatching and fledging probabilities. We study the long-lived, quasi-biennially breeding wandering albatross (Diomedea exulans). In this species, the breeding cycle lasts almost a year, and birds that succeed in a given year tend to skip the next breeding occasion while birds that fail tend to breed again the following year. Most non-breeders remain unobservable at sea, but a substantial number of observable non-breeders (ONB) were identified on breeding sites. Using multi-state capture-mark-recapture analyses, we applied several measures to compare the performance of demographic estimates between models incorporating or ignoring ONB: bias (difference in mean), precision (difference in standard deviation) and accuracy (differences in both mean and standard deviation). Our results highlight that ignoring ONB leads to bias and loss of accuracy in breeding probability and survival estimates. These effects are even stronger when studied in an age-dependent framework. Biases in breeding probabilities and survival increased with age, leading to overestimation of survival at old ages (and thus of actuarial senescence) and underestimation of reproductive senescence. We believe our study sheds new light on the difficulties of estimating demographic parameters in species/taxa where a significant part of the population does not breed every year. Taking ONB into account appeared important for improving demographic parameter estimates, models of population dynamics and evolutionary conclusions regarding senescence within and across taxa.

  4. High-Resolution Spatial Distribution and Estimation of Access to Improved Sanitation in Kenya.

    Directory of Open Access Journals (Sweden)

    Peng Jia

    Full Text Available Access to sanitation facilities is imperative in reducing the risk of multiple adverse health outcomes. A distinct disparity in sanitation exists among different wealth levels in many low-income countries, which may hinder progress across each of the Millennium Development Goals. The surveyed households in 397 clusters from the 2008-2009 Kenya Demographic and Health Surveys were divided into five wealth quintiles based on their national asset scores. A series of spatial analysis methods, including excess risk, local spatial autocorrelation, and spatial interpolation, were applied to observe disparities in coverage of improved sanitation among different wealth categories. The total number of people with improved sanitation was estimated by interpolating, time-adjusting, and multiplying the surveyed coverage rates by high-resolution population grids. A comparison was then made with the annual estimates from the United Nations Population Division and the World Health Organization/United Nations Children's Fund Joint Monitoring Program for Water Supply and Sanitation. The Empirical Bayesian Kriging interpolation produced minimal root mean squared error for all clusters and five quintiles while predicting the raw and spatial coverage rates of improved sanitation. The coverage in southern regions was generally higher than in the north and east, and the coverage in the south decreased from Nairobi in all directions, while Nyanza and North Eastern Province had relatively poor coverage. The general clustering trend of high and low sanitation improvement among surveyed clusters was confirmed after spatial smoothing. There exists an apparent disparity in sanitation among different wealth categories across Kenya, and spatially smoothed coverage rates resulted in a closer estimation of the available statistics than raw coverage rates. Future intervention activities need to be tailored for both different wealth categories and nationally where there are areas of

  5. Using clinical indicators to facilitate quality improvement via the accreditation process: an adaptive study into the control relationship.

    Science.gov (United States)

    Chuang, Sheuwen; Howley, Peter P; Hancock, Stephen

    2013-07-01

    The aim of the study was to determine accreditation surveyors' and hospitals' use and perceived usefulness of clinical indicator reports and the potential to establish the control relationship between the accreditation and reporting systems. The control relationship refers to instructional directives, arising from appropriately designed methods and efforts towards using clinical indicators, which provide moderating, balancing direction toward the best outcome for the connected systems. Web-based questionnaire survey. Australian Council on Healthcare Standards' (ACHS) accreditation and clinical indicator programmes. Seventy-three of 306 surveyors responded. Half used the reports always or most of the time. Five key messages were revealed: (i) report use was related to availability before on-site investigation; (ii) report use was associated with the use of non-ACHS reports; (iii) a clinical indicator set's perceived usefulness was associated with its reporting volume across hospitals; (iv) simpler measures and visual summaries in reports were rated the most useful; (v) reports were deemed suitable for the quality and safety objectives of the key groups of interested parties (hospitals' senior executive and management officers, clinicians, quality managers and surveyors). Implementing the control relationship between the reporting and accreditation systems is a promising expectation. Redesigning processes to ensure reports are available in pre-survey packages, and refining the education of surveyors and hospitals on how to better utilize the reports, will support the relationship. Additional studies on the systems-theory-based model of the accreditation and reporting systems are warranted to establish the control relationship, building integrated system-wide relationships with sustainable and improved outcomes.

  6. Measuring and Facilitating Client Engagement with Financial Incentives: Implications for Improving Clinical Outcomes in a Mental Health Setting.

    Science.gov (United States)

    Kotwicki, Raymond J; Balzer, Alexandra M; Harvey, Philip D

    2016-09-26

    Significant numbers of individuals with severe mental illnesses are difficult to engage in treatment services, presenting challenges for care. To be able to assess the relationship between engagement and discharge outcomes, we modified the "Milestones of Recovery Scale". This scale was modified for content to match the current clinical setting, evaluated for inter-rater reliability after modification in a sample of 233 cases receiving psychiatric rehabilitation, and then administered to 423 additional psychiatric rehabilitation clients over a 24-month study period. In an effort to determine whether the provision of financial incentives leads to sustained increases in client engagement, a cutoff for client eligibility for financial incentives was evaluated on the basis of the reliability study, and the course of engagement was related to receipt of this incentive and successful completion of treatment in the new sample of 423 patients. Of this sample, 78% received an initial financial incentive during treatment (were initially engaged), and 93.3% of that subgroup sustained this level of engagement over their entire course of treatment. Of the 22% of cases not receiving an initial incentive, only 5.4% improved in their engagement to levels required for the incentive. Longitudinal analysis demonstrated that individuals who maintained or increased their level of engagement over time were more likely to complete treatment in accordance with planned treatment goals. The initial engagement and the course of engagement in treatment predicted successful completion, but incentives did not lead to increased engagement in initially poorly engaged patients. These data are interpreted in terms of the likely success of extrinsic rewards in increasing engagement in mental health services.

  7. Improved global high resolution precipitation estimation using multi-satellite multi-spectral information

    Science.gov (United States)

    Behrangi, Ali

    In response to community demands, combining microwave (MW) and infrared (IR) estimates of precipitation has been an active area of research for the past two decades. The anticipated launch of NASA's Global Precipitation Measurement (GPM) mission and the increasing number of spectral bands in recently launched geostationary platforms will provide greater opportunities for investigating new approaches to combine multi-source information towards improved global high-resolution precipitation retrievals. After years of community effort, the limitations of the existing techniques are: (1) drawbacks of IR-only techniques in capturing warm rainfall and screening out no-rain thin cirrus clouds; (2) grid-box-only dependency of many algorithms, with little effort to capture cloud texture, whether at local or cloud-patch scale; (3) the assumption of an indirect relationship between rain rate and cloud-top temperature that forces high-intensity precipitation onto any cold cloud; (4) neglect of the dynamics and evolution of clouds in time; and (5) inconsistent combination of MW- and IR-based precipitation estimates due to the combination strategies and as a result of the above-described shortcomings. This PhD dissertation attempts to improve the combination of data from Geostationary Earth Orbit (GEO) and Low-Earth Orbit (LEO) satellites in ways that allow consistent high-resolution integration of the more accurate precipitation estimates, directly observed through LEO's PMW sensors, into the short-term cloud evolution process, which can be inferred from GEO images. A set of novel approaches is introduced to cope with the listed limitations, consisting of the following four consecutive components: (1) starting with the GEO part, an artificial-neural-network-based method demonstrates that the inclusion of multi-spectral data can ameliorate existing problems associated with IR-only precipitation retrievals; (2) through development of Precipitation Estimation

  8. Improved tilt sensing in an LGS-based tomographic AO system based on instantaneous PSF estimation

    Science.gov (United States)

    Veran, Jean-Pierre

    2013-12-01

    Laser guide star (LGS)-based tomographic AO systems, such as Multi-Conjugate AO (MCAO), Multi-Object AO (MOAO) and Laser Tomography AO (LTAO), require natural guide stars (NGSs) to sense tip-tilt (TT) and possibly other low-order modes, to get rid of the LGS tilt-indetermination problem. For example, NFIRAOS, the first-light facility MCAO system for the Thirty Meter Telescope, requires three NGSs in addition to six LGSs: two to measure TT and one to measure TT and defocus. In order to improve sky coverage, these NGSs are selected in a so-called technical field (2 arcmin in diameter for NFIRAOS), which is much larger than the on-axis science field (17x17 arcsec for NFIRAOS) on which the AO correction is optimized. Most of the time, the NGSs are far off-axis and thus poorly corrected by the high-order AO loop, resulting in spots with low contrast and high speckle noise. Accurately finding the position of such spots is difficult, even with advanced methods such as matched filtering or correlation, because these methods rely on knowledge of an average spot image, which is quite different from the instantaneous spot image, especially in cases of poor correction. This results in poor tilt estimation, which ultimately impacts sky coverage. We propose to improve the estimation of the position of the NGS spots by using, for each frame, a current estimate of the instantaneous spot profile instead of an average profile. This estimate can be readily obtained by tracing wavefront errors in the direction of the NGS through the turbulence volume, which is already computed by the tomographic process from the LGS measurements as part of the high-order AO loop. Computing such a wavefront estimate has actually already been proposed for the purpose of driving a deformable mirror (DM) in each NGS WFS to optically correct the NGS spot, which does lead to improved centroiding accuracy. Our approach, however, is much simpler, because it does not require the complication of extra DMs

  9. Integrating SAS and GIS software to improve habitat-use estimates from radiotelemetry data

    Science.gov (United States)

    Kenow, K.P.; Wright, R.G.; Samuel, M.D.; Rasmussen, P.W.

    2001-01-01

    Radiotelemetry has commonly been used to remotely determine habitat use by a variety of wildlife species. However, habitat misclassification can occur because the true location of a radiomarked animal can only be estimated. Analytical methods that provide improved estimates of habitat use from radiotelemetry location data using a subsampling approach have been proposed previously. We developed software, based on these methods, to conduct improved habitat-use analyses. A Statistical Analysis System (SAS)-executable file generates a random subsample of points from the error distribution of an estimated animal location and formats the output into ARC/INFO-compatible coordinate and attribute files. An associated ARC/INFO Arc Macro Language (AML) script creates a coverage of the random points, determines the habitat type at each random point from an existing habitat coverage, sums the number of subsample points by habitat type for each location, and outputs the results in ASCII format. The proportion and precision of habitat types used are calculated from the subsample of points generated for each radiotelemetry location. We illustrate the method and software by analyzing radiotelemetry data for a female wild turkey (Meleagris gallopavo).
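    The subsampling idea implemented by the SAS/AML toolchain can be sketched in a few lines: draw points from each location's telemetry error distribution, look up the habitat code of each point in a raster, and tally the proportions. The grid conventions and the bivariate-normal error model are assumptions for the sketch.

        import numpy as np

        def habitat_use(locations, cov, habitat_grid, origin, cell, n_sub=500, seed=4):
            # locations    : (x, y) estimated animal locations
            # cov          : 2x2 covariance of the telemetry error distribution
            # habitat_grid : 2D int array of habitat codes, lower-left at `origin`
            rng = np.random.default_rng(seed)
            counts = {}
            for loc in locations:
                pts = rng.multivariate_normal(loc, cov, size=n_sub)
                cols = ((pts[:, 0] - origin[0]) // cell).astype(int)
                rows = ((pts[:, 1] - origin[1]) // cell).astype(int)
                ok = (rows >= 0) & (rows < habitat_grid.shape[0]) & \
                     (cols >= 0) & (cols < habitat_grid.shape[1])
                for h in habitat_grid[rows[ok], cols[ok]]:
                    counts[h] = counts.get(h, 0) + 1
            total = sum(counts.values())
            return {h: c / total for h, c in counts.items()}

        # 3 locations with 50 m telemetry error over a toy 2-habitat grid
        grid = np.zeros((100, 100), int)
        grid[:, 50:] = 1
        print(habitat_use([(2500, 2500), (4800, 2000), (1000, 900)],
                          np.diag([50**2, 50**2]), grid, origin=(0, 0), cell=50))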

  10. Ascertainment-adjusted parameter estimation approach to improve robustness against misspecification of health monitoring methods

    Science.gov (United States)

    Juesas, P.; Ramasso, E.

    2016-12-01

    Condition monitoring aims at ensuring system safety, which is a fundamental requirement for industrial applications and has become an inescapable social demand. This objective is attained by instrumenting the system and developing data analytics methods, such as statistical models, able to turn data into relevant knowledge. One difficulty is correctly estimating the parameters of those methods from time-series data. This paper suggests the use of the Weighted Distribution Theory together with the Expectation-Maximization algorithm to improve parameter estimation in statistical models with latent variables, with an application to health monitoring under uncertainty. The improvement of estimates is made possible by incorporating uncertain and possibly noisy prior knowledge on the latent variables in a sound manner. The latent variables are exploited to build a degradation model of a dynamical system represented as a sequence of discrete states. Examples with Gaussian mixture models and hidden Markov models (HMMs) with discrete and continuous outputs are presented on both simulated data and benchmarks using the turbofan engine datasets. A focus on the application of a discrete HMM to health monitoring under uncertainty emphasizes the interest of the proposed approach in the presence of different operating conditions and fault modes. It is shown that the proposed model exhibits high robustness in the presence of noisy and uncertain priors.

  11. Improving the Carbon Dioxide Emission Estimates from the Combustion of Fossil Fuels in California

    Energy Technology Data Exchange (ETDEWEB)

    de la Rue du Can, Stephane; Wenzel, Tom; Price, Lynn

    2008-08-13

    Central to any study of climate change is the development of an emission inventory that identifies and quantifies the State's primary anthropogenic sources and sinks of greenhouse gas (GHG) emissions. CO2 emissions from fossil fuel combustion accounted for 80 percent of California's GHG emissions (CARB, 2007a). Even though these CO2 emissions are well characterized in the existing state inventory, significant sources of uncertainty remain regarding their accuracy. This report evaluates the CO2 emissions accounting based on the California Energy Balance database (CALEB), developed by Lawrence Berkeley National Laboratory (LBNL), in terms of what improvements are needed and where uncertainties lie. The estimated uncertainty for total CO2 emissions ranges between -21 and +37 million metric tons (Mt), or -6 percent and +11 percent of total CO2 emissions. The report also identifies where improvements are needed for upcoming updates of CALEB. It is worth noting, however, that the California Air Resources Board (CARB) GHG inventory did not use CALEB data for all combustion estimates, so the range in uncertainty estimated in this report does not apply to CARB's GHG inventory. As much as possible, additional data sources used by CARB in the development of its GHG inventory are summarized in this report for consideration in future updates to CALEB.

  12. Improvement of neurofeedback therapy for improved attention through facilitation of brain activity using local sinusoidal extremely low frequency magnetic field exposure.

    Science.gov (United States)

    Zandi Mehran, Yasaman; Firoozabadi, Mohammad; Rostami, Reza

    2015-04-01

    Traditional neurofeedback (NF) is a training approach aimed at altering brain activity using electroencephalography (EEG) rhythms as feedback. In NF training, external factors such as the subjects' intelligence can have an effect. In contrast, a low-energy NF system (LENS) does not require conscious effort from the subject, which results in fewer attendance sessions; however, eliminating the subject's role seems to eliminate an important part of the NF system. This study investigated the facilitating effect on the theta-to-beta ratio of NF training combined with a local sinusoidal extremely low frequency magnetic field (LSELF-MF), versus traditional NF. Twenty-four healthy, intelligent subjects underwent 10 training sessions to enhance beta (15-18 Hz) and simultaneously inhibit theta (4-7 Hz) and high beta (22-30 Hz) activity, at the Cz point, in a 3-boat-race video game. Each session consisted of three statuses: PRE, DURING, and POST. In the DURING status, the NF training procedure lasted 10 minutes. Subjects were led to believe that they would be exposed to a magnetic field during NF training; however, only the 16 subjects assigned to the experimental group were actually exposed to a 45 Hz, 360 µT LSELF-MF at Cz. For the other 8 subjects, the coil was located at the Cz point with no exposure. The duty cycle of exposure was 40% (2-second exposure and 3-second pause). The results show that the theta-to-beta ratio in the DURING status of each group differs significantly from the PRE and POST statuses. Between-group analysis shows that the theta-to-beta ratio in the DURING status of the experimental group is significantly (P < .001) lower than in the sham group. These results show the facilitating effect of LSELF-MF exposure on NF training.

  13. Improving Estimation Accuracy of Quasars’ Photometric Redshifts by Integration of KNN and SVM

    Science.gov (United States)

    Han, Bo; Ding, Hongpeng; Zhang, Yanxia; Zhao, Yongheng

    2015-08-01

    The massive photometric data collected from multiple large-scale sky surveys offer significant opportunities for measuring distances to many celestial objects by photometric redshifts zphot over wide areas of the sky. However, catastrophic failure, a long-unsolved problem, exists in current photometric redshift estimation approaches (such as k-nearest-neighbor). In this paper, we propose a novel two-stage approach that integrates the k-nearest-neighbor (KNN) and support vector machine (SVM) methods. In the first stage, we apply the KNN algorithm to the photometric data and estimate the corresponding zphot. By analysis, we observe two dense regions with catastrophic failure, one in the range zphot = [0.1, 1.1], the other in the range zphot = [1.5, 2.5]. In the second stage, the multiband photometric input patterns of points falling into the two ranges are mapped from the original attribute space into a high-dimensional feature space by a Gaussian kernel function in the SVM. In the high-dimensional feature space, many bad estimates resulting from catastrophic failure of the simple Euclidean distance computation in KNN can be identified by the SVM classification hyperplane and subsequently corrected. Experimental results based on SDSS data for quasars show that the two-stage fusion approach can significantly mitigate catastrophic failure and improve the estimation accuracy of photometric redshifts.
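    A rough sketch of the two-stage idea using scikit-learn (assumed available): stage one is KNN regression of zphot; stage two trains an RBF-kernel SVM on the training objects whose KNN estimates fall in the two catastrophic-failure ranges, here only to flag suspect estimates. The paper goes further and corrects the flagged objects; the neighbor count and the 0.3 failure cut are illustrative.

        import numpy as np
        from sklearn.neighbors import KNeighborsRegressor
        from sklearn.svm import SVC

        def two_stage_photoz(colors_train, z_train, colors_test,
                             suspect_ranges=((0.1, 1.1), (1.5, 2.5)), fail_cut=0.3):
            colors_train = np.asarray(colors_train)
            colors_test = np.asarray(colors_test)
            z_train = np.asarray(z_train)
            # stage 1: KNN photometric-redshift regression
            knn = KNeighborsRegressor(n_neighbors=17).fit(colors_train, z_train)
            z_knn_train = knn.predict(colors_train)
            z_knn_test = knn.predict(colors_test)

            def in_ranges(z):
                return np.any([(z >= lo) & (z <= hi) for lo, hi in suspect_ranges], axis=0)

            # stage 2: learn to separate good from catastrophic KNN estimates
            mask = in_ranges(z_knn_train)
            bad = (np.abs(z_knn_train[mask] - z_train[mask]) > fail_cut).astype(int)
            svm = SVC(kernel="rbf", gamma="scale").fit(colors_train[mask], bad)
            flags = np.zeros(len(colors_test), dtype=int)
            sel = in_ranges(z_knn_test)
            if sel.any():
                flags[sel] = svm.predict(colors_test[sel])
            return z_knn_test, flags  # flags == 1 marks likely catastrophic estimates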

  14. Motion correction for improved estimation of heart rate using a visual spectrum camera

    Science.gov (United States)

    Tarbox, Elizabeth A.; Rios, Christian; Kaur, Balvinder; Meyer, Shaun; Hirt, Lauren; Tran, Vy; Scott, Kaitlyn; Ikonomidou, Vasiliki

    2017-05-01

    Heart rate measurement using a visual-spectrum recording of the face has drawn interest over the last few years as a technology with various health and security applications. In our previous work, we have shown that it is possible to estimate heartbeat timing accurately enough to perform heart rate variability analysis for contactless stress detection. However, a major confounding factor in this approach is the presence of movement, which can interfere with the measurements. To mitigate the effects of movement, in this work we propose the use of face detection and tracking based on the Karhunen-Loève algorithm to counteract measurement errors introduced by normal subject motion, as expected during a common seated conversation setting. We analyze the image acquisition requirements for the algorithm to work, and its performance under different ranges of motion and changes of distance to the camera, as well as the effect on the acquired signal of illumination changes due to different positioning with respect to light sources. Our results suggest that the effect of face tracking on visual-spectrum-based cardiac signal estimation depends on the amplitude of the motion. While for larger-scale, conversation-induced motion it can significantly improve estimation accuracy, for smaller-scale movements, such as those caused by breathing or talking without major movement, errors in facial tracking may interfere with signal estimation. Overall, employing facial tracking is a crucial step in adapting this technology to real-life situations with satisfactory results.

  15. Estimation of contrast agent bolus arrival delays for improved reproducibility of liver DCE MRI

    Science.gov (United States)

    Chouhan, Manil D.; Bainbridge, Alan; Atkinson, David; Punwani, Shonit; Mookerjee, Rajeshwar P.; Lythgoe, Mark F.; Taylor, Stuart A.

    2016-10-01

    Delays between contrast agent (CA) arrival at the site of vascular input function (VIF) sampling and at the tissue of interest affect dynamic contrast enhanced (DCE) MRI pharmacokinetic modelling. We investigate the effects of altering VIF CA bolus arrival delays on liver DCE MRI perfusion parameters, propose an alternative approach to estimating delays and evaluate reproducibility. Thirteen healthy volunteers (28.7 ± 1.9 years, seven males) underwent liver DCE MRI using dual-input single-compartment modelling, with reproducibility (n = 9) measured at 7 days. Effects of VIF CA bolus arrival delays were assessed for arterial and portal venous input functions. Delays were pre-estimated using linear regression, with restricted free modelling around the pre-estimated delay. Perfusion parameters and 7-day reproducibility were compared between this method, freely modelled delays and no delays using one-way ANOVA. Reproducibility was assessed using Bland-Altman analysis of agreement. Maximum percent changes relative to parameters obtained using zero delays were -31% for portal venous (PV) perfusion, +43% for total liver blood flow (TLBF), +3247% for hepatic arterial (HA) fraction, +150% for mean transit time and -10% for distribution volume. Differences were demonstrated between the three methods for PV perfusion (p = 0.0085) and HA fraction, underscoring the importance of delay handling for liver DCE MRI quantification. Pre-estimation of delays with constrained free modelling improved the 7-day reproducibility of perfusion parameters in volunteers.
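    The delay pre-estimation step can be sketched with a bounded cross-correlation search between the sampled VIF and the tissue curve; the paper pre-estimates with linear regression, so correlation here is a stand-in, after which the pharmacokinetic model would be refined only within a restricted window around this value.

        import numpy as np

        def pre_estimate_delay(vif, tissue, dt, max_shift_s=10.0):
            # shift the tissue curve earlier by s samples and keep the shift
            # that maximizes correlation with the VIF
            max_shift = int(max_shift_s / dt)
            shifts = np.arange(0, max_shift + 1)
            scores = [np.corrcoef(vif[:len(vif) - s], tissue[s:])[0, 1]
                      for s in shifts]
            return shifts[int(np.argmax(scores))] * dt

        # toy curves: tissue bolus arrives 4 s after the VIF bolus
        t = np.arange(0, 60, 0.5)
        vif = np.exp(-(t - 10) ** 2 / 8)
        tissue = 0.3 * np.exp(-(t - 14) ** 2 / 8)
        print(pre_estimate_delay(vif, tissue, dt=0.5))  # -> ~4.0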

  16. An Improved Approach for Estimating Daily Net Radiation over the Heihe River Basin

    Science.gov (United States)

    Wu, Bingfang; Liu, Shufu; Zhu, Weiwei; Yan, Nana; Xing, Qiang; Tan, Shen

    2017-01-01

    Net radiation plays an essential role in determining the thermal conditions of the Earth's surface and is an important parameter for the study of land-surface processes and global climate change. In this paper, an improved satellite-based approach to estimate the daily net radiation is presented, in which sunshine duration was derived from the geostationary meteorological satellite (FY-2D) cloud classification product, the monthly empirical Ångström coefficients (as and bs) for net shortwave radiation were calibrated by spatial fitting of the ground data from 1997 to 2006, and the daily net longwave radiation was calibrated with ground data from 2007 to 2010 over the Heihe River Basin in China. The estimated daily net radiation values were validated against ground data for 12 months in 2008 at four stations with different underlying surface types. The average coefficient of determination (R2) was 0.8489, and the average Nash-Sutcliffe efficiency (NSE) was 0.8356. The close agreement between the estimated daily net radiation and observations indicates that the proposed method is promising, especially when its spatial distribution is compared with that obtained by interpolating sunshine duration. Potential applications include climate research, energy balance studies and the estimation of global evapotranspiration. PMID:28054976
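
    The net shortwave piece of such an approach typically follows the Ångström-Prescott relation; a minimal sketch under that assumption, where the default coefficient values are common textbook values and only placeholders for the paper's monthly calibrated coefficients.

        import numpy as np

        def daily_net_radiation(n_sun, N_day, Ra, albedo, Rnl, a_s=0.25, b_s=0.50):
            """Daily net radiation (MJ m-2 d-1) from sunshine duration via the
            Angstrom-Prescott relation. n_sun: actual sunshine hours (here
            derived from satellite cloud classification), N_day: maximum
            possible sunshine hours, Ra: extraterrestrial radiation, Rnl: net
            longwave radiation (calibrated against ground data in the paper).
            a_s, b_s are the empirical Angstrom coefficients."""
            Rs = (a_s + b_s * n_sun / N_day) * Ra    # global shortwave radiation
            Rns = (1.0 - albedo) * Rs                # net shortwave radiation
            return Rns - Rnl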

  17. An improved quadratic inference function for parameter estimation in the analysis of correlated data.

    Science.gov (United States)

    Westgate, Philip M; Braun, Thomas M

    2013-08-30

    Generalized estimating equations (GEE) are commonly employed for the analysis of correlated data. However, the quadratic inference function (QIF) method is increasing in popularity because of its multiple theoretical advantages over GEE. Our focus stems from the fact that the QIF method is more efficient than GEE when the working covariance structure for the data is misspecified. It has been shown that, because of the use of an empirical weighting covariance matrix inside its estimating equations, the QIF method's realized estimation performance can potentially be inferior to GEE's when the number of independent clusters is not large. We therefore propose an alternative weighting matrix for the QIF that is asymptotically an optimally weighted combination of the empirical covariance matrix and its model-based version, derived by minimizing its expected quadratic loss. Use of the proposed weighting matrix maintains the large-sample advantages the QIF approach has over GEE and, as shown via simulation, improves small-sample parameter estimation. We also illustrate the proposed method in the analysis of a longitudinal study. Copyright © 2012 John Wiley & Sons, Ltd.
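
    The flavor of the proposed weighting can be illustrated by a simple shrinkage combination of the empirical and model-based covariance matrices; the fixed mixing weight below is a placeholder, not the paper's loss-minimizing optimal weight.

        import numpy as np

        def qif_weighting(g_list, model_cov, w=0.5):
            """Blend the empirical covariance of the cluster-level extended
            score vectors g_i with a model-based covariance matrix. In the
            paper, the mixing weight is chosen asymptotically optimally by
            minimizing an expected quadratic loss; here w is a fixed
            placeholder. The QIF uses the inverse of the returned matrix
            inside its quadratic form."""
            G = np.asarray(g_list)                    # (n_clusters, dim)
            emp_cov = G.T @ G / len(G)                # empirical weighting matrix
            return (1.0 - w) * emp_cov + w * model_cov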

  18. Improved shear wave group velocity estimation method based on spatiotemporal peak and thresholding motion search.

    Science.gov (United States)

    Amador Carrascal, Carolina; Chen, Shigao; Manduca, Armando; Greenleaf, James F; Urban, Matthew

    2017-01-11

    Quantitative ultrasound elastography is increasingly being used in the assessment of chronic liver disease. Many studies have reported ranges of liver shear wave velocity values for healthy individuals and patients with different stages of liver fibrosis. Nonetheless, ongoing efforts exist to stabilize quantitative ultrasound elastography measurements by assessing factors that influence tissue shear wave velocity values, such as food intake, body mass index (BMI), ultrasound scanners, scanning protocols, ultrasound image quality, etc. Time-to-peak (TTP) methods have been routinely used to measure the shear wave velocity. However, there is still a need for methods that can provide robust shear wave velocity estimation in the presence of noisy motion data. The conventional TTP algorithm is limited to searching for the maximum motion in time profiles at different spatial locations. In this study, two modified shear wave speed estimation algorithms are proposed. The first method searches for the maximum motion in both space and time (spatiotemporal peak, STP); the second method applies an amplitude filter (spatiotemporal thresholding, STTH) to select points with motion amplitude higher than a threshold for shear wave group velocity estimation. The two proposed methods (STP and STTH) showed higher precision in shear wave velocity estimates compared to TTP in phantom. Moreover, in a cohort of 14 healthy subjects the STP and STTH methods improved both the shear wave velocity measurement precision and the ...
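
    A toy contrast between a per-pixel time-to-peak fit and the thresholding variant described above; the threshold fraction and the line-fit details are assumptions made for the sketch, not the paper's implementation.

        import numpy as np

        def ttp_speed(motion, x, t):
            """Conventional time-to-peak: peak time of each spatial motion
            profile motion[space, time], then speed from the slope of peak
            time against position x."""
            t_peak = t[np.argmax(motion, axis=1)]
            return 1.0 / np.polyfit(x, t_peak, 1)[0]

        def stth_speed(motion, x, t, frac=0.5):
            """Spatiotemporal-thresholding variant (sketch): keep only
            space-time samples whose amplitude exceeds a fraction of the
            global peak, then fit position against time over those samples,
            which suppresses the influence of low-amplitude noisy pixels."""
            xx, tt = np.meshgrid(x, t, indexing="ij")
            keep = motion > frac * motion.max()
            return 1.0 / np.polyfit(xx[keep], tt[keep], 1)[0]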

  19. The Pose Estimation of Mobile Robot Based on Improved Point Cloud Registration

    Directory of Open Access Journals (Sweden)

    Yanzi Miao

    2016-03-01

    Full Text Available Because GPS is unavailable indoors, an inertial sensor is usually used to estimate the location of indoor mobile robots. However, it is difficult to achieve high-accuracy localization and control by inertial sensors alone. In this paper, a new method is proposed to estimate an indoor mobile robot pose with six degrees of freedom based on an improved 3D-Normal Distributions Transform algorithm (3D-NDT). First, point cloud data are captured by a Kinect sensor and segmented according to the distance to the robot. After the segmentation, the input point cloud data are processed by the Approximate Voxel Grid Filter algorithm in different-sized voxel grids. Second, initial registration and precise registration are performed according to the distance to the sensor: the most distant point cloud data use the 3D-NDT algorithm with large-sized voxel grids for initial registration, based on the transformation matrix from the odometry method, while the closest point cloud data use the 3D-NDT algorithm with small-sized voxel grids for precise registration. After these registrations, a final transformation matrix is obtained. Based on this transformation matrix, the pose estimation problem of the indoor mobile robot is solved. Test results show that this method obtains accurate robot pose estimates and has better robustness.

  20. Improved Shape Parameter Estimation in K Clutter with Neural Networks and Deep Learning

    Directory of Open Access Journals (Sweden)

    José Raúl Fernández Machado

    2016-06-01

    Full Text Available The discrimination of the clutter interfering signal is a current problem in modern radar design, especially in coastal or offshore environments where the histogram of the background signal often displays heavy tails. The statistical characterization of this signal is very important for the cancellation of sea clutter, whose behavior obeys a K distribution according to the commonly accepted criterion. By using neural networks, the authors propose a new method for estimating the K shape parameter, demonstrating its superiority over the classic alternative based on the Method of Moments. Whereas both solutions have a similar performance when the entire range of possible values of the shape parameter is evaluated, the neural alternative achieves a much more accurate estimation for lower values of the parameter. This is exactly the desired behavior, because the best estimates occur for the most aggressive states of sea clutter. The final design, reached by processing three different sets of computer-generated K samples, uses a total of nine neural networks whose contributions are synthesized in the final estimate, so the solution can be interpreted as a deep learning approach. The results are to be applied in the improvement of radar detectors, particularly for maintaining the operational false alarm probability close to the one conceived in the design.
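
    For reference, the classical Method of Moments baseline mentioned in the abstract fits in a few lines using the standard amplitude-moment ratio for K-distributed clutter; this is a generic sketch, not the authors' network.

        import numpy as np

        def k_shape_mom(amplitude):
            """Method-of-moments estimate of the K-distribution shape
            parameter nu from clutter amplitude samples, using the standard
            ratio E[x^4] / E[x^2]^2 = 2 * (1 + 1/nu)."""
            m2 = np.mean(amplitude ** 2)
            m4 = np.mean(amplitude ** 4)
            ratio = m4 / m2 ** 2
            return 1.0 / (ratio / 2.0 - 1.0)

        # Quick check on synthetic K samples (gamma texture x exponential speckle):
        rng = np.random.default_rng(0)
        nu = 2.0
        texture = rng.gamma(shape=nu, scale=1.0 / nu, size=200_000)
        intensity = texture * rng.exponential(scale=1.0, size=200_000)
        print(k_shape_mom(np.sqrt(intensity)))   # should be close to 2.0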

  1. Improving the precision of lake ecosystem metabolism estimates by identifying predictors of model uncertainty

    Science.gov (United States)

    Rose, Kevin C.; Winslow, Luke A.; Read, Jordan S.; Read, Emily K.; Solomon, Christopher T.; Adrian, Rita; Hanson, Paul C.

    2014-01-01

    Diel changes in dissolved oxygen are often used to estimate gross primary production (GPP) and ecosystem respiration (ER) in aquatic ecosystems. Despite the widespread use of this approach to understand ecosystem metabolism, we are only beginning to understand the degree and underlying causes of uncertainty for metabolism model parameter estimates. Here, we present a novel approach to improve the precision and accuracy of ecosystem metabolism estimates by identifying physical metrics that indicate when metabolism estimates are highly uncertain. Using datasets from seventeen instrumented GLEON (Global Lake Ecological Observatory Network) lakes, we discovered that many physical characteristics correlated with uncertainty, including PAR (photosynthetically active radiation, 400-700 nm), daily variance in Schmidt stability, and wind speed. Low PAR was a consistent predictor of high variance in GPP model parameters, but also corresponded with low ER model parameter variance. We identified a threshold (30% of clear sky PAR) below which GPP parameter variance increased rapidly and was significantly greater in nearly all lakes compared with variance on days with PAR levels above this threshold. The relationship between daily variance in Schmidt stability and GPP model parameter variance depended on trophic status, whereas daily variance in Schmidt stability was consistently positively related to ER model parameter variance. Wind speeds in the range of ~0.8-3 m s–1 were consistent predictors of high variance for both GPP and ER model parameters, with greater uncertainty in eutrophic lakes. Our findings can be used to reduce ecosystem metabolism model parameter uncertainty and identify potential sources of that uncertainty.

  2. Improved Estimates of the Milky Way's Disk Scale Length From Hierarchical Bayesian Techniques

    CERN Document Server

    Licquia, Timothy C

    2016-01-01

    The exponential scale length ($L_d$) of the Milky Way's (MW's) disk is a critical parameter for describing the global physical size of our Galaxy, important both for interpreting other Galactic measurements and helping us to understand how our Galaxy fits into extragalactic contexts. Unfortunately, current estimates span a wide range of values and often are statistically incompatible with one another. Here, we aim to determine an improved, aggregate estimate for $L_d$ by utilizing a hierarchical Bayesian (HB) meta-analysis technique that accounts for the possibility that any one measurement has not properly accounted for all statistical or systematic errors. Within this machinery we explore a variety of ways of modeling the nature of problematic measurements, and then use a Bayesian model averaging technique to derive net posterior distributions that incorporate any model-selection uncertainty. Our meta-analysis combines 29 different (15 visible and 14 infrared) photometric measurements of $L_d$ available in ...

  3. Improving control and estimation for distributed parameter systems utilizing mobile actuator-sensor network.

    Science.gov (United States)

    Mu, Wenying; Cui, Baotong; Li, Wen; Jiang, Zhengxian

    2014-07-01

    This paper proposes a scheme for non-collocated moving actuating and sensing devices which is utilized for improving performance in distributed parameter systems. By the Lyapunov stability theorem, each moving actuator/sensor agent velocity is obtained. To enhance state estimation of a spatially distributed process, two kinds of filters with consensus terms which penalize the disagreement of the estimates are considered. Both filters result in well-posedness of the collective dynamics of the state errors and converge to the plant state. Numerical simulations demonstrate the effectiveness of such a moving actuator-sensor network in enhancing system performance, and show that the consensus filters converge faster to the plant state when consensus terms are included.

  4. Impact of an improved neutrino energy estimate on outflows in neutron star merger simulations

    CERN Document Server

    Foucart, Francois; Roberts, Luke; Kidder, Lawrence E; Pfeiffer, Harald P; Scheel, Mark A

    2016-01-01

    Binary neutron star mergers are promising sources of gravitational waves for ground-based detectors such as Advanced LIGO. Neutron-rich material ejected by these mergers may also be the main source of r-process elements in the Universe, while radioactive decays in the ejecta can power bright electromagnetic post-merger signals. Neutrino-matter interactions play a critical role in the evolution of the composition of the ejected material, which significantly impacts the outcome of nucleosynthesis and the properties of the associated electromagnetic signal. In this work, we present a simulation of a binary neutron star merger using an improved method for estimating the average neutrino energies in our energy-integrated neutrino transport scheme. These energy estimates are obtained by evolving the neutrino number density in addition to the neutrino energy and flux densities. We show that significant changes are observed in the composition of the polar ejecta when comparing our new results with earlier simulations...

  5. Improved estimation of anomalous diffusion exponents in single particle tracking experiments

    CERN Document Server

    Kepten, Eldad; Bronshtein, Irena

    2013-01-01

    The Mean Square Displacement is a central tool in the analysis of Single Particle Tracking experiments, shedding light on various biophysical phenomena. Frequently, parameters are extracted by performing time-averages on single particle trajectories followed by ensemble averaging. This procedure, however, suffers from two systematic errors when applied to particles that perform anomalous diffusion. The first is significant at short time lags and is induced by measurement errors. The second arises from the natural heterogeneity in biophysical systems. We show how to estimate and correct these two errors and improve the estimation of the anomalous parameters for the whole particle distribution. As a consequence we manage to characterize ensembles of heterogeneous particles even for rather short and noisy measurements where regular time averaged mean square displacement analysis fails. We apply this method to both simulations and in vivo measurements of telomere diffusion in 3T3 mouse embryonic fibroblast cells. ...
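
    In practice the short-lag correction amounts to allowing a constant noise floor in the time-averaged MSD; a rough sketch under that common assumption (the paper's estimator additionally corrects for ensemble heterogeneity, which is not shown here).

        import numpy as np
        from scipy.optimize import least_squares

        def fit_anomalous_exponent(track, dt, max_lag=None):
            """Fit MSD(lag) ~ 4*D*lag**alpha + offset for a 2-D trajectory
            track of shape (n, 2); the constant offset absorbs the
            measurement-noise floor that biases short-lag estimates.
            Returns (alpha, D, offset)."""
            n = len(track)
            if max_lag is None:
                max_lag = n // 4
            lags = np.arange(1, max_lag)
            taus = lags * dt
            msd = np.array([np.mean(np.sum((track[m:] - track[:-m]) ** 2, axis=1))
                            for m in lags])

            def residuals(p):
                log_d, alpha, offset = p
                return 4.0 * np.exp(log_d) * taus ** alpha + offset - msd

            fit = least_squares(residuals, x0=[0.0, 0.5, float(msd.min())])
            return fit.x[1], np.exp(fit.x[0]), fit.x[2]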

  6. An improved technique for global daily sunshine duration estimation using satellite imagery

    Institute of Scientific and Technical Information of China (English)

    Muhammad Ali SHAMIM; Renji REMESAN; Da-wei HAN; Naeem EJAZ; Ayub ELAHI

    2012-01-01

    This paper presents an improved model for global sunshine duration estimation. The methodology incorporates geostationary satellite images by including snow cover information, sun and satellite angles and a trend correction factor for seasons in the determination of the cloud cover index. The effectiveness of the proposed methodology has been tested using Meteosat geostationary satellite images in the visible band, with a temporal resolution of 1 h and spatial resolution of 2.5 km × 2.5 km, for the Brue Catchment in the southwest of England. Validation results show a significant improvement in the estimation of global sunshine duration by the proposed method as compared to its predecessor (R2 is improved from 0.68 to 0.83, root mean squared error (RMSE) from 2.37 h/d to 1.19 h/d and the mean bias error (MBE) from 0.21 h/d to 0.08 h/d). Further studies are needed to test this method in other parts of the world with different climate and geographical conditions.

  7. [The improvement of the Doppler echocardiographic method for the estimation of pulmonary systolic pressure].

    Science.gov (United States)

    Tamborini, G; Pepi, M; Galli, C; Alimento, M; Barbier, P; Doria, E; Maltagliati, A; Berti, M; Fiorentini, C; Guazzi, M D

    1993-04-01

    The formulas currently utilized for noninvasive evaluation of right ventricular systolic pressure (RVSP) include the right ventricular-right atrial pressure gradient (RV-RAG) and right atrial pressure (RAP). The former is expressed by trans-tricuspid systolic flow velocity; the latter is generally assumed. We recently observed that ultrasound estimation of RAP through the inferior vena cava collapsibility index (CI) may help in the choice of the more appropriate formula for the evaluation of RVSP. However, these traditional methods (method A: RV-RAG + 10; method B: RV-RAG × 1.1 + 14) have limitations, particularly when RAP is low. The present study was undertaken to improve noninvasive estimation of RVSP through new formulas based on CI prediction of RAP. One hundred and four patients, in whom tricuspid regurgitation was adequately documented with CW-Doppler, were included in this study. They were classified into 3 groups: Group 1 with CI > 45%, Group 2 with CI ≤ 35%, Group 3 with CI 35-45%. RVSP was evaluated by 3 different methods: A, B, and C. Method C was based on CI, assigning 6, 16, or 9 mmHg to RAP (respectively, the mean values in the 3 groups of our previous study). Results indicate that method C improves noninvasive estimation of RVSP in Group 1 and Group 2, with respect to the other methods, with a reduction of the SEE and of the mean difference of the t-test between hemodynamic and echographic values. In Group 3, Doppler estimation by methods A and C and catheter measurements are comparable, whereas method B significantly overestimates the actual value. (ABSTRACT TRUNCATED AT 250 WORDS)
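
    The three formulas compared in the study are simple enough to state directly. The sketch below also uses the standard simplified Bernoulli conversion (4v²) from trans-tricuspid jet velocity to the RV-RA gradient, which the abstract implies but does not spell out.

        def rvsp_estimates(v_tr, ci):
            """Right ventricular systolic pressure (mmHg) from the
            trans-tricuspid jet velocity v_tr (m/s) and the inferior vena
            cava collapsibility index ci (as a fraction). Simplified
            Bernoulli: RV-RA gradient = 4 * v**2."""
            grad = 4.0 * v_tr ** 2
            method_a = grad + 10.0            # fixed RAP assumption
            method_b = grad * 1.1 + 14.0      # regression-based formula
            # Method C: RAP assigned from the collapsibility-index groups
            # reported in the study (CI > 45% -> 6 mmHg; CI <= 35% -> 16 mmHg;
            # 35-45% -> 9 mmHg).
            if ci > 0.45:
                rap = 6.0
            elif ci <= 0.35:
                rap = 16.0
            else:
                rap = 9.0
            return method_a, method_b, grad + rap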

  8. Estimation of Crop Gross Primary Production (GPP). 2; Do Scaled (MODIS) Vegetation Indices Improve Performance?

    Science.gov (United States)

    Zhang, Qingyuan; Cheng, Yen-Ben; Lyapustin, Alexei I.; Wang, Yujie; Zhang, Xiaoyang; Suyker, Andrew; Verma, Shashi; Shuai, Yanmin; Middleton, Elizabeth M.

    2015-01-01

    Satellite remote sensing estimates of Gross Primary Production (GPP) have routinely been made using spectral Vegetation Indices (VIs) over the past two decades. The Normalized Difference Vegetation Index (NDVI), the Enhanced Vegetation Index (EVI), the green band Wide Dynamic Range Vegetation Index (WDRVIgreen), and the green band Chlorophyll Index (CIgreen) have been employed to estimate GPP under the assumption that GPP is proportional to the product of VI and photosynthetically active radiation (PAR) (where VI is one of four VIs: NDVI, EVI, WDRVIgreen, or CIgreen). However, the empirical regressions between VI*PAR and GPP measured locally at flux towers do not pass through the origin (i.e., the zero X-Y value for regressions). Therefore they are somewhat difficult to interpret and apply. This study investigates (1) what the scaling factors and offsets (i.e., regression slopes and intercepts) are between the fraction of PAR absorbed by chlorophyll of a canopy (fAPARchl) and the VIs, and (2) whether the scaled VIs developed in (1) can eliminate the deficiency and improve the accuracy of GPP estimates. Three AmeriFlux maize and soybean fields were selected for this study, two of which are irrigated and one is rainfed. The four VIs and fAPARchl of the fields were computed with the MODerate resolution Imaging Spectroradiometer (MODIS) satellite images. The GPP estimation performance for the scaled VIs was compared to results obtained with the original VIs and evaluated with standard statistics: the coefficient of determination (R2), the root mean square error (RMSE), and the coefficient of variation (CV). Overall, the scaled EVI obtained the best performance. The performance of the scaled NDVI, EVI and WDRVIgreen was improved across sites, crop types and soil/background wetness conditions. The scaled CIgreen did not improve results, compared to the original CIgreen. The scaled green band indices (WDRVIgreen, CIgreen) did not exhibit superior performance to either the ...

  9. Recommendations to improve wildlife exposure estimation for development of soil screening and cleanup values.

    Science.gov (United States)

    Sample, Bradley E; Schlekat, Chris; Spurgeon, David J; Menzie, Charlie; Rauscher, Jon; Adams, Bill

    2014-07-01

    An integral component in the development of media-specific values for the ecological risk assessment of chemicals is the derivation of safe levels of exposure for wildlife. Although the derivation and subsequent application of these values can be used for screening purposes, there is a need to identify the threshold for effects when making remedial decisions during site-specific assessments. Methods for evaluation of wildlife exposure are included in the US Environmental Protection Agency (USEPA) ecological soil screening levels (Eco-SSLs), registration, evaluation, authorization, and restriction of chemicals (REACH), and other risk-based soil assessment approaches. The goal of these approaches is to ensure that soil-associated contaminants do not pose a risk to wildlife that directly ingest soil, or to species that may be exposed to contaminants that persist in the food chain. These approaches incorporate broad assumptions in the exposure and effects assessments and in the risk characterization process. Consequently, thresholds for concluding risk are frequently very low, with conclusions of risk possible when soil metal concentrations fall in the range of natural background. A workshop held in September 2012 evaluated existing methods and explored recent science about factors to consider when establishing appropriate remedial goals for concentrations of metals in soils. A Foodweb Exposure Workgroup was organized to evaluate methods for quantifying exposure of wildlife to soil-associated metals through soil and food consumption and to provide recommendations for the development of ecological soil cleanup values (Eco-SCVs) that are both practical and scientifically defensible. The specific goals of this article are to review the current practices for quantifying exposure of wildlife to soil-associated contaminants via bioaccumulation and trophic transfer, to identify potential opportunities for refining and improving these exposure estimates, and finally, to make ...

  10. Overexpression of CD44 in neural precursor cells improves trans-endothelial migration and facilitates their invasion of perivascular tissues in vivo.

    Directory of Open Access Journals (Sweden)

    Cyrille Deboux

    Full Text Available Neural precursor cell (NPC)-based therapies are used to restore neurons or oligodendrocytes and/or provide neuroprotection in a large variety of neurological diseases. In multiple sclerosis models, intravenously (i.v.)-delivered NPCs reduced clinical signs via immunomodulation. We demonstrated recently that NPCs were able to cross cerebral endothelial cells in vitro and that the multifunctional signalling molecule CD44, involved in trans-endothelial migration of lymphocytes to sites of inflammation, plays a crucial role in extravasation of syngeneic NPCs. In view of the role of CD44 in NPC trans-endothelial migration in vitro, we here questioned the benefit of CD44 overexpression by NPCs in vitro and in vivo, in EAE mice. We show that overexpression of CD44 by NPCs enhanced their trans-endothelial migration in vitro more than 2-fold, without impinging on the proliferation or differentiation potential of the transduced cells. Moreover, CD44 overexpression by NPCs significantly improved their elongation, spreading and number of filopodia over the extracellular matrix protein laminin in vitro. We then tested the effect of CD44 overexpression after i.v. delivery in the tail vein of EAE mice. CD44 overexpression was functional in vivo as it accelerated trans-endothelial migration and facilitated invasion of HA-expressing perivascular sites. These in vitro and in vivo data suggest that CD44 may be crucial not only for NPC crossing of the endothelial layer but also for facilitating invasion of extravascular tissues.

  11. Spectral Indices to Improve Crop Residue Cover Estimation under Varying Moisture Conditions

    Directory of Open Access Journals (Sweden)

    Miguel Quemada

    2016-08-01

    Full Text Available Crop residues on the soil surface protect the soil against erosion, increase water infiltration and reduce agrochemicals in runoff water. Crop residues and soils are spectrally different in the absorption features associated with cellulose and lignin. Our objectives were to: (1) assess the impact of water on the spectral indices for estimating crop residue cover (fR); (2) evaluate spectral water indices for estimating the relative water content (RWC) of crop residues and soils; and (3) propose methods that mitigate the uncertainty caused by variable moisture conditions on estimates of fR. Reflectance spectra of diverse crops and soils were acquired in the laboratory over the 400–2400-nm wavelength region. Using the laboratory data, a linear mixture model simulated the reflectance of scenes with various fR and levels of RWC. Additional reflectance spectra were acquired over agricultural fields with a wide range of crop residue covers and scene moisture conditions. Spectral indices for estimating crop residue cover that were evaluated in this study included the Normalized Difference Tillage Index (NDTI), the Shortwave Infrared Normalized Difference Residue Index (SINDRI) and the Cellulose Absorption Index (CAI). Multivariate linear models that used pairs of spectral indices—one for RWC and one for fR—significantly improved estimates of fR using CAI and SINDRI. For NDTI to reliably assess fR, the scene RWC should be relatively dry (RWC < 0.25). These techniques provide the tools needed to monitor the spatial and temporal changes in crop residue cover and help determine where additional conservation practices may be required.
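
    The indices themselves are simple band combinations; a sketch under the usual definitions from the crop-residue literature, where the band-center reflectances are assumed inputs (exact wavelengths vary by sensor) and the regression coefficients are placeholders to be fitted to field data.

        def ndti(r1650, r2215):
            """Normalized Difference Tillage Index from SWIR reflectances
            (roughly Landsat TM band 5, ~1.65 um, and band 7, ~2.2 um)."""
            return (r1650 - r2215) / (r1650 + r2215)

        def cai(r2031, r2101, r2211):
            """Cellulose Absorption Index from narrow bands around the
            ~2.1 um cellulose-lignin absorption feature."""
            return 100.0 * (0.5 * (r2031 + r2211) - r2101)

        # Two-index mitigation of moisture effects, as in the abstract: model
        # residue cover with a residue index plus a water index (b0..b2 would
        # be fitted to field data; they are placeholders here).
        def residue_cover(residue_index, water_index, b0=0.0, b1=1.0, b2=0.0):
            return b0 + b1 * residue_index + b2 * water_index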

  12. Improved estimates of the nuclear structure corrections in $\\mu$D

    CERN Document Server

    Hernandez, Oscar Javier; Bacca, Sonia; Dinur, Nir Nevo; Barnea, Nir

    2014-01-01

    We calculate the nuclear structure corrections to the Lamb shift in muonic deuterium by using state-of-the-art nucleon-nucleon potentials derived from chiral effective field theory. Our calculations complement previous theoretical work obtained from phenomenological potentials and the zero range approximation. The study of the chiral convergence order-by-order and the dependence on cutoff variations allows us to improve the estimates on the nuclear structure corrections and the theoretical uncertainty coming from nuclear potentials. This will enter the determination of the nuclear radius from ongoing muonic deuterium experiments at PSI.

  13. An Improved Performance Frequency Estimation Algorithm for Passive Wireless SAW Resonant Sensors

    Directory of Open Access Journals (Sweden)

    Boquan Liu

    2014-11-01

    Full Text Available Passive wireless surface acoustic wave (SAW) resonant sensors are suitable for applications in harsh environments. The traditional SAW resonant sensor system requires, however, Fourier transformation (FT), which has a resolution restriction and decreases the accuracy. In order to improve the accuracy and resolution of the measurement, the singular value decomposition (SVD)-based frequency estimation algorithm is applied to the wireless SAW resonant sensor response, which is a combination of an undamped and a damped sinusoid at the same frequency. Compared with the FT algorithm, the accuracy and the resolution of the method used in the self-developed wireless SAW resonant sensor system are validated.

  14. An improved performance frequency estimation algorithm for passive wireless SAW resonant sensors.

    Science.gov (United States)

    Liu, Boquan; Zhang, Chenrui; Ji, Xiaojun; Chen, Jing; Han, Tao

    2014-11-25

    Passive wireless surface acoustic wave (SAW) resonant sensors are suitable for applications in harsh environments. The traditional SAW resonant sensor system requires, however, Fourier transformation (FT), which has a resolution restriction and decreases the accuracy. In order to improve the accuracy and resolution of the measurement, the singular value decomposition (SVD)-based frequency estimation algorithm is applied to the wireless SAW resonant sensor response, which is a combination of an undamped and a damped sinusoid at the same frequency. Compared with the FT algorithm, the accuracy and the resolution of the method used in the self-developed wireless SAW resonant sensor system are validated.
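
    One standard SVD-based way to recover the frequency of a short damped/undamped tone is the matrix-pencil (ESPRIT-style) estimator; the sketch below is a generic member of that family, not necessarily the authors' exact algorithm.

        import numpy as np
        from scipy.linalg import hankel, svd, pinv

        def matrix_pencil_freq(x, fs, model_order=4):
            """Estimate the frequency of a noisy single tone (a mix of damped
            and undamped components) via a matrix-pencil procedure: Hankel
            data matrix -> SVD rank truncation -> shift invariance of the
            signal subspace -> pole angles -> frequency."""
            n = len(x)
            L = n // 2
            H = hankel(x[:L], x[L - 1:])
            _, _, Vh = svd(H, full_matrices=False)
            V = Vh[:model_order].conj().T        # dominant right singular vectors
            poles = np.linalg.eigvals(pinv(V[:-1]) @ V[1:])
            freqs = np.angle(poles) * fs / (2.0 * np.pi)
            return float(np.median(freqs[freqs > 0]))   # real tone -> ±f pole pairs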

  15. IMPROVED ERROR ESTIMATES FOR MIXED FINITE ELEMENT FOR NONLINEAR HYPERBOLIC EQUATIONS: THE CONTINUOUS-TIME CASE

    Institute of Scientific and Technical Information of China (English)

    Yan-ping Chen; Yun-qing Huang

    2001-01-01

    Improved L2-error estimates are computed for mixed finite element methods for second order nonlinear hyperbolic equations. Results are given for the continuous-time case. The convergence of the values for both the scalar function and the flux is demonstrated. The technique used here covers the lowest-order Raviart-Thomas spaces, as well as the higher-order spaces. A second paper will present the analysis of a fully discrete scheme (Numer. Math. J. Chinese Univ. vol. 9, no. 2, 2000, 181-192).

  16. Improving high-resolution quantitative precipitation estimation via fusion of multiple radar-based precipitation products

    Science.gov (United States)

    Rafieeinasab, Arezoo; Norouzi, Amir; Seo, Dong-Jun; Nelson, Brian

    2015-12-01

    For monitoring and prediction of water-related hazards in urban areas such as flash flooding, high-resolution hydrologic and hydraulic modeling is necessary. Because of the large sensitivity and scale dependence of rainfall-runoff models to errors in quantitative precipitation estimates (QPE), it is very important that the accuracy of QPE be improved in high-resolution hydrologic modeling to the greatest extent possible. With the availability of multiple radar-based precipitation products in many areas, one may now consider fusing them to produce more accurate high-resolution QPE for a wide spectrum of applications. In this work, we formulate and comparatively evaluate four relatively simple procedures for such fusion based on Fisher estimation and its conditional bias-penalized variant: Direct Estimation (DE), Bias Correction (BC), Reduced-Dimension Bias Correction (RBC) and Simple Estimation (SE). They are applied to fuse the Multisensor Precipitation Estimator (MPE) and radar-only Next Generation QPE (Q2) products at the 15-min 1-km resolution (Experiment 1), and the MPE and Collaborative Adaptive Sensing of the Atmosphere (CASA) QPE products at the 15-min 500-m resolution (Experiment 2). The resulting fused estimates are evaluated using the 15-min rain gauge observations from the City of Grand Prairie in the Dallas-Fort Worth Metroplex (DFW) in north Texas. The main criterion used for evaluation is that the fused QPE improves over the ingredient QPEs at their native spatial resolutions, and that, at the higher resolution, the fused QPE improves not only over the ingredient higher-resolution QPE but also over the ingredient lower-resolution QPE trivially disaggregated using the ingredient high-resolution QPE. All four procedures assume that the ingredient QPEs are unbiased, which is not likely to hold true in reality even if real-time bias correction is in operation. To test robustness under more realistic conditions, the fusion procedures were evaluated with and ...
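
    At its core, Fisher-style fusion of two unbiased QPEs is inverse-error-variance weighting; a minimal gridwise sketch of that core (the paper's DE/BC/RBC/SE procedures and the conditional-bias-penalized variant add structure beyond this).

        import numpy as np

        def fisher_fuse(qpe_a, var_a, qpe_b, var_b):
            """Merge two co-registered precipitation grids, each assumed
            unbiased with known error variance, by inverse-variance (Fisher)
            weighting. Returns the fused field and its error variance."""
            w_a = 1.0 / var_a
            w_b = 1.0 / var_b
            fused = (w_a * qpe_a + w_b * qpe_b) / (w_a + w_b)
            return fused, 1.0 / (w_a + w_b)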

  17. Impact of regression methods on improved effects of soil structure on soil water retention estimates

    Science.gov (United States)

    Nguyen, Phuong Minh; De Pue, Jan; Le, Khoa Van; Cornelis, Wim

    2015-06-01

    Increasing the accuracy of pedotransfer functions (PTFs), an indirect method for predicting non-readily available soil features such as soil water retention characteristics (SWRC), is of crucial importance for large-scale agro-hydrological modeling. Adding significant predictors (i.e., soil structure) and implementing more flexible regression algorithms are among the main strategies for PTF improvement. The aim of this study was to investigate whether the improved effect of categorical soil structure information on estimating soil-water content at various matric potentials, which has been reported in the literature, could be enduringly captured by regression techniques other than the usually applied linear regression. Two data mining techniques, i.e., Support Vector Machines (SVM) and k-Nearest Neighbors (kNN), which have recently been introduced as promising tools for PTF development, were utilized to test whether the incorporation of soil structure improves PTF accuracy in a context of rather limited training data. The results show that incorporating descriptive soil structure information, i.e., massive, structured and structureless, as a grouping criterion can improve the accuracy of PTFs derived by the SVM approach in the range of matric potential of -6 to -33 kPa (average RMSE decreased by up to 0.005 m3 m-3 after grouping, depending on matric potential). The improvement was primarily attributed to the outperformance of SVM-PTFs calibrated on structureless soils. No improvement was obtained with the kNN technique, at least not in our study, in which the data set became limited in size after grouping. Since the regression technique has an impact on the improved effect of incorporating qualitative soil structure information, selecting a proper technique will help to maximize the combined influence of flexible regression algorithms and soil structure information on PTF accuracy.
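
    A sketch of the grouped-SVM idea using scikit-learn; the feature set and hyperparameters are placeholders, not the study's calibrated configuration.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        def fit_grouped_ptfs(X, y, structure_class):
            """Fit one support-vector-regression PTF per soil-structure group
            (e.g., 'massive', 'structured', 'structureless'), mirroring the
            grouping strategy in the abstract. X holds basic soil properties,
            y the water content at one matric potential."""
            models = {}
            for cls in np.unique(structure_class):
                mask = structure_class == cls
                model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.01))
                models[cls] = model.fit(X[mask], y[mask])
            return models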

  18. Improvement and quantitative performance estimation of the back support muscle suit.

    Science.gov (United States)

    Muramatsu, Y; Umehara, H; Kobayashi, H

    2013-01-01

    We have been developing wearable muscle suits for direct and physical motion support. The use of the McKibben artificial muscle has opened the way to the introduction of "muscle suits": compact, lightweight, reliable, wearable "assist-bots" enabling manual workers to lift and carry weights. Since back pain is the most serious problem for manual workers, this paper presents improvements to the back support muscle suit developed in a feasibility study, together with a quantitative performance estimation. The structure of the upper body frame, the method of attachment to the body, and the addition of axes are explained as improvements. In the experiments, we investigated quantitative performance results and the efficiency of the back support muscle suit for vertical lifting of heavy weights by employing integrated electromyography (IEMG). The results indicated that IEMG values were reduced by about 40% by using the muscle suit.

  19. Worm-improved estimators in continuous-time quantum Monte Carlo

    Science.gov (United States)

    Gunacker, P.; Wallerberger, M.; Ribic, T.; Hausoel, A.; Sangiovanni, G.; Held, K.

    2016-09-01

    We derive the improved estimators for general interactions and employ these for the continuous-time quantum Monte Carlo method. Using a worm algorithm we show how measuring higher-ordered correlators leads to an improved high-frequency behavior in irreducible quantities such as the one-particle self-energy or the irreducible two-particle vertex for non-density-density interactions. A good knowledge of the asymptotics of the two-particle vertex is essential for calculating nonlocal electronic correlations using diagrammatic extensions to the dynamical mean field theory as well as for calculating susceptibilities. We test our algorithm against analytic results for the multiorbital atomic limit and the Falicov-Kimball model.

  20. An Improved Cuckoo Search Optimization Algorithm for the Problem of Chaotic Systems Parameter Estimation

    Directory of Open Access Journals (Sweden)

    Jun Wang

    2016-01-01

    Full Text Available This paper proposes an improved cuckoo search (ICS) algorithm to estimate the parameters of chaotic systems. In order to improve the optimization capability of the basic cuckoo search (CS) algorithm, the orthogonal design and simulated annealing operation are incorporated in the CS algorithm to enhance the exploitation search ability. Then the proposed algorithm is used to estimate the parameters of the Lorenz chaotic system and Chen chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the algorithm can estimate parameters with high accuracy and reliability. Finally, the results are compared with the CS algorithm, genetic algorithm, and particle swarm optimization algorithm, and the comparison demonstrates that the method is efficient and superior.

  1. An Improved Cuckoo Search Optimization Algorithm for the Problem of Chaotic Systems Parameter Estimation.

    Science.gov (United States)

    Wang, Jun; Zhou, Bihua; Zhou, Shudao

    2016-01-01

    This paper proposes an improved cuckoo search (ICS) algorithm to estimate the parameters of chaotic systems. In order to improve the optimization capability of the basic cuckoo search (CS) algorithm, the orthogonal design and simulated annealing operation are incorporated in the CS algorithm to enhance the exploitation search ability. Then the proposed algorithm is used to estimate the parameters of the Lorenz chaotic system and Chen chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the algorithm can estimate parameters with high accuracy and reliability. Finally, the results are compared with the CS algorithm, genetic algorithm, and particle swarm optimization algorithm, and the comparison demonstrates that the method is efficient and superior.
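
    For reference, the core of basic cuckoo search is a Lévy-flight move plus random abandonment of poor nests; a compact generic sketch (Mantegna's algorithm for the Lévy step; the ICS orthogonal-design and simulated-annealing refinements are not shown).

        import numpy as np
        from math import gamma, sin, pi

        def levy_step(dim, rng, beta=1.5):
            """Mantegna's algorithm for Levy-distributed steps."""
            sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                     (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
            u = rng.normal(0.0, sigma, dim)
            v = rng.normal(0.0, 1.0, dim)
            return u / np.abs(v) ** (1 / beta)

        def cuckoo_search(loss, bounds, n_nests=25, n_iter=500, pa=0.25, alpha=0.01):
            """Basic cuckoo search: Levy-flight moves plus abandonment of a
            fraction pa of nests. loss maps a parameter vector to a scalar;
            bounds is a sequence of (low, high) pairs."""
            rng = np.random.default_rng(0)
            lo, hi = np.array(bounds).T
            nests = rng.uniform(lo, hi, (n_nests, len(lo)))
            fit = np.array([loss(n) for n in nests])
            for _ in range(n_iter):
                best = nests[np.argmin(fit)]
                for i in range(n_nests):
                    cand = nests[i] + alpha * levy_step(len(lo), rng) * (nests[i] - best)
                    cand = np.clip(cand, lo, hi)
                    f = loss(cand)
                    if f < fit[i]:
                        nests[i], fit[i] = cand, f
                # Abandon a fraction pa of nests at random and re-seed them.
                drop = rng.random(n_nests) < pa
                nests[drop] = rng.uniform(lo, hi, (drop.sum(), len(lo)))
                fit[drop] = [loss(n) for n in nests[drop]]
            return nests[np.argmin(fit)], fit.min()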

  2. Rolipram improves facilitation of contextual fear extinction in the 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine-induced mouse model of Parkinson's disease

    Directory of Open Access Journals (Sweden)

    Ken-ichi Kinoshita

    2017-05-01

    Full Text Available Cognitive impairment often occurs in Parkinson's disease (PD), but the mechanism of onset remains unknown. Recently, we reported that PD model mice produced by 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) show facilitation of hippocampal memory extinction, which may be the cause of cognitive impairment in PD. When we examined the cAMP/CREB signaling in the hippocampus, decreased levels of cAMP and phosphorylated CREB were observed in the dentate gyrus (DG) of MPTP-treated mice. Administration of rolipram improved the memory deficits with concomitant recovery of cAMP and phosphorylated CREB levels, suggesting that reduced cAMP/CREB signaling in the DG leads to cognitive impairment in MPTP-treated mice.

  3. Facilitating the use of non-standard in vivo studies in health risk assessment of chemicals: a proposal to improve evaluation criteria and reporting.

    Science.gov (United States)

    Beronius, Anna; Molander, Linda; Rudén, Christina; Hanberg, Annika

    2014-06-01

    To improve data availability in health risk assessment of chemicals and fill information gaps, there is a need to facilitate the use of non-standard toxicity studies, i.e. studies not conducted according to any standardized toxicity test guidelines. The purpose of this work was to propose criteria and guidance for the evaluation of reliability and relevance of non-standard in vivo studies, which could be used to facilitate systematic and transparent evaluation of such studies for health risk assessment. Another aim was to propose user-friendly guidance for the reporting of non-standard studies, intended to promote an improvement in the reporting of studies that could be of use in risk assessment. Requirements and recommendations for the design and execution of in vivo toxicity studies were identified from the Organisation for Economic Co-operation and Development (OECD) test guidelines and served as the basis for the data evaluation criteria and reporting guidelines. Feedback was also collected from experts within the field of toxicity testing and risk assessment and used to construct a two-tiered framework for study evaluation, as well as to refine the reporting guidelines. The proposed framework emphasizes the importance of study relevance, and an important aspect is not to completely dismiss studies from health risk assessment based on very strict criteria for reliability. The suggested reporting guidelines provide researchers with a tool to fulfill reporting requirements as stated by regulatory agencies. Together, these resources provide an approach to include all relevant data that may fill information gaps and reduce scientific uncertainty in health risk assessment conclusions, and subsequently also in chemical policy decisions. Copyright © 2014 John Wiley & Sons, Ltd.

  4. Strategies to facilitate implementation and sustainability of large system transformations: a case study of a national program for improving quality of care for elderly people.

    Science.gov (United States)

    Nyström, Monica Elisabeth; Strehlenert, Helena; Hansson, Johan; Hasson, Henna

    2014-09-18

    Large-scale change initiatives stimulating change in several organizational systems in the health and social care sector are challenging both to lead and to evaluate. There is a lack of systematic research that can enrich our understanding of strategies to facilitate large system transformations in this sector. The purpose of this study was to examine the characteristics of core activities and strategies to facilitate implementation and change of a national program aimed at improving life for the most ill elderly people in Sweden. The program outcomes were also addressed to assess the impact of these strategies. A longitudinal case study design with multiple data collection methods was applied. Archival data (n = 795), interviews with key stakeholders (n = 11) and non-participant observations (n = 23) were analysed using content analysis. Outcome data were obtained from national quality registries. This study presents an approach for implementing a large national change program that is characterized by initial flexibility and dynamism regarding content and facilitation strategies and a growing complexity over time requiring more structure and coordination. The description of activities and strategies shows that the program management team engaged a variety of stakeholders and actor groups and accordingly used a palette of different strategies. The main strategies used to influence change in the target organisations were to use regional improvement coaches, regional strategic management teams, national quality registries, financial incentives and annually revised agreements. Interactive learning sessions, intense communication, monitoring and measurement, and active involvement of different experts and stakeholders, including elderly people, complemented these strategies. Program outcomes showed steady progress in most of the five target areas, less so for the target of achieving coordinated care. There is no blueprint for how to approach the challenging task of ...

  5. An Improved Weise’s Rule for Efficient Estimation of Stand Quadratic Mean Diameter

    Directory of Open Access Journals (Sweden)

    Róbert Sedmák

    2015-07-01

    Full Text Available The main objective of this study was to explore the accuracy of Weise's rule of thumb applied to an estimation of the quadratic mean diameter of a forest stand. Virtual stands of European beech (Fagus sylvatica L.) across a range of structure types were stochastically generated and random sampling was simulated. We compared the bias and accuracy of stand quadratic mean diameter estimates, employing different ranks of measured stems from a set of the 10 trees nearest to the sampling point. We proposed several modifications of the original Weise's rule based on the measurement and averaging of two different ranks centered on a target rank. In accordance with the original formulation of the empirical rule, we recommend the application of the measurement of the 6th stem in rank, corresponding to the 55% sample percentile of the diameter distribution, irrespective of mean diameter size and degree of diameter dispersion. The study also revealed that the application of appropriate two-measurement modifications of Weise's method, the 4th and 8th ranks or 3rd and 9th ranks averaged to the 6th central rank, should be preferred over the classic one-measurement estimation. The modified versions are characterised by improved accuracy (about 25%) without statistically significant bias and with measurement costs comparable to the classic Weise method.
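
    The quantities involved are simple to compute; a sketch of the true quadratic mean diameter, the classic rule and its two-measurement modification (function names are illustrative).

        import numpy as np

        def quadratic_mean_diameter(diameters):
            """True stand QMD: square root of the mean squared diameter."""
            d = np.asarray(diameters, dtype=float)
            return np.sqrt(np.mean(d ** 2))

        def weise_estimate(sample10):
            """Classic Weise rule: the 6th-smallest diameter of the 10 trees
            nearest the sampling point (~55th percentile) estimates the QMD."""
            return np.sort(sample10)[5]

        def weise_two_measurement(sample10, ranks=(4, 8)):
            """Two-measurement modification from the abstract: average two
            ranks centered on the 6th (e.g., 4th and 8th, or 3rd and 9th)."""
            s = np.sort(sample10)
            return 0.5 * (s[ranks[0] - 1] + s[ranks[1] - 1])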

  6. Improving the complementary methods to estimate evapotranspiration under diverse climatic and physical conditions

    Science.gov (United States)

    Anayah, F. M.; Kaluarachchi, J. J.

    2014-06-01

    Reliable estimation of evapotranspiration (ET) is important for the purpose of water resources planning and management. Complementary methods, including complementary relationship areal evapotranspiration (CRAE), advection aridity (AA) and Granger and Gray (GG), have been used to estimate ET because these methods are simple and practical in estimating regional ET using meteorological data only. However, prior studies have found limitations in these methods especially in contrasting climates. This study aims to develop a calibration-free universal method using the complementary relationships to compute regional ET in contrasting climatic and physical conditions with meteorological data only. The proposed methodology consists of a systematic sensitivity analysis using the existing complementary methods. This work used 34 global FLUXNET sites where eddy covariance (EC) fluxes of ET are available for validation. A total of 33 alternative model variations from the original complementary methods were proposed. Further analysis using statistical methods and simplified climatic class definitions produced one distinctly improved GG-model-based alternative. The proposed model produced a single-step ET formulation with results equal to or better than the recent studies using data-intensive, classical methods. Average root mean square error (RMSE), mean absolute bias (BIAS) and R2 (coefficient of determination) across 34 global sites were 20.57 mm month-1, 10.55 mm month-1 and 0.64, respectively. The proposed model showed a step forward toward predicting ET in large river basins with limited data and requiring no calibration.
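
    The family of complementary methods named above rests on Bouchet's complementary relationship between actual, potential and wet-environment evapotranspiration; its simplest symmetric form is sketched below (CRAE, AA and GG differ mainly in how they parameterize the two right-hand quantities).

        % Bouchet's symmetric complementary relationship: actual ET (ET_a)
        % falls below the wet-environment rate (ET_w) by as much as the
        % apparent potential rate (ET_p) rises above it.
        \mathrm{ET}_a = 2\,\mathrm{ET}_w - \mathrm{ET}_p,
        \qquad \mathrm{ET}_w \le \mathrm{ET}_a \le \mathrm{ET}_p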

  7. State Estimation and Forecasting of the Ski-Slope Model Using an Improved Shadowing Filter

    Science.gov (United States)

    Mat Daud, Auni Aslah

    In this paper, we present the application of the gradient descent of indeterminism (GDI) shadowing filter to a chaotic system, namely the ski-slope model. The paper focuses on the quality of the estimated states and their usability for forecasting. One main problem is that the existing GDI shadowing filter fails to stabilize the convergence of the root mean square error and the last-point error of the ski-slope model. Furthermore, there are unexpected cases in which better state estimates give worse forecasts than worse state estimates. We investigate these unexpected cases in particular and show how the presence of the humps contributes to them. However, the results show that the GDI shadowing filter can successfully be applied to the ski-slope model with only slight modification, that is, by introducing an adaptive step size to ensure the convergence of indeterminism. We investigate its advantages over a fixed step size and how it can improve the performance of our shadowing filter.

  8. Improving Electricity Consumption Estimation for Electric Vehicles Based on Sparse GPS Observations

    Directory of Open Access Journals (Sweden)

    Jiangbo Wang

    2017-01-01

    Full Text Available Improving the estimation accuracy for the energy consumption of electric vehicles (EVs) would greatly contribute to alleviating the range anxiety of drivers and serve as a critical basis for the planning, operation, and management of charging infrastructure. To address the challenges in energy consumption estimation encountered due to sparse Global Positioning System (GPS) observations, an estimation model is proposed that considers both the kinetic characteristics derivable from sparse GPS observations and the unique attributes of EVs: (1) work opposing the rolling resistance; (2) aerodynamic friction losses; (3) energy consumption/generation depending on the grade of the route; (4) auxiliary load consumption; and (5) additional energy losses arising from the unstable power output of the electric motor. Two target quantities, the average energy consumption per kilometer and the energy consumption of an entire trip, were compared for model fitness, parameters, and effectiveness; the latter showed a higher fitness. Based on sparse GPS observations of 68 EVs in Aichi Prefecture, Japan, the traditional linear regression approach and a multilevel mixed-effects linear regression approach were used for model calibration. The proposed model showed high accuracy and demonstrated great potential for using sparse GPS observations to predict the energy consumption of EVs.
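
    The physical terms enumerated in the abstract translate directly into a per-segment energy model; a sketch with illustrative coefficient values (the paper calibrates such parameters by regression on GPS data, so every default below is a placeholder).

        G = 9.81          # gravitational acceleration, m/s^2

        def segment_energy_kwh(mass, dist, speed, grade, dt,
                               c_rr=0.01, cda=0.6, rho=1.2,
                               p_aux=500.0, eta=0.85, eta_regen=0.6):
            """Energy drawn from the battery over one road segment.
            mass [kg], dist [m], speed [m/s], grade [rise/run], dt [s].
            Rolling resistance c_rr, drag area cda, air density rho,
            auxiliary power p_aux and drivetrain efficiencies are
            illustrative placeholders, not the paper's calibrated values."""
            f_roll = c_rr * mass * G                 # rolling resistance force
            f_aero = 0.5 * rho * cda * speed ** 2    # aerodynamic drag force
            f_grade = mass * G * grade               # grade force (+up / -down)
            e_wheel = (f_roll + f_aero + f_grade) * dist    # J at the wheels
            if e_wheel >= 0:
                e_batt = e_wheel / eta               # motor/drivetrain losses
            else:
                e_batt = e_wheel * eta_regen         # partial regeneration
            e_batt += p_aux * dt                     # auxiliary load
            return e_batt / 3.6e6                    # J -> kWh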

  9. Computer Vision Methods for Improved Mobile Robot State Estimation in Challenging Terrains

    Directory of Open Access Journals (Sweden)

    Annalisa Milella

    2006-11-01

    Full Text Available External perception based on vision plays a critical role in developing improved and robust localization algorithms, as well as in gaining important information about the vehicle and the terrain it is traversing. This paper presents two novel methods for rough-terrain mobile robots using visual input. The first method consists of a stereovision algorithm for real-time 6DoF ego-motion estimation. It integrates image intensity information and 3D stereo data in the well-known Iterative Closest Point (ICP) scheme. Neither a priori knowledge of the motion nor inputs from other sensors are required, while the only assumption is that the scene always contains visually distinctive features which can be tracked over subsequent stereo pairs. This generates what is usually referred to as visual odometry. The second method aims at estimating the wheel sinkage of a mobile robot on sandy soil, based on an edge detection strategy. A semi-empirical model of wheel sinkage is also presented, referring to the classical terramechanics theory. Experimental results obtained with an all-terrain mobile robot and with a wheel sinkage test bed are presented to validate our approach. It is shown that the proposed techniques can be integrated in control and planning algorithms to improve the performance of ground vehicles operating in uncharted environments.

  10. Coupling NLDAS Model Output with MODIS Products for Improved Spatial Evapotranspiration Estimates

    Science.gov (United States)

    Kim, J.; Hogue, T.

    2008-12-01

    Given the growing concern over regional water supplies in much of the arid west, the quantification of water use by urban and agricultural landscapes is critically important. Water lost through evapotranspiration (ET) typically cannot be recaptured or recycled, increasing the need for accurate accounting of ET in regional water management and planning. In this study, we investigate a method to better capture the spatial characteristics of ET by coupling operational North American Land Data Assimilation System (NLDAS) Noah Land Surface Model (LSM) outputs and a previously developed MODIS-based Potential Evapotranspiration (PET) product. The resultant product has higher resolution (1 km) than the NLDAS model ET outputs (~12.5 km) and provides improved estimates within highly heterogeneous terrain and landscapes. We undertake this study in the Southern California region, which provides an excellent case study for examining the developed product's ability to estimate vegetation dynamics over rapidly growing, and highly irrigated, urban ecosystems. General trends in both products are similar; however, the coupled MODIS-NLDAS ET product shows higher spatial variability, better capturing land surface heterogeneity than the NLDAS-based ET. Improved ET representation is especially obvious during the spring season, when precipitation is muted and evaporative flux is dominant. We also quantify seasonal landscape water demand over urban landscapes in several major counties (i.e. Los Angeles, San Diego and Riverside) using the MODIS-NLDAS ET model.

  11. Using dark current data to estimate AVIRIS noise covariance and improve spectral analyses

    Science.gov (United States)

    Boardman, Joseph W.

    1995-01-01

    Starting in 1994, all AVIRIS data distributions include a new product useful for quantification and modeling of the noise in the reported radiance data. The 'postcal' file contains approximately 100 lines of dark current data collected at the end of each data acquisition run. In essence this is a regular spectral-image cube, with 614 samples, 100 lines and 224 channels, collected with a closed shutter. Since there is no incident radiance signal, the recorded DN measure only the DC signal level and the noise in the system. Similar dark current measurements, made at the end of each line, are used, with a 100-line moving average, to remove the DC signal offset. Therefore, the pixel-by-pixel fluctuations about the mean of this dark current image provide an excellent model for the additive noise that is present in AVIRIS reported radiance data. The 61,400 dark current spectra can be used to calculate the noise levels in each channel and the noise covariance matrix. Both of these noise parameters should be used to improve spectral processing techniques. Some processing techniques, such as spectral curve fitting, will benefit from a robust estimate of the channel-dependent noise levels. Other techniques, such as automated unmixing and classification, will be improved by the stable and scene-independent noise covariance estimate. Future imaging spectrometry systems should have a similar ability to record dark current data, permitting this noise characterization and modeling.
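
    Estimating the channel noise levels and covariance from such a dark-current cube is a direct computation; a sketch assuming the cube has already been read into a NumPy array (the array layout is an assumption about the file format).

        import numpy as np

        def dark_current_noise(dark_cube):
            """dark_cube: (lines, samples, channels) dark-current DN array.
            Returns per-channel noise standard deviations and the full
            channel-by-channel noise covariance matrix, estimated from the
            pixel-by-pixel fluctuations about the mean dark level."""
            spectra = dark_cube.reshape(-1, dark_cube.shape[-1]).astype(float)
            spectra -= spectra.mean(axis=0)          # remove per-channel DC offset
            cov = spectra.T @ spectra / (len(spectra) - 1)
            return np.sqrt(np.diag(cov)), cov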

  12. Improved Age Estimation for Solar-Type Dwarfs Using Activity-Rotation Diagnostics

    CERN Document Server

    Mamajek, Eric E

    2008-01-01

    While the strong anti-correlation between chromospheric activity and age has led to the common use of the Ca II H & K emission index (R'_HK = L_HK/L_bol) as an empirical age estimator for solar type dwarfs, existing activity-age relations produce implausible ages at both high and low activity levels. We have compiled R'_HK data from the literature for young stellar clusters, richly populating for the first time the young end of the activity-age relation. Combining the cluster activity data with modern cluster age estimates, and analyzing the color-dependence of the chromospheric activity age index, we derive an improved activity-age calibration for F7-K2 dwarfs (0.5 < B-V < 0.9 mag). We also present a more fundamentally motivated activity-age calibration that relies on conversion of R'_HK values through the Rossby number to rotation periods, and then makes use of improved gyrochronology relations. We demonstrate that our new activity-age calibration has typical age precision of ~0.2 dex for normal s...

  13. Improved estimation of anomalous diffusion exponents in single-particle tracking experiments.

    Science.gov (United States)

    Kepten, Eldad; Bronshtein, Irena; Garini, Yuval

    2013-05-01

    The mean square displacement is a central tool in the analysis of single-particle tracking experiments, shedding light on various biophysical phenomena. Frequently, parameters are extracted by performing time averages on single-particle trajectories followed by ensemble averaging. This procedure, however, suffers from two systematic errors when applied to particles that perform anomalous diffusion. The first is significant at short time lags and is induced by measurement errors. The second arises from the natural heterogeneity in biophysical systems. We show how to estimate and correct these two errors and improve the estimation of the anomalous parameters for the whole particle distribution. As a consequence, we manage to characterize ensembles of heterogeneous particles even for rather short and noisy measurements where regular time-averaged mean square displacement analysis fails. We apply this method to both simulations and in vivo measurements of telomere diffusion in 3T3 mouse embryonic fibroblast cells. The motion of telomeres is found to be subdiffusive with an average exponent constant in time. Individual telomere exponents are normally distributed around the average exponent. The proposed methodology has the potential to improve experimental accuracy while maintaining lower experimental costs and complexity.

  14. Improvement of stratospheric balloon positioning and the impact on Antarctic gravity wave parameter estimation

    Science.gov (United States)

    Zhang, W.; Haase, J. S.; Hertzog, A.; Lou, Y.; Vincent, R. A.

    2015-12-01

    Gravity waves (GWs) play an important role in transferring energy and momentum from the troposphere to the middle atmosphere. However, shorter-period GWs are generally not explicitly resolved in general circulation models and need to be parameterized instead. Super-pressure balloons, which float on isopycnal surfaces, provide direct access to measurements of GW characteristics as a function of wave intrinsic frequency, which are needed for these parameterizations. The 30 s sampling rate of the GPS receivers carried on the balloons deployed in the 2010 Concordiasi campaign in the Antarctic region is much higher than in previous campaigns and can cover the full range of the GW spectrum. Two of the 19 balloons in the Concordiasi campaign are also equipped with high-accuracy dual-frequency GPS receivers, initially developed for GPS radio occultation research, in addition to the regular single-frequency receivers, which promises better accuracy of the balloon positions for the purpose of GW momentum flux estimates. The positions are estimated using the Precise Point Positioning with Ambiguity Resolution (PPPAR) method based on the GPS data. Improvements in the positions are significant, from ~3-10 m to ~0.1-0.2 m in 3-D position, which makes it possible to resolve the Eulerian pressure independently of height for the estimation of the intrinsic phase speed. The impacts of the position improvements on the final GW parameter (momentum flux and intrinsic phase speed) retrievals are highlighted, with a ~0.54 mPa difference in the mean absolute momentum flux in the Antarctic region and a considerable difference in the distribution of the intrinsic phase speed.

  15. Improving the spatial estimation of evapotranspiration by assimilating land surface temperature data

    Science.gov (United States)

    Zink, Matthias; Samaniego, Luis; Cuntz, Matthias

    2013-04-01

    A combined investigation of the water and energy balance in hydrologic models might lead to a more accurate estimation of hydrological fluxes and state variables, such as evapotranspiration (ET) and soil moisture. Hydrologic models are usually calibrated against discharge measurements, and thus are only trained on the integrated signal at a few points within a catchment. This procedure does not take into account any spatial variability of fluxes or state variables. Satellite data are a useful source of information for incorporating spatial information into hydrologic models. The objective of this study is to improve the estimation of evapotranspiration in the spatial domain by using satellite-derived land surface temperature (Ts) for the calibration of the distributed hydrological model mHM. The satellite products are based on data of Meteosat Second Generation (MSG) and are provided by the Land Surface Analysis - Satellite Application Facility (LSA-SAF). mHM simulations of Ts are obtained by solving the energy balance, wherein evapotranspiration is determined by closing the water balance. Net radiation is calculated by using incoming short- and longwave radiation, albedo and emissivity data provided by LSA-SAF. The Multiscale Parameter Regionalization technique (MPR, Samaniego et al. 2010) is applied to determine the aerodynamic resistance among other parameters. The optimization is performed for the year 2009 using three objective functions that consider (1) only discharge, (2) only Ts, and (3) both discharge and Ts. For the spatial comparison of satellite-derived and estimated Ts fields, a new measure accounting for local spatial variabilities is introduced. The proposed method is applied to seven major German river basins, i.e. Danube, Ems, Main, Mulde, Neckar, Saale, and Weser. The results of the Ts simulations show a bias of 4.1 K compared to the satellite data. We hypothesize that this bias is inherent to the satellite data rather than to the model simulations. This

  16. Environmental DNA (eDNA) sampling improves occurrence and detection estimates of invasive Burmese pythons.

    Science.gov (United States)

    Hunter, Margaret E; Oyler-McCance, Sara J; Dorazio, Robert M; Fike, Jennifer A; Smith, Brian J; Hunter, Charles T; Reed, Robert N; Hart, Kristen M

    2015-01-01

    Environmental DNA (eDNA) methods are used to detect DNA that is shed into the aquatic environment by cryptic or low density species. Applied in eDNA studies, occupancy models can be used to estimate occurrence and detection probabilities and thereby account for imperfect detection. However, occupancy terminology has been applied inconsistently in eDNA studies, and many have calculated occurrence probabilities while not considering the effects of imperfect detection. Low detection of invasive giant constrictors using visual surveys and traps has hampered the estimation of occupancy and detection estimates needed for population management in southern Florida, USA. Giant constrictor snakes pose a threat to native species and the ecological restoration of the Florida Everglades. To assist with detection, we developed species-specific eDNA assays using quantitative PCR (qPCR) for the Burmese python (Python molurus bivittatus), Northern African python (P. sebae), boa constrictor (Boa constrictor), and the green (Eunectes murinus) and yellow anaconda (E. notaeus). Burmese pythons, Northern African pythons, and boa constrictors are established and reproducing, while the green and yellow anaconda have the potential to become established. We validated the python and boa constrictor assays using laboratory trials and tested all species in 21 field locations distributed in eight southern Florida regions. Burmese python eDNA was detected in 37 of 63 field sampling events; however, the other species were not detected. Although eDNA was heterogeneously distributed in the environment, occupancy models were able to provide the first estimates of detection probabilities, which were greater than 91%. Burmese python eDNA was detected along the leading northern edge of the known population boundary. The development of informative detection tools and eDNA occupancy models can improve conservation efforts in southern Florida and support more extensive studies of invasive constrictors
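    The occupancy-model machinery referred to here can be illustrated with a minimal maximum-likelihood fit of a constant occurrence probability ψ and a per-replicate detection probability p from detection/non-detection histories. This is a generic single-season occupancy sketch with made-up detection counts, not the authors' hierarchical eDNA model:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, detections, n_replicates):
    # Logit-scale parameters keep psi and p inside (0, 1) while optimizing.
    psi = 1 / (1 + np.exp(-params[0]))
    p = 1 / (1 + np.exp(-params[1]))
    ll = 0.0
    for d, K in zip(detections, n_replicates):
        if d > 0:  # occupied and detected d times out of K replicates
            # (binomial coefficient omitted; it does not affect the maximizer)
            ll += np.log(psi) + d * np.log(p) + (K - d) * np.log(1 - p)
        else:      # either occupied but missed K times, or truly unoccupied
            ll += np.log(psi * (1 - p) ** K + (1 - psi))
    return -ll

# Detection histories: number of positive qPCR replicates per site.
detections = np.array([3, 0, 2, 5, 0, 1, 4, 0])
n_replicates = np.full(detections.shape, 6)
res = minimize(neg_log_likelihood, x0=[0.0, 0.0],
               args=(detections, n_replicates), method="Nelder-Mead")
psi_hat, p_hat = 1 / (1 + np.exp(-res.x))
print(f"occupancy = {psi_hat:.2f}, detection probability = {p_hat:.2f}")
```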

  17. An Improved Global Wind Resource Estimate for Integrated Assessment Models: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Eurek, Kelly [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sullivan, Patrick [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gleason, Michael [National Renewable Energy Lab. (NREL), Golden, CO (United States); Hettinger, Dylan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Heimiller, Donna [National Renewable Energy Lab. (NREL), Golden, CO (United States); Lopez, Anthony [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-02-01

    This paper summarizes initial steps to improving the robustness and accuracy of global renewable resource and techno-economic assessments for use in integrated assessment models. We outline a method to construct country-level wind resource supply curves, delineated by resource quality and other parameters. Using mesoscale reanalysis data, we generate estimates for wind quality, both terrestrial and offshore, across the globe. Because not all land or water area is suitable for development, appropriate database layers provide exclusions to reduce the total resource to its technical potential. We expand upon estimates from related studies by: using a globally consistent data source of uniquely detailed wind speed characterizations; assuming a non-constant coefficient of performance for adjusting power curves for altitude; categorizing the distance from resource sites to the electric power grid; and characterizing offshore exclusions on the basis of sea ice concentrations. The product, then, is technical potential by country, classified by resource quality as determined by net capacity factor. Additional classification dimensions are available, including distance to transmission networks for terrestrial wind and distance to shore and water depth for offshore. We estimate a total global wind generation potential of 560 PWh for terrestrial wind, with 90% of the resource classified as low-to-mid quality, and 315 PWh for offshore wind, with 67% classified as mid-to-high quality. These estimates are based on 3.5 MW composite wind turbines with 90 m hub heights, 0.95 availability, 90% array efficiency, and 5 MW/km2 deployment density in non-excluded areas. We compare the underlying technical assumptions and results with other global assessments.

  18. Environmental DNA (eDNA) sampling improves occurrence and detection estimates of invasive Burmese pythons

    Science.gov (United States)

    Hunter, Margaret E.; Oyler-McCance, Sara J.; Dorazio, Robert M.; Fike, Jennifer A.; Smith, Brian J.; Hunter, Charles T.; Reed, Robert N.; Hart, Kristen M.

    2015-01-01

    Environmental DNA (eDNA) methods are used to detect DNA that is shed into the aquatic environment by cryptic or low density species. Applied in eDNA studies, occupancy models can be used to estimate occurrence and detection probabilities and thereby account for imperfect detection. However, occupancy terminology has been applied inconsistently in eDNA studies, and many have calculated occurrence probabilities while not considering the effects of imperfect detection. Low detection of invasive giant constrictors using visual surveys and traps has hampered the estimation of occupancy and detection estimates needed for population management in southern Florida, USA. Giant constrictor snakes pose a threat to native species and the ecological restoration of the Florida Everglades. To assist with detection, we developed species-specific eDNA assays using quantitative PCR (qPCR) for the Burmese python (Python molurus bivittatus), Northern African python (P. sebae), boa constrictor (Boa constrictor), and the green (Eunectes murinus) and yellow anaconda (E. notaeus). Burmese pythons, Northern African pythons, and boa constrictors are established and reproducing, while the green and yellow anaconda have the potential to become established. We validated the python and boa constrictor assays using laboratory trials and tested all species in 21 field locations distributed in eight southern Florida regions. Burmese python eDNA was detected in 37 of 63 field sampling events; however, the other species were not detected. Although eDNA was heterogeneously distributed in the environment, occupancy models were able to provide the first estimates of detection probabilities, which were greater than 91%. Burmese python eDNA was detected along the leading northern edge of the known population boundary. The development of informative detection tools and eDNA occupancy models can improve conservation efforts in southern Florida and support more extensive studies of invasive constrictors

  19. Improving Maryland's Offshore Wind Energy Resource Estimate Using Doppler Wind Lidar Technology to Assess Micrometeorology Controls

    Science.gov (United States)

    St. Pé, Alexandra; Wesloh, Daniel; Antoszewski, Graham; Daham, Farrah; Goudarzi, Navid; Rabenhorst, Scott; Delgado, Ruben

    2016-06-01

    There is enormous potential to harness the kinetic energy of offshore wind and produce power. However, significant uncertainties are introduced in the offshore wind resource assessment process, due in part to limited observational networks and a poor understanding of the marine atmosphere's complexity. Given the cubic relationship between a turbine's power output and wind speed, a relatively small error in the wind speed estimate translates to a significant error in expected power production. The University of Maryland Baltimore County (UMBC) collected in-situ measurements offshore, within Maryland's Wind Energy Area (WEA), from July-August 2013. This research demonstrates the ability of Doppler wind lidar technology to reduce uncertainty in estimating an offshore wind resource, compared to traditional resource assessment techniques, by providing a more accurate representation of the wind profile and associated hub-height wind speed variability. The second objective of this research is to elucidate the impact of offshore micrometeorology controls (stability, wind shear, turbulence) on a turbine's ability to produce power. Compared to lidar measurements, power law extrapolation estimates and operational National Weather Service models underestimated hub-height wind speeds in the WEA. In addition, lidar observations suggest the frequent development of a low-level wind maximum (LLWM), with high turbine-layer wind shear and low turbulence intensity within a turbine's rotor layer (40 m-160 m). Results elucidate the advantages of using Doppler wind lidar technology to improve offshore wind resource estimates and to monitor how under-sampled offshore meteorological controls affect a potential turbine's ability to produce power.
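    The power law extrapolation that the lidar measurements are compared against is compact enough to state directly. A sketch with illustrative speeds and heights rather than campaign data, also showing why the cubic power-speed relationship amplifies extrapolation errors:

```python
import numpy as np

def shear_exponent(v1, z1, v2, z2):
    # Power-law shear exponent alpha from speeds at two heights:
    # v2 / v1 = (z2 / z1) ** alpha
    return np.log(v2 / v1) / np.log(z2 / z1)

def extrapolate_speed(v_ref, z_ref, z_hub, alpha):
    return v_ref * (z_hub / z_ref) ** alpha

# Illustrative near-surface measurements (not campaign data):
v40, v60 = 7.2, 7.8            # m/s at 40 m and 60 m
alpha = shear_exponent(v40, 40.0, v60, 60.0)
v_hub = extrapolate_speed(v60, 60.0, 100.0, alpha)

# The cubic dependence of power on speed amplifies any speed error:
# a 5% underestimate of v_hub gives roughly a 16% power underestimate.
print(f"alpha = {alpha:.2f}, hub-height speed = {v_hub:.2f} m/s")
print(f"power error for a 5% speed error: {1.05**3 - 1:.1%}")
```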

  20. Environmental DNA (eDNA) sampling improves occurrence and detection estimates of invasive Burmese pythons.

    Directory of Open Access Journals (Sweden)

    Margaret E Hunter

    Full Text Available Environmental DNA (eDNA) methods are used to detect DNA that is shed into the aquatic environment by cryptic or low density species. Applied in eDNA studies, occupancy models can be used to estimate occurrence and detection probabilities and thereby account for imperfect detection. However, occupancy terminology has been applied inconsistently in eDNA studies, and many have calculated occurrence probabilities while not considering the effects of imperfect detection. Low detection of invasive giant constrictors using visual surveys and traps has hampered the estimation of occupancy and detection estimates needed for population management in southern Florida, USA. Giant constrictor snakes pose a threat to native species and the ecological restoration of the Florida Everglades. To assist with detection, we developed species-specific eDNA assays using quantitative PCR (qPCR) for the Burmese python (Python molurus bivittatus), Northern African python (P. sebae), boa constrictor (Boa constrictor), and the green (Eunectes murinus) and yellow anaconda (E. notaeus). Burmese pythons, Northern African pythons, and boa constrictors are established and reproducing, while the green and yellow anaconda have the potential to become established. We validated the python and boa constrictor assays using laboratory trials and tested all species in 21 field locations distributed in eight southern Florida regions. Burmese python eDNA was detected in 37 of 63 field sampling events; however, the other species were not detected. Although eDNA was heterogeneously distributed in the environment, occupancy models were able to provide the first estimates of detection probabilities, which were greater than 91%. Burmese python eDNA was detected along the leading northern edge of the known population boundary. The development of informative detection tools and eDNA occupancy models can improve conservation efforts in southern Florida and support more extensive studies of invasive

  1. A hidden state space modeling approach for improving glacier surface velocity estimates using remotely sensed data

    Science.gov (United States)

    Henke, D.; Schubert, A.; Small, D.; Meier, E.; Lüthi, M. P.; Vieli, A.

    2014-12-01

    A new method for glacier surface velocity (GSV) estimates is proposed here which combines ground- and space-based measurements with hidden state space modeling (HSSM). Examples of such a fusion of physical models with remote sensing (RS) observations were described in (Henke & Meier, Hidden State Space Models for Improved Remote Sensing Applications, ITISE 2014, p. 1242-1255) and are currently adapted for GSV estimation. GSV can be estimated using in situ measurements, RS methods or numerical simulations based on ice-flow models. In situ measurements ensure high accuracy but limited coverage and time consuming field work, while RS methods offer regular observations with high spatial coverage generally not possible with in situ methods. In particular, spaceborne Synthetic Aperture Radar (SAR) can obtain useful images independent of daytime and cloud cover. A ground portable radar interferometer (GPRI) is useful for investigating a particular area in more detail than is possible from space, but provides local coverage only. Several processing methods for deriving GSV from radar sensors have been established, including interferometry and offset tracking (Schubert et al, Glacier surface velocity estimation using repeat TerraSAR-X images. ISPRS Journal of P&RS, p. 49-62, 2013). On the other hand, it is also possible to derive glacier parameters from numerical ice-flow modeling alone. Given a well-parameterized model, GSV can in theory be derived and propagated continuously in time. However, uncertainties in the glacier flow dynamics and model errors increase with excessive propagation. All of these methods have been studied independently, but attempts to combine them have only rarely been made. The HSSM we propose recursively estimates the GSV based on 1) a process model making use of temporal and spatial interdependencies between adjacent states, and 2) observations (RS and optional in situ). The in situ and GPRI images currently being processed were acquired in the

  2. Combining Satellite and Ground Magnetic Measurements to Improve Estimates of Electromagnetic Induction Transfer Functions

    Science.gov (United States)

    Balasis, G.; Egbert, G. D.

    2005-12-01

    Electromagnetic (EM) induction studies using satellite and ground-based magnetic data may ultimately provide critical new constraints on the electrical conductivity of Earth's mantle. Unlike ground-based observatories, which leave large areas of the Earth (especially the ocean basins) unsampled, satellites have the potential for nearly complete global coverage. However, because the number of operating satellites is limited, spatially complex (especially non-zonal) external current sources are sampled relatively poorly by satellites at any fixed time. The comparatively much larger number of ground-based observatories provides more complete synoptic sampling of external source structure. By combining data from both satellites and observatories, models of external sources can be improved, leading to more reliable global mapping of Earth conductivity. For example, estimates of EM induction transfer functions derived from night-side CHAMP data have previously been shown to have biases which depend systematically on local time (LT). This pattern of biases suggests that a purely zonal model does not adequately describe magnetospheric sources. As a first step toward improved modeling of spatial complexity in sources, we have applied empirical orthogonal function (EOF) methods to exploratory analysis of night-side observatory data. After subtraction of the predictions of the CM4 comprehensive model, which includes a zonally symmetric storm-time correction based on Dst, we find significant non-axisymmetric, but large-scale coherent variability in the mid-latitude night-side observatory residuals. Over the restricted range of local times (18:00-6:00) and latitudes (50°S to 50°N) considered, the dominant spatial mode of variability is reasonably approximated by a q21 quadrupole spherical harmonic. Temporal variability of this leading EOF mode is well correlated with Dst. Strategies for moving beyond this initial exploratory EOF analysis to combine observatory data with
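    EOF analysis of station residuals, as used here, amounts to a singular value decomposition of the time-by-station anomaly matrix. A self-contained sketch on synthetic residuals (the q21 spatial structure and the Dst correlation of the real analysis are of course not reproduced):

```python
import numpy as np

def eof_analysis(residuals):
    """EOF decomposition of a (time, station) residual matrix.

    Returns spatial modes (EOFs), temporal amplitudes (PCs), and the
    fraction of variance explained by each mode.
    """
    anomalies = residuals - residuals.mean(axis=0)
    # SVD: rows of vt are spatial EOFs, u*s are the temporal amplitudes.
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    variance_fraction = s**2 / np.sum(s**2)
    return vt, u * s, variance_fraction

# Synthetic example: 500 time samples at 30 observatories with one
# dominant coherent mode plus white noise.
rng = np.random.default_rng(2)
t = np.linspace(0, 50, 500)
mode = np.sin(2 * np.pi * t / 27)[:, None] * rng.normal(size=(1, 30))
data = mode + 0.3 * rng.normal(size=(500, 30))
eofs, pcs, var = eof_analysis(data)
# pcs[:, 0] is the temporal amplitude one would correlate with Dst.
print(f"leading mode explains {var[0]:.0%} of the variance")
```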

  3. Using pan-sharpened high resolution satellite data to improve impervious surfaces estimation

    Science.gov (United States)

    Xu, Ru; Zhang, Hongsheng; Wang, Ting; Lin, Hui

    2017-05-01

    Impervious surface is an important environmental and socio-economic indicator for numerous urban studies. While a large number of studies have been conducted to estimate the area and distribution of impervious surface from satellite data, the accuracy of impervious surface estimation (ISE) is insufficient due to the high diversity of urban land cover types. This study evaluated the use of panchromatic (PAN) data in very high resolution satellite images for improving the accuracy of ISE by various pan-sharpening approaches, with a further comprehensive analysis of its scale effects. Three benchmark pan-sharpening approaches, Gram-Schmidt (GS), PANSHARP and principal component analysis (PCA), were applied to WorldView-2 in three spots of Hong Kong. On-screen digitization was carried out based on Google Map and the results were treated as reference impervious surfaces. The reference impervious surfaces and the ISE results were then re-scaled to various spatial resolutions to obtain the percentage of impervious surfaces. The correlation coefficient (CC) and root mean square error (RMSE) were adopted as the quantitative indicators to assess the accuracy. The accuracy differences between the three research areas were further illustrated by the average local variance (ALV), which was used for landscape pattern analysis. The experimental results suggested that 1) the three research regions have various landscape patterns; 2) ISE accuracy extracted from pan-sharpened data was better than ISE from the original multispectral (MS) data; and 3) this improvement has a noticeable scale effect, which weakens slightly as the resolution becomes coarser.

  4. An improved multilevel Monte Carlo method for estimating probability distribution functions in stochastic oil reservoir simulations

    Science.gov (United States)

    Lu, Dan; Zhang, Guannan; Webster, Clayton; Barbier, Charlotte

    2016-12-01

    In this work, we develop an improved multilevel Monte Carlo (MLMC) method for estimating cumulative distribution functions (CDFs) of a quantity of interest, coming from numerical approximation of large-scale stochastic subsurface simulations. Compared with Monte Carlo (MC) methods, which require a significantly large number of high-fidelity model executions to achieve a prescribed accuracy when computing statistical expectations, MLMC methods were originally proposed to significantly reduce the computational cost with the use of multifidelity approximations. The improved performance of the MLMC methods depends strongly on the decay of the variance of the integrand as the level increases. However, the main challenge in estimating CDFs is that the integrand is a discontinuous indicator function whose variance decays slowly. To address this difficult task, we approximate the integrand using a smoothing function that accelerates the decay of the variance. In addition, we design a novel a posteriori optimization strategy to calibrate the smoothing function, so as to balance the computational gain and the approximation error. The combined proposed techniques are integrated into a very general and practical algorithm that can be applied to a wide range of subsurface problems for high-dimensional uncertainty quantification, such as a fine-grid oil reservoir model considered in this effort. The numerical results reveal that with the use of the calibrated smoothing function, the improved MLMC technique significantly reduces the computational complexity compared to the standard MC approach. Finally, we discuss several factors that affect the performance of the MLMC method and provide guidance for effective and efficient usage in practice.
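    The key trick described above, replacing the discontinuous indicator by a smoothing function inside the telescoping MLMC sum, can be shown on a toy problem. The sketch below uses a sigmoid smoother and a synthetic level hierarchy whose discretization error halves per level; the paper's a posteriori calibration of the smoothing width is not reproduced:

```python
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(3)

def smoothed_indicator(x, delta):
    # Smooth sigmoid surrogate for the indicator 1{x <= 0}; the width
    # delta trades approximation error against variance decay.
    return expit(-x / delta)

def approx(level, x, err):
    # Toy level-`level` approximation of Q = x: the "discretization
    # error" halves with each additional level.
    return x + 2.0 ** -level * err

def mlmc_cdf(q, n_per_level, delta=0.05):
    # Level-0 term: plain Monte Carlo on the coarsest approximation.
    x = rng.standard_normal(n_per_level[0])
    err = rng.standard_normal(n_per_level[0])
    est = smoothed_indicator(approx(0, x, err) - q, delta).mean()
    # Telescoping corrections E[g_l - g_{l-1}], evaluated on coupled
    # samples so that each difference has small variance.
    for lev in range(1, len(n_per_level)):
        x = rng.standard_normal(n_per_level[lev])
        err = rng.standard_normal(n_per_level[lev])
        g_fine = smoothed_indicator(approx(lev, x, err) - q, delta)
        g_coarse = smoothed_indicator(approx(lev - 1, x, err) - q, delta)
        est += (g_fine - g_coarse).mean()
    return est

# For the underlying Q ~ N(0,1), P(Q <= 1) is about 0.841; the estimate
# carries the finest level's (small) discretization bias.
print(mlmc_cdf(1.0, n_per_level=[20000, 5000, 1000]))
```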

  5. TCP- Costco Reno: New Variant by Improving Bandwidth Estimation to adapt over MANETs

    Directory of Open Access Journals (Sweden)

    Prakash B. Khelage

    2014-01-01

    Full Text Available The Transmission Control Protocol (TCP) is the traditional, dominant and de facto standard protocol used as the transport agent at the transport layer of the TCP/IP protocol suite. It is designed to provide reliability and guarantee end-to-end delivery of data over unreliable networks. In practice, most TCP deployments have been carefully designed in the context of wired networks, ignoring the properties of wireless ad hoc networks, which can lead to TCP implementations with poor performance. The problem of TCP and all its existing variations within MANETs resides in the inability to distinguish between the different causes of data packet loss: whenever data loss occurs, the traditional TCP congestion control algorithm assumes the loss is due to a congestion episode and unnecessarily reduces its sending parameters. Thus, TCP does not always behave optimally in the face of packet losses, which can cause network performance degradation and waste of resources. In order to adapt TCP to the mobile ad hoc environment, improvements based on RTT and bandwidth estimation techniques have been proposed in the literature to help TCP differentiate accurately between the different types of losses, but these still do not handle all the problems accurately and effectively. In this paper, the proposed TCP-Costco Reno, a new variant, accurately estimates the available bandwidth over mobile ad hoc networks and sets the sending rate accordingly to maximize utilization of available resources, and hence improves the performance of TCP over mobile ad hoc networks. The results of the simulation indicate an improvement in throughput over interference, link failure and signal loss validation scenarios. Further, they show the highest average throughput among the variants that are most successful over MANETs.

  6. Improvement of the size estimation of 3D tracked droplets using digital in-line holography with joint estimation reconstruction

    Science.gov (United States)

    Verrier, N.; Grosjean, N.; Dib, E.; Méès, L.; Fournier, C.; Marié, J.-L.

    2016-04-01

    Digital holography is a valuable tool for three-dimensional information extraction. Among existing configurations, the originally proposed set-up (i.e. Gabor, or in-line holography) is reasonably immune to variations in the experimental environment, making it a method of choice for studies of fluid dynamics. Nevertheless, standard hologram reconstruction techniques, based on numerical light back-propagation, are prone to artifacts such as twin images or aliases that limit both the quality and quantity of information extracted from the acquired holograms. To get around this issue, formulating hologram reconstruction as a parametric inverse problem has been shown to accurately estimate 3D positions and the size of seeding particles directly from the hologram. To push the bounds of accuracy on size estimation still further, we propose to fully exploit the information redundancy of a hologram video sequence using joint estimation reconstruction. Applying this approach in a bench-top experiment, we show that it led to a relative precision of 0.13% (for a 60 μm diameter droplet) for droplet size estimation, and a tracking precision of σx × σy × σz = 0.15 × 0.15 × 1 pixels.

  7. Improving the detection of explosive hazards with LIDAR-based ground plane estimation

    Science.gov (United States)

    Buck, A.; Keller, J. M.; Popescu, M.

    2016-05-01

    Three-dimensional point clouds generated by LIDAR offer the potential to build a more complete understanding of the environment in front of a moving vehicle. In particular, LIDAR data facilitates the development of a non-parametric ground plane model that can filter target predictions from other sensors into above-ground and below-ground sets. This allows for improved detection performance when, for example, a system designed to locate above-ground targets considers only the set of above-ground predictions. In this paper, we apply LIDAR-based ground plane filtering to a forward looking ground penetrating radar (FLGPR) sensor system and a side looking synthetic aperture acoustic (SAA) sensor system designed to detect explosive hazards along the side of a road. Additionally, we consider the value of the visual magnitude of the LIDAR return as a feature for identifying anomalies. The predictions from these sensors are evaluated independently with and without ground plane filtering and then fused to produce a combined prediction confidence. Sensor fusion is accomplished by interpolating the confidence scores of each sensor along the ground plane model to create a combined confidence vector at specified points in the environment. The methods are tested along an unpaved desert road at an arid U.S. Army test site.
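    The above/below-ground split can be illustrated with a planar least-squares stand-in for the paper's non-parametric ground model: fit a plane to the LIDAR returns, then bucket target predictions by their signed height above it. All coordinates below are synthetic:

```python
import numpy as np

def fit_ground_plane(points):
    """Least-squares plane z = a*x + b*y + c through LIDAR ground returns.

    A planar stand-in for the paper's non-parametric ground model.
    """
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def split_predictions(preds, coeffs, margin=0.1):
    # Signed height of each target prediction above the fitted plane.
    a, b, c = coeffs
    height = preds[:, 2] - (a * preds[:, 0] + b * preds[:, 1] + c)
    return preds[height > margin], preds[height <= margin]

# Synthetic gently sloping ground plus a few candidate detections.
rng = np.random.default_rng(4)
xy = rng.uniform(0, 50, size=(2000, 2))
ground = np.column_stack([xy, 0.02 * xy[:, 0] + rng.normal(0, 0.05, 2000)])
coeffs = fit_ground_plane(ground)
preds = np.array([[10, 5, 1.2], [30, 20, 0.55], [25, 40, 0.3]])
above, below = split_predictions(preds, coeffs, margin=0.5)
print(f"{len(above)} above-ground, {len(below)} at/below ground")
```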

  8. Kalman smoothing improves the estimation of joint kinematics and kinetics in marker-based human gait analysis.

    Science.gov (United States)

    De Groote, F; De Laet, T; Jonkers, I; De Schutter, J

    2008-12-05

    We developed a Kalman smoothing algorithm to improve estimates of joint kinematics from measured marker trajectories during motion analysis. Kalman smoothing estimates are based on complete marker trajectories. This is an improvement over other techniques, such as the global optimisation method (GOM), Kalman filtering, and local marker estimation (LME), where the estimate at each time instant is only based on part of the marker trajectories. We applied GOM, Kalman filtering, LME, and Kalman smoothing to marker trajectories from both simulated and experimental gait motion, to estimate the joint kinematics of a ten-segment biomechanical model with 21 degrees of freedom. Three simulated marker trajectories were studied: without errors, with instrumental errors, and with soft tissue artefacts (STA). Two modelling errors were studied: increased thigh length and hip centre dislocation. We calculated estimation errors from the known joint kinematics in the simulation study. Compared with other techniques, Kalman smoothing reduced the estimation errors for the joint positions, by more than 50% for the simulated marker trajectories without errors and with instrumental errors. Compared with GOM, Kalman smoothing reduced the estimation errors for the joint moments by more than 35%. Compared with Kalman filtering and LME, Kalman smoothing reduced the estimation errors for the joint accelerations by at least 50%. Our simulation results show that the use of Kalman smoothing substantially improves the estimates of joint kinematics and kinetics compared with previously proposed techniques (GOM, Kalman filtering, and LME) for both simulated gait motion, with and without modelling errors, and experimentally measured gait motion.
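    The filtering-versus-smoothing distinction that drives these results can be seen in a one-dimensional constant-velocity example: the backward Rauch-Tung-Striebel (RTS) pass lets every state estimate depend on the complete measurement trajectory. A minimal sketch, not the paper's 21-degree-of-freedom biomechanical model:

```python
import numpy as np

def rts_smoother(z, dt=0.01, q=1e-2, r=1e-3):
    """Forward Kalman filter + backward RTS smoother, 1-D constant velocity.

    Unlike filtering, each smoothed state uses the *complete* measurement
    trajectory, which is the key property exploited in the paper.
    """
    F = np.array([[1, dt], [0, 1]])                 # state transition
    H = np.array([[1.0, 0.0]])                      # observe position only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])             # process noise
    R = np.array([[r]])                             # measurement noise
    n = len(z)
    x = np.zeros((n, 2)); P = np.zeros((n, 2, 2))
    xp = np.zeros((n, 2)); Pp = np.zeros((n, 2, 2))
    xf, Pf = np.array([z[0], 0.0]), np.eye(2)
    for k in range(n):                              # forward filter
        xp[k] = F @ xf; Pp[k] = F @ Pf @ F.T + Q
        S = H @ Pp[k] @ H.T + R
        K = Pp[k] @ H.T @ np.linalg.inv(S)
        xf = xp[k] + K @ (z[k] - H @ xp[k])
        Pf = (np.eye(2) - K @ H) @ Pp[k]
        x[k], P[k] = xf, Pf
    xs = x.copy()
    for k in range(n - 2, -1, -1):                  # backward RTS pass
        C = P[k] @ F.T @ np.linalg.inv(Pp[k + 1])
        xs[k] = x[k] + C @ (xs[k + 1] - xp[k + 1])
    return xs

t = np.arange(0, 1, 0.01)
rng = np.random.default_rng(5)
z = np.sin(2 * np.pi * t) + 0.03 * rng.standard_normal(t.size)
smoothed = rts_smoother(z)   # columns: smoothed position and velocity
```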

  9. Road boundary estimation to improve vehicle detection and tracking in UAV video

    Institute of Scientific and Technical Information of China (English)

    张立业; 彭仲仁; 李立; 王华

    2014-01-01

    Video processing is one challenge in collecting vehicle trajectories from unmanned aerial vehicle (UAV) footage, and road boundary estimation is one way to improve video processing algorithms. However, current methods do not work well for low-volume roads, which are not well marked and contain noise such as vehicle tracks. A fusion-based method termed Dempster-Shafer-based road detection (DSRD) is proposed to address this issue. This method detects road boundaries by combining multiple information sources using Dempster-Shafer theory (DST). In order to test the performance of the proposed method, two field experiments were conducted, one on a highway partially covered by snow and another on a dense-traffic highway. The results show that DSRD is robust and accurate, with detection rates of 100% and 99.8% compared with manual detection results. DSRD was then adopted to improve the UAV video processing algorithm, and the vehicle detection and tracking rates improved by 2.7% and 5.5%, respectively. The computation time also decreased by 5% and 8.3% for the two experiments, respectively.
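    The core of the DST fusion is Dempster's rule of combination. A minimal sketch over a two-hypothesis frame {road, not_road}, with illustrative masses standing in for the paper's actual information sources:

```python
def combine_dempster(m1, m2):
    """Dempster's rule of combination for two mass functions.

    Masses are dicts mapping frozenset hypotheses to belief mass. Empty
    intersections contribute to the conflict K; the rest is renormalized
    by 1 - K.
    """
    combined, conflict = {}, 0.0
    for s1, v1 in m1.items():
        for s2, v2 in m2.items():
            inter = s1 & s2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    return {k: v / (1 - conflict) for k, v in combined.items()}

ROAD, NOT = frozenset({"road"}), frozenset({"not_road"})
EITHER = ROAD | NOT  # ignorance: mass assigned to the whole frame
# Illustrative masses from two sources (e.g. a color cue and a texture cue):
color = {ROAD: 0.6, NOT: 0.1, EITHER: 0.3}
texture = {ROAD: 0.5, NOT: 0.2, EITHER: 0.3}
print(combine_dempster(color, texture))
```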

  10. A maximum noise fraction transform with improved noise estimation for hyperspectral images

    Institute of Scientific and Technical Information of China (English)

    LIU Xiang; ZHANG Bing; GAO LianRu; CHEN DongMei

    2009-01-01

    Feature extraction is often performed to reduce the spectral dimension of hyperspectral images before image classification. The maximum noise fraction (MNF) transform is one of the most commonly used spectral feature extraction methods. The spectral features in several bands of hyperspectral images are submerged by the noise. The MNF transform is advantageous over the principal component (PC) transform because it takes the noise information in the spatial domain into consideration. However, the experiments described in this paper demonstrate that classification accuracy is greatly influenced by the MNF transform when the ground objects are mixed together. The underlying mechanism is revealed and analyzed by mathematical theory. In order to improve the performance of classification after feature extraction when ground objects are mixed in hyperspectral images, a new MNF transform, with an improved method of estimating the hyperspectral image noise covariance matrix (NCM), is presented. This improved MNF transform is applied to both simulated data and real data. The results show that, compared with the classical MNF transform, the new method enhances the ability of feature extraction and increases classification accuracy.

  11. Improved OFDM bandwidth estimation scheme

    Institute of Scientific and Technical Information of China (English)

    刘明骞; 李兵兵; 王婧舒

    2011-01-01

    The traditional orthogonal frequency division multiplexing (OFDM) bandwidth estimation scheme uses the fast Fourier transform (FFT) to estimate the spectrum, but the resulting spectrum is not very precise and the computational load is large. Thus, a bandwidth estimation scheme based on the Welch method was proposed. First, the scheme estimates the power spectrum of the OFDM signal with the Welch method. Second, the spectrum is decomposed and reconstructed by wavelet transform in order to smooth it. Then the moving covariance values of the smoothed spectrum are calculated and the positions of the two maxima of the covariance values are extracted in order to find the beginning and the end of the spectrum. Finally, the statistical average of the computed bandwidth is calculated and used as the final bandwidth. Simulation results show that the correct estimation rate of the improved scheme is 99.1% under multipath conditions at a signal-to-noise ratio of 0 dB. The scheme has higher precision and lower computational complexity than the traditional scheme.
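    The overall pipeline, a Welch power spectrum followed by an edge search, can be sketched compactly. For brevity the wavelet smoothing and moving-covariance steps are replaced here by a simple threshold on the averaged spectrum; the signal parameters are illustrative:

```python
import numpy as np
from scipy.signal import welch, firwin, lfilter

def estimate_bandwidth(signal, fs, nperseg=1024):
    """Occupied-bandwidth estimate from a Welch power spectrum.

    A threshold halfway (in dB) between the noise floor and the peak
    stands in for the paper's wavelet smoothing and moving-covariance
    edge search.
    """
    f, psd = welch(signal, fs=fs, nperseg=nperseg)
    log_psd = 10 * np.log10(psd)
    thresh = (np.median(log_psd) + log_psd.max()) / 2
    occupied = np.where(log_psd > thresh)[0]
    lo, hi = f[occupied[0]], f[occupied[-1]]
    return lo, hi, hi - lo

# Synthetic band-limited OFDM-like signal occupying 2-8 MHz at fs = 40 MHz.
rng = np.random.default_rng(6)
fs = 40e6
taps = firwin(201, [2e6, 8e6], fs=fs, pass_zero=False)
sig = lfilter(taps, 1.0, rng.standard_normal(2**16))
sig += 0.05 * rng.standard_normal(2**16)
lo, hi, bw = estimate_bandwidth(sig, fs)
print(f"band {lo/1e6:.1f}-{hi/1e6:.1f} MHz, bandwidth {bw/1e6:.1f} MHz")
```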

  12. Analysis and improvement of estimated snow water equivalent (SWE) using Artificial Neural Networks

    Science.gov (United States)

    E Azar, A.; Ghedira, H.; Khanbilvardi, R.

    2005-12-01

    The goal of this study is to improve the retrieval of SWE/snow depth in the Great Lakes area, United States, using passive microwave images along with the Normalized Difference Vegetation Index (NDVI) and Artificial Neural Networks (ANNs). Passive microwave images have been successfully used to estimate snow characteristics such as Snow Water Equivalent (SWE) and snow depth. Despite considerable progress, challenges still exist with respect to accuracy and reliability. In this study, Special Sensor Microwave Imager (SSM/I) channels, which are available in Equal-Area Scalable Earth Grid (EASE-Grid) format, are used. The study area is covered by a 28 by 35 grid of EASE-Grid pixels, 25 km by 25 km each. To build a comprehensive data set of brightness temperatures (Tb) of SSM/I channels, an assortment of pixels was selected based on latitude and land cover. A time series analysis was conducted for three winter seasons to assess the SSM/I capability to estimate snow depth and SWE for various land covers. Ground truth data were obtained from the National Climate Data Center (NCDC) and the National Operational Hydrological Remote Sensing Center (NOHRSC). The NCDC provided daily snow depth measurements reported from various stations located in the study area. Measurements were recorded and projected to match EASE-Grid formatting. The NOHRSC produces the SNODAS dataset using airborne gamma radiation and gauge measurements combined with a physical model. The data set consisted of different snow characteristics such as SWE and snow depth. Land cover characteristics are introduced by using the Normalized Difference Vegetation Index (NDVI). An Artificial Neural Network (ANN) algorithm has been employed to evaluate the effect of land cover in estimating snow depth and Snow Water Equivalent (SWE). The model is trained using SSM/I channels (19v, 19h, 37v, 37h, 22v, 85v, 85h) and the mean and standard deviation of NDVI for each pixel. The preliminary time series results showed various degrees of

  13. Evaluation of PCR based assays for the improvement of proportion estimation of bacterial and viral pathogens in diarrheal surveillance

    Directory of Open Access Journals (Sweden)

    Hongxia Guan

    2016-03-01

    Full Text Available Diarrhea can be caused by a variety of bacterial, viral and parasitic organisms. Laboratory diagnosis is essential in pathogen-specific burden assessment. In pathogen spectrum monitoring in diarrheal surveillance, culture methods are commonly used for the detection of bacterial pathogens, whereas nucleic acid based amplification, i.e. non-culture methods, is used for the viral pathogens. Differing methodologies may distort the apparent pathogen spectrum, both for bacterial pathogens, whose culturability varies with the media used, and for comparisons of bacterial vs. viral pathogens. The application of nucleic acid-based methods in the detection of viral and bacterial pathogens will likely increase the number of confirmed positive diagnoses, and will be comparable since all pathogens are detected from the same nucleic acid extracts from the same sample. In this study, bacterial pathogens, including diarrheagenic Escherichia coli (DEC), Salmonella spp., Shigella spp., Vibrio parahaemolyticus and V. cholerae, were detected in 334 diarrheal samples by PCR-based methods using nucleic acid extracted from stool samples and associated enrichment cultures. A protocol was established to facilitate the consistent identification of bacterial pathogens in diarrheal patients. Five common enteric viruses were also detected by RT-PCR, including rotavirus, sapovirus, norovirus (I and II), human astrovirus, and enteric adenovirus. Higher positive rates were found for the bacterial pathogens, showing that their proportion is underestimated if only culture methods are used. This application will improve the quality of bacterial diarrheagenic pathogen surveys, providing more accurate information pertaining to the pathogen spectrum associated with food safety problems and disease burden evaluation.

  14. What are the elements required to improve exposure estimates in life cycle assessments?

    DEFF Research Database (Denmark)

    Ernstoff, Alexi; Rosenbaum, Ralph K.; Margni, Manuele

    2016-01-01

    In this study we aim to identify and discuss priority elements required to improve exposure estimates in life cycle assessment (LCA). LCA aims at guiding decision-support to minimize damages to resources, humans, and ecosystems incurred in providing society with products and services. Potential human toxicity and ecosystem toxicity of chemicals posed by different product life cycle stages are characterized in the life cycle impact assessment (LCIA) phase. Exposure and effect quantification as part of LCIA toxicity characterization faces numerous challenges related to inventory analysis (e.g. number and quantity of chemicals emitted), substance-specific modelling (e.g. organics, inorganics, nano-materials) in various environments and time horizons, human and ecosystem exposure quantification (e.g. exposed organisms and exposure pathways), and toxicity end-points (e.g. carcinogenicity...

  15. Improving absolute gravity estimates by the $L_p$-norm approximation of the ballistic trajectory

    CERN Document Server

    Nagornyi, V D; Araya, A

    2015-01-01

    Iteratively Re-weighted Least Squares (IRLS) was used to simulate the $L_p$-norm approximation of the ballistic trajectory in absolute gravimeters. Two iterations of the IRLS delivered sufficient accuracy of the approximation, with the bias indiscernible in random noise. The simulations were performed for different samplings of the trajectory and different distributions of the data noise. On platykurtic distributions, the simulations found the $L_p$-approximation with $p\approx 3.25$ to yield several times more precise gravity estimates than those obtained with standard least squares ($p=2$). A similar improvement at $p\approx 3.5$ was observed on real data measured under excessive-noise conditions.
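    IRLS realizes the $L_p$ fit by repeatedly solving a weighted least-squares problem with weights $|r_i|^{p-2}$ derived from the current residuals. A sketch on a simulated free-fall trajectory with platykurtic (uniform) noise, where $p>2$ is statistically efficient; the gravimeter-specific processing is not reproduced:

```python
import numpy as np

def irls_lp_fit(A, y, p=3.25, iterations=2, eps=1e-12):
    """L_p-norm fit via Iteratively Re-weighted Least Squares.

    Weighted least squares with weights w = |residual|**(p - 2) converges
    to the L_p minimizer; per the abstract, two iterations suffice.
    """
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)   # L2 start (p = 2)
    for _ in range(iterations):
        r = y - A @ beta
        w = np.abs(r) ** (p - 2) + eps
        Aw = A * w[:, None]                        # rows scaled by weights
        beta = np.linalg.solve(A.T @ Aw, Aw.T @ y) # weighted normal equations
    return beta

# Free-fall trajectory z = z0 + v0*t + 0.5*g*t^2 with uniform noise.
rng = np.random.default_rng(7)
t = np.linspace(0, 0.2, 700)
A = np.column_stack([np.ones_like(t), t, 0.5 * t**2])
z = A @ np.array([0.0, 0.1, 9.81]) + rng.uniform(-1e-6, 1e-6, t.size)
z0, v0, g = irls_lp_fit(A, z)
print(f"g = {g:.6f} m/s^2")
```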

  16. Improved Estimation of Earth Rotation Parameters Using the Adaptive Ridge Regression

    Science.gov (United States)

    Huang, Chengli; Jin, Wenjing

    1998-05-01

    Multicollinearity among regression variables is a common phenomenon in the reduction of astronomical data. The phenomenon of multicollinearity and its diagnostic factors are introduced first. As a remedy, a new method, called adaptive ridge regression (ARR), which is an improved method of choosing the departure constant θ in ridge regression, is suggested and applied to a case in which the Earth orientation parameters (EOP) are determined by lunar laser ranging (LLR). A diagnosis based on the variance inflation factors (VIFs) shows that serious multicollinearity exists among the regression variables. It is shown that the ARR method is effective in reducing the multicollinearity and makes the regression coefficients more stable than those obtained with ordinary least-squares (LS) estimation, especially when the multicollinearity is serious.
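    The VIF diagnostic and the ridge estimator are easy to state; the paper's adaptive choice of the departure constant θ is its contribution and is not reproduced here, so the sketch below simply uses a fixed θ on a deliberately collinear design:

```python
import numpy as np

def vif(X):
    """Variance inflation factor of each column of the design matrix X.

    VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing column j
    on the remaining columns; values well above 10 flag multicollinearity.
    """
    out = []
    for j in range(X.shape[1]):
        others = np.delete(X, j, axis=1)
        coef, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ coef
        r2 = 1 - resid.var() / X[:, j].var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

def ridge(X, y, theta):
    # Ridge estimate (X'X + theta*I)^-1 X'y; theta > 0 trades a small
    # bias for a large variance reduction under multicollinearity.
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + theta * np.eye(k), X.T @ y)

# Nearly collinear design: column 2 is column 1 plus small noise.
rng = np.random.default_rng(8)
x1 = rng.standard_normal(200)
X = np.column_stack([x1, x1 + 0.01 * rng.standard_normal(200)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.standard_normal(200)
print("VIFs: ", vif(X))
print("LS:   ", np.linalg.lstsq(X, y, rcond=None)[0])
print("ridge:", ridge(X, y, theta=0.1))
```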

  17. Improved Estimators of the Mean of a Normal Distribution with a Known Coefficient of Variation

    Directory of Open Access Journals (Sweden)

    Wuttichai Srisodaphol

    2012-01-01

    Full Text Available This paper finds estimators of the mean θ for a normal distribution with mean θ and variance aθ², a>0, θ>0. These estimators are proposed for the case where the coefficient of variation is known. Mean square error (MSE) is the criterion used to evaluate the estimators. The results show that the proposed estimators are preferred in asymptotic comparisons. Moreover, simulation studies show that the estimator based on the jackknife technique is preferred over the other proposed estimators.
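    One classical estimator of this type shrinks the sample mean by n/(n+a), which minimizes MSE among scalar multiples of the sample mean when the variance is known to be aθ². A small simulation illustrating the MSE comparison (the paper's jackknife-based estimator is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(9)
theta, a, n, reps = 2.0, 0.5, 10, 200_000

# Draw reps x n samples from N(theta, a*theta^2).
x = rng.normal(theta, np.sqrt(a) * theta, size=(reps, n))
xbar = x.mean(axis=1)

# Among estimators c * xbar, MSE is minimized at c = n / (n + a)
# when the coefficient of variation (here sqrt(a)) is known.
shrunk = n / (n + a) * xbar

mse = lambda est: np.mean((est - theta) ** 2)
print(f"MSE(sample mean) = {mse(xbar):.5f}")    # ~ a*theta^2/n = 0.200
print(f"MSE(shrunk mean) = {mse(shrunk):.5f}")  # ~ a*theta^2/(n+a), smaller
```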

  18. Estimation of anisotropy parameters for shale based on an improved rock physics model, part 1: theory

    Science.gov (United States)

    Zhang, Feng; Li, Xiang-yang; Qian, Keran

    2017-02-01

    Shale is observed to have strong transverse isotropy due to its complex intrinsic properties on a small scale. An improved rock physics model has been developed to effectively model this intrinsic anisotropy. Several effective medium theories (Backus averaging, differential effective medium theory and self-consistent approximation) are validated and used in different steps of the workflow to simulate the effects of clay minerals, crack-like pores, kerogen and their preferred orientation on the elastic anisotropy. Anisotropic solid clay is constructed by using different clay mineral constituents instead of assuming it to be an equivalent isotropic or transversely isotropic medium. We differentiate between the voids associated with clay and the voids associated with other minerals based on their varied geometries and their different contributions to the anisotropy. The degree of alignment of clay particles, interconnected pore fluid and kerogen has a great influence on the elastic properties of shale. Therefore, in addition to the pore aspect ratio (asp), a new parameter called the lamination index (LI) related to the distribution of clay particle orientation is proposed and needs to be estimated during the modeling. We then present a practical inversion scheme to enable the prediction of anisotropy parameters for both vertical and horizontal well logs by estimating the lamination index and the pore aspect ratio simultaneously. The predicted elastic constants are demonstrated by using the published laboratory measurements of some Greenhorn shale, and they show better accuracy than the estimations in the existing literature. This model takes different rock properties into consideration and is thus generalized for shale formations from different areas. The application of this model to the well logs of some Upper Triassic shale in the Sichuan basin, and the analyzed results, are presented in part 2 of this paper.

  19. Mathematical modeling improves EC50 estimations from classical dose-response curves.

    Science.gov (United States)

    Nyman, Elin; Lindgren, Isa; Lövfors, William; Lundengård, Karin; Cervin, Ida; Sjöström, Theresia Arbring; Altimiras, Jordi; Cedersund, Gunnar

    2015-03-01

    The β-adrenergic response is impaired in failing hearts. When studying β-adrenergic function in vitro, the half-maximal effective concentration (EC50) is an important measure of ligand response. We previously measured the in vitro contraction force response of chicken heart tissue to increasing concentrations of adrenaline, and observed a decreasing response at high concentrations. The classical interpretation of such data is to assume a maximal response before the decrease, and to fit a sigmoid curve to the remaining data to determine EC50. Instead, we have applied a mathematical modeling approach to interpret the full dose-response curve in a new way. The developed model predicts a non-steady state caused by a short resting time between increased concentrations of agonist, which affects the dose-response characterization. Therefore, an improved estimate of EC50 may be calculated using steady-state simulations of the model. The model-based estimation of EC50 is further refined using additional time-resolved data to decrease the uncertainty of the prediction. The resulting model-based EC50 (180-525 nM) is higher than the classically interpreted EC50 (46-191 nM). Mathematical modeling thus makes it possible to re-interpret previously obtained datasets, and to make accurate estimates of EC50 even when steady-state measurements are not experimentally feasible. The mathematical models described here have been submitted to the JWS Online Cellular Systems Modelling Database, and may be accessed at http://jjj.bio.vu.nl/database/nyman. © 2015 FEBS.
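    The classical interpretation that the model-based approach is compared against can be written in a few lines: truncate the curve at its peak and fit a sigmoid. The model-based re-estimation requires the authors' full kinetic model and is not reproduced; the data below are illustrative, not the chicken-heart measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, bottom, top, ec50, n):
    # Four-parameter sigmoid (Hill) dose-response curve.
    return bottom + (top - bottom) / (1 + (ec50 / dose) ** n)

# Illustrative force-response data (arbitrary units): the response rises
# and then declines at high concentrations, as described in the abstract.
dose = np.array([1, 3, 10, 30, 100, 300, 1000, 3000], dtype=float)  # nM
resp = np.array([0.05, 0.10, 0.30, 0.62, 0.88, 0.97, 0.90, 0.78])

# Classical approach: treat the maximum as saturation and fit a sigmoid
# only to the rising part of the curve (up to the peak response).
peak = resp.argmax()
popt, _ = curve_fit(hill, dose[:peak + 1], resp[:peak + 1],
                    p0=(0.0, 1.0, 50.0, 1.0), maxfev=10000)
print(f"classical EC50 = {popt[2]:.0f} nM")
```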

  20. Improving local PCA in pseudo phase space for fetal heart rate estimation from single lead abdominal ECG.

    Science.gov (United States)

    Wei, Zheng; Hongxing, Liu; Jianchun, Cheng

    2011-12-01

    This paper proposes an improved local principal component analysis (LPCA) in pseudo phase space for fetal heart rate estimation from a single-lead abdominal ECG signal. The improved LPCA process can extract both the maternal ECG component and the fetal ECG component from an abdominal signal. The instantaneous fetal heart rate can then be estimated from the extracted fetal ECG waveform. Compared with the classical LPCA procedure and another single-lead-based fetal heart rate estimation method, our improved LPCA method has shown better robustness and efficiency in fetal heart rate estimation, tested with synthetic ECG signals and a real fetal ECG database from PhysioBank. For the real fetal ECG validation dataset of six long-duration recordings (obtained between the 22nd and 40th week of gestation), the average accuracy of the improved LPCA method is 84.1%.

  1. Delay Estimator and Improved Proportionate Multi-Delay Adaptive Filtering Algorithm

    Directory of Open Access Journals (Sweden)

    E. Verteletskaya

    2012-04-01

    Full Text Available This paper pertains to speech and acoustic signal processing, and particularly to the determination of echo path delay and the operation of echo cancellers. To cancel long echoes, the number of weights in a conventional adaptive filter must be large. The length of the adaptive filter directly affects both the degree of accuracy and the convergence speed of the adaptation process. We present a new adaptive structure which is capable of dealing with multiple dispersive echo paths. An adaptive filter according to the present invention includes means for storing an impulse response in memory, the impulse response being indicative of the characteristics of a transmission line. It also includes a delay estimator for detecting ranges of samples within the impulse response having relatively large distributions of echo energy. These ranges of samples are indicative of echoes on the transmission line. An adaptive filter has a plurality of weighted taps, each of the weighted taps having an associated tap weight value. A tap allocation/control circuit establishes the tap weight values in response to said detecting means so that only taps within the regions of relatively large distributions of echo energy are turned on. Thus, the convergence speed and the degree of accuracy of the adaptation process can be improved.

  2. Improved radar data processing algorithms for quantitative rainfall estimation in real time.

    Science.gov (United States)

    Krämer, S; Verworn, H R

    2009-01-01

    This paper describes a new methodology to process C-band radar data for direct use as rainfall input to hydrologic and hydrodynamic models and in real-time control of urban drainage systems. In contrast to the adjustment of radar data with the help of rain gauges, the new approach accounts for the microphysical properties of current rainfall. In a first step, radar data are corrected for attenuation. This phenomenon has been identified as the main cause of the general underestimation of radar rainfall. Systematic variation of the attenuation coefficients within predefined bounds allows robust reflectivity profiling. Secondly, event-specific R-Z relations are applied to the corrected radar reflectivity data in order to generate quantitatively reliable radar rainfall estimates. The results of the methodology are validated against a network of 37 rain gauges located in the Emscher and Lippe river basins. Finally, the relevance of the correction methodology for radar rainfall forecasts is demonstrated. The results clearly show that the new methodology significantly improves radar rainfall estimation and rainfall forecasts. The algorithms are applicable in real time.
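    Conversion from corrected reflectivity to rain rate via an R-Z relation is the last step of the chain and is simple to sketch. The classical Marshall-Palmer pair (a=200, b=1.6) stands in for the event-specific relations derived in the paper, and the attenuation correction below assumes the per-gate specific attenuation is already known, whereas the paper estimates it within bounds:

```python
import numpy as np

def rain_rate(dbz, a=200.0, b=1.6):
    """Convert reflectivity (dBZ) to rain rate (mm/h) via Z = a * R**b."""
    z_linear = 10.0 ** (dbz / 10.0)     # dBZ -> Z in mm^6 m^-3
    return (z_linear / a) ** (1.0 / b)

def correct_attenuation(dbz_measured, k_db_per_km, gate_km=0.5):
    # Add the two-way path-integrated loss back onto each range gate
    # (k is the specific attenuation in dB/km, assumed known here).
    path_loss = 2.0 * np.cumsum(k_db_per_km) * gate_km
    return dbz_measured + path_loss

dbz = np.array([35.0, 38.0, 42.0, 30.0])   # measured along one ray
k = np.array([0.01, 0.02, 0.05, 0.01])     # illustrative attenuation
print(rain_rate(correct_attenuation(dbz, k)))   # mm/h per gate
```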

  3. Consistent Estimates of Tsunami Energy Show Promise for Improved Early Warning

    Science.gov (United States)

    Titov, V.; Song, Y. Tony; Tang, L.; Bernard, E. N.; Bar-Sever, Y.; Wei, Y.

    2016-12-01

    Early tsunami warning critically hinges on rapid determination of the tsunami hazard potential in real-time, before waves inundate critical coastlines. Tsunami energy can quickly characterize the destructive potential of generated waves. Traditional seismic analysis is inadequate to accurately predict a tsunami's energy. Recently, two independent approaches have been proposed to determine tsunami source energy: one inverted from the Deep-ocean Assessment and Reporting of Tsunamis (DART) data during the tsunami propagation, and the other derived from the land-based coastal global positioning system (GPS) during tsunami generation. Here, we focus on assessing these two approaches with data from the March 11, 2011 Japanese tsunami. While the GPS approach takes into consideration the dynamic earthquake process, the DART inversion approach provides the actual tsunami energy estimation of the propagating tsunami waves; both approaches lead to consistent energy scales for previously studied tsunamis. Encouraged by these promising results, we examined a real-time approach to determine tsunami source energy by combining these two methods: first, determine the tsunami source from the globally expanding GPS network immediately after an earthquake for near-field early warnings; and then to refine the tsunami energy estimate from nearby DART measurements for improving forecast accuracy and early cancelations. The combination of these two real-time networks may offer an appealing opportunity for: early determination of the tsunami threat for the purpose of saving more lives, and early cancelation of tsunami warnings to avoid unnecessary false alarms.

  4. An Improved Iterative Fitting Method to Estimate Nocturnal Residual Layer Height

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2016-08-01

    Full Text Available The planetary boundary layer (PBL) is the atmospheric region near the Earth's surface. It is significant for weather forecasting and for the study of air quality and climate. In this study, the tops of nocturnal residual layers, which are what remain of the daytime mixing layer, are estimated by an elastic backscatter Lidar in Wuhan (30.5°N, 114.4°E), a city in Central China. The ideal profile fitting method is widely applied to determine the nocturnal residual layer height (RLH) from Lidar data. However, the method is seriously affected by optically thick layers. Thus, we propose an improved iterative fitting method to eliminate the optically-thick-layer effect on RLH detection using Lidar. Two typical case studies observed by elastic Lidar are presented to demonstrate the theory and advantage of the proposed method. Results of the case analysis indicate that the improved method is more practical and precise than the profile-fitting, gradient, and wavelet covariance transform methods for nocturnal RLH evaluation under low cloud conditions. Long-term observations of RLH performed with the ideal profile fitting and improved methods were carried out in Wuhan from 28 May 2011 to 17 June 2016. Comparisons of Lidar-derived RLHs with the two types of methods verify that the improved solution is practical. Statistical analysis of six years of Lidar signals was conducted to reveal the monthly average values of nocturnal RLH in Wuhan. A clear RLH monthly cycle with a maximum mean height of about 1.8 km above ground level was observed in August, and a minimum height of about 0.7 km was observed in January. The variation in monthly mean RLH displays an obvious quarterly dependence, which coincides with the annual variation in local surface temperature.
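    The ideal profile fitting that the improved method builds on models the backscatter profile as a smooth erf-shaped step and takes the fitted step center as the layer top. A sketch on a synthetic profile, assuming this standard erf-step parameterization; the iterative exclusion of optically thick cloud layers, the paper's contribution, is not reproduced:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def ideal_profile(z, b_m, b_u, z_m, s):
    """Idealized backscatter profile: an erf step from the residual-layer
    value b_m down to the free-air value b_u, centered on the layer top
    z_m, with s controlling the entrainment-zone thickness."""
    return (b_m + b_u) / 2 - (b_m - b_u) / 2 * erf((z - z_m) / s)

# Synthetic nighttime profile with a residual-layer top near 1.2 km.
rng = np.random.default_rng(10)
z = np.linspace(0.2, 3.0, 280)                    # km above ground level
true = ideal_profile(z, b_m=1.0, b_u=0.2, z_m=1.2, s=0.15)
signal = true + 0.03 * rng.standard_normal(z.size)

popt, _ = curve_fit(ideal_profile, z, signal, p0=(1.0, 0.1, 1.0, 0.2))
print(f"estimated RLH = {popt[2]:.2f} km")
```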

  5. Improving root-zone soil moisture estimations using dynamic root growth and crop phenology

    Science.gov (United States)

    Hashemian, Minoo; Ryu, Dongryeol; Crow, Wade T.; Kustas, William P.

    2015-12-01

    Water Energy Balance (WEB) Soil Vegetation Atmosphere Transfer (SVAT) modelling can be used to estimate soil moisture by forcing the model with observed data such as precipitation and solar radiation. Recently, an innovative approach that assimilates remotely sensed thermal infrared (TIR) observations into WEB-SVAT to improve the results has been proposed. However, the efficacy of the model-observation integration relies on the model's realistic representation of soil water processes. Here, we explore methods to improve the soil water processes of a simple WEB-SVAT model by adopting and incorporating an exponential root water uptake model with water stress compensation and establishing a more appropriate soil-biophysical linkage between root-zone moisture content, above-ground states and biophysical indices. The existing WEB-SVAT model is extended to a new Multi-layer WEB-SVAT with Dynamic Root distribution (MWSDR) that has five soil layers. Impacts of plant root depth variations, growth stages and the phenological cycle of the vegetation on transpiration are considered in the development. Hydrometeorological and biogeophysical measurements collected from two experimental sites, one in Dookie, Victoria, Australia and the other in Ponca, Oklahoma, USA, are used to validate the new model. Results demonstrate that MWSDR provides improved soil moisture, transpiration and evaporation predictions which, in turn, can provide an improved physical basis for assimilating remotely sensed data into the model. Results also show the importance of having an adequate representation of vegetation-related transpiration processes for an appropriate simulation of water transfer in a complicated system of soil, plants and atmosphere.

  6. A model to estimate the cost effectiveness of indoor environment improvements in office work

    Energy Technology Data Exchange (ETDEWEB)

    Seppanen, Olli; Fisk, William J.

    2004-06-01

    Deteriorated indoor climate is commonly related to increases in sick building syndrome symptoms, respiratory illnesses, sick leave, reduced comfort and losses in productivity. The cost of deteriorated indoor climate for society is high. Some calculations show that this cost is higher than the heating energy costs of the same buildings. Building-level calculations have also shown that many measures taken to improve indoor air quality and climate are cost-effective when the potential monetary savings resulting from an improved indoor climate are included as benefits gained. As an initial step towards systemizing these building-level calculations we have developed a conceptual model to estimate the cost-effectiveness of various measures. The model shows the links between improvements in the indoor environment and the following potential financial benefits: reduced medical care cost, reduced sick leave, better performance of work, lower turnover of employees, and lower cost of building maintenance due to fewer complaints about indoor air quality and climate. The pathways to these potential benefits from changes in building technology and practices go via several human responses to the indoor environment, such as infectious diseases, allergies and asthma, sick building syndrome symptoms, perceived air quality, and the thermal environment. The model also includes the annual cost of investments, operation costs, and the cost savings of improved indoor climate. The conceptual model illustrates how the various factors are linked to each other. SBS symptoms are probably the most commonly assessed health responses in IEQ studies and have been linked to several characteristics of buildings and IEQ. While the available evidence indicates that SBS symptoms can affect these outcomes and suggests that such a linkage exists, at present we cannot quantify the relationships sufficiently for cost-benefit modeling. New research and analyses of existing data to quantify these financial relationships are therefore needed.

  7. Improved frequency and time of arrival estimation methods in a search and rescue system based on MEO satellites

    Science.gov (United States)

    Lin, Mo; Li, Rui; Li, Jilin

    2007-11-01

    This paper addresses several key points in Medium-altitude Earth Orbit Local User Terminals (MEOLUT) for the Cospas-Sarsat Medium-altitude Earth Orbit Search and Rescue (MEOSAR) system, including frequency of arrival (FOA) and time of arrival (TOA) estimation algorithms and the associated signal processing techniques. Based on an analytical description of the distress beacon, improved TOA and FOA estimation methods are proposed. The improved FOA estimation method integrates bi-FOA measurement, the FFT method, the Rife algorithm and a Gaussian window to improve the accuracy of FOA estimation. In addition, the TPD algorithm and signal correlation techniques are used to achieve high-performance TOA estimation. The proposed FOA/TOA methods solve the parameter estimation problem under quite poor carrier-to-noise density ratios (C/N0). A number of simulations demonstrate the improvements: FOA and TOA estimation errors are lower than 0.1 Hz and 11 μs, respectively, meeting the demanding system requirements for MEOSAR MEOLUTs.
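
    The FFT-plus-Rife refinement mentioned here is a standard two-bin interpolation around the spectral peak. The sketch below is a generic rectangular-window version under assumed conditions (a single dominant tone); the record's pipeline additionally applies a Gaussian window, which would change the interpolation constants, so this is illustrative rather than the MEOLUT processing chain.

```python
# Sketch: coarse FFT peak + Rife two-bin interpolation for frequency estimation.
import numpy as np

def rife_foa(x, fs):
    """Estimate the dominant frequency [Hz] of the 1-D array x sampled at fs."""
    n = len(x)
    spec = np.abs(np.fft.rfft(x))          # rectangular window assumed
    k = int(np.argmax(spec))
    if k == 0 or k == len(spec) - 1:
        return k * fs / n                  # no neighbour to interpolate towards
    # Interpolate towards the larger of the two adjacent bins.
    if spec[k + 1] >= spec[k - 1]:
        delta = spec[k + 1] / (spec[k] + spec[k + 1])
        return (k + delta) * fs / n
    delta = spec[k - 1] / (spec[k] + spec[k - 1])
    return (k - delta) * fs / n
```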

  8. Improvement of Epicentral Direction Estimation by P-wave Polarization Analysis

    Science.gov (United States)

    Oshima, Mitsutaka

    2016-04-01

    Polarization analysis has been used to analyze the polarization characteristics of waves and has been developed in various fields, for example, electromagnetics, optics, and seismology. In seismology, polarization analysis is used to discriminate seismic phases or to enhance specific phases (e.g., Flinn, 1965)[1], by taking advantage of the difference in polarization characteristics of seismic phases. In earthquake early warning, polarization analysis is used to estimate the epicentral direction using a single station, based on the polarization direction of the P-wave portion of seismic records (e.g., Smart and Sproules (1981)[2], Noda et al. (2012)[3]). Therefore, improvement of the Estimation of Epicentral Direction by Polarization Analysis (EEDPA) directly enhances the accuracy and promptness of earthquake early warning. In this study, the author tried to improve EEDPA using seismic records of events that occurred around Japan from 2003 to 2013. The author selected events that satisfy the following conditions: MJMA larger than 6.5 (JMA: Japan Meteorological Agency), and seismic records available at at least 3 stations within 300 km epicentral distance. Seismic records obtained at stations with no information on seismometer orientation were excluded, so that a precise and quantitative evaluation of the accuracy of EEDPA becomes possible. In the analysis, polarization was calculated by the method of Vidale (1986)[4], which extended the method proposed by Montalbetti and Kanasewich (1970)[5] to use the analytical signal. As a result of the analysis, the author found that the accuracy of EEDPA improves by about 15% if velocity records, rather than displacement records, are used, contrary to the author's expectation. Use of velocity records enables a reduction of CPU time in the integration of seismic records and an improvement in the promptness of EEDPA, although this analysis is still rough and further scrutiny is essential. At this moment, the author used seismic records that were obtained by simply integrating acceleration

  9. Optimization of {sup 210}Po estimation in environmental samples using an improved deposition unit

    Energy Technology Data Exchange (ETDEWEB)

    Dubey, Jay Singh; Sahoo, Sunil Kumar; Mohapatra, Swagatika; Lenka, Pradyumna; Patra, Aditi Chakravarty; Thakur, Virender Kumar; Ravi, Pazhayath Mana; Tripathi, Raj Mangal [Bhabha Atomic Research Centre, Trombay, Mumbai (India). Health Physics Div.

    2015-06-01

    Measurement of {sup 210}Po in environmental matrices is important because of its very high specific activity; it is present in every compartment of the environment as a decay product of uranium ({sup 238}U), and it is accumulative and highly toxic in nature. The conventional method for {sup 210}Po estimation is auto-deposition onto both sides of a silver disc followed by alpha spectrometry of both sides. A new deposition unit, with a holder for the silver disc and a magnetic stirring bar, has been designed and fabricated for {sup 210}Po estimation in which only one side is counted. In the conventional method the total activity is distributed over both sides of the silver disc and more counting time is required, whereas in the improved deposition unit only one side contains all the activity, so a single counting suffices and gives better statistical significance. The same has been observed in spike recovery and water sample assessment. The tracer recovery in the conventional method was 72%-88% and 70%-85%, whereas for the new deposition unit the recovery is 87%-99% and 78%-94% for the spike recovery study and environmental samples, respectively. Certified tracers were analysed to assure the reliability of the method, and the results were in good agreement with the recommended values with a relative error <20%. The MDA of the method is 1.5 mBq for the estimation of {sup 210}Po at the 3σ confidence level, 86400 s counting time and 100 ml of water sample, taking the detector efficiency and chemical yield into consideration. The results obtained from the two methods were compared statistically: a χ{sup 2} test, repeatability parameters, relative bias measurement and a linearity test were performed for both methods. The difference between the two methods in terms of linearity is 0.2%. From the χ{sup 2} test it can be concluded that the measured data from the two methods fall within the 99% confidence interval. The modified deposition unit enhances the statistical significance and reduces the required counting time.
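
    The quoted MDA follows the usual Currie-style construction from background counts, counting time, detector efficiency and chemical yield. A minimal sketch, assuming the common 2.71 + 4.65√B detection-limit form (the record quotes a 3σ level, so the exact constants in the original may differ); all names are illustrative.

```python
# Currie-type MDA sketch (assumed form; constants per the common 95% formula).
import math

def mda_bq(bkg_counts, t_s, eff, chem_yield):
    """MDA in Bq from background counts, counting time [s], detector
    efficiency and chemical yield; divide by sample volume for Bq/L."""
    l_d = 2.71 + 4.65 * math.sqrt(bkg_counts)   # detection limit in counts
    return l_d / (t_s * eff * chem_yield)

# Example: mda_bq(10, 86400, 0.25, 0.9) is on the order of 1 mBq,
# the magnitude quoted in the record.
```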

  10. A novel method for estimating the track-soil parameters based on Kalman and improved strong tracking filters.

    Science.gov (United States)

    Yao, Yu; Cheng, Kai; Zhou, Zhi-Jie; Zhang, Bang-Cheng; Dong, Chao; Zheng, Sen

    2015-11-01

    Tracked vehicles are widely used for exploring unknown environments and in military applications. In current methods for adapting to soil conditions, soil parameters must be given in advance, and traction performance cannot always be guaranteed on soft soil. To solve this problem, it is essential to estimate track-soil parameters in real time. Therefore, a detailed mathematical model is proposed for the first time. Furthermore, a novel algorithm composed of a Kalman filter (KF) and an improved strong tracking filter (ISTF), named KF-ISTF, is developed for online track-soil estimation. In this method, the KF is used to estimate slip parameters, and the ISTF is used to estimate motion states. The key soil parameters can then be estimated using a suitable soil model. The experimental results show that, equipped with the estimation algorithm, the proposed model can be used to estimate the track-soil parameters and keep the traction performance matched to the soil conditions.
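
    For orientation, a strong tracking filter is essentially a Kalman filter whose predicted covariance is inflated by a fading factor computed from innovation statistics, which keeps it responsive to abrupt parameter changes. The sketch below follows the commonly cited suboptimal-fading-factor form under assumed linear dynamics; all names are illustrative and it makes no claim to match the paper's KF-ISTF split.

```python
# Simplified strong-tracking (fading-factor) Kalman filter step.
import numpy as np

def stf_step(x, P, A, H, Q, R, z, V, rho=0.95, beta=1.0):
    """x, P: state and covariance; A, H: dynamics and observation matrices;
    z: measurement; V: running innovation covariance (initialize with the
    first innovation outer product). Returns updated x, P, V."""
    x_pred = A @ x
    gamma = z - H @ x_pred                         # innovation
    V = (rho * V + np.outer(gamma, gamma)) / (1.0 + rho)
    N = V - H @ Q @ H.T - beta * R
    M = H @ A @ P @ A.T @ H.T
    lam = max(1.0, np.trace(N) / np.trace(M))      # fading factor >= 1
    P_pred = lam * (A @ P @ A.T) + Q               # inflated prediction
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ gamma
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new, V
```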

  11. An Improved Distance and Mass Estimate for Sgr A* from a Multistar Orbit Analysis

    CERN Document Server

    Boehle, A; Schödel, R; Meyer, L; Yelda, S; Albers, S; Martinez, G D; Becklin, E E; Do, T; Lu, J R; Matthews, K; Morris, M R; Sitarski, B; Witzel, G

    2016-01-01

    We present new, more precise measurements of the mass and distance of our Galaxy's central supermassive black hole, Sgr A*. These results stem from a new analysis that more than doubles the time baseline for astrometry of faint stars orbiting Sgr A*, combining two decades of speckle imaging and adaptive optics data. Specifically, we improve our analysis of the speckle images by using information about a star's orbit from the deep adaptive optics data (2005 - 2013) to inform the search for the star in the speckle years (1995 - 2005). When this new analysis technique is combined with the first complete re-reduction of Keck Galactic Center speckle images using speckle holography, we are able to track the short-period star S0-38 (K-band magnitude = 17, orbital period = 19 years) through the speckle years. We use the kinematic measurements from speckle holography and adaptive optics to estimate the orbits of S0-38 and S0-2 and thereby improve our constraints on the mass ($M_{bh}$) and distance ($R_o$) of Sgr A*: $...

  12. Modified ensemble Kalman filter for nuclear accident atmospheric dispersion: prediction improved and source estimated.

    Science.gov (United States)

    Zhang, X L; Su, G F; Yuan, H Y; Chen, J G; Huang, Q Y

    2014-09-15

    Atmospheric dispersion models play an important role in nuclear power plant accident management. A reliable estimation of the radioactive material distribution at short range (about 50 km) is urgently needed for population sheltering and evacuation planning. However, the meteorological data and the source term, which greatly influence the accuracy of atmospheric dispersion models, are usually poorly known in the early phase of an emergency. In this study, a modified ensemble Kalman filter data assimilation method in conjunction with a Lagrangian puff model is proposed to simultaneously improve the model prediction and reconstruct the source terms for short-range atmospheric dispersion using off-site environmental monitoring data. Four main uncertainty parameters are considered: source release rate, plume rise height, wind speed and wind direction. Twin experiments show that the method effectively improves the predicted concentration distribution, and the temporal profiles of source release rate and plume rise height are also successfully reconstructed. Moreover, the time lag in the response of the ensemble Kalman filter is shortened. The method proposed here can be a useful tool not only in nuclear power plant accident emergency management but also in other similar situations where hazardous material is released into the atmosphere.
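
    For reference, a stochastic ensemble Kalman filter analysis step has the generic shape below; augmenting the state vector with source-term parameters (release rate, plume rise height) lets the same update reconstruct them. This is a bare sketch under assumed dimensions, not the paper's modified scheme.

```python
# Stochastic EnKF analysis step (generic sketch).
import numpy as np

def enkf_update(X, y, obs_op, R, seed=0):
    """X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observations;
    obs_op: maps the ensemble matrix to observation space, (n_obs, n_ens);
    R: (n_obs, n_obs) observation-error covariance."""
    rng = np.random.default_rng(seed)
    n_ens = X.shape[1]
    Y = obs_op(X)
    Xp = X - X.mean(axis=1, keepdims=True)         # state anomalies
    Yp = Y - Y.mean(axis=1, keepdims=True)         # predicted-obs anomalies
    Pyy = Yp @ Yp.T / (n_ens - 1) + R
    Pxy = Xp @ Yp.T / (n_ens - 1)
    K = Pxy @ np.linalg.inv(Pyy)                   # Kalman gain
    y_pert = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return X + K @ (y_pert - Y)                    # analysis ensemble
```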

  13. A novel melatonin agonist Neu-P11 facilitates memory performance and improves cognitive impairment in a rat model of Alzheimer's disease.

    Science.gov (United States)

    He, Pingping; Ouyang, Xinping; Zhou, Shouhong; Yin, Weidong; Tang, Chaoke; Laudon, Moshe; Tian, Shaowen

    2013-06-01

    Previous studies have shown that melatonin is implicated in modulating learning and memory processing. Melatonin also exerts neuroprotective activities against Aβ-induced injury in vitro and in vivo. Neu-P11 (piromelatine, N-(2-(5-methoxy-1H-indol-3-yl)ethyl)-4-oxo-4H-pyran-2-carboxamide) is a novel melatonin (MT1/MT2) receptor agonist and a serotonin 5-HT1A/1D receptor agonist recently developed for the treatment of insomnia. In the present study we first investigated whether Neu-P11 and melatonin enhance memory performance in the novel object recognition (NOR) task in rats, and then assessed whether Neu-P11 and melatonin improve neuronal and cognitive impairment in a rat model of Alzheimer's disease (AD) induced by intrahippocampal Aβ(1-42) injection. The results showed that a single morning or afternoon administration of Neu-P11 enhanced object recognition memory measured 4 or 24 h after training. Melatonin facilitated memory only when administered in the afternoon. Further results showed that intrahippocampal Aβ(1-42) injection resulted in hippocampal cellular loss, as well as decreased learning ability and memory in the Y maze and NOR tasks in rats. Neu-P11, but not melatonin, attenuated the cellular loss and cognitive impairment in the rat AD model. The current data suggest that Neu-P11 may serve as a novel agent for the treatment of AD.

  14. Improved ESPRIT Method for Joint Direction-of-Arrival and Frequency Estimation Using Multiple-Delay Output

    Directory of Open Access Journals (Sweden)

    Wang Xudong

    2012-01-01

    Full Text Available An automatically paired joint direction-of-arrival (DOA) and frequency estimation method is presented to overcome the unsatisfactory performance of the estimation of signal parameters via rotational invariance techniques (ESPRIT)-like algorithm of Wang (2010), which requires an additional pairing step. By using the multiple-delay output of a uniform linear antenna array (ULA), the proposed algorithm can jointly estimate angles and frequencies with an improved ESPRIT. Compared with Wang's ESPRIT algorithm, the angle estimation performance of the proposed algorithm is greatly improved, while its frequency estimation performance is the same as that of Wang's algorithm. Furthermore, the proposed algorithm obtains automatically paired DOA and frequency parameters at a computational complexity comparable to that of Wang's ESPRIT algorithm, and it also works well for nonuniform linear arrays. The useful behavior of the proposed algorithm is verified by simulations.

  15. The use of a policy dialogue to facilitate evidence-informed policy development for improved access to care: the case of the Winnipeg Central Intake Service (WCIS).

    Science.gov (United States)

    Damani, Zaheed; MacKean, Gail; Bohm, Eric; DeMone, Brie; Wright, Brock; Noseworthy, Tom; Holroyd-Leduc, Jayna; Marshall, Deborah A

    2016-10-18

    Policy dialogues are critical for developing responsive, effective, sustainable, evidence-informed policy. Our multidisciplinary team, including researchers, physicians and senior decision-makers, comprehensively evaluated The Winnipeg Central Intake Service, a single-entry model in Winnipeg, Manitoba, to improve patient access to hip/knee replacement surgery. We used the evaluation findings to develop five evidence-informed policy directions to help improve access to scheduled clinical services across Manitoba. Using guiding principles of public participation processes, we hosted a policy roundtable meeting to engage stakeholders and use their input to refine the policy directions. Here, we report on the use and input of a policy roundtable meeting and its role in contributing to the development of evidence-informed policy. Our evidence-informed policy directions focused on formal measurement/monitoring of quality, central intake as a preferred model for service delivery, provincial scope, transparent processes/performance indicators, and patient choice of provider. We held a policy roundtable meeting and used outcomes of facilitated discussions to refine these directions. Individuals from our team and six stakeholder groups across Manitoba participated (n = 44), including patients, family physicians, orthopaedic surgeons, surgical office assistants, Winnipeg Central Intake team, and administrators/managers. We developed evaluation forms to assess the meeting process, and collected decision-maker partners' perspectives on the value of the policy roundtable meeting and use of policy directions to improve access to scheduled clinical services after the meeting, and again 15 months later. We analyzed roundtable and evaluation data using thematic analysis to identify key themes. Four key findings emerged. First, participants supported all policy directions, with revisions and key implementation considerations identified. Second, participants felt the policy roundtable

  16. Developing Improved Water Velocity and Flux Estimation from AUVs - Results From Recent ASTEP Field Programs

    Science.gov (United States)

    Kinsey, J. C.; Yoerger, D. R.; Camilli, R.; German, C. R.

    2010-12-01

    Water velocity measurements are crucial to quantifying fluxes and better understanding water as a fundamental transport mechanism for marine chemical and biological processes. The importance of flux to understanding these processes makes it a crucial component of astrobiological exploration to moons possessing large bodies of water, such as Europa. Present technology allows us to obtain submerged water velocity measurements from stationary platforms; rarer are measurements from submerged vehicles, which possess the ability to autonomously survey tens of kilometers over extended periods. Improving this capability would also allow us to obtain co-registered water velocity and other sensor data (e.g., mass spectrometry, temperature, oxygen) and significantly enhance our ability to estimate fluxes. We report results from four recent expeditions in which we measured water velocities from autonomous underwater vehicles (AUVs) to help quantify flux in three different oceanographic contexts: hydrothermal vent plumes; an oil spill cruise responding to the 2010 Deepwater Horizon blowout; and two expeditions investigating naturally occurring methane seeps. On all of these cruises, we directly measured the water velocities with an acoustic Doppler current profiler (ADCP) mounted on the AUV. Vehicle motion was corrected for using bottom-lock Doppler tracks when available and, in the absence of bottom-lock, estimates of vehicle velocity based on dynamic models. In addition, on the methane seep cruises, we explored the potential of using acoustic mapping sonars, such as multi-beam and sub-bottom profiling systems, to localize plumes and indirectly quantify flux. Data obtained on these expeditions enhanced our scientific investigations and provide a basis for the future development of algorithms for autonomously processing, identifying, and classifying water velocity and flux measurements. Such technology will be crucial in future astrobiology missions where highly constrained

  17. Improved barometric and loading efficiency estimates using packers in monitoring wells

    Science.gov (United States)

    Cook, Scott B.; Timms, Wendy A.; Kelly, Bryce F. J.; Barbour, S. Lee

    2017-08-01

    Measurement of barometric efficiency (BE) from open monitoring wells or loading efficiency (LE) from formation pore pressures provides valuable information about the hydraulic properties and confinement of a formation. Drained compressibility (α) can be calculated from LE (or BE) in confined and semi-confined formations and used to calculate specific storage (Ss). Ss and α are important for predicting the effects of groundwater extraction and therefore for sustainable extraction management. However, in low hydraulic conductivity (K) formations or large diameter monitoring wells, time lags caused by well storage may be so long that BE cannot be properly assessed in open monitoring wells in confined or unconfined settings. This study demonstrates the use of packers to reduce monitoring-well time lags and enable reliable assessments of LE. In one example from a confined, high-K formation, estimates of BE in the open monitoring well were in good agreement with shut-in LE estimates. In a second example, from a low-K confining clay layer, BE could not be adequately assessed in the open monitoring well due to time lag. Sealing the monitoring well with a packer reduced the time lag sufficiently that a reliable assessment of LE could be made from a 24-day monitoring period. The shut-in response confirmed confined conditions at the well screen and provided confidence in the assessment of hydraulic parameters. A short (time-lag-dependent) period of high-frequency shut-in monitoring can therefore enhance understanding of hydrogeological systems and potentially provide hydraulic parameters to improve conceptual/numerical groundwater models.
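
    Barometric efficiency itself is commonly taken as the slope of the water-level response to barometric-pressure changes, with both series expressed in equivalent head. A minimal sketch, with sign convention and names assumed:

```python
# BE from paired increments of well head and barometric pressure (sketch).
import numpy as np

def barometric_efficiency(d_head, d_baro):
    """BE = -d(head)/d(baro) under the convention that water level in an
    open well falls as barometric pressure rises; inputs are 1-D arrays of
    increments in equivalent head units."""
    return -np.polyfit(d_baro, d_head, 1)[0]

# For a confined formation, loading efficiency is often taken as LE = 1 - BE.
```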

  19. An improved Q estimation approach: the weighted centroid frequency shift method

    Science.gov (United States)

    Li, Jingnan; Wang, Shangxu; Yang, Dengfeng; Dong, Chunhui; Tao, Yonghui; Zhou, Yatao

    2016-06-01

    Seismic wave propagation in subsurface media suffers from absorption, which can be quantified by the quality factor Q. Accurate estimation of the Q factor is of great importance for the resolution enhancement of seismic data, precise imaging and interpretation, and reservoir prediction and characterization. The centroid frequency shift method (CFS) is currently one of the most commonly used Q estimation methods. However, for seismic data that contain noise, the accuracy and stability of Q extracted using CFS depend on the choice of frequency band. In order to reduce the influence of the frequency band choice and obtain Q with greater precision and robustness, we present an improved CFS Q measurement approach, the weighted CFS method (WCFS), which incorporates a Gaussian weighting coefficient into the calculation procedure of the conventional CFS. The basic idea is to enhance the proportion of advantageous frequencies in the amplitude spectrum and reduce the weight of disadvantageous frequencies. In this novel method, we first construct a Gaussian function using the centroid frequency and variance of the reference wavelet. Then we employ it as the weighting coefficient for the amplitude spectrum of the original signal. Finally, the conventional CFS is applied to the weighted amplitude spectrum to extract the Q factor. Numerical tests on noise-free synthetic data demonstrate that the WCFS is feasible and efficient, and produces more accurate results than the conventional CFS. Tests on noisy synthetic data indicate that the new method has better anti-noise capability than the CFS. The application to field vertical seismic profile (VSP) data further demonstrates its validity.
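
    A sketch of the weighting idea, assuming the Quan & Harris (1997)-style centroid-shift relation Q = pi * t * sigma_r^2 / (f_r - f_o). The Gaussian weight is built from the reference wavelet's spectral centroid and variance, as the abstract describes, but the details (normalization, band limits) are guesses rather than the paper's exact procedure.

```python
# Weighted centroid-frequency-shift Q estimation (illustrative sketch).
import numpy as np

def centroid(f, A):
    return np.sum(f * A) / np.sum(A)

def variance(f, A):
    fc = centroid(f, A)
    return np.sum((f - fc) ** 2 * A) / np.sum(A)

def q_wcfs(f, A_ref, A_obs, dt):
    """f: frequency axis; A_ref/A_obs: reference and observed amplitude
    spectra; dt: travel time between the two measurement levels."""
    fc_r, var_r = centroid(f, A_ref), variance(f, A_ref)
    w = np.exp(-(f - fc_r) ** 2 / (2.0 * var_r))   # Gaussian weight
    Ar, Ao = A_ref * w, A_obs * w                   # weighted spectra
    fc_rw, var_rw = centroid(f, Ar), variance(f, Ar)
    fc_ow = centroid(f, Ao)
    return np.pi * dt * var_rw / (fc_rw - fc_ow)    # conventional CFS formula
```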

  20. Improving terrestrial evaporation estimates over continental Australia through assimilation of SMOS soil moisture

    Science.gov (United States)

    Martens, B.; Miralles, D.; Lievens, H.; Fernández-Prieto, D.; Verhoest, N. E. C.

    2016-06-01

    Terrestrial evaporation is an essential variable in the climate system that links the water, energy and carbon cycles over land. Despite this crucial importance, it remains one of the most uncertain components of the hydrological cycle, mainly due to known difficulties in modelling the constraints imposed by land water availability on terrestrial evaporation. The main objective of this study is to assimilate satellite soil moisture observations from the Soil Moisture and Ocean Salinity (SMOS) mission into an existing evaporation model. Our over-arching goal is to find an optimal use of satellite soil moisture that can help to improve our understanding of evaporation at continental scales. To this end, the Global Land Evaporation Amsterdam Model (GLEAM) is used to simulate evaporation fields over continental Australia for the period September 2010-December 2013. SMOS soil moisture observations are assimilated using a Newtonian nudging algorithm in a series of experiments. Model estimates of surface soil moisture and evaporation are validated against soil moisture probe and eddy-covariance measurements, respectively. Finally, an analogous experiment in which Advanced Microwave Scanning Radiometer (AMSR-E) soil moisture is assimilated (instead of SMOS) allows a relative assessment of the quality of both satellite soil moisture products. Results indicate that the modelled soil moisture from GLEAM can be improved through the assimilation of SMOS soil moisture: the average correlation coefficient between in situ measurements and the modelled soil moisture over the complete sample of stations increased from 0.68 to 0.71, and a statistically significant increase in the correlations was achieved for 17 out of the 25 individual stations. Our results also suggest a higher accuracy of the ascending SMOS data compared to the descending data, and an overall higher quality of SMOS compared to AMSR-E retrievals over Australia. On the other hand, the effect of soil moisture data
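
    Newtonian nudging itself reduces to relaxing the model state toward the observation with a gain factor. A one-line sketch (the gain value, quality control and observation-error weighting used in the study are omitted; names are illustrative):

```python
def nudge(theta_model, theta_obs, gain=0.3):
    """Relax modelled surface soil moisture toward a satellite retrieval.
    gain in (0, 1]; the value here is purely illustrative."""
    return theta_model + gain * (theta_obs - theta_model)
```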

  1. Better estimation of protein-DNA interaction parameters improve prediction of functional sites

    Directory of Open Access Journals (Sweden)

    O'Flanagan Ruadhan A

    2008-12-01

    Full Text Available Abstract Background Characterizing transcription factor binding motifs is a common bioinformatics task. For transcription factors with variable binding sites, we need many suboptimal binding sites in our training dataset to get accurate estimates of the free energy penalties for deviating from the consensus DNA sequence. One procedure to do that involves a modified SELEX (Systematic Evolution of Ligands by Exponential Enrichment) method designed to produce many such sequences. Results We analyzed low stringency SELEX data for the E. coli Catabolite Activator Protein (CAP), and we show here that appropriate quantitative analysis improves our ability to predict in vitro affinity. To obtain the large number of sequences required for this analysis we used the SELEX SAGE protocol developed by Roulet et al. The sequences obtained were subjected to bioinformatic analysis. The resulting bioinformatic model characterizes the sequence specificity of the protein more accurately than sequence specificities predicted from previous analyses using only the few known binding sites available in the literature. The consequences of this increase in accuracy for the prediction of in vivo binding sites (and especially functional ones) in the E. coli genome are also discussed. We measured the dissociation constants of several putative CAP binding sites by EMSA (Electrophoretic Mobility Shift Assay) and compared the affinities to the bioinformatics scores provided by methods like the weight matrix method and QPMEME (Quadratic Programming Method of Energy Matrix Estimation) trained on known binding sites as well as on the new sites from the SELEX SAGE data. We also checked predicted genome sites for conservation in the related species S. typhimurium. We found that bioinformatics scores based on SELEX SAGE data do better in terms of prediction of physical binding energies as well as in detecting functional sites. Conclusion We think that training binding site detection
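
    As background for the weight-matrix scoring mentioned above, a generic log-odds position weight matrix looks like the sketch below (uniform background and pseudocounts assumed). This is not QPMEME, which instead fits an energy matrix by quadratic programming.

```python
# Log-odds position weight matrix from aligned binding sites (sketch).
import numpy as np

BASES = "ACGT"

def weight_matrix(sites, pseudo=0.5):
    """sites: equal-length DNA strings; returns a (4, L) log2-odds matrix
    against a uniform background."""
    L = len(sites[0])
    counts = np.full((4, L), pseudo)
    for s in sites:
        for j, b in enumerate(s):
            counts[BASES.index(b), j] += 1
    freqs = counts / counts.sum(axis=0)
    return np.log2(freqs / 0.25)

def score(pwm, seq):
    """Additive log-odds score of a candidate site."""
    return sum(pwm[BASES.index(b), j] for j, b in enumerate(seq))
```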

  2. Improved phase arrival estimate and location for local earthquakes in South Korea

    Science.gov (United States)

    Morton, E. A.; Rowe, C. A.; Begnaud, M. L.

    2012-12-01

    The Korean Institute of Geoscience and Mineral Resources (KIGAM) and the Korean Meteorological Agency (KMA) regularly report local (distance travel-time information for events within the KIGAM and KMA networks, and also recorded by some regional stations. Toward that end, we are using a combination of manual phase identification and arrival-time picking with waveform cross-correlation to cluster events that have occurred in close proximity to one another, which allows for improved phase identification by comparing the highly correlated waveforms. We cross-correlate the known events with one another at 5 seismic stations and cluster events that correlate above a correlation coefficient threshold of 0.7, which reveals a few clusters containing a few events each. The small number of repeating events suggests that the online catalogs have had mining and quarry blasts removed before publication, as these can contribute significantly to repeating seismic sources in relatively aseismic regions such as South Korea. The dispersed source locations in our catalog, however, are ideal for seismic velocity modeling because the dense seismic station arrangement provides superior sampling and favorable event-to-station ray path coverage. Following careful manual phase picking on 104 events chosen to provide adequate ray coverage, we re-locate the events to obtain improved source coordinates. The re-located events are used with Thurber's Simul2000 pseudo-bending local tomography code to estimate the crustal structure of the Korean Peninsula, which is an important contribution to ongoing calibration for events of interest in the region.
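
    The clustering step described here pairs events by peak normalized cross-correlation. A bare-bones sketch with a greedy single-link rule and the 0.7 threshold from the abstract (multi-station stacking and channel logic omitted; names are illustrative):

```python
# Greedy waveform-similarity clustering (sketch).
import numpy as np

def xcorr_max(a, b):
    """Peak normalized cross-correlation of two equal-length 1-D traces."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return np.correlate(a, b, mode="full").max()

def cluster_events(waveforms, cc_min=0.7):
    """Link each event to the first cluster whose seed correlates >= cc_min."""
    clusters = []
    for i, w in enumerate(waveforms):
        for c in clusters:
            if xcorr_max(w, waveforms[c[0]]) >= cc_min:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters
```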

  3. Dual-hemisphere tDCS facilitates greater improvements for healthy subjects' non-dominant hand compared to uni-hemisphere stimulation

    Directory of Open Access Journals (Sweden)

    Cerruti Carlo

    2008-10-01

    Full Text Available Abstract Background Transcranial direct current stimulation (tDCS) is a non-invasive technique that has been found to modulate the excitability of neurons in the brain. The polarity of the current applied to the scalp determines the effects of tDCS on the underlying tissue: anodal tDCS increases excitability, whereas cathodal tDCS decreases excitability. Research has shown that applying anodal tDCS to the non-dominant motor cortex can improve motor performance for the non-dominant hand, presumably by means of changes in synaptic plasticity between neurons. Our previous studies also suggest that applying cathodal tDCS over the dominant motor cortex can improve performance for the non-dominant hand; this effect may result from modulating inhibitory projections (interhemispheric inhibition) between the motor cortices of the two hemispheres. We hypothesized that simultaneously applying cathodal tDCS over the dominant motor cortex and anodal tDCS over the non-dominant motor cortex would have a greater effect on finger sequence performance for the non-dominant hand, compared to stimulating only the non-dominant motor cortex. Sixteen right-handed participants underwent three stimulation conditions: 1) dual-hemisphere, with anodal tDCS over the non-dominant motor cortex and cathodal tDCS over the dominant motor cortex; 2) uni-hemisphere, with anodal tDCS over the non-dominant motor cortex; and 3) sham tDCS. Participants performed a finger-sequencing task with the non-dominant hand before and after each stimulation. The dependent variable was the percentage of change in performance, comparing pre- and post-tDCS scores. Results A repeated measures ANOVA yielded a significant effect of tDCS condition (F(2,30) = 4.468, p = .037). Post-hoc analyses revealed that dual-hemisphere stimulation improved performance significantly more than both uni-hemisphere (p = .021) and sham stimulation (p = .041). Conclusion We propose that simultaneously applying cathodal t

  4. Using satellite-based evapotranspiration estimates to improve the structure of a simple conceptual rainfall-runoff model

    Science.gov (United States)

    Roy, Tirthankar; Gupta, Hoshin V.; Serrat-Capdevila, Aleix; Valdes, Juan B.

    2017-02-01

    Daily, quasi-global (50° N-S and 180° W-E), satellite-based estimates of actual evapotranspiration at 0.25° spatial resolution have recently become available, generated by the Global Land Evaporation Amsterdam Model (GLEAM). We investigate the use of these data to improve the performance of a simple lumped catchment-scale hydrologic model driven by satellite-based precipitation estimates to generate streamflow simulations for a poorly gauged basin in Africa. In one approach, we use GLEAM to constrain the evapotranspiration estimates generated by the model, thereby modifying daily water balance and improving model performance. In an alternative approach, we instead change the structure of the model to improve its ability to simulate actual evapotranspiration (as estimated by GLEAM). Finally, we test whether the GLEAM product is able to further improve the performance of the structurally modified model. Results indicate that while both approaches can provide improved simulations of streamflow, the second approach also improves the simulation of actual evapotranspiration significantly, which substantiates the importance of making diagnostic structural improvements to hydrologic models whenever possible.

  5. Improving estimates of surface water radiocarbon reservoir ages in the northeastern Atlantic Ocean.

    Science.gov (United States)

    Greenop, Rosanna; Burke, Andrea; Rae, James; Austin, William; Reimer, Paula; Blaauw, Maarten; Crocker, Anya; Chalk, Thomas; Barker, Stephen; Knutz, Paul; Hall, Ian

    2016-04-01

    Radiocarbon measurements from foraminifera in marine sediment cores are widely used to constrain age models and the timing of paleoceanographic events, as well as past changes in ocean circulation and carbon cycling. However, the use of radiocarbon for both dating and palaeoceanographic applications is limited in sediment cores by a lack of knowledge about the surface ocean radiocarbon reservoir age and how it varies in both space and time. Typically, to convert a planktic radiocarbon age into a calendar age, an assumed constant reservoir age is applied. However, there is mounting evidence to suggest that this assumption of constant reservoir age through time is an oversimplification, particularly for the high latitude oceans during the cold climates of the last glacial and deglacial periods. Here we present new high-resolution radiocarbon records together with tephra tie points and 230-thorium (230Th) constrained sedimentation rates to improve estimates of radiocarbon reservoir age in the Northeast Atlantic Ocean. In addition we will explore the impact of the new reservoir ages for both the age models of the cores studied, as well as the palaeoceanographic implications of these reservoir age changes during intervals of rapid climate change over the past 40,000 years.

  6. Robust hyperparameter estimation protects against hypervariable genes and improves power to detect differential expression

    Science.gov (United States)

    Phipson, Belinda; Lee, Stanley; Majewski, Ian J.; Alexander, Warren S.; Smyth, Gordon K.

    2017-01-01

    One of the most common analysis tasks in genomic research is to identify genes that are differentially expressed (DE) between experimental conditions. Empirical Bayes (EB) statistical tests using moderated genewise variances have been very effective for this purpose, especially when the number of biological replicate samples is small. The EB procedures can however be heavily influenced by a small number of genes with very large or very small variances. This article improves the differential expression tests by robustifying the hyperparameter estimation procedure. The robust procedure has the effect of decreasing the informativeness of the prior distribution for outlier genes while increasing its informativeness for other genes. This effect has the double benefit of reducing the chance that hypervariable genes will be spuriously identified as DE while increasing statistical power for the main body of genes. The robust EB algorithm is fast and numerically stable. The procedure allows exact small-sample null distributions for the test statistics and reduces exactly to the original EB procedure when no outlier genes are present. Simulations show that the robustified tests have similar performance to the original tests in the absence of outlier genes but have greater power and robustness when outliers are present. The article includes case studies for which the robust method correctly identifies and downweights genes associated with hidden covariates and detects more genes likely to be scientifically relevant to the experimental conditions. The new procedure is implemented in the limma software package freely available from the Bioconductor repository.
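
    The EB moderation at the heart of this approach shrinks each genewise variance toward a prior value; the classic posterior-variance formula is sketched below. The robust variant down-weights outlier genes when estimating the prior, which the limma package exposes through the robust argument of eBayes (eBayes(fit, robust=TRUE)). This is a sketch of the formula, not limma's implementation.

```python
def moderated_variance(s2_g, d_g, s2_0, d_0):
    """Posterior (moderated) variance for one gene: a degrees-of-freedom-
    weighted blend of the genewise variance s2_g (d_g df) and the prior
    variance s2_0 (d_0 df)."""
    return (d_0 * s2_0 + d_g * s2_g) / (d_0 + d_g)

# Example: a noisy gene with s2_g = 4.0 on 3 df and prior s2_0 = 1.0 on 4 df
# is shrunk to (4*1.0 + 3*4.0) / 7 ~= 2.29.
```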

  7. Estimating cultural benefits from surface water status improvements in freshwater wetland ecosystems.

    Science.gov (United States)

    Roebeling, Peter; Abrantes, Nelson; Ribeiro, Sofia; Almeida, Pedro

    2016-03-01

    Freshwater wetlands provide crucial ecosystem services, though they are subject to anthropogenic/natural stressors that provoke negative impacts on these ecosystems, services and values. The European Union Water Framework Directive aims to achieve good status of surface waters by 2015 through the implementation of Catchment Management Plans. Implementation of Catchment Management Plans is costly, while the associated benefits from improvements in surface water status are less well known. This paper establishes a functional relationship between surface water status and the cultural ecosystem service values of freshwater systems. Hence, we develop a bio-economic valuation approach in which we relate the ecological status and chemical status of surface waters (based on local physico-chemical and benthic macro-invertebrate survey data) to willingness-to-pay (using benefit-function transfer). Results for the Pateira de Fermentelos freshwater wetland (Portugal) show that the current status of surface waters is good from a chemical though only moderate from an ecological perspective. The current cultural ecosystem service value of the wetland is estimated at 1.54 m€/yr, increasing to 2.02 m€/yr if good status of surface waters is obtained. Taking ecosystem services and values into account in decision making is essential to avoid costs from externalities and to capture benefits from spill-overs, leading to more equitable, effective and efficient water resources management.

  8. Estimation of Catchment Transit Time in the Fuji River Basin using an improved Tank model

    Science.gov (United States)

    Wenchao, M.; Yamanaka, T.; Wakiyama, Y.; Wang, P.

    2013-12-01

    As an important parameter that reflects the characteristics of catchments, the catchment transit time (CTT) has received increasingly wide attention in recent years. The CTT is defined as the time water spends travelling through a catchment to the stream network [1], and it describes how catchments retain and release water and solutes and thus control geochemical and biogeochemical cycling and contamination persistence [2]. The objectives of the present study are to develop a new approach for estimating CTT without prior information on transit time distribution (TTD) functions and to apply it to the Fuji River basin in the Central Japan Alps Region. In this study, an improved Tank model was used to compute the mean CTT and the TTD functions simultaneously. It involves water fluxes and isotope mass balance. Water storage capacity in the catchment, which strongly affects CTT, is reflected more sensitively in the isotope mass balance than in the water fluxes. A model calibrated with observed discharge and isotope data is used for virtual age tracer computation to estimate CTT. This model not only considers the hydrological data and physical processes of the research area but also reflects the actual TTD by taking into account the geological conditions, land use and other catchment-hydrological conditions. For the calibration of the model, we used river discharge records obtained by the Ministry of Land, Infrastructure and Transportation, and we are collecting isotope data of precipitation and river waters monthly or semi-weekly. Three sub-catchments (SC1-SC3) in the Fuji River basin were selected to test the model with five layers: the surface layer, upper-soil layer, lower-soil layer, groundwater aquifer layer and bedrock layer (Layers 1-5). The model output was evaluated using the Nash-Sutcliffe efficiency (NSE), the root mean square error-observations standard deviation ratio (RSR), and percent bias (PBIAS). Using long time-series of discharge records for calibration, the simulated
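
    For orientation, a Tank model in its simplest form is a cascade of linear reservoirs. The toy step below is illustrative only; the study's five-layer model adds isotope mass balance and virtual age tracers, which are not reproduced here, and all names and coefficients are assumptions.

```python
# One daily step of a simplified linear tank cascade (illustrative sketch).
def tank_cascade_step(S, precip, et, k_out, k_down):
    """S: layer storages [mm], top to bottom; k_out: lateral outflow
    coefficients per layer; k_down: percolation coefficients per layer
    (set the last one to 0 for the bottom layer). Returns (S, runoff)."""
    S = list(S)
    S[0] = max(S[0] + precip - et, 0.0)     # forcing applied to the top tank
    runoff = 0.0
    for i in range(len(S)):
        q = k_out[i] * S[i]                 # lateral discharge to the stream
        d = k_down[i] * S[i]                # percolation to the layer below
        S[i] -= q + d
        if i + 1 < len(S):
            S[i + 1] += d
        runoff += q
    return S, runoff
```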

  9. Improving The Accuracy Of Bluetooth Based Travel Time Estimation Using Low-Level Sensor Data

    DEFF Research Database (Denmark)

    Araghi, Bahar Namaki; Tørholm Christensen, Lars; Krishnan, Rajesh

    2013-01-01

    triggered by a single device. This could lead to location ambiguity and reduced accuracy of travel time estimation. Therefore, the accuracy of travel time estimations by Bluetooth Technology (BT) depends upon how location ambiguity is handled by the estimation method. The issue of multiple detection events...... in the context of travel time estimation by BT has been considered by various researchers. However, treatment of this issue has remained simplistic so far. Most previous studies simply used the first detection event (Enter-Enter) as the best estimate. No systematic analysis for exploring the most accurate method...... of estimating travel time using multiple detection events has been conducted. In this study different aspects of BT detection zone, including size and its impact on the accuracy of travel time estimation, are discussed. Moreover, four alternative methods are applied; namely, Enter-Enter, Leave-Leave, Peak...

  10. An improved estimation algorithm for the underdetermined mixing matrix

    Institute of Scientific and Technical Information of China (English)

    付卫红; 杨帅; 熊超; 刘乃安

    2016-01-01

    For the estimation of the mixing matrix in underdetermined blind source separation, most existing algorithms suffer from high complexity and low estimation accuracy. Based on an analysis of the K-Plane algorithm, an improved algorithm, called the IK-Plane (improved K-Plane) algorithm, is proposed for estimating the mixing matrix. The IK-Plane algorithm computes, by optimization, the vector whose sum of inner products with all observation signals is minimal, and takes this vector as the new normal vector. This improved normal-vector update reduces the time complexity of the algorithm and increases its estimation accuracy. Experimental results show that, compared with the K-Plane algorithm, the IK-Plane algorithm improves estimation accuracy while significantly reducing time complexity.

  11. Improving the estimation of complete field soil water characteristic curves through field monitoring data

    Science.gov (United States)

    Bordoni, M.; Bittelli, M.; Valentino, R.; Chersich, S.; Meisina, C.

    2017-09-01

    In this work, Soil Water Characteristic Curves (SWCCs) were reconstructed through simultaneous field measurements of soil pore water pressure and water content. The objective was to evaluate whether field-based monitoring can improve the accuracy of SWCC estimation with respect to the use of laboratory techniques. Moreover, field assessment of SWCCs allowed us to: a) quantify the hydrological hysteresis affecting SWCCs through field data; b) analyze the effect of different temporal resolutions of the field measurements; c) highlight the differences in SWCCs reconstructed for a particular soil during different hydrological years; and d) evaluate the reliability of field-reconstructed SWCCs by comparing assessed and measured trends of a component of the soil water balance. These aspects were fundamental for assessing the reliability of the field-reconstructed SWCCs. Field data were measured at two Italian test sites, which were used to evaluate the goodness of field-reconstructed SWCCs for soils characterized by different geomorphological, geological, physical and pedological features. Field-measured or laboratory-measured SWCC data for 5 soil horizons (3 in a predominantly silty soil, 2 in a predominantly clayey one) were fitted with the Van Genuchten model. Different field drying and wetting periods were identified based on the monthly meteorological conditions, in terms of rainfall and evapotranspiration amounts, of different cycles. This method allowed a correct discrimination of the main drying and main wetting paths from field data and a more reliable quantification of soil hydrological properties with respect to laboratory methodologies. Particular patterns of change in SWCC form along depth could also be identified. Field SWCC estimation is not affected by the temporal resolution of the acquisition (hours or days), as testified by similar values of the Van Genuchten equation fitting parameters. Instead, hourly data
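
    Fitting the Van Genuchten (1980) retention model to paired suction/water-content points is the standard step implied here. A sketch with illustrative starting values (they would need tuning per soil):

```python
# Van Genuchten SWCC fit (sketch). psi: suction (positive), theta: vol. water content.
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(psi, theta_r, theta_s, alpha, n):
    """Van Genuchten (1980) retention curve with m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * psi) ** n) ** m

def fit_swcc(psi, theta):
    p0 = [theta.min(), theta.max(), 0.05, 1.5]   # illustrative initial guesses
    popt, _ = curve_fit(van_genuchten, psi, theta, p0=p0, maxfev=20000)
    return popt  # theta_r, theta_s, alpha, n
```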

  12. An amino acid substitution-selection model adjusts residue fitness to improve phylogenetic estimation.

    Science.gov (United States)

    Wang, Huai-Chun; Susko, Edward; Roger, Andrew J

    2014-04-01

    Standard protein phylogenetic models use fixed rate matrices of amino acid interchange derived from analyses of large databases. Differences between the stationary amino acid frequencies of these rate matrices and those of a data set of interest are typically adjusted for by matrix multiplication that converts the empirical rate matrix to an exchangeability matrix which is then postmultiplied by the amino acid frequencies in the alignment. The result is a time-reversible rate matrix with stationary amino acid frequencies equal to the data set frequencies. On the basis of population genetics principles, we develop an amino acid substitution-selection model that parameterizes the fitness of an amino acid as the logarithm of the ratio of the frequency of the amino acid to the frequency of the same amino acid under no selection. The model gives rise to a different sequence of matrix multiplications to convert an empirical rate matrix to one that has stationary amino acid frequencies equal to the data set frequencies. We incorporated the substitution-selection model with an improved amino acid class frequency mixture (cF) model to partially take into account site-specific amino acid frequencies in the phylogenetic models. We show that 1) the selection models fit data significantly better than corresponding models without selection for most of the 21 test data sets; 2) both cF and cF selection models favored the phylogenetic trees that were inferred under current sophisticated models and methods for three difficult phylogenetic problems (the positions of microsporidia and breviates in eukaryote phylogeny and the position of the root of the angiosperm tree); and 3) for data simulated under site-specific residue frequencies, the cF selection models estimated trees closer to the generating trees than a standard Γ model or cF without selection. We also explored several ways of estimating amino acid frequencies under neutral evolution that are required for these selection

  13. Re-evaluation of individual diameter : height allometric models to improve biomass estimation of tropical trees.

    Science.gov (United States)

    Ledo, Alicia; Cornulier, Thomas; Illian, Janine B; Iida, Yoshiko; Kassim, Abdul Rahman; Burslem, David F R P

    2016-12-01

    Accurate estimation of tree biomass is necessary to provide realistic values of the carbon stored in the terrestrial biosphere. A recognized source of errors in tree aboveground biomass (AGB) estimation is introduced when individual tree height values (H) are not directly measured but estimated from diameter at breast height (DBH) using allometric equations. In this paper, we evaluate the performance of 12 alternative DBH : H equations and compare their effects on AGB estimation for three tropical forests that occur in contrasting climatic and altitudinal zones. We found that fitting a three-parameter Weibull function using data collected locally generated the lowest errors and bias in H estimation, and that equations fitted to these data were more accurate than equations with parameters derived from the literature. For computing AGB, the introduced error values differed notably among DBH : H allometric equations, and in most cases showed a clear bias that resulted in either over- or under-estimation of AGB. Fitting the three-parameter Weibull function minimized errors in AGB estimates in our study and we recommend its widespread adoption for carbon stock estimation. We conclude that many previous studies are likely to present biased estimates of AGB due to the method of H estimation.
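
    The recommended three-parameter Weibull DBH : H form, H = a(1 - exp(-b D^c)), can be fitted locally along the lines below. Starting values are guesses and would need tuning per site; the names are illustrative.

```python
# Local three-parameter Weibull diameter:height fit (sketch).
import numpy as np
from scipy.optimize import curve_fit

def weibull_h(dbh, a, b, c):
    """H = a * (1 - exp(-b * D^c)); a is the asymptotic height."""
    return a * (1.0 - np.exp(-b * dbh ** c))

def fit_dh(dbh, height):
    p0 = [height.max(), 0.05, 0.9]   # illustrative initial guesses
    popt, _ = curve_fit(weibull_h, dbh, height, p0=p0, maxfev=10000)
    return popt  # a, b, c
```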

  14. Improving autocorrelation regression for the Hurst parameter estimation of long-range dependent time series based on golden section search

    Science.gov (United States)

    Li, Ming; Zhang, Peidong; Leng, Jianxing

    2016-03-01

    This article presents an improved autocorrelation function (ACF) regression method for estimating the Hurst parameter of a time series with long-range dependence (LRD) by using golden section search (GSS). We show that the present method is substantially more efficient than the conventional ACF regression method of H estimation. Our research uses fractional Gaussian noise as a test case, but the method introduced is applicable to time series with LRD in general.
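
    A sketch of the idea: fit the fractional Gaussian noise autocorrelation model rho(k) = 0.5(|k-1|^{2H} - 2k^{2H} + (k+1)^{2H}) to the empirical ACF, with golden section search minimizing the squared misfit over H. The lag range and tolerance are assumptions, not the article's exact procedure.

```python
# Hurst estimation by ACF fitting with golden section search (sketch).
import numpy as np

def fgn_acf(k, H):
    """Theoretical fGn autocorrelation at integer lags k >= 1."""
    return 0.5 * (np.abs(k - 1) ** (2 * H) - 2 * k ** (2 * H) + (k + 1) ** (2 * H))

def empirical_acf(x, kmax):
    x = x - x.mean()
    c0 = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:-k], x[k:]) / len(x) / c0 for k in range(1, kmax + 1)])

def estimate_hurst(x, kmax=20, tol=1e-6):
    k = np.arange(1, kmax + 1)
    r = empirical_acf(x, kmax)
    sse = lambda H: np.sum((fgn_acf(k, H) - r) ** 2)
    invphi = (np.sqrt(5.0) - 1.0) / 2.0
    a, b = 0.01, 0.99                      # search interval for H
    while b - a > tol:                     # golden section search
        c, d = b - invphi * (b - a), a + invphi * (b - a)
        if sse(c) < sse(d):
            b = d
        else:
            a = c
    return (a + b) / 2.0
```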

  15. Improving the Estimation of Above Ground Biomass Using Dual Polarimetric PALSAR and ETM+ Data in the Hyrcanian Mountain Forest (Iran)

    Directory of Open Access Journals (Sweden)

    Sara Attarchi

    2014-04-01

    Full Text Available The objective of this study is to develop models based on both optical and L-band Synthetic Aperture Radar (SAR) data for above-ground dry biomass (hereafter AGB) estimation in mountain forests. We chose the site of the Loveh forest, part of the Hyrcanian forest, for which previous attempts to estimate AGB have proven difficult. Uncorrected ETM+ data allow only a relatively poor AGB estimation, because topography can hinder AGB estimation in mountainous terrain. Therefore, we focused on the use of atmospherically and topographically corrected multispectral Landsat ETM+ and Advanced Land-Observing Satellite/Phased Array L-band Synthetic Aperture Radar (ALOS/PALSAR) data to estimate forest AGB. We then evaluated 11 different multiple linear regression models using different combinations of corrected spectral and PolSAR bands and their derived features. The use of corrected ETM+ spectral bands and GLCM textures improves AGB estimation significantly (adjusted R2 = 0.59; RMSE = 31.5 Mg/ha). Adding SAR backscattering coefficients as well as PolSAR features and textures increases the accuracy of AGB estimation substantially (adjusted R2 = 0.76; RMSE = 25.04 Mg/ha). Our results confirm that topographically and atmospherically corrected data are indispensable for estimating the physical properties of mountain forests. We also demonstrate that only the joint use of PolSAR and multispectral data allows a good estimation of AGB in these regions.

  16. Improvement of LS estimation of regression coefficients

    Institute of Scientific and Technical Information of China (English)

    赵丽棉; 黄基廷

    2011-01-01

    This paper discusses the influence of multicollinearity on the LS estimate. When multicollinearity exists, the principal component estimate has a smaller mean square error than the LS estimate. Examples are then given to demonstrate the methods and steps by which the principal component estimate is used to improve the LS estimate.

  17. Improved seismic risk estimation for Bucharest, based on multiple hazard scenarios, analytical methods and new techniques

    Science.gov (United States)

    Toma-Danila, Dragos; Florinela Manea, Elena; Ortanza Cioflan, Carmen

    2014-05-01

    Bucharest, the capital of Romania (1,678,000 inhabitants in 2011), is one of the big European cities most exposed to seismic damage. The major earthquakes affecting the city have their origin in the Vrancea region. The Vrancea intermediate-depth source generates, statistically, 2-3 shocks with moment magnitude >7.0 per century. Although the focal distance is greater than 170 km, the historical records (from the 1838, 1894, 1908, 1940 and 1977 events) reveal severe effects in the Bucharest area, e.g. intensity IX (MSK) for the 1940 event. During the 1977 earthquake, 1420 people were killed and 33 large buildings collapsed. The present-day building stock is vulnerable due both to construction (material, age) and to soil conditions (high amplification generated within the weakly consolidated Quaternary deposits, whose thickness varies from 250 to 500 m throughout the city). Out of 2563 old buildings evaluated by experts, 373 are likely to experience severe damage or collapse in the next major earthquake. The total number of residential buildings in 2011 was 113900. In order to guide mitigation measures, different studies have tried to estimate the seismic risk of Bucharest in terms of building, population or economic damage probability. Unfortunately, most of them were based on incomplete sets of data, whether regarding the hazard or the building stock in detail. However, during the DACEA Project, the National Institute for Earth Physics, together with the Technical University of Civil Engineering Bucharest and the NORSAR Institute, managed to compile a database of buildings in southern Romania (according to the 1999 census), with 48 associated capacity and fragility curves. Until now, the real-time estimation system developed there had not been implemented for Bucharest. This paper presents more than an adaptation of this system to Bucharest: first, we analyze the previous seismic risk studies from a SWOT perspective. This reveals that most of the studies don't use

  18. Initial position estimation method for permanent magnet synchronous motor based on improved pulse voltage injection

    DEFF Research Database (Denmark)

    Wang, Z.; Lu, K.; Ye, Y.

    2011-01-01

    According to the saliency of the permanent magnet synchronous motor (PMSM), information about the rotor position is implied in the behaviour of the stator inductances due to the magnetic saturation effect. Research has focused on the initial rotor position estimation of the PMSM by injecting modulated pulse voltage vec.... The experimental results show that the proposed method estimates the initial rotor position reliably and efficiently. The method is also simple and achieves satisfactory estimation accuracy.

  19. Improving uncertainty estimation in urban hydrological modeling by statistically describing bias

    Directory of Open Access Journals (Sweden)

    D. Del Giudice

    2013-10-01

    Full Text Available Hydrodynamic models are useful tools for urban water management. Unfortunately, it is still challenging to obtain accurate results and plausible uncertainty estimates when using these models. In particular, with the currently applied statistical techniques, flow predictions are usually overconfident and biased. In this study, we present a flexible and relatively efficient methodology (i) to obtain more reliable hydrological simulations in terms of coverage of validation data by the uncertainty bands and (ii) to separate prediction uncertainty into its components. Our approach acknowledges that urban drainage predictions are biased. This is mostly due to input errors and structural deficits of the model. We address this issue by describing model bias in a Bayesian framework. The bias becomes an autoregressive term additional to white measurement noise, the only error type accounted for in traditional uncertainty analysis. To allow for bigger discrepancies during wet weather, we make the variance of bias dependent on the input (rainfall) and/or output (runoff) of the system. Specifically, we present a structured approach to select, among five variants, the optimal bias description for a given urban or natural case study. We tested the methodology in a small monitored stormwater system described with a parsimonious model. Our results clearly show that flow simulations are much more reliable when bias is accounted for than when it is neglected. Furthermore, our probabilistic predictions can discriminate between three uncertainty contributions: parametric uncertainty, bias, and measurement errors. In our case study, the best performing bias description is the output-dependent bias using a log-sinh transformation of data and model results. The limitations of the framework presented are some ambiguity due to the subjective choice of priors for bias parameters and its inability to address the causes of model discrepancies. Further research should focus on
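
    A minimal sketch of the error model just described, under assumed details: an AR(1) bias whose standard deviation grows linearly with rainfall input, on top of white measurement noise. All series and coefficients are invented, not the study's.

    import numpy as np

    rng = np.random.default_rng(1)
    T = 200
    rain = rng.gamma(shape=0.3, scale=5.0, size=T)   # synthetic rainfall, mm/h

    phi = 0.9                     # AR(1) persistence of the bias
    sigma_b0, kappa = 0.5, 0.2    # base bias std and rainfall dependence
    sigma_e = 0.3                 # white measurement noise std

    bias = np.zeros(T)
    for t in range(1, T):
        sigma_t = sigma_b0 + kappa * rain[t]         # larger bias in wet weather
        bias[t] = phi * bias[t - 1] + rng.normal(0.0, sigma_t)

    noise = rng.normal(0.0, sigma_e, size=T)
    residual = bias + noise       # observed flow minus model output
    print("wet-weather residual std:", residual[rain > 1].std())
    print("dry-weather residual std:", residual[rain <= 1].std())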

  20. Improved allometric models to estimate the aboveground biomass of tropical trees.

    Science.gov (United States)

    Chave, Jérôme; Réjou-Méchain, Maxime; Búrquez, Alberto; Chidumayo, Emmanuel; Colgan, Matthew S; Delitti, Welington B C; Duque, Alvaro; Eid, Tron; Fearnside, Philip M; Goodman, Rosa C; Henry, Matieu; Martínez-Yrízar, Angelina; Mugasha, Wilson A; Muller-Landau, Helene C; Mencuccini, Maurizio; Nelson, Bruce W; Ngomanda, Alfred; Nogueira, Euler M; Ortiz-Malavassi, Edgar; Pélissier, Raphaël; Ploton, Pierre; Ryan, Casey M; Saldarriaga, Juan G; Vieilledent, Ghislain

    2014-10-01

    Terrestrial carbon stock mapping is important for the successful implementation of climate change mitigation policies. Its accuracy depends on the availability of reliable allometric models to infer oven-dry aboveground biomass of trees from census data. The degree of uncertainty associated with previously published pantropical aboveground biomass allometries is large. We analyzed a global database of directly harvested trees at 58 sites, spanning a wide range of climatic conditions and vegetation types (4004 trees ≥ 5 cm trunk diameter). When trunk diameter, total tree height, and wood specific gravity were included in the aboveground biomass model as covariates, a single model was found to hold across tropical vegetation types, with no detectable effect of region or environmental factors. The mean percent bias and variance of this model was only slightly higher than that of locally fitted models. Wood specific gravity was an important predictor of aboveground biomass, especially when including a much broader range of vegetation types than previous studies. The generic tree diameter-height relationship depended linearly on a bioclimatic stress variable E, which compounds indices of temperature variability, precipitation variability, and drought intensity. For cases in which total tree height is unavailable for aboveground biomass estimation, a pantropical model incorporating wood density, trunk diameter, and the variable E outperformed previously published models without height. However, to minimize bias, the development of locally derived diameter-height relationships is advised whenever possible. Both new allometric models should contribute to improving the accuracy of biomass assessment protocols in tropical vegetation types and to advancing our understanding of architectural and evolutionary constraints on woody plant development.

  1. Fusion of electromagnetic trackers to improve needle deflection estimation: simulation study.

    Science.gov (United States)

    Sadjadi, Hossein; Hashtrudi-Zaad, Keyvan; Fichtinger, Gabor

    2013-10-01

    We present a needle deflection estimation method to anticipate needle bending during insertion into deformable tissue. Using limited additional sensory information, our approach reduces the estimation error caused by uncertainties inherent in the conventional needle deflection estimation methods. We use Kalman filters to combine a kinematic needle deflection model with the position measurements of the base and the tip of the needle taken by electromagnetic (EM) trackers. One EM tracker is installed on the needle base and estimates the needle tip position indirectly using the kinematic needle deflection model. Another EM tracker is installed on the needle tip and estimates the needle tip position through direct, but noisy measurements. Kalman filters are then employed to fuse these two estimates in real time and provide a reliable estimate of the needle tip position, with reduced variance in the estimation error. We implemented this method to compensate for needle deflection during simulated needle insertions and performed sensitivity analysis for various conditions. At an insertion depth of 150 mm, we observed needle tip estimation error reductions in the range of 28% (from 1.8 to 1.3 mm) to 74% (from 4.8 to 1.2 mm), which demonstrates the effectiveness of our method, offering a clinically practical solution.
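
    The essence of the fusion step can be shown with the static special case of the Kalman update, inverse-variance weighting of two independent estimates; the positions and variances below are illustrative, not the paper's data.

    import numpy as np

    tip_from_model = np.array([150.0, 4.8, 0.0])   # mm, propagated from base tracker
    var_model = 4.0**2                             # its error variance (mm^2)
    tip_measured = np.array([150.0, 3.9, 0.1])     # mm, noisy tip tracker reading
    var_meas = 1.5**2

    # inverse-variance weights (the static analogue of the Kalman gain)
    w = var_meas / (var_model + var_meas)
    tip_fused = w * tip_from_model + (1 - w) * tip_measured
    var_fused = var_model * var_meas / (var_model + var_meas)

    print("fused tip estimate (mm):", tip_fused)
    print("fused variance:", var_fused, "< min of", var_model, var_meas)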

  2. Spatial-temporal models for improved county-level annual estimates

    Science.gov (United States)

    Francis Roesch

    2009-01-01

    The consumers of data derived from extensive forest inventories often seek annual estimates at a finer spatial scale than that which the inventory was designed to provide. This paper discusses a few model-based and model-assisted estimators to consider for county level attributes that can be applied when the sample would otherwise be inadequate for producing low-...

  3. Improved Margin of Error Estimates for Proportions in Business: An Educational Example

    Science.gov (United States)

    Arzumanyan, George; Halcoussis, Dennis; Phillips, G. Michael

    2015-01-01

    This paper presents the Agresti & Coull "Adjusted Wald" method for computing confidence intervals and margins of error for common proportion estimates. The presented method is easily implementable by business students and practitioners and provides more accurate estimates of proportions particularly in extreme samples and small…
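
    The adjusted Wald construction is standard: add z²/2 pseudo-successes and z²/2 pseudo-failures before applying the usual Wald formula. A minimal Python sketch (the example numbers are ours, not the paper's):

    from math import sqrt

    def adjusted_wald(successes, n, z=1.96):
        # Agresti-Coull: inflate the sample by z^2 pseudo-observations,
        # half successes and half failures, then use the Wald interval.
        n_adj = n + z**2
        p_adj = (successes + z**2 / 2) / n_adj
        half = z * sqrt(p_adj * (1 - p_adj) / n_adj)
        return max(0.0, p_adj - half), min(1.0, p_adj + half)

    # Extreme sample where the plain Wald interval collapses to [0, 0]:
    print(adjusted_wald(0, 10))   # roughly (0.0, 0.32)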

  4. Estimation of key parameters in adaptive neuron model according to firing patterns based on improved particle swarm optimization algorithm

    Science.gov (United States)

    Yuan, Chunhua; Wang, Jiang; Yi, Guosheng

    2017-03-01

    Estimation of ion channel parameters is crucial to spike initiation in neurons. Biophysical neuron models have numerous ion channel parameters, but only a few of them play key roles in the firing patterns of the models, so we choose three parameters featuring the adaptation in the Ermentrout neuron model to be estimated. However, the traditional particle swarm optimization (PSO) algorithm still falls easily into local optima and exhibits premature convergence on some problems. In this paper, we propose an improved method that mixes a concave function with a dynamic logistic chaotic map to adjust the inertia weights according to the fitness value, effectively improving the global convergence ability of the algorithm. The accurate prediction of firing trajectories by the rebuilt model using the estimated parameters proves that estimating only a few important ion channel parameters can establish the model well and that the proposed algorithm is effective. Estimations using two classic PSO algorithms are also compared to the improved PSO to verify that the proposed algorithm can avoid local optima and quickly converge to the optimal value. The results provide important theoretical foundations for building biologically realistic neuron models.
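
    The abstract describes mixing a concave function with a dynamic logistic chaotic map to adjust the inertia weight; the sketch below implements one plausible reading, driving the inertia weight from a logistic map. The objective, bounds, and constants are invented for the example.

    import numpy as np

    def pso(objective, dim=3, n_particles=30, iters=200, lo=-5.0, hi=5.0):
        rng = np.random.default_rng(2)
        x = rng.uniform(lo, hi, (n_particles, dim))
        v = np.zeros_like(x)
        pbest = x.copy()
        pbest_f = np.apply_along_axis(objective, 1, x)
        gbest = pbest[pbest_f.argmin()].copy()
        z = 0.37                                  # logistic-map state
        for _ in range(iters):
            z = 4.0 * z * (1.0 - z)               # chaotic update in (0, 1)
            w = 0.4 + 0.5 * z                     # inertia weight in (0.4, 0.9)
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            f = np.apply_along_axis(objective, 1, x)
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            gbest = pbest[pbest_f.argmin()].copy()
        return gbest, pbest_f.min()

    # toy quadratic stands in for the firing-pattern fitting error
    best, fbest = pso(lambda p: float(np.sum(p**2)))
    print(best, fbest)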

  5. An Improved Estimation of Regional Fractional Woody/Herbaceous Cover Using Combined Satellite Data and High-Quality Training Samples

    Directory of Open Access Journals (Sweden)

    Xu Liu

    2017-01-01

    Full Text Available Mapping vegetation cover is critical for understanding and monitoring ecosystem functions in semi-arid biomes. As existing estimates tend to underestimate the woody cover in areas with dry deciduous shrubland and woodland, we present an approach to improve the regional estimation of woody and herbaceous fractional cover in the East Asia steppe. The developed approach uses Random Forest models combining multiple remote sensing data: training samples derived from high-resolution images in a tailored spatial sampling, and model inputs composed of specific metrics from the MODIS sensor and ancillary variables including topographic, bioclimatic, and land surface information. We emphasize that effective spatial sampling, high-quality classification, and adequate geospatial information are important prerequisites for establishing appropriate model inputs and achieving high-quality training samples. This study suggests that the optimal models improve estimation accuracy (NMSE 0.47 for woody and 0.64 for herbaceous plants) and show a consistent agreement with field observations. Compared with the existing woody estimate product, the proposed woody cover estimation can delineate regions with subshrubs and shrubs, showing an improved capability of capturing spatialized detail of vegetation signals. This approach is applicable over sizable semi-arid areas such as temperate steppes, savannas, and prairies.
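
    A minimal sketch of the regression step only (the authors' sampling and feature engineering are not reproduced): a Random Forest trained on per-pixel predictor stacks, with synthetic stand-ins for the MODIS metrics and ancillary layers.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)
    n_pixels, n_features = 5000, 12        # stand-ins for training samples
    X = rng.normal(size=(n_pixels, n_features))
    frac_woody = np.clip(0.3 + 0.1 * X[:, 0] - 0.05 * X[:, 1]
                         + 0.02 * rng.normal(size=n_pixels), 0.0, 1.0)

    X_tr, X_te, y_tr, y_te = train_test_split(X, frac_woody, random_state=0)
    rf = RandomForestRegressor(n_estimators=300, random_state=0)
    rf.fit(X_tr, y_tr)
    print("held-out R^2:", rf.score(X_te, y_te))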

  6. Improving The Accuracy Of Bluetooth Based Travel Time Estimation Using Low-Level Sensor Data

    DEFF Research Database (Denmark)

    Araghi, Bahar Namaki; Tørholm Christensen, Lars; Krishnan, Rajesh

    2013-01-01

    Bluetooth sensors have a large detection zone compared to other static Vehicle Re-Identification Systems (VRIS). Although a larger detection zone increases the probability of detecting a Bluetooth-enabled device in a fast-moving vehicle, it increases the probability of multiple detection events...... triggered by a single device. This could lead to location ambiguity and reduced accuracy of travel time estimation. Therefore, the accuracy of travel time estimations by Bluetooth Technology (BT) depends upon how location ambiguity is handled by the estimation method. The issue of multiple detection events...

  7. Search Space Calculation to Improve Parameter Estimation of Excitation Control Systems

    Directory of Open Access Journals (Sweden)

    Andrés J. Saavedra-Montes

    2013-11-01

    Full Text Available A method to calculate the search space for each parameter in an excitation control system is presented in this paper. The calculated search space is intended to reduce the number of parameter solution sets that can be found by an estimation algorithm, reducing its processing time. The method considers a synchronous generator time constant range between 4 s and 10 s, an excitation control system performance index, a controller design technique, and the excitation control system model structure. When the obtained search space is used to estimate the parameters, less processing time is used by the algorithm. Also, the estimated parameters are closer to the reference ones.

  8. Improving uncertainty estimation in urban hydrological modeling by statistically describing bias

    Directory of Open Access Journals (Sweden)

    D. Del Giudice

    2013-04-01

    Full Text Available Hydrodynamic models are useful tools for urban water management. Unfortunately, it is still challenging to obtain accurate results and plausible uncertainty estimates when using these models. In particular, with the currently applied statistical techniques, flow predictions are usually overconfident and biased. In this study, we present a flexible and computationally efficient methodology (i) to obtain more reliable hydrological simulations in terms of coverage of validation data by the uncertainty bands and (ii) to separate prediction uncertainty into its components. Our approach acknowledges that urban drainage predictions are biased. This is mostly due to input errors and structural deficits of the model. We address this issue by describing model bias in a Bayesian framework. The bias becomes an autoregressive term additional to white measurement noise, the only error type accounted for in traditional uncertainty analysis in urban hydrology. To allow for bigger discrepancies during wet weather, we make the variance of bias dependent on the input (rainfall) and/or output (runoff) of the system. Specifically, we present a structured approach to select, among five variants, the optimal bias description for a given urban or natural case study. We tested the methodology in a small monitored stormwater system described by means of a parsimonious model. Our results clearly show that flow simulations are much more reliable when bias is accounted for than when it is neglected. Furthermore, our probabilistic predictions can discriminate between three uncertainty contributions: parametric uncertainty, bias (due to input and structural errors), and measurement errors. In our case study, the best performing bias description was the output-dependent bias using a log-sinh transformation of data and model results. The limitations of the framework presented are some ambiguity due to the subjective choice of priors for bias parameters and its inability to directly

  9. Dust indicator maps for improving solar radiation estimation from satellite data

    Science.gov (United States)

    Marpu, P. R.; Eissa, Y.; Al Meqbali, N.; Ghedira, H.

    2012-12-01

    Measurement of solar radiation from ground-based sensors is an expensive process, as it requires a large number of ground measurement stations to account for spatial variability, and the instruments require regular maintenance. Satellite data can be used to model solar radiation and produce maps at regular intervals, which can be used for solar resource assessment. The models can be empirical, physics-based, or statistical. However, in environments such as the United Arab Emirates (UAE), which are characterized by heavy dust, the models yield lower accuracies. In this study, we build on the model developed in [1], where ensembles of ANNs are used separately for cloudy and cloud-free pixels to derive solar radiation maps using data acquired in the thermal channels of the Meteosat SEVIRI instrument. The model showed good accuracy for the estimation of direct normal irradiance (DNI), diffuse horizontal irradiance (DHI), and global horizontal irradiance (GHI); the relative root mean square error (rRMSE) values for DNI, DHI, and GHI were 15.7, 23.6, and 7.2%, respectively, while the relative mean bias error (rMBE) values were +0.8, +8.3, and +1.9%, respectively. However, an analysis of the results on different dusty days showed varying accuracy. To further improve the model, we propose to use dust indicator maps as inputs to the model. An interception index was proposed in [2] to detect dust over desert regions using visible channels of the SEVIRI instrument. The index has a range of 0 to 1, where 1 corresponds to heavy dust and 0 corresponds to clear conditions. There is ongoing work to use the measurements from AERONET stations to derive dust indicator maps based on canonical correlation analysis, which relates the thermal channels to the aerosol optical depth (AOD) derived at different wavelengths from the AERONET measurements. There is also ongoing work to analyze the time series of the

  10. Improved exposure estimation in soil screening and clean-up criteria for volatile organic chemicals.

    Science.gov (United States)

    DeVaull, George E

    2017-02-18

    Soil clean-up criteria define acceptable concentrations of organic chemical constituents for exposed humans. These criteria sum the estimated soil exposure over multiple pathways. Assumptions for ingestion, dermal contact, and dust exposure generally presume a chemical persists in surface soils at a constant concentration level for the entire exposure duration. For volatile chemicals this is an unrealistic assumption. A calculation method is presented for surficial soil criteria which include volatile depletion of chemical for these uptake pathways. The depletion estimates compare favorably with measured concentration profiles and with field measurements of soil concentration. Corresponding volatilization estimates compare favorably with measured data for a wide range of volatile and semi-volatile chemicals, including instances with and without the presence of a mixed-chemical residual phase. Selected examples show application of the revised factors in estimating screening levels for benzene in surficial soils.
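
    As a back-of-the-envelope illustration of the point (this is not the article's calculation method), a first-order volatilization loss yields a duration-averaged concentration far below the constant level presumed by conventional criteria:

    from math import exp

    def duration_averaged(c0, k, T):
        """Average of c0 * exp(-k * t) over t in [0, T]."""
        return c0 * (1.0 - exp(-k * T)) / (k * T)

    c0 = 10.0   # mg/kg, initial surface soil concentration (invented)
    k = 0.5     # 1/yr, first-order volatilization loss rate (invented)
    T = 26.0    # yr, exposure duration (a common residential default)
    print(duration_averaged(c0, k, T))   # ~0.77 mg/kg vs the constant 10 mg/kg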

  11. Improved covariance matrix estimation in spectrally inhomogeneous sea clutter with application to adaptive small boat detection.

    CSIR Research Space (South Africa)

    Herselman, PL

    2008-09-01

    Full Text Available Asymptotically optimal coherent detection techniques yield sub-clutter visibility in heavy-tailed sea clutter. The adaptive linear quadratic detector inherently assumes spectral homogeneity for the reference window of the covariance matrix estimator...

  12. Improved estimation of human cortical activity and connectivity with the multimodal integration of neuroelectric and hemodynamic data.

    Science.gov (United States)

    Babiloni, F; Mattia, D; Basilisco, A; Astolfi, L; Cincotti, F; Ding, L; Christine, K; Sweeney, J; Edgar, J C; Miller, G A; He, B

    2005-01-01

    In the last decade, the possibility of noninvasively estimating cortical activity and connectivity has been highlighted by the application of the techniques known as high-resolution EEG. These techniques include a subject's multi-compartment head model (scalp, skull, dura mater, cortex) constructed from individual magnetic resonance images, a multi-dipole source model, and regularized linear inverse source estimates of cortical current density. More recently, it has been shown that using information from the hemodynamic responses of cortical areas, as revealed by block-designed fMRI (strength of activated voxels), dramatically improves the estimates of cortical activity and connectivity. Here, we present some applications of such estimation in two sets of high-resolution EEG and fMRI data, related to motor (finger tapping) and cognitive (Stroop) tasks. We observed that the proposed technology was able to unveil the direction of the information flow between the cortical regions of interest.

  13. Improved estimation of rotor position for sensorless control of a PMSM based on a sliding mode observer

    Institute of Scientific and Technical Information of China (English)

    Wahyu Kunto Wibowo; Seok-Kwon Jeong

    2016-01-01

    This work proposes a new strategy to improve the rotor position estimation of a permanent magnet synchronous motor (PMSM) over a wide speed range. Rotor position estimation of a PMSM is performed using a sliding mode observer (SMO). An adaptive observer gain was designed based on a Lyapunov function and applied to solve the chattering problem caused by the discontinuous function of the SMO over the wide speed range. A cascade low-pass filter (LPF) with variable cut-off frequency was proposed to reduce the chattering problem and to attenuate the filtering capability of the SMO. In addition, the phase shift caused by the filter was counterbalanced by applying variable phase delay compensation over the whole speed range. Highly accurate rotor position estimates were obtained in experiments by applying the proposed estimation strategy.

  14. Improved Estimation of Subsurface Magnetic Properties using Minimum Mean-Square Error Methods

    Energy Technology Data Exchange (ETDEWEB)

    Saether, Bjoern

    1997-12-31

    This thesis proposes an inversion method for the interpretation of complicated geological susceptibility models. The method is based on constrained Minimum Mean-Square Error (MMSE) estimation. The MMSE method allows the incorporation of available prior information, i.e., the geometries of the rock bodies and their susceptibilities. Uncertainties may be included into the estimation process. The computation exploits the subtle information inherent in magnetic data sets in an optimal way in order to tune the initial susceptibility model. The MMSE method includes a statistical framework that allows the computation not only of the estimated susceptibilities, given by the magnetic measurements, but also of the associated reliabilities of these estimations. This allows the evaluation of the reliabilities in the estimates before any measurements are made, an option, which can be useful for survey planning. The MMSE method has been tested on a synthetic data set in order to compare the effects of various prior information. When more information is given as input to the estimation, the estimated models come closer to the true model, and the reliabilities in their estimates are increased. In addition, the method was evaluated using a real geological model from a North Sea oil field, based on seismic data and well information, including susceptibilities. Given that the geometrical model is correct, the observed mismatch between the forward calculated magnetic anomalies and the measured anomalies causes changes in the susceptibility model, which may show features of interesting geological significance to the explorationists. Such magnetic anomalies may be due to small fractures and faults not detectable on seismic, or local geochemical changes due to the upward migration of water or hydrocarbons. 76 refs., 42 figs., 18 tabs.
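
    For reference, the classical linear MMSE update that underlies such schemes can be written (in generic notation, not the thesis's own) as

        \hat{x} = \mu_x + C_{xy} C_{yy}^{-1} (y - \mu_y), \qquad C_{\hat{x}} = C_{xx} - C_{xy} C_{yy}^{-1} C_{yx},

    where x collects the susceptibilities, y the magnetic measurements, and the C terms the prior covariances. Notably, the posterior covariance C_{\hat{x}} does not depend on the measured values themselves, which is what permits the reliabilities of the estimates to be evaluated before any measurements are made, as described above.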

  15. Improved Inference for Respondent-Driven Sampling Data with Application to HIV Prevalence Estimation

    OpenAIRE

    Gile, Krista J.

    2010-01-01

    Respondent-driven sampling is a form of link-tracing network sampling, which is widely used to study hard-to-reach populations, often to estimate population proportions. Previous treatments of this process have used a with-replacement approximation, which we show induces bias in estimates for large sample fractions and differential network connectedness by characteristic of interest. We present a treatment of respondent-driven sampling as a successive sampling process. Unlike existing represe...

  16. High Resolution Bathymetry Estimation Improvement with Single Image Super Resolution Algorithm Super Resolution Forests

    Science.gov (United States)

    2017-01-26

    Using the SRF algorithm, the mean PSNR of the high-resolution estimated data was further increased over the previously used bicubic interpolation (by about 1 dB). Implementing the edited variance before the bicubic estimates were created caused the mean PSNR to increase the most. Figure 7: PSNR comparison (with mean scores) between bicubic interpolation and SRF.

  17. Cost Estimation of Web Projects in Context with Agile Paradigm: Improvements and Validation

    OpenAIRE

    2013-01-01

    Agile practitioners have expressed concern over their inability to correctly estimate costs associated with Agile web software development. This concern has become even more critical as development costs continue to increase. As a result, significant research attention is now directed toward gaining a better understanding of web-based projects in the context of the Agile software-development process, as well as constructing and evaluating calibrated software cost estimating tools. T...

  18. Directional Variance Adjustment: improving covariance estimates for high-dimensional portfolio optimization

    OpenAIRE

    Daniel Bartz; Kerr Hatrick; Hesse, Christian W.; Klaus-Robert Müller; Steven Lemm

    2011-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on Factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, w...

  19. Incorporating movement patterns to improve survival estimates for juvenile bull trout

    Science.gov (United States)

    Bowerman, Tracy; Budy, Phaedra

    2012-01-01

    Populations of many fish species are sensitive to changes in vital rates during early life stages, but our understanding of the factors affecting growth, survival, and movement patterns is often extremely limited for juvenile fish. These critical information gaps are particularly evident for bull trout Salvelinus confluentus, a threatened Pacific Northwest char. We combined several active and passive mark–recapture and resight techniques to assess migration rates and estimate survival for juvenile bull trout (70–170 mm total length). We evaluated the relative performance of multiple survival estimation techniques by comparing results from a common Cormack–Jolly–Seber (CJS) model, the less widely used Barker model, and a simple return rate (an index of survival). Juvenile bull trout of all sizes emigrated from their natal habitat throughout the year, and thereafter migrated up to 50 km downstream. With the CJS model, high emigration rates led to an extreme underestimate of apparent survival, a combined estimate of site fidelity and survival. In contrast, the Barker model, which allows survival and emigration to be modeled as separate parameters, produced estimates of survival that were much less biased than the return rate. Estimates of age-class-specific annual survival from the Barker model based on all available data were 0.218±0.028 (estimate±SE) for age-1 bull trout and 0.231±0.065 for age-2 bull trout. This research demonstrates the importance of incorporating movement patterns into survival analyses, and we provide one of the first field-based estimates of juvenile bull trout annual survival in relatively pristine rearing conditions. These estimates can provide a baseline for comparison with future studies in more impacted systems and will help managers develop reliable stage-structured population models to evaluate future recovery strategies.

  20. Improving the Parametric Method of Cost Estimating Relationships of Naval Ships

    Science.gov (United States)

    2014-06-01

    Department of Defense sponsored software which works together with the Automated Cost Estimating Integrated Tools (ACEIT) suite. Depending on the software...basis used by the estimator, either the Microsoft Excel add-on will be used or the ACEIT-based one. Crystal Ball, shown in Figure 10, uses a...interpolation (by about 1 dB).

  1. Facilitering som styringsredskab (Facilitation as a Steering Tool)

    OpenAIRE

    Jørgensen, Karen Overgaard

    2006-01-01

    This thesis surveys facilitation as a new tool of steering within the public sector in Denmark. It explores how facilitation is articulated and practiced among facilitators from the public, private, and voluntary sectors. Furthermore, the challenges facilitators face in using facilitation are examined. The thesis is based on the presumption that facilitation is articulated by rationalities, which influence how facilitation is practiced and performed. Also, a facilitator is seen as a performer a...

  2. Correlation-agnostic fusion for improved uncertainty estimation in multi-view geo-location from UAVs

    Science.gov (United States)

    Taylor, Clark N.; Sundlie, Paul O.

    2017-05-01

    When geo-locating ground objects from a UAV, multiple views of the same object can lead to improved geo-location accuracy. Of equal importance to the location estimate, however, is the uncertainty estimate associated with that location. Standard methods for estimating uncertainty from multiple views generally assume that each view represents an independent measurement of the geo-location. Unfortunately, this assumption is often violated due to correlation between the location estimates. This correlation may occur due to the measurements coming from the same platform, meaning that the error in attitude or location may be correlated across time; or it may be due to external sources (such as GPS) having the same error in multiple aircraft. In either case, the geo-location estimates are not truly independent, leading to optimistic estimates of the geo-location uncertainty. For distributed data fusion applications, correlation-agnostic fusion methods have been developed that can fuse data together regardless of how much correlation may be present between the two estimates. While the results are generally not as impressive as when correlation is perfectly known and taken into account, the fused uncertainty results are guaranteed to be conservative and an improvement on operating without fusion. In this paper, we apply a selection of these correlation-agnostic fusion techniques to the multi-view geo-location problem and analyze their effects on geo-location and predicted uncertainty accuracy. We find significant benefits from applying these correlation-agnostic fusion techniques, but that they vary greatly in how well they estimate their own uncertainty.
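
    The abstract does not name the specific fusion rules evaluated; the best-known correlation-agnostic rule is covariance intersection, sketched below with invented numbers. Its fused covariance is conservative for any unknown cross-correlation between the two inputs.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def covariance_intersection(x1, P1, x2, P2):
        P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
        # pick the weight that minimizes the trace of the fused covariance
        tr = lambda w: np.trace(np.linalg.inv(w * P1i + (1 - w) * P2i))
        w = minimize_scalar(tr, bounds=(0.0, 1.0), method="bounded").x
        P = np.linalg.inv(w * P1i + (1 - w) * P2i)
        x = P @ (w * P1i @ x1 + (1 - w) * P2i @ x2)
        return x, P

    # two geo-location estimates of one ground object (east/north, meters)
    x1, P1 = np.array([10.0, 5.0]), np.diag([9.0, 4.0])
    x2, P2 = np.array([12.0, 4.0]), np.diag([4.0, 16.0])
    x, P = covariance_intersection(x1, P1, x2, P2)
    print("fused location:", x, "fused variances:", np.diag(P))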

  3. Estimating habitat carrying capacity for migrating and wintering waterfowl: Considerations, pitfalls and improvements

    Science.gov (United States)

    Williams, Christopher; Dugger, Bruce D.; Brasher, Michael G.; Coluccy, John M.; Cramer, Dane M.; Eadie, John M.; Gray, Matthew J.; Hagy, Heath M.; Livolsi, Mark; McWilliams, Scott R.; Petrie, Matthew; Soulliere, Gregory J.; Tirpak, John M.; Webb, Elisabeth B.

    2014-01-01

    Population-based habitat conservation planning for migrating and wintering waterfowl in North America is carried out by habitat Joint Venture (JV) initiatives and is based on the premise that food can limit demography (i.e. food limitation hypothesis). Consequently, planners use bioenergetic models to estimate food (energy) availability and population-level energy demands at appropriate spatial and temporal scales, and translate these values into regional habitat objectives. While simple in principle, there are both empirical and theoretical challenges associated with calculating energy supply and demand including: 1) estimating food availability, 2) estimating the energy content of specific foods, 3) extrapolating site-specific estimates of food availability to landscapes for focal species, 4) applicability of estimates from a single species to other species, 5) estimating resting metabolic rate, 6) estimating cost of daily behaviours, and 7) estimating costs of thermoregulation or tissue synthesis. Most models being used are daily ration models (DRMs) whose set of simplifying assumptions are well established and whose use is widely accepted and feasible given the empirical data available to populate such models. However, DRMs do not link habitat objectives to metrics of ultimate ecological importance such as individual body condition or survival, and largely only consider food-producing habitats. Agent-based models (ABMs) provide a possible alternative for creating more biologically realistic models under some conditions; however, ABMs require different types of empirical inputs, many of which have yet to be estimated for key North American waterfowl. Decisions about how JVs can best proceed with habitat conservation would benefit from the use of sensitivity analyses that could identify the empirical and theoretical uncertainties that have the greatest influence on efforts to estimate habitat carrying capacity. Development of ABMs at

  4. Improved estimates of boreal Fire Radiative Energy using high temporal resolution data and a modified active fire detection algorithm

    Science.gov (United States)

    Barrett, Kirsten

    2016-04-01

    Reliable estimates of biomass combusted during wildfires can be obtained from satellite observations of fire radiative power (FRP). Total fire radiative energy (FRE) is typically estimated by integrating instantaneous measurements of FRP at the time of orbital satellite overpass or geostationary observation. Remotely sensed FRP products from orbital satellites are usually global in extent, requiring several thresholding and filtering operations to reduce the number of false fire detections. Some filters required for a global product may not be appropriate for fire detection in the boreal forest, resulting in errors of omission and increased data processing times. We evaluate the effect of a boreal-specific active fire detection algorithm on estimates of FRP/FRE. Boreal fires are more likely to escape detection due to lower-intensity smouldering combustion and sub-canopy fires; therefore, improvements in boreal fire detection could substantially reduce the uncertainty of emissions from biomass combustion in the region. High temporal resolution data from geostationary satellites have led to improvements in FRE estimation in tropical and temperate forests, but such a perspective is not possible for high-latitude ecosystems given the equatorial orbit of geostationary observation. The increased density of overpasses at high latitudes from polar-orbiting satellites, however, may provide adequate temporal sampling for estimating FRE.

  5. Improving Distributed Runoff Prediction in Urbanized Catchments with Remote Sensing based Estimates of Impervious Surface Cover

    Science.gov (United States)

    Chormanski, Jaroslaw; Van de Voorde, Tim; De Roeck, Tim; Batelaan, Okke; Canters, Frank

    2008-01-01

    The amount and intensity of runoff on catchment scale are strongly determined by the presence of impervious land-cover types, which are the predominant cover types in urbanized areas. This paper examines the impact of different methods for estimating impervious surface cover on the prediction of peak discharges, as determined by a fully distributed rainfall-runoff model (WetSpa), for the upper part of the Woluwe River catchment in the southeastern part of Brussels. The study shows that detailed information on the spatial distribution of impervious surfaces, as obtained from remotely sensed data, produces substantially different estimates of peak discharges than traditional approaches based on expert judgment of average imperviousness for different types of urban land use. The study also demonstrates that sub-pixel estimation of imperviousness may be a useful alternative for more expensive high-resolution mapping for rainfall-runoff modelling at catchment scale.

  6. Power outage estimation for tropical cyclones: improved accuracy with simpler models.

    Science.gov (United States)

    Nateghi, Roshanak; Guikema, Seth; Quiring, Steven M

    2014-06-01

    In this article, we discuss an outage-forecasting model that we have developed. This model uses very few input variables to estimate hurricane-induced outages prior to landfall with great predictive accuracy. We also show the results for a series of simpler models that use only publicly available data and can still estimate outages with reasonable accuracy. The intended users of these models are emergency response planners within power utilities and related government agencies. We developed our models based on the method of random forest, using data from a power distribution system serving two states in the Gulf Coast region of the United States. We also show that estimates of system reliability based on wind speed alone are not sufficient for adequately capturing the reliability of system components. We demonstrate that a multivariate approach can produce more accurate power outage predictions.

  7. DOA Estimation under Unknown Mutual Coupling and Multipath with Improved Effective Array Aperture.

    Science.gov (United States)

    Wang, Yuexian; Trinkle, Matthew; Ng, Brian W-H

    2015-12-08

    Subspace-based high-resolution direction of arrival (DOA) estimation significantly deteriorates under array manifold perturbation and rank deficiency of the covariance matrix due to mutual coupling and multipath propagation, respectively. In this correspondence, the unknown mutual coupling can be circumvented by the proposed method without any passive or active calibration process, and the DOA of the coherent signals can be accurately estimated accordingly. With a newly constructed matrix, the deficient rank can be restored, and the effective array aperture can be extended compared with conventional spatial smoothing. The proposed method achieves a good robustness and DOA estimation accuracy with unknown mutual coupling. The simulation results demonstrate the validity and efficiency of the proposed method.

  8. Highway traffic estimation of improved precision using the derivative-free nonlinear Kalman Filter

    Science.gov (United States)

    Rigatos, Gerasimos; Siano, Pierluigi; Zervos, Nikolaos; Melkikh, Alexey

    2015-12-01

    The paper proves that the PDE dynamic model of highway traffic is differentially flat, and by applying spatial discretization it shows that the model can be transformed into an equivalent linear canonical state-space form. For the latter representation of the traffic dynamics, state estimation is performed with the use of the Derivative-free nonlinear Kalman Filter. The proposed filter consists of the Kalman Filter recursion applied to the transformed state-space model of the highway traffic. Moreover, it makes use of an inverse transformation, based again on differential flatness theory, which enables estimates of the state variables of the initial nonlinear PDE model to be obtained. By avoiding approximate linearizations and the truncation of nonlinear terms from the PDE model of the traffic dynamics, the proposed filtering method outperforms, in terms of accuracy, other nonlinear estimators such as the Extended Kalman Filter. The article's theoretical findings are confirmed through simulation experiments.

  9. Fossils matter: improved estimates of divergence times in Pinus reveal older diversification.

    Science.gov (United States)

    Saladin, Bianca; Leslie, Andrew B; Wüest, Rafael O; Litsios, Glenn; Conti, Elena; Salamin, Nicolas; Zimmermann, Niklaus E

    2017-04-04

    The taxonomy of pines (genus Pinus) is widely accepted and a robust gene tree based on entire plastome sequences exists. However, there is a large discrepancy in estimated divergence times of major pine clades among existing studies, mainly due to differences in fossil placement and dating methods used. We currently lack a dated molecular phylogeny that makes use of the rich pine fossil record, and this study is the first to estimate the divergence dates of pines based on a large number of fossils (21) evenly distributed across all major clades, in combination with applying both node and tip dating methods. We present a range of molecular phylogenetic trees of Pinus generated within a Bayesian framework. We find the origin of crown Pinus is likely up to 30 Myr older (Early Cretaceous) than inferred in most previous studies (Late Cretaceous) and propose generally older divergence times for major clades within Pinus than previously thought. Our age estimates vary significantly between the different dating approaches, but the results generally agree on older divergence times. We present a revised list of 21 fossils that are suitable to use in dating or comparative analyses of pines. Reliable estimates of divergence times in pines are essential if we are to link diversification processes and functional adaptation of this genus to geological events or to changing climates. In addition to older divergence times in Pinus, our results also indicate that node age estimates in pines depend on dating approaches and the specific fossil sets used, reflecting inherent differences in various dating approaches. The sets of dated phylogenetic trees of pines presented here provide a way to account for uncertainties in age estimations when applying comparative phylogenetic methods.

  10. Performance of velocity vector estimation using an improved dynamic beamforming setup

    DEFF Research Database (Denmark)

    Munk, Peter; Jensen, Jørgen Arendt

    2001-01-01

    control of the acoustic field, based on the Pulsed Plane Wave Decomposition (PPWD), is presented. The PPWD gives an unambiguous relation between a given acoustic field and the time functions needed on an array transducer for transmission. Applying this method to the receive beamformation results in a set-up of the beamformer with different filters for each channel for each estimation depth. The method of the PPWD is illustrated by analytical expressions of the decomposed acoustic field, and these results are used for simulation. Results of velocity estimates using the new setup are given on the basis of simulated...

  11. Adaptive feedforward of estimated ripple improves the closed loop system performance significantly

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, S.; Regan, A.; Wang, Y.M.; Rohlev, T.

    1998-12-31

    The Low Energy Demonstration Accelerator (LEDA) being constructed at Los Alamos National Laboratory will serve as the prototype for the low energy section of the Accelerator Production of Tritium (APT) accelerator. This paper addresses the LLRF control system for LEDA. The authors propose an estimator of the ripple and its time derivative, and a control law based on PID control with adaptive feedforward of the estimated ripple. The control law reduces the effect of the deterministic cathode ripple caused by the high-voltage power supply and achieves tracking of desired set points.

  12. PIGS: improved estimates of identity-by-descent probabilities by probabilistic IBD graph sampling.

    Science.gov (United States)

    Park, Danny S; Baran, Yael; Hormozdiari, Farhad; Eng, Celeste; Torgerson, Dara G; Burchard, Esteban G; Zaitlen, Noah

    2015-01-01

    Identifying segments in the genome of different individuals that are identical-by-descent (IBD) is a fundamental element of genetics. IBD data is used for numerous applications including demographic inference, heritability estimation, and mapping disease loci. Simultaneous detection of IBD over multiple haplotypes has proven to be computationally difficult. To overcome this, many state of the art methods estimate the probability of IBD between each pair of haplotypes separately. While computationally efficient, these methods fail to leverage the clique structure of IBD resulting in less powerful IBD identification, especially for small IBD segments.

  13. Improving volume loss estimates of the northwestern Greenland Ice Sheet 2002-2010

    DEFF Research Database (Denmark)

    Korsgaard, Niels Jákup; Khan, Shfaqat Abbas; Kjeldsen, Kristian Kjellerup

    Studies have been carried out using various methods to estimate the Greenland ice sheet mass balance. Remote sensing techniques used to determine the ice sheet volume include airborne and satellite radar and laser methods, and measurements of ice flow of outlet glaciers use InSAR satellite radar...... does not work on sloping surfaces and is affected by radar penetration into the snow. InSAR estimates require knowledge of outlet glacier thickness. GRACE has limited spatial resolution and is affected by mass variations not just from ice changes, but also from hydrologic and ocean mass variability...

  14. Improving demand response potential of a supermarket refrigeration system: A food temperature estimation approach

    DEFF Research Database (Denmark)

    Pedersen, Rasmus; Schwensen, John; Biegel, Benjamin

    2017-01-01

    a method for estimating food temperature based on measurements of the evaporator expansion valve opening degree. This method requires no additional hardware or system modeling. We demonstrate the estimation method on a real supermarket display case, and the applicability of knowing the food temperature is shown...... through tests on a full-scale supermarket refrigeration system made available by Danfoss A/S. The conducted application test shows that feedback based on food temperature can increase the demand flexibility during a step by approx. 60% during the first 70 minutes and up to 100% over the first 150 minutes, thereby strengthening the demand response potential of supermarket refrigeration systems....

  15. Improved accuracy in the estimation of blood velocity vectors using matched filtering

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Gori, P.

    2000-01-01

    The blood velocity can be estimated by finding the shift in position of the blood scatterers between subsequent ultrasonic pulse emissions through cross-correlation of the received RF signals. Usually only the velocity component along the beam direction is found. It was shown in a previous paper...... that the complete velocity vector can be found, if the received signals are focused along lines parallel to the direction of the blood flow. A fairly broad beam is emitted in the approach, and this gives rise to a widening in the profiles of the estimated velocity. To reduce this effect, a focused ultrasound...

  16. Making It Count: Improving Estimates of the Size of Transgender and Gender Nonconforming Populations.

    Science.gov (United States)

    Deutsch, Madeline B

    2016-06-01

    An accurate estimate of the number of transgender and gender nonconforming people is essential to inform policy and funding priorities and decisions. Historical reports of population sizes of 1 in 4000 to 1 in 50,000 have been based on clinical populations and likely underestimate the size of the transgender population. More recent population-based studies have found a 10- to 100-fold increase in population size. Studies that estimate population size should be population based, employ the two-step method to allow for collection of both gender identity and sex assigned at birth, and include measures to capture the range of transgender people with nonbinary gender identities.

  17. Improving Maryland’s Offshore Wind Energy Resource Estimate Using Doppler Wind Lidar Technology to Assess Micrometeorology Controls

    Directory of Open Access Journals (Sweden)

    Pé Alexandra St.

    2016-01-01

    Compared to lidar measurements, power law extrapolation estimates and operational National Weather Service models underestimated hub-height wind speeds in the WEA. In addition, lidar observations suggest the frequent development of a low-level wind maximum (LLWM), with high turbine-layer wind shear and low turbulence intensity within a turbine's rotor layer (40 m-160 m). Results elucidate the advantages of using Doppler wind lidar technology to improve offshore wind resource estimates and to monitor how under-sampled offshore meteorological controls impact a potential turbine's ability to produce power.

  18. High Resolution Direction of Arrival (DOA Estimation Based on Improved Orthogonal Matching Pursuit (OMP Algorithm by Iterative Local Searching

    Directory of Open Access Journals (Sweden)

    Renbiao Wu

    2013-08-01

    Full Text Available DOA (Direction of Arrival) estimation is a major problem in array signal processing applications. Recently, compressive sensing algorithms, including convex relaxation algorithms and greedy algorithms, have been recognized as a kind of novel DOA estimation algorithm. However, the success of these algorithms is limited by the RIP (Restricted Isometry Property) condition or the mutual coherence of measurement matrix. In the DOA estimation problem, the columns of measurement matrix are steering vectors corresponding to different DOAs. Thus, it violates the mutual coherence condition. The situation gets worse when there are two sources from two adjacent DOAs. In this paper, an algorithm based on OMP (Orthogonal Matching Pursuit), called ILS-OMP (Iterative Local Searching-Orthogonal Matching Pursuit), is proposed to improve DOA resolution by Iterative Local Searching. Firstly, the conventional OMP algorithm is used to obtain initial estimated DOAs. Then, in each iteration, a local searching process for every estimated DOA is utilized to find a new DOA in a given DOA set to further decrease the residual. Additionally, the estimated DOAs are updated by substituting the initial DOA with the new one. The simulation results demonstrate the advantages of the proposed algorithm.

  19. High resolution direction of arrival (DOA) estimation based on improved orthogonal matching pursuit (OMP) algorithm by iterative local searching.

    Science.gov (United States)

    Wang, Wenyi; Wu, Renbiao

    2013-08-22

    DOA (Direction of Arrival) estimation is a major problem in array signal processing applications. Recently, compressive sensing algorithms, including convex relaxation algorithms and greedy algorithms, have been recognized as a kind of novel DOA estimation algorithm. However, the success of these algorithms is limited by the RIP (Restricted Isometry Property) condition or the mutual coherence of measurement matrix. In the DOA estimation problem, the columns of measurement matrix are steering vectors corresponding to different DOAs. Thus, it violates the mutual coherence condition. The situation gets worse when there are two sources from two adjacent DOAs. In this paper, an algorithm based on OMP (Orthogonal Matching Pursuit), called ILS-OMP (Iterative Local Searching-Orthogonal Matching Pursuit), is proposed to improve DOA resolution by Iterative Local Searching. Firstly, the conventional OMP algorithm is used to obtain initial estimated DOAs. Then, in each iteration, a local searching process for every estimated DOA is utilized to find a new DOA in a given DOA set to further decrease the residual. Additionally, the estimated DOAs are updated by substituting the initial DOA with the new one. The simulation results demonstrate the advantages of the proposed algorithm.
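
    A simplified sketch of the two stages, grid-based OMP followed by per-DOA local searching; the array geometry, grid, and refinement schedule are invented for the example and are not the authors' exact algorithm.

    import numpy as np

    def steer(theta_deg, m=8):
        """Steering vector of an m-element half-wavelength-spaced ULA."""
        k = np.pi * np.sin(np.deg2rad(theta_deg))
        return np.exp(1j * k * np.arange(m))

    grid = np.arange(-90.0, 90.5, 0.5)
    A = np.stack([steer(t) for t in grid], axis=1)     # measurement matrix

    rng = np.random.default_rng(4)
    y = steer(-10.3) + steer(12.7) + 0.05 * (rng.normal(size=8)
                                             + 1j * rng.normal(size=8))

    # stage 1 -- conventional OMP: greedily pick the grid DOAs most
    # correlated with the current residual (two sources assumed known)
    sel, r = [], y.copy()
    for _ in range(2):
        sel.append(int(np.argmax(np.abs(A.conj().T @ r))))
        As = A[:, sel]
        r = y - As @ np.linalg.lstsq(As, y, rcond=None)[0]
    doas = [grid[i] for i in sel]

    # stage 2 -- iterative local searching: nudge each DOA on a finer
    # grid whenever that further decreases the residual norm
    def residual(angles):
        As = np.stack([steer(t) for t in angles], axis=1)
        return np.linalg.norm(y - As @ np.linalg.lstsq(As, y, rcond=None)[0])

    for _ in range(5):
        for i in range(len(doas)):
            for cand in doas[i] + np.arange(-0.4, 0.45, 0.1):
                trial = doas.copy()
                trial[i] = cand
                if residual(trial) < residual(doas):
                    doas = trial
    print(sorted(doas))   # close to the true -10.3 and 12.7 degrees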

  20. Oblique projection pre-processing and TLS application for diagnosing rotor bar defects by improving power spectrum estimation

    Science.gov (United States)

    Bouleux, Guillaume

    2013-12-01

    The diagnosis of defects in rotating machines can be approached from several angles. In an asynchronous motor drive, physical elements rotate, so spectral analysis tools offer a natural angle for assessing the health of the motor. It is now well established that electrical or mechanical defects, which likewise appear periodically, can be retrieved by analyzing the amplitude of particular frequencies in an estimated power spectrum. When dealing with broken rotor bar detection, it is essential to accurately localize the frequencies related to the slip inside the power spectrum. The diagnosis is thereafter made by indicators given with respect to their power. In practice, at low load levels, the supply frequency generally masks the frequencies that could be exploited for indicators. We therefore propose to cancel, as far as possible, the contribution of the supply frequency in order to expose the useful, closely spaced components. The resolution must be very fine for these components to be estimated. Consequently, we use a prior-knowledge subspace-based frequency estimator, already developed in the literature, which we complete with an oblique projection coupled with a total least squares solution for estimating the power of the resulting estimated frequencies. Finally, we show by means of a real application how this contributes to improving the power spectrum estimation compared with FFT or periodogram-based analysis, and how the aforementioned power spectrum makes the rotor bar diagnosis indicator efficient.

  1. Improved OCV Model of a Li-Ion NMC Battery for Online SOC Estimation Using the Extended Kalman Filter

    Directory of Open Access Journals (Sweden)

    Ines Baccouche

    2017-05-01

    Full Text Available Accurate modeling of the nonlinear relationship between the open circuit voltage (OCV) and the state of charge (SOC) is required for adaptive SOC estimation during lithium-ion (Li-ion) battery operation. Online SOC estimation should meet several constraints, such as the computational cost, the number of parameters, and the accuracy of the model. In this paper, these challenges are considered by proposing an improved, simplified, and accurate OCV model of a nickel manganese cobalt (NMC) Li-ion battery, based on an empirical analytical characterization approach. Composed of double exponential and simple quadratic functions containing only five parameters, the proposed model accurately follows the experimental curve with a minor fitting error of 1 mV. The model is also valid over a wide temperature range and takes into account the voltage hysteresis of the OCV. Using this model in SOC estimation by the extended Kalman filter (EKF) contributes to minimizing the execution time and to reducing the SOC estimation error to only 3% compared to other existing models where the estimation error is about 5%. Experiments are also performed to prove that the proposed OCV model incorporated in the EKF estimator exhibits good reliability and precision under various loading profiles and temperatures.
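
    The abstract specifies a five-parameter model built from double exponential and quadratic terms but not its exact equation; the sketch below fits one plausible such form with scipy, on synthetic data, purely as an illustration.

    import numpy as np
    from scipy.optimize import curve_fit

    def ocv(soc, a, b, c, d, e):
        # assumed form: two exponentials plus a quadratic term (5 parameters)
        return a * np.exp(b * soc) + c * np.exp(d * soc) + e * soc**2

    soc = np.linspace(0.05, 1.0, 40)
    v_meas = ocv(soc, 3.2, 0.25, -0.3, -15.0, 0.05) \
             + 0.002 * np.random.default_rng(5).normal(size=soc.size)

    p0 = [3.2, 0.2, -0.3, -10.0, 0.0]
    popt, _ = curve_fit(ocv, soc, v_meas, p0=p0, maxfev=20000)
    print("fitted parameters:", np.round(popt, 3))
    print("max fit error (V):", np.abs(ocv(soc, *popt) - v_meas).max())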

  2. Improved estimation of MR relaxation parameters using complex-valued data

    NARCIS (Netherlands)

    Umesh Rudrapatna, S.; Bakker, C J G; Viergever, M A; van der Toorn, A; Dijkhuizen, R M

    PURPOSE: In MR image analysis, T1, T2, and T2* maps are generally calculated using magnitude MR data. Without knowledge of the underlying noise variance, parameter estimates at low signal to noise ratio (SNR) are usually biased. This leads to confounds in studies that compare parameters across

  3. Improving estimates of numbers of children with severe acute malnutrition using cohort and survey data

    DEFF Research Database (Denmark)

    Isanaka, Sheila; Boundy, Ellen O neal; Grais, Rebecca F

    2016-01-01

    Severe acute malnutrition (SAM) is reported to affect 19 million children worldwide. However, this estimate is based on prevalence data from cross-sectional surveys and can be expected to miss some children affected by an acute condition such as SAM. The burden of acute conditions is more...

  4. Improved estimation of the temporal decay function of in vivo metabolite signals

    NARCIS (Netherlands)

    Van Ormondt, D.; De Beer, R.; Van der Veen, J.W.C.; Sima, D.M.; Graveron-Demilly, D.

    2015-01-01

    MRI-scanners enable non-invasive, in vivo quantitation of metabolites in, e.g., the brain of a patient. Among other things, this requires adequate estimation of the unknown temporal decay function of the complex-valued signal emanating from the metabolites. We propose a method to render a current de

  5. Using convolutional decoding to improve time delay and phase estimation in digital communications

    Energy Technology Data Exchange (ETDEWEB)

    Ormesher, Richard C. (Albuquerque, NM); Mason, John J. (Albuquerque, NM)

    2010-01-26

    The time delay and/or phase of a communication signal received by a digital communication receiver can be estimated based on a convolutional decoding operation that the communication receiver performs on the received communication signal. If the original transmitted communication signal has been spread according to a spreading operation, a corresponding despreading operation can be integrated into the convolutional decoding operation.

  6. Improved Accuracy of Nonlinear Parameter Estimation with LAV and Interval Arithmetic Methods

    Directory of Open Access Journals (Sweden)

    Humberto Muñoz

    2009-06-01

    The reliable solution of nonlinear parameter estimation problems is an important computational problem in many areas of science and engineering, including such applications as real time optimization. Its goal is to estimate accurate model parameters that provide the best fit to measured data, despite small-scale noise in the data or occasional large-scale measurement errors (outliers). In general, the estimation techniques are based on some kind of least squares or maximum likelihood criterion, and these require the solution of a nonlinear and non-convex optimization problem. Classical solution methods for these problems are local methods, and may not be reliable for finding the global optimum, with no guarantee the best model parameters have been found. Interval arithmetic can be used to compute completely and reliably the global optimum for the nonlinear parameter estimation problem. Finally, experimental results will compare the least squares (l2) and the least absolute value (l1) estimates using interval arithmetic in a chemical engineering application.
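
    The interval-arithmetic global optimization itself is beyond a short sketch, but the l2-versus-l1 contrast the abstract draws is easy to reproduce; the toy straight-line fit below, with one injected outlier, is entirely hypothetical:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        x = np.linspace(0.0, 10.0, 50)
        y = 2.0 * x + 1.0 + rng.normal(0.0, 0.2, x.size)
        y[10] += 15.0                                   # one large measurement outlier

        def l2_loss(p):                                 # least squares (l2)
            return np.sum((y - (p[0] * x + p[1])) ** 2)

        def l1_loss(p):                                 # least absolute value (l1)
            return np.sum(np.abs(y - (p[0] * x + p[1])))

        p_l2 = minimize(l2_loss, x0=[1.0, 0.0]).x
        p_l1 = minimize(l1_loss, x0=[1.0, 0.0], method="Nelder-Mead").x
        # The l1 fit stays near the true (2, 1); the l2 fit is pulled by the outlier.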

  7. An improved parameter estimation scheme for image modification detection based on DCT coefficient analysis.

    Science.gov (United States)

    Yu, Liyang; Han, Qi; Niu, Xiamu; Yiu, S M; Fang, Junbin; Zhang, Ye

    2016-02-01

    Most of the existing image modification detection methods which are based on DCT coefficient analysis model the distribution of DCT coefficients as a mixture of a modified and an unchanged component. To separate the two components, two parameters, the primary quantization step, Q1, and the portion of the modified region, α, have to be estimated, and more accurate estimates of α and Q1 lead to better detection and localization results. Existing methods estimate α and Q1 in a completely blind manner, without considering the characteristics of the mixture model and the constraints to which α should conform. In this paper, we propose a more effective scheme for estimating α and Q1, based on the observations that the curves on the surface of the likelihood function corresponding to the mixture model are largely smooth, and that α can take values only in a discrete set. We conduct extensive experiments to evaluate the proposed method, and the experimental results confirm its efficacy.
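
    A minimal sketch of the estimation idea, assuming a toy two-component likelihood (a stand-in for the paper's mixture model): because α is restricted to a discrete set and the likelihood surface is smooth, (Q1, α) can be found by an exhaustive grid search:

        import numpy as np
        from scipy import stats

        def mixture_loglik(c, q1, alpha, sigma=1.0, b=5.0):
            # Toy mixture: modified coefficients cluster near multiples of the
            # primary step q1 (Gaussian residual); unchanged ones are Laplacian.
            resid = c - q1 * np.round(c / q1)
            f_mod = stats.norm.pdf(resid, scale=sigma)
            f_unc = stats.laplace.pdf(c, scale=b)
            return np.sum(np.log(alpha * f_mod + (1.0 - alpha) * f_unc))

        rng = np.random.default_rng(1)
        coeffs = np.concatenate([
            8.0 * np.round(rng.normal(0, 3, 700)) + rng.normal(0, 0.5, 700),
            rng.laplace(scale=5.0, size=300)])

        alphas = np.arange(0.1, 1.0, 0.1)       # alpha restricted to a discrete set
        q1s = np.arange(5, 17)                  # candidate primary quantization steps
        q1_hat, a_hat = max(((q, a) for q in q1s for a in alphas),
                            key=lambda qa: mixture_loglik(coeffs, *qa))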

  8. Nuclear Weapons Sustainment: Improvements Made to Budget Estimates, but Opportunities Exist to Further Enhance Transparency

    Science.gov (United States)

    2015-07-01

    estimates in future joint reports. CIO officials told us that for consistency within the department and for external consumers of funding...

  9. Improving the Estimation of Moderating Effects by Using Computer-Administered Questionnaires.

    Science.gov (United States)

    Aguinis, Herman; And Others

    1996-01-01

    A program designed to administer questionnaires on IBM and IBM-compatible personal computers is described. The program prompts subjects to indicate responses by clicking on a graphic line segment or entering a numeric value. The program enhances accuracy in estimating moderating effects by overcoming transcriptional errors and scale coarseness.

  10. Improved estimation of hydraulic conductivity by combining stochastically simulated hydrofacies with geophysical data.

    Science.gov (United States)

    Zhu, Lin; Gong, Huili; Chen, Yun; Li, Xiaojuan; Chang, Xiang; Cui, Yijiao

    2016-03-01

    Hydraulic conductivity is a major parameter affecting the output accuracy of groundwater flow and transport models. The most commonly used semi-empirical formula for estimating conductivity is the Kozeny-Carman equation. However, this method alone does not work well with heterogeneous strata. Two important parameters, grain size and porosity, often show spatial variations at different scales. This study proposes a method for estimating conductivity distributions by combining a stochastic hydrofacies model with geophysical methods. A Markov chain model with a transition probability matrix was adopted to reconstruct hydrofacies structures for deriving spatial deposit information. The geophysical and hydro-chemical data were used to estimate the porosity distribution through Archie's law. Results show that the stochastically simulated hydrofacies model reflects the sedimentary features, with an average model accuracy of 78% in comparison with borehole log data in the Chaobai alluvial fan. The estimated conductivity is reasonable and of the same order of magnitude as the outcomes of the pumping tests. The conductivity distribution is consistent with the sedimentary distributions. This study provides more reliable spatial distributions of the hydraulic parameters for further numerical modeling.
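
    For reference, a sketch of the two building blocks named above, with illustrative inputs (the a = 1, m = 2 Archie parameters, the resistivity values, and the d10-based Kozeny-Carman form are assumptions, not values from the study):

        import numpy as np

        def porosity_from_archie(rho_bulk, rho_water, a=1.0, m=2.0):
            # Archie's law (clean-sand form): rho_bulk = a * rho_water * phi**(-m)
            return (a * rho_water / rho_bulk) ** (1.0 / m)

        def kozeny_carman_K(d10, phi, g=9.81, nu=1.0e-6):
            # Hydraulic conductivity K [m/s], common d10-based Kozeny-Carman form:
            # K = (g / nu) * (d10^2 / 180) * phi^3 / (1 - phi)^2
            return (g / nu) * (d10**2 / 180.0) * phi**3 / (1.0 - phi) ** 2

        phi = porosity_from_archie(rho_bulk=60.0, rho_water=20.0)  # ohm-m, illustrative
        K = kozeny_carman_K(d10=2.0e-4, phi=phi)                   # 0.2 mm grain size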

  11. Improved sampling for airborne surveys to estimate wildlife population parameters in the African Savannah

    NARCIS (Netherlands)

    Khaemba, W.; Stein, A.

    2002-01-01

    Parameter estimates, obtained from airborne surveys of wildlife populations, often have large bias and large standard errors. Sampling error is one of the major causes of this imprecision and the occurrence of many animals in herds violates the common assumptions in traditional sampling designs like

  12. An Error-Reduction Algorithm to Improve Lidar Turbulence Estimates for Wind Energy

    Energy Technology Data Exchange (ETDEWEB)

    Newman, Jennifer F.; Clifton, Andrew

    2016-08-01

    Currently, cup anemometers on meteorological (met) towers are used to measure wind speeds and turbulence intensity to make decisions about wind turbine class and site suitability. However, as modern turbine hub heights increase and wind energy expands to complex and remote sites, it becomes more difficult and costly to install met towers at potential sites. As a result, remote sensing devices (e.g., lidars) are now commonly used by wind farm managers and researchers to estimate the flow field at heights spanned by a turbine. While lidars can accurately estimate mean wind speeds and wind directions, there is still a large amount of uncertainty surrounding the measurement of turbulence with lidars. This uncertainty in lidar turbulence measurements is one of the key roadblocks that must be overcome in order to replace met towers with lidars for wind energy applications. In this talk, a model for reducing errors in lidar turbulence estimates is presented. Techniques for reducing errors from instrument noise, volume averaging, and variance contamination are combined in the model to produce a corrected value of the turbulence intensity (TI), a commonly used parameter in wind energy. In the next step of the model, machine learning techniques are used to further decrease the error in lidar TI estimates.
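
    A sketch of the variance-budget step of such a correction, with hypothetical numbers; the actual model also handles variance contamination and applies a machine-learning residual correction:

        import numpy as np

        def corrected_ti(var_measured, var_noise, var_volume_loss, mean_speed):
            # Remove instrument-noise variance, add back variance lost to
            # volume averaging, then form TI = sigma_u / U.
            var_corr = var_measured - var_noise + var_volume_loss
            return np.sqrt(max(var_corr, 0.0)) / mean_speed

        ti = corrected_ti(var_measured=1.10, var_noise=0.15,
                          var_volume_loss=0.20, mean_speed=8.0)   # hypothetical values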

  13. A Multidimensional Item Response Modeling Approach for Improving Subscale Proficiency Estimation and Classification

    Science.gov (United States)

    Yao, Lihua; Boughton, Keith A.

    2007-01-01

    Several approaches to reporting subscale scores can be found in the literature. This research explores a multidimensional compensatory dichotomous and polytomous item response theory modeling approach for subscale score proficiency estimation, leading toward a more diagnostic solution. It also develops and explores the recovery of a Markov chain…

  14. Electron affinity of p-quinones. Improved method of electrochemical estimation

    Science.gov (United States)

    Jaworski, Jan S.

    1986-06-01

    Electron affinities of four p-quinones are estimated from enthalpy changes obtained on the basis of measured formal potentials and reaction entropies in the electroreduction process. A linear correlation between electron affinities of p-quinones and parent hydrocarbons is found.

  15. Improving Indonesian peatland C stock estimates using ground penetrating radar (GPR) and electrical resistivity imaging (ERI)

    Science.gov (United States)

    Terry, N.; Comas, X.; Slater, L. D.; Warren, M.; Kolka, R. K.; Kristijono, A.; Sudiana, N.; Nurjaman, D.; Darusman, T.

    2014-12-01

    Tropical peatlands sequester an estimated 15% of the carbon pool from peatlands worldwide. Indonesian peatlands account for approximately 65% of all tropical peat, and are believed to be the largest global source of carbon dioxide emissions to the atmosphere from degrading peat. However, there is great uncertainty in these estimates due to insufficient data regarding the thickness of organic peat soils and their carbon content. Meanwhile, Indonesian peatlands are threatened by heightening pressure to drain and develop. Indirect geophysical methods have garnered interest for their potential to non-invasively estimate peat depth and gas content in boreal peatlands. Drawing from these techniques, we employed ground penetrating radar (GPR) and electrical resistivity imaging (ERI) in tandem with direct methods (core sampling) to evaluate the potential of these methods for tropical peatland mapping at 2 distinct study sites on West Kalimantan (Indonesia). We find that: [1] West Kalimantan peatland thicknesses estimated from GPR and ERI in intermediate/shallow peat can vary substantially over short distances (for example, > 2% over less than 0.02° surface topography gradient), [2] despite having less vertical resolution, ERI is able to better resolve peatland thickness in deep peat, and [3] GPR provides useful data regarding peat matrix attributes (such as the presence of wood layers). These results indicate GPR and ERI could help reduce uncertainty in carbon stocks and aid in responsible land management decisions in Indonesia.

  16. Improvement of theoretical and methodical approaches to the estimation of regional marketing attractiveness

    Directory of Open Access Journals (Sweden)

    O.A. Bilovodska

    2011-01-01

    Modern approaches to understanding a territory as a geo-good are analysed, and the differences and common features of the concepts "competitiveness of territory" and "marketing attractiveness of territory" are defined. Methodical approaches to the analysis of regional marketing attractiveness are offered, and a corresponding assessment of the Sumy, Poltava and Chernigov regions is carried out.

  17. Improved vertical streambed flux estimation using multiple diurnal temperature methods in series

    Science.gov (United States)

    Irvine, Dylan; Briggs, Martin; Cartwright, Ian; Scruggs, Courtney; Lautz, Laura K.

    2017-01-01

    Analytical solutions that use diurnal temperature signals to estimate vertical fluxes between groundwater and surface water based on either amplitude ratios (Ar) or phase shifts (Δϕ) produce results that rarely agree. Analytical solutions that simultaneously utilize Ar and Δϕ within a single solution have more recently been derived, decreasing uncertainty in flux estimates in some applications. Benefits of combined (ArΔϕ) methods also include that thermal diffusivity and sensor spacing can be calculated. However, poor identification of either Ar or Δϕ from raw temperature signals can lead to erratic parameter estimates from ArΔϕ methods. An add-on program for VFLUX 2 is presented to address this issue. Using thermal diffusivity selected from an ArΔϕ method during a reliable time period, fluxes are recalculated using an Ar method. This approach maximizes the benefits of the Ar and ArΔϕ methods. Additionally, sensor spacing calculations can be used to identify periods with unreliable flux estimates, or to assess streambed scour. Using synthetic and field examples, the use of these solutions in series was particularly useful for gaining conditions where fluxes exceeded 1 m/d.
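
    A sketch of the amplitude-ratio (Ar) step, assuming the widely used Hatch-type analytical solution for diurnal signal damping; the sensor spacing, diffusivity, and observed ratio are hypothetical:

        import numpy as np
        from scipy.optimize import brentq

        def amplitude_ratio(v, z, kappa_e, period):
            # Hatch-type analytical damping of a diurnal signal between two
            # sensors z [m] apart; v is the thermal front velocity [m/s]
            # (positive downward), kappa_e the effective thermal diffusivity.
            alpha = np.sqrt(v**4 + (8.0 * np.pi * kappa_e / period) ** 2)
            return np.exp(z / (2.0 * kappa_e) * (v - np.sqrt((alpha + v**2) / 2.0)))

        z, kappa_e, period = 0.10, 7.0e-7, 86400.0     # hypothetical values
        ar_obs = 0.45                                  # observed amplitude ratio
        v_t = brentq(lambda v: amplitude_ratio(v, z, kappa_e, period) - ar_obs,
                     -1e-4, 1e-4)
        # Multiplying v_t by the ratio of bulk sediment to water volumetric
        # heat capacity (~0.7 here, assumed) converts it to a Darcy flux.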

  18. A hierarchical Bayesian GEV model for improving local and regional flood quantile estimates

    Science.gov (United States)

    Lima, Carlos H. R.; Lall, Upmanu; Troy, Tara; Devineni, Naresh

    2016-10-01

    We estimate local and regional Generalized Extreme Value (GEV) distribution parameters for flood frequency analysis in a multilevel, hierarchical Bayesian framework, to explicitly model and reduce uncertainties. As prior information for the model, we assume that the GEV location and scale parameters for each site come from independent log-normal distributions, whose mean parameter scales with the drainage area. From empirical and theoretical arguments, the shape parameter for each site is shrunk towards a common mean. Non-informative prior distributions are assumed for the hyperparameters, and the MCMC method is used to sample from the joint posterior distribution. The model is tested using annual maximum series from 20 streamflow gauges located in an 83,000 km2 flood-prone basin in Southeast Brazil. The results show a significant reduction in the uncertainty of flood quantile estimates relative to the traditional GEV model, particularly for sites with shorter records. For return periods within the range of the data (around 50 years), the Bayesian credible intervals for the flood quantiles tend to be narrower than the classical confidence limits based on the delta method. As the return period increases beyond the range of the data, the confidence limits from the delta method become unreliable and the Bayesian credible intervals provide a way to estimate satisfactory confidence bands for the flood quantiles considering parameter uncertainties and regional information. In order to evaluate the applicability of the proposed hierarchical Bayesian model for regional flood frequency analysis, we estimate flood quantiles for three randomly chosen out-of-sample sites and compare them with classical estimates using the index flood method. The posterior distributions of the scaling law coefficients are used to define the predictive distributions of the GEV location and scale parameters for the out-of-sample sites given only their drainage areas and the posterior distribution of the
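
    The full MCMC model is beyond a short sketch, but the core shrinkage idea can be illustrated with a crude empirical-Bayes stand-in: fit each site by maximum likelihood, then pull the shape parameters toward a regional mean using assumed prior and sampling variances (note that SciPy's c equals minus the usual GEV shape):

        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(1)
        # Synthetic annual-maximum series for three sites with different record lengths
        sites = [genextreme.rvs(c=-0.1, loc=100.0 + 10 * i, scale=30.0,
                                size=n, random_state=rng)
                 for i, n in enumerate([18, 25, 60])]

        fits = [genextreme.fit(ams) for ams in sites]      # (c, loc, scale) per site
        shapes = np.array([f[0] for f in fits])

        tau2 = 0.01                                        # assumed prior variance of shape
        se2 = np.array([0.5 / len(ams) for ams in sites])  # rough sampling variance ~ 1/n
        shape_reg = shapes.mean()                          # regional (pooled) shape
        shapes_shrunk = (shapes / se2 + shape_reg / tau2) / (1.0 / se2 + 1.0 / tau2)

        # 100-year quantile per site using the shrunken shape parameter
        q100 = [genextreme.ppf(0.99, c, loc=f[1], scale=f[2])
                for c, f in zip(shapes_shrunk, fits)]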

  1. Incorporating 16S Gene Copy Number Information Improves Estimates of Microbial Diversity and Abundance

    Science.gov (United States)

    Kembel, Steven W.; Wu, Martin; Eisen, Jonathan A.; Green, Jessica L.

    2012-01-01

    The abundance of different SSU rRNA (“16S”) gene sequences in environmental samples is widely used in studies of microbial ecology as a measure of microbial community structure and diversity. However, the genomic copy number of the 16S gene varies greatly – from one in many species to up to 15 in some bacteria and to hundreds in some microbial eukaryotes. As a result of this variation the relative abundance of 16S genes in environmental samples can be attributed both to variation in the relative abundance of different organisms, and to variation in genomic 16S copy number among those organisms. Despite this fact, many studies assume that the abundance of 16S gene sequences is a surrogate measure of the relative abundance of the organisms containing those sequences. Here we present a method that uses data on sequences and genomic copy number of 16S genes along with phylogenetic placement and ancestral state estimation to estimate organismal abundances from environmental DNA sequence data. We use theory and simulations to demonstrate that 16S genomic copy number can be accurately estimated from the short reads typically obtained from high-throughput environmental sequencing of the 16S gene, and that organismal abundances in microbial communities are more strongly correlated with estimated abundances obtained from our method than with gene abundances. We re-analyze several published empirical data sets and demonstrate that the use of gene abundance versus estimated organismal abundance can lead to different inferences about community diversity and structure and the identity of the dominant taxa in microbial communities. Our approach will allow microbial ecologists to make more accurate inferences about microbial diversity and abundance based on 16S sequence data. PMID:23133348
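
    The core arithmetic of the correction is division by copy number followed by renormalization; the counts and copy numbers below are illustrative (the paper estimates copy numbers by phylogenetic placement and ancestral-state reconstruction):

        import numpy as np

        reads = np.array([900.0, 500.0, 100.0])    # 16S reads per taxon in a sample
        copy_num = np.array([9.0, 5.0, 1.0])       # estimated 16S copies per genome

        gene_abund = reads / reads.sum()           # naive, copy-number-ignorant
        organisms = reads / copy_num
        org_abund = organisms / organisms.sum()    # corrected: equal (1/3 each)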

  2. Improved estimates of the European winter wind storm climate and the risk of reinsurance loss

    Science.gov (United States)

    Della-Marta, P. M.; Liniger, M. A.; Appenzeller, C.; Bresch, D. N.; Koellner-Heck, P.; Muccione, V.

    2009-04-01

    Current estimates of the European wind storm climate and their associated losses are often hampered by relatively short, coarse-resolution, or inhomogeneous datasets. This study estimates the European wind storm climate using dynamical seasonal-to-decadal (s2d) climate forecasts from the European Centre for Medium-Range Weather Forecasts (ECMWF). The current s2d models have limited predictive skill for European storminess, making the ensemble forecasts ergodic samples on which to build pseudo-climates of 310 to 396 years in length. Extended winter (ONDJFMA) wind storm climatologies are created using a scalar extreme wind index considering only data above a high threshold. The method identifies between 2331 and 2471 wind storms using s2d data and 380 wind storms in ERA-40. Classical extreme value analysis (EVA) techniques are used to determine the wind storm climatologies. We suggest that the ERA-40 climatology, by virtue of its length, limiting form, and the fitting method, overestimates the return period (RP) of wind storms with RPs between 10 and 300 years and underestimates the return period of wind storms with RPs greater than 300 years. A 50-year event in ERA-40 is approximately a 33-year event using s2d. The largest influence on ERA-40 RP uncertainties is the sampling variability associated with only 45 seasons of storms. The climatologies are linked to the Swiss Reinsurance Company (Swiss Re) European wind storm loss model. New estimates of the risk of loss are compared with those from historical and stochastically generated wind storm fields used by Swiss Re. The resulting loss-frequency relationship matches well with the two independently modelled estimates and clearly demonstrates the added value of using alternative data and methods, as proposed in this study, to estimate the RP of high-RP losses.

  3. Multinomial N-mixture models improve the applicability of electrofishing for developing population estimates of stream-dwelling Smallmouth Bass

    Science.gov (United States)

    Mollenhauer, Robert; Brewer, Shannon K.

    2017-01-01

    Failure to account for variable detection across survey conditions constrains progressive stream ecology and can lead to erroneous stream fish management and conservation decisions. In addition to variable detection’s confounding long-term stream fish population trends, reliable abundance estimates across a wide range of survey conditions are fundamental to establishing species–environment relationships. Despite major advancements in accounting for variable detection when surveying animal populations, these approaches remain largely ignored by stream fish scientists, and CPUE remains the most common metric used by researchers and managers. One notable advancement for addressing the challenges of variable detection is the multinomial N-mixture model. Multinomial N-mixture models use a flexible hierarchical framework to model the detection process across sites as a function of covariates; they also accommodate common fisheries survey methods, such as removal and capture–recapture. Effective monitoring of stream-dwelling Smallmouth Bass Micropterus dolomieu populations has long been challenging; therefore, our objective was to examine the use of multinomial N-mixture models to improve the applicability of electrofishing for estimating absolute abundance. We sampled Smallmouth Bass populations by using tow-barge electrofishing across a range of environmental conditions in streams of the Ozark Highlands ecoregion. Using an information-theoretic approach, we identified effort, water clarity, wetted channel width, and water depth as covariates that were related to variable Smallmouth Bass electrofishing detection. Smallmouth Bass abundance estimates derived from our top model consistently agreed with baseline estimates obtained via snorkel surveys. Additionally, confidence intervals from the multinomial N-mixture models were consistently more precise than those of unbiased Petersen capture–recapture estimates due to the dependency among data sets in the
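
    A minimal sketch of a multinomial N-mixture likelihood for a single site with a three-pass removal design and Poisson abundance, omitting the detection covariates (effort, clarity, width, depth) the study models; the data and starting values are toy:

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import gammaln
        from scipy.stats import poisson

        def nll(params, y, max_n=400):
            # Negative log-likelihood: y are removal catches on 3 passes,
            # N ~ Poisson(lam) is latent abundance, p the per-pass capture
            # probability; the latent N is marginalized by direct summation.
            log_lam, logit_p = params
            lam = np.exp(log_lam)
            p = 1.0 / (1.0 + np.exp(-logit_p))
            pi = np.array([p, (1 - p) * p, (1 - p) ** 2 * p])   # pass probabilities
            n = y.sum()
            N = np.arange(n, max_n)
            ll_N = (gammaln(N + 1) - gammaln(y + 1).sum() - gammaln(N - n + 1)
                    + (y * np.log(pi)).sum() + (N - n) * np.log(1.0 - pi.sum())
                    + poisson.logpmf(N, lam))
            m = ll_N.max()
            return -(m + np.log(np.exp(ll_N - m).sum()))

        y = np.array([38, 24, 11])                    # toy three-pass removal data
        fit = minimize(nll, x0=[np.log(100.0), 0.0], args=(y,), method="Nelder-Mead")
        lam_hat = np.exp(fit.x[0])                    # estimated abundance
        p_hat = 1.0 / (1.0 + np.exp(-fit.x[1]))       # estimated capture probability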

  4. A conceptual model to estimate cost effectiveness of the indoor environment improvements

    Energy Technology Data Exchange (ETDEWEB)

    Seppanen, Olli; Fisk, William J.

    2003-06-01

    Macroeconomic analyses indicate a high cost to society of a deteriorated indoor climate. The few example calculations performed to date indicate that measures taken to improve IEQ are highly cost-effective when health and productivity benefits are considered. We believe that cost-benefit analyses of building designs and operations should routinely incorporate health and productivity impacts. As an initial step, we developed a conceptual model that shows the links between improvements in IEQ and the financial gains from reductions in medical care and sick leave, improved work performance, lower employee turnover, and reduced maintenance due to fewer complaints.

  5. Study and Tests of Improved Rain Estimates from the TRMM Precipitation Radar.

    Science.gov (United States)

    Ferreira, Franck; Amayenc, Paul; Oury, Stéphane; Testud, Jacques

    2001-11-01

    Rain rate (R) estimation from the 2A-25 profiling algorithm of the Tropical Rainfall Measuring Mission (TRMM) precipitation radar (PR) is analyzed in two ways. Standard results from the operating version-5 algorithm are compared with those from the previous version 4. Also, various adjustments of the involved rain relationships in version 4 are explored, leading to the proposal of two alternatives to the standard rain rate (Rstd-V4). The first one (RN0) is based on N0-scaled relations exploiting the concept of normalized gamma-shaped drop size distributions; the second one (RkR) relies on using a constant R-k relation instead of a constant R-Z relation as in the standard, where Z is reflectivity and k is the attenuation coefficient. Error analysis points out a lower sensitivity of the alternative estimates to errors in radar calibration, or initial relations, than the standard. Results from a set of PR data, over ocean and land, show that the version-4 alternatives, and the version-5 standard (Rstd-V5), produce more rain than the version-4 standard, which may correct for some reported underestimation. These approaches are tested via point-to-point comparisons of 3D PR-derived Z and R fields (versions 4 and 5) with "reference" fields derived from airborne dual-beam radar on board a National Oceanic and Atmospheric Administration P3-42 aircraft in Hurricanes Bonnie and Brett, for good cases of TRMM overpasses over the ocean. In the comparison domains, Bonnie is dominated by stratiform rain, and Brett includes convective and stratiform rain. In stratiform rain, the mean difference in Z, accounting for the different frequencies and scanning geometries of both radars, lies within the uncertainty margin of residual errors in the radar calibrations. Also, the PR mean rain-rate estimates, RkR and Rstd-V5, agree fairly well with the P3 estimate, RP3, whereas Rstd-V4 and RN0 respectively underestimate and overestimate RP3. In convective rain (Brett case), the PR estimates of Z and R largely exceed

  6. An Improved Azimuth Angle Estimation Method with a Single Acoustic Vector Sensor Based on an Active Sonar Detection System.

    Science.gov (United States)

    Zhao, Anbang; Ma, Lin; Ma, Xuefei; Hui, Juan

    2017-02-20

    In this paper, an improved azimuth angle estimation method with a single acoustic vector sensor (AVS) is proposed based on matched filtering theory. The proposed method is mainly applied in an active sonar detection system. Building on the conventional passive method based on complex acoustic intensity measurement, the mathematical and physical model of the proposed method is described in detail. The computer simulation and lake experiment results indicate that this method can realize azimuth angle estimation with high precision using only a single AVS. Compared with the conventional method, the proposed method achieves better estimation performance. Moreover, the proposed method does not require complex operations in the frequency domain and achieves a reduction in computational complexity.
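
    A sketch of the underlying intensity-based bearing estimate; in the active-sonar setting the channels would first be matched-filtered against the transmitted pulse, which is omitted here:

        import numpy as np

        fs, f0, theta_true = 48000, 2000.0, np.deg2rad(40.0)
        t = np.arange(0, 0.1, 1.0 / fs)
        p = np.cos(2 * np.pi * f0 * t)               # pressure channel
        vx = np.cos(theta_true) * p                  # particle velocity, x
        vy = np.sin(theta_true) * p                  # particle velocity, y

        Ix, Iy = np.mean(p * vx), np.mean(p * vy)    # time-averaged acoustic intensity
        azimuth_deg = np.degrees(np.arctan2(Iy, Ix)) # -> ~40 degrees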

  7. Fetal weight estimation for prediction of fetal macrosomia: does additional clinical and demographic data using pattern recognition algorithm improve detection?

    Science.gov (United States)

    Degani, Shimon; Peleg, Dori; Bahous, Karina; Leibovitz, Zvi; Shapiro, Israel; Ohel, Gonen

    2008-01-01

    Objective: The aim of this study was to test whether pattern recognition classifiers with multiple clinical and sonographic variables could improve ultrasound prediction of fetal macrosomia over prediction that relies on the commonly used formulas for the sonographic estimation of fetal weight. Methods: The SVM algorithm was used for binary classification between two categories of weight estimation: >4000 g and ≤4000 g. Results: The SVM algorithm predicted macrosomia with a sensitivity of 81%, specificity of 73%, positive predictive value of 81% and negative predictive value of 73%. The comparative figures according to the combined criteria based on two commonly used formulas generated from regression analysis were 88.1%, 34%, 65.8%, 66.7%. Conclusions: The SVM algorithm provides a comparable prediction of LGA fetuses to other commonly used formulas generated from regression analysis. The better specificity and better positive predictive value suggest potential value for this method, and further accumulation of data may improve the reliability of this approach. PMID:22439018
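
    A sketch of such a pattern-recognition classifier using scikit-learn; the feature set, synthetic data, and labeling below are assumptions for illustration, not the authors' protocol:

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(7)
        n = 500
        X = np.column_stack([rng.normal(3600, 450, n),    # sonographic EFW [g]
                             rng.normal(28, 5, n),        # maternal BMI
                             rng.normal(39, 1.2, n),      # gestational age [weeks]
                             rng.integers(0, 4, n)])      # parity
        birth_wt = X[:, 0] + rng.normal(0, 250, n)        # noisy true-weight link
        y = (birth_wt > 4000).astype(int)                 # macrosomia label

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        clf.fit(X[:400], y[:400])
        pred = clf.predict(X[400:])
        sensitivity = (pred[y[400:] == 1] == 1).mean()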

  8. Using the past to constrain the future: how the palaeorecord can improve estimates of global warming

    CERN Document Server

    Edwards, Tamsin L; Harrison, Sandy P; 10.1177/0309133307083295

    2012-01-01

    Climate sensitivity is defined as the change in global mean equilibrium temperature after a doubling of atmospheric CO2 concentration and provides a simple measure of global warming. An early estimate of climate sensitivity, 1.5-4.5°C, has changed little subsequently, including in the latest assessment by the Intergovernmental Panel on Climate Change. The persistence of such large uncertainties in this simple measure casts doubt on our understanding of the mechanisms of climate change and our ability to predict the response of the climate system to future perturbations. This has motivated continued attempts to constrain the range with climate data, alone or in conjunction with models. The majority of studies use data from the instrumental period (post-1850) but recent work has made use of information about the large climate changes experienced in the geological past. In this review, we first outline approaches that estimate climate sensitivity using instrumental climate observations and then summarise attem...

  9. Improved formula for estimating added resistance of ships in engineering applications

    Science.gov (United States)

    Liu, Shukui; Shang, Baoguo; Papanikolaou, Apostolos; Bolbot, Victor

    2016-12-01

    The authors previously introduced a semi-empirical formula that enabled fast estimation of the added resistance of ships in head waves, and in this study the formula is further refined for easy use in engineering applications. It includes an alternative ship draft correction coefficient, which better accounts for the wave pressure decay with ship's draft. In addition, it only uses the speed and main characteristics of the ship and wave environment as input, and has been simplified to the extent that it can be readily processed using a pocket calculator. Extensive validations are conducted for different ship types at low to moderate speeds in various typical irregular sea conditions, and encouraging results are obtained. This relevant and topical research lies within the framework of the recent IMO MEPC.232(65) (2013) EEDI guidelines for estimating the minimum powering of ships in adverse weather conditions, which specify for the use of simple methods in current Level 2 assessment within engineering applications.

  10. The Parameter Estimation and Stability Improvement of the Brushless DC Motor

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Cherl Jin [Halla University (Korea, Republic of); Im, Tae Bin [Korea Electronics Technology Institute (Korea, Republic of)

    1999-03-01

    Generally, a digital controller has many advantages, such as high precision, robustness to electrical noise, capability of flexible programming and fast response to load variation. In this study, we established a proper mathematical equivalent model of a brushless DC (BLDC) motor and estimated the motor parameters by means of back-EMF measurement, using a step input to the controlled target BLDC motor. The validity of the proposed estimation method is confirmed by the step-response test results. We also designed a reasonable digital controller using the root locus method, applied to the open-loop transfer function of the BLDC motor with Hall sensor, and determined the control gains for variable speed control. A revised Ziegler-Nichols tuning method is applied to establish the proper digital gains, and the system stability is verified by frequency-domain analysis with a Bode plot and by experimentation. (author). 8 refs., 18 figs., 1 tab.
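
    For orientation, the classical (continuous-time) Ziegler-Nichols rules from the ultimate gain Ku and ultimate period Tu; the paper's revised digital variant adjusts these, so treat the constants as the textbook starting point only:

        def ziegler_nichols_pid(Ku, Tu):
            # Classical PID rules: Kp = 0.6*Ku, Ti = Tu/2, Td = Tu/8
            Kp = 0.6 * Ku
            Ki = Kp / (0.5 * Tu)      # integral gain
            Kd = Kp * 0.125 * Tu      # derivative gain
            return Kp, Ki, Kd

        Kp, Ki, Kd = ziegler_nichols_pid(Ku=4.8, Tu=0.05)   # hypothetical Ku, Tu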

  11. Improving Range Estimation of a 3-Dimensional Flash Ladar via Blind Deconvolution

    Science.gov (United States)

    2010-09-01

    … = \sum_{k=1}^{K} \left[ -\frac{(d_k - A\,p_k(R) - B)^2}{2\sigma^2} + \ln\left( \frac{1}{\sigma\sqrt{2\pi}} \right) \right]   (4.4) Because the range and amplitude are both unknown parameters, the estimation...setting it equal to zero results in \sum_{k=1}^{K} \left[ \frac{2(d_k - A\,p_k(R) - B)}{2\sigma^2} \right] p_k(R) = 0   (4.6) where the term that doesn't depend on A has been dropped

  12. Skewness of cloud droplet spectrum and an improved estimation for its relative dispersion

    Science.gov (United States)

    Liu, Yu; Lu, Chunsong; Li, Weiliang

    2017-02-01

    The relative dispersion of the cloud droplet spectrum is a very important parameter in describing and modeling cloud microphysical processes. Based on the definition of skewness as well as theoretical and data analyses, a linear fitting relationship (α = 2.91ε − 0.59) between skewness (α) and relative dispersion (ε) is established, and a new method is developed to estimate the relative dispersion of the cloud droplet spectrum. The new method does not depend on any assumption of a particular distribution for the cloud droplet spectrum and has broader applicability than previous methods. Comparisons of the three methods for the relative dispersion with the observed data support the following conclusions. (1) The skewness of the cloud droplet spectrum is asymmetrically distributed. An assumption of zero skewness in quantifying the relative dispersion inevitably results in relatively large deviations from the observations. Errors of the estimated relative dispersion due to the omission of the skewness term are not solely related to the skewness, but rather to the product of the skewness and relative dispersion. (2) The use of the assumption that the cloud droplet spectrum takes a gamma distribution is similar to the assumption that the skewness is twice the relative dispersion. This leads to better accuracy in estimating the relative dispersion than the zero-skewness assumption. (3) Comparisons with observations show that the new method is more accurate than the one under the gamma distribution assumption and is the best among all three methods. (4) It is believed that finding a better correlation between the skewness and the relative dispersion would further reduce the deviations of the estimated relative dispersion.

  13. VA Construction: Improved Processes Needed to Monitor Contract Modifications, Develop Schedules, and Estimate Costs

    Science.gov (United States)

    2017-03-01

    that designs should avoid costly and unwarranted architectural and engineering embellishments and unnecessary construction and maintenance expenses ...relevant data taken from the existing contract, remaining construction work, and cost information from other hospitals. Credible: A cost estimate is...Office of Management and Budget Circular A-11 states that the cost of a capital asset such as a new hospital is its full life-cycle cost, which

  14. Search Space Calculation to Improve Parameter Estimation of Excitation Control Systems

    OpenAIRE

    2013-01-01

    A method to calculate the search space for each parameter in an excitation control system is presented in this paper. The calculated search space is intended to reduce the number of parameter solution sets that can be found by an estimation algorithm, reducing its processing time. The method considers a synchronous generator time constant range between 4s and 10s, an excitation control system performance index, a controller design technique, and the excitation control system model structure. ...

  15. An improved self-adaptive membrane computing optimization algorithm and its applications in residue hydrogenating model parameter estimation

    Institute of Scientific and Technical Information of China (English)

    芦会彬; 薄翠梅; 杨世品

    2015-01-01

    In order to solve non-linear and high-dimensional optimization problems more effectively, an improved self-adaptive membrane computing (ISMC) optimization algorithm was proposed. The proposed ISMC algorithm applies improved self-adaptive crossover and mutation formulae that provide appropriate crossover and mutation operators based on the objective functions and the number of iterations. The performance of ISMC was tested on benchmark functions. The simulation results for residue hydrogenating kinetics model parameter estimation show that the proposed method is superior to traditional intelligent algorithms in terms of convergence accuracy and stability in solving complex parameter optimization problems.

  16. An Improved Fuzzy Based Missing Value Estimation in DNA Microarray Validated by Gene Ranking

    Directory of Open Access Journals (Sweden)

    Sujay Saha

    2016-01-01

    Most gene expression data analysis algorithms require the entire gene expression matrix without any missing values. Hence, it is necessary to devise methods which impute missing data values accurately. A number of imputation algorithms exist to estimate those missing values. This work starts with a microarray dataset containing multiple missing values. We first apply a modified version of the existing fuzzy-theory-based method LRFDVImpute to impute multiple missing values of time series gene expression data, and then validate the result of imputation by a genetic algorithm (GA) based gene ranking methodology along with some regular statistical validation techniques, such as the RMSE method. Gene ranking, as far as we know, has not been used yet to validate the result of missing value estimation. Firstly, the proposed method has been tested on the very popular Spellman dataset, and the results show that error margins have been drastically reduced compared to some previous works, which indirectly validates the statistical significance of the proposed method. Then it has been applied to four other 2-class benchmark datasets, namely the Colorectal Cancer tumours dataset (GDS4382), the Breast Cancer dataset (GSE349-350), the Prostate Cancer dataset, and DLBCL-FL (Leukaemia), for both missing value estimation and ranking the genes, and the results show that the proposed method can reach 100% classification accuracy with very few dominant genes, which indirectly validates the biological significance of the proposed method.
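
    A sketch of the hide-and-score RMSE validation step; the imputer here is a plain row-mean stand-in, not the fuzzy LRFDVImpute method itself:

        import numpy as np

        rng = np.random.default_rng(3)
        expr = rng.normal(0.0, 1.0, size=(200, 12))       # genes x time points
        mask = rng.random(expr.shape) < 0.05              # hide 5% of known entries
        observed = np.where(mask, np.nan, expr)

        row_means = np.nanmean(observed, axis=1, keepdims=True)
        imputed = np.where(np.isnan(observed), row_means, observed)

        rmse = np.sqrt(np.mean((imputed[mask] - expr[mask]) ** 2))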

  17. Exploiting magnetic resonance angiography imaging improves model estimation of BOLD signal.

    Directory of Open Access Journals (Sweden)

    Zhenghui Hu

    The change of the BOLD signal relies heavily upon the resting blood volume fraction (V0) associated with regional vasculature. However, existing hemodynamic data assimilation studies pretermit such concern. They simply assign the value in a physiologically plausible range to get over the ill-conditioning of the assimilation problem, and fail to explore the actual V0. Such practice might lead to unreliable model estimation. In this work, we present the first exploration of the influence of V0 on fMRI data assimilation, where the actual V0 within a given cortical area was calibrated by an MR angiography experiment and then augmented into the assimilation scheme. We have investigated the impact of V0 on single-region data assimilation and multi-region data assimilation (dynamic causal modeling, DCM) in a classical flashing checkerboard experiment. Results show that the employment of an assumed V0 in fMRI data assimilation is only suitable for fMRI signal reconstruction and activation detection grounded on this signal, and not suitable for estimation of unobserved states and effective connectivity studies. We thereby argue that introducing a physically realistic V0 into the assimilation process may provide more reliable estimation of physiological information, which contributes to a better understanding of the underlying hemodynamic processes. Such an effort is valuable and should be well appreciated.

  18. Carbon Footprint estimation for a Sustainable Improvement of Supply Chains: State of the Art

    Directory of Open Access Journals (Sweden)

    Pilar Cordero

    2013-07-01

    Purpose: This paper examines the current methodologies and approaches developed to estimate the carbon footprint in supply chains, the studies existing in the literature on the application of these methodologies, and other new approaches proposed by some authors. Design/methodology/approach: Literature review of methodologies developed by various authors for determining greenhouse gas emissions throughout the supply chain of a given sector or organization. Findings and Originality/value: Owing to their usefulness for the design and management of sustainable supply chains, methodologies for calculating the carbon footprint across the supply chain are recommended by many authors, not only to reduce GHG emissions but also to optimize them in a cost-effective manner. Although these approaches are in the first stages of development and the literature is scarce, different methodologies for estimating CF emissions have been developed, including EIO analysis models and standardized methods and guidance; some of them are applicable to supply chains, especially methodologies for calculating the CF of a specific economic sector's supply chain in a territory or country, and for calculating the CF of an organization, applicable to the estimation of GHG emissions of a specific company's supply chain.

  19. The new approach of polarimetric attenuation correction for improving radar quantitative precipitation estimation(QPE)

    Science.gov (United States)

    Gu, Ji-Young; Suk, Mi-Kyung; Nam, Kyung-Yeub; Ko, Jeong-Seok; Ryzhkov, Alexander

    2016-04-01

    To obtain high-quality radar quantitative precipitation estimation data, reliable radar calibration and efficient attenuation correction are very important. Because microwave radiation at shorter wavelengths experiences strong attenuation in precipitation, accounting for this attenuation is essential for shorter-wavelength radars. In this study, the performance of different attenuation/differential-attenuation correction schemes at C band is tested on two strong rain events that occurred in central Oklahoma. A new attenuation correction scheme (a combination of the self-consistency and hot-spot methodologies) that separates the relative contributions of strong convective cells and the rest of the storm to the path-integrated total and differential attenuation is among the algorithms explored. Quantitative use of weather radar measurements, such as rainfall estimation, relies on reliable attenuation correction. We examined the impact of attenuation correction on rainfall estimates in heavy rain events by cross-checking against S-band radar measurements, which are much less affected by attenuation, and compared the storm rain totals obtained from the corrected Z and KDP with rain gages in these cases. This new approach can be utilized efficiently at shorter-wavelength radars. It is therefore very useful to the Weather Radar Center of the Korea Meteorological Administration, which is preparing an X-band research dual-polarization radar network.
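
    A sketch of the simplest linear ΦDP-based correction that self-consistency schemes refine; the coefficient α (dB per degree) is a typical C-band value assumed for illustration:

        import numpy as np

        alpha = 0.08                                  # dB/deg, typical C-band value
        phidp = np.linspace(0.0, 60.0, 200)           # differential phase [deg] along ray
        z_meas = np.full(200, 38.0)                   # measured reflectivity [dBZ]

        # Two-way path-integrated attenuation assumed linear in PhiDP:
        # PIA(r) = alpha * (PhiDP(r) - PhiDP(0)); add it back to correct Z.
        z_corr = z_meas + alpha * (phidp - phidp[0])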

  20. Improved dose-volume histogram estimates for radiopharmaceutical therapy by optimizing quantitative SPECT reconstruction parameters

    Science.gov (United States)

    Cheng, Lishui; Hobbs, Robert F.; Segars, Paul W.; Sgouros, George; Frey, Eric C.

    2013-06-01

    In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose-volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVHs estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator-detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less
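
    A sketch of the cumulative DVH computation itself, on toy voxel doses; upstream QSPECT reconstruction choices (iterations, filtering) determine the quality of the dose array fed in:

        import numpy as np

        def cumulative_dvh(dose, bins=100):
            # Fraction of organ volume receiving at least each dose level
            levels = np.linspace(0.0, dose.max(), bins)
            frac = np.array([(dose >= d).mean() for d in levels])
            return levels, frac

        rng = np.random.default_rng(5)
        kidney_dose = rng.gamma(shape=4.0, scale=0.5, size=20000)  # toy voxel doses [Gy]
        levels, frac = cumulative_dvh(kidney_dose)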

  1. Improved estimation of sediment source contributions by concentration-dependent Bayesian isotopic mixing model

    Science.gov (United States)

    Ram Upadhayay, Hari; Bodé, Samuel; Griepentrog, Marco; Bajracharya, Roshan Man; Blake, Will; Cornelis, Wim; Boeckx, Pascal

    2017-04-01

    The implementation of compound-specific stable isotope (CSSI) analyses of biotracers (e.g. fatty acids, FAs) as constraints on sediment-source contributions has become increasingly relevant for understanding the origin of sediments in catchments. CSSI fingerprinting of sediment uses the CSSI signatures of biotracers as input to an isotopic mixing model (IMM) to apportion source soil contributions. So far, source studies have relied on the linear mixing assumption of the CSSI signatures of sources to the sediment, without accounting for potential effects of source biotracer concentration. Here we evaluated the effect of FA concentrations in sources on the accuracy of source contribution estimates in artificial soil mixtures of three well-separated land use sources. Soil samples from the land use sources were mixed to create three groups of artificial mixtures with known source contributions. Sources and artificial mixtures were analysed for the δ13C of FAs using gas chromatography-combustion-isotope ratio mass spectrometry. The source contributions to the mixtures were estimated using concentration-dependent and concentration-independent versions of MixSIAR, a Bayesian isotopic mixing model. The concentration-dependent MixSIAR provided the closest estimates to the known artificial mixture source contributions (mean absolute error, MAE = 10.9%, and standard error, SE = 1.4%). In contrast, the concentration-independent MixSIAR with post-mixing correction of tracer proportions based on the aggregated FA concentrations of sources biased the source contributions (MAE = 22.0%, SE = 3.4%). This study highlights the importance of accounting for the potential effect of source FA concentrations on isotopic mixing in sediments, which adds realism to the mixing model and allows more accurate estimates of source contributions to the mixture. The potential influence of FA concentration on the CSSI signature of sediments is an important underlying factor that determines whether the isotopic signature of a given source is observable
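
    The forward model behind concentration-dependent mixing is a concentration-weighted average; the fractions, concentrations, and δ13C values below are illustrative (MixSIAR inverts this relationship in a Bayesian framework):

        import numpy as np

        f = np.array([0.5, 0.3, 0.2])              # source soil mass fractions
        conc = np.array([12.0, 3.0, 6.0])          # FA concentration per source [ug/g]
        d13c = np.array([-28.0, -30.5, -26.0])     # d13C of the FA per source [permil]

        w = f * conc / np.sum(f * conc)            # concentration-dependent weights
        d13c_mix = np.sum(w * d13c)                # predicted mixture signature
        d13c_naive = np.sum(f * d13c)              # concentration-independent version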

  2. Using ancillary information to improve hypocenter estimation: Bayesian single event location (BSEL)

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Dale N [Los Alamos National Laboratory

    2008-01-01

    We have developed and tested an algorithm, Bayesian Single Event Location (BSEL), for estimating the location of a seismic event. The main driver for our research is the inadequate representation of ancillary information in the hypocenter estimation procedure. The added benefit is that we have also addressed instability issues often encountered with historical NLR solvers (e.g., non-convergence or seismically infeasible results). BSEL differs from established nonlinear regression techniques by using a Bayesian prior probability density function (prior PDF) to incorporate ancillary physical basis constraints about event location. P-wave arrival times from seismic events are used in the development. Depth, a focus of this paper, may be modeled with a prior PDF (potentially skewed) that captures physical basis bounds from surface wave observations. This PDF is constructed from a Rayleigh wave depth excitation eigenfunction that is based on the observed minimum period from a spectrogram analysis and estimated near-source elastic parameters. For example, if the surface wave is an Rg phase, it potentially provides a strong constraint for depth, which has important implications for remote monitoring of nuclear explosions. The proposed Bayesian algorithm is illustrated with events that demonstrate its congruity with established hypocenter estimation methods and its application potential. The BSEL method is applied to three events: (1) A shallow Mw 4 earthquake that occurred near Bardwell, KY on June 6, 2003, (2) the Mw 5.6 earthquake of July 26, 2005 that occurred near Dillon, MT, and (3) a deep Mw 5.7 earthquake that occurred off the coast of Japan on April 22, 1980. A strong Rg was observed from the Bardwell, KY earthquake that places very strong constraints on depth and origin time. No Rg was observed for the Dillon, MT earthquake, but we used the minimum observed period of a Rayleigh wave (7 seconds) to reduce the depth and origin time uncertainty. Because the Japan

  3. Improving Drought Monitoring and Predictions Using Physically Based Evaporative Demand Estimates

    Science.gov (United States)

    Hobbins, M. T.; Wood, A. W.; Werner, K.

    2011-12-01

    Existing drought monitors rely heavily on precipitation (Prcp) and temperature (T) data to derive moisture fluxes at the surface, often using estimates of evaporative demand (Eo) based only on T to derive actual evapotranspiration (ET) from land surface models (LSMs). An example of this is the popular Palmer Drought Severity Index (PDSI). In the analysis of drought trends and dynamics, however, the choice of Eo-driver for LSMs is crucial: it significantly affects both the direction and magnitude of trends in estimated ET and soil moisture, particularly in energy-limited areas (in water-limited areas, ET and soil moisture trends are driven by Prcp trends). All else equal, in the long term, T-based Eo measures result in declining ET estimates (i.e., drying) as T rises, whereas using more appropriate, physically based Eo estimates will more accurately reflect observations of both wetting and drying under warming. With regard to the short-term variabilities more appropriate to monitoring ongoing droughts, we contend that, given that various requirements are met, using an appropriate Eo driver (i) as a drought metric in itself, (ii) to drive drought monitors' LSMs, and (iii) in combination with short-term Eo forecasts will enhance characterization of the evaporative dynamics of ongoing drought and permit more accurate predictions of drought development. The requirements of an appropriate Eo estimate are as follows: that at operationally appropriate time and space scales Eo is diagnostic of ET (i.e., ET and Eo co-vary in a complementary fashion); that the Eo formulation and driving data produce good estimates of Eo (i.e., the model is physically based in that it combines radiative and advective drivers, and produces Eo estimates that are accurate and unbiased with respect to observations from drivers that are available with limited latency on a daily basis) and at operational spatio-temporal resolutions; and that Eo can be forecast at operational time and space scales

  4. Improving the rainfall rate estimation in the midstream of the Heihe River Basin using rain drop size distribution

    Directory of Open Access Journals (Sweden)

    G. Zhao

    2009-09-01

    During the intensive observation period of the Watershed Allied Telemetry Experimental Research (WATER), a total of 1074 raindrop size distributions were measured by the Parsivel disdrometer, a state-of-the-art optical laser instrument. Because of the limited observational data over the Qinghai-Tibet Plateau, modeling there has been poorly constrained. We used the raindrop size distributions to improve the rain rate estimators of meteorological radar, in order to obtain more accurate rain rate data in this area. We obtained the relationship between raindrop terminal velocity and drop diameter D (mm): v(D) = 4.67 D^0.53. Four types of estimators for X-band polarimetric radar are then examined. The simulation results show that the classical estimator R(Z) is most sensitive to variations in DSD and that the estimator R(KDP, Z, ZDR) is the best for estimating the rain rate. The lowest sensitivity of the rain rate estimator R(KDP, Z, ZDR) to variations in DSD can be explained by the following facts. The difference in the forward-scattering amplitudes at horizontal and vertical polarizations, which contributes to KDP, is proportional to the 3rd power of the drop diameter. On the other hand, the backscatter cross section, which contributes to Z, is proportional to the 6th power of the drop diameter. Because the rain rate R is proportional to the 3.57th power of the drop diameter, KDP is less sensitive to DSD variations than Z.
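
    A sketch of how a rain rate follows from a measured DSD and the fitted fall-speed relation; the bin grid and Marshall-Palmer-type spectrum are illustrative:

        import numpy as np

        def rain_rate(d_mm, n_d):
            # R [mm/h] = 6e-4 * pi * sum(D^3 * v(D) * N(D) * dD), with D in mm,
            # v in m/s (v = 4.67 D^0.53 as fitted above), N(D) in m^-3 mm^-1.
            v = 4.67 * d_mm**0.53
            dd = np.gradient(d_mm)
            return 6.0e-4 * np.pi * np.sum(d_mm**3 * v * n_d * dd)

        d = np.linspace(0.2, 5.0, 32)                  # diameter bins [mm]
        n = 8000.0 * np.exp(-3.5 * d)                  # Marshall-Palmer-type DSD
        r = rain_rate(d, n)                            # ~ a few mm/h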

  5. Investigation of the $\\eta'$-$\\eta_c$-mixing with improved stochastic estimators

    CERN Document Server

    Ehmann, Christian

    2009-01-01

    Charmonia are flavour singlet mesons and thus in principle contributions from disconnected quark line diagrams might affect their masses, either directly or via mixing with other flavour singlet channels. We present a first study that takes both effects into account. We employ improved stochastic all-to-all propagator techniques (including new methods) to calculate the diagrams that appear within the mixing matrix between the $\\eta'$ and the $\\eta_c$. The runs are initially performed on $N_f=2$ $16^3\\times 32$ configurations with the non-perturbatively improved Sheikholeslami-Wilson action, both for valence and sea quarks.

  6. Statistical innovations improve prevalence estimates of nutrient risk populations: applications in São Paulo, Brazil.

    Science.gov (United States)

    Morimoto, Juliana Masami; Marchioni, Dirce Maria Lobo; Cesar, Chester Luiz Galvão; Fisberg, Regina Mara

    2012-10-01

    The objective of this study was to estimate the prevalence of inadequate micronutrient intake and excess sodium intake among adults age 19 years and older in the city of São Paulo, Brazil. Twenty-four-hour dietary recall and sociodemographic data were collected from each participant (n=1,663) in a cross-sectional study, Inquiry of Health of São Paulo, of a representative sample of the adult population of the city of São Paulo in 2003 (ISA-2003). The variability in intake was measured through two replications of the 24-hour recall in a subsample of this population in 2007 (ISA-2007). Usual intake was estimated by the PC-SIDE program (version 1.0, 2003, Department of Statistics, Iowa State University), which uses an approach developed by Iowa State University. The prevalence of nutrient inadequacy was calculated using the Estimated Average Requirement cut-point method for vitamins A and C, thiamin, riboflavin, niacin, copper, phosphorus, and selenium. For vitamin D, pantothenic acid, manganese, and sodium, the proportion of individuals with usual intake equal to or more than the Adequate Intake value was calculated. The percentage of individuals with intake equal to more than the Tolerable Upper Intake Level was calculated for sodium. The highest prevalence of inadequacy for males and females, respectively, occurred for vitamin A (67% and 58%), vitamin C (52% and 62%), thiamin (41% and 50%), and riboflavin (29% and 19%). The adjustment for the within-person variation presented lower prevalence of inadequacy due to removal of within-person variability. All adult residents of São Paulo had excess sodium intake, and the rates of nutrient inadequacy were high for certain key micronutrients.

  7. Improved quantitative visualization of hypervelocity flow through wavefront estimation based on shadow casting of sinusoidal gratings.

    Science.gov (United States)

    Medhi, Biswajit; Hegde, Gopalakrishna M; Gorthi, Sai Siva; Reddy, Kalidevapura Jagannath; Roy, Debasish; Vasu, Ram Mohan

    2016-08-01

    A simple noninterferometric optical probe is developed to estimate the wavefront distortion suffered by a plane wave in its passage through density variations in a hypersonic flow obstructed by a test model in a typical shock tunnel. The probe has a plane light wave trans-illuminating the flow and casting a shadow of a continuous-tone sinusoidal grating. Through a geometrical-optics (eikonal) approximation, a bilinear approximation to the distorted wavefront is related to the location-dependent shift (distortion) suffered by the grating, which can be read out space-continuously from the projected grating image. The processing of the grating shadow is done through an efficient Fourier fringe analysis scheme, with either a windowed or a global Fourier transform (WFT and FT). For comparison, wavefront slopes are also estimated from shadows of random-dot patterns, processed through cross correlation. The measured slopes are suitably unwrapped by using a discrete cosine transform (DCT)-based phase unwrapping procedure, and also through iterative procedures. The unwrapped phase information is used in an iterative scheme for a full quantitative recovery of the density distribution in the shock around the model through refraction tomographic inversion. Hypersonic flow field parameters around a missile-shaped body at a free-stream Mach number of ∼8 measured using this technique are compared with numerically estimated values. It is shown that, when processing a wavefront with a small space-bandwidth product (SBP), the FT inversion gave accurate results with computational efficiency; the computation-intensive WFT was needed for similar results when dealing with larger-SBP wavefronts.
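
    A 1-D sketch of the global Fourier-transform fringe analysis step (Takeda-style): isolate the carrier sideband, inverse-transform, and unwrap the phase; the synthetic fringe pattern is illustrative:

        import numpy as np

        N, f0 = 1024, 64                        # samples, carrier frequency [cycles]
        x = np.arange(N) / N
        phi = 3.0 * np.sin(2 * np.pi * 2 * x)   # synthetic wavefront-induced phase
        fringe = 1.0 + 0.8 * np.cos(2 * np.pi * f0 * x + phi)

        F = np.fft.fft(fringe)
        H = np.zeros(N)
        H[f0 - 32 : f0 + 32] = 1.0              # keep only the +f0 sideband
        sideband = np.fft.ifft(F * H)
        phase = np.unwrap(np.angle(sideband)) - 2 * np.pi * f0 * x  # ~ phi + const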

  8. Improvement of risk estimate on wind turbine tower buckled by hurricane

    CERN Document Server

    Li, Jingwei

    2013-01-01

    Wind is an important renewable resource; however, wind turbine towers can be threatened by hurricanes. In this paper, a method to estimate the number of wind turbine towers that would be buckled by hurricanes is discussed. Monte Carlo simulations show that our method performs much better than the previous one, because the probability density function of the buckling probability of a single turbine tower in a single hurricane is obtained accurately rather than from a single approximate expression. The results in this paper may be useful for the design and maintenance of wind farms.
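
    A Monte Carlo sketch of the idea, assuming a hypothetical Beta density for the single-tower buckling probability and an illustrative farm size (neither is from the paper):

```python
import numpy as np

# Draw each tower's single-hurricane buckling probability from a full
# density rather than one approximate value, then count buckled towers
# per simulated hurricane.  Beta parameters and farm size are assumptions.

rng = np.random.default_rng(42)
n_towers, n_hurricanes = 200, 20_000

p = rng.beta(2.0, 50.0, size=(n_hurricanes, n_towers))   # per-tower probability
buckled = rng.random((n_hurricanes, n_towers)) < p

counts = buckled.sum(axis=1)   # towers buckled in each simulated hurricane
print(f"mean = {counts.mean():.1f}, 95th percentile = {np.percentile(counts, 95):.0f}")
```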

  9. Estimation Algorithm of Contending Stations Based on Improved DCF Model in IEEE 802.11

    Institute of Scientific and Technical Information of China (English)

    HU He-fei; LIU Yuan-an; LI Shu-lan

    2004-01-01

    The fundamental access method of IEEE 802.11 is the Distributed Coordination Function (DCF), a carrier sense multiple access with collision avoidance (CSMA/CA) scheme with exponential back-off. The RTS_threshold parameter determines whether the RTS/CTS access method is deployed, and this threshold should vary with the number of stations contending for the wireless medium in order to achieve better throughput. The paper proposes an algorithm that estimates the number of contending stations in a BSS; elaborate simulations verify that the algorithm is accurate.
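
    A much-simplified estimator in the same spirit (not the paper's algorithm): Bianchi-style DCF models relate the conditional collision probability p to the station count n through p = 1 - (1 - τ)^(n-1), which can be inverted once p is measured, assuming the per-slot transmission probability τ is known:

```python
import math

# Simplified station-count estimator in the spirit of DCF-model-based
# estimation (not the paper's algorithm).  Bianchi-style models give
# p = 1 - (1 - tau)**(n - 1) for the conditional collision probability,
# so n can be estimated from a measured p and an assumed tau.

def estimate_stations(p_collision: float, tau: float) -> float:
    return 1.0 + math.log(1.0 - p_collision) / math.log(1.0 - tau)

# Both inputs below are assumed values for illustration.
print(estimate_stations(p_collision=0.25, tau=0.03))  # ~10.4 stations
```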

  10. An Improved Doppler Rate Estimation Approach for Sliding Spotlight SAR Data Based on the Transposition Domain

    Directory of Open Access Journals (Sweden)

    She Xiao-qiang

    2014-08-01

    In image processing of high-resolution sliding spotlight SAR, it is important to know the Doppler rate accurately; however, traditional Doppler rate estimation algorithms are of little help because of the azimuth spectrum folding phenomenon. In this study, an algorithm that works in the transposition domain is proposed to solve this problem. The algorithm also helps to obtain well-focused images when embedded in the two-step technique. Finally, the proposed algorithm is verified using computer simulations.

  11. Use of Cokriging to Improve Spatial Resolution of Ambient Airborne Contaminant Concentration Estimates in Detroit and Windsor

    Science.gov (United States)

    Lemke, L. D.; Bobryk, S. M.; Xu, X.

    2010-12-01

    A combination of active and passive air sampling devices was deployed to measure ambient air quality over a two-week period during September 2008 in Detroit, Michigan, USA and Windsor, Ontario, Canada. Passive diffusion monitors were used to measure nitrogen dioxide (NO2), sulfur dioxide (SO2), and 26 volatile organic compounds (VOCs) at 100 sampling sites with an approximate spacing of 1 per 5 km2. Active samplers utilizing a pump were collocated at 50 of the passive sites to sample particulate matter (PM) and 23 polycyclic aromatic hydrocarbons (PAHs) at an approximate sample density of 1 per 10 km2. The field campaign yielded acceptable data at 98 of the 100 passive monitoring sites. However, pump failures and power outages limited acceptable data to only 38 of the 50 active sites, and the intended spatial coverage was not achieved. The utility of cokriging was therefore investigated as a means of improving PAH and PM concentration estimates by using the more densely spaced passive sampler analyte concentrations as secondary information. Moderate positive correlation coefficients were observed between the paired pollutants, and a variogram analysis was performed to specify the cross-covariance structure between each pair of pollutants using a linear model of coregionalization. Concentration maps produced through ordinary kriging (OK) and ordinary cokriging (OCK) were compared, and statistical metrics were used to quantify the improvement in estimates at sampled points attributable to cokriging. Scatter plots of measured vs. estimated values indicate that both OK and OCK were able to reliably predict concentrations near measurement points. Modest improvements in cross-validation correlation coefficients and residual error statistics were observed for PAH cokriged with NO2 and benzene; however, predictive performance for PM1-2.5 cokriged with NO2 was slightly degraded. Nevertheless, mapped results demonstrate that PAH and PM1-2.5 concentration surfaces generated with OCK have improved spatial resolution over OK maps.
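
    For reference, a minimal ordinary-kriging sketch (the OK baseline the abstract compares against); cokriging extends this system with cross-covariances between pollutants fitted through a linear model of coregionalization. The covariance model and data here are illustrative assumptions:

```python
import numpy as np

# Minimal ordinary kriging with an assumed exponential covariance model.

def exp_cov(h, sill=1.0, corr_range=5.0):
    return sill * np.exp(-h / corr_range)

def ordinary_krige(xy_obs, z_obs, xy_new):
    n = len(z_obs)
    d_obs = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    # OK system: covariances plus a Lagrange multiplier enforcing that
    # the weights sum to one (unbiasedness).
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_cov(d_obs)
    A[n, n] = 0.0
    d_new = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=-1)
    b = np.ones((len(xy_new), n + 1))
    b[:, :n] = exp_cov(d_new)
    w = np.linalg.solve(A, b.T)   # one weight vector per prediction point
    return w[:n].T @ z_obs

rng = np.random.default_rng(1)
xy = rng.uniform(0.0, 20.0, size=(38, 2))   # e.g. the 38 usable active sites
z = np.sin(xy[:, 0] / 3.0) + 0.1 * rng.standard_normal(38)
targets = rng.uniform(0.0, 20.0, size=(5, 2))
print(ordinary_krige(xy, z, targets))
```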

  12. Estimation of the FRF Through the Improved Local Bandwidth Selection in the Local Polynomial Method

    DEFF Research Database (Denmark)

    Thummala, Prasanth; Schoukens, Johan

    2012-01-01

    This paper presents a nonparametric method to measure an improved frequency response function (FRF) of a linear dynamic system excited by a random input. Recently, the local polynomial method (LPM) has been proposed as a technique to reduce leakage errors in FRF measurements.
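
    A minimal sketch of the LPM idea: around every DFT bin, both the FRF and the leakage (transient) term are modelled as low-order polynomials in the bin offset and fitted by complex least squares. The polynomial order and window half-width below are assumptions, not the paper's bandwidth-selection rule:

```python
import numpy as np

# Simplified local polynomial method: Y(k+r) ~ G(k,r)*U(k+r) + T(k,r),
# with G and T low-order polynomials in the offset r; the fitted
# constant term is the FRF estimate at bin k.

def lpm_frf(U, Y, order=2, half_width=4):
    N = len(U)
    G = np.full(N, np.nan + 0j)
    r = np.arange(-half_width, half_width + 1)
    for k in range(half_width, N - half_width):
        u, y = U[k + r], Y[k + r]
        # Regressors: r**m * U for the FRF polynomial, r**m for the transient.
        cols = [u * r**m for m in range(order + 1)]
        cols += [(r**m).astype(complex) for m in range(order + 1)]
        theta, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
        G[k] = theta[0]   # FRF estimate at bin k
    return G
```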

  13. Outlier treatment for improving parameter estimation of group contribution based models for upper flammability limit

    DEFF Research Database (Denmark)

    Frutiger, Jerome; Abildskov, Jens; Sin, Gürkan

    2015-01-01

    Flammability data are needed to assess the risk of fires and explosions. This study presents a new group contribution (GC) model to predict the upper flammability limit (UFL) of organic chemicals. Furthermore, it provides a systematic method for outlier treatment in order to improve the parameter estimation.

  14. Improving head and body pose estimation through semi-supervised manifold alignment

    KAUST Repository

    Heili, Alexandre

    2014-10-27

    In this paper, we explore the use of a semi-supervised manifold alignment method for domain adaptation in the context of human body and head pose estimation in videos. We build upon an existing state-of-the-art system that leverages external labelled datasets for the body and head features, and the unlabelled test data with weak velocity labels, to perform a coupled estimation of body and head pose. While this previous approach showed promising results, the learning of the underlying manifold structure of the features in the train and target data, and the need to align them, were not explored, despite the fact that the pose features of two datasets may vary with the scene, e.g., due to different camera viewpoints or perspectives. In this paper, we propose to use a semi-supervised manifold alignment method to bring the train and target samples closer within the resulting embedded space. To this end, we consider an adaptation set from the target data and rely on (weak) labels, given for example by the velocity direction whenever it is reliable. These labels, along with the training labels, are used to bias the manifold distance within each manifold and to establish correspondences for the alignment.

  15. Assimilation of ice and water observations from SAR imagery to improve estimates of sea ice concentration

    Directory of Open Access Journals (Sweden)

    K. Andrea Scott

    2015-09-01

    In this paper, the assimilation of binary observations calculated from synthetic aperture radar (SAR) images of sea ice is investigated. Ice and water observations are obtained from a set of SAR images by thresholding ice and water probabilities calculated using a supervised maximum likelihood estimator (MLE). These ice and water observations are then assimilated in combination with ice concentration from passive microwave imagery for the purpose of estimating sea ice concentration. Because the observations are binary, consisting of zeros and ones, while the state vector is a continuous variable (ice concentration), the forward model used to map the state vector to the observation space requires special consideration; both linear and non-linear forward models were investigated. In both cases, the assimilation of SAR data produced ice concentration analyses in closer agreement with image analysis charts than assimilating passive microwave data alone. When both passive microwave and SAR data are assimilated, the bias between the ice concentration analyses and the ice concentration from ice charts is 19.78%, as compared to 26.72% when only passive microwave data are assimilated. The method presented here for the assimilation of SAR data could be applied to other binary observations, such as ice/water information from visible/infrared sensors.
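
    A scalar sketch of assimilating one binary ice/water observation into a continuous ice-concentration state, in the spirit of the non-linear forward model discussed above. The logistic operator, error variances, and background value are all assumptions:

```python
import numpy as np

# Linearized scalar update with a smooth (logistic) forward model mapping
# concentration to the probability of an "ice" observation.

def h(c, c0=0.5, s=0.1):
    return 1.0 / (1.0 + np.exp(-(c - c0) / s))   # concentration -> P(ice)

def h_prime(c, c0=0.5, s=0.1):
    p = h(c, c0, s)
    return p * (1.0 - p) / s

def assimilate(xb, Pb, y, R):
    H = h_prime(xb)
    K = Pb * H / (H * Pb * H + R)                # scalar Kalman gain
    xa = xb + K * (y - h(xb))
    Pa = (1.0 - K * H) * Pb
    return float(np.clip(xa, 0.0, 1.0)), Pa

# A SAR pixel classified as ice (y = 1) pulls a 0.4 background upward.
print(assimilate(xb=0.4, Pb=0.04, y=1.0, R=0.09))
```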

  16. Improving Accuracy and Temporal Resolution of Learning Curve Estimation for within- and across-Session Analysis.

    Directory of Open Access Journals (Sweden)

    Matthias Deliano

    Estimation of learning curves is ubiquitously based on proportions of correct responses within moving trial windows. This tacitly assumes that learning performance is constant within the moving windows, which, however, is often not the case. In the present study we demonstrate that violations of this assumption lead to systematic errors in the analysis of learning curves, and we explore the dependency of these errors on window size, the statistical model used, and the learning phase. To reduce these errors in the analysis of single-subject data as well as at the population level, we propose adequate statistical methods for the estimation of learning curves and the construction of confidence intervals, trial by trial. Applied to data from an avoidance learning experiment with rodents, these methods revealed performance changes occurring at multiple time scales within and across training sessions that were otherwise obscured in the conventional analysis. Our work shows that properly assessing the behavioral dynamics of learning at high temporal resolution can shed new light on specific learning processes and thus allows existing learning concepts to be refined. It further disambiguates the interpretation of neurophysiological signal changes recorded during training in relation to learning.
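
    For concreteness, a sketch of the conventional moving-window estimate whose constancy assumption the paper criticises, with a trial-by-trial Wilson confidence interval; the window size and synthetic data are illustrative:

```python
import numpy as np

# Moving-window proportion of correct responses with a Wilson score
# confidence interval computed per trial.

def wilson_interval(successes, n, z=1.96):
    p = successes / n
    denom = 1.0 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

def moving_window_curve(responses, window=20):
    responses = np.asarray(responses, dtype=float)
    est = []
    for t in range(window, len(responses) + 1):
        s = responses[t - window:t].sum()
        lo, hi = wilson_interval(s, window)
        est.append((t, s / window, lo, hi))
    return np.array(est)

# Synthetic learner whose hit rate rises from 0.2 to 0.9 over 300 trials.
rng = np.random.default_rng(7)
trials = (rng.random(300) < np.linspace(0.2, 0.9, 300)).astype(int)
curve = moving_window_curve(trials)   # columns: trial, estimate, lo, hi
```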

  17. Improved Formula for Estimating Added Resistance of Ships in Engineering Applications

    Institute of Scientific and Technical Information of China (English)

    Shukui Liu; Baoguo Shang; Apostolos Papanikolaou; Victor Bolbot

    2016-01-01

    The authors previously introduced a semi-empirical formula that enables fast estimation of the added resistance of ships in head waves; in this study the formula is further refined for easy use in engineering applications. It includes an alternative ship draft correction coefficient, which better accounts for the decay of wave pressure with the ship's draft. In addition, it uses only the speed and main characteristics of the ship and the wave environment as input, and has been simplified to the extent that it can be readily evaluated with a pocket calculator. Extensive validations are conducted for different ship types at low to moderate speeds in various typical irregular sea conditions, and encouraging results are obtained. This research lies within the framework of the recent IMO MEPC.232(65) (2013) EEDI guidelines for estimating the minimum powering of ships in adverse weather conditions, which specify the use of simple methods in the current Level 2 assessment for engineering applications.

  18. Estimation of relative permeability curves using an improved Levenberg-Marquardt method with simultaneous perturbation Jacobian approximation

    Science.gov (United States)

    Zhou, Kang; Hou, Jian; Fu, Hongfei; Wei, Bei; Liu, Yongge

    2017-01-01

    Relative permeability controls the flow of multiphase fluids in porous media. The estimation of relative permeability is generally solved by the Levenberg-Marquardt method with a finite-difference Jacobian approximation (LM-FD). However, this method can hardly be used in large-scale reservoirs because of its unbearably huge computational cost. To eliminate this problem, the paper introduces the idea of simultaneous perturbation to simplify the generation of the Jacobian matrix needed in the Levenberg-Marquardt procedure, denoting the improved method LM-SP. It is verified by numerical experiments and then applied to laboratory experiments and a real commercial oilfield. The numerical experiment indicates that LM-SP uses only 16.1% of the computational cost to obtain a similar estimation of relative permeability and prediction of production performance compared with LM-FD. The laboratory experiment also shows that LM-SP achieves a 60.4% decrease in simulation cost and a 68.5% increase in estimation accuracy compared with earlier published results. This is mainly because LM-FD needs 2n simulations (n is the number of controlling knots) to approximate the Jacobian in each iteration, while only 2 simulations are enough in basic LM-SP. The convergence rate and estimation accuracy of LM-SP can be improved by averaging several simultaneous-perturbation Jacobian approximations, although this increases the computational cost of each iteration. Considering both estimation accuracy and computational cost, averaging two Jacobian approximations is recommended in this paper. As the number of unknown controlling knots increases from 7 to 15, the number of simulation runs saved by LM-SP over LM-FD increases from 114 to 1164. This indicates that LM-SP is more suitable than LM-FD for multivariate problems. The field application further proves the applicability of LM-SP to large real-field problems as well as small laboratory ones.
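
    A minimal sketch of a Levenberg-Marquardt step with a simultaneous-perturbation Jacobian approximation: two residual evaluations per Jacobian estimate, averaged over a few random perturbation directions (two, per the abstract's recommendation). The toy residual function and damping are assumptions, not a reservoir simulator:

```python
import numpy as np

# SP Jacobian: J[j, i] ~ [r_j(x + c*d) - r_j(x - c*d)] / (2*c*d_i),
# with d a random +/-1 vector; averaging reduces approximation noise.

def sp_jacobian(residual, x, c=1e-3, n_avg=2, rng=None):
    rng = rng or np.random.default_rng()
    J = 0.0
    for _ in range(n_avg):
        d = rng.choice([-1.0, 1.0], size=x.size)
        dr = residual(x + c * d) - residual(x - c * d)
        J = J + np.outer(dr, 1.0 / (2.0 * c * d))
    return J / n_avg

def lm_sp_step(residual, x, lam=1.0, **kw):
    r = residual(x)
    J = sp_jacobian(residual, x, **kw)
    H = J.T @ J + lam * np.eye(x.size)   # damped normal equations
    return x - np.linalg.solve(H, J.T @ r)

# Toy zero-residual problem with solution (1, -0.5), illustrative only.
resid = lambda x: np.array([x[0] - 1.0,
                            2.0 * (x[1] + 0.5),
                            (x[0] - 1.0) * (x[1] + 0.5)])
x = np.array([3.0, 2.0])
for _ in range(50):
    x = lm_sp_step(resid, x)
print(x)
```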

  19. Investigation for improving Global Positioning System (GPS) orbits using a discrete sequential estimator and stochastic models of selected physical processes

    Science.gov (United States)

    Goad, Clyde C.; Chadwell, C. David

    1993-01-01

    GEODYNII is a conventional batch least-squares differential-corrector computer program with deterministic models of the physical environment. Conventional algorithms were used to process differenced phase and pseudorange data to determine eight-day Global Positioning System (GPS) orbits with several-meter accuracy. However, the errors are driven by random physical processes whose magnitudes prevent further improvement of the GPS orbit accuracy; to improve the orbit accuracy, these random processes should be modeled stochastically. The conventional batch least-squares algorithm cannot accommodate stochastic models; only a stochastic estimation algorithm, such as a sequential filter/smoother, is suitable. Also, GEODYNII cannot currently model the correlation among data values. Differenced pseudorange, and especially differenced phase, are precise data types that can be used to improve GPS orbit precision. To overcome these limitations and improve the accuracy of GPS orbits computed using GEODYNII, we proposed to develop a sequential stochastic filter/smoother processor using GEODYNII as a type of trajectory preprocessor. Our proposed processor is now complete. It contains a correlated double-difference range processing capability, first-order Gauss-Markov models for the solar radiation pressure scale coefficient and y-bias acceleration, and a random-walk model for the tropospheric refraction correction. The development approach was to interface the standard GEODYNII output files (measurement partials and variationals) with software modules containing the stochastic estimator, the stochastic models, and a double-differenced phase range processing routine; thus, no modifications to the original GEODYNII software were required. A schematic of the development is shown. The observational data are edited in the preprocessor and passed to GEODYNII as one of its standard data types. A reference orbit is determined using GEODYNII as a batch least-squares processor.
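
    A sketch of how a first-order Gauss-Markov (FOGM) parameter, such as the solar-radiation-pressure scale factor, is propagated in a sequential filter's prediction step. The time constant, steady-state sigma, and step size below are illustrative, not GEODYNII values:

```python
import numpy as np

# FOGM propagation: x_{k+1} = phi * x_k + w_k, with phi = exp(-dt/tau)
# and process-noise variance q = sigma**2 * (1 - phi**2).  A random-walk
# state (e.g. the troposphere correction) is the tau -> infinity limit
# (phi = 1) with a fixed process-noise rate instead.

def fogm_predict(x, P, dt, tau, sigma):
    phi = np.exp(-dt / tau)
    q = sigma**2 * (1.0 - phi**2)
    return phi * x, phi * P * phi + q   # propagate state and variance

x, P = 0.0, 1.0
for _ in range(10):
    x, P = fogm_predict(x, P, dt=30.0, tau=3600.0, sigma=0.1)
print(x, P)   # variance relaxes toward the steady-state sigma**2
```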

  20. Improved best estimate plus uncertainty methodology including advanced validation concepts to license evolving nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Unal, Cetin [Los Alamos National Laboratory; Williams, Brian [Los Alamos National Laboratory; Mc Clure, Patrick [Los Alamos National Laboratory; Nelson, Ralph A [IDAHO NATIONAL LAB

    2010-01-01

    Many evolving nuclear energy programs plan to use advanced predictive multi-scale, multi-physics simulation and modeling capabilities to reduce the cost and time from design through licensing. Historically, experiments were the primary tool for design and for understanding nuclear system behavior, while modeling and simulation played the subordinate role of supporting experiments. In the new era of multi-scale, multi-physics computation-based technology development, experiments will still be needed, but they will be performed at different scales to calibrate and validate the models that lead to predictive simulations. The cost-saving goals of these programs require minimizing the number of validation experiments. The use of more multi-scale, multi-physics models introduces complexities in the validation of predictive tools, and traditional methodologies will have to be modified to address these arising issues. This paper lays out the basic aspects of a methodology that can potentially be used to address these new challenges in the design and licensing of evolving nuclear technology programs. The main components of the proposed methodology are verification, validation, calibration, and uncertainty quantification. An enhanced calibration concept is introduced and is accomplished through data assimilation. The goal is to enable best-estimate prediction of system behavior in both normal and safety-related environments. Achieving this goal requires the additional steps of estimating the domain of validation and quantifying the uncertainties that allow extension of the results to areas of the validation domain not directly tested with experiments, which might include extending the modeling and simulation (M&S) capabilities for application to full-scale systems. The new methodology suggests a formalism to quantify an adequate level of validation (predictive maturity) with respect to the required selective data so that required testing can be minimized for cost savings.

  1. Theme: The Role of the Teacher in Facilitation of Learning.

    Science.gov (United States)

    Agricultural Education Magazine, 2003

    2003-01-01

    Contains 13 articles on facilitation in agricultural education that address improving student learning, teaching methods, the teacher's role as a facilitator, preparing students for the workplace, and the facilitator's role in student-centered classrooms. (JOW)

  2. Addressing the Issue of Microplastics in the Wake of the Microbead-Free Waters Act-A New Standard Can Facilitate Improved Policy.

    Science.gov (United States)

    McDevitt, Jason P; Criddle, Craig S; Morse, Molly; Hale, Robert C; Bott, Charles B; Rochman, Chelsea M

    2017-06-20

    The United States Microbead-Free Waters Act was signed into law in December 2015. It is a bipartisan agreement that will eliminate one preventable source of microplastic pollution in the United States. Still, the bill is criticized for being too limited in scope, and also for discouraging the development of biodegradable alternatives that ultimately are needed to solve the bigger issue of plastics in the environment. Due to a lack of an acknowledged, appropriate standard for environmentally safe microplastics, the bill banned all plastic microbeads in selected cosmetic products. Here, we review the history of the legislation and how it relates to the issue of microplastic pollution in general, and we suggest a framework for a standard (which we call "Ecocyclable") that includes relative requirements related to toxicity, bioaccumulation, and degradation/assimilation into the natural carbon cycle. We suggest that such a standard will facilitate future regulation and legislation to reduce pollution while also encouraging innovation of sustainable technologies.

  3. Estimating greenhouse gas fluxes from constructed wetlands used for water quality improvement

    OpenAIRE