WorldWideScience

Sample records for NCI study estimates

  1. Pharmacokinetics and Safety of Bortezomib in Patients with Advanced Malignancies and Varying Degrees of Liver Dysfunction: Phase 1 NCI Organ Dysfunction Working Group Study NCI-6432

    Science.gov (United States)

    LoRusso, Patricia M; Venkatakrishnan, Karthik; Ramanathan, Ramesh K; Sarantopoulos, John; Mulkerin, Daniel; Shibata, Stephen I; Hamilton, Anne; Dowlati, Afshin; Mani, Sridhar; Rudek, Michelle A; Takimoto, Chris H; Neuwirth, Rachel; Esseltine, Dixie-Lee; Ivy, Percy

    2013-01-01

    Purpose: The proteasome inhibitor bortezomib undergoes oxidative hepatic metabolism. This study (NCI-6432; NCT00091117) was conducted to evaluate bortezomib pharmacokinetics and safety in patients with varying degrees of hepatic impairment, to inform dosing recommendations in these special populations. Methods: Patients received bortezomib on days 1, 4, 8, and 11 of 21-day cycles. Patients were assigned to four hepatic function groups based on the National Cancer Institute Organ Dysfunction Working Group classification. Those with normal function received bortezomib at the 1.3 mg/m2 standard dose. Patients with severe, moderate, and mild impairment received escalating doses from 0.5, 0.7, and 1.0 mg/m2, respectively, up to a 1.3 mg/m2 maximum. Serial blood samples were collected for 24 hours post-dose on days 1 and 8 of cycle 1 for bortezomib plasma concentration measurements. Results: Sixty-one patients were treated, including 14 with normal hepatic function and 17, 12, and 18 with mild, moderate, and severe impairment, respectively. Mild hepatic impairment did not alter dose-normalized bortezomib exposure (AUC0-tlast) or Cmax compared with patients with normal function. Mean dose-normalized AUC0-tlast was increased by approximately 60% on day 8 in patients with moderate or severe impairment. Conclusions: Patients with mild hepatic impairment do not require a starting dose adjustment of bortezomib. Patients with moderate or severe hepatic impairment should be started at a reduced dose of 0.7 mg/m2. PMID:22394984
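    The group-based starting doses reported in this abstract can be written as a small lookup table. This is an illustrative sketch of the study's dosing scheme, not clinical guidance; the function name is hypothetical.

```python
# Starting bortezomib doses per hepatic-function group as reported for study
# NCI-6432. Illustrative only; the function name is a hypothetical helper.
STARTING_DOSE_MG_PER_M2 = {
    "normal": 1.3,    # standard dose
    "mild": 1.0,      # escalates toward the 1.3 mg/m2 maximum
    "moderate": 0.7,
    "severe": 0.5,
}
MAX_DOSE_MG_PER_M2 = 1.3

def starting_dose(hepatic_group: str) -> float:
    """Return the study's starting dose (mg/m2) for a hepatic-function group."""
    try:
        return STARTING_DOSE_MG_PER_M2[hepatic_group.lower()]
    except KeyError:
        raise ValueError(f"unknown hepatic-function group: {hepatic_group!r}")

print(starting_dose("moderate"))  # → 0.7
```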

  2. NCI & Division Obligations

    Science.gov (United States)

    Displays obligations for grants, contracts, training fellowships, intramural research, and management and support, including the number of grant awards, funding amounts, and percent of the total NCI budget.

  3. THE NCI STUDIES ON RADIATION DOSES AND CANCER RISKS IN THE MARSHALL ISLANDS ASSOCIATED WITH EXPOSURE TO RADIOACTIVE FALLOUT

    OpenAIRE

    Simon, Steven L.

    2012-01-01

    The U.S. National Cancer Institute (NCI, National Institutes of Health) was requested by the U.S. Congress in 2004 to assess the number of radiation-related illnesses to be expected among the people of the Marshall Islands from nuclear tests conducted there during 1946-1958. A thorough analysis conducted by the NCI concluded that 20 of the 66 nuclear devices tested in or near the Marshall Islands resulted in measurable fallout deposition on one or more of the inhabited atolls of the Marshall ...

  4. NCI Visuals Online

    Science.gov (United States)

    NCI Visuals Online contains images from the collections of the National Cancer Institute's Office of Communications and Public Liaison, including general biomedical and science-related images, cancer-specific scientific and patient care-related images, and portraits of directors and staff of the National Cancer Institute.

  5. Data Sets from Major NCI Initiatives

    Science.gov (United States)

    The NCI Data Catalog includes links to data collections produced by major NCI initiatives and other widely used data sets, including animal models, human tumor cell lines, epidemiology data sets, and genomics data sets from TCGA, TARGET, COSMIC, GSK, and NCI-60.

  6. NCI International EBV-Gastric Cancer Consortium

    Science.gov (United States)

    A collaboration among NCI and extramural investigators, established by DCEG in 2006, that utilizes data and biospecimens from completed and ongoing case series and observational studies of gastric cancer to replicate and extend findings from previous studies hindered by small numbers of EBV-positive cases, and to stimulate multidisciplinary research in this area.

  7. NCI Pediatric Preclinical Testing Consortium

    Science.gov (United States)

    NCI has awarded grants to five research teams to participate in its Pediatric Preclinical Testing Consortium, which is intended to help to prioritize which agents to pursue in pediatric clinical trials.

  8. NCI's Role in Immunotherapy Research

    Science.gov (United States)

    ... promising immunotherapies to the clinic more efficiently and cost effectively. For ... of the checkpoint inhibitor pembrolizumab in patients with ...

  9. Screening mammography. A missed clinical opportunity? Results of the NCI [National Cancer Institute] Breast Cancer Screening Consortium and national health interview survey studies

    International Nuclear Information System (INIS)

    Anon.

    1990-01-01

    Data from seven studies sponsored by the National Cancer Institute (NCI) were used to determine current rates of breast cancer screening and to identify the characteristics of and reasons for women not being screened. All seven studies were population-based surveys of women aged 50 to 74 years without breast cancer. While over 90% of non-Hispanic white respondents had regular sources of medical care, 46% to 76% had a clinical breast examination within the previous year, and only 25% to 41% had a mammogram. Less educated and poorer women had fewer mammograms. The two most common reasons women gave for never having had a mammogram were that they did not know they needed it and that their physician had not recommended it. Many physicians may have overlooked the opportunity to recommend mammography for older women when performing a clinical breast examination and to educate their patients about the benefit of screening mammography.

  10. Ternary copper(II) complex: NCI60 screening, toxicity studies, and evaluation of efficacy in xenograft models of nasopharyngeal carcinoma

    Science.gov (United States)

    Chu, Tai-Lin; Abdul Aziz, Norazlin; Mohd Kornain, Noor-Kaslina; Samiulla, D. S.; Lo, Kwok-Wai; Ng, Chew-Hee

    2018-01-01

    Copper(II) ternary complex, [Cu(phen)(C-dmg)(H2O)]NO3 was evaluated against a panel of cell lines, tested for in vivo efficacy in nasopharyngeal carcinoma xenograft models as well as for toxicity in NOD scid gamma mice. The Cu(II) complex displayed broad spectrum cytotoxicity against multiple cancer types, including lung, colon, central nervous system, melanoma, ovarian, and prostate cancer cell lines in the NCI-60 panel. The Cu(II) complex did not cause significant induction of cytochrome P450 (CYP) 3A and 1A enzymes but moderately inhibited CYP isoforms 1A2, 2C9, 2C19, 2D6, 2B6, 2C8 and 3A4. The complex significantly inhibited tumor growth in nasopharyngeal carcinoma xenograft bearing mice models at doses which were well tolerated without causing significant or permanent toxic side effects. However, higher doses which resulted in better inhibition of tumor growth also resulted in toxicity. PMID:29329342

  11. The generalizability of NCI-sponsored clinical trials accrual among women with gynecologic malignancies.

    Science.gov (United States)

    Mishkin, Grace; Minasian, Lori M; Kohn, Elise C; Noone, Anne-Michelle; Temkin, Sarah M

    2016-12-01

    Enrollment of a representative population in cancer clinical trials ensures scientific reliability and generalizability of results. This study evaluated the similarity of patients enrolled in NCI-supported group gynecologic cancer trials to the incident US population. Accruals to NCI-sponsored ovarian, uterine, and cervical cancer treatment trials between 2003 and 2012 were examined. Race, ethnicity, age, and insurance status were compared to the analogous US patient population estimated using adjusted SEER incidence data. There were 18,913 accruals to 156 NCI-sponsored gynecologic cancer treatment trials: ovarian (56%), uterine (32%), and cervical (12%). Ovarian cancer trials included the least racial, ethnic, and age diversity. Black women were notably underrepresented in ovarian trials (4% versus 11%). Hispanic patients were underrepresented in ovarian and uterine trials (4% and 5% versus 18% and 19%, respectively), but not in cervical cancer trials (14% versus 11%). Elderly patients were underrepresented in each disease area, with the greatest underrepresentation seen in ovarian cancer patients over the age of 75 (7% versus 29%). Privately insured women were overrepresented among accrued ovarian cancer patients (87% versus 76%), and the uninsured were overrepresented among women with uterine or cervical cancers. These patterns did not change over time. Several notable differences were observed between the patients accrued to NCI-funded trials and the incident population. Improving the representation of racial and ethnic minorities and elderly patients in cancer clinical trials continues to be a challenge and a priority. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Theoretical analysis of the binding of iron(III) protoporphyrin IX to 4-methoxyacetophenone thiosemicarbazone via DFT-D3, MEP, QTAIM, NCI, ELF, and LOL studies.

    Science.gov (United States)

    Nkungli, Nyiang Kennet; Ghogomu, Julius Numbonui

    2017-07-01

    Thiosemicarbazones display diverse pharmacological properties, including antimalarial activities. Their pharmacological activities have been studied in depth, but little of this research has focused on their antimalarial mode of action. To elucidate this antimalarial mechanism, we investigated the nature of the interactions between iron(III) protoporphyrin IX (Fe(III)PPIX) and the thione-thiol tautomers of 4-methoxyacetophenone thiosemicarbazone (MAPTSC). Dispersion-corrected density functional theory (DFT-D3), the quantum theory of atoms in molecules (QTAIM), the noncovalent interaction (NCI) index, the electron localization function (ELF), the localized orbital locator (LOL), and thermodynamic calculations were employed in this work. Fe(III)PPIX-MAPTSC binding is expected to inhibit hemozoin formation, thereby preventing Fe(III)PPIX detoxification in plasmodia. Preliminary studies geared toward the identification of atomic binding sites in the thione-thiol tautomers of MAPTSC were carried out using molecular electrostatic potential (MEP) maps and conceptual DFT-based local reactivity indices. The thionic sulfur and the N2-azomethine nitrogen/thiol sulfur of, respectively, the thione and thiol tautomers of MAPTSC were identified as the most favorable nucleophilic sites for electrophilic attack. The negative values of the computed Fe(III)PPIX-MAPTSC binding energies, enthalpies, and Gibbs free energies are indicative of the existence and stability of Fe(III)PPIX-MAPTSC complexes. MAPTSC-Fe(III) coordinate bonds and strong hydrogen bonds (N-H···O) between the NH2 group in MAPTSC and the C=O group in one propionate side chain of Fe(III)PPIX are crucial to Fe(III)PPIX-MAPTSC binding. QTAIM, NCI, ELF, and LOL analyses revealed a subtle interplay of weak noncovalent interactions dominated by dispersive-like van der Waals interactions between Fe(III)PPIX and MAPTSC that stabilize the Fe(III)PPIX-MAPTSC complexes.

  13. NCI-MATCH Trial Links Targeted Drugs to Mutations

    Science.gov (United States)

    Investigators for the nationwide trial, NCI-MATCH: Molecular Analysis for Therapy Choice, announced that the trial will seek to determine whether targeted therapies for people whose tumors have specific gene mutations will be effective regardless of their cancer type. NCI-MATCH will incorporate more than 20 different study drugs or drug combinations, each targeting a specific gene mutation, in order to match each patient in the trial with a therapy that targets a molecular abnormality in their tumor.
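    The matching idea described above, assigning a study arm by actionable mutation rather than by cancer type, can be sketched as a simple lookup. The mutation/drug pairings and function name below are illustrative placeholders, not the trial's actual arms.

```python
# Toy sketch of mutation-based arm assignment (NCI-MATCH concept).
# Mutation/arm pairings are hypothetical placeholders for illustration.
ARMS = {
    "BRAF V600E": "BRAF inhibitor arm",
    "ERBB2 amplification": "HER2-targeted arm",
    "PIK3CA mutation": "PI3K inhibitor arm",
}

def assign_arm(tumor_mutations):
    """Return the first study arm whose target mutation is present, else None."""
    for mutation, arm in ARMS.items():
        if mutation in tumor_mutations:
            return arm
    return None  # no actionable match; patient is not assigned

print(assign_arm({"BRAF V600E", "TP53 mutation"}))  # → BRAF inhibitor arm
```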

  14. NCI collaborates with Multiple Myeloma Research Foundation

    Science.gov (United States)

    The National Cancer Institute (NCI) announced a collaboration with the Multiple Myeloma Research Foundation (MMRF) to incorporate MMRF's wealth of genomic and clinical data on the disease into the NCI Genomic Data Commons (GDC), a publicly available database.

  15. NCI Alliance for Nanotechnology in Cancer

    Science.gov (United States)

    The NCI Alliance for Nanotechnology in Cancer funds the Cancer Nanotechnology Training Centers jointly with the NCI Cancer Training Center. Find out about the Centers funded to date, which train our next generation of scientists in the field of cancer nanotechnology.

  16. License Agreements | NCI Technology Transfer Center | TTC

    Science.gov (United States)

    NCI Technology Transfer Center (TTC) licenses the discoveries of NCI and nine other NIH Institutes so new technologies can be developed and commercialized, to convert them into public health benefits.

  17. NCI Holds on to Defelice Cup | Poster

    Science.gov (United States)

    NCI kept the Defelice Cup trophy this year after beating Leidos Biomedical Research, 15 to 9, at the 10th annual Ronald H. Defelice Golf Tournament held on Columbus Day. Sixteen players on each team battled it out at the yearly contractor vs. government tournament held at Rattlewood Golf Course in Mount Airy, Md. NCI leads the series 6–4. “The score was the highest NCI margin

  18. A Phase 2 Study of Flavopiridol (Alvocidib) in Combination with Docetaxel in Refractory, Metastatic Pancreatic Cancer (NCI#6366)

    Science.gov (United States)

    Carvajal, Richard D.; Tse, Archie; Shah, Manish A.; Lefkowitz, Robert A.; Gonen, Mithat; Gilman-Rosen, Lisa; Kortmansky, Jeremy; Kelsen, David P.; Schwartz, Gary K.; O'Reilly, Eileen M.

    2014-01-01

    Background/Aims: Pancreatic adenocarcinoma (PC) harbors frequent alterations of p16, resulting in cell cycle dysregulation. A phase I study of docetaxel and flavopiridol, a pan-cyclin dependent kinase inhibitor, demonstrated encouraging clinical activity in PC. This phase II study was designed to further define the efficacy and toxicity of this regimen in patients with previously treated PC. Methods: Patients with gemcitabine-refractory, metastatic PC were treated with docetaxel 35 mg/m2 followed by flavopiridol 80 mg/m2 on days 1, 8, and 15 of a 28-day cycle. Tumor measurements were performed every two cycles. A Simon two-stage design was used to evaluate the primary endpoint of response. Results: Ten patients were enrolled; nine were evaluable for response. No objective responses were observed; however, three patients (33%) achieved transient stable disease, with one of these patients achieving a 20% reduction in tumor size. Median survival was 4.2 months, with no patients alive at the time of analysis. Adverse events were significant, with seven patients (78%) requiring ≥1 dose reduction for transaminitis (11%), grade 4 neutropenia (33%), grade 3 fatigue (44%), and grade 3 diarrhea (22%). Conclusions: The combination of flavopiridol and docetaxel has minimal activity and significant toxicity in this patient population. These results reflect the challenges of treating patients with PC in a second-line setting where the risk/benefit equation is tightly balanced. PMID:19451750

  19. NCCAM/NCI Phase 1 Study of Mistletoe Extract and Gemcitabine in Patients with Advanced Solid Tumors

    Directory of Open Access Journals (Sweden)

    Patrick J. Mansky

    2013-01-01

    Purpose: European mistletoe (Viscum album L.) extracts (mistletoe) are commonly used for cancer treatment in Europe. This phase I study of gemcitabine (GEM) and mistletoe in advanced solid cancers (ASC) evaluated: (1) safety, toxicity, and maximum tolerated dose (MTD); (2) absolute neutrophil count (ANC) recovery; (3) formation of mistletoe lectin antibodies (ML ab); (4) cytokine plasma concentrations; (5) clinical response; and (6) pharmacokinetics of GEM. Methods: Design: increasing mistletoe and fixed GEM dose in stage I, and increasing doses of GEM with a fixed dose of mistletoe in stage II. Dose-limiting toxicities (DLT) were grade (G) 3 nonhematologic and G4 hematologic events; MTD was reached with 2 DLTs in one dosage level. Response in stage IV ASC was assessed with descriptive statistics. Statistical analyses examined clinical response/survival and ANC recovery. Results: DLTs were G4 neutropenia, G4 thrombocytopenia, G4 acute renal failure, and G3 cellulitis, attributed to mistletoe. GEM 1380 mg/m2 and mistletoe 250 mg combined were the MTD. Of 44 patients, 24 developed nonneutropenic fever and flu-like syndrome. GEM pharmacokinetics were unaffected by mistletoe. All patients developed ML3 IgG antibodies. ANC showed a trend to increase between baseline and cycle 2 in stage I dose escalation. 6% of patients showed partial response, 42% stable disease. Median survival was 200 days. Compliance with mistletoe injections was high. Conclusion: GEM plus mistletoe is well tolerated. No botanical/drug interactions were observed. Clinical response is similar to GEM alone.

  20. Selected Publications by the NCI Director

    Science.gov (United States)

    Dr. Norman Sharpless's written work on cancer research appears in many leading scientific journals, as well as a variety of other publications. This page lists some of the articles published by Dr. Sharpless since becoming NCI director.

  1. Find an NCI-Designated Cancer Center

    Science.gov (United States)

    Find the locations of NCI-designated cancer centers by area, region, state, or name. The directory includes contact information to help health care providers and cancer patients with referrals to clinical trials.

  2. About TTC | NCI Technology Transfer Center | TTC

    Science.gov (United States)

    The TTC facilitates licensing and co-development partnerships between biomedical industry, academia, and government agencies and the research laboratories of the NCI and nine other institutes and centers of NIH.

  3. Life Outside NCI | Cancer Prevention Fellowship Program

    Science.gov (United States)

    The CPFP Office is located at the NCI facilities in Rockville, Maryland, near the Nation’s Capital. With the convenient Metro subway reaching throughout the metropolitan area, transportation is within easy reach.

  4. NCI at Frederick Ebola Response Team | Poster

    Science.gov (United States)

    Editor’s note: This article was adapted from the Employee Diversity Team’s display case exhibit “Recognizing the NCI at Frederick Ebola Response Team,” in the lobby of Building 549. The Poster staff recognizes that this article does not include everyone who was involved in the response to the Ebola crisis, both at NCI at Frederick and in Africa. When the Ebola crisis broke out

  5. International Fellows of NCI at Frederick | Poster

    Science.gov (United States)

    Each year, the Employee Diversity Team (EDT) acknowledges members of the NCI at Frederick community for their achievements and contributions toward the mission of the facility. Historically, the team has profiled the “Women of NCI at Frederick,” but this year the team decided instead to shed light on the diverse and successful individuals who make up the international fellows community.

  6. NCI's Distributed Geospatial Data Server

    Science.gov (United States)

    Larraondo, P. R.; Evans, B. J. K.; Antony, J.

    2016-12-01

    Earth systems, environmental and geophysics datasets are an extremely valuable source of information about the state and evolution of the Earth. However, different disciplines and applications require this data to be post-processed in different ways before it can be used. For researchers experimenting with algorithms across large datasets or combining multiple data sets, the traditional approach to batch data processing and storing all the output for later analysis rapidly becomes unfeasible, and often requires additional work to publish for others to use. Recent developments on distributed computing using interactive access to significant cloud infrastructure opens the door for new ways of processing data on demand, hence alleviating the need for storage space for each individual copy of each product. The Australian National Computational Infrastructure (NCI) has developed a highly distributed geospatial data server which supports interactive processing of large geospatial data products, including satellite Earth Observation data and global model data, using flexible user-defined functions. This system dynamically and efficiently distributes the required computations among cloud nodes and thus provides a scalable analysis capability. In many cases this completely alleviates the need to preprocess and store the data as products. This system presents a standards-compliant interface, allowing ready accessibility for users of the data. Typical data wrangling problems such as handling different file formats and data types, or harmonising the coordinate projections or temporal and spatial resolutions, can now be handled automatically by this service. The geospatial data server exposes functionality for specifying how the data should be aggregated and transformed. 
The resulting products can be served using several standards such as the Open Geospatial Consortium's (OGC) Web Map Service (WMS) or Web Feature Service (WFS), Open Street Map tiles, or raw binary arrays under
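    The standards-based serving described above can be illustrated with a minimal OGC WMS 1.3.0 GetMap request. The endpoint URL and layer name below are hypothetical placeholders, not NCI's actual service.

```python
# Sketch: build a WMS 1.3.0 GetMap URL of the kind a standards-compliant
# geospatial data server accepts. Endpoint and layer are hypothetical.
from urllib.parse import urlencode

def wms_getmap_url(endpoint, layer, bbox, width=512, height=512,
                   crs="EPSG:4326", fmt="image/png"):
    """Build a WMS 1.3.0 GetMap URL; bbox is (min_y, min_x, max_y, max_x) for EPSG:4326."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": fmt,
    }
    return endpoint + "?" + urlencode(params)

# Hypothetical layer covering the Australian continent.
url = wms_getmap_url("https://example.nci.org.au/wms", "landsat_ndvi",
                     (-44.0, 112.0, -10.0, 154.0))
print(url)
```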

  7. The relationship between CNS prophylactic treatment and smoking behavior in adult survivors of childhood leukemia: a National Cancer Institute and Children's Cancer Group (NCI/CCG) study

    International Nuclear Information System (INIS)

    Tao, M.L.; Weiss, R.E.; Guo, M.D.; Byrne, J.; Mills, J.L.; Robison, L.L.; Zeltzer, L.K.

    1997-01-01

    Purpose/Objective: To determine the relationship of both cranial radiation dose (CRD) and intra-thecal methotrexate (IT-MTX) dose with smoking behavior in survivors of childhood acute lymphoblastic leukemia (ALL). Material and Methods: A retrospective cohort study was conducted by NCI/CCG with 593 young adult survivors (median age, 21.6 years), treated prior to age 20 years on CCG ALL protocols from 1970 to 1986, and 409 sibling controls (median age, 24.5 years). Subjects were telephone surveyed regarding risk-taking behaviors, including cigarette smoking. A previous report has compared the smoking behavior of survivors to controls; this report will focus on the association between CNS treatment variables and smoking behavior for survivors only. Contingency table analysis was used to determine the prevalence of having ever smoked regularly (i.e. ≥ 100 cigarettes total and daily use for ≥ 6 months) for each treatment group: combinations of CRD (0-18 Gy vs. 24 Gy) and IT-MTX (0 to ≤ 83 mg vs. >83 mg). Logistic regression analysis was used to examine CRD, IT-MTX dose, age at diagnosis and age at follow-up as predictors for smoking. Too few subjects received intravenous methotrexate to evaluate this as an explanatory variable. The analysis was done separately for survivors from treatment periods 1 and 2 (1970-77 and 1978-86, respectively) to control for the time period cohort effect (which we have previously demonstrated to be significant). These treatment period definitions also correlated with a shift in protocol treatment trends from 24 Gy to 0-18 Gy and lower dose IT-MTX to higher dose IT-MTX. Results: Among the survivors from treatment period 1 who received 24 Gy CRD, those treated with higher dose IT-MTX (>83 mg) were significantly more likely to have ever been regular smokers than those treated with no or lower dose IT-MTX (31% vs. 16%, p=0.016). Among survivors from treatment period 1 who received 0-18 Gy CRD, the smoking prevalence was also greater in

  8. Global Proteome Analysis of the NCI-60 Cell Line Panel

    Directory of Open Access Journals (Sweden)

    Amin Moghaddas Gholami

    2013-08-01

    The NCI-60 cell line collection is a very widely used panel for the study of cellular mechanisms of cancer in general and in vitro drug action in particular. It is a model system for the tissue types and genetic diversity of human cancers and has been extensively molecularly characterized. Here, we present a quantitative proteome and kinome profile of the NCI-60 panel covering, in total, 10,350 proteins (including 375 protein kinases), among them a core cancer proteome of 5,578 proteins that were consistently quantified across all tissue types. Bioinformatic analysis revealed strong cell line clusters according to tissue type and disclosed hundreds of differentially regulated proteins representing potential biomarkers for numerous tumor properties. Integration with public transcriptome data showed considerable similarity between mRNA and protein expression. Modeling of proteome and drug-response profiles for 108 FDA-approved drugs identified known and potential protein markers for drug sensitivity and resistance. To enable community access to this unique resource, we incorporated it into a public database for comparative and integrative analysis (http://wzw.tum.de/proteomics/nci60).

  9. CRADA Payment Options | NCI Technology Transfer Center | TTC

    Science.gov (United States)

    NCI TTC CRADA payment options: electronic payments by wire transfer via Fedwire, a check mailed to the Institute or Center, or Automated Clearing House (ACH)/Electronic Funds Transfer (EFT) payments via Pay.gov (NCI only).

  10. At NCI, Supporting the Best Science

    Science.gov (United States)

    Yesterday, at the AACR annual meeting, Dr. Doug Lowy spoke directly to the research community about his goals as NCI Acting Director. Dr. Lowy said that he plans to continue many of the programs launched by his predecessor, Dr. Harold Varmus, and to sharp

  11. NCI designated cancer center funding not influenced by organizational structure.

    Science.gov (United States)

    Wolfe, Margaret E; Yagoda, Daniel; Thurman, Paul W; Luna, Jorge M; Figg, William Douglas

    2009-05-01

    National Cancer Institute (NCI)-designated cancer centers use one of three organizational structures. The hypothesis of this study is that there are differences in the amount of annual NCI funding per faculty member based on a cancer center's organizational structure. The study also considers the impact of secondary factors (i.e., the existence of a clinical program, and the region and size of the city in which the cancer center is located) on funding and the number of Howard Hughes Medical Institute (HHMI) investigators at each cancer center. Of the 63 cancer centers, 44 use a matrix structure, 16 have a freestanding structure, and three have a Department of Oncology structure. Kruskal-Wallis tests reveal no statistically significant differences in the amount of funding per faculty member or the number of HHMI investigators between centers with a matrix, freestanding, or Department of Oncology structure. Online research and telephone interviews with each cancer center were used to gather information, including organizational structure, the presence of a clinical program, the number of faculty members, and the number of HHMI investigators. Statistical tests were used to assess the impact that organizational structure has on the amount of funding per faculty member and the number of HHMI investigators. While the results seem to suggest that the organizational structure of a given cancer center does not affect the amount of NCI funding or the number of HHMI investigators it attracts, any such relationship is likely masked by the small sample size in this study. Further studies may be appropriate to examine the effect organizational structure has on other measurements relevant to cancer centers, such as the quality and quantity of research produced.
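    The Kruskal-Wallis test used in the study above compares groups by rank rather than raw value. A self-contained sketch of the H statistic follows, applied to hypothetical funding-per-faculty figures ($M) for the three structures; the data values are made up for illustration, and no tie correction is applied.

```python
# Kruskal-Wallis H statistic: H = 12/(N(N+1)) * sum(R_i^2/n_i) - 3(N+1),
# using average ranks for tied values (no tie correction applied).
def kruskal_wallis_h(*groups):
    pooled = sorted((value, gi) for gi, group in enumerate(groups) for value in group)
    n = len(pooled)
    rank_sums = [0.0] * len(groups)
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j][0] == pooled[i][0]:
            j += 1  # extend over a run of tied values
        avg_rank = (i + 1 + j) / 2.0  # average of the 1-based ranks i+1 .. j
        for k in range(i, j):
            rank_sums[pooled[k][1]] += avg_rank
        i = j
    return 12.0 / (n * (n + 1)) * sum(
        rs * rs / len(group) for rs, group in zip(rank_sums, groups)
    ) - 3 * (n + 1)

# Hypothetical NCI funding per faculty member ($M) by organizational structure.
matrix = [4.2, 3.8, 5.1, 4.9]
freestanding = [4.5, 3.9, 4.0]
dept_oncology = [5.0, 4.4]
h = kruskal_wallis_h(matrix, freestanding, dept_oncology)
print(round(h, 2))  # → 1.34
```

    A small H (compared against the chi-squared distribution with k−1 degrees of freedom) indicates no significant difference between groups, consistent with the study's finding.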

  12. UNC Cancer Center Director to Lead NCI.

    Science.gov (United States)

    2017-08-01

    President Donald Trump has selected Norman "Ned" Sharpless, MD, director of the University of North Carolina Lineberger Comprehensive Cancer Center, to lead the NCI. The news was met with widespread approval among cancer researchers, who view Sharpless as a strong communicator who can ably represent the needs of the cancer community in the face of proposed funding cuts. ©2017 American Association for Cancer Research.

  13. Plasmon resonances on graphene

    OpenAIRE

    Alcaraz Iranzo, David

    2014-01-01

    Official master's thesis carried out in collaboration with Universitat Autònoma de Barcelona (UAB), Universitat de Barcelona (UB), and Institut de Ciències Fotòniques (ICFO). Graphene is used as a novel, versatile plasmonic material. The most common way to implement resonant light-plasmon coupling is to etch graphene into periodic nanostructures, which is invasive. Here, we study a non-invasive way to engineer graphene plasmon resonances, based on periodic doping profiles. The plasmon r...

  14. NCI's Transdisciplinary High Performance Scientific Data Platform

    Science.gov (United States)

    Evans, Ben; Antony, Joseph; Bastrakova, Irina; Car, Nicholas; Cox, Simon; Druken, Kelsey; Evans, Bradley; Fraser, Ryan; Ip, Alex; Kemp, Carina; King, Edward; Minchin, Stuart; Larraondo, Pablo; Pugh, Tim; Richards, Clare; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2016-04-01

    The Australian National Computational Infrastructure (NCI) manages Earth Systems data collections sourced from several domains and organisations onto a single High Performance Data (HPD) Node to further Australia's national priority research and innovation agenda. The NCI HPD Node has rapidly established its value, currently managing over 10 PBytes of datasets from collections that span a wide range of disciplines including climate, weather, environment, geoscience, geophysics, water resources and social sciences. Importantly, in order to facilitate broad user uptake, maximise reuse and enable transdisciplinary access through software and standardised interfaces, the datasets, associated information systems and processes have been incorporated into the design and operation of a unified platform that NCI has called the National Environmental Research Data Interoperability Platform (NERDIP). The key goal of the NERDIP is to regularise data access so that it is easily discoverable, interoperable for different domains and enabled for high performance methods. It adopts and implements international standards and data conventions, and promotes scientific integrity within a high performance computing and data analysis environment. NCI has established a rich and flexible computing environment to access this data, through the NCI supercomputer; a private cloud that supports both domain-focused virtual laboratories and in-common interactive analysis interfaces; as well as remotely through scalable data services. Data collections of this importance must be managed with careful consideration of both their current use and the needs of the end-communities, as well as their future potential use, such as transitioning to more advanced software and improved methods. It is therefore critical that the data platform is both well-managed and trusted for stable production use (including transparency and reproducibility), agile enough to incorporate new technological advances and

  15. Resveratrol enhances radiosensitivity of human non-small cell lung cancer NCI-H838 cells accompanied by inhibition of nuclear factor-kappa B activation

    International Nuclear Information System (INIS)

    Liao, Hui-Fen; Kuo, Cheng-Deng; Yang, Yuh-Cheng; Lin, Chin-Ping; Tai, Hung-Chi; Chen, Yu-Jen; Chen, Yu-Yawn

    2005-01-01

    Resveratrol, a polyphenol in red wine, possesses many pharmacological activities including cardio-protection, chemoprevention, anti-tumor effects, and nuclear factor-kappa B (NF-κB) inactivation. The present study was designed to evaluate the effects and possible mechanism of resveratrol in enhancing radiosensitivity of lung cancer cells. Human non-small cell lung cancer NCI-H838 cells were irradiated with or without resveratrol pretreatment. The surviving fraction and sensitizer enhancement ratio (SER) were estimated by using a colony formation assay and the linear-quadratic model. The cell-cycle distribution was evaluated by using propidium iodide staining and flow cytometry. An enzyme-linked immunosorbent assay (ELISA)-based assay with immobilized oligonucleotide was performed to assess the DNA binding activity of NF-κB. Resveratrol had no direct growth-inhibitory effect on NCI-H838 cells treated for 24 hours with doses up to 25 μM. Pretreatment with resveratrol significantly enhanced cell killing by radiation, with an SER up to 2.2. Radiation activated NF-κB, an effect reversed by resveratrol pretreatment. Resveratrol resulted in a decrease of cells in the G0/G1 phase and an increase in the S phase. Our results demonstrate that resveratrol enhances the radiosensitivity of NCI-H838 cells accompanied by NF-κB inhibition and S-phase arrest. (author)
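    The linear-quadratic (LQ) model behind the sensitizer enhancement ratio (SER) quoted above can be sketched numerically. The alpha/beta parameters below are hypothetical; the study fit its own values from colony-formation data, and the parameter shift used to mimic sensitization is illustrative only.

```python
# Sketch of the LQ survival model and an SER computed as the ratio of doses
# giving equal survival without vs. with a sensitizer. Parameters hypothetical.
import math

def surviving_fraction(dose_gy, alpha, beta):
    """LQ model: S(D) = exp(-(alpha*D + beta*D^2))."""
    return math.exp(-(alpha * dose_gy + beta * dose_gy ** 2))

def dose_for_survival(s_target, alpha, beta):
    """Invert the LQ model: positive root of beta*D^2 + alpha*D + ln(S) = 0."""
    return (-alpha + math.sqrt(alpha ** 2 - 4 * beta * math.log(s_target))) / (2 * beta)

# SER at 10% survival: dose needed with radiation alone vs. with a
# hypothetical sensitizer that steepens the survival curve.
d_alone = dose_for_survival(0.1, alpha=0.15, beta=0.03)
d_sensitized = dose_for_survival(0.1, alpha=0.30, beta=0.06)
print(round(d_alone / d_sensitized, 2))  # → 1.58
```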

  16. Spatial patterns of FUS-immunoreactive neuronal cytoplasmic inclusions (NCI) in neuronal intermediate filament inclusion disease (NIFID).

    Science.gov (United States)

    Armstrong, Richard A; Gearing, Marla; Bigio, Eileen H; Cruz-Sanchez, Felix F; Duyckaerts, Charles; Mackenzie, Ian R A; Perry, Robert H; Skullerud, Kari; Yokoo, Hideaki; Cairns, Nigel J

    2011-11-01

    Neuronal intermediate filament inclusion disease (NIFID), a rare form of frontotemporal lobar degeneration (FTLD), is characterized neuropathologically by focal atrophy of the frontal and temporal lobes, neuronal loss, gliosis, and neuronal cytoplasmic inclusions (NCI) containing epitopes of ubiquitin and neuronal intermediate filament (IF) proteins. Recently, the 'fused in sarcoma' (FUS) protein (encoded by the FUS gene) has been shown to be a component of the inclusions of NIFID. To further characterize FUS proteinopathy in NIFID, we studied the spatial patterns of the FUS-immunoreactive NCI in the frontal and temporal cortex of 10 cases. In the cerebral cortex, sectors CA1/2 of the hippocampus, and the dentate gyrus (DG), the FUS-immunoreactive NCI were frequently clustered, and the clusters were regularly distributed parallel to the tissue boundary. In a proportion of cortical gyri, the cluster size of the NCI approximated that of the columns of cells associated with the cortico-cortical projections. There were no significant differences in the frequency of different types of spatial patterns with disease duration or disease stage. Clusters of NCI in the upper and lower cortex were significantly larger using FUS immunohistochemistry (IHC) than with phosphorylated neurofilament heavy polypeptide (NEFH) or α-internexin (INA) IHC. We concluded: (1) FUS-immunoreactive NCI exhibit spatial patterns similar to those of analogous inclusions in the tauopathies and synucleinopathies; (2) clusters of FUS-immunoreactive NCI are larger than those revealed by NEFH or INA; and (3) the spatial patterns of the FUS-immunoreactive NCI suggest degeneration of the cortico-cortical projections in NIFID.

  17. A sense of urgency: Evaluating the link between clinical trial development time and the accrual performance of cancer therapy evaluation program (NCI-CTEP) sponsored studies.

    Science.gov (United States)

    Cheng, Steven K; Dietrich, Mary S; Dilts, David M

    2010-11-15

    Postactivation barriers to oncology clinical trial accrual are well documented; potential barriers prior to trial opening are not. We investigate one such barrier: trial development time. National Cancer Institute Cancer Therapy Evaluation Program (CTEP)-sponsored trials for all therapeutic, nonpediatric phase I, I/II, II, and III studies activated between 2000 and 2004 were investigated over an 8-year period (n = 419). Successful trials were those achieving 100% of the minimum accrual goal. Time to open a study was the calendar time from initial CTEP submission to trial activation. Multivariate logistic regression analysis was used to calculate unadjusted and adjusted odds ratios (OR), controlling for study phase and size of expected accruals. Among the CTEP-approved oncology trials, 37.9% (n = 221) failed to attain the minimum accrual goals, with 70.8% (n = 14) of phase III trials resulting in poor accrual. A total of 16,474 patients (42.5% of accruals) were enrolled in studies that were unable to achieve the projected minimum accrual goal. Trials requiring less than 12 months of development were significantly more likely to achieve accrual goals (OR, 2.15; 95% confidence interval, 1.29-3.57; P = 0.003) than trials with the median development time of 12 to 18 months. Trials requiring a development time of greater than 24 months were significantly less likely to achieve accrual goals (OR, 0.40; 95% confidence interval, 0.20-0.78; P = 0.011) than trials with the median development time. A large percentage of oncology clinical trials do not achieve minimum projected accruals. Trial development time appears to be one important predictor of the likelihood of successfully achieving the minimum accrual goals. ©2010 AACR.
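
The accrual analysis above reports odds ratios with 95% confidence intervals; the unadjusted version of that computation, from a 2x2 table, can be sketched as below. The counts are hypothetical, invented for illustration, and the study's adjusted ORs additionally control for phase and expected accrual size via multivariate logistic regression.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table:
               achieved goal | failed
    fast dev:       a        |   b
    slow dev:       c        |   d
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration (not the study's data).
print(odds_ratio_ci(80, 40, 60, 60))
```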

  18. Paracytosis of Haemophilus influenzae through cell layers of NCI-H292 lung epithelial cells

    NARCIS (Netherlands)

    van Schilfgaarde, M.; van Alphen, L.; Eijk, P.; Everts, V.; Dankert, J.

    1995-01-01

    Haemophilus influenzae penetrates the respiratory epithelium during carriage and invasive disease, including respiratory tract infections. We developed an in vitro model system consisting of lung epithelial NCI-H292 cells on permeable supports to study the passage of H. influenzae through lung

  19. A Comparative Dosimetric Study of Adjuvant 3D Conformal Radiotherapy for Operable Stomach Cancer Versus AP-PA Conventional Radiotherapy in NCI-Cairo

    International Nuclear Information System (INIS)

    El-Hossiny, H.A.; Diab, N.A.; El-Taher, M.M.

    2009-01-01

    This study compared a multiple-field conformal technique with the AP-PA technique with respect to target volume coverage and dose to normal tissues. Materials and Methods: Seventeen patients with stage II-III adenocarcinoma of the stomach, who presented to the radiotherapy department of the National Cancer Institute, Cairo, between February 2009 and March 2010, were treated with adjuvant postoperative chemoradiotherapy using a 3D conformal radiotherapy technique consisting of a mono-isocentric arrangement of 4-6 radiation fields. For each patient, a second radiotherapy treatment plan was prepared using anteroposterior (AP-PA) fields; the two techniques were then compared using dose volume histogram (DVH) analysis. Results: Comparing the DVHs, the planning target volume (PTV) was adequately covered in both (3D and 2D) plans, while the left kidney and spinal cord received lower radiation doses with the conformal technique. The liver dose was higher with the 3D technique, but still well below liver tolerance. Conclusions: Doses with both the 3D conformal and AP-PA conventional techniques are within the tolerance of normal tissues. Regarding the left kidney and spinal cord, 3D conformal radiotherapy is superior to the AP-PA conventional technique, but it delivers a higher dose to the liver.

  20. Mission & Role | NCI Technology Transfer Center | TTC

    Science.gov (United States)

    The NCI TTC serves as the focal point for implementing the Federal Technology Transfer Act to utilize patents as an incentive for commercial development of technologies and to establish research collaborations and licensing among academia, federal laboratories, non-profit organizations, and industry. The TTC supports technology development activities for the National Cancer Institute and nine other NIH Institutes and Centers. TTC staff negotiate co-development agreements and licenses with universities, non-profit organizations, and pharmaceutical and biotechnology companies to ensure compliance with Federal statutes, regulations, and the policies of the National Institutes of Health. TTC also reviews employee invention reports and makes recommendations concerning the filing of domestic and foreign patent applications.

  1. The Prostate cancer Intervention Versus Observation Trial:VA/NCI/AHRQ Cooperative Studies Program #407 (PIVOT): design and baseline results of a randomized controlled trial comparing radical prostatectomy to watchful waiting for men with clinically localized prostate cancer.

    Science.gov (United States)

    Wilt, Timothy J; Brawer, Michael K; Barry, Michael J; Jones, Karen M; Kwon, Young; Gingrich, Jeffrey R; Aronson, William J; Nsouli, Imad; Iyer, Padmini; Cartagena, Ruben; Snider, Glenn; Roehrborn, Claus; Fox, Steven

    2009-01-01

    Prostate cancer is the most common noncutaneous malignancy and the second leading cause of cancer death in men. Ninety percent of men with prostate cancer are aged over 60 years, are diagnosed by early detection with the prostate specific antigen (PSA) blood test, and have disease believed to be confined to the prostate gland (clinically localized). Common treatments for clinically localized prostate cancer include watchful waiting, surgery to remove the prostate gland (radical prostatectomy), external beam radiation therapy, interstitial radiation therapy (brachytherapy), and androgen deprivation. Little is known about the relative effectiveness and harms of these treatments due to the paucity of randomized controlled trials. The VA/NCI/AHRQ Cooperative Studies Program Study #407: Prostate cancer Intervention Versus Observation Trial (PIVOT), initiated in 1994, is a multicenter randomized controlled trial comparing radical prostatectomy to watchful waiting in men with clinically localized prostate cancer. We describe the study rationale, design, recruitment methods, and baseline characteristics of PIVOT enrollees. We provide comparisons with eligible men declining enrollment and with men participating in another recently reported randomized trial of radical prostatectomy versus watchful waiting conducted in Scandinavia. We screened 13,022 men with prostate cancer at 52 United States medical centers for potential enrollment. Of these, 5023 met initial age, comorbidity, and disease eligibility criteria, and a total of 731 men agreed to participate and were randomized. The mean age of enrollees was 67 years. Nearly one-third were African-American. Approximately 85% reported they were fully active. The median prostate specific antigen (PSA) was 7.8 ng/mL (mean 10.2 ng/mL). In three-fourths of men the primary reason for biopsy leading to a diagnosis of prostate cancer was a PSA elevation or rise. Using previously developed tumor risk categorizations incorporating PSA levels, Gleason

  2. Differential regulation of human 3β-hydroxysteroid dehydrogenase type 2 for steroid hormone biosynthesis by starvation and cyclic AMP stimulation: studies in the human adrenal NCI-H295R cell model.

    Directory of Open Access Journals (Sweden)

    Sameer Udhane

    Full Text Available Human steroid biosynthesis depends on a specifically regulated cascade of enzymes including the 3β-hydroxysteroid dehydrogenases (HSD3Bs). Type 2 HSD3B catalyzes the conversion of pregnenolone, 17α-hydroxypregnenolone, and dehydroepiandrosterone to progesterone, 17α-hydroxyprogesterone, and androstenedione in the human adrenal cortex and the gonads, but the exact regulation of this enzyme is unknown. Therefore, the specific downregulation of HSD3B2 at adrenarche, around age 6-8 years, and the characteristic upregulation of HSD3B2 in the ovaries of women suffering from polycystic ovary syndrome remain unexplained, prompting us to study the regulation of HSD3B2 in adrenal NCI-H295R cells. Our studies confirm that the HSD3B2 promoter is regulated by the transcription factors GATA, Nur77, and SF1/LRH1 in concert, and that the NBRE/Nur77 site is crucial for hormonal stimulation with cAMP. In fact, these three transcription factors together were able to transactivate the HSD3B2 promoter in placental JEG3 cells, which normally do not express HSD3B2. By contrast, epigenetic mechanisms such as methylation and acetylation seem not to be involved in controlling HSD3B2 expression. Cyclic AMP was found to exert differential effects on HSD3B2 when comparing short-term (acute) versus long-term (chronic) stimulation. Short cAMP stimulation inhibited HSD3B2 activity directly, possibly due to regulation at the co-factor or substrate level or posttranslational modification of the protein. Long cAMP stimulation attenuated HSD3B2 inhibition and increased HSD3B2 expression through transcriptional regulation. Although the PKA and MAPK pathways are obvious candidates for transmitting the cAMP signal to HSD3B2, our studies using PKA and MEK1/2 inhibitors revealed no such downstream signaling of cAMP. However, both signaling pathways were clearly regulating HSD3B2 expression.

  3. Published Research - NCI Alliance for Nanotechnology in Cancer

    Science.gov (United States)

    The NCI Alliance for Nanotechnology in Cancer has published much exciting and impactful research over the years. Find here a list of these publications indexed in PubMed, along with others across the field of cancer nanotechnology.

  4. NCI and the Precision Medicine Initiative®

    Science.gov (United States)

    NCI's activities related to precision medicine focus on new and expanded precision medicine clinical trials; mechanisms to overcome drug resistance to cancer treatments; and developing a shared digital repository of precision medicine trials data.

  5. Invention Development Program Helps Nurture NCI at Frederick Technologies | Poster

    Science.gov (United States)

    The Invention Development Fund (IDF) was piloted by the Technology Transfer Center (TTC) in 2014 to facilitate the commercial development of NCI technologies. The IDF received a second round of funding from the NCI Office of the Director and the Office of Budget and Management to establish the Invention Development Program (IDP) for fiscal year 2016. The IDP is using these funds to help advance a second set of inventions.

  6. Curcumin Inhibits Growth of Human NCI-H292 Lung Squamous Cell Carcinoma Cells by Increasing FOXA2 Expression

    Directory of Open Access Journals (Sweden)

    Lingling Tang

    2018-02-01

    Full Text Available Lung squamous cell carcinoma (LSCC) is a common histological lung cancer subtype, but unlike lung adenocarcinoma, limited therapeutic options are available for its treatment. Curcumin, a natural compound, may have anticancer effects in various cancer cells, but how it may be used to treat LSCC has not been well studied. Here, we applied curcumin to the human NCI-H292 LSCC cell line to test its anticancer effects and explored potential underlying mechanisms of action. Curcumin treatment inhibited NCI-H292 cell growth and increased FOXA2 expression in a time-dependent manner. FOXA2 expression was decreased in LSCC tissues compared with adjacent normal tissues, and knockdown of FOXA2 increased NCI-H292 cell proliferation. Inhibition of cell proliferation by curcumin was attenuated by FOXA2 knockdown. Moreover, inhibition of STAT3 pathways by curcumin increased FOXA2 expression in NCI-H292 cells, whereas a STAT3 activator (IL-6) significantly inhibited curcumin-induced FOXA2 expression. Also, SOCS1 and SOCS3, negative regulators of STAT3 activity, were upregulated by curcumin treatment. Thus, curcumin inhibited human NCI-H292 cell growth by increasing FOXA2 expression via regulation of STAT3 signaling pathways.

  7. Analysis of 125I-[Tyr3] octreotide receptors of NCI-H466 cell line

    International Nuclear Information System (INIS)

    Sun Junjie; Fan Wo; Xu Yujie; Zhang Youjiu; Zhu Ran

    2002-01-01

    Objective: To study the affinity of small cell lung carcinoma for [Tyr3]octreotide (TOC). Methods: Using 125I-[Tyr3]octreotide (labeled by the chloramine-T method) as the ligand, the small cell lung carcinoma NCI-H446 cell line was examined for receptor-binding sites and affinity constant. Results: The radiochemical purity of 125I-TOC purified through Sephadex G-10 was higher than 95%. Receptor analysis showed that somatostatin receptors were abundantly expressed on NCI-H446 cells (Bmax = 1.17 x 10^5 sites/cell), with strong affinity for 125I-TOC (Kd = 0.56 nM). Conclusion: Labeled TOC could be used for small cell lung carcinoma receptor imaging and radiopharmaceutical therapy
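
The Bmax and Kd reported above come from receptor-binding analysis. A minimal sketch of one common approach, Scatchard linearization of one-site saturation binding (B/F = Bmax/Kd - B/Kd, so slope = -1/Kd and intercept = Bmax/Kd), is shown below. The record's reported values are used as ground truth to generate synthetic, noise-free data; the free-ligand concentrations are invented for illustration, and the study's actual fitting procedure is not described in the record.

```python
TRUE_BMAX, TRUE_KD = 1.17e5, 0.56   # sites/cell, nM (values from the record)

# Synthetic one-site binding data: B = Bmax * F / (Kd + F), no noise.
free = [0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0]            # free ligand, nM
bound = [TRUE_BMAX * f / (TRUE_KD + f) for f in free]  # bound, sites/cell

# Scatchard plot: x = B, y = B/F; fit a line by least squares.
x = bound
y = [b / f for b, f in zip(bound, free)]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
intercept = my - slope * mx

kd_est = -1.0 / slope          # slope = -1/Kd
bmax_est = intercept * kd_est  # intercept = Bmax/Kd
print(kd_est, bmax_est)
```

Because the synthetic points lie exactly on the Scatchard line, the fit recovers Kd and Bmax; with real counting data, nonlinear fitting of the saturation curve is generally preferred over the linearization.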

  8. DNA fingerprinting of the NCI-60 cell line panel.

    Science.gov (United States)

    Lorenzi, Philip L; Reinhold, William C; Varma, Sudhir; Hutchinson, Amy A; Pommier, Yves; Chanock, Stephen J; Weinstein, John N

    2009-04-01

    The National Cancer Institute's NCI-60 cell line panel, the most extensively characterized set of cells in existence and a public resource, is frequently used as a screening tool for drug discovery. Because many laboratories around the world rely on data from the NCI-60 cells, confirmation of their genetic identities represents an essential step in validating results from them. Given the consequences of cell line contamination or misidentification, quality control measures should routinely include DNA fingerprinting. We have, therefore, used standard DNA microsatellite short tandem repeats to profile the NCI-60, and the resulting DNA fingerprints are provided here as a reference. Consistent with previous reports, the fingerprints suggest that several NCI-60 lines have common origins: the melanoma lines MDA-MB-435, MDA-N, and M14; the central nervous system lines U251 and SNB-19; the ovarian lines OVCAR-8 and OVCAR-8/ADR (also called NCI/ADR); and the prostate lines DU-145, DU-145 (ATCC), and RC0.1. Those lines also show that the ability to connect two fingerprints to the same origin is not affected by stable transfection or by the development of multidrug resistance. As expected, DNA fingerprints were not able to distinguish different tissues of origin. The fingerprints serve principally as barcodes.
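
STR-based identity checks like the one described above amount to comparing allele calls locus by locus. A toy sketch follows: the locus names are standard STR markers, but the allele values, the profiles, and the simple match score are invented for illustration and are not the paper's data or scoring method.

```python
def profile_match(p1, p2):
    """Fraction of shared loci at which the allele pairs agree.

    Profiles map locus name -> tuple of allele calls.
    """
    shared = set(p1) & set(p2)
    if not shared:
        return 0.0
    agree = sum(1 for locus in shared if set(p1[locus]) == set(p2[locus]))
    return agree / len(shared)

# Hypothetical profiles: same at two loci, different at TPOX.
line_a = {"D5S818": (11, 12), "TH01": (7, 9.3), "TPOX": (8, 8)}
line_b = {"D5S818": (11, 12), "TH01": (7, 9.3), "TPOX": (8, 11)}
print(profile_match(line_a, line_b))
```

In practice, reference fingerprints like the NCI-60 set are compared with thresholds on exactly this kind of per-locus concordance to flag contamination or shared origin.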

  9. An NCI perspective on creating sustainable biospecimen resources.

    Science.gov (United States)

    Vaught, Jimmie; Rogers, Joyce; Myers, Kimberly; Lim, Mark David; Lockhart, Nicole; Moore, Helen; Sawyer, Sherilyn; Furman, Jeffrey L; Compton, Carolyn

    2011-01-01

    High-quality biospecimens with appropriate clinical annotation are critical in the era of personalized medicine. It is now widely recognized that biospecimen resources need to be developed and operated under established scientific, technical, business, and ethical/legal standards. To date, such standards have not been widely practiced, resulting in variable biospecimen quality that may compromise research efforts. The National Cancer Institute (NCI) Office of Biorepositories and Biospecimen Research (OBBR) was established in 2005 to coordinate NCI's biospecimen resource activities and address those issues that affect access to the high-quality specimens and data necessary for its research enterprises as well as the broader translational research field. OBBR and the NCI Biorepository Coordinating Committee developed NCI's "Best Practices for Biospecimen Resources" after consultation with a broad array of experts. A Biospecimen Research Network was established to fund research to develop additional evidence-based practices. Although these initiatives will improve the overall availability of high-quality specimens and data for cancer research, OBBR has been authorized to implement a national biobanking effort, cancer HUman Biobank (caHUB). caHUB will address systematically the gaps in knowledge needed to improve the state-of-the-science and strengthen the standards for human biobanking. This commentary outlines the progressive efforts by NCI in technical, governance, and economic considerations that will be important as the new caHUB enterprise is undertaken.

  10. Usual Intake Distribution of Vitamins and Prevalence of Inadequacy in a Large Sample of Iranian At-Risk Population: Application of NCI Method.

    Science.gov (United States)

    Heidari, Zahra; Feizi, Awat; Azadbakht, Leila; Sarrafzadegan, Nizal

    2016-01-01

    This study provides an assessment of the usual intake distribution of vitamins and estimates the prevalence of inadequacy and excess among a large representative sample of middle-aged and elderly people in the central regions of Iran. It is a cross-sectional study and a second follow-up to the Isfahan Cohort Study (ICS). The study setting included urban and rural areas of 3 cities (Isfahan, Najafabad, and Arak) in central Iran. Subjects included 1922 people aged 40 years and older, with a mean age of 55.9 ± 10.6; 50.4% were male and the majority (79.3%) were urban. Dietary intakes were collected using a 24-hour recall and 2 food records. The distribution of vitamin intake was estimated using the traditional and National Cancer Institute (NCI) methods. The proportion of subjects at risk of vitamin intake inadequacy or excess was estimated using the estimated average requirement (EAR) cut-point method and the tolerable upper intake level (UL) index. There were differences between the values obtained from the traditional and NCI methods, particularly in the lower and upper percentiles of the intake distribution. A high prevalence of inadequacy for vitamins A, D, E, B2, B3 (especially among females), and B9 was observed. Significant gender differences were found in terms of inadequate intakes of vitamins A, B1, B2, B3, B6, B9, B12, and C (p < 0.05). A high prevalence of inadequate vitamin intake was observed in the middle-aged and elderly Iranian population. Nutritional interventions, particularly population-based educational programs to improve diet variety and encourage the use of nutrient supplements, may be necessary.

  11. The NCI Digital Divide Pilot Projects: implications for cancer education.

    Science.gov (United States)

    Kreps, Gary L; Gustafson, David; Salovey, Peter; Perocchia, Rosemarie Slevin; Wilbright, Wayne; Bright, Mary Anne; Muha, Cathy

    2007-01-01

    The National Cancer Institute (NCI) supported four innovative demonstration research projects, "The Digital Divide Pilot Projects," to test new strategies for disseminating health information via computer to vulnerable consumers. These projects involved active research collaborations between the NCI's Cancer Information Service (CIS) and regional cancer control researchers to field test new approaches for enhancing cancer communication in vulnerable communities. The projects were able to use computers to successfully disseminate relevant cancer information to vulnerable populations. These demonstration research projects suggested effective new strategies for using communication technologies to educate underserved populations about cancer prevention, control, and care.

  12. Robert Wiltrout Says Goodbye to NCI in 2015 | Poster

    Science.gov (United States)

    After 34 years at NCI, Robert Wiltrout, Ph.D., said he is looking forward to trading his I-270 commute for another type of commute: exploring the waterways of Maryland, Alaska, and Wyoming to fulfill his love of fishing. Wiltrout officially retired as director of the NCI Center for Cancer Research (CCR) on July 2 of last year. Throughout his college academic career, Wiltrout had an interest in science, but it was not until he was working on a research project for his master’s degree that he considered a career in scientific research.

  13. NCI Helps Children’s Hospital of Philadelphia to Identify and Treat New Target in Pediatric Cancer | Poster

    Science.gov (United States)

    There may be a new, more effective method for treating high-risk neuroblastoma, according to scientists at the Children’s Hospital of Philadelphia and collaborators in the Cancer and Inflammation Program at NCI at Frederick. Together, the groups published a study describing a previously unrecognized protein on neuroblastoma cells, called GPC2, as well as the creation of a

  14. New Phone System Coming to NCI Campus at Frederick | Poster

    Science.gov (United States)

    By Travis Fouche and Trent McKee, Guest Writers Beginning in September, phones at the NCI Campus at Frederick will begin to be replaced, as the project to upgrade the current phone system ramps up. Over the next 16 months, the Information Systems Program (ISP) will be working with Facilities Maintenance and Engineering and Computer & Statistical Services to replace the current

  15. NCI intramural research highlighted at 2014 AACR meeting

    Science.gov (United States)

    This year’s American Association for Cancer Research meeting featured plenary talks by two NCI scientists, Steven Rosenberg, M.D., and Louis Staudt, M.D., Ph.D., that highlighted the challenges in developing varied and potentially synergistic treatments f

  16. NIH Employee Invention Report (EIR) | NCI Technology Transfer Center | TTC

    Science.gov (United States)

    NIH researchers must immediately contact their Laboratory or Branch Chief to inform them of a possible invention, and then consult with their NCI TTC Technology Transfer Manager about submitting an Employee Invention Report (EIR) Form.

  17. Russian delegation visits NIH and NCI to discuss research collaboration

    Science.gov (United States)

    The NCI Center for Global Health hosted a delegation from the Russian Foundation for Basic Research to discuss ongoing and future collaborations in cancer research. The delegation was accompanied by representatives from the US Embassy in Moscow and the Embassy of the Russian Federation in Washington DC.

  18. Creating Start-up Companies around NCI Inventions | Poster

    Science.gov (United States)

    By Karen Surabian, Thomas Stackhouse, and Rose Freel, Contributing Writers, and Rosemarie Truman, Guest Writer The National Cancer Institute (NCI), led by the Technology Transfer Center (TTC),  the Avon Foundation, and The Center for Advancing Innovation have partnered to create a “first-of-a-kind” Breast Cancer Start-up Challenge.

  19. Help NCI at Frederick “Knock Out Hunger” | Poster

    Science.gov (United States)

    NCI at Frederick is once again participating in the Feds Feed Families initiative, an annual food drive that addresses severe shortages of non-perishable items in food banks across D.C., Maryland, and Virginia during the summer months, when giving is at its lowest.

  20. IJUE. Unit 3: The competences of the European Union

    OpenAIRE

    Torres Pérez, María

    2018-01-01

    PowerPoint slides for Unit 3 of the course "Institucions Jurídiques de la Unió Europea". Academic year 2017-2018. Unit 3: The competences of the European Union. 1. The attribution of competences to the European Union. 2. The delimitation of competences in the European Union. 3. The principles governing the exercise of competences. 4. The exercise of the Union's competences by "some Member States".

  1. Direct cortical hemodynamic mapping of somatotopy of pig nostril sensation by functional near-infrared cortical imaging (fNCI).

    Science.gov (United States)

    Uga, Minako; Saito, Toshiyuki; Sano, Toshifumi; Yokota, Hidenori; Oguro, Keiji; Rizki, Edmi Edison; Mizutani, Tsutomu; Katura, Takusige; Dan, Ippeita; Watanabe, Eiju

    2014-05-01

    Functional near-infrared spectroscopy (fNIRS) is a neuroimaging technique for the noninvasive monitoring of human brain activation states utilizing the coupling between neural activity and regional cerebral hemodynamics. Illuminators and detectors, together constituting optodes, are placed on the scalp, but due to the presence of head tissues, an inter-optode distance of more than 2.5 cm is necessary to detect cortical signals. Although direct cortical monitoring with fNIRS has been pursued, a high-resolution visualization of hemodynamic changes associated with sensory, motor and cognitive neural responses directly from the cortical surface has yet to be realized. To acquire robust information on the hemodynamics of the cortex, devoid of signal complications in transcranial measurement, we devised a functional near-infrared cortical imaging (fNCI) technique. Here we demonstrate the first direct functional measurement of temporal and spatial patterns of cortical hemodynamics using the fNCI technique. For fNCI, inter-optode distance was set at 5 mm, and light leakage from illuminators was prevented by a special optode holder made of a light-shielding rubber sheet. fNCI successfully detected the somatotopy of pig nostril sensation, as assessed in comparison with concurrent and sequential somatosensory-evoked potential (SEP) measurements on the same stimulation sites. Accordingly, the fNCI system realized a direct cortical hemodynamic measurement with a spatial resolution comparable to that of SEP mapping on the rostral region of the pig brain. This study provides an important initial step toward realizing functional cortical hemodynamic monitoring during neurosurgery of human brains. Copyright © 2014. Published by Elsevier Inc.

  2. 76 FR 28439 - Submission for OMB Review; Comment Request; NCI Cancer Genetics Services Directory Web-Based...

    Science.gov (United States)

    2011-05-17

    ...; Comment Request; NCI Cancer Genetics Services Directory Web-Based Application Form and Update Mailer... currently valid OMB control number. Proposed Collection: Title: NCI Cancer Genetics Services Directory Web... included in the NCI Cancer Genetics Services Directory on NCI's Cancer.gov Web site. The information...

  3. Regular paths in SparQL: querying the NCI Thesaurus.

    Science.gov (United States)

    Detwiler, Landon T; Suciu, Dan; Brinkley, James F

    2008-11-06

    OWL, the Web Ontology Language, provides syntax and semantics for representing knowledge for the semantic web. Many of the constructs of OWL have a basis in the field of description logics. While the formal underpinnings of description logics have led to a highly computable language, it has come at a cognitive cost. OWL ontologies are often unintuitive to readers lacking a strong logic background. In this work we describe GLEEN, a regular path expression library, which extends the RDF query language SparQL to support complex path expressions over OWL and other RDF-based ontologies. We illustrate the utility of GLEEN by showing how it can be used in a query-based approach to defining simpler, more intuitive views of OWL ontologies. In particular we show how relatively simple GLEEN-enhanced SparQL queries can create views of the OWL version of the NCI Thesaurus that match the views generated by the web-based NCI browser.
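
The transitive path expressions GLEEN supports (for example, following subClassOf edges to any depth) can be illustrated with a toy breadth-first evaluation over RDF-style triples. This is not GLEEN's actual syntax or API, just the underlying idea; the class names in the triples are invented for illustration.

```python
from collections import deque

# Toy RDF-style triples (subject, predicate, object); names are made up.
triples = [
    ("Neoplasm", "subClassOf", "Disease"),
    ("Carcinoma", "subClassOf", "Neoplasm"),
    ("Adenocarcinoma", "subClassOf", "Carcinoma"),
]

def reachable(start, predicate, triples):
    """All nodes reachable from `start` via zero or more `predicate` edges,
    i.e. a toy evaluation of the path expression (predicate)*."""
    edges = {}
    for s, p, o in triples:
        if p == predicate:
            edges.setdefault(s, []).append(o)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(reachable("Adenocarcinoma", "subClassOf", triples))
```

A library like GLEEN compiles a regular path expression into this kind of graph traversal so that a single query can collapse an arbitrarily deep subclass chain into one relationship.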

  4. Like a Good Neighbor, NCI-Frederick Is There | Poster

    Science.gov (United States)

    The main campus of the National Cancer Institute at Frederick is an island of sorts: 68 acres of land that was once part of Fort Detrick. Accessing NCI property means passing through the Fort Detrick gates and crossing the post. While the campus is surrounded by the military installation, is protected by NIH police, and doesn’t allow the use of tobacco products, it is not a

  5. Softball Games Bring NCI and Leidos Biomed Employees Together | Poster

    Science.gov (United States)

    NCI and Leidos Biomed employees took to the fields at Nallin Pond for the third annual slow-pitch softball games on August 26. The series attracted 54 employees who were divided into four teams, Red, Blue, Gray, and White, and they were cheered on by about 40 enthusiastic spectators. In the first set of games, the Gray team defeated the Blue team, 15–8, and the White team

  6. NCI investment in nanotechnology: achievements and challenges for the future.

    Science.gov (United States)

    Dickherber, Anthony; Morris, Stephanie A; Grodzinski, Piotr

    2015-01-01

    Nanotechnology offers an exceptional and unique opportunity for developing a new generation of tools addressing persistent challenges to progress in cancer research and clinical care. The National Cancer Institute (NCI) recognizes this potential, which is why it invests roughly $150 M per year in nanobiotechnology training, research and development. By exploiting the various capacities of nanomaterials, the range of nanoscale vectors and probes potentially available suggests much is possible for precisely investigating, manipulating, and targeting the mechanisms of cancer across the full spectrum of research and clinical care. NCI has played a key role among federal R&D agencies in recognizing early the value of nanobiotechnology in medicine and committing to its development as well as providing training support for new investigators in the field. These investments have allowed many in the research community to pursue breakthrough capabilities that have already yielded broad benefits. Presented here is an overview of how NCI has made these investments with some consideration of how it will continue to work with this research community to pursue paradigm-changing innovations that offer relief from the burdens of cancer. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.

  7. a comparative study of some robust ridge and liu estimators

    African Journals Online (AJOL)

    Dr A.B.Ahmed

    estimation techniques such as Ridge and Liu estimators are preferable to Ordinary Least Squares. On the other hand, when outliers exist in the data, robust estimators such as the M, MM, LTS, and S estimators are preferred. To handle these two problems jointly, the study combines the Ridge and Liu Estimators with Robust.
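
    The record above names the estimators without giving their formulas. As a minimal sketch (invented toy data, not code from the study), the ridge estimator (X'X + kI)^(-1) X'y and the Liu estimator (X'X + I)^(-1)(X'y + d·β̂_OLS) can be compared with OLS on a deliberately collinear design:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy collinear design: x2 is nearly a copy of x1, inflating OLS variance.
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)
X = np.column_stack([x1, x2])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(size=n)

XtX = X.T @ X
Xty = X.T @ y
I = np.eye(X.shape[1])

# Ordinary Least Squares
beta_ols = np.linalg.solve(XtX, Xty)

# Ridge estimator: (X'X + kI)^(-1) X'y, shrinkage constant k > 0 (chosen here arbitrarily)
k = 1.0
beta_ridge = np.linalg.solve(XtX + k * I, Xty)

# Liu estimator: (X'X + I)^(-1) (X'y + d * beta_ols), 0 < d < 1 (chosen here arbitrarily)
d = 0.5
beta_liu = np.linalg.solve(XtX + I, Xty + d * beta_ols)

# Both estimators shrink the coefficient vector relative to OLS,
# taming the variance blow-up caused by the near-collinearity.
print(np.linalg.norm(beta_ols), np.linalg.norm(beta_ridge), np.linalg.norm(beta_liu))
```

    Both shrinkage factors are strictly below one in the eigenbasis of X'X, so the ridge and Liu solutions always have smaller norm than the OLS solution.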

  8. NCI Takes Back the Defelice Cup at Ninth Annual Golf Tournament | Poster

    Science.gov (United States)

    By Ashley DeVine, Staff Writer After being down by a point in the morning, NCI reclaimed the Defelice Cup trophy from Leidos Biomedical Research, with a final score of 12 ½ to 11 ½, at the ninth annual Ronald H. Defelice Golf Tournament, held Oct. 13. “The tightest matches in the nine-year history of this cup competition resulted in a narrow victory for NCI and allowed NCI to

  9. NCI Scientists Awarded National Medal of Technology and Innovation by President Obama | Poster

    Science.gov (United States)

    Two NCI scientists received the National Medal of Technology and Innovation, the nation’s highest honor for technological achievement. The award was announced by President Obama in October. The honorees, John Schiller, Ph.D., Laboratory of Cellular Oncology (LCO), Center for Cancer Research, NCI, and Douglas Lowy, M.D., also from LCO and NCI deputy director, received their medals at a White House ceremony on Nov. 20.

  10. Landfill Lifespan Estimation: A Case Study

    African Journals Online (AJOL)

    Michael

    2017-12-02

    Dec 2, 2017 ... site selection, design, construction, operation and management. For this reason, it ... This research used the future value of money equation to estimate the lifespan of the ..... Geomatic Engineering, UMaT, Tarkwa, Ghana, pp.

  11. Metformin synergistically enhances antiproliferative effects of cisplatin and etoposide in NCI-H460 human lung cancer cells

    Directory of Open Access Journals (Sweden)

    Sarah Fernandes Teixeira

    2013-12-01

    Full Text Available OBJECTIVE: To test the effectiveness of combining conventional antineoplastic drugs (cisplatin and etoposide) with metformin in the treatment of non-small cell lung cancer in the NCI-H460 cell line, in order to develop new therapeutic options with high efficacy and low toxicity. METHODS: We used the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay and calculated the combination index for the drugs studied. RESULTS: We found that the use of metformin as monotherapy reduced the metabolic viability of the cell line studied. Combining metformin with cisplatin or etoposide produced a synergistic effect and was more effective than the use of cisplatin or etoposide as monotherapy. CONCLUSIONS: Metformin, due to its independent effects on liver kinase B1, had antiproliferative effects on the NCI-H460 cell line. When metformin was combined with cisplatin or etoposide, the cell death rate was even higher.

  12. Vaccines for HIV | NCI Technology Transfer Center | TTC

    Science.gov (United States)

    The development of an effective HIV vaccine has been an ongoing area of research. The high variability in HIV-1 virus strains has represented a major challenge in successful development. Ideally, an effective candidate vaccine would provide protection against the majority of clades of HIV. Two major hurdles to overcome are immunodominance and sequence diversity. This vaccine utilizes a strategy for overcoming these two issues by identifying the conserved regions of the virus and exploiting them for use in a targeted therapy. NCI seeks licensees and/or research collaborators to commercialize this technology, which has been validated in macaque models.

  13. Scientific and Engineering Studies; Spectral Estimation.

    Science.gov (United States)

    1977-01-01

    ...square error of the MC estimate can be obtained. A significant difference now exists between treatment of the MC estimate and the MSC estimate: whereas

  14. NCI-FDA Interagency Oncology Task Force Workshop Provides Guidance for Analytical Validation of Protein-based Multiplex Assays | Office of Cancer Clinical Proteomics Research

    Science.gov (United States)

    An NCI-FDA Interagency Oncology Task Force (IOTF) Molecular Diagnostics Workshop was held on October 30, 2008 in Cambridge, MA, to discuss requirements for analytical validation of protein-based multiplex technologies in the context of their intended use. This workshop, developed through NCI's Clinical Proteomic Technologies for Cancer initiative and the FDA, focused on technology-specific analytical validation processes to be addressed prior to use in clinical settings. What made this workshop unique was its case study approach to discussing issues related to

  15. A critique of the exposure assessment in the epidemiologic study of benzene-exposed workers in China conducted by the Chinese Academy of Preventive Medicine and the US National Cancer Institute.

    Science.gov (United States)

    Wong, O

    1999-12-01

    As reviewed in some detail in the present paper, workers employed in a wide variety of industries were included in the Chinese benzene study and were exposed not only to benzene but also to a wide range of other industrial chemicals. To attribute any or all health effects observed in the exposed cohort to benzene without examining other concomitant exposures is not appropriate. Although it was stated that one of the major objectives of the expanded study was to examine the effects of other risk factors, no such examination was made in any of the analyses in the expanded CAPM-NCI study. The CAPM-NCI study suffered from a number of limitations. One of the most serious involved the exposure estimates developed by the US NCI team. Comparing the assumptions used in the development of estimates, and the exposure estimates themselves, to actual data reported previously by the Chinese investigators revealed numerous inconsistencies and, in many cases, large discrepancies. It appeared that the exposure estimates were consistently lower than the actual exposure data. The so-called indirect validation conducted by the NCI team served no useful purpose, since by definition it could not validate the absolute values of the estimates. NCI was fully aware of some of the inadequacies of its exposure estimates. Although in a 1994 paper the NCI team recognized that little confidence could be attached to the estimates (e.g., only 2% of the estimates for the time interval 1949-1959 and only 6% of the estimates prior to 1975 were rated in the high confidence category), the inadequacy of the estimates was never mentioned or discussed in any subsequent analyses or in the latest report (Hayes et al., 1998). Instead, the exposure of the workers was hailed as "well characterized" (Hayes et al., 1998). In conclusion, both CAPM and NCI have made substantial efforts in studying the relationship between benzene exposure and various malignancies. Unfortunately, there were

  16. Enhanced Missing Proteins Detection in NCI60 Cell Lines Using an Integrative Search Engine Approach.

    Science.gov (United States)

    Guruceaga, Elizabeth; Garin-Muga, Alba; Prieto, Gorka; Bejarano, Bartolomé; Marcilla, Miguel; Marín-Vicente, Consuelo; Perez-Riverol, Yasset; Casal, J Ignacio; Vizcaíno, Juan Antonio; Corrales, Fernando J; Segura, Victor

    2017-12-01

    The Human Proteome Project (HPP) aims to decipher the complete map of the human proteome. In the past few years, significant efforts of the HPP teams have been dedicated to the experimental detection of the missing proteins, which lack reliable mass spectrometry evidence of their existence. In this endeavor, an in-depth analysis of shotgun experiments might represent a valuable resource for selecting a biological matrix when designing validation experiments. In this work, we used all the proteomic experiments from the NCI60 cell lines and applied an integrative approach based on the results obtained from Comet, Mascot, OMSSA, and X!Tandem. This workflow benefits from the complementarity of these search engines to increase the proteome coverage. Five missing proteins compliant with C-HPP guidelines were identified, although further validation is needed. Moreover, 165 missing proteins were detected with only one unique peptide, and their functional analysis supported their participation in cellular pathways, as was also proposed in other studies. Finally, we performed a combined analysis of the gene expression levels and the proteomic identifications from the cell lines common to the NCI60 and CCLE projects to suggest alternatives for further validation of missing protein observations.

  17. College Graduate with NCI Internship Gains Experience, Carries Chemistry into Medicine | Poster

    Science.gov (United States)

    For Jennifer Marshall, the skills learned through an internship at the National Cancer Institute (NCI) at Frederick have prepared her for the next step of her life—medical school. Marshall, who will be attending the West Virginia University School of Medicine in the fall, spent three summers in NCI at Frederick’s Summer Internship Program expanding her love and passion for

  18. Human Monoclonal Antibodies Targeting Glypican-2 in Neuroblastoma | NCI Technology Transfer Center | TTC

    Science.gov (United States)

    Researchers at the National Cancer Institute’s Laboratory of Molecular Biology (NCI LMB) have developed and isolated several single domain monoclonal human antibodies against GPC2. NCI seeks parties interested in licensing or co-developing GPC2 antibodies and/or conjugates.

  19. NCI Core Open House Shines Spotlight on Supportive Science and Basic Research | Poster

    Science.gov (United States)

    The lobby of Building 549 at NCI at Frederick bustled with activity for two hours on Tuesday, May 1, as several dozen scientists and staff gathered for the NCI Core Open House. The event aimed to encourage discussion and educate visitors about the capabilities of the cores, laboratories, and facilities that offer support to NCI’s Center for Cancer Research.

  20. Vaccine for BK Polyomavirus-associated Infections in Transplant Recipients | NCI Technology Transfer Center | TTC

    Science.gov (United States)

    NCI researchers identified a virulent BK polyomavirus (BKV) strain that causes chronic urinary tract infections and developed vaccine and therapeutic methods that would block BKV pathogenesis. The NCI Laboratory of Cellular Oncology seeks parties to license or co-develop this technology.

  1. A comparative study of the performances of some estimators of ...

    African Journals Online (AJOL)

    These estimators are compared using the finite properties of estimators' criteria namely; sum of biases, sum of variances and sum of the mean squared error of the estimated parameter of the model at different levels of autocorrelation and sample size through Monte – Carlo studies. Results show that at each level of ...
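
    A Monte Carlo comparison of this kind can be sketched in a few lines. The toy simulation below (an invented stand-in for the study's design, not its code) shows the pattern such studies document: under AR(1) autocorrelated errors, the OLS slope remains roughly unbiased while its variance grows with the level of autocorrelation.

```python
import numpy as np

def simulate_ols_slope(rho, n=50, reps=2000, seed=0):
    """Monte Carlo sampling distribution of the OLS slope under AR(1) errors.

    Returns the estimated bias and variance of the slope estimator.
    """
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, n)
    xc = x - x.mean()
    slopes = np.empty(reps)
    for r in range(reps):
        # AR(1) errors: e_t = rho * e_{t-1} + u_t
        u = rng.normal(size=n)
        e = np.empty(n)
        e[0] = u[0]
        for t in range(1, n):
            e[t] = rho * e[t - 1] + u[t]
        y = 2.0 + 3.0 * x + e          # true intercept 2, slope 3
        slopes[r] = (xc @ y) / (xc @ xc)  # closed-form OLS slope
    return slopes.mean() - 3.0, slopes.var()

for rho in (0.0, 0.5, 0.9):
    bias, var = simulate_ols_slope(rho)
    print(f"rho={rho:.1f}  bias={bias:+.3f}  variance={var:.3f}")
```

    The same harness extends naturally to ridge, Liu, or robust estimators: swap the slope formula and tabulate sum of biases, variances, and mean squared errors across rho and n, as the study does.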

  2. Persistent Identifier Practice for Big Data Management at NCI

    Directory of Open Access Journals (Sweden)

    Jingbo Wang

    2017-04-01

    Full Text Available The National Computational Infrastructure (NCI) manages over 10 PB of research data, which is co-located with the high performance computer (Raijin) and an HPC-class 3000-core OpenStack cloud system (Tenjin). In support of this integrated High Performance Computing/High Performance Data (HPC/HPD) infrastructure, NCI's data management practices include building catalogues, DOI minting, data curation, data publishing, and data delivery through a variety of data services. The metadata catalogues, DOIs, THREDDS, and Vocabularies all use different Uniform Resource Locator (URL) styles. A Persistent IDentifier (PID) service provides an important utility to manage URLs in a consistent, controlled and monitored manner to support the robustness of our national 'Big Data' infrastructure. In this paper we demonstrate NCI's approach of utilising the NCI 'PID Service' to consistently manage its persistent identifiers with various applications.

  3. Time, Concentration, and pH-Dependent Transport and Uptake of Anthocyanins in a Human Gastric Epithelial (NCI-N87) Cell Line

    Directory of Open Access Journals (Sweden)

    Allison A. Atnip

    2017-02-01

    Full Text Available Anthocyanins are the largest class of water-soluble plant pigments and a common part of the human diet. They may have many potential health benefits, including antioxidant, anti-inflammatory, anti-cancer, and cardioprotective activities. However, anthocyanin metabolism is not well understood. Studies suggest that anthocyanin absorption may occur in the stomach, in which the acidic pH favors anthocyanin stability. A gastric epithelial cell line (NCI-N87) has been used to study the behavior of anthocyanins at a pH range of 3.0–7.4. This work examines the effects of time (0–3 h), concentration (50–1500 µM), and pH (3.0, 5.0, 7.4) on the transport and uptake of anthocyanins using NCI-N87 cells. Anthocyanins were transported from the apical to the basolateral side of NCI-N87 cells in time- and dose-dependent manners. Over the treatment time of 3 h the rate of transport increased, especially at higher anthocyanin concentrations. The non-linear rate of transport may suggest an active mechanism for the transport of anthocyanins across the NCI-N87 monolayer. At apical pH 3.0, higher anthocyanin transport was observed compared to pH 5.0 and 7.4. Reduced transport of anthocyanins was found to occur at apical pH 5.0.

  4. The Prostate Cancer Intervention Versus Observation Trial: VA/NCI/AHRQ Cooperative Studies Program #407 (PIVOT): design and baseline results of a randomized controlled trial comparing radical prostatectomy with watchful waiting for men with clinically localized prostate cancer.

    Science.gov (United States)

    Wilt, Timothy J

    2012-12-01

    Prostate cancer is the most common noncutaneous malignancy and the second leading cause of cancer death in men. In the United States, 90% of men with prostate cancer are more than age 60 years, diagnosed by early detection with the prostate-specific antigen (PSA) blood test, and have disease believed confined to the prostate gland (clinically localized). Common treatments for clinically localized prostate cancer include watchful waiting (WW), surgery to remove the prostate gland (radical prostatectomy), external-beam radiation therapy and interstitial radiation therapy (brachytherapy), and androgen deprivation. Little is known about the relative effectiveness and harms of treatments because of the paucity of randomized controlled trials. The Department of Veterans Affairs/National Cancer Institute/Agency for Healthcare Research and Quality Cooperative Studies Program Study #407: Prostate Cancer Intervention Versus Observation Trial (PIVOT), initiated in 1994, is a multicenter randomized controlled trial comparing radical prostatectomy with WW in men with clinically localized prostate cancer. We describe the study rationale, design, recruitment methods, and baseline characteristics of PIVOT enrollees. We provide comparisons with eligible men declining enrollment and men participating in another recently reported randomized trial of radical prostatectomy vs WW conducted in Scandinavia. We screened 13,022 men with prostate cancer at 52 US medical centers for potential enrollment. From these, 5023 met initial age, comorbidity, and disease eligibility criteria, and a total of 731 men agreed to participate and were randomized. The mean age of enrollees was 67 years. Nearly one-third were African American. Approximately 85% reported that they were fully active. The median PSA was 7.8 ng/mL (mean 10.2 ng/mL). In three-fourths of men, the primary reason for biopsy leading to a diagnosis of prostate cancer was a PSA elevation or rise. Using previously developed tumor risk

  5. The Estimation of Gestational Age at Birth in Database Studies.

    Science.gov (United States)

    Eberg, Maria; Platt, Robert W; Filion, Kristian B

    2017-11-01

    Studies on the safety of prenatal medication use require valid estimation of the pregnancy duration. However, gestational age is often incompletely recorded in administrative and clinical databases. Our objective was to compare different approaches to estimating the pregnancy duration. Using data from the Clinical Practice Research Datalink and Hospital Episode Statistics, we examined the following four approaches to estimating missing gestational age: (1) generalized estimating equations for longitudinal data; (2) multiple imputation; (3) estimation based on fetal birth weight and sex; and (4) conventional approaches that assigned a fixed value (39 weeks for all or 39 weeks for full term and 35 weeks for preterm). The gestational age recorded in Hospital Episode Statistics was considered the gold standard. We conducted a simulation study comparing the described approaches in terms of estimated bias and mean square error. A total of 25,929 infants from 22,774 mothers were included in our "gold standard" cohort. The smallest average absolute bias was observed for the generalized estimating equation that included birth weight, while the largest absolute bias occurred when assigning 39-week gestation to all those with missing values. The smallest mean square errors were detected with generalized estimating equations while multiple imputation had the highest mean square errors. The use of generalized estimating equations resulted in the most accurate estimation of missing gestational age when birth weight information was available. In the absence of birth weight, assignment of fixed gestational age based on term/preterm status may be the optimal approach.
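
    The contrast between fixed-value assignment and a birth-weight-based estimate is easy to demonstrate. The sketch below is a toy simulation with invented parameters (not the CPRD/HES data, and a plain linear regression rather than the paper's generalized estimating equations); it reproduces the qualitative finding that using birth weight lowers the mean squared error relative to assigning 39 weeks to everyone.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated "gold standard": gestational age (weeks) and a correlated birth weight (g).
n = 5000
ga = np.clip(rng.normal(39.0, 2.0, n), 24.0, 43.0)
weight = 3400.0 + 180.0 * (ga - 39.0) + rng.normal(0.0, 300.0, n)

# Pretend GA is missing for a random 30% of records and impute it two ways.
missing = rng.random(n) < 0.3
obs, mis = ~missing, missing

# (a) Conventional fixed-value assignment: every missing record gets 39 weeks.
ga_fixed = np.full(mis.sum(), 39.0)

# (b) Linear regression of GA on birth weight, fit on the complete records.
A = np.column_stack([np.ones(obs.sum()), weight[obs]])
coef, *_ = np.linalg.lstsq(A, ga[obs], rcond=None)
ga_reg = coef[0] + coef[1] * weight[mis]

for name, est in [("fixed 39w", ga_fixed), ("regression", ga_reg)]:
    err = est - ga[mis]
    print(f"{name}: bias={err.mean():+.3f}  MSE={np.mean(err**2):.3f}")
```

    The fixed assignment is nearly unbiased on average but pays the full population variance in MSE; the regression recovers the part of that variance explained by birth weight.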

  6. Silica-Coated Nanodiamonds for Imaging and Delivery of Therapeutic Agents | NCI Technology Transfer Center | TTC

    Science.gov (United States)

    The NCI Radiation Oncology Branch and the NHLBI Laboratory of Single Molecule Biophysics seek parties to co-develop fluorescent nanodiamonds for use as in vivo and in vitro optical tracking probes toward commercialization.

  7. NCI and the Chinese Academy of Medical Sciences Sign Statement of Intent

    Science.gov (United States)

    Today the National Cancer Institute (NCI) and the Cancer Institute/Hospital of the Chinese Academy of Medical Sciences (CICAMS) signed a statement of intent to share an interest in fostering collaborative biomedical research in oncology and a common goal

  8. History of the Diet History Questionnaire (DHQ) | EGRP/DCCPS/NCI/NIH

    Science.gov (United States)

    Learn about the evolution of the Diet History Questionnaire (DHQ), developed by the National Cancer Institute (NCI) initially in 2001, to the DHQ II in 2010, up to the present version, DHQ III, launched in 2018.

  9. Program Spotlight: Ground Broken for NCI-supported Cancer Treatment Center in Puerto Rico

    Science.gov (United States)

    Dr. Sanya A. Springfield represented NCI at the groundbreaking ceremonies for the University of Puerto Rico (UPR) cancer hospital. In her remarks, she acknowledged that the driving force behind this development is the partnership between UPR and the MD Anderson Cancer Center.

  10. Craig Reynolds, Ph.D., to Retire as NCI Associate Director for Frederick | Poster

    Science.gov (United States)

    On December 2, Craig Reynolds, Ph.D., director, Office of Scientific Operations, and NCI associate director for Frederick, will put the finishing touches on a 37-year career with the National Cancer Institute.

  11. NCI Requests Cancer Targets for Monoclonal Antibody Production and Characterization | Office of Cancer Clinical Proteomics Research

    Science.gov (United States)

    In an effort to provide well-characterized monoclonal antibodies to the scientific community, NCI's Antibody Characterization Program requests cancer-related protein targets for affinity production and distribution. Submissions will be accepted through July 11, 2014.

  12. NCI Requests Targets for Monoclonal Antibody Production and Characterization | Office of Cancer Clinical Proteomics Research

    Science.gov (United States)

    In an effort to provide well-characterized monoclonal antibodies to the scientific community, NCI's Antibody Characterization Program requests cancer-related protein targets for affinity production and distribution. Submissions will be accepted through July 9, 2012.

  13. Gardasil® and Cervarix® | NCI Technology Transfer Center | TTC

    Science.gov (United States)

    Vaccine for human papillomavirus (HPV) to protect against cancers. Key elements of the technology for Gardasil® and Cervarix® originated from the HPV research of the laboratory of Drs. Douglas Lowy and John Schiller of the NCI.

  14. Ratio Based Biomarkers for the Prediction of Cancer Survival | NCI Technology Transfer Center | TTC

    Science.gov (United States)

    The NCI seeks licensees or co-development partners for this technology, which describes compositions, methods and kits for identifying, characterizing biomolecules expressed in a sample that are associated with the presence, the development, or progression of cancer.

  15. How You Can Partner with NIH | NCI Technology Transfer Center | TTC

    Science.gov (United States)

    The NCI Technology Transfer Center (TTC) provides an array of agreements to support the National Cancer Institute's partnering. Deciding which type of agreement to use can be a challenge: CRADA, MTA, collaboration agreement, CTA, Materials-CRADA

  16. Composite likelihood and two-stage estimation in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2004-01-01

    In this paper, register-based family studies provide the motivation for linking a two-stage estimation procedure in copula models for multivariate failure time data with a composite likelihood approach. The asymptotic properties of the estimators in both parametric and semi-parametric models are d...

  17. An ensemble based top performing approach for NCI-DREAM drug sensitivity prediction challenge.

    Directory of Open Access Journals (Sweden)

    Qian Wan

    Full Text Available We consider the problem of predicting sensitivity of cancer cell lines to new drugs based on supervised learning on genomic profiles. The genetic and epigenetic characterization of a cell line provides observations on various aspects of regulation including DNA copy number variations, gene expression, DNA methylation and protein abundance. To extract relevant information from the various data types, we applied a random forest based approach to generate sensitivity predictions from each type of data and combined the predictions in a linear regression model to generate the final drug sensitivity prediction. Our approach when applied to the NCI-DREAM drug sensitivity prediction challenge was a top performer among 47 teams and produced high accuracy predictions. Our results show that the incorporation of multiple genomic characterizations lowered the mean and variance of the estimated bootstrap prediction error. We also applied our approach to the Cancer Cell Line Encyclopedia database for sensitivity prediction and the ability to extract the top targets of an anti-cancer drug. The results illustrate the effectiveness of our approach in predicting drug sensitivity from heterogeneous genomic datasets.
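
    The two-stage structure described above (one predictor per genomic data type, combined by a linear regression) is a form of stacking. The sketch below uses invented toy data and ridge base learners standing in for the random forests of the challenge entry; in a real stacking setup the stage-1 training predictions would be generated out-of-fold to avoid leakage.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for two genomic data types measured on the same cell lines.
n_lines = 120
expr = rng.normal(size=(n_lines, 20))    # "gene expression"
methyl = rng.normal(size=(n_lines, 10))  # "DNA methylation"
# Drug sensitivity depends on one feature from each block, plus noise.
y = 2.0 * expr[:, 0] - 2.0 * methyl[:, 0] + 0.5 * rng.normal(size=n_lines)

train = np.arange(n_lines) < 80
test = ~train

def ridge_fit_predict(X, y_tr, tr, te, lam=1.0):
    """Base learner on one data type (stands in for a random forest here)."""
    Xtr = X[tr]
    beta = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(X.shape[1]), Xtr.T @ y_tr)
    return X[tr] @ beta, X[te] @ beta

# Stage 1: one sensitivity prediction per data type.
expr_tr, expr_te = ridge_fit_predict(expr, y[train], train, test)
meth_tr, meth_te = ridge_fit_predict(methyl, y[train], train, test)

# Stage 2: a linear regression combines the base predictions.
Z_tr = np.column_stack([np.ones(train.sum()), expr_tr, meth_tr])
w, *_ = np.linalg.lstsq(Z_tr, y[train], rcond=None)
Z_te = np.column_stack([np.ones(test.sum()), expr_te, meth_te])
y_hat = Z_te @ w

mse_expr = np.mean((expr_te - y[test]) ** 2)
mse_meth = np.mean((meth_te - y[test]) ** 2)
mse_combined = np.mean((y_hat - y[test]) ** 2)
print(mse_expr, mse_meth, mse_combined)
```

    Each single-type model is blind to the signal carried by the other block, so the combined model's test error is well below either base learner's, mirroring the abstract's point that multiple genomic characterizations lower the prediction error.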

  18. Model-based estimation for dynamic cardiac studies using ECT

    International Nuclear Information System (INIS)

    Chiao, P.C.; Rogers, W.L.; Clinthorne, N.H.; Fessler, J.A.; Hero, A.O.

    1994-01-01

    In this paper, the authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (Emission Computed Tomography). The authors construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. The authors also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, model assumptions and potential uses of the joint estimation strategy are discussed

  19. Model-based estimation for dynamic cardiac studies using ECT.

    Science.gov (United States)

    Chiao, P C; Rogers, W L; Clinthorne, N H; Fessler, J A; Hero, A O

    1994-01-01

    The authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (emission computed tomography). They construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. They also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, the authors discuss model assumptions and potential uses of the joint estimation strategy.
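
    The core idea of estimating parameters directly from projection data, rather than from reconstructed images, can be illustrated with a one-parameter toy (not the authors' heart model): Poisson counts in each detector bin have a mean that is a known weight times an unknown activity, and the ML estimate maximizes the Poisson log-likelihood of the raw projections.

```python
import numpy as np

rng = np.random.default_rng(5)

# Known projection weights for a single activity parameter theta:
# detector bin i records Poisson counts with mean a_i * theta.
a = rng.uniform(0.5, 2.0, 64)
theta_true = 40.0
y = rng.poisson(a * theta_true)

def loglik(theta):
    """Poisson log-likelihood of the projection data (up to a constant)."""
    mu = a * theta
    return np.sum(y * np.log(mu) - mu)

# For this one-parameter model the ML estimate has the closed form
# sum(y) / sum(a); a coarse grid search over the likelihood confirms it.
theta_closed = y.sum() / a.sum()
grid = np.linspace(20.0, 60.0, 4001)
theta_grid = grid[np.argmax([loglik(t) for t in grid])]

print(theta_closed, theta_grid)
```

    In the paper's setting the parameter vector also includes myocardial boundary and kinetic parameters, so the maximization is multidimensional and the observation model additionally encodes system resolution, but the estimation principle is the same.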

  20. A Comparative Study of Distribution System Parameter Estimation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup

    2016-07-17

    In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems. Therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.
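
    State-vector augmentation appends the unknown network parameters to the Kalman filter state so that measurement snapshots update them recursively. The sketch below (invented scalar toy, not the IEEE 13-Node Test Feeder) shows the degenerate case where the augmented state holds a single unknown line resistance, estimated from noisy voltage-drop measurements with known branch currents.

```python
import numpy as np

rng = np.random.default_rng(7)

r_true = 0.8           # unknown line resistance (ohms)
steps = 200
i_meas = rng.uniform(5.0, 15.0, steps)             # known branch currents
z = r_true * i_meas + rng.normal(0.0, 0.2, steps)  # noisy voltage drops

# Kalman filter over the augmented state; here the augmentation is just the
# parameter itself, modeled as a (nearly) constant random walk.
r_hat, P = 0.0, 10.0    # initial estimate and its variance
Q, R = 1e-8, 0.2 ** 2   # process and measurement noise variances
for i_t, z_t in zip(i_meas, z):
    P += Q                          # predict: r assumed constant
    H = i_t                         # measurement model: z = H * r + v
    S = H * P * H + R               # innovation variance
    K = P * H / S                   # Kalman gain
    r_hat += K * (z_t - H * r_hat)  # update with the innovation
    P *= (1.0 - K * H)

print(f"estimated resistance: {r_hat:.4f} (true {r_true})")
```

    Combining many snapshots, as the paper recommends, is what makes the parameter observable despite the low measurement redundancy of distribution systems; here the 200 snapshots drive the estimate close to the true value.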

  1. Study on interference of technetium in spectrophotometric estimation of uranium

    International Nuclear Information System (INIS)

    Revathi, P.; Saipriya, K.; Madhavan Kutty, V.K.; Srinivasa Rao, G.; Vijayakumar, N.; Kumar, T.

    2015-01-01

    Estimation of uranium is essential for process control purposes as well as to arrive at optimum parameters for further waste management in the reprocessing industry. Uranium estimation is done by spectrophotometry using ammonium thiocyanate, DBM, PAR and Br-PADAP as chromogenic reagents for colour development. Extractive spectrophotometry can also be used to eliminate some of the interfering ions. During an inter-method comparison, technetium was found to interfere in the thiocyanate spectrophotometry. This study is an effort to find out the extent of technetium interference in the estimation of uranium by spectrophotometry using the above-said chromogenic reagents. (author)

  2. Improving estimation of flight altitude in wildlife telemetry studies

    Science.gov (United States)

    Poessel, Sharon; Duerr, Adam E.; Hall, Jonathan C.; Braham, Melissa A.; Katzner, Todd

    2018-01-01

    Altitude measurements from wildlife tracking devices, combined with elevation data, are commonly used to estimate the flight altitude of volant animals. However, these data often include measurement error. Understanding this error may improve estimation of flight altitude and benefit applied ecology. There are a number of different approaches that have been used to address this measurement error. These include filtering based on GPS data, filtering based on behaviour of the study species, and use of state-space models to correct measurement error. The effectiveness of these approaches is highly variable. Recent studies have based inference of flight altitude on misunderstandings about avian natural history and technical or analytical tools. In this Commentary, we discuss these misunderstandings and suggest alternative strategies both to resolve some of these issues and to improve estimation of flight altitude. These strategies also can be applied to other measures derived from telemetry data. Synthesis and applications. Our Commentary is intended to clarify and improve upon some of the assumptions made when estimating flight altitude and, more broadly, when using GPS telemetry data. We also suggest best practices for identifying flight behaviour, addressing GPS error, and using flight altitudes to estimate collision risk with anthropogenic structures. Addressing the issues we describe would help improve estimates of flight altitude and advance understanding of the treatment of error in wildlife telemetry studies.

  3. Dense Descriptors for Optical Flow Estimation: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Ahmadreza Baghaie

    2017-02-01

    Full Text Available Estimating the displacements of intensity patterns between sequential frames is a very well-studied problem, which is usually referred to as optical flow estimation. The first assumption among many of the methods in the field is the brightness constancy during movements of pixels between frames. This assumption is proven to be not true in general, and therefore, the use of photometric invariant constraints has been studied in the past. One other solution can be sought by use of structural descriptors rather than pixels for estimating the optical flow. Unlike sparse feature detection/description techniques and since the problem of optical flow estimation tries to find a dense flow field, a dense structural representation of individual pixels and their neighbors is computed and then used for matching and optical flow estimation. Here, a comparative study is carried out by extending the framework of SIFT-flow to include more dense descriptors, and comprehensive comparisons are given. Overall, the work can be considered as a baseline for stimulating more interest in the use of dense descriptors for optical flow estimation.
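
    The dense-descriptor idea can be shown in miniature: compute a structural descriptor at every pixel of interest and match it across frames instead of matching raw intensities. The sketch below uses invented synthetic frames and a trivial normalized-patch descriptor standing in for SIFT; it recovers a known integer displacement by exhaustive search over a small window.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic frame and a copy shifted by a known displacement.
frame1 = rng.random((40, 40))
dy, dx = 2, 3
frame2 = np.roll(np.roll(frame1, dy, axis=0), dx, axis=1)

def patch_descriptor(img, y, x, r=2):
    """Dense descriptor: the mean/std-normalized (2r+1)^2 patch around a pixel."""
    p = img[y - r:y + r + 1, x - r:x + r + 1].ravel()
    return (p - p.mean()) / (p.std() + 1e-9)

# For one pixel, search a small window in frame2 for the best-matching descriptor.
y0, x0 = 20, 20
d1 = patch_descriptor(frame1, y0, x0)
best, best_disp = -np.inf, None
for v in range(-4, 5):
    for u in range(-4, 5):
        d2 = patch_descriptor(frame2, y0 + v, x0 + u)
        score = d1 @ d2  # correlation of the two descriptors
        if score > best:
            best, best_disp = score, (v, u)

print("recovered displacement:", best_disp)
```

    Because the descriptor is normalized, the match survives the brightness changes that break raw-intensity constancy; SIFT-flow replaces both the exhaustive search (with a regularized discrete optimization) and the descriptor (with SIFT), applied densely at every pixel.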

  4. Residential Lighting End-Use Consumption Study: Estimation Framework and Initial Estimates

    Energy Technology Data Exchange (ETDEWEB)

    Gifford, Will R.; Goldberg, Miriam L.; Tanimoto, Paulo M.; Celnicker, Dane R.; Poplawski, Michael E.

    2012-12-01

    The U.S. DOE Residential Lighting End-Use Consumption Study is an initiative of the U.S. Department of Energy’s (DOE’s) Solid-State Lighting Program that aims to improve the understanding of lighting energy usage in residential dwellings. The study has developed a regional estimation framework within a national sample design that allows for the estimation of lamp usage and energy consumption 1) nationally and by region of the United States, 2) by certain household characteristics, 3) by location within the home, 4) by certain lamp characteristics, and 5) by certain categorical cross-classifications (e.g., by dwelling type AND lamp type or fixture type AND control type).

  5. Quantifying cannabis: A field study of marijuana quantity estimation.

    Science.gov (United States)

    Prince, Mark A; Conner, Bradley T; Pearson, Matthew R

    2018-05-17

    The assessment of marijuana use quantity poses unique challenges, which have limited research efforts on quantity assessments. However, quantity estimates are critical to detecting associations between marijuana use and outcomes. We examined the accuracy of marijuana users' estimates of the quantities of marijuana they prepared to ingest, as well as predictors of both how much was prepared for a single dose and the degree of (in)accuracy of participants' estimates. We recruited a sample of 128 regular-to-heavy marijuana users for a field study wherein they prepared and estimated quantities of marijuana flower in a joint or a bowl as well as marijuana concentrate using a dab tool. The vast majority of participants overestimated the quantity of marijuana that they used in their preparations. We failed to find robust predictors of estimation accuracy. Self-reported quantity estimates are inaccurate, which has implications for studying the link between quantity and marijuana use outcomes. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  6. Pharmacologically directed strategies in academic anticancer drug discovery based on the European NCI compounds initiative.

    Science.gov (United States)

    Hendriks, Hans R; Govaerts, Anne-Sophie; Fichtner, Iduna; Burtles, Sally; Westwell, Andrew D; Peters, Godefridus J

    2017-07-11

    The European NCI compounds programme, a joint initiative of the EORTC Research Branch, the Cancer Research Campaign and the US National Cancer Institute, was initiated in 1993. The objective was to help the NCI reduce the backlog of in vivo testing of potential anticancer compounds synthesised in Europe that emerged from the NCI in vitro 60-cell screen. Over a period of more than twenty years the EORTC-Cancer Research Campaign panel reviewed ∼2000 compounds, of which 95 were selected for further evaluation. Selected compounds were developed stepwise, with clear go/no-go decision points, using a pharmacologically directed programme. This approach quickly eliminated compounds with unsuitable pharmacological properties. A few compounds went into Phase I clinical evaluation. The lessons learned and many of the principles outlined in the paper can easily be applied to current and future drug discovery and development programmes. Changes in the review panel, restrictions regarding the numbers and types of compounds tested in the NCI in vitro screen and the appearance of targeted agents led to the discontinuation of the European NCI programme in 2017 and its transformation into an academic platform of excellence for anticancer drug discovery and development within the EORTC-PAMM group. This group remains open for advice and collaboration with interested parties in the field of cancer pharmacology.

  7. Kernel bandwidth estimation for non-parametric density estimation: a comparative study

    CSIR Research Space (South Africa)

    Van der Walt, CM

    2013-12-01

    Full Text Available We investigate the performance of conventional bandwidth estimators for non-parametric kernel density estimation on a number of representative pattern-recognition tasks, to gain a better understanding of the behaviour of these estimators in high...
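
    As an illustration of the kind of conventional bandwidth estimator such comparative studies evaluate, here is a generic NumPy sketch of Silverman's rule of thumb for a 1-D Gaussian kernel, together with the kernel density estimate it parameterizes (an assumed, textbook estimator, not necessarily one of those tested in the CSIR study):

```python
import numpy as np

def silverman_bandwidth(x):
    """Silverman's rule of thumb for a 1-D Gaussian kernel:
    h = 0.9 * min(sample std, IQR/1.34) * n**(-1/5)."""
    n = len(x)
    q75, q25 = np.percentile(x, [75, 25])
    sigma = min(np.std(x, ddof=1), (q75 - q25) / 1.34)
    return 0.9 * sigma * n ** (-0.2)

def kde(x_eval, data, h):
    """Gaussian kernel density estimate evaluated at the points x_eval."""
    u = (np.asarray(x_eval, float)[:, None] - np.asarray(data, float)[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2.0 * np.pi))

# Sanity check on synthetic standard-normal data.
rng = np.random.default_rng(7)
data = rng.normal(size=500)
h = silverman_bandwidth(data)
grid = np.linspace(-5.0, 5.0, 1001)
density = kde(grid, data, h)
total = density.sum() * (grid[1] - grid[0])   # should integrate to ~1
```

    Rule-of-thumb bandwidths like this assume roughly Gaussian data; for the multimodal, high-dimensional densities typical of pattern-recognition tasks, cross-validation or plug-in selectors usually behave differently, which is exactly what such comparisons probe.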

  8. Comparative study of approaches to estimate pipe break frequencies

    Energy Technology Data Exchange (ETDEWEB)

    Simola, K.; Pulkkinen, U.; Talja, H.; Saarenheimo, A.; Karjalainen-Roikonen, P. [VTT Industrial Systems (Finland)

    2002-12-01

    The report describes a comparative study of two approaches to estimating pipe leak and rupture frequencies for piping. One method is based on a probabilistic fracture mechanics (PFM) model, while the other is based on statistical estimation of rupture frequencies from a large database. In order to be able to compare the approaches and their results, the rupture frequencies of some selected welds have been estimated using both of these methods. This paper highlights the differences in the methods, the input data, the need for and use of plant-specific information, and the need for expert judgement. The study focuses on one specific degradation mechanism, namely intergranular stress corrosion cracking (IGSCC). This is the major degradation mechanism in old stainless steel piping in the BWR environment, and its growth is influenced by material properties, stresses and water chemistry. (au)
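
    The statistical, database-driven approach can be sketched as a Poisson rate estimate per component-year of exposure. The sketch below is generic (the report's actual statistical model is not specified here); the Jeffreys-prior variant is a common convention for sparse or zero-event data, shown as an assumption rather than the report's method:

```python
def rupture_frequency(events, component_years):
    """Maximum-likelihood rupture frequency (per year) for a homogeneous
    Poisson process: observed events divided by accumulated exposure."""
    return events / component_years

def jeffreys_mean(events, component_years):
    """Posterior-mean rate under the Jeffreys prior Gamma(1/2, 0),
    which stays nonzero even when no failures have been observed."""
    return (events + 0.5) / component_years

# Example: 2 ruptures observed over 50,000 weld-years of service data.
mle = rupture_frequency(2, 5.0e4)        # 4e-5 per weld-year
zero_event = jeffreys_mean(0, 5.0e4)     # nonzero estimate despite no failures
```

    The contrast with the PFM approach is that the statistical estimate pools heterogeneous service data, whereas the PFM model propagates crack-growth physics (here, IGSCC) for a specific weld.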

  9. Age estimation using exfoliative cytology and radiovisiography: A comparative study.

    Science.gov (United States)

    Nallamala, Shilpa; Guttikonda, Venkateswara Rao; Manchikatla, Praveen Kumar; Taneeru, Sravya

    2017-01-01

    Age estimation is one of the essential factors in establishing the identity of an individual. Among various methods, exfoliative cytology (EC) is a unique, noninvasive technique involving the simple, pain-free collection of intact cells from the oral cavity for microscopic examination. The study was undertaken with the aim of estimating the age of an individual from the average cell size of their buccal smears, calculated using image-analysis morphometric software, and from the pulp-tooth area ratio in the mandibular canine of the same individual using radiovisiography (RVG). Buccal smears were collected from 100 apparently healthy individuals. After fixation in 95% alcohol, the smears were stained using Papanicolaou stain. The average cell size was measured using image analysis software (Image-Pro Insight 8.0). The RVG images of mandibular canines were obtained, pulp and tooth areas were traced using AutoCAD 2010 software, and the area ratio was calculated. The estimated age was then calculated using regression analysis. The paired t-test between chronological age and the age estimated by cell size and by pulp-tooth area ratio was statistically nonsignificant (P > 0.05). In the present study, age estimated by pulp-tooth area ratio and EC yielded good results.
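
    The regression step described (estimating age from a morphometric predictor such as the pulp-tooth area ratio) can be sketched generically with synthetic data; the coefficients and noise level below are illustrative assumptions, not values from the study:

```python
import numpy as np

# Synthetic cohort: pulp-tooth area ratio declines roughly linearly with age.
rng = np.random.default_rng(42)
age = rng.uniform(20.0, 60.0, 100)                        # chronological ages (years)
ratio = 0.12 - 0.001 * age + rng.normal(0.0, 0.004, 100)  # hypothetical pulp/tooth ratio

# Fit age = a * ratio + b by ordinary least squares, then predict back.
a, b = np.polyfit(ratio, age, 1)
estimated = a * ratio + b

# With an intercept in the model, the mean residual of OLS is zero by construction,
# so any systematic over/underestimation shows up only out-of-sample.
mean_diff = np.mean(estimated - age)
```

    A paired t-test between `estimated` and `age` on an independent validation sample would then mirror the study's nonsignificance check (P > 0.05).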

  10. DESIGNING EXPERIENCES WITH TOURISM VALUE: URBAN LANDSCAPES

    Directory of Open Access Journals (Sweden)

    Francesc Fusté

    2015-10-01

    Full Text Available This article deals with the possibilities that the creation of experiences offers for business and regional development, thanks to the theming of the tourism sector and the intentional modification of the environment, both cultural and natural. The landscape characterizes places according to their territorial configuration as well as their architectural and urban form. Architectural structures, events, and activities that involve the active participation of users are the key to the success of designing experiences with added value, where new technologies help emphasize their impact. In any case, the aim is to turn places into experiences, for residents and visitors alike.

  11. NCI Funding Trends and Priorities in Physical Activity and Energy Balance Research Among Cancer Survivors.

    Science.gov (United States)

    Alfano, Catherine M; Bluethmann, Shirley M; Tesauro, Gina; Perna, Frank; Agurs-Collins, Tanya; Elena, Joanne W; Ross, Sharon A; O'Connell, Mary; Bowles, Heather R; Greenberg, Deborah; Nebeling, Linda

    2016-01-01

    There is considerable evidence that a healthy lifestyle consisting of physical activity, a healthy diet, and weight control is associated with reduced risk of morbidity and mortality after cancer. However, these behavioral interventions are not widely adopted in practice or community settings. Integrating health behavior change interventions into standard survivorship care for the growing number of cancer survivors requires an understanding of the current state of the science and a coordinated scientific agenda for the future, with focused attention on several priority areas. To facilitate this goal, this paper presents trends in the National Cancer Institute (NCI) research portfolio over the past decade, fiscal years 2004 to 2014, by funding mechanism, research focus, research design and methodology, primary study exposures and outcomes, and study team expertise and composition. These data inform a prioritized research agenda for the next decade focused on demonstrating value and feasibility and creating desire for health behavior change interventions at multiple levels, including the survivor, clinician, and healthcare payer, to facilitate the development and implementation of appropriately targeted, adaptive, effective, and sustainable programs for all survivors. Published by Oxford University Press (2015). This work is written by (a) US Government employee(s) and is in the public domain in the US.

  12. A prospective longitudinal study to estimate the prevalence of ...

    African Journals Online (AJOL)

    A prospective longitudinal study to estimate the prevalence of obesity in Egyptian children with nocturnal enuresis and the association between body mass index and ... Egyptian Journal of Medical Human Genetics ... Response to the treatment was evaluated statistically and correlated with body mass index percentile.

  13. A Comparative Study Of Source Location And Depth Estimates From ...

    African Journals Online (AJOL)

    ... the analytic signal amplitude (ASA) and the local wave number (LWN) of the total intensity magnetic field. In this study, a synthetic magnetic field due to four buried dipoles was analysed to show that estimates of source location and depth can be improved significantly by reducing the data to the pole prior to the application ...

  14. NIH and NCI grant-related changes during fiscal years 2014 and 2015

    Science.gov (United States)

    Wong, Rosemary S. L.

    2015-03-01

    The 2014 fiscal year (FY) continued to be a challenging one for all federal agencies despite the many Congressional strategies proposed to address the U.S. budget deficit. The Bipartisan Budget Act of 2013, passed by the House and Senate in December 2013, approved a two-year spending bill which cancelled the FY2014 and FY2015 required sequestration cuts (i.e., the 4-5% National Institutes of Health (NIH)/National Cancer Institute (NCI) budget reduction initiated on March 1, 2013), but extended the sequestration period through FY2023. This bill's passage helped minimize further budget reductions and resulted in a final FY2014 NIH budget of $29.9 billion and an NCI budget of $4.9 billion. Both NIH and NCI worked hard to award the same number of NIH/NCI investigator-initiated R01 and exploratory R21 grants in FY2014 as in FY2013 and previous years (see Tables 1 and 2). Since Congress only passed the 2015 spending bill on December 16, 2014, the final NIH and NCI budget appropriations for FY2015 remain unknown at this time and will most likely be similar to the FY2014 budget level. The NCI overall success and funding rates for unsolicited investigator-initiated R01 applications remained at 15%, while the success rate for exploratory R21 applications was 12% in FY2014, with similar rates seen in FY2013 (see Tables 1 and 2). The success rate for biomedical research applications in the photodynamic therapy and laser research field will be provided for the past few years. NIH provides numerous resources to inform the extramural biomedical research community and new and current grant applicants about grant policy changes and the grant submission and review processes.

  15. A feasibility study on wavelet transform for reactivity coefficient estimation

    International Nuclear Information System (INIS)

    Shimazu, Yoichiro

    2000-01-01

    Recently, a new method using the Fourier transform has been introduced in place of the conventional method in order to reduce the time required for the measurement of the moderator temperature coefficient in domestic PWRs. The basic concept of these methods is to eliminate noise in the reactivity signal. From this point of view, wavelet analysis is also known to be an effective method. In this paper, we tried to apply this method to estimate reactivity coefficients of a nuclear reactor. The basic idea of the reactivity coefficient estimation is to analyze the ratios of the corresponding expansion coefficients of the wavelet transforms of the signals of reactivity and the relevant parameter. The concept requires no inverse wavelet transform. Based on numerical simulations, it is found that the method can reasonably estimate a reactivity coefficient, for example the moderator temperature coefficient, with a shorter time sequence of data than that required by the Fourier transform method. We will continue this study to examine the validity of the estimation procedure for actual reactor data and further to estimate the other reactivity coefficients. (author)
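
    The ratio-of-expansion-coefficients idea can be sketched with the simplest wavelet, the Haar transform. This is a generic illustration under assumed signals and an assumed single-level decomposition, not the paper's actual procedure; the median ratio stands in for whatever robust combination of coefficient ratios the study used:

```python
import numpy as np

def haar_detail(x):
    """Single-level Haar wavelet detail coefficients of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

def coeff_from_ratio(reactivity, parameter):
    """Estimate a reactivity coefficient as a robust (median) ratio of the
    corresponding wavelet detail coefficients of the two signals.
    No inverse wavelet transform is required."""
    d_rho = haar_detail(reactivity)
    d_par = haar_detail(parameter)
    mask = np.abs(d_par) > 1e-6          # guard against near-zero denominators
    return float(np.median(d_rho[mask] / d_par[mask]))

# Synthetic check: reactivity responds linearly to a noisy temperature signal.
rng = np.random.default_rng(1)
temperature = np.cumsum(rng.normal(size=1024))       # slowly wandering parameter
alpha_true = -3.0                                    # assumed coefficient (arbitrary units)
reactivity = alpha_true * temperature + rng.normal(scale=0.1, size=1024)
alpha_est = coeff_from_ratio(reactivity, temperature)
```

    Because the same linear operator is applied to both signals, measurement noise that is uncorrelated between coefficients largely cancels in the median of the ratios, which is the noise-suppression property the abstract appeals to.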

  16. Fusion reactor design studies: standard accounts for cost estimates

    International Nuclear Information System (INIS)

    Schulte, S.C.; Willke, T.L.; Young, J.R.

    1978-05-01

    The fusion reactor design studies' standard accounts for cost estimates provide a common format from which to assess the economic character of magnetically confined fusion reactor design concepts. The format will aid designers in the preparation of design concept cost estimates and also provide policymakers with a tool to assist in appraising which design concepts may be economically promising. The format sets forth a categorization and accounting procedure to be used when estimating fusion reactor busbar energy cost that can be easily and consistently applied. Reasons for developing the procedure, explanations of the procedure, justifications for assumptions made in the procedure, and the applicability of the procedure are described in this document. Adherence to the format when evaluating prospective fusion reactor design concepts will result in the identification of the more promising design concepts, thus enabling the fusion power alternatives with better economic potential to be quickly and efficiently developed.

  17. Estimation of dose and exposure at sentinel node study

    International Nuclear Information System (INIS)

    Skopljak, A.; Kucukalic-Selimovic, E.; Beslic, N.; Begic, A.; Begovic-Hadzimuratovic, S.; Drazeta, Z.; Beganovic, A.

    2005-01-01

    The purpose of this study was to estimate the dose and exposure of staff involved in the sentinel node procedure for breast cancer patients. The Institute of Nuclear Medicine in Sarajevo uses a protocol for lymphoscintigraphy of the sentinel node whereby 13 MBq of 99mTc nanocoll are used. In this study, we measured the radiation doses and exposure of a nuclear medicine physician and a technologist, as well as of a surgeon performing sentinel node lymphoscintigraphy and biopsy. Dose and exposure were calculated using the equation containing the gamma constant for 99mTc. Calculations were made for different exposure times and distances. Table 1 presents the estimated dose and exposure during the sentinel node study. Radiation levels were very low, and the most exposed hospital staff performing the sentinel node study were the nuclear medicine physicians. The doses to the surgeons' hands 8 hours after exposure were negligible. (author)
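
    The point-source calculation the abstract refers to is the standard gamma-constant formula, dose rate = Γ·A/d², with the activity decay-corrected. The sketch below is generic; the gamma constant value is a commonly tabulated figure for 99mTc used here as an illustrative assumption, not a number taken from the study:

```python
import math

GAMMA_TC99M = 0.0141   # assumed gamma dose-rate constant for 99mTc, mSv·m²/(GBq·h)
HALF_LIFE_H = 6.01     # 99mTc physical half-life, hours

def dose_rate(activity_gbq, distance_m, t_hours=0.0):
    """Dose rate (mSv/h) at a given distance from a point source,
    decay-corrected to t_hours after the reference activity."""
    a = activity_gbq * math.exp(-math.log(2.0) * t_hours / HALF_LIFE_H)
    return GAMMA_TC99M * a / distance_m ** 2

def accumulated_dose(activity_gbq, distance_m, exposure_h):
    """Dose (mSv) for a short exposure, ignoring decay during the exposure itself."""
    return dose_rate(activity_gbq, distance_m) * exposure_h

# Protocol activity: 13 MBq = 0.013 GBq handled at 0.5 m.
rate_now = dose_rate(0.013, 0.5)         # mSv/h immediately after injection
rate_8h = dose_rate(0.013, 0.5, 8.0)     # why surgeon doses 8 h later are negligible
```

    The 8-hour decay factor (about 0.4 for 99mTc) combined with the inverse-square falloff is what drives the surgeons' hand doses toward negligible values in the study's Table 1.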

  18. Permissivity of the NCI-60 cancer cell lines to oncolytic Vaccinia Virus GLV-1h68

    International Nuclear Information System (INIS)

    Ascierto, Maria Libera; Bedognetti, Davide; Uccellini, Lorenzo; Rossano, Fabio; Ascierto, Paolo A; Stroncek, David F; Restifo, Nicholas P; Wang, Ena; Szalay, Aladar A; Marincola, Francesco M; Worschech, Andrea; Yu, Zhiya; Adams, Sharon; Reinboth, Jennifer; Chen, Nanhai G; Pos, Zoltan; Roychoudhuri, Rahul; Di Pasquale, Giovanni

    2011-01-01

    Oncolytic viral therapy represents an alternative therapeutic strategy for the treatment of cancer. We previously described GLV-1h68, a modified Vaccinia Virus with exclusive tropism for tumor cells, and we observed a cell line-specific relationship between the ability of GLV-1h68 to replicate in vitro and its ability to colonize and eliminate tumors in vivo. In the current study we surveyed the in vitro permissivity to GLV-1h68 replication of the NCI-60 panel of cell lines. Selected cell lines were also tested for permissivity to another Vaccinia Virus and a vesicular stomatitis virus (VSV) strain. In order to identify correlates of permissivity to viral infection, we measured transcriptional profiles of the cell lines prior to infection. We observed highly heterogeneous permissivity to VACV infection amongst the cell lines. The heterogeneity of permissivity was independent of tissue of origin, with the exception of B-cell derivation. A significant correlation was found between permissivity to the different viral strains, suggesting a common permissive phenotype. While no clear transcriptional pattern could be identified as a predictor of permissivity to infection, some associations were observed, suggesting a multifactorial basis for permissivity to viral infection. Our findings have implications for the design of oncolytic therapies for cancer and offer insights into the nature of permissivity of tumor cells to viral infection.

  19. Effect of bcl-2 antisense oligodexynucleotides on chemotherapy efficacy of Vp-16 on human small cell lung cancer cell line NCI-H69

    International Nuclear Information System (INIS)

    He Wenqian; Liu Zhonghua

    2007-01-01

    Objective: To study the effect of bcl-2 antisense oligodeoxynucleotides on the chemotherapy efficacy of Vp-16 in the human small cell lung cancer cell line NCI-H69. Methods: Cultured NCI-H69 cells were divided into 4 groups: bcl-2 antisense oligodeoxynucleotides (ASODN) added, sense oligodeoxynucleotides (SODN) added, nonsense oligodeoxynucleotides (NSODN) added, and control (no oligonucleotides added); the oligodeoxynucleotides were transfected into the cultured cells with Oligofectamine. The cellular expression of Bcl-2 protein 72 h later was examined by Western blot. The four groups of cultured tumor cells were treated with etoposide (Vp-16) at different concentrations (0, 0.25, 0.5, 1.0, 2.0 and 4.0 μg/ml) for 48 h, and the cell survival fraction was then assessed with the MTT test. Results: The apoptotic rate of cells in the ASODN group was significantly higher than that of the control group; likewise, the survival fraction of cells in the ASODN group was significantly lower than that of the control group. Bcl-2 protein expression in the ASODN group was significantly lower than in the control group, but no inhibition was observed in the SODN and NSODN groups. Conclusion: bcl-2 ASODN can enhance the sensitivity to chemotherapy with Vp-16 in the small cell lung cancer cell line NCI-H69 by effectively blocking bcl-2 gene expression. (authors)

  20. Readability of Online Patient Educational Resources Found on NCI-Designated Cancer Center Web Sites.

    Science.gov (United States)

    Rosenberg, Stephen A; Francis, David; Hullett, Craig R; Morris, Zachary S; Fisher, Michael M; Brower, Jeffrey V; Bradley, Kristin A; Anderson, Bethany M; Bassetti, Michael F; Kimple, Randall J

    2016-06-01

    The NIH and Department of Health & Human Services recommend online patient information (OPI) be written at a sixth-grade level. We used a panel of readability analyses to assess OPI from NCI-Designated Cancer Center (NCIDCC) Web sites. Cancer.gov was used to identify 68 NCIDCC Web sites from which we collected both general OPI and OPI specific to breast, prostate, lung, and colon cancers. This text was analyzed by 10 commonly used readability tests: the New Dale-Chall Readability Formula, Flesch Reading Ease scale, Flesch-Kincaid Grade Level, FORCAST scale, Fry Readability Graph, Simple Measure of Gobbledygook test, Gunning Frequency of Gobbledygook index, New Fog Count, Raygor Readability Estimate Graph, and Coleman-Liau Index. We tested the hypothesis that the readability of NCIDCC OPI was written at the sixth-grade level. Secondary analyses were performed to compare readability of OPI between comprehensive and noncomprehensive centers, by region, and to OPI produced by the American Cancer Society (ACS). A mean of 30,507 words from 40 comprehensive and 18 noncomprehensive NCIDCCs was analyzed (7 nonclinical and 3 without appropriate OPI were excluded). Using a composite grade-level score, the mean readability score of 12.46 (ie, college level; 95% CI, 12.13-12.79) was significantly greater than the target grade level of 6 (middle school) across all readability metrics (P<.05). ACS OPI uses easier language, at the seventh- to ninth-grade level, across all tests (P<.01). OPI from NCIDCC Web sites is more complex than recommended for the average patient. Copyright © 2016 by the National Comprehensive Cancer Network.
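
    One of the panel's tests, the Flesch-Kincaid Grade Level, has a closed form: 0.39·(words/sentences) + 11.8·(syllables/words) − 15.59. A minimal sketch follows; the syllable counter is a naive vowel-group heuristic (an assumption of this sketch, real readability tools use dictionaries and richer rules):

```python
import re

def count_syllables(word):
    """Naive syllable estimate: runs of vowels, with a silent trailing 'e' dropped."""
    word = word.lower()
    if word.endswith("e") and not word.endswith("le"):
        word = word[:-1]
    groups = re.findall(r"[aeiouy]+", word)
    return max(1, len(groups))

def fk_grade(text):
    """Flesch-Kincaid grade: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59
```

    Short, monosyllabic sentences score well below the sixth-grade target, while dense polysyllabic prose of the kind found on the surveyed sites scores far above it.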

  1. 77 FR 2734 - Proposed Collection; Comment Request: Solar Cell: A Mobile UV Manager for Smart Phones (NCI)

    Science.gov (United States)

    2012-01-19

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Institutes of Health Proposed Collection; Comment Request: Solar Cell: A Mobile UV Manager for Smart Phones (NCI) SUMMARY: In compliance with the... Manager for Smart Phones (NCI). Type of Information Collection Request: New. Need and Use of Information...

  2. Study on estimation of evacuation distances for nuclear emergency

    International Nuclear Information System (INIS)

    Sato, Sohei; Homma, Toshimitsu

    2005-09-01

    The Japan Atomic Energy Research Institute (JAERI) has conducted analytical studies on Probabilistic Safety Assessment (PSA), severe accidents, and the optimization of protective actions. Based on the results of these studies, JAERI is investigating methods for taking urgent protective actions more reasonably. If an accident occurs in a nuclear power plant (NPP), early protective actions are carried out. To implement these actions more effectively, emergency preparedness and emergency planning are important, and prompt evacuation in particular is expected to reduce a large amount of radiation exposure. To examine the effect of early protective measures using a PSA method, the parameter uncertainty in the time required for early protective actions must be estimated. For this purpose, we have developed an analytical method for urgent protective actions using geographic information, and estimated the movement distance based on the arrangement of gathering points and the population distribution. For this analysis, we used the gathering-point data shown in each regional plan for disaster prevention, which would be used in an actual emergency, and targeted the population inside the Emergency Planning Zone (EPZ). By applying this method to the existing sixteen commercial NPP sites, we estimated the average value and the distribution of the movement distance for each site. This report provides a brief description of the method for estimating the movement distance, the input data for this analysis, and the results. Moreover, problems with the evacuation-distance analysis method and the usefulness of this method for emergency planning are discussed. (author)
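
    A schematic of the core distance computation, under simplifying assumptions (straight-line distances and hypothetical coordinates; the report works with actual geographic information, which this does not model): each populated cell is assigned to its nearest gathering point and the distances are population-weighted.

```python
import numpy as np

def mean_evacuation_distance(pop_xy, pop_counts, gather_xy):
    """Population-weighted mean straight-line distance from each populated
    grid cell to its nearest gathering point (coordinates in km)."""
    pop_xy = np.asarray(pop_xy, float)        # (n, 2) populated cell coordinates
    gather_xy = np.asarray(gather_xy, float)  # (m, 2) gathering-point coordinates
    d = np.linalg.norm(pop_xy[:, None, :] - gather_xy[None, :, :], axis=2)
    nearest = d.min(axis=1)                   # distance to the closest gathering point
    w = np.asarray(pop_counts, float)
    return float((nearest * w).sum() / w.sum())

# Toy example: two populated cells, two gathering points.
pop_xy = [(0.0, 0.0), (0.0, 3.0)]
pop_counts = [100, 300]
gather_xy = [(0.0, 0.0), (0.0, 4.0)]
dist = mean_evacuation_distance(pop_xy, pop_counts, gather_xy)
```

    Repeating the computation per site, and keeping the full vector `nearest` rather than only its weighted mean, yields the site-by-site distance distributions the report summarizes.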

  3. Dental age estimation using Willems method: A digital orthopantomographic study

    Directory of Open Access Journals (Sweden)

    Rezwana Begum Mohammed

    2014-01-01

    Full Text Available In recent years, age estimation has become increasingly important in living people for a variety of reasons, including identifying criminal and legal responsibility, and for many other social events such as a birth certificate, marriage, beginning a job, joining the army, and retirement. Objectives: The aim of this study was to assess the developmental stages of the left seven mandibular teeth for estimation of dental age (DA) in different age groups and to evaluate the possible correlation between DA and chronological age (CA) in a South Indian population using the Willems method. Materials and Methods: Digital orthopantomograms of 332 subjects (166 males, 166 females) who fit the study criteria were obtained. Assessment of the development of the mandibular teeth (from central incisor to second molar) in the left quadrant was undertaken, and DA was assessed using the Willems method. Results and Discussion: The present study showed a significant correlation between DA and CA in both males (r = 0.71) and females (r = 0.88). The overall mean difference between the estimated DA and CA for males was 0.69 ± 2.14 years (P > 0.05). The Willems method underestimated the mean age of males by 0.69 years and of females by 0.08 years, and showed that females mature earlier than males in the selected population. The mean difference between DA and CA according to the Willems method was 0.39 years and is statistically significant (P < 0.05). Conclusion: This study showed a significant relation between DA and CA. Thus, digital radiographic assessment of mandibular teeth development can be used to generate a mean DA using the Willems method and also an estimated age range for an individual of unknown CA.
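
    The significance check the abstract reports (mean DA-CA difference tested against zero) is a paired t-test on matched estimates. A self-contained sketch with made-up numbers, not the study's data:

```python
import numpy as np

def paired_t(x, y):
    """Paired t statistic for the mean difference between two matched samples:
    t = mean(d) / (sd(d) / sqrt(n)), with d = x - y."""
    d = np.asarray(x, float) - np.asarray(y, float)
    n = len(d)
    return float(d.mean() / (d.std(ddof=1) / np.sqrt(n)))

# Hypothetical dental ages vs chronological ages for four subjects.
dental_age = [12.0, 13.5, 11.0, 14.0]
chron_age = [11.0, 13.5, 10.0, 14.0]
t_stat = paired_t(dental_age, chron_age)
```

    Comparing `t_stat` against the t distribution with n-1 degrees of freedom (via tables or scipy.stats, if available) gives the P-value quoted in such studies.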

  4. Estimating bacterial diversity for ecological studies: methods, metrics, and assumptions.

    Directory of Open Access Journals (Sweden)

    Julia Birtel

    Full Text Available Methods to estimate microbial diversity have developed rapidly in an effort to understand the distribution and diversity of microorganisms in natural environments. For bacterial communities, the 16S rRNA gene is the phylogenetic marker gene of choice, but most studies select only a specific region of the 16S rRNA gene to estimate bacterial diversity. Whereas biases derived from DNA extraction, primer choice and PCR amplification are well documented, we here address how the choice of variable region can influence a wide range of standard ecological metrics, such as species richness, phylogenetic diversity, β-diversity and rank-abundance distributions. We have used Illumina paired-end sequencing to estimate the bacterial diversity of 20 natural lakes across Switzerland derived from three trimmed variable 16S rRNA regions (V3, V4, V5). Species richness, phylogenetic diversity, community composition, β-diversity, and rank-abundance distributions differed significantly between 16S rRNA regions. Overall, patterns of diversity quantified by the V3 and V5 regions were more similar to one another than those assessed by the V4 region. Similar results were obtained when analyzing the datasets with different sequence-similarity thresholds during sequence clustering and when the same analysis was applied to a reference dataset of sequences from the Greengenes database. In addition, we also measured species richness from the same lake samples using ARISA fingerprinting, but did not find a strong relationship between species richness estimated by Illumina and by ARISA. We conclude that the selection of the 16S rRNA region significantly influences the estimation of bacterial diversity and species distributions and that caution is warranted when comparing data from different variable regions as well as when using different sequencing techniques.
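
    Two of the standard metrics compared across variable regions, observed richness and Shannon diversity, are simple functions of an OTU count vector. A generic sketch (the study's actual pipeline involves clustering, rarefaction, and phylogenetic metrics not shown here):

```python
import numpy as np

def richness(counts):
    """Observed richness: number of taxa (OTUs) with nonzero counts."""
    return int(np.count_nonzero(counts))

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over taxa present in the sample."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

# Example OTU table row: four taxa, one absent from this sample.
sample = [10, 5, 0, 1]
r = richness(sample)      # 3 observed taxa
h = shannon(sample)       # diversity accounting for evenness
```

    Because each variable region yields a different OTU table for the same lake sample, these per-sample metrics (and the β-diversity distances built from them) can diverge, which is the comparison the study quantifies.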

  5. Reducing Friction: An Update on the NCIP Open Development Initiative - NCI BioMedical Informatics Blog

    Science.gov (United States)

    NCIP has migrated 132 repositories from the NCI subversion repository to our public NCIP GitHub channel with the goal of facilitating third party contributions to the existing code base. Within the GitHub environment, we are advocating use of the GitHub “fork and pull” model.

  6. NCI and the Chinese National Cancer Center pursue new collaborations in cancer research

    Science.gov (United States)

    CGH Director, Dr. Ted Trimble, and East Asia Program Director, Dr. Ann Chao, traveled to Beijing with Mr. Matthew Brown from the Department of Health and Human Services Office of Global Affairs to attend the Joint Meeting of the NCC and the U.S. NCI.

  7. 78 FR 53763 - Proposed Collection; 60-day Comment Request Cancer Trials Support Unit (CTSU) (NCI)

    Science.gov (United States)

    2013-08-30

    ... proposed data collection projects, the National Cancer Institute (NCI), National Institutes of Health (NIH), will publish periodic summaries of proposed projects to be submitted to the Office of Management and... proposed collection of information, including the validity of the methodology and assumptions used; (3...

  8. NCI at Frederick Employees Receive Awards at the Spring Research Festival | Poster

    Science.gov (United States)

    NCI and Frederick National Laboratory staff members were among those honored at the Spring Research Festival Awards Ceremony on May 28. The ceremony was the culmination of the festival, which was sponsored by the National Interagency Confederation for Biological Research (NICBR), May 4–7. Maj. Gen. Brian Lein, commanding general, U.S. Army Medical Research and Materiel Command

  9. Microsoft Office 365 Deployment Continues through June at NCI at Frederick | Poster

    Science.gov (United States)

    The latest Microsoft suite, Office 365 (O365), is being deployed to all NCI at Frederick computers during the months of May and June to comply with federal mandates. The suite includes the latest versions of Word, Excel, Outlook, PowerPoint, and Skype for Business, along with cloud-based capabilities. These cloud-based capabilities will help meet the federal mandates that

  10. Puerto Rico NCI Community Oncology Research Program Minority/Underserved | Division of Cancer Prevention

    Science.gov (United States)

    The Puerto Rico NCI Community Oncology Research Program (PRNCORP) will be the principal organization in the island that promotes cancer prevention, control and screening/post-treatment surveillance clinical trials. It will conduct cancer care delivery research and will provide access to treatment and imaging clinical trials conducted under the reorganization of the National

  11. 75 FR 61763 - Submission of OMB Review; Comment Request; Drug Accountability Record (Form NIH 2564) (NCI)

    Science.gov (United States)

    2010-10-06

    ...; Comment Request; Drug Accountability Record (Form NIH 2564) (NCI) SUMMARY: In compliance with the..., 2011, unless it displays a valid OMB control number. Proposed Collection: Title: Drug Accountability... accountability. In order to fulfill these requirements, a standard Investigational Drug Accountability Report...

  12. 75 FR 46945 - Proposed Collection; Comment Request; the Drug Accountability Record (Form NIH 2564) (NCI)

    Science.gov (United States)

    2010-08-04

    ... Request; the Drug Accountability Record (Form NIH 2564) (NCI) SUMMARY: In compliance with the requirement... Management and Budget (OMB) for review and approval. Proposed Collection Title: The Drug Accountability... agent accountability. In order to fulfill these requirements, a standard Investigational Drug...

  13. 76 FR 14034 - Proposed Collection; Comment Request; NCI Cancer Genetics Services Directory Web-Based...

    Science.gov (United States)

    2011-03-15

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Institutes of Health Proposed Collection; Comment Request; NCI Cancer Genetics Services Directory Web-Based Application Form and Update Mailer Summary: In... Cancer Genetics Services Directory Web-based Application Form and Update Mailer. [[Page 14035

  14. 78 FR 2678 - Proposed Collection; Comment Request (60-Day FRN): The National Cancer Institute (NCI...

    Science.gov (United States)

    2013-01-14

    ... Request (60-Day FRN): The National Cancer Institute (NCI) SmokefreeTXT (Text Message) Program Evaluation..., Behavioral Scientist/ Health Science Administrator, Division of Cancer Control and Population Sciences, 6130... text message smoking cessation intervention designed for young adult smokers ages 18-29. The Smokefree...

  15. NCI Statement on the U.S. Surgeon General's "Call to Action to Prevent Skin Cancer"

    Science.gov (United States)

    As the Federal Government's principal agency for cancer research and training, the National Cancer Institute (NCI) endorses the U.S. Surgeon General’s “Call to Action to Prevent Skin Cancer,” which provides a comprehensive evaluation of the current state of skin cancer prevention efforts in the United States and recommends actions for improvement in the future.

  16. Diet History Questionnaire II FAQs | EGRP/DCCPS/NCI/NIH

    Science.gov (United States)

    Answers to general questions about the Diet History Questionnaire II (DHQ II), as well as those related to DHQ II administration, validation, scanning, nutrient estimates, calculations, DHQ II modification, data quality, and more.

  17. Automated estimation of hip prosthesis migration: a feasibility study

    Science.gov (United States)

    Vandemeulebroucke, Jef; Deklerck, Rudi; Temmermans, Frederik; Van Gompel, Gert; Buls, Nico; Scheerlinck, Thierry; de Mey, Johan

    2013-09-01

A common complication associated with hip arthroplasty is prosthesis migration; for most cemented components, a migration greater than 0.85 mm within the first six months after surgery is an indicator of prosthesis failure. Currently, prosthesis migration is evaluated using X-ray images, which can only reliably estimate migrations larger than 5 mm. We propose an automated method for estimating prosthesis migration more accurately, using CT images and image registration techniques. We report on the results obtained using an experimental set-up, in which a metal prosthesis can be translated and rotated with respect to a cadaver femur, over distances and angles applied using a combination of positioning stages. Images are first preprocessed to reduce artefacts. Bone and prosthesis are extracted using consecutive thresholding and morphological operations. Two registrations are performed, one aligning the bones and the other aligning the prostheses. The migration is estimated as the difference between the found transformations. We use a robust, multi-resolution, stochastic optimization approach, and compare mean squared intensity differences (MS) to mutual information (MI) as similarity measures. Thirty high-resolution helical CT scans were acquired for prosthesis translations ranging from 0.05 mm to 4 mm, and rotations ranging from 0.3° to 3°. For the translations, the mean 3D registration error was found to be 0.22 mm for MS, and 0.15 mm for MI. For the rotations, the standard deviation of the estimation error was 0.18° for MS, and 0.08° for MI. The results show that the proposed approach is feasible and that clinically acceptable accuracies can be obtained. Clinical validation studies on patient images will now be undertaken.
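The core computation described here, estimating migration as the difference between the bone and prosthesis registrations, amounts to composing one rigid transform with the inverse of the other. A minimal numeric sketch (the transform parameters below are hypothetical, not values from the study):

```python
import numpy as np

def rigid(tx, ty, tz, rx_deg):
    """4x4 homogeneous rigid transform: rotation about x, then translation."""
    r = np.radians(rx_deg)
    T = np.eye(4)
    T[1, 1], T[1, 2] = np.cos(r), -np.sin(r)
    T[2, 1], T[2, 2] = np.sin(r), np.cos(r)
    T[:3, 3] = [tx, ty, tz]
    return T

# Hypothetical registration results (mm, degrees), not values from the study:
T_bone = rigid(0.10, -0.05, 0.02, 0.1)   # transform aligning the bones
T_pros = rigid(0.60, -0.05, 0.02, 0.4)   # transform aligning the prostheses

# Migration = prosthesis motion expressed relative to the bone
T_mig = np.linalg.inv(T_bone) @ T_pros
translation_mm = T_mig[:3, 3]
print(translation_mm)  # ~0.5 mm migration along x
```

The residual rotation of `T_mig` (here 0.3° about x) gives the rotational component of the migration in the same way.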

  18. Comparative Study of Complex Survey Estimation Software in ONS

    Directory of Open Access Journals (Sweden)

    Andy Fallows

    2015-09-01

    Full Text Available Many official statistics across the UK Government Statistical Service (GSS are produced using data collected from sample surveys. These survey data are used to estimate population statistics through weighting and calibration techniques. For surveys with complex or unusual sample designs, the weighting can be fairly complicated. Even in more simple cases, appropriate software is required to implement survey weighting and estimation. As with other stages of the survey process, it is preferable to use a standard, generic calibration tool wherever possible. Standard tools allow for efficient use of resources and assist with the harmonisation of methods. In the case of calibration, the Office for National Statistics (ONS has experience of using the Statistics Canada Generalized Estimation System (GES across a range of business and social surveys. GES is a SAS-based system and so is only available in conjunction with an appropriate SAS licence. Given recent initiatives and encouragement to investigate open source solutions across government, it is appropriate to determine whether there are any open source calibration tools available that can provide the same service as GES. This study compares the use of GES with the calibration tool ‘R evolved Generalized software for sampling estimates and errors in surveys’ (ReGenesees available in R, an open source statistical programming language which is beginning to be used in many statistical offices. ReGenesees is a free R package which has been developed by the Italian statistics office (Istat and includes functionality to calibrate survey estimates using similar techniques to GES. This report describes analysis of the performance of ReGenesees in comparison to GES to calibrate a representative selection of ONS surveys. Section 1.1 provides a brief introduction to the current use of SAS and R in ONS. Section 2 describes GES and ReGenesees in more detail. Sections 3.1 and 3.2 consider methods for
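The calibration step that both GES and ReGenesees implement can be illustrated with a linear (GREG-type) calibration: design weights are adjusted minimally so that weighted auxiliary totals reproduce known population benchmarks. A toy sketch with made-up numbers (not ONS data):

```python
import numpy as np

# Toy survey: four respondents, design weights d, auxiliary data X
d = np.array([10.0, 10.0, 10.0, 10.0])      # design weights
X = np.array([[1.0, 2.0],                   # col 0: intercept (population count)
              [1.0, 4.0],                   # col 1: an auxiliary variable
              [1.0, 6.0],
              [1.0, 8.0]])
totals = np.array([45.0, 230.0])            # known population benchmarks

# Linear (GREG-type) calibration: w_i = d_i * (1 + x_i @ lam), with lam chosen
# so that the calibrated weights reproduce the benchmark totals exactly
lam = np.linalg.solve(X.T @ (d[:, None] * X), totals - d @ X)
w = d * (1 + X @ lam)

print(w @ X)  # calibrated totals now reproduce the benchmarks
```

Production tools such as GES and ReGenesees add bounded-weight variants, raking, and variance estimation on top of this basic idea.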

  19. Estimate of dose in interventional radiology: a study of cases

    International Nuclear Information System (INIS)

    Pinto, N.; Braz, D.; Lopes, R.; Vallim, M.; Padilha, L.; Azevedo, F.; Barroso, R.

    2006-01-01

Absorbed doses received by patients and professionals involved in interventional radiology can be significant, mainly because these procedures require long fluoroscopy times. There are many methods to estimate and reduce radiation doses in interventional radiology, particularly because fluoroscopy is responsible for the largest dose contribution to both the patient and the professional. The aim of this work was to use thermoluminescent dosimetry to estimate the doses to the extremities of the professionals involved in interventional radiology; the dose-area product was also investigated using a Diamentor. This evaluation is particularly useful for procedures involving multiple parts of the body. In this study, thermoluminescent dosimeters (LiF:Mg,Ti - Harshaw) were used to estimate the doses to the extremities of the professionals; for calibration, the dosimeters were irradiated with X rays at 50 mGy (air kerma) and read in a Harshaw 5500 reader. The dose-area product (DAP) was obtained with a Diamentor (M2 - P.T.W.), calibrated in cGy·cm², fixed at the exit of the X-ray tube. The patients in this study were divided into three groups: individuals submitted to embolization procedures, individuals submitted to cerebral and renal arteriography, and individuals submitted to Transjugular Intrahepatic Portosystemic Shunt (TIPS) procedures. The examinations were always carried out by the same team: a radiologist, an auxiliary doctor and a nursing auxiliary. The interventional radiology section has a Siemens Angiostar Plus C-arm unit, with a trifocal Megalix X-ray tube and a Sirecon 40-4 HDR/33 HDR image intensifier.
In this work the estimated dose values were 137.25 mSv/year for the doctors, 40.27 mSv/year for the nursing staff and 51.95 mSv/year for the auxiliary doctor; all are below the regulatory limit, but this study did not take into consideration the emergency examinations as they were

  20. Exoplanet Yield Estimation for Decadal Study Concepts using EXOSIMS

    Science.gov (United States)

    Morgan, Rhonda; Lowrance, Patrick; Savransky, Dmitry; Garrett, Daniel

    2016-01-01

The anticipated upcoming large mission study concepts for the direct imaging of exo-earths present an exciting opportunity for exoplanet discovery and characterization. While these telescope concepts would also be capable of conducting a broad range of astrophysical investigations, the most difficult technology challenges are driven by the requirements for imaging exo-earths. The exoplanet science yield for these mission concepts will drive design trades and mission concept comparisons. To assist in these trade studies, the Exoplanet Exploration Program Office (ExEP) is developing a yield estimation tool that emphasizes transparency and consistent comparison of various design concepts. The tool will provide a parametric estimate of science yield of various mission concepts using contrast curves from physics-based model codes and Monte Carlo simulations of design reference missions using realistic constraints, such as solar avoidance angles, the observatory orbit, propulsion limitations of star shades, the accessibility of candidate targets, local and background zodiacal light levels, and background confusion by stars and galaxies. The Python tool utilizes Dmitry Savransky's EXOSIMS (Exoplanet Open-Source Imaging Mission Simulator) design reference mission simulator that is being developed for the WFIRST Preliminary Science program. ExEP is extending and validating the tool for future mission concepts under consideration for the upcoming 2020 decadal review. We present a validation plan and preliminary yield results for a point design.

  1. Sequential recruitment of study participants may inflate genetic heritability estimates.

    Science.gov (United States)

    Noce, Damia; Gögele, Martin; Schwienbacher, Christine; Caprioli, Giulia; De Grandi, Alessandro; Foco, Luisa; Platzgummer, Stefan; Pramstaller, Peter P; Pattaro, Cristian

    2017-06-01

After the success of genome-wide association studies in uncovering complex trait loci, attempts to explain the remaining genetic heritability (h²) are mainly focused on unraveling rare variant associations and gene-gene or gene-environment interactions. Little attention is paid to the possibility that h² estimates are inflated as a consequence of the epidemiological study design. We studied the time series of 54 biochemical traits in 4373 individuals from the Cooperative Health Research In South Tyrol (CHRIS) study, a pedigree-based study enrolling ten participants/day over several years, with close relatives preferentially invited on the same day. We observed distributional changes of measured traits over time. We hypothesized that the combination of such changes with the pedigree structure might generate a shared-environment component with consequent h² inflation. We performed variance components (VC) h² estimation for all traits after accounting for the enrollment period in a linear mixed model (two-stage approach). Accounting for the enrollment period caused a median h² reduction of 4%. For 9 traits, the reduction was >20%. Results were confirmed by a Bayesian Markov chain Monte Carlo analysis with all VCs included at the same time (one-stage approach). The electrolytes were the traits most affected by the enrollment period. The h² inflation was independent of the h² magnitude, laboratory protocol changes, and length of the enrollment period. The enrollment process may induce shared-environment effects even under very stringent and standardized operating procedures, causing h² inflation. Including the day of participation as a random effect is a sensible way to avoid overestimation.
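The mechanism can be demonstrated with a toy simulation: when relatives are enrolled on the same day and trait levels drift across days, the day effect masquerades as familial resemblance; centering each day at its own mean (a crude stand-in for the enrollment-period random effect) removes most of the inflation. All parameters below are illustrative, not CHRIS values:

```python
import numpy as np

rng = np.random.default_rng(0)
n_days, fams = 200, 5                    # 5 sibling pairs enrolled per day
sig_g = sig_day = sig_e = 1.0            # familial, day-drift, and residual SDs

day = rng.normal(0, sig_day, (n_days, 1))     # enrollment-day effect (drift)
g = rng.normal(0, sig_g, (n_days, fams))      # effect shared within each pair
y1 = g + day + rng.normal(0, sig_e, g.shape)  # trait of sibling 1
y2 = g + day + rng.normal(0, sig_e, g.shape)  # trait of sibling 2

# Naive sibling covariance absorbs the day effect: expect ~ sig_g^2 + sig_day^2 = 2
cov_naive = np.mean((y1 - y1.mean()) * (y2 - y2.mean()))

# Center each day at its own mean (all 10 participants that day), mimicking
# an enrollment-period random effect; the day variance drops out
m = (y1 + y2).mean(axis=1, keepdims=True) / 2
a1, a2 = y1 - m, y2 - m
cov_adj = np.mean((a1 - a1.mean()) * (a2 - a2.mean()))

print(cov_naive, cov_adj)  # the adjusted sibling covariance is much smaller
```

Since heritability estimates scale with the between-relative covariance, the inflated naive covariance translates directly into inflated h².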

  2. CellMiner: a relational database and query tool for the NCI-60 cancer cell lines

    Directory of Open Access Journals (Sweden)

    Reinhold William C

    2009-06-01

Full Text Available Abstract Background Advances in high-throughput omic technologies have made it possible to profile cells in a large number of ways at the DNA, RNA, protein, chromosomal, functional, and pharmacological levels. A persistent problem is that some classes of molecular data are labeled with gene identifiers, others with transcript or protein identifiers, and still others with chromosomal locations. What has lagged behind is the ability to integrate the resulting data to uncover complex relationships and patterns. Those issues are reflected in full form by molecular profile data on the panel of 60 diverse human cancer cell lines (the NCI-60) used since 1990 by the U.S. National Cancer Institute to screen compounds for anticancer activity. To our knowledge, CellMiner is the first online database resource for integration of the diverse molecular types of NCI-60 data and related metadata. Description CellMiner enables scientists to perform advanced querying of molecular information on the NCI-60 (and additional types) through a single web interface. CellMiner is a freely available tool that organizes and stores raw and normalized data representing multiple types of molecular characterizations at the DNA, RNA, protein, and pharmacological levels. Annotations for each project, along with associated metadata on the samples and datasets, are stored in a MySQL database and linked to the molecular profile data. Data can be queried and downloaded along with comprehensive information on the experimental and analytic methods for each data set. A Data Intersection tool allows selection of a list of genes (proteins) in common between two or more data sets and outputs the data for those genes (proteins) in the respective sets. In addition to its role as an integrative resource for the NCI-60, the CellMiner package also serves as a shell for incorporation of molecular profile data on other cell or tissue sample types. Conclusion CellMiner is a relational database tool for

  3. Interlaboratory analytical performance studies; a way to estimate measurement uncertainty

    Directory of Open Access Journals (Sweden)

Elżbieta Łysiak-Pastuszak

    2004-09-01

    Full Text Available Comparability of data collected within collaborative programmes became the key challenge of analytical chemistry in the 1990s, including monitoring of the marine environment. To obtain relevant and reliable data, the analytical process has to proceed under a well-established Quality Assurance (QA system with external analytical proficiency tests as an inherent component. A programme called Quality Assurance in Marine Monitoring in Europe (QUASIMEME was established in 1993 and evolved over the years as the major provider of QA proficiency tests for nutrients, trace metals and chlorinated organic compounds in marine environment studies. The article presents an evaluation of results obtained in QUASIMEME Laboratory Performance Studies by the monitoring laboratory of the Institute of Meteorology and Water Management (Gdynia, Poland in exercises on nutrient determination in seawater. The measurement uncertainty estimated from routine internal quality control measurements and from results of analytical performance exercises is also presented in the paper.
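Proficiency exercises such as QUASIMEME conventionally score each laboratory result with a z-score against the assigned value and a target standard deviation; by the usual convention (e.g. ISO 13528), |z| ≤ 2 is satisfactory, 2 < |z| < 3 questionable, and |z| ≥ 3 unsatisfactory. A minimal sketch with made-up numbers:

```python
def z_score(result, assigned_value, sigma_target):
    """Laboratory bias expressed in units of the target standard deviation."""
    return (result - assigned_value) / sigma_target

def verdict(z):
    """Conventional interpretation (ISO 13528-style) of a proficiency z-score."""
    a = abs(z)
    if a <= 2:
        return "satisfactory"
    return "questionable" if a < 3 else "unsatisfactory"

# e.g. a nutrient result of 5.2 against an assigned value of 5.0, target SD 0.25
z = z_score(5.2, 5.0, 0.25)
print(verdict(z))  # satisfactory (z is about 0.8)
```

Repeated z-scores across rounds also feed into the laboratory's internal estimate of measurement uncertainty, as described in the abstract.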

  4. A comparative study of some robust ridge and liu estimators ...

    African Journals Online (AJOL)

    In multiple linear regression analysis, multicollinearity and outliers are two main problems. When multicollinearity exists, biased estimation techniques such as Ridge and Liu Estimators are preferable to Ordinary Least Square. On the other hand, when outliers exist in the data, robust estimators like M, MM, LTS and S ...

  5. Study on the social economic estimation of Chernobyl accident

    International Nuclear Information System (INIS)

    Sagara, Aya; Fujimoto, Noboru; Morita, Koji; Fukuda, Kenji

    2000-01-01

In order to estimate the external economic effects of nuclear power plant risk, a document survey was carried out, dealing mainly with the economic influence of the Chernobyl accident that occurred on 26 April 1986. As a result, the direct and indirect total economic loss between 1986 and 1995 is about $80 billion in Belarus, $115 billion in Ukraine and 1.15 trillion in Russia. This value, however, is considered an overestimation, since the environmental contamination with radioactive material and the incidence of thyroid cancer in Russia are much the same as in Belarus and Ukraine. The total economic loss in west European countries is about a billion dollars. The total economic loss from the Chernobyl accident is estimated at more than about $300 billion. On the other hand, the probability of occurrence of this kind of major nuclear power plant accident is very small, and the product of economic loss and frequency is smaller than the cost benefit of measures against global warming and for energy security in Japan. This kind of problem should be treated as a social problem, and further study of the various external economic effects is necessary. (author)

  6. Estimation methods of eco-environmental water requirements: Case study

    Institute of Scientific and Technical Information of China (English)

    YANG Zhifeng; CUI Baoshan; LIU Jingling

    2005-01-01

Supplying water to the ecological environment with certain quantity and quality is significant for the protection of diversity and the realization of sustainable development. The conception and connotation of eco-environmental water requirements, including the definition of the conception and the composition and characteristics of eco-environmental water requirements, are evaluated in this paper. The classification and estimation methods of eco-environmental water requirements are then proposed. On the basis of the study on the Huang-Huai-Hai Area, the present water use and the minimum and suitable water requirements are estimated, and the corresponding water shortage is also calculated. According to the interrelated programs, the eco-environmental water requirements in the coming years (2010, 2030, 2050) are estimated. The result indicates that the minimum and suitable eco-environmental water requirements fluctuate with the differences of function setting and the referential standard of water resources, and so does the water shortage. Moreover, the study indicates that the minimum eco-environmental water requirement of the study area ranges from 2.84×10¹⁰ m³ to 1.02×10¹¹ m³, the suitable water requirement ranges from 6.45×10¹⁰ m³ to 1.78×10¹¹ m³, and the water shortage ranges from 9.1×10⁹ m³ to 2.16×10¹⁰ m³ under the minimum water requirement and from 3.07×10¹⁰ m³ to 7.53×10¹⁰ m³ under the suitable water requirement. According to the different values of the water shortage, the water priority can be allocated. The ranges of the eco-environmental water requirements in the three coming years (2010, 2030, 2050) are 4.49×10¹⁰ m³-1.73×10¹¹ m³, 5.99×10¹⁰ m³-2.09×10¹¹ m³, and 7.44×10¹⁰ m³-2.52×10¹¹ m³, respectively.

  7. Estimating glomerular filtration rate in a population-based study

    Directory of Open Access Journals (Sweden)

    Anoop Shankar

    2010-07-01

    Full Text Available Anoop Shankar1, Kristine E Lee2, Barbara EK Klein2, Paul Muntner3, Peter C Brazy4, Karen J Cruickshanks2,5, F Javier Nieto5, Lorraine G Danforth2, Carla R Schubert2,5, Michael Y Tsai6, Ronald Klein21Department of Community Medicine, West Virginia University School of Medicine, Morgantown, WV, USA; 2Department of Ophthalmology and Visual Sciences, 4Department of Medicine, 5Department of Population Health Sciences, University of Wisconsin, School of Medicine and Public Health, Madison, WI, USA; 3Department of Community Medicine, Mount Sinai School of Medicine, NY, USA; 6Department of Laboratory Medicine and Pathology, University of Minnesota, Minneapolis, MN, USABackground: Glomerular filtration rate (GFR-estimating equations are used to determine the prevalence of chronic kidney disease (CKD in population-based studies. However, it has been suggested that since the commonly used GFR equations were originally developed from samples of patients with CKD, they underestimate GFR in healthy populations. Few studies have made side-by-side comparisons of the effect of various estimating equations on the prevalence estimates of CKD in a general population sample.Patients and methods: We examined a population-based sample comprising adults from Wisconsin (age, 43–86 years; 56% women. We compared the prevalence of CKD, defined as a GFR of <60 mL/min per 1.73 m2 estimated from serum creatinine, by applying various commonly used equations including the modification of diet in renal disease (MDRD equation, Cockcroft–Gault (CG equation, and the Mayo equation. We compared the performance of these equations against the CKD definition of cystatin C >1.23 mg/L.Results: We found that the prevalence of CKD varied widely among different GFR equations. Although the prevalence of CKD was 17.2% with the MDRD equation and 16.5% with the CG equation, it was only 4.8% with the Mayo equation. Only 24% of those identified to have GFR in the range of 50–59 mL/min per 1
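For reference, the two creatinine-based equations compared in this record are easy to state in code. A sketch of the 4-variable (IDMS-traceable) MDRD study equation and the Cockcroft-Gault formula; the example patient values are made up:

```python
def egfr_mdrd(scr_mg_dl, age, female, black=False):
    """4-variable MDRD study equation (IDMS-traceable), in mL/min per 1.73 m^2."""
    gfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        gfr *= 0.742
    if black:
        gfr *= 1.212
    return gfr

def crcl_cockcroft_gault(scr_mg_dl, age, weight_kg, female):
    """Cockcroft-Gault creatinine clearance, mL/min (not BSA-normalized)."""
    crcl = (140 - age) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

# Hypothetical patient: 65-year-old woman, 70 kg, serum creatinine 1.1 mg/dL
print(egfr_mdrd(1.1, 65, female=True))                 # ~50 mL/min per 1.73 m^2
print(crcl_cockcroft_gault(1.1, 65, 70, female=True))  # ~56 mL/min
```

Note that Cockcroft-Gault returns a clearance in mL/min without body-surface-area normalization, one reason the equations can classify borderline patients differently and yield different CKD prevalence estimates.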

  8. 77 FR 4334 - Proposed Collection; Comment Request; Solar Cell: A Mobile UV Manager for Smart Phones (NCI)

    Science.gov (United States)

    2012-01-27

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Institutes of Health Proposed Collection; Comment Request; Solar Cell: A Mobile UV Manager for Smart Phones (NCI) SUMMARY: In compliance with the... Manager for Smart Phones [[Page 4335

  9. It’s Easy to Recycle at NCI at Frederick | Poster

    Science.gov (United States)

    From 2013 through the first quarter of 2018, NCI at Frederick has recycled over 1,667 tons of material, while incinerating or landfilling over 4,273 tons of trash. This earns us a recycling rate close to 28 percent, which is below the national average of 32 percent, according to the Environmental Protection Agency, and well below our goal of 50 percent. (These numbers only

  10. NCI Think Tank Concerning the Identifiability of Biospecimens and “-Omic” Data

    OpenAIRE

    Weil, Carol J.; Mechanic, Leah E.; Green, Tiffany; Kinsinger, Christopher; Lockhart, Nicole C.; Nelson, Stefanie A.; Rodriguez, Laura L.; Buccini, Laura D.

    2013-01-01

    On June 11 and 12, 2012, the National Cancer Institute (NCI) hosted a think tank concerning the identifiability of biospecimens and “omic” Data in order to explore challenges surrounding this complex and multifaceted topic. The think tank brought together forty-six leaders from several fields, including cancer genomics, bioinformatics, human subject protection, patient advocacy, and commercial genetics. The first day involved presentations regarding the state of the science of re-identificati...

  11. A Gene-Based Prognostic for Hepatocellular Carcinoma Patient Response to Adjuvant Transcatheter Arterial Chemoembolization | NCI Technology Transfer Center | TTC

    Science.gov (United States)

    The gold standard of care for hepatocellular carcinoma patients with intermediate- to locally advanced tumors is transcatheter arterial chemoembolization (TACE), a procedure whereby the tumor is targeted both with local chemotherapy and restriction of local blood supply. NCI scientists have identified a 14-gene signature predictive of response to TACE, and NCI seeks licensees or co-development partners to develop the technology toward commercialization.

  12. Best Performers Announced for the NCI-CPTAC DREAM Proteogenomics Computational Challenge | Office of Cancer Clinical Proteomics Research

    Science.gov (United States)

    The National Cancer Institute (NCI) Clinical Proteomic Tumor Analysis Consortium (CPTAC) is pleased to announce that teams led by Jaewoo Kang (Korea University), and Yuanfang Guan with Hongyang Li (University of Michigan) as the best performers of the NCI-CPTAC DREAM Proteogenomics Computational Challenge. Over 500 participants from 20 countries registered for the Challenge, which offered $25,000 in cash awards contributed by the NVIDIA Foundation through its Compute the Cure initiative.

  13. Inactivated Tianjin strain, a novel genotype of Sendai virus, induces apoptosis in HeLa, NCI-H446 and Hep3B cells.

    Science.gov (United States)

    Chen, Jun; Han, Han; Wang, Bin; Shi, Liying

    2016-07-01

The Sendai virus strain Tianjin is a novel genotype of the Sendai virus. In previous studies, ultraviolet-inactivated Sendai virus strain Tianjin (UV-Tianjin) demonstrated antitumor effects on human breast cancer cells. The aim of the present study was to investigate the in vitro antitumor effects of UV-Tianjin on the human cervical carcinoma HeLa, human small cell lung cancer NCI-H446 and human hepatocellular carcinoma Hep 3B cell lines, and the possible underlying mechanisms of these antitumor effects. A 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide assay revealed that UV-Tianjin treatment inhibited the proliferation of HeLa, NCI-H446 and Hep 3B cells in a dose- and time-dependent manner. Hoechst and Annexin V-fluorescein isothiocyanate/propidium iodide double staining indicated that UV-Tianjin induced dose-dependent apoptosis in all three cell lines, with the most significant effect observed in the HeLa cell line. In the HeLa cell line, UV-Tianjin-induced apoptosis was further confirmed by the disruption of the mitochondrial membrane potential and the activation of caspases, as demonstrated by fluorescent cationic dye and colorimetric assays, respectively. In addition, western blot analysis revealed that UV-Tianjin treatment resulted in significant upregulation of cytochrome c, apoptosis protease activating factor-1, Fas, Fas ligand and Fas-associated protein with death domain, and activated caspase-9, -8 and -3 in HeLa cells. Based on these results, it is hypothesized that UV-Tianjin exhibits anticancer activity in HeLa, NCI-H446 and Hep 3B cell lines via the induction of apoptosis. In conclusion, the results of the present study indicate that in the HeLa cell line, intrinsic and extrinsic apoptotic pathways may be involved in UV-Tianjin-induced apoptosis.

  14. The NCI Alliance for Nanotechnology in Cancer: achievement and path forward.

    Science.gov (United States)

    Ptak, Krzysztof; Farrell, Dorothy; Panaro, Nicholas J; Grodzinski, Piotr; Barker, Anna D

    2010-01-01

Nanotechnology is a 'disruptive technology' that can lead to a generation of new diagnostic and therapeutic products, resulting in dramatically improved cancer outcomes. The National Cancer Institute (NCI) of the National Institutes of Health explores innovative approaches to multidisciplinary research, allowing for a convergence of molecular biology, oncology, physics, chemistry, and engineering, and leading to the development of clinically worthy technological approaches. These initiatives include programmatic efforts to enable nanotechnology as a driver of advances in clinical oncology and cancer research, known collectively as the NCI Alliance for Nanotechnology in Cancer (ANC). Over the last 5 years, the ANC has demonstrated that a multidisciplinary approach catalyzes scientific developments and advances clinical translation in cancer nanotechnology. The research conducted by ANC members has improved diagnostic assays and imaging agents, leading to the development of point-of-care diagnostics, the identification and validation of numerous biomarkers for novel diagnostic assays, and the development of multifunctional agents for imaging and therapy. Numerous nanotechnology-based technologies developed by ANC researchers are entering clinical trials. NCI has re-issued the ANC program for the next 5 years, signaling that it continues to have high expectations for cancer nanotechnology's impact on clinical practice. The goals of the next phase will be to broaden access to cancer nanotechnology research through greater clinical translation and outreach to the patient and clinical communities, and to support the development of entirely new models of cancer care.

  15. Reliability estimation of semi-Markov systems: a case study

    International Nuclear Information System (INIS)

    Ouhbi, Brahim; Limnios, Nikolaos

    1997-01-01

In this article, we are concerned with the estimation of the reliability and the availability of a turbo-generator rotor using a set of data observed in a real engineering situation, provided by Electricite De France (EDF). The rotor is modeled by a semi-Markov process, which is used to estimate the rotor's reliability and availability. To do this, we present a method for estimating the semi-Markov kernel from censored data

  16. Comparative Study of Online Open Circuit Voltage Estimation Techniques for State of Charge Estimation of Lithium-Ion Batteries

    Directory of Open Access Journals (Sweden)

    Hicham Chaoui

    2017-04-01

Full Text Available Online estimation techniques are extensively used to determine the parameters of various uncertain dynamic systems. In this paper, online estimation of the open-circuit voltage (OCV) of lithium-ion batteries is proposed by two different adaptive filtering methods (recursive least squares, RLS, and least mean squares, LMS), along with an adaptive observer. The proposed techniques use the battery’s terminal voltage and current to estimate the OCV, which is correlated to the state of charge (SOC). Experimental results highlight the effectiveness of the proposed methods in online estimation at different charge/discharge conditions and temperatures. The comparative study illustrates the advantages and limitations of each online estimation method.
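As a sketch of the idea behind the RLS variant: with a simple cell model v = OCV − i·R for the terminal voltage, recursive least squares can track the OCV and internal resistance from current/voltage samples. The data below are synthetic, not from the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(1)
ocv_true, r_true = 3.7, 0.05          # hypothetical cell: OCV (V), resistance (ohm)
i = rng.uniform(0.0, 2.0, 500)        # measured discharge current (A)
v = ocv_true - r_true * i + rng.normal(0, 0.002, i.size)  # terminal voltage (V)

# Recursive least squares on the model v_k = [1, -i_k] @ [OCV, R]
theta = np.zeros(2)                   # parameter estimate [OCV, R]
P = np.eye(2) * 1e3                   # covariance-like matrix of the estimate
for ik, vk in zip(i, v):
    phi = np.array([1.0, -ik])
    gain = P @ phi / (1.0 + phi @ P @ phi)     # unity forgetting factor
    theta = theta + gain * (vk - phi @ theta)  # innovation update
    P = P - np.outer(gain, phi) @ P

print(theta)  # converges near [3.7, 0.05]
```

The estimated OCV is then mapped to SOC through the cell's OCV-SOC curve; a forgetting factor below 1 would let the filter track a slowly varying OCV during discharge.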

  17. Evaluation of the UF/NCI hybrid computational phantoms for use in organ dosimetry of pediatric patients undergoing fluoroscopically guided cardiac procedures

    Science.gov (United States)

    Marshall, Emily L.; Borrego, David; Tran, Trung; Fudge, James C.; Bolch, Wesley E.

    2018-03-01

    Epidemiologic data demonstrate that pediatric patients face a higher relative risk of radiation induced cancers than their adult counterparts at equivalent exposures. Infants and children with congenital heart defects are a critical patient population exposed to ionizing radiation during life-saving procedures. These patients will likely incur numerous procedures throughout their lifespan, each time increasing their cumulative radiation absorbed dose. As continued improvements in long-term prognosis of congenital heart defect patients is achieved, a better understanding of organ radiation dose following treatment becomes increasingly vital. Dosimetry of these patients can be accomplished using Monte Carlo radiation transport simulations, coupled with modern anatomical patient models. The aim of this study was to evaluate the performance of the University of Florida/National Cancer Institute (UF/NCI) pediatric hybrid computational phantom library for organ dose assessment of patients that have undergone fluoroscopically guided cardiac catheterizations. In this study, two types of simulations were modeled. A dose assessment was performed on 29 patient-specific voxel phantoms (taken as representing the patient’s true anatomy), height/weight-matched hybrid library phantoms, and age-matched reference phantoms. Two exposure studies were conducted for each phantom type. First, a parametric study was constructed by the attending pediatric interventional cardiologist at the University of Florida to model the range of parameters seen clinically. Second, four clinical cardiac procedures were simulated based upon internal logfiles captured by a Toshiba Infinix-i Cardiac Bi-Plane fluoroscopic unit. Performance of the phantom library was quantified by computing both the percent difference in individual organ doses, as well as the organ dose root mean square values for overall phantom assessment between the matched phantoms (UF/NCI library or reference) and the patient
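The two performance metrics named here, per-organ percent difference and an organ-dose root-mean-square across organs, can be written compactly; a sketch with hypothetical dose arrays, not values from the study:

```python
import numpy as np

def percent_diff(d_matched, d_patient):
    """Per-organ percent difference of matched-phantom dose vs patient-specific dose."""
    d_matched, d_patient = np.asarray(d_matched), np.asarray(d_patient)
    return 100.0 * (d_matched - d_patient) / d_patient

def organ_dose_rms(d_matched, d_patient):
    """Root-mean-square of the per-organ percent differences (overall phantom score)."""
    return float(np.sqrt(np.mean(percent_diff(d_matched, d_patient) ** 2)))

# Hypothetical organ doses (mGy), e.g. lungs, heart, liver, esophagus
patient = [2.0, 3.5, 1.2, 0.8]   # patient-specific voxel phantom ("truth")
matched = [2.2, 3.2, 1.3, 0.7]   # height/weight-matched library phantom
print(organ_dose_rms(matched, patient))  # overall agreement, ~10%
```

A lower RMS for the height/weight-matched library phantom than for the age-matched reference phantom would indicate the library better reproduces patient-specific organ doses.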

  18. Regional and longitudinal estimation of product lifespan distribution: a case study for automobiles and a simplified estimation method.

    Science.gov (United States)

    Oguchi, Masahiro; Fuse, Masaaki

    2015-02-03

Product lifespan estimates are important information for understanding progress toward sustainable consumption and for estimating the stocks and end-of-life flows of products. Publications have reported actual product lifespans; however, quantitative data are still limited for many countries and years. This study presents regional and longitudinal estimation of the lifespan distribution of consumer durables, taking passenger cars as an example, and proposes a simplified method for estimating product lifespan distribution. We estimated lifespan distribution parameters for 17 countries based on the age profile of in-use cars. Sensitivity analysis demonstrated that the shape parameter of the lifespan distribution can be replaced by a constant value for all the countries and years. This enabled a simplified estimation that does not require detailed data on the age profile. Applying the simplified method, we estimated the trend in average lifespans of passenger cars from 2000 to 2009 for 20 countries. Average lifespan differed greatly between countries (9-23 years) and was increasing in many countries. This suggests that consumer behavior differs greatly among countries and has changed over time, even in developed countries. The results suggest that inappropriate assumptions about average lifespan may cause significant inaccuracy in estimating the stocks and end-of-life flows of products.
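The simplification described, fixing the distribution's shape parameter and estimating only the scale, reduces to a one-line maximum-likelihood estimate if lifespans are taken as Weibull-distributed; the average lifespan then follows from the Weibull mean. The shape value and synthetic sample below are illustrative, not the paper's:

```python
import math
import numpy as np

K = 2.6  # assumed fixed Weibull shape (illustrative; not the paper's fitted value)

def fit_scale_fixed_shape(lifespans, k=K):
    """Maximum-likelihood Weibull scale when the shape is held constant."""
    x = np.asarray(lifespans, dtype=float)
    return np.mean(x ** k) ** (1.0 / k)

def mean_lifespan(scale, k=K):
    """Weibull mean lifespan: scale * Gamma(1 + 1/k)."""
    return scale * math.gamma(1.0 + 1.0 / k)

# Synthetic end-of-life ages (years) with true scale 15
rng = np.random.default_rng(2)
sample = 15.0 * rng.weibull(K, 10_000)
scale_hat = fit_scale_fixed_shape(sample)
print(mean_lifespan(scale_hat))  # estimated average lifespan, ~13 years
```

With the shape fixed, only an aggregate statistic of the age data is needed, which is what makes the simplified method applicable to countries without detailed age profiles.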

  19. The utilization of websites for fundraising by NCI-designated cancer centers: Examining the capacity for dialogic communication with prospective donors.

    Science.gov (United States)

    Erwin, Cathleen O; Dias, Ashley M

    2016-01-01

    The study employs a dialogic public relations framework to explore the utilization of the Internet for fundraising by nonprofit health care organizations, specifically NCI-designated cancer centers. Cancer centers have been noted for effective websites and for being highly engaged in fundraising, which is characterized as relationship marketing. Results indicate that all but one cancer center use websites and social media for fundraising but are limited in their capacity for two-way symmetrical dialogue. Results are discussed and recommendations are made for future research.

  20. A comparative study of the performances of some estimators of ...

    African Journals Online (AJOL)

    In linear regression model, regressors are assumed fixed in repeated sampling. ... error terms when normally distributed regressors are fixed (non – stochastic) with ... of the estimated parameter of the model at different levels of autocorrelation ...

  1. Flux estimation algorithms for electric drives: a comparative study

    OpenAIRE

    Koteich , Mohamad

    2016-01-01

    This paper reviews the stator flux estimation algorithms applied to alternating current motor drives. The so-called voltage model estimation, which consists of integrating the back-electromotive force signal, is addressed. However, in practice, the pure integration is prone to drift problems due to noise, measurement error, stator resistance uncertainty and unknown initial conditions. This limitation becomes more restrictive at low speed operation. Several soluti...
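One common workaround for the drift of the pure integrator is to replace it with a low-pass (leaky) integrator; the discrete-time sketch below is a generic illustration of that idea, not this paper's specific solution, and all names and values are assumptions:

```python
def voltage_model_flux(v, i, r_s, dt, cutoff=0.0):
    # Stator flux from the voltage model: integrate e = v - Rs*i.
    # cutoff > 0 turns the pure integrator into a leaky one
    # (d(flux)/dt = e - cutoff*flux), which suppresses drift at the
    # cost of estimation error at low speed.
    flux, out = 0.0, []
    for vk, ik in zip(v, i):
        e = vk - r_s * ik
        flux += dt * (e - cutoff * flux)  # leaky integration step
        out.append(flux)
    return out
```

With `cutoff=0.0` this reduces to the drift-prone pure integration the paper describes.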

  2. Variations in Mre11/Rad50/Nbs1 status and DNA damage-induced S-phase arrest in the cell lines of the NCI60 panel

    Directory of Open Access Journals (Sweden)

    Eastman Alan

    2011-05-01

    Full Text Available Abstract Background The Mre11/Rad50/Nbs1 (MRN) complex is a regulator of cell cycle checkpoints and DNA repair. Defects in MRN can lead to defective S-phase arrest when cells are damaged. Such defects may elicit sensitivity to selected drugs, providing a chemical synthetic lethal interaction that could be used to target therapy to tumors with these defects. The goal of this study was to identify these defects in the NCI60 panel of cell lines and identify compounds that might elicit selective cytotoxicity. Methods We screened the NCI60 panel in search of cell lines that express low levels of MRN proteins, or that fail to arrest in S-phase in response to the topoisomerase I inhibitor SN38. The NCI COMPARE program was used to discover compounds that preferentially target cells with these phenotypes. Results HCT116 cells were initially identified as defective in MRN and S-phase arrest. Transfection with Mre11 also elevated Rad50 and Nbs1, and rescued the defective S-phase arrest. Cells of the NCI60 panel exhibited a large range of protein expression, but a strong correlation existed between Mre11, Rad50 and Nbs1, consistent with complex formation determining protein stability. Mre11 mRNA correlated best with protein level, suggesting it was the primary determinant of the overall level of the complex. Three other cell lines failed to arrest in response to SN38, two of which also had low MRN. However, other cell lines with low MRN still arrested, suggesting low MRN does not predict an inability to arrest. Many compounds, including a family of benzothiazoles, correlated with the failure to arrest in S phase. The activity of benzothiazoles has been attributed to metabolic activation and DNA alkylation, but we note several cell lines in which sensitivity does not correlate with metabolism. We propose that the checkpoint defect imposes an additional mechanism of sensitivity on cells. Conclusions We have identified cells with possible defects in the MRN complex

  3. Variations in Mre11/Rad50/Nbs1 status and DNA damage-induced S-phase arrest in the cell lines of the NCI60 panel

    International Nuclear Information System (INIS)

    Garner, Kristen M; Eastman, Alan

    2011-01-01

    The Mre11/Rad50/Nbs1 (MRN) complex is a regulator of cell cycle checkpoints and DNA repair. Defects in MRN can lead to defective S-phase arrest when cells are damaged. Such defects may elicit sensitivity to selected drugs, providing a chemical synthetic lethal interaction that could be used to target therapy to tumors with these defects. The goal of this study was to identify these defects in the NCI60 panel of cell lines and identify compounds that might elicit selective cytotoxicity. We screened the NCI60 panel in search of cell lines that express low levels of MRN proteins, or that fail to arrest in S-phase in response to the topoisomerase I inhibitor SN38. The NCI COMPARE program was used to discover compounds that preferentially target cells with these phenotypes. HCT116 cells were initially identified as defective in MRN and S-phase arrest. Transfection with Mre11 also elevated Rad50 and Nbs1, and rescued the defective S-phase arrest. Cells of the NCI60 panel exhibited a large range of protein expression, but a strong correlation existed between Mre11, Rad50 and Nbs1, consistent with complex formation determining protein stability. Mre11 mRNA correlated best with protein level, suggesting it was the primary determinant of the overall level of the complex. Three other cell lines failed to arrest in response to SN38, two of which also had low MRN. However, other cell lines with low MRN still arrested, suggesting low MRN does not predict an inability to arrest. Many compounds, including a family of benzothiazoles, correlated with the failure to arrest in S phase. The activity of benzothiazoles has been attributed to metabolic activation and DNA alkylation, but we note several cell lines in which sensitivity does not correlate with metabolism. We propose that the checkpoint defect imposes an additional mechanism of sensitivity on cells. We have identified cells with possible defects in the MRN complex and S-phase arrest, and a series of compounds that may

  4. Mouse Xenograft Model for Mesothelioma | NCI Technology Transfer Center | TTC

    Science.gov (United States)

    The National Cancer Institute is seeking parties interested in collaborative research to co-develop, evaluate, or commercialize a new mouse model for monoclonal antibodies and immunoconjugates that target malignant mesotheliomas. Applications of the technology include models for screening compounds as potential therapeutics for mesothelioma and for studying the pathology of mesothelioma.

  5. The study to estimate the floating population in Seoul, Korea.

    Science.gov (United States)

    Lee, Geon Woo; Lee, Yong Jin; Kim, Youngeun; Hong, Seung-Han; Kim, Soohwaun; Kim, Jeong Soo; Lee, Jong Tae; Shin, Dong Chun; Lim, Youngwook

    2017-01-01

    Traffic-related pollutants have been reported to increase the morbidity of respiratory diseases. In order to apply management policies related to motor vehicles, studies of the floating population living in cities are important. The rate of metro rail transit system use by passengers residing in Seoul is about 54% of total public transportation use. Through the rate of metro use, the people-flow ratios in each administrative area were calculated. By applying a people-flow ratio based on the official census count, the floating population in 25 regions was calculated. The reduction in deaths among the floating population in the 14 regions having roadside monitoring stations was calculated assuming a 20% reduction in mobile emissions based on the policy. The hourly floating population size was calculated by applying the hourly population ratio to the regional population size as specified in the official census count. The number of people moving from 5 a.m. to 1 a.m. the next day could not be precisely calculated when the population size was applied, but no issue was observed that would trigger a sizable shift in the rate of population change. Three patterns of population change during work hours were analyzed: increase, decrease, and no change. When the concentration of particulate matter less than 10 μm in aerodynamic diameter was reduced by 20%, the number of excess deaths varied according to the difference in the floating population. Directions for managing pollutants in cities should be established by considering the floating population. Although the number of people using the metro system is only an estimate, this disadvantage was supplemented by calculating the inflow and outflow ratios of metro users per hour in the total floating population of each region. In particular, the 54% share of metro use in public transportation lends high reliability to the application.
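The core calculation, applying metro-derived inflow/outflow ratios to the official census count to obtain an hourly floating population, can be sketched as below; the function name and the fractional inputs are hypothetical illustrations of the approach, not the study's actual data pipeline:

```python
def hourly_floating_population(census_population, inflow, outflow):
    # census_population: registered population of a district (census count).
    # inflow/outflow: hourly metro entries/exits expressed as fractions of
    # the census population (assumed inputs; the study derives these
    # ratios from metro ridership, which covers ~54% of public transport).
    pop, series = census_population, []
    for h_in, h_out in zip(inflow, outflow):
        pop += census_population * (h_in - h_out)
        series.append(pop)
    return series
```

The resulting series shows the three patterns the study describes: hours of net inflow, net outflow, or no change.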

  6. The study to estimate the floating population in Seoul, Korea

    Directory of Open Access Journals (Sweden)

    Geon Woo Lee

    2017-05-01

    Full Text Available Traffic-related pollutants have been reported to increase the morbidity of respiratory diseases. In order to apply management policies related to motor vehicles, studies of the floating population living in cities are important. The rate of metro rail transit system use by passengers residing in Seoul is about 54% of total public transportation use. Through the rate of metro use, the people-flow ratios in each administrative area were calculated. By applying a people-flow ratio based on the official census count, the floating population in 25 regions was calculated. The reduction in deaths among the floating population in the 14 regions having roadside monitoring stations was calculated assuming a 20% reduction in mobile emissions based on the policy. The hourly floating population size was calculated by applying the hourly population ratio to the regional population size as specified in the official census count. The number of people moving from 5 a.m. to 1 a.m. the next day could not be precisely calculated when the population size was applied, but no issue was observed that would trigger a sizable shift in the rate of population change. Three patterns of population change during work hours were analyzed: increase, decrease, and no change. When the concentration of particulate matter less than 10 μm in aerodynamic diameter was reduced by 20%, the number of excess deaths varied according to the difference in the floating population. Directions for managing pollutants in cities should be established by considering the floating population. Although the number of people using the metro system is only an estimate, this disadvantage was supplemented by calculating the inflow and outflow ratios of metro users per hour in the total floating population of each region. In particular, the 54% share of metro use in public transportation lends high reliability to the application.

  7. Experimental study on source efficiencies for estimating surface contamination level

    International Nuclear Information System (INIS)

    Ichiji, Takeshi; Ogino, Haruyuki

    2008-01-01

    Source efficiency was measured experimentally for various materials, such as metals, nonmetals, flooring materials, sheet materials and other materials, contaminated by alpha- and beta-emitting radionuclides. Five nuclides, 147Pm, 60Co, 137Cs, 204Tl and 90Sr-90Y, were used as the beta emitters, and one nuclide, 241Am, was used as the alpha emitter. The test samples were prepared by placing drops of the radioactive standardized solutions uniformly on the various materials using an automatic quantitative dispenser system from Musashi Engineering, Inc. After placing drops of the radioactive standardized solutions, the test materials were allowed to dry for more than 12 hours in a draft chamber with a hood. The radioactivity of each test material was about 30 Bq. Beta rays or alpha rays from the test materials were measured with a 2π gas flow proportional counter from Aloka Co., Ltd. The source efficiencies of the metals, nonmetals and sheet materials were higher than 0.5 in the case of contamination by the 137Cs, 204Tl and 90Sr-90Y radioactive standardized solutions, higher than 0.4 in the case of contamination by the 60Co radioactive standardized solution, and higher than 0.25 in the case of contamination by the alpha emitter, the 241Am radioactive standardized solution. These values were higher than those given in Japanese Industrial Standards (JIS) documents. In contrast, the source efficiencies of some permeable materials were lower than those given in JIS documents, because source efficiency varies depending on whether the materials or radioactive sources are wet or dry. This study provides basic data on source efficiency, which is useful for estimating the surface contamination level of materials. (author)

  8. Estimation of Skin to Subarachnoid Space Depth: An Observational Study.

    Science.gov (United States)

    Hazarika, Rajib; Choudhury, Dipika; Nath, Sangeeta; Parua, Samit

    2016-10-01

    In a patient, the skin to Subarachnoid Space Depth (SSD) varies considerably at different levels of the spinal cord. It also varies from patient to patient at the same vertebral level according to age, sex and Body Mass Index (BMI). Estimation of the skin to SSD reduces complications related to spinal anaesthesia. The aims were to measure the skin to SSD in the Indian population and to find a formula for predicting this depth. Three hundred adult patients belonging to American Society of Anaesthesiologists class I and II, undergoing surgery using spinal anaesthesia in various surgical specialities of Gauhati Medical College, were selected by systematic sampling for this prospective, observational study. Patients were divided into three groups: Group M containing male patients, Group F containing non-pregnant female patients, and Group PF containing pregnant female patients. SSD was measured after performing lumbar puncture. The relationships between SSD and patient characteristics were studied and correlated, and statistical analysis was used to find a formula for predicting the skin to SSD. Statistical analysis was done using the Statistical Package for Social Sciences (SPSS 21.0, Chicago, IL, USA). One-way ANOVA with post-hoc (Bonferroni correction factor) analysis was applied to compare the three groups. Multivariate analysis was done for the covariates, followed by a multivariate regression analysis to evaluate the covariates influencing SSD for each group separately. Mean SSD was 4.37±0.31 cm in the overall population. SSD in adult males was 4.49±0.19 cm, significantly longer than that observed in non-pregnant females (4.18±0.39 cm) and comparable with the SSD in parturients (4.43±0.19 cm). The formulas for predicting the skin to SSD were 1.718+0.077×BMI+0.632×Height in the male population, 1.828+0.077×BMI+0.018×Height+0.007×Age in the non-pregnant female population, and 0.748+0.209×BMI+4.703×Height-0.054×Weight in parturient females. Skin to SSD correlated with the BMI in all
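The reported regression formulas can be wrapped directly for prediction. Note the abstract does not state the units of Height, Age or Weight (metres, years and kilograms are assumed below), so this function is illustrative only:

```python
def predicted_ssd_cm(group, bmi, height_m, age=None, weight_kg=None):
    # Regression formulas as quoted in the abstract. Units of the inputs
    # are not given there; metres/years/kilograms are assumptions.
    if group == "male":
        return 1.718 + 0.077 * bmi + 0.632 * height_m
    if group == "female":  # non-pregnant
        return 1.828 + 0.077 * bmi + 0.018 * height_m + 0.007 * age
    if group == "parturient":
        return 0.748 + 0.209 * bmi + 4.703 * height_m - 0.054 * weight_kg
    raise ValueError(f"unknown group: {group}")
```

For a male patient with BMI 25 and height 1.7 m this predicts about 4.72 cm, close to the reported male mean of 4.49 cm.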

  9. Study on Posture Estimation Using Delayed Measurements for Mobile Robots

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    When fusing data from various sensors to estimate the posture of a mobile robot, a crucial problem to be solved is that some measurements may be delayed. The standard multi-sensor data fusion algorithm is the Kalman filter. In order to handle delayed measurements, this paper investigates a Kalman filter modified to account for the delays. Based on interpolating the measurement, a fusion system is applied to estimate the posture of a mobile robot, fusing data from the encoder and a laser global positioning system using the extended Kalman filter algorithm. Finally, a posture estimation experiment with the mobile robot is presented whose results verify the feasibility and efficiency of the algorithm.
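One generic way to account for a delayed measurement in a Kalman filter is to rewind to the stored state at the measurement's timestamp, fuse it there, and re-propagate forward. The scalar sketch below illustrates that idea under a random-walk model; it is a simplification, not the paper's interpolation-based algorithm:

```python
class DelayedKalman1D:
    # Scalar random-walk Kalman filter that handles a measurement arriving
    # late by rewinding to the stored (state, covariance) at the
    # measurement's time step, updating, and re-running the predictions.
    def __init__(self, x0, p0, q, r):
        self.q, self.r = q, r          # process / measurement noise variances
        self.history = [(x0, p0)]      # (state, covariance) at each step

    def predict(self):
        x, p = self.history[-1]
        self.history.append((x, p + self.q))  # random walk: mean unchanged

    def update_delayed(self, z, step):
        # Rewind to `step`, fuse measurement z, then re-propagate to now.
        n_redo = len(self.history) - 1 - step
        x, p = self.history[step]
        k = p / (p + self.r)                  # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p
        self.history = self.history[:step] + [(x, p)]
        for _ in range(n_redo):
            self.predict()

    @property
    def state(self):
        return self.history[-1][0]
```

Keeping a short history buffer is the price paid for exact re-processing; interpolation-based variants like the paper's avoid storing the full past trajectory.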

  10. Probability shapes perceptual precision: A study in orientation estimation.

    Science.gov (United States)

    Jabar, Syaheed B; Anderson, Britt

    2015-12-01

    Probability is known to affect perceptual estimations, but an understanding of mechanisms is lacking. Moving beyond binary classification tasks, we had naive participants report the orientation of briefly viewed gratings where we systematically manipulated contingent probability. Participants rapidly developed faster and more precise estimations for high-probability tilts. The shapes of their error distributions, as indexed by a kurtosis measure, also showed a distortion from Gaussian. This kurtosis metric was robust, capturing probability effects that were graded, contextual, and varying as a function of stimulus orientation. Our data can be understood as a probability-induced reduction in the variability or "shape" of estimation errors, as would be expected if probability affects the perceptual representations. As probability manipulations are an implicit component of many endogenous cuing paradigms, changes at the perceptual level could account for changes in performance that might have traditionally been ascribed to "attention." (c) 2015 APA, all rights reserved.
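The kurtosis measure used to index departure from a Gaussian error distribution can be computed as sample excess kurtosis; a minimal sketch (the study's exact estimator is not specified in the abstract):

```python
def excess_kurtosis(errors):
    # Sample excess kurtosis: 0 for a Gaussian, positive for error
    # distributions that are more peaked/heavy-tailed than Gaussian.
    n = len(errors)
    m = sum(errors) / n
    m2 = sum((e - m) ** 2 for e in errors) / n   # second central moment
    m4 = sum((e - m) ** 4 for e in errors) / n   # fourth central moment
    return m4 / (m2 * m2) - 3.0
```

A two-point symmetric distribution, for instance, has the minimum possible excess kurtosis of -2.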

  11. A study on the overpressure estimation of BLEVE

    Energy Technology Data Exchange (ETDEWEB)

    Kim, In Tae [Korean Fire Protection Association (Korea); Kim, In Won; Song, Hee Oeul [Department of Chemical Engineering, Konkuk University (Korea)

    2000-03-01

    Explosion quantities and flashing mass resulting from variations in temperature are calculated by a computer program, BLEVE ESTIMATOR, to carry out risk assessment of BLEVEs. The damage caused by a BLEVE is estimated under simulated explosion conditions similar to those of the Puchun LP gas station accident, and the results are compared with SAFER, the commercial program of Dupont Co. Explosion quantities and flashing mass increase exponentially with explosion temperature, and the values for propane are higher than those for n-butane. Higher vessel temperatures, vessel pressures, and liquid fill ratios yield higher calculated overpressures. 10 refs., 12 figs., 2 tabs.
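The dependence of flashing mass on storage temperature can be illustrated with the standard isenthalpic flash-fraction approximation; this is a textbook relation, not necessarily the exact model inside BLEVE ESTIMATOR, and the property values used below are round numbers for illustration:

```python
def flash_fraction(t_storage_k, t_boil_k, cp_liquid, h_vap):
    # Isenthalpic flash fraction on sudden depressurisation:
    #   x = Cp * (T_storage - T_boil) / H_vap, clamped to [0, 1].
    # Cp (kJ/kg.K) and H_vap (kJ/kg) must come from property tables.
    x = cp_liquid * (t_storage_k - t_boil_k) / h_vap
    return min(max(x, 0.0), 1.0)
```

Because the fraction grows with superheat (T_storage - T_boil), hotter vessels flash more liquid, consistent with the reported trend.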

  12. Trainable estimators for indirect people counting : a comparative study

    NARCIS (Netherlands)

    Acampora, G.; Loia, V.; Percannella, G.; Vento, M.

    2011-01-01

    Estimating the number of people in a scene is a very relevant issue due to the possibility of using it in a large number of contexts where it is necessary to automatically monitor an area for security/safety reasons, for economic purposes, etc. The large number of people counting approaches

  13. Case Study: Zutphen : Estimates of levee system reliability

    NARCIS (Netherlands)

    Roscoe, K.; Kothuis, Baukje; Kok, Matthijs

    2017-01-01

    Estimates of levee system reliability can conflict with experience and intuition. For example, a very high failure probability may be computed while no evidence of failure has been observed, or a very low failure probability when signs of failure have been detected.

  14. Mortality Risk from Co-Morbidities independent of Triple-Negative Breast Cancer Status: NCI SEER-based Cohort Analysis

    Science.gov (United States)

    Swede, Helen; Sarwar, Amna; Magge, Anil; Braithwaite, Dejana; Cook, Linda S.; Gregorio, David I.; Jones, Beth A; Hoag, Jessica; Gonsalves, Lou; Salner, Andrew; Zarfos, Kristen; Andemariam, Biree; Stevens, Richard G; Dugan, Alicia; Pensa, Mellisa; Brockmeyer, Jessica

    2017-01-01

    Purpose A comparatively high prevalence of co-morbidities among African-American/Blacks (AA/B) has been implicated in disparate survival in breast cancer. There is a scarcity of data, however, on whether this effect persists when accounting for the adverse triple-negative breast cancer (TNBC) subtype, which occurs at three times the rate in AA/B compared to white breast cancer patients. Methods We reviewed charts of 214 white and 202 AA/B breast cancer patients in the NCI-SEER Connecticut Tumor Registry who were diagnosed in 2000-07. We employed the Charlson Co-Morbidity Index (CCI), a weighted 17-item tool to predict risk of death in cancer populations. Cox survival analyses estimated hazard ratios (HR) for all-cause mortality in relation to TNBC and CCI, adjusting for clinicopathological factors. Results Among patients with SEER-Local Stage, TNBC increased the risk of death (HR=2.18, 95% CI 1.14-4.16), which was attenuated when the CCI score was added to the model (Adj. HR=1.50, 95% CI 0.74-3.01). Conversely, the adverse impact of the CCI score persisted when controlling for TNBC (Adj. HR=1.49, 95% CI 1.29-1.71; per one-point increase). Similar patterns were observed in SEER-Regional Stage, but estimated HRs were lower. AA/B patients with a CCI score of ≥3 had a significantly higher risk of death compared to AA/B patients without comorbidities (Adj. HR=5.65, 95% CI 2.90-11.02). A lower and non-significant effect was observed for whites with a CCI of ≥3 (Adj. HR=1.90, 95% CI 0.68-5.29). Conclusions Co-morbidities at diagnosis increase risk of death independent of TNBC, and AA/B patients may be disproportionately at risk. PMID:27000206
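The Charlson Co-Morbidity Index used here is a weighted sum over recorded conditions; the sketch below shows the mechanics with an illustrative subset of the standard weights (the full index has 17 items, and weights should be taken from the published instrument, not this fragment):

```python
# Illustrative subset of Charlson weights; the full index covers 17 items.
CHARLSON_WEIGHTS = {
    "myocardial_infarction": 1,
    "congestive_heart_failure": 1,
    "diabetes_uncomplicated": 1,
    "moderate_severe_liver_disease": 3,
    "metastatic_solid_tumor": 6,
    "aids": 6,
}

def cci_score(conditions):
    # Sum of weights over a patient's recorded comorbidities; scores of
    # >=3 defined the high-comorbidity stratum in this analysis.
    return sum(CHARLSON_WEIGHTS[c] for c in conditions)
```

A patient with a prior myocardial infarction and a metastatic solid tumor would score 7, placing them well within the >=3 stratum.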

  15. Mortality risk from comorbidities independent of triple-negative breast cancer status: NCI-SEER-based cohort analysis.

    Science.gov (United States)

    Swede, Helen; Sarwar, Amna; Magge, Anil; Braithwaite, Dejana; Cook, Linda S; Gregorio, David I; Jones, Beth A; R Hoag, Jessica; Gonsalves, Lou; L Salner, Andrew; Zarfos, Kristen; Andemariam, Biree; Stevens, Richard G; G Dugan, Alicia; Pensa, Mellisa; A Brockmeyer, Jessica

    2016-05-01

    A comparatively high prevalence of comorbidities among African-American/Blacks (AA/B) has been implicated in disparate survival in breast cancer. There is a scarcity of data, however, on whether this effect persists when accounting for the adverse triple-negative breast cancer (TNBC) subtype, which occurs at threefold the rate in AA/B compared to white breast cancer patients. We reviewed charts of 214 white and 202 AA/B breast cancer patients in the NCI-SEER Connecticut Tumor Registry who were diagnosed in 2000-2007. We employed the Charlson Co-Morbidity Index (CCI), a weighted 17-item tool to predict risk of death in cancer populations. Cox survival analyses estimated hazard ratios (HRs) for all-cause mortality in relation to TNBC and CCI, adjusting for clinicopathological factors. Among patients with SEER local stage, TNBC increased the risk of death (HR 2.18, 95 % CI 1.14-4.16), which was attenuated when the CCI score was added to the model (Adj. HR 1.50, 95 % CI 0.74-3.01). Conversely, the adverse impact of the CCI score persisted when controlling for TNBC (Adj. HR 1.49, 95 % CI 1.29-1.71; per one-point increase). Similar patterns were observed in SEER regional stage, but estimated HRs were lower. AA/B patients with a CCI score of ≥3 had a significantly higher risk of death compared to AA/B patients without comorbidities (Adj. HR 5.65, 95 % CI 2.90-11.02). A lower and nonsignificant effect was observed for whites with a CCI of ≥3 (Adj. HR 1.90, 95 % CI 0.68-5.29). Comorbidities at diagnosis increase risk of death independent of TNBC, and AA/B patients may be disproportionately at risk.

  16. Quantification of Biodegradation: Applied Example on Oil Seeps in Armàncies Fm, Southeastern Pyrenees

    OpenAIRE

    Permanyer, Albert; Caja, Miguel Ángel

    2005-01-01

    The presence of petroleum expelled directly from the source rock of the Armàncies Formation constitutes a unique case for the study of aerobic biodegradation processes in petroleum. The degree of bacterial degradation is moderate and is mainly limited to the alteration of n-alkanes, isoprenoids and some aromatics. Quantification was carried out using the sulfur content and the molecular markers of the aromatic fraction. The results obtained...

  17. Test de visualitat: les preferències del bon disseny

    Directory of Open Access Journals (Sweden)

    Quim Merino

    2014-07-01

    Full Text Available This article reports on research directed by the Advertising and Public Relations Research Group (hereafter, GRP) of the Universitat Autònoma de Barcelona and carried out by the authors of this review. The work is part of an activity of the Design in Advertising and Public Relations course of the UAB Degree in Advertising and Public Relations. The objective of the work is to establish consumer preferences among different formal stimuli of graphic design in advertising

  18. Malalties de transmissió sexual a urgències pediàtriques

    OpenAIRE

    Díaz Sabogal, Diana; Curcoy Barcenilla, Ana Isabel; Trenchs Sainz de la Maza, Victoria; Giménez Roca, Clara; Luaces Cubells, Carles

    2014-01-01

    To determine the characteristics of patients diagnosed with sexually transmitted diseases (STDs) in the emergency department and to establish how frequently they are due to sexual abuse. Method: A retrospective study conducted between January 2007 and December 2011. Patients under 18 years of age diagnosed with an STD in the emergency department were included: infection with Neisseria gonorrhoeae, Chlamydia trachomatis, Treponema pallidum, human immunodeficiency virus (HIV), human papillomavirus (HPV) and virus...

  19. Anàlisi forense d'evidències digitals

    OpenAIRE

    Bonachera López, Esteban

    2014-01-01

    The main objective of this project is to carry out a forensic analysis of the hard drive and RAM of a personal computer, specifically a netbook, linked to possible criminal conduct. The analysis also includes a database of the well-known WhatsApp software extracted from a smartphone. Specific tools will be used for this task to locate the digital evidence that may demonstrate the alleged offences.

  20. La colección ibero-balear de Meloidae Gyllenhal, 1810 (Coleoptera, Tenebrionoidea del Museu de Ciències Naturals de Barcelona

    Directory of Open Access Journals (Sweden)

    Prieto, M.

    2016-12-01

    Full Text Available The Ibero-Balearic collection of Meloidae Gyllenhal, 1810 (Coleoptera, Tenebrionoidea of the Museu de Ciències Naturals de Barcelona A commented catalogue of the Ibero-Balearic collection of Meloidae Gyllenhal, 1810 housed in the Museu de Ciències Naturals de Barcelona is presented. The studied material consists of 2,129 specimens belonging to 49 of 64 species from the Iberian peninsula and the Balearic Islands. The temporal coverage of the collection extends from the last decades of the nineteenth century to the present time. Revision, documentation, and computerization of the material have been made, resulting in 963 collection records (June 2014. For each lot, the catalogue includes the register number, geographical data, collection date, collector or origin of the collection, and number of specimens. Information about taxonomy and distribution of the species is also given. Chorological novelties are provided, extending the distribution areas for most species. The importance of the collection for the knowledge of the Ibero-Balearic fauna of Meloidae is discussed, particularly concerning the area of Catalonia (northeastern Iberian peninsula as it accounts for 60% of the records. Some rare or particularly interesting species in the collection are highlighted, as are those requiring protection measures in Spain and Catalonia. The catalogue also shows a brief gallery of photographs that includes four type specimens.

  1. Empirical Study of Travel Time Estimation and Reliability

    OpenAIRE

    Li, Ruimin; Chai, Huajun; Tang, Jin

    2013-01-01

    This paper explores the travel time distribution of different types of urban roads, the link and path average travel time, and variance estimation methods by analyzing the large-scale travel time dataset detected from automatic number plate readers installed throughout Beijing. The results show that the best-fitting travel time distribution for different road links in 15 min time intervals differs for different traffic congestion levels. The average travel time for all links on all days can b...
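Computing link mean and variance per 15-minute interval, as the analysis above does, can be sketched as follows; the record format is a hypothetical simplification of the plate-reader data:

```python
from statistics import mean, pvariance

def bin_travel_times(records, bin_minutes=15):
    # records: (timestamp_minutes, travel_time_seconds) pairs from the
    # number-plate readers; returns {bin_index: (mean, variance)} for
    # each 15-minute interval that has at least one observation.
    bins = {}
    for t, tt in records:
        bins.setdefault(int(t // bin_minutes), []).append(tt)
    return {b: (mean(v), pvariance(v)) for b, v in bins.items()}
```

The per-bin moments are then the inputs for fitting and comparing candidate travel-time distributions at different congestion levels.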

  2. Development of cancer risk estimates from epidemiologic studies

    International Nuclear Information System (INIS)

    Webster, E.W.

    1983-01-01

    Radiation risk estimates may be made for an increase in mortality from, or for an increase in incidence of, particular types of disease. For both endpoints, two numerical systems of risk expression are used: the absolute risk system (usually the excess deaths or cases per million persons per year per rad), and the relative risk system (usually excess deaths or cases per year per rad expressed as a percentage of those normally expected). Risks may be calculated for specific age groups or for a general population. An alternative in both risk systems is the estimation of cumulative or lifetime risk rather than annual risk (e.g. in excess deaths per million per rad over a specified long period including the remainder of the lifespan). The derivation of both absolute and relative risks is illustrated by examples. The effects on risk estimates of latent period, follow-up time, age at exposure and age standardization within dose groups are illustrated. The dependence of the projected cumulative (lifetime) risk on the adoption of a constant absolute risk or constant relative risk is noted. The use of life-table data in the adjustment of cumulative risk for normal mortality following single or annual doses is briefly discussed
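The two systems of risk expression described above differ only in their reference quantity, which a short worked sketch makes concrete (the coefficient values below are arbitrary illustrations, not recommended risk coefficients):

```python
def excess_deaths_absolute(pop_millions, dose_rad, risk_coeff):
    # Absolute-risk system: risk_coeff is excess deaths per million
    # persons per year per rad, applied directly to the exposed population.
    return pop_millions * dose_rad * risk_coeff

def excess_deaths_relative(expected_deaths, dose_rad, pct_per_rad):
    # Relative-risk system: the excess is a percentage of the normally
    # expected deaths, per rad of dose.
    return expected_deaths * (pct_per_rad / 100.0) * dose_rad
```

For example, 2 million people receiving 10 rad at 1.5 excess deaths per million per year per rad gives 30 excess deaths per year; 1000 expected deaths with a 0.5% per rad relative excess at 10 rad gives 50.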

  3. Studies on risk estimation to public from medical radiation (III)

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hai Yong; Kim, Jong Hyung; Kim, Hyeog Ju; Kim, Ji Soon; Oh, Hyeon Joo; Kim, Cheol Hyeon; Yang, Hyun Kyu [Korea Food and Drug Administraion, Seoul (Korea, Republic of); Park, Chan Il [Seoul National Univ., Seoul (Korea, Republic of)

    1998-06-01

    A nationwide survey was conducted to establish representative levels of effective dose to patients for 17 types of CT examination, as well as a representative level of mean glandular dose (MGD) to the standard breast for mammography X-ray equipment. The effective doses to patients from 16 CT scanners were estimated from measurements of CTDI (Computed Tomography Dose Index) in air, multiplied by conversion coefficients specified by the National Radiological Protection Board in the United Kingdom. The lowest and highest mean effective doses to patients from CT scanners were 0.05 mSv for the IAM examination and 17.75 mSv for the routine abdomen examination, respectively. The average values of the 17 effective doses were lower than results from surveys in other countries. The mean glandular doses to a standard breast for 26 mammography units were estimated from measurements of the air kerma at the surface of a 40 mm plain Perspex phantom, applying conversion factors described in Report 59 of the Institute of Physical Sciences in Medicine, United Kingdom. The exposure factors for these measurements were those used clinically at each hospital. The average MGD to the standard breast was 1.06 mGy in units with a grid and 0.49 mGy in units without a grid. These results are lower than guidance levels set by the IPSM and AAPM, and will be used for risk estimation to the Korean public from medical radiation.
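Both dose estimates above are measurement-times-coefficient calculations; a minimal sketch follows, where the coefficient values are placeholders (real values are tabulated per examination type and per mammographic technique, e.g. by NRPB and in IPSM Report 59):

```python
def effective_dose_msv(ctdi_air_mgy, conversion_coeff):
    # Effective dose estimate: measured CTDI free-in-air multiplied by an
    # examination-specific conversion coefficient (mSv per mGy).
    return ctdi_air_mgy * conversion_coeff

def mean_glandular_dose_mgy(entrance_air_kerma_mgy, g_factor):
    # MGD estimate: air kerma at the surface of a 40 mm Perspex phantom
    # multiplied by a conversion factor g for the standard breast.
    return entrance_air_kerma_mgy * g_factor
```

The survey's comparisons (e.g. 1.06 mGy with grid vs 0.49 mGy without) are comparisons of such products across units and hospitals.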

  4. Phenethyl Isothiocyanate Induces Apoptotic Cell Death Through the Mitochondria-dependent Pathway in Gefitinib-resistant NCI-H460 Human Lung Cancer Cells In Vitro.

    Science.gov (United States)

    Hsia, Te-Chun; Huang, Yi-Ping; Jiang, Yi-Wen; Chen, Hsin-Yu; Cheng, Zheng-Yu; Hsiao, Yung-Ting; Chen, Cheng-Yen; Peng, Shu-Fen; Chueh, Fu-Shin; Chou, Yu-Cheng; Chung, Jing-Gung

    2018-04-01

    Some lung cancer patients treated with gefitinib develop resistance to the drug, resulting in unsatisfactory treatment outcomes. Phenethyl isothiocyanate (PEITC), present in common cruciferous vegetables, exhibits anticancer activities in many human cancer cell lines. Currently, there is no available information on whether PEITC can modify gefitinib resistance of lung cancer in vitro. Thus, the effects of PEITC on gefitinib-resistant lung cancer NCI-H460 cells were investigated in vitro. Total cell viability, apoptotic cell death, production of reactive oxygen species (ROS) and Ca2+, levels of mitochondrial membrane potential (ΔΨm), and caspase-3, -8 and -9 activities were measured by flow cytometry. PEITC-induced chromatin condensation was examined by DAPI staining. PEITC induced cell morphological changes, decreased the number of viable cells, and induced apoptotic cell death in NCI-H460 and NCI-H460/G cells. PEITC decreased ROS production in NCI-H460 cells, but increased it in NCI-H460/G cells. PEITC increased Ca2+ production, decreased ΔΨm, and increased caspase-3, -8 and -9 activities in both NCI-H460 and NCI-H460/G cells. Western blotting was used to examine the expression of apoptotic cell death-associated proteins in NCI-H460 and NCI-H460/G cells after exposure to PEITC. Results showed that PEITC increased expression of cleaved caspase-3, PARP, GADD153, Endo G and the pro-apoptotic protein Bax in NCI-H460/G cells. Based on these results, we suggest that PEITC induces apoptotic cell death via the caspase- and mitochondria-dependent pathway in NCI-H460/G cells. Copyright© 2018, International Institute of Anticancer Research (Dr. George J. Delinasios), All rights reserved.

  5. Estimating study costs for use in VOI, a study of Dutch publicly funded drug-related research

    NARCIS (Netherlands)

    Van Asselt, A.D.; Ramaekers, B.L.; Corro Ramos, I.; Joore, M.A.; Al, M.J.; Lesman-Leegte, I.; Postma, M.J.; Vemer, P.; Feenstra, T.F.

    2016-01-01

    Objectives: To perform value of information (VOI) analyses, an estimate of research costs is needed. However, reference values for such costs are not available. This study aimed to analyze empirical data on research budgets and, by means of a cost tool, provide an overview of costs of several types

  6. Estimating heritability for cause specific mortality based on twin studies

    DEFF Research Database (Denmark)

    Scheike, Thomas; Holst, Klaus Kähler; von Bornemann Hjelmborg, Jacob

    2014-01-01

    the Danish twin registry and discuss how to define heritability for cancer occurrence. The key point is that this should be done taking censoring as well as competing risks due to, e.g., death into account. We describe the dependence between twins on the probability scale and show that various models can...... be used to achieve sensible estimates of the dependence within monozygotic and dizygotic twin pairs that may vary over time. These dependence measures can subsequently be decomposed into a genetic and environmental component using random effects models. We here present several novel models that in essence...

  7. The national cancer institute (NCI) and cancer biology in a 'post genome world'

    International Nuclear Information System (INIS)

    Klausner, Richard D.

    1996-01-01

    The National Cancer Institute (NCI) exists to reduce the burden of all cancers through research and discovery. Extensive restructuring of the NCI over the past year has been aimed at ensuring that the institution functions in all ways to promote opportunities for discovery in the laboratory, in the clinic, and in the community. To do this well requires solving the difficult and almost paradoxical problem of planning for scientific discovery, which, in turn, is based on the freedom to pursue the unanticipated. The intellectual and structural landscape of science is changing, and this brings new challenges, new demands and new opportunities for facilitating discovery. The nature of cancer as a disease of genomic instability and of accumulated genetic change, coupled with the possibility of developing new technologies for reading, utilizing, interpreting and manipulating the genome of single cells, provides unprecedented opportunities for a new type of high-throughput biology that will change the nature of discovery, cancer detection, diagnosis, prognosis, therapeutic decision-making and therapeutic discovery. Capturing these new opportunities will require attention to integrating the development of technology and new scientific discoveries with the ability to apply advances rapidly and efficiently through clinical trials

  8. NCI Workshop Report: Clinical and Computational Requirements for Correlating Imaging Phenotypes with Genomics Signatures

    Directory of Open Access Journals (Sweden)

    Rivka Colen

    2014-10-01

    The National Cancer Institute (NCI) Cancer Imaging Program organized two related workshops on June 26–27, 2013, entitled “Correlating Imaging Phenotypes with Genomics Signatures Research” and “Scalable Computational Resources as Required for Imaging-Genomics Decision Support Systems.” The first workshop focused on clinical and scientific requirements, exploring our knowledge of the phenotypic characteristics of cancer biological properties to determine whether the field is sufficiently advanced to correlate them with imaging phenotypes that underpin genomics and clinical outcomes, and exploring new scientific methods to extract phenotypic features from medical images and relate them to genomics analyses. The second workshop focused on computational methods, exploring the informatics and computational requirements to extract phenotypic features from medical images, relate them to genomics analyses, and improve the accessibility and speed of dissemination of existing NIH resources. These workshops linked the clinical and scientific requirements of currently known phenotypic and genotypic cancer biology characteristics with imaging phenotypes that underpin genomics and clinical outcomes. The group generated a set of recommendations to NCI leadership and the research community that encourage and support development of the emerging radiogenomics research field to address short- and longer-term goals in cancer research.

  9. Analyzing paired diagnostic studies by estimating the expected benefit

    DEFF Research Database (Denmark)

    Gerke, Oke; Høilund-Carlsen, Poul Flemming; Vach, Werner

    2015-01-01

    is of an indirect nature as test results do influence downstream clinical decisions, but test performance (as characterized by sensitivity, specificity, and the predictive values of a procedure) is, at best, only a surrogate endpoint for patient outcome and does not necessarily translate into it. Not many...... randomized controlled trials have been conducted so far in diagnostic research, and, hence, we need alternative approaches to close the gap between test characteristics and patient outcomes. Several informal approaches have been suggested in order to close this gap, and decision modeling has been advocated...... as a means of obtaining formal approaches. Recently, the expected benefit has been proposed as a quantity that allows a simple formal approach, and we take up this suggestion in this paper. We regard the expected benefit as an estimation problem and consider two approaches to statistical inference. Moreover...

  10. Estimating U.S. Methane Emissions from the Natural Gas Supply Chain. Approaches, Uncertainties, Current Estimates, and Future Studies

    Energy Technology Data Exchange (ETDEWEB)

    Heath, Garvin [Joint Inst. for Strategic Energy Analysis, Golden, CO (United States); Warner, Ethan [Joint Inst. for Strategic Energy Analysis, Golden, CO (United States); Steinberg, Daniel [Joint Inst. for Strategic Energy Analysis, Golden, CO (United States); Brandt, Adam [Stanford Univ., CA (United States)

    2015-08-01

    A growing number of studies have raised questions regarding uncertainties in our understanding of methane (CH4) emissions from fugitives and venting along the natural gas (NG) supply chain. In particular, a number of measurement studies have suggested that actual levels of CH4 emissions may be higher than estimated by the EPA's U.S. GHG Emission Inventory. We reviewed this literature to characterize the approaches, uncertainties and current estimates of CH4 emissions from the NG supply chain.

  11. A Study on Parametric Wave Estimation Based on Measured Ship Motions

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam; Iseki, Toshio

    2011-01-01

    The paper studies parametric wave estimation based on the ‘wave buoy analogy’, and data and results obtained from the training ship Shioji-maru are compared with estimates of the sea states obtained from other measurements and observations. Furthermore, the estimating characteristics of the parametric model are discussed by considering the results of a similar estimation concept based on Bayesian modelling. The purpose of the latter comparison is not to favour one estimation approach over the other but rather to highlight some of the advantages and disadvantages of the two approaches.

  12. Study on color difference estimation method of medicine biochemical analysis

    Science.gov (United States)

    Wang, Chunhong; Zhou, Yue; Zhao, Hongxia; Sun, Jiashi; Zhou, Fengkun

    2006-01-01

    Biochemical analysis is an important inspection and diagnosis method in hospital clinics, and urinalysis is one important item. Urine test paper shows a different color for each detection target and illness degree. The color difference between a standard threshold and the color of the urine test paper can be used to judge the illness degree, supporting further analysis and diagnosis. Color is a three-dimensional physical variable with a psychological component, whereas reflectance is one-dimensional; therefore, estimating the color difference in a urine test can achieve better precision and convenience than the conventional test method based on one-dimensional reflectance, enabling a more accurate diagnosis. A digital camera can easily capture an image of the urine test paper, making urine biochemical analysis convenient. In the experiment, color images of urine test paper were taken by a popular consumer digital camera and saved on a computer running simple color space conversion (RGB -> XYZ -> L*a*b*) and calculation software. Test samples were graded by intelligent detection of quantitative color. The images taken at each time point were saved, so the whole course of the illness can be monitored. This method can also be used in other medical biochemical analyses involving color. Experimental results show that this test method is quick and accurate; it can be used in hospitals, calibration organizations and homes, so its application prospects are extensive.
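The RGB -> XYZ -> L*a*b* pipeline described above, followed by a color-difference computation against a standard threshold color, can be sketched as follows. This assumes sRGB input and the D65 white point (the abstract does not state which RGB space or illuminant was used), uses the simple CIE76 ΔE, and the standard/sample colors are invented for illustration:

```python
import math

# D65 reference white for the XYZ -> L*a*b* conversion (assumed)
WHITE = (0.95047, 1.0, 1.08883)

def srgb_to_xyz(r, g, b):
    """Convert 8-bit sRGB to CIE XYZ (D65)."""
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    return x, y, z

def xyz_to_lab(x, y, z):
    """Convert CIE XYZ to CIE L*a*b* relative to the D65 white."""
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = (f(v / w) for v, w in zip((x, y, z), WHITE))
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e(lab1, lab2):
    """CIE76 color difference between two L*a*b* colors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Hypothetical standard threshold color vs. a test-paper sample:
standard = xyz_to_lab(*srgb_to_xyz(250, 240, 150))
sample = xyz_to_lab(*srgb_to_xyz(230, 200, 120))
print(delta_e(standard, sample))
```

The grading step would then compare ΔE against per-analyte thresholds, which the abstract does not specify.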

  13. Study on the Leak Rate Estimation of SG Tubes and Residual Stress Estimation based on Plastic Deformation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Young Jin; Chang, Yoon Suk; Lee, Dock Jin; Lee, Tae Rin; Choi, Shin Beom; Jeong, Jae Uk; Yeum, Seung Won [Sungkyunkwan University, Seoul (Korea, Republic of)

    2009-02-15

    In this research project, a leak rate estimation model was developed for steam generator tubes with through-wall cracks. The modelling was based on leak data from 23 tube specimens. In addition, a finite element analysis procedure was developed for calculating the residual stress of the dissimilar metal weld in a bottom-mounted instrumentation, and the effect of geometric variables on the residual stress in the penetration weld was investigated using this procedure. The key subjects dealt with in this research are: 1. development of a leak rate estimation model for steam generator tubes with through-wall cracks; 2. development of a program that can perform structural and leakage integrity evaluation for steam generator tubes; 3. development of an analysis procedure for bottom-mounted instrumentation weld residual stress; 4. analysis of the effects of geometric variables on weld residual stress. It is anticipated that the technologies developed in this study are applicable to integrity estimation of steam generator tubes and weld parts in NPPs.

  14. Plant collecting program in Southeast Asia under the sponsorship of the United States National Cancer Institute (NCI) (1986-1991)

    NARCIS (Netherlands)

    Soejarto, D.D.

    1992-01-01

    With funding from the United States National Cancer Institute (NCI)¹, a program was undertaken to collect plant samples in Southeast Asia to be tested for their cancer- and AIDS-arresting properties, for the period of September 1, 1986 through August 31, 1991. The program was implemented with

  15. Photoactivatable Lipid-based Nanoparticles as a Vehicle for Dual Agent Delivery | NCI Technology Transfer Center | TTC

    Science.gov (United States)

    Researchers at the National Cancer Institute (NCI) RNA Biology Laboratory have developed nanoparticles that can deliver an agent (i.e., therapeutic or imaging) and release it upon targeted photoactivation, allowing for controlled temporal and localized release of the agent.

  16. NCI's High Performance Computing (HPC) and High Performance Data (HPD) Computing Platform for Environmental and Earth System Data Science

    Science.gov (United States)

    Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2015-04-01

    The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods that support data analysis, and 3) the progress so far in harmonising the underlying data collections for future interdisciplinary research across these large-volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially

  17. ANIONIC POLYMERIZATION OF ALKYL METHACRYLATES INITIATED BY nBuCu(NCy2)Li

    Institute of Scientific and Technical Information of China (English)

    Bing-yong Han; Jian-guo Liang; Jian-min Lu; Feng An; Wan-tai Yang

    2009-01-01

    Anionic polymerization of methyl methacrylate (MMA), n-butyl methacrylate (nBMA) and glycidyl methacrylate (GMA) initiated by nBuCu(NCy2)Li (1) in tetrahydrofuran (THF) at -50℃ to -10℃ was investigated. It was found that the polymerization of MMA and nBMA initiated by 1 proceeded quantitatively in THF to afford PMMA and PBMA with a polydispersity index of 1.15-1.30 and nearly 100% initiator efficiency at -10℃. The molecular weights increased linearly with the ratio of [monomer]/[1]. However, a post-polymerization experiment carried out on this system revealed a double polymer peak by GPC when fresh monomer was added after an interval of 10 min. Polymerization of styrene could be initiated by 1, but the initiator efficiency was low.

  18. Methods to estimate the between‐study variance and its uncertainty in meta‐analysis†

    Science.gov (United States)

    Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian PT; Langan, Dean; Salanti, Georgia

    2015-01-01

    Meta-analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between-study variance, has long been challenged. Our aim is to identify known methods for estimating the between-study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence comparing them. We identified 16 estimators for the between-study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel, and for continuous data the restricted maximum likelihood estimator, are better alternatives for estimating the between-study variance. Based on the scenarios and results presented in the published studies, we recommend the Q-profile method and the alternative approach based on a ‘generalised Cochran between-study variance statistic’ to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence-based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd. PMID:26332144
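The two estimators most discussed above both derive from Cochran's Q statistic: DerSimonian-Laird is a closed-form method-of-moments estimator, while Paule-Mandel iteratively finds the τ² at which the generalised Q statistic equals its expectation k − 1. A minimal sketch, with invented example data:

```python
def dl_tau2(y, v):
    """DerSimonian-Laird estimator of the between-study variance tau^2.
    y: study effect estimates, v: their within-study variances."""
    w = [1 / vi for vi in v]
    mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, y))
    k = len(y)
    denom = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    return max(0.0, (q - (k - 1)) / denom)   # truncate at zero

def pm_tau2(y, v, tol=1e-10, upper=100.0):
    """Paule-Mandel estimator: solve Q_gen(tau^2) = k - 1 by bisection,
    where Q_gen uses weights 1 / (v_i + tau^2)."""
    k = len(y)
    def q_gen(t2):
        w = [1 / (vi + t2) for vi in v]
        mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
        return sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, y))
    if q_gen(0.0) <= k - 1:      # no evidence of heterogeneity
        return 0.0
    lo, hi = 0.0, upper
    while hi - lo > tol:         # q_gen decreases as tau^2 grows
        mid = (lo + hi) / 2
        if q_gen(mid) > k - 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

effects = [0.10, 0.30, 0.35, 0.65, 0.45, 0.15]    # invented example data
variances = [0.03, 0.04, 0.02, 0.03, 0.05, 0.04]
print(dl_tau2(effects, variances), pm_tau2(effects, variances))
```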

  19. Investigation of internalization and cytotoxicity of 125I-[Tyr3]-octreotide in NCI-H446 cell line

    International Nuclear Information System (INIS)

    Sun Junjie; Fan Wo; Xu Yujie; Zhang Youjiu; Zhu Ran; Hu Mingjiang

    2004-01-01

    Objective: To investigate the [Tyr3]-octreotide (TOC) internalizing capacity of the NCI-H446 cell line and the cytotoxicity of 125I-TOC in NCI-H446 cells, in order to assess the potential of 125I-TOC as a therapeutic radiopharmaceutical for somatostatin receptor (SSTR)-positive tumors. Methods: NCI-H446 cells were incubated with 125I-TOC for different periods of time, and the amounts of internalized 125I-TOC and of 125I-TOC bound to the cell nucleus were measured with a γ counter. The viability of the cells was analyzed by a 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay at different time points with various doses of 125I-TOC, free 125I and TOC. Results: 125I-TOC was internalized into and bound to the nucleus in a time-dependent manner. Nucleus-bound 125I-TOC reached its highest level at 24 h, 7 times higher than at 0.5 h. The cytotoxicity of 125I-TOC in SSTR-positive NCI-H446 cells was also dose- and time-dependent. The greatest cytotoxic effect was found at 96 h with 74 kBq of 125I-TOC, which reduced the survival ratio of the cells to (44.8 ± 7.2)%. Conclusions: 125I-TOC can be internalized into SSTR-positive cells via SSTR-mediated uptake, and NCI-H446 cells can be killed by the Auger electrons emitted by 125I-TOC. The cytotoxic effect was dose- and time-dependent.

  20. NCI's national environmental research data collection: metadata management built on standards and preparing for the semantic web

    Science.gov (United States)

    Wang, Jingbo; Bastrakova, Irina; Evans, Ben; Gohar, Kashif; Santana, Fabiana; Wyborn, Lesley

    2015-04-01

    National Computational Infrastructure (NCI) manages national environmental research data collections (10+ PB) as part of its specialized high performance data node of the Research Data Storage Infrastructure (RDSI) program. We manage 40+ data collections using NCI's Data Management Plan (DMP), which is compatible with the ISO 19100 metadata standards. We utilize ISO standards to make sure our metadata is transferable and interoperable for sharing and harvesting. The DMP is used, along with metadata from the data itself, to create a hierarchy of data collection, dataset and time series catalogues that is then exposed through GeoNetwork for standard discoverability. These catalogues are linked using parent-child relationships. The hierarchical infrastructure of our GeoNetwork catalogue system aims to address both discoverability and in-house administrative use-cases. At NCI, we are currently improving the metadata interoperability in our catalogue by linking with standardized community vocabulary services. These emerging vocabulary services are being established to help harmonise data from different national and international scientific communities. One such vocabulary service is currently being established by the Australian National Data Service (ANDS). Data citation is another important aspect of the NCI data infrastructure: it allows tracking of data usage and infrastructure investment, encourages data sharing, and increases trust in research that relies on these data collections. We incorporate the standard vocabularies into the data citation metadata so that the data citation becomes machine readable and semantically friendly for web-search purposes as well. By standardizing our metadata structure across our entire data corpus, we are laying the foundation to enable the application of appropriate semantic mechanisms to enhance discovery and analysis of NCI's national environmental research data information. We expect that this will further

  1. The study to estimate the floating population in Seoul, Korea

    OpenAIRE

    Lee, Geon Woo; Lee, Yong Jin; Kim, Youngeun; Hong, Seung-Han; Kim, Soohwaun; Kim, Jeong Soo; Lee, Jong Tae; Shin, Dong Chun; Lim, Youngwook

    2017-01-01

    Traffic-related pollutants have been reported to increase the morbidity of respiratory diseases. In order to apply management policies related to motor vehicles, studies of the floating population living in cities are important. The metro rail transit system accounts for about 54% of total public transportation use by passengers residing in Seoul. Based on the rate of metro use, the people-flow ratios in each administrative area were calculated. By applying a people-flow ratio based on the o...

  2. The Effect of Some Estimators of Between-Study Variance on Random

    African Journals Online (AJOL)

    Samson Henry Dogo

    the first step to such objectivity (Schmidt, 1992), allows one to combine results from many studies and accurately ... Schmidt, 2000) due to its ability to account for variation in effects across the studies. Random-effects model ... (2015), and each of the estimators differs in terms of their bias and precision in estimation. By definition ...

  3. Microscopic study of rock for estimating long-term behavior

    International Nuclear Information System (INIS)

    Ichikawa, Yasuaki

    1997-03-01

    One must consider the micro-structure of rock and rock mass in order to predict long-term behavior over more than ten thousand years. First we observe the micro-crack distribution of granite, which is commonly found in Japan and is widely used for several types of structures. Creep under constant load and relaxation under constant displacement are typical time-dependent phenomena, and we performed a series of laboratory relaxation tests under microscope observation. The specimens, preserved in water, are of the granite mentioned above. The aim of this experiment is to observe the sequential propagation of micro-cracks and its effect on the macroscopic response of the rock material in the relaxation state. Next, a viscoelastic homogenization method is applied to analyze the behavior of granite, which is composed of several kinds of minerals (i.e., a polycrystalline material). The homogenization method, developed for analyzing the mechanics of composite materials, is a mathematical theory that can describe macroscopic behavior while accounting for microscopic characteristics with periodic microstructures. In this study, it is applied to a polycrystalline rock which involves a few minerals and micro-cracks. Furthermore, since the homogenization analysis must be applicable to rock materials which show nonlinear time-dependent behavior, we develop a new elasto-visco-plastic homogenization theory, and its validity is checked for some ground structures made of clay. (author)

  4. Small-Area Estimation with Zero-Inflated Data – a Simulation Study

    Directory of Open Access Journals (Sweden)

    Krieg Sabine

    2016-12-01

    Many target variables in official statistics follow a semicontinuous distribution with a mixture of zeros and continuously distributed positive values; such variables are called zero-inflated. When reliable estimates for subpopulations with small sample sizes are required, model-based small-area estimators can be used, which improve the accuracy of the estimates by borrowing information from other subpopulations. In this article, three small-area estimators are investigated. The first estimator is the EBLUP, which can be considered the most common small-area estimator and is based on a linear mixed model that assumes normal distributions; the EBLUP is therefore model-misspecified in the case of zero-inflated variables. The other two small-area estimators are based on a model that takes zero inflation explicitly into account; both the Bayesian and the frequentist approach are considered. These small-area estimators are compared with each other and with design-based estimation in a simulation study with zero-inflated target variables. Both a simulation with artificial data and a simulation with real data from the Dutch Household Budget Survey are carried out. It is found that the small-area estimators improve the accuracy compared to the design-based estimator. The amount of improvement strongly depends on the properties of the population and the subpopulations of interest.
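The "borrowing strength" idea behind all three estimators can be illustrated in its simplest form: shrink each area's direct sample mean toward the overall mean. This is not the EBLUP of the article (which estimates the shrinkage weight from a fitted mixed model) but a fixed-weight composite estimator, applied here to invented zero-inflated toy data:

```python
import random

def composite_estimates(area_samples, gamma=0.5):
    """Simplest 'borrowing strength' estimator: shrink each small area's
    direct sample mean toward the overall mean. The EBLUP chooses the
    shrinkage weight from an estimated mixed model; here gamma is fixed
    for illustration."""
    all_values = [y for ys in area_samples for y in ys]
    overall = sum(all_values) / len(all_values)
    out = []
    for ys in area_samples:
        direct = sum(ys) / len(ys)
        out.append(gamma * direct + (1 - gamma) * overall)
    return out

# Zero-inflated toy data: each observation is 0 with probability 0.7,
# otherwise a continuously distributed positive amount.
rng = random.Random(1)
areas = [[rng.expovariate(1.0) if rng.random() > 0.7 else 0.0
          for _ in range(8)] for _ in range(5)]
print(composite_estimates(areas))
```

By construction each composite estimate lies between the area's direct estimate and the overall mean, which is what stabilises small samples at the cost of some bias.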

  5. Microscopic study of rock for estimating long-term behavior

    International Nuclear Information System (INIS)

    Ichikawa, Yasuaki

    2004-02-01

    Micro-structure plays an essential role in the long-term behavior of rock. To understand the long-term characteristics of granite we here present the following: 1) observation of microcrack initiation and propagation by Confocal Laser Scanning Microscope (CLSM) under uniaxial compression (before loading and at each loading stage), 2) characterization of the mechanism of microcrack initiation and propagation observed by stereoscopic microscope under uniaxial/triaxial compression and relaxation tests, and 3) a study of the strong discontinuity analysis included in the homogenization theory to predict the long-term behavior of micro/macro-level stress in granite. First, the CLSM was used to acquire clearly focused three-dimensional images of granite specimens and to observe changes in the microscale structure, including the mineral configuration, under uniaxial compressive stress. Although microcracks had been thought to initiate and propagate on intergranular boundaries, the CLSM observations show that new microcracks are generated from the ends of pre-existing cracks distributed in quartz and biotite. Second, we show the results of stress-relaxation tests of granite specimens observed by an optical microscope under water-saturated triaxial compression. Since microcrack generation and propagation play an essential role in predicting the long-term behavior of rock, we conducted the experiments with careful attention to 1) keeping the edge displacement and the strain in the whole specimen accurately constant, and 2) measuring the relaxed stress exactly. Next, in order to simulate the experimental results, which indicate that initiation and propagation of microcracks control the stress-relaxation phenomenon, we introduce a homogenization analysis procedure together with the strong discontinuity analysis, whose mechanical implications and mathematical foundations have recently been established.
The numerical results show that we can

  6. A Generalized Estimating Equations Approach to Model Heterogeneity and Time Dependence in Capture-Recapture Studies

    Directory of Open Access Journals (Sweden)

    Akanda Md. Abdus Salam

    2017-03-01

    Individual heterogeneity in capture probabilities and time dependence are fundamentally important for estimating closed animal population parameters in capture-recapture studies. A generalized estimating equations (GEE) approach accounts for linear correlation among capture-recapture occasions and for individual heterogeneity in capture probabilities in a closed-population capture-recapture model with individual heterogeneity and time variation. The estimated capture probabilities are used to estimate animal population parameters. Two real data sets are used for illustrative purposes, and a simulation study is carried out to assess the performance of the GEE estimator. A Quasi-Likelihood Information Criterion (QIC) is applied for selection of the best-fitting model. The approach performs well when the estimated population parameters depend on individual heterogeneity and on the nature of the linear correlation among capture-recapture occasions.
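Once per-individual capture probabilities have been estimated (by GEE or otherwise), a closed-population abundance estimate commonly follows a Horvitz-Thompson form: each captured animal is weighted by the inverse of its probability of being caught at least once over the T occasions. A minimal sketch of that final step (the probabilities below are invented, not the output of a GEE fit):

```python
def abundance_ht(capture_probs, occasions):
    """Horvitz-Thompson-type abundance estimate for a closed population:
    sum over captured individuals of 1 / P(caught at least once), where
    P(caught at least once) = 1 - (1 - p_i)^T for per-occasion
    capture probability p_i and T occasions."""
    return sum(1.0 / (1.0 - (1.0 - p) ** occasions) for p in capture_probs)

# Hypothetical estimated capture probabilities for 6 captured animals
# over T = 4 trapping occasions:
p_hat = [0.3, 0.25, 0.4, 0.3, 0.2, 0.35]
print(abundance_ht(p_hat, occasions=4))
```

Each weight is at least 1, so the estimate can never fall below the number of animals actually captured.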

  7. On estimation of time-dependent attributable fraction from population-based case-control studies.

    Science.gov (United States)

    Zhao, Wei; Chen, Ying Qing; Hsu, Li

    2017-09-01

    Population attributable fraction (PAF) is widely used to quantify the disease burden associated with a modifiable exposure in a population. It has been extended to a time-varying measure that provides additional information on when and how the exposure's impact varies over time for cohort studies. However, there is no estimation procedure for PAF using data that are collected from population-based case-control studies, which, because of time and cost efficiency, are commonly used for studying genetic and environmental risk factors of disease incidences. In this article, we show that time-varying PAF is identifiable from a case-control study and develop a novel estimator of PAF. Our estimator combines odds ratio estimates from logistic regression models and density estimates of the risk factor distribution conditional on failure times in cases from a kernel smoother. The proposed estimator is shown to be consistent and asymptotically normal with asymptotic variance that can be estimated empirically from the data. Simulation studies demonstrate that the proposed estimator performs well in finite sample sizes. Finally, the method is illustrated by a population-based case-control study of colorectal cancer. © 2017, The International Biometric Society.
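For intuition, the classical time-invariant PAF can be estimated from case-control data via Miettinen's case-based formula, PAF = p_c (OR − 1) / OR, where p_c is the exposure prevalence among cases and the odds ratio OR approximates the relative risk under the rare-disease assumption; the time-varying estimator in the article generalizes this idea by combining logistic-regression odds ratios with kernel density estimates over failure times. A sketch with an invented 2x2 table:

```python
def odds_ratio(a, b, c, d):
    """OR from a 2x2 table: a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    return (a * d) / (b * c)

def paf_miettinen(a, b, c, d):
    """Miettinen's case-based PAF: p_c * (OR - 1) / OR,
    with p_c the proportion of cases that are exposed."""
    p_c = a / (a + b)
    or_ = odds_ratio(a, b, c, d)
    return p_c * (or_ - 1.0) / or_

# Invented counts: 120 exposed / 80 unexposed cases,
# 90 exposed / 210 unexposed controls.
print(paf_miettinen(120, 80, 90, 210))
```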

  8. A Study on Fuel Estimation Algorithms for a Geostationary Communication & Broadcasting Satellite

    Directory of Open Access Journals (Sweden)

    Jong Won Eun

    2000-12-01

    Full Text Available A method has been developed to calculate the fuel budget for a geostationary communication and broadcasting satellite. The pre-launch fuel budget estimation must account for the deterministic transfer and drift orbit maneuver requirements. Once the satellite is on station, the calculation of its lifetime should be based on the estimation of remaining fuel and an assessment of actual performance. These estimations stem from proper algorithms that produce a prediction of satellite lifetime. This paper concentrates on the fuel estimation method studied for calculation of the propellant budget using the given algorithms. Applications of this method are discussed for a communication and broadcasting satellite.
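A pre-launch propellant budget of the kind described is typically built by applying the Tsiolkovsky rocket equation to each deterministic maneuver in turn. A simplified sketch with assumed masses, delta-v items, and specific impulse (not the paper's actual algorithms):

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_mass(m0, delta_v, isp):
    """Propellant consumed by a maneuver of size delta_v (m/s)
    starting at mass m0 (kg) with specific impulse isp (s)."""
    return m0 * (1.0 - math.exp(-delta_v / (isp * G0)))

# Hypothetical GEO budget: apogee burns, then 15 years of station keeping
m = 3000.0                               # separation mass, kg (assumed)
budget = {"apogee burns": 1500.0,        # m/s, assumed
          "station keeping": 15 * 50.0}  # ~50 m/s per year, assumed
used = {}
for item, dv in budget.items():
    dm = propellant_mass(m, dv, isp=300.0)  # assumed Isp
    used[item] = dm
    m -= dm                # each burn reduces the running mass
total_propellant = sum(used.values())
```

Remaining-fuel estimates after launch run the same bookkeeping in reverse, subtracting the propellant inferred from actual maneuver performance.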

  9. Sea state estimation from an advancing ship – A comparative study using sea trial data

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam; Stredulinsky, David C.

    2012-01-01

    of a traditional wave rider buoy. The paper studies the ‘wave buoy analogy’, and a large set of full-scale motion measurements is considered. It is shown that the wave buoy analogy gives fairly accurate estimates of integrated sea state parameters when compared to corresponding estimates from real wave rider buoys...

  10. MANTEL-HAENSZEL TYPE ESTIMATORS FOR THE COUNTER-MATCHED SAMPLING DESIGN IN NESTED CASE-CONTROL STUDY

    OpenAIRE

    Fujii, Yoshinori; Zhang, Zhong-Zhan; 藤井, 良宜

    2001-01-01

    We are concerned with a counter-matched nested case-control study. Assuming the proportional hazards model, Mantel-Haenszel estimators of hazard rates are presented for two situations. The proposed estimators can be calculated without estimating the nuisance parameter. Consistent estimators of the variance of the proposed hazard rate estimators are also developed. We compare these estimators with the maximum partial likelihood estimators in terms of asymptotic variance. The methods are illustrate...
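The abstract's hazard-rate estimators are not reproduced here, but the classical Mantel-Haenszel estimator they are modeled on, a common odds ratio across strata computed without estimating any stratum-specific nuisance parameters, can be sketched as:

```python
def mantel_haenszel_or(tables):
    """Common odds ratio across strata of 2x2 tables (a, b, c, d),
    where a, b = exposed/unexposed cases and c, d = exposed/unexposed
    controls. No stratum-specific nuisance parameters are estimated."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# Two hypothetical strata
strata = [(10, 20, 15, 40), (8, 12, 10, 30)]
or_mh = mantel_haenszel_or(strata)
```

The appeal, mirrored in the paper's estimators, is that each stratum contributes a simple ratio term, so no per-stratum baseline needs to be fitted.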

  11. 76 FR 21383 - Proposed Collection; Comment Request; Food Reporting Comparison Study (FORCS) and Food and Eating...

    Science.gov (United States)

    2011-04-15

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Institutes of Health Proposed Collection; Comment Request; Food Reporting Comparison Study (FORCS) and Food and Eating Assessment Study (FEAST) (NCI... Collection: Title: Food Reporting Comparison Study (FORCS) and Food and Eating Assessment Study (FEAST) (NCI...

  12. Two-stage estimation in copula models used in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2005-01-01

    by Shih and Louis (Biometrics vol. 51, pp. 1384-1399, 1995b) and Glidden (Lifetime Data Analysis vol. 6, pp. 141-156, 2000). Because register based family studies often involve very large cohorts a method for analysing a sampled cohort is also derived together with the asymptotic properties...... of the estimators. The proposed methods are studied in simulations and the estimators are found to be highly efficient. Finally, the methods are applied to a study of mortality in twins....

  13. Augmented cross-sectional studies with abbreviated follow-up for estimating HIV incidence.

    Science.gov (United States)

    Claggett, B; Lagakos, S W; Wang, R

    2012-03-01

    Cross-sectional HIV incidence estimation based on a sensitive and less-sensitive test offers great advantages over the traditional cohort study. However, its use has been limited due to concerns about the false negative rate of the less-sensitive test, reflecting the phenomenon that some subjects may remain negative permanently on the less-sensitive test. Wang and Lagakos (2010, Biometrics 66, 864-874) propose an augmented cross-sectional design that provides one way to estimate the size of the infected population who remain negative permanently and subsequently incorporate this information in the cross-sectional incidence estimator. In an augmented cross-sectional study, subjects who test negative on the less-sensitive test in the cross-sectional survey are followed forward for transition into the nonrecent state, at which time they would test positive on the less-sensitive test. However, considerable uncertainty exists regarding the appropriate length of follow-up and the size of the infected population who remain nonreactive permanently to the less-sensitive test. In this article, we assess the impact of varying follow-up time on the resulting incidence estimators from an augmented cross-sectional study, evaluate the robustness of cross-sectional estimators to assumptions about the existence and the size of the subpopulation who will remain negative permanently, and propose a new estimator based on abbreviated follow-up time (AF). Compared to the original estimator from an augmented cross-sectional study, the AF estimator allows shorter follow-up time and does not require estimation of the mean window period, defined as the average time between detectability of HIV infection with the sensitive and less-sensitive tests. It is shown to perform well in a wide range of settings. We discuss when the AF estimator would be expected to perform well and offer design considerations for an augmented cross-sectional study with abbreviated follow-up. © 2011, The
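The baseline cross-sectional incidence estimator underlying these designs divides the count of "recent" infections (sensitive-test positive, less-sensitive-test negative) by the susceptible count times the mean window period. A simplified sketch with hypothetical survey numbers, ignoring the permanently nonreactive subpopulation that the article addresses:

```python
def snapshot_incidence(n_recent, n_susceptible, mean_window_years):
    """Cross-sectional HIV incidence (infections per person-year):
    subjects in the 'window' state (sensitive+ / less-sensitive-)
    divided by susceptibles times the mean window period."""
    return n_recent / (n_susceptible * mean_window_years)

# Hypothetical survey: 25 'recent' results among 5000 HIV-negative
# subjects, mean window period of about 180 days
lam = snapshot_incidence(25, 5000, 180 / 365.0)
```

The AF estimator proposed in the article is attractive precisely because it avoids estimating the mean window period that this naive version requires.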

  14. NCI Releases Video: Proteogenomics Research - On the Frontier of Precision Medicine | Office of Cancer Clinical Proteomics Research

    Science.gov (United States)

    The Clinical Proteomic Tumor Analysis Consortium (CPTAC) of the National Cancer Institute (NCI), part of the National Institutes of Health, announces the release of an educational video titled “Proteogenomics Research: On the Frontier of Precision Medicine.” Launched at the HUPO2017 Global Leadership Gala Dinner, catalyzed in part by the Cancer Moonshot initiative and featuring as keynote speaker the 47th Vice President of the United States of America Joseph R.

  15. 76 FR 38669 - Submission for OMB Review; Comment Request; Food Reporting Comparison Study (FORCS) and Food and...

    Science.gov (United States)

    2011-07-01

    ...; Comment Request; Food Reporting Comparison Study (FORCS) and Food and Eating Assessment Study (FEAST) (NCI... Collection: Title: Food Reporting Comparison Study (FORCS) and Food and Eating Assessment Study (FEAST) (NCI... (in Minnesota, California, and Michigan) between ages 20 and 70 years. For the FEAST study...

  16. Weight Estimate and Centers of Gravity for JT-11 Nuclear Conversion Study

    International Nuclear Information System (INIS)

    Manning, R. W.

    1958-01-01

    Weight estimates and centers of gravity for the JT-11 nuclear conversion study are tabulated. Included in the radiator section are: diffuser, shrouds, supports, radiator, liquid metal, shafting and casing.
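Component weights and centers of gravity like those tabulated combine by the moment-balance rule x_cg = Σ mᵢxᵢ / Σ mᵢ. A sketch with hypothetical component values (the memo's actual figures are not reproduced):

```python
def composite_cg(items):
    """items: iterable of (weight, cg_station). Returns total weight
    and the weight-weighted mean station (composite CG)."""
    total_w = sum(w for w, _ in items)
    cg = sum(w * x for w, x in items) / total_w
    return total_w, cg

# Hypothetical radiator-section components: (weight lb, station in)
components = [(120.0, 10.0),   # diffuser
              (80.0, 25.0),    # shrouds and supports
              (300.0, 40.0),   # radiator and liquid metal
              (60.0, 55.0)]    # shafting and casing
W, x_cg = composite_cg(components)
```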

  17. Study on Top-Down Estimation Method of Software Project Planning

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jun-guang; LÜ Ting-jie; ZHAO Yu-mei

    2006-01-01

    This paper studies a new software project planning method using actual project data, in order to make software project plans more effective. From the perspective of system theory, the new method regards a software project plan as an associative unit of study. During top-down estimation of a software project, the Program Evaluation and Review Technique (PERT) and the analogy method are combined to estimate project size; effort estimates and specific schedules are then obtained according to the distribution of effort across phases. This allows a set of practical and feasible planning methods to be constructed. Actual data indicate that this set of methods leads to effective software project planning.
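The PERT step combines optimistic, most-likely, and pessimistic analogies into an expected size via E = (O + 4M + P)/6, with (P − O)/6 as the conventional standard deviation. A minimal sketch with hypothetical size figures:

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Classic PERT three-point estimate: beta-distribution mean
    and the conventional standard deviation."""
    mean = (optimistic + 4.0 * most_likely + pessimistic) / 6.0
    std = (pessimistic - optimistic) / 6.0
    return mean, std

# Hypothetical size estimates (KLOC) from three analogous projects
mean, std = pert_estimate(8.0, 12.0, 22.0)  # -> (13.0, ~2.33)
```

Phase-level effort and schedules then follow by apportioning the total according to an assumed phase-effort distribution.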

  18. In memoriam: an appreciation for the NCI R25T cancer education and career development program.

    Science.gov (United States)

    Chang, Shine

    2014-06-01

    On September 7, 2013, the NCI R25T award mechanism ended its final "receipt/review/award cycle" after more than two decades shaping the cancer prevention and control workforce. Created in 1991 to respond to a national shortage of cancer prevention and control researchers, the R25T supported innovative institutional programs with specialized curricula preparing individuals for careers as independent scientists for the field. Required elements ensured developing transdisciplinary sensibilities and skills highly suited to team science, including conducting collaborative research with mentors of complementary expertise. R25Ts provided trainee stipends, research, education, and travel funds at levels far higher than T32 National Service Research Awards to attract individuals from diverse disciplines. Graduates are faculty at all academic ranks, and hold leadership positions such as associate directors of cancer prevention and control. Beyond its trainees, R25Ts also recruited into the field other students exposed through courses in specialized prevention curricula, as well as course instructors and trainee mentors, who did not initially consider their work to be relevant to cancer prevention. Although advances are being achieved, prevention efforts are not yet fully realized, and currently unknown is the impact on the workforce of terminating the R25T, including whether it is another barrier to preventing cancer. ©2014 American Association for Cancer Research.

  19. NCI Think Tank Concerning the Identifiability of Biospecimens and “-Omic” Data

    Science.gov (United States)

    Weil, Carol J.; Mechanic, Leah E.; Green, Tiffany; Kinsinger, Christopher; Lockhart, Nicole C.; Nelson, Stefanie A.; Rodriguez, Laura L.; Buccini, Laura D.

    2014-01-01

    On June 11 and 12, 2012, the National Cancer Institute (NCI) hosted a think tank concerning the identifiability of biospecimens and “-omic” data in order to explore challenges surrounding this complex and multifaceted topic. The think tank brought together forty-six leaders from several fields, including cancer genomics, bioinformatics, human subject protection, patient advocacy, and commercial genetics. The first day involved presentations regarding the state of the science of re-identification; current and proposed regulatory frameworks for assessing identifiability; developments in law, industry and biotechnology; and the expectations of patients and research participants. The second day was spent by think tank participants in small break-out groups designed to address specific sub-topics under the umbrella issue of identifiability, including considerations for the development of best practices for data sharing and consent, and targeted opportunities for further empirical research. We describe the outcomes of this two day meeting, including two complementary themes that emerged from moderated discussions following the presentations on Day 1, and ideas presented for further empirical research to discern the preferences and concerns of research participants about data sharing and individual identifiability. PMID:23579437

  20. Developing Cancer Informatics Applications and Tools Using the NCI Genomic Data Commons API.

    Science.gov (United States)

    Wilson, Shane; Fitzsimons, Michael; Ferguson, Martin; Heath, Allison; Jensen, Mark; Miller, Josh; Murphy, Mark W; Porter, James; Sahni, Himanso; Staudt, Louis; Tang, Yajing; Wang, Zhining; Yu, Christine; Zhang, Junjun; Ferretti, Vincent; Grossman, Robert L

    2017-11-01

    The NCI Genomic Data Commons (GDC) was launched in 2016 and makes available over 4 petabytes (PB) of cancer genomic and associated clinical data to the research community. This dataset continues to grow and currently includes over 14,500 patients. The GDC is an example of a biomedical data commons, which collocates biomedical data with storage and computing infrastructure and commonly used web services, software applications, and tools to create a secure, interoperable, and extensible resource for researchers. The GDC is (i) a data repository for downloading data that have been submitted to it, and also a system that (ii) applies a common set of bioinformatics pipelines to submitted data; (iii) reanalyzes existing data when new pipelines are developed; and (iv) allows users to build their own applications and systems that interoperate with the GDC using the GDC Application Programming Interface (API). We describe the GDC API and how it has been used both by the GDC itself and by third parties. Cancer Res; 77(21); e15-18. ©2017 American Association for Cancer Research.
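A typical interaction with the GDC API is a filtered query against an endpoint such as /cases. The sketch below only constructs the request URL and parameters (the field names follow the GDC data dictionary but should be checked against the current API documentation; the network call itself is left to the caller):

```python
import json

GDC_API = "https://api.gdc.cancer.gov"

def build_cases_query(primary_site, fields, size=10):
    """Build the endpoint URL and parameters for a GDC /cases query
    filtered on primary site. Field names are assumptions taken from
    the GDC data dictionary."""
    filters = {
        "op": "in",
        "content": {"field": "cases.primary_site",
                    "value": [primary_site]},
    }
    params = {
        "filters": json.dumps(filters),  # GDC expects JSON-encoded filters
        "fields": ",".join(fields),
        "format": "JSON",
        "size": str(size),
    }
    return f"{GDC_API}/cases", params

url, params = build_cases_query("Lung", ["case_id", "disease_type"])
# e.g. requests.get(url, params=params).json()  (network call not shown)
```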

  1. Detecting Role Errors in the Gene Hierarchy of the NCI Thesaurus

    Directory of Open Access Journals (Sweden)

    Yehoshua Perl

    2008-01-01

    Full Text Available Gene terminologies are playing an increasingly important role in the ever-growing field of genomic research. While errors in large, complex terminologies are inevitable, gene terminologies are even more susceptible to them due to the rapid growth of genomic knowledge and the nature of its discovery. It is therefore very important to establish quality-assurance protocols for such genomic-knowledge repositories. Different kinds of terminologies oftentimes require auditing methodologies adapted to their particular structures. In light of this, an auditing methodology tailored to the characteristics of the NCI Thesaurus’s (NCIT’s) Gene hierarchy is presented. The Gene hierarchy is of particular interest to the NCIT’s designers due to the primary role of genomics in current cancer research. This multiphase methodology focuses on detecting role errors, such as missing roles or roles with incorrect or incomplete target structures, occurring within that hierarchy. The methodology is based on two kinds of abstraction networks, called taxonomies, that highlight the role distribution among concepts within the IS-A (subsumption) hierarchy. These abstract views tend to highlight portions of the hierarchy having a higher concentration of errors. The errors found during an application of the methodology

  2. Highlights of recent developments and trends in cancer nanotechnology research--view from NCI Alliance for Nanotechnology in Cancer.

    Science.gov (United States)

    Hull, L C; Farrell, D; Grodzinski, P

    2014-01-01

    Although the incidence of cancer and cancer related deaths in the United States has decreased over the past two decades due to improvements in early detection and treatment, cancer still is responsible for a quarter of the deaths in this country. There is much room for improvement on the standard treatments currently available and the National Cancer Institute (NCI) has recognized the potential for nanotechnology and nanomaterials in this area. The NCI Alliance for Nanotechnology in Cancer was formed in 2004 to support multidisciplinary researchers in the application of nanotechnology to cancer diagnosis and treatment. The researchers in the Alliance have been productive in generating innovative solutions to some of the central issues of cancer treatment including how to detect tumors earlier, how to target cancer cells specifically, and how to improve the therapeutic index of existing chemotherapies and radiotherapy treatments. Highly creative ideas are being pursued where novelty in nanomaterial development enables new modalities of detection or therapy. This review highlights some of the innovative materials approaches being pursued by researchers funded by the NCI Alliance. Their discoveries to improve the functionality of nanoparticles for medical applications include the generation of new platforms, improvements in the manufacturing of nanoparticles, and determination of the underlying reasons for the movement of nanoparticles in the blood. © 2013.

  3. 3D Tendon Strain Estimation Using High-frequency Volumetric Ultrasound Images: A Feasibility Study.

    Science.gov (United States)

    Carvalho, Catarina; Slagmolen, Pieter; Bogaerts, Stijn; Scheys, Lennart; D'hooge, Jan; Peers, Koen; Maes, Frederik; Suetens, Paul

    2018-03-01

    Estimation of strain in tendons for tendinopathy assessment is a hot topic within the sports medicine community. It is believed that, if accurately estimated, existing treatment and rehabilitation protocols can be improved and presymptomatic abnormalities can be detected earlier. State-of-the-art studies present inaccurate and highly variable strain estimates, leaving this problem without solution. Out-of-plane motion, present when acquiring two-dimensional (2D) ultrasound (US) images, is a known problem and may be responsible for such errors. This work investigates the benefit of high-frequency, three-dimensional (3D) US imaging to reduce errors in tendon strain estimation. Volumetric US images were acquired in silico, in vitro, and ex vivo using an innovative acquisition approach that combines the acquisition of 2D high-frequency US images with a mechanical guided system. An affine image registration method was used to estimate global strain. 3D strain estimates were then compared with ground-truth values and with 2D strain estimates. The obtained results for in silico data showed a mean absolute error (MAE) of 0.07%, 0.05%, and 0.27% for 3D estimates along the axial, lateral, and elevation directions, respectively, and MAEs of 0.21% and 0.29% for 2D strain estimates. Although 3D could outperform 2D, this does not occur in in vitro and ex vivo settings, likely due to 3D acquisition artifacts. Comparison against the state-of-the-art methods showed competitive results. The proposed work shows that 3D strain estimates are more accurate than 2D estimates, but acquisition of appropriate 3D US images remains a challenge.
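Given the linear part A of an affine registration between rest and deformed volumes, a global strain measure can be read off as the Green-Lagrange tensor E = ½(AᵀA − I), with the normal strains on the diagonal. A sketch with a hypothetical diagonal transform (the paper's registration pipeline is not reproduced):

```python
import numpy as np

def green_lagrange_strain(A):
    """Green-Lagrange strain tensor from the linear part A of an
    affine transform mapping the rest to the deformed configuration."""
    return 0.5 * (A.T @ A - np.eye(A.shape[0]))

# Hypothetical registration result: 2% stretch along the tendon axis,
# slight lateral and elevational compression, no shear
A = np.diag([1.02, 0.995, 0.998])
E = green_lagrange_strain(A)
axial_strain = E[0, 0]   # E_xx = (1.02**2 - 1) / 2
```

For the small strains reported in the abstract, E_xx is close to the engineering strain (stretch minus one).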

  4. Study on Comparison of Bidding and Pricing Behavior Distinction between Estimate Methods

    Science.gov (United States)

    Morimoto, Emi; Namerikawa, Susumu

    The most characteristic recent trend in bidding and pricing behavior is the increasing number of bidders just above the criteria for low-price bidding investigations. The contractor's markup is the difference between the bidding price and the execution price; in Japanese public works bids, it is therefore the difference between the price set by the criteria for low-price bidding investigations and the execution price. In practice, bidders' strategies and behavior have been controlled by public engineers' budgets. Estimation and bidding are inseparably linked in the Japanese public works procurement system. A trial of the unit-price-type estimation method began in 2004, while the accumulated estimation method remains one of the general methods for public works, so there are two types of standard estimation methods in Japan. In this study, we carried out a statistical analysis of the bid information for civil engineering works for the Ministry of Land, Infrastructure, and Transportation in 2008. The analysis raises several issues showing that bidding and pricing behavior is related to the estimation method used for public works bids in Japan. The two types of standard estimation methods produce different results in the number of bidders (the bid/no-bid decision) and the distribution of bid prices (the markup decision). The comparison of bid price distributions showed that, in large public works estimated by the unit-price-type method, bids tend to concentrate more heavily at the criteria for low-price bidding investigations than under the accumulated estimation method. At the same time, the number of bidders for public works estimated by unit price tends to increase significantly; unit-price estimation is likely to have been one of the factors construction companies consider when deciding whether to participate in biddings.

  5. Study of the Convergence in State Estimators for LTI Systems with Event Detection

    Directory of Open Access Journals (Sweden)

    Juan C. Posada

    2016-01-01

    Full Text Available The methods frequently used to estimate the state of an LTI system require that the precise value of the output variable is known at all times, or at equidistant sampling times. In LTI systems in which the output signal is measured through binary sensors (detectors), the traditional way of designing state observers is not applicable even though the system has a complete observability matrix. This type of state-observer design is known as passive. It is necessary, then, to introduce a new state estimation technique that allows reconstructing the state from information about the variable's crossing of a detector's action threshold (switch). This paper therefore studies the convergence of this type of estimator in finite time, allowing one to establish, theoretically, whether some family of the proposed models can be estimated convergently through the use of the event-based estimation technique.
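One simple way to realize such an event-based estimator is a discrete-time Luenberger observer that applies its output correction only at event instants, when the binary detector fires and the output is therefore known to equal the threshold, and runs open-loop prediction otherwise. A sketch with an assumed stable system and an assumed gain (not the paper's estimator):

```python
import numpy as np

# Assumed stable 2-state LTI system and binary detector with one threshold
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
L_gain = np.array([0.5, 0.3])   # assumed observer gain
THRESH = 0.5                    # detector switching level

x = np.array([2.0, -1.0])       # true (unmeasured) state
xh = np.zeros(2)                # observer state
for _ in range(60):
    y = (C @ x).item()
    # The detector only reports threshold crossings; at those instants
    # the output is known to equal THRESH, so a correction is applied.
    event = abs(y - THRESH) < 0.05
    innov = (THRESH - (C @ xh).item()) if event else 0.0
    xh = A @ xh + L_gain * innov
    x = A @ x
err = float(np.linalg.norm(x - xh))
```

Convergence analysis of the kind the paper studies asks whether such event-triggered corrections occur often enough for the estimation error to vanish in finite time.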

  6. A covariance correction that accounts for correlation estimation to improve finite-sample inference with generalized estimating equations: A study on its applicability with structured correlation matrices.

    Science.gov (United States)

    Westgate, Philip M

    2016-01-01

    When generalized estimating equations (GEE) incorporate an unstructured working correlation matrix, the variances of regression parameter estimates can inflate due to the estimation of the correlation parameters. In previous work, an approximation for this inflation that results in a corrected version of the sandwich formula for the covariance matrix of regression parameter estimates was derived. Use of this correction for correlation structure selection also reduces the over-selection of the unstructured working correlation matrix. In this manuscript, we conduct a simulation study to demonstrate that an increase in variances of regression parameter estimates can occur when GEE incorporates structured working correlation matrices as well. Correspondingly, we show the ability of the corrected version of the sandwich formula to improve the validity of inference and correlation structure selection. We also study the relative influences of two popular corrections to a different source of bias in the empirical sandwich covariance estimator.
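The uncorrected empirical sandwich covariance that the paper starts from can be illustrated for a linear working model with clustered data; the correction studied in the article further adjusts the middle "meat" term to account for estimated correlation parameters, which this sketch does not implement:

```python
import numpy as np

def sandwich_cov(X, y, groups):
    """Empirical (cluster-robust) sandwich covariance for OLS
    coefficients, treating each group as an independent cluster.
    This is the uncorrected estimator; small-sample corrections
    modify the middle 'meat' term."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(groups):
        Xg, eg = X[groups == g], resid[groups == g]
        s = Xg.T @ eg                  # cluster-level score
        meat += np.outer(s, s)
    return bread @ meat @ bread

# Simulated clustered data: 30 clusters of 4 correlated observations
rng = np.random.default_rng(0)
n_grp, per = 30, 4
groups = np.repeat(np.arange(n_grp), per)
x = rng.normal(size=n_grp * per)
u = np.repeat(rng.normal(size=n_grp), per)      # shared cluster effect
y = 1.0 + 2.0 * x + u + rng.normal(size=n_grp * per)
X = np.column_stack([np.ones_like(x), x])
V = sandwich_cov(X, y, groups)
se_slope = float(np.sqrt(V[1, 1]))
```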

  7. Competencies: the Court's doctrine on the definition of competencies. Exclusive, shared, and executive competencies. [Les competències. La doctrina del Tribunal sobre la definició de les competències. Les competències exclusives, les compartides i les executives.]

    Directory of Open Access Journals (Sweden)

    Ramon Riu

    2010-07-01

    Full Text Available Contents (translated from the Catalan):
    The doctrine of Judgment 31/2010 on the statutory definition of the categories of competencies (251-257), Mercè Barceló i Serramalera
    The Constitutional Court's doctrine on the definition of competencies: exclusive, shared, and executive competencies (258-261), Antoni Bayona Rocamora
    The doctrine of Judgment 31/2010 on executive competencies ("sostenella e no enmendalla") (262-269), Xavier Bernadí Gil
    The Court's doctrine on the definition of competencies: exclusive, shared, and executive competencies (270-276), Marc Carrillo López
    The effects of the Judgment on the statutory definition of competencies: the legal "devaluation" of the statutes of autonomy (277-281), Mercè Corretja Torrens
    The functional categories of competencies in the Statute of Autonomy of Catalonia: comments on Judgment 31/2010 (282-287), Ramon Riu Fortuny
    Typology of competencies and their functional scope: Articles 110 to 112 (288-294), Joaquín Tornos Mas

  8. A case study to estimate thermal conductivity of ABS in Cold Climate Chamber

    OpenAIRE

    Mughal, Umair Najeeb; Makarova, Marina; Virk, Muhammad Shakeel; Polanco Pinerez, Geanette

    2015-01-01

    Open Access (Romeo Green journal), publisher's version / PDF may be used http://www.scirp.org/journal/wjet/ Non-steady-state thermal conductivity of ABS was estimated using an analytical approach in a Cold Climate Chamber at −10°C and −14°C. Two hollow cylinders of ABS of varying thickness were used to estimate the conductivity. The material was porous but the porosity was unknown. This paper is a case study to understand whether it is reasonable to estimate the thermal conductivity using th...

  9. Estimating one's own and one's relatives' multiple intelligence: a study from Argentina.

    Science.gov (United States)

    Furnham, Adrian; Chamorro-Premuzic, Tomas

    2005-05-01

    Participants from Argentina (N = 217) estimated their own, their partner's, their parents' and their grandparents' overall and multiple intelligences. The Argentinean data showed that men gave higher overall estimates than women (M = 110.4 vs. 105.1) as well as higher estimates on mathematical and spatial intelligence. Participants thought themselves slightly less bright than their fathers (2 IQ points) but brighter than their mothers (6 points), their grandfathers (8 points), but especially their grandmothers (11 points). Regressions showed that participants thought verbal and mathematical IQ to be the best predictors of overall IQ. Results were broadly in agreement with other studies in the area. A comparison was also made with British data using the same questionnaire. British participants tended to give significantly higher self-estimates than for relatives, though the pattern was generally similar. Results are discussed in terms of the studies in the field.

  10. Accuracy of prognosis estimates by four palliative care teams: a prospective cohort study

    Directory of Open Access Journals (Sweden)

    Costantini Massimo

    2002-03-01

    Full Text Available Abstract Background: Prognosis estimates are used to access services, but are often inaccurate. This study aimed to determine the accuracy of giving a prognosis range. Methods and measurements: A prospective cohort study in four multi-professional palliative care teams in England collected data on 275 consecutive cancer referrals who died. Prognosis estimates (minimum–maximum) at referral and patient characteristics were recorded by staff, and later compared with actual survival. Results: Minimum survival estimates ranged ... Conclusions: Offering a prognosis range has higher levels of accuracy (about double) than traditional estimates, but is still very often inaccurate, except very close to death. Where possible clinicians should discuss scenarios with patients, rather than giving a prognosis range.

  11. Biodiversity estimates from different camera trap surveys: a case study from Osogovo Mt., Bulgaria

    Directory of Open Access Journals (Sweden)

    Diana P. Zlatanova

    2018-06-01

    Full Text Available Inventorying mammal assemblages is vital for their conservation and management, especially when they include rare or endangered species. However, obtaining a correct estimation of the species diversity in a particular area can be challenging due to uncertainties regarding study design and duration. In this paper, we present the biodiversity estimates derived from three unrelated camera trap studies in Osogovo Mt., Bulgaria. They have different duration and positioning schemes of the camera trap locations: Study 1 – grid based, 34 days; Study 2 – random points based, 138 days; Study 3 – locations based on expert opinion, 1437 days. Utilising EstimateS, we compare a number of estimators (Shannon diversity index, Coleman rarefaction curve, ACE (Abundance-based Coverage Estimator), ICE (Incidence-based Coverage Estimator), Chao 1, Chao 2, and Jackknife estimators) to the number of present and confirmed and/or potentially present mammals (excluding bats) in the mountains. A total of 17 mammal species were registered in the three studies, which represents around 76% of the permanently present mammals in the mountain that inhabit its forested area and can be detected by a camera trap. The results point to some guidelines that can aid future camera trap research in temperate forested areas. A grid-based design works best for very short study periods (e.g. 10 days), while the opportunistic expert-based positioning scheme provides good results for longer studies (approx. a month). However, the grid-based design needs to be further tested for longer periods. Generally, the random points approach does not yield satisfactory results. In agreement with other studies, analysis based on the Jackknife procedure (Jack 2) appears to result in the best estimate of species richness. When performing camera trap studies, special care should be taken to minimise the number of unidentifiable photos and to take into account «trap-shy» individuals. The results from this
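Of the estimators compared, Chao 1 has a particularly compact form: observed richness plus f1²/(2·f2), where f1 and f2 are the singleton and doubleton counts (Chao 2 is the analogous formula on incidence counts). A minimal sketch with hypothetical detection counts:

```python
def chao1(abundances):
    """Chao 1 species-richness estimate from abundance counts.
    f1 = singletons, f2 = doubletons; the bias-corrected fallback
    f1*(f1 - 1)/2 is used when f2 == 0."""
    s_obs = sum(1 for n in abundances if n > 0)
    f1 = sum(1 for n in abundances if n == 1)
    f2 = sum(1 for n in abundances if n == 2)
    if f2 > 0:
        return s_obs + f1 * f1 / (2.0 * f2)
    return s_obs + f1 * (f1 - 1) / 2.0

# Hypothetical camera-trap detections per species
counts = [14, 9, 6, 3, 2, 2, 1, 1, 1]
s_hat = chao1(counts)   # 9 observed + 3**2 / (2*2) = 11.25
```

Intuitively, many singletons signal many species still undetected, which is why short surveys tend to underestimate richness.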

  12. Improving global data infrastructures for more effective and scalable analysis of Earth and environmental data: the Australian NCI NERDIP Approach

    Science.gov (United States)

    Evans, Ben; Wyborn, Lesley; Druken, Kelsey; Richards, Clare; Trenham, Claire; Wang, Jingbo; Rozas Larraondo, Pablo; Steer, Adam; Smillie, Jon

    2017-04-01

    The National Computational Infrastructure (NCI) facility hosts one of Australia's largest repositories (10+ PBytes) of research data collections spanning datasets from climate, coasts, oceans, and geophysics through to astronomy, bioinformatics, and the social sciences domains. The data are obtained from national and international sources, spanning a wide range of gridded and ungridded (i.e., line surveys, point clouds) data, and raster imagery, as well as diverse coordinate reference projections and resolutions. Rather than managing these data assets as a digital library, whereby users can discover and download files to personal servers (similar to borrowing 'books' from a 'library'), NCI has built an extensive and well-integrated research data platform, the National Environmental Research Data Interoperability Platform (NERDIP, http://nci.org.au/data-collections/nerdip/). The NERDIP architecture enables programmatic access to data via standards-compliant services for high performance data analysis, and provides a flexible cloud-based environment to facilitate the next generation of transdisciplinary scientific research across all data domains. To improve use of modern scalable data infrastructures that are focused on efficient data analysis, the data organisation needs to be carefully managed including performance evaluations of projections and coordinate systems, data encoding standards and formats. A complication is that we have often found multiple domain vocabularies and ontologies are associated with equivalent datasets. It is not practical for individual dataset managers to determine which standards are best to apply to their dataset as this could impact accessibility and interoperability. Instead, they need to work with data custodians across interrelated communities and, in partnership with the data repository, the international scientific community to determine the most useful approach. For the data repository, this approach is essential to enable

  13. Augmented Cross-Sectional Studies with Abbreviated Follow-up for Estimating HIV Incidence

    OpenAIRE

    Claggett, B.; Lagakos, S.W.; Wang, R.

    2011-01-01

    Cross-sectional HIV incidence estimation based on a sensitive and less-sensitive test offers great advantages over the traditional cohort study. However, its use has been limited due to concerns about the false negative rate of the less-sensitive test, reflecting the phenomenon that some subjects may remain negative permanently on the less-sensitive test. Wang and Lagakos (2010) propose an augmented cross-sectional design which provides one way to estimate the size of the infected population ...

  14. Application of covariance clouds for estimating the anisotropy ellipsoid eigenvectors, with case study in uranium deposit

    International Nuclear Information System (INIS)

    Jamali Esfahlan, D.; Madani, H.; Tahmaseb Nazemi, M. T.; Mahdavi, F.; Ghaderi, M. R.; Najafi, M.

    2010-01-01

    Kriging and nonlinear geostatistical methods are considered acceptable for resource and reserve estimation because they minimise the estimation variance, and accurate results within acceptable confidence levels can be achieved if the parameters required for the estimation are determined accurately. If the determined parameters are not sufficiently accurate, 3-D geostatistical estimations are no longer reliable, and all the quantitative parameters of the mineral deposit (e.g. grade-tonnage variations) will be misinterpreted. One of the most significant parameters for 3-D geostatistical estimation is the anisotropy ellipsoid, which determines the samples in different directions required for accomplishing the estimation. The aim of this paper is to illustrate a simpler and time-saving analytical method that can apply geophysical or geochemical analysis data from borehole core-lengths to model the anisotropy ellipsoid. In this method, which is based on the distribution of covariance clouds in the 3-D sampling space of a deposit, the magnitudes, ratios, azimuth, and plunge of the major, semi-major, and minor axes determine the ore-grade continuity within the deposit, and finally the anisotropy ellipsoid of the deposit is constructed. A case study of a uranium deposit is also discussed analytically to illustrate the application of this method.
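
    The axis-extraction step can be sketched as an eigen-decomposition of the 3-D covariance matrix of the sampling cloud. This is an illustrative reconstruction rather than the authors' exact covariance-cloud procedure; the synthetic point cloud and its anisotropy are assumptions.

```python
import numpy as np

# Synthetic borehole-sample coordinates with grade continuity strongest along x
# (illustrative stand-in for real core-length data).
rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 3)) * np.array([8.0, 3.0, 1.0])

# 3-D covariance matrix of the sampling cloud
cov = np.cov(pts.T)

# Eigenvectors give the major / semi-major / minor axis directions of the
# anisotropy ellipsoid; sqrt(eigenvalues) are proportional to the semi-axes.
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
semi_axes = np.sqrt(eigvals[order])      # descending: major, semi-major, minor
directions = eigvecs[:, order]           # column i pairs with semi_axes[i]

major = directions[:, 0]                 # azimuth/plunge follow from these cosines
anisotropy_ratios = semi_axes / semi_axes[0]
```

    The ratios and direction cosines are exactly the quantities (axis ratios, azimuth, plunge) the abstract says define the ellipsoid.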

  15. Correcting the Standard Errors of 2-Stage Residual Inclusion Estimators for Mendelian Randomization Studies.

    Science.gov (United States)

    Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A

    2017-11-01

    Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we considered standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations, the Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors.
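
    The two-stage logic, and why the standard error must account for stage 1, can be sketched for the linear case: stage 1 regresses the exposure on the genotype, stage 2 regresses the outcome on the exposure plus the stage-1 residual, and a bootstrap resamples the whole two-stage procedure so the standard error carries first-stage uncertainty. The data-generating values (true effect 0.7, instrument strength, sample size) are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
Z = rng.binomial(2, 0.3, n).astype(float)    # genotype instrument (0/1/2)
U = rng.normal(size=n)                       # unmeasured confounder
X = 0.5 * Z + U + rng.normal(size=n)         # exposure
Y = 0.7 * X + U + rng.normal(size=n)         # outcome; true causal effect = 0.7

def tsri(idx):
    z, x, y = Z[idx], X[idx], Y[idx]
    # Stage 1: exposure on instrument; keep the residual
    b1 = np.polyfit(z, x, 1)
    resid = x - np.polyval(b1, z)
    # Stage 2: outcome on exposure plus the stage-1 residual
    A = np.column_stack([np.ones_like(x), x, resid])
    beta = np.linalg.lstsq(A, y, rcond=None)[0]
    return beta[1]                           # causal-effect estimate

est = tsri(np.arange(n))

# Bootstrap both stages together, so the SE reflects stage-1 estimation error
boots = [tsri(rng.integers(0, n, n)) for _ in range(200)]
se = np.std(boots, ddof=1)
```

    A naive OLS of Y on X would be confounded upward by U; including the stage-1 residual recovers an estimate near 0.7.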

  16. Reduced density gradient as a novel approach for estimating QSAR descriptors, and its application to 1,4-dihydropyridine derivatives with potential antihypertensive effects.

    Science.gov (United States)

    Jardínez, Christiaan; Vela, Alberto; Cruz-Borbolla, Julián; Alvarez-Mendez, Rodrigo J; Alvarado-Rodríguez, José G

    2016-12-01

    The relationship between the chemical structure and biological activity (log IC50) of 40 derivatives of 1,4-dihydropyridines (DHPs) was studied using density functional theory (DFT) and multiple linear regression analysis. With the aim of improving the quantitative structure-activity relationship (QSAR) model, the reduced density gradient s(r) of the optimized equilibrium geometries was used as a descriptor to include weak non-covalent interactions. The QSAR model highlights the correlation of log IC50 with the highest occupied molecular orbital energy (EHOMO), molecular volume (V), partition coefficient (log P), non-covalent interactions NCI(H4-G), and the dual descriptor [Δf(r)]. The model yielded values of R2 = 79.57 and Q2 = 69.67 that were validated with the following four internal validations, DK = 0.076, DQ = -0.006, RP = 0.056, and RN = 0.000, and the external validation Q2boot = 64.26. The QSAR model found can be used to estimate biological activity with high reliability in new compounds based on a DHP series. Graphical abstract: The good correlation between log IC50 and the NCI(H4-G) estimated by the reduced density gradient approach for the DHP derivatives.
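
    The internal-validation idea (R2 from the fit, Q2 from leave-one-out prediction) can be sketched for a generic multiple-linear-regression QSAR. The descriptors, coefficients, and noise level below are synthetic placeholders, not the DHP data or the paper's descriptor set.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 40, 5                       # 40 compounds, 5 descriptors (as in the study)
X = rng.normal(size=(n, p))
true_beta = np.array([0.8, -0.5, 0.3, 0.6, -0.2])
y = X @ true_beta + rng.normal(scale=0.3, size=n)   # synthetic "log IC50"

# Ordinary least-squares fit and R^2
A = np.column_stack([np.ones(n), X])
beta = np.linalg.lstsq(A, y, rcond=None)[0]
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - np.sum((y - A @ beta) ** 2) / ss_tot

# Leave-one-out cross-validated Q^2 (PRESS-based), the usual internal QSAR check
press = 0.0
for i in range(n):
    mask = np.arange(n) != i
    b_i = np.linalg.lstsq(A[mask], y[mask], rcond=None)[0]
    press += (y[i] - A[i] @ b_i) ** 2
q2 = 1 - press / ss_tot
```

    Because PRESS residuals are never smaller than fitted residuals, Q2 is always at or below R2; a large gap flags an over-fitted descriptor set.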

  17. Comparative study of speed estimators with highly noisy measurement signals for Wind Energy Generation Systems

    Energy Technology Data Exchange (ETDEWEB)

    Carranza, O. [Escuela Superior de Computo, Instituto Politecnico Nacional, Av. Juan de Dios Batiz S/N, Col. Lindavista, Del. Gustavo A. Madero 7738, D.F. (Mexico); Figueres, E.; Garcera, G. [Grupo de Sistemas Electronicos Industriales, Departamento de Ingenieria Electronica, Universidad Politecnica de Valencia, Camino de Vera S/N, 7F, 46020 Valencia (Spain); Gonzalez, L.G. [Departamento de Ingenieria Electronica, Universidad de los Andes, Merida (Venezuela)

    2011-03-15

    This paper presents a comparative study of several speed estimators used to implement a sensorless speed control loop in Wind Energy Generation Systems driven by power-factor-correction three-phase boost rectifiers. This rectifier topology reduces the low-frequency harmonic content of the generator currents; consequently, the generator power factor approaches unity and undesired vibrations of the mechanical system decrease. The compared techniques start from the measurement of electrical variables such as currents and voltages, which contain low-frequency harmonics of the fundamental frequency of the wind generator as well as switching-frequency components due to the boost rectifier. In this noisy environment, the performance of the following estimation techniques has been analyzed: a Synchronous Reference Frame Phase-Locked Loop, speed reconstruction from the measured dc current and voltage of the rectifier, and speed estimation by means of both an Extended Kalman Filter and a Linear Kalman Filter. (author)
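
    Of the compared estimators, the Linear Kalman Filter is the simplest to sketch. The following one-state random-walk filter smooths a heavily noisy speed measurement; the speed profile, noise levels, and tuning constants are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
dt = 0.01
t = np.arange(0.0, 10.0, dt)
true_speed = 100 + 20 * np.sin(0.5 * t)                   # slowly varying speed
meas = true_speed + rng.normal(scale=15.0, size=t.size)   # heavy measurement noise

# 1-D linear Kalman filter with a random-walk speed model
q, r = 0.5, 15.0 ** 2      # process / measurement noise variances (tuning knobs)
x, p = meas[0], r          # initial state estimate and covariance
est = np.empty_like(meas)
for k, z in enumerate(meas):
    p = p + q                    # predict: covariance grows by process noise
    k_gain = p / (p + r)         # update: Kalman gain
    x = x + k_gain * (z - x)
    p = (1 - k_gain) * p
    est[k] = x

rmse_raw = np.sqrt(np.mean((meas - true_speed) ** 2))
rmse_kf = np.sqrt(np.mean((est - true_speed) ** 2))
```

    The q/r ratio sets the trade-off between noise rejection and tracking lag, which is exactly what distinguishes the estimators compared in the study.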

  18. Using local multiplicity to improve effect estimation from a hypothesis-generating pharmacogenetics study.

    Science.gov (United States)

    Zou, W; Ouyang, H

    2016-02-01

    We propose a multiple estimation adjustment (MEA) method to correct effect overestimation due to selection bias from a hypothesis-generating study (HGS) in pharmacogenetics. MEA uses a hierarchical Bayesian approach to model individual effect estimates from maximum likelihood estimation (MLE) in a region jointly and shrinks them toward the regional effect. Unlike many methods that model a fixed selection scheme, MEA capitalizes on local multiplicity independent of selection. We compared mean square errors (MSEs) in simulated HGSs from naive MLE, MEA, and a conditional likelihood adjustment (CLA) method that models threshold selection bias. We observed that MEA effectively reduced the MSE of MLE on null effects with or without selection, and had a clear advantage over CLA on extreme MLE estimates from null effects under lenient threshold selection in small samples, which are common among 'top' associations from a pharmacogenetics HGS.

  19. A study on the estimation method of nuclear accident risk cost

    International Nuclear Information System (INIS)

    Matsuo, Yuji

    2016-01-01

    The methodology of estimating nuclear accident risk cost, as a part of nuclear power generation cost, has hardly been established, due mainly to the extremely wide range of estimates of the accident frequency. This study estimates the expected nuclear accident frequency for Japan using Bayesian statistics, exploiting both the information obtained by Probabilistic Risk Assessment (PRA) and the observed historical accident frequencies. Using the PRA estimate of the Containment Failure Frequency (CFF) for Tomari nuclear power plant unit 3 of Hokkaido Electric Power Company (average: 2.1 x 10^-4, 95th percentile: 7.7 x 10^-4) and the actual large-scale accident frequency (once in 1,460 reactor-years), the posterior CFF was estimated at 3.8 x 10^-4. This study also took into account 'external' factors causing unexpected nuclear accidents, concluding that such factors could result in higher CFF estimates, especially with larger observed accident numbers. (author)
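
    The Bayesian update can be sketched with a conjugate Gamma-Poisson model: fit a Gamma prior to the quoted PRA mean and 95th percentile, then update it with the observed accident record. The Gamma prior family and the single-event count are simplifying assumptions; the abstract does not state the exact prior used.

```python
from scipy import stats
from scipy.optimize import brentq

prior_mean, prior_p95 = 2.1e-4, 7.7e-4   # PRA estimate of the CFF (from the abstract)

# Fit Gamma(shape a, rate b): fix the mean (b = a / prior_mean) and solve for the
# shape so the 95th percentile matches the PRA value.
def p95_gap(a):
    return stats.gamma.ppf(0.95, a, scale=prior_mean / a) - prior_p95

a = brentq(p95_gap, 0.05, 50.0)
b = a / prior_mean

# Conjugate update with a Poisson likelihood:
# assume 1 severe accident observed in 1,460 reactor-years.
events, exposure = 1, 1460.0
post_mean = (a + events) / (b + exposure)
```

    The posterior mean falls between the PRA prior (2.1 x 10^-4) and the raw observed rate (1/1460 ≈ 6.8 x 10^-4), close to the figure the abstract reports.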

  20. Multicenter European Prevalence Study of Neurocognitive Impairment and Associated Factors in HIV Positive Patients

    DEFF Research Database (Denmark)

    Haddow, Lewis J; Laverick, Rosanna; Daskalopoulou, Marina

    2018-01-01

    We conducted a cross-sectional study in 448 HIV positive patients attending five European outpatient clinics to determine prevalence of and factors associated with neurocognitive impairment (NCI) using computerized and pen-and-paper neuropsychological tests. NCI was defined as a normalized Z scor...

  1. Synergistic Effect of Subtoxic-dose Cisplatin and TRAIL to Mediate Apoptosis by Down-regulating Decoy Receptor 2 and Up-regulating Caspase-8, Caspase-9 and Bax Expression on NCI-H460 and A549 Cells

    Directory of Open Access Journals (Sweden)

    Xiaoyan Zhang

    2013-05-01

    Objective(s): Although tumor necrosis factor-related apoptosis-inducing ligand (TRAIL) can selectively induce apoptosis in tumor cells, more than half of tumors, including non-small cell lung cancer (NSCLC), exhibit TRAIL resistance. The purpose of this study was to determine whether subtoxic-dose cisplatin and TRAIL could synergistically enhance apoptosis in NSCLC cells and to investigate the underlying mechanisms. Materials and Methods: NCI-H460 and A549 cells were treated with TRAIL alone, cisplatin alone, or the combination. Cytotoxicity was evaluated by Sulforhodamine B assay, and apoptosis was examined using Hoechst 33342 staining and flow cytometry. The mRNA and protein levels of TRAIL receptors and apoptotic proteins including caspase-8, caspase-9, Bcl-2, and Bax were determined by RT-PCR and Western blotting, respectively. Results: NCI-H460 cells were sensitive to TRAIL, whereas A549 cells were resistant. However, subtoxic-dose cisplatin sensitized both cell lines to TRAIL-mediated proliferation inhibition and apoptosis. The underlying mechanisms might be associated with the down-regulation of DcR2 and up-regulation of caspase-8, caspase-9, and Bax. Conclusion: Subtoxic-dose cisplatin could sensitize both TRAIL-sensitive and TRAIL-resistant NSCLC cells to TRAIL-mediated apoptosis. These findings motivate further studies to evaluate such a combination therapeutic strategy against NSCLC in animal models.

  2. Estimating the greenhouse gas benefits of forestry projects: A Costa Rican Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Busch, Christopher; Sathaye, Jayant; Sanchez Azofeifa, G. Arturo

    2000-09-01

    If the Clean Development Mechanism proposed under the Kyoto Protocol is to serve as an effective means of combating global climate change, it will depend upon reliable estimates of greenhouse gas benefits. This paper sketches the theoretical basis for estimating the greenhouse gas benefits of forestry projects and suggests lessons learned from a case study of Costa Rica's Protected Areas Project, a 500,000 hectare effort to reduce deforestation and enhance reforestation. The Protected Areas Project in many senses advances the state of the art for Clean Development Mechanism-type forestry projects, as does the third-party verification work of SGS International Certification Services on the project. Nonetheless, sensitivity analysis shows that carbon benefit estimates for the project vary widely with the deforestation rate imputed in the baseline scenario, i.e. the deforestation rate expected if the project were not implemented. This, along with a newly available national dataset that confirms other research showing a slower rate of deforestation in Costa Rica, suggests that the original use of the 1979-1992 forest cover data as the basis for estimating carbon savings should be reconsidered. When the newly available data are substituted, carbon savings amount to 8.9 Mt (million tonnes) of carbon, down from the original estimate of 15.7 Mt. The primary general conclusion is that project developers should give more attention to forecasting the land use and land cover change scenarios underlying estimates of greenhouse gas benefits.

  3. A new method for assessing how sensitivity and specificity of linkage studies affects estimation.

    Directory of Open Access Journals (Sweden)

    Cecilia L Moore

    While the importance of record linkage is widely recognised, few studies have attempted to quantify how linkage errors may have impacted their own findings and outcomes. Even where authors of linkage studies have attempted to estimate sensitivity and specificity based on subjects with known status, the effects of false negatives and positives on event rates and estimates of effect are not often described. We quantify the effect of the sensitivity and specificity of the linkage process on event rates and incidence, as well as the resultant effect on relative risks. Formulae to estimate the true number of events and the relative risk adjusted for a given linkage sensitivity and specificity are then derived and applied to data from a prisoner mortality study. The implications of false positive and false negative matches are also discussed. Comparisons of the effect of sensitivity and specificity on incidence and relative risks indicate that it is more important for linkages to be highly specific than sensitive, particularly if true incidence rates are low. We recommend that, where possible, quantitative estimates of the sensitivity and specificity of the linkage process be obtained, allowing the effect of these quantities on observed results to be assessed.
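
    A back-of-envelope version of the adjustment: with sensitivity Se and specificity Sp, the observed event count is O = Se*T + (1-Sp)*(N-T), which can be inverted for the true count T. The cohort sizes, event counts, and Se/Sp values below are hypothetical, chosen to show how even a small specificity loss dilutes the relative risk when incidence is low.

```python
def true_events(observed, n, sens, spec):
    """Invert observed = sens*T + (1-spec)*(n-T) to recover the true count T."""
    fp_rate = 1.0 - spec
    return (observed - fp_rate * n) / (sens - fp_rate)

# Hypothetical cohort: 10,000 exposed and 10,000 unexposed records,
# true event counts 200 and 100 (true relative risk = 2.0)
sens, spec = 0.9, 0.999
n_exp = n_unexp = 10_000
obs_exp = sens * 200 + (1 - spec) * (n_exp - 200)      # what linkage would report
obs_unexp = sens * 100 + (1 - spec) * (n_unexp - 100)

naive_rr = obs_exp / obs_unexp                         # biased toward the null
adj_rr = (true_events(obs_exp, n_exp, sens, spec)
          / true_events(obs_unexp, n_unexp, sens, spec))
```

    Even at 99.9% specificity the ~10 false-positive links per arm pull the naive relative risk below 2.0, while the inversion recovers it, which is why high specificity matters most at low incidence.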

  4. RSSI BASED LOCATION ESTIMATION IN A WI-FI ENVIRONMENT: AN EXPERIMENTAL STUDY

    Directory of Open Access Journals (Sweden)

    M. Ganesh Madhan

    2014-12-01

    In real-life situations, location estimation of moving objects and armed personnel is of great importance. In this paper, we attempt to locate mobile targets in a Wi-Fi environment, using Radio Frequency (RF) localization techniques based on Received Signal Strength Indication (RSSI) algorithms. The study utilises the Wireless Mon tool, software that provides complete technical information on the received signal strength obtained from the different wireless access points available in the campus Wi-Fi environment considered for the study. All simulations have been done in MATLAB. The target location estimated by this approach agrees well with the actual GPS data.
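
    A minimal sketch of the RSSI approach (in Python, whereas the study used MATLAB): simulate the log-distance path-loss model RSSI = P0 - 10*n*log10(d), invert each reading to a range, and fit the position by least squares over a grid. The access-point layout, path-loss constants, and noise level are assumptions for illustration.

```python
import numpy as np

# Known access-point positions (metres) -- illustrative campus layout
aps = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 30.0], [30.0, 30.0]])
target = np.array([12.0, 21.0])

# Log-distance path-loss model: RSSI = P0 - 10*n*log10(d)
p0, n_exp = -40.0, 2.7          # RSSI at 1 m and path-loss exponent (assumed)
d = np.linalg.norm(aps - target, axis=1)
rng = np.random.default_rng(4)
rssi = p0 - 10 * n_exp * np.log10(d) + rng.normal(scale=1.0, size=d.size)

# Invert the model to ranges, then grid-search a least-squares position fit
ranges = 10 ** ((p0 - rssi) / (10 * n_exp))
xs = np.linspace(0, 30, 301)
grid = np.array(np.meshgrid(xs, xs)).reshape(2, -1).T
cost = ((np.linalg.norm(grid[:, None, :] - aps[None, :, :], axis=2)
         - ranges) ** 2).sum(axis=1)
est = grid[np.argmin(cost)]
err = np.linalg.norm(est - target)
```

    A 1 dB RSSI error maps to roughly a 9% range error at this path-loss exponent, so metre-level accuracy requires several access points, as in the study's campus setup.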

  5. ICPP tank farm closure study. Volume III: Cost estimates, planning schedules, yearly cost flowcharts, and life-cycle cost estimates

    International Nuclear Information System (INIS)

    1998-02-01

    This volume contains information on cost estimates, planning schedules, yearly cost flowcharts, and life-cycle costs for the six options described in Volume 1, Section 2: Option 1 -- Total removal clean closure; No subsequent use; Option 2 -- Risk-based clean closure; LLW fill; Option 3 -- Risk-based clean closure; CERCLA fill; Option 4 -- Close to RCRA landfill standards; LLW fill; Option 5 -- Close to RCRA landfill standards; CERCLA fill; and Option 6 -- Close to RCRA landfill standards; Clean fill. This volume is divided into two portions. The first portion contains the cost and planning schedule estimates while the second portion contains life-cycle costs and yearly cash flow information for each option

  6. River suspended sediment estimation by climatic variables implication: Comparative study among soft computing techniques

    Science.gov (United States)

    Kisi, Ozgur; Shiri, Jalal

    2012-06-01

    Estimating the sediment volume carried by a river is an important issue in water resources engineering. This paper compares the accuracy of three different soft computing methods, Artificial Neural Networks (ANNs), Adaptive Neuro-Fuzzy Inference System (ANFIS), and Gene Expression Programming (GEP), in estimating daily suspended sediment concentration in rivers from hydro-meteorological data. Daily rainfall, streamflow and suspended sediment concentration data from the Eel River near Dos Rios, California, USA are used as a case study. The comparison results indicate that the GEP model performs better than the other models in daily suspended sediment concentration estimation for the particular data sets used in this study. Levenberg-Marquardt, conjugate gradient and gradient descent training algorithms were used for the ANN models; of the three, the conjugate gradient algorithm was found to perform best.

  7. Case Study on Ancestry Estimation in an Alaskan Native Family: Identity and Safeguards Against Reductionism.

    Science.gov (United States)

    Bader, Alyssa C; Malhi, Ripan S

    2015-10-01

    Understanding the complexities of ancestry-related identity is a necessary component of ethically sound research related to the genetic ancestry of modern-day communities. This is especially true when working with indigenous populations, given the legal and social implications that genetic ancestry interpretations may have in these communities. This study employs a multicomponent approach to explore the intricacies of ancestry-related identity within one extended family with members who identify as Alaskan Native. The seven participants were interviewed about their own self-identity, perceptions regarding genetic ancestry estimation, and their knowledge of oral family history. Additionally, each participant consented to having his or her genetic ancestry estimated. The researchers also surveyed ancestry-related documents, such as census records, birth certificates, and Certificates of Indian Blood. These three perspectives (oral family history and self-identity, genetic ancestry estimation, and historical and legal documentation) illustrate the complex nature of ancestry-related identity within the context of indigenous and colonial interactions in North America. While estimates of genetic ancestry broadly reflected each individual's self-reported biogeographic ancestry and supported all described and historically reported biological relationships, the estimates did not always match federally recorded blood quantum values, nor did they provide any information on relationships at the tribe or clan level. Employing a multicomponent approach and engaging study participants may help to safeguard against genetic essentialism and provide a more nuanced understanding of ancestry-related identity within a larger political, legal, and historical context.

  8. Computational Environments and Analysis methods available on the NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform

    Science.gov (United States)

    Evans, B. J. K.; Foster, C.; Minchin, S. A.; Pugh, T.; Lewis, A.; Wyborn, L. A.; Evans, B. J.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) has established a powerful in-situ computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that supports the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress in addressing harmonisation of the underlying data collections for future transdisciplinary research that enable accurate climate projections. NCI makes available 10+ PB major data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large scale and high-bandwidth Lustre filesystems. This computational environment supports a catalogue of integrated reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. To enable transdisciplinary research on this scale, data needs to be harmonised so that researchers can readily apply techniques and software across the corpus of data available and not be constrained to work within artificial disciplinary boundaries. Future challenges will ...

  9. Influence of reported study design characteristics on intervention effect estimates from randomized, controlled trials

    DEFF Research Database (Denmark)

    Savović, Jelena; Jones, Hayley E; Altman, Douglas G

    2012-01-01

    bias and increases in between-trial heterogeneity were driven primarily by trials with subjective outcomes, with little evidence of bias in trials with objective and mortality outcomes. This study is limited by incomplete trial reporting, and findings may be confounded by other study design...... characteristics. Bias associated with study design characteristics may lead to exaggeration of intervention effect estimates and increases in between-trial heterogeneity in trials reporting subjectively assessed outcomes....

  10. Vectors to Increase Production Efficiency of Inducible Pluripotent Stem Cell (iPSC) | NCI Technology Transfer Center | TTC

    Science.gov (United States)

    This invention describes the discovery that a specific p53 isoform increases the number of inducible pluripotent stem (iPS) cells. It is known that the activity of p53 regulates the self-renewal and pluripotency of normal and cancer stem cells, and also affects the re-programming efficiency of iPS cells. This p53 isoform-based technology provides a more natural way of increasing iPS cell production than previous methods based on decreasing p53. NCI seeks licensees for this technology.

  11. Sulphur levels in saliva as an estimation of sulphur status in cattle: a validation study

    NARCIS (Netherlands)

    Dermauw, V.; Froidmont, E.; Dijkstra, J.; Boever, de J.L.; Vyverman, W.; Debeer, A.E.; Janssens, G.P.J.

    2012-01-01

    Effective assessment of sulphur (S) status in cattle is important for optimal health, yet remains difficult. Rumen fluid S concentrations are preferred, but difficult to sample under practical conditions. This study aimed to evaluate salivary S concentration as an estimator of S status in cattle.

  12. Learning Curves and Bootstrap Estimates for Inference with Gaussian Processes: A Statistical Mechanics Study

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We employ the replica method of statistical physics to study the average case performance of learning systems. The new feature of our theory is that general distributions of data can be treated, which enables applications to real data. For a class of Bayesian prediction models which are based...... on Gaussian processes, we discuss Bootstrap estimates for learning curves....

  13. Estimation of otitis media in ancient populations: A study of past and present Greenlandic Inuit

    DEFF Research Database (Denmark)

    Homøe, P; Lynnerup, N; Skovgaard, Lene Theil

    1996-01-01

    Examination of disease patterns in the past has often been difficult due to lack of morphological evidence. This study presents a new unbiased method for estimation of occurrence of infectious middle ear disease (IMED) in childhood. The method is based on the relation between IMED in childhood an...

  14. Uncertainty in estimated values of forestry project: a case study of ...

    African Journals Online (AJOL)

    The information obtained were analyzed using Net Present Value, Benefit-Cost Ratio, Economic Rate of Return and Sensitivity Analysis. The results of this study indicate that the NPV and B/C ratio were sensitive to increase in discount factor. The values of estimates for a direct and taungya plantatiomn at Ago-Owu forest ...

  15. Performance estimation of networked business models : Case study on a Finnish eHealth Service Project

    NARCIS (Netherlands)

    Heikkilä, M.; Solaimani, H. (Sam); Kuivaniemi, L.; Suoranta, M.

    2014-01-01

    Purpose: The objective of this paper is to propose and demonstrate a framework for estimating performance in a networked business model. Design/methodology/approach: Our approach is design science, utilising action research in studying a case of four independent firms in the Health & Wellbeing sector.

  16. Estimating the ratio of pond size to irrigated soybean land in Mississippi: a case study

    Science.gov (United States)

    Ying Ouyang; G. Feng; J. Read; T. D. Leininger; J. N. Jenkins

    2016-01-01

    Although more on-farm storage ponds have been constructed in recent years to mitigate groundwater resources depletion in Mississippi, little effort has been devoted to estimating the ratio of on-farm water storage pond size to irrigated crop land based on pond metric and its hydrogeological conditions.  In this study, two simulation scenarios were chosen to...

  17. Using step and path selection functions for estimating resistance to movement: Pumas as a case study

    Science.gov (United States)

    Katherine A. Zeller; Kevin McGarigal; Samuel A. Cushman; Paul Beier; T. Winston Vickers; Walter M. Boyce

    2015-01-01

    GPS telemetry collars and their ability to acquire accurate and consistently frequent locations have increased the use of step selection functions (SSFs) and path selection functions (PathSFs) for studying animal movement and estimating resistance. However, previously published SSFs and PathSFs often do not accommodate multiple scales or multiscale modeling....

  18. Estimation of net greenhouse gas balance using crop- and soil-based approaches: Two case studies

    International Nuclear Information System (INIS)

    Huang, Jianxiong; Chen, Yuanquan; Sui, Peng; Gao, Wansheng

    2013-01-01

    The net greenhouse gas balance (NGHGB), estimated by combining direct and indirect greenhouse gas (GHG) emissions, can reveal whether an agricultural system is a sink or source of GHGs. Currently, two types of methods, referred to here as crop-based and soil-based approaches, are widely used to estimate the NGHGB of agricultural systems on annual and seasonal crop timescales. However, the two approaches may produce contradictory results, and few studies have tested which approach is more reliable. In this study, we examined the two approaches using experimental data from an intercropping trial with straw removal and a tillage trial with straw return. The results of the two approaches provided different views of the two trials. In the intercropping trial, the NGHGB estimated by the crop-based approach indicated that monocultured maize (M) was a source of GHGs (-1315 kg CO2-eq ha^-1), whereas maize-soybean intercropping (MS) was a sink (107 kg CO2-eq ha^-1). When estimated by the soil-based approach, both cropping systems were sources (-3410 for M and -2638 kg CO2-eq ha^-1 for MS). In the tillage trial, mouldboard ploughing (MP) and rotary tillage (RT) mitigated GHG emissions by 22,451 and 21,500 kg CO2-eq ha^-1, respectively, as estimated by the crop-based approach. However, by the soil-based approach, both tillage methods were sources of GHGs: -3533 for MP and -2241 kg CO2-eq ha^-1 for RT. The crop-based approach calculates a GHG sink on the basis of the returned crop biomass (and other organic matter input) and estimates considerably more GHG mitigation potential than that calculated from the variations in soil organic carbon storage by the soil-based approach. These results indicate that the crop-based approach estimates higher GHG mitigation benefits than the soil-based approach and may overestimate the potential of GHG mitigation in agricultural systems.

  19. NCI-60 whole exome sequencing and pharmacological CellMiner analyses.

    Directory of Open Access Journals (Sweden)

    William C Reinhold

    Exome sequencing provides unprecedented insights into cancer biology and pharmacological response. Here we assess these two parameters for the NCI-60, which is among the richest genomic and pharmacological publicly available cancer cell line databases. Homozygous genetic variants that putatively affect protein function were identified in 1,199 genes (approximately 6% of all genes). Variants that are either enriched or depleted compared to non-cancerous genomes, and thus may be influential in cancer progression and differential drug response, were identified for 2,546 genes. Potential gene knockouts are made available. Assessment of cell line response to 19,940 compounds, including 110 FDA-approved drugs, reveals an ≈80-fold range in resistance versus sensitivity response across cell lines. 103,422 gene variants were significantly correlated with at least one compound (at p<0.0002). These include genes of known pharmacological importance such as IGF1R, BRAF, RAD52, MTOR, STAT2 and TSC2, as well as a large number of candidate genes such as NOM1, TLL2, and XDH. We introduce two new web-based CellMiner applications that enable exploration of variant-to-compound relationships for a broad range of researchers, especially those without bioinformatics support. The first tool, "Genetic variant versus drug visualization", provides a visualization of significant correlations between drug activity-gene variant combinations. Examples are given for the known vemurafenib-BRAF and novel ifosfamide-RAD52 pairings. The second, "Genetic variant summation", allows an assessment of cumulative genetic variations for up to 150 combined genes together, and is designed to identify the variant burden for molecular pathways or functional groupings of genes. An example of its use is provided for the EGFR-ERBB2 pathway gene variant data and the identification of correlated EGFR, ERBB2, MTOR, BRAF, MEK and ERK inhibitors. The new tools are implemented as an updated web-based CellMiner.

  20. Necessary accuracy of dose estimation during cohort epidemiologic study after irradiation

    International Nuclear Information System (INIS)

    Orlov, M.Yu.; Stepanenko, V.F.; Khoshi, M.; Takada, Dzh.

    2003-01-01

    The effect of the breadth of dose ranges on estimates of radiation risk was investigated. The ratio of the observed number of deaths from leukemia in the cohort during 1950 - 1974 under different radiation doses to the number of deaths expected in this cohort under background radiation alone was used as the measure of risk. Data from the cooperative Japanese-American LSS (Life Span Study) program were applied in the research. It is established that the accuracy of dose estimation required for risk assessment with an uncertainty of 20 - 30 % is 30 - 35 % in the range 1 - 5 rad and 5 - 10 % in the range 5 - 30 rad [ru

  1. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages

    Science.gov (United States)

    Kim, Yoonsang; Emery, Sherry

    2013-01-01

    Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods’ performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages—SAS GLIMMIX Laplace and SuperMix Gaussian quadrature—perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes. PMID:24288415
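    One of the three estimation methods compared above, Gauss-Hermite quadrature, can be sketched directly: fit a random-intercept logistic model to simulated two-level data by maximizing the quadrature-approximated marginal likelihood. All data, parameter values, and settings below are invented for illustration; the paper's simulations (multiple correlated random effects, several packages) are far more extensive.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit

    # simulate a two-level dataset: 50 groups, random intercept, one covariate
    rng = np.random.default_rng(0)
    n_groups, n_per = 50, 20
    beta_true, sigma_true = 0.8, 1.0
    g = np.repeat(np.arange(n_groups), n_per)
    x = rng.normal(size=n_groups * n_per)
    u = rng.normal(0.0, sigma_true, n_groups)
    y = rng.binomial(1, expit(beta_true * x + u[g]))

    nodes, weights = np.polynomial.hermite.hermgauss(20)  # 20-node Gauss-Hermite rule

    def neg_loglik(params):
        beta, log_sigma = params
        sigma = np.exp(log_sigma)
        ll = 0.0
        for j in range(n_groups):
            idx = g == j
            # integrate the conditional likelihood over u_j ~ N(0, sigma^2)
            # via the substitution u = sqrt(2) * sigma * t
            eta = beta * x[idx][:, None] + np.sqrt(2.0) * sigma * nodes[None, :]
            p = expit(eta)
            lik = np.prod(np.where(y[idx][:, None] == 1, p, 1.0 - p), axis=0)
            ll += np.log((weights * lik).sum() / np.sqrt(np.pi))
        return -ll

    res = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
    beta_hat, sigma_hat = res.x[0], float(np.exp(res.x[1]))
    print(beta_hat, sigma_hat)
    ```

    Adaptive quadrature and Laplace approximation (the one-node special case) trade accuracy for speed in the same marginal-likelihood framework.
    
    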

  2. Exploratory Study for Continuous-time Parameter Estimation of Ankle Dynamics

    Science.gov (United States)

    Kukreja, Sunil L.; Boyle, Richard D.

    2014-01-01

    Recently, a parallel pathway model to describe ankle dynamics was proposed. This model provides a relationship between ankle angle and net ankle torque as the sum of a linear and a nonlinear contribution. A technique to identify parameters of this model in discrete time has been developed. However, these parameters are a nonlinear combination of the continuous-time physiology, making insight into the underlying physiology impossible. The stable and accurate estimation of continuous-time parameters is critical for accurate disease modeling, clinical diagnosis, robotic control strategies, development of optimal exercise protocols for long-term space exploration, sports medicine, etc. This paper explores the development of a system identification technique to estimate the continuous-time parameters of ankle dynamics. The effectiveness of this approach is assessed via simulation of a continuous-time model of ankle dynamics with typical parameters found in clinical studies. The results show that although this technique improves estimates, it does not provide robust estimates of continuous-time parameters of ankle dynamics. We therefore conclude that alternative modeling strategies and more advanced estimation techniques should be considered for future work.
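    As a toy illustration of identifying a parallel linear-plus-nonlinear relationship of the kind described above, the sketch below fits linear and cubic stiffness terms to simulated torque-angle data by least squares. The model form, parameter values, and noise level are invented, and this static fit sidesteps the continuous-time dynamics that make the paper's actual problem hard.

    ```python
    import numpy as np

    # simulated torque-angle data for a parallel linear + cubic model
    # (all values invented; not the paper's continuous-time formulation)
    rng = np.random.default_rng(5)
    theta = rng.uniform(-0.3, 0.3, 200)            # ankle angle [rad]
    k_lin_true, k_cub_true = 300.0, 2000.0         # assumed stiffness terms
    torque = k_lin_true * theta + k_cub_true * theta**3 + rng.normal(0, 0.5, 200)

    # least-squares estimate of the two parallel contributions
    A = np.column_stack([theta, theta**3])
    (k_lin_hat, k_cub_hat), *_ = np.linalg.lstsq(A, torque, rcond=None)
    print(k_lin_hat, k_cub_hat)
    ```
    
    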

  3. A hierarchical model for estimating density in camera-trap studies

    Science.gov (United States)

    Royle, J. Andrew; Nichols, James D.; Karanth, K.Ullas; Gopalaswamy, Arjun M.

    2009-01-01

    Estimating animal density using capture–recapture data from arrays of detection devices such as camera traps has been problematic due to the movement of individuals and heterogeneity in capture probability among them induced by differential exposure to trapping. We develop a spatial capture–recapture model for estimating density from camera-trapping data which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to and detection by traps. We adopt a Bayesian approach to analysis of the hierarchical model using the technique of data augmentation. The model is applied to photographic capture–recapture data on tigers Panthera tigris in Nagarahole reserve, India. Using this model, we estimate the density of tigers to be 14·3 animals per 100 km2 during 2004. Synthesis and applications. Our modelling framework largely overcomes several weaknesses in conventional approaches to the estimation of animal density from trap arrays. It effectively deals with key problems such as individual heterogeneity in capture probabilities, movement of traps, presence of potential ‘holes’ in the array and ad hoc estimation of sample area. The formulation, thus, greatly enhances flexibility in the conduct of field surveys as well as in the analysis of data, from studies that may involve physical, photographic or DNA-based ‘captures’ of individual animals.

  4. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages.

    Science.gov (United States)

    Kim, Yoonsang; Choi, Young-Ku; Emery, Sherry

    2013-08-01

    Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods' performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages-SAS GLIMMIX Laplace and SuperMix Gaussian quadrature-perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes.

  5. Shrinkage Estimators for Robust and Efficient Inference in Haplotype-Based Case-Control Studies

    KAUST Repository

    Chen, Yi-Hau

    2009-03-01

    Case-control association studies often aim to investigate the role of genes and gene-environment interactions in terms of the underlying haplotypes (i.e., the combinations of alleles at multiple genetic loci along chromosomal regions). The goal of this article is to develop robust but efficient approaches to the estimation of disease odds-ratio parameters associated with haplotypes and haplotype-environment interactions. We consider "shrinkage" estimation techniques that can adaptively relax the model assumptions of Hardy-Weinberg-Equilibrium and gene-environment independence required by recently proposed efficient "retrospective" methods. Our proposal involves first development of a novel retrospective approach to the analysis of case-control data, one that is robust to the nature of the gene-environment distribution in the underlying population. Next, it involves shrinkage of the robust retrospective estimator toward a more precise, but model-dependent, retrospective estimator using novel empirical Bayes and penalized regression techniques. Methods for variance estimation are proposed based on asymptotic theories. Simulations and two data examples illustrate both the robustness and efficiency of the proposed methods.
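    The empirical-Bayes shrinkage idea at the core of the record above can be sketched in a few lines: combine a robust but noisy estimator with a precise but model-dependent one, using a data-driven weight that keeps the robust estimate when the two disagree strongly. The weighting formula below is a generic empirical-Bayes form, not necessarily the paper's exact estimator, and the numbers are invented.

    ```python
    def eb_shrink(theta_robust, theta_model, var_robust):
        """Shrink a robust estimate toward a model-based one.

        The weight on the robust estimate grows with the squared disagreement
        between the two estimators relative to the robust estimator's variance,
        so the model-based estimate dominates only when the data support it.
        """
        diff2 = (theta_robust - theta_model) ** 2
        w = diff2 / (diff2 + var_robust)          # in [0, 1]
        return theta_model + w * (theta_robust - theta_model)

    # toy log-odds-ratio example (numbers invented)
    print(eb_shrink(0.9, 0.5, 0.04))   # strong disagreement: stays near 0.9
    print(eb_shrink(0.55, 0.5, 0.04))  # near-agreement: pulled toward 0.5
    ```

    When `var_robust` is zero the robust estimate is returned unchanged, which is the sense in which the shrinkage estimator adaptively relaxes the model assumptions.
    
    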

  6. Shrinkage Estimators for Robust and Efficient Inference in Haplotype-Based Case-Control Studies

    KAUST Repository

    Chen, Yi-Hau; Chatterjee, Nilanjan; Carroll, Raymond J.

    2009-01-01

    Case-control association studies often aim to investigate the role of genes and gene-environment interactions in terms of the underlying haplotypes (i.e., the combinations of alleles at multiple genetic loci along chromosomal regions). The goal of this article is to develop robust but efficient approaches to the estimation of disease odds-ratio parameters associated with haplotypes and haplotype-environment interactions. We consider "shrinkage" estimation techniques that can adaptively relax the model assumptions of Hardy-Weinberg-Equilibrium and gene-environment independence required by recently proposed efficient "retrospective" methods. Our proposal involves first development of a novel retrospective approach to the analysis of case-control data, one that is robust to the nature of the gene-environment distribution in the underlying population. Next, it involves shrinkage of the robust retrospective estimator toward a more precise, but model-dependent, retrospective estimator using novel empirical Bayes and penalized regression techniques. Methods for variance estimation are proposed based on asymptotic theories. Simulations and two data examples illustrate both the robustness and efficiency of the proposed methods.

  7. Bayesian hierarchical models for smoothing in two-phase studies, with application to small area estimation.

    Science.gov (United States)

    Ross, Michelle; Wakefield, Jon

    2015-10-01

    Two-phase study designs are appealing since they allow for the oversampling of rare sub-populations, which improves efficiency. In this paper we describe a Bayesian hierarchical model for the analysis of two-phase data. Such a model is particularly appealing in a spatial setting in which random effects are introduced to model between-area variability. In such a situation, one may be interested in estimating regression coefficients or, in the context of small area estimation, in reconstructing the population totals by strata. The efficiency gains of the two-phase sampling scheme are compared to standard approaches using 2011 birth data from the Research Triangle area of North Carolina. We show that the proposed method can overcome small sample difficulties and improve on existing techniques. We conclude that the two-phase design is an attractive approach for small area estimation.

  8. A study on the estimation of economic consequence of severe accident

    International Nuclear Information System (INIS)

    Hong, Dae Seok; Lee, Kun Jai; Jeong, Jong Tae

    1996-01-01

    A model to estimate the economic consequences of a severe accident provides a measure of the impact of the accident and makes it possible to express the different effects of the accident in the same terms of cost and to combine them as necessary. Techniques to assess the consequences of accidents in terms of cost have many applications, for instance in examining countermeasure options, as part of either emergency planning or decision making after an accident. In this study, a model to estimate the economic consequences of an accident is developed appropriate to our country, focused on PWR accident costs from a societal viewpoint. Societal costs are estimated by accounting for losses that directly affect the plant licensee, the public, the nuclear industry, or the electric utility industry after a PWR accident

  9. Energy shift estimation of demand response activation on domestic refrigerators – A field test study

    DEFF Research Database (Denmark)

    Lakshmanan, Venkatachalam; Gudmand-Høyer, Kristian; Marinelli, Mattia

    2014-01-01

    This paper presents a method to estimate the amount of energy that can be shifted during demand response (DR) activation on domestic refrigerators. Though there are many methods for DR activation, such as load reduction, load shifting and onsite generation, the method under study is load shifting. Electric heating and cooling equipment like refrigerators, water heaters and space heaters and coolers are preferred for such DR activation because of their energy storing capacity. Accurate estimation of available regulating power and energy shift is important to understand the value of DR activation at any time. In this paper a novel method to estimate the available energy shift from domestic refrigerators with only two measurements, namely fridge cool chamber temperature and compressor power consumption, is proposed, discussed and evaluated.
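    As a back-of-the-envelope illustration of the quantity being estimated, the sketch below bounds the shiftable energy by how long the compressor can stay off before the cool chamber reaches its maximum allowed temperature. All parameter values are invented, and the first-order thermal model is an assumption; the paper's method infers the shift from the two measured signals rather than from an assumed model.

    ```python
    # hypothetical first-order thermal model of a refrigerator (values invented)
    C_thermal = 4.0e4          # cool-chamber thermal capacitance [J/K]
    T_set, T_max = 4.0, 7.0    # current and maximum allowed temperature [degC]
    heat_leak = 60.0           # heat inflow from the ambient [W]
    P_compressor = 100.0       # electrical draw while the compressor runs [W]
    duty_cycle = 0.5           # fraction of time the compressor would normally run

    # time the compressor can stay off before T_max is reached
    t_off_s = C_thermal * (T_max - T_set) / heat_leak

    # rough upper bound on the energy shifted out of the DR activation window
    energy_shift_kwh = duty_cycle * P_compressor * t_off_s / 3.6e6
    print(t_off_s, energy_shift_kwh)
    ```
    
    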

  10. Comparative study of building footprint estimation methods from LiDAR point clouds

    Science.gov (United States)

    Rozas, E.; Rivera, F. F.; Cabaleiro, J. C.; Pena, T. F.; Vilariño, D. L.

    2017-10-01

    Building area calculation from LiDAR points is still a difficult task with no clear solution. The varied characteristics of buildings, such as shape and size, make the process too complex to automate fully. However, several algorithms and techniques have been used in order to obtain an approximated hull. 3D building reconstruction and urban planning are examples of important applications that benefit from accurate building footprint estimations. In this paper, we have carried out a study of accuracy in the estimation of building footprints from LiDAR points. The analysis focuses on the processing steps following object recognition and classification, assuming that labeling of building points has been previously performed. Then, we perform an in-depth analysis of the influence of point density on the accuracy of the building area estimation. In addition, a set of buildings with different sizes and shapes were manually classified, in such a way that they can be used as a benchmark.
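    A minimal footprint-area sketch on synthetic data, using a convex hull as the simplest possible hull estimator. Real footprints are often concave, which is why comparative studies like the one above consider more flexible methods; the point cloud and dimensions below are invented.

    ```python
    import numpy as np
    from scipy.spatial import ConvexHull

    # hypothetical "building": 500 LiDAR returns over a 20 m x 10 m roof
    rng = np.random.default_rng(3)
    pts = rng.uniform([0.0, 0.0], [20.0, 10.0], size=(500, 2))

    hull = ConvexHull(pts)
    area = hull.volume   # in 2-D, ConvexHull.volume is the enclosed area
    print(area)          # slightly under the true 200 m^2 footprint
    ```

    Note that for a 2-D `ConvexHull`, `volume` is the area and `area` is the perimeter, a common point of confusion.
    
    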

  11. Influence of hypo- and hyperthermia on death time estimation - A simulation study.

    Science.gov (United States)

    Muggenthaler, H; Hubig, M; Schenkl, S; Mall, G

    2017-09-01

    Numerous physiological and pathological mechanisms can cause elevated or lowered body core temperatures. Deviations from the physiological level of about 37°C can influence temperature-based death time estimations. However, it has not been investigated by means of thermodynamics to what extent hypo- and hyperthermia bias death time estimates. Using numerical simulation, the present study investigates the errors inherent in temperature-based death time estimation in cases of elevated or lowered body core temperatures before death. The most considerable errors with regard to the normothermic model occur in the first few hours post-mortem. With decreasing body core temperature and increasing post-mortem time the error diminishes and stagnates at a nearly constant level. Copyright © 2017 Elsevier B.V. All rights reserved.
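    The direction of the bias can be seen with a much-simplified single-exponential (Newtonian) cooling sketch: if the body was hyperthermic at death but the standard 37 °C starting temperature is assumed, the estimated post-mortem interval comes out too short. The study above uses detailed numerical simulation, not this toy model; all constants below are invented.

    ```python
    import math

    # toy single-exponential (Newtonian) cooling model; constants invented
    T_AMB, K = 20.0, 0.08          # ambient temperature [degC], cooling rate [1/h]

    def cooling_temp(t_h, T0):
        return T_AMB + (T0 - T_AMB) * math.exp(-K * t_h)

    def estimate_pmi(T_measured, T0_assumed):
        # invert the cooling curve for the post-mortem interval [h]
        return -math.log((T_measured - T_AMB) / (T0_assumed - T_AMB)) / K

    # body actually hyperthermic (39 degC) at death, measured 6 h later
    T_meas = cooling_temp(6.0, T0=39.0)
    t_hat = estimate_pmi(T_meas, T0_assumed=37.0)   # standard 37 degC assumed
    print(T_meas, t_hat)    # the interval is underestimated
    ```
    
    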

  12. Study on the Flare Load Estimation of the Deethanizer using Dynamic Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Park, Kyungtae; Won, Wangyun [GS EC, Seoul (Korea, Republic of); Shin, Dongil [Myongji University, Yongin (Korea, Republic of)

    2014-10-15

    A flare system is a very important system that crucially affects process safety in chemical plants. If a flare system is designed too small, it cannot prevent catastrophic accidents at a chemical plant. On the other hand, if a flare system is designed too large, it will waste resources. Therefore, reasonable relief load estimation has been a crucial issue in the industry. The American Petroleum Institute (API) suggests basic guidelines for relief load estimation, and many engineering companies have developed their own relief load estimation methods based on an unbalanced heat and material method. However, these methods involve many conservative assumptions that lead to an overestimation of relief loads. In this study, a new design procedure for a flare system based on dynamic simulation is proposed in order to avoid the overestimation of relief loads. The relief load of a deethanizer process was tested to verify the performance of the proposed design procedure.

  13. Application of generalized estimating equations to a study in vitro of radiation sensitivity

    International Nuclear Information System (INIS)

    Cologne, J.B.; Carter, R.L.; Fujita, Shoichiro; Ban, Sadayuki.

    1993-08-01

    We describe an application of the generalized estimating equation (GEE) method (Liang K-Y, Zeger SL: Longitudinal data analysis using generalized linear models. Biometrika 73:13-22, 1986) for regression analyses of correlated Poisson data. As an alternative to the use of an arbitrarily chosen working correlation matrix, we demonstrate the use of GEE with a reasonable model for the true covariance structure among repeated observations within individuals. We show that, under such a split-plot design with large clusters, the asymptotic relative efficiency of GEE with simple (independence or exchangeable) working correlation matrices is rather low. We also illustrate the use of GEE with an empirically estimated model for overdispersion in a large study of radiation sensitivity where cluster size is small and a simple working correlation structure is sufficient. We conclude by summarizing issues and needs for further work concerning the efficiency of GEE parameter estimates in practice. (author)
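    The record above centers on choosing a working correlation for GEE with clustered Poisson counts. The sketch below shows the independence-working-correlation special case: fit an ordinary Poisson regression by Newton's method, then replace the naive model-based variance with a cluster-robust sandwich estimate. The simulated data and settings are invented, not the study's dataset or its exact GEE implementation.

    ```python
    import numpy as np

    # simulate clustered, overdispersed Poisson counts via a cluster frailty
    rng = np.random.default_rng(1)
    n_clusters, n_per = 60, 8
    groups = np.repeat(np.arange(n_clusters), n_per)
    x = rng.normal(size=n_clusters * n_per)
    frailty = rng.gamma(2.0, 0.5, n_clusters)        # cluster-level overdispersion
    y = rng.poisson(np.exp(0.3 + 0.5 * x) * frailty[groups])
    X = np.column_stack([np.ones_like(x), x])

    # Newton's method for the Poisson log-likelihood
    beta = np.zeros(2)
    for _ in range(25):
        mu = np.exp(X @ beta)
        beta += np.linalg.solve(X.T * mu @ X, X.T @ (y - mu))

    # naive (model-based) and cluster-robust (sandwich) covariance
    mu = np.exp(X @ beta)
    bread = np.linalg.inv(X.T * mu @ X)
    scores = X * (y - mu)[:, None]
    meat = sum(np.outer(s, s) for s in
               (scores[groups == g].sum(axis=0) for g in range(n_clusters)))
    naive_se = np.sqrt(np.diag(bread))
    robust_se = np.sqrt(np.diag(bread @ meat @ bread))
    print(beta, naive_se, robust_se)
    ```

    With within-cluster correlation present, the robust standard errors exceed the naive ones, which is the efficiency-versus-validity trade-off the paper examines.
    
    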

  14. Study of the method to estimate the hydraulic characteristics in rock masses by using elastic wave

    International Nuclear Information System (INIS)

    Katsu, Kenta; Ohnishi, Yuzo; Nishiyama, Satoshi; Yano, Takao; Ando, Kenichi; Yoshimura, Kimitaka

    2008-01-01

    In the area of radioactive waste repositories, estimating radionuclide migration through the rock mass is an important factor in the assessment of a repository. The purpose of this study is to develop a method to estimate the hydraulic characteristics of rock masses by using elastic wave velocity dispersion. The method is based on dynamic poroelastic relations such as the Biot and BISQ theories, which relate velocity dispersion to hydraulic characteristics. In order to verify the validity of these theories in crystalline rocks, we performed laboratory experiments. The results of the experiments show the dependence of elastic wave velocity on frequency. To test the applicability of this method to real rock masses, we performed an in-situ experiment on tuff rock masses. The results of the in-situ experiment show the potential of the approach as a practical method to estimate hydraulic characteristics by using elastic wave velocity dispersion. (author)

  15. Trends in the methodological design of research on competency assessment in higher education

    Directory of Open Access Journals (Sweden)

    Karina Angélica Villegas Sandoval

    2017-01-01

    Full Text Available The article aims to describe and analyze the research methodologies used in recent studies that address competency-based assessment in higher education and teacher training, in order to detect trends in methodological design and guide future research projects on this topic. The working method followed for this study is a content analysis of 22 documents retrieved from Dialnet. The results show that research addressing this topic has increased over the last thirteen years, with a notable shift in the methodologies used, toward predominantly descriptive and evaluative designs. At the same time, it is striking that a large number of studies explain neither the method nor the research design they applied. We conclude that it is important for studies to include a section describing their methodological design, in order to help readers understand the inquiry processes that were carried out.

  16. Effect of the Absorbed Photosynthetically Active Radiation Estimation Error on Net Primary Production Estimation - A Study with MODIS FPAR and TOMS Ultraviolet Reflective Products

    International Nuclear Information System (INIS)

    Kobayashi, H.; Matsunaga, T.; Hoyano, A.

    2002-01-01

    Absorbed photosynthetically active radiation (APAR), which is defined as downward solar radiation in 400-700 nm absorbed by vegetation, is one of the significant variables for Net Primary Production (NPP) estimation from satellite data. Toward the reduction of the uncertainties in global NPP estimation, it is necessary to clarify the APAR accuracy. In this paper, we first propose an improved PAR estimation method based on Eck and Dye's method, in which ultraviolet (UV) reflectivity data derived from the Total Ozone Mapping Spectrometer (TOMS) at the top of the atmosphere were used for cloud transmittance estimation. The proposed method considers the variable effects of land surface UV reflectivity on the satellite-observed UV data. Monthly mean PAR comparisons between satellite-derived and ground-based data at various meteorological stations in Japan indicated that the improved PAR estimation method reduced the bias errors in the summer season. Assuming the relative error of the fraction of PAR (FPAR) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) to be 10%, we estimated APAR relative errors to be 10-15%. Annual NPP is calculated using APAR derived from MODIS FPAR and the improved PAR estimation method. It is shown that random and bias errors of annual NPP in a 1 km resolution pixel are less than 4% and 6%, respectively. The APAR bias errors due to the PAR bias errors also affect the estimated total NPP. We estimated the most probable total annual NPP in Japan by subtracting the bias PAR errors; it amounts to about 248 MtC/yr. Using the improved PAR estimation method and Eck and Dye's method, total annual NPP differs from this most probable value by 4% and 9%, respectively. A previous intercomparison study among fifteen NPP models showed that global NPP estimates range over 44.4-66.3 GtC/yr (coefficient of variation = 14%). Hence we conclude that the NPP estimation uncertainty due to APAR estimation error is small

  17. Online Health Monitoring using Facebook Advertisement Audience Estimates in the United States: Evaluation Study.

    Science.gov (United States)

    Mejova, Yelena; Weber, Ingmar; Fernandez-Luque, Luis

    2018-03-28

    Facebook, the most popular social network with over one billion daily users, provides rich opportunities for its use in the health domain. Though much of Facebook's data are not available to outsiders, the company provides a tool for estimating the audience of Facebook advertisements, which includes aggregated information on the demographics and interests, such as weight loss or dieting, of Facebook users. This paper explores the potential uses of Facebook ad audience estimates for eHealth by studying the following: (1) for what type of health conditions prevalence estimates can be obtained via social media and (2) what type of marker interests are useful in obtaining such estimates, which can then be used for recruitment within online health interventions. The objective of this study was to understand the limitations and capabilities of using Facebook ad audience estimates for public health monitoring and as a recruitment tool for eHealth interventions. We use the Facebook Marketing application programming interface to correlate estimated sizes of audiences having health-related interests with public health data. Using several study cases, we identify both potential benefits and challenges in using this tool. We find several limitations in using Facebook ad audience estimates, for example, using placebo interest estimates to control for background level of user activity on the platform. Some Facebook interests such as plus-size clothing show encouraging levels of correlation (r=.74) across the 50 US states; however, we also sometimes find substantial correlations with the placebo interests such as r=.68 between interest in Technology and Obesity prevalence. Furthermore, we find demographic-specific peculiarities in the interests on health-related topics. Facebook's advertising platform provides aggregate data for more than 190 million US adults. We show how disease-specific marker interests can be used to model prevalence rates in a simple and intuitive manner.
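    The correlation analysis described above reduces to computing Pearson's r between per-state audience estimates and prevalence, for both a candidate marker interest and a placebo interest. The numbers below are simulated stand-ins, not Facebook or public health data.

    ```python
    import numpy as np
    from scipy.stats import pearsonr

    # simulated stand-ins for 50 US states (all numbers invented)
    rng = np.random.default_rng(7)
    prevalence = rng.uniform(0.20, 0.40, 50)                  # e.g. obesity prevalence
    marker_frac = 0.5 * prevalence + rng.normal(0, 0.02, 50)  # related marker interest
    placebo_frac = rng.uniform(0.10, 0.30, 50)                # unrelated placebo interest

    r_marker, _ = pearsonr(marker_frac, prevalence)
    r_placebo, _ = pearsonr(placebo_frac, prevalence)
    print(r_marker, r_placebo)
    ```

    The placebo interest plays the role the paper assigns to it: a baseline for how much correlation arises from background platform activity alone.
    
    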

  18. Online Health Monitoring using Facebook Advertisement Audience Estimates in the United States: Evaluation Study

    Science.gov (United States)

    Weber, Ingmar; Fernandez-Luque, Luis

    2018-01-01

    Background Facebook, the most popular social network with over one billion daily users, provides rich opportunities for its use in the health domain. Though much of Facebook’s data are not available to outsiders, the company provides a tool for estimating the audience of Facebook advertisements, which includes aggregated information on the demographics and interests, such as weight loss or dieting, of Facebook users. This paper explores the potential uses of Facebook ad audience estimates for eHealth by studying the following: (1) for what type of health conditions prevalence estimates can be obtained via social media and (2) what type of marker interests are useful in obtaining such estimates, which can then be used for recruitment within online health interventions. Objective The objective of this study was to understand the limitations and capabilities of using Facebook ad audience estimates for public health monitoring and as a recruitment tool for eHealth interventions. Methods We use the Facebook Marketing application programming interface to correlate estimated sizes of audiences having health-related interests with public health data. Using several study cases, we identify both potential benefits and challenges in using this tool. Results We find several limitations in using Facebook ad audience estimates, for example, using placebo interest estimates to control for background level of user activity on the platform. Some Facebook interests such as plus-size clothing show encouraging levels of correlation (r=.74) across the 50 US states; however, we also sometimes find substantial correlations with the placebo interests such as r=.68 between interest in Technology and Obesity prevalence. Furthermore, we find demographic-specific peculiarities in the interests on health-related topics. Conclusions Facebook’s advertising platform provides aggregate data for more than 190 million US adults. We show how disease-specific marker interests can be used to model

  19. Estimation of sex and age of 'virtual skeletons'-a feasibility study

    Energy Technology Data Exchange (ETDEWEB)

    Grabherr, Silke [University Hospital of Lausanne, Institute of Forensic Medicine, Lausanne (Switzerland)]|[University of Bern, Institute of Forensic Medicine, Bern (Switzerland)]|[University Hospital of Lausanne, Service of Diagnostic and Interventional Radiology, Lausanne (Switzerland); Cooper, Christine; Ulrich-Bochsler, Susi [University of Bern, Institute for the History of Medicine, Historical Anthropology, Bern (Switzerland); Uldin, Tanya [Service of Osteo-Archaeology, Aesch (Switzerland); Ross, Steffen; Oesterhelweg, Lars; Bolliger, Stephan; Thali, Michael J. [University of Bern, Institute of Forensic Medicine, Bern (Switzerland); Christe, Andreas [University of Bern, Institute of Forensic Medicine, Bern (Switzerland)]|[University of Bern, Institute of Diagnostic Radiology, Bern (Switzerland); Schnyder, Pierre [University Hospital of Lausanne, Service of Diagnostic and Interventional Radiology, Lausanne (Switzerland); Mangin, Patrice [University Hospital of Lausanne, Institute of Forensic Medicine, Lausanne (Switzerland)

    2009-02-15

    This article presents a feasibility study with the objective of investigating the potential of multi-detector computed tomography (MDCT) to estimate the bone age and sex of deceased persons. To obtain virtual skeletons, the bodies of 22 deceased persons with known age at death were scanned by MDCT using a special protocol that consisted of high-resolution imaging of the skull, shoulder girdle (including the upper half of the humeri), the symphysis pubis and the upper halves of the femora. Bone and soft-tissue reconstructions were performed in two and three dimensions. The resulting data were investigated by three anthropologists with different professional experience. Sex was determined by investigating three-dimensional models of the skull and pelvis. As a basic orientation for the age estimation, the complex method according to Nemeskeri and co-workers was applied. The final estimation was effected using additional parameters like the state of dentition, degeneration of the spine, etc., which were chosen individually by the three observers according to their experience. The results of the study show that the estimation of sex and age is possible by the use of MDCT. Virtual skeletons present an ideal collection for anthropological studies, because they are obtained in a non-invasive way and can be investigated ad infinitum. (orig.)

  20. A model for estimating the minimum number of offspring to sample in studies of reproductive success.

    Science.gov (United States)

    Anderson, Joseph H; Ward, Eric J; Carlson, Stephanie M

    2011-01-01

    Molecular parentage permits studies of selection and evolution in fecund species with cryptic mating systems, such as fish, amphibians, and insects. However, there exists no method for estimating the number of offspring that must be assigned parentage to achieve robust estimates of reproductive success when only a fraction of offspring can be sampled. We constructed a 2-stage model that first estimated the mean (μ) and variance (v) in reproductive success from published studies on salmonid fishes and then sampled offspring from reproductive success distributions simulated from the μ and v estimates. Results provided strong support for modeling salmonid reproductive success via the negative binomial distribution and suggested that few offspring samples are needed to reject the null hypothesis of uniform offspring production. However, the sampled reproductive success distributions deviated significantly (χ(2) goodness-of-fit test p value < 0.05) from the true reproductive success distribution at rates often >0.05 and as high as 0.24, even when hundreds of offspring were assigned parentage. In general, reproductive success patterns were less accurate when offspring were sampled from cohorts with larger numbers of parents and greater variance in reproductive success. Our model can be reparameterized with data from other species and will aid researchers in planning reproductive success studies by providing explicit sampling targets required to accurately assess reproductive success.
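    The first stage of the model described above amounts to choosing a negative binomial distribution whose mean and variance match field estimates and simulating offspring counts from it; a goodness-of-fit test then probes the null hypothesis of uniform offspring production. A compact sketch (the μ and v values below are invented, not the paper's salmonid estimates):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    mu, v = 8.0, 40.0                    # assumed mean/variance of offspring counts
    r = mu**2 / (v - mu)                 # NB "size" from v = mu + mu**2 / r
    p = r / (r + mu)

    n_parents = 100
    offspring = rng.negative_binomial(r, p, n_parents)

    # chi-square test of the null hypothesis of uniform offspring production
    expected = offspring.mean()
    chi2 = ((offspring - expected) ** 2 / expected).sum()
    p_value = stats.chi2.sf(chi2, df=n_parents - 1)
    print(chi2, p_value)
    ```

    Because v is several times μ here, the dispersion index far exceeds 1 and the uniform-production null is rejected decisively, matching the paper's observation that few offspring samples suffice for that test.
    
    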

  1. Estimation of sex and age of ''virtual skeletons''-a feasibility study

    International Nuclear Information System (INIS)

    Grabherr, Silke; Cooper, Christine; Ulrich-Bochsler, Susi; Uldin, Tanya; Ross, Steffen; Oesterhelweg, Lars; Bolliger, Stephan; Thali, Michael J.; Christe, Andreas; Schnyder, Pierre; Mangin, Patrice

    2009-01-01

    This article presents a feasibility study with the objective of investigating the potential of multi-detector computed tomography (MDCT) to estimate the bone age and sex of deceased persons. To obtain virtual skeletons, the bodies of 22 deceased persons with known age at death were scanned by MDCT using a special protocol that consisted of high-resolution imaging of the skull, shoulder girdle (including the upper half of the humeri), the symphysis pubis and the upper halves of the femora. Bone and soft-tissue reconstructions were performed in two and three dimensions. The resulting data were investigated by three anthropologists with different professional experience. Sex was determined by investigating three-dimensional models of the skull and pelvis. As a basic orientation for the age estimation, the complex method according to Nemeskeri and co-workers was applied. The final estimation was made using additional parameters such as the state of dentition and degeneration of the spine, which were chosen individually by the three observers according to their experience. The results of the study show that the estimation of sex and age is possible by the use of MDCT. Virtual skeletons present an ideal collection for anthropological studies, because they are obtained in a non-invasive way and can be investigated ad infinitum. (orig.)

  2. Informing Estimates of Program Effects for Studies of Mathematics Professional Development Using Teacher Content Knowledge Outcomes.

    Science.gov (United States)

    Phelps, Geoffrey; Kelcey, Benjamin; Jones, Nathan; Liu, Shuangshuang

    2016-10-03

    Mathematics professional development is widely offered, typically with the goal of improving teachers' content knowledge, the quality of teaching, and ultimately students' achievement. Recently, new assessments focused on mathematical knowledge for teaching (MKT) have been developed to assist in the evaluation and improvement of mathematics professional development. This study presents empirical estimates of average program change in MKT and its variation with the goal of supporting the design of experimental trials that are adequately powered to detect a specified program effect. The study drew on a large database representing five different assessments of MKT and collectively 326 professional development programs and 9,365 teachers. Results from cross-classified hierarchical growth models found that standardized average change estimates across the five assessments ranged from a low of 0.16 standard deviations (SDs) to a high of 0.26 SDs. Power analyses based on the estimated pretest-posttest changes indicated that hundreds of teachers are needed to detect changes in knowledge at the lower end of the distribution. Even studies powered to detect effects at the higher end of the distribution will require substantial resources to conduct rigorous experimental trials. Empirical benchmarks that describe average program change and its variation provide a useful preliminary resource for interpreting the relative magnitude of effect sizes associated with professional development programs and for designing adequately powered trials. © The Author(s) 2016.

  3. Visual estimation versus gravimetric measurement of postpartum blood loss: a prospective cohort study.

    Science.gov (United States)

    Al Kadri, Hanan M F; Al Anazi, Bedayah K; Tamim, Hani M

    2011-06-01

    One of the major problems in the international literature is how to measure postpartum blood loss with accuracy. We aimed in this research to assess the accuracy of visual estimation of postpartum blood loss (by each of two main health-care providers) compared with the gravimetric calculation method. We carried out a prospective cohort study at King Abdulaziz Medical City, Riyadh, Saudi Arabia between 1 November 2009 and 31 December 2009. All women who were admitted to the labor and delivery suite and delivered vaginally were included in the study. Postpartum blood loss was visually estimated by the attending physician and obstetrics nurse and then objectively calculated by a gravimetric machine. Comparison between the three methods of blood loss calculation was carried out. A total of 150 patients were included in this study. There was a significant difference between the gravimetrically calculated blood loss and both health-care providers' estimates, with a tendency to underestimate the loss by about 30%. The background and seniority of the assessing health-care provider did not affect the accuracy of the estimation. The corrected incidence of postpartum hemorrhage in Saudi Arabia was found to be 1.47%. Health-care providers tend to underestimate the volume of postpartum blood loss by about 30%. Training and continuous auditing of the diagnosis of postpartum hemorrhage are needed to avoid missed cases and thus prevent associated morbidity and mortality.

  4. A case study to estimate costs using Neural Networks and regression based models

    Directory of Open Access Journals (Sweden)

    Nadia Bhuiyan

    2012-07-01

    Full Text Available Bombardier Aerospace's high-performance aircraft and services set the utmost standard for the aerospace industry. A case study in collaboration with Bombardier Aerospace was conducted to estimate the target cost of a landing gear. More precisely, the study uses both a parametric model and neural network models to estimate the cost of main landing gears, a major aircraft commodity. A comparative analysis between the parametric model and the neural network models is conducted to determine the most accurate method for predicting the cost of a main landing gear. Several trials are presented for the design and use of the neural network model. The analysis for the case under study shows the flexibility in the design of the neural network model. Furthermore, the performance of the neural network model is deemed superior to the parametric models for this case study.

  5. Estimation Methods of the Point Spread Function Axial Position: A Comparative Computational Study

    Directory of Open Access Journals (Sweden)

    Javier Eduardo Diaz Zamboni

    2017-01-01

    Full Text Available The precise knowledge of the point spread function is central for any imaging system characterization. In fluorescence microscopy, point spread function (PSF) determination has become a common and obligatory task for each new experimental device, mainly due to its strong dependence on acquisition conditions. During the last decade, algorithms have been developed for the precise calculation of the PSF, which fit model parameters that describe image formation on the microscope to experimental data. In order to contribute to this subject, a comparative study of three parameter estimation methods is reported, namely: I-divergence minimization (MIDIV), maximum likelihood (ML) and non-linear least squares (LSQR). They were applied to the estimation of the point source position on the optical axis, using a physical model. The methods' performance was evaluated under different conditions and noise levels using synthetic images, considering success percentage, iteration number, computation time, accuracy and precision. The main results showed that the axial position estimation requires a high SNR to achieve an acceptable success level, and higher still to approach the lower bound on the estimation error. ML achieved a higher success percentage at lower SNR compared to MIDIV and LSQR with an intrinsic noise source. Only the ML and MIDIV methods achieved the error lower bound, and only with data belonging to the optical axis and high SNR. Extrinsic noise sources worsened the success percentage, but no difference between extrinsic noise sources was found for any of the methods studied.
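
A toy version of the least-squares (LSQR) idea: recover a source's axial position from noisy intensity samples by grid search, assuming a simple Gaussian axial profile rather than the physical PSF model the paper fits. All numbers are illustrative.

```python
# Non-linear least squares by grid search for the axial position z0 of a
# point source, given noisy samples of an assumed Gaussian axial profile.
import math
import random

def profile(z, z0, sigma=1.0):
    """Assumed (illustrative) axial intensity profile."""
    return math.exp(-((z - z0) ** 2) / (2 * sigma ** 2))

def lsq_estimate(zs, data, grid):
    """Return the grid value of z0 minimizing the sum of squared residuals."""
    return min(grid, key=lambda z0: sum((d - profile(z, z0)) ** 2
                                        for z, d in zip(zs, data)))

rng = random.Random(0)
z_true = 0.35
zs = [i / 10 for i in range(-30, 31)]                 # sampling positions
data = [profile(z, z_true) + rng.gauss(0, 0.02) for z in zs]  # noisy image
grid = [i / 100 for i in range(-100, 101)]            # candidate z0 values
print(lsq_estimate(zs, data, grid))
```

As the abstract notes, how close such an estimate gets to the true position degrades quickly as the noise level rises relative to the signal.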

  6. Early‐Stage Capital Cost Estimation of Biorefinery Processes: A Comparative Study of Heuristic Techniques

    Science.gov (United States)

    Couturier, Jean‐Luc; Kokossis, Antonis; Dubois, Jean‐Luc

    2016-01-01

    Abstract Biorefineries offer a promising alternative to fossil‐based processing industries and have undergone rapid development in recent years. Limited financial resources and stringent company budgets necessitate quick capital estimation of pioneering biorefinery projects at the early stages of their conception to screen process alternatives, decide on project viability, and allocate resources to the most promising cases. Biorefineries are capital‐intensive projects that involve state‐of‐the‐art technologies for which there is no prior experience or sufficient historical data. This work reviews existing rapid cost estimation practices, which can be used by researchers with no previous cost estimating experience. It also comprises a comparative study of six cost methods on three well‐documented biorefinery processes to evaluate their accuracy and precision. The results illustrate discrepancies among the methods because their extrapolation on biorefinery data often violates inherent assumptions. This study recommends the most appropriate rapid cost methods and urges the development of an improved early‐stage capital cost estimation tool suitable for biorefinery processes. PMID:27484398

  7. Retinal vessel diameter and estimated cerebrospinal fluid pressure in arterial hypertension: the Beijing Eye Study.

    Science.gov (United States)

    Jonas, Jost B; Wang, Ningli; Wang, Shuang; Wang, Ya Xing; You, Qi Sheng; Yang, Diya; Wei, Wen Bin; Xu, Liang

    2014-09-01

    Hypertensive retinal microvascular abnormalities include an increased retinal vein-to-artery diameter ratio. Because central retinal vein pressure depends on cerebrospinal fluid pressure (CSFP), we examined whether the retinal vein-to-artery diameter ratio and other retinal hypertensive signs are associated with CSFP. Participants of the population-based Beijing Eye Study (n = 1,574 subjects) underwent measurement of the temporal inferior and superior retinal artery and vein diameter. CSFP was calculated as 0.44 × body mass index (kg/m(2)) + 0.16 × diastolic blood pressure (mm Hg) - 0.18 × age (years) - 1.91. Larger retinal vein diameters and higher vein-to-artery diameter ratios were significantly associated with higher estimated CSFP (P = 0.001) in multivariable analysis. In contrast, temporal inferior retinal arterial diameter was marginally associated (P = 0.03) with estimated CSFP, and temporal superior artery diameter was not significantly associated (P = 0.10) with estimated CSFP; other microvascular abnormalities, such as arteriovenous crossing signs, were also not significantly associated with estimated CSFP. In a reverse manner, higher estimated CSFP as a dependent variable in the multivariable analysis was associated with wider retinal veins and higher vein-to-artery diameter ratio. In the same model, estimated CSFP was not significantly correlated with retinal artery diameters or other retinal microvascular abnormalities. Correspondingly, arterial hypertension was associated with retinal microvascular abnormalities such as arteriovenous crossing signs (P = 0.003) and thinner temporal retinal arteries. In arterial hypertension, an increased retinal vein-to-artery diameter ratio depends on elevated CSFP, which is correlated with blood pressure. © American Journal of Hypertension, Ltd 2014. All rights reserved.
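
The estimation formula quoted in the abstract translates directly to code (the example inputs are illustrative, not study data):

```python
# Estimated CSFP (mm Hg) from the regression formula quoted above:
# 0.44*BMI + 0.16*diastolic BP - 0.18*age - 1.91.
def estimated_csfp(bmi_kg_m2, diastolic_bp_mmhg, age_years):
    return 0.44 * bmi_kg_m2 + 0.16 * diastolic_bp_mmhg - 0.18 * age_years - 1.91

# e.g. BMI 25 kg/m2, diastolic BP 80 mm Hg, age 60 years
print(estimated_csfp(25, 80, 60))
```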

  8. Antenatal surveillance through estimates of the sources underlying the abdominal phonogram: a preliminary study

    International Nuclear Information System (INIS)

    Jiménez-González, A; James, C J

    2013-01-01

    Today, it is generally accepted that current methods for biophysical antenatal surveillance do not facilitate a comprehensive and reliable assessment of foetal well-being and that continuing research into alternative methods is necessary to improve antenatal monitoring procedures. In our research, attention has been paid to the abdominal phonogram, a signal that is recorded by positioning an acoustic sensor on the maternal womb and contains valuable information about foetal status, but which is hidden by maternal and environmental sources. To recover such information, previous work has used single-channel independent component analysis (SCICA) on the abdominal phonogram and successfully retrieved estimates of the foetal phonocardiogram, the maternal phonocardiogram, the maternal respirogram and noise. The availability of these estimates made it possible for the current study to focus on their evaluation as sources for antenatal surveillance purposes. To this end, the foetal heart rate (FHR), the foetal heart sounds morphology, the maternal heart rate (MHR) and the maternal breathing rate (MBR) were collected from the estimates retrieved from a dataset of 25 abdominal phonograms. Next, these parameters were compared with reference values to quantify the significance of the physiological information extracted from the estimates. As a result, it has been seen that the instantaneous FHR, the instantaneous MHR and the MBR collected from the estimates consistently followed the trends given by the reference signals, which is a promising outcome for this preliminary study. Thus, as far as this study has gone, it can be said that the independent traces retrieved by SCICA from the abdominal phonogram are likely to become valuable sources of information for well-being surveillance, both foetal and maternal. (paper)

  9. Assessment of professionalizing competencies in undergraduate audiovisual communication degree programmes

    Directory of Open Access Journals (Sweden)

    Marina Romeo

    2017-01-01

    Full Text Available Recent changes in university education have brought about a notable professionalization of degree programmes and a constant adaptation to social demands. In this regard, one need in university education in the area of audiovisual communication is to develop in students the professionalizing competencies that allow them to find employment niches in a highly competitive market subject to continuous change. This research aims to create a rubric for assessing professionalizing learning in the undergraduate audiovisual communication (CAV) degree programmes offered in Spain. To develop it, we drew on purposively selected employers and expert academics from the audiovisual communication field. The final rubric, in addition to allowing the degree of acquisition of professionalizing competencies in CAV to be assessed, draws a clear map of the organization and adequacy of teaching processes and methodologies. In this sense, the rubric can be a key pedagogical instrument for future cohorts of students and can become a tool that supports their formative assessment.

  10. Improving clinical research and cancer care delivery in community settings: evaluating the NCI community cancer centers program

    Directory of Open Access Journals (Sweden)

    Fennell Mary L

    2009-09-01

    Full Text Available Abstract Background In this article, we describe the National Cancer Institute (NCI) Community Cancer Centers Program (NCCCP) pilot and the evaluation designed to assess its role, function, and relevance to the NCI's research mission. In doing so, we describe the evolution of and rationale for the NCCCP concept, participating sites' characteristics, its multi-faceted aims to enhance clinical research and quality of care in community settings, and the role of strategic partnerships, both within and outside of the NCCCP network, in achieving program objectives. Discussion The evaluation of the NCCCP is conceptualized as a mixed method multi-layered assessment of organizational innovation and performance which includes mapping the evolution of site development as a means of understanding the inter- and intra-organizational change in the pilot, and the application of specific evaluation metrics for assessing the implementation, operations, and performance of the NCCCP pilot. The assessment of the cost of the pilot as an additional means of informing the longer-term feasibility and sustainability of the program is also discussed. Summary The NCCCP is a major systems-level set of organizational innovations to enhance clinical research and care delivery in diverse communities across the United States. Assessment of the extent to which the program achieves its aims will depend on a full understanding of how individual, organizational, and environmental factors align (or fail to align) to achieve these improvements, and at what cost.

  11. Improving clinical research and cancer care delivery in community settings: evaluating the NCI community cancer centers program.

    Science.gov (United States)

    Clauser, Steven B; Johnson, Maureen R; O'Brien, Donna M; Beveridge, Joy M; Fennell, Mary L; Kaluzny, Arnold D

    2009-09-26

    In this article, we describe the National Cancer Institute (NCI) Community Cancer Centers Program (NCCCP) pilot and the evaluation designed to assess its role, function, and relevance to the NCI's research mission. In doing so, we describe the evolution of and rationale for the NCCCP concept, participating sites' characteristics, its multi-faceted aims to enhance clinical research and quality of care in community settings, and the role of strategic partnerships, both within and outside of the NCCCP network, in achieving program objectives. The evaluation of the NCCCP is conceptualized as a mixed method multi-layered assessment of organizational innovation and performance which includes mapping the evolution of site development as a means of understanding the inter- and intra-organizational change in the pilot, and the application of specific evaluation metrics for assessing the implementation, operations, and performance of the NCCCP pilot. The assessment of the cost of the pilot as an additional means of informing the longer-term feasibility and sustainability of the program is also discussed. The NCCCP is a major systems-level set of organizational innovations to enhance clinical research and care delivery in diverse communities across the United States. Assessment of the extent to which the program achieves its aims will depend on a full understanding of how individual, organizational, and environmental factors align (or fail to align) to achieve these improvements, and at what cost.

  12. NCI Program for Natural Product Discovery: A Publicly-Accessible Library of Natural Product Fractions for High-Throughput Screening.

    Science.gov (United States)

    Thornburg, Christopher C; Britt, John R; Evans, Jason R; Akee, Rhone K; Whitt, James A; Trinh, Spencer K; Harris, Matthew J; Thompson, Jerell R; Ewing, Teresa L; Shipley, Suzanne M; Grothaus, Paul G; Newman, David J; Schneider, Joel P; Grkovic, Tanja; O'Keefe, Barry R

    2018-06-13

    The US National Cancer Institute's (NCI) Natural Product Repository is one of the world's largest, most diverse collections of natural products, containing over 230,000 unique extracts derived from plant, marine, and microbial organisms that have been collected from biodiverse regions throughout the world. Importantly, this national resource is available to the research community for the screening of extracts and the isolation of bioactive natural products. However, despite the success of natural products in drug discovery, compatibility issues that make extracts challenging for liquid-handling systems, extended timelines that complicate natural product-based drug discovery efforts, and the presence of pan-assay interfering compounds have reduced enthusiasm for the high-throughput screening (HTS) of crude natural product extract libraries in targeted assay systems. To address these limitations, the NCI Program for Natural Product Discovery (NPNPD), a newly launched national program to advance natural product discovery technologies and facilitate the discovery of structurally defined, validated lead molecules ready for translation, will create a prefractionated library from over 125,000 natural product extracts with the aim of producing a publicly accessible, HTS-amenable library of >1,000,000 fractions. This library, representing perhaps the largest accumulation of natural product-based fractions in the world, will be made available free of charge in 384-well plates for screening against all disease states in an effort to reinvigorate natural product-based drug discovery.

  13. Genotyping faecal samples of Bengal tiger Panthera tigris tigris for population estimation: A pilot study

    Directory of Open Access Journals (Sweden)

    Singh Lalji

    2006-10-01

    Full Text Available Abstract Background The Bengal tiger Panthera tigris tigris, the National Animal of India, is an endangered species. Estimating populations for such species is the main objective for designing conservation measures and for evaluating those that are already in place. Due to the tiger's cryptic and secretive behaviour, it is not possible to enumerate and monitor its populations through direct observations; instead, indirect methods have always been used for studying tigers in the wild. DNA methods based on non-invasive sampling have not been attempted so far for tiger population studies in India. We describe here a pilot study using DNA extracted from faecal samples of tigers for the purpose of population estimation. Results In this study, PCR primers were developed based on tiger-specific variations in the mitochondrial cytochrome b for reliably distinguishing tiger faecal samples from those of sympatric carnivores. Microsatellite markers were developed for the identification of individual tigers with a sibling Probability of Identity of 0.005 that can distinguish even closely related individuals with 99.9% certainty. The effectiveness of using field-collected tiger faecal samples for DNA analysis was evaluated by sampling, identification and subsequently genotyping samples from two protected areas in southern India. Conclusion Our results demonstrate the feasibility of using tiger faecal matter as a potential source of DNA for population estimation of tigers in protected areas in India in addition to the methods currently in use.

  14. Regulation of voltage-gated potassium channels attenuates resistance of side-population cells to gefitinib in the human lung cancer cell line NCI-H460.

    Science.gov (United States)

    Choi, Seon Young; Kim, Hang-Rae; Ryu, Pan Dong; Lee, So Yeong

    2017-02-21

    Side-population (SP) cells that exclude anti-cancer drugs have been found in various tumor cell lines. SP cells show higher proliferative potential and drug resistance than main-population (non-SP) cells, and several ion channels contribute to the drug resistance and proliferation of SP cells in cancer. To examine the expression and function of voltage-gated potassium (Kv) channels in SP cells, these cells, which highly express ATP-binding cassette (ABC) transporters and stemness genes, were isolated from a gefitinib-resistant human lung adenocarcinoma cell line (NCI-H460) using Hoechst 33342 efflux. In the present study, we found that mRNA expression of Kv channels in SP cells differed from that in non-SP cells, and that the resistance of SP cells to gefitinib was weakened by combination treatment with gefitinib and Kv channel blockers or a Kv7 opener, compared with gefitinib alone, through inhibition of the Ras-Raf signaling pathway. The findings indicate that Kv channels in SP cells could be new targets for reducing the resistance to gefitinib.

  15. Hexamethoxylated Monocarbonyl Analogues of Curcumin Cause G2/M Cell Cycle Arrest in NCI-H460 Cells via Michael Acceptor-Dependent Redox Intervention.

    Science.gov (United States)

    Li, Yan; Zhang, Li-Ping; Dai, Fang; Yan, Wen-Jing; Wang, Hai-Bo; Tu, Zhi-Shan; Zhou, Bo

    2015-09-09

    Curcumin, derived from the dietary spice turmeric, holds promise for cancer prevention. This prompts much interest in investigating the action mechanisms of curcumin and its analogues. Two symmetrical hexamethoxy-diarylpentadienones (1 and 2) as curcumin analogues were reported to possess significantly enhanced cytotoxicity compared with the parent molecule. However, the detailed mechanisms remain unclear. In this study, compounds 1 and 2 were identified as G2/M cell cycle arrest agents that mediate the cytotoxicity toward NCI-H460 cells via Michael acceptor-dependent redox intervention. Compared with curcumin, they could more easily induce a burst of reactive oxygen species (ROS) and collapse of the redox buffering system. One possible reason is that they could more effectively target intracellular TrxR to convert this antioxidant enzyme into a ROS promoter. Additionally, they caused up-regulation of p53 and p21 and down-regulation of redox-sensitive Cdc25C along with cyclin B1/Cdk1 in a Michael acceptor- and ROS-dependent fashion. Interestingly, in comparison with compound 2, compound 1 displayed a relatively weak ability to generate ROS but increased cell cycle arrest activity and cytotoxicity, probably due to its Michael acceptor-dependent microtubule-destabilizing effect and greater GST-inhibitory activity, as well as its enhanced cellular uptake. This work provides useful information for understanding Michael acceptor-dependent and redox-mediated cytotoxic mechanisms of curcumin and its active analogues.

  16. Estimate of the atmospheric turbidity from three broad-band solar radiation algorithms. A comparative study

    Directory of Open Access Journals (Sweden)

    G. López

    2004-09-01

    Full Text Available Atmospheric turbidity is an important parameter for assessing the air pollution in local areas, as well as being the main parameter controlling the attenuation of solar radiation reaching the Earth's surface under cloudless sky conditions. Among the different turbidity indices, the Ångström turbidity coefficient β is frequently used. In this work, we analyse the performance of three methods based on broad-band solar irradiance measurements in the estimation of β. The evaluation of the performance of the models was undertaken by graphical and statistical (root mean square errors and mean bias errors) means. The data sets used in this study comprise measurements of broad-band solar irradiance obtained at eight radiometric stations and aerosol optical thickness measurements obtained at one co-located radiometric station. Since all three methods require estimates of precipitable water content, three common methods for calculating atmospheric precipitable water content from surface air temperature and relative humidity are evaluated. Results show that these methods exhibit significant differences for low values of precipitable water. The effect of these differences in precipitable water estimates on turbidity algorithms is discussed. Differences in hourly turbidity estimates are later examined. The effects of random errors in pyranometer measurements and cloud interferences on the performance of the models are also presented. Examination of the annual cycle of monthly mean values of β for each location has shown that all three turbidity algorithms are suitable for analysing long-term trends and seasonal patterns.
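
The Ångström turbidity coefficient β referred to throughout is defined by the Ångström law τ_a(λ) = β·λ^(−α), with λ in micrometres. Given aerosol optical thickness at two wavelengths, α and β follow directly; the numbers below are illustrative, not data from the study.

```python
# Solve the Angstrom law tau_a(lam) = beta * lam**(-alpha) for alpha and
# beta from aerosol optical thickness at two wavelengths (micrometres).
import math

def angstrom_params(tau1, lam1, tau2, lam2):
    alpha = -math.log(tau1 / tau2) / math.log(lam1 / lam2)
    beta = tau1 * lam1 ** alpha
    return alpha, beta

# illustrative sun-photometer channels at 0.44 and 0.87 micrometres
alpha, beta = angstrom_params(0.30, 0.44, 0.15, 0.87)
print(round(alpha, 2), round(beta, 3))
```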

  18. Estimate of the atmospheric turbidity from three broad-band solar radiation algorithms. A comparative study

    Energy Technology Data Exchange (ETDEWEB)

    Lopez, G.; Batlles, F.J. [Dept. de Ingenieria Electrica y Termica, EPS La Rabida, Univ. de Huelva, Huelva (Spain)

    2004-07-01

    Atmospheric turbidity is an important parameter for assessing the air pollution in local areas, as well as being the main parameter controlling the attenuation of solar radiation reaching the Earth's surface under cloudless sky conditions. Among the different turbidity indices, the Aangstroem turbidity coefficient {beta} is frequently used. In this work, we analyse the performance of three methods based on broadband solar irradiance measurements in the estimation of {beta}. The evaluation of the performance of the models was undertaken by graphical and statistical (root mean square errors and mean bias errors) means. The data sets used in this study comprise measurements of broad-band solar irradiance obtained at eight radiometric stations and aerosol optical thickness measurements obtained at one co-located radiometric station. Since all three methods require estimates of precipitable water content, three common methods for calculating atmospheric precipitable water content from surface air temperature and relative humidity are evaluated. Results show that these methods exhibit significant differences for low values of precipitable water. The effect of these differences in precipitable water estimates on turbidity algorithms is discussed. Differences in hourly turbidity estimates are later examined. The effects of random errors in pyranometer measurements and cloud interferences on the performance of the models are also presented. Examination of the annual cycle of monthly mean values of {beta} for each location has shown that all three turbidity algorithms are suitable for analysing long-term trends and seasonal patterns. (orig.)

  19. Comparative study for the estimation of To shift due to irradiation embrittlement

    International Nuclear Information System (INIS)

    Lee, Jin Ho; Park, Youn won; Choi, Young Hwan; Kim, Seok Hun; Revka, Volodymyr

    2002-01-01

    Recently, an approach called the 'Master Curve' method was proposed, which has opened a new means to acquire a directly measured, material-specific fracture toughness curve. For the full application of the Master Curve method, several technical issues should be solved. One of them is how to utilize existing Charpy impact test data in the evaluation of the fracture transition temperature shift due to irradiation damage. In the U.S. and most Western countries, Charpy impact test data have been used to estimate the irradiation effects on fracture toughness changes of RPV materials. For the determination of the irradiation shift, the indexing energy level of 41 joules is used irrespective of the material yield strength. The Russian Code also requires Charpy impact test data to determine the extent of radiation embrittlement. Unlike the U.S. Code, however, the Russian approach uses an indexing energy level that varies according to the material strength. The objective of this study is to determine a method by which the reference transition temperature shift (ΔTo) due to irradiation can be estimated. By comparing the irradiation shift estimated according to the U.S. procedure (ΔT41J) with that estimated according to the Russian procedure (ΔTF), it was found that a one-to-one relation exists between ΔTo and ΔTF

  20. Estimating population size in wastewater-based epidemiology. Valencia metropolitan area as a case study.

    Science.gov (United States)

    Rico, María; Andrés-Costa, María Jesús; Picó, Yolanda

    2017-02-05

    Wastewater can provide a wealth of epidemiologic data on common drugs consumed and on health and nutritional problems, based on the biomarkers excreted into community sewage systems. One of the biggest uncertainties of these studies is the estimation of the number of inhabitants served by the treatment plants. Twelve human urine biomarkers -5-hydroxyindoleacetic acid (5-HIAA), acesulfame, atenolol, caffeine, carbamazepine, codeine, cotinine, creatinine, hydrochlorothiazide (HCTZ), naproxen, salicylic acid (SA) and hydroxycotinine (OHCOT)- were determined by liquid chromatography-tandem mass spectrometry (LC-MS/MS) to estimate population size. The results reveal that populations calculated from cotinine, 5-HIAA and caffeine are commonly in agreement with those calculated from the hydrochemical parameters. Creatinine is too unstable to be applicable. HCTZ, naproxen, codeine, OHCOT and carbamazepine under- or overestimate the population compared to the hydrochemical population estimates but showed consistent results across the weekdays. The consumption of cannabis, cocaine, heroin and bufotenine in Valencia was estimated for a week using the different population calculations. Copyright © 2016 Elsevier B.V. All rights reserved.
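    The population back-calculation underlying such studies divides the daily sewage load of a biomarker by an assumed per-capita excretion rate. A minimal sketch (the influent concentration, plant flow, and excretion rate below are hypothetical illustration values, not figures from the study):

```python
def population_from_biomarker(conc_ng_per_l, flow_m3_per_day, excretion_mg_per_person_day):
    """Back-calculate the population served by a treatment plant from a urine biomarker.

    daily load (mg/day) = concentration (ng/L) * flow (L/day) / 1e6
    population = daily load / per-capita excretion rate
    """
    load_mg_day = conc_ng_per_l * flow_m3_per_day * 1000 / 1e6
    return load_mg_day / excretion_mg_per_person_day

# Hypothetical influent values; 1.3 mg/person/day is an assumed cotinine excretion rate
pop = population_from_biomarker(conc_ng_per_l=2600,
                                flow_m3_per_day=300000,
                                excretion_mg_per_person_day=1.3)
print(round(pop))  # 600000
```

The sensitivity of the estimate to the assumed excretion rate is exactly why the study cross-checks several biomarkers against hydrochemical population estimates.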

  1. Estimation of household income diversification in South Africa: A case study of three provinces

    Directory of Open Access Journals (Sweden)

    Jabulani Mathebula

    2017-01-01

    We estimated household income diversification across settlement types in the poorest provinces of South Africa: the Eastern Cape, Limpopo and KwaZulu-Natal. We obtained data from the 2010/2011 Income and Expenditure Survey from Statistics South Africa and Wave 3 data from the National Income Dynamics Study. We used the number of income sources, the number of income earners and the Shannon Diversity Index to estimate income diversification in the study provinces. The results show that households in the traditional and urban formal areas diversified income sources to a greater extent than households in urban informal and rural formal settlements. The varied degrees of income diversification in the three provinces suggest that targeted policy initiatives aimed at enhancing household income are important in these provinces.
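    The Shannon Diversity Index used as one of the three diversification measures can be computed directly from each household's income-source shares. A minimal sketch with hypothetical shares:

```python
import math

def shannon_diversity(shares):
    """Shannon Diversity Index H = -sum(p_i * ln p_i) over income-source shares."""
    return -sum(p * math.log(p) for p in shares if p > 0)

# Hypothetical household: 60% wages, 30% grants, 10% remittances
h_diverse = shannon_diversity([0.6, 0.3, 0.1])
h_single = shannon_diversity([1.0])  # a single income source gives H = 0
print(round(h_diverse, 3), h_single == 0)
```

Higher H indicates income spread more evenly over more sources; a household living on one source scores zero.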

  2. Consumers’ estimation of calorie content at fast food restaurants: cross sectional observational study

    OpenAIRE

    Block, Jason Perry; Condon, Suzanne K; Kleinman, Ken Paul; Mullen, Jewel; Linakis, Stephanie; Rifas-Shiman, Sheryl Lynn; Gillman, Matthew William

    2013-01-01

    Objective: To investigate estimation of calorie (energy) content of meals from fast food restaurants in adults, adolescents, and school age children. Design: Cross sectional study of repeated visits to fast food restaurant chains. Setting: 89 fast food restaurants in four cities in New England, United States: McDonald’s, Burger King, Subway, Wendy’s, KFC, Dunkin’ Donuts. Participants: 1877 adults and 330 school age children visiting restaurants at dinnertime (evening meal) in 2010 and 2011; 1...

  3. Comparative study of age estimation using dentinal translucency by digital and conventional methods

    Science.gov (United States)

    Bommannavar, Sushma; Kulkarni, Meena

    2015-01-01

    Introduction: Estimating age using the dentition plays a significant role in identification of the individual in forensic cases. Teeth are among the most durable and strongest structures in the human body. The morphology and arrangement of teeth vary from person to person and are unique to an individual, as are fingerprints. Therefore, the use of dentition is the method of choice in the identification of the unknown. Root dentin translucency is considered to be one of the best parameters for dental age estimation. Traditionally, root dentin translucency was measured using calipers. Recently, the use of custom-built software programs has been proposed for the same purpose. Objectives: The present study describes a method to measure root dentin translucency on sectioned teeth using the software program Adobe Photoshop 7.0 (Adobe Systems Inc., Mountain View, California). Materials and Methods: A total of 50 single-rooted teeth were sectioned longitudinally to a uniform thickness of 0.25 mm, and root dentin translucency was measured using the digital and caliper methods and compared. Gustafson's morphohistologic approach was used in this study. Results: Correlation coefficients of translucency measurements to age were statistically significant for both methods (P < 0.125), and linear regression equations derived from both methods revealed a better ability of the digital method to assess age. Conclusion: The software program used in the present study is commercially available and widely used image-editing software. Furthermore, this method is easy to use and less time consuming. The measurements obtained using this method are more precise and thus help in more accurate age estimation. Considering these benefits, the present study recommends the use of the digital method to assess translucency for age estimation. PMID:25709325

  4. Spatially explicit inference for open populations: estimating demographic parameters from camera-trap studies.

    Science.gov (United States)

    Gardner, Beth; Reppucci, Juan; Lucherini, Mauro; Royle, J Andrew

    2010-11-01

    We develop a hierarchical capture-recapture model for demographically open populations when auxiliary spatial information about location of capture is obtained. Such spatial capture-recapture data arise from studies based on camera trapping, DNA sampling, and other situations in which a spatial array of devices records encounters of unique individuals. We integrate an individual-based formulation of a Jolly-Seber type model with recently developed spatially explicit capture-recapture models to estimate density and demographic parameters for survival and recruitment. We adopt a Bayesian framework for inference under this model using the method of data augmentation, which is implemented in the software program WinBUGS. The model was motivated by a camera trapping study of Pampas cats Leopardus colocolo from Argentina, which we present as an illustration of the model in this paper. We provide estimates of density and the first quantitative assessment of vital rates for the Pampas cat in the High Andes. The precision of these estimates is poor, likely due to the sparse data set. Unlike conventional inference methods, which usually rely on asymptotic arguments, Bayesian inferences are valid in arbitrary sample sizes, and thus the method is ideal for the study of rare or endangered species for which small data sets are typical.

  5. Demirjian's method in the estimation of age: A study on human third molars.

    Science.gov (United States)

    Lewis, Amitha J; Boaz, Karen; Nagesh, K R; Srikant, N; Gupta, Neha; Nandita, K P; Manaktala, Nidhi

    2015-01-01

    The primary aim of this study was to estimate chronological age based on the stages of third molar development, following the eight-stage (A to H) method of Demirjian et al. (along with two modifications by Orhan); the secondary aim was to compare third molar development by sex and age. The sample consisted of 115 orthopantomograms from South Indian subjects with known chronological age and gender. Multiple regression analysis was performed with chronological age as the dependent variable and third molar root development as the independent variable. All statistical analyses were performed using the SPSS 11.0 package (IBM Corporation). No statistically significant differences were found in third molar development between males and females. Depending on the available number of wisdom teeth in an individual, R² varied from 0.21 to 0.48 for males and from 0.16 to 0.38 for females. New equations were derived for estimating chronological age. The chronological age of a South Indian individual between 14 and 22 years may be estimated based on the regression formulae. However, additional studies with a larger study population must be conducted to meet the need for population-based information on third molar development.
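    The regression approach described (chronological age as the dependent variable, third molar development as the independent variable) can be sketched with ordinary least squares; the stage scores and ages below are hypothetical, not the study's data:

```python
def fit_line(x, y):
    """Ordinary least squares for age = a + b * stage_score (single predictor)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Hypothetical data: Demirjian stages D..H coded 4..8, with known chronological ages
stages = [4, 5, 5, 6, 7, 7, 8, 8]
ages   = [14.2, 15.1, 15.8, 17.0, 18.4, 19.1, 20.6, 21.3]
a, b = fit_line(stages, ages)
estimated_age = a + b * 6  # predict age for a new subject at stage F (score 6)
```

In the actual study, separate multiple-regression equations were fitted per available wisdom tooth; this single-predictor version only illustrates the mechanics.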

  6. State of the art in evacuation time estimate studies for nuclear power plants

    International Nuclear Information System (INIS)

    Urbanik, T.E.; Jamison, J.D.

    1992-03-01

    In the event of a major accident at a commercial nuclear power station, exposure of the public to airborne radioactive materials can be prevented or greatly reduced by evacuating the area immediately surrounding the reactor site. Reactor licensees are required to conduct studies to estimate the time needed to evacuate the public from the area surrounding each nuclear power station. The results of such studies are used by regulatory personnel and emergency planners to assess the potential effectiveness of protective responses for the public. The time required to evacuate the public from a 10-mile emergency planning radius is estimated by analyzing the available transportation facilities and other relevant conditions within this radius. To support the analysis, data must be collected and assumptions must be made regarding the transportation facilities, the size and characteristics of the population, and other conditions in the planning zone. This report describes standard approaches and provides recommendations regarding the relevant information, assumptions and methods to be used in performing evacuation time estimate studies.

  7. Study on method of dose estimation for the Dual-moderated neutron survey meter

    International Nuclear Information System (INIS)

    Zhou, Bo; Li, Taosheng; Xu, Yuhai; Gong, Cunkui; Yan, Qiang; Li, Lei

    2013-01-01

    In order to study neutron dose measurement in high-energy radiation fields, a dual-moderated survey meter covering spectra with mean energies from 1 keV to 300 MeV has been developed. Measurement results of some survey meters depend on the neutron spectrum characteristics in different neutron radiation fields, so the responses to various neutron spectra should be studied in order to obtain a more reasonable dose. In this paper the responses of the survey meter were calculated for different neutron spectra taken from IAEA Technical Reports Series No. 318 and other references. Finally, one dose estimation method was determined. For this method, the reading per H*(10) ranges from about 0.7 to 1.6 over neutron mean energies from 50 keV to 300 MeV. -- Highlights: • We studied a novel high-energy neutron survey meter. • Response characteristics of the survey meter were calculated using a series of neutron spectra. • One significant advantage of the survey meter is that it can provide the mean energy of the radiation field. • Dose estimate deviation can be corrected. • The range of the corrected reading per H*(10) is about 0.7–1.6 for neutron fluence mean energies from 0.05 MeV to 300 MeV.

  8. Estimation of Finite Population Ratio When Other Auxiliary Variables are Available in the Study

    Directory of Open Access Journals (Sweden)

    Jehad Al-Jararha

    2014-12-01

    The estimation of the population total $t_y$ by using one or more auxiliary variables, and of the population ratio $\theta_{xy}=t_y/t_x$, where $t_x$ is the population total for the auxiliary variable $X$, for a finite population are heavily discussed in the literature. In this paper, the idea of estimating the finite population ratio $\theta_{xy}$ is extended to use the availability of an auxiliary variable $Z$ in the study, where such an auxiliary variable is not used in the definition of the population ratio. This idea is supported by the fact that the variable $Z$ may be more highly correlated with the variable of interest $Y$ than the variable $X$ is. The availability of such an auxiliary variable can be used to improve the precision of the estimation of the population ratio. To our knowledge, this idea has not been discussed in the literature. The bias, variance and mean squared error are given for our approach. In simulations from a real data set, the empirical relative bias and the empirical relative mean squared error are computed for our approach and for different estimators proposed in the literature for estimating the population ratio $\theta_{xy}$. Both the analytical and simulation results show that, with suitable choices, our approach gives negligible bias and has a smaller mean squared error.
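    The basic ratio estimator and its empirical relative bias and mean squared error can be illustrated by simulation; the population below is synthetic and the design (simple random sampling of a proportional Y-on-X population) is an assumption for illustration, not the paper's exact setup:

```python
import random

def ratio_estimate(y_sample, x_sample):
    """Estimate the population ratio theta = t_y / t_x from a simple random sample."""
    return sum(y_sample) / sum(x_sample)

random.seed(1)
# Hypothetical finite population where Y is roughly proportional to X
N = 1000
x_pop = [random.uniform(10, 50) for _ in range(N)]
y_pop = [2.0 * xi + random.gauss(0, 3) for xi in x_pop]
theta = sum(y_pop) / sum(x_pop)  # true population ratio

# Empirical relative bias and MSE over repeated samples of size n
n, reps = 50, 2000
estimates = []
for _ in range(reps):
    idx = random.sample(range(N), n)
    estimates.append(ratio_estimate([y_pop[i] for i in idx],
                                    [x_pop[i] for i in idx]))
rel_bias = (sum(estimates) / reps - theta) / theta
mse = sum((e - theta) ** 2 for e in estimates) / reps
```

The paper's contribution is to sharpen this baseline by exploiting a second auxiliary variable $Z$ that is not part of the ratio's definition.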

  9. The Boehringer Ingelheim employee study (Part 2): 10-year cardiovascular diseases risk estimation.

    Science.gov (United States)

    Kempf, K; Martin, S; Döhring, C; Dugi, K; Haastert, B; Schneider, M

    2016-10-01

    Cardiovascular disease (CVD) may cause an economic burden to companies, but CVD risk estimations specific to working populations are lacking. To estimate the 10-year CVD risk in the Boehringer Ingelheim (BI) employee cohort and analyse the potential effect of hypothetical risk reduction interventions. We estimated CVD risk using the Framingham (FRS), PROCAM (PRS) and Reynolds (RRS) risk scores, using cross-sectional baseline data on BI Pharma employees collected from 2005 to 2011. Results were compared using Fisher's exact and Wilcoxon tests. The predictive ability of the score estimates was assessed using receiver-operating characteristics analyses. Among the 4005 study subjects, we estimated 10-year CVD risks of 35% (FRS), 9% (PRS) and 6% (RRS) for men and 10% (FRS), 4% (PRS) and 1% (RRS) for women. One hundred and thirty-four (6%) men and 111 (6%) women employees had current CVD. The best predictors of prevalent CVD were the FRS and the RRS for men [area-under-the-curve 0.62 (0.57-0.67) for both]. A hypothetical intervention that would improve systolic blood pressure, HbA1c (for diabetes), C-reactive protein, triglycerides and total and high-density lipoprotein cholesterol by 10% each would potentially reduce expected CVD cases by 36-41% in men and 30-45% in women, and if smoking cessation is incorporated, by 39-45% and 30-55%, respectively, depending on the pre-intervention risk score. There was a substantial risk of developing CVD in this working cohort. Occupational health programmes with lifestyle interventions for high-risk individuals may be an effective risk reduction measure. © The Author 2016. Published by Oxford University Press on behalf of the Society of Occupational Medicine. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  10. Use of multiple data sources to estimate hepatitis C seroprevalence among prisoners: A retrospective cohort study.

    Directory of Open Access Journals (Sweden)

    Kathryn J Snow

    Hepatitis C is a major cause of preventable morbidity and mortality. Prisoners are a key population for hepatitis C control programs, and with the advent of highly effective therapies, prisons are increasingly important sites for hepatitis C diagnosis and treatment. Accurate estimates of hepatitis C prevalence among prisoners are needed in order to plan and resource service provision; however, many prevalence estimates are based on surveys compromised by limited and potentially biased participation. We aimed to compare estimates derived from three different data sources, and to assess whether the use of self-report as a supplementary data source may help researchers assess the risk of selection bias. We used three data sources to estimate the prevalence of hepatitis C antibodies in a large cohort of Australian prisoners: prison medical records, self-reported status during a face-to-face interview prior to release from prison, and data from a statewide notifiable conditions surveillance system. Of 1,315 participants, 33.8% had at least one indicator of hepatitis C seropositivity; however, less than one third of these (9.5% of the entire cohort) were identified by all three data sources. Among participants of known status, self-report had a sensitivity of 80.1% and a positive predictive value of 97.8%. Any one data source used in isolation would have under-estimated the prevalence of hepatitis C in this cohort. Using multiple data sources in studies of hepatitis C seroprevalence among prisoners may improve case detection and help researchers assess the risk of selection bias due to non-participation in serological testing.
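    The sensitivity and positive predictive value of self-report against a reference standard can be computed as follows; the data here are hypothetical, not the study's records:

```python
def sensitivity_ppv(self_report, reference):
    """Sensitivity and positive predictive value of self-report vs. a reference standard."""
    tp = sum(1 for s, r in zip(self_report, reference) if s and r)
    fn = sum(1 for s, r in zip(self_report, reference) if not s and r)
    fp = sum(1 for s, r in zip(self_report, reference) if s and not r)
    return tp / (tp + fn), tp / (tp + fp)

# Hypothetical paired statuses: 1 = hepatitis C positive
reference   = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
self_report = [1, 1, 1, 1, 0, 0, 0, 0, 0, 1]
sens, ppv = sensitivity_ppv(self_report, reference)
print(sens, ppv)  # 0.8 0.8
```

In the study itself, the "reference" was the composite of all three data sources among participants of known status, yielding the reported 80.1% sensitivity and 97.8% PPV.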

  11. Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA.

    Science.gov (United States)

    Kelly, Brendan J; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D; Collman, Ronald G; Bushman, Frederic D; Li, Hongzhe

    2015-08-01

    The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence-absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω²). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
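    A simulation-based PERMANOVA power estimate of the kind described can be sketched as follows. This toy version generates Euclidean distances between synthetic point clouds rather than simulating distance matrices directly as the paper's method does, and the group sizes, effect, and permutation count are illustrative assumptions:

```python
import itertools
import random

def pseudo_f(dist, groups):
    """PERMANOVA pseudo-F from a pairwise distance matrix (Anderson-style partition)."""
    n = len(groups)
    ss_total = sum(dist[i][j] ** 2 for i, j in itertools.combinations(range(n), 2)) / n
    ss_within = 0.0
    labels = set(groups)
    for g in labels:
        idx = [i for i in range(n) if groups[i] == g]
        ss_within += sum(dist[i][j] ** 2
                         for i, j in itertools.combinations(idx, 2)) / len(idx)
    a = len(labels)
    return ((ss_total - ss_within) / (a - 1)) / (ss_within / (n - a))

def permanova_p(dist, groups, n_perm=199):
    """Permutation p-value: shuffle group labels, recompute pseudo-F."""
    f_obs = pseudo_f(dist, groups)
    hits = sum(1 for _ in range(n_perm)
               if pseudo_f(dist, random.sample(groups, len(groups))) >= f_obs)
    return (hits + 1) / (n_perm + 1)

def simulate_power(n_per_group=10, effect=1.5, dims=5, reps=50, alpha=0.05):
    """Estimate power: fraction of simulated studies rejecting at level alpha."""
    rejections = 0
    groups = [0] * n_per_group + [1] * n_per_group
    for _ in range(reps):
        pts = [[random.gauss(effect * g, 1.0) for _ in range(dims)] for g in groups]
        dist = [[sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5 for q in pts]
                for p in pts]
        if permanova_p(dist, groups) <= alpha:
            rejections += 1
    return rejections / reps

random.seed(7)
power = simulate_power()
```

With a smaller `effect`, `power` drops toward `alpha`, which is the trade-off the paper's framework quantifies via ω².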

  12. Estimating Water Supply Arsenic Levels in the New England Bladder Cancer Study

    Science.gov (United States)

    Freeman, Laura E. Beane; Lubin, Jay H.; Airola, Matthew S.; Baris, Dalsu; Ayotte, Joseph D.; Taylor, Anne; Paulu, Chris; Karagas, Margaret R.; Colt, Joanne; Ward, Mary H.; Huang, An-Tsun; Bress, William; Cherala, Sai; Silverman, Debra T.; Cantor, Kenneth P.

    2011-01-01

    Background: Ingestion of inorganic arsenic in drinking water is recognized as a cause of bladder cancer when levels are relatively high (≥ 150 µg/L). The epidemiologic evidence is less clear at the low-to-moderate concentrations typically observed in the United States. Accurate retrospective exposure assessment over a long time period is a major challenge in conducting epidemiologic studies of environmental factors and diseases with long latency, such as cancer. Objective: We estimated arsenic concentrations in the water supplies of 2,611 participants in a population-based case–control study in northern New England. Methods: Estimates covered the lifetimes of most study participants and were based on a combination of arsenic measurements at the homes of the participants and statistical modeling of arsenic concentrations in the water supply of both past and current homes. We assigned a residential water supply arsenic concentration for 165,138 (95%) of the total 173,361 lifetime exposure years (EYs) and a workplace water supply arsenic level for 85,195 EYs (86% of reported occupational years). Results: Three methods accounted for 93% of the residential estimates of arsenic concentration: direct measurement of water samples (27%; median, 0.3 µg/L; range, 0.1–11.5), statistical models of water utility measurement data (49%; median, 0.4 µg/L; range, 0.3–3.3), and statistical models of arsenic concentrations in wells using aquifers in New England (17%; median, 1.6 µg/L; range, 0.6–22.4). Conclusions: We used a different validation procedure for each of the three methods, and found our estimated levels to be comparable with available measured concentrations. This methodology allowed us to calculate potential drinking water exposure over long periods. PMID:21421449

  13. Estimating the total number of susceptibility variants underlying complex diseases from genome-wide association studies.

    Directory of Open Access Journals (Sweden)

    Hon-Cheong So

    2010-11-01

    Recently, genome-wide association studies (GWAS) have identified numerous susceptibility variants for complex diseases. In this study we propose several approaches to estimate the total number of variants underlying these diseases. We assume that the variance explained by genetic markers (Vg) follows an exponential distribution, which is justified by previous studies on theories of adaptation. Our aim is to fit the observed distribution of Vg from GWAS to its theoretical distribution. The number of variants is obtained as the heritability divided by the estimated mean of the exponential distribution. In practice, due to limited sample sizes, there is insufficient power to detect variants with small effects; therefore, power was taken into account in the fitting. Besides considering the most significant variants, we also tried relaxing the significance threshold, allowing more markers to be fitted. The effects of false positive variants were removed by considering the local false discovery rates. In addition, we developed an alternative approach that directly fits the z-statistics from GWAS to their theoretical distribution. In all cases, the "winner's curse" effect was corrected analytically. Confidence intervals were also derived. Simulations were performed to compare and verify the performance of the different estimators (which incorporate various means of winner's curse correction) and the coverage of the proposed analytic confidence intervals. Our methodology requires only summary statistics and is able to handle both binary and continuous traits. Finally, we applied the methods to a few real disease examples (lipid traits, type 2 diabetes and Crohn's disease) and estimated that hundreds to nearly a thousand variants underlie these traits.
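    The core identity (number of variants = heritability divided by the mean of the exponential Vg distribution) can be sketched as below. This toy version uses the raw sample mean and deliberately ignores the power and winner's-curse corrections that are central to the full method, and the Vg values are hypothetical:

```python
def estimate_num_variants(heritability, observed_vg):
    """Naive variant-count estimate under Vg ~ Exponential(mean m):
    n_hat = heritability / m, with m taken as the raw sample mean
    (no truncation, power, or winner's-curse correction)."""
    m = sum(observed_vg) / len(observed_vg)
    return heritability / m

# Hypothetical GWAS hits, each explaining a small fraction of phenotypic variance
vg_hits = [0.004, 0.003, 0.002, 0.002, 0.001, 0.001, 0.001]
n_hat = estimate_num_variants(heritability=0.5, observed_vg=vg_hits)
print(round(n_hat))  # 250
```

Because GWAS only detects the upper tail of the Vg distribution, the raw mean is upwardly biased and this naive count is an underestimate; correcting for that truncation is exactly what the paper's fitting procedure adds.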

  14. In-vivo studies of new vector velocity and adaptive spectral estimators in medical ultrasound

    DEFF Research Database (Denmark)

    Hansen, Kristoffer Lindskov

    2010-01-01

    New ultrasound techniques for blood flow estimation have been investigated in-vivo. These are vector velocity estimators (Transverse Oscillation, Synthetic Transmit Aperture, Directional Beamforming and Plane Wave Excitation) and adaptive spectral estimators (Blood spectral Power Capon and Blood...

  15. Negative control exposure studies in the presence of measurement error: implications for attempted effect estimate calibration.

    Science.gov (United States)

    Sanderson, Eleanor; Macdonald-Wallis, Corrie; Davey Smith, George

    2018-04-01

    Negative control exposure studies are increasingly being used in epidemiological studies to strengthen causal inference regarding an exposure-outcome association when unobserved confounding is thought to be present. Negative control exposure studies contrast the magnitude of association of the negative control, which has no causal effect on the outcome but is associated with the unmeasured confounders in the same way as the exposure, with the magnitude of the association of the exposure with the outcome. A markedly larger effect of the exposure on the outcome than the negative control on the outcome strengthens inference that the exposure has a causal effect on the outcome. We investigate the effect of measurement error in the exposure and negative control variables on the results obtained from a negative control exposure study. We do this in models with continuous and binary exposure and negative control variables using analysis of the bias of the estimated coefficients and Monte Carlo simulations. Our results show that measurement error in either the exposure or negative control variables can bias the estimated results from the negative control exposure study. Measurement error is common in the variables used in epidemiological studies; these results show that negative control exposure studies cannot be used to precisely determine the size of the effect of the exposure variable, or adequately adjust for unobserved confounding; however, they can be used as part of a body of evidence to aid inference as to whether a causal effect of the exposure on the outcome is present.
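    The Monte Carlo logic can be sketched as follows: a confounder U drives the exposure, the negative control, and the outcome, and unequal classical measurement error on the two observed variables attenuates their associations with the outcome by different amounts, distorting the exposure/negative-control contrast. All parameter values below are illustrative assumptions, not those of the paper:

```python
import random

def slope(x, y):
    """Univariate OLS slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
        sum((a - mx) ** 2 for a in x)

random.seed(3)
n = 20000
u  = [random.gauss(0, 1) for _ in range(n)]           # unmeasured confounder
x  = [ui + random.gauss(0, 1) for ui in u]            # true exposure
nc = [ui + random.gauss(0, 1) for ui in u]            # negative control (no effect on Y)
y  = [0.5 * xi + 1.0 * ui + random.gauss(0, 1) for xi, ui in zip(x, u)]

# Unequal classical measurement error on the two observed variables
err_x, err_nc = 0.5, 1.5
x_obs  = [xi + random.gauss(0, err_x) for xi in x]
nc_obs = [ci + random.gauss(0, err_nc) for ci in nc]

b_x_true, b_nc_true = slope(x, y), slope(nc, y)
b_x_obs,  b_nc_obs  = slope(x_obs, y), slope(nc_obs, y)
# Both observed associations are attenuated toward zero, but by different
# factors, so the exposure-vs-negative-control contrast is distorted.
```

Here the larger error on the negative control shrinks its association more than the exposure's, exaggerating the apparent exposure effect relative to the control, which is the paper's warning against treating the contrast as a precise calibration.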

  16. Studies on health risks to persons exposed to plutonium

    International Nuclear Information System (INIS)

    Voelz, G.L.; Stebbings, J.H. Jr.; Healy, J.W.; Hempelmann, L.H.

    1979-01-01

    Two studies on Los Alamos workers exposed to plutonium have shown no increase in cancers of the lung, bone, and liver, three principal cancers of interest following plutonium deposition. A clinical study of 26 workers exposed 32 years ago shows no cases of cancer other than two skin cancers that were excised successfully. A mortality study of 224 workers, all persons with estimated deposition of 10 nCi or more in 1974, showed no excess of mortality due to any cause. No bone or liver cancers were present, while one death due to lung cancer was observed as compared to an expected three cases. These negative findings on such small groups are not able to prove or disprove the validity of commonly used risk estimates as recommended in the 1972 BEIR and 1977 UNSCEAR reports, but the data do indicate that much higher risk estimates are not warranted.

  17. Studies on health risks to persons exposed to plutonium

    Energy Technology Data Exchange (ETDEWEB)

    Voelz, G.L.; Stebbings, J.H. Jr.; Healy, J.W.; Hempelmann, L.H.

    1979-01-01

    Two studies on Los Alamos workers exposed to plutonium have shown no increase in cancers of the lung, bone, and liver, three principal cancers of interest following plutonium deposition. A clinical study of 26 workers exposed 32 years ago shows no cases of cancer other than two skin cancers that were excised successfully. A mortality study of 224 workers, all persons with estimated deposition of 10 nCi or more in 1974, showed no excess of mortality due to any cause. No bone or liver cancers were present, while one death due to lung cancer was observed as compared to an expected three cases. These negative findings on such small groups are not able to prove or disprove the validity of commonly used risk estimates as recommended in the 1972 BEIR and 1977 UNSCEAR reports, but the data do indicate that much higher risk estimates are not warranted.

  18. Delayed Cystectomy for T1G3 Transitional Cell Carcinoma (TCC) of the Urinary Bladder, NCI Retrospective Case Series

    International Nuclear Information System (INIS)

    FAKHR, I.; EL-HOSSIENY, H.; SALAMA, A.

    2008-01-01

    Aim: We aim to evaluate the National Cancer Institute (NCI) treatment protocol and its outcome regarding recurrence, progression and survival in patients with T1G3 urinary bladder transitional cell carcinoma. Patients and Methods: In a retrospective study, between January 2001 and December 2007, all 34 patients with T1G3 bladder transitional cell carcinoma (TCC), after complete transurethral resection (TURBT), received intravesical BCG as adjuvant therapy. A conservative approach was adopted, whereby those with superficial recurrences were eligible for repeat TURBT, with delayed cystectomy reserved for progression to muscle invasion. Overall, recurrence-free, and progression-free survival were analyzed. Results: Thirty-three patients were included: 29 males and 4 females. The mean age was 61 years (range 35-89 years). Final analysis was made at a median follow-up of 15 months (range 3-68 months, mean 18 months) for survival. Eleven (33.3%) patients had multifocal tumors. Associated schistosomiasis was present in 12 (36.6%) patients. Twenty-two (66.67%) patients showed recurrence. Eleven of these 22 (50.0%) patients progressed to muscle invasion and underwent radical cystectomy. Ten of the 33 (30.3%) patients received post-cystectomy radiotherapy. Two (20.0%) of them were staged as TNM stage II, 6 (60.0%) as TNM stage III and 2 (20.0%) as TNM stage IV. Eight (72.7%) of these 11 patients had post-cystectomy radiotherapy alone, while the 2 (6.0%) other patients with stage IV had adjuvant concomitant Cisplatin and Gemcitabine chemotherapy. Five (14%) of the patients who underwent cystectomy died of TCC. Three (60%) patients died from metastatic disease (to lung, liver and bone), one patient died from advanced locoregional disease and another patient died from postoperative complications. Among those patients who received radiotherapy alone, 62.5% are alive. Although we report a biologically more aggressive behavior of T1G3 than that reported by some authors

  19. Hydrograph sensitivity to estimates of map impervious cover: a WinHSPF BASINS case study

    Science.gov (United States)

    Endreny, Theodore A.; Somerlot, Christopher; Hassett, James M.

    2003-04-01

    The BASINS geographic information system hydrologic toolkit was designed to compute total maximum daily loads, which are often derived by combining water quantity estimates with pollutant concentration estimates. In this paper the BASINS toolkit PLOAD and WinHSPF sub-models are briefly described, and then a 0.45 km² headwater watershed in the New York Croton River area is used for a case study illustrating a full WinHSPF implementation. The goal of the Croton study was to determine the sensitivity of WinHSPF hydrographs to changes in land cover map inputs. This scenario occurs when scaling the WinHSPF model from the smaller 0.45 km² watershed to the larger 1000 km² management basin of the entire Croton area. Methods used to test model sensitivity included first calibrating the WinHSPF hydrograph using research-monitored precipitation and discharge data together with land cover data of high spatial resolution and accuracy for impervious and pervious areas, and then swapping three separate land cover files, known as GIRAS, MRLC, and DOQQ data, into the calibrated model. Research results indicated that the land cover swapping produced peak flow sensitivity in December 2001 hydrographs ranging from 35% underestimation to 20% overestimation, and that errors in land-cover-derived runoff ratios for storm totals and peak flows tracked with the land cover estimates of impervious area.

  20. Reliability of Semiautomated Computational Methods for Estimating Tibiofemoral Contact Stress in the Multicenter Osteoarthritis Study

    Directory of Open Access Journals (Sweden)

    Donald D. Anderson

    2012-01-01

    Recent findings suggest that contact stress is a potent predictor of subsequent symptomatic osteoarthritis development in the knee. However, much larger numbers of knees (likely on the order of hundreds, if not thousands) need to be reliably analyzed to achieve the statistical power necessary to clarify this relationship. This study assessed the reliability of new semiautomated computational methods for estimating contact stress in knees from large population-based cohorts. Ten knees of subjects from the Multicenter Osteoarthritis Study were included. Bone surfaces were manually segmented from sequential 1.0 Tesla magnetic resonance imaging slices by three individuals on two nonconsecutive days. Four individuals then registered the resulting bone surfaces to corresponding bone edges on weight-bearing radiographs, using a semi-automated algorithm. Discrete element analysis methods were used to estimate contact stress distributions for each knee. Segmentation and registration reliabilities (day-to-day and inter-rater) for peak and mean medial and lateral tibiofemoral contact stress were assessed with Shrout-Fleiss intraclass correlation coefficients (ICCs). The segmentation and registration steps of the modeling approach were found to have excellent day-to-day (ICC 0.93–0.99) and good inter-rater (ICC 0.84–0.97) reliability. This approach for estimating compartment-specific tibiofemoral contact stress appears to be sufficiently reliable for use in large population-based cohorts.
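    A Shrout-Fleiss ICC of the kind used here, ICC(2,1) (two-way random effects, absolute agreement, single rater), can be computed from ANOVA mean squares; the rating data below are hypothetical, not the study's measurements:

```python
def icc2_1(scores):
    """Shrout-Fleiss ICC(2,1): two-way random effects, absolute agreement,
    single rater. `scores` is a list of per-subject lists, one value per rater."""
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    subj_means = [sum(row) / k for row in scores]
    rater_means = [sum(row[j] for row in scores) / n for j in range(k)]
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_rater = n * sum((m - grand) ** 2 for m in rater_means)
    ss_total = sum((v - grand) ** 2 for row in scores for v in row)
    ss_err = ss_total - ss_subj - ss_rater
    msr = ss_subj / (n - 1)                 # between-subjects mean square
    msc = ss_rater / (k - 1)                # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical peak contact stress (MPa) for 5 knees, each rated by 3 raters
stress = [
    [3.1, 3.0, 3.2],
    [4.5, 4.4, 4.6],
    [2.2, 2.1, 2.3],
    [5.0, 5.1, 5.0],
    [3.8, 3.7, 3.9],
]
print(round(icc2_1(stress), 2))  # 0.99
```

Values near 1 indicate that rater and occasion contribute little variance relative to true between-knee differences, matching the "excellent" reliability reported.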

  1. A cross-sectional study of mathematics achievement, estimation skills, and academic self-perception in students of varying ability.

    Science.gov (United States)

    Montague, Marjorie; van Garderen, Delinda

    2003-01-01

    This study investigated students' mathematics achievement, estimation ability, use of estimation strategies, and academic self-perception. Students with learning disabilities (LD), average achievers, and intellectually gifted students (N = 135) in fourth, sixth, and eighth grade participated in the study. They were assessed to determine their mathematics achievement, ability to estimate discrete quantities, knowledge and use of estimation strategies, and perception of academic competence. The results indicated that the students with LD performed significantly lower than their peers on the math achievement measures, as expected, but viewed themselves to be as academically competent as the average achievers did. Students with LD and average achievers scored significantly lower than gifted students on all estimation measures, but they differed significantly from one another only on the estimation strategy use measure. Interestingly, even gifted students did not seem to have a well-developed understanding of estimation and, like the other students, did poorly on the first estimation measure. The accuracy of their estimates seemed to improve, however, when students were asked open-ended questions about the strategies they used to arrive at their estimates. Although students with LD did not differ from average achievers in their estimation accuracy, they used significantly fewer effective estimation strategies. Implications for instruction are discussed.

  2. Estimating the potential impacts of a nuclear reactor accident: methodology and case studies

    International Nuclear Information System (INIS)

    Cartwright, J.V.; Beemiller, R.M.; Trott, E.A. Jr.; Younger, J.M.

    1982-04-01

    This monograph describes an industrial impact model that can be used to estimate the regional industry-specific impacts of disasters. Special attention is given to the impacts of possible nuclear reactor accidents. The monograph also presents three applications of the model. The impacts estimated in the case studies are based on (1) general information and reactor-specific data, supplied by the US Nuclear Regulatory Commission (NRC), (2) regional economic models derived from the Regional Input-Output Modeling System (RIMS II) developed at the Bureau of Economic Analysis (BEA), and (3) additional methodology developed especially for taking into account the unique characteristics of a nuclear reactor accident with respect to regional industrial activity
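    Regional input-output impact models of the RIMS II type rest on the Leontief identity x = (I − A)⁻¹ d. A minimal sketch with a hypothetical 3-industry coefficient matrix (illustrative numbers, not BEA data):

```python
import numpy as np

# Hypothetical technical-coefficients matrix A for a 3-industry region
# (RIMS II-style); A[i, j] is the dollar input from industry i required
# per dollar of industry j's output.
A = np.array([
    [0.10, 0.20, 0.05],
    [0.15, 0.05, 0.10],
    [0.05, 0.10, 0.15],
])

# Final-demand shock d, e.g. output lost because of a reactor accident.
d = np.array([1.0, 0.0, 0.0])   # $1M drop in industry 1's final demand

# Total (direct + indirect) industry-specific impacts: x = (I - A)^{-1} d
x = np.linalg.solve(np.eye(3) - A, d)
```

    Here x[0] exceeds 1 because industry 1 also loses the intermediate sales tied to the other industries' reduced output; the remaining entries of x are the indirect impacts that a regional model reports through its multipliers.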

  3. Long-term intercomparison of Spanish environmental dosimetry services. Study of transit dose estimations

    International Nuclear Information System (INIS)

    Duch, Ma Amor; Carlos Saez-Vergara, Jose; Ginjaume, Merce; Gomez, Candelas; Maria Gonzalez-Leiton, Ana; Herrero, Javier; Jose de Lucas, Ma; Rodriguez, Rafael; Marugan, Immaculada; Salas, Rosario

    2008-01-01

    This paper presents the layout and results of a three-year follow-up of a national intercomparison campaign organized on a voluntary basis among the Spanish laboratories in charge of environmental monitoring at and in the vicinity of Spanish nuclear installations. The dosemeters were exposed in the field at an environmental reference station with a known ambient dose equivalent and controlled meteorological parameters. The study aimed to verify the consistency of the different laboratories in estimating the ambient dose equivalent in realistic fields and to evaluate the influence of two different procedures for estimating the transit dose accrued while the dosemeters were transferred between the dosimetric laboratory and the monitored site. All the results were within 20% of the reference doses for all the dosemeters tested, and in most cases they were within 10%.

  4. The Additive Risk Model for Estimation of Effect of Haplotype Match in BMT Studies

    DEFF Research Database (Denmark)

    Scheike, Thomas; Martinussen, T; Zhang, MJ

    2011-01-01

    leads to a missing data problem. We show how Aalen's additive risk model can be applied in this setting with the benefit that the time-varying haplomatch effect can be easily studied. This problem has not been considered before, and the standard approach where one would use the expected-maximization (EM......) algorithm cannot be applied for this model because the likelihood is hard to evaluate without additional assumptions. We suggest an approach based on multivariate estimating equations that are solved using a recursive structure. This approach leads to an estimator where the large sample properties can...... be developed using product-integration theory. Small sample properties are investigated using simulations in a setting that mimics the motivating haplomatch problem....

  5. A new technique for testing distribution of knowledge and to estimate sampling sufficiency in ethnobiology studies.

    Science.gov (United States)

    Araújo, Thiago Antonio Sousa; Almeida, Alyson Luiz Santos; Melo, Joabe Gomes; Medeiros, Maria Franco Trindade; Ramos, Marcelo Alves; Silva, Rafael Ricardo Vasconcelos; Almeida, Cecília Fátima Castelo Branco Rangel; Albuquerque, Ulysses Paulino

    2012-03-15

    We propose a new quantitative measure that enables the researcher to make decisions and test hypotheses about the distribution of knowledge in a community and estimate the richness and sharing of information among informants. In our study, this measure has two levels of analysis: intracultural and intrafamily. Using data collected in northeastern Brazil, we evaluated how these new estimators of richness and sharing behave for different categories of use. We observed trends in the distribution of the characteristics of informants. We were also able to evaluate how outliers interfere with these analyses and how other analyses may be conducted using these indices, such as determining the distance between the knowledge of a community and that of experts, as well as exhibiting the importance of these individuals' communal information of biological resources. One of the primary applications of these indices is to supply the researcher with an objective tool to evaluate the scope and behavior of the collected data.

  6. Estimation of the total number of mast cells in the human umbilical cord. A methodological study

    DEFF Research Database (Denmark)

    Engberg Damsgaard, T M; Windelborg Nielsen, B; Sørensen, Flemming Brandt

    1992-01-01

    The aim of the present study was to estimate the total number of mast cells in the human umbilical cord. Using 50 microns-thick paraffin sections, made from a systematic random sample of umbilical cord, the total number of mast cells per cord was estimated using a combination of the optical...... disector and fractionated sampling. The mast cell of the human umbilical cord was found in Wharton's jelly, most frequently in close proximity to the three blood vessels. No consistent pattern of variation in mast cell numbers from the fetal end of the umbilical cord towards the placenta was seen....... The total number of mast cells found in the umbilical cord was 5,200,000 (median), range 2,800,000-16,800,000 (n = 7), that is 156,000 mast cells per gram umbilical cord (median), range 48,000-267,000. Thus, the umbilical cord constitutes an adequate source of mast cells for further investigation...
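    The fractionator estimate behind such counts is simply the number of cells counted divided by the product of the sampling fractions. A toy illustration with hypothetical fractions (not the study's actual sampling scheme):

```python
# Fractionator principle: count cells in a known fraction of the organ and
# divide by the product of the sampling fractions. All numbers here are
# hypothetical, not the study's sampling scheme.
section_fraction = 1 / 50    # every 50th section of the cord sampled
area_fraction = 0.10         # 10% of each sampled section's area examined
height_fraction = 0.5        # optical disector height / section thickness
counted = 26                 # mast cells counted inside the disectors

total_estimate = counted / (section_fraction * area_fraction * height_fraction)
# ~ 26,000 mast cells for the whole cord
```

    Because every stage is a known fraction, the estimate needs no assumptions about cell size, shape, or orientation, which is what makes the disector/fractionator combination design-unbiased.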

  7. How does spatial study design influence density estimates from spatial capture-recapture models?

    Directory of Open Access Journals (Sweden)

    Rahel Sollmann

    When estimating population density from data collected on non-invasive detector arrays, recently developed spatial capture-recapture (SCR) models present an advance over non-spatial models by accounting for individual movement. While these models should be more robust to changes in trapping designs, they have not been well tested. Here we investigate how the spatial arrangement and size of the trapping array influence parameter estimates for SCR models. We analysed black bear data collected with 123 hair snares using an SCR model accounting for differences in detection and movement between sexes and across the trapping occasions. To see how the size of the trap array and trap dispersion influence parameter estimates, we repeated the analysis for data from subsets of traps: 50% chosen at random, 50% in the centre of the array and 20% in the south of the array. Additionally, we simulated and analysed data under a suite of trap designs and home range sizes. In the black bear study, we found that results were similar across trap arrays, except when only 20% of the array was used. Black bear density was approximately 10 individuals per 100 km2. Our simulation study showed that SCR models performed well as long as the extent of the trap array was similar to or larger than the extent of individual movement during the study period, and movement was at least half the distance between traps. SCR models performed well across a range of spatial trap setups and animal movements. Contrary to non-spatial capture-recapture models, they do not require the trapping grid to cover an area several times the average home range of the studied species. This renders SCR models more appropriate for the study of wide-ranging mammals and more flexible for designing studies targeting multiple species.
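    The detection model at the core of most SCR analyses is a half-normal function of the distance between a trap and an animal's latent activity centre. A minimal sketch (parameter values hypothetical):

```python
import math

def detection_prob(p0, sigma, d):
    """Half-normal SCR detection function: baseline detection probability p0
    decays with distance d (m) from the animal's activity centre; sigma (m)
    scales with home-range size."""
    return p0 * math.exp(-d ** 2 / (2 * sigma ** 2))
```

    Density is then estimated by integrating such detection probabilities over a grid of possible activity centres. Traps spaced much farther apart than sigma carry little information about movement, which is consistent with the finding that trap spacing should be no more than about twice the movement scale.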

  8. 75 FR 79009 - Proposed Collection; Comment Request; Questionnaire Cognitive Interview and Pretesting (NCI)

    Science.gov (United States)

    2010-12-17

    ... participant in depth about interpretations of questions, recall processes used to answer them, and adequacy of... no costs to respondents other than their time. The total estimated annualized burden hours are 600...) Pilot 1,200 1 30/60 (0.5) 600.0 Household interviews. Totals 3,600 3,600.0 The estimated total annual...

  9. 78 FR 44136 - Submission for OMB review; 30-day Comment Request: National Cancer Institute (NCI) Cancer...

    Science.gov (United States)

    2013-07-23

    ... award performance and the effectiveness of the program as a whole. The respondents are the Principal Investigators of the awards, along with their institutional business officials. The awards are administered by... costs to respondents other than their time. The estimated annualized burden hours are 72. Estimated...

  10. Basic study on relationship between estimated rate constants and noise in FDG kinetic analysis

    International Nuclear Information System (INIS)

    Kimura, Yuichi; Toyama, Hinako; Senda, Michio.

    1996-01-01

    For accurate estimation of the rate constants in an 18F-FDG dynamic study, the shape of the estimation function (Φ) is crucial. In this investigation, the relationship between the noise level in the tissue time activity curve and the shape of the least-squares estimation function, which is the sum of squared errors between a function of the model parameters and the measured data, was calculated for the 3-parameter model of 18F-FDG. In the first simulation, using an actual plasma time activity curve, true tissue curves were generated from known sets of rate constants ranging over 0.05 ≤ k1 ≤ 0.15, 0.1 ≤ k2 ≤ 0.2 and 0.01 ≤ k3 ≤ 0.1 in steps of 0.01. This procedure was repeated under various noise levels in the tissue time activity curve, from 1 to 8% of the maximum value of the tissue activity. In the second simulation, plasma and tissue time activity curves from a clinical 18F-FDG dynamic study were used to calculate Φ. In the noise-free case, because the global minimum is well separated from neighboring local minima, it was easy to find the optimum point. However, with increasing noise level, the optimum point became buried among many neighboring local minima, making it difficult to find. The optimum point was found within 20% of the convergence point of a standard non-linear optimization method. The shape of Φ for the clinical data was similar to that at a noise level of 3 or 5% in the first simulation. Therefore, a direct search within the area extending 20% from the result of the usual non-linear curve fitting procedure is recommended for accurate estimation of the rate constants. (author)
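    The estimation function Φ can be sketched as the sum of squared differences between the measured tissue curve and the convolution of the plasma input with the model impulse response. This is a sketch under the usual irreversible 2-tissue-compartment assumptions for the 3-parameter FDG model; variable names are illustrative:

```python
import numpy as np

def phi(params, t, cp, ct_meas):
    """Least-squares objective for the irreversible 3-parameter FDG model:
    the model tissue curve is the convolution of the plasma input cp with
    the impulse response of the 2-tissue compartment model."""
    k1, k2, k3 = params
    dt = t[1] - t[0]                                   # assumes a uniform grid
    h = (k1 / (k2 + k3)) * (k3 + k2 * np.exp(-(k2 + k3) * t))
    ct_model = np.convolve(h, cp)[: len(t)] * dt
    return np.sum((ct_model - ct_meas) ** 2)
```

    Evaluating phi over a grid of rate-constant values for increasingly noisy ct_meas reproduces the picture the abstract describes: a clean global minimum without noise, and a landscape of competing local minima as noise grows.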

  11. A simulation study on estimating biomarker-treatment interaction effects in randomized trials with prognostic variables.

    Science.gov (United States)

    Haller, Bernhard; Ulm, Kurt

    2018-02-20

    To individualize treatment decisions based on patient characteristics, identification of an interaction between a biomarker and treatment is necessary. Often such potential interactions are analysed using data from randomized clinical trials intended for comparison of two treatments. Tests of interactions often lack statistical power, and we investigated if and how a consideration of further prognostic variables can improve power and decrease the bias of estimated biomarker-treatment interactions in randomized clinical trials with time-to-event outcomes. A simulation study was performed to assess how prognostic factors affect the estimate of the biomarker-treatment interaction for a time-to-event outcome when different approaches, like ignoring other prognostic factors, including all available covariates or using variable selection strategies, are applied. Different scenarios regarding the proportion of censored observations, the correlation structure between the covariate of interest and further potential prognostic variables, and the strength of the interaction were considered. The simulation study revealed that in a regression model for estimating a biomarker-treatment interaction, the probability of detecting a biomarker-treatment interaction can be increased by including prognostic variables that are associated with the outcome, and that the interaction estimate is biased when relevant prognostic variables are not considered. However, the probability of a false-positive finding increases if too many potential predictors are included or if variable selection is performed inadequately. We recommend undertaking an adequate literature search before data analysis to derive information about potential prognostic variables and gain power for detecting true interaction effects, and pre-specifying analyses to avoid selective reporting and increased false-positive rates.

  12. Estimating population exposure to power plant emissions using CALPUFF: a case study in Beijing, China

    Energy Technology Data Exchange (ETDEWEB)

    Ying Zhou; Levy, J.I. [Harvard School of Public Health, Boston, MA (United States); Hammitt, J.K.; Evans, J.S. [Harvard Center for Risk Analysis, Boston, MA (United States)

    2003-02-01

    Epidemiological studies have shown a significant association between ambient particulate matter (PM) exposures and increased mortality and morbidity risk. Power plants are significant emitters of precursor gases of fine particulate matter. To evaluate the public health risk posed by power plants, it is necessary to evaluate population exposure to different pollutants. The concept of intake fraction (the fraction of a pollutant emitted that is eventually inhaled or ingested by a population) has been proposed to provide a simple summary measure of the relationship between emissions and exposure. Currently available intake fraction estimates from developing countries used models that look only at near-field impacts, which may not capture the full impact of a pollution source. This case study demonstrated how the intake fraction of power plant emissions in China can be calculated using a detailed long-range atmospheric dispersion model, CALPUFF. We found that the intake fraction of primary fine particles is roughly on the order of 10⁻⁵, while the intake fractions of sulfur dioxide, sulfate and nitrate are on the order of 10⁻⁶. These estimates are an order of magnitude higher than the US estimates. We also tested how sensitive the results were to key assumptions within the model. The size distribution of primary particles has a large impact on the intake fraction for primary particles, while the background ammonia concentration is an important factor influencing the intake fraction of nitrate. The background ozone concentration has a moderate impact on the intake fraction of sulfate and nitrate. Our analysis shows that this approach is applicable to a developing country and it provides reasonable population exposure estimates. (author)
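    The intake fraction itself is a simple ratio once a dispersion model has produced incremental concentrations. A back-of-envelope sketch with hypothetical numbers (not the Beijing results):

```python
# Back-of-envelope intake fraction: fraction of emitted mass eventually
# inhaled. All numbers are hypothetical, not results from the study.
breathing_rate = 20.0        # m^3 per person per day
emission_rate = 1.0e6        # g of pollutant emitted per day

# (population, incremental ambient concentration in g/m^3) per receptor region
receptors = [(8.0e6, 1.0e-7), (2.0e7, 2.0e-8), (1.0e8, 5.0e-9)]

inhaled = sum(pop * conc * breathing_rate for pop, conc in receptors)  # g/day
intake_fraction = inhaled / emission_rate   # dimensionless, here ~3.4e-5
```

    The long-range model matters because the far receptor regions, with low concentrations but large populations, can contribute as much to the sum as the near field.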

  13. Genome-wide association study of swine farrowing traits. Part I: genetic and genomic parameter estimates.

    Science.gov (United States)

    Schneider, J F; Rempel, L A; Rohrer, G A

    2012-10-01

    The primary objective of this study was to determine genetic and genomic parameters among swine (Sus scrofa) farrowing traits. Genetic parameters were obtained using MTDFREML. Genomic parameters were obtained using GENSEL. Genetic and residual variances obtained from MTDFREML were used as priors for the Bayes C analysis of GENSEL. Farrowing traits included total number born (TNB), number born alive (NBA), number born dead (NBD), number stillborn (NSB), number of mummies (MUM), litter birth weight (LBW), and average piglet birth weight (ABW). Statistically significant heritabilities included TNB (0.09, P = 0.048), NBA (0.09, P = 0.041), LBW (0.20, P = 0.002), and ABW (0.26, P NBA (0.97, P NBA-LBW (0.56, P NBA (0.06), NBD (0.00), NSB (0.01), MUM (0.00), LBW (0.11), and ABW (0.31). Limited information is available in the literature about genomic parameters. Only the GP estimate for NSB is significantly lower than what has been published. The GP estimate for ABW is greater than the estimate for heritability found in this study. Other traits with significant heritability had GP estimates half the value of heritability. This research indicates that significant genetic markers will be found for TNB, NBA, LBW, and ABW that will have either immediate use in industry or provide a roadmap to further research with fine mapping or sequencing of areas of significance. Furthermore, these results indicate that genomic selection implemented at an early age would have similar annual progress as traditional selection, and could be incorporated along with traditional selection procedures to improve genetic progress of litter traits.

  14. Bias analysis applied to Agricultural Health Study publications to estimate non-random sources of uncertainty.

    Science.gov (United States)

    Lash, Timothy L

    2007-11-26

    The associations of pesticide exposure with disease outcomes are estimated without the benefit of a randomized design. For this reason and others, these studies are susceptible to systematic errors. I analyzed studies of the associations between alachlor and glyphosate exposure and cancer incidence, both derived from the Agricultural Health Study cohort, to quantify the bias and uncertainty potentially attributable to systematic error. For each study, I identified the prominent result and important sources of systematic error that might affect it. I assigned probability distributions to the bias parameters that allow quantification of the bias, drew a value at random from each assigned distribution, and calculated the estimate of effect adjusted for the biases. By repeating the draw and adjustment process over multiple iterations, I generated a frequency distribution of adjusted results, from which I obtained a point estimate and simulation interval. These methods were applied without access to the primary record-level dataset. The conventional estimates of effect associating alachlor and glyphosate exposure with cancer incidence were likely biased away from the null and understated the uncertainty by quantifying only random error. For example, the conventional p-value for a test of trend in the alachlor study equaled 0.02, whereas fewer than 20% of the bias analysis iterations yielded a p-value of 0.02 or lower. Similarly, the conventional fully-adjusted result associating glyphosate exposure with multiple myeloma equaled 2.6 with 95% confidence interval of 0.7 to 9.4. The frequency distribution generated by the bias analysis yielded a median hazard ratio equal to 1.5 with 95% simulation interval of 0.4 to 8.9, which was 66% wider than the conventional interval. Bias analysis provides a more complete picture of true uncertainty than conventional frequentist statistical analysis accompanied by a qualitative description of study limitations. The latter approach is...
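    The draw-and-adjust loop of a probabilistic bias analysis can be sketched in a few lines. The bias-parameter distribution below is purely illustrative, not the one assigned in the study, and for simplicity only a single multiplicative bias is simulated (the study additionally accounted for random error):

```python
import math
import random

random.seed(1)

conventional_hr = 2.6              # reported hazard ratio
n_iter = 10_000
adjusted = []
for _ in range(n_iter):
    # Draw a bias parameter from its assigned distribution: here one
    # multiplicative bias factor, lognormal around 1.5x, purely illustrative.
    log_bias = random.normalvariate(math.log(1.5), 0.3)
    adjusted.append(math.exp(math.log(conventional_hr) - log_bias))

adjusted.sort()
median_hr = adjusted[n_iter // 2]                       # point estimate
lo = adjusted[int(0.025 * n_iter)]                      # simulation interval
hi = adjusted[int(0.975 * n_iter)]
```

    With a bias distribution centred on 1.5x, the median adjusted hazard ratio falls near conventional_hr / 1.5, and the 2.5th-97.5th percentile range forms the simulation interval reported in place of the conventional confidence interval.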

  15. Bias analysis applied to Agricultural Health Study publications to estimate non-random sources of uncertainty

    Directory of Open Access Journals (Sweden)

    Lash Timothy L

    2007-11-01

    Abstract Background The associations of pesticide exposure with disease outcomes are estimated without the benefit of a randomized design. For this reason and others, these studies are susceptible to systematic errors. I analyzed studies of the associations between alachlor and glyphosate exposure and cancer incidence, both derived from the Agricultural Health Study cohort, to quantify the bias and uncertainty potentially attributable to systematic error. Methods For each study, I identified the prominent result and important sources of systematic error that might affect it. I assigned probability distributions to the bias parameters that allow quantification of the bias, drew a value at random from each assigned distribution, and calculated the estimate of effect adjusted for the biases. By repeating the draw and adjustment process over multiple iterations, I generated a frequency distribution of adjusted results, from which I obtained a point estimate and simulation interval. These methods were applied without access to the primary record-level dataset. Results The conventional estimates of effect associating alachlor and glyphosate exposure with cancer incidence were likely biased away from the null and understated the uncertainty by quantifying only random error. For example, the conventional p-value for a test of trend in the alachlor study equaled 0.02, whereas fewer than 20% of the bias analysis iterations yielded a p-value of 0.02 or lower. Similarly, the conventional fully-adjusted result associating glyphosate exposure with multiple myeloma equaled 2.6 with 95% confidence interval of 0.7 to 9.4. The frequency distribution generated by the bias analysis yielded a median hazard ratio equal to 1.5 with 95% simulation interval of 0.4 to 8.9, which was 66% wider than the conventional interval. Conclusion Bias analysis provides a more complete picture of true uncertainty than conventional frequentist statistical analysis accompanied by a...

  16. An Updated Algorithm for Estimation of Pesticide Exposure Intensity in the Agricultural Health Study

    Directory of Open Access Journals (Sweden)

    Aaron Blair

    2011-12-01

    An algorithm developed to estimate pesticide exposure intensity for use in epidemiologic analyses was revised based on data from two exposure monitoring studies. In the first study, we estimated relative exposure intensity based on the results of measurements taken during the application of the herbicide 2,4-dichlorophenoxyacetic acid (2,4-D) (n = 88) and the insecticide chlorpyrifos (n = 17). Modifications to the algorithm weighting factors were based on geometric means (GM) of post-application urine concentrations for applicators grouped by application method and use of chemically-resistant (CR) gloves. Measurement data from a second study were also used to evaluate relative exposure levels associated with airblast as compared to hand spray application methods. Algorithm modifications included an increase in the exposure reduction factor for use of CR gloves from 40% to 60%, an increase in the application method weight for boom spray relative to in-furrow and for air blast relative to hand spray, and a decrease in the weight for mixing relative to the new weights assigned for application methods. The weighting factors for the revised algorithm now incorporate exposure measurements taken on Agricultural Health Study (AHS) participants for the application methods and personal protective equipment (PPE) commonly reported by study participants.
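    Algorithms of this kind typically combine additive task weights with multiplicative reduction factors for PPE. A sketch with illustrative weights (only the 60% glove reduction factor is taken from the abstract; the task weights and function are hypothetical, not the published algorithm):

```python
# Sketch of an AHS-style exposure intensity score: additive task weights
# scaled by a PPE reduction factor. The task weights are illustrative;
# only the 60% glove reduction comes from the abstract.
def exposure_intensity(mix_weight, apply_weight, repair_weight, cr_gloves):
    ppe_reduction = 0.6 if cr_gloves else 0.0   # revised CR-glove factor
    return (mix_weight + apply_weight + repair_weight) * (1 - ppe_reduction)
```

    Re-weighting from monitoring data, as described in the abstract, amounts to choosing the task weights so that the relative scores of applicator groups match the geometric means of their measured urine concentrations.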

  17. Consumers' estimation of calorie content at fast food restaurants: cross sectional observational study.

    Science.gov (United States)

    Block, Jason P; Condon, Suzanne K; Kleinman, Ken; Mullen, Jewel; Linakis, Stephanie; Rifas-Shiman, Sheryl; Gillman, Matthew W

    2013-05-23

    To investigate estimation of calorie (energy) content of meals from fast food restaurants in adults, adolescents, and school age children. Cross sectional study of repeated visits to fast food restaurant chains. 89 fast food restaurants in four cities in New England, United States: McDonald's, Burger King, Subway, Wendy's, KFC, Dunkin' Donuts. 1877 adults and 330 school age children visiting restaurants at dinnertime (evening meal) in 2010 and 2011; 1178 adolescents visiting restaurants after school or at lunchtime in 2010 and 2011. Estimated calorie content of purchased meals. Among adults, adolescents, and school age children, the mean actual calorie content of meals was 836 calories (SD 465), 756 calories (SD 455), and 733 calories (SD 359), respectively. A calorie is equivalent to 4.18 kJ. Compared with the actual figures, participants underestimated calorie content by means of 175 calories (95% confidence interval 145 to 205), 259 calories (227 to 291), and 175 calories (108 to 242), respectively. In multivariable linear regression models, underestimation of calorie content increased substantially as the actual meal calorie content increased. Adults and adolescents eating at Subway estimated 20% and 25% lower calorie content than McDonald's diners (relative change 0.80, 95% confidence interval 0.66 to 0.96; 0.75, 0.57 to 0.99). People eating at fast food restaurants underestimate the calorie content of meals, especially large meals. Education of consumers through calorie menu labeling and other outreach efforts might reduce the large degree of underestimation.

  18. EEG-fMRI Bayesian framework for neural activity estimation: a simulation study

    Science.gov (United States)

    Croce, Pierpaolo; Basti, Alessio; Marzetti, Laura; Zappasodi, Filippo; Del Gratta, Cosimo

    2016-12-01

    Objective. Due to the complementary nature of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), and given the possibility of simultaneous acquisition, joint data analysis can afford a better estimate of the underlying neural activity. In this simulation study we show the benefit of joint EEG-fMRI neural activity estimation in a Bayesian framework. Approach. We built a dynamic Bayesian framework in order to perform joint EEG-fMRI neural activity time course estimation. The neural activity originates in a given brain area and is detected by means of both measurement techniques. We chose a resting-state neural activity scenario to address the worst case in terms of signal-to-noise ratio. To infer information from EEG and fMRI concurrently we used a tool belonging to the family of sequential Monte Carlo (SMC) methods: the particle filter (PF). Main results. First, despite a high computational cost, we showed the feasibility of such an approach. Second, we obtained an improvement in neural activity reconstruction when using both EEG and fMRI measurements. Significance. The proposed simulation shows the improvement in neural activity reconstruction achievable with simultaneous EEG-fMRI data. The application of such an approach to real data allows a better comprehension of the neural dynamics.
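    A bootstrap particle filter of the kind used here can be sketched for a toy scalar state observed in noise. The linear-Gaussian model below is an illustrative stand-in for the joint EEG-fMRI observation model, not the framework of the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy state-space model (illustrative stand-in for neural activity):
# x_t = 0.9 x_{t-1} + process noise, observed as y_t = x_t + measurement noise.
n_steps, n_particles = 50, 500
true_x = np.zeros(n_steps)
for t in range(1, n_steps):
    true_x[t] = 0.9 * true_x[t - 1] + rng.normal(0.0, 0.5)
obs = true_x + rng.normal(0.0, 0.3, n_steps)

# Bootstrap particle filter: propagate, weight by likelihood, resample.
particles = rng.normal(0.0, 1.0, n_particles)
estimates = np.zeros(n_steps)
for t in range(n_steps):
    particles = 0.9 * particles + rng.normal(0.0, 0.5, n_particles)  # propagate
    weights = np.exp(-0.5 * ((obs[t] - particles) / 0.3) ** 2)       # likelihood
    weights /= weights.sum()
    estimates[t] = np.sum(weights * particles)                       # posterior mean
    particles = rng.choice(particles, size=n_particles, p=weights)   # resample
```

    Fusing two modalities amounts to multiplying two likelihood terms in the weighting step, one per measurement model, which is where the joint EEG-fMRI gain described in the abstract arises.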

  19. Estimating population exposure to power plant emissions using CALPUFF: a case study in Beijing, China

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Y.; Levy, J.I.; Hammitt, J.K.; Evans, J.S. [Harvard University, Boston, MA (USA). School of Public Health, Landmark Center

    2003-02-01

    Power plants are significant emitters of precursor gases of fine particulate matter. To evaluate the public health risk posed by power plants, it is necessary to evaluate population exposure to different pollutants. The concept of intake fraction (the fraction of a pollutant emitted that is eventually inhaled or ingested by a population) has been proposed to provide a simple summary measure of the relationship between emissions and exposure. Currently available intake fraction estimates from developing countries used models that look only at near-field impacts, which may not capture the full impact of a pollution source. This case study demonstrated how the intake fraction of power plant emissions in China can be calculated using a detailed long-range atmospheric dispersion model, CALPUFF. It was found that the intake fraction of primary fine particles is roughly on the order of 10⁻⁵, while the intake fractions of sulfur dioxide, sulfate and nitrate are on the order of 10⁻⁶. These estimates are an order of magnitude higher than the US estimates. The authors also tested how sensitive the results were to key assumptions within the model. The size distribution of primary particles has a large impact on the intake fraction for primary particles, while the background ammonia concentration is an important factor influencing the intake fraction of nitrate. The background ozone concentration has a moderate impact on the intake fraction of sulfate and nitrate.

  20. Estimating search engine index size variability: a 9-year longitudinal study.

    Science.gov (United States)

    van den Bosch, Antal; Bogers, Toine; de Kunder, Maurice

    One of the determining factors of the quality of Web search engines is the size of their index. In addition to its influence on search result quality, the size of the indexed Web can also tell us something about which parts of the WWW are directly accessible to the everyday user. We propose a novel method of estimating the size of a Web search engine's index by extrapolating from document frequencies of words observed in a large static corpus of Web pages. In addition, we provide a unique longitudinal perspective on the size of Google and Bing's indices over a nine-year period, from March 2006 until January 2015. We find that index size estimates of these two search engines tend to vary dramatically over time, with Google generally possessing a larger index than Bing. This result raises doubts about the reliability of previous one-off estimates of the size of the indexed Web. We find that much, if not all of this variability can be explained by changes in the indexing and ranking infrastructure of Google and Bing. This casts further doubt on whether Web search engines can be used reliably for cross-sectional webometric studies.
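    The extrapolation idea can be sketched in a few lines: for each probe word, divide the engine's reported hit count by the word's relative document frequency in a static corpus, then aggregate. All counts below are hypothetical:

```python
# Extrapolate an engine's index size from word document frequencies.
# All counts below are hypothetical.
corpus_size = 1_000_000                      # pages in a static training corpus
corpus_df = {"the": 940_000, "apple": 41_000, "ostrich": 1_200}
engine_hits = {"the": 4.6e10, "apple": 2.1e9, "ostrich": 6.1e7}

# One estimate per probe word: reported hits / relative document frequency
estimates = sorted(engine_hits[w] / (corpus_df[w] / corpus_size)
                   for w in corpus_df)
index_size_estimate = estimates[len(estimates) // 2]   # median is robust
```

    Aggregating over many probe words with a robust statistic such as the median dampens the effect of individual words whose frequency in the corpus does not match their frequency on the indexed Web.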

  1. Estimating the wake deflection downstream of a wind turbine in different atmospheric stabilities: an LES study

    Directory of Open Access Journals (Sweden)

    L. Vollmer

    2016-09-01

    An intentional yaw misalignment of wind turbines is currently discussed as one possibility to increase the overall energy yield of wind farms. The idea behind this control is to decrease wake losses of downstream turbines by altering the wake trajectory of the controlled upwind turbines. For an application of such an operational control, precise knowledge about the inflow wind conditions, the magnitude of wake deflection by a yawed turbine and the propagation of the wake is crucial. The dependency of the wake deflection on the ambient wind conditions as well as the uncertainty of its trajectory are not sufficiently covered in current wind farm control models. In this study we analyze multiple sources that contribute to the uncertainty of the estimation of the wake deflection downstream of yawed wind turbines in different ambient wind conditions. We find that the wake shapes and the magnitude of deflection differ in the three evaluated atmospheric boundary layers of neutral, stable and unstable thermal stability. Uncertainty in the wake deflection estimation increases for smaller temporal averaging intervals. We also consider the choice of the method to define the wake center as a source of uncertainty as it modifies the result. The variance of the wake deflection estimation increases with decreasing atmospheric stability. Control of the wake position in a highly convective environment is therefore not recommended.

  2. Model uncertainty of various settlement estimation methods in shallow tunnels excavation; case study: Qom subway tunnel

    Science.gov (United States)

    Khademian, Amir; Abdollahipour, Hamed; Bagherpour, Raheb; Faramarzi, Lohrasb

    2017-10-01

In addition to the numerous planning and executive challenges, underground excavation in urban areas is always accompanied by destructive effects, especially at the ground surface; ground settlement is the most important of these effects, and different empirical, analytical and numerical methods exist for its estimation. Since geotechnical models are associated with considerable model uncertainty, this study characterized the model uncertainty of settlement estimation models through a systematic comparison between model predictions and past performance data derived from instrumentation. To do so, the surface settlement induced by excavation of the Qom subway tunnel was estimated via empirical (Peck), analytical (Loganathan and Poulos) and numerical (FDM) methods; the resulting maximum settlement values of the models were 1.86, 2.02 and 1.52 cm, respectively. The comparison of these predicted values with the actual instrumentation data was used to specify the uncertainty of each model. The numerical model outcomes, with a relative error of 3.8%, best matched reality, while the analytical method, with a relative error of 27.8%, yielded the highest level of model uncertainty.

  3. Point and interval estimation of pollinator importance: a study using pollination data of Silene caroliniana.

    Science.gov (United States)

    Reynolds, Richard J; Fenster, Charles B

    2008-05-01

    Pollinator importance, the product of visitation rate and pollinator effectiveness, is a descriptive parameter of the ecology and evolution of plant-pollinator interactions. Naturally, sources of its variation should be investigated, but the SE of pollinator importance has never been properly reported. Here, a Monte Carlo simulation study and a result from mathematical statistics on the variance of the product of two random variables are used to estimate the mean and confidence limits of pollinator importance for three visitor species of the wildflower, Silene caroliniana. Both methods provided similar estimates of mean pollinator importance and its interval if the sample size of the visitation and effectiveness datasets were comparatively large. These approaches allowed us to determine that bumblebee importance was significantly greater than clearwing hawkmoth, which was significantly greater than beefly. The methods could be used to statistically quantify temporal and spatial variation in pollinator importance of particular visitor species. The approaches may be extended for estimating the variance of more than two random variables. However, unless the distribution function of the resulting statistic is known, the simulation approach is preferable for calculating the parameter's confidence limits.
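The mathematical-statistics result alluded to is Goodman's formula for the variance of a product of two independent random variables, Var(XY) = μX²σY² + μY²σX² + σX²σY². A minimal sketch comparing it against a Monte Carlo estimate, using invented visitation-rate and effectiveness parameters rather than the Silene data:

```python
import random

def product_variance(mu_x, var_x, mu_y, var_y):
    """Goodman's exact variance of the product of two independent variables."""
    return mu_x ** 2 * var_y + mu_y ** 2 * var_x + var_x * var_y

# invented parameters: visitation rate X and per-visit effectiveness Y
mu_v, sd_v = 10.0, 2.0
mu_e, sd_e = 0.5, 0.1
var_exact = product_variance(mu_v, sd_v ** 2, mu_e, sd_e ** 2)

random.seed(42)
n = 200_000
prods = [random.gauss(mu_v, sd_v) * random.gauss(mu_e, sd_e) for _ in range(n)]
m = sum(prods) / n
var_mc = sum((p - m) ** 2 for p in prods) / (n - 1)
print(round(var_exact, 3))            # 2.04
print(abs(var_mc - var_exact) < 0.1)  # Monte Carlo agrees with the formula
```

As the abstract notes, the simulation route generalizes more easily (e.g. to confidence limits when the distribution of the product is unknown), while the closed form is exact only under the stated independence assumption.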

  4. Spontaneous baroreflex sensitivity estimates during graded bicycle exercise: a comparative study

    International Nuclear Information System (INIS)

    Vallais, Frederic; Baselli, Giuseppe; Lucini, Daniela; Pagani, Massimo; Porta, Alberto

    2009-01-01

    In the literature, several methods have been proposed for the assessment of the baroreflex sensitivity from spontaneous variability of heart period and systolic arterial pressure. The present study compares the most utilized approaches for the evaluation of the spontaneous baroreflex sensitivity (i.e. sequence-based, spectral, cross-spectral and model-based techniques) over a protocol capable of inducing a progressive decrease of the baroreflex sensitivity in the presence of a relevant respiratory drive (i.e. a stepwise dynamic bicycle exercise at 10%, 20% and 30% of the maximum nominal individual effort) in 16 healthy humans. Results demonstrated that the degree of correlation among the estimates is related to the structure of the model explicitly or implicitly assumed by the method and depends on the experimental condition (i.e. on the physiological mechanisms contemporaneously active with baroreflex, e.g. cardiopulmonary reflexes). However, even in the presence of a significant correlation, proportional and/or constant biases can be present, thus rendering spontaneous baroreflex estimates not interchangeable. We suggest that the comparison among different baroreflex sensitivity estimates might elucidate physiological mechanisms responsible for the relationship between heart period and systolic arterial pressure
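The sequence-based technique mentioned above can be sketched in a few lines: scan for spontaneous ramps of at least three beats in which systolic arterial pressure (SAP) and heart period (RR) change in the same direction, and average the regression slopes of RR on SAP. This is an illustrative reimplementation on synthetic beats, not the authors' code:

```python
def brs_sequence(sap, rr, min_beats=3):
    """Sequence-method BRS estimate (ms/mmHg): mean regression slope of RR
    on SAP over spontaneous ramps where both rise or both fall together."""
    # direction of each beat-to-beat step: +1 both up, -1 both down, 0 otherwise
    d = []
    for k in range(len(sap) - 1):
        if sap[k + 1] > sap[k] and rr[k + 1] > rr[k]:
            d.append(1)
        elif sap[k + 1] < sap[k] and rr[k + 1] < rr[k]:
            d.append(-1)
        else:
            d.append(0)
    slopes, k = [], 0
    while k < len(d):
        if d[k] == 0:
            k += 1
            continue
        j = k
        while j + 1 < len(d) and d[j + 1] == d[k]:
            j += 1
        if j - k + 2 >= min_beats:                 # number of beats in the ramp
            xs, ys = sap[k:j + 2], rr[k:j + 2]
            mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
            num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            den = sum((x - mx) ** 2 for x in xs)
            slopes.append(num / den)
        k = j + 1
    return sum(slopes) / len(slopes) if slopes else float("nan")

sap = [100, 102, 104, 106, 104, 102, 100]   # mmHg, synthetic rising/falling ramp
rr  = [800, 810, 820, 830, 820, 810, 800]   # ms, synthetic
print(brs_sequence(sap, rr))                 # 5.0 ms/mmHg on this ideal pair
```

The spectral, cross-spectral and model-based estimators compared in the study encode different structural assumptions about the SAP–RR relationship, which is exactly why the abstract warns that the resulting estimates are correlated but not interchangeable.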

  5. Association of a single nucleotide polymorphic variation in the human chromosome 19q13.3 with drug responses in the NCI60 cell lines

    DEFF Research Database (Denmark)

    Nissen, K.K.; Vogel, Ulla Birgitte; Nexo, B.A.

    2009-01-01

    the correlations between the responses of the NCI60 cells to different anticancer drugs and their respective alleles of five DNA polymorphisms located in a cancer-related chromosomal area. One polymorphism, located in the 5' noncoding region of the gene ASE-1, alias CD3EAP, proved to be associated with drug...

  6. A comparative study and validation of state estimation algorithms for Li-ion batteries in battery management systems

    International Nuclear Information System (INIS)

    Klee Barillas, Joaquín; Li, Jiahao; Günther, Clemens; Danzer, Michael A.

    2015-01-01

Highlights: • Description of state observers for estimating the battery’s SOC. • Implementation of four estimation algorithms in a BMS. • Reliability and performance study of BMS regarding the estimation algorithms. • Analysis of the robustness and code properties of the estimation approaches. • Guide to evaluate estimation algorithms to improve the BMS performance. - Abstract: To increase lifetime, safety, and energy usage, battery management systems (BMS) for Li-ion batteries have to be capable of estimating the state of charge (SOC) of the battery cells with a very low estimation error. Accurate SOC estimation and real-time reliability are critical issues for a BMS. In general, increasing complexity of the estimation methods leads to higher accuracy. On the other hand, it also leads to a higher computational load and may exceed the BMS limitations or increase its costs. An approach to evaluate and verify estimation algorithms is presented as a requisite prior to the release of the battery system. The approach consists of an analysis concerning the SOC estimation accuracy, the code properties, complexity, the computation time, and the memory usage. Furthermore, a study of the estimation methods is proposed for their evaluation and validation with respect to convergence behavior, parameter sensitivity, initialization error, and performance. In this work, the introduced analysis is demonstrated with four of the most widely published model-based estimation algorithms: the Luenberger observer, the sliding-mode observer, the Extended Kalman Filter and the Sigma-point Kalman Filter. Experiments under dynamic current conditions are used to verify the real-time functionality of the BMS. The results show that a simple estimation method like the sliding-mode observer can compete with the Kalman-based methods, presenting less computational time and memory usage. Depending on the battery system’s application the estimation algorithm has to be selected to fulfill the
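As a minimal illustration of the model-based observer idea (not the paper's BMS implementation), a Luenberger-style SOC observer can be sketched on a deliberately simplified battery model; the linear OCV curve, capacity, and gain below are hypothetical:

```python
# Luenberger-style SOC observer sketch on a trivial battery model
# (linear OCV, no internal resistance) -- all parameter values hypothetical.
Q = 3600.0            # capacity in coulombs (1 Ah)
dt = 1.0              # time step, s
gain = 0.05           # observer gain L (volts feedback -> SOC correction)

def ocv(soc):
    """Assumed linear open-circuit-voltage curve."""
    return 3.0 + 1.2 * soc

soc_true, soc_hat = 0.8, 0.5           # observer starts from a wrong guess
for _ in range(600):
    current = -1.0                     # constant 1 A discharge
    soc_true += current / Q * dt
    v_meas = ocv(soc_true)             # idealized noise-free terminal voltage
    # predictor (coulomb counting) + corrector (voltage-error feedback)
    soc_hat += current / Q * dt + gain * (v_meas - ocv(soc_hat))
print(abs(soc_true - soc_hat) < 0.01)  # converged despite the bad initial SOC
```

The trade-off the abstract quantifies starts from exactly this structure: the sliding-mode observer and the Kalman filters replace the fixed gain with more sophisticated (and costlier) correction terms.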

  7. Effects of Serum Creatinine Calibration on Estimated Renal Function in African Americans: the Jackson Heart Study

    Science.gov (United States)

    Wang, Wei; Young, Bessie A.; Fülöp, Tibor; de Boer, Ian H.; Boulware, L. Ebony; Katz, Ronit; Correa, Adolfo; Griswold, Michael E.

    2015-01-01

    Background The calibration to Isotope Dilution Mass Spectroscopy (IDMS) traceable creatinine is essential for valid use of the new Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation to estimate the glomerular filtration rate (GFR). Methods For 5,210 participants in the Jackson Heart Study (JHS), serum creatinine was measured with a multipoint enzymatic spectrophotometric assay at the baseline visit (2000–2004) and re-measured using the Roche enzymatic method, traceable to IDMS in a subset of 206 subjects. The 200 eligible samples (6 were excluded, 1 for failure of the re-measurement and 5 for outliers) were divided into three disjoint sets - training, validation, and test - to select a calibration model, estimate true errors, and assess performance of the final calibration equation. The calibration equation was applied to serum creatinine measurements of 5,210 participants to estimate GFR and the prevalence of CKD. Results The selected Deming regression model provided a slope of 0.968 (95% Confidence Interval (CI), 0.904 to 1.053) and intercept of −0.0248 (95% CI, −0.0862 to 0.0366) with R squared 0.9527. Calibrated serum creatinine showed high agreement with actual measurements when applying to the unused test set (concordance correlation coefficient 0.934, 95% CI, 0.894 to 0.960). The baseline prevalence of CKD in the JHS (2000–2004) was 6.30% using calibrated values, compared with 8.29% using non-calibrated serum creatinine with the CKD-EPI equation (P creatinine measurements in the JHS and the calibrated values provide a lower CKD prevalence estimate. PMID:25806862

  8. Effects of serum creatinine calibration on estimated renal function in african americans: the Jackson heart study.

    Science.gov (United States)

    Wang, Wei; Young, Bessie A; Fülöp, Tibor; de Boer, Ian H; Boulware, L Ebony; Katz, Ronit; Correa, Adolfo; Griswold, Michael E

    2015-05-01

    The calibration to isotope dilution mass spectrometry-traceable creatinine is essential for valid use of the new Chronic Kidney Disease Epidemiology Collaboration equation to estimate the glomerular filtration rate. For 5,210 participants in the Jackson Heart Study (JHS), serum creatinine was measured with a multipoint enzymatic spectrophotometric assay at the baseline visit (2000-2004) and remeasured using the Roche enzymatic method, traceable to isotope dilution mass spectrometry in a subset of 206 subjects. The 200 eligible samples (6 were excluded, 1 for failure of the remeasurement and 5 for outliers) were divided into 3 disjoint sets-training, validation and test-to select a calibration model, estimate true errors and assess performance of the final calibration equation. The calibration equation was applied to serum creatinine measurements of 5,210 participants to estimate glomerular filtration rate and the prevalence of chronic kidney disease (CKD). The selected Deming regression model provided a slope of 0.968 (95% confidence interval [CI], 0.904-1.053) and intercept of -0.0248 (95% CI, -0.0862 to 0.0366) with R value of 0.9527. Calibrated serum creatinine showed high agreement with actual measurements when applying to the unused test set (concordance correlation coefficient 0.934, 95% CI, 0.894-0.960). The baseline prevalence of CKD in the JHS (2000-2004) was 6.30% using calibrated values compared with 8.29% using noncalibrated serum creatinine with the Chronic Kidney Disease Epidemiology Collaboration equation (P creatinine measurements in the JHS, and the calibrated values provide a lower CKD prevalence estimate.
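Deming regression differs from ordinary least squares in allowing measurement error in both variables, which is why it suits assay-calibration problems like this one. A minimal sketch, with the error-variance ratio delta assumed to be 1 and hypothetical paired creatinine values (not the JHS data):

```python
import math

def deming(x, y, delta=1.0):
    """Deming regression: errors in both variables; delta = var_y_err/var_x_err."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    slope = (syy - delta * sxx
             + math.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    return slope, my - slope * mx

# hypothetical paired values: original assay (x) vs IDMS-traceable assay (y)
x = [0.6, 0.8, 1.0, 1.3, 1.7, 2.2]
y = [0.55, 0.78, 0.95, 1.28, 1.66, 2.10]
slope, intercept = deming(x, y)
print(round(slope, 3), round(intercept, 3))
```

On perfectly linear data the fit recovers the line exactly; on real assay pairs the slope and intercept play the role of the study's calibration equation applied to the remaining 5,210 measurements.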

  9. Multivariate Error Covariance Estimates by Monte-Carlo Simulation for Assimilation Studies in the Pacific Ocean

    Science.gov (United States)

    Borovikov, Anna; Rienecker, Michele M.; Keppenne, Christian; Johnson, Gregory C.

    2004-01-01

    One of the most difficult aspects of ocean state estimation is the prescription of the model forecast error covariances. The paucity of ocean observations limits our ability to estimate the covariance structures from model-observation differences. In most practical applications, simple covariances are usually prescribed. Rarely are cross-covariances between different model variables used. Here a comparison is made between a univariate Optimal Interpolation (UOI) scheme and a multivariate OI algorithm (MvOI) in the assimilation of ocean temperature. In the UOI case only temperature is updated using a Gaussian covariance function and in the MvOI salinity, zonal and meridional velocities as well as temperature, are updated using an empirically estimated multivariate covariance matrix. Earlier studies have shown that a univariate OI has a detrimental effect on the salinity and velocity fields of the model. Apparently, in a sequential framework it is important to analyze temperature and salinity together. For the MvOI an estimation of the model error statistics is made by Monte-Carlo techniques from an ensemble of model integrations. An important advantage of using an ensemble of ocean states is that it provides a natural way to estimate cross-covariances between the fields of different physical variables constituting the model state vector, at the same time incorporating the model's dynamical and thermodynamical constraints as well as the effects of physical boundaries. Only temperature observations from the Tropical Atmosphere-Ocean array have been assimilated in this study. In order to investigate the efficacy of the multivariate scheme two data assimilation experiments are validated with a large independent set of recently published subsurface observations of salinity, zonal velocity and temperature. For reference, a third control run with no data assimilation is used to check how the data assimilation affects systematic model errors. While the performance of the
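The core of the ensemble idea can be sketched in a few lines: anomalies across ensemble members give a cross-covariance between temperature and salinity, so an observed temperature innovation also produces a salinity increment. This toy example uses a two-variable state and invented statistics, not the paper's ocean model:

```python
import random

random.seed(1)
n_ens = 500
# hypothetical ensemble of model states (T, S) at one grid point
ens = []
for _ in range(n_ens):
    t = random.gauss(0, 1)
    s = 0.8 * t + random.gauss(0, 0.3)   # salinity co-varies with temperature
    ens.append((t, s))

mt = sum(t for t, _ in ens) / n_ens
ms = sum(s for _, s in ens) / n_ens
var_t = sum((t - mt) ** 2 for t, _ in ens) / (n_ens - 1)
cov_ts = sum((t - mt) * (s - ms) for t, s in ens) / (n_ens - 1)

# multivariate OI step: a temperature innovation corrects salinity too,
# through the ensemble-estimated cross-covariance (Kalman-style gain)
obs_var = 0.5 ** 2
innovation = 1.0                          # observed minus forecast temperature
dT = var_t / (var_t + obs_var) * innovation
dS = cov_ts / (var_t + obs_var) * innovation
print(dS > 0.0)   # salinity is updated even though only T was observed
```

A univariate OI would set the cross term to zero and leave salinity untouched, which is the detrimental behavior the abstract describes.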

  10. Comparative study of the geostatistical ore reserve estimation method over the conventional methods

    International Nuclear Information System (INIS)

    Kim, Y.C.; Knudsen, H.P.

    1975-01-01

    Part I contains a comprehensive treatment of the comparative study of the geostatistical ore reserve estimation method over the conventional methods. The conventional methods chosen for comparison were: (a) the polygon method, (b) the inverse of the distance squared method, and (c) a method similar to (b) but allowing different weights in different directions. Briefly, the overall result from this comparative study is in favor of the use of geostatistics in most cases because the method has lived up to its theoretical claims. A good exposition on the theory of geostatistics, the adopted study procedures, conclusions and recommended future research are given in Part I. Part II of this report contains the results of the second and the third study objectives, which are to assess the potential benefits that can be derived by the introduction of the geostatistical method to the current state-of-the-art in uranium reserve estimation method and to be instrumental in generating the acceptance of the new method by practitioners through illustrative examples, assuming its superiority and practicality. These are given in the form of illustrative examples on the use of geostatistics and the accompanying computer program user's guide
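Two of the conventional estimators compared in Part I can be sketched directly: the polygon method assigns each block the grade of its nearest sample, while inverse-distance-squared weighting blends all samples. The drill-hole coordinates and grades below are invented for illustration:

```python
def idw(x, y, samples, power=2):
    """Inverse-distance-weighted grade estimate at (x, y)."""
    num = den = 0.0
    for sx, sy, grade in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0:
            return grade              # estimate point coincides with a sample
        w = d2 ** (-power / 2)        # weight = 1 / distance**power
        num += w * grade
        den += w
    return num / den

def polygon(x, y, samples):
    """Polygon method: the block takes the grade of the closest drill hole."""
    return min(samples, key=lambda s: (x - s[0]) ** 2 + (y - s[1]) ** 2)[2]

# invented drill holes: (x, y, grade)
holes = [(0, 0, 0.12), (10, 0, 0.20), (0, 10, 0.16), (10, 10, 0.30)]
print(polygon(4, 4, holes))           # nearest hole only
print(round(idw(4, 4, holes), 4))     # weighted blend of all four holes
```

Kriging goes one step further than IDW by deriving the weights from a fitted variogram rather than a fixed distance power, which is the theoretical claim the comparative study tests.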

  11. An at-site flood estimation method in the context of nonstationarity I. A simulation study

    Science.gov (United States)

    Gado, Tamer A.; Nguyen, Van-Thanh-Van

    2016-04-01

The stationarity of annual flood peak records is the traditional assumption of flood frequency analysis. In some cases, however, as a result of land-use and/or climate change, this assumption is no longer valid. Therefore, new statistical models are needed to capture dynamically the change of probability density functions over time, in order to obtain reliable flood estimation. In this study, an innovative method for nonstationary flood frequency analysis was presented. Here, the new method is based on detrending the flood series and applying the L-moments along with the GEV distribution to the transformed "stationary" series (hereafter, this is called the LM-NS). The LM-NS method was assessed through a comparative study with the maximum likelihood (ML) method for the nonstationary GEV model, as well as with the stationary (S) GEV model. The comparative study, based on Monte Carlo simulations, was carried out for three nonstationary GEV models: a linear dependence of the mean on time (GEV1), a quadratic dependence of the mean on time (GEV2), and linear dependence in both the mean and log standard deviation on time (GEV11). The simulation results indicated that the LM-NS method performs better than the ML method for most of the cases studied, whereas the stationary method provides the least accurate results. An additional advantage of the LM-NS method is to avoid the numerical problems (e.g., convergence problems) that may occur with the ML method when estimating parameters for small data samples.
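The stationary building block of the LM-NS approach, fitting a GEV distribution by L-moments, can be sketched as follows (after detrending, the same fit is applied to the transformed series). This sketch uses Hosking's rational approximation for the shape parameter, in his sign convention where k = 0 is the Gumbel case; the test sample is synthetic:

```python
import math
import random

def gev_lmoments(data):
    """Fit a GEV distribution by L-moments (Hosking's approximation for the
    shape k; k = 0 corresponds to the Gumbel distribution)."""
    x = sorted(data)
    n = len(x)
    b0 = sum(x) / n                                # probability-weighted moments
    b1 = sum(i / (n - 1) * xi for i, xi in enumerate(x)) / n
    b2 = sum(i * (i - 1) / ((n - 1) * (n - 2)) * xi
             for i, xi in enumerate(x)) / n
    l1, l2 = b0, 2 * b1 - b0                       # first two L-moments
    t3 = (6 * b2 - 6 * b1 + b0) / l2               # sample L-skewness
    c = 2.0 / (3.0 + t3) - math.log(2) / math.log(3)
    k = 7.8590 * c + 2.9554 * c * c                # shape, Hosking's convention
    g = math.gamma(1 + k)
    sigma = l2 * k / ((1 - 2.0 ** (-k)) * g)       # scale
    mu = l1 - sigma * (1 - g) / k                  # location
    return mu, sigma, k

# check on a synthetic Gumbel sample (true k = 0), drawn by inverse-CDF
# sampling with arbitrary location 10 and scale 2
random.seed(7)
sample = [10 - 2 * math.log(-math.log(random.random())) for _ in range(2000)]
mu, sigma, k = gev_lmoments(sample)
print(round(mu, 1), round(sigma, 1), round(k, 2))
```

Because these estimators are simple sums over order statistics, they avoid the iterative optimization (and the small-sample convergence failures) of maximum likelihood, which is the advantage the abstract highlights.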

  12. Reporting of HIV-infected pregnant women: estimates from a Brazilian study.

    Science.gov (United States)

    Domingues, Rosa Maria Soares Madeira; Saraceni, Valéria; Leal, Maria do Carmo

    2018-01-01

    To estimate the coverage of the reporting of cases of HIV-infected pregnant women, to estimate the increase in the coverage of the reporting with the routine search of data in other Brazilian health information systems, and to identify missed opportunities for identification of HIV-infected pregnant women in Brazilian maternity hospitals. This is a descriptive study on the linkage of Brazilian databases with primary data from the "Nascer no Brasil" study and secondary database collection from national health information systems. The "Nascer no Brasil" is a national-based study carried out in 2011-2012 with 23,894 pregnant women, which identified HIV-infected pregnant women using prenatal and medical records. We searched for cases of HIV-infected pregnant women identified in the "Nascer no Brasil" study in the Information System of Notifiable Diseases, the Control System for Laboratory Tests of the National CD4+/CD8+ Lymphocyte Count and HIV Viral Load Network, and the Logistics Control System for Medications. We used the OpenRecLink software for the linkage of databases. We estimated the notification coverage, with the respective confidence interval, of the evaluated Brazilian health information systems. We estimated the coverage of the reporting of HIV-infected pregnant women in the Information System of Notifiable Diseases as 57.1% (95%CI 42.9-70.2), and we located 89.3% of the HIV-infected pregnant women (95%CI 81.2-94.2) in some of the Brazilian health information systems researched. The search in other national health information systems would result in an increase of 57.1% of the reported cases. We identified no missed opportunities for the diagnosis of HIV+ in pregnant women in the maternity hospitals evaluated by the "Nascer no Brasil" study. The routine search for information in other Brazilian health information systems, a procedure carried out by the Ministry of Health for cases of AIDS in adults and children, should be adopted for cases of HIV in

  13. Estimation of Postmortem Interval Using the Radiological Techniques, Computed Tomography: A Pilot Study

    Directory of Open Access Journals (Sweden)

    Jiulin Wang

    2017-01-01

Full Text Available Estimation of postmortem interval (PMI) has been an important and difficult subject in forensic science. It is a primary task of forensic work, and it can help guide field investigation. With the development of computed tomography (CT) technology, CT imaging techniques are now being more frequently applied to the field of forensic medicine. This study used CT imaging techniques to observe area changes in different tissues and organs of rabbits after death and the changing pattern of the average CT values in the organs. The study analyzed the relationship between the CT values of different organs and PMI with the imaging software Max Viewer and obtained multiparameter nonlinear regression equations for the different organs, providing an objective and accurate method and reference information for the estimation of PMI in forensic medicine. In forensic science, PMI refers to the time interval between the discovery or inspection of the corpse and the time of death. CT, magnetic resonance imaging, and other imaging techniques have become important means of clinical examination over the years. Although some scholars in our country have used modern radiological techniques in various fields of forensic science, such as estimation of injury time, personal identification of bodies, analysis of the cause of death, determination of the causes of injury, and identification of foreign substances in bodies, there are only a few studies on the estimation of the time of death. We examined the process of subtle changes in adult rabbits after death, the shape and size of tissues and organs, and the relationship between adjacent organs in three-dimensional space, in an effort to develop a new method for the estimation of PMI. The bodies of the dead rabbits were stored at 20°C room temperature in a sealed condition, protected from exposure to flesh flies. The dead rabbits were randomly divided into a comparison group and an experimental group. 
The whole

  14. Estimation of vehicular emissions using dynamic emission factors: A case study of Delhi, India

    Science.gov (United States)

    Mishra, Dhirendra; Goyal, P.

    2014-12-01

The estimation of vehicular emissions depends mainly on the values of emission factors, which are used for the development of a comprehensive emission inventory of vehicles. In this study the variations of emission factors as well as the emission rates have been studied in Delhi. The introduction of compressed natural gas (CNG), in place of diesel and petrol, in public vehicles in the year 2001 changed the complete air quality scenario of Delhi. Dynamic emission factors of the criteria pollutants carbon monoxide (CO), nitrogen oxides (NOx) and particulate matter (PM10) for all types of vehicles have since been developed, based on several factors such as regulated emission limits, vehicle deterioration, fleet growth and vehicle age. These emission factors are found to have decreased continuously throughout the study years 2003-2012. The International Vehicle Emissions (IVE) model is used to estimate the emissions of criteria pollutants by utilizing a dataset available from field observations at different traffic intersections in Delhi. Thus the vehicular emissions, based on dynamic emission factors, have been estimated for the years 2003-2012 and are found to be comparable with the monitored concentrations at different locations in Delhi. It is noticed that the total emissions of CO, NOx, and PM10 increased by 45.63%, 68.88% and 17.92%, respectively, up to the year 2012, and the emissions of NOx and PM10 have grown continuously with annual average growth rates of 5.4% and 1.7%, respectively.
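The reported totals and annual rates are mutually consistent under compound growth over the ten study years, which a quick, purely illustrative check confirms:

```python
def annual_growth_rate(total_growth_pct, years):
    """Average annual (compound) growth rate implied by a total % increase."""
    return ((1 + total_growth_pct / 100) ** (1 / years) - 1) * 100

# NOx: +68.88% over the 10 study years; PM10: +17.92%
print(round(annual_growth_rate(68.88, 10), 1))   # 5.4, matching the abstract
print(round(annual_growth_rate(17.92, 10), 1))   # 1.7, matching the abstract
```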

  15. Study on the pupal morphogenesis of Chrysomya rufifacies (Macquart) (Diptera: Calliphoridae) for postmortem interval estimation.

    Science.gov (United States)

    Ma, Ting; Huang, Jia; Wang, Jiang-Feng

    2015-08-01

Chrysomya rufifacies (Macquart) is one of the most common species of blow flies at the scene of death in Southern China. Pupae are useful in postmortem interval (PMI) estimation due to their sedentary nature and longer duration of association with the corpse. However, determining the age of a pupa is more difficult than that of a larva, because morphological changes are rarely visible during pupal development. In this study, eggs of C. rufifacies were reared in climatic chambers under four different constant temperatures (20, 24, 28 and 32°C, each ±1°C) with the same rearing conditions, such as foodstuff, substrate, photoperiod and relative humidity. Ten duplicate pupae were sampled at 8-h intervals from prepupae to emergence at each of the constant temperatures. The pupae were sampled, killed, fixed and dissected, and with the puparium removed, the external morphological changes of the pupae were observed, recorded and photographed. The morphological characters of C. rufifacies pupae were described. Based on the visible external morphological characters during pupal morphogenesis at 28°C±1°C, the development of C. rufifacies was divided into nine developmental stages, described in detail. From these nine developmental stages, visible external morphological characters were selected as indicators of developmental stage. These indicators, mapped to the 8-h sampling intervals at the four constant temperatures, are also described in this study. It is demonstrated that the duration of each developmental stage of C. rufifacies pupae is generally inversely correlated with developmental temperature. This study provides relatively systematic pupal developmental data of C. rufifacies for the estimation of PMI. In addition, further work may improve the method by focusing on other environmental factors, histological analysis, and more thorough external examination by shortening sampling

  16. Covariate adjustments in randomized controlled trials increased study power and reduced biasedness of effect size estimation.

    Science.gov (United States)

    Lee, Paul H

    2016-08-01

    This study aims to show that under several assumptions, in randomized controlled trials (RCTs), unadjusted, crude analysis will underestimate the Cohen's d effect size of the treatment, and an unbiased estimate of effect size can be obtained only by adjusting for all predictors of the outcome. Four simulations were performed to examine the effects of adjustment on the estimated effect size of the treatment and power of the analysis. In addition, we analyzed data from the Advanced Cognitive Training for Independent and Vital Elderly (ACTIVE) study (older adults aged 65-94), an RCT with three treatment arms and one control arm. We showed that (1) the number of unadjusted covariates was associated with the effect size of the treatment; (2) the biasedness of effect size estimation was minimized if all covariates were adjusted for; (3) the power of the statistical analysis slightly decreased with the number of adjusted noise variables; and (4) exhaustively searching the covariates and noise variables adjusted for can lead to exaggeration of the true effect size. Analysis of the ACTIVE study data showed that the effect sizes adjusting for covariates of all three treatments were 7.39-24.70% larger than their unadjusted counterparts, whereas the effect size would be elevated by at most 57.92% by exhaustively searching the variables adjusted for. All covariates of the outcome in RCTs should be adjusted for, and if the effect of a particular variable on the outcome is unknown, adjustment will do more good than harm. Copyright © 2016 Elsevier Inc. All rights reserved.
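The dilution effect described above is easy to reproduce: an unadjusted Cohen's d divides the treatment difference by a standard deviation inflated by covariate variance. The simulation below is illustrative and deliberately simplified (the "adjustment" removes the covariate's known contribution rather than estimating it by regression, as an adjusted analysis would):

```python
import random

random.seed(3)
n = 5000
d_true = 0.5                      # treatment effect, in residual-SD units

treat, outcome, covar = [], [], []
for i in range(n):
    t = i % 2                     # randomized 1:1 assignment
    x = random.gauss(0, 1)        # prognostic covariate (predicts outcome)
    y = d_true * t + 1.0 * x + random.gauss(0, 1)
    treat.append(t); outcome.append(y); covar.append(x)

def cohens_d(ys, ts):
    g1 = [y for y, t in zip(ys, ts) if t == 1]
    g0 = [y for y, t in zip(ys, ts) if t == 0]
    m1, m0 = sum(g1) / len(g1), sum(g0) / len(g0)
    v1 = sum((y - m1) ** 2 for y in g1) / (len(g1) - 1)
    v0 = sum((y - m0) ** 2 for y in g0) / (len(g0) - 1)
    return (m1 - m0) / (((v1 + v0) / 2) ** 0.5)

d_crude = cohens_d(outcome, treat)
# adjusted analysis: remove the covariate's (here known) contribution,
# shrinking the residual SD from sqrt(1 + 1) back toward 1
d_adj = cohens_d([y - 1.0 * x for y, x in zip(outcome, covar)], treat)
print(d_crude < d_adj)            # the crude effect size is diluted
```

With these parameters the crude d should land near 0.5/√2 ≈ 0.35 while the adjusted d recovers roughly the true 0.5, mirroring the 7–25% inflation the ACTIVE reanalysis reports.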

  17. Secondary Analysis of the NCI-60 Whole Exome Sequencing Data Indicates Significant Presence of Propionibacterium acnes Genomic Material in Leukemia (RPMI-8226) and Central Nervous System (SF-295, SF-539, and SNB-19) Cell Lines.

    Directory of Open Access Journals (Sweden)

    Mark Rojas

Full Text Available The NCI-60 human tumor cell line panel has been used in a broad range of cancer research over the last two decades. A landmark 2013 whole exome sequencing study of this panel added an exceptional new resource for cancer biologists. The complementary analysis of the sequencing data produced by this study suggests the presence of Propionibacterium acnes genomic sequences in almost half of the datasets, with the highest abundance in the leukemia (RPMI-8226) and central nervous system (SF-295, SF-539, and SNB-19) cell lines. While the origin of these contaminating bacterial sequences remains to be determined, the observed results suggest that computational control for the presence of microbial genomic material is a necessary step in the analysis of high throughput sequencing (HTS) data.

  18. Estimating the economic impacts of ecosystem restoration—Methods and case studies

    Science.gov (United States)

    Cullinane Thomas, Catherine; Huber, Christopher; Skrabis, Kristin; Sidon, Joshua

    2016-04-05

    Federal investments in ecosystem restoration projects protect Federal trusts, ensure public health and safety, and preserve and enhance essential ecosystem services. These investments also generate business activity and create jobs. It is important for restoration practitioners to be able to quantify the economic impacts of individual restoration projects in order to communicate the contribution of these activities to local and national stakeholders. This report provides a detailed description of the methods used to estimate economic impacts of case study projects and also provides suggestions, lessons learned, and trade-offs between potential analysis methods.

  19. A study on the improved DTC method for estimations of radionuclide activity in radwaste containers

    International Nuclear Information System (INIS)

    Kang, Sang Hee; Hwang, Ki Ha; Lee, Sang Chul; Lee, Kun Jai; Kim, Tae Wook; Kim, Kyoung Deok; Herr, Young Hoi; Song, Myung Jae

    2004-01-01

Disposal of radwaste containers requires the assessment of the radioactive contents of each container. Some containers cannot be assessed by a γ nuclide analyzer because of time constraints and economic burden. An alternative, the dose-to-curie conversion (DTC) method, can provide an estimate of the container activity. This study evaluates the impact of voids, the chemical composition and density of the material, and the distribution of the source on the surface dose rate, and presents the development of an improved DTC method for more accurate assessment
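The basic conversion behind any DTC method can be sketched by treating the drum as a point source and inverting the dose-rate relation D = Γ·A/d²; the corrections for voids, matrix density and source distribution studied here are refinements on top of this. The gamma constant below is an approximate textbook value for Cs-137, used purely for illustration:

```python
# Point-source dose-to-curie sketch. Real DTC assessments add corrections
# for geometry, voids, matrix density and source distribution, which is
# exactly what the study investigates.
GAMMA_CS137 = 0.33  # R*m^2/(h*Ci), approximate gamma constant for Cs-137

def dose_to_curie(dose_rate_r_per_h, distance_m, gamma=GAMMA_CS137):
    """Invert the point-source relation D = gamma * A / d**2 for activity A."""
    return dose_rate_r_per_h * distance_m ** 2 / gamma

# e.g. 10 mR/h measured at 1 m from the drum
print(round(dose_to_curie(0.010, 1.0), 4))  # ~0.0303 Ci
```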

  20. Steam generator tubes rupture probability estimation - study of the axially cracked tube case

    International Nuclear Information System (INIS)

    Mavko, B.; Cizelj, L.; Roussel, G.

    1992-01-01

    The objective of the present study is to estimate the probability of a steam generator tube rupture due to the unstable propagation of axial through-wall cracks during a hypothetical accident. For this purpose the probabilistic fracture mechanics model was developed taking into account statistical distributions of influencing parameters. A numerical example considering a typical steam generator seriously affected by axial stress corrosion cracking in the roll transition area, is presented; it indicates the change of rupture probability with different assumptions focusing mostly on tubesheet reinforcing factor, crack propagation rate and crack detection probability. 8 refs., 4 figs., 4 tabs
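The probabilistic-fracture-mechanics logic, sampling the influencing parameters from assumed statistical distributions and counting unstable outcomes, can be illustrated with a toy Monte Carlo. All distributions and the detection model below are invented for illustration, not the paper's:

```python
import random

random.seed(5)
n = 200_000
ruptures = 0
for _ in range(n):
    crack = random.lognormvariate(1.0, 0.5)    # axial crack length, mm
    critical = random.gauss(12.0, 1.5)         # critical length, accident load
    # probability of detection (and plugging) grows with crack size
    detected = random.random() < min(1.0, crack / 15.0)
    if not detected and crack > critical:
        ruptures += 1
print(ruptures / n)   # estimated per-tube rupture probability
```

The abstract's sensitivity findings correspond to varying exactly these inputs: the tubesheet reinforcing factor shifts the critical-length distribution, while crack propagation rate and detection probability shift the crack-size and detection terms.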

  1. Remaining useful life estimation based on stochastic deterioration models: A comparative study

    International Nuclear Information System (INIS)

    Le Son, Khanh; Fouladirad, Mitra; Barros, Anne; Levrat, Eric; Iung, Benoît

    2013-01-01

Prognostics of system lifetime is a basic requirement for condition-based maintenance in many application domains where safety, reliability, and availability are considered of first importance. This paper presents a probabilistic method for prognostics applied to the 2008 PHM Conference Challenge data. A stochastic process (Wiener process) combined with a data analysis method (Principal Component Analysis) is proposed to model the deterioration of the components and to estimate the remaining useful life (RUL) in a case study. The advantages of our probabilistic approach are pointed out and a comparison with existing results on the same data is made
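For a Wiener degradation process with positive drift μ, the first-passage time to a failure threshold L follows an inverse Gaussian distribution with mean (L − x₀)/μ, which gives a closed form to check a simulation against. A sketch with invented parameters (not the PHM Challenge data):

```python
import random

random.seed(11)
mu, sigma = 0.02, 0.05        # drift and diffusion of the degradation path
L, x0, dt = 1.0, 0.4, 1.0     # failure threshold, current level, time step

def hitting_time():
    """Simulate one Wiener degradation path until it crosses the threshold."""
    x, t = x0, 0.0
    while x < L:
        x += mu * dt + sigma * random.gauss(0, dt ** 0.5)
        t += dt
    return t

n = 2000
mean_rul_mc = sum(hitting_time() for _ in range(n)) / n
mean_rul_theory = (L - x0) / mu      # inverse-Gaussian mean first-passage time
print(round(mean_rul_theory, 1))     # 30.0
print(abs(mean_rul_mc - mean_rul_theory) / mean_rul_theory < 0.15)
```

The small residual gap between the simulated and theoretical means comes from time discretization (overshoot at the threshold), which shrinks as dt decreases.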

  2. Pressurized water reactor monitoring. Study of detection, diagnostic and estimation (least squares and filtering) methods

    International Nuclear Information System (INIS)

    Gillet, M.

    1986-07-01

This thesis presents a study of the surveillance of the primary circuit water inventory of a pressurized water reactor. A reference model is developed for an automatic system ensuring detection and real-time diagnosis. The methods applied are statistical tests and an adapted pattern recognition method. The estimation of the detected anomalies is treated by the least-squares method and by filtering. A new projected optimization method with superlinear convergence is developed in this framework, and a segmented linearization of the model is introduced, in view of multiple filtering. 46 refs [fr]

  3. Factoring vs linear modeling in rate estimation: a simulation study of relative accuracy.

    Science.gov (United States)

    Maldonado, G; Greenland, S

    1998-07-01

    A common strategy for modeling dose-response in epidemiology is to transform ordered exposures and covariates into sets of dichotomous indicator variables (that is, to factor the variables). Factoring tends to increase estimation variance, but it also tends to decrease bias and thus may increase or decrease total accuracy. We conducted a simulation study to examine the impact of factoring on the accuracy of rate estimation. Factored and unfactored Poisson regression models were fit to follow-up study datasets that were randomly generated from 37,500 population model forms that ranged from subadditive to supramultiplicative. In the situations we examined, factoring sometimes substantially improved accuracy relative to fitting the corresponding unfactored model, sometimes substantially decreased accuracy, and sometimes made little difference. The difference in accuracy between factored and unfactored models depended in a complicated fashion on the difference between the true and fitted model forms, the strength of exposure and covariate effects in the population, and the study size. It may be difficult in practice to predict when factoring is increasing or decreasing accuracy. We recommend, therefore, that the strategy of factoring variables be supplemented with other strategies for modeling dose-response.
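The factoring strategy the abstract describes, replacing an ordered exposure with a set of dichotomous indicator (dummy) variables, can be sketched as follows; the helper and data are illustrative, not the authors' code:

```python
def factor(values, levels):
    """Turn an ordered exposure into dichotomous indicator columns
    ("factoring"), dropping the first level as the referent category."""
    return [[1 if v == lev else 0 for lev in levels[1:]] for v in values]

# An ordered dose variable with three levels...
dose = [0, 1, 2, 2, 1, 0]
# ...becomes two indicator columns (level 0 is the referent). Each dose
# level then gets its own unconstrained rate in a Poisson regression,
# instead of a single linear-trend coefficient.
X = factor(dose, levels=[0, 1, 2])
```

The trade-off the simulation study examines follows directly: the indicator columns free each level's estimate (less bias) at the price of more parameters (more variance).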

  4. Finite Element Method Application in Areal Rainfall Estimation Case Study; Mashhad Plain Basin

    Directory of Open Access Journals (Sweden)

    M. Irani

    2016-10-01

Introduction: Hydrological models are very important tools for the planning and management of water resources. These models can be used to identify basin problems and to choose among management options, and precipitation is a basic input to them. Rainfall calculations are affected by location and by regional factors such as topography. Estimating areal rainfall is one of the basic needs in meteorological, water-resources and other studies, and there are various estimation methods, which can be evaluated using statistical data and mathematical criteria. Because precipitation varies in space, areal rainfall is especially important in hydrological analysis. Methods for estimating areal rainfall fall into three groups: (1) graphical, (2) topographical, and (3) numerical. This paper presents the calculation of mean precipitation (daily, monthly and annual) using Galerkin’s method (a numerical, finite-element method) and compares it with other methods such as kriging, IDW, Thiessen and the arithmetic mean. In this study there were 42 actual gauges and thirteen dummy stations in the Mashhad plain basin. The method uses interpolation functions, allowing an accurate representation of the shape and relief of the catchment, with numerical integration performed by Gaussian quadrature and weights allocated to the stations. Materials and Methods: Estimating areal rainfall (daily, monthly, ...) is a basic need for meteorological projects, and the finite element method is one of the available approaches. The present study estimated areal rainfall over a 16-year period (1997-2012) using the Galerkin (finite element) method in the Mashhad plain basin for 42 stations, and compared it with the usual methods: arithmetic mean, Thiessen, kriging and IDW. The Thiessen, kriging and IDW analyses were carried out in the ArcGIS 10.0 environment, and the finite element analysis in Matlab
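Of the comparison methods, inverse distance weighting (IDW) is the simplest to write down: the rainfall at an ungauged point is a weighted mean of the gauge values, with weights decaying as a power of distance. A minimal sketch (not the paper's Galerkin implementation; gauge coordinates and values are made up):

```python
def idw(x, y, stations, power=2.0):
    """Inverse-distance-weighted rainfall at point (x, y) from a list of
    (xi, yi, rain_i) gauges."""
    num = den = 0.0
    for xi, yi, ri in stations:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return ri  # exactly at a gauge: return its value
        w = 1.0 / d2 ** (power / 2.0)
        num += w * ri
        den += w
    return num / den

# Three hypothetical gauges (x, y, rainfall in mm)
gauges = [(0.0, 0.0, 10.0), (1.0, 0.0, 20.0), (0.0, 1.0, 30.0)]
```

The Galerkin approach replaces these ad hoc weights with ones derived from finite-element interpolation functions over the catchment, which is what lets it represent the basin's shape and relief.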

  5. Development of a low-maintenance measurement approach to continuously estimate methane emissions: A case study.

    Science.gov (United States)

    Riddick, S N; Hancock, B R; Robinson, A D; Connors, S; Davies, S; Allen, G; Pitt, J; Harris, N R P

    2018-03-01

The chemical breakdown of organic matter in landfills represents a significant source of methane gas (CH4). Current estimates suggest that landfills are responsible for between 3% and 19% of global anthropogenic emissions. The net CH4 emissions resulting from biogeochemical processes and their modulation by microbes in landfills are poorly constrained by imprecise knowledge of environmental constraints. The uncertainty in absolute CH4 emissions from landfills is therefore considerable. This study investigates a new method to estimate the temporal variability of CH4 emissions using meteorological and CH4 concentration measurements downwind of a landfill site in Suffolk, UK from July to September 2014, taking advantage of the statistics that such a measurement approach offers versus shorter-term, but more complex and instantaneously accurate, flux snapshots. Methane emissions were calculated from CH4 concentrations measured 700 m from the perimeter of the landfill, with observed concentrations ranging from background to 46.4 ppm. Using an atmospheric dispersion model, we estimate a mean emission flux of 709 μg m⁻² s⁻¹ over this period, with a maximum value of 6.21 mg m⁻² s⁻¹, reflecting the wide natural variability in biogeochemical and other environmental controls on net site emission. The emissions calculated suggest that meteorological conditions have an influence on the magnitude of CH4 emissions. We also investigate the factors responsible for the large variability observed in the estimated CH4 emissions, and suggest that the largest component arises from uncertainty in the spatial distribution of CH4 emissions within the landfill area. The results determined using the low-maintenance approach discussed in this paper suggest that a network of cheaper, less precise CH4 sensors could be used to measure a continuous CH4 emission time series from a landfill site, something that is not practical using far-field approaches such as tracer release methods
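The inversion step, going from a measured downwind concentration and wind speed to a source strength, can be illustrated with a ground-level Gaussian plume solved for Q. The dispersion-coefficient fits and all numbers below are illustrative assumptions (neutral stability, ground-level source and receptor with full ground reflection), not the dispersion model the study actually used:

```python
import math

def plume_emission(conc_ug_m3, wind_speed, x, y=0.0):
    """Back out a source rate Q (ug/s) from a concentration measured at
    downwind distance x (m) and crosswind offset y (m), by inverting a
    ground-level Gaussian plume: C = Q/(pi*u*sy*sz) * exp(-y^2/(2*sy^2)).
    Dispersion coefficients use an assumed neutral-stability power law."""
    sigma_y = 0.08 * x / math.sqrt(1 + 0.0001 * x)
    sigma_z = 0.06 * x / math.sqrt(1 + 0.0015 * x)
    coupling = (math.exp(-y ** 2 / (2 * sigma_y ** 2))
                / (math.pi * wind_speed * sigma_y * sigma_z))
    return conc_ug_m3 / coupling
```

With the measurement geometry quoted in the abstract (receptor 700 m out), each concentration sample plus the concurrent wind record yields one emission estimate, which is what makes a continuous time series possible from cheap sensors.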

  6. New methods for estimating follow-up rates in cohort studies

    Directory of Open Access Journals (Sweden)

    Xiaonan Xue

    2017-12-01

Background: The follow-up rate, a standard index of the completeness of follow-up, is important for assessing the validity of a cohort study. A common method for estimating the follow-up rate, the “Percentage Method”, defined as the fraction of all enrollees who developed the event of interest or had complete follow-up, can severely underestimate the degree of follow-up. Alternatively, the median follow-up time does not indicate the completeness of follow-up, and the reverse Kaplan-Meier based method and Clark’s Completeness Index (CCI) also have limitations. Methods: We propose a new definition for the follow-up rate, the Person-Time Follow-up Rate (PTFR), which is the observed person-time divided by the total person-time assuming no dropouts. The PTFR cannot be calculated directly since the event times for dropouts are not observed. Therefore, two estimation methods are proposed: a formal person-time method (FPT), in which the expected total follow-up time is calculated using the event rate estimated from the observed data, and a simplified person-time method (SPT), which avoids estimation of the event rate by assigning full follow-up time to all events. Simulations were conducted to measure the accuracy of each method, and each method was applied to a prostate cancer recurrence study dataset. Results: Simulation results showed that the FPT has the highest accuracy overall. In most situations, the computationally simpler SPT and CCI methods are only slightly biased. When applied to a retrospective cohort study of cancer recurrence, the FPT, CCI and SPT showed substantially greater 5-year follow-up than the Percentage Method (92%, 92% and 93% vs 68%). Conclusions: The person-time methods correct a systematic error in the standard Percentage Method for calculating follow-up rates. The easy-to-use SPT and CCI methods can be used in tandem to obtain an accurate and tight interval for the PTFR. However, the FPT is recommended when event rates and
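The contrast between the two families of indices can be sketched with two small helpers. The first is the standard Percentage Method as defined above; the second is a simplified reading of the person-time idea, observed person-time over the person-time that would accrue with no dropouts. The paper's FPT and SPT refine how the denominator treats events and dropouts, so this is only an illustration of the general principle:

```python
def percentage_method(n_events, n_complete, n_enrolled):
    """Standard Percentage Method: fraction of enrollees who had the
    event of interest or completed follow-up."""
    return (n_events + n_complete) / n_enrolled

def person_time_follow_up_rate(observed_times, t_end):
    """Simplified person-time follow-up rate: observed person-time
    divided by the person-time that would accrue if every enrollee were
    followed for the full period t_end. (Illustrative only; the paper's
    FPT/SPT handle events and dropouts more carefully.)"""
    return sum(observed_times) / (len(observed_times) * t_end)
```

For example, a cohort of three with follow-up times 5, 5, and 2.5 years out of a 5-year study has a person-time rate of 12.5/15 ≈ 83%, even though only two of three (67%) had complete follow-up.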

  7. Numerical study of the evaporation process and parameter estimation analysis of an evaporation experiment

    Directory of Open Access Journals (Sweden)

    K. Schneider-Zapp

    2010-05-01

Evaporation is an important process in soil-atmosphere interaction. The determination of hydraulic properties is one of the crucial parts of simulating water transport in porous media. Schneider et al. (2006) developed a new evaporation method to improve the estimation of hydraulic properties in the dry range. In this study we used numerical simulations of the experiment to study the physical dynamics in more detail, to optimise the boundary conditions and to choose the optimal combination of measurements. The physical analysis revealed, in accordance with experimental findings in the literature, two different evaporation regimes: (i) a regime dominated by the soil-atmosphere boundary layer (regime I) close to saturation and (ii) a hydraulically dominated regime (regime II). During this second regime a drying front (the interface between the unsaturated and dry zones) with very steep gradients forms and penetrates deeper into the soil as time passes. The sensitivity analysis showed that the result is especially sensitive at the transition between the two regimes. By changing the boundary conditions it is possible to force the system to switch between the two regimes, e.g. from II back to I. Based on these findings a multistep experiment was developed. The response surfaces for all parameter combinations are flat and have a unique, localised minimum. The best parameter estimates are obtained if the evaporation flux and a potential measurement at 2 cm depth are used as target variables. Parameter estimation from simulated experiments with realistic measurement errors, using a two-stage Monte-Carlo Levenberg-Marquardt procedure and manual rejection of obvious misfits, led to acceptable results for three different soil textures.

  8. The estimated incidence of induced abortion in Kenya: a cross-sectional study.

    Science.gov (United States)

    Mohamed, Shukri F; Izugbara, Chimaraoke; Moore, Ann M; Mutua, Michael; Kimani-Murage, Elizabeth W; Ziraba, Abdhalah K; Bankole, Akinrinola; Singh, Susheela D; Egesa, Caroline

    2015-08-21

    The recently promulgated 2010 constitution of Kenya permits abortion when the life or health of the woman is in danger. Yet broad uncertainty remains about the interpretation of the law. Unsafe abortion remains a leading cause of maternal morbidity and mortality in Kenya. The current study aimed to determine the incidence of induced abortion in Kenya in 2012. The incidence of induced abortion in Kenya in 2012 was estimated using the Abortion Incidence Complications Methodology (AICM) along with the Prospective Morbidity Survey (PMS). Data were collected through three surveys, (i) Health Facilities Survey (HFS), (ii) Prospective Morbidity Survey (PMS), and (iii) Health Professionals Survey (HPS). A total of 328 facilities participated in the HFS, 326 participated in the PMS, and 124 key informants participated in the HPS. Abortion numbers, rates, ratios and unintended pregnancy rates were calculated for Kenya as a whole and for five geographical regions. In 2012, an estimated 464,000 induced abortions occurred in Kenya. This translates into an abortion rate of 48 per 1,000 women aged 15-49, and an abortion ratio of 30 per 100 live births. About 120,000 women received care for complications of induced abortion in health facilities. About half (49%) of all pregnancies in Kenya were unintended and 41% of unintended pregnancies ended in an abortion. This study provides the first nationally-representative estimates of the incidence of induced abortion in Kenya. An urgent need exists for improving facilities' capacity to provide safe abortion care to the fullest extent of the law. All efforts should be made to address underlying factors to reduce risk of unsafe abortion.
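The headline indices reported here are straightforward ratios, and it is worth seeing how the denominators relate. The population denominators below are back-calculated from the reported rate and ratio for illustration; they are not figures reported by the study:

```python
def abortion_rate(n_abortions, n_women_15_49):
    """Abortions per 1,000 women of reproductive age (15-49)."""
    return 1000.0 * n_abortions / n_women_15_49

def abortion_ratio(n_abortions, n_live_births):
    """Abortions per 100 live births."""
    return 100.0 * n_abortions / n_live_births

# The study's headline figures (464,000 abortions; rate 48/1,000; ratio
# 30/100) imply roughly 9.7 million women aged 15-49 and 1.55 million
# live births in Kenya in 2012 -- back-calculated, hypothetical inputs:
rate = abortion_rate(464_000, 9_667_000)
ratio = abortion_ratio(464_000, 1_547_000)
```

The rate answers "how common is abortion among women at risk of pregnancy", while the ratio compares abortions with births; the two can move differently if fertility changes.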

  9. Improving Focal Depth Estimates: Studies of Depth Phase Detection at Regional Distances

    Science.gov (United States)

    Stroujkova, A.; Reiter, D. T.; Shumway, R. H.

    2006-12-01

The accurate estimation of the depth of small, regionally recorded events continues to be an important and difficult explosion monitoring research problem. Depth phases (free surface reflections) are the primary tool that seismologists use to constrain the depth of a seismic event. When depth phases from an event are detected, an accurate source depth is easily found by using the delay times of the depth phases relative to the P wave and a velocity profile near the source. Cepstral techniques, including cepstral F-statistics, represent a class of methods designed for depth-phase detection and identification; however, they offer only a moderate level of success at epicentral distances less than 15°. This is due to complexities in the Pn coda, which can lead to numerous false detections in addition to the true phase detection. Therefore, cepstral methods cannot be used independently to reliably identify depth phases. Other evidence, such as apparent velocities, amplitudes and frequency content, must be used to confirm whether the phase is truly a depth phase. In this study we used a variety of array methods to estimate apparent phase velocities and arrival azimuths, including beam-forming, semblance analysis, MUltiple SIgnal Classification (MUSIC) (e.g., Schmidt, 1979), and cross-correlation (e.g., Cansi, 1995; Tibuleac and Herrin, 1997). To facilitate the processing and comparison of results, we developed a MATLAB-based processing tool, which allows application of all of these techniques (i.e., augmented cepstral processing) in a single environment. The main objective of this research was to combine the results of three focal-depth estimation techniques and their associated standard errors into a statistically valid unified depth estimate. The three techniques include: 1. Direct focal depth estimate from the depth-phase arrival times picked via augmented cepstral processing. 2. Hypocenter location from direct and surface-reflected arrivals observed on sparse
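The underlying signal-processing idea is echo detection: a depth phase such as pP arrives as a delayed, scaled (often sign-flipped) copy of P, so a correlation measure shows a peak at the pP−P delay. A toy stand-in for the correlation-based detectors listed above, on a noise-free synthetic trace (real regional data would also need the apparent-velocity and azimuth evidence the authors describe):

```python
def echo_delay(signal, min_lag, max_lag):
    """Pick the lag (in samples) with the largest |autocorrelation| in
    [min_lag, max_lag]; an echo produces a peak at its delay."""
    n = len(signal)
    best_lag, best_val = min_lag, -1.0
    for lag in range(min_lag, max_lag + 1):
        val = abs(sum(signal[i] * signal[i - lag] for i in range(lag, n)))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

# Synthetic trace: an impulsive "P" at sample 20 plus an inverted,
# attenuated "pP" echo 30 samples later; noise omitted for clarity.
trace = [0.0] * 200
trace[20] = 1.0
trace[50] = -0.6
```

Given a picked delay and a near-source velocity model, the delay converts directly to depth, which is the first of the three estimates the study combines.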

  10. Interval estimation of the overall treatment effect in a meta-analysis of a few small studies with zero events.

    Science.gov (United States)

    Pateras, Konstantinos; Nikolakopoulos, Stavros; Mavridis, Dimitris; Roes, Kit C B

    2018-03-01

When a meta-analysis consists of a few small trials that report zero events, accounting for heterogeneity in the (interval) estimation of the overall effect is challenging. Typically, we predefine the meta-analytical methods to be employed. In practice, the data pose restrictions that lead to deviations from the pre-planned analysis, such as the presence of zero events in at least one study arm. We aim to explore the behaviour of heterogeneity estimators in estimating the overall effect across different levels of sparsity of events. We performed a simulation study that consists of two evaluations. We considered an overall comparison of estimators unconditional on the number of observed zero cells and an additional one conditioning on the number of observed zero cells. Estimators that were modestly robust when (interval) estimating the overall treatment effect across a range of heterogeneity assumptions were the Sidik-Jonkman, Hartung-Makambi and improved Paule-Mandel estimators. The relative performance of the estimators did not materially differ between making a predefined or a data-driven choice. Our investigations confirmed that heterogeneity in such settings cannot be estimated reliably. Estimators whose performance depends strongly on the presence of heterogeneity should be avoided. The choice of estimator does not need to depend on whether or not zero cells are observed.
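Of the estimators named above, the Sidik-Jonkman one is easy to write down: start from a crude between-study variance, form reweighting factors q_i = s_i²/τ₀² + 1, and take a weighted residual mean square about the reweighted mean. A sketch of the published formula (the inputs are made-up effect sizes, and the 1e-8 guard is an implementation convenience, not part of the estimator):

```python
def sidik_jonkman_tau2(effects, variances):
    """Sidik-Jonkman between-study variance estimator for a
    random-effects meta-analysis (effects y_i, within-study variances
    s_i^2)."""
    k = len(effects)
    ybar = sum(effects) / k
    # Crude initial estimate tau0^2 (mean squared deviation of effects)
    tau2_0 = sum((y - ybar) ** 2 for y in effects) / k
    if tau2_0 == 0.0:
        tau2_0 = 1e-8  # guard against identical effects
    q = [s2 / tau2_0 + 1.0 for s2 in variances]
    # Mean weighted by 1/q_i
    w = [1.0 / qi for qi in q]
    mu = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Weighted residual mean square about mu
    return sum((yi - mu) ** 2 / qi for yi, qi in zip(effects, q)) / (k - 1)
```

Because the residual sum is divided by k − 1 rather than shrunk toward zero, Sidik-Jonkman never returns a zero variance, one reason it behaves differently from DerSimonian-Laird in sparse-event settings.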

  11. ICPP calcined solids storage facility closure study. Volume II: Cost estimates, planning schedules, yearly cost flowcharts, and life-cycle cost estimates

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-02-01

This document contains Volume II of the Closure Study for the Idaho Chemical Processing Plant Calcined Solids Storage Facility. This volume contains draft information on cost estimates, planning schedules, yearly cost flowcharts, and life-cycle costs for the four options described in Volume I: (1) Risk-Based Clean Closure; NRC Class C fill, (2) Risk-Based Clean Closure; Clean fill, (3) Closure to Landfill Standards; NRC Class C fill, and (4) Closure to Landfill Standards; Clean fill.

  12. ICPP calcined solids storage facility closure study. Volume II: Cost estimates, planning schedules, yearly cost flowcharts, and life-cycle cost estimates

    International Nuclear Information System (INIS)

    1998-02-01

This document contains Volume II of the Closure Study for the Idaho Chemical Processing Plant Calcined Solids Storage Facility. This volume contains draft information on cost estimates, planning schedules, yearly cost flowcharts, and life-cycle costs for the four options described in Volume I: (1) Risk-Based Clean Closure; NRC Class C fill, (2) Risk-Based Clean Closure; Clean fill, (3) Closure to Landfill Standards; NRC Class C fill, and (4) Closure to Landfill Standards; Clean fill

  13. A Web-based Simulator for Sample Size and Power Estimation in Animal Carcinogenicity Studies

    Directory of Open Access Journals (Sweden)

    Hojin Moon

    2002-12-01

A Web-based statistical tool for sample size and power estimation in animal carcinogenicity studies is presented in this paper. It can be used to provide a design with sufficient power for detecting a dose-related trend in the occurrence of a tumor of interest when competing risks are present. The tumors of interest typically are occult tumors for which the time to tumor onset is not directly observable. It is applicable to rodent tumorigenicity assays that have either a single terminal sacrifice or multiple (interval) sacrifices. The design is achieved by varying the sample size per group, the number of sacrifices, the number of sacrificed animals at each interval, if any, and the scheduled time points for sacrifice. Monte Carlo simulation is carried out in this tool to simulate experiments of rodent bioassays because no closed-form solution is available. It takes design parameters for sample size and power estimation as inputs through the World Wide Web. The core program is written in C and executed in the background. It communicates with the Web front end via a Component Object Model interface passing an Extensible Markup Language string. The proposed statistical tool is illustrated with an animal study in lung cancer prevention research.
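The Monte Carlo logic is: simulate many bioassays under assumed dose-group tumor rates, apply a trend test to each, and report the fraction rejecting at the chosen alpha. A bare-bones analogue using a one-sided Cochran-Armitage trend test (normal approximation); unlike the tool described, it ignores competing risks, occult-tumor onset times and sacrifice schedules, and all rates are illustrative:

```python
import math
import random

def trend_p_value(tumors, n_per_group, doses):
    """One-sided Cochran-Armitage trend test p-value (normal approx.)."""
    N = len(doses) * n_per_group
    p_bar = sum(tumors) / N
    if p_bar in (0.0, 1.0):
        return 1.0
    d_bar = sum(doses) / len(doses)
    num = sum(d * (r - n_per_group * p_bar) for d, r in zip(doses, tumors))
    var = p_bar * (1 - p_bar) * n_per_group * sum((d - d_bar) ** 2 for d in doses)
    z = num / math.sqrt(var)
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))  # upper-tail p

def simulated_power(rates, n_per_group=50, n_sims=2000, alpha=0.05, seed=3):
    """Fraction of simulated bioassays in which the trend test rejects."""
    random.seed(seed)
    doses = list(range(len(rates)))
    hits = 0
    for _ in range(n_sims):
        tumors = [sum(random.random() < p for _ in range(n_per_group))
                  for p in rates]
        if trend_p_value(tumors, n_per_group, doses) < alpha:
            hits += 1
    return hits / n_sims
```

Sweeping `n_per_group` until `simulated_power` clears the target (e.g. 0.8) reproduces the sample-size side of the calculation.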

  14. Estimation of tourism-induced electricity consumption: The case study of Balearics Islands, Spain

    International Nuclear Information System (INIS)

    Bakhat, Mohcine; Rossello, Jaume

    2011-01-01

Tourism has started to be acknowledged as a significant contributor to the increase in environmental externalities, especially to climate change. Various studies have started to estimate and compute the contributions of the different tourism sectors to greenhouse gas (GHG) emissions. These estimations have been made from a sectoral perspective, assessing the contribution of air transport, the accommodation sector, or other tourism-related economic sectors. However, the approaches used in the literature to evaluate the impact of this sector on energy use consider tourism only in a disaggregated way. This paper assesses the electricity demand pattern and investigates the aggregated contribution of tourism to electricity consumption using the case study of the Balearic Islands (Spain). Using a conventional daily electricity demand model including data on daily stocks of tourists, the impact of different population growth rate scenarios on electricity loads is also investigated. The results show that, in terms of electricity consumption, tourism cannot be considered a very energy-intensive sector.

  15. Estimated drinking water fluoride exposure and risk of hip fracture: a cohort study.

    Science.gov (United States)

    Näsman, P; Ekstrand, J; Granath, F; Ekbom, A; Fored, C M

    2013-11-01

The cariostatic benefit of water fluoridation is indisputable, but knowledge of possible adverse effects on bone and fracture risk due to fluoride exposure is ambiguous. The association between long-term (chronic) drinking water fluoride exposure and hip fracture (ICD-7-9: '820' and ICD-10: 'S72.0-S72.2') was assessed in Sweden using nationwide registers. All individuals born in Sweden between January 1, 1900 and December 31, 1919, alive and living in their municipality of birth at the start of follow-up, were eligible for this study. Information on the study population (n = 473,277) was linked among the Swedish National In-Patient Register (IPR), the Swedish Cause of Death Register, and the Register of Population and Population Changes. Estimated individual drinking water fluoride exposure was stratified into four categories, from very low to high; no clear association with hip fracture was observed. The risk estimates did not change in analyses restricted to only low-trauma osteoporotic hip fractures. Chronic fluoride exposure from drinking water does not seem to have any important effect on the risk of hip fracture in the investigated exposure range.

  16. Estimation development cost, study case: Quality Management System Reactor TRIGA Mark III

    International Nuclear Information System (INIS)

    Antúnez Barbosa, Tereso Antonio; Valdovinos Rosas, Rosa María; Marcial Romero, José Raymundo; Ramos Corchado, Marco Antonio; Edgar Herrera Arriaga

    2016-01-01

The process of estimating costs in software engineering is not a simple task; it must be addressed carefully to obtain an efficient strategy for solving problems associated with the effort, cost and time of the activities performed in the development of an information system project. In this context the main concern for both developers and customers is cost: developers worry about the effort involved, and customers worry about the price of the product. In other fields the cost of goods depends on the activity or process performed, from which it follows that the main driver of the cost of the final product of a software development project is undoubtedly its size. In this paper a comparative study of common cost estimation models is presented. These models are used today to create a structured analysis that provides the necessary information about cost, time and effort for decision making in a software development project. Finally, the models are applied to a case study, a system called Monitorizacion Automatica del Sistema de Gestion de Calidad del Reactor TRIGA Mark III. (author)
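The best-known of the size-driven models compared in such studies is Basic COCOMO, where effort grows as a power of code size. A minimal sketch using Boehm's published constants (applying them to any particular system, such as the TRIGA quality-management software, would of course be only a rough first cut):

```python
def cocomo_basic(kloc, mode="organic"):
    """Basic COCOMO: effort (person-months) and schedule (months) from
    size in thousands of lines of code. (a, b, c, d) are Boehm's
    published constants for each project mode."""
    params = {
        "organic":      (2.4, 1.05, 2.5, 0.38),
        "semidetached": (3.0, 1.12, 2.5, 0.35),
        "embedded":     (3.6, 1.20, 2.5, 0.32),
    }
    a, b, c, d = params[mode]
    effort = a * kloc ** b        # person-months
    schedule = c * effort ** d    # calendar months
    return effort, schedule
```

For a 32 KLOC organic-mode project this gives roughly 91 person-months over about 14 months, illustrating the superlinear effort growth with size that makes size the dominant cost driver.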

  17. Defining and Estimating Healthy Aging in Spain: A Cross-sectional Study.

    Science.gov (United States)

    Rodriguez-Laso, Angel; McLaughlin, Sara J; Urdaneta, Elena; Yanguas, Javier

    2018-03-19

Using an operational continuum of healthy aging developed by U.S. researchers, we sought to estimate the prevalence of healthy aging among older Spaniards, inform the development of a definition of healthy aging in Spain, and foster cross-national research on healthy aging. The ELES pilot study is a nationwide, cross-sectional survey of community-dwelling Spaniards 50 years and older. The prevalence of healthy aging was calculated for the population aged 65 and over using varying definitions. To evaluate their validity, we examined the association of healthy aging with the 8-foot up-and-go test, quality of life scores and self-perceived health using multiple linear and logistic regression. The estimated prevalence of healthy aging varied across the operational continuum, from 4.5% to 49.2%. Prevalence figures were greater for men and for those aged 65 to 79 years, and were higher than in the United States. Predicted mean physical performance scores were similar for 3 of the 4 definitions, suggesting that stringent definitions of healthy aging offer little advantage over a more moderate one. Like the U.S. researchers, we recommend a definition of healthy aging that incorporates measures of functional health and limiting disease, as opposed to definitions requiring the absence of all disease, in studies designed to assess the effect of policy initiatives on healthy aging.

  18. Estimation of tourism-induced electricity consumption: The case study of Balearics Islands, Spain

    Energy Technology Data Exchange (ETDEWEB)

    Bakhat, Mohcine, E-mail: mohcine_bakhat@yahoo.com; Rossello, Jaume

    2011-05-15

Tourism has started to be acknowledged as a significant contributor to the increase in environmental externalities, especially to climate change. Various studies have started to estimate and compute the contributions of the different tourism sectors to greenhouse gas (GHG) emissions. These estimations have been made from a sectoral perspective, assessing the contribution of air transport, the accommodation sector, or other tourism-related economic sectors. However, the approaches used in the literature to evaluate the impact of this sector on energy use consider tourism only in a disaggregated way. This paper assesses the electricity demand pattern and investigates the aggregated contribution of tourism to electricity consumption using the case study of the Balearic Islands (Spain). Using a conventional daily electricity demand model including data on daily stocks of tourists, the impact of different population growth rate scenarios on electricity loads is also investigated. The results show that, in terms of electricity consumption, tourism cannot be considered a very energy-intensive sector.

  19. Comparing population exposure to multiple Washington earthquake scenarios for prioritizing loss estimation studies

    Science.gov (United States)

    Wood, Nathan J.; Ratliff, Jamie L.; Schelling, John; Weaver, Craig S.

    2014-01-01

    Scenario-based, loss-estimation studies are useful for gauging potential societal impacts from earthquakes but can be challenging to undertake in areas with multiple scenarios and jurisdictions. We present a geospatial approach using various population data for comparing earthquake scenarios and jurisdictions to help emergency managers prioritize where to focus limited resources on data development and loss-estimation studies. Using 20 earthquake scenarios developed for the State of Washington (USA), we demonstrate how a population-exposure analysis across multiple jurisdictions based on Modified Mercalli Intensity (MMI) classes helps emergency managers understand and communicate where potential loss of life may be concentrated and where impacts may be more related to quality of life. Results indicate that certain well-known scenarios may directly impact the greatest number of people, whereas other, potentially lesser-known, scenarios impact fewer people but consequences could be more severe. The use of economic data to profile each jurisdiction’s workforce in earthquake hazard zones also provides additional insight on at-risk populations. This approach can serve as a first step in understanding societal impacts of earthquakes and helping practitioners to efficiently use their limited risk-reduction resources.

  20. Assessing the external validity of model-based estimates of the incidence of heart attack in England: a modelling study

    Directory of Open Access Journals (Sweden)

    Peter Scarborough

    2016-11-01

Background: The DisMod II model is designed to estimate epidemiological parameters for diseases where measured data are incomplete, and has been used to provide estimates of disease incidence for the Global Burden of Disease study. We assessed the external validity of the DisMod II model by comparing modelled estimates of the incidence of first acute myocardial infarction (AMI) in England in 2010 with estimates derived from a linked dataset of hospital records and death certificates. Methods: Inputs for DisMod II were prevalence rates of ever having had an AMI taken from a population health survey, and total mortality rates and AMI mortality rates taken from death certificates. By definition, remission rates were zero. We estimated first AMI incidence in an external dataset from England in 2010 using a linked dataset including all hospital admissions and death certificates since 1998. 95% confidence intervals were derived around the external dataset and DisMod II estimates, based on sampling variance and reported uncertainty in prevalence estimates respectively. Results: Estimates of the incidence rate for the whole population were higher in the DisMod II results than in the external dataset (+54% for men and +26% for women). Age-specific results showed that the DisMod II results over-estimated incidence for all but the oldest age groups. Confidence intervals for the DisMod II and external dataset estimates did not overlap for most age groups. Conclusion: By comparison with AMI incidence rates in England, DisMod II did not achieve external validity for age-specific incidence rates, but did provide global estimates of incidence of similar magnitude to the measured estimates. The model should be used with caution when estimating age-specific incidence rates.

  1. The Effects of Metal on Size Specific Dose Estimation (SSDE) in CT: A Phantom Study

    Science.gov (United States)

    Alsanea, Maram M.

Over the past several years there has been a significant increase in awareness of radiation dose from computed tomography (CT). Efforts have been made to reduce radiation dose from CT and to better quantify the dose being delivered. Unfortunately, however, dose metrics such as CTDIvol are not specific patient doses. In 2011, size-specific dose estimation (SSDE) was introduced by AAPM TG-204, which accounts for the physical size of the patient. However, the approach presented in TG-204 ignores the importance of attenuation differences in the body. In 2014, a newer methodology that accounts for tissue attenuation was introduced by AAPM TG-220, based on the concept of water-equivalent diameter, Dw. One limitation of TG-220 is that it provides no dose estimate when highly attenuating objects such as metal are present in the body. The purpose of this research is to evaluate the accuracy of size-specific dose estimates in CT in the presence of simulated metal prostheses, using a conventional PMMA CTDI phantom at different phantom diameters (body and head) and beam energies. Titanium, cobalt-chromium and stainless steel alloy rods were used in the study. Two approaches were used, as introduced by AAPM TG-204 and TG-220, utilizing the effective diameter and the Dw calculations. From these calculations, conversion factors were derived that could be applied to the measured CTDIvol to convert it to a specific patient dose, or size-specific dose estimate (SSDE). Radiation dose in tissue (f-factor = 0.94) was measured at various chamber positions in the presence of metal, and an average weighted tissue dose (AWTD) was calculated in a manner similar to the weighted CTDI (CTDIw). In general, for the 32 cm body phantom SSDE220 provided more accurate estimates of AWTD than did SSDE204. For smaller patient sizes, represented by the 16 cm head phantom, SSDE204 was a more accurate estimate of AWTD than SSDE220.
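The two size metrics at the heart of TG-204/TG-220 can be written down compactly. A minimal sketch: Dw follows the TG-220 definition from the mean CT number and area of the patient cross-section, and the conversion factor uses the exponential fit for the 32 cm body phantom as published in TG-204 (the example inputs are hypothetical):

```python
import math

def water_equivalent_diameter(mean_hu, area_mm2):
    """TG-220 water-equivalent diameter (cm) from the mean CT number
    (HU) and area (mm^2) of the patient cross-section:
    Dw = 2 * sqrt((mean_HU/1000 + 1) * A / pi)."""
    return 2.0 * math.sqrt((mean_hu / 1000.0 + 1.0) * area_mm2 / math.pi) / 10.0

def ssde(ctdi_vol, dw_cm):
    """SSDE (mGy) from CTDIvol reported against the 32 cm phantom, via
    the exponential conversion-factor fit tabulated in TG-204."""
    f = 3.704369 * math.exp(-0.03671937 * dw_cm)
    return f * ctdi_vol
```

As a sanity check, a uniform water cylinder of radius 150 mm (mean HU = 0) returns Dw = 30 cm; a metal implant raises the mean HU and hence Dw, which is exactly where the TG-220 formulation starts to be stressed, as this study investigates.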

  2. Study on simplified estimation of J-integral under thermal loading

    International Nuclear Information System (INIS)

    Takahashi, Y.

    1993-01-01

On the other hand, a method utilizing stress intensity factor (SIF) solutions for crack surface loading, called the influence function method, has been developed for the linear elastic problem. This method is very versatile in the sense that it can be applied to estimate the SIF for arbitrary loading using the stress distribution for an identical but uncracked body, which is much easier to analyze. Utilization of this method for the nonlinear thermal stress problem would therefore be promising if successful. Sakon and Kaneko showed that purely elastic solutions give reasonably good estimates of the J-integral even for elastic-plastic cracked bodies subjected to deformation-controlled loading. Budden, on the other hand, studied procedures for estimating the J-integral based on the stress/strain distribution obtained by elastic-plastic analysis of uncracked bodies. To assess the validity of these approaches, as well as their variations, over a broader range of conditions, detailed finite element analyses of cracked and uncracked cylinders were conducted in this study

  3. Engineering estimates versus impact evaluation of energy efficiency projects: Regression discontinuity evidence from a case study

    International Nuclear Information System (INIS)

    Lang, Corey; Siler, Matthew

    2013-01-01

    Energy efficiency upgrades have been gaining widespread attention across global channels as a cost-effective approach to addressing energy challenges. The cost-effectiveness of these projects is generally predicted using engineering estimates pre-implementation, often with little ex post analysis of project success. In this paper, for a suite of energy efficiency projects, we directly compare ex ante engineering estimates of energy savings to ex post econometric estimates that use 15-min interval, building-level energy consumption data. In contrast to most prior literature, our econometric results confirm the engineering estimates, even suggesting the engineering estimates were too modest. Further, we find heterogeneous efficiency impacts by time of day, suggesting select efficiency projects can be useful in reducing peak load. - Highlights: • Regression discontinuity used to estimate energy savings from efficiency projects. • Ex post econometric estimates validate ex ante engineering estimates of energy savings. • Select efficiency projects shown to reduce peak load
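
The regression discontinuity idea in this record can be sketched in its simplest form: regress consumption on a constant, a running variable (time), and a post-retrofit indicator; the indicator's coefficient estimates the savings at the cutoff. The data below are simulated, not the paper's; the 5 kWh drop is an assumed illustrative effect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 15-min interval consumption (kWh): a 5 kWh level drop
# at the retrofit date plus noise, mimicking the study's design.
t = np.arange(200)
post = (t >= 100).astype(float)
usage = 50.0 - 5.0 * post + rng.normal(0, 0.5, t.size)

# Regression discontinuity in its simplest form: the post-retrofit
# indicator's coefficient is the estimated discontinuity at the cutoff.
X = np.column_stack([np.ones_like(t, dtype=float), t.astype(float), post])
beta, *_ = np.linalg.lstsq(X, usage, rcond=None)
savings = -beta[2]   # positive = energy saved
```

A real analysis, as in the paper, would add local polynomial trends on each side of the cutoff and controls for weather and occupancy.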

  4. Consumers’ estimation of calorie content at fast food restaurants: cross sectional observational study

    Science.gov (United States)

    Condon, Suzanne K; Kleinman, Ken; Mullen, Jewel; Linakis, Stephanie; Rifas-Shiman, Sheryl; Gillman, Matthew W

    2013-01-01

    Objective To investigate estimation of calorie (energy) content of meals from fast food restaurants in adults, adolescents, and school age children. Design Cross sectional study of repeated visits to fast food restaurant chains. Setting 89 fast food restaurants in four cities in New England, United States: McDonald’s, Burger King, Subway, Wendy’s, KFC, Dunkin’ Donuts. Participants 1877 adults and 330 school age children visiting restaurants at dinnertime (evening meal) in 2010 and 2011; 1178 adolescents visiting restaurants after school or at lunchtime in 2010 and 2011. Main outcome measure Estimated calorie content of purchased meals. Results Among adults, adolescents, and school age children, the mean actual calorie content of meals was 836 calories (SD 465), 756 calories (SD 455), and 733 calories (SD 359), respectively. A calorie is equivalent to 4.18 kJ. Compared with the actual figures, participants underestimated calorie content by a mean of 175 calories (95% confidence interval 145 to 205), 259 calories (227 to 291), and 175 calories (108 to 242), respectively. In multivariable linear regression models, underestimation of calorie content increased substantially as the actual meal calorie content increased. Adults and adolescents eating at Subway estimated 20% and 25% lower calorie content than McDonald’s diners (relative change 0.80, 95% confidence interval 0.66 to 0.96; 0.75, 0.57 to 0.99). Conclusions People eating at fast food restaurants underestimate the calorie content of meals, especially large meals. Education of consumers through calorie menu labeling and other outreach efforts might reduce the large degree of underestimation. PMID:23704170

  5. Estimated cost of asthma in outpatient treatment: a real-world study

    Science.gov (United States)

    Costa, Eduardo; Caetano, Rosangela; Werneck, Guilherme Loureiro; Bregman, Maurício; Araújo, Denizar Vianna; Rufino, Rogério

    2018-01-01

    ABSTRACT OBJECTIVE To estimate the cost of diagnosis and treatment of asthma. METHODS We used the perspective of society. We sequentially included for 12 months, in 2011-2012, 117 individuals over five years of age who were treated for asthma in the Pneumology and Allergy-Immunology Services of the Piquet Carneiro Polyclinic, Universidade do Estado do Rio de Janeiro. All of them were interviewed twice with a six-month interval for data collection, covering 12 months. The cost units were identified and valued according to defined methods. We carried out a sensitivity analysis and applied statistical methods with a significance level of 5% for cost comparisons between subgroups. RESULTS The study consisted of 108 patients, 73.8% of whom were women. Median age was 49.5 years. Rhinitis was present in 83.3% of the individuals, and more than half were overweight or obese. Mean family income was U$915.90/month (SD = 879.12). Most workers and students had absenteeism related to asthma. Total annual mean cost was U$1,291.20/patient (SD = 1,298.57). The cost related to isolated asthma was U$1,155.43/patient-year (SD = 1,305.58). Obese, severe, and uncontrolled asthmatic patients had higher costs than non-obese, non-severe, and controlled asthmatics, respectively. Severity and control level were independently associated with higher cost (p = 0.001 and p < 0.001, respectively). The direct cost accounted for 82.3% of the estimated total cost. The cost of medications for asthma accounted for 62.2% of the direct costs of asthma. CONCLUSIONS Asthma medications, environmental control measures, and long-term health leaves had the greatest potential impact on total cost variation. The results are an estimate of the cost of treating asthma at a secondary level in the Brazilian Unified Health System, assuming that the treatment used represents the ideal approach to the disease. PMID:29641652

  6. Simultaneous estimation of Poisson's ratio and Young's modulus using a single indentation: a finite element study

    International Nuclear Information System (INIS)

    Zheng, Y P; Choi, A P C; Ling, H Y; Huang, Y P

    2009-01-01

    Indentation is commonly used to determine the mechanical properties of different kinds of biological tissues and engineering materials. With the force–deformation data obtained from an indentation test, Young's modulus of the tissue can be calculated using a linear elastic indentation model with a known Poisson's ratio. A novel method for simultaneous estimation of Young's modulus and Poisson's ratio of the tissue using a single indentation was proposed in this study. Finite element (FE) analysis using 3D models was first used to establish the relationship between Poisson's ratio and the deformation-dependent indentation stiffness for different aspect ratios (indentor radius/tissue original thickness) in the indentation test. From the FE results, it was found that the deformation-dependent indentation stiffness increased linearly with the deformation. Poisson's ratio could be extracted based on the deformation-dependent indentation stiffness obtained from the force–deformation data. Young's modulus was then further calculated with the estimated Poisson's ratio. The feasibility of this method was demonstrated by using indentation models with different material properties in the FE analysis. The numerical results showed that the percentage errors of the estimated Poisson's ratios and the corresponding Young's moduli ranged from −1.7% to −3.2% and 3.0% to 7.2%, respectively, for aspect ratios (indentor radius/tissue thickness) larger than 1. It is expected that this novel method can potentially be used for quantitative assessment of various kinds of engineering materials and biological tissues, such as articular cartilage

  7. A comparative study of satellite estimation for solar insolation in Albania with ground measurements

    International Nuclear Information System (INIS)

    Mitrushi, Driada; Berberi, Pëllumb; Muda, Valbona; Buzra, Urim; Bërdufi, Irma; Topçiu, Daniela

    2016-01-01

    The main objective of this study is to compare data provided by the NASA database with available ground data for regions covered by the national meteorological network. NASA estimates that its measurements of average daily solar radiation have a root-mean-square deviation (RMSD) of 35 W/m² (roughly 20% inaccuracy). Unfortunately, valid data from meteorological stations for regions of interest are quite rare in Albania. In these cases, use of the NASA solar radiation database would be a satisfactory solution for different case studies. Using a statistical method allows one to determine the most probable margins between the two sources of data. Comparison of mean insolation data provided by NASA with ground data on mean insolation provided by meteorological stations shows that ground data for mean insolation are, in all cases, underestimated compared with the data provided by the NASA database. The conversion factor is 1.149.
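
The comparison described above reduces to two numbers: a multiplicative bias (the conversion factor between the two data sources) and the RMSD between them. A minimal sketch with hypothetical monthly values, constructed so that the ground series is low by exactly the paper's reported factor:

```python
import numpy as np

# Hypothetical monthly mean insolation (kWh/m^2/day): ground stations
# read systematically low relative to the NASA database, as in the study.
nasa   = np.array([2.1, 2.9, 3.9, 4.9, 5.9, 6.5, 6.8, 6.1, 5.0, 3.7, 2.5, 1.9])
ground = nasa / 1.149                 # illustrative, using the paper's factor

factor = np.mean(nasa) / np.mean(ground)       # multiplicative bias
rmsd = np.sqrt(np.mean((nasa - ground) ** 2))  # root-mean-square deviation
```

With real station data the factor would be estimated, not imposed; the RMSD then quantifies the residual scatter after correcting the bias.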

  8. The Role of Organisational Phenomena in Software Cost Estimation: A Case Study of Supporting and Hindering Factors

    Directory of Open Access Journals (Sweden)

    Jurka Rahikkala

    2018-01-01

    Despite the fact that many researchers and practitioners agree that organisational issues are equally as important as technical issues from the software cost estimation (SCE) success point of view, most of the research focus has been put on the development of methods, whereas organisational factors have received surprisingly little academic scrutiny. This study aims to identify organisational factors that either support or hinder meaningful SCE, identifying their impact on estimation success. Top management’s role is specifically addressed. The study takes a qualitative and explorative case-study-based approach. In total, 18 semi-structured interviews aided the study of three projects in three organisations. Hence, the transferability of the results is limited. The results suggest that the role of top management is important in creating prerequisites for meaningful estimation, but their day-to-day participation is not required for successful estimation. Top management may also induce undesired distortion in estimation. Estimation maturity and estimation success seem to have an interrelationship with software process maturity, but there seem to be no significant individual organisational factors which alone would make estimation successful. Our results validate several distortions and biases reported in previous studies, and show that the SCE research focus has remained on methodologies and technical issues.

  9. Experimental design for estimating parameters of rate-limited mass transfer: Analysis of stream tracer studies

    Science.gov (United States)

    Wagner, Brian J.; Harvey, Judson W.

    1997-01-01

    Tracer experiments are valuable tools for analyzing the transport characteristics of streams and their interactions with shallow groundwater. The focus of this work is the design of tracer studies in high-gradient stream systems subject to advection, dispersion, groundwater inflow, and exchange between the active channel and zones in surface or subsurface water where flow is stagnant or slow moving. We present a methodology for (1) evaluating and comparing alternative stream tracer experiment designs and (2) identifying those combinations of stream transport properties that pose limitations to parameter estimation and therefore a challenge to tracer test design. The methodology uses the concept of global parameter uncertainty analysis, which couples solute transport simulation with parameter uncertainty analysis in a Monte Carlo framework. Two general conclusions resulted from this work. First, the solute injection and sampling strategy has an important effect on the reliability of transport parameter estimates. We found that constant injection with sampling through concentration rise, plateau, and fall provided considerably more reliable parameter estimates than a pulse injection across the spectrum of transport scenarios likely encountered in high-gradient streams. Second, for a given tracer test design, the uncertainties in mass transfer and storage-zone parameter estimates are strongly dependent on the experimental Damkohler number, DaI, which is a dimensionless combination of the rates of exchange between the stream and storage zones, the stream-water velocity, and the stream reach length of the experiment. Parameter uncertainties are lowest at DaI values on the order of 1.0. When DaI values are much less than 1.0 (owing to high velocity, long exchange timescale, and/or short reach length), parameter uncertainties are high because only a small amount of tracer interacts with storage zones in the reach. For the opposite conditions (DaI ≫ 1.0), solute
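
The experimental Damkohler number that governs parameter uncertainty in this record is commonly written as DaI = α(1 + A/As)L/u, combining the stream–storage exchange rate, the channel and storage-zone cross-sectional areas, the reach length, and the stream velocity. A sketch with hypothetical values chosen to land near the optimum DaI ≈ 1 (the formula and numbers here are illustrative assumptions, not taken from the abstract):

```python
def damkohler(alpha, A, As, L, u):
    """Experimental Damkohler number DaI = alpha * (1 + A/As) * L / u.

    alpha: stream-storage exchange rate (1/s), A: channel cross-section (m^2),
    As: storage-zone cross-section (m^2), L: reach length (m), u: velocity (m/s).
    """
    return alpha * (1.0 + A / As) * L / u

# Hypothetical reach: values chosen so DaI is near 1, where the study
# finds storage-zone parameter uncertainties are lowest.
da = damkohler(alpha=5e-5, A=0.4, As=0.08, L=150.0, u=0.05)
```

In design terms, a too-short reach or too-fast stream drives DaI well below 1 (little tracer interacts with storage), while the opposite drives it far above 1; adjusting L is the easiest lever.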

  10. Estimation of Geographically Weighted Regression Case Study on Wet Land Paddy Productivities in Tulungagung Regency

    Directory of Open Access Journals (Sweden)

    Danang Ariyanto

    2017-11-01

    Regression is a method connecting an independent variable and a dependent variable, with estimated parameters as an output. The principal problem in this method is its application to spatial data. The Geographically Weighted Regression (GWR) method is used to solve this problem. GWR is a regression technique that extends the traditional regression framework by allowing the estimation of local rather than global parameters. In other words, GWR runs a regression for each location, instead of a sole regression for the entire study area. The purpose of this research is to analyze the factors influencing wet land paddy productivities in Tulungagung Regency. The method used in this research is GWR with a cross-validation bandwidth, weighted by an adaptive Gaussian kernel function. This research uses four variables which are presumed to affect the wet land paddy productivities: the rate of rainfall (X1), the average cost of fertilizer per hectare (X2), the average cost of pesticides per hectare (X3), and the allocation of subsidized NPK fertilizer of the food crops sub-sector (X4). Based on the results, X1, X2, X3 and X4 each have a different effect in each district. So, to improve the productivity of wet land paddy in Tulungagung Regency, a special policy based on the GWR model in each district is required.
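
The core of GWR is one weighted least-squares fit per location, with weights from a kernel of distance to that location. A minimal sketch of a single local fit with a (fixed, not adaptive) Gaussian kernel on simulated data; all values are hypothetical:

```python
import numpy as np

def gwr_local_fit(X, y, coords, point, bandwidth):
    """One local regression of Geographically Weighted Regression.

    Weights observations by a Gaussian kernel of distance to `point`,
    then solves weighted least squares. Full GWR repeats this at every
    location, giving spatially varying coefficients.
    """
    d = np.linalg.norm(coords - point, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(50, 2))        # hypothetical locations
x1 = rng.normal(size=50)                          # one covariate, e.g. rainfall
X = np.column_stack([np.ones(50), x1])
y = 2.0 + 3.0 * x1 + rng.normal(0, 0.1, 50)       # spatially constant truth
beta = gwr_local_fit(X, y, coords, point=np.array([5.0, 5.0]), bandwidth=3.0)
```

An adaptive kernel, as used in the study, would instead set the bandwidth from the distance to the k-th nearest neighbour, so dense districts get narrow kernels and sparse ones wide kernels.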

  11. Study on Spectrum Estimation in Biophoton Emission Signal Analysis of Wheat Varieties

    Directory of Open Access Journals (Sweden)

    Yitao Liang

    2014-01-01

    The photon emission signal in the visible range (380 nm–630 nm) was measured from various wheat kernels by means of a low-noise photomultiplier system. To study the features of the photon emission signal, the spectrum estimation method for the photon emission signal is described for the first time. The biophoton emission signal, belonging to four varieties of wheat, is analyzed in the time domain and the frequency domain. It is shown that the intensity of the biophoton emission signal for the four varieties of wheat kernels is relatively weak and changes dramatically over time. The mean and mean square value are clearly different across the four varieties; their ranges were 3.7837 and 74.8819, respectively. The difference in variance is not significant; its range is 1.1764. The results of power spectrum estimation indicate that the biophoton emission signal is a low-frequency signal, with its power spectrum mostly distributed at frequencies below 0.1 Hz. Three parameters, namely spectral edge frequency, spectral gravity frequency, and power spectral entropy, are then adopted to explain the features of the kernels’ spontaneous biophoton emission signal. It is shown that the parameters of the spontaneous biophoton emission signal for the different varieties of wheat are similar.
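
The three spectral parameters named in this record can all be computed from a normalized power spectrum. A minimal sketch using a plain periodogram and a synthetic low-frequency test signal (the signal, sampling rate, and 90% edge fraction are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def spectral_features(signal, fs, edge_fraction=0.9):
    """Spectral edge frequency, spectral gravity (centroid) frequency,
    and power spectral entropy from a simple periodogram."""
    n = signal.size
    psd = np.abs(np.fft.rfft(signal - signal.mean())) ** 2 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    p = psd / psd.sum()                                # normalized spectrum
    edge = freqs[np.searchsorted(np.cumsum(p), edge_fraction)]
    gravity = np.sum(freqs * p)                        # power-weighted centroid
    entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))     # spectral entropy
    return edge, gravity, entropy

fs = 1.0                                   # slow sampling, as for photon counts
t = np.arange(1024) / fs
sig = np.sin(2 * np.pi * 0.05 * t)         # dominant power below 0.1 Hz
edge, gravity, entropy = spectral_features(sig, fs)
```

For a signal whose power sits below 0.1 Hz, as reported in the study, both the edge and gravity frequencies fall below 0.1 Hz, and a more concentrated spectrum yields a lower entropy.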

  12. Can results from animal studies be used to estimate dose or low dose effects in humans

    International Nuclear Information System (INIS)

    Thomas, J.M.; Eberhardt, L.L.

    1981-01-01

    A method has been devised to extrapolate biological equilibrium levels between animal species and subsequently to humans. Our initial premise was based on the observation that radionuclide retention is normally a function of metabolism so that direct or indirect measures could be described by a power law based on body weights of test animal species. However, we found that such interspecies comparisons ought to be based on the coefficient of the power equation rather than on the exponential parameter. The method is illustrated using retention data obtained from five non-ruminant species (including humans) that were fed radionuclides with different properties. It appears that biological equilibrium level for radionuclides in man can be estimated using data from mice, rats, and dogs. The need to extrapolate low-dose effects data obtained from small animals (usually rodents) to humans is not unique to radiation dosimetry or radiation protection problems. Therefore, some quantitative problems connected with estimating low-dose effects from other disciplines have been reviewed, both because of the concern about effects induced by the radionuclide moiety of a radiopharmaceutical and those of the nonradioactive component. The possibility of extrapolating low-dose effects calculated from animal studies to human is discussed
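
The power-law relationship underlying the extrapolation above, y = a·W^b on body weight W, is usually fitted in log-log space, after which the coefficient a (which the study argues should carry the interspecies comparison) and the exponent b are read off directly. A sketch on noiseless synthetic data; the weights and parameters are hypothetical:

```python
import numpy as np

# Hypothetical equilibrium levels vs body weight for four species,
# following a power law y = a * W**b as the study assumes.
W = np.array([0.03, 0.3, 12.0, 70.0])   # mouse, rat, dog, human (kg)
y = 2.5 * W ** 0.75                     # assumed a = 2.5, b = 0.75

# Fit the power law in log-log space: log y = log a + b * log W.
b, log_a = np.polyfit(np.log(W), np.log(y), 1)
a = np.exp(log_a)
```

With real retention data the fit would be noisy, and the paper's point is that uncertainty in b distorts cross-species predictions far more than uncertainty in a.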

  13. Estimation and Control in Agile Methods for Software Development: a Case Study

    Directory of Open Access Journals (Sweden)

    Mitre-Hernández Hugo A.

    2014-07-01

    The development of software (SW) using agile methods is growing due to the productivity associated with these methodologies, in addition to the flexibility shown in small teams. However, these methods have clear weaknesses in software development cost estimation and management, and project managers do not have enough evidence to verify the budget spending on a project due to the poor documentation generated and the lack of monitoring of resource spending. A proposal for estimation and cost control in agile methods is presented to address these shortcomings. To this end, a case study was conducted in an agile software development company using the proposal for Software as a Service (SaaS) and Web application projects. The results were that the proposal generates a high degree of evidence for project managers, but it has shortcomings in the administration of that evidence for control and decision making, which led to the definition of a decision-making process to be coupled with the measurement proposal.

  14. A Study on the Estimation of the Scale Factor for Precise Point Positioning

    Science.gov (United States)

    Erdogan, Bahattin; Kayacik, Orhan

    2017-04-01

    The Precise Point Positioning (PPP) technique is one of the most important subjects in Geomatics Engineering. The PPP technique needs only one GNSS receiver, and users have preferred it to the traditional relative positioning technique for several applications. Scientific software has been used for PPP solutions, and the software may underestimate the formal errors of the estimated coordinates. The formal errors have a major effect on statistical interpretation. The variance-covariance (VCV) matrix derived from GNSS processing software plays an important role in deformation analysis, and scientists sometimes need to scale the VCV matrix. In this study, 10 continuously operating reference stations were considered over 11 days in 2014. All points were analyzed with the Gipsy-OASIS v6.4 scientific software. The solutions were derived for different session durations (2, 4, 6, 8, 12 and 24 hours) to obtain the repeatability of the coordinates, and analyses were carried out in order to estimate a scale factor for the Gipsy-OASIS v6.4 PPP results. According to the first results, scale factors increase slightly with session duration. Keywords: Precise Point Positioning, Gipsy-OASIS v6.4, Variance-Covariance Matrix, Scale Factor
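
One simple way to obtain such a scale factor, consistent with the idea above, is to compare the empirical coordinate repeatability (scatter of repeated daily solutions) with the formal sigma reported by the software; their ratio scales the VCV matrix. A sketch with hypothetical daily solutions (the numbers are invented, not the study's results):

```python
import numpy as np

# Hypothetical daily Up-coordinate solutions (mm) for one station, and the
# formal sigmas reported by the processing software for the same 11 days.
coords = np.array([3.2, 1.1, -0.8, 2.5, 0.3, -1.9, 1.4, 0.6, -0.2, 2.0, -1.0])
formal_sigma = np.full(coords.size, 0.5)    # software-reported, optimistic

# If the software underestimates errors, the repeatability exceeds the
# formal sigma; the ratio gives a variance scale for the VCV matrix.
repeatability = coords.std(ddof=1)
scale = repeatability / formal_sigma.mean()
scaled_vcv = scale ** 2 * np.diag(formal_sigma ** 2)
```

A scale factor above 1 means the formal errors were too optimistic; computing it per session duration reproduces the study's observation that the factor changes with session length.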

  15. Reliability of different sampling densities for estimating and mapping lichen diversity in biomonitoring studies

    International Nuclear Information System (INIS)

    Ferretti, M.; Brambilla, E.; Brunialti, G.; Fornasier, F.; Mazzali, C.; Giordani, P.; Nimis, P.L.

    2004-01-01

    Sampling requirements related to lichen biomonitoring include optimal sampling density for obtaining precise and unbiased estimates of population parameters and maps of known reliability. Two available datasets on a sub-national scale in Italy were used to determine a cost-effective sampling density to be adopted in medium-to-large-scale biomonitoring studies. As expected, the relative error in the mean Lichen Biodiversity (Italian acronym: BL) values and the error associated with the interpolation of BL values for (unmeasured) grid cells increased as the sampling density decreased. However, the increase in size of the error was not linear and even a considerable reduction (up to 50%) in the original sampling effort led to a far smaller increase in errors in the mean estimates (<6%) and in mapping (<18%) as compared with the original sampling densities. A reduction in the sampling effort can result in considerable savings of resources, which can then be used for a more detailed investigation of potentially problematic areas. It is, however, necessary to decide the acceptable level of precision at the design stage of the investigation, so as to select the proper sampling density. - An acceptable level of precision must be decided before determining a sampling design

  16. Methods for Estimating Environmental Effects and Constraints on NexGen: High Density Case Study

    Science.gov (United States)

    Augustine, S.; Ermatinger, C.; Graham, M.; Thompson, T.

    2010-01-01

    This document provides a summary of the current methods developed by Metron Aviation for the estimate of environmental effects and constraints on the Next Generation Air Transportation System (NextGen). This body of work incorporates many of the key elements necessary to achieve such an estimate. Each section contains the background and motivation for the technical elements of the work, a description of the methods used, and possible next steps. The current methods described in this document were selected in an attempt to provide a good balance between accuracy and fairly rapid turnaround times to best advance Joint Planning and Development Office (JPDO) System Modeling and Analysis Division (SMAD) objectives while also supporting the needs of the JPDO Environmental Working Group (EWG). In particular this document describes methods applied to support the High Density (HD) Case Study performed during the spring of 2008. A reference day (in 2006) is modeled to describe current system capabilities while the future demand is applied to multiple alternatives to analyze system performance. The major variables in the alternatives are operational/procedural capabilities for airport, terminal, and en route airspace along with projected improvements to airframe, engine and navigational equipment.

  17. Reliability of third molar development for age estimation in Gujarati population: A comparative study.

    Science.gov (United States)

    Gandhi, Neha; Jain, Sandeep; Kumar, Manish; Rupakar, Pratik; Choyal, Kanaram; Prajapati, Seema

    2015-01-01

    Age assessment may be a crucial step in postmortem profiling leading to confirmative identification. In children, Demirjian's method, based on eight developmental stages, was developed to determine maturity scores as a function of age, and polynomial functions to determine age as a function of score. The aim of this study was to evaluate the reliability of age estimation using Demirjian's eight-teeth method, following the French maturity scores and an India-specific formula, from the developmental stages of the third molar on orthopantomograms. Dental panoramic tomograms from 30 subjects each of known chronological age and sex were collected and evaluated according to Demirjian's criteria. Age calculations were performed using Demirjian's formula and the Indian formula. The statistical analysis used Chi-square and ANOVA tests, and the P values obtained were statistically significant. There was an average underestimation of age with both the Indian and Demirjian's formulas. The mean absolute error was lower using the Indian formula; hence it can be applied for age estimation in the present Gujarati population. Also, females were ahead of males in achieving dental maturity; thus completion of dental development is attained earlier in females. Greater accuracy can be obtained if population-specific formulas considering ethnic and environmental variation are derived by performing regression analysis.

  18. Desorption isotherms of cementitious materials: study of an accelerated protocol and estimation of RVE

    International Nuclear Information System (INIS)

    Wu, Qier

    2014-01-01

    In the framework of French radioactive waste management and storage, the durability evaluation and prediction of concrete structures requires knowledge of the desorption isotherm of concrete. The aim of the present study is to develop an accelerated experimental method to obtain the desorption isotherm of cementitious materials more quickly, and to estimate the Representative Volume Element (RVE) size related to the desorption isotherm of concrete. In order to ensure that the experimental results can be statistically considered representative, a large number of sliced samples of cementitious materials with three different thicknesses (1 mm, 2 mm and 3 mm) were de-saturated. The effect of slice thickness and saturation condition on the mass variation kinetics and the desorption isotherms is analyzed. The influence of the aggregate distribution on the water content and the water saturation degree is also analyzed. A method based on statistical analysis of water content and water saturation degree is proposed to estimate the RVE for the water desorption experiment of concrete. The evolution of shrinkage with relative humidity is also followed for each material during the water desorption experiment. A protocol of cycles of rapid desaturation and re-saturation is applied and shows the existence of hysteresis between desorption and adsorption. (author)

  19. Can results from animal studies be used to estimate dose or low dose effects in humans

    International Nuclear Information System (INIS)

    Thomas, J.M.; Eberhardt, L.L.

    1981-01-01

    We have devised a method to extrapolate biological equilibrium levels between animal species and subsequently to humans. Our initial premise was based on the observation that radionuclide retention is normally a function of metabolism so that direct or indirect measures could be described by a power law based on body weights of test animal species. However, we found that such interspecies comparisons ought to be based on the coefficient of the power equation rather than on the exponential parameter. The method is illustrated using retention data obtained from five non-ruminant species (including humans) that were fed radionuclides with different properties. It appears that biological equilibrium level for radionuclides in man can be estimated using data from mice, rats and dogs. The need to extrapolate low-dose effects data obtained from small animals (usually rodents) to humans is not unique to radiation dosimetry or radiation protection problems. Therefore, researchers have reviewed some quantitative problems connected with estimating low-dose effects from other disciplines, both because of the concern about effects induced by the radionuclide moiety of a radiopharmaceutical and those of the nonradioactive component. The possibility of extrapolating low-dose effects calculated from animal studies to humans is discussed

  20. A pilot study of a novel smartphone application for the estimation of sleep onset.

    Science.gov (United States)

    Scott, Hannah; Lack, Leon; Lovato, Nicole

    2018-02-01

    The aim of the study was to investigate the accuracy of Sleep On Cue: a novel iPhone application that uses behavioural responses to auditory stimuli to estimate sleep onset. Twelve young adults underwent polysomnography recording while simultaneously using Sleep On Cue. Participants completed as many sleep-onset trials as possible within a 2-h period following their normal bedtime. On each trial, participants were awoken by the app following behavioural sleep onset and then, after a short break of wakefulness, commenced the next trial. There was a high degree of correspondence between polysomnography-determined sleep onset and Sleep On Cue behavioural sleep onset (r = 0.79). Sleep On Cue overestimated sleep-onset latency by 3.17 min (SD = 3.04). When polysomnography sleep onset was defined as the beginning of N2 sleep, the discrepancy was reduced considerably (M = 0.81, SD = 1.96). The discrepancy between polysomnography and Sleep On Cue varied between individuals, which was potentially due to variations in auditory stimulus intensity. Further research is required to determine whether modifications to the stimulus intensity and behavioural response could improve the accuracy of the app. Nonetheless, Sleep On Cue is a viable option for estimating sleep onset and may be used to administer Intensive Sleep Retraining or facilitate power naps in the home environment. © 2017 European Sleep Research Society.

  1. Studies on the Estimation of Stature from Hand and Foot Length of an Individual

    Directory of Open Access Journals (Sweden)

    O. S. Saka

    2016-10-01

    Background: Studies on the estimation of stature from the hand and foot length of an individual are essential in personal identification. Aim and Objectives: This study aims to find the correlation of stature with hand and foot dimensions in both sexes, and to compare the genders, among individuals at Lautech Staff College, Ogbomoso, and the College of Health Sciences, Obafemi Awolowo University, Ile-Ife, Nigeria. Material and Methods: A sample of 140 students and staff (70 male and 70 female) of Lautech Staff College, Ogbomoso, and the College of Health Sciences, Obafemi Awolowo University, Ile-Ife, aged 16-35 years, was considered, and measurements were taken for each of the parameters. Gender differences for the two parameters were determined using Student's t-test. Pearson's correlation coefficient (r) was used to examine the relationship between the two anthropometric parameters and standing height (stature). All measurements were taken using standard anthropometric instruments and standard anthropometric techniques. Results: The findings of the study indicated that the male mean values were not significantly different from the female mean values in any of the measured parameters. The study showed a significant (p<0.001) positive correlation of stature with hand length and foot length. Hand and foot length provide an accurate and reliable means of establishing the height of an individual. Conclusion: This study will be useful for forensic scientists and anthropologists, as well as anatomists, in ascertaining medico-legal cases
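
The analysis in this record, Pearson correlation plus a regression formula for estimating stature, can be sketched in a few lines. The measurements below are invented for illustration only:

```python
import numpy as np

# Hypothetical measurements (cm): stature against hand length.
hand    = np.array([17.5, 18.2, 19.0, 19.6, 20.3, 21.1, 18.8, 20.0])
stature = np.array([158.0, 162.5, 167.0, 170.5, 174.0, 179.5, 165.0, 172.0])

# Pearson's r quantifies the association; a least-squares line gives the
# regression formula used to estimate stature from hand length.
r = np.corrcoef(hand, stature)[0, 1]
slope, intercept = np.polyfit(hand, stature, 1)
predicted = slope * 19.0 + intercept   # estimated stature for a 19.0 cm hand
```

A forensic application would fit such a line per sex (given the gender differences the study tests for) and report the standard error of estimate alongside the formula.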

  2. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    Science.gov (United States)

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our objective was to guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so, balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
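
The estimator above is N = M/P; a standard delta-method approximation of its variance, with the survey variance of P inflated by a design effect, shows directly why precision collapses when P is small. A sketch with hypothetical inputs (M, P, n, and the design effect are all invented, not the Harare figures):

```python
import math

# Multiplier method: population size N = M / P, where M unique objects
# were distributed and a proportion P of survey respondents report one.
M = 10000          # unique objects distributed (hypothetical)
p_hat = 0.4        # proportion reporting receipt in the RDS survey
n = 500            # survey sample size
deff = 2.0         # assumed design effect of respondent-driven sampling

N_hat = M / p_hat

# Delta method: Var(M/P) ~ M^2 * Var(P) / P^4, with Var(P) the
# design-effect-inflated binomial variance of the survey proportion.
var_p = deff * p_hat * (1 - p_hat) / n
se_N = math.sqrt(M ** 2 * var_p / p_hat ** 4)
ci = (N_hat - 1.96 * se_N, N_hat + 1.96 * se_N)
```

The P^4 in the denominator is the key design insight: halving P roughly quadruples the variance of the size estimate, which is why the authors advise distributing more objects or lengthening the reference period to raise P.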

  3. Pilot Study: Estimation of Stroke Volume and Cardiac Output from Pulse Wave Velocity.

    Directory of Open Access Journals (Sweden)

    Yurie Obata

Full Text Available Transesophageal echocardiography (TEE) is increasingly replacing thermodilution pulmonary artery catheters to assess hemodynamics in patients at high risk for cardiovascular morbidity. However, one of the drawbacks of TEE compared to pulmonary artery catheters is the inability to measure real-time stroke volume (SV) and cardiac output (CO) continuously. The aim of the present proof-of-concept study was to validate a novel method of SV estimation, based on pulse wave velocity (PWV), in patients undergoing cardiac surgery. This is a retrospective observational study. We measured pulse transit time by superimposing the radial arterial waveform onto the continuous wave Doppler waveform of the left ventricular outflow tract, and calculated SV (SVPWV) using the transformed Bramwell-Hill equation. The SV measured by TEE (SVTEE) was used as a reference. A total of 190 paired SV measurements were obtained from 28 patients. A strong correlation was observed between SVPWV and SVTEE, with a coefficient of determination (R2) of 0.71. The mean difference between the two (bias) was 3.70 ml, with limits of agreement ranging from -20.33 to 27.73 ml and a percentage error of 27.4% based on a Bland-Altman analysis. The concordance rate of the two methods was 85.0% based on a four-quadrant plot. The angular concordance rate was 85.9%, with radial limits of agreement (the radial sector that contained 95% of the data points) of ±41.5 degrees based on a polar plot. PWV-based SV estimation yields reasonable agreement with SV measured by TEE. Further studies are required to assess its utility in different clinical situations.
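The agreement statistics quoted above (bias, 95% limits of agreement, percentage error) follow the standard Bland-Altman recipe; a minimal sketch, with hypothetical paired measurements rather than the study's SV data:

```python
import numpy as np

# Minimal Bland-Altman sketch: bias, 95% limits of agreement, and percentage
# error (1.96 SD of the differences over the mean of the method means).
# The paired values below are invented, not the SVPWV/SVTEE data.
def bland_altman(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()                               # mean difference between methods
    sd = diff.std(ddof=1)                            # sample SD of the differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)       # 95% limits of agreement
    pct_error = 100.0 * 1.96 * sd / np.mean((a + b) / 2.0)
    return bias, loa, pct_error

bias, loa, pct_error = bland_altman([52, 61, 70, 79], [50, 63, 69, 82])
```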

  4. Estimation of natural age of menopause in Iranian women: A meta-analysis study

    Directory of Open Access Journals (Sweden)

    Abdolreza Rajaeefard

    2011-10-01

Full Text Available Introduction: The mean age of menopause has been reported to be 51 years worldwide, and given the increase in life expectancy in many countries, more than a third of a woman's lifetime is spent in the menopause period. The importance of menopause is due to its relationship with various diseases and quality of life. The present study was conducted to estimate the average natural age of menopause in women based on a meta-analysis. Material and Methods: In a meta-analysis of all existing articles on the natural age of menopause in Iran, 21 articles were selected based on inclusion criteria. Begg and Egger tests were used for publication bias and the Cochrane Q test to determine heterogeneity among samples. The estimate of the mean was calculated based on a random-effects model in Stata 11 software. Results: The publication bias assumption was rejected by the Begg and Egger tests, with significance values of 0.174 and 0.446, respectively. There was heterogeneity among samples (Q = 4626.3, df = 20, P < 0.001), so based on the random-effects model the mean age of menopause was calculated as 48.183 years (95% CI = 47.457-48.91). Conclusion: The average age of natural menopause in Iranian women is comparable to some parts of the Middle East, but is lower than in developed countries and the world mean. Because of the importance of this period in women, educational programs seem to be necessary.
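A random-effects pooled mean of the kind computed here in Stata is conventionally obtained with the DerSimonian-Laird estimator; a minimal NumPy sketch with hypothetical study-level inputs, not the 21 Iranian studies:

```python
import numpy as np

# DerSimonian-Laird random-effects pooled mean: a standard way to combine
# study-level means under heterogeneity. Inputs are hypothetical study means
# and standard errors for illustration only.
def dersimonian_laird(means, ses):
    means, ses = np.asarray(means, float), np.asarray(ses, float)
    w = 1.0 / ses ** 2                                # fixed-effect weights
    mu_fe = np.sum(w * means) / np.sum(w)
    Q = np.sum(w * (means - mu_fe) ** 2)              # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(means) - 1)) / c)       # between-study variance estimate
    w_re = 1.0 / (ses ** 2 + tau2)                    # random-effects weights
    mu_re = np.sum(w_re * means) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    return mu_re, (mu_re - 1.96 * se_re, mu_re + 1.96 * se_re), Q, tau2

mu, ci, Q, tau2 = dersimonian_laird([47.5, 48.2, 49.0], [0.3, 0.4, 0.5])
```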

  5. Small area estimation (SAE) model: Case study of poverty in West Java Province

    Science.gov (United States)

    Suhartini, Titin; Sadik, Kusman; Indahwati

    2016-02-01

This paper compares direct estimation with indirect/small area estimation (SAE) models. Model selection included resolving multicollinearity among the auxiliary variables, either by choosing only non-collinear variables or by implementing principal components (PC). The parameters of concern were the area-level proportions of agricultural-venture poor households and of agricultural poor households in West Java Province. These parameters can be estimated either directly or through SAE. Direct estimation was problematic because of small sample sizes; in three areas the sample size was even zero, so direct estimation could not be performed. The proportion of agricultural-venture poor households was 19.22% and that of agricultural poor households was 46.79%. The best model for agricultural-venture poor households used only non-collinear variables, while the best model for agricultural poor households implemented PC. SAE proved a better estimator than direct estimation for both area-level proportions in West Java Province. Small area estimation thus overcomes the small-sample-size problem and delivers area-level estimates with higher accuracy and better precision than the direct estimator.

  6. Estimation of the inverse Weibull distribution based on progressively censored data: Comparative study

    International Nuclear Information System (INIS)

    Musleh, Rola M.; Helu, Amal

    2014-01-01

In this article we consider statistical inference about the unknown parameters of the inverse Weibull distribution based on progressively type-II censored data, using classical and Bayesian procedures. For the classical procedures we propose using the maximum likelihood, the least squares and the approximate maximum likelihood estimators. The Bayes estimators are obtained under both symmetric and asymmetric (Linex, general entropy and precautionary) loss functions. Since there are no explicit forms for the Bayes estimators, we propose Lindley's approximation method to compute them. A comparison between these estimators is provided using extensive simulation and three criteria, namely bias, mean squared error and Pitman nearness (PN) probability. It is concluded that the approximate Bayes estimators outperform the classical estimators most of the time. A real-life data example is provided to illustrate our proposed estimators. - Highlights: • We consider progressively type-II censored data from the inverse Weibull distribution (IW). • We derive the MLEs, approximate MLEs, LS and Bayes estimates of the scale and shape parameters of the IW. • The Bayes estimator of the shape parameter cannot be expressed in closed form. • We suggest using Lindley's approximation. • We conclude that the Bayes estimates outperform the classical methods
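For intuition about the classical estimators discussed, a complete-sample (uncensored) maximum likelihood fit of the inverse Weibull can be sketched in pure NumPy by exploiting the fact that Y = 1/X is Weibull distributed. This is a simplified illustration under arbitrary true parameters, not the progressively censored procedures of the article:

```python
import numpy as np

# Simplified illustration: complete-sample MLE for the inverse Weibull
# f(x; b, s) = (b/s) * (s/x)**(b + 1) * exp(-(s/x)**b), using the fact that
# Y = 1/X is Weibull with shape b and scale 1/s. True parameters are arbitrary.
rng = np.random.default_rng(0)
b_true, s_true = 2.0, 1.5
u = rng.uniform(size=5000)
x = s_true * (-np.log(u)) ** (-1.0 / b_true)      # inverse-CDF sampling, F(x) = exp(-(s/x)**b)

y = 1.0 / x
ln_y = np.log(y)

def g(b):
    # Profile score for the Weibull shape: its root is the MLE; g is increasing in b
    yb = y ** b
    return np.sum(yb * ln_y) / np.sum(yb) - ln_y.mean() - 1.0 / b

lo_b, hi_b = 1e-3, 50.0
for _ in range(80):                               # bisection for the shape MLE
    mid = 0.5 * (lo_b + hi_b)
    if g(mid) > 0:
        hi_b = mid
    else:
        lo_b = mid
b_hat = 0.5 * (lo_b + hi_b)
s_hat = 1.0 / (np.mean(y ** b_hat) ** (1.0 / b_hat))   # scale of X = 1 / (Weibull scale of Y)
```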

  7. Sensitivity of landscape resistance estimates based on point selection functions to scale and behavioral state: Pumas as a case study

    Science.gov (United States)

    Katherine A. Zeller; Kevin McGarigal; Paul Beier; Samuel A. Cushman; T. Winston Vickers; Walter M. Boyce

    2014-01-01

    Estimating landscape resistance to animal movement is the foundation for connectivity modeling, and resource selection functions based on point data are commonly used to empirically estimate resistance. In this study, we used GPS data points acquired at 5-min intervals from radiocollared pumas in southern California to model context-dependent point selection...

  8. The UF/NCI family of hybrid computational phantoms representing the current US population of male and female children, adolescents, and adults—application to CT dosimetry

    International Nuclear Information System (INIS)

    Geyer, Amy M; O'Reilly, Shannon; Long, Daniel J; Bolch, Wesley E; Lee, Choonsik

    2014-01-01

    Substantial increases in pediatric and adult obesity in the US have prompted a major revision to the current UF/NCI (University of Florida/National Cancer Institute) family of hybrid computational phantoms to more accurately reflect current trends in larger body morphometry. A decision was made to construct the new library in a gridded fashion by height/weight without further reference to age-dependent weight/height percentiles as these become quickly outdated. At each height/weight combination, circumferential parameters were defined and used for phantom construction. All morphometric data for the new library were taken from the CDC NHANES survey data over the time period 1999–2006, the most recent reported survey period. A subset of the phantom library was then used in a CT organ dose sensitivity study to examine the degree to which body morphometry influences the magnitude of organ doses for patients that are underweight to morbidly obese in body size. Using primary and secondary morphometric parameters, grids containing 100 adult male height/weight bins, 93 adult female height/weight bins, 85 pediatric male height/weight bins and 73 pediatric female height/weight bins were constructed. These grids served as the blueprints for construction of a comprehensive library of patient-dependent phantoms containing 351 computational phantoms. At a given phantom standing height, normalized CT organ doses were shown to linearly decrease with increasing phantom BMI for pediatric males, while curvilinear decreases in organ dose were shown with increasing phantom BMI for adult females. These results suggest that one very useful application of the phantom library would be the construction of a pre-computed dose library for CT imaging as needed for patient dose-tracking. (paper)

  9. Capturing heterogeneity: The role of a study area's extent for estimating mean throughfall

    Science.gov (United States)

    Zimmermann, Alexander; Voss, Sebastian; Metzger, Johanna Clara; Hildebrandt, Anke; Zimmermann, Beate

    2016-11-01

    The selection of an appropriate spatial extent of a sampling plot is one among several important decisions involved in planning a throughfall sampling scheme. In fact, the choice of the extent may determine whether or not a study can adequately characterize the hydrological fluxes of the studied ecosystem. Previous attempts to optimize throughfall sampling schemes focused on the selection of an appropriate sample size, support, and sampling design, while comparatively little attention has been given to the role of the extent. In this contribution, we investigated the influence of the extent on the representativeness of mean throughfall estimates for three forest ecosystems of varying stand structure. Our study is based on virtual sampling of simulated throughfall fields. We derived these fields from throughfall data sampled in a simply structured forest (young tropical forest) and two heterogeneous forests (old tropical forest, unmanaged mixed European beech forest). We then sampled the simulated throughfall fields with three common extents and various sample sizes for a range of events and for accumulated data. Our findings suggest that the size of the study area should be carefully adapted to the complexity of the system under study and to the required temporal resolution of the throughfall data (i.e. event-based versus accumulated). Generally, event-based sampling in complex structured forests (conditions that favor comparatively long autocorrelations in throughfall) requires the largest extents. For event-based sampling, the choice of an appropriate extent can be as important as using an adequate sample size.

  10. A pilot study of a simple screening technique for estimation of salivary flow.

    Science.gov (United States)

    Kanehira, Takashi; Yamaguchi, Tomotaka; Takehara, Junji; Kashiwazaki, Haruhiko; Abe, Takae; Morita, Manabu; Asano, Kouzo; Fujii, Yoshinori; Sakamoto, Wataru

    2009-09-01

    The purpose of this study was to develop a simple screening technique for estimation of salivary flow and to test the usefulness of the method for determining decreased salivary flow. A novel assay system comprising 3 spots containing 30 microg starch and 49.6 microg potassium iodide per spot on filter paper and a coloring reagent, based on the color reaction of iodine-starch and theory of paper chromatography, was designed. We investigated the relationship between resting whole salivary rates and the number of colored spots on the filter produced by 41 hospitalized subjects. A significant negative correlation was observed between the number of colored spots and the resting salivary flow rate (n = 41; r = -0.803; P bedridden and disabled elderly people.

  11. NATO Advanced Study Institute on Statistical Treatments for Estimation of Mineral and Energy Resources

    CERN Document Server

    Fabbri, A; Sinding-Larsen, R

    1988-01-01

This volume contains the edited papers prepared by lecturers and participants of the NATO Advanced Study Institute on "Statistical Treatments for Estimation of Mineral and Energy Resources" held at Il Ciocco (Lucca), Italy, June 22 - July 4, 1986. During the past twenty years, tremendous efforts have been made to acquire quantitative geoscience information from ore deposits, geochemical, geophysical and remotely-sensed measurements. In October 1981, a two-day symposium on "Quantitative Resource Evaluation" and a three-day workshop on "Interactive Systems for Multivariate Analysis and Image Processing for Resource Evaluation" were held in Ottawa, jointly sponsored by the Geological Survey of Canada, the International Association for Mathematical Geology, and the International Geological Correlation Programme. Thirty scientists from different countries in Europe and North America were invited to form a forum for the discussion of quantitative methods for mineral and energy resource assessment. Since then, not ...

  12. Pressurized water reactor monitoring. Study of detection, diagnostic and estimation methods (least error squares and filtering)

    International Nuclear Information System (INIS)

    Gillet, M.

    1986-07-01

This thesis presents a study of the surveillance of the primary coolant circuit inventory of a pressurized water reactor. A reference model is developed with a view to an automatic system ensuring detection and diagnosis in real time. The methods used for the present application are statistical tests and a method related to pattern recognition. The estimation of detected failures, difficult owing to the non-linearity of the problem, is treated by least-squares methods of the predictor or corrector type, and by filtering. It is in this framework that a new optimized method with superlinear convergence is developed, and that a segmented linearization of the model is introduced, with a view to multiple filtering

  13. Rain pH estimation based on the particulate matter pollutants and wet deposition study.

    Science.gov (United States)

    Singh, Shweta; Elumalai, Suresh Pandian; Pal, Asim Kumar

    2016-09-01

In forecasting rain pH, the changes caused by particulate matter (PM) are generally neglected. In regions of high PM concentration like Dhanbad, the role of PM in determining rain pH becomes important. The present work considers theoretical prediction of rain pH by two methods. The first method considers only acid-causing gases (ACG) like CO2, SO2 and NOx in pH estimation, whereas the second additionally accounts for the effect of PM (ACG-PM). In order to predict rain pH, site-specific deposited dust representing local PM was studied experimentally for its impact on the pH of neutral water. After incorporation of the PM correction factor, the estimated rain pH values were more representative of the observed ones: the fractional bias (FB) for the ACG-PM method was reduced to values of the order of 10^-2, from values of the order of 10^-1 for the ACG method. The study confirms neutralization of rain acidity by PM. On this account, rain pH was found in the slightly acidic to near-neutral range, despite the high sulfate flux found in rain water. Although the apparently safe range of rain pH obscures the severity of acid rain, a huge flux of acidic and other ions is still transferred to water bodies, soil and ultimately the groundwater system. Simple use of rain pH as a rain water quality measure fails to address its increased ionic composition due to interfering pollutants, and thus understates the severity of pollutants transferred from air to rain water and then to water bodies and soil. Copyright © 2016 Elsevier B.V. All rights reserved.
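Fractional bias, the agreement metric used above, is commonly defined as twice the difference of means over their sum (sign conventions vary across the literature); a small illustration with invented pH values, not the Dhanbad data:

```python
# Fractional bias, FB = 2 * (mean_pred - mean_obs) / (mean_pred + mean_obs);
# sign conventions differ, but |FB| near 0 means close agreement.
# The pH values below are hypothetical.
def fractional_bias(predicted, observed):
    mp = sum(predicted) / len(predicted)
    mo = sum(observed) / len(observed)
    return 2.0 * (mp - mo) / (mp + mo)

fb_acg = fractional_bias([5.0, 5.1, 5.2], [6.0, 6.1, 6.2])       # gases only: biased low
fb_acg_pm = fractional_bias([5.9, 6.0, 6.3], [6.0, 6.1, 6.2])    # with PM correction
```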

  14. A Spanish multicenter study to estimate the prevalence and incidence of chronic pancreatitis and its complications

    Directory of Open Access Journals (Sweden)

    J. Enrique Domínguez-Muñoz

    2014-04-01

Full Text Available Background and objective: No nationwide epidemiological study on the incidence and prevalence of chronic pancreatitis (CP) had thus far been carried out in Spain. Our goal is to estimate the prevalence and incidence of CP, as well as to determine the diagnostic and therapeutic criteria used in Spanish pancreas units. Methods: An observational, descriptive study of hospital pancreas units in Spain. CP-related epidemiology, etiology, manifestations, diagnostic tests, functional complications, and treatments were all assessed using a structured questionnaire. Overall results were estimated by weighting cases in each site. Results: Information was collected from six pancreas units with a sample frame of 1,900,751 inhabitants. Overall prevalence was 49.3 cases per 10^5 population (95% CI, 46 to 52) and incidence was 5.5 cases per 10^5 inhabitant-years (95% CI, 5.4 to 5.6). The most common etiologies were tobacco and alcoholism, which were associated with three in every four cases. The most prevalent symptoms were recurring pain (48.8%) and chronic abdominal pain (30.6%). The most widely used diagnostic methods were echoendoscopy (79.8%), CT (computerized tomography) (58.7%), and MRI (magnetic resonance imaging)/MRCP (magnetic resonance cholangiopancreatography) (55.9%). The most prevalent morphologic findings included calcifications (35%) and pseudocysts (27%). Exocrine (38.8%) and endocrine (35.2%) pancreatic insufficiency had similar frequencies. Treatments were rather heterogeneous among sites, with enzyme replacement therapy (40.7%) and insulin (30.9%) being most commonly used. Conclusions: Pancreas units amass a significant number of both prevalent and incident CP cases. Patients seen in these units share a similar typology, and differences between units are greater regarding diagnostic and therapeutic strategies.

  15. Reliability estimation of structures under stochastic loading—A case study on nuclear piping

    International Nuclear Information System (INIS)

    Hari Prasad, M.; Rami Reddy, G.; Dubey, P.N.; Srividya, A.; Verma, A.K.

    2013-01-01

Highlights: ► Structures are generally subjected to different types of loadings. ► One such loading is a random load sequence, which has been treated as stochastic fatigue loading. ► In this methodology both the stress amplitude and the number of cycles to failure are considered random variables. ► The methodology is demonstrated with a case study on nuclear piping. ► The failure probability of the piping is estimated as a function of time. - Abstract: Structures are generally subjected to different types of loading throughout their lifetime. These loads can be discrete or continuous in nature, and either stationary or non-stationary processes. Structural reliability analysis therefore considers not only random variables but also random variables that are functions of time, referred to as stochastic processes. A stochastic process can be viewed as a family of random variables. When a structure is subjected to random loading, the failure probability can be estimated from the stresses developed in the structure and the failure criteria. In practice, structures are designed with a higher factor of safety to take care of such random loads; in such cases the structure will fail only when the random loads are cyclic in nature. In traditional reliability analysis, the variation in the load is treated as a random variable, and extreme value theory is used to account for the number of occurrences of the loading. This, however, neglects the damage accumulation that takes place from one loading to the next. Hence, in this paper, a new way of dealing with these problems is discussed, using the concept of stochastic fatigue loading. The random loading considered is earthquake loading. The methodology is demonstrated with a case study on nuclear power plant piping.
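The damage-accumulation idea can be illustrated with a crude Monte Carlo sketch combining a Basquin S-N law with Miner's rule under random stress amplitudes. Every distribution and constant below is hypothetical, and this is not the paper's actual piping model:

```python
import numpy as np

# Hypothetical sketch: time-dependent failure probability under stochastic
# fatigue loading. Assumes a Basquin S-N law N_f(S) = A * S**(-m), Miner's
# rule (failure once accumulated damage >= 1), lognormal random stress
# amplitudes per loading event, and material scatter in A.
rng = np.random.default_rng(42)
m, n_sims, years = 3.0, 20000, 40
cycles_per_year = 50                                   # random load cycles per year
A = 10.0 ** rng.normal(9.0, 0.5, size=n_sims)          # random S-N intercept (material scatter)

damage = np.zeros(n_sims)
failure_year = np.full(n_sims, years + 1)              # years + 1 means "survived"
for year in range(1, years + 1):
    S = rng.lognormal(mean=np.log(60.0), sigma=0.4, size=(n_sims, cycles_per_year))
    damage += np.sum(S ** m, axis=1) / A               # Miner increment: sum_i n_i / N_f(S_i)
    newly_failed = (damage >= 1.0) & (failure_year > years)
    failure_year[newly_failed] = year

p_fail_40yr = np.mean(failure_year <= years)           # cumulative failure probability at 40 years
```

Recording the first year in which damage crosses 1 gives the failure probability as a function of time, which is the quantity the case study reports.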

  16. A Comparative Study on Fetal Heart Rates Estimated from Fetal Phonography and Cardiotocography

    Directory of Open Access Journals (Sweden)

    Emad A. Ibrahim

    2017-10-01

Full Text Available The aim of this study is to investigate whether fetal heart rates (fHR) extracted from fetal phonocardiography (fPCG) convey information similar to fHR from cardiotocography (CTG). Four-channel fPCG sensors made of low-cost (<$1) ceramic piezo vibration sensors within 3D-printed casings were used to collect abdominal phonogram signals from 20 pregnant mothers (>34 weeks of gestation). A novel multi-lag covariance matrix-based eigenvalue decomposition technique was used to separate maternal breathing, fetal heart sounds (fHS) and maternal heart sounds (mHS) from the abdominal phonogram signals. Prior to fHR estimation, the fPCG signals were denoised using a multi-resolution wavelet-based filter. The proposed source separation technique was first tested on synthetically mixed signals and then on raw abdominal phonogram signals. fHR signals extracted from fPCG were validated against simultaneously recorded CTG-based fHR recordings. The experimental results show that the fHR derived from the acquired fPCG can be used to detect periods of acceleration and deceleration, which are critical indications of the fetus' well-being. Moreover, a comparative analysis demonstrated that fHRs from CTG and fPCG signals were in good agreement (Bland-Altman mean = −0.21 BPM, ±2 SD = ±3 BPM), with statistical significance (p < 0.001) and a Spearman correlation coefficient ρ = 0.95. The study findings show that fHR estimated from fPCG could be a reliable substitute for fHR from CTG, opening up the possibility of a low-cost monitoring tool for fetal well-being.

  17. A Comparative Study on Carbohydrate Estimation: GoCARB vs. Dietitians

    Directory of Open Access Journals (Sweden)

    Maria F. Vasiloglou

    2018-06-01

Full Text Available GoCARB is a computer vision-based smartphone system designed for individuals with Type 1 Diabetes to estimate the carbohydrate (CHO) content of plated meals. We aimed to compare the accuracy of GoCARB in estimating CHO with the estimations of six experienced dietitians. GoCARB was used to estimate the CHO content of 54 Central European plated meals, each containing three different weighed food items. Ground truth was calculated using the USDA food composition database. Dietitians were asked to visually estimate the CHO content based on meal photographs. GoCARB and the dietitians achieved comparable accuracies. The mean absolute error of the dietitians was 14.9 (SD 10.12) g of CHO versus 14.8 (SD 9.73) g of CHO for GoCARB (p = 0.93). No differences were found between the estimations of dietitians and GoCARB, regardless of meal size. The larger the meal, the greater the estimation errors made by both. Moreover, the higher the CHO content of a food category, the more challenging its accurate estimation. GoCARB had difficulty in estimating rice, pasta, potatoes, and mashed potatoes, while dietitians had problems with pasta, chips, rice, and polenta. GoCARB may offer diabetic patients the option of an easy, accurate, and almost real-time estimation of the CHO content of plated meals, and thus enhance diabetes self-management.

  18. Studies on tender wheatgrass: estimation of elemental content, bioaccessibility of essential elements and antioxidant activity

    International Nuclear Information System (INIS)

    Reddy, A.V.R.; Acharya, R.; Nair, A.G.C.; Kulkarni, S.D.; Rajurkar, N.S.

    2008-08-01

Tender wheatgrass is consumed by human beings either as juice or whole, owing to its antioxidant potential and medicinal value. Systematic studies were carried out to (i) estimate elemental profiles as a function of growth period and conditions, the bioaccessibility of different elements and the antioxidant potential of tender wheatgrass, (ii) determine the optimum growth period for obtaining maximum benefit and (iii) examine the possible correlation between antioxidant potential and mineral content. Wheatgrass was grown under four different conditions, namely (i) tap water, (ii) tap water with nutrients, (iii) soil and tap water and (iv) soil with nutrient solution. The studies were carried out on wheatgrass 5-20 days old. For comparison with laboratory-grown wheatgrass, a set of commercially available wheatgrass tablets and wheat seeds were also studied. Instrumental neutron activation analysis (INAA) was used to determine elemental concentrations in the wheatgrass, wheat seeds and wheatgrass tablets. A total of 15 elements, including Na, K, Ca, Mg, Mn, Br, Fe and Zn, were determined in the samples of shoots and roots of tender wheatgrass. A comparison of the recommended dietary allowance (RDA) of different essential elements with their content in tender wheatgrass revealed that wheatgrass is a good source of minerals for health benefits rather than a food supplement. Bioaccessible fractions of various elements were estimated by a chemical NAA method, subjecting the samples to in vitro gastric and gastro-intestinal digestion followed by NAA. The bioaccessible fractions from both measurements were in the range of 9-60%. Bioaccessibility of the elements studied was found to be highest from fresh wheatgrass and lowest from wheat seeds. Accuracy of the NAA method was evaluated by analyzing two biological reference materials, SRM 1573a (Tomato leaves) from NIST, USA and ICHTJ CTA-vtl-2 (Tobacco leaves) from INCT, Poland. The antioxidant

  19. Soil Moisture Estimation Using MODIS Images (Case Study: Mashhad Plain Area

    Directory of Open Access Journals (Sweden)

    M. Fashaee

    2016-09-01

Full Text Available Introduction: Numerous studies have been undertaken based on satellite imagery in order to estimate soil moisture using vegetation indices such as NDVI. Previous studies suffer from a restriction: these indices cannot provide estimates where vegetative cover is low or where no vegetation exists. Hence, it is essential to develop a model which can overcome this restriction. The focus of this research is on estimating soil moisture for land covers with low or scattered vegetation. The trapezoidal temperature-vegetation (Ts~VI) model is able to consider the status of soil moisture and vegetation condition, and can estimate plant water deficit for land cover with weak or no vegetation. Materials and Methods: Moran proposed the Water Deficit Index (WDI) for evaluating field evapotranspiration rates and relative field water deficit for both full-cover and partially vegetated sites. The theoretical basis of this method is the energy balance equation. The Penman-Monteith energy balance equation was used to calculate the coordinates of the four vertices of the temperature-vegetation trapezoid, corresponding to four different extreme combinations of temperature and vegetation. For the (Ts−Ta)~Vc trapezoid, the four vertices correspond to (1) well-watered full-cover vegetation, (2) water-stressed full-cover vegetation, (3) saturated bare soil, and (4) dry bare soil. WDI is equal to 0 for well-watered conditions and to 1 for maximum stress conditions. As suggested by Moran et al., drawing the trapezoid requires field measurements including wind speed at a height of 2 meters, air pressure, mean daily temperature, the slope of the vapor pressure-temperature curve, the psychrometric constant, vapor pressure at mean temperature, vapor pressure deficit, external radiation, short-wavelength solar radiation, longwave radiation, net radiation, soil heat flux and aerodynamic air resistance. Crop vegetation and canopy resistance should be measured or estimated.
The study
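The WDI computation described in this record amounts to locating a measured (Ts − Ta, Vc) point between the interpolated cold and warm edges of the trapezoid; a minimal sketch, with hypothetical vertex values rather than Penman-Monteith-derived ones:

```python
# Minimal WDI sketch: interpolate the cold (well-watered) and warm (dry) edges
# of the (Ts - Ta) ~ Vc trapezoid at the pixel's vegetation cover Vc, then
# locate the measured point between them. Vertex values are hypothetical.
def wdi(ts_ta, vc, verts):
    (wet_full, dry_full), (wet_bare, dry_bare) = verts
    cold = wet_bare + vc * (wet_full - wet_bare)   # edge through vertices 1 and 3
    warm = dry_bare + vc * (dry_full - dry_bare)   # edge through vertices 2 and 4
    return (ts_ta - cold) / (warm - cold)          # 0 = well-watered, 1 = maximum stress

verts = ((-2.0, 2.0), (0.0, 10.0))   # (cold, warm) edge values at full cover and bare soil
stress = wdi(2.5, 0.5, verts)
```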

  20. Experiències de realitat augmentada en biblioteques : estat de la qüestió

    Directory of Open Access Journals (Sweden)

    Arroyo Vázquez, Natalia

    2016-06-01

Full Text Available Objective: to present the most significant experiences of augmented reality use in libraries, with particular attention to the results obtained, the contributions made and the limitations to be borne in mind. -- Methodology: literature review, then selection and analysis of experiences of augmented reality use in libraries. -- Results: although it is a recent technology, there are already several examples of augmented reality use in libraries. Nevertheless, the results of these experiences need to be made known, so that they can serve not only as models but also to show what actually works. Professionals are offered a catalogue of uses of augmented reality in libraries, whose potential benefits and limitations are critically analysed and which are grouped into seven sections according to purpose: geolocation, historical contextualization, exhibitions and other activities, publications, enrichment of physical spaces, literacy and gamification and, finally, professional uses.

  2. A feasibility study - in vivo measurement of lead 210 in Newfoundland fluorspar miners

    International Nuclear Information System (INIS)

    Davis, M.W.

    1986-02-01

A feasibility study was conducted to determine whether skeletal burdens of Pb-210 in Newfoundland fluorspar miners could be measured by in-vivo techniques using phoswich detectors inside a shadow shield. The detection system comprised two 12.7 cm diameter phoswich detectors with 3 mm thick front crystals of NaI(Tl) and 5 cm thick back crystals of CsI(Tl). The system was calibrated using a head phantom impregnated with Pb-210, and a minimum detection limit of 0.20 nCi in the skull was calculated. Pb-210 burdens in the skull and knee were measured in each of two ex-miners who had received radon-daughter exposures estimated at 1766 and 1235 Working Level Months (WLM). Their last exposures had been 25 and 19 years earlier, respectively, and the Pb-210 burdens had decreased to the point where they were undetectable by this technique. The estimated exposures are not inconsistent with the upper limits of exposure calculated using Eisenbud's model and assuming 0.2 nCi skull burdens. Among thirty other potential candidates for a full-scale study, most had exposures of less than 3500 WLM, and based on the limited data obtained from this work, the results of a full-scale study would carry significant statistical uncertainties. Unfortunately, during the course of this work a negative attitude developed among the candidates and the research had to be stopped prematurely.
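Pb-210 has a half-life of about 22.3 years, so physical decay alone roughly halves a skeletal burden over the 19-25 year gaps since last exposure. A minimal sketch of that decay arithmetic (radioactive decay only; biological clearance is ignored):

```python
import math

PB210_HALF_LIFE_Y = 22.3  # Pb-210 half-life in years

def remaining_burden(initial_nci: float, years: float) -> float:
    """Activity remaining after radioactive decay alone."""
    return initial_nci * math.exp(-math.log(2) * years / PB210_HALF_LIFE_Y)

# A burden at the study's 0.20 nCi detection limit, decayed over 25 years:
print(round(remaining_burden(0.20, 25.0), 3))  # ~0.092 nCi, below the limit
```

This illustrates why burdens measured decades after the last exposure can fall below a detection limit that the original burden would have exceeded.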

  3. How to estimate exposure when studying the temperature-mortality relationship? A case study of the Paris area.

    Science.gov (United States)

    Schaeffer, Laura; de Crouy-Chanel, Perrine; Wagner, Vérène; Desplat, Julien; Pascal, Mathilde

    2016-01-01

Time series studies assessing the effect of temperature on mortality generally use temperatures measured by a single weather station. In the Paris region, there is a substantial measurement network, and a variety of exposure indicators created from multiple stations can be tested. The aim of this study is to test the influence of exposure indicators on the temperature-mortality relationship in the Paris region. The relationship between temperature and non-accidental mortality was assessed based on a time series analysis using Poisson regression and a generalised additive model. Twenty-five stations in Paris and its three neighbouring departments were used to create four exposure indicators: (1) the temperature recorded by one reference station, (2) a simple average of the temperatures of all stations, (3) an average weighted by departmental population and (4) a classification of the stations based on land use, with an average weighted by the population in each class. The relative risks and the Akaike criteria were similar for all the exposure indicators. The estimated temperature-mortality relationship therefore did not appear to be significantly affected by the indicator used, regardless of study zone (departments or region) or age group. The increase in temperature from the 90th to the 99th percentile of the temperature distribution led to a significant increase in mortality among those aged over 75 years (RR = 1.10 [95% CI, 1.07-1.14]). Conversely, the decrease in temperature between the 10th and 1st percentiles had a significant effect on mortality among those under 75 years (RR = 1.04 [95% CI, 1.01-1.06]). In the Paris area, there is no added value in taking multiple climatic stations into account when estimating exposure in time series studies. Methods to better represent the subtle temperature variations of densely populated areas in epidemiological studies are needed.
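Indicator (3), the population-weighted station average, reduces to simple weighted-mean arithmetic. A sketch with invented station data (the names, temperatures and populations below are illustrative, not the study's):

```python
# Population-weighted temperature exposure indicator: average of station
# temperatures weighted by the population each station represents.
stations = [
    {"name": "Paris-Montsouris", "temp_c": 24.1, "population": 2_140_000},
    {"name": "Creteil",          "temp_c": 25.0, "population": 1_390_000},
    {"name": "Nanterre",         "temp_c": 24.6, "population": 1_610_000},
]

def weighted_exposure(stations):
    total_pop = sum(s["population"] for s in stations)
    return sum(s["temp_c"] * s["population"] / total_pop for s in stations)

print(round(weighted_exposure(stations), 2))  # -> 24.5
```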

  4. The influence of study species selection on estimates of pesticide exposure in free-ranging birds

    Science.gov (United States)

    Borges, Shannon L.; Vyas, Nimish B.; Christman, Mary C.

    2014-01-01

Field studies of pesticide effects on birds often utilize indicator species with the purpose of extrapolating to other avian taxa. Little guidance exists for choosing indicator species to monitor the presence and/or effects of contaminants that are labile in the environment or body but are acutely toxic, such as anticholinesterase (anti-ChE) insecticides. Use of an indicator species that does not represent maximum exposure and/or effects could lead to inaccurate risk estimates. Our objective was to test the relevance of a priori selection of indicator species for a study on pesticide exposure of birds inhabiting fruit orchards. We used total plasma cholinesterase (ChE) activity and ChE reactivation to describe the variability in anti-ChE exposure among avian species in two conventionally managed fruit orchards. Of seven species included in statistical analyses, the less common species, the chipping sparrow (Spizella passerina), showed the greatest percentage of exposed individuals and the greatest ChE depression, whereas the two most common species, American robins (Turdus migratorius) and grey catbirds (Dumetella carolinensis), did not show significant exposure. Owing to their lower abundance, chipping sparrows would have been an unlikely choice for study. Our results show that selection of indicator species using traditionally accepted criteria such as abundance and ease of collection may not identify the species at greatest risk. Our efforts also demonstrate the usefulness of conducting multiple-species pilot studies prior to initiating detailed studies of pesticide effects. A study such as ours can help focus research and resources on the most appropriate study species.

  5. A Study on Fuel Estimation Algorithms for a Geostationary Communication & Broadcasting Satellite

    OpenAIRE

    Jong Won Eun

    2000-01-01

An algorithm has been developed to calculate the fuel budget for a geostationary communication and broadcasting satellite. It is essential that the pre-launch fuel budget estimation account for the deterministic transfer and drift-orbit manoeuvre requirements. Once on-station, the calculation of satellite lifetime should be based on the estimation of remaining fuel and an assessment of actual performance. These estimations stem from the proper algorithms to produce the prediction of satellite lifet...
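The on-station part of such a fuel budget can be sketched with the rocket equation: fuel consumed per year of station-keeping delta-V, then remaining lifetime as remaining fuel divided by annual consumption. All numbers below are illustrative assumptions, and the sketch ignores the slow decrease in spacecraft mass over life:

```python
import math

def fuel_for_delta_v(dry_mass_kg, fuel_kg, delta_v_ms, isp_s, g0=9.80665):
    """Fuel consumed for a manoeuvre of delta_v (Tsiolkovsky rocket equation)."""
    m0 = dry_mass_kg + fuel_kg
    m1 = m0 / math.exp(delta_v_ms / (isp_s * g0))
    return m0 - m1

# Illustrative values: ~50 m/s per year of north-south station keeping,
# Isp 290 s, 1300 kg dry mass, 300 kg fuel remaining at start of life.
annual = fuel_for_delta_v(dry_mass_kg=1300.0, fuel_kg=300.0,
                          delta_v_ms=50.0, isp_s=290.0)
lifetime_years = 300.0 / annual   # remaining fuel / annual consumption
print(round(lifetime_years, 1))   # ~10.8 years under these assumptions
```

A real budget would iterate per year (mass drops as fuel is spent) and reserve fuel for end-of-life disposal.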

  6. MO-E-BRF-01: Research Opportunities in Technology for Innovation in Radiation Oncology (Highlight of ASTRO NCI 2013 Workshop)

    International Nuclear Information System (INIS)

    Hahn, S; Jaffray, D; Chetty, I; Benedict, S

    2014-01-01

    Radiotherapy is one of the most effective treatments for solid tumors, in large part due to significant technological advances associated with, for instance, the ability to target tumors to very high levels of accuracy (within millimeters). Technological advances have played a central role in the success of radiation therapy as an oncologic treatment option for patients. ASTRO, AAPM and NCI sponsored a workshop “Technology for Innovation in Radiation Oncology” at the NCI campus in Bethesda, MD on June 13–14, 2013. The purpose of this workshop was to bring together expert clinicians and scientists to discuss the role of disruptive technologies in radiation oncology, in particular with regard to how they are being developed and translated to clinical practice in the face of current and future challenges and opportunities. The technologies discussed encompassed imaging and delivery aspects, along with methods to enable/facilitate application of them in the clinic. Measures for assessment of the performance of these technologies, such as techniques to validate quantitative imaging, were reviewed. Novel delivery technologies, incorporating efficient and safe delivery mechanisms enabled by development of tools for process automation and the associated field of oncology informatics formed one of the central themes of the workshop. The discussion on disruptive technologies was grounded in the need for evidence of efficacy. Scientists in the areas of technology assessment and bioinformatics provided expert views on different approaches toward evaluation of technology efficacy. Clinicians well versed in clinical trials incorporating disruptive technologies (e.g. SBRT for early stage lung cancer) discussed the important role of these technologies in significantly improving local tumor control and survival for these cohorts of patients. Recommendations summary focused on the opportunities associated with translating the technologies into the clinic and assessing their

  7. MO-E-BRF-01: Research Opportunities in Technology for Innovation in Radiation Oncology (Highlight of ASTRO NCI 2013 Workshop)

    Energy Technology Data Exchange (ETDEWEB)

    Hahn, S [University of Pennsylvania, Philadelphia, PA (United States); Jaffray, D [Princess Margaret Hospital, Toronto, ON (Canada); Chetty, I [Henry Ford Health System, Detroit, MI (United States); Benedict, S [UC Davis Cancer Center, Sacramento, CA (United States)

    2014-06-15


  8. Hypnosis and pain perception: An Activation Likelihood Estimation (ALE) meta-analysis of functional neuroimaging studies.

    Science.gov (United States)

    Del Casale, Antonio; Ferracuti, Stefano; Rapinesi, Chiara; De Rossi, Pietro; Angeletti, Gloria; Sani, Gabriele; Kotzalidis, Georgios D; Girardi, Paolo

    2015-12-01

Several studies have reported that hypnosis can modulate pain perception and tolerance by affecting cortical and subcortical activity in brain regions involved in these processes. We conducted an Activation Likelihood Estimation (ALE) meta-analysis of functional neuroimaging studies of pain perception under hypnosis to identify brain activation-deactivation patterns occurring during hypnotic suggestions aimed at pain reduction, including hypnotic analgesic, pleasant, or depersonalization suggestions (HASs). We searched the PubMed, Embase and PsycInfo databases and included papers published in peer-reviewed journals dealing with functional neuroimaging and hypnosis-modulated pain perception. The ALE meta-analysis encompassed data from 75 healthy volunteers reported in 8 functional neuroimaging studies. HASs during experimentally-induced pain, compared with control conditions, correlated with significant activations of the right anterior cingulate cortex (Brodmann's Area [BA] 32), left superior frontal gyrus (BA 6), and right insula, and with deactivation of the right midline nuclei of the thalamus. HASs during experimental pain thus impact both cortical and subcortical brain activity. The increased activation of the anterior cingulate, left superior frontal, and right insular cortices could induce thalamic deactivation (top-down inhibition), which may correlate with reductions in pain intensity.
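The core of the ALE method models each reported focus as a 3D Gaussian and combines per-experiment maps by probabilistic union. The toy sketch below illustrates that union on a small grid; real ALE implementations (e.g. GingerALE) use sample-size-dependent smoothing kernels in MNI space and permutation-based thresholding, none of which is attempted here:

```python
import numpy as np

def modeled_activation(shape, focus, sigma):
    """3D Gaussian 'modeled activation' map for one reported focus."""
    zz, yy, xx = np.indices(shape)
    d2 = (xx - focus[0])**2 + (yy - focus[1])**2 + (zz - focus[2])**2
    return np.exp(-d2 / (2 * sigma**2))

def ale_map(shape, experiments, sigma=2.0):
    """Probabilistic union across experiments: ALE = 1 - prod(1 - MA_i)."""
    ale = np.zeros(shape)
    for foci in experiments:
        # One map per experiment: voxel-wise max over that experiment's foci.
        ma = np.max([modeled_activation(shape, f, sigma) for f in foci], axis=0)
        ale = 1 - (1 - ale) * (1 - ma)
    return ale

# Two toy 'studies' reporting nearby foci on a 20^3 grid (illustrative):
experiments = [[(10, 10, 10)], [(11, 10, 10)]]
ale = ale_map((20, 20, 20), experiments)
print(round(float(ale.max()), 2))  # peak ALE near the shared focus
```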

  9. Pain anticipation: an activation likelihood estimation meta-analysis of brain imaging studies.

    Science.gov (United States)

    Palermo, Sara; Benedetti, Fabrizio; Costa, Tommaso; Amanzio, Martina

    2015-05-01

The anticipation of pain has been investigated in a variety of brain imaging studies. Importantly, there is still no clear overall picture of the areas involved across studies, and the exact role of these regions in pain expectation remains largely unexplored. To address this issue, we used activation likelihood estimation meta-analysis to analyze pain anticipation across several neuroimaging studies. A total of 19 functional magnetic resonance imaging studies were included in the analysis to search for the cortical areas involved in pain anticipation in human experimental models. During anticipation, activated foci were found in the dorsolateral prefrontal, midcingulate and anterior insula cortices, medial and inferior frontal gyri, inferior parietal lobule, middle and superior temporal gyrus, thalamus, and caudate. Deactivated foci were found in the anterior cingulate, superior frontal gyrus, parahippocampal gyrus and in the claustrum. The results of the meta-analytic connectivity analysis provide an overall view of the brain responses triggered by the anticipation of a noxious stimulus. Such a highly distributed perceptual set of self-regulation may prime brain regions to process information in which emotion, action and perception, as well as their related subcategories, play a central role. Not only do these findings provide important information on the neural events that occur when anticipating pain, but they may also give a perspective on nocebo responses, whereby negative expectations may lead to pain worsening.

  10. Study on measuring analysis for estimating effect of energy saving policy

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Joong Ku; Park, Jeong Soon [Korea Energy Economics Institute, Euiwang (Korea)

    1999-12-01

Because a measurement analysis estimating the effects of energy saving policy would be too broad if it covered all industries, this study was limited to the manufacturing industry. It concentrates on measuring energy savings with an energy-saving model, treating energy saving policy as the input and the resulting savings as the output. B/C (benefit-cost) theory was used as the positive-analysis methodology, and the result of the analysis is the investment effect on the manufacturing industry. The total cost invested in the manufacturing sector from 1982 to 1996 was 5,871 billion won at constant 1990 prices, and the energy saving directly obtained from it reached 1,534.5 billion won at constant 1990 prices, for a reported B/C ratio of 2.56. In particular, separating out the amount supported by government policy, the energy-saving support cost was 3,904.2 billion won (constant 1990 prices) and the total benefit was 10,146.4 billion won (constant 1990 prices), comprising a saving of 9,997.9 billion won and an environmental improvement effect of 223.2 billion won. (author). 51 refs., 17 figs., 35 tabs.
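The B/C computation for the policy-supported portion is simple arithmetic. Note that the component figures quoted above sum to slightly more than the stated total benefit, and a full analysis would discount the cost and benefit streams; the sketch just shows the ratio itself:

```python
def benefit_cost_ratio(benefits, costs):
    """B/C ratio as used in the study's positive analysis (billion won)."""
    return sum(benefits) / sum(costs)

# Figures quoted for the policy-supported portion (1990 constant prices):
ratio = benefit_cost_ratio(benefits=[9_997.9, 223.2],  # saving + environmental
                           costs=[3_904.2])            # government support cost
print(round(ratio, 2))  # ~2.62 from the component figures
```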

  11. Fundamental study on temperature estimation of steam generator tubes at sodium-water reaction

    International Nuclear Information System (INIS)

    Furukawa, Tomohiro; Yoshida, Eiichi

    2008-11-01

If a tube fails in the steam generator of a sodium-cooled fast breeder reactor, the adjoining tubes are rapidly heated by the chemical reaction between sodium and water/steam, and it is known that the tubes suffer damage, called 'wastage', from the escaping steam jet. This research is a fundamental, metallography-based study of temperature estimation for tubes damaged in a sodium-water reaction, aimed at establishing a technique for analysing the mechanism of this behaviour. In the examination, material was produced that had been given a rapid thermal history imitating a sodium-water reaction, and it was investigated whether the thermal history (i.e. maximum temperature and holding time) of the samples could be estimated from their metallurgical examination. The major results are as follows: (1) The microstructure of a sample subjected to rapid thermal heating preserves the influence of the maximum temperature and the holding time, and the structure can be explained by reference to the equilibrium diagram and the continuous cooling transformation diagram. (2) In the electrolytic extraction of the samples, the ratio of the remaining volume to the electrolysed volume decreased with increasing maximum temperature and holding time; furthermore, a correlation was observed between the remaining volume of each element (Cr, Mo, Fe, V and Nb) and the thermal history. (3) It was concluded that the thermal history of tubes damaged by a sodium-water reaction may be estimated from metallurgical examinations. (author)

  12. Estimating cardiovascular risk in patients with type 2 diabetes: a national multicenter study in Brazil

    Directory of Open Access Journals (Sweden)

    Gomes Marilia B

    2009-10-01

Full Text Available Abstract: According to the Brazilian National Data Survey, diabetes is the fifth leading cause of hospitalization and one of the ten major causes of mortality in the country. Aims: to stratify the estimated cardiovascular risk (eCVR) in a population of type 2 diabetic patients (T2DM) according to the Framingham prediction equations, and to determine the association between eCVR and the metabolic and clinical control of the disease. Methods: from 2000 to 2001 a cross-sectional multicenter study was conducted in 13 public outpatient diabetes/endocrinology clinics in 8 Brazilian cities. The 10-year risk of developing coronary heart disease (CHD) was estimated with the prediction equations described by Wilson et al (Circulation 1998). The LDL equations were used preferentially; when LDL data were missing, the total cholesterol equations were used instead. Results: data from 1382 patients (59.0% female) were analyzed. Median and interquartile range (IQR) of age and duration of diabetes were 57.4 (51-65) and 8.8 (3-13) years, respectively, with no differences by gender. Forty-two percent of the patients were overweight and 35.4% were obese; the prevalence of overweight and obesity in this T2DM group was significantly higher in women than in men. The eCVR was high (>20%) in 738 (53.4%), intermediate in 202 (14.6%) and low in 442 (32%) patients. Men [25.1 (15.4-37.3)] showed a higher eCVR than women [18.8 (12.4-27.9); p
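The Wilson et al. (1998) equations follow a common survival-model structure: a linear predictor over risk factors, converted to 10-year risk through a baseline survival term. The sketch below shows only that structure; every coefficient, the mean linear predictor and the baseline survival are hypothetical placeholders, not the published values:

```python
import math

# HYPOTHETICAL placeholders, not the Wilson et al. (1998) coefficients:
COEFS = {"age": 0.05, "ln_total_chol": 1.1, "ln_hdl": -0.9,
         "sbp": 0.01, "smoker": 0.5, "diabetes": 0.6}
MEAN_LP = 7.5        # placeholder population-mean linear predictor
BASELINE_S10 = 0.90  # placeholder 10-year baseline survival

def ten_year_chd_risk(age, total_chol, hdl, sbp, smoker, diabetes):
    """10-year CHD risk = 1 - S0 ** exp(linear predictor - population mean)."""
    lp = (COEFS["age"] * age
          + COEFS["ln_total_chol"] * math.log(total_chol)
          + COEFS["ln_hdl"] * math.log(hdl)
          + COEFS["sbp"] * sbp
          + COEFS["smoker"] * smoker
          + COEFS["diabetes"] * diabetes)
    return 1.0 - BASELINE_S10 ** math.exp(lp - MEAN_LP)

risk = ten_year_chd_risk(age=57, total_chol=210, hdl=45, sbp=135,
                         smoker=0, diabetes=1)
print(f"{100 * risk:.1f}%")  # classify: >20% high, 10-20% intermediate, <10% low
```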

  13. [Estimation of DOC concentrations using CDOM absorption coefficients: a case study in Taihu Lake].

    Science.gov (United States)

    Jiang, Guang-Jia; Ma, Rong-Hua; Duan, Hong-Tao

    2012-07-01

Dissolved organic carbon (DOC) is the largest organic carbon stock in water ecosystems and plays an important role in the aquatic carbon cycle. Chromophoric dissolved organic matter (CDOM), an important water colour parameter, is the coloured fraction of DOC, and its absorption controls the attenuation of light under water. The established link between DOC concentration and CDOM absorption makes it possible to determine DOC accumulations in lake waters using remote sensing reflectance or radiance. The present study explored the multiple linear relationship between CDOM absorption coefficients [a(g)(250) and a(g)(365)] and DOC concentrations in Taihu Lake, based on data from 4 cruises (201005, 201101, 201103, 201105; 183 sampling sites in total). The results were validated with data from an experiment carried out from August 29 to September 2, 2011 in Taihu Lake (n = 27), and a universal pattern for modelling lake waters from remote sensing was built. The results demonstrated that this method provides a satisfactory estimation of DOC concentrations in Taihu Lake. Except for the data obtained in January 2011, where the fit did not hold for the winter dataset (201101) owing to the diverse sources and sinks of DOC and CDOM, the multiple linear relationship was robust for the data collected in the other three cruises (R2 = 0.64, RMSE = 14.31%, n = 164) and was validated using the 201108 sampling dataset (R2 = 0.67, RMSE = 10.58%, n = 27). In addition, the form of the statistical model is, to some extent, universal for other water areas, although the model coefficients differ. Further research should focus on parameterization using local data from different lakes, which would provide an effective methodology for estimating DOC concentrations in lakes and other water regions.
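The study's model form, DOC = b0 + b1*a_g(250) + b2*a_g(365), is an ordinary multiple linear regression. A sketch on synthetic data (the coefficients, ranges and noise levels below are invented, not Taihu Lake values):

```python
import numpy as np

# Synthetic CDOM absorption coefficients and DOC concentrations:
rng = np.random.default_rng(0)
ag250 = rng.uniform(5, 30, 50)                       # a_g(250), 1/m
ag365 = 0.2 * ag250 + rng.normal(0, 0.3, 50)         # a_g(365), correlated
doc = 1.5 + 0.12 * ag250 + 0.8 * ag365 + rng.normal(0, 0.1, 50)

# Least-squares fit of DOC = b0 + b1*a_g(250) + b2*a_g(365):
X = np.column_stack([np.ones_like(ag250), ag250, ag365])
coefs, *_ = np.linalg.lstsq(X, doc, rcond=None)
pred = X @ coefs
r2 = 1 - np.sum((doc - pred)**2) / np.sum((doc - doc.mean())**2)
print(np.round(coefs, 2), round(float(r2), 2))
```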

  14. A Comparison Study on the Integrated Risk Estimation for Various Power Systems

    International Nuclear Information System (INIS)

    Kim, Tae Woon; Ha, J. J.; Kim, S. H.; Jeong, J. T.; Min, K. R.; Kim, K. Y.

    2007-06-01

The objective of this study is to establish a system for the comparative analysis of the environmental impacts, risks, health effects and social acceptance of various electricity generation systems, together with a computational framework and the necessary databases. In this study, the second phase of the nuclear research and development program (2002-2004), methodologies for the comparative analysis of the environmental impacts, risks and health effects of various electricity generation systems were investigated and applied to reference power plants. The life cycle assessment (LCA) methodology was adopted as the comparative analysis tool for environmental impacts and applied to fossil-fuelled and nuclear power plants. The scope of the analysis covers the construction, operation (fuel cycle) and demolition of each power generation system. In the risk analysis part, empirical and analytical methods were adopted and applied to fossil-fuelled and nuclear power plants. For the empirical risk assessment, we collected historical records of worldwide energy-related accidents with fatalities over the last 30 years. The risks for nuclear power plants, which have potential releases of radioactive materials, were estimated in a probabilistic way (PSA) by considering the occurrence of severe accidents, and were compared with the risks of other electricity generation systems. The health effects (estimated as external costs) resulting from the operation of nuclear, coal and hydro power systems were estimated and compared using the program developed by the IAEA. For a comprehensive comparison of the various power systems, the analytic hierarchy process (AHP) method is introduced to aggregate the diverse information under conflicting decision criteria. The social aspect is treated by a web
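The AHP step mentioned above derives criterion weights as the principal eigenvector of a reciprocal pairwise-comparison matrix. A sketch with an illustrative judgement matrix (the criteria follow the study's themes, but all numbers are invented):

```python
import numpy as np

criteria = ["environmental impact", "risk", "health effects", "acceptance"]
# A[i, j] = relative importance of criterion i over j (reciprocal matrix).
A = np.array([[1.0, 3.0, 2.0, 5.0],
              [1/3, 1.0, 1/2, 2.0],
              [1/2, 2.0, 1.0, 3.0],
              [1/5, 1/2, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                      # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                     # normalised priority weights
ci = (eigvals[k].real - len(A)) / (len(A) - 1)   # consistency index
for name, weight in zip(criteria, w):
    print(f"{name}: {weight:.3f}")
print("CI:", round(ci, 3))  # CI near 0 indicates consistent judgements
```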

  15. Incorporació de petites seqüències de cinema comercial en l’ensenyament de les drogodependències. Assaig pilot en l'assignatura de Toxicologia

    Directory of Open Access Journals (Sweden)

    Miguel Rodamilans-Pérez

    2013-01-01

Full Text Available The Orfila Teaching Innovation Group, as part of its project to improve teaching quality, is testing the use of cinema for didactic purposes. The teaching material developed in this project consists of short sequences from commercial films, 3 to 5 minutes long, to be used as illustrative elements of the addiction process. Scenes are selected from the films and adapted to our teaching programmes. The opinions of the participating teachers and of the students were collected through personal interviews and an opinion survey, respectively. Both the teacher interviews and the student surveys indicate a high degree of satisfaction.

  16. Why do paleomagnetic studies in Tibet lead to such disparate paleolatitude estimates? (Invited)

    Science.gov (United States)

    Lippert, P. C.; Huang, W.; Van Hinsbergen, D. J.; Dupont-Nivet, G.

    2013-12-01

Paleomagnetism is the only technique that can quantify paleolatitudes. Thus, many paleomagnetists have attempted to date the India-Asia collision by determining when the latitudes of rocks of the northern Himalayas (i.e. the northernmost continental rocks derived from India) overlap with those from the Lhasa terrane (i.e. the southern margin of Asia). The first studies were published in the 1980s, and only recently has the paleomagnetic community revived this technique. A suite of recent papers reported paleomagnetic data from coeval volcanic units of the upper Linzizong Formation in Southern Tibet. Despite similar sampling and measurement methods, these studies presented paleolatitudes for the Lhasa terrane that range over more than 20 degrees, i.e. >2200 km. These results have engendered skepticism within the tectonics community and raised questions regarding the usefulness of paleomagnetism in studying the India-Asia collision. Here we attempt to demonstrate that the diversity of paleolatitude estimates from the upper Linzizong Formation is an artifact of the statistical treatment of datasets, arising primarily from under-sampling of the time-averaged paleomagnetic field in many of the individual studies. Previous reviews treated all of these data with equal weight, despite disparate sample sizes and quality criteria among the individual studies. This approach introduces large errors and spurious uncertainty into paleolatitude calculations. We show that applying the rigorous methodology typically employed by the geomagnetic field community to each paleomagnetic dataset establishes coherency within and between datasets. The resulting latitude history provides a paleomagnetically determined collision age between the Tibetan Himalaya and the southern margin of Asia at the longitudes of Nepal of 49.5 ± 4.5 Ma at 21 ± 4° N latitude. The paleomagnetic age and latitude of this collision may be a few million years earlier and ~2° lower if estimates for
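As a side note on the latitude arithmetic: under the geocentric axial dipole assumption used in such studies, paleolatitude follows from mean inclination via tan(I) = 2 tan(latitude). A minimal sketch (the inclination value is illustrative, not from a specific Linzizong dataset):

```python
import math

def paleolatitude_deg(inclination_deg: float) -> float:
    """Paleolatitude from mean inclination: tan(I) = 2 * tan(lat)."""
    return math.degrees(math.atan(0.5 * math.tan(math.radians(inclination_deg))))

# A mean inclination near 38 degrees corresponds to roughly 21 degrees N:
print(round(paleolatitude_deg(38.0), 1))  # ~21.3
```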

  17. L’avaluació de competències a l’Educació Superior: el cas d’un màster universitari

    Directory of Open Access Journals (Sweden)

    Xavier Ma. Triadó i Ivern

    2013-01-01

Full Text Available The implementation of competencies is a task that university teachers in Spain have been taking up gradually since the entry into force of the EHEA (European Higher Education Area). Even so, optimal levels of competency assessment are still far from being achieved. This article reflects on some good practices in this respect and on the difficulties and limitations that arise when implementing a change in teaching methodologies, within the framework of a university master's degree. The results indicate the degree to which both generic and specific competencies have been assessed and acquired in higher education.

  18. Estimation of RF energy absorbed in the brain from mobile phones in the Interphone Study

    Science.gov (United States)

    Varsier, N; Bowman, J D; Deltour, I; Figuerola, J; Mann, S; Moissonnier, M; Taki, M; Vecchia, P; Villegas, R; Vrijheid, M; Wake, K; Wiart, J

    2011-01-01

    Objectives The objective of this study was to develop an estimate of a radio frequency (RF) dose as the amount of mobile phone RF energy absorbed at the location of a brain tumour, for use in the Interphone Epidemiological Study. Methods We systematically evaluated and quantified all the main parameters thought to influence the amount of specific RF energy absorbed in the brain from mobile telephone use. For this, we identified the likely important determinants of RF specific energy absorption rate during protocol and questionnaire design, we collected information from study subjects, network operators and laboratories involved in specific energy absorption rate measurements and we studied potential modifiers of phone output through the use of software-modified phones. Data collected were analysed to assess the relative importance of the different factors, leading to the development of an algorithm to evaluate the total cumulative specific RF energy (in joules per kilogram), or dose, absorbed at a particular location in the brain. This algorithm was applied to Interphone Study subjects in five countries. Results The main determinants of total cumulative specific RF energy from mobile phones were communication system and frequency band, location in the brain and amount and duration of mobile phone use. Though there was substantial agreement between categorisation of subjects by cumulative specific RF energy and cumulative call time, misclassification was non-negligible, particularly at higher frequency bands. Factors such as adaptive power control (except in Code Division Multiple Access networks), discontinuous transmission and conditions of phone use were found to have a relatively minor influence on total cumulative specific RF energy. Conclusions While amount and duration of use are important determinants of RF dose in the brain, their impact can be substantially modified by communication system, frequency band and location in the brain. 
It is important to take
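The dose algorithm's core idea, cumulative absorbed energy summed over calls, can be sketched as below. The SAR values and mean-power factors are placeholders, not Interphone's actual parameters:

```python
# Cumulative specific RF energy (J/kg) at a brain location, summed over
# calls: SAR at full power, scaled by the system's mean output power
# factor, times call duration. All parameter values are illustrative.
SAR_AT_LOCATION = {("GSM", 900): 0.55, ("GSM", 1800): 0.30,
                   ("UMTS", 2100): 0.40}   # W/kg at full power (assumed)
POWER_FACTOR = {"GSM": 0.5, "UMTS": 0.01}  # mean output / max (assumed)

def cumulative_dose(calls):
    """calls: list of (system, band_MHz, duration_s) tuples -> dose in J/kg."""
    total = 0.0
    for system, band, duration in calls:
        sar = SAR_AT_LOCATION[(system, band)]
        total += sar * POWER_FACTOR[system] * duration
    return total

calls = [("GSM", 900, 3600), ("UMTS", 2100, 3600)]
print(round(cumulative_dose(calls), 1))  # ~1004.4 J/kg for these inputs
```

The large gap between the GSM and UMTS contributions mirrors the abstract's point that communication system strongly modifies the dose for a given call time.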

  19. Integration of Active and Passive Safety Technologies--A Method to Study and Estimate Field Capability.

    Science.gov (United States)

    Hu, Jingwen; Flannagan, Carol A; Bao, Shan; McCoy, Robert W; Siasoco, Kevin M; Barbat, Saeed

    2015-11-01

    The objective of this study is to develop a method that uses a combination of field data analysis, naturalistic driving data analysis, and computational simulations to explore the potential injury reduction capabilities of integrating passive and active safety systems in frontal impact conditions. For the purposes of this study, the active safety system is actually a driver assist (DA) feature that has the potential to reduce delta-V prior to a crash, in frontal or other crash scenarios. A field data analysis was first conducted to estimate the delta-V distribution change based on an assumption of 20% crash avoidance resulting from a pre-crash braking DA feature. Analysis of changes in driver head location during 470 hard braking events in a naturalistic driving study found that drivers' head positions were mostly in the center position before the braking onset, while the percentage of time drivers leaning forward or backward increased significantly after the braking onset. Parametric studies with a total of 4800 MADYMO simulations showed that both delta-V and occupant pre-crash posture had pronounced effects on occupant injury risks and on the optimal restraint designs. By combining the results for the delta-V and head position distribution changes, a weighted average of injury risk reduction of 17% and 48% was predicted by the 50th percentile Anthropomorphic Test Device (ATD) model and human body model, respectively, with the assumption that the restraint system can adapt to the specific delta-V and pre-crash posture. This study demonstrated the potential for further reducing occupant injury risk in frontal crashes by the integration of a passive safety system with a DA feature. Future analyses considering more vehicle models, various crash conditions, and variations of occupant characteristics, such as age, gender, weight, and height, are necessary to further investigate the potential capability of integrating passive and DA or active safety systems.
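The weighted-average combination described above, a delta-V probability distribution combined with a per-delta-V injury risk curve, can be sketched as follows; all distributions and risk values are illustrative, not the study's:

```python
# Expected injury risk = sum over delta-V bins of P(bin) * risk(bin).
# A DA feature that avoids or mitigates crashes shifts probability mass
# toward lower delta-V bins. Values below are invented for illustration.
bins_kmh = [20, 40, 60, 80]                  # delta-V bin midpoints
baseline_p = [0.50, 0.30, 0.15, 0.05]        # crash probability per bin
with_da_p = [0.60, 0.27, 0.10, 0.03]         # shifted toward lower delta-V
injury_risk = [0.02, 0.10, 0.35, 0.70]       # injury probability per bin

def expected_risk(p, risk):
    return sum(pi * ri for pi, ri in zip(p, risk))

base = expected_risk(baseline_p, injury_risk)
da = expected_risk(with_da_p, injury_risk)
print(f"relative reduction: {100 * (base - da) / base:.0f}%")  # -> 25%
```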

  20. Estimation of RF energy absorbed in the brain from mobile phones in the Interphone Study.

    Science.gov (United States)

    Cardis, E; Varsier, N; Bowman, J D; Deltour, I; Figuerola, J; Mann, S; Moissonnier, M; Taki, M; Vecchia, P; Villegas, R; Vrijheid, M; Wake, K; Wiart, J

    2011-09-01

    The objective of this study was to develop an estimate of a radio frequency (RF) dose as the amount of mobile phone RF energy absorbed at the location of a brain tumour, for use in the Interphone Epidemiological Study. We systematically evaluated and quantified all the main parameters thought to influence the amount of specific RF energy absorbed in the brain from mobile telephone use. For this, we identified the likely important determinants of RF specific energy absorption rate during protocol and questionnaire design, we collected information from study subjects, network operators and laboratories involved in specific energy absorption rate measurements and we studied potential modifiers of phone output through the use of software-modified phones. Data collected were analysed to assess the relative importance of the different factors, leading to the development of an algorithm to evaluate the total cumulative specific RF energy (in joules per kilogram), or dose, absorbed at a particular location in the brain. This algorithm was applied to Interphone Study subjects in five countries. The main determinants of total cumulative specific RF energy from mobile phones were communication system and frequency band, location in the brain and amount and duration of mobile phone use. Though there was substantial agreement between categorisation of subjects by cumulative specific RF energy and cumulative call time, misclassification was non-negligible, particularly at higher frequency bands. Factors such as adaptive power control (except in Code Division Multiple Access networks), discontinuous transmission and conditions of phone use were found to have a relatively minor influence on total cumulative specific RF energy. While amount and duration of use are important determinants of RF dose in the brain, their impact can be substantially modified by communication system, frequency band and location in the brain. 
    It is important to take these factors into account in analyses of risk.
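    The dose algorithm can be pictured as a sum over calls of a location-specific absorption rate times call duration, scaled by output-modifying factors. A schematic sketch only; the function name, SAR values, and power factors below are hypothetical, not the Interphone algorithm's actual parameters:

```python
# Schematic sketch: total cumulative specific RF energy (J/kg) at one brain
# location, summed over calls. SAR values and power factors are made up.

def cumulative_specific_energy(calls, sar_lookup):
    """calls: list of (system, band, duration_s, power_factor);
    sar_lookup: (system, band) -> SAR (W/kg) at the location at full power."""
    return sum(sar_lookup[(system, band)] * duration * power_factor
               for system, band, duration, power_factor in calls)

sar = {("GSM", 900): 0.5, ("GSM", 1800): 0.25}   # illustrative W/kg values
calls = [("GSM", 900, 120.0, 1.0),                # 2-minute call, full power
         ("GSM", 1800, 300.0, 0.5)]               # 5-minute call, half power
print(cumulative_specific_energy(calls, sar))     # 0.5*120 + 0.25*300*0.5 = 97.5
```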

  1. Generalized equations for estimating DXA percent fat of diverse young women and men: The Tiger Study

    Science.gov (United States)

    Popular generalized equations for estimating percent body fat (BF%) developed with cross-sectional data are biased when applied to racially/ethnically diverse populations. We developed accurate anthropometric models to estimate dual-energy x-ray absorptiometry BF% (DXA-BF%) that can be generalized t...

  2. Estimating the Entropy of Binary Time Series: Methodology, Some Theory and a Simulation Study

    Directory of Open Access Journals (Sweden)

    Elie Bienenstock

    2008-06-01

    Full Text Available Partly motivated by entropy-estimation problems in neuroscience, we present a detailed and extensive comparison between some of the most popular and effective entropy estimation methods used in practice: the plug-in method, four different estimators based on the Lempel-Ziv (LZ) family of data compression algorithms, an estimator based on the Context-Tree Weighting (CTW) method, and the renewal entropy estimator. METHODOLOGY: Three new entropy estimators are introduced: two new LZ-based estimators, and the “renewal entropy estimator,” which is tailored to data generated by a binary renewal process. For two of the four LZ-based estimators, a bootstrap procedure is described for evaluating their standard error, and a practical rule of thumb is heuristically derived for selecting the values of their parameters in practice. THEORY: We prove that, unlike their earlier versions, the two new LZ-based estimators are universally consistent, that is, they converge to the entropy rate for every finite-valued, stationary and ergodic process. An effective method is derived for the accurate approximation of the entropy rate of a finite-state hidden Markov model (HMM) with known distribution. Heuristic calculations are presented and approximate formulas are derived for evaluating the bias and the standard error of each estimator. SIMULATION: All estimators are applied to a wide range of data generated by numerous different processes with varying degrees of dependence and memory. The main conclusions drawn from these experiments include: (i) For all estimators considered, the main source of error is the bias. (ii) The CTW method is repeatedly and consistently seen to provide the most accurate results. (iii) The performance of the LZ-based estimators is often comparable to that of the plug-in method. (iv) The main drawback of the plug-in method is its computational inefficiency; with small word-lengths it fails to detect longer-range structure in
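    The plug-in method mentioned above estimates the empirical entropy of fixed-length words and divides by the word length. A minimal sketch that also illustrates the abstract's closing point: for a deterministic alternating sequence (true entropy rate 0), short word lengths leave the estimate far too high because they miss the longer-range structure.

```python
from collections import Counter
from math import log2

def plugin_entropy_rate(bits, word_len):
    """Plug-in estimate of the entropy rate (bits/symbol): the empirical
    entropy of overlapping words of length word_len, divided by word_len."""
    words = [tuple(bits[i:i + word_len])
             for i in range(len(bits) - word_len + 1)]
    n = len(words)
    return -sum((c / n) * log2(c / n)
                for c in Counter(words).values()) / word_len

# Deterministic alternating sequence: true entropy rate is 0.
bits = [0, 1] * 500
print(round(plugin_entropy_rate(bits, 2), 2))   # ~0.5 (biased high)
print(round(plugin_entropy_rate(bits, 10), 2))  # ~0.1 (longer words help)
```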

  3. Performance Estimation of Networked Business Models: Case Study on a Finnish eHealth Service Project

    Directory of Open Access Journals (Sweden)

    Marikka Heikkilä

    2014-08-01

    Full Text Available Purpose: The objective of this paper is to propose and demonstrate a framework for estimating performance in a networked business model. Design/methodology/approach: Our approach is design science, utilising action research in studying a case of four independent firms in the Health & Wellbeing sector aiming to jointly provide a new service for business and private customers. The duration of the research study is 3 years. Findings: We propose that a balanced set of performance indicators can be defined by paying attention to all main components of the business model, enriched with measures of network collaboration. The results highlight the importance of measuring all main components of the business model and also the business network partners’ view on trust, contracts and fairness. Research implications: This article contributes to the business model literature by combining business modelling with performance evaluation. The article points out that it is essential to create metrics that can be applied to evaluate and improve the business model blueprints, but it is also important to measure business collaboration aspects. Practical implications: Companies have already adopted the Business Model Canvas or similar business model tools to innovate new business models. We suggest that companies continue their business model innovation work by agreeing on a set of performance metrics, building on the business model components, enriched with measures of network collaboration. Originality/value: This article contributes to the business model literature and praxis by combining business modelling with performance evaluation.

  4. Estimation of Hypertension Risk from Lifestyle Factors and Health Profile: A Case Study

    Directory of Open Access Journals (Sweden)

    Zhuoyuan Zheng

    2014-01-01

    Full Text Available Hypertension is a highly prevalent risk factor for cardiovascular disease, and it can also lead to other diseases which seriously harm human health. Screening the risks and finding a clinical model for estimating the risk of onset, maintenance, or prognosis of hypertension are of great importance to the prevention and treatment of the disease, especially if the indicator can be derived from a simple health profile. In this study, we investigate a chronic disease questionnaire data set of 6563 rural citizens in East China and identify a clinical signature that can assess the risk of hypertension easily and accurately. The signature achieves an accuracy of about 83% on the external test dataset, with an AUC of 0.91. Our study demonstrates that a combination of simple lifestyle features can sufficiently reflect the risk of hypertension onset. This finding provides potential guidance for disease prevention and control as well as development of home care and home-care technologies.
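    The reported AUC of 0.91 can be read as the probability that a randomly chosen hypertensive subject receives a higher risk score than a randomly chosen non-hypertensive one. A minimal rank-comparison sketch with made-up scores (not the study's data):

```python
def auc(scores_pos, scores_neg):
    """AUC = probability that a random positive case scores higher than a
    random negative case; ties count half."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Illustrative risk scores for 3 hypertensive and 3 normotensive subjects:
print(round(auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2]), 3))  # 0.889
```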

  5. Study on inelastic analysis method for structural design (1). Estimation method of loading history effect

    International Nuclear Information System (INIS)

    Tanaka, Yoshihiko; Kasahara, Naoto

    2003-05-01

    The advanced loop-type reactor system, one of the promising concepts in the Feasibility Study of the FBR Cycle, adopts many innovative ideas to meet challenging requirements for safety and economy. As a result, the structures of the reactor system are expected to be subjected to more severe loads than their predecessors. One countermeasure is design by inelastic analysis. In the past, many studies showed that structural design by inelastic analysis is much more reasonable than design by conservative elastic analysis. However, inelastic analysis has hardly been adopted in nuclear design so far. One of the reasons is that inelastic analysis exhibits a loading history effect, that is, the analysis result differs depending on the order of loads. It seems difficult to find a general solution for the loading history effect. Consequently, inelastic analysis outputs from four different thermal load histories, which consist of the thermal load cycle including the severest cold shock ('C') and the one including the severest hot shock ('H'), were compared with each other. From this comparison, it was revealed that the thermal load history with 'H's evenly distributed among 'C's tends to give the most conservative damage estimation derived from inelastic analysis output. Therefore, such a thermal load history pattern is proposed for structural design by inelastic analysis. (author)

  6. Urban energy consumption and related carbon emission estimation: a study at the sector scale

    Science.gov (United States)

    Lu, Weiwei; Chen, Chen; Su, Meirong; Chen, Bin; Cai, Yanpeng; Xing, Tao

    2013-12-01

    With rapid economic development and energy consumption growth, China has become the largest energy consumer in the world. Prompted by extensive international concern, there is an urgent need to analyze the characteristics of energy consumption and related carbon emission, with the objectives of saving energy, reducing carbon emission, and lessening environmental impact. Focusing on urban ecosystems, the biggest energy consumers, this paper establishes a method for estimating energy consumption and related carbon emission at the urban sector scale. Based on data for 1996-2010, the proposed method was applied to Beijing in a case study to analyze the consumption of different energy resources (i.e., coal, oil, gas, and electricity) and related carbon emission in different sectors (i.e., agriculture, industry, construction, transportation, household, and service sectors). The results showed that coal and oil contributed most to energy consumption and carbon emission among the energy resources during the study period, while the industrial sector consumed the most energy and emitted the most carbon among the sectors. Suggestions were put forward for energy conservation and emission reduction in Beijing. The analysis of energy consumption and related carbon emission at the sector scale is helpful for practical energy saving and emission reduction in urban ecosystems.
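    The sector-scale accounting described above amounts to multiplying each sector's consumption of each fuel by a fuel-specific carbon emission factor and summing. A schematic sketch; the fuels, sectors, and factor values are illustrative, not the paper's coefficients:

```python
# Illustrative emission factors (t CO2 per unit of fuel); values are made up.
EMISSION_FACTOR = {"coal": 2.66, "oil": 2.02, "gas": 1.63}

def sector_emissions(consumption):
    """consumption: {sector: {fuel: energy_use}} -> {sector: total CO2}."""
    return {sector: sum(EMISSION_FACTOR[fuel] * amount
                        for fuel, amount in fuels.items())
            for sector, fuels in consumption.items()}

use = {"industry": {"coal": 100.0, "oil": 20.0},
       "transportation": {"oil": 50.0}}
result = sector_emissions(use)
print(round(result["industry"], 2))        # 100*2.66 + 20*2.02 = 306.4
print(round(result["transportation"], 2))  # 50*2.02 = 101.0
```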

  7. Estimation of beverage consumption and associated caloric intake in adult Czech population. An observational study.

    Science.gov (United States)

    Adámková, Věra; Hubáček, Jaroslav A; Zimmelová, Petra; Velemínský, Miloš

    2011-01-01

    Food intake is a commonly monitored issue in many studies. In contrast, almost no information has been published on beverage intake in adults. To evaluate beverage intake, we studied a population of 1,200 adults (656 males and 544 females, aged 18-54 years). The volumes and types of beverages were obtained from self-reported questionnaires. The mean beverage intake was highly variable, with a minimum of 450 mL/day and a maximum of 5,330 mL/day. A mean of 1,575 mL/day was found in the entire population (2,300 mL in males and 840 mL in females). Different patterns in the consumption of beverage types were observed between the males and females. For both males and females, the most common beverage consumed was water, followed by tea. The next most preferred beverages were alcoholic beer, coffee, and non-alcoholic beer in males and coffee, milk, and alcoholic beer in females. The estimated caloric intake from beverages covers, in most individuals, 10-30% of the recommended daily caloric intake. There is substantial variation among individuals, both in beverage intake and in caloric intake through beverages. The caloric intake from beverages reaches, in some individuals, one-third of the recommended daily caloric intake. © 2011 Neuroendocrinology Letters

  8. A case-control study estimating accident risk for alcohol, medicines and illegal drugs.

    Directory of Open Access Journals (Sweden)

    Kim Paula Colette Kuypers

    Full Text Available The aim of the present study was to assess the risk of having a traffic accident after using alcohol, single drugs, or a combination, and to determine the concentrations at which this risk is significantly increased. A population-based case-control study was carried out, collecting whole blood samples of both cases and controls, in which a number of drugs were detected. The risk of having an accident when under the influence of drugs was estimated using logistic regression adjusting for gender, age and time period of accident (cases)/sampling (controls). The main outcome measures were odds ratios (OR) for accident risk associated with single and multiple drug use. In total, 337 cases (negative: 176; positive: 161) and 2726 controls (negative: 2425; positive: 301) were included in the study. Main findings were that (1) alcohol in general (all concentrations together) caused an elevated crash risk; (2) cannabis in general also caused an increase in accident risk; at a cut-off of 2 ng/mL THC the risk of having an accident was four times the risk associated with the lowest THC concentrations; (3) when ranking the adjusted ORs from lowest to highest risk, alcohol alone or in combination with other drugs was related to a very elevated crash risk, with the highest risk for stimulants combined with sedatives. The study demonstrated a concentration-dependent crash risk for THC-positive drivers. Alcohol and alcohol-drug combinations are by far the most prevalent substances in drivers and subsequently pose the largest risk in traffic, both in terms of risk and scope.
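    From the counts reported in the abstract (161 drug-positive and 176 drug-negative cases; 301 drug-positive and 2425 drug-negative controls), a crude odds ratio can be computed directly. Note the study itself reports ORs adjusted for gender, age and time period via logistic regression, so this unadjusted figure is only a sketch of the calculation:

```python
from math import exp, log, sqrt

def crude_odds_ratio(case_pos, case_neg, ctrl_pos, ctrl_neg):
    """Crude odds ratio for a 2x2 exposure table, with a 95% Wald CI."""
    or_ = (case_pos * ctrl_neg) / (case_neg * ctrl_pos)
    se = sqrt(1 / case_pos + 1 / case_neg + 1 / ctrl_pos + 1 / ctrl_neg)
    lo, hi = exp(log(or_) - 1.96 * se), exp(log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Counts from the abstract: cases 161 positive / 176 negative,
# controls 301 positive / 2425 negative.
or_, ci = crude_odds_ratio(161, 176, 301, 2425)
print(round(or_, 2), [round(x, 2) for x in ci])
```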

  9. Burden of diabetes mellitus estimated with a longitudinal population-based study using administrative databases.

    Directory of Open Access Journals (Sweden)

    Luciana Scalone

    Full Text Available OBJECTIVE: To assess the epidemiologic and economic burden of diabetes mellitus (DM) from a longitudinal population-based study. RESEARCH DESIGN AND METHODS: Lombardy Region includes 9.9 million individuals. Its DM population was identified through a data warehouse (DENALI), which matches, with a probabilistic linkage, demographic, clinical and economic data from different healthcare administrative databases. All individuals who, during the year 2000, had a hospital discharge with an ICD-9-CM code 250.XX, and/or two consecutive prescriptions of drugs for diabetes (ATC code A10XXXX) within one year, and/or an exemption from co-payment of healthcare costs specific for DM, were selected and followed up to 9 years. We calculated prevalence, mortality and healthcare costs (hospitalizations, drugs and outpatient examinations/visits) from the National Health Service's perspective. RESULTS: We identified 312,223 eligible subjects. The study population (51% male) had a mean age of 66 (range 0.03 to 105.12) years at the index date. Prevalence ranged from 0.4% among subjects aged ≤45 years to 10.1% among those >85 years old. Overall, 43.4 deaths per 1,000 patients per year were estimated, significantly (p<0.001) higher in men than women. Overall, 3,315 €/patient-year were spent on average: hospitalizations were the cost driver (54.2% of total cost). Drugs contributed 31.5%, and outpatient claims represented 14.3% of total costs. Thirty-five percent of hospital costs were attributable to cerebro-/cardiovascular reasons, 6% to other complications of DM, and 4% to DM as a main diagnosis. Cardiovascular drugs contributed 33.5% of total drug costs; 21.8% was attributable to class A (16.7% to class A10) and 4.3% to class B (2.4% to class B01) drugs. CONCLUSIONS: Merging different administrative databases can provide a wealth of data on large populations observed over long time periods. DENALI proved to be an efficient instrument to obtain accurate estimates of burden of

  10. Heritability Estimates of Endophenotypes of Long and Health Life: The Long Life Family Study

    DEFF Research Database (Denmark)

    Matteini, Amy M; Fallin, M Daniele; Kammerer, Candace M

    2010-01-01

    survival were identified and heritability estimates were calculated. Principal components (PCs) analysis was carried out using 28 physiologic measurements from five trait domains (cardiovascular, cognition, physical function, pulmonary, and metabolic). RESULTS: The five most dominant PCs accounted for 50......% of underlying trait variance. The first PC (PC1), which consisted primarily of poor pulmonary and physical function, represented 14.3% of the total variance and had an estimated heritability of 39%. PC2 consisted of measures of good metabolic and cardiovascular function with an estimated heritability of 27%. PC...

  11. Evaluation Study of Fast Spectral Estimators Using In-vivo Data

    DEFF Research Database (Denmark)

    Hansen, Kristoffer Lindskov; Gran, Fredrik; Pedersen, Mads Møller

    2009-01-01

    Spectrograms in medical ultrasound are usually estimated with Welch's method (WM). To achieve sufficient spectral resolution and contrast, WM uses an observation window (OW) of up to 256 emissions per estimate. Two adaptive filterbank methods have been suggested to reduce the OW: Blood spectral Power Capon (BPC) and the Blood Amplitude and Phase EStimation method (BAPES). Ten volunteers were scanned over the carotid artery. From each dataset, 28 spectrograms were produced by combining four approaches (WM with a Hanning window (W.HAN), WM with a boxcar window (W.BOX), BPC and BAPES) and seven

  12. Estimating mortality from external causes using data from retrospective surveys: A validation study in Niakhar (Senegal

    Directory of Open Access Journals (Sweden)

    Gilles Pison

    2018-03-01

    Full Text Available Background: In low- and middle-income countries (LMICs), data on causes of death are often inaccurate or incomplete. In this paper, we test whether adding a few questions about injuries and accidents to mortality questionnaires used in representative household surveys would yield accurate estimates of the extent of mortality due to external causes (accidents, homicides, or suicides). Methods: We conduct a validation study in Niakhar (Senegal), during which we compare reported survey data to high-quality prospective records of deaths collected by a health and demographic surveillance system (HDSS). Results: Survey respondents more frequently list the deaths of their adult siblings who die of external causes than the deaths of those who die from other causes. The specificity of survey data is high, but sensitivity is low. Among reported deaths, less than 60% of the deaths classified as due to external causes by the HDSS are also classified as such by survey respondents. Survey respondents report deaths due to road-traffic accidents better than deaths from suicides and homicides. Conclusions: Asking questions about deaths resulting from injuries and accidents during surveys might help measure mortality from external causes in LMICs, but the resulting data display systematic bias in a rural population of Senegal. Future studies should (1) investigate whether similar biases also apply in other settings and (2) test new methods to further improve the accuracy of survey data on mortality from external causes. Contribution: This study helps strengthen the monitoring of sustainable development targets in LMICs by validating a simple approach for the measurement of mortality from external causes.
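    The validation above rests on sensitivity (the share of HDSS-confirmed external-cause deaths that respondents also flag) and specificity (the share of other deaths correctly not flagged). A minimal sketch with illustrative counts (not the study's data), chosen to mirror the reported pattern of high specificity and sensitivity below 60%:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts: of 40 HDSS-confirmed external-cause deaths,
# respondents flagged 23; of 260 other deaths, 5 were wrongly flagged.
sens, spec = sensitivity_specificity(23, 17, 255, 5)
print(round(sens, 3), round(spec, 3))
```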

  13. Using generalizability analysis to estimate parameters for anatomy assessments: A multi-institutional study.

    Science.gov (United States)

    Byram, Jessica N; Seifert, Mark F; Brooks, William S; Fraser-Cotlin, Laura; Thorp, Laura E; Williams, James M; Wilson, Adam B

    2017-03-01

    With integrated curricula and multidisciplinary assessments becoming more prevalent in medical education, there is a continued need for educational research to explore the advantages, consequences, and challenges of integration practices. This retrospective analysis investigated the number of items needed to reliably assess anatomical knowledge in the context of gross anatomy and histology. A generalizability analysis was conducted on gross anatomy and histology written and practical examination items that were administered in a discipline-based format at Indiana University School of Medicine and in an integrated fashion at the University of Alabama School of Medicine and Rush University Medical College. Examination items were analyzed using a partially nested design s×(i:o) in which items were nested within occasions (i:o) and crossed with students (s). A reliability standard of 0.80 was used to determine the minimum number of items needed across examinations (occasions) to make reliable and informed decisions about students' competence in anatomical knowledge. Decision study plots are presented to demonstrate how the number of items per examination influences the reliability of each administered assessment. Using the example of a curriculum that assesses gross anatomy knowledge over five summative written and practical examinations, the results of the decision study estimated that 30 and 25 items would be needed on each written and practical examination to reach a reliability of 0.80, respectively. This study is particularly relevant to educators who may question whether the amount of anatomy content assessed in multidisciplinary evaluations is sufficient for making judgments about the anatomical aptitude of students. Anat Sci Educ 10: 109-119. © 2016 American Association of Anatomists.
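    As a simplified illustration of the decision-study logic above: the generalizability coefficient grows with the number of items, so one can solve for the smallest item count reaching the 0.80 standard. This sketch collapses the paper's partially nested s×(i:o) design to a simple crossed students-by-items design, and the variance components are hypothetical:

```python
# Decision-study sketch for a simple s x i design; variance components
# below are made up, not the study's estimates.

def g_coefficient(var_student, var_resid, n_items):
    """E-rho^2 = student variance / (student variance + error/n_items)."""
    return var_student / (var_student + var_resid / n_items)

def items_needed(var_student, var_resid, target=0.80):
    """Smallest number of items whose G coefficient reaches the target."""
    n = 1
    while g_coefficient(var_student, var_resid, n) < target:
        n += 1
    return n

print(items_needed(1.0, 8.7))  # 35 items for a residual/student ratio of 8.7
```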

  14. Age estimation using development of third molars in South Indian population: A radiological study.

    Science.gov (United States)

    Priyadharshini, K Indra; Idiculla, Jose Joy; Sivapathasundaram, B; Mohanbabu, V; Augustine, Dominic; Patil, Shankargouda

    2015-05-01

    To assess the estimation of chronological age based on the stages of third molar development following the eight-stage (A-H) method of Demirjian et al. in a Chennai population of South India. A sample consisting of 848 individuals (471 males and 377 females) aged between 14 and 30 years was randomly selected for the clinical evaluation, and 323 orthopantomograms with clinically missing third molars were taken for radiological evaluation using Demirjian's method from a Chennai population of known chronological age and sex. Statistical analysis was performed using Pearson's Chi-square test, and mean values were compared between the study groups using the t-test or analysis of variance (ANOVA) followed by Tukey's honestly significant difference (HSD). In the present study, the mean age of having clinically completely erupted maxillary third molars was 22.41 years in male subjects and 23.81 years in female subjects, and that of mandibular third molars was 21.49 years in male subjects and 23.34 years in female subjects. Mandibular third molars were clinically missing more often in females than in males. Eruption of mandibular third molars was generally ahead of the emergence of maxillary third molars into the oral cavity. Third molar development between male and female subjects showed statistically significant differences at calcification stage F and stage G in maxillary third molars and stage F in mandibular third molars. Third molar eruption reached Demirjian's formation stages earlier in males than in females. It is suggested that in future studies, to increase the accuracy of age determination, indications of sexual maturity and ossification should also be evaluated in addition to third molar mineralization.

  15. Estimation of mean glandular dose for patients who undergo mammography and studying the factors affecting it

    Science.gov (United States)

    Barzanje, Sana L. N. H.; Harki, Edrees M. Tahir Nury

    2017-09-01

    The objective of this study was to determine the mean glandular dose (MGD) during diagnostic mammography. The study was conducted in two hospitals in Hawler city in the Kurdistan region of Iraq; the exposure parameters kVp and mAs were recorded for 40 patients undergoing mammography. The MGD was estimated by multiplying the entrance surface dose (ESD) by the normalized glandular dose (Dn). The ESD was measured indirectly by measuring the output radiation (mGy/mAs) using a PalmRAD 907 (Geiger detector). The results showed that the mean and standard deviation of MGD for Screen Film Mammography and Digital Mammography are (0.95±0.18) mGy and (0.99±0.26) mGy, respectively, and that there is a significant difference between MGD for Screen Film Mammography and Digital Mammography views (p≤0.05). The mean value and standard deviation of MGD for screen film mammography are (0.96±0.21) mGy for the CC projection and (1.03±0.3) mGy for the MLO projection, while for digital mammography they are (0.92±0.17) mGy for the CC projection and (0.98±0.2) mGy for the MLO projection. The effects of kVp and mAs on MGD were also studied, showing that, in general, as kVp and mAs increased, the MGD increased accordingly in both mammography systems.
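    The dose calculation described above is a two-factor product, MGD = ESD × Dn, with the ESD itself being the measured tube output (mGy/mAs) times the mAs. A minimal sketch; the output, mAs, and Dn values below are illustrative, not the study's measurements:

```python
def mean_glandular_dose(output_mGy_per_mAs, mAs, dn):
    """MGD (mGy) = ESD x Dn, where ESD = tube output (mGy/mAs) * mAs and
    Dn is the normalized glandular dose conversion factor."""
    esd = output_mGy_per_mAs * mAs
    return esd * dn

# Illustrative values chosen to land near the reported ~0.95-0.99 mGy range:
print(round(mean_glandular_dose(0.060, 80.0, 0.20), 2))  # ESD 4.8 mGy -> MGD 0.96 mGy
```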

  16. NIH study finds that coffee drinkers have lower risk of death

    Science.gov (United States)

    Older adults who drank coffee -- caffeinated or decaffeinated -- had a lower risk of death overall than others who did not drink coffee, according to a study by researchers from the National Cancer Institute (NCI), part of the National Institutes of Health,

  17. NIH mouse study finds gut microorganisms may determine cancer treatment outcome

    Science.gov (United States)

    An intact gut commensal microbiota, which is a population of microorganisms living in the intestine, is required for optimal response to cancer therapy, according to a mouse study by scientists at the National Cancer Institute (NCI)

  18. Biomass estimation with high resolution satellite images: A case study of Quercus rotundifolia

    Science.gov (United States)

    Sousa, Adélia M. O.; Gonçalves, Ana Cristina; Mesquita, Paulo; Marques da Silva, José R.

    2015-03-01

    Forest biomass has had a growing importance in the world economy as a global strategic reserve, due to applications in bioenergy, bioproduct development and issues related to reducing greenhouse gas emissions. Current techniques used for forest inventory are usually time consuming and expensive. Thus, there is an urgent need to develop reliable, low-cost methods that can be used for forest biomass estimation and monitoring. This study uses new techniques to process high spatial resolution satellite images (0.70 m) in order to assess and monitor forest biomass. A multi-resolution segmentation method and object-oriented classification are used to obtain the area of tree canopy horizontal projection for Quercus rotundifolia. Forest inventory allows for calculation of tree and canopy horizontal projection and biomass, the latter with allometric functions. The two data sets are used to develop linear functions to assess above-ground biomass, with crown horizontal projection as an independent variable. The functions for the cumulative values, both for inventory and satellite data, for a prediction error equal to or smaller than that of the Portuguese national forest inventory (7%), correspond to stand areas of 0.5 ha, which include most of the Q. rotundifolia stands.
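    The linear biomass functions described above can be fitted by ordinary least squares with crown horizontal projection as the single predictor. A minimal sketch with made-up sample points (the study's actual functions and coefficients are not reproduced here):

```python
# Ordinary least squares fit of biomass vs. crown horizontal projection.
# Sample points are invented for illustration.

def fit_line(x, y):
    """Least-squares slope and intercept for y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

crown_m2 = [10.0, 20.0, 30.0, 40.0]    # crown horizontal projection (m^2)
biomass_kg = [120.0, 210.0, 320.0, 400.0]
a, b = fit_line(crown_m2, biomass_kg)
print(round(a, 2), round(b, 2))  # slope 9.5, intercept 25.0
```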

  19. Studies estimating the dermal bioavailability of polynuclear aromatic hydrocarbons from manufactured plant tar-contaminated soils

    International Nuclear Information System (INIS)

    Roy, T.A.; Krueger, A.J.; Taylor, B.B.; Mauro, D.M.; Goldstein, L.S.

    1998-01-01

    In vitro percutaneous absorption studies were performed with contaminated soils or organic extracts of contaminated soils collected at manufactured gas plant (MGP) sites. The MGP tar-contaminated soils were found to contain a group of targeted polynuclear aromatic hydrocarbons (PAH) at levels ranging from 10 to 2400 mg/kg. The soil extracts contained target PAH at levels ranging from 12,000 to 34,000 mg/kg. Dermal penetration rates of target PAH from the MGP tar-contaminated soils/soil extracts were determined experimentally through human skin using ³H-benzo(a)pyrene (BaP) as a surrogate. Results from three MGP sites showed reductions of 2-3 orders of magnitude in PAH absorption through human skin from the most contaminated soils in comparison to the soil extracts. Reduction in PAH penetration can be attributed to PAH concentration and (soil) matrix properties. PAH dermal flux values are used to determine site-specific dermally absorbed dose (DAD) and chronic daily intake (CDI), which are essential terms required to estimate risk associated with human exposure to MGP tar and MGP tar-contaminated soils. 21 refs., 4 figs., 3 tabs

  20. Identifying patient preferences for communicating risk estimates: A descriptive pilot study

    Directory of Open Access Journals (Sweden)

    O'Connor Annette M

    2001-08-01

    Full Text Available Abstract Background Patients increasingly seek more active involvement in health care decisions, but little is known about how to communicate complex risk information to patients. The objective of this study was to elicit patient preferences for the presentation and framing of complex risk information. Method To accomplish this, eight focus group discussions and 15 one-on-one interviews were conducted, where women were presented with risk data in a variety of different graphical formats, metrics, and time horizons. Risk data were based on a hypothetical woman's risk for coronary heart disease, hip fracture, and breast cancer, with and without hormone replacement therapy. Participants' preferences were assessed using Likert scales, ranking, and abstractions of focus group discussions. Results Forty peri- and postmenopausal women were recruited through hospital fliers (n = 25) and a community health fair (n = 15). Mean age was 51 years, 50% were non-Caucasian, and all had completed high school. Bar graphs were preferred by 83% of participants over line graphs, thermometer graphs, 100 representative faces, and survival curves. Lifetime risk estimates were preferred over 10- or 20-year horizons, and absolute risks were preferred over relative risks and number needed to treat. Conclusion Although there are many different formats for presenting and framing risk information, simple bar charts depicting absolute lifetime risk were rated and ranked highest overall for patient preferences for format.

  1. Statistical analysis of maximum likelihood estimator images of human brain FDG PET studies

    International Nuclear Information System (INIS)

    Llacer, J.; Veklerov, E.; Hoffman, E.J.; Nunez, J.; Coakley, K.J.

    1993-01-01

    The work presented in this paper evaluates the statistical characteristics of regional bias and expected error in reconstructions of real PET data of human brain fluorodeoxyglucose (FDG) studies carried out by the maximum likelihood estimator (MLE) method with a robust stopping rule, and compares them with the results of filtered backprojection (FBP) reconstructions and with the method of sieves. The task that the authors have investigated is that of quantifying radioisotope uptake in regions-of-interest (ROIs). They first describe a robust methodology for the use of the MLE method with clinical data which contains only one adjustable parameter: the kernel size for a Gaussian filtering operation that determines final resolution and expected regional error. Simulation results are used to establish the fundamental characteristics of the reconstructions obtained by our methodology, corresponding to the case in which the transition matrix is perfectly known. Then, data from 72 independent human brain FDG scans from four patients are used to show that the results obtained from real data are consistent with the simulation, although the quality of the data and of the transition matrix have an effect on the final outcome
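    The MLE reconstruction referenced above is typically implemented as the MLEM (maximum likelihood expectation maximization) update, which multiplies each pixel estimate by a backprojected ratio of measured to predicted counts. A toy sketch; the 2-pixel, 2-detector system matrix below is made up for illustration:

```python
def mlem_step(lam, A, y):
    """One MLEM update: lam[j] *= (sum_i A[i][j]*y[i]/ybar[i]) / sum_i A[i][j],
    where ybar[i] is the forward-projected estimate for detector i."""
    ybar = [sum(A[i][j] * lam[j] for j in range(len(lam)))
            for i in range(len(y))]
    out = []
    for j in range(len(lam)):
        num = sum(A[i][j] * y[i] / ybar[i] for i in range(len(y)))
        den = sum(A[i][j] for i in range(len(y)))
        out.append(lam[j] * num / den)
    return out

A = [[1.0, 0.0], [0.0, 1.0]]   # trivial geometry: each detector sees one pixel
y = [8.0, 2.0]                 # measured counts
lam = [1.0, 1.0]               # uniform initial image
for _ in range(5):
    lam = mlem_step(lam, A, y)
print(lam)  # converges to [8.0, 2.0] for this trivial geometry
```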

  2. Wind turbine power coefficient estimation by soft computing methodologies: Comparative study

    International Nuclear Information System (INIS)

    Shamshirband, Shahaboddin; Petković, Dalibor; Saboohi, Hadi; Anuar, Nor Badrul; Inayat, Irum; Akib, Shatirah; Ćojbašić, Žarko; Nikolić, Vlastimir; Mat Kiah, Miss Laiha; Gani, Abdullah

    2014-01-01

    Highlights: • Variable speed operation of wind turbine to increase power generation. • Changeability and fluctuation of wind has to be accounted for. • To build an effective prediction model of wind turbine power coefficient. • The impact of the variation in the blade pitch angle and tip speed ratio. • Support vector regression methodology application as predictive methodology. - Abstract: Wind energy has become a strong competitor to traditional fossil fuel energy, particularly with the successful operation of multi-megawatt wind turbines. However, wind speeds are not sustainably high enough everywhere to build an economical wind farm. In wind energy conversion systems, one of the operational problems is the changeability and fluctuation of wind: in most cases, wind speed can vacillate rapidly. Hence, the quality of produced energy becomes an important problem in wind energy conversion plants. Several control techniques have been applied to improve the quality of power generated from wind turbines. In this study, the polynomial and radial basis function (RBF) kernels are applied in support vector regression (SVR) to estimate the optimal power coefficient value of wind turbines. Instead of minimizing the observed training error, SVR_poly and SVR_rbf attempt to minimize the generalization error bound so as to achieve generalized performance. The experimental results show that the SVR approach achieves an improvement in predictive accuracy and capability of generalization compared with other soft computing methodologies.
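    A minimal sketch of the kernel comparison, assuming a common textbook approximation of Cp(tip speed ratio, pitch angle) as stand-in training data; the paper's measured data and hyperparameter tuning are not reproduced.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic power-coefficient surface Cp(lambda, beta): a common textbook
# approximation, used here only as stand-in training data.
def cp(lmbda, beta):
    li = 1.0 / (1.0 / (lmbda + 0.08 * beta) - 0.035 / (beta**3 + 1.0))
    return 0.5176 * (116.0 / li - 0.4 * beta - 5.0) * np.exp(-21.0 / li) + 0.0068 * lmbda

rng = np.random.default_rng(42)
lam = rng.uniform(2.0, 13.0, 400)        # tip speed ratio
beta = rng.uniform(0.0, 20.0, 400)       # blade pitch angle (deg)
X = np.column_stack([lam, beta])
y = cp(lam, beta)

models = {
    "poly": make_pipeline(StandardScaler(), SVR(kernel="poly", degree=3, C=10.0, epsilon=0.01)),
    "rbf":  make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, gamma="scale", epsilon=0.01)),
}
for name, model in models.items():
    model.fit(X[:300], y[:300])
    score = model.score(X[300:], y[300:])   # R^2 on held-out points
    print(f"SVR_{name}: R^2 = {score:.3f}")
```

    Scaling the two inputs before fitting matters for both kernels; the held-out R^2 gives a rough proxy for the generalization behaviour the abstract emphasizes.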

  3. Estimation of photovoltaic systems for rural development: a case study of village near Quetta

    International Nuclear Information System (INIS)

    Nasir, S.M.; Raza, S.M.; Hyder, S.

    1994-01-01

    The energy needs of Pakistan are rising in all sectors, whereas supply remains heavily dependent on oil and on electricity generation from WAPDA (Water and Power Development Authority). It is estimated that more than 100 villages have populations above 500. These villages are remotely scattered, far from the industrial belt, and are unlikely to be electrified in the near future; it is also difficult to reach them with fossil fuels, particularly oil, in the quantities they need. In this context, solar and wind power may offer a viable solution. Increasing human and financial commitment would enable these communities to take advantage of this new technology, but the demand and supply infrastructure as well as the economic, social, and cultural set-up of the rural communities where a large number of people live must be considered. A case study was made of a village (Killi Paind Khan) near Quetta. The results show that a photovoltaic power system (PPS) is quite attractive for a small community for lighting and other household appliances. (author)

  4. USING THE FOURNIER INDEXES IN ESTIMATING RAINFALL EROSIVITY. CASE STUDY - THE SECAŞUL MARE BASIN

    Directory of Open Access Journals (Sweden)

    M. COSTEA

    2012-03-01

    Full Text Available Using the Fournier Index in Estimating Rainfall Erosivity. Case Study - The Secaşul Mare Basin. Climatic aggressiveness is one of the most important factors in relief dynamics. Of all climatic parameters, rainfall is most directly involved in slope dynamics and in the loss of soil quality, through pluvial denudation and the processes associated with it and through the erosivity of torrential rain. We analyzed rainfall aggressiveness based on monthly and annual average values through Fournier's index (1970) and Fournier's index as modified by Arnoldus (1980). These indexes have the advantage that they can be used not only for evaluating land susceptibility to erosion and calculating land erodibility and soil losses, but also for assessing land susceptibility to sliding (Aghiruş, 2010). The literature illustrates the successful use of this index, which provides a summary assessment of the probability of rainfall with significant erosive effects. The results obtained allow observation of differences in the spatial and temporal distribution of this index.
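    Both indexes reduce to simple arithmetic on monthly precipitation totals; the values below are hypothetical, purely for illustration.

```python
# Monthly precipitation (mm); hypothetical values for illustration only.
monthly = [28, 31, 35, 52, 68, 80, 71, 55, 40, 38, 33, 30]

annual = sum(monthly)

# Fournier index: square of the wettest month's total over the annual total.
fournier = max(monthly) ** 2 / annual

# Modified Fournier Index (Arnoldus, 1980): the sum runs over all 12 months,
# so it also credits erosivity outside the single wettest month.
mfi = sum(p ** 2 / annual for p in monthly)

print(f"Fournier index: {fournier:.1f}")          # ≈ 11.4 for these values
print(f"Modified Fournier Index: {mfi:.1f}")      # ≈ 53.1 for these values
```

    The MFI is always at least as large as the single-month Fournier term, which is one reason Arnoldus's form is preferred for year-round erosivity assessment.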

  5. Estimation of tritium radiotracer activity for interconnection study in geothermal field

    International Nuclear Information System (INIS)

    Rasi Prasetio; Satrio

    2016-01-01

    Tritium radiotracer (3H) has been applied widely in many geothermal fields around the world. The application is done by injecting radiotracer of a certain activity into a reinjection well in order to investigate interconnection between the reinjection well and surrounding production wells. The activity of injected radiotracer must match the field conditions and the volume of the reservoir, the detection limit of the instrument, as well as the safety of workers and the environment from radioactive hazard. The planning of the injection process must consider the maximum permissible concentration (MPC) and the minimum detection limit (MDL). Based on calculation, tritium radiotracer injection in the Kamojang geothermal field can be done with a minimum activity of 0.15 Ci and a maximum of 22100 Ci, while in the Lahendong field the minimum activity is 0.65 Ci and the maximum 7230 Ci. In these two injection studies, tritium was detected in monitoring wells between the MDL and MPC limits. By using this estimation calculation, the activity of tritium released into the environment stays within safety limits, and monitoring wells with undetectable tritium imply no connectivity between those wells and the reinjection well. (author)
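    The MDL/MPC bracketing can be sketched as below. Every figure (reservoir volume, detection limit, regulatory limit, dilution factor) is an assumed placeholder, not a value from the Kamojang or Lahendong calculations, and the single dilution factor is a crude stand-in for a real dispersion model.

```python
# Hypothetical figures for illustration only: actual planning uses
# site-specific reservoir volume plus dilution and dispersion models.
reservoir_volume_L = 5.0e10      # swept reservoir water volume (assumed)
mdl_Ci_per_L = 3.0e-12           # liquid-scintillation detection limit (assumed)
mpc_Ci_per_L = 1.0e-6            # regulatory concentration limit (assumed)

# Minimum activity: even after full dilution across the reservoir, the
# tracer concentration must still reach the instrument's MDL.
a_min_Ci = mdl_Ci_per_L * reservoir_volume_L

# Maximum activity: the worst-case (least diluted) breakthrough into a
# monitoring well must stay below the MPC.
worst_case_dilution = 2.0e4      # assumed minimum dilution before breakthrough
a_max_Ci = mpc_Ci_per_L * reservoir_volume_L / worst_case_dilution

print(f"inject between {a_min_Ci:.2f} and {a_max_Ci:.1f} Ci")
```

    Any injected activity between the two bounds is detectable where connectivity exists yet stays within the safety limit, which is exactly the window the abstract reports for each field.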

  6. Estimating Long-Term Care Costs among Thai Elderly: A Phichit Province Case Study

    Directory of Open Access Journals (Sweden)

    Pattaraporn Khongboon

    2018-01-01

    Full Text Available Background. Rural-urban inequality in long-term care (LTC services has been increasing alongside rapid socioeconomic development. This study estimates the average spending on LTC services and identifies the factors that influence the use and cost of LTC for the elderly living in urban and rural areas of Thailand. Methods. The sample comprised 837 elderly aged 60 years drawn from rural and urban areas in Phichit Province. Costs were assessed over a 1-month period. Direct costs of caregiving and indirect costs (opportunity cost method were analyzed. Binary logistic regression was performed to determine which factors affected LTC costs. Results. The total annual LTC spending for rural and urban residents was on average USD 7,285 and USD 7,280.6, respectively. Formal care and informal care comprise the largest share of payments. There was a significant association between rural residents and costs for informal care, day/night care, and home renovation. Conclusions. Even though total LTC expenditures do not seem to vary significantly across rural and urban areas, the fundamental differences between areas need to be recognized. Reorganizing country delivery systems and finding a balance between formal and informal care are alternative solutions.

  7. Nordic working group on CCF studies. Parameter estimation within the activities of the Nordic CCF Group

    International Nuclear Information System (INIS)

    Johansson, G.

    2002-01-01

    This is a presentation of a project programme for assessment of CCF events and adoption of international data derived in the ICDE project to conditions in Sweden and Finland. The overall objectives of the working group are to: - Support safety by studying potential and real CCF events and report conclusions and recommendations that can improve the understanding of these events, eventually resulting in increased safety; - Provide results intended for application in NPP operation, maintenance, inspection and risk assessments. The work is divided into one quantitative and one qualitative part with the following specific objectives. Qualitative objectives: Compile experience data and generate insights in terms of relevant failure mechanisms and effective CCF protection measures. The results shall be presented as a guide with checklists and recommendations on how to identify the current CCF protection standard and improvement possibilities regarding CCF defenses, decreasing CCF vulnerability. Quantitative objectives: Prepare a Nordic C-book where quantitative insights such as impact vectors and CCF parameters for different redundancy levels are presented. Uncertainties in CCF data shall be reduced as much as possible. The sensitivity of high-redundancy systems to CCF events demands a well-structured quantitative analysis in support of the best possible and realistic CCF parameter estimates, if possible plant specific. Model survey and review: This survey shall examine available models and their applicability for use on the data. Several models exist and are used in the Nordic PSAs. Data survey and review: This survey shall examine available data sources and their applicability. The survey shall review ICDE and other sources and provide a background for the decision on what data are to be used. A possible outcome is of course that the ICDE data are shown to cover all other sources, but the ICDE data may have to be combined with some other source.
The situation also differs

  8. Estimation of the isothermal compressibility from event-by-event multiplicity fluctuation studies

    Directory of Open Access Journals (Sweden)

    Mukherjee Maitreyee

    2018-01-01

    Full Text Available The first estimation of the isothermal compressibility (kT) of matter is presented for a wide range of collision energies from √sNN = 7.7 GeV to 2.76 TeV. kT is estimated with the help of event-by-event charged particle multiplicity fluctuations from experiment. Dynamical fluctuations are extracted by removing the statistical fluctuations obtained from the participant model. kT is also estimated from the event generators AMPT, UrQMD and EPOS and from a hadron resonance gas model. The values of isothermal compressibility are estimated for Large Hadron Collider (LHC) energies with the help of the event generators.

  9. Estimation of demand function on natural gas and study of demand analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Y.D. [Korea Energy Economics Institute, Euiwang (Korea, Republic of)

    1998-04-01

    The demand function for natural gas is estimated with several methods, and demand is analyzed per usage. Since demand for natural gas, a large share of which is for heating, has a close relationship with temperature, the inter-season trend of price and income elasticity is estimated considering temperature and economic conditions. The per-usage response of natural gas demand to changes in price and income is also estimated. It was estimated that the response of gas demand to changes in price and income occurs through the change in the number of users in the long term. In the case of the response of unit consumption, only industrial use shows a long-term response to price. Since the gas price barely responds to changes in the exchange rate, the price-making mechanism appears not to reflect import conditions such as the exchange rate in a timely manner. 16 refs., 12 figs., 13 tabs.
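    A log-log regression of the kind typically used for such elasticity estimates can be sketched on synthetic data; in this specification the slope coefficients read directly as price and income elasticities. The data and coefficients below are invented, not the institute's actual estimates.

```python
import numpy as np

# Hypothetical observations: log demand lnQ, log real price lnP, log income lnY.
# A log-log specification ln Q = a + e_p ln P + e_y ln Y makes the fitted
# coefficients e_p and e_y the price and income elasticities.
rng = np.random.default_rng(1)
n = 40
lnP = rng.normal(0.0, 0.3, n)
lnY = rng.normal(0.0, 0.4, n)
lnQ = 2.0 - 0.35 * lnP + 0.9 * lnY + rng.normal(0.0, 0.05, n)

X = np.column_stack([np.ones(n), lnP, lnY])
beta, *_ = np.linalg.lstsq(X, lnQ, rcond=None)
intercept, price_elasticity, income_elasticity = beta
print(f"price elasticity ~ {price_elasticity:.2f}, "
      f"income elasticity ~ {income_elasticity:.2f}")
```

    Temperature, as the abstract notes, would enter as an additional regressor (e.g. heating degree days) in the same linear system.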

  10. Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study

    Science.gov (United States)

    Troudi, Molka; Alimi, Adel M.; Saoudi, Samir

    2008-12-01

    The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon J(f), which is linked to the second-order derivative of the pdf. As we intend to introduce an analytical approximation of J(f), the pdf is estimated only once, at the end of the iterations. These two kinds of algorithm are tested on different random variables having distributions known to be difficult to estimate. Finally, they are applied to genetic data in order to provide a better characterisation of the neutrality of Tunisian Berber populations.

  11. Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study

    Directory of Open Access Journals (Sweden)

    Samir Saoudi

    2008-07-01

    Full Text Available The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon J(f), which is linked to the second-order derivative of the pdf. As we intend to introduce an analytical approximation of J(f), the pdf is estimated only once, at the end of the iterations. These two kinds of algorithm are tested on different random variables having distributions known to be difficult to estimate. Finally, they are applied to genetic data in order to provide a better characterisation of the neutrality of Tunisian Berber populations.
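    In the spirit of this method, J(f) = ∫ f''(t)² dt can be replaced by a closed form so the bandwidth comes out in one shot. The sketch below uses the normal-reference form J(f) = 3/(8√π σ⁵), which recovers Silverman's rule of thumb; it is only a stand-in for the paper's own analytical approximation of J(f).

```python
import numpy as np

def plugin_bandwidth(x):
    """One-shot plug-in bandwidth for a Gaussian-kernel KDE.

    J(f) is replaced by its closed form under a normal reference,
    J(f) = 3 / (8 sqrt(pi) sigma^5), so no iterative re-estimation of
    the pdf is needed (assumed stand-in for the analytical plug-in idea).
    """
    n = len(x)
    sigma = np.std(x, ddof=1)
    J = 3.0 / (8.0 * np.sqrt(np.pi) * sigma**5)
    RK = 1.0 / (2.0 * np.sqrt(np.pi))        # roughness of the Gaussian kernel
    return (RK / (n * J)) ** 0.2             # AMISE-optimal h

def kde(x, grid, h):
    """Gaussian-kernel density estimate evaluated on `grid`."""
    u = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(x) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
sample = rng.normal(0.0, 1.0, 500)
h = plugin_bandwidth(sample)
grid = np.linspace(-4, 4, 201)
density = kde(sample, grid, h)
```

    Algebraically, this choice of J(f) collapses the AMISE-optimal bandwidth to h = (4/(3n))^(1/5) σ, so the pdf itself is indeed only estimated once, at the end.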

  12. Option Price Estimates for Water Quality Improvements: A Contingent Valuation Study for the Monongahela River (1985)

    Science.gov (United States)

    This paper presents the findings from a contingent valuation survey designed to estimate the option price bids for the improved recreation resulting from enhanced water quality in the Pennsylvania portion of the Monongahela River.

  13. Comparison Study on the Battery SoC Estimation with EKF and UKF Algorithms

    Directory of Open Access Journals (Sweden)

    Hongwen He

    2013-09-01

    Full Text Available The battery state of charge (SoC), whose estimation is one of the basic functions of a battery management system (BMS), is a vital input parameter in the energy management and power distribution control of electric vehicles (EVs). In this paper, two methods, based on an extended Kalman filter (EKF) and an unscented Kalman filter (UKF) respectively, are proposed to estimate the SoC of a lithium-ion battery used in EVs. The lithium-ion battery is modeled with the Thevenin model, and the model parameters are identified based on experimental data and validated with the Beijing Driving Cycle. The state-space equations used for SoC estimation are then established. The SoC estimation results with the EKF and UKF are compared in terms of accuracy and convergence. It is concluded that the two algorithms both perform well, while the UKF algorithm performs better, with faster convergence and higher accuracy.
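    A compact EKF sketch on a one-RC Thevenin cell illustrates the estimation loop. All cell parameters, the linear OCV curve, and the noise levels are assumptions for illustration, not the paper's identified model; the state is [SoC, RC polarization voltage].

```python
import numpy as np

# One-RC Thevenin cell: illustrative parameters, not from the paper.
Q = 7200.0                         # capacity in ampere-seconds (~2 Ah)
R0, R1, tau = 0.01, 0.015, 30.0    # ohmic resistance, RC pair
dt = 1.0
a = np.exp(-dt / tau)

def ocv(soc):                      # assumed (linear) open-circuit-voltage curve
    return 3.4 + 0.7 * soc
def docv(soc):                     # its slope, used for EKF linearization
    return 0.7

F = np.array([[1.0, 0.0], [0.0, a]])   # state transition Jacobian
Qn = np.diag([1e-10, 1e-6])            # process noise covariance
Rn = 1e-4                              # measurement noise variance (V^2)

rng = np.random.default_rng(3)
steps = 1200
x_true = np.array([0.9, 0.0])
x_est = np.array([0.5, 0.0])           # deliberately wrong initial SoC
P = np.diag([0.1, 1e-3])
current = 1.0                          # constant 1 A discharge

for _ in range(steps):
    # truth propagation and a noisy terminal-voltage measurement
    x_true = np.array([x_true[0] - current * dt / Q,
                       a * x_true[1] + R1 * (1 - a) * current])
    v_meas = ocv(x_true[0]) - x_true[1] - R0 * current + rng.normal(0, 0.003)

    # EKF predict
    x_est = np.array([x_est[0] - current * dt / Q,
                      a * x_est[1] + R1 * (1 - a) * current])
    P = F @ P @ F.T + Qn

    # EKF update (linearize the voltage equation around the predicted SoC)
    H = np.array([[docv(x_est[0]), -1.0]])
    v_pred = ocv(x_est[0]) - x_est[1] - R0 * current
    S = H @ P @ H.T + Rn
    K = (P @ H.T) / S
    x_est = x_est + (K * (v_meas - v_pred)).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"true SoC {x_true[0]:.3f}, estimated {x_est[0]:.3f}")
```

    The UKF replaces the Jacobian-based predict/update with sigma-point propagation through the same model; with a nonlinear OCV curve that difference is what drives the accuracy gap the abstract reports.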

  14. Identifying grain-size dependent errors on global forest area estimates and carbon studies

    Science.gov (United States)

    Daolan Zheng; Linda S. Heath; Mark J. Ducey

    2008-01-01

    Satellite-derived coarse-resolution data are typically used for conducting global analyses. But the forest areas estimated from coarse-resolution maps (e.g., 1 km) inevitably differ from a corresponding fine-resolution map (such as a 30-m map) that would be closer to ground truth. A better understanding of the effect of grain size on area estimation will improve our...

  15. A Study of Ship Acquisition Cost Estimating in the Naval Sea Systems Command. Appendices

    Science.gov (United States)

    1977-10-01

    acquisition process, the recommendations are linked to form a structure that is applicable for acquisition progress of all agencies...and impact on cost. CAIG considers GFM estimates to be the weakest link in the estimating process and suggests making it mandatory that the PARMs...torpedo and missile orders; and for providing display data to fire control systems and tactical data system operators. The AN/UYK-7 is installed

  16. Expanding Reliability Generalization Methods with KR-21 Estimates: An RG Study of the Coopersmith Self-Esteem Inventory.

    Science.gov (United States)

    Lane, Ginny G.; White, Amy E.; Henson, Robin K.

    2002-01-01

    Conducted a reliability generalization study on the Coopersmith Self-Esteem Inventory (CSEI; S. Coopersmith, 1967) to examine the variability of reliability estimates across studies and to identify study characteristics that may predict this variability. Results show that reliability for CSEI scores can vary considerably, especially at the…
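    KR-21 needs only the item count, mean, and variance of total scores, which is what makes it computable for studies that never reported item-level data; the summary values below are hypothetical.

```python
def kr21(k, mean, variance):
    """Kuder-Richardson 21 reliability estimate from summary statistics.

    k        : number of dichotomously scored items
    mean     : mean total score
    variance : variance of total scores
    """
    return (k / (k - 1)) * (1 - mean * (k - mean) / (k * variance))

# e.g. a 50-item scale with mean 38 and total-score variance 90 (hypothetical)
print(round(kr21(50, 38.0, 90.0), 3))   # prints 0.917
```

    Because KR-21 assumes equal item difficulties, it typically lower-bounds KR-20, a caveat any RG synthesis mixing the two estimators has to carry.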

  17. Comparative Study Between Internal Ohmic Resistance and Capacity for Battery State of Health Estimation

    Directory of Open Access Journals (Sweden)

    M. Nisvo Ramadan

    2015-12-01

    Full Text Available In order to avoid battery failure, a battery management system (BMS) is necessary. Battery state of charge (SOC) and state of health (SOH) are part of the information provided by a BMS. This research analyzes methods to estimate the SOH of a lithium polymer battery based on changes in its internal resistance and its capacity. A recursive least squares (RLS) algorithm was used to estimate internal ohmic resistance, while coulomb counting was used to track the change in battery capacity. For the estimation algorithms, the battery terminal voltage and current are the input variables. Tests including a static capacity test, pulse test, pulse variation test and before charge-discharge test were conducted to obtain the required data. After comparing the two methods, the obtained results show that SOH estimation based on coulomb counting provides better accuracy than SOH estimation based on internal ohmic resistance. However, SOH estimation based on internal ohmic resistance is faster and more reliable for real applications.
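    The capacity-based SOH estimate amounts to coulomb counting over one complete discharge and comparing the delivered charge with the nominal capacity; the discharge log below is hypothetical.

```python
import numpy as np

def soh_from_capacity(current_A, dt_s, nominal_capacity_Ah):
    """SOH via coulomb counting over one full discharge test.

    Integrates the measured current across a complete discharge and
    compares the delivered charge with the cell's nominal capacity.
    """
    delivered_Ah = np.sum(np.asarray(current_A) * dt_s) / 3600.0
    return delivered_Ah / nominal_capacity_Ah

# Hypothetical full-discharge log: 2.0 A held for 80 min, sampled once per
# second, against a 3.2 Ah nominal capacity.
current = np.full(80 * 60, 2.0)
soh = soh_from_capacity(current, 1.0, 3.2)
print(f"SOH = {soh:.2%}")   # prints SOH = 83.33%
```

    The resistance-based alternative instead tracks the growth of the RLS-estimated ohmic resistance against its beginning-of-life value; it avoids a full discharge, which is why the abstract calls it faster for real applications.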

  18. Estimated percentage of typhoid fever in adult pakistani population (TAP) study

    International Nuclear Information System (INIS)

    Mehboob, F.; Arshad, A.; Firdous, S.; Ahmed, S.; Rehma, S.

    2013-01-01

    Typhoid fever is a serious infection with high morbidity and mortality in untreated cases. It is a very common infection in developing countries due to various factors involving hygiene and sanitation. Objective: To determine the estimated percentage of typhoid fever in the Pakistani population and to find the antibiotics commonly prescribed for the disease. Material and Methods: This cross-sectional study was conducted on 1036 patients, selected from forty-five general practitioner clinics, between June and October 2010. Patients of > 18 years of age with > 3 days history of fever (> 100 degree F) and a high index of suspicion for typhoid fever were tested using Typhidot kits, and positive cases were recruited for monitoring response to treatment. Febrile patients with a clear-cut history of urinary or respiratory infection, hypovolemic shock or hepatobiliary disease were excluded and not tested by Typhidot kit. The antibiotics prescribed to the study population by various general practitioners were noted. Data were analysed with SPSS; results are expressed in percentages and proportions. Results: The Typhidot test was negative in 63.9% and positive in 36.1% of patients, with the highest percentages of positive cases in Karachi, Rawalpindi and Hyderabad. The maximum number of cases was reported in the summer season, especially from June to August. Most of the patients were between the ages of 19 and 39 years. The commonest antibiotics prescribed were Ofloxacin, Ciprofloxacin and Levofloxacin. Conclusion: Typhoid fever is a very common infection in Pakistan, caused by Salmonella typhi, which is transmitted among humans through the faeco-oral route. The disease can be controlled not only by antibiotics such as fluoroquinolones but also by patient education, improvement in hygiene and sanitation, safe supply of clean drinking water and prophylactic vaccination; however, timely diagnosis and appropriate management with proper antibiotics is the key.

  19. Estimated percentage of typhoid fever in adult pakistani population (TAP) study

    Energy Technology Data Exchange (ETDEWEB)

    Mehboob, F.; Arshad, A.; Firdous, S.; Ahmed, S.; Rehma, S. [Mayo Hospital, Lahore (Pakistan). Dept. of Medicine

    2013-01-15

    Typhoid fever is a serious infection with high morbidity and mortality in untreated cases. It is a very common infection in developing countries due to various factors involving hygiene and sanitation. Objective: To determine the estimated percentage of typhoid fever in the Pakistani population and to find the antibiotics commonly prescribed for the disease. Material and Methods: This cross-sectional study was conducted on 1036 patients, selected from forty-five general practitioner clinics, between June and October 2010. Patients of > 18 years of age with > 3 days history of fever (> 100 degree F) and a high index of suspicion for typhoid fever were tested using Typhidot kits, and positive cases were recruited for monitoring response to treatment. Febrile patients with a clear-cut history of urinary or respiratory infection, hypovolemic shock or hepatobiliary disease were excluded and not tested by Typhidot kit. The antibiotics prescribed to the study population by various general practitioners were noted. Data were analysed with SPSS; results are expressed in percentages and proportions. Results: The Typhidot test was negative in 63.9% and positive in 36.1% of patients, with the highest percentages of positive cases in Karachi, Rawalpindi and Hyderabad. The maximum number of cases was reported in the summer season, especially from June to August. Most of the patients were between the ages of 19 and 39 years. The commonest antibiotics prescribed were Ofloxacin, Ciprofloxacin and Levofloxacin. Conclusion: Typhoid fever is a very common infection in Pakistan, caused by Salmonella typhi, which is transmitted among humans through the faeco-oral route. The disease can be controlled not only by antibiotics such as fluoroquinolones but also by patient education, improvement in hygiene and sanitation, safe supply of clean drinking water and prophylactic vaccination; however, timely diagnosis and appropriate management with proper antibiotics is the key.

  20. Estimation of breathing rate in thermal imaging videos: a pilot study on healthy human subjects.

    Science.gov (United States)

    Barbosa Pereira, Carina; Yu, Xinchi; Czaplik, Michael; Blazek, Vladimir; Venema, Boudewijn; Leonhardt, Steffen

    2017-12-01

    Diverse studies have demonstrated the importance of monitoring breathing rate (BR). Commonly, changes in BR are among the earliest and major markers of serious complications/illness. However, it is frequently neglected due to limitations of clinically established measurement techniques, which require the attachment of sensors. The use of adhesive pads or thoracic belts in preterm infants as well as in traumatized or burned patients is an additional paramount issue. The present paper proposes a new robust approach, based on data fusion, to remotely monitor BR using infrared thermography (IRT). The algorithm considers not only temperature modulation around the mouth and nostrils but also the movements of both shoulders. The data of these four sensors/regions of interest are fused to reach improved accuracy. To investigate the performance of our approach, two different experiments (phase A: normal breathing; phase B: simulation of breathing disorders) were performed on twelve healthy volunteers. Thoracic effort (piezoplethysmography) was simultaneously acquired to validate our results. Excellent agreement between BR estimated with IRT and the gold standard was achieved. While in phase A a mean correlation of 0.98 and a root-mean-square error (RMSE) of 0.28 bpm were reached, in phase B the mean correlation and the RMSE hovered around 0.95 and 3.45 bpm, respectively. The higher RMSE in phase B results predominantly from delays between IRT and the gold standard at BR transitions: eupnea/apnea, apnea/tachypnea, etc. Moreover, this study also demonstrates the capability of IRT to capture varied breathing disorders and, consequently, to assess respiratory function. In summary, IRT might be a promising monitoring alternative to conventional contact-based techniques regarding its performance and remarkable capabilities.
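    The fusion idea can be sketched as follows. Signal extraction from the thermal frames is omitted; synthetic 15-bpm oscillations with differing noise levels stand in for the four regions of interest, and median fusion is an assumed simple choice, not necessarily the paper's fusion rule.

```python
import numpy as np

# Synthetic stand-ins for the four ROI signals (nostrils, mouth, two
# shoulders): a 0.25 Hz (15 bpm) oscillation with varying noise levels.
fs = 10.0                                    # frames per second
t = np.arange(0, 60, 1 / fs)                 # 60 s analysis window
rng = np.random.default_rng(7)
signals = [np.sin(2 * np.pi * 0.25 * t) + rng.normal(0, s, t.size)
           for s in (0.2, 0.3, 0.5, 0.8)]    # four ROIs of varying quality

def breathing_rate_bpm(sig, fs):
    """BR as the dominant spectral peak within a plausible breathing band."""
    freqs = np.fft.rfftfreq(sig.size, 1 / fs)
    power = np.abs(np.fft.rfft(sig - sig.mean())) ** 2
    band = (freqs >= 0.1) & (freqs <= 1.0)   # 6-60 bpm
    return 60.0 * freqs[band][np.argmax(power[band])]

estimates = [breathing_rate_bpm(s, fs) for s in signals]
fused = np.median(estimates)                 # simple, outlier-robust fusion
print(f"per-ROI estimates: {estimates}, fused: {fused:.1f} bpm")
```

    The median makes the fused estimate robust to one ROI being corrupted (e.g. a shoulder occluded by a blanket), which is the practical motivation for combining several regions in the first place.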

  1. Study on Hyperspectral Characteristics and Estimation Model of Soil Mercury Content

    Science.gov (United States)

    Liu, Jinbao; Dong, Zhenyu; Sun, Zenghui; Ma, Hongchao; Shi, Lei

    2017-12-01

    In this study, the mercury content of 44 soil samples from the Guan Zhong area of Shaanxi Province was used as the data source, and the reflectance spectrum of the soil was obtained with an ASD FieldSpec HR (350-2500 nm), comparing the reflection characteristics at different mercury contents and the effect of different pre-treatment methods on the soil heavy metal spectral inversion model. First-order differential, second-order differential and reflectance logarithmic transformations were carried out after pre-treatment with NOR, MSC and SNV, and the bands sensitive to mercury content under the different mathematical transformations were selected. A hyperspectral estimation model was then established by regression. The results of chemical analysis show that there is serious Hg pollution in the study area. The results show that: (1) reflectivity decreases with increasing mercury content, and the sensitive regions for mercury are located at 392~455 nm, 923~1040 nm and 1806~1969 nm; (2) the NOR, MSC and SNV transformations combined with differential transformations can enhance the heavy metal information in the soil spectra, and combining highly correlated bands can improve the stability and prediction ability of the model; (3) the partial least squares regression model based on the logarithm of the original reflectance performs best, with higher precision (Rc2 = 0.9912, RMSEC = 0.665; Rv2 = 0.9506, RMSEP = 1.93), enabling quick prediction of the mercury content in this region.

  2. Estimation of retired mobile phones generation in China: A comparative study on methodology

    Energy Technology Data Exchange (ETDEWEB)

    Li, Bo [State Key Laboratory of Urban and Regional Ecology, Research Center for Eco-Environmental Sciences, Chinese Academy of Sciences, Shuangqing Road 18, Haidian District, Beijing 100085 (China); Yang, Jianxin, E-mail: yangjx@rcees.ac.cn [State Key Laboratory of Urban and Regional Ecology, Research Center for Eco-Environmental Sciences, Chinese Academy of Sciences, Shuangqing Road 18, Haidian District, Beijing 100085 (China); Lu, Bin [State Key Laboratory of Urban and Regional Ecology, Research Center for Eco-Environmental Sciences, Chinese Academy of Sciences, Shuangqing Road 18, Haidian District, Beijing 100085 (China); Song, Xiaolong [Shanghai Cooperative Centre for WEEE Recycling, Shanghai Second Polytechnic University, Jinhai Road 2360, Pudong District, Shanghai 201209 (China)

    2015-01-15

    Highlights: • The sales data of mobile phones in China were revised by considering the amount of smuggled and counterfeit mobile phones. • The estimation of retired mobile phones in China was made by comparing some relevant methods. • The improved estimation result can help improve policy-making. • The method suggested in this paper can also be used in other countries. • Some discussions on methodology are also conducted with a view to improvement. - Abstract: Due to the rapid development of its economy and technology, China has the biggest production and possession of mobile phones in the world. In general, mobile phones have relatively short lifetimes because the majority of users replace their mobile phones frequently. Retired mobile phones represent the most valuable electrical and electronic equipment (EEE) in the main waste stream because of such characteristics as large quantity, high reuse/recovery value and fast replacement frequency. Consequently, the huge amount of retired mobile phones in China calls for a sustainable management system. Generation estimates can provide fundamental information for constructing a sustainable management system for retired mobile phones and other waste electrical and electronic equipment (WEEE). However, reliable estimates are difficult to obtain and verify. The primary aim of this paper is to identify a proper approach for estimating the generation of retired mobile phones in China by comparing some relevant methods. The results show that the sales-and-new method is the preferred approach for estimating retired mobile phones. This method indicates that 47.92 million mobile phones were retired in 2002, rising to 739.98 million in China in 2012, a clearly increasing tendency with some fluctuations. Furthermore, some discussions on methodology, such as the selection of an improper approach and errors in the input data, are also conducted in order to

  3. Estimation of retired mobile phones generation in China: A comparative study on methodology

    International Nuclear Information System (INIS)

    Li, Bo; Yang, Jianxin; Lu, Bin; Song, Xiaolong

    2015-01-01

    Highlights: • The sales data of mobile phones in China were revised by considering the amount of smuggled and counterfeit mobile phones. • The estimation of retired mobile phones in China was made by comparing some relevant methods. • The improved estimation result can help improve policy-making. • The method suggested in this paper can also be used in other countries. • Some discussions on methodology are also conducted with a view to improvement. - Abstract: Due to the rapid development of its economy and technology, China has the biggest production and possession of mobile phones in the world. In general, mobile phones have relatively short lifetimes because the majority of users replace their mobile phones frequently. Retired mobile phones represent the most valuable electrical and electronic equipment (EEE) in the main waste stream because of such characteristics as large quantity, high reuse/recovery value and fast replacement frequency. Consequently, the huge amount of retired mobile phones in China calls for a sustainable management system. Generation estimates can provide fundamental information for constructing a sustainable management system for retired mobile phones and other waste electrical and electronic equipment (WEEE). However, reliable estimates are difficult to obtain and verify. The primary aim of this paper is to identify a proper approach for estimating the generation of retired mobile phones in China by comparing some relevant methods. The results show that the sales-and-new method is the preferred approach for estimating retired mobile phones. This method indicates that 47.92 million mobile phones were retired in 2002, rising to 739.98 million in China in 2012, a clearly increasing tendency with some fluctuations. Furthermore, some discussions on methodology, such as the selection of an improper approach and errors in the input data, are also conducted in order to

  4. Comparison of two control groups for estimation of oral cholera vaccine effectiveness using a case-control study design.

    Science.gov (United States)

    Franke, Molly F; Jerome, J Gregory; Matias, Wilfredo R; Ternier, Ralph; Hilaire, Isabelle J; Harris, Jason B; Ivers, Louise C

    2017-10-13

    Case-control studies to quantify oral cholera vaccine effectiveness (VE) often rely on neighbors without diarrhea as community controls. Test-negative controls can be easily recruited and may minimize bias due to differential health-seeking behavior and recall. We compared VE estimates derived from community and test-negative controls and conducted bias-indicator analyses to assess potential bias with community controls. From October 2012 through November 2016, patients with acute watery diarrhea were recruited from cholera treatment centers in rural Haiti. Cholera cases had a positive stool culture. Non-cholera diarrhea cases (test-negative controls and non-cholera diarrhea cases for bias-indicator analyses) had a negative culture and rapid test. Up to four community controls were matched to diarrhea cases by age group, time, and neighborhood. Primary analyses included 181 cholera cases, 157 non-cholera diarrhea cases, 716 VE community controls and 625 bias-indicator community controls. VE for self-reported vaccination with two doses was consistent across the two control groups, with statistically significant VE estimates ranging from 72 to 74%. Sensitivity analyses revealed similar, though somewhat attenuated, estimates for self-reported two-dose VE. Bias-indicator estimates were consistently less than one, with VE estimates ranging from 19 to 43%, some of which were statistically significant. Oral cholera vaccine (OCV) VE estimates from case-control analyses using community and test-negative controls were similar. While bias-indicator analyses suggested possible over-estimation of VE estimates using community controls, test-negative analyses suggested this bias, if present, was minimal. Test-negative controls can be a valid low-cost and time-efficient alternative to community controls for OCV effectiveness estimation and may be especially relevant in emergency situations.
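    For an unmatched 2×2 sketch, VE follows from the odds ratio as VE = (1 − OR) × 100. The counts below are invented, and the study itself used matched analyses (e.g. conditional logistic regression), which this simplification does not reproduce.

```python
# Vaccine effectiveness from a case-control design: VE = (1 - OR) x 100,
# where OR compares the odds of vaccination in cases versus controls.
# All counts below are made up for illustration, not the study's data.
def vaccine_effectiveness(cases_vacc, cases_unvacc, ctrl_vacc, ctrl_unvacc):
    odds_ratio = (cases_vacc / cases_unvacc) / (ctrl_vacc / ctrl_unvacc)
    return 100.0 * (1.0 - odds_ratio)

# e.g. 40 of 181 cases vaccinated vs 365 of 716 controls (hypothetical split)
ve = vaccine_effectiveness(40, 141, 365, 351)
print(f"VE = {ve:.0f}%")   # prints VE = 73%
```

    Swapping in test-negative controls only changes which rows feed the control column, which is what makes the two designs directly comparable in the analysis above.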

  5. Low-level inhaled-239PuO2 life-span studies in rats

    International Nuclear Information System (INIS)

    Sanders, C.L.; McDonald, K.E.; Killand, B.W.; Mahaffey, J.A.; Cannon, W.C.

    1986-01-01

    This study determined the dose-response curve for lung tumor incidence in rats after inhalation of high-fired ²³⁹PuO₂, which gave radiation doses to the lung ranging from ∼5 to >1000 rads. Exposed rats were given a single, nose-only inhalation exposure to a ¹⁶⁹Yb-²³⁹PuO₂ aerosol (AMAD, 1.6 ± 0.11 μm). The effective half-time for ¹⁶⁹Yb in the lung was 14 days, whereas ∼76% of the ²³⁹Pu was cleared with a half-time of 20 days and 24% with a half-time of 180 days. Whole-body counting for ¹⁶⁹Yb at 14 days after exposure was an accurate method for determining the ²³⁹Pu IAD in individual rats, even at IADs as low as 0.60 nCi of ²³⁹Pu. The ²³⁹Pu lung-clearance curve and an equation describing changes in lung weight with body weight and age were used to determine lung radiation doses. The IADs of the exposure groups were 0.60 ± 0.15 nCi of ²³⁹Pu (1000 rats), 0.98 ± 0.25 nCi (531 rats), 2.4 ± 0.69 nCi (209 rats), 5.7 ± 1.2 nCi (98 rats), and 7.5 ± 2.0 to 150 ± 37 nCi (300 rats); the corresponding radiation doses to the lung, estimated at 3 years after exposure, were 8.3, 14, 33, 79, and 100 to 2100 rads, respectively. 71 refs., 5 figs., 4 tabs
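    The two-component clearance described above can be sketched as a biexponential retention function; integrating it gives the cumulated activity on which dose calculations build. The sketch below uses only the half-times quoted in the abstract and ignores the lung-mass correction the study applied, so the numbers are illustrative, not the study's doses:

```python
import numpy as np

# Biexponential lung retention from the abstract: ~76% of the 239Pu cleared
# with a 20-day half-time and 24% with a 180-day half-time.
def retained_fraction(t_days, f_fast=0.76, t_fast=20.0, t_slow=180.0):
    lam_fast = np.log(2) / t_fast
    lam_slow = np.log(2) / t_slow
    return (f_fast * np.exp(-lam_fast * t_days)
            + (1.0 - f_fast) * np.exp(-lam_slow * t_days))

# Cumulated activity (nCi-days) over 3 years for the lowest-dose group's
# 0.60 nCi initial deposit, by trapezoidal integration of the retention curve.
t = np.linspace(0.0, 3 * 365.0, 20001)
r = retained_fraction(t)
dt = t[1] - t[0]
cumulated = 0.60 * (r.sum() - 0.5 * (r[0] + r[-1])) * dt
```

    Dividing such a cumulated activity by a time-varying lung mass is what turns it into the absorbed doses (rads) the study reports.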

  6. Estimating environmental and occupational contribution to cancer in Sudan

    International Nuclear Information System (INIS)

    Abdallah, Y. M. Y.; Beden, S. J.; Khalifa, A. A.

    2012-12-01

    This study was performed at the Radiation and Isotopes Center of Khartoum (RICK) and the National Cancer Institute (NCI), University of Gezira. It focused on cancer patients treated with radiation therapy between 2008 and 2009. The study investigated the risk and causative factors, the geographical distribution across the Sudanese states, and the relationship of incidence with some of the patients' customs and dietary habits. This study summarizes recent scientific evidence of environmental and occupational links to nearly 30 types of cancer. The discussion of each cancer type is introduced by highlights of trends in incidence. The study considers additional indications that involuntary exposures are linked to cancers, such as patterns observed in different geographic areas and among different populations, including patterns of cancer in children. The purpose of this study is to review scientific evidence, particularly epidemiologic evidence, regarding the contribution of environmental and occupational exposures to overall cancer incidence in Sudan. The study found that widespread exposures from air and water pollution, the work environment, exposures resulting from personal habits such as smoking and drinking, and the diet are major contributors to cancer in humans. In the past three decades, there have been several efforts to estimate the proportion of cancer due to these involuntary exposures, starting with an ambitious effort by different scientists. This study provided an alternative interpretation of the evidence attributing cancer incidence to particular factors. We conclude the study by recommending that environmental and occupational links to cancer be given serious consideration by individuals and institutions concerned with cancer prevention, particularly those involved in research and education. (Author)

  7. A Study to Estimate the Effectiveness of Visual Testing Training for Aviation Maintenance Management

    Science.gov (United States)

    Law, Lewis Lyle

    2007-01-01

    The Air Commerce Act of 1926 set the beginning for standards in aviation maintenance. Even after deregulation in the late 1970s, maintenance standards and requirements still have not moved far from their initial criteria. After a potential candidate completes the Federal Aviation Administration training prerequisites, they may test for their Airframe and Powerplant (A&P) certificate. After performing maintenance in the aviation industry for a minimum of three years, the technician may then test for their Inspection Authorization (IA). After receiving their Airframe and Powerplant certificate, a technician is said to have a license to perform. At no time within the three years to eligibility for Inspection Authorization are they required to attend higher-level inspection training. What a technician learns in the aviation maintenance industry is handed down from a seasoned technician to the new hire or is developed from lessons learned on the job. Only in Europe have the Joint Aviation Authorities (JAA) required higher-level training for aviation maintenance technicians in order to control maintenance-related accidents (Lu, 2005). Throughout the 1990s, both the General Accounting Office (GAO) and the National Transportation Safety Board (NTSB) made public that the FAA is historically understaffed (GAO, 1996). In a safety recommendation, the NTSB stated, "The Safety Board continues to lack confidence in the FAA's commitment to provide effective quality assurance and safety oversight of the ATC system" (NTSB, 1990). The Federal Aviation Administration (FAA) has been known to be proactive in creating safer skies. With such reports, one would expect the FAA also to be proactive in developing more stringent inspection training for aviation maintenance technicians. The purpose of this study is to estimate the effectiveness of higher-level inspection training, such as Visual Testing (VT) for aviation maintenance technicians, to improve the safety of aircraft and to make

  8. [Cardiovascular risk: initial estimation in the study cohort "CDC of the Canary Islands in Venezuela"].

    Science.gov (United States)

    Viso, Miguel; Rodríguez, Zulma; Loreto, Neydys; Fernández, Yolima; Callegari, Carlos; Nicita, Graciela; González, Julio; Cabrera de León, Antonio; Reigosa, Aldo

    2011-12-01

    In Venezuela, as in the Canary Islands (Spain), cardiovascular disease is a major cause of morbidity and mortality. The purpose of this research is to estimate the cardiovascular risk in Canary Islands migrants living in Venezuela and participating in the study cohort "CDC of the Canary Islands in Venezuela". 452 individuals, aged 18 to 93 years (54.9% women), were enrolled between June 2008 and August 2009. A data survey was performed, and weight, height, abdomen and hip circumferences, and blood pressure were measured. After a 12-hour fasting period, a blood sample was obtained for glucose and lipid profile determinations. 40.5% of the subjects were over 65 years of age and 8% belonged to the younger group (18-30 years). In men, the average age was 57.69 ± 18.17 years and the body mass index 29.39 ± 5.71 kg/m2, whereas in women these were 56.50 ± 16.91 years and 28.20 ± 5.57 kg/m2, respectively. The prevalence of metabolic syndrome was 49.1%; overweight and obesity together, 75.2%; abdominal obesity, 85.4%; diabetes, 17.4%; impaired fasting glucose (IFG), 12.2%; elevated blood pressure, 52.9%; low HDL-cholesterol, 53.8%; and elevated serum triglycerides, 31%. Among subjects without diabetes or IFG, a third showed a high triglycerides/HDL-cholesterol ratio, indicating insulin resistance. We conclude that the Canarian-Venezuelan community suffers a high prevalence of cardiovascular risk factors (obesity, abdominal obesity, dyslipidemia, diabetes). In relation to the current population of the Canary Islands, they show a lower frequency of IFG and a higher frequency of low HDL-cholesterol. In comparison to the Venezuelan population (Zulia), they showed a lower prevalence of IFG, low HDL-cholesterol and elevated triglycerides.

  9. Potential Risk Estimation Drowning Index for Children (PREDIC): a pilot study from Matlab, Bangladesh.

    Science.gov (United States)

    Borse, N N; Hyder, A A; Bishai, D; Baker, T; Arifeen, S E

    2011-11-01

    Childhood drowning is a major public health problem that has been neglected in many low- and middle-income countries. In Matlab, rural Bangladesh, more than 40% of deaths of children aged 1-4 years are due to drowning. The main objective of this paper was to develop and evaluate a childhood drowning risk prediction index. A literature review was carried out to document the risk factors identified for childhood drowning in Bangladesh. The Newacheck model for children with special health care needs was adapted and applied to construct a childhood drowning risk index called the "Potential Risk Estimation Drowning Index for Children" (PREDIC). Finally, the proposed PREDIC index was applied to childhood drowning deaths and compared with a comparison group of children living in Matlab, Bangladesh. This pilot study used t-tests and the Receiver Operating Characteristic (ROC) curve to analyze the results. The PREDIC index was applied to 302 drowning deaths and 624 children 0-4 years old living in Matlab. The t-test results indicate that the drowned children had a statistically significantly (t=-8.58, p=0.0001) higher mean PREDIC score (6.01) than those in the comparison group (5.26). 68% of drowning cases had a PREDIC score of 6 or more, compared with 43% of the comparison group, a statistically significant difference (t=-7.36, p<0.001). The area under the Receiver Operating Characteristic curve was 0.662. The index score construction was scientifically plausible, and the index is relatively complete, fairly accurate, and practical. The risk index can help identify and target high-risk children with drowning prevention programs. The PREDIC index needs to be further tested for its accuracy, feasibility and effectiveness in drowning risk reduction in Bangladesh and other countries. Copyright © 2011 Elsevier Ltd. All rights reserved.
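    The area under the ROC curve reported above (0.662) has a direct probabilistic reading: the chance that a randomly chosen drowning case scores higher on the index than a randomly chosen comparison child. A minimal sketch of that computation, on made-up scores rather than the study's data:

```python
import numpy as np

def roc_auc(case_scores, control_scores):
    """AUC via the Mann-Whitney U statistic: the probability that a random
    case outscores a random control, counting ties as half a win."""
    cases = np.asarray(case_scores, float)[:, None]
    controls = np.asarray(control_scores, float)[None, :]
    wins = (cases > controls).sum() + 0.5 * (cases == controls).sum()
    return wins / (cases.size * controls.size)

# Hypothetical small-integer index scores, in the spirit of PREDIC
auc = roc_auc([6, 7, 5, 8, 6], [5, 4, 6, 3, 5])
```

    An AUC of 0.5 means the index discriminates no better than chance; 1.0 means perfect separation of cases from controls.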

  10. Internal strain estimation for quantification of human heel pad elastic modulus: A phantom study.

    Science.gov (United States)

    Holst, Karen; Liebgott, Hervé; Wilhjelm, Jens E; Nikolov, Svetoslav; Torp-Pedersen, Søren T; Delachartre, Philippe; Jensen, Jørgen A

    2013-02-01

    Shock absorption is the most important function of the human heel pad. However, changes in heel pad elasticity, as seen in e.g. long-distance runners, diabetes patients, and victims of Falanga torture, affect this function, often in a painful manner. Assessment of heel pad elasticity is usually based on one or a few strain measurements obtained by an external load-deformation system. The aim of this study was to develop a technique for quantitative measurement of the heel pad elastic modulus based on several internal strain measures from within the heel pad, obtained by use of ultrasound images. Nine heel phantoms were manufactured, featuring combinations of three heel pad stiffnesses and three heel pad thicknesses, to model the normal human variation. Each phantom was tested in an indentation system comprising a 7 MHz linear-array ultrasound transducer, working as the indentor, and a connected load cell. Load-compression data and ultrasound B-mode images were acquired simultaneously in 19 compression steps of 0.1 mm each. For each step, the internal tissue displacement was calculated by a phase-based cross-correlation technique, and internal strain maps were derived from these displacement maps. Elastic moduli were found from the resulting stress-strain curves. The elastic moduli made it possible to distinguish eight of the nine phantoms from each other according to the manufactured stiffness and showed very little dependence on the thickness. Mean elastic moduli for the three soft, the three medium, and the three hard phantoms were 89 kPa, 153 kPa, and 168 kPa, respectively. The combination of ultrasound images and force measurements provided an effective way of assessing the elastic properties of the heel pad, owing to the internal strain estimation. Copyright © 2012 Elsevier B.V. All rights reserved.
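    The last step, turning a stress-strain curve into an elastic modulus, is a straight-line fit in the linear regime. A minimal sketch with synthetic data (the 89 kPa target, strain range and noise level are assumptions, not the phantom measurements):

```python
import numpy as np

def elastic_modulus(strain, stress):
    """Estimate the elastic (Young's) modulus as the slope of a straight-line
    fit to the stress-strain curve (valid in the linear regime)."""
    slope, _intercept = np.polyfit(strain, stress, 1)
    return slope

# Illustrative "soft" phantom near 89 kPa, 19 compression steps, small noise
rng = np.random.default_rng(1)
strain = np.linspace(0.0, 0.05, 19)                         # dimensionless
stress = 89e3 * strain + rng.normal(0.0, 20.0, strain.size)  # Pa
E = elastic_modulus(strain, stress)
```

    In the study the strain values come from the internal displacement maps rather than from the external indentor position, which is what makes the modulus estimate local to the heel pad tissue.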

  11. Estimation of retired mobile phones generation in China: A comparative study on methodology.

    Science.gov (United States)

    Li, Bo; Yang, Jianxin; Lu, Bin; Song, Xiaolong

    2015-01-01

    Due to the rapid development of its economy and technology, China has the largest production and ownership of mobile phones in the world. In general, mobile phones have a relatively short lifetime because the majority of users replace their mobile phones frequently. Retired mobile phones represent the most valuable electrical and electronic equipment (EEE) in the main waste stream because of characteristics such as large quantity, high reuse/recovery value and fast replacement frequency. Consequently, the huge number of retired mobile phones in China calls for a sustainable management system. Generation estimation can provide fundamental information for constructing a sustainable management system for retired mobile phones and other waste electrical and electronic equipment (WEEE). However, reliable estimation results are difficult to obtain and verify. The primary aim of this paper is to identify a proper approach for estimating the generation of retired mobile phones in China, by comparing several relevant methods. The results show that the sales&new method has the highest priority for estimating retired mobile phones. According to this method, 47.92 million mobile phones were retired in 2002, rising to 739.98 million in China in 2012, a clearly increasing tendency with some fluctuations. Furthermore, some methodological issues, such as the selection of an improper approach and errors in the input data, are also discussed in order to improve generation estimation for retired mobile phones and other WEEE. Copyright © 2014 Elsevier Ltd. All rights reserved.
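    Although the paper's exact formulation is not reproduced here, a sales-based generation estimate of this kind can be sketched as historical sales convolved with a lifespan distribution; the sales figures and lifespan probabilities below are made up for illustration:

```python
import numpy as np

def retired_units(sales, lifespan_pmf):
    """Units retired in each year: sales convolved with the probability that
    a phone bought j years earlier is retired in the current year."""
    retired = np.zeros(len(sales))
    for year in range(len(sales)):
        for j, p in enumerate(lifespan_pmf):
            if year - j >= 0:
                retired[year] += sales[year - j] * p
    return retired

# Illustrative inputs (millions of units; not the paper's data)
sales = np.array([100.0, 120.0, 150.0, 190.0, 230.0, 280.0])
lifespan_pmf = np.array([0.0, 0.2, 0.5, 0.3])  # retire after 1-3 years
retired = retired_units(sales, lifespan_pmf)
```

    The quality of such an estimate hinges on the lifespan distribution, which is exactly the kind of input-data error the paper flags.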

  12. Experimental and Analytical Studies on Improved Feedforward ML Estimation Based on LS-SVR

    Directory of Open Access Journals (Sweden)

    Xueqian Liu

    2013-01-01

    The maximum likelihood (ML) algorithm is the most common and effective parameter estimation method. However, when dealing with small samples and low signal-to-noise ratio (SNR), threshold effects occur and estimation performance degrades greatly. It has been shown that the support vector machine (SVM) is suitable for small samples. Consequently, we exploit the linear relationship between the inputs and outputs of least squares support vector regression (LS-SVR) and treat the LS-SVR process as a time-varying linear filter to increase the input SNR of received signals and decrease the threshold value of the mean square error (MSE) curve. Furthermore, taking single-tone sinusoidal frequency estimation as an example and combining data analysis with experimental validation, we verify that if the LS-SVR's parameters are set appropriately, the LS-SVR process not only preserves the single-tone sinusoid and additive white Gaussian noise (AWGN) channel characteristics of the original signals, but also improves frequency estimation performance. In the experimental simulations, the LS-SVR process is applied to two common and representative single-tone sinusoidal ML frequency estimation algorithms: the DFT-based frequency-domain periodogram (FDP) and the phase-based Kay estimator. The threshold values of their MSE curves are decreased by 0.3 dB and 1.2 dB, respectively, which clearly demonstrates the advantage of the proposed algorithm.
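    One of the two baseline estimators named above, the DFT-based frequency-domain periodogram, picks the frequency that maximizes the squared spectrum magnitude. A minimal sketch (the sampling rate, tone frequency, noise level and zero-padding factor are arbitrary choices, not the paper's settings):

```python
import numpy as np

def periodogram_freq_estimate(x, fs, nfft=None):
    """Estimate a single-tone frequency as the argmax of the periodogram.

    Zero-padding (nfft > len(x)) refines the frequency grid.
    """
    n = len(x)
    nfft = nfft or 8 * n
    spectrum = np.abs(np.fft.rfft(x, n=nfft)) ** 2
    k = int(np.argmax(spectrum))
    return k * fs / nfft

# Noisy single tone in AWGN
rng = np.random.default_rng(0)
fs, f0, n = 1000.0, 123.4, 256
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t) + 0.3 * rng.standard_normal(n)

f_hat = periodogram_freq_estimate(x, fs)
```

    Below a certain SNR the argmax jumps to a noise peak far from the true tone, which is the threshold effect the paper's LS-SVR pre-filtering is designed to push back.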

  13. A study of two estimation approaches for parameters of Weibull distribution based on WPP

    International Nuclear Information System (INIS)

    Zhang, L.F.; Xie, M.; Tang, L.C.

    2007-01-01

    Least-squares estimation (LSE) based on the Weibull probability plot (WPP) is the most basic method for estimating the Weibull parameters. The common procedure is to use the least-squares regression of Y on X, i.e. minimizing the sum of squares of the vertical residuals, to fit a straight line to the data points on the WPP and then calculate the LS estimators. This method is known to be biased. In the existing literature, the least-squares regression of X on Y, i.e. minimizing the sum of squares of the horizontal residuals, has also been used by Weibull researchers. This motivated us to compare the estimators of the two LS regression methods using intensive Monte Carlo simulations. Both complete and censored data are examined. Surprisingly, the results show that LS Y on X performs better for small, complete samples, while LS X on Y performs better in the other cases in terms of the bias of the estimators. The two methods are also compared in terms of other model statistics. In general, when the shape parameter is less than one, LS Y on X provides a better model; otherwise, LS X on Y tends to be better
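    The two regression directions differ only in which residuals are minimized on the Weibull probability plot. A sketch of both estimators for complete samples, using median-rank plotting positions (an assumed convention; the paper may use different plotting positions):

```python
import numpy as np

def weibull_wpp_estimates(t):
    """LS estimates of Weibull (shape, scale) from the Weibull probability
    plot, in both regression directions: Y-on-X (vertical residuals) and
    X-on-Y (horizontal residuals). Plotting positions are the median ranks
    (i - 0.3)/(n + 0.4)."""
    t = np.sort(np.asarray(t, float))
    n = len(t)
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)
    x = np.log(t)                      # X axis of the WPP
    y = np.log(-np.log(1.0 - F))       # Y axis of the WPP

    # Y on X: y = a + b*x  ->  shape = b, scale = exp(-a/b)
    b_yx, a_yx = np.polyfit(x, y, 1)
    # X on Y: x = c + d*y  ->  shape = 1/d, scale = exp(c)
    d_xy, c_xy = np.polyfit(y, x, 1)

    return (b_yx, np.exp(-a_yx / b_yx)), (1.0 / d_xy, np.exp(c_xy))
```

    Simulating from a Weibull with known parameters and repeating this many times is exactly the Monte Carlo bias comparison the study carries out.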

  14. Case Study to Apply Work Difficulty Factors to Decommissioning Cost Estimates

    Energy Technology Data Exchange (ETDEWEB)

    Yun, Taesik; Jung, Hyejin; Oh, Jaeyoung; Kim, Younggook [KHNP CRI, Daejeon (Korea, Republic of)

    2016-10-15

    This article is intended as a guideline on how to apply work difficulty factors (WDFs) in decommissioning cost estimates. Although decommissioning cost estimates have been made for several commercial nuclear power plants, the different technical and site-specific economic assumptions used make it difficult to interpret those estimates and compare them with that of Kori-1. In addition, it is clear that difficulties will arise during the Kori-1 process, given the virtual inaccessibility of the restricted areas at the pre-decommissioning stage. Estimating decommissioning costs is one of the most crucial processes, since it encompasses the whole spectrum of decommissioning activities, from planning to the final evaluation of whether the decommissioning has proceeded successfully from the safety and economic perspectives. We suggest that the activity-dependent costs relate only to the WDFs of the plant planned for, or undergoing, decommissioning, since estimating WDFs is the core process for scrutinizing the practical costs to apply to the Kori-1 project.

  15. Parameter and state estimation in a Neisseria meningitidis model: A study case of Niger

    Science.gov (United States)

    Bowong, S.; Mountaga, L.; Bah, A.; Tewa, J. J.; Kurths, J.

    2016-12-01

    Neisseria meningitidis (Nm) is a major cause of bacterial meningitis outbreaks in Africa and the Middle East. The availability of yearly reported meningitis cases in the African meningitis belt offers the opportunity to analyze the transmission dynamics and the impact of control strategies. In this paper, we propose a method for estimating state variables that are not accessible to measurement, as well as an unknown parameter, in a Nm model. We suppose that the yearly number of Nm-induced deaths and the total population are known inputs, which can be obtained from data, and that the yearly number of new Nm cases is the model output. We also suppose that the Nm transmission rate is an unknown parameter. We first show how the recruitment rate into the population can be estimated using real data on the total population and Nm-induced mortality. Then, we use an auxiliary system, called an observer, whose solutions converge exponentially to those of the original model. This observer does not use the unknown transmission rate but only the known inputs and the model output. This allows us to estimate unmeasured state variables, such as the number of carriers, which play an important role in the transmission of the infection, and the total number of infected individuals within a human community. Finally, we also provide a simple method to estimate the unknown Nm transmission rate. To validate the estimation results, numerical simulations are conducted using real data from Niger.
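    The first step, recovering the recruitment rate from total-population and mortality data, follows from a yearly balance equation. A sketch under the assumption of a known natural mortality rate mu (the model's actual demographic terms may differ):

```python
import numpy as np

def recruitment_rate(pop, nm_deaths, mu):
    """Back out yearly recruitment from a discrete population balance.

    Assumed balance: pop[t+1] = pop[t] + Lambda[t] - mu*pop[t] - nm_deaths[t]
    =>               Lambda[t] = pop[t+1] - pop[t] + mu*pop[t] + nm_deaths[t]

    mu is the yearly natural (non-Nm) mortality rate, assumed known.
    """
    pop = np.asarray(pop, float)
    nm_deaths = np.asarray(nm_deaths, float)
    return pop[1:] - pop[:-1] + mu * pop[:-1] + nm_deaths[:-1]
```

    Applied to yearly census and mortality series, this yields one recruitment value per year, which the observer then takes as a known input.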

  16. A Study of an Iterative Channel Estimation Scheme of FS-FBMC System

    Directory of Open Access Journals (Sweden)

    YongJu Won

    2017-01-01

    A filter bank multicarrier on offset-quadrature amplitude modulation (FBMC/OQAM) system is an alternative multicarrier modulation scheme that, thanks to the properties of its prototype filter, does not need a cyclic prefix (CP) even in the presence of a multipath fading channel. The FBMC/OQAM system can be implemented either by using the polyphase network with fast Fourier transform (PPN-FFT) or by using the extended FFT in the frequency-spreading (FS) domain. In this paper, we propose an iterative channel estimation scheme for each subchannel of an FBMC/OQAM system in the frequency-spreading domain. The proposed scheme first estimates the channel using the received pilot signal in the subchannel domain and interpolates the estimated channel to the finer frequency-spreading domain. Then the channel-compensated FS-domain pilot is despread again to refine the channel state information (CSI) estimate. Computer simulations show that the proposed method outperforms the conventional FBMC/OQAM channel estimator in a frequency-selective channel.
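    The first stage of such a scheme, a least-squares estimate at pilot positions followed by interpolation to a denser frequency grid, can be sketched generically (this is a plain pilot-aided illustration, not the paper's FBMC/OQAM-specific despreading; all signal parameters are assumptions):

```python
import numpy as np

def ls_pilot_channel_estimate(rx, tx_pilots, pilot_idx, n_sub):
    """LS channel estimate at pilot subcarriers, then linear interpolation
    of the real and imaginary parts to every subcarrier."""
    h_pilots = rx[pilot_idx] / tx_pilots      # LS estimate at pilots
    k = np.arange(n_sub)
    return (np.interp(k, pilot_idx, h_pilots.real)
            + 1j * np.interp(k, pilot_idx, h_pilots.imag))

# Illustration: a slowly varying linear-phase channel, pilots every 4th bin
n_sub = 65
k = np.arange(n_sub)
h_true = np.exp(-2j * np.pi * k / 64)
pilot_idx = np.arange(0, n_sub, 4)            # 0, 4, ..., 64
tx_pilots = np.ones(pilot_idx.size, complex)  # unit pilots
rx = h_true * 1.0                             # noiseless received pilots
h_hat = ls_pilot_channel_estimate(rx, tx_pilots, pilot_idx, n_sub)
```

    The paper's iteration then compensates the FS-domain pilot with this estimate and despreads it again to refine the CSI; the sketch stops at the initial estimate.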

  17. Accurate and robust phylogeny estimation based on profile distances: a study of the Chlorophyceae (Chlorophyta)

    Directory of Open Access Journals (Sweden)

    Rahmann Sven

    2004-06-01

    Background: In phylogenetic analysis we face the problem that several subclade topologies are known or easily inferred and well supported by bootstrap analysis, but basal branching patterns cannot be unambiguously estimated by the usual methods (maximum parsimony (MP), neighbor-joining (NJ), or maximum likelihood (ML)), nor are they well supported. We represent each subclade by a sequence profile and estimate evolutionary distances between profiles to obtain a matrix of distances between subclades. Results: Our estimator of profile distances generalizes the maximum likelihood estimator of sequence distances. The basal branching pattern can be estimated by any distance-based method, such as neighbor-joining. Our method (profile neighbor-joining, PNJ) then inherits the accuracy and robustness of profiles and the time efficiency of neighbor-joining. Conclusions: Phylogenetic analysis of the Chlorophyceae with traditional methods (MP, NJ, ML and MrBayes) reveals seven well supported subclades, but the methods disagree on the basal branching pattern. The tree reconstructed by our method is better supported and can be confirmed by known morphological characters. Moreover, the accuracy is significantly improved, as shown by parametric bootstrap.

  18. Comparison of Sampling Designs for Estimating Deforestation from Landsat TM and MODIS Imagery: A Case Study in Mato Grosso, Brazil

    Directory of Open Access Journals (Sweden)

    Shanyou Zhu

    2014-01-01

    Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block.

  19. Comparison of sampling designs for estimating deforestation from landsat TM and MODIS imagery: a case study in Mato Grosso, Brazil.

    Science.gov (United States)

    Zhu, Shanyou; Zhang, Hailong; Liu, Ronggao; Cao, Yun; Zhang, Guixin

    2014-01-01

    Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block.
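    The comparison above rests on standard design-based estimators. A minimal sketch of stratified sampling with an auxiliary variable used to build the strata (all numbers are synthetic; the study's block counts and allocation scheme differ):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic blocks: an auxiliary (MODIS-like) value per block, correlated
# with the quantity of interest (TM-like deforestation per block).
n_blocks = 1000
aux = rng.gamma(2.0, 50.0, n_blocks)
y = 0.8 * aux + rng.normal(0.0, 10.0, n_blocks)

# Build 3 strata from tertiles of the auxiliary variable
edges = np.quantile(aux, [1 / 3, 2 / 3])
strata = np.digitize(aux, edges)              # stratum label 0, 1 or 2

# Sample 30 blocks per stratum; expand each stratum mean to its size
total_est = 0.0
for s in range(3):
    members = np.flatnonzero(strata == s)
    sample = rng.choice(members, size=30, replace=False)
    total_est += members.size * y[sample].mean()

true_total = y.sum()
rel_error = abs(total_est - true_total) / true_total
```

    Because the strata are homogeneous in the auxiliary variable, the stratified total is typically far less variable than a simple random sample of the same size, which is the effect the study reports.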

  20. STUDY OF ESTIMATE CONCENTRATION OF WATER CONSTITUENTS AT BADUNG STRAIT BALI USING INVERSE MODEL

    Directory of Open Access Journals (Sweden)

    I Ketut Swardika

    2012-11-01

    An algorithm was employed to retrieve the concentrations of three water constituents, chlorophyll-a, suspended matter, and colored dissolved organic matter (CDOM), from MODIS (Moderate-Resolution Imaging Spectrometer) data over a wide range covering oligotrophic case-1 to turbid case-2 waters in the Badung Strait, Bali. The algorithm is a neural network (NN) used to parameterize the inverse of a radiative transfer model; it serves in this study as a multiple nonlinear regression technique. The NN is a feedforward back-propagation model with two hidden layers. The NN was trained with computed radiances covering the range of chlorophyll-a from 0.001 to 64.0 μg/l, inorganic suspended matter from 0.01 to 50.0 mg/l, and CDOM absorption at 440 nm from 0.001 to 5.0 m-1. Inputs to the NN are the radiances of the five spectral channels which were under discussion for MODIS; the outputs are the three water constituent concentrations. The NN algorithm was tested using in-situ data sets from May, September, and November 2005 at the Badung Strait, Bali, and the sea north of Sumbawa Island, and was applied to MODIS. The coefficient of determination (R2) between chlorophyll-a concentrations derived from the simulation and in-situ data is 0.327; for suspended matter, R2 is 0.408. No in-situ measurements of CDOM were available for validation. The in-situ data were also compared with the corresponding distribution obtained by the NASA standard OC4 (OC3M for MODIS) chlorophyll-a algorithm, giving an R2 of 0.188. This study gives better accuracy compared with the standard algorithm; however, both approaches overestimate chlorophyll-a concentration. Since there are no standard MODIS products available for suspended matter and CDOM, the results of the NN retrieval for these two variables could only be assessed against general knowledge of their concentrations and distribution patterns.
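    The inversion idea, training a two-hidden-layer feedforward network to invert a forward model, can be sketched with a toy forward function standing in for the radiative transfer model (the architecture sizes, learning rate and forward function are all arbitrary here, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the forward radiative transfer model: map a constituent
# concentration to "radiances" in five channels (purely illustrative).
def forward_model(c):
    bands = np.linspace(0.4, 0.7, 5)  # pseudo wavelengths
    return np.exp(-np.outer(c, bands)) + 0.05 * np.sin(np.outer(c, bands))

# Training data for the inverse problem: radiances -> concentration
c_train = rng.uniform(0.0, 4.0, (512, 1))
X, Y = forward_model(c_train[:, 0]), c_train

# Two-hidden-layer feedforward network trained by back-propagation
sizes = [5, 16, 16, 1]
W = [rng.standard_normal((a, b)) * np.sqrt(2.0 / a) for a, b in zip(sizes, sizes[1:])]
b = [np.zeros(m) for m in sizes[1:]]

def net(x):
    h1 = np.tanh(x @ W[0] + b[0])
    h2 = np.tanh(h1 @ W[1] + b[1])
    return h1, h2, h2 @ W[2] + b[2]

lr, loss0 = 0.01, None
for _ in range(2000):
    h1, h2, out = net(X)
    err = out - Y
    if loss0 is None:
        loss0 = float(np.mean(err ** 2))
    # Manual back-propagation of the mean-squared error
    g_out = 2.0 * err / err.size
    g_W2, g_b2 = h2.T @ g_out, g_out.sum(0)
    g_h2 = (g_out @ W[2].T) * (1 - h2 ** 2)
    g_W1, g_b1 = h1.T @ g_h2, g_h2.sum(0)
    g_h1 = (g_h2 @ W[1].T) * (1 - h1 ** 2)
    g_W0, g_b0 = X.T @ g_h1, g_h1.sum(0)
    for Wi, g in zip(W, (g_W0, g_W1, g_W2)):
        Wi -= lr * g
    for bi, g in zip(b, (g_b0, g_b1, g_b2)):
        bi -= lr * g

final_loss = float(np.mean((net(X)[2] - Y) ** 2))
```

    In the study the training pairs come from radiative transfer simulations spanning the stated constituent ranges, so the network learns the inverse mapping over the whole case-1 to case-2 domain rather than the one-dimensional toy range used here.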