WorldWideScience

Sample records for providing parallel coverage

  1. Patient Experience Of Provider Refusal Of Medicaid Coverage And Its Implications.

    Science.gov (United States)

    Bhandari, Neeraj; Shi, Yunfeng; Jung, Kyoungrae

    2016-01-01

Previous studies show that many physicians do not accept new patients with Medicaid coverage, but no study has examined Medicaid enrollees' actual experience of provider refusal of their coverage and its implications. Using the 2012 National Health Interview Survey, we estimate provider refusal of health insurance coverage reported by 23,992 adults with continuous coverage for the past 12 months. We find that among Medicaid enrollees, 6.73% reported their coverage being refused by a provider in 2012, a rate higher than that in Medicare and private insurance by 4.07 (p<.01) and 3.68 (p<.001) percentage points, respectively. Refusal of Medicaid coverage is associated with delaying needed care, using the emergency room (ER) as a usual source of care, and perceiving current coverage as worse than last year. In view of the Affordable Care Act's (ACA) Medicaid expansion, future studies should continue monitoring enrollees' experience of coverage refusal.

  2. Long-term cost-effectiveness of providing full coverage for preventive medications after myocardial infarction.

    Science.gov (United States)

    Ito, Kouta; Avorn, Jerry; Shrank, William H; Toscano, Michele; Spettel, Claire; Brennan, Troyen; Choudhry, Niteesh K

    2015-05-01

    Adherence to drugs that are prescribed after myocardial infarction remains suboptimal. Although eliminating patient cost sharing for secondary prevention increases adherence and reduces rates of major cardiovascular events, the long-term clinical and economic implications of this approach have not been adequately evaluated. We developed a Markov model simulating a hypothetical cohort of commercially insured patients who were discharged from the hospital after myocardial infarction. Patients received β-blockers, renin-angiotensin system antagonists, and statins without cost sharing (full coverage) or at the current level of insurance coverage (usual coverage). Model inputs were extracted from the Post Myocardial Infarction Free Rx Event and Economic Evaluation trial and other published literature. The main outcome was an incremental cost-effectiveness ratio as measured by cost per quality-adjusted life year gained. Patients receiving usual coverage lived an average of 9.46 quality-adjusted life years after their event and incurred costs of $171,412. Patients receiving full coverage lived an average of 9.60 quality-adjusted life years and incurred costs of $167,401. Compared with usual coverage, full coverage would result in greater quality-adjusted survival (0.14 quality-adjusted life years) and less resource use ($4011) per patient. Our results were sensitive to alterations in the risk reduction for post-myocardial infarction events from full coverage. Providing full prescription drug coverage for evidence-based pharmacotherapy to commercially insured post-myocardial infarction patients has the potential to improve health outcomes and save money from the societal perspective over the long-term. https://www.clinicaltrials.gov. Unique identifier: NCT00566774. © 2015 American Heart Association, Inc.
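The cost-effectiveness comparison in this abstract can be reproduced directly from the reported per-patient totals; the following sketch (plain arithmetic, no modeling assumptions beyond the figures quoted above) computes the incremental cost and QALY differences and checks for dominance:

```python
# Per-patient lifetime totals reported in the abstract.
qaly_usual, cost_usual = 9.46, 171_412   # usual coverage
qaly_full, cost_full = 9.60, 167_401     # full coverage

delta_qaly = round(qaly_full - qaly_usual, 2)  # incremental effectiveness
delta_cost = cost_full - cost_usual            # incremental cost

print(f"incremental QALYs: {delta_qaly}")      # 0.14
print(f"incremental cost: ${delta_cost}")      # -4011

# A strategy that is both more effective and less costly "dominates";
# no incremental cost-effectiveness ratio needs to be reported.
dominant = delta_qaly > 0 and delta_cost < 0
print("full coverage dominates usual coverage:", dominant)
```

This matches the abstract's conclusion: full coverage yields 0.14 more quality-adjusted life years while spending $4011 less per patient, so it is cost-saving rather than merely cost-effective.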

  3. Defining the essential anatomical coverage provided by military body armour against high energy projectiles.

    Science.gov (United States)

    Breeze, John; Lewis, E A; Fryer, R; Hepper, A E; Mahoney, Peter F; Clasper, Jon C

    2016-08-01

Body armour is a type of equipment worn by military personnel that aims to prevent or reduce the damage caused by ballistic projectiles to structures within the thorax and abdomen. Such injuries remain the leading cause of potentially survivable deaths on the modern battlefield. Recent developments in computer modelling, in conjunction with a programme to procure the next generation of UK military body armour, have provided the impetus to re-evaluate the optimal anatomical coverage provided by military body armour against high energy projectiles. A systematic review of the literature was undertaken to identify those anatomical structures within the thorax and abdomen that if damaged were highly likely to result in death or significant long-term morbidity. These structures were superimposed upon two designs of ceramic plate used within representative body armour systems using a computerised representation of human anatomy. Those structures requiring essential medical coverage by a plate were demonstrated to be the heart, great vessels, liver and spleen. For the 50th centile male anthropometric model used in this study, the front and rear plates from the Enhanced Combat Body Armour system only provide limited coverage, but do fulfil their original requirement. The plates from the current Mark 4a OSPREY system cover all of the structures identified in this study as requiring coverage except for the abdominal sections of the aorta and inferior vena cava. Further work on sizing of plates is recommended due to its potential to optimise essential medical coverage. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  4. Health insurance eroding for working families: employer-provided coverage declines for fifth consecutive year.

    Science.gov (United States)

    Gould, Elise

    2007-01-01

    In 2005, the percentage of Americans with employer-provided health insurance fell for the fifth year in a row. Workers and their families have been falling into the ranks of the uninsured at alarming rates. The downward trend in employer-provided coverage for children also continued into 2005. In the previous four years, children were less likely to become uninsured as public sector health coverage expanded, but in 2005 the rate of uninsured children increased. While Medicaid and SCHIP still work for many, the government has not picked up coverage for everybody who lost insurance. The weakening of this system-notably for children-is particularly difficult for workers and their families in a time of stagnating incomes. Furthermore, these programs are not designed to prevent low-income adults or middle- or high-income families from becoming uninsured. Government at the federal and state levels has responded to medical inflation with policy changes that reduce public insurance eligibility or with proposals to reduce government costs. Federal policy proposals to lessen the tax advantage of workplace insurance or to encourage a private purchase system could further destabilize the employer-provided system. Now is a critical time to consider health insurance reform. Several promising solutions could increase access to affordable health care. The key is to create large, varied, and stable risk pools.

  5. 75 FR 41787 - Requirement for Group Health Plans and Health Insurance Issuers To Provide Coverage of Preventive...

    Science.gov (United States)

    2010-07-19

    ... Insurance Issuers To Provide Coverage of Preventive Services Under the Patient Protection and Affordable... Care Act (the Affordable Care Act) regarding preventive health services. The IRS is issuing the....9815-2713 is added to read as follows: Sec. 54.9815-2713 Coverage of preventive health services. [The...

  6. Abortion providers' experiences with Medicaid abortion coverage policies: a qualitative multistate study.

    Science.gov (United States)

    Dennis, Amanda; Blanchard, Kelly

    2013-02-01

    To evaluate the implementation of state Medicaid abortion policies and the impact of these policies on abortion clients and abortion providers. From 2007 to 2010, in-depth interviews were conducted with representatives of 70 abortion-providing facilities in 15 states. In-depth interviews focused on abortion providers' perceptions regarding Medicaid and their experiences working with Medicaid and securing reimbursement in cases that should receive federal funding: rape, incest, and life endangerment. Data were transcribed verbatim before being coded. In two study states, abortion providers reported that 97 percent of submitted claims for qualifying cases were funded. Success receiving reimbursement was attributed to streamlined electronic billing procedures, timely claims processing, and responsive Medicaid staff. Abortion providers in the other 13 states reported reimbursement for 36 percent of qualifying cases. Providers reported difficulties obtaining reimbursement due to unclear rejections of qualifying claims, complex billing procedures, lack of knowledgeable Medicaid staff with whom billing problems could be discussed, and low and slow reimbursement rates. Poor state-level implementation of Medicaid coverage of abortion policies creates barriers for women seeking abortion. Efforts to ensure policies are implemented appropriately would improve women's health. © Health Research and Educational Trust.

  7. A PC parallel port button box provides millisecond response time accuracy under Linux.

    Science.gov (United States)

    Stewart, Neil

    2006-02-01

    For psychologists, it is sometimes necessary to measure people's reaction times to the nearest millisecond. This article describes how to use the PC parallel port to receive signals from a button box to achieve millisecond response time accuracy. The workings of the parallel port, the corresponding port addresses, and a simple Linux program for controlling the port are described. A test of the speed and reliability of button box signal detection is reported. If the reader is moderately familiar with Linux, this article should provide sufficient instruction for him or her to build and test his or her own parallel port button box. This article also describes how the parallel port could be used to control an external apparatus.
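The approach this article describes can be sketched as follows. On Linux, user-space code historically accessed the parallel port's I/O registers (base 0x378 for a typical first port, with the status register at base+1) via ioperm() and port reads. Since real port access needs root privileges and physical hardware, the snippet below only decodes a status-register byte, the part of the logic that maps button wiring to responses; the specific bit-to-button assignments are hypothetical, since actual wiring varies with the button box.

```python
# Standard PC parallel port register layout (base 0x378 for LPT1):
#   base + 0: data register, base + 1: status register, base + 2: control.
STATUS_ACK = 0x40    # status bit 6: ACK input line
STATUS_BUSY = 0x80   # status bit 7: BUSY input line (hardware-inverted)

def read_buttons(status_byte: int) -> dict:
    """Decode a parallel-port status byte into button states.

    Assumes (hypothetically) one button wired to ACK and one to BUSY.
    The BUSY bit is inverted by the port hardware, so a high voltage
    on the wire reads back as 0; we undo that inversion here.
    """
    return {
        "button_ack": bool(status_byte & STATUS_ACK),
        "button_busy": not (status_byte & STATUS_BUSY),
    }

# Example: ACK line high, BUSY line high (reads as 0 after inversion).
state = read_buttons(0x40)
print(state)  # {'button_ack': True, 'button_busy': True}
```

In a real timing loop, the program would poll the status register as fast as possible (or use the port's interrupt on ACK) and timestamp the first sample on which a button bit changes, which is what yields millisecond accuracy.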

  8. Knowledge of Healthcare Coverage for Refugee Claimants: Results from a Survey of Health Service Providers in Montreal: e0146798

    National Research Council Canada - National Science Library

    Mónica Ruiz-Casares; Janet Cleveland; Youssef Oulhote; Catherine Dunkley-Hickin; Cécile Rousseau

    2016-01-01

      Following changes to the Interim Federal Health (IFH) program in Canada in 2012, this study investigates health service providers' knowledge of the healthcare coverage for refugee claimants living in Quebec...

  10. The health and healthcare impact of providing insurance coverage to uninsured children: A prospective observational study.

    Science.gov (United States)

    Flores, Glenn; Lin, Hua; Walker, Candice; Lee, Michael; Currie, Janet M; Allgeyer, Rick; Portillo, Alberto; Henry, Monica; Fierro, Marco; Massey, Kenneth

    2017-05-23

Of the 4.8 million uninsured children in America, 62-72% are eligible for but not enrolled in Medicaid or CHIP. Not enough is known, however, about the impact of health insurance on outcomes and costs for previously uninsured children, which has never been examined prospectively. This prospective observational study of uninsured Medicaid/CHIP-eligible minority children compared children obtaining coverage vs. those remaining uninsured. Subjects were recruited at 97 community sites, and 11 outcomes monitored monthly for 1 year. In this sample of 237 children, those obtaining coverage were significantly less likely than those remaining uninsured to have: suboptimal health (27% vs. 46%); no PCP (7% vs. 40%); never/sometimes getting immediate care from the PCP (7% vs. 40%); no usual source of preventive (1% vs. 20%) or sick (3% vs. 12%) care; and unmet medical (13% vs. 48%), preventive (6% vs. 50%), and dental (18% vs. 62%) care needs. The uninsured had higher out-of-pocket doctor-visit costs (mean = $70 vs. $29), and proportions of parents not recommending the child's healthcare provider to friends (24% vs. 8%) and reporting the child's health caused family financial problems (29% vs. 5%), and lower well-child-care-visit quality ratings. In bivariate analyses, older age, birth outside of the US, and lacking health insurance for >6 months at baseline were associated with remaining uninsured for the entire year. In multivariable analysis, children who had been uninsured for >6 months at baseline (odds ratio [OR], 3.8; 95% confidence interval [CI], 1.4-10.3) and African-American children (OR, 2.8; 95% CI, 1.1-7.3) had significantly higher odds of remaining uninsured for the entire year. Insurance saved $2886/insured child/year, with mean healthcare costs = $5155/uninsured vs. $2269/insured child (P = .04). Providing health insurance to Medicaid/CHIP-eligible uninsured children improves health, healthcare access and quality, and parental satisfaction; reduces unmet needs and out-of-pocket costs; and saves money.

  11. Rotational electrical impedance tomography using electrodes with limited surface coverage provides window for multimodal sensing

    Science.gov (United States)

    Lehti-Polojärvi, Mari; Koskela, Olli; Seppänen, Aku; Figueiras, Edite; Hyttinen, Jari

    2018-02-01

Electrical impedance tomography (EIT) is an imaging method that could become a valuable tool in multimodal applications. One challenge in simultaneous multimodal imaging is that typically the EIT electrodes cover a large portion of the object surface. This paper investigates the feasibility of rotational EIT (rEIT) in applications where electrodes cover only a limited angle of the surface of the object. In the studied rEIT, the object is rotated a full 360° during a set of measurements to increase the information content of the data. We call this approach limited angle full revolution rEIT (LAFR-rEIT). We test LAFR-rEIT setups in two-dimensional geometries with computational and experimental data. We use up to 256 rotational measurement positions, which requires a new way to solve the forward and inverse problem of rEIT. For this, we provide a modification, available for EIDORS, in the supplementary material. The computational results demonstrate that LAFR-rEIT with eight electrodes produces the same image quality as conventional 16-electrode rEIT, when data from an adequate number of rotational measurement positions are used. Both computational and experimental results indicate that the novel LAFR-rEIT provides good EIT reconstructions in setups with limited surface coverage and a small number of electrodes.
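The core geometric idea, that rotating the object lets a small electrode arc sample the whole boundary, can be illustrated with a short sketch. This is purely illustrative (it is not the EIDORS modification the abstract refers to); the electrode count, arc width, and step count are assumptions chosen for the example:

```python
# Eight electrodes spanning only a 90-degree arc of the object boundary.
n_electrodes = 8
arc_deg = 90.0
electrode_angles = [i * arc_deg / (n_electrodes - 1) for i in range(n_electrodes)]

# Rotating the object through a full revolution in N steps makes each
# physical electrode sample N distinct angles in the object's frame.
n_steps = 16
effective = {
    round((a + k * 360.0 / n_steps) % 360.0, 3)
    for a in electrode_angles
    for k in range(n_steps)
}
print(f"{len(effective)} distinct effective electrode positions")
```

Note that the first and last electrodes here sit exactly 90° apart, a multiple of the 22.5° rotation step, so their orbits coincide; choosing a step size incommensurate with the electrode spacing avoids such redundant positions, which is one reason a large number of rotational positions (up to 256 in the paper) pays off.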

  12. Nanospray FAIMS Fractionation Provides Significant Increases in Proteome Coverage of Unfractionated Complex Protein Digests*

    Science.gov (United States)

    Swearingen, Kristian E.; Hoopmann, Michael R.; Johnson, Richard S.; Saleem, Ramsey A.; Aitchison, John D.; Moritz, Robert L.

    2012-01-01

High-field asymmetric waveform ion mobility spectrometry (FAIMS) is an atmospheric pressure ion mobility technique that can be used to reduce sample complexity and increase dynamic range in tandem mass spectrometry experiments. FAIMS fractionates ions in the gas phase according to characteristic differences in mobilities in electric fields of different strengths. Undesired ion species such as solvated clusters and singly charged chemical background ions can be prevented from reaching the mass analyzer, thus decreasing chemical noise. To date, there has been limited success using the commercially available Thermo Fisher FAIMS device with both standard ESI and nanoLC-MS. We have modified a Thermo Fisher electrospray source to accommodate a fused silica pulled tip capillary column for nanospray ionization, which will give standard laboratories access to FAIMS technology. Our modified source allows easily obtainable stable spray at flow rates of 300 nL/min when coupled with FAIMS. The modified electrospray source allows the use of sheath gas, which provides a fivefold increase in signal obtained when nanoLC is coupled to FAIMS. In this work, nanoLC-FAIMS-MS and nanoLC-MS were compared by analyzing a tryptic digest of a 1:1 mixture of SILAC-labeled haploid and diploid yeast to demonstrate the performance of nanoLC-FAIMS-MS, at different compensation voltages, for post-column fractionation of complex protein digests. The effective dynamic range more than doubled when FAIMS was used. In total, 10,377 unique stripped peptides and 1649 unique proteins with SILAC ratios were identified from the combined nanoLC-FAIMS-MS experiments, compared with 6908 unique stripped peptides and 1003 unique proteins with SILAC ratios identified from the combined nanoLC-MS experiments. This work demonstrates how a commercially available FAIMS device can be combined with nanoLC to improve proteome coverage in shotgun and targeted type proteomics experiments. PMID:22186714

  13. 25 CFR 900.193 - Does FTCA coverage extend to individuals who provide health care services under a personal...

    Science.gov (United States)

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false Does FTCA coverage extend to individuals who provide health care services under a personal services contract providing services in a facility that is owned, operated, or constructed under the jurisdiction of the IHS? 900.193 Section 900.193 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR, AND...

  14. 75 FR 27141 - Group Health Plans and Health Insurance Issuers Providing Dependent Coverage of Children to Age...

    Science.gov (United States)

    2010-05-13

    ... Revenue Service 26 CFR Part 54 RIN 1545-BJ45 Group Health Plans and Health Insurance Issuers Providing... Labor and the Office of Consumer Information and Insurance Oversight of the U.S. Department of Health... health plans and health insurance coverage offered in connection with a group health plan under the...

  15. 25 CFR 900.195 - Does FTCA coverage extend to the contractor's health care practitioners providing services to...

    Science.gov (United States)

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false Does FTCA coverage extend to the contractor's health care practitioners providing services to private patients on a fee-for-services basis when such personnel (not the self-determination contractor) receive the fee? 900.195 Section 900.195 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR, AND...

  16. Medicaid and CHIP Provide Coverage to More than Half of All Children in D.C. Policy Snapshot

    Science.gov (United States)

    DC Action for Children, 2011

    2011-01-01

    Medicaid and CHIP are crucial parts of the social safety net, providing health insurance coverage to more than half of all children ages 0-21 in D.C. and a third of children nationally. Without these two programs, more than 97,000 children in the District would have been uninsured in 2010. New research indicates that compared with the uninsured,…

  17. Chinese newspaper coverage of (unproven) stem cell therapies and their providers.

    Science.gov (United States)

    Ogbogu, Ubaka; Du, Li; Rachul, Christen; Bélanger, Lisa; Caulfield, Timothy

    2013-04-01

    China is a primary destination for stem cell tourism, the phenomenon whereby patients travel abroad to receive unproven stem cell-based treatments that have not been approved in their home countries. Yet, much remains unknown about the state of the stem cell treatment industry in China and about how the Chinese view treatments and providers. Given the media's crucial role in science/health communication and in framing public dialogue, this study sought to examine Chinese newspaper portrayal and perceptions of stem cell treatments and their providers. Based on a content analysis of over 300 newspaper articles, the study revealed that while Chinese newspaper reporting is generally neutral in tone, it is also inaccurate, overly positive, heavily influenced by "interested" treatment providers and focused on the therapeutic uses of stem cells to address the health needs of the local population. The study findings suggest a need to counterbalance providers' influence on media reporting through strategies that encourage media uptake of accurate information about stem cell research and treatments.

  18. Concomitant administration of pneumococcal-23 and zoster vaccines provides adequate herpes zoster coverage.

    Science.gov (United States)

    Wyman, Marcia J; Stabi, Katie L

    2013-01-01

    To determine whether concomitant administration of zoster vaccine and polysaccharide pneumococcal-23 vaccine (PPV23) provides sufficient protection against herpes zoster infections. Literature was retrieved through the Centers for Disease Control and Prevention (CDC) website, PubMed (inception-February 2013), and Scopus (inception-February 2013) using the key words herpes zoster, pneumococcal, vaccine, concomitant, simultaneous administration, Pneumovax, Zostavax, and barriers. In addition, reference citations from publications were used. All English-language articles identified from the data sources were evaluated. Two studies evaluating concomitant and nonconcomitant administration of zoster vaccine and PPV23 were included. Current product labeling recommends a 4-week interval between zoster vaccine and PPV23 administration; however, the Food and Drug Administration (FDA) and the CDC promote concomitant administration to prevent a missed opportunity to vaccinate. This has caused confusion among health care professionals regarding the appropriate timing of these vaccines. A randomized trial that evaluated the immunogenicity of zoster vaccine and PPV23 given together versus separated by at least 4 weeks demonstrated that the varicella zoster virus (VZV) antibody levels of concomitant versus nonconcomitant vaccination groups did not meet noninferiority requirements. However, a large retrospective cohort trial that compared the incidence of herpes zoster infections following concomitant versus nonconcomitant administration of PPV23 and zoster vaccine did not find a statistically significant between-group difference. Concomitant administration of zoster vaccine and PPV23 is advocated by the CDC and FDA to improve immunization rates among vaccine-eligible individuals. Since there is no direct evidence that simultaneous administration of zoster vaccine and PPV23 puts patients at increased risk of developing herpes zoster, the vaccines should be given during the same

  19. Parallel processing from applications to systems

    CERN Document Server

    Moldovan, Dan I

    1993-01-01

This text provides one of the broadest presentations of parallel processing available, including the structure of parallel processors and parallel algorithms. The emphasis is on mapping algorithms to highly parallel computers, with extensive coverage of array and multiprocessor architectures. Early chapters provide insightful coverage on the analysis of parallel algorithms and program transformations, effectively integrating a variety of material previously scattered throughout the literature. Theory and practice are well balanced across diverse topics in this concise presentation. For exceptional cla

  20. Knowledge of Healthcare Coverage for Refugee Claimants: Results from a Survey of Health Service Providers in Montreal.

    Science.gov (United States)

    Ruiz-Casares, Mónica; Cleveland, Janet; Oulhote, Youssef; Dunkley-Hickin, Catherine; Rousseau, Cécile

    2016-01-01

    Following changes to the Interim Federal Health (IFH) program in Canada in 2012, this study investigates health service providers' knowledge of the healthcare coverage for refugee claimants living in Quebec. An online questionnaire was completed by 1,772 staff and physicians from five hospitals and two primary care centres in Montreal. Low levels of knowledge and significant associations between knowledge and occupational group, age, and contact with refugees were documented. Social workers, respondents aged 40-49 years, and those who reported previous contact with refugee claimants seeking healthcare were significantly more likely to have 2 or more correct responses. Rapid and multiple changes to the complex IFH policy have generated a high level of confusion among healthcare providers. Simplification of the system and a knowledge transfer strategy aimed at improving healthcare delivery for IFH patients are urgently needed, proposing easy avenues to access rapidly updated information and emphasizing ethical and clinical issues.

  1. Knowledge of Healthcare Coverage for Refugee Claimants: Results from a Survey of Health Service Providers in Montreal.

    Directory of Open Access Journals (Sweden)

    Mónica Ruiz-Casares

Following changes to the Interim Federal Health (IFH) program in Canada in 2012, this study investigates health service providers' knowledge of the healthcare coverage for refugee claimants living in Quebec. An online questionnaire was completed by 1,772 staff and physicians from five hospitals and two primary care centres in Montreal. Low levels of knowledge and significant associations between knowledge and occupational group, age, and contact with refugees were documented. Social workers, respondents aged 40-49 years, and those who reported previous contact with refugee claimants seeking healthcare were significantly more likely to have 2 or more correct responses. Rapid and multiple changes to the complex IFH policy have generated a high level of confusion among healthcare providers. Simplification of the system and a knowledge transfer strategy aimed at improving healthcare delivery for IFH patients are urgently needed, proposing easy avenues to access rapidly updated information and emphasizing ethical and clinical issues.

  2. Does Parallel Distributed Processing Provide a Plausible Framework for Modeling Early Reading Acquisition?

    Science.gov (United States)

    McEneaney, John E.

    A study compared a parallel distributed processing (PDP) model with a more traditional symbolic information processing model that accounts for early reading acquisition by human subjects. Two experimental paradigms were simulated. In one paradigm (a "savings" paradigm) subjects were divided into two groups and trained with two sets of…

  3. Variation in hepatitis B immunization coverage rates associated with provider practices after the temporary suspension of the birth dose

    Directory of Open Access Journals (Sweden)

    Mullooly John P

    2006-11-01

Abstract Background In 1999, the American Academy of Pediatrics and U.S. Public Health Service recommended suspending the birth dose of hepatitis B vaccine due to concerns about potential mercury exposure. A previous report found that overall national hepatitis B vaccination coverage rates decreased in association with the suspension. It is unknown whether this underimmunization occurred uniformly or was associated with how providers changed their practices for the timing of hepatitis B vaccine doses. We evaluate the impact of the birth dose suspension on underimmunization for the hepatitis B vaccine series among 24-month-olds in five large provider groups and describe provider practices potentially associated with underimmunization following the suspension. Methods Retrospective cohort study of children enrolled in five large provider groups in the United States (A-E). Logistic regression was used to evaluate the association between the birth dose suspension and a child's probability of being underimmunized at 24 months for the hepatitis B vaccine series. Results Prior to July 1999, the percent of children who received a hepatitis B vaccination at birth varied widely (3% to 90%) across the five provider groups. After the national recommendation to suspend the hepatitis B birth dose, the percent of children who received a hepatitis B vaccination at birth decreased in all provider groups, and this trend persisted after the policy was reversed. The most substantial decreases were observed in the two provider groups that shifted the first hepatitis B dose from birth to 5–6 months of age. Accounting for temporal trend, children in these two provider groups were significantly more likely to be underimmunized for the hepatitis B series at 24 months of age if they were in the birth dose suspension cohort compared with baseline (Group D OR 2.7, 95% CI 1.7 – 4.4; Group E OR 3.1, 95% CI 2.3 – 4.2). This represented 6% more children in Group D and 9

  4. RADseq provides evidence for parallel ecotypic divergence in the autotetraploid Cochlearia officinalis in Northern Norway.

    Science.gov (United States)

    Brandrud, Marie K; Paun, Ovidiu; Lorenzo, Maria T; Nordal, Inger; Brysting, Anne K

    2017-07-17

    Speciation encompasses a continuum over time from freely interbreeding populations to reproductively isolated species. Along this process, ecotypes - the result of local adaptation - may be on the road to new species. We investigated whether three autotetraploid Cochlearia officinalis ecotypes, adapted to different habitats (beach, estuary, spring), are genetically differentiated and result from parallel ecotypic divergence in two distinct geographical regions. We obtained genetic data from thousands of single nucleotide polymorphisms (SNPs) from restriction-site associated DNA sequencing (RADseq) and from six microsatellite markers for 12 populations to assess genetic divergence at ecotypic, geographic and population level. The genetic patterns support differentiation among ecotypes as suggested by morphology and ecology. The data fit a scenario where the ancestral beach ecotype has recurrently and polytopically given rise to the estuary and spring ecotypes. Several ecologically-relevant loci with consistent non-random segregating patterns are identified across the recurrent origins, in particular around genes related to salt stress. Despite being ecologically distinct, the Cochlearia ecotypes still represent an early stage in the process of speciation, as reproductive isolation has not (yet) developed. A sequenced annotated genome is needed to specifically target candidate genes underlying local adaptation.

  5. A model for determining when an analysis contains sufficient detail to provide adequate NEPA coverage for a proposed action

    Energy Technology Data Exchange (ETDEWEB)

    Eccleston, C.H.

    1994-11-01

Neither the National Environmental Policy Act (NEPA) nor its subsequent regulations provide substantive guidance for determining the level of detail, discussion, and analysis that is sufficient to adequately cover a proposed action. Yet, decisionmakers are routinely confronted with the problem of making such determinations. Experience has shown that no two decisionmakers are likely to completely agree on the amount of discussion that is sufficient to adequately cover a proposed action. One decisionmaker may determine that a certain level of analysis is adequate, while another may conclude the exact opposite. Achieving a consensus within the agency and among the public can be problematic. Lacking definitive guidance, decisionmakers and critics alike may point to a universe of potential factors as the basis for defending their claim that an action is or is not adequately covered. Experience indicates that assertions are often based on ambiguous opinions that can be neither proved nor disproved. Lack of definitive guidance slows the decisionmaking process and can result in project delays. Furthermore, it can also lead to inconsistencies in decisionmaking, inappropriate levels of NEPA documentation, and increased risk of a project being challenged for inadequate coverage. A more systematic and less subjective approach for making such determinations is obviously needed. A paradigm for reducing the degree of subjectivity inherent in such decisions is presented in the following paper. The model is specifically designed to expedite the decisionmaking process by providing a systematic approach for making these determinations. In many cases, agencies may find that using this model can reduce the analysis and size of NEPA documents.

  6. PROVIDING QUALITY OF ELECTRIC POWER IN ELECTRIC POWER SYSTEM IN PARALLEL OPERATION WITH WIND TURBINE

    Directory of Open Access Journals (Sweden)

    Yu. A. Rolik

    2016-01-01

    Full Text Available The problem of providing electric power quality in electric power systems (EPS) equipped with sufficiently long overhead or cable transmission lines is considered. The problem is particularly relevant to EPS in which the source of electrical energy is a wind-turbine generator, since wind is itself an unstable primary energy source. Determining the degree of automation of voltage regulation in the EPS reduces to choosing methods and means of regulating the power quality parameters. The concept of voltage loss and its causes are explained using the simplest power system, represented by a single-line diagram. It is suggested to regulate voltage by changing the network parameters, reducing the line voltage loss by reducing the line reactance. The latter is achieved by longitudinal (series) capacitive compensation of the inductive reactance of the line. The effect is illustrated by vector diagrams of currents and voltages in the equivalent circuits of transmission lines with and without longitudinal capacitive compensation. Analysis of the derived formulas demonstrates that this method of regulation is useful only in power supply systems with a relatively low power factor (cos φ < 0.7 to 0.9). Such a power factor is typical when a wind turbine with an asynchronous generator is connected to the network, since wind speed is unstable. Voltage regulation by the proposed method makes it possible to provide the required quality of voltage at the consumers' busbars in this situation. In turn, this creates the necessary conditions for economical transmission of electric power with the lowest expenditure of reactive power and the lowest active power losses.
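
The voltage-loss relation this abstract relies on can be sketched numerically. The approximation ΔV ≈ (P·R + Q·X)/V and all feeder values below are illustrative assumptions, not data from the record; the point is only that series (longitudinal) capacitive compensation, which reduces the effective reactance X, cuts the loss most when the power factor is low.

```python
def voltage_loss(p_kw, q_kvar, r_ohm, x_ohm, v_kv):
    """Approximate line voltage loss dV ~ (P*R + Q*X) / V, in volts."""
    v = v_kv * 1e3
    return (p_kw * 1e3 * r_ohm + q_kvar * 1e3 * x_ohm) / v

# Hypothetical 10 kV feeder to a wind-turbine connection point:
# low power factor (cos phi ~ 0.70), so Q is comparable to P.
p, q = 800.0, 816.0          # kW, kvar (invented values)
r, x = 1.2, 2.4              # ohm: line resistance and inductive reactance
dv_plain = voltage_loss(p, q, r, x, 10.0)

# Series (longitudinal) capacitive compensation cancels part of X:
x_c = 1.8                    # ohm of series capacitive reactance (assumed)
dv_comp = voltage_loss(p, q, r, x - x_c, 10.0)
print(round(dv_plain, 1), round(dv_comp, 1))
```

Because the compensated term only shrinks Q·X, the benefit is large precisely when reactive power is high relative to active power, matching the abstract's cos φ < 0.7 to 0.9 caveat.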

  7. runjags: An R Package Providing Interface Utilities, Model Templates, Parallel Computing Methods and Additional Distributions for MCMC Models in JAGS

    Directory of Open Access Journals (Sweden)

    Matthew J. Denwood

    2016-07-01

    Full Text Available The runjags package provides a set of interface functions to facilitate running Markov chain Monte Carlo models in JAGS from within R. Automated calculation of appropriate convergence and sample length diagnostics, user-friendly access to commonly used graphical outputs and summary statistics, and parallelized methods of running JAGS are provided. Template model specifications can be generated using a standard lme4-style formula interface to assist users less familiar with the BUGS syntax. Automated simulation study functions are implemented to facilitate model performance assessment, as well as drop-k type cross-validation studies, using high performance computing clusters such as those provided by parallel. A module extension for JAGS is also included within runjags, providing the Pareto family of distributions and a series of minimally-informative priors including the DuMouchel and half-Cauchy priors. This paper outlines the primary functions of this package, and gives an illustration of a simulation study to assess the sensitivity of two equivalent model formulations to different prior distributions.

  8. The rate of growth in scientific publication and the decline in coverage provided by Science Citation Index.

    Science.gov (United States)

    Larsen, Peder Olesen; von Ins, Markus

    2010-09-01

    The growth rate of scientific publication has been studied from 1907 to 2007 using available data from a number of literature databases, including Science Citation Index (SCI) and Social Sciences Citation Index (SSCI). Traditional scientific publishing, that is, publication in peer-reviewed journals, is still increasing, although there are big differences between fields. There are no indications that the growth rate has decreased in the last 50 years. At the same time, publication through new channels, for example conference proceedings, open archives and home pages, is growing fast. The growth rate for SCI up to 2007 is smaller than for comparable databases, which means that SCI has been covering a decreasing part of the traditional scientific literature. There are also clear indications that coverage by SCI is especially low in some of the scientific areas with the highest growth rates, including computer science and the engineering sciences. The role of conference proceedings, open-access archives and publications published on the net is increasing, especially in scientific fields with high growth rates, but this has only partially been reflected in the databases. The new publication channels challenge the use of the big databases in measurements of scientific productivity or output and of the growth rate of science. Because of this declining coverage, it is problematic that SCI has been, and still is, used as the dominant source for science indicators based on publication and citation numbers. The limited data available for the social sciences show that the growth rate in SSCI was remarkably low and indicate that coverage by SSCI was declining over time. National Science Indicators from Thomson Reuters is based solely on SCI, SSCI and the Arts and Humanities Citation Index (AHCI). The declining coverage of these citation databases therefore calls their use as a source into question.
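
The notion of a publication growth rate can be made concrete with a small calculation. The counts below are invented for illustration and are not the paper's data; the sketch only shows how a compound annual growth rate, and the doubling time it implies, would be derived from two database snapshots.

```python
import math

def annual_growth_rate(n_start, n_end, years):
    """Compound annual growth rate implied by two publication counts."""
    return (n_end / n_start) ** (1.0 / years) - 1.0

def doubling_time(rate):
    """Years needed for output to double at a constant annual growth rate."""
    return math.log(2.0) / math.log(1.0 + rate)

# Purely illustrative record counts over a 50-year window:
rate = annual_growth_rate(250_000, 1_350_000, 50)
print(f"growth {rate:.1%}/yr, doubling every {doubling_time(rate):.1f} yr")
```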

  9. Climate Feedback: Bringing the Scientific Community to Provide Direct Feedback on the Credibility of Climate Media Coverage

    Science.gov (United States)

    Vincent, E. M.; Matlock, T.; Westerling, A. L.

    2015-12-01

    While most scientists recognize climate change as a major societal and environmental issue, social and political will to tackle the problem is still lacking. One of the biggest obstacles is inaccurate reporting, or even outright misinformation, in climate change coverage, which confuses the general public on the issue. In today's era of instant access to information, what we read online usually falls outside our field of expertise, and it is a real challenge to evaluate what is credible. The emerging technology of web annotation could be a game changer, as it allows knowledgeable individuals to attach notes to any piece of text on a webpage and to share them with readers, who see the annotations in context, like comments on a PDF. Here we present the Climate Feedback initiative, which is bringing together a community of climate scientists who collectively evaluate the scientific accuracy of influential climate change media coverage. Scientists annotate articles sentence by sentence and assess whether they are consistent with scientific knowledge, allowing readers to see where and why the coverage is (or is not) based on science. Scientists also summarize the essence of their critical commentary in the form of a simple article-level overall credibility rating that quickly informs readers about the credibility of the entire piece. Web annotation allows readers to 'hear' directly from the experts and to sense the consensus in a personal way, as one can literally see how many scientists agree with a given statement. It also allows a broad population of scientists to interact with the media, notably early-career scientists. In this talk, we will present results on the impacts annotations have on readers (regarding their evaluation of the trustworthiness of the information they read) and on journalists (regarding their reception of scientists' comments). Several dozen scientists have contributed to this effort to date, and the system offers potential to

  10. [Acceptability of HIV testing provided to infants in pediatric services in Cote d'Ivoire, meanings for pediatric diagnostic coverage].

    Science.gov (United States)

    Oga, Maxime; Brou, Hermann; Dago-Akribi, Hortense; Coffie, Patrick; Amani-Bossé, Clarisse; Ekouévi, Didier; Yapo, Vincent; Menan, Hervé; Ndondoki, Camille; Timité-Konan, M; Leroy, Valériane

    2014-01-01

    […] when he opposes infant HIV testing, or a facilitator of it when he is convinced. The father's position thus remains essential to the question of the acceptability of pediatric HIV testing. Mothers are aware of this and anticipate the difficulty of having their infant tested without the prior opinion of their partner, who is at once father and head of the family. Our analysis of the issue of pediatric HIV testing highlights three elements that require comprehensive management in order to improve the coverage of pediatric HIV testing. These three elements do not exist independently of one another; they constantly interact and either prevent or support the realization of the pediatric test. To improve pediatric HIV test coverage, their harmonious management must therefore be taken into account: first, the mother herself (with her knowledge and perceptions); second, her marital environment (with the proposal of an HIV test that involves the partner and/or father, with his perceptions and knowledge of HIV infection, and the ease of discussing the test and its realization for both or one of the parents); and third, the knowledge, attitudes and practices regarding the infection among the health care workers of the facility. Our recommendations propose a redefinition of the HIV/AIDS approach towards families exposed to HIV and a stronger integration of the father, facilitating his own acceptance of HIV testing and that of his child.

  11. Parallel experimental design and multivariate analysis provides efficient screening of cell culture media supplements to improve biosimilar product quality.

    Science.gov (United States)

    Brühlmann, David; Sokolov, Michael; Butté, Alessandro; Sauer, Markus; Hemberger, Jürgen; Souquet, Jonathan; Broly, Hervé; Jordan, Martin

    2017-07-01

    Rational and high-throughput optimization of mammalian cell culture media has a great potential to modulate recombinant protein product quality. We present a process design method based on parallel design-of-experiment (DoE) of CHO fed-batch cultures in 96-deepwell plates to modulate monoclonal antibody (mAb) glycosylation using medium supplements. To reduce the risk of losing valuable information in an intricate joint screening, 17 compounds were separated into five different groups, considering their mode of biological action. The concentration ranges of the medium supplements were defined according to information encountered in the literature and in-house experience. The screening experiments produced wide glycosylation pattern ranges. Multivariate analysis including principal component analysis and decision trees was used to select the best performing glycosylation modulators. Subsequent D-optimal quadratic design with four factors (three promising compounds and temperature shift) in shake tubes confirmed the outcome of the selection process and provided a solid basis for sequential process development at a larger scale. The glycosylation profile with respect to the specifications for biosimilarity was greatly improved in shake tube experiments: 75% of the conditions were equally close or closer to the specifications for biosimilarity than the best 25% in 96-deepwell plates. Biotechnol. Bioeng. 2017;114: 1448-1458. © 2017 Wiley Periodicals, Inc.
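
The multivariate selection step can be illustrated with a toy computation. Everything below (the glycan columns, the condition values, the "biosimilarity target") is invented, and plain SVD-based PCA plus a distance ranking stands in for the paper's combination of PCA and decision trees.

```python
import numpy as np

# Hypothetical screening matrix: rows = deepwell conditions, columns =
# measured glycan fractions (e.g. G0F, G1F, G2F, Man5); values invented.
X = np.array([
    [58.0, 30.0, 6.0, 4.0],
    [52.0, 34.0, 8.0, 4.5],
    [61.0, 27.0, 5.5, 3.8],
    [45.0, 38.0, 11.0, 5.2],
    [57.0, 31.0, 6.5, 4.1],
    [48.0, 36.0, 9.5, 5.0],
])
target = np.array([55.0, 32.0, 7.0, 4.2])   # assumed biosimilarity target

# PCA via SVD on mean-centered data: scores show how conditions spread.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                        # condition coordinates in PC space
explained = s**2 / np.sum(s**2)       # variance explained per component

# Rank conditions by Euclidean distance to the target profile.
dist = np.linalg.norm(X - target, axis=1)
best = np.argsort(dist)[:2]
print(f"PC1 explains {explained[0]:.0%}; closest conditions: {best.tolist()}")
```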

  12. Physicians cite hurdles ranging from lack of coverage to poor communication in providing high-quality care to Latinos.

    Science.gov (United States)

    Vargas Bustamante, Arturo; Chen, Jie

    2011-10-01

    We surveyed physicians about their ability to provide high-quality care to patients from diverse ethnic backgrounds. Primarily, we wanted to explore the challenges faced by physicians treating Latino patients compared to physicians whose patients were primarily white and non-Latino. We found that physicians treating Latinos, particularly those who worked in primary care in comparison to specialists, were less likely than physicians treating primarily white patients to believe in their ability to provide high-quality care. They cited problems of inadequate time with patients, patients' ability to pay, patients' nonadherence to recommended treatment, difficulties communicating with patients, relative lack of specialist availability, and lack of timely transmission of reports among physicians. Insurance expansions and complementary reforms mandated by the Affordable Care Act of 2010 and other recent legislation should aid physicians in closing some of these gaps in quality.

  13. Experiences and Attitudes of Primary Care Providers Under the First Year of ACA Coverage Expansion: Findings from the Kaiser Family Foundation/Commonwealth Fund 2015 National Survey of Primary Care Providers.

    Science.gov (United States)

    2015-06-01

    A new survey from The Kaiser Family Foundation and The Commonwealth Fund asked primary care providers--physicians, nurse practitioners, and physician assistants--about their views of and experiences with the Affordable Care Act (ACA) and other changes in health care delivery and payment, as well as their thoughts on the future of primary care. In this first brief based on the survey, many providers reported seeing an increased number of patients since the coverage expansions went into effect, but not an accompanying compromise in quality of care. A large majority of primary care providers are satisfied with their medical practice, but a substantial percentage of physicians expressed pessimism about the future of primary care. Similar to the population overall, providers' views of the ACA are divided along party lines. A second brief will report on providers' reactions to other changes occurring in primary care delivery and payment.

  14. Provider's and user's perspective about immunization coverage among migratory and non-migratory population in slums and construction sites of Chandigarh.

    Science.gov (United States)

    Sharma, Vikas; Singh, Amarjeet; Sharma, Vijaylakshmi

    2015-04-01

    Strengthening routine immunization is a cornerstone for countries to achieve United Nations Millennium Development Goal 4 (MDG 4), which aims to reduce under-five mortality by two-thirds, and MDG 5, improving maternal health, compared to 1990 estimates, by 2015. Poor urban newborns are more vulnerable to many health and nutrition problems than their non-poor urban counterparts. There is therefore a need to strengthen the health system to cater to the needs of the urban poor. Standardized WHO 30×7 cluster sampling was used for slums and convenience sampling for construction sites. In-depth interviews were conducted to capture users' as well as providers' perspectives on immunization coverage. Two hundred ten children and 210 mothers were enrolled in slums and 100 were sampled from construction sites. The slum workers are considered the non-migratory group whereas the construction site workers are considered the migratory population. Among children, 23% were fully immunized, 73% partially immunized and 3% unimmunized in the non-migratory population, whereas 3% were fully immunized, 91% partially immunized and 6% unimmunized in the migratory population. Among mothers, 43% and 39% were fully immunized, 13% and 15% partially immunized, and 43% and 46% unimmunized in the non-migratory and migratory populations, respectively. The reasons attributed to low coverage were (a) dissatisfaction of users with service delivery and procedural delays (bureaucracy), (b) lack of faith in health workers, (c) insistence on an ID/vaccination card/Aadhaar card by the health worker before vaccinating a child, and (d) ignorance of the need for immunization among the people, together with migration of the population.

  15. Long-term survival of endodontically treated, maxillary anterior teeth restored with either tapered or parallel-sided glass-fiber posts and full-ceramic crown coverage.

    Science.gov (United States)

    Signore, Antonio; Benedicenti, Stefano; Kaitsas, Vassilios; Barone, Michele; Angiero, Francesca; Ravera, Giambattista

    2009-02-01

    This retrospective study investigated the clinical effectiveness, over up to 8 years, of parallel-sided and tapered glass-fiber posts, in combination with either hybrid composite or dual-cure composite resin core material, in endodontically treated maxillary anterior teeth covered with full-ceramic crowns. The study population comprised 192 patients and 526 endodontically treated teeth, with various degrees of hard-tissue loss, restored by the post-and-core technique. Four groups were defined based on post shape and core build-up materials, and within each group post-and-core restorations were assigned randomly with respect to root morphology. Inclusion criteria were symptom-free endodontic therapy, root-canal treatment with a minimum apical seal of 4 mm, application of rubber dam, need for a post-and-core complex because of coronal tooth loss, and a tooth with at least one residual coronal wall. Survival of the post-and-core restorations was determined using Kaplan-Meier analysis. The restorations were examined clinically and radiologically; the mean observation period was 5.3 years. The overall survival rate of glass-fiber post-and-core restorations was 98.5%. The survival rate was 98.6% for parallel-sided posts and 96.8% for tapered posts. Survival rates for core build-up materials were 100% for dual-cure composite and 96.8% for hybrid light-cure composite. For both glass-fiber post designs and both core build-up materials, clinical performance was satisfactory. Survival was higher for teeth retaining four or three coronal walls.
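
The Kaplan-Meier estimate used in the study can be sketched in a few lines. The follow-up times and failure indicators below are invented; the function simply applies the product-limit formula S(t) = Π (1 − d_i/n_i) over the observed failure times, with censored restorations leaving the risk set without contributing a factor.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.
    times: follow-up time per restoration; events: 1 = failure, 0 = censored.
    Returns [(time, S(t))] at each observed failure time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        ties = [e for tt, e in data if tt == t]   # all subjects at time t
        deaths = sum(ties)
        if deaths:
            s *= 1.0 - deaths / n_at_risk
            curve.append((t, s))
        n_at_risk -= len(ties)                    # failures and censored leave
        i += len(ties)
    return curve

# Invented follow-up data (years): two failures among eight restorations.
times  = [1.0, 2.0, 3.0, 4.0, 5.0, 5.3, 6.0, 8.0]
events = [0,   1,   0,   0,   1,   0,   0,   0  ]
print(kaplan_meier(times, events))
```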

  16. Regulating the for-profit private healthcare providers towards universal health coverage: A qualitative study of legal and organizational framework in Mongolia.

    Science.gov (United States)

    Tsevelvaanchig, Uranchimeg; Narula, Indermohan S; Gouda, Hebe; Hill, Peter S

    2017-05-29

    Regulating the behavior of private providers in the context of mixed health systems has become increasingly important and challenging in many developing countries moving towards universal health coverage including Mongolia. This study examines the current regulatory architecture for private healthcare in Mongolia exploring its role for improving accessibility, affordability, and quality of private care and identifies gaps in policy design and implementation. Qualitative research methods were used including documentary review, analysis, and in-depth interviews with 45 representatives of key actors involved in and affected by regulations in Mongolia's mixed health system, along with long-term participant observation. There has been extensive legal documentation developed regulating private healthcare, with specific organizations assigned to conduct health regulations and inspections. However, the regulatory architecture for healthcare in Mongolia is not optimally designed to improve affordability and quality of private care. This is not limited only to private care: important regulatory functions targeted to quality of care do not exist at the national level. The imprecise content and details of regulations in laws inviting increased political interference, governance issues, unclear roles, and responsibilities of different government regulatory bodies have contributed to failures in implementation of existing regulations. Copyright © 2017 John Wiley & Sons, Ltd.

  17. An Enumerative Combinatorics Model for Fragmentation Patterns in RNA Sequencing Provides Insights into Nonuniformity of the Expected Fragment Starting-Point and Coverage Profile.

    Science.gov (United States)

    Prakash, Celine; Haeseler, Arndt Von

    2017-03-01

    RNA sequencing (RNA-seq) has emerged as the method of choice for measuring the expression of RNAs in a given cell population. In most RNA-seq technologies, sequencing the full length of RNA molecules requires fragmentation into smaller pieces. Unfortunately, the issue of nonuniform sequencing coverage across a genomic feature has been a concern in RNA-seq and is attributed to biases for certain fragments in RNA-seq library preparation and sequencing. To investigate the expected coverage obtained from fragmentation, we develop a simple fragmentation model that is independent of bias from the experimental method and is not specific to the transcript sequence. Essentially, we enumerate all configurations for maximal placement of a given fragment length, F, on transcript length, T, to represent every possible fragmentation pattern, from which we compute the expected coverage profile across a transcript. We extend this model to incorporate general empirical attributes such as read length, fragment length distribution, and number of molecules of the transcript. We further introduce the fragment starting-point, fragment coverage, and read coverage profiles. We find that the expected profiles are not uniform and that factors such as fragment length to transcript length ratio, read length to fragment length ratio, fragment length distribution, and number of molecules influence the variability of coverage across a transcript. Finally, we explore a potential application of the model where, with simulations, we show that it is possible to correctly estimate the transcript copy number for any transcript in the RNA-seq experiment.
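
A stripped-down version of the fragmentation idea can be computed directly. The sketch below is not the paper's enumerative model: it only places a single fragment of length F uniformly over the T − F + 1 possible starts and sequences one read of length R from the fragment's 5' end, which is already enough to produce a nonuniform expected read-coverage profile with ramps at the transcript ends.

```python
def expected_coverage(T, F, R):
    """Expected per-base read coverage when a fragment of length F is placed
    uniformly over the T - F + 1 possible starts on a transcript of length T
    and a read of length R is taken from the fragment's 5' end."""
    cov = [0] * T
    n_placements = T - F + 1
    for start in range(n_placements):
        for pos in range(start, start + min(R, F)):
            cov[pos] += 1
    return [c / n_placements for c in cov]

profile = expected_coverage(T=20, F=6, R=4)
# Coverage ramps up near the 5' end and drops to zero near the 3' end,
# since a 5'-anchored read never reaches the last F - R bases.
print([round(c, 2) for c in profile])
```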

  18. Immunization Coverage

    Science.gov (United States)

    ... country, and global coverage was estimated at 25%. Rubella is a viral disease which is usually mild in children, but infection during early pregnancy may cause fetal death or congenital rubella syndrome, ...

  19. Functional coverages

    OpenAIRE

    Donchyts, G.; Baart, F.; Jagers, H.R.A.; Van Dam, A.

    2011-01-01

    A new Application Programming Interface (API) is presented which simplifies working with geospatial coverages as well as many other data structures of a multi-dimensional nature. The main idea extends the Common Data Model (CDM) developed at the University Corporation for Atmospheric Research (UCAR). The proposed function object model uses the mathematical definition of a vector-valued function. A geospatial coverage will be expressed as a vector-valued function whose dependent variables (the...

  20. Providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer

    Energy Technology Data Exchange (ETDEWEB)

    Archer, Charles J.; Faraj, Daniel A.; Inglett, Todd A.; Ratterman, Joseph D.

    2018-01-30

    Methods, apparatus, and products are disclosed for providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: receiving a network packet in a compute node, the network packet specifying a destination compute node; selecting, in dependence upon the destination compute node, at least one of the links for the compute node along which to forward the network packet toward the destination compute node; and forwarding the network packet along the selected link to the adjacent compute node connected to the compute node through the selected link.
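
The link-selection idea in the abstract can be illustrated on a toy tree network. The sketch below assumes a heap-numbered binary tree as a stand-in for the global combining network (the patent's actual topology and routing rules are more involved): at each hop a node picks the one link, parent or child, that leads toward the destination.

```python
def route(src, dst):
    """Forward a packet hop by hop through a binary-tree network.
    Nodes are numbered heap-style (root = 1, children of n are 2n, 2n+1);
    each node selects the single link leading toward the destination."""
    # Precompute dst's ancestor chain to decide when to stop climbing.
    dst_ancestors = set()
    d = dst
    while d >= 1:
        dst_ancestors.add(d)
        d //= 2
    path = [src]
    node = src
    while node != dst:
        if node not in dst_ancestors:
            node //= 2                     # forward on the parent link
        else:
            child = dst                    # descend toward dst: pick the
            while child // 2 != node:      # child on dst's ancestor chain
                child //= 2
            node = child
        path.append(node)
    return path

print(route(5, 6))   # climbs to the common ancestor (1), then descends
```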

  1. Coverage with Evidence Development: applications and issues.

    Science.gov (United States)

    Trueman, Paul; Grainger, David L; Downs, Kristen E

    2010-01-01

    The aim of this study was to describe the current issues surrounding Coverage with Evidence Development (CED). CED is characterized by restricted coverage for a new technology in parallel with targeted research when the stated goal of the research or data collection is to provide definitive evidence for the clinical or cost-effectiveness impact of the new technology. Presented here is information summarized and interpreted from presentations and discussions at the 2008 Health Technology Assessment International (HTAi) meeting and additional information from the medical literature. This study describes the differences between CED and other conditional coverage agreements, provides a brief history of CED, describes real-world examples of CED, describes the areas of consensus between the stakeholders, discusses the areas for future negotiation between stakeholders, and proposes criteria to assist stakeholders in determining when CED could be appropriate. Payers could interpret the evidence obtained from a CED program either positively or negatively, and a range of possible changes to the reimbursement status of the new technology may result. Striking an appropriate balance between the demands for prompt access to new technology and acknowledging that some degree of uncertainty will always exist is a critical challenge to the uptake of this innovative form of conditional coverage. When used selectively for innovative procedures, pharmaceuticals, or devices in the appropriate disease areas, CED may provide patients access to promising medicines or technologies while data to minimize uncertainty are collected.

  2. Coverage Metrics for Model Checking

    Science.gov (United States)

    Penix, John; Visser, Willem; Norvig, Peter (Technical Monitor)

    2001-01-01

    When using model checking to verify programs in practice, it is not usually possible to achieve complete coverage of the system. In this position paper we describe ongoing research within the Automated Software Engineering group at NASA Ames on the use of test coverage metrics to measure partial coverage and provide heuristic guidance for program model checking. We are specifically interested in applying and developing coverage metrics for concurrent programs that might be used to support certification of next generation avionics software.
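
One simple way to make "partial coverage" concrete is to track how much of the discovered state space a budget-limited search actually visits. The toy below illustrates that idea only and is not the NASA Ames tooling: a bounded breadth-first exploration of a two-counter system reports visited states as a fraction of discovered states.

```python
from collections import deque

def explore(initial, successors, max_states):
    """Bounded breadth-first state-space search with a coverage metric:
    the fraction of discovered states actually visited before the
    budget ran out (a crude stand-in for partial-coverage measures)."""
    seen = {initial}
    frontier = deque([initial])
    visited = 0
    while frontier and visited < max_states:
        state = frontier.popleft()
        visited += 1
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return visited, len(seen), visited / len(seen)

# Toy concurrent system: two counters, each incremented independently (0..3).
def succ(s):
    a, b = s
    return [(min(a + 1, 3), b), (a, min(b + 1, 3))]

v, d, cov = explore((0, 0), succ, max_states=10)
print(f"visited {v} of {d} discovered states ({cov:.0%} coverage)")
```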

  3. Functional coverages

    NARCIS (Netherlands)

    Donchyts, G.; Baart, F.; Jagers, H.R.A.; Van Dam, A.

    2011-01-01

    A new Application Programming Interface (API) is presented which simplifies working with geospatial coverages as well as many other data structures of a multi-dimensional nature. The main idea extends the Common Data Model (CDM) developed at the University Corporation for Atmospheric Research

  4. Practical parallel computing

    CERN Document Server

    Morse, H Stephen

    1994-01-01

    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment.Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi

  5. Parallel processing ITS

    Energy Technology Data Exchange (ETDEWEB)

    Fan, W.C.; Halbleib, J.A. Sr.

    1996-09-01

    This report provides a users' guide for parallel processing ITS on a UNIX workstation network, a shared-memory multiprocessor or a massively-parallel processor. The parallelized version of ITS is based on a master/slave model with message passing. Parallel issues such as random number generation, load balancing, and communication software are briefly discussed. Timing results for example problems are presented for demonstration purposes.
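
The master/slave model with message passing can be sketched with a thread-based work pool. The code below is an illustrative stand-in, not the ITS implementation (which passes messages between processors): the master fills a queue of tasks, idle slaves pull work until the queue is empty, and results are tallied centrally. Load balancing falls out naturally because faster slaves simply pull more tasks.

```python
import queue
import threading

def master_slave(tasks, n_workers):
    """Minimal master/slave work pool: the master queues tasks, slave
    threads drain the queue, results are collected centrally."""
    work = queue.Queue()
    results = queue.Queue()
    for t in tasks:
        work.put(t)

    def slave():
        while True:
            try:
                t = work.get_nowait()
            except queue.Empty:
                return                  # no work left: slave retires
            results.put(t * t)          # stand-in for transporting a history
            work.task_done()

    workers = [threading.Thread(target=slave) for _ in range(n_workers)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    # results.queue is CPython's internal deque; fine for a sketch.
    return sorted(results.queue)

print(master_slave(range(5), n_workers=3))   # squares, computed in parallel
```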

  6. Multiwavelength Study of Quiescent States of Mrk 421 with Unprecedented Hard X-Ray Coverage Provided by NuSTAR in 2013

    CERN Document Server

    Baloković, M.; Madejski, G.; Furniss, A.; Chiang, J.; Ajello, M.; Alexander, D.M.; Barret, D.; Blandford, R.; Boggs, S.E.; Christensen, F.E.; Craig, W.W.; Forster, K.; Giommi, P.; Grefenstette, B.W.; Hailey, C.J.; Harrison, F.A.; Hornstrup, A.; Kitaguchi, T.; Koglin, J.E.; Madsen, K.K.; Mao, P.H.; Miyasaka, H.; Mori, K.; Perri, M.; Pivovaroff, M.J.; Puccetti, S.; Rana, V.; Stern, D.; Tagliaferri, G.; Urry, C.M.; Westergaard, N.J.; Zhang, W.W.; Zoglauer, A.; Archambault, S.; Archer, A.A.; Barnacka, A.; Benbow, W.; Bird, R.; Buckley, J.; Bugaev, V.; Cerruti, M.; Chen, X.; Ciupik, L.; Connolly, M.P.; Cui, W.; Dickinson, H.J.; Dumm, J.; Eisch, J.D.; Falcone, A.; Feng, Q.; Finley, J.P.; Fleischhack, H.; Fortson, L.; Griffin, S.; Griffiths, S.T.; Grube, J.; Gyuk, G.; Huetten, M.; Haakansson, N.; Holder, J.; Humensky, T.B.; Johnson, C.A.; Kaaret, P.; Kertzman, M.; Khassen, Y.; Kieda, D.; Krause, M.; Krennrich, F.; Lang, M.J.; Maier, G.; McArthur, S.; Meagher, K.; Moriarty, P.; Nelson, T.; Nieto, D.; Ong, R.A.; Park, N.; Pohl, M.; Popkow, A.; Pueschel, E.; Reynolds, P.T.; Richards, G.T.; Roache, E.; Santander, M.; Sembroski, G.H.; Shahinyan, K.; Smith, A.W.; Staszak, D.; Telezhinsky, I.; Todd, N.W.; Tucci, J.V.; Tyler, J.; Vincent, S.; Weinstein, A.; Wilhelm, A.; Williams, D.A.; Zitzer, B.; Ahnen, M.L.; Ansoldi, S.; Antonelli, L.A.; Antoranz, P.; Babic, A.; Banerjee, B.; Bangale, P.; Barres de Almeida, U.; Barrio, J.; Becerra González, J.; Bednarek, W.; Bernardini, E.; Biasuzzi, B.; Biland, A.; Blanch, O.; Bonnefoy, S.; Bonnoli, G.; Borracci, F.; Bretz, T.; Carmona, E.; Carosi, A.; Chatterjee, A.; Clavero, R.; Colin, P.; Colombo, E.; Contreras, J.L.; Cortina, J.; Covino, S.; Da Vela, P.; Dazzi, F.; de Angelis, A.; De Lotto, B.; Wilhelmi, E. D. de Oña; Delgado Mendez, C.; Di Pierro, F.; Dominis Prester, D.; Dorner, D.; Doro, M.; Einecke, S.; Elsaesser, D.; Fernández-Barral, A.; Fidalgo, D.; Fonseca, M.V.; Font, L.; Frantzen, K.; Fruck, C.; Galindo, D.; López, R. J. 
García; Garczarczyk, M.; Garrido Terrats, D.; Gaug, M.; Giammaria, P.; Eisenacher, D.; Godinović, N.; González Muñoz, A.; Guberman, D.; Hahn, A.; Hanabata, Y.; Hayashida, M.; Herrera, J.; Hose, J.; Hrupec, D.; Hughes, G.; Idec, W.; Kodani, K.; Konno, Y.; Kubo, H.; Kushida, J.; La Barbera, A.; Lelas, D.; Lindfors, E.; Lombardi, S.; Longo, F.; López, M.; López-Coto, R.; López-Oramas, A.; Lorenz, E.; Majumdar, P.; Makariev, M.; Mallot, K.; Maneva, G.; Manganaro, M.; Mannheim, K.; Maraschi, L.; Marcote, B.; Mariotti, M.; Martínez, M.; Mazin, D.; Menzel, U.; Miranda, J.M.; Mirzoyan, R.; Moralejo, A.; Moretti, E.; Nakajima, D.; Neustroev, V.; Niedzwiecki, A.; Nievas-Rosillo, M.; Nilsson, K.; Nishijima, K.; Noda, K.; Orito, R.; Overkemping, A.; Paiano, S.; Palacio, S.; Palatiello, M.; Paoletti, R.; Paredes, J.M.; Paredes-Fortuny, X.; Persic, M.; Poutanen, J.; Prada Moroni, P. G.; Prandini, E.; Puljak, I.; Rhode, W.; Ribó, M.; Rico, J.; Garcia, J. Rodriguez; Saito, T.; Satalecka, K.; Scapin, V.; Schultz, C.; Schweizer, T.; Shore, S.N.; Sillanpää, A.; Sitarek, J.; Snidaric, I.; Sobczynska, D.; Stamerra, A.; Steinbring, T.; Strzys, M.; Takalo, L.O.; Takami, H.; Tavecchio, F.; Temnikov, P.; Terzić, T.; Tescaro, D.; Teshima, M.; Thaele, J.; Torres, D.F.; Toyama, T.; Treves, A.; Verguilov, V.; Vovk, I.; Ward, J.E.; Will, M.; Wu, M.H.; Zanin, R.; Perkins, J.; Verrecchia, F.; Leto, C.; Böttcher, M.; Villata, M.; Raiteri, C.M.; Acosta-Pulido, J.A.; Bachev, R.; Berdyugin, A.; Blinov, D.A.; Carnerero, M.I.; Chen, W.P.; Chinchilla, P.; Damljanovic, G.; Eswaraiah, C.; Grishina, T.S.; Ibryamov, S.; Jordan, B.; Jorstad, S.G.; Joshi, M.; Kopatskaya, E.N.; Kurtanidze, O.M.; Kurtanidze, S.O.; Larionova, E.G.; Larionova, L.V.; Larionov, V.M.; Latev, G.; Lin, H.C.; Marscher, A.P.; Mokrushina, A.A.; Morozova, D.A.; Nikolashvili, M.G.; Semkov, E.; Strigachev, A.; Troitskaya, Yu. 
V.; Troitsky, I.S.; Vince, O.; Barnes, J.; Güver, T.; Moody, J.W.; Sadun, A.C.; Sun, S.; Hovatta, T.; Richards, J.L.; Max-Moerbeck, W.; Readhead, A.C.; Lähteenmäki, A.; Tornikoski, M.; Tammi, J.; Ramakrishnan, V.; Reinthal, R.; Angelakis, E.; Fuhrmann, L.; Myserlis, I.; Karamanavis, V.; Sievers, A.; Ungerechts, H.; Zensus, J.A.

    2016-01-01

    We present coordinated multiwavelength observations of the bright, nearby BL Lac object Mrk 421 taken in 2013 January-March, involving GASP-WEBT, Swift, NuSTAR, Fermi-LAT, MAGIC, VERITAS, and other collaborations and instruments, providing data from radio to very-high-energy (VHE) gamma-ray bands. NuSTAR yielded previously unattainable sensitivity in the 3-79 keV range, revealing that the spectrum softens when the source is dimmer until the X-ray spectral shape saturates into a steep power law with a photon index of approximately 3, with no evidence for an exponential cutoff or additional hard components up to about 80 keV. For the first time, we observed both the synchrotron and the inverse-Compton peaks of the spectral energy distribution (SED) simultaneously shifted to frequencies below the typical quiescent state by an order of magnitude. The fractional variability as a function of photon energy shows a double-bump structure which relates to the two bumps of the broadband SED. In each bump, the variabilit...

  7. Multiwavelength Study of Quiescent States of Mrk 421 with Unprecedented Hard X-Ray Coverage Provided by NuSTAR in 2013

    Science.gov (United States)

    Baloković, M.; Paneque, D.; Madejski, G.; Furniss, A.; Chiang, J.; Ajello, M.; Alexander, D. M.; Barret, D.; Blandford, R. D.; Boggs, S. E.; Christensen, F. E.; Craig, W. W.; Forster, K.; Giommi, P.; Grefenstette, B.; Hailey, C.; Harrison, F. A.; Hornstrup, A.; Kitaguchi, T.; Koglin, J. E.; Madsen, K. K.; Mao, P. H.; Miyasaka, H.; Mori, K.; Perri, M.; Pivovaroff, M. J.; Puccetti, S.; Rana, V.; Stern, D.; Tagliaferri, G.; Urry, C. M.; Westergaard, N. J.; Zhang, W. W.; Zoglauer, A.; NuSTAR Team; Archambault, S.; Archer, A.; Barnacka, A.; Benbow, W.; Bird, R.; Buckley, J. H.; Bugaev, V.; Cerruti, M.; Chen, X.; Ciupik, L.; Connolly, M. P.; Cui, W.; Dickinson, H. J.; Dumm, J.; Eisch, J. D.; Falcone, A.; Feng, Q.; Finley, J. P.; Fleischhack, H.; Fortson, L.; Griffin, S.; Griffiths, S. T.; Grube, J.; Gyuk, G.; Huetten, M.; Håkansson, N.; Holder, J.; Humensky, T. B.; Johnson, C. A.; Kaaret, P.; Kertzman, M.; Khassen, Y.; Kieda, D.; Krause, M.; Krennrich, F.; Lang, M. J.; Maier, G.; McArthur, S.; Meagher, K.; Moriarty, P.; Nelson, T.; Nieto, D.; Ong, R. A.; Park, N.; Pohl, M.; Popkow, A.; Pueschel, E.; Reynolds, P. T.; Richards, G. T.; Roache, E.; Santander, M.; Sembroski, G. H.; Shahinyan, K.; Smith, A. W.; Staszak, D.; Telezhinsky, I.; Todd, N. W.; Tucci, J. V.; Tyler, J.; Vincent, S.; Weinstein, A.; Wilhelm, A.; Williams, D. A.; Zitzer, B.; VERITAS Collaboration; Ahnen, M. L.; Ansoldi, S.; Antonelli, L. A.; Antoranz, P.; Babic, A.; Banerjee, B.; Bangale, P.; Barres de Almeida, U.; Barrio, J. A.; Becerra González, J.; Bednarek, W.; Bernardini, E.; Biasuzzi, B.; Biland, A.; Blanch, O.; Bonnefoy, S.; Bonnoli, G.; Borracci, F.; Bretz, T.; Carmona, E.; Carosi, A.; Chatterjee, A.; Clavero, R.; Colin, P.; Colombo, E.; Contreras, J. 
L.; Cortina, J.; Covino, S.; Da Vela, P.; Dazzi, F.; De Angelis, A.; De Lotto, B.; de Oña Wilhelmi, E.; Delgado Mendez, C.; Di Pierro, F.; Dominis Prester, D.; Dorner, D.; Doro, M.; Einecke, S.; Elsaesser, D.; Fernández-Barral, A.; Fidalgo, D.; Fonseca, M. V.; Font, L.; Frantzen, K.; Fruck, C.; Galindo, D.; García López, R. J.; Garczarczyk, M.; Garrido Terrats, D.; Gaug, M.; Giammaria, P.; Glawion (Eisenacher, D.; Godinović, N.; González Muñoz, A.; Guberman, D.; Hahn, A.; Hanabata, Y.; Hayashida, M.; Herrera, J.; Hose, J.; Hrupec, D.; Hughes, G.; Idec, W.; Kodani, K.; Konno, Y.; Kubo, H.; Kushida, J.; La Barbera, A.; Lelas, D.; Lindfors, E.; Lombardi, S.; Longo, F.; López, M.; López-Coto, R.; López-Oramas, A.; Lorenz, E.; Majumdar, P.; Makariev, M.; Mallot, K.; Maneva, G.; Manganaro, M.; Mannheim, K.; Maraschi, L.; Marcote, B.; Mariotti, M.; Martínez, M.; Mazin, D.; Menzel, U.; Miranda, J. M.; Mirzoyan, R.; Moralejo, A.; Moretti, E.; Nakajima, D.; Neustroev, V.; Niedzwiecki, A.; Nievas Rosillo, M.; Nilsson, K.; Nishijima, K.; Noda, K.; Orito, R.; Overkemping, A.; Paiano, S.; Palacio, J.; Palatiello, M.; Paoletti, R.; Paredes, J. M.; Paredes-Fortuny, X.; Persic, M.; Poutanen, J.; Prada Moroni, P. G.; Prandini, E.; Puljak, I.; Rhode, W.; Ribó, M.; Rico, J.; Rodriguez Garcia, J.; Saito, T.; Satalecka, K.; Scapin, V.; Schultz, C.; Schweizer, T.; Shore, S. N.; Sillanpää, A.; Sitarek, J.; Snidaric, I.; Sobczynska, D.; Stamerra, A.; Steinbring, T.; Strzys, M.; Takalo, L.; Takami, H.; Tavecchio, F.; Temnikov, P.; Terzić, T.; Tescaro, D.; Teshima, M.; Thaele, J.; Torres, D. F.; Toyama, T.; Treves, A.; Verguilov, V.; Vovk, I.; Ward, J. E.; Will, M.; Wu, M. H.; Zanin, R.; MAGIC Collaboration; Perkins, J.; Verrecchia, F.; Leto, C.; Böttcher, M.; Villata, M.; Raiteri, C. M.; Acosta-Pulido, J. A.; Bachev, R.; Berdyugin, A.; Blinov, D. A.; Carnerero, M. I.; Chen, W. P.; Chinchilla, P.; Damljanovic, G.; Eswaraiah, C.; Grishina, T. S.; Ibryamov, S.; Jordan, B.; Jorstad, S. 
G.; Joshi, M.; Kopatskaya, E. N.; Kurtanidze, O. M.; Kurtanidze, S. O.; Larionova, E. G.; Larionova, L. V.; Larionov, V. M.; Latev, G.; Lin, H. C.; Marscher, A. P.; Mokrushina, A. A.; Morozova, D. A.; Nikolashvili, M. G.; Semkov, E.; Smith, P. S.; Strigachev, A.; Troitskaya, Yu. V.; Troitsky, I. S.; Vince, O.; Barnes, J.; Güver, T.; Moody, J. W.; Sadun, A. C.; Sun, S.; Hovatta, T.; Richards, J. L.; Max-Moerbeck, W.; Readhead, A. C. R.; Lähteenmäki, A.; Tornikoski, M.; Tammi, J.; Ramakrishnan, V.; Reinthal, R.; Angelakis, E.; Fuhrmann, L.; Myserlis, I.; Karamanavis, V.; Sievers, A.; Ungerechts, H.; Zensus, J. A.

    2016-03-01

    We present coordinated multiwavelength observations of the bright, nearby BL Lacertae object Mrk 421 taken in 2013 January-March, involving GASP-WEBT, Swift, NuSTAR, Fermi-LAT, MAGIC, VERITAS, and other collaborations and instruments, providing data from radio to very high energy (VHE) γ-ray bands. NuSTAR yielded previously unattainable sensitivity in the 3-79 keV range, revealing that the spectrum softens when the source is dimmer until the X-ray spectral shape saturates into a steep Γ ≈ 3 power law, with no evidence for an exponential cutoff or additional hard components up to ~80 keV. For the first time, we observed both the synchrotron and the inverse-Compton peaks of the spectral energy distribution (SED) simultaneously shifted to frequencies below the typical quiescent state by an order of magnitude. The fractional variability as a function of photon energy shows a double-bump structure that relates to the two bumps of the broadband SED. In each bump, the variability increases with energy, which, in the framework of the synchrotron self-Compton model, implies that the electrons with higher energies are more variable. The measured multiband variability, the significant X-ray-to-VHE correlation down to some of the lowest fluxes ever observed in both bands, the lack of correlation between optical/UV and X-ray flux, the low degree of polarization and its significant (random) variations, the short estimated electron cooling time, and the significantly longer variability timescale observed in the NuSTAR light curves point toward in situ electron acceleration and suggest that there are multiple compact regions contributing to the broadband emission of Mrk 421 during low-activity states.

  8. Maintaining Differentiated Coverage in Heterogeneous Sensor Networks

    Directory of Open Access Journals (Sweden)

    Du Xiaojiang

    2005-01-01

    Full Text Available Most existing research considers homogeneous sensor networks, which suffer from performance bottlenecks and poor scalability. In this paper, we adopt a heterogeneous sensor network model to overcome these problems. Sensing coverage is a fundamental problem in sensor networks and has been well studied in recent years. However, most coverage algorithms consider only the uniform coverage problem, that is, all areas have the same coverage degree requirement. In many scenarios, some key areas need a high coverage degree while other areas need only a low coverage degree. We propose a differentiated coverage algorithm that can provide different coverage degrees for different areas. The algorithm is energy efficient since it keeps only the minimum number of sensors active. The performance of the differentiated coverage algorithm is evaluated through extensive simulation experiments. Our results show that the algorithm performs much better than other differentiated coverage algorithms.
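    The core idea of differentiated coverage can be sketched with a simple greedy selection: each area (point) carries its own required coverage degree, and sensors are activated until every requirement is met. This is an illustrative sketch, not the paper's algorithm; all names and the greedy strategy are assumptions.

```python
import math

def greedy_differentiated_cover(sensors, points, radius, required):
    """Activate a small set of sensors so that each point p is covered
    by at least required[p] sensors (its coverage-degree requirement).
    Greedy: repeatedly activate the sensor that reduces the most total
    remaining deficit. Returns the indices of active sensors."""
    deficit = dict(required)                 # remaining degree needed per point
    covers = {p: [i for i, s in enumerate(sensors)
                  if math.dist(s, p) <= radius]
              for p in points}               # point -> sensors in range
    active = set()
    while any(d > 0 for d in deficit.values()):
        best, best_gain = None, 0
        for i in range(len(sensors)):
            if i in active:
                continue
            gain = sum(1 for p in points if deficit[p] > 0 and i in covers[p])
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:                     # requirements unsatisfiable
            raise ValueError("not enough sensors in range")
        active.add(best)
        for p in points:
            if best in covers[p]:
                deficit[p] -= 1
    return active
```

    A key area can simply be given `required[p] = 2` or more while ordinary areas keep `required[p] = 1`, so energy is spent only where the higher degree is actually demanded.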

  9. 49 CFR 19.31 - Insurance coverage.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 1 2010-10-01 2010-10-01 false Insurance coverage. 19.31 Section 19.31... Requirements Property Standards § 19.31 Insurance coverage. Recipients shall, at a minimum, provide the equivalent insurance coverage for real property and equipment acquired with Federal funds as provided to...

  10. Medicare Coverage Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Medicare Coverage Database (MCD) contains all National Coverage Determinations (NCDs) and Local Coverage Determinations (LCDs), local articles, and proposed NCD...

  11. A novel, multi-parallel, real-time polymerase chain reaction approach for eight gastrointestinal parasites provides improved diagnostic capabilities to resource-limited at-risk populations.

    Science.gov (United States)

    Mejia, Rojelio; Vicuña, Yosselin; Broncano, Nely; Sandoval, Carlos; Vaca, Maritza; Chico, Martha; Cooper, Philip J; Nutman, Thomas B

    2013-06-01

    Diagnosis of gastrointestinal parasites has traditionally relied on stool microscopy, which has low diagnostic sensitivity and specificity. We have developed a novel, rapid, high-throughput quantitative multi-parallel real-time polymerase chain reaction (qPCR) platform. Species-specific primers/probes were used for eight common gastrointestinal parasite pathogens: Ascaris lumbricoides, Necator americanus, Ancylostoma duodenale, Giardia lamblia, Cryptosporidium spp., Entamoeba histolytica, Trichuris trichiura, and Strongyloides stercoralis. Stool samples from 400 13-month-old children in rural Ecuador were analyzed, and the qPCR was compared with standard direct wet mount stool microscopy; the same comparison was made for 125 8-14-year-old children before and after anthelmintic treatment. The qPCR showed higher detection rates than direct microscopy for all parasites, including Ascaris (7.0% versus 5.5%) and Giardia (31.5% versus 5.8%). Using an enhanced DNA extraction method, we were also able to detect T. trichiura DNA. These assays will be useful for refining treatment options for affected populations, ultimately leading to better health outcomes.

  12. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  13. 15 CFR 14.31 - Insurance coverage.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false Insurance coverage. 14.31 Section 14... COMMERCIAL ORGANIZATIONS Post-Award Requirements Property Standards § 14.31 Insurance coverage. Recipients shall, at a minimum, provide the equivalent insurance coverage for real property and equipment acquired...

  14. 40 CFR 30.31 - Insurance coverage.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Insurance coverage. 30.31 Section 30.31... NON-PROFIT ORGANIZATIONS Post-Award Requirements Property Standards § 30.31 Insurance coverage. Recipients shall, at a minimum, provide the equivalent insurance coverage for real property and equipment...

  15. 45 CFR 74.31 - Insurance coverage.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Insurance coverage. 74.31 Section 74.31 Public..., AND COMMERCIAL ORGANIZATIONS Post-Award Requirements Property Standards § 74.31 Insurance coverage. Recipients shall, at a minimum, provide the equivalent insurance coverage for real property and equipment...

  16. 28 CFR 70.31 - Insurance coverage.

    Science.gov (United States)

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Insurance coverage. 70.31 Section 70.31...-PROFIT ORGANIZATIONS Post-Award Requirements Property Standards § 70.31 Insurance coverage. Recipients must, at a minimum, provide the equivalent insurance coverage for real property and equipment acquired...

  17. 32 CFR 32.31 - Insurance coverage.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 1 2010-07-01 2010-07-01 false Insurance coverage. 32.31 Section 32.31 National... NON-PROFIT ORGANIZATIONS Post-Award Requirements Property Standards § 32.31 Insurance coverage. Recipients shall, at a minimum, provide the equivalent insurance coverage for real property and equipment...

  18. 38 CFR 49.31 - Insurance coverage.

    Science.gov (United States)

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2010-07-01 2010-07-01 false Insurance coverage. 49.31... NON-PROFIT ORGANIZATIONS Post-Award Requirements Property Standards § 49.31 Insurance coverage. Recipients shall, at a minimum, provide the equivalent insurance coverage for real property and equipment...

  19. 24 CFR 84.31 - Insurance coverage.

    Science.gov (United States)

    2010-04-01

    ... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Insurance coverage. 84.31 Section 84.31 Housing and Urban Development Office of the Secretary, Department of Housing and Urban... Insurance coverage. Recipients shall, at a minimum, provide the equivalent insurance coverage for real...

  20. Effective coverage: a metric for monitoring Universal Health Coverage.

    Directory of Open Access Journals (Sweden)

    Marie Ng

    2014-09-01

    Full Text Available A major challenge in monitoring universal health coverage (UHC is identifying an indicator that can adequately capture the multiple components underlying the UHC initiative. Effective coverage, which unites individual and intervention characteristics into a single metric, offers a direct and flexible means to measure health system performance at different levels. We view effective coverage as a relevant and actionable metric for tracking progress towards achieving UHC. In this paper, we review the concept of effective coverage and delineate the three components of the metric - need, use, and quality - using several examples. Further, we explain how the metric can be used for monitoring interventions at both local and global levels. We also discuss the ways that current health information systems can support generating estimates of effective coverage. We conclude by recognizing some of the challenges associated with producing estimates of effective coverage. Despite these challenges, effective coverage is a powerful metric that can provide a more nuanced understanding of whether, and how well, a health system is delivering services to its populations.
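    The need/use/quality decomposition described above can be made concrete with a toy calculation. This is an illustrative sketch (field names and the exact weighting are assumptions, not the paper's formula): effective coverage here is the quality-weighted use among those in need, so it is always at or below crude coverage.

```python
def effective_coverage(people):
    """Toy effective coverage: among those who need the intervention,
    the quality-weighted fraction who actually use it.
        EC = sum(use_i * quality_i) / count(need_i)
    `people` is a list of dicts with keys 'need' (bool), 'use' (bool),
    and 'quality' (0..1, quality of the service actually received)."""
    in_need = [p for p in people if p["need"]]
    if not in_need:
        return None          # metric undefined when nobody needs the service
    return sum(p["quality"] for p in in_need if p["use"]) / len(in_need)
```

    For example, if four people need a service, two use it, and the services received score 1.0 and 0.5 on quality, crude coverage is 0.5 but effective coverage is 1.5/4 = 0.375, capturing the quality shortfall that crude coverage hides.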

  1. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  2. Post Auction Coverage Baseline 2.0

    Data.gov (United States)

    Federal Communications Commission — FINAL TELEVISION CHANNEL ASSIGNMENT INFORMATION RELATED TO INCENTIVE AUCTION REPACKING. NOTE: This file provides new baseline coverage and population data for all...

  3. Women's Health Insurance Coverage

    Science.gov (United States)

    Women's Health Policy: Women's Health Insurance Coverage. Published: Oct 31, 2017. ... that many women continue to face. Sources of Health Insurance Coverage: Employer-Sponsored Insurance: Approximately 57.9 million ...

  4. Scalable parallel communications

    Science.gov (United States)

    Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

    1992-01-01

    Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulation studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors, and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, other TCP's running in parallel provide high bandwidth
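    The striping idea behind coarse-grain parallelism, splitting one application's traffic into numbered chunks sent over several channels and reassembling them by sequence number, can be sketched as a toy in-process model (queues and threads stand in for physical channels and protocol processors; this is an illustration, not the paper's simulator):

```python
import queue
import threading

def parallel_send(message: bytes, n_channels: int, chunk: int = 4) -> bytes:
    """Split a message into numbered chunks, stripe them round-robin
    over n parallel 'channels' (one queue serviced by one receiver
    thread each), and reassemble by sequence number at the far end."""
    channels = [queue.Queue() for _ in range(n_channels)]
    received = {}
    lock = threading.Lock()

    def receiver(ch):
        while True:
            item = ch.get()
            if item is None:                 # sentinel: channel closed
                return
            seq, data = item
            with lock:                       # chunks may arrive in any order
                received[seq] = data

    threads = [threading.Thread(target=receiver, args=(c,)) for c in channels]
    for t in threads:
        t.start()

    # sender side: stripe chunks across channels round-robin
    chunks = [message[i:i + chunk] for i in range(0, len(message), chunk)]
    for seq, data in enumerate(chunks):
        channels[seq % n_channels].put((seq, data))
    for c in channels:
        c.put(None)
    for t in threads:
        t.join()

    return b"".join(received[seq] for seq in sorted(received))
```

    Because chunks carry sequence numbers, the receiver tolerates out-of-order arrival across channels, which is what makes the space-division-multiplexed design degrade gracefully when one channel slows down.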

  5. Coverage of the Stanford Prison Experiment in Introductory Psychology Courses

    Science.gov (United States)

    Bartels, Jared M.; Milovich, Marilyn M.; Moussier, Sabrina

    2016-01-01

    The present study examined the coverage of Stanford prison experiment (SPE), including criticisms of the study, in introductory psychology courses through an online survey of introductory psychology instructors (N = 117). Results largely paralleled those of the recently published textbook analyses with ethical issues garnering the most coverage,…

  6. Parallel programming with PCN

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.; Tuecke, S.

    1991-09-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, a set of tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory at info.mcs.anl.gov.

  7. Parallel programming with PCN

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.; Tuecke, S.

    1991-12-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (c.f. Appendix A).

  8. NSW annual immunisation coverage report, 2011.

    Science.gov (United States)

    Hull, Brynley; Dey, Aditi; Campbell-Lloyd, Sue; Menzies, Robert I; McIntyre, Peter B

    2012-12-01

    This annual report, the third in the series, documents trends in immunisation coverage in NSW for children, adolescents and the elderly, to the end of 2011. Data from the Australian Childhood Immunisation Register, the NSW School Immunisation Program and the NSW Population Health Survey were used to calculate various measures of population coverage. During 2011, greater than 90% coverage was maintained for children at 12 and 24 months of age. For children at 5 years of age the improvement seen in 2010 was sustained, with coverage at or near 90%. For adolescents, there was improved coverage for all doses of human papillomavirus vaccine, both doses of hepatitis B vaccine, varicella vaccine and the dose of diphtheria, tetanus and acellular pertussis given to school attendees in Years 7 and 10. Pneumococcal vaccination coverage in the elderly has been steadily rising, although it has remained lower than the influenza coverage estimates. This report provides trends in immunisation coverage in NSW across the age spectrum. The inclusion of coverage estimates for the pneumococcal conjugate, varicella and meningococcal C vaccines in the official coverage assessments for 'fully immunised' in 2013 is a welcome initiative.

  9. Parallel R

    CERN Document Server

    McCallum, Ethan

    2011-01-01

    It's tough to argue with R as a high-quality, cross-platform, open source statistical software product-unless you're in the business of crunching Big Data. This concise book introduces you to several strategies for using R to analyze large datasets. You'll learn the basics of Snow, Multicore, Parallel, and some Hadoop-related tools, including how to find them, how to use them, when they work well, and when they don't. With these packages, you can overcome R's single-threaded nature by spreading work across multiple CPUs, or offloading work to multiple machines to address R's memory barrier.
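    The pattern these R packages enable, spreading a map over worker processes to get past a single-threaded interpreter, looks much the same in Python's multiprocessing module (an analogy for illustration, not R's API; the function and pool size are arbitrary):

```python
from multiprocessing import Pool

def slow_square(x):
    # stand-in for an expensive per-element computation
    return x * x

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        # analogous to R's parallel::mclapply(1:8, function(x) x^2)
        results = pool.map(slow_square, range(1, 9))
    print(results)   # [1, 4, 9, 16, 25, 36, 49, 64]
```

    As with R's snow/multicore split, the per-element function must be self-contained enough to ship to a worker process, and the payoff only appears when the per-element work dwarfs the cost of moving data to and from the workers.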

  10. Massively parallel multicanonical simulations

    Science.gov (United States)

    Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard

    2018-03-01

    Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial computationally. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with of the order of 104 parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as starting point and reference for practitioners in the field.
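    The walker/merge pattern described above can be sketched on a toy 8-spin Ising chain (an illustration, not the paper's GPU code): each walker samples independently with the current weights, and only the energy histograms are communicated back to flatten the weights. Here the walkers run serially for simplicity; in practice each `walker_histogram` call is an independent process or GPU thread.

```python
import math
import random

def walker_histogram(weights, n_spins, steps, seed):
    """One independent multicanonical walker: Metropolis sampling of a
    periodic 1D Ising chain with weights exp(W[E]); returns its energy
    histogram. Calls are independent, so they parallelize trivially."""
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n_spins)]
    energy = -sum(spins[i] * spins[(i + 1) % n_spins] for i in range(n_spins))
    hist = {}
    for _ in range(steps):
        i = rng.randrange(n_spins)
        # energy change from flipping spin i (periodic boundaries)
        dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % n_spins])
        new_e = energy + dE
        # accept with probability min(1, exp(W[new] - W[old]))
        if math.log(rng.random() + 1e-300) < weights.get(new_e, 0.0) - weights.get(energy, 0.0):
            spins[i] = -spins[i]
            energy = new_e
        hist[energy] = hist.get(energy, 0) + 1
    return hist

def multicanonical(n_walkers=4, n_iters=8, steps=2000, n_spins=8):
    """Weight iteration: merge all walkers' histograms, then suppress
    over-visited energies (W <- W - log H) so coverage flattens."""
    weights = {}
    for it in range(n_iters):
        merged = {}
        for w in range(n_walkers):
            h = walker_histogram(weights, n_spins, steps, seed=1000 * it + w)
            for e, c in h.items():
                merged[e] = merged.get(e, 0) + c
        for e, c in merged.items():
            weights[e] = weights.get(e, 0.0) - math.log(c)
    return weights, merged
```

    The communication step is just the histogram merge, which is why infrequent weight updates between otherwise independent walkers parallelize so well.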

  11. C++ and Massively Parallel Computers

    Directory of Open Access Journals (Sweden)

    Daniel J. Lickly

    1993-01-01

    Full Text Available Our goal is to apply the software engineering advantages of object-oriented programming to the raw power of massively parallel architectures. To do this we have constructed a hierarchy of C++ classes to support the data-parallel paradigm. Feasibility studies and initial coding can be supported by any serial machine that has a C++ compiler. Parallel execution requires an extended Cfront, which understands the data-parallel classes and generates C* code. (C* is a data-parallel superset of ANSI C developed by Thinking Machines Corporation.) This approach provides potential portability across parallel architectures and leverages the existing compiler technology for translating data-parallel programs onto both SIMD and MIMD hardware.
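    The data-parallel class idea, a container whose elementwise operations are what the backend parallelizes, can be illustrated without C* (a minimal sketch in Python rather than the paper's C++/C* toolchain; class and method names are invented for illustration):

```python
class ParallelArray:
    """Toy data-parallel container: arithmetic applies elementwise, so a
    compiler or runtime is free to execute each element on a separate
    processing element (SIMD) or thread (MIMD). Here it runs serially."""
    def __init__(self, values):
        self.values = list(values)

    def _zip(self, other):
        other = other.values if isinstance(other, ParallelArray) else \
            [other] * len(self.values)
        return zip(self.values, other)

    def __add__(self, other):
        return ParallelArray(a + b for a, b in self._zip(other))

    def __mul__(self, other):
        return ParallelArray(a * b for a, b in self._zip(other))

    def reduce_sum(self):
        # a reduction: the one operation that needs cross-element communication
        return sum(self.values)
```

    The point of the class hierarchy is exactly this separation: elementwise operators carry no cross-element dependencies and map onto SIMD hardware directly, while reductions are isolated as the operations that require communication.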

  12. Computer Assisted Parallel Program Generation

    CERN Document Server

    Kawata, Shigeo

    2015-01-01

    Parallel computation is widely employed in scientific research, engineering activities, and product development. Parallel program writing itself is not always a simple task, depending on the problem solved. Large-scale scientific computing, huge data analyses, and precise visualizations, for example, require parallel computations, and parallel computing needs parallelization techniques. In this chapter, parallel program generation support is discussed and a computer-assisted parallel program generation system, P-NCAS, is introduced. Computer-assisted problem solving is one of the key methods to promote innovations in science and engineering, and contributes to enriching our society and our lives toward a programming-free environment in computing science. Problem solving environment (PSE) research activities started in the 1970s to enhance programming power. P-NCAS is one of the PSEs; the PSE concept provides an integrated human-friendly computational software and hardware system to solve a target ...

  13. Parallel Lines

    Directory of Open Access Journals (Sweden)

    James G. Worner

    2017-05-01

    Full Text Available James Worner is an Australian-based writer and scholar currently pursuing a PhD at the University of Technology Sydney. His research seeks to expose masculinities lost in the shadow of Australia’s Anzac hegemony while exploring new opportunities for contemporary historiography. He is the recipient of the Doctoral Scholarship in Historical Consciousness at the university’s Australian Centre of Public History and will be hosted by the University of Bologna during 2017 on a doctoral research writing scholarship.   ‘Parallel Lines’ is one of a collection of stories, The Shapes of Us, exploring liminal spaces of modern life: class, gender, sexuality, race, religion and education. It looks at lives, like lines, that do not meet but which travel in proximity, simultaneously attracted and repelled. James’ short stories have been published in various journals and anthologies.

  14. Scientific computing on bulk synchronous parallel architectures

    NARCIS (Netherlands)

    Bisseling, R.H.; McColl, W.F.

    1993-01-01

    Bulk synchronous parallel architectures offer the prospect of achieving both scalable parallel performance and architecture independent parallel software. They provide a robust model on which to base the future development of general purpose parallel computing systems. In this paper, we theoretically
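    The robustness of the BSP model comes from its simple cost accounting. In the standard BSP formulation (from the BSP literature, not recoverable from this truncated abstract), one superstep costs T = w + h·g + l, where w is the maximum local work on any processor, h is the maximum number of words any processor sends or receives, g is the per-word communication cost, and l is the barrier latency:

```python
def bsp_superstep_cost(local_work, words_out, words_in, g, l):
    """Standard BSP cost of one superstep: T = w + h*g + l.
    local_work: per-processor computation costs; words_out/words_in:
    per-processor words sent/received; g: cost per word communicated;
    l: barrier synchronization latency."""
    w = max(local_work)                       # slowest local computation
    h = max(max(words_out), max(words_in))    # h-relation of the superstep
    return w + h * g + l
```

    For example, with local work (100, 120, 90), at most 10 words sent or received per processor, g = 4, and l = 50, the superstep costs 120 + 10·4 + 50 = 210, and a whole program's cost is just the sum over its supersteps, which is what makes BSP performance predictable across architectures.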

  15. Distributed and cloud computing from parallel processing to the Internet of Things

    CERN Document Server

    Hwang, Kai; Fox, Geoffrey C

    2012-01-01

    Distributed and Cloud Computing, named a 2012 Outstanding Academic Title by the American Library Association's Choice publication, explains how to create high-performance, scalable, reliable systems, exposing the design principles, architecture, and innovative applications of parallel, distributed, and cloud computing systems. Starting with an overview of modern distributed models, the book provides comprehensive coverage of distributed and cloud computing, including: Facilitating management, debugging, migration, and disaster recovery through virtualization Clustered systems for resear

  16. Continuous Eligibility for Medicaid and CHIP Coverage

    Data.gov (United States)

    U.S. Department of Health & Human Services — States have the option to provide children with 12 months of continuous coverage through Medicaid and CHIP, even if the family experiences a change in income during...

  17. ccTSA: A Coverage-Centric Threaded Sequence Assembler

    Science.gov (United States)

    Ahn, Jung Ho

    2012-01-01

    De novo sequencing, a process to find the whole genome or the regions of a species without references, requires much higher computational power compared to mapped sequencing with references. The advent and continuous evolution of next-generation sequencing technologies further stress the demands of high-throughput processing of myriads of short DNA fragments. Recently announced sequence assemblers, such as Velvet, SOAPdenovo, and ABySS, all exploit parallelism to meet these computational demands, since contemporary computer systems primarily rely on scaling the number of computing cores to improve performance. However, most of them are not tailored to exploit the full potential of these systems, leading to suboptimal performance. In this paper, we present ccTSA, a parallel sequence assembler that utilizes coverage to prune k-mers, find preferred edges, and resolve conflicts in preferred edges between k-mers. We minimize computation dependencies between threads to effectively parallelize k-mer processing. We also judiciously allocate and reuse memory space in order to lower memory usage and further improve sequencing speed. ccTSA runs several times faster than other assemblers while providing comparable quality values such as N50. PMID:22723971
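    The first coverage-centric step, pruning low-coverage k-mers, can be sketched simply (an illustration of the general technique, not ccTSA's implementation; the real assembler also uses coverage to pick preferred edges and resolve conflicts between them):

```python
from collections import Counter

def coverage_pruned_kmers(reads, k, min_cov=2):
    """Count k-mers across all reads and drop those seen fewer than
    min_cov times. K-mers containing a sequencing error are usually
    near-unique, so a coverage threshold removes most error k-mers
    and shrinks the assembly graph before edge construction."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return {kmer: c for kmer, c in counts.items() if c >= min_cov}
```

    Because each read's k-mers can be counted independently and merged afterwards, this step has exactly the low inter-thread dependency profile the abstract emphasizes.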

  18. Cooperative Cloud Service Aware Mobile Internet Coverage Connectivity Guarantee Protocol Based on Sensor Opportunistic Coverage Mechanism

    Directory of Open Access Journals (Sweden)

    Qin Qin

    2015-01-01

    Full Text Available In order to improve the Internet coverage ratio and provide connectivity guarantees, we propose a coverage connectivity guarantee protocol for the mobile Internet based on a sensor opportunistic coverage mechanism and cooperative cloud services. In this scheme, using opportunistic covering rules, a network coverage algorithm with high reliability and real-time security is achieved by exploiting opportunistic contacts between sensor nodes and mobile Internet nodes. A cloud service business support platform is then created on top of the Internet application service management capabilities and the wireless sensor network communication service capabilities, forming the architecture of the cloud support layer, and a cooperative cloud service awareness model is proposed. Finally, we present the mobile Internet coverage connectivity guarantee protocol. Experimental results demonstrate that the proposed algorithm performs well in terms of Internet security, stability, and coverage connectivity.

  19. Parallel programming with PCN

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.; Tuecke, S.

    1993-01-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

  20. Increasing Coverage of Appropriate Vaccinations

    Science.gov (United States)

    Jacob, Verughese; Chattopadhyay, Sajal K.; Hopkins, David P.; Morgan, Jennifer Murphy; Pitan, Adesola A.; Clymer, John

    2016-01-01

    Context Population-level coverage for immunization against many vaccine-preventable diseases remains below optimal rates in the U.S. The Community Preventive Services Task Force recently recommended several interventions to increase vaccination coverage based on systematic reviews of the evaluation literature. The present study provides the economic results from those reviews. Evidence acquisition A systematic review was conducted (search period, January 1980 through February 2012) to identify economic evaluations of 12 interventions recommended by the Task Force. Evidence was drawn from included studies; estimates were constructed for the population reach of each strategy, cost of implementation, and cost per additional vaccinated person because of the intervention. Analyses were conducted in 2014. Evidence synthesis Reminder systems, whether for clients or providers, were among the lowest-cost strategies to implement and the most cost effective in terms of additional people vaccinated. Strategies involving home visits and combination strategies in community settings were both costly and less cost effective. Strategies based in settings such as schools and managed care organizations that reached the target population achieved additional vaccinations in the middle range of cost effectiveness. Conclusions The interventions recommended by the Task Force differed in reach, cost, and cost effectiveness. This systematic review presents the economic information for 12 effective strategies to increase vaccination coverage that can guide implementers in their choice of interventions to fit their local needs, available resources, and budget. PMID:26847663

  1. ε-Net Approach to Sensor k-Coverage

    Directory of Open Access Journals (Sweden)

    Fusco Giordano

    2010-01-01

    Full Text Available Wireless sensors rely on battery power, and in many applications it is difficult or prohibitive to replace them. Hence, in order to prolong the system's lifetime, some sensors can be kept inactive while others perform all the tasks. In this paper, we study the k-coverage problem of activating the minimum number of sensors to ensure that every point in the area is covered by at least k sensors. This ensures higher fault tolerance and robustness, and improves many operations, among which position detection and intrusion detection. The k-coverage problem is trivially NP-complete, and hence we can only provide approximation algorithms. In this paper, we present an algorithm based on an extension of the classical ε-net technique. This method gives an O(log M)-approximation, where M is the number of sensors in an optimal solution. We do not make any particular assumption on the shape of the areas covered by each sensor, besides that they must be closed, connected, and without holes.
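
A plain greedy heuristic conveys the flavor of the k-coverage selection problem (the paper's ε-net algorithm is more involved; all names here are illustrative): repeatedly activate the sensor that satisfies the most outstanding demand until every point is covered k times.

```python
def greedy_k_cover(points, sensors, k):
    """Greedy heuristic: activate sensors until every point is covered by
    at least k active sensors. `sensors` maps a sensor id to the set of
    points it can cover."""
    need = {p: k for p in points}          # remaining demand per point
    remaining = dict(sensors)              # sensors not yet activated
    active = []
    while any(n > 0 for n in need.values()):
        # choose the sensor satisfying the most still-outstanding demand
        best = max(remaining, key=lambda s: sum(1 for p in remaining[s] if need.get(p, 0) > 0))
        gain = sum(1 for p in remaining[best] if need.get(p, 0) > 0)
        if gain == 0:
            raise ValueError("k-coverage is infeasible with the given sensors")
        for p in remaining.pop(best):
            if need.get(p, 0) > 0:
                need[p] -= 1
        active.append(best)
    return active

points = ["p1", "p2"]
sensors = {"s1": {"p1", "p2"}, "s2": {"p1", "p2"}, "s3": {"p1"}}
active = greedy_k_cover(points, sensors, k=2)
```

With k = 2, the two sensors covering both points are activated and the partial sensor s3 stays inactive.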

  2. Immunisation coverage, 2012.

    Science.gov (United States)

    Hull, Brynley P; Dey, Aditi; Menzies, Rob I; Brotherton, Julia M; McIntyre, Peter B

    2014-09-30

    This, the 6th annual immunisation coverage report, documents trends during 2012 for a range of standard measures derived from Australian Childhood Immunisation Register (ACIR) data and National Human Papillomavirus (HPV) Vaccination Program Register data. These include coverage at standard age milestones and for individual vaccines included on the National Immunisation Program (NIP), and coverage in adolescents and adults. The proportion of Australian children 'fully vaccinated' at 12, 24 and 60 months of age was 91.7%, 92.5% and 91.2%, respectively. For vaccines available on the NIP but not assessed during 2012 for 'fully vaccinated' status or for eligibility for incentive payments (rotavirus and pneumococcal at 12 months and meningococcal C and varicella at 24 months), coverage varied. Although pneumococcal vaccine had similar coverage at 12 months to other vaccines, coverage was lower for rotavirus at 12 months (83.6%) and varicella at 24 months (84.4%). Although 'fully vaccinated' coverage at 12 months of age was lower among Indigenous children than non-Indigenous children in all jurisdictions, the extent of the difference varied, reaching a 15 percentage point differential in South Australia but only a 0.4 percentage point differential in the Northern Territory. Overall, Indigenous coverage at 24 months of age exceeded that at 12 months of age nationally and for all jurisdictions, but as receipt of varicella vaccine at 18 months is excluded from calculations, this represents delayed immunisation, with some contribution from immunisation incentives. The 'fully vaccinated' coverage estimates for vaccinations due by 60 months of age for Indigenous children exceeded 90%, reaching 91% in 2012. Unlike in 2011, at 60 months of age there was no dramatic variation in coverage between Indigenous and non-Indigenous children for individual jurisdictions. As previously documented, vaccines recommended for Indigenous children only, hepatitis A and pneumococcal vaccine, had

  3. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Michael [Iowa State Univ., Ames, IA (United States)

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  4. Ultrascalable petaflop parallel supercomputer

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing, including a Torus, a collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  5. Stability of parallel flows

    CERN Document Server

    Betchov, R

    2012-01-01

    Stability of Parallel Flows provides information pertinent to hydrodynamical stability. This book explores the stability problems that occur in various fields, including electronics, mechanics, oceanography, administration, economics, as well as naval and aeronautical engineering. Organized into two parts encompassing 10 chapters, this book starts with an overview of the general equations of a two-dimensional incompressible flow. This text then explores the stability of a laminar boundary layer and presents the equation of the inviscid approximation. Other chapters present the general equation

  6. Insurance Coverage Policies for Personalized Medicine

    OpenAIRE

    Andrew Hresko; Haga, Susanne B.

    2012-01-01

    Adoption of personalized medicine in practice has been slow, in part due to the lack of evidence of clinical benefit provided by these technologies. Coverage by insurers is a critical step in achieving widespread adoption of personalized medicine. Insurers consider a variety of factors when formulating medical coverage policies for personalized medicine, including the overall strength of evidence for a test, availability of clinical guidelines and health technology assessments by independent ...

  7. Insurance Coverage Policies for Personalized Medicine

    Directory of Open Access Journals (Sweden)

    Andrew Hresko

    2012-10-01

    Full Text Available Adoption of personalized medicine in practice has been slow, in part due to the lack of evidence of clinical benefit provided by these technologies. Coverage by insurers is a critical step in achieving widespread adoption of personalized medicine. Insurers consider a variety of factors when formulating medical coverage policies for personalized medicine, including the overall strength of evidence for a test, availability of clinical guidelines and health technology assessments by independent organizations. In this study, we reviewed coverage policies of the largest U.S. insurers for genomic (disease-related) and pharmacogenetic (PGx) tests to determine the extent to which these tests were covered and the evidence basis for the coverage decisions. We identified 41 coverage policies for 49 unique tests: 22 tests for disease diagnosis, prognosis and risk, and 27 PGx tests. Fifty percent (or less) of the tests reviewed were covered by insurers. Lack of evidence of clinical utility appears to be a major factor in decisions of non-coverage. The inclusion of PGx information in drug package inserts appears to be a common theme of PGx tests that are covered. This analysis highlights the variability of coverage determinations and the factors considered, suggesting that the adoption of personalized medicine will be affected by numerous factors, but will continue to be slowed by the lack of demonstrated clinical benefit.

  8. Parallel machine architecture and compiler design facilities

    Science.gov (United States)

    Kuck, David J.; Yew, Pen-Chung; Padua, David; Sameh, Ahmed; Veidenbaum, Alex

    1990-01-01

    The objective is to provide an integrated simulation environment for studying and evaluating various issues in designing parallel systems, including machine architectures, parallelizing compiler techniques, and parallel algorithms. The status of Delta project (which objective is to provide a facility to allow rapid prototyping of parallelized compilers that can target toward different machine architectures) is summarized. Included are the surveys of the program manipulation tools developed, the environmental software supporting Delta, and the compiler research projects in which Delta has played a role.

  9. Broadcast Network Coverage with Multicell Cooperation

    Directory of Open Access Journals (Sweden)

    Hongxiang Li

    2010-01-01

    Full Text Available Multicell cooperation has been identified as one of the underlying principles for future wireless communication systems. This paper studies the benefits of multicell cooperation in broadcast TV networks from an information-theoretic perspective. We define outage capacity as the figure of merit and derive the broadcast coverage area to evaluate such systems. Specifically, we calculate the broadcast coverage area for a given common information rate and outage probabilities when multiple base stations collaboratively transmit the broadcast signals. For the general MIMO case, where receivers have multiple antennas, we provide simulation results to illustrate the expanded coverage area. In all cases, our results show that the coverage of a TV broadcast network can be significantly improved by multicell cooperation.

  10. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell , Stephen

    2012-01-01

    Optimize code for multi-core processors with Intel's Parallel Studio. Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the

  11. Parallel kinematics type, kinematics, and optimal design

    CERN Document Server

    Liu, Xin-Jun

    2014-01-01

    Parallel Kinematics: Type, Kinematics, and Optimal Design presents the results of 15 years' research on parallel mechanisms and parallel kinematics machines. This book covers the systematic classification of parallel mechanisms (PMs) as well as providing a large number of mechanical architectures of PMs available for use in practical applications. It focuses on the kinematic design of parallel robots. One successful application of parallel mechanisms in the field of machine tools, also called parallel kinematics machines, has been the emerging trend in advanced machine tools. The book describes not only the main aspects and important topics in parallel kinematics, but also references novel concepts and approaches, i.e. type synthesis based on evolution, performance evaluation and optimization based on screw theory, singularity model taking into account motion and force transmissibility, and others.   This book is intended for researchers, scientists, engineers and postgraduates or above with interes...

  12. A GPS coverage model

    Science.gov (United States)

    Skidmore, Trent A.

    1994-01-01

    The results of several case studies using the Global Positioning System coverage model developed at Ohio University are summarized. Presented are results pertaining to outage area, outage dynamics, and availability. Input parameters to the model include the satellite orbit data, service area of interest, geometry requirements, and horizon and antenna mask angles. It is shown for precision-landing Category 1 requirements that the planned GPS 21 Primary Satellite Constellation produces significant outage area and unavailability. It is also shown that a decrease in the user equivalent range error dramatically decreases outage area and improves the service availability.
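
The availability computation can be conveyed with a toy sketch that applies only a visibility criterion (the Ohio University model also evaluates satellite geometry requirements; all names here are illustrative assumptions): count the epochs in which enough satellites clear the mask angle.

```python
def visible_count(elevations_deg, mask_deg):
    """Number of satellites above the horizon/antenna mask angle."""
    return sum(1 for e in elevations_deg if e > mask_deg)

def availability(epochs, mask_deg, min_sats=4):
    """Fraction of epochs in which enough satellites clear the mask angle
    (a real coverage model would also apply a geometry/DOP requirement)."""
    ok = sum(1 for elevs in epochs if visible_count(elevs, mask_deg) >= min_sats)
    return ok / len(epochs)

# Each inner list holds the satellite elevation angles seen at one epoch.
epochs = [[10, 20, 30, 40, 50], [2, 3, 80, 85], [15, 25, 35, 5]]
frac = availability(epochs, mask_deg=5)
```

Only the first epoch has at least four satellites above the 5-degree mask, so the sketched availability is 1/3.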

  13. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable distributed graph container and a collection of commonly used parallel graph algorithms. The library introduces pGraph pViews that separate algorithm design from the container implementation. It supports three graph processing algorithmic paradigms, level-synchronous, asynchronous and coarse-grained, and provides common graph algorithms based on them. Experimental results demonstrate improved scalability in performance and data size over existing graph libraries on more than 16,000 cores and on internet-scale graphs containing over 16 billion vertices and 250 billion edges. © Springer-Verlag Berlin Heidelberg 2013.
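
The level-synchronous paradigm mentioned above can be sketched sequentially (stapl distributes the graph and processes each frontier in parallel; this simplified single-threaded version is only illustrative): the entire frontier is expanded before the next level begins.

```python
def bfs_levels(adj, source):
    """Level-synchronous BFS: expand the whole frontier before moving to
    the next level; in a parallel library each level's work is distributed
    across processors with a barrier between levels."""
    level = {source: 0}
    frontier = [source]
    depth = 0
    while frontier:
        depth += 1
        next_frontier = []
        for u in frontier:
            for v in adj.get(u, []):
                if v not in level:
                    level[v] = depth
                    next_frontier.append(v)
        frontier = next_frontier
    return level

adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
levels = bfs_levels(adj, 0)
```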

  14. The NICMOS Parallel Observing Program

    Science.gov (United States)

    McCarthy, Patrick

    2002-07-01

    We propose to manage the default set of pure parallels with NICMOS. Our experience with both our GO NICMOS parallel program and the public parallel NICMOS programs in cycle 7 prepared us to make optimal use of the parallel opportunities. The NICMOS G141 grism remains the most powerful survey tool for HAlpha emission-line galaxies at cosmologically interesting redshifts. It is particularly well suited to addressing two key uncertainties regarding the global history of star formation: the peak rate of star formation in the relatively unexplored but critical 1extinction. Our proposed deep G141 exposures will increase the sample of known HAlpha emission- line objects at z ~ 1.3 by roughly an order of magnitude. We will also obtain a mix of F110W and F160W images along random sight-lines to examine the space density and morphologies of the reddest galaxies. The nature of the extremely red galaxies remains unclear and our program of imaging and grism spectroscopy provides unique information regarding both the incidence of obscured star bursts and the build up of stellar mass at intermediate redshifts. In addition to carrying out the parallel program we will populate a public database with calibrated spectra and images, and provide limited ground- based optical and near-IR data for the deepest parallel fields.

  15. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on parallel sorting problems. The text also presents twenty different algorithms for architectures such as linear arrays, mesh-connected computers, and cube-connected computers. The algorithms can also be applied on shared-memory SIMD (single instruction stream, multiple data stream) computers, in which the whole sequence to be sorted can fit in the
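
One classic linear-array algorithm of the kind such texts cover, odd-even transposition sort, can be simulated sequentially (each round's compare-exchanges would all run in parallel on the array; this sketch is a generic illustration, not taken from the book):

```python
def odd_even_transposition_sort(a):
    """Simulate the linear-array parallel sort: n rounds alternating
    compare-exchange on even-indexed and odd-indexed pairs. In hardware,
    all exchanges within one round happen simultaneously."""
    a = list(a)
    n = len(a)
    for rnd in range(n):
        start = rnd % 2
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a
```

After n rounds the sequence is guaranteed sorted, giving O(n) parallel time on an n-cell linear array.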

  16. Parallel short sequence assembly of transcriptomes.

    Science.gov (United States)

    Jackson, Benjamin G; Schnable, Patrick S; Aluru, Srinivas

    2009-01-30

    The de novo assembly of genomes and transcriptomes from short sequences is a challenging problem. Because of the high coverage needed to assemble short sequences as well as the overhead of modeling the assembly problem as a graph problem, the methods for short sequence assembly are often validated using data from BACs or small sized prokaryotic genomes. We present a parallel method for transcriptome assembly from large short sequence data sets. Our solution uses a rigorous graph theoretic framework and tames the computational and space complexity using parallel computers. First, we construct a distributed bidirected graph that captures overlap information. Next, we compact all chains in this graph to determine long unique contigs using undirected parallel list ranking, a problem for which we present an algorithm. Finally, we process this compacted distributed graph to resolve unique regions that are separated by repeats, exploiting the naturally occurring coverage variations arising from differential expression. We demonstrate the validity of our method using a synthetic high coverage data set generated from the predicted coding regions of Zea mays. We assemble 925 million sequences consisting of 40 billion nucleotides in a few minutes on a 1024 processor Blue Gene/L. Our method is the first fully distributed method for assembling a non-hierarchical short sequence data set and can scale to large problem sizes.
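
The chain-compaction step can be sketched sequentially on a toy graph (the paper performs it with undirected parallel list ranking on a distributed bidirected graph; this simplified, single-threaded version with illustrative names only conveys the idea): unambiguous paths, where each interior node has one predecessor and one successor, are merged into contigs.

```python
def compact_chains(succ):
    """Compact unambiguous chains of an (acyclic) overlap graph into contigs.
    `succ` maps each node to the list of its successors."""
    preds = {}
    for u, vs in succ.items():
        for v in vs:
            preds.setdefault(v, []).append(u)

    def unique_next(u):
        vs = succ.get(u, [])
        return vs[0] if len(vs) == 1 else None

    chains = []
    for u in succ:
        ps = preds.get(u, [])
        # skip nodes that are the unique continuation of a single-successor
        # parent; they belong to a chain started further upstream
        if len(ps) == 1 and len(succ.get(ps[0], [])) == 1:
            continue
        chain = [u]
        v = unique_next(u)
        while v is not None and len(preds.get(v, [])) == 1:
            chain.append(v)
            v = unique_next(v)
        chains.append(chain)
    return chains

# A -> B -> C is an unambiguous chain; C branches to D and E.
succ = {"A": ["B"], "B": ["C"], "C": ["D", "E"], "D": [], "E": []}
chains = compact_chains(succ)
```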

  17. IOPA: I/O-aware parallelism adaption for parallel programs.

    Science.gov (United States)

    Liu, Tao; Liu, Yi; Qian, Chen; Qian, Depei

    2017-01-01

    With the development of multi-/many-core processors, applications need to be written as parallel programs to improve execution efficiency. For data-intensive applications that use multiple threads to read/write files simultaneously, an I/O sub-system can easily become a bottleneck when too many of these types of threads exist; on the contrary, too few threads will cause insufficient resource utilization and hurt performance. Therefore, programmers must pay much attention to parallelism control to find the appropriate number of I/O threads for an application. This paper proposes a parallelism control mechanism named IOPA that can adjust the parallelism of applications to adapt to the I/O capability of a system and balance computing resources and I/O bandwidth. The programming interface of IOPA is also provided to programmers to simplify parallel programming. IOPA is evaluated using multiple applications with both solid state and hard disk drives. The results show that the parallel applications using IOPA can achieve higher efficiency than those with a fixed number of threads.
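
The feedback idea behind such parallelism control can be sketched as a hill-climbing step (this is an illustration of the concept, not IOPA's actual interface; all names are assumptions): grow the thread pool while measured throughput improves, and back off once extra threads hurt, i.e. the I/O sub-system is saturated.

```python
def adapt_parallelism(current, prev_throughput, cur_throughput, lo=1, hi=64):
    """One step of a hill-climbing controller for the I/O thread count:
    add a thread while throughput keeps improving, remove one when the
    last adjustment made things worse (I/O bandwidth saturated)."""
    if cur_throughput >= prev_throughput:
        return min(current + 1, hi)
    return max(current - 1, lo)
```

Calling this after each measurement interval lets the thread count converge near the knee of the throughput curve instead of being fixed at compile time.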

  18. 42 CFR 436.330 - Coverage for certain aliens.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Coverage for certain aliens. 436.330 Section 436... Coverage of the Medically Needy § 436.330 Coverage for certain aliens. If an agency provides Medicaid to... condition, as defined in § 440.255(c) of this chapter to those aliens described in § 436.406(c) of this...

  19. Learning in Parallel Universes

    OpenAIRE

    Berthold, Michael R.; Wiswedel, Bernd

    2007-01-01

    This abstract summarizes a brief, preliminary formalization of learning in parallel universes. It also attempts to highlight a few neighboring learning paradigms to illustrate how parallel learning fits into the greater picture.

  20. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  1. Writing parallel programs that work

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Serial algorithms typically run inefficiently on parallel machines. This may sound like an obvious statement, but it is the root cause of why parallel programming is considered to be difficult. The current state of the computer industry is still that almost all programs in existence are serial. This talk will describe the techniques used in the Intel Parallel Studio to provide a developer with the tools necessary to understand the behaviors and limitations of the existing serial programs. Once the limitations are known the developer can refactor the algorithms and reanalyze the resulting programs with the tools in the Intel Parallel Studio to create parallel programs that work. About the speaker Paul Petersen is a Sr. Principal Engineer in the Software and Solutions Group (SSG) at Intel. He received a Ph.D. degree in Computer Science from the University of Illinois in 1993. After UIUC, he was employed at Kuck and Associates, Inc. (KAI) working on auto-parallelizing compiler (KAP), and was involved in th...

  2. Verbal and Visual Parallelism

    Science.gov (United States)

    Fahnestock, Jeanne

    2003-01-01

    This study investigates the practice of presenting multiple supporting examples in parallel form. The elements of parallelism and its use in argument were first illustrated by Aristotle. Although real texts may depart from the ideal form for presenting multiple examples, rhetorical theory offers a rationale for minimal, parallel presentation. The…

  3. Parallel simulation today

    Science.gov (United States)

    Nicol, David; Fujimoto, Richard

    1992-01-01

    This paper surveys topics that presently define the state of the art in parallel simulation. Included in the tutorial are discussions on new protocols, mathematical performance analysis, time parallelism, hardware support for parallel simulation, load balancing algorithms, and dynamic memory management for optimistic synchronization.

  4. Using readability, comprehensibility and lexical coverage to ...

    African Journals Online (AJOL)

    Finally, Nation's Vocabulary Size Test (Nation and Beglar 2007: 9, 11) was used to determine whether the vocabulary size of the selected students provides adequate lexical coverage of the lexis used in the textbook to enable comprehension of the text. The findings were somewhat conflicting. The readability indices ...

  5. Actual Test Coverage for Embedded Systems

    NARCIS (Netherlands)

    Timmer, Mark

    2008-01-01

    Testing embedded systems is inherently incomplete; no test suite will ever be able to test all possible usage scenarios. Therefore, in the past decades many coverage measures have been developed. These measures denote the portion of a system that is tested, that way providing a quality criterion for

  6. 5 CFR 534.202 - Coverage.

    Science.gov (United States)

    2010-01-01

    ... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY UNDER OTHER SYSTEMS Student-Employees in Government Hospitals § 534.202 Coverage. In addition to the student-employees specified in 5 U.S.C. 5351(2)(A), the following student-employees are covered under this program, provided they are...

  7. Barrier Coverage for 3D Camera Sensor Networks.

    Science.gov (United States)

    Si, Pengju; Wu, Chengdong; Zhang, Yunzhou; Jia, Zixi; Ji, Peng; Chu, Hao

    2017-08-03

    Barrier coverage, an important research area with respect to camera sensor networks, consists of a number of camera sensors to detect intruders that pass through the barrier area. Existing works on barrier coverage such as local face-view barrier coverage and full-view barrier coverage typically assume that each intruder is considered as a point. However, the crucial feature (e.g., size) of the intruder should be taken into account in the real-world applications. In this paper, we propose a realistic resolution criterion based on a three-dimensional (3D) sensing model of a camera sensor for capturing the intruder's face. Based on the new resolution criterion, we study the barrier coverage of a feasible deployment strategy in camera sensor networks. Performance results demonstrate that our barrier coverage with more practical considerations is capable of providing a desirable surveillance level. Moreover, compared with local face-view barrier coverage and full-view barrier coverage, our barrier coverage is more reasonable and closer to reality. To the best of our knowledge, our work is the first to propose barrier coverage for 3D camera sensor networks.
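
A much simplified version of a size-aware resolution criterion can be sketched with a pinhole-camera approximation (the paper's 3D criterion also accounts for viewing direction and deployment geometry; the formula and names here are illustrative assumptions):

```python
def meets_resolution(distance_m, face_width_m, focal_px, min_pixels):
    """Pinhole-camera sketch: a face of physical width w at distance d
    projects to roughly focal_px * w / d pixels on the sensor; require at
    least min_pixels of them for the intruder's face to be resolvable."""
    if distance_m <= 0:
        return False
    return focal_px * face_width_m / distance_m >= min_pixels
```

Under these toy numbers, a 15 cm wide face is resolvable at 5 m but not at 10 m for a camera with a 1000-pixel focal length and a 30-pixel requirement.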

  8. Barrier Coverage for 3D Camera Sensor Networks

    Science.gov (United States)

    Wu, Chengdong; Zhang, Yunzhou; Jia, Zixi; Ji, Peng; Chu, Hao

    2017-01-01

    Barrier coverage, an important research area with respect to camera sensor networks, consists of a number of camera sensors to detect intruders that pass through the barrier area. Existing works on barrier coverage such as local face-view barrier coverage and full-view barrier coverage typically assume that each intruder is considered as a point. However, the crucial feature (e.g., size) of the intruder should be taken into account in the real-world applications. In this paper, we propose a realistic resolution criterion based on a three-dimensional (3D) sensing model of a camera sensor for capturing the intruder’s face. Based on the new resolution criterion, we study the barrier coverage of a feasible deployment strategy in camera sensor networks. Performance results demonstrate that our barrier coverage with more practical considerations is capable of providing a desirable surveillance level. Moreover, compared with local face-view barrier coverage and full-view barrier coverage, our barrier coverage is more reasonable and closer to reality. To the best of our knowledge, our work is the first to propose barrier coverage for 3D camera sensor networks. PMID:28771167

  9. A parallel buffer tree

    DEFF Research Database (Denmark)

    Sitchinava, Nodar; Zeh, Norbert

    2012-01-01

    We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of available processor cores compared to its sequential counterpart, thereby taking full advantage of multicore parallelism. The parallel buffer tree is a search tree data structure that supports the batched parallel processing of a sequence of N insertions, deletions, membership queries, and range queries...

  10. Determinants of antiretroviral therapy coverage in Sub-Saharan Africa

    Directory of Open Access Journals (Sweden)

    Fumitaka Furuoka

    2015-12-01

    Full Text Available Among 35 million people living with the human immunodeficiency virus (HIV) in 2013, only 37% had access to antiretroviral therapy (ART). Despite global concerted efforts to provide universal access to ART treatment, ART coverage varies among countries and regions. At present, there is a lack of systematic empirical analyses of the factors that determine ART coverage. Therefore, the current study aimed to identify the determinants of ART coverage in 41 countries in Sub-Saharan Africa, employing statistical analyses for this purpose. Four factors, namely, HIV prevalence, the level of national income, the level of medical expenditure and the number of nurses, were hypothesised to determine ART coverage. The findings revealed that among the four proposed determinants only HIV prevalence had a statistically significant impact on ART coverage. In other words, HIV prevalence was the sole determinant of ART coverage in Sub-Saharan Africa.
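
The kind of single-predictor analysis described (e.g. regressing ART coverage on HIV prevalence) can be sketched with ordinary least squares; the study's exact statistical model is not specified in the abstract, so this is a generic illustration with made-up data.

```python
def ols_slope_intercept(x, y):
    """Ordinary least squares for a single predictor: fit y = slope*x + b.
    Returns (slope, intercept); no significance testing is done here."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Toy data (not from the study): a perfectly linear relationship.
slope, intercept = ols_slope_intercept([1, 2, 3, 4], [2, 4, 6, 8])
```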

  11. Task Parallelism and Data Distribution: An Overview of Explicit Parallel Programming Languages

    OpenAIRE

    Khaldi, Dounia; Jouvelot, Pierre,; Ancourt, Corinne; Irigoin, François

    2012-01-01

    15 pages; International audience; Programming parallel machines as effectively as sequential ones would ideally require a language that provides high-level programming constructs to avoid the programming errors that frequently arise when expressing parallelism. Since task parallelism is considered more error-prone than data parallelism, we survey six popular and efficient parallel language designs that tackle this difficult issue: Cilk, Chapel, X10, Habanero-Java, OpenMP and OpenCL. Using a single running...

  12. Parallel Atomistic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: replicated data decomposition, spatial decomposition, and force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories, those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.
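    Of the three decompositions, the spatial one is the simplest to sketch. The 1D toy below is not from the review; the function name, cell width, and Lennard-Jones-style pair energy are illustrative assumptions. Particles are binned into cells, each cell standing in for one processor's subdomain, so pair interactions need only be evaluated against the same and the adjacent cell.

```python
def spatial_decomposition_energy(positions, cell_size, cutoff):
    """Toy 1D spatial-decomposition sketch: particles are binned into cells
    (each cell standing in for one processor's subdomain), and pair
    interactions are evaluated only within a cell and against the next
    cell. Uses a Lennard-Jones-style 1/r^12 - 1/r^6 pair energy."""
    assert cutoff <= cell_size  # then pairs within cutoff span adjacent cells only
    cells = {}
    for x in positions:
        cells.setdefault(int(x // cell_size), []).append(x)

    def pair_energy(r):
        return 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)

    total = 0.0
    for c, local in cells.items():
        # interactions within the cell
        for i in range(len(local)):
            for j in range(i + 1, len(local)):
                r = abs(local[i] - local[j])
                if r < cutoff:
                    total += pair_energy(r)
        # interactions with the next cell only (avoids double counting)
        for x in local:
            for y in cells.get(c + 1, []):
                if abs(x - y) < cutoff:
                    total += pair_energy(abs(x - y))
    return total

print(round(spatial_decomposition_energy([0.0, 1.1, 2.5, 3.9], 2.0, 2.0), 4))  # → -1.9047
```

    A parallel version would assign each cell to one rank and exchange boundary particles with neighbouring ranks at every step.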

  13. Delaunay Triangulation as a New Coverage Measurement Method in Wireless Sensor Network

    Science.gov (United States)

    Chizari, Hassan; Hosseini, Majid; Poston, Timothy; Razak, Shukor Abd; Abdullah, Abdul Hanan

    2011-01-01

    Sensing and communication coverage are among the most important trade-offs in Wireless Sensor Network (WSN) design. A minimum bound of sensing coverage is vital in scheduling, target tracking and redeployment phases, as well as providing communication coverage. Some methods measure the coverage as a percentage value, but detailed information has been missing. Two scenarios with equal coverage percentage may not have the same Quality of Coverage (QoC). In this paper, we propose a new coverage measurement method using Delaunay Triangulation (DT). This can provide detailed coverage information to complement percentage-based measurement tools. Moreover, it categorizes sensors as ‘fat’, ‘healthy’ or ‘thin’ to show the dense, optimal and scattered areas. It can also yield the largest empty area of sensors in the field. Simulation results show that the proposed DT method can achieve accurate coverage information, and provides many tools to compare QoC between different scenarios. PMID:22163792
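    One way to see how a triangulation exposes coverage gaps is through circumradii: the largest empty circle among a set of sensors is centred at a Delaunay circumcenter. The sketch below assumes a precomputed triangulation and uses invented thresholds for the 'fat'/'healthy'/'thin' labels; the paper's exact QoC criteria may differ.

```python
import math

def circumradius(a, b, c):
    """Circumradius of triangle abc: the largest empty circle among sensors
    is centred at some Delaunay circumcenter, so this radius indicates how
    big a coverage hole the triangle encloses."""
    la = math.dist(b, c); lb = math.dist(a, c); lc = math.dist(a, b)
    s = (la + lb + lc) / 2.0
    area = math.sqrt(s * (s - la) * (s - lb) * (s - lc))  # Heron's formula
    return la * lb * lc / (4.0 * area)

def classify(triangles, sensing_radius):
    """Label each triangle of a (precomputed) Delaunay triangulation.
    Thresholds are illustrative stand-ins for the paper's QoC categories:
    'thin' = hole larger than the sensing range, 'fat' = heavy overlap."""
    labels = []
    for a, b, c in triangles:
        r = circumradius(a, b, c)
        if r > sensing_radius:
            labels.append('thin')       # scattered area: a coverage gap
        elif r < 0.5 * sensing_radius:
            labels.append('fat')        # dense area: redundant sensors
        else:
            labels.append('healthy')    # near-optimal spacing
    return labels

tris = [((0, 0), (1, 0), (0, 1)),   # tightly packed sensors
        ((0, 0), (2, 0), (0, 2)),   # near-optimal spacing
        ((0, 0), (4, 0), (0, 4))]   # a large gap
print(classify(tris, sensing_radius=2.0))  # → ['fat', 'healthy', 'thin']
```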

  14. Immunisation coverage annual report, 2009.

    Science.gov (United States)

    Hull, Brynley; Dey, Aditi; Mahajan, Deepika; Menzies, Rob; McIntyre, Peter B

    2011-06-01

    This, the third annual immunisation coverage report, documents trends during 2009 for a range of standard measures derived from Australian Childhood Immunisation Register data, including overall coverage at standard age milestones and for individual vaccines included on the National Immunisation Program (NIP). Coverage by Indigenous status and mapping by smaller geographic areas as well as trends in timeliness are also summarised according to standard templates. With respect to overall coverage, the Immunise Australia Program targets have been reached for children at 12 and 24 months of age but not for children at 5 years of age. Coverage at 24 months of age exceeds that at 12 months of age, but as receipt of varicella vaccine at 18 months is excluded from calculations of 'fully immunised', this probably represents delayed immunisation, with some contribution from immunisation incentives. Similarly, the decrease in coverage estimates for immunisations due at 4 years of age from March 2008 is primarily due to changing the assessment age from 6 years to 5 years of age from December 2007. With respect to individual vaccines, a number of those available on the NIP are not currently assessed for 'fully immunised' status or for eligibility for incentive payments. These include pneumococcal conjugate and meningococcal C conjugate vaccines, for which coverage is comparable with vaccines that are assessed for 'fully immunised' status, and rotavirus and varicella vaccines, for which coverage is lower. Coverage is also suboptimal for vaccines recommended for Indigenous children only (i.e. hepatitis A and pneumococcal polysaccharide vaccine) as previously reported for other vaccines for both children and adults. Delayed receipt of vaccines is an important issue for vaccines recommended for Indigenous children and has not improved among non-Indigenous children despite improvements in coverage at the 24-month milestone. 
Although Indigenous children in Australia have coverage levels

  15. Parallel auto-correlative statistics with VTK.

    Energy Technology Data Exchange (ETDEWEB)

    Pebay, Philippe Pierre; Bennett, Janine Camille

    2013-08-01

    This report summarizes existing statistical engines in VTK and presents both the serial and parallel auto-correlative statistics engines. It is a sequel to [PT08, BPRT09b, PT09, BPT09, PT10], which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, and order statistics engines. The ease of use of the new parallel auto-correlative statistics engine is illustrated by means of C++ code snippets, and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the auto-correlative statistics engine.
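    A serial version of the underlying quantity is straightforward. The sketch below computes the biased lag-k autocorrelation estimate in plain Python; the VTK engine distributes the same sums across processors before a final reduction, and its exact normalization may differ.

```python
def autocorrelation(series, max_lag):
    """Plain (serial) lag-k autocorrelation estimate with the biased 1/n
    normalization; a parallel engine computes the same partial sums on
    each processor and combines them in a reduction step."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / n
    acf = []
    for k in range(max_lag + 1):
        cov = sum((series[t] - mean) * (series[t + k] - mean)
                  for t in range(n - k)) / n
        acf.append(cov / var)
    return acf

acf = autocorrelation([1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0, 2.0], 2)
print([round(r, 3) for r in acf])  # lag 0 is always 1.0
```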

  16. Original article: Assessment of effective coverage of voluntary ...

    African Journals Online (AJOL)

    Abrham

    CONCLUSION: This study demonstrated that effective coverage of Voluntary Counseling and Testing service was very low based on the providers ... questionnaire was developed and used in this study. Training topics included: discussion on ... measurement of HIV/AIDS intervention would be by use of coverage indicators ...

  17. 42 CFR 435.139 - Coverage for certain aliens.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Coverage for certain aliens. 435.139 Section 435... Aliens § 435.139 Coverage for certain aliens. The agency must provide services necessary for the treatment of an emergency medical condition, as defined in § 440.255(c) of this chapter, to those aliens...

  18. Coverage statistics for sequence census methods

    Directory of Open Access Journals (Sweden)

    Evans Steven N

    2010-08-01

    Full Text Available Abstract Background We study the statistical properties of fragment coverage in genome sequencing experiments. In an extension of the classic Lander-Waterman model, we consider the effect of the length distribution of fragments. We also introduce a coding of the shape of the coverage depth function as a tree and explain how this can be used to detect regions with anomalous coverage. This modeling perspective is especially germane to current high-throughput sequencing experiments, where both sample preparation protocols and sequencing technology particulars can affect fragment length distributions. Results Under the mild assumptions that fragment start sites are Poisson distributed and successive fragment lengths are independent and identically distributed, we observe that, regardless of fragment length distribution, the fragments produced in a sequencing experiment can be viewed as resulting from a two-dimensional spatial Poisson process. We then study the successive jumps of the coverage function, and show that they can be encoded as a random tree that is approximately a Galton-Watson tree with generation-dependent geometric offspring distributions whose parameters can be computed. Conclusions We extend standard analyses of shotgun sequencing that focus on coverage statistics at individual sites, and provide a null model for detecting deviations from random coverage in high-throughput sequence census based experiments. Our approach leads to explicit determinations of the null distributions of certain test statistics, while for others it greatly simplifies the approximation of their null distributions by simulation. Our focus on fragments also leads to a new approach to visualizing sequencing data that is of independent interest.
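    The null model's core assumptions are easy to simulate. The sketch below is illustrative, not the paper's code: per-base Bernoulli trials approximate Poisson-distributed fragment start sites, fragment lengths are i.i.d. draws, and the coverage depth function is the count of fragments overlapping each base.

```python
import random

def simulate_coverage(genome_len, rate, frag_lens, seed=0):
    """Sketch of a Lander-Waterman-style null model: fragment start sites
    are approximately Poisson (per-base Bernoulli trials with small rate),
    fragment lengths are i.i.d. draws from frag_lens, and the coverage
    depth at each base is the number of overlapping fragments."""
    rng = random.Random(seed)
    depth = [0] * genome_len
    for pos in range(genome_len):
        # Bernoulli(rate) per base approximates Poisson starts for small rate
        if rng.random() < rate:
            length = rng.choice(frag_lens)  # i.i.d. fragment length draw
            for i in range(pos, min(pos + length, genome_len)):
                depth[i] += 1
    return depth

depth = simulate_coverage(10_000, rate=0.05, frag_lens=[80, 100, 120])
mean_depth = sum(depth) / len(depth)
# expected depth ≈ rate * E[length] = 0.05 * 100 = 5 (minus small edge effects)
print(round(mean_depth, 2))
```

    Regions whose observed depth deviates strongly from such a simulated null are the candidates for anomalous coverage that the tree encoding is designed to detect.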

  19. Design considerations for parallel graphics libraries

    Science.gov (United States)

    Crockett, Thomas W.

    1994-01-01

    Applications which run on parallel supercomputers are often characterized by massive datasets. Converting these vast collections of numbers to visual form has proven to be a powerful aid to comprehension. For a variety of reasons, it may be desirable to provide this visual feedback at runtime. One way to accomplish this is to exploit the available parallelism to perform graphics operations in place. In order to do this, we need appropriate parallel rendering algorithms and library interfaces. This paper provides a tutorial introduction to some of the issues which arise in designing parallel graphics libraries and their underlying rendering algorithms. The focus is on polygon rendering for distributed memory message-passing systems. We illustrate our discussion with examples from PGL, a parallel graphics library which has been developed on the Intel family of parallel systems.
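    One standard way to perform graphics operations in place on a distributed-memory machine is sort-last rendering: each processor rasterizes its share of the polygons into a full-size image, and the partial images are merged by depth. The sketch below illustrates only the compositing step, with pixels as (color, depth) tuples; it is a generic sketch, not PGL's actual algorithm.

```python
def composite(partial_images):
    """Sort-last compositing sketch: each 'processor' renders its share of
    the polygons into a full-size (color, depth) buffer, and the partial
    images are merged by keeping the nearest sample at every pixel.
    depth = inf means background (nothing rendered there)."""
    result = None
    for img in partial_images:
        if result is None:
            result = list(img)
            continue
        # keep whichever sample is closer to the viewer at each pixel
        result = [a if a[1] <= b[1] else b for a, b in zip(result, img)]
    return result

bg = ('black', float('inf'))
proc0 = [('red', 2.0), bg, ('red', 5.0)]    # processor 0's rendered polygons
proc1 = [bg, ('blue', 1.0), ('blue', 3.0)]  # processor 1's rendered polygons
print(composite([proc0, proc1]))  # → [('red', 2.0), ('blue', 1.0), ('blue', 3.0)]
```

    On a message-passing system the merge itself is parallelized, e.g. by exchanging image tiles between pairs of processors in a reduction tree.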

  20. Mediating Trust in Terrorism Coverage

    DEFF Research Database (Denmark)

    Mogensen, Kirsten

    crisis. While the framework is presented in the context of television coverage of a terror-related crisis situation, it can equally be used in connection with all other forms of mediated trust. Key words: National crisis, risk communication, crisis management, television coverage, mediated trust....

  1. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

    In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking which leads to efficient solutions to problems on trees, such as computing lowest...... common ancestors, tree contraction and expression tree evaluation. We also study the problems of computing the connected and biconnected components of a graph, minimum spanning tree of a connected graph and ear decomposition of a biconnected graph. All our solutions on a P-processor PEM model provide...
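    List ranking, the fundamental problem mentioned above, is classically solved by pointer jumping. The sketch below simulates the parallel rounds with whole-array passes; the PEM algorithms in the paper are more involved, since they must also be I/O-efficient, but the primitive is the same.

```python
def list_rank(succ):
    """Pointer-jumping sketch of list ranking: after O(log n) rounds every
    node knows its distance to the tail of the linked list. Each round's
    updates are independent, which is what makes the pattern parallel;
    here the simultaneous updates are simulated with snapshot arrays."""
    n = len(succ)
    rank = [0 if succ[i] is None else 1 for i in range(n)]
    nxt = list(succ)
    while any(p is not None for p in nxt):
        new_rank = list(rank)
        new_nxt = list(nxt)
        for i in range(n):          # conceptually: all i in parallel
            if nxt[i] is not None:
                new_rank[i] = rank[i] + rank[nxt[i]]  # hop over the successor
                new_nxt[i] = nxt[nxt[i]]
        rank, nxt = new_rank, new_nxt
    return rank

# linked list 0 -> 2 -> 1 -> 3 (node 3 is the tail)
print(list_rank([2, 3, 1, None]))  # → [3, 1, 2, 0]
```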

  2. Parallelization in Modern C++

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The traditionally used and well-established parallel programming models OpenMP and MPI both target lower-level parallelism and are meant to be as language-agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable amount of standardization proposals and technical specifications being developed. Those efforts however have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuations...

  3. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  4. Parallel digital forensics infrastructure.

    Energy Technology Data Exchange (ETDEWEB)

    Liebrock, Lorie M. (New Mexico Tech, Socorro, NM); Duggan, David Patrick

    2009-10-01

    This report documents the architecture and implementation of a Parallel Digital Forensics infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital Forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics. This report documents the implementation of the parallel digital forensics (PDF) infrastructure architecture and implementation.

  5. Dental Care Coverage and Use: Modeling Limitations and Opportunities

    Science.gov (United States)

    Moeller, John F.; Chen, Haiyan

    2014-01-01

    Objectives. We examined why older US adults without dental care coverage and use would have lower use rates if offered coverage than do those who currently have coverage. Methods. We used data from the 2008 Health and Retirement Study to estimate a multinomial logistic model to analyze the influence of personal characteristics in the grouping of older US adults into those with and those without dental care coverage and dental care use. Results. Compared with persons with no coverage and no dental care use, users of dental care with coverage were more likely to be younger, female, wealthier, college graduates, married, in excellent or very good health, and not missing all their permanent teeth. Conclusions. Providing dental care coverage to uninsured older US adults without use will not necessarily result in use rates similar to those with prior coverage and use. We have offered a model using modifiable factors that may help policy planners facilitate programs to increase dental care coverage uptake and use. PMID:24328635

  6. Access to Private Coverage for Children Enrolled in CHIP.

    Science.gov (United States)

    McMorrow, Stacey; Kenney, Genevieve M; Waidmann, Timothy; Anderson, Nathaniel

    2015-01-01

    To provide updated information on the potential substitution of public for private coverage among low-income children by examining the type of coverage held by children before they enrolled in Children's Health Insurance Program (CHIP) and exploring the extent to which children covered by CHIP had access to private coverage while they were enrolled. We conducted a major household telephone survey in 2012 of enrollees and disenrollees in CHIP in 10 states. We used the survey responses and Medicaid/CHIP administrative data to estimate the coverage distribution of all new enrollees in the 12 months before CHIP enrollment and to identify children who may have had access to employer coverage through one of their parents while enrolled in CHIP. About 13% of new enrollees had any private coverage in the 12 months before enrolling in CHIP, and most were found to have lost that coverage as a result of parental job loss. About 40% of CHIP enrollees had a parent with an employer-sponsored insurance (ESI) policy, but only half reported that the policy could cover the child. Approximately 30% of new enrollees had public coverage during the year before but were uninsured just before enrolling. Access to private coverage among CHIP enrollees is relatively limited. Furthermore, even when there is potential access to ESI, affordability is a serious concern for parents, making it possible that many children with access to ESI would remain uninsured in the absence of CHIP. Copyright © 2015 Academic Pediatric Association. All rights reserved.

  7. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a PowerPoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
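    A prefix scan, one of the patterns named above, illustrates the idea in miniature. The sketch below is a Hillis-Steele style inclusive scan; the per-round updates are independent, which is what makes the pattern parallel, and here they are simulated with a snapshot-and-loop pass.

```python
def inclusive_scan(values):
    """Hillis-Steele style inclusive prefix scan: in each of O(log n)
    rounds, every element adds the value 'offset' positions to its left.
    All updates within a round are independent (parallelizable); the
    snapshot simulates the simultaneous reads of a parallel step."""
    out = list(values)
    offset = 1
    while offset < len(out):
        prev = list(out)            # snapshot = simultaneous reads
        for i in range(offset, len(out)):
            out[i] = prev[i] + prev[i - offset]
        offset *= 2
    return out

print(inclusive_scan([3, 1, 4, 1, 5, 9, 2, 6]))  # → [3, 4, 8, 9, 14, 23, 25, 31]
```

    Reductions are the degenerate case where only the final element of the scan is needed; ghost cell updates are a communication pattern rather than an arithmetic one.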

  8. ATLAS FTK a - very complex - custom parallel supercomputer

    CERN Document Server

    Kimura, Naoki; The ATLAS collaboration

    2016-01-01

    In the ever-increasing pile-up LHC environment, advanced techniques of analysing the data are implemented in order to increase the rate of relevant physics processes with respect to background processes. The Fast TracKer (FTK) is a track finding implementation at hardware level that is designed to deliver full-scan tracks with $p_{T}$ above 1GeV to the ATLAS trigger system for every L1 accept (at a maximum rate of 100kHz). In order to achieve this performance a highly parallel system was designed and now it is under installation in ATLAS. At the beginning of 2016 it will provide tracks for the trigger system in a region covering the central part of the ATLAS detector, and during the year its coverage will be extended to the full detector coverage. The system relies on matching hits coming from the silicon tracking detectors against 1 billion patterns stored in specially designed ASIC chips (Associative Memory - AM06). In a first stage coarse resolution hits are matched against the patterns and the accepted h...

  9. Determinants of Network News Coverage of the Oil Industry during the Late 1970s.

    Science.gov (United States)

    Erfle, Stephen; McMillan, Henry

    1989-01-01

    Examines which firms and products best predict media coverage of the oil industry. Reports that price variations in testing oil and gasoline correlate with the extent of news coverage provided by network television. (MM)

  10. Computer-Aided Parallelizer and Optimizer

    Science.gov (United States)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.

  11. Suicide reporting within British newspapers' arts coverage.

    Science.gov (United States)

    Pitman, Alexandra; Stevenson, Fiona

    2015-01-01

    Many suicide prevention strategies promote media guidelines on suicide reporting, given evidence that irresponsible reporting of suicide can influence imitative suicidal behavior. Due to limited resources, monitoring of guideline adherence has tended to focus on news outputs, with a risk of neglecting other journalistic content. To determine whether British newspapers' arts coverage adheres to media guidelines on suicide reporting. Purposive sampling was used to capture current national practice on suicide reporting within newspapers' arts coverage of exhibitions. Recent major UK exhibitions by artists who had died by suicide were identified: Kirchner, Rothko, Gorky, and Van Gogh. Content analysis of all UK national newspaper coverage of these exhibitions was performed to measure the articles' adherence to widely accepted media guidelines. In all, 68 newspaper reviews satisfied inclusion criteria, with 100% failing to show full adherence to media guidelines: 21% used inappropriate language; 38% provided explicit descriptions of the suicide; 7% employed simplistic explanations for suicide triggers; 27% romanticized the suicide; and 100% omitted information on sources of support. British newspapers' arts coverage of exhibitions deviates considerably from media guidelines on the reporting of suicide. The findings suggest scope to improve journalists' awareness of the importance of this component of suicide prevention strategies.

  12. Determinants of vaccination coverage in rural Nigeria

    Directory of Open Access Journals (Sweden)

    Meurice Francois P

    2008-11-01

    Full Text Available Abstract Background Childhood immunization is a cost-effective public health strategy. Expanded Programme on Immunisation (EPI) services have been provided in a rural Nigerian community (Sabongidda-Ora, Edo State) at no cost to the community since 1998 through a privately financed vaccination project (private-public partnership). The objective of this survey was to assess vaccination coverage and its determinants in this rural community in Nigeria. Methods A cross-sectional survey was conducted in September 2006, which included the use of an interviewer-administered questionnaire to assess knowledge of mothers of children aged 12–23 months and vaccination coverage. Survey participants were selected following the World Health Organization's (WHO) immunization coverage cluster survey design. Vaccination coverage was assessed by vaccination card and maternal history. A child was said to be fully immunized if he or she had received all of the following vaccines: a dose of Bacille Calmette Guerin (BCG), three doses of oral polio (OPV), three doses of diphtheria, pertussis and tetanus (DPT), three doses of hepatitis B (HB) and one dose of measles by the time he or she was enrolled in the survey, i.e. between the ages of 12–23 months. Knowledge of the mothers was graded as satisfactory if mothers had at least a score of 3 out of a maximum of 5 points. Logistic regression was performed to identify determinants of full immunization status. Results Three hundred and thirty-nine mothers and 339 children (each mother had one eligible child) were included in the survey. Most of the mothers (99.1%) had very positive attitudes to immunization and > 55% were generally knowledgeable about symptoms of vaccine preventable diseases, except for difficulty in breathing (as a symptom of diphtheria). Two hundred and ninety-five mothers (87.0%) had a satisfactory level of knowledge. Vaccination coverage against all the seven childhood vaccine preventable diseases was 61.9% although it

  13. SNP detection for massively parallel whole-genome resequencing.

    Science.gov (United States)

    Li, Ruiqiang; Li, Yingrui; Fang, Xiaodong; Yang, Huanming; Wang, Jian; Kristiansen, Karsten; Wang, Jun

    2009-06-01

    Next-generation massively parallel sequencing technologies provide ultrahigh throughput at two orders of magnitude lower unit cost than capillary Sanger sequencing technology. One of the key applications of next-generation sequencing is studying genetic variation between individuals using whole-genome or target region resequencing. Here, we have developed a consensus-calling and SNP-detection method for sequencing-by-synthesis Illumina Genome Analyzer technology. We designed this method by carefully considering the data quality, alignment, and experimental errors common to this technology. All of this information was integrated into a single quality score for each base under Bayesian theory to measure the accuracy of consensus calling. We tested this methodology using a large-scale human resequencing data set of 36x coverage and assembled a high-quality nonrepetitive consensus sequence for 92.25% of the diploid autosomes and 88.07% of the haploid X chromosome. Comparison of the consensus sequence with Illumina human 1M BeadChip genotyped alleles from the same DNA sample showed that 98.6% of the 37,933 genotyped alleles on the X chromosome and 98% of 999,981 genotyped alleles on autosomes were covered at 99.97% and 99.84% consistency, respectively. At a low sequencing depth, we used prior probability of dbSNP alleles and were able to improve coverage of the dbSNP sites significantly as compared to that obtained using a nonimputation model. Our analyses demonstrate that our method has a very low false call rate at any sequencing depth and excellent genome coverage at a high sequencing depth.
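    The core Bayes step described above can be sketched compactly. The toy caller below is an illustration only (the published method also folds in alignment and platform-specific error terms): each read contributes a likelihood derived from its Phred quality score, and the posterior over the four bases follows from Bayes' theorem; the flat prior used here is where a dbSNP-informed prior would plug in.

```python
import math

def call_consensus(observed, priors=None):
    """Toy Bayesian consensus caller: each observed base carries a Phred
    quality Q, the error probability 10^(-Q/10) is spread over the three
    other bases, and the posterior of each candidate base is the product
    of per-read likelihoods times the prior, normalized over A/C/G/T."""
    bases = 'ACGT'
    priors = priors or {b: 0.25 for b in bases}  # flat prior; dbSNP would go here
    log_post = {}
    for b in bases:
        lp = math.log(priors[b])
        for obs, q in observed:
            err = 10.0 ** (-q / 10.0)
            lp += math.log(1.0 - err if obs == b else err / 3.0)
        log_post[b] = lp
    # normalize in log space to avoid underflow
    m = max(log_post.values())
    total = sum(math.exp(v - m) for v in log_post.values())
    return {b: math.exp(v - m) / total for b, v in log_post.items()}

# three reads: two high-quality 'A', one low-quality 'G'
post = call_consensus([('A', 30), ('A', 30), ('G', 10)])
print(max(post, key=post.get))  # prints 'A'
```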

  14. Annual immunisation coverage report, 2010.

    Science.gov (United States)

    Hull, Brynley; Dey, Aditi; Menzies, Rob; McIntyre, Peter

    2013-03-31

    This, the fourth annual immunisation coverage report, documents trends during 2010 for a range of standard measures derived from Australian Childhood Immunisation Register (ACIR) data. These include coverage at standard age milestones and for individual vaccines included on the National Immunisation Program (NIP). For the first time, coverage from other sources for adolescents and the elderly are included. The proportion of children 'fully vaccinated' at 12, 24 and 60 months of age was 91.6%, 92.1% and 89.1% respectively. For vaccines available on the NIP but not currently assessed for 'fully immunised' status or for eligibility for incentive payments (rotavirus and pneumococcal at 12 months and meningococcal C and varicella at 24 months) coverage varied. Although pneumococcal vaccine had similar coverage at 12 months to other vaccines, coverage was lower for rotavirus at 12 months (84.7%) and varicella at 24 months (83.0%). Overall coverage at 24 months of age exceeded that at 12 months of age nationally and for most jurisdictions, but as receipt of varicella vaccine at 18 months is excluded from calculations, this represents delayed immunisation, with some contribution from immunisation incentives. The 'fully immunised' coverage estimates for immunisations due by 60 months increased substantially in 2009, reaching almost 90% in 2010, probably related to completed immunisation by 60 months of age being introduced in 2009 as a requirement for GP incentive payments. As previously documented, vaccines recommended for Indigenous children only (hepatitis A and pneumococcal polysaccharide vaccine) had suboptimal coverage at around 57%. Delayed receipt of vaccines by Indigenous children at the 60-month milestone age improved from 56% to 62% but the disparity in on-time vaccination between Indigenous and non-Indigenous children at earlier age milestones did not improve. Coverage data for human papillomavirus (HPV) from the national HPV register are consistent with high

  15. Scaling up machine learning: parallel and distributed approaches

    National Research Council Canada - National Science Library

    Bekkerman, Ron; Bilenko, Mikhail; Langford, John

    2012-01-01

    ... presented in the book cover a range of parallelization platforms from FPGAs and GPUs to multi-core systems and commodity clusters; concurrent programming frameworks that include CUDA, MPI, MapReduce, and DryadLINQ; and various learning settings: supervised, unsupervised, semi-supervised, and online learning. Extensive coverage of parallelizat...

  16. Parallel discrete event simulation

    NARCIS (Netherlands)

    Overeinder, B.J.; Hertzberger, L.O.; Sloot, P.M.A.; Withagen, W.J.

    1991-01-01

    In simulating applications for execution on specific computing systems, the simulation performance figures must be known in a short period of time. One basic approach to the problem of reducing the required simulation time is the exploitation of parallelism. However, in parallelizing the simulation

  17. Patterns For Parallel Programming

    CERN Document Server

    Mattson, Timothy G; Massingill, Berna L

    2005-01-01

    From grids and clusters to next-generation game consoles, parallel computing is going mainstream. Innovations such as Hyper-Threading Technology, HyperTransport Technology, and multicore microprocessors from IBM, Intel, and Sun are accelerating the movement's growth. Only one thing is missing: programmers with the skills to meet the soaring demand for parallel software.

  18. Parallel distributed computing using Python

    Science.gov (United States)

    Dalcin, Lisandro D.; Paz, Rodrigo R.; Kler, Pablo A.; Cosimo, Alejandro

    2011-09-01

    This work presents two software components aimed to relieve the costs of accessing high-performance parallel computing resources within a Python programming environment: MPI for Python and PETSc for Python. MPI for Python is a general-purpose Python package that provides bindings for the Message Passing Interface (MPI) standard using any back-end MPI implementation. Its facilities allow parallel Python programs to easily exploit multiple processors using the message passing paradigm. PETSc for Python provides access to the Portable, Extensible Toolkit for Scientific Computation (PETSc) libraries. Its facilities allow sequential and parallel Python applications to exploit state-of-the-art algorithms and data structures readily available in PETSc for the solution of large-scale problems in science and engineering. MPI for Python and PETSc for Python are fully integrated with PETSc-FEM, an MPI and PETSc based parallel, multiphysics, finite elements code developed at CIMEC laboratory. This software infrastructure supports research activities related to simulation of fluid flows with applications ranging from the design of microfluidic devices for biochemical analysis to modeling of large-scale stream/aquifer interactions.

  19. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high-performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  20. Massively parallel mathematical sieves

    Energy Technology Data Exchange (ETDEWEB)

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
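The scattered-decomposition idea described above can be sketched in a few lines: base primes up to √n are computed serially and shared, and each worker independently sieves its own segment of the range. This is an illustrative thread-based stand-in with assumed function names, not the hypercube implementation from the paper:

```python
import math
from concurrent.futures import ThreadPoolExecutor

def simple_sieve(limit):
    """Serial Sieve of Eratosthenes: primes up to and including limit."""
    flags = [False, False] + [True] * (limit - 1)
    for p in range(2, math.isqrt(limit) + 1):
        if flags[p]:
            flags[p * p :: p] = [False] * len(flags[p * p :: p])
    return [i for i, f in enumerate(flags) if f]

def sieve_segment(lo, hi, base):
    """One worker's share: cross off multiples of the base primes in [lo, hi)."""
    flags = [True] * (hi - lo)
    for p in base:
        start = max(p * p, -(-lo // p) * p)  # first multiple of p >= max(lo, p*p)
        for m in range(start, hi, p):
            flags[m - lo] = False
    return [lo + i for i, f in enumerate(flags) if f and lo + i >= 2]

def parallel_sieve(n, workers=4):
    """Scattered decomposition: share base primes up to sqrt(n), then sieve
    independent segments of [2, n] concurrently."""
    base = simple_sieve(math.isqrt(n))
    bounds = [2 + (n - 1) * k // workers for k in range(workers + 1)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = list(pool.map(sieve_segment, bounds[:-1], bounds[1:],
                              [base] * workers))
    return [p for part in parts for p in part]
```

Because the segments are disjoint and ordered, concatenating the per-worker results yields the sorted list of primes; only the small base-prime list is shared, which is what makes the decomposition efficient on ensemble machines.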

  1. Parallelism in practice: approaches to parallelism in bioassays.

    Science.gov (United States)

    Fleetwood, Kelly; Bursa, Francis; Yellowlees, Ann

    2015-01-01

    Relative potency bioassays are used to estimate the potency of a test biological product relative to a standard or reference product. It is established practice to assess the parallelism of the dose-response curves of the products prior to calculating relative potency. This paper provides a review of parallelism testing for bioassays. In particular, three common methods for parallelism testing are reviewed: two significance tests (the F-test and the χ²-test) and an equivalence test. Simulation is used to compare these methods. We compare their sensitivity, specificity, and receiver operating characteristic curves, and find that both the χ²-test and the equivalence test outperform the F-test on average, unless the assay-to-assay variation is considerable. No single method is optimal in all situations. We describe how bioassay scientists and statisticians can work together to determine the best approach for each bioassay, taking into account its properties and the context in which it is applied. Bioassays are experiments that use living organisms, tissues, or cells to measure the concentration of a pharmaceutical. Typically the response of the living matter to a test sample with an unknown concentration of a pharmaceutical is compared to the response to a standard reference sample with a known concentration. An important step in the analysis of bioassays is checking that the test sample is responding like a diluted copy of the reference sample; this is known as testing for parallelism. There are three statistical methods commonly used to test for parallelism: the F-test, the χ²-test, and the equivalence test. This paper compares the three methods using computer simulations. We conclude that different methods are best in different situations, and we provide guidelines to help bioassay scientists and statisticians decide which method to use. © PDA, Inc. 2015.
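One of the three methods compared, the F-test, can be sketched for the simplest case of straight-line log-dose responses: fit each preparation with its own slope, refit under a common slope (the parallelism constraint), and compare residual sums of squares. This is a minimal illustration under assumed equal-variance errors, not the authors' simulation code; `parallelism_f` and the data layout are illustrative names:

```python
from statistics import mean

def _line_stats(xs, ys):
    """Means and centred sums of squares/cross-products for one group."""
    xbar, ybar = mean(xs), mean(ys)
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    return xbar, ybar, sxx, sxy

def rss_separate(groups):
    """Residual sum of squares when each preparation gets its own slope."""
    total = 0.0
    for xs, ys in groups:
        xbar, ybar, sxx, sxy = _line_stats(xs, ys)
        b = sxy / sxx
        total += sum((y - ybar - b * (x - xbar)) ** 2 for x, y in zip(xs, ys))
    return total

def rss_common(groups):
    """RSS under the parallelism constraint: shared slope, per-group intercepts."""
    stats = [_line_stats(xs, ys) for xs, ys in groups]
    b = sum(s[3] for s in stats) / sum(s[2] for s in stats)  # pooled slope
    total = 0.0
    for (xs, ys), (xbar, ybar, _, _) in zip(groups, stats):
        total += sum((y - ybar - b * (x - xbar)) ** 2 for x, y in zip(xs, ys))
    return total

def parallelism_f(groups):
    """F statistic for H0 'all slopes equal'; compare to F(g-1, n-2g)."""
    g = len(groups)
    n = sum(len(xs) for xs, _ in groups)
    rss_full, rss_red = rss_separate(groups), rss_common(groups)
    return ((rss_red - rss_full) / (g - 1)) / (rss_full / (n - 2 * g))
```

A small F (relative to the F(g−1, n−2g) critical value) is consistent with parallelism; a large F rejects it. Real bioassay curves are usually four-parameter logistic rather than straight lines, which is where the χ²- and equivalence-test alternatives compared in the paper come in.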

  2. Assuring Access to Affordable Coverage

    Data.gov (United States)

    U.S. Department of Health & Human Services — Under the Affordable Care Act, millions of uninsured Americans will gain access to affordable coverage through Affordable Insurance Exchanges and improvements in...

  3. Inequity between male and female coverage in state infertility laws.

    Science.gov (United States)

    Dupree, James M; Dickey, Ryan M; Lipshultz, Larry I

    2016-06-01

    To analyze state insurance laws mandating coverage for male factor infertility and identify possible inequities between male and female coverage in state insurance laws. We identified states with laws or codes related to infertility insurance coverage using the National Conference of State Legislatures' and the National Infertility Association's websites. We performed a primary, systematic analysis of the laws or codes to specifically identify coverage for male factor infertility services. The main outcome measure was the presence or absence of language in state insurance laws mandating coverage for male factor infertility care. There are 15 states with laws mandating insurance coverage for female factor infertility. Only eight of those states (California, Connecticut, Massachusetts, Montana, New Jersey, New York, Ohio, and West Virginia) have mandates for male factor infertility evaluation or treatment. Insurance coverage for male factor infertility is most specific in Massachusetts, New Jersey, and New York, yet significant differences exist in the male factor policies in all eight states. Three states (Massachusetts, New Jersey, and New York) exempt coverage for vasectomy reversal. Despite national recommendations that male and female partners begin infertility evaluations together, only 8 of 15 states with laws mandating infertility coverage include coverage for the male partner. Excluding men from infertility coverage places an undue burden on female partners and risks missing opportunities to diagnose serious male health conditions, correct reversible causes of infertility, and provide cost-effective treatments that can downgrade the intensity of intervention required to achieve a pregnancy. Copyright © 2016 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.

  4. Immunisation coverage annual report, 2008.

    Science.gov (United States)

    Hull, Brynley P; Mahajan, Deepika; Dey, Aditi; Menzies, Rob I; McIntyre, Peter B

    2010-09-01

    This, the 2nd annual immunisation coverage report, documents trends during 2008 for a range of standard measures derived from Australian Childhood Immunisation Register data, including overall coverage at standard age milestones and for individual vaccines included on the National Immunisation Program (NIP). Coverage by Indigenous status and mapping by smaller geographic areas, as well as trends in timeliness, are also summarised according to standard templates. With respect to overall coverage, Immunise Australia Program targets have been reached for children at 12 and 24 months of age but not for children at 5 years of age. Coverage at 24 months of age exceeds that at 12 months of age, but as receipt of varicella vaccine at 18 months is excluded from calculations of 'fully immunised', this probably represents delayed immunisation, with some contribution from immunisation incentives. Similarly, the decrease in coverage estimates for immunisations due at 4 years of age from March 2008 is primarily due to changing the assessment age from 6 years to 5 years of age from December 2007. A number of individual vaccines on the NIP are not currently assessed for 'fully immunised' status or for eligibility for incentive payments. These include pneumococcal conjugate and meningococcal C conjugate vaccines, for which coverage is comparable to vaccines which are assessed for 'fully immunised' status, and rotavirus and varicella vaccines, for which coverage is lower. Coverage is also suboptimal for vaccines recommended for Indigenous children only (i.e. hepatitis A and pneumococcal polysaccharide vaccine), as previously reported for other vaccines for both children and adults. Delayed receipt of vaccines is an important issue for vaccines recommended for Indigenous children and has not improved among non-Indigenous children despite improvements in coverage at the 24-month milestone. Although Indigenous children in Australia have coverage levels that are similar to non-Indigenous children.

  5. Staff Acceptance of Tele-ICU Coverage

    Science.gov (United States)

    Chan, Paul S.; Cram, Peter

    2011-01-01

    Background: Remote coverage of ICUs is increasing, but staff acceptance of this new technology is incompletely characterized. We conducted a systematic review to summarize existing research on acceptance of tele-ICU coverage among ICU staff. Methods: We searched for published articles pertaining to critical care telemedicine systems (aka, tele-ICU) between January 1950 and March 2010 using PubMed, Cumulative Index to Nursing and Allied Health Literature, Global Health, Web of Science, and the Cochrane Library and abstracts and presentations delivered at national conferences. Studies were included if they provided original qualitative or quantitative data on staff perceptions of tele-ICU coverage. Studies were imported into content analysis software and coded by tele-ICU configuration, methodology, participants, and findings (eg, positive and negative staff evaluations). Results: Review of 3,086 citations yielded 23 eligible studies. Findings were grouped into four categories of staff evaluation: overall acceptance level of tele-ICU coverage (measured in 70% of studies), impact on patient care (measured in 96%), impact on staff (measured in 100%), and organizational impact (measured in 48%). Overall acceptance was high, despite initial ambivalence. Favorable impact on patient care was perceived by > 82% of participants. Staff impact referenced enhanced collaboration, autonomy, and training, although scrutiny, malfunctions, and contradictory advice were cited as potential barriers. Staff perceived the organizational impact to vary. An important limitation of available studies was a lack of rigorous methodology and validated survey instruments in many studies. Conclusions: Initial reports suggest high levels of staff acceptance of tele-ICU coverage, but more rigorous methodologic study is required. PMID:21051386

  6. What is parallelism?

    Science.gov (United States)

    Scotland, Robert W

    2011-01-01

    Although parallel and convergent evolution are discussed extensively in technical articles and textbooks, their meaning can be overlapping, imprecise, and contradictory. The meaning of parallel evolution in much of the evolutionary literature grapples with two separate hypotheses in relation to phenotype and genotype, but often these two hypotheses have been inferred from only one hypothesis, and a number of subsidiary but problematic criteria, in relation to the phenotype. However, examples of parallel evolution of genetic traits that underpin or are at least associated with convergent phenotypes are now emerging. Four criteria for distinguishing parallelism from convergence are reviewed. All are found to be incompatible with any single proposition of homoplasy. Therefore, all homoplasy is equivalent to a broad view of convergence. Based on this concept, all phenotypic homoplasy can be described as convergence and all genotypic homoplasy as parallelism, which can be viewed as the equivalent concept of convergence for molecular data. Parallel changes of molecular traits may or may not be associated with convergent phenotypes but if so describe homoplasy at two biological levels-genotype and phenotype. Parallelism is not an alternative to convergence, but rather it entails homoplastic genetics that can be associated with and potentially explain, at the molecular level, how convergent phenotypes evolve. © 2011 Wiley Periodicals, Inc.

  7. Compositional C++: Compositional Parallel Programming

    OpenAIRE

    Chandy, K. Mani; Kesselman, Carl

    1992-01-01

    A compositional parallel program is a program constructed by composing component programs in parallel, where the composed program inherits properties of its components. In this paper, we describe a small extension of C++ called Compositional C++ or CC++ which is an object-oriented notation that supports compositional parallel programming. CC++ integrates different paradigms of parallel programming: data-parallel, task-parallel and object-parallel paradigms; imperative and declarative programm...

  8. Three-Dimensional Analysis of Deep Space Network Antenna Coverage

    Science.gov (United States)

    Kegege, Obadiah; Fuentes, Michael; Meyer, Nicholas; Sil, Amy

    2012-01-01

    There is a need to understand NASA's Deep Space Network (DSN) coverage gaps and any limitations to provide redundant communication coverage for future deep space missions, especially for manned missions to the Moon and Mars. The DSN antennas are required to provide continuous communication coverage for deep space flights, interplanetary missions, and deep space scientific observations. The DSN consists of ground antennas located at three sites: Goldstone in the USA, Canberra in Australia, and Madrid in Spain. These locations are not separated by exactly 120 degrees, and some DSN antennas are located in bowl-shaped mountainous terrain to shield against radiofrequency interference, resulting in a coverage gap in the southern hemisphere for the current DSN architecture. To analyze the extent of this gap and other coverage limitations, simulations of the DSN architecture were performed. In addition to the physical properties of the DSN assets, the simulation incorporated communication forward link calculations and azimuth/elevation masks that constrain the effects of terrain for each DSN antenna. Analysis of the simulation data was performed to create coverage profiles with receiver settings at deep space altitudes ranging from 2 million to 10 million km and a spherical grid resolution of 0.25 degrees with respect to longitude and latitude. With the results of these simulations, two- and three-dimensional representations of the area without communication coverage and area with coverage were developed, showing the size and shape of the communication coverage gap projected in space. Also, the significance of this communication coverage gap is analyzed from the simulation data.
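A crude version of the visibility geometry underlying such a simulation can be sketched as follows. For a very distant source, the peak elevation seen from a station depends only on the latitude/declination offset; the site latitudes below are approximate, and the flat 10-degree elevation mask is an assumed placeholder for the measured terrain masks the real analysis uses:

```python
# Approximate DSN site latitudes in degrees (illustrative values); the
# 10-degree minimum-elevation mask is an assumed placeholder, not the
# terrain-derived azimuth/elevation masks of the full simulation.
SITES = {"Goldstone": 35.4, "Canberra": -35.4, "Madrid": 40.4}

def max_elevation(lat_deg, dec_deg):
    """Peak elevation (degrees) of a very distant source at declination
    dec_deg, as seen from a station at latitude lat_deg; reached once per
    day at upper culmination, negative if the source never rises."""
    return 90.0 - abs(lat_deg - dec_deg)

def ever_visible(dec_deg, mask_deg=10.0):
    """True if at least one station can see the direction above the mask
    at some time of day; ignores terrain shielding and link margins."""
    return any(max_elevation(lat, dec_deg) >= mask_deg for lat in SITES.values())
```

In this toy model, directions south of roughly −45° declination are reachable only from Canberra, which is why terrain shielding at that site matters for the southern-hemisphere gap the full simulation quantifies with real masks and link calculations.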

  9. [Falsified medicines in parallel trade].

    Science.gov (United States)

    Muckenfuß, Heide

    2017-11-01

    The number of falsified medicines on the German market has distinctly increased over the past few years. In particular, stolen pharmaceutical products, a form of falsified medicines, have increasingly been introduced into the legal supply chain via parallel trading. The reasons why parallel trading serves as a gateway for falsified medicines are most likely the complex supply chains and routes of transport. It is hardly possible for national authorities to trace the history of a medicinal product that was bought and sold by several intermediaries in different EU member states. In addition, the heterogeneous outward appearance of imported and relabelled pharmaceutical products facilitates the introduction of illegal products onto the market. Official batch release at the Paul-Ehrlich-Institut offers the possibility of checking some aspects that might provide an indication of a falsified medicine. In some circumstances, this may allow the identification of falsified medicines before they come onto the German market. However, this control is only possible for biomedicinal products that have not received a waiver regarding official batch release. For improved control of parallel trade, better networking among the EU member states would be beneficial. European-wide regulations, e. g., for disclosure of the complete supply chain, would help to minimise the risks of parallel trading and hinder the marketing of falsified medicines.

  10. Prevalence, Characteristics, and Perception of Nursery Antibiotic Stewardship Coverage in the United States.

    Science.gov (United States)

    Cantey, Joseph B; Vora, Niraj; Sunkara, Mridula

    2017-09-01

    Prolonged or unnecessary antibiotic use is associated with adverse outcomes in infants. Antibiotic stewardship programs (ASPs) aim to prevent these adverse outcomes and optimize antibiotic prescribing. However, data evaluating ASP coverage of nurseries are limited. The objectives of this study were to describe the characteristics of nurseries with and without ASP coverage and to determine perceptions of and barriers to nursery ASP coverage. The 2014 American Hospital Association annual survey was used to randomly select a level III neonatal intensive care unit from all 50 states. A level I and level II nursery from the same city as the level III nursery were then randomly selected. Hospital, nursery, and ASP characteristics were collected. Nursery and ASP providers (pharmacists or infectious disease providers) were interviewed using a semistructured template. Transcribed interviews were analyzed for themes. One hundred forty-six centers responded; 104 (71%) provided nursery ASP coverage. In multivariate analysis, level of nursery, university affiliation, and number of full-time equivalent ASP staff were the main predictors of nursery ASP coverage. Several themes were identified from interviews: unwanted coverage, unnecessary coverage, jurisdiction issues, need for communication, and a focus on outcomes. Most providers had a favorable view of nursery ASP coverage. Larger, higher-acuity nurseries in university-affiliated hospitals are more likely to have ASP coverage. Low ASP staffing and a perceived lack of importance were frequently cited as barriers to nursery coverage. Most nursery ASP coverage is viewed favorably by providers, but nursery providers regard it as less important than ASP providers.

  11. The Coverage of Campaign Advertising by the Prestige Press in 1972.

    Science.gov (United States)

    Bowers, Thomas A.

    The nature and extent of the news media's coverage of political advertising in the 1972 presidential campaign were shallow and spotty at best. The candidates' political advertising strategies received limited coverage by reporters and commentators. Even the "prestige" press--16 major newspapers--provided limited coverage to the nature…

  12. Parallelism viewpoint: An architecture viewpoint to model parallelism behaviour of parallelism-intensive software systems

    OpenAIRE

    Muhammad, Naeem; Boucké, Nelis; Berbers, Yolande

    2010-01-01

    The use of parallelism enhances the performance of a software system. However, its excessive use can degrade the system performance. In this report we propose a parallelism viewpoint to optimize the use of parallelism by eliminating unnecessarily used parallelism in legacy systems. The parallelism viewpoint describes parallelism of the system in order to analyze multiple overheads associated with its threads. We use the proposed viewpoint to find parallelism specific performance overheads of ...

  13. CLUVI Parallel Corpus

    OpenAIRE

    Universidade de Vigo. Grupo de investigación TALG

    2012-01-01

    The CLUVI Corpus of the University of Vigo is an open collection of parallel text corpora developed under the direction of Xavier Gómez Guinovart (2003-2012) that covers specific areas of the contemporary Galician language. With 23 million words, the CLUVI Corpus comprises six main parallel corpora belonging to five specialised registers or domains (fiction, computing, popular science, law and administration) and involving five different language combinations (Galician-Spanish bilingual trans...

  14. DPS - Dynamic Parallel Schedules

    OpenAIRE

    IEEE Press; Gerlach, S.; Hersch, R. D.

    2003-01-01

    Dynamic Parallel Schedules (DPS) is a high-level framework for developing parallel applications on distributed memory computers (e.g. clusters of PC). Its model relies on compositional customizable split-compute-merge graphs of operations (directed acyclic flow graphs). The graphs and the mapping of operations to processing nodes are specified dynamically at runtime. DPS applications are pipelined and multithreaded by construction, ensuring a maximal overlap of computations and communications...

  15. Parallel Genetic Algorithm System

    OpenAIRE

    Nagaraju Sangepu; Vikram, K.

    2010-01-01

    Genetic Algorithm (GA) is a popular technique for finding the optimum of a transformation because of its simple implementation procedure. In image processing, GAs are used as a parameter-search procedure, and this processing requires very high computing performance. Recently, parallel processing has been used to reduce the time by distributing the appropriate amount of work to each computer in the clustering system. The processing time reduces with the number of dedicated computers. Parallel implement...
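As a concrete sketch of the parameter-search loop a GA performs, the following minimal generational GA uses tournament selection, one-point crossover, and per-bit mutation; all names and parameter values are illustrative, not from the paper:

```python
import random

def genetic_search(fitness, n_bits, pop_size=30, generations=60,
                   p_mut=0.02, seed=0):
    """Minimal generational GA over bit-strings: tournament selection,
    one-point crossover, per-bit mutation. Returns the fittest individual
    of the final population (seeded for reproducibility)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament():
        # Pick two distinct individuals, keep the fitter one.
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)      # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # mutate
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

With `fitness=sum` this solves the OneMax toy problem (maximise the number of 1-bits). In the image-processing setting the fitness would instead score a candidate transformation, and evaluating the population's fitness is the work that gets distributed across the cluster nodes.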

  16. Medical coverage of youth basketball events.

    Science.gov (United States)

    Ching, Brian K; Khalili-Borna, Dennis

    2013-01-01

    Basketball is among the most popular team sports for boys and girls in the United States and is continuing to grow in popularity worldwide. Increased popularity translates to an increased number of events and, unfortunately, the injuries that occur as a result. In this article, we discuss ways to be prepared in the coverage of youth basketball events, with an emphasis on the evaluation and treatment of some of the most commonly encountered injuries within the sport of basketball. We also give special consideration to injuries that are specific to the skeletally immature athlete. By having a greater knowledge and understanding of these injuries, a provider of medical coverage for basketball events hopefully will gain a higher sense of confidence in handling associated problems as they arise.

  17. Parallel Computational Protein Design.

    Science.gov (United States)

    Zhou, Yichao; Donald, Bruce R; Zeng, Jianyang

    2017-01-01

    Computational structure-based protein design (CSPD) is an important problem in computational biology, which aims to design or improve a prescribed protein function based on a protein structure template. It provides a practical tool for real-world protein engineering applications. A popular CSPD method that guarantees to find the global minimum energy conformation (GMEC) is to combine both dead-end elimination (DEE) and A* tree search algorithms. However, in this framework, the A* search algorithm can run in exponential time in the worst case, which may become the computation bottleneck of large-scale computational protein design. To address this issue, we extend and add a new module to the OSPREY program that was previously developed in the Donald lab (Gainza et al., Methods Enzymol 523:87, 2013) to implement a GPU-based massively parallel A* algorithm for improving the protein design pipeline. By exploiting the modern GPU computational framework and optimizing the computation of the heuristic function for A* search, our new program, called gOSPREY, can provide up to four orders of magnitude speedup in large protein design cases with a small memory overhead compared to the traditional A* search algorithm implementation, while still guaranteeing optimality. In addition, gOSPREY can be configured to run in a bounded-memory mode to tackle problems in which the conformation space is too large and the global optimal solution could not previously be computed. Furthermore, the GPU-based A* algorithm implemented in the gOSPREY program can be combined with state-of-the-art rotamer pruning algorithms such as iMinDEE (Gainza et al., PLoS Comput Biol 8:e1002335, 2012) and DEEPer (Hallen et al., Proteins 81:18-39, 2013) to also consider continuous backbone and side-chain flexibility.
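The serial A* search that gOSPREY parallelises on the GPU follows the standard best-first scheme: repeatedly expand the node minimising f = g + h, where the heuristic h must never overestimate the remaining cost for the result to be optimal. A generic sketch of that scheme (a toy graph search, not OSPREY's conformation-tree search) looks like:

```python
import heapq

def astar(start, goal, neighbors, h):
    """Textbook A*: expand the frontier node with the smallest f = g + h.
    neighbors(node) yields (next_node, edge_cost) pairs; h is an admissible
    heuristic (never overestimates the true remaining cost)."""
    frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None
```

In the protein-design setting the "nodes" are partial rotamer assignments and h is the energy lower bound; the GPU version of the paper evaluates many heuristic computations and expansions in parallel rather than one heap pop at a time.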

  18. A Tutorial on Parallel and Concurrent Programming in Haskell

    Science.gov (United States)

    Peyton Jones, Simon; Singh, Satnam

    This practical tutorial introduces the features available in Haskell for writing parallel and concurrent programs. We first describe how to write semi-explicit parallel programs by using annotations to express opportunities for parallelism and to help control the granularity of parallelism for effective execution on modern operating systems and processors. We then describe the mechanisms provided by Haskell for writing explicitly parallel programs, with a focus on the use of software transactional memory to help share information between threads. Finally, we show how nested data parallelism can be used to write deterministically parallel programs, which allows programmers to use rich data types in data-parallel programs that are automatically transformed into flat data-parallel versions for efficient execution on multi-core processors.

  19. Newspaper coverage of biobanks

    Directory of Open Access Journals (Sweden)

    Ubaka Ogbogu

    2014-07-01

    Background. Biobanks are an important research resource, providing researchers with biological samples, tools and data, but they have also been associated with a range of ethical, legal and policy issues and concerns. Although there have been studies examining the views of different stakeholders, such as donors, researchers and the general public, the media portrayal of biobanks has been absent from this body of research. This study therefore examines how biobanking has been represented in major print newspapers from Australia, Canada, the United Kingdom and the United States to identify the issues and concerns surrounding biobanks that have featured most prominently in the print media discourse. Methods. Using Factiva, articles published in major broadsheet newspapers in Canada, the US, the UK, and Australia were identified using specified search terms. The final sample size consisted of 163 articles. Results. The majority of articles mentioned or discussed the benefits of biobanking, with medical research being the most prevalent benefit mentioned. Fewer articles discussed risks associated with biobanking. Researchers were the group of people most quoted in the articles, followed by biobank employees. Biobanking was portrayed as mostly neutral or positive, with few articles portraying biobanking in a negative manner. Conclusion. Reporting on biobanks in the print media heavily favours discussions of related benefits over risks. Members of the scientific research community appear to be a primary source of this positive tone. Under-reporting of risks and a downtrend in reporting on legal and regulatory issues suggest that the print media views such matters as less newsworthy than the perceived benefits of biobanking.

  20. Contraception and abortion coverage: What do primary care physicians think?

    Science.gov (United States)

    Chuang, Cynthia H; Martenis, Melissa E; Parisi, Sara M; Delano, Rachel E; Sobota, Mindy; Nothnagle, Melissa; Schwarz, Eleanor Bimla

    2012-08-01

    Insurance coverage for family planning services has been a highly controversial element of the US health care reform debate. Whether primary care providers (PCPs) support public and private health insurance coverage for family planning services is unknown. PCPs in three states were surveyed regarding their opinions on health plan coverage and tax dollar use for contraception and abortion services. Almost all PCPs supported health plan coverage for contraception (96%) and use of tax dollars to cover contraception for low-income women (94%). A smaller majority supported health plan coverage for abortions (61%) and use of tax dollars to cover abortions for low-income women (63%). In adjusted models, support of health plan coverage for abortions was associated with female gender and internal medicine specialty, and support of using tax dollars for abortions for low-income women was associated with older age and internal medicine specialty. The majority of PCPs support health insurance coverage of contraception and abortion, as well as tax dollar subsidization of contraception and abortion services for low-income women. Copyright © 2012 Elsevier Inc. All rights reserved.

  1. Automatic Parallelization Tool: Classification of Program Code for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Mustafa Basthikodi

    2016-04-01

    Performance growth of single-core processors came to a halt in the past decade, but growth was re-enabled by the introduction of parallelism in processors. Multicore processors, along with graphics processing units, have broadly expanded the available parallelism. Compilers are being updated to meet the emerging challenges of synchronization and threading. Appropriate program and algorithm classification can greatly help software engineers identify opportunities for effective parallelization. In the present work we investigate current species for the classification of algorithms; related work on classification is discussed, along with a comparison of the issues that challenge classification. A set of algorithms is chosen whose structure matches the different issues and performs the given task. We have tested these algorithms using existing automatic species-extraction tools along with the Bones compiler. We have added functionality to the existing tool, providing a more detailed characterization. The contributions of our work include support for pointer arithmetic, conditional and incremental statements, user-defined types, constants and mathematical functions. With this, we can retain significant data that is not captured by the original species of algorithms. We implemented these new capabilities in the tool, enabling automatic characterization of program code.

  2. Anti-parallel triplexes

    DEFF Research Database (Denmark)

    Kosbar, Tamer R.; Sofan, Mamdouh A.; Waly, Mohamed A.

    2015-01-01

    The phosphoramidites of DNA monomers of 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine (Y) and 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine LNA (Z) are synthesized, and the thermal stability at pH 7.2 and 8.2 of anti-parallel triplexes modified with these two monomers is determined. When the anti-parallel...... chain, especially at the end of the TFO strand. On the other hand, the thermal stability of the anti-parallel triplex was dramatically decreased when the TFO strand was modified with the LNA monomer analog Z in the middle of the TFO strand (ΔTm = -9.1 °C). Also the thermal stability decreased...

  3. Immunisation coverage annual report, 2014.

    Science.gov (United States)

    Hull, Brynley P; Hendry, Alexandra J; Dey, Aditi; Beard, Frank H; Brotherton, Julia M; McIntyre, Peter B

    2017-03-31

    This 8th annual immunisation coverage report shows data for 2014 derived from the Australian Childhood Immunisation Register and the National Human Papillomavirus Vaccination Program Register. This report includes coverage data for 'fully immunised' and by individual vaccines at standard age milestones and timeliness of receipt at earlier ages according to Indigenous status. Overall, 'fully immunised' coverage has been mostly stable at the 12- and 24-month age milestones since late 2003, but at 60 months of age, it has increased by more than 10 percentage points since 2009. As in previous years, coverage for 'fully immunised' at 12 months of age among Indigenous children was 3.7 percentage points lower than for non-Indigenous children overall, varying from 6.9 percentage points in Western Australia to 0.3 of a percentage point in the Australian Capital Territory. In 2014, 73.4% of Australian females aged 15 years had 3 documented doses of human papillomavirus vaccine (jurisdictional range 67.7% to 77.4%), and 82.7% had at least 1 dose, compared with 71.4% and 81.5%, respectively, in 2013. The disparity in on-time vaccination between Indigenous and non-Indigenous children in 2014 diminished progressively from 20.2% for vaccines due by 12 months to 11.5% for those due by 24 months and 3.0% at 60 months of age.

  4. Crime News Coverage in Perspective.

    Science.gov (United States)

    Graber, Doris A.

    According to one sociological model, news is a product of socially determined notions of who and what is important and the organizational structures that result for routinizing news collection; events that deviate from these notions are ignored. This report describes a study of crime news coverage in the media that used this model to examine the…

  5. Is Crime News Coverage Excessive?

    Science.gov (United States)

    Graber, Doris A.

    1979-01-01

    Reports on the frequency and manner in which various crime and noncrime news topics were presented in selected newspapers and television newscasts in 1976. Examines news flow data to determine whether news output was inflexible, and whether crime news coverage distorted the amount of real-life crime. (PD)

  6. A possibility of parallel and anti-parallel diffraction measurements on ...

    Indian Academy of Sciences (India)

    Invited Talks Volume 63 Issue 1 July 2004 pp 175-181 ... However, a bent perfect crystal (BPC) monochromator at the monochromatic focusing condition can provide a quite flat and equal resolution property at both parallel and anti-parallel positions, and thus both sides can be used for the diffraction ...

  7. Adaptive parallel logic networks

    Science.gov (United States)

    Martinez, Tony R.; Vidal, Jacques J.

    1988-01-01

    Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.

  8. Parallel programming with Python

    CERN Document Server

    Palach, Jan

    2014-01-01

    A fast, easy-to-follow and clear tutorial to help you develop parallel computing systems using Python. Along with explaining the fundamentals, the book will also introduce you to slightly advanced concepts and will help you in implementing these techniques in the real world. If you are an experienced Python programmer and are willing to utilize the available computing resources by parallelizing applications in a simple way, then this book is for you. You are required to have a basic knowledge of Python development to get the most out of this book.

  9. ADAPTATION OF PARALLEL VIRTUAL MACHINES MECHANISMS TO PARALLEL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Zafer DEMİR

    2001-02-01

    Full Text Available In this study, Parallel Virtual Machine (PVM) is first reviewed. Since it is based upon parallel processing, it is similar in principle to parallel systems in terms of architecture. PVM is neither an operating system nor a programming language; it is a specific software tool that supports heterogeneous parallel systems, taking advantage of the features of both to bring users closer to parallel systems. Since PVM allows tasks to be executed in parallel on parallel systems, there is an important similarity between PVM and distributed systems with multiple processors. In this study, the relations in question are examined by making use of the master-slave programming technique. In conclusion, PVM is tested with a simple factorial computation on a distributed system to observe its adaptation to parallel architectures.
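The master-slave factorial test described above can be sketched as follows; PVM itself exposes a C/Fortran API, so this standard-library multiprocessing version only illustrates the same pattern under that substitution:

```python
# Illustrative master-slave factorial in the spirit of the study's test;
# multiprocessing workers stand in for PVM slave tasks.
from functools import reduce
from multiprocessing import Pool
from operator import mul

def partial_product(bounds):
    """Slave task: multiply the integers in the half-open range [lo, hi)."""
    lo, hi = bounds
    return reduce(mul, range(lo, hi), 1)

def factorial(n, workers=4):
    """Master: split 1..n into chunks, farm them out, combine the results."""
    step = max(1, n // workers)
    chunks = [(lo, min(lo + step, n + 1)) for lo in range(1, n + 1, step)]
    with Pool(workers) as pool:
        partials = pool.map(partial_product, chunks)
    return reduce(mul, partials, 1)

if __name__ == "__main__":
    print(factorial(10))  # 3628800
```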

  10. Adapting algorithms to massively parallel hardware

    CERN Document Server

    Sioulas, Panagiotis

    2016-01-01

    In the recent years, the trend in computing has shifted from delivering processors with faster clock speeds to increasing the number of cores per processor. This marks a paradigm shift towards parallel programming in which applications are programmed to exploit the power provided by multi-cores. Usually there is gain in terms of the time-to-solution and the memory footprint. Specifically, this trend has sparked an interest towards massively parallel systems that can provide a large number of processors, and possibly computing nodes, as in the GPUs and MPPAs (Massively Parallel Processor Arrays). In this project, the focus was on two distinct computing problems: k-d tree searches and track seeding cellular automata. The goal was to adapt the algorithms to parallel systems and evaluate their performance in different cases.

  11. Parallel programming with PCN. Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.; Tuecke, S.

    1991-12-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (c.f. Appendix A).

  12. The Modeling of the ERP Systems within Parallel Calculus

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN

    2011-01-01

    Full Text Available As we have known for some years, the basic characteristics of ERP systems are: modular design, a central common database, integration of the modules, automatic data transfer between modules, system complexity, and flexible configuration. Because of this, a parallel approach to designing and implementing them, using parallel algorithms, parallel calculus, and distributed databases, is an obvious fit. This paper aims to support these assertions and to provide a model, in summary, of what an ERP system based on parallel computing and algorithms could be.

  13. Introduction to parallel algorithms and architectures arrays, trees, hypercubes

    CERN Document Server

    Leighton, F Thomson

    1991-01-01

    Introduction to Parallel Algorithms and Architectures: Arrays Trees Hypercubes provides an introduction to the expanding field of parallel algorithms and architectures. This book focuses on parallel computation involving the most popular network architectures, namely, arrays, trees, hypercubes, and some closely related networks.Organized into three chapters, this book begins with an overview of the simplest architectures of arrays and trees. This text then presents the structures and relationships between the dominant network architectures, as well as the most efficient parallel algorithms for

  14. Coverage dependence of the structure of tetracene on Ag(110)

    Energy Technology Data Exchange (ETDEWEB)

    Huang Han; Song Fei; Lu Bin; Zhang Hanjie; Dou Weidong; Li Haiyang; He Pimo; Bao Shining [Physics Department, Zhejiang University, Hangzhou 310027 (China); Chen Qiao [Department of Chemistry, School of Life Sciences, University of Sussex, Falmer, Brighton BN1 9QJ (United Kingdom); Zhou Wuzong [School of Chemistry, University of St Andrews, St Andrews KY16 9ST (United Kingdom)], E-mail: phybao@zju.edu.cn, E-mail: qiao.chen@sussex.ac.uk

    2008-08-06

    The ordered adsorption structures of tetracene on Ag(110) have been studied by low energy electron diffraction (LEED), scanning tunneling microscopy (STM) and density functional theory (DFT) calculations. At a low coverage, as calibrated with LEED, both p(4 x 4) and c(8 x 4) ordered structures are simultaneously formed on an Ag(110) surface at room temperature. STM images suggest the molecular plane is parallel to the Ag surface with its long molecular axis aligned along the [001] azimuth. DFT optimization reveals a separation of 0.3 nm between the molecular plane and substrate surface while the center of the tetracene molecule is on the long bridge site. Upon a slight increase in coverage, a (₂⁶ ₅²) structure is formed while the adsorbed molecules maintain the flat-lying geometry, with adjacent molecules alternating their height relative to the surface.

  15. Parallel Robots with Configurable Platforms

    NARCIS (Netherlands)

    Lambert, P.

    2013-01-01

    This thesis explores the fundamentals of a new class of parallel mechanisms called parallel mechanisms with configurable platforms as well as the design and analysis of parallel robots that are based on those mechanisms. Pure parallel robots are formed by two rigid links, the base and the

  16. Parallel k-means++

    Energy Technology Data Exchange (ETDEWEB)

    2017-04-04

    A parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms allow people to cluster multidimensional data by attempting to minimize the mean distance of data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVIDIA's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than it could be done before.
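The seeding rule being parallelized can be sketched serially (an illustrative reimplementation in Python, not the original C++ code); the inner distance loop is the part the GPU, OpenMP, and XMT versions distribute:

```python
# Illustrative serial sketch of the k-means++ seeding rule
# (Arthur & Vassilvitskii, 2007).
import random

def kmeanspp_seeds(points, k, seed=0):
    """Pick k initial centers; each new center is sampled with probability
    proportional to its squared distance from the nearest chosen center."""
    rng = random.Random(seed)
    centers = [rng.choice(points)]
    while len(centers) < k:
        # Squared distance of every point to its nearest current center
        # (this loop is the parallelization target in the work above).
        d2 = [min(sum((p - c) ** 2 for p, c in zip(pt, ctr))
                  for ctr in centers)
              for pt in points]
        # Sample the next center proportionally to d2.
        r, acc = rng.random() * sum(d2), 0.0
        for pt, w in zip(points, d2):
            acc += w
            if acc >= r:
                centers.append(pt)
                break
    return centers

pts = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
print(kmeanspp_seeds(pts, 2))
```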

  17. Expressing Parallelism with ROOT

    Energy Technology Data Exchange (ETDEWEB)

    Piparo, D. [CERN; Tejedor, E. [CERN; Guiraud, E. [CERN; Ganis, G. [CERN; Mato, P. [CERN; Moneta, L. [CERN; Valls Pla, X. [CERN; Canal, P. [Fermilab

    2017-11-22

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  18. Note on parallel universes

    OpenAIRE

    Adams, Niall; Hand, David J.

    2007-01-01

    The parallel universes idea is an attempt to integrate several aspects of learning which share some common aspects. This is an interesting idea: if successful, insights could cross-fertilise, leading to advances in each area. The ‘multi-view’ perspective seems to us to have particular potential.

  19. Parallel universes beguile science

    CERN Multimedia

    2007-01-01

    A staple of mind-bending science fiction, the possibility of multiple universes has long intrigued hard-nosed physicists, mathematicians and cosmologists too. We may not be able -- at least not yet -- to prove they exist, many serious scientists say, but there are plenty of reasons to think that parallel dimensions are more than figments of eggheaded imagination.

  20. Parallel Adams methods

    NARCIS (Netherlands)

    P.J. van der Houwen; E. Messina

    1998-01-01

    textabstractIn the literature, various types of parallel methods for integrating nonstiff initial-value problems for first-order ordinary differential equation have been proposed. The greater part of them are based on an implicit multistage method in which the implicit relations are solved by the

  1. Expressing Parallelism with ROOT

    Science.gov (United States)

    Piparo, D.; Tejedor, E.; Guiraud, E.; Ganis, G.; Mato, P.; Moneta, L.; Valls Pla, X.; Canal, P.

    2017-10-01

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  2. Practical parallel programming

    CERN Document Server

    Bauer, Barr E

    2014-01-01

    This is the book that will teach programmers to write faster, more efficient code for parallel processors. The reader is introduced to a vast array of procedures and paradigms on which actual coding may be based. Examples and real-life simulations using these devices are presented in C and FORTRAN.

  3. Parallel Splash Belief Propagation

    Science.gov (United States)

    2010-08-01


  4. Parallel hierarchical global illumination

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Quinn O. [Iowa State Univ., Ames, IA (United States)

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  5. pMatlab Parallel Matlab Library

    OpenAIRE

    Bliss, Nadya; Kepner, Jeremy

    2006-01-01

    MATLAB has emerged as one of the languages most commonly used by scientists and engineers for technical computing, with ~1,000,000 users worldwide. The compute intensive nature of technical computing means that many MATLAB users have codes that can significantly benefit from the increased performance offered by parallel computing. pMatlab (www.ll.mit.edu/pMatlab) provides this capability by implementing Parallel Global Array Semantics (PGAS) using standard operator overloading techniques. The...

  6. .NET 4.5 parallel extensions

    CERN Document Server

    Freeman, Bryan

    2013-01-01

    This book contains practical recipes on everything you will need to create task-based parallel programs using C#, .NET 4.5, and Visual Studio. The book is packed with illustrated code examples to create scalable programs.This book is intended to help experienced C# developers write applications that leverage the power of modern multicore processors. It provides the necessary knowledge for an experienced C# developer to work with .NET parallelism APIs. Previous experience of writing multithreaded applications is not necessary.
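The task-based model the book covers has a close analogue in Python's standard concurrent.futures, sketched here in place of the book's C# recipes:

```python
# The task-based pattern of the Task Parallel Library, sketched with
# Python's standard concurrent.futures instead of the book's C# APIs.
from concurrent.futures import ThreadPoolExecutor, as_completed

def work(n):
    """Stand-in task body; in .NET this would run inside a Task<int>."""
    return n * n

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(work, n) for n in range(8)]      # start tasks
    results = sorted(f.result() for f in as_completed(futures))
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```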

  7. Bioinspired evolutionary algorithm based for improving network coverage in wireless sensor networks.

    Science.gov (United States)

    Abbasi, Mohammadjavad; Bin Abd Latiff, Muhammad Shafie; Chizari, Hassan

    2014-01-01

    Wireless sensor networks (WSNs) consist of sensor nodes, each of which is able to monitor a physical area and send the collected information to the base station for further analysis. A key requirement of WSNs is detection and coverage of the target area, which is provided by random deployment. This paper reviews and addresses various area detection and coverage problems in sensor networks. It organizes several scenarios for applying sensor node movement to improve network coverage based on a bioinspired evolutionary algorithm, and explains the concerns and objectives of controlling sensor node coverage. We discuss an area coverage and target detection model based on an evolutionary algorithm.
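As a rough illustration (not from the paper), the coverage objective such an evolutionary algorithm would optimize can be estimated by Monte Carlo sampling; the field size, sensing radius, and sample count below are assumptions:

```python
# Monte Carlo estimate of the area-coverage fraction for sensors with a
# fixed sensing radius, the quantity a node-movement strategy tries to
# maximize. Illustrative only; all parameters are assumed values.
import random

def coverage_fraction(sensors, radius, side=1.0, samples=20000, seed=1):
    """Fraction of a side x side field lying within `radius` of a sensor."""
    rng = random.Random(seed)
    r2 = radius * radius
    covered = 0
    for _ in range(samples):
        x, y = rng.random() * side, rng.random() * side
        if any((x - sx) ** 2 + (y - sy) ** 2 <= r2 for sx, sy in sensors):
            covered += 1
    return covered / samples

rng = random.Random(42)
nodes = [(rng.random(), rng.random()) for _ in range(20)]
print(round(coverage_fraction(nodes, 0.15), 3))
```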

  8. Coverage of Nutrition Interventions Intended for Infants and Young Children Varies Greatly across Programs: Results from Coverage Surveys in 5 Countries123

    Science.gov (United States)

    Aaron, Grant J; Poonawala, Alia; van Liere, Marti J; Schofield, Dominic; Myatt, Mark

    2017-01-01

    Background: The efficacy of a number of interventions that include fortified complementary foods (FCFs) or other products to improve infant and young child feeding (IYCF) is well established. Programs that provide such products free or at a subsidized price are implemented in many countries around the world. Demonstrating the impact at scale of these programs has been challenging, and rigorous information on coverage and utilization is lacking. Objective: The objective of this article is to review key findings from 11 coverage surveys of IYCF programs distributing or selling FCFs or micronutrient powders in 5 countries. Methods: Programs were implemented in Ghana, Cote d’Ivoire, India, Bangladesh, and Vietnam. Surveys were implemented at different stages of program implementation between 2013 and 2015. The Fortification Assessment Coverage Toolkit (FACT) was developed to assess 3 levels of coverage (message: awareness of the product; contact: use of the product ≥1 time; and effective: regular use aligned with program-specific goals), as well as barriers and factors that facilitate coverage. Analyses included the coverage estimates, as well as an assessment of equity of coverage between the poor and nonpoor, and between those with poor and adequate child feeding practices. Results: Coverage varied greatly between countries and program models. Message coverage ranged from 29.0% to 99.7%, contact coverage from 22.6% to 94.4%, and effective coverage from 0.8% to 88.3%. Beyond creating awareness, programs that achieved high coverage were those with effective mechanisms in place to overcome barriers for both supply and demand. Conclusions: Variability in coverage was likely due to the program design, delivery model, quality of implementation, and product type. Measuring program coverage and understanding its determinants is essential for program improvement and to estimate the potential for impact of programs at scale. Use of the FACT can help overcome this evidence

  9. Medical coverage of cycling events.

    Science.gov (United States)

    Martinez, John M

    2006-05-01

    Medical coverage of recreational and competitive cycling events requires significant planning and cooperation among the race and medical directors, race officials, and local emergency medical services. The medical team should be proficient in treating minor and self-limiting injuries such as abrasions and minor trauma. The medical team should also have contingency plans for medical emergencies, such as cardiac events and major trauma, that ensure rapid stabilization and transport of the athlete to the appropriate medical facility. Stationary and mobile medical teams may be necessary for proper coverage of the event. Event-day communication systems between individual medical staff as well as race officials and local emergency medical services are important to the success of the event.

  10. Media coverage of women victimization

    OpenAIRE

    Konstantinović-Vilić, Slobodanka; Žunić, Natalija

    2012-01-01

    Mass media seem to be playing the central role in our everyday life, and the media impact is so overpowering nowadays that we live in a media-saturated culture. Not only are mass media an inseparable part of our contemporary life but they also significantly define and shape our daily existence. In order to explain the cultural impact that the media coverage of crime and victimization has in our society, it is necessary to understand the relationship between crime, victimization and mass media. ...

  11. Medicare Provider Data - Hospice Providers

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Hospice Utilization and Payment Public Use File provides information on services provided to Medicare beneficiaries by hospice providers. The Hospice PUF...

  12. Parallel computers and parallel algorithms for CFD: An introduction

    Science.gov (United States)

    Roose, Dirk; Vandriessche, Rafael

    1995-10-01

    This text presents a tutorial on those aspects of parallel computing that are important for the development of efficient parallel algorithms and software for computational fluid dynamics. We first review the main architectural features of parallel computers and we briefly describe some parallel systems on the market today. We introduce some important concepts concerning the development and the performance evaluation of parallel algorithms. We discuss how work load imbalance and communication costs on distributed memory parallel computers can be minimized. We present performance results for some CFD test cases. We focus on applications using structured and block structured grids, but the concepts and techniques are also valid for unstructured grids.
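The performance-evaluation concepts such a tutorial introduces can be sketched as follows (an illustrative summary, not code from the text):

```python
# Standard parallel-performance metrics: speedup S(p) = T1/Tp,
# efficiency E(p) = S(p)/p, and Amdahl's bound when a fraction f of the
# work is inherently serial.
def speedup(t1, tp):
    """Ratio of serial runtime to parallel runtime."""
    return t1 / tp

def efficiency(t1, tp, p):
    """Speedup per processor; 1.0 means perfect scaling."""
    return speedup(t1, tp) / p

def amdahl_speedup(f, p):
    """Upper bound on speedup with serial fraction f on p processors."""
    return 1.0 / (f + (1.0 - f) / p)

# A 5% serial fraction already caps 16 processors well below 16x:
print(round(amdahl_speedup(0.05, 16), 2))  # 9.14
```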

  13. Inter-laboratory evaluation of SNP-based forensic identification by massively parallel sequencing using the Ion PGM™.

    Science.gov (United States)

    Eduardoff, M; Santos, C; de la Puente, M; Gross, T E; Fondevila, M; Strobl, C; Sobrino, B; Ballard, D; Schneider, P M; Carracedo, Á; Lareu, M V; Parson, W; Phillips, C

    2015-07-01

    Next generation sequencing (NGS) offers the opportunity to analyse forensic DNA samples and obtain massively parallel coverage of targeted short sequences with the variants they carry. We evaluated the levels of sequence coverage, genotyping precision, sensitivity and mixed DNA patterns of a prototype version of the first commercial forensic NGS kit: the HID-Ion AmpliSeq™ Identity Panel with 169 markers designed for the Ion PGM™ system. Evaluations were made between three laboratories following closely matched Ion PGM™ protocols and a simple validation framework of shared DNA controls. The sequence coverage obtained was extensive for the bulk of SNPs targeted by the HID-Ion AmpliSeq™ Identity Panel. Sensitivity studies showed 90-95% of SNP genotypes could be obtained from 25 to 100 pg of input DNA. Genotyping concordance tests included Coriell cell-line control DNA analyses checked against whole-genome sequencing data from 1000 Genomes and Complete Genomics, indicating a very high concordance rate of 99.8%. Discordant genotypes detected in rs1979255, rs1004357, rs938283, rs2032597 and rs2399332 indicate these loci should be excluded from the panel. Therefore, the HID-Ion AmpliSeq™ Identity Panel and Ion PGM™ system provide a sensitive and accurate forensic SNP genotyping assay. However, low-level DNA produced much more varied sequence coverage, and in forensic use the Ion PGM™ system will require careful calibration of the total samples loaded per chip to preserve the genotyping reliability seen in routine forensic DNA. Furthermore, assessments of mixed DNA indicate the user's control of sequence analysis parameter settings is necessary to ensure mixtures are detected robustly. Given the sensitivity of Ion PGM™, this aspect of forensic genotyping requires further optimisation before massively parallel sequencing is applied to routine casework. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  14. PARALLEL MOVING MECHANICAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Florian Ion Tiberius Petrescu

    2014-09-01

    Full Text Available Moving mechanical systems with parallel structures are solid, fast, and accurate. Among parallel systems, Stewart platforms are notable as the oldest, being fast, solid, and precise. This work outlines a few main elements of Stewart platforms, beginning with the platform geometry and its kinematic elements, and then presenting a few items of dynamics. The primary dynamic element is the determination of the kinetic energy of the entire Stewart platform. The kinematics of the mobile platform are then recorded using a rotation-matrix method. If a structural motor element consists of two moving elements that translate relative to each other, it is more convenient, for the drive train and especially for the dynamics, to represent the motor element as a single moving component. We thus have seven moving parts (the six motor elements, or legs, plus the mobile platform) and one fixed part.

  15. Parallel grid population

    Science.gov (United States)

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
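The two-phase method described above can be sketched serially for a 1-D grid with interval objects (a toy stand-in, with a plain loop playing the role of the n processors):

```python
# Serial toy sketch of the two-phase scheme: phase 1 determines which grid
# portions each object's extent overlaps; phase 2 populates each portion.
def overlapped_portions(obj, n, width):
    """Grid portions in [0, n) touched by the interval object (lo, hi)."""
    lo, hi = obj
    cell = width / n
    return range(max(0, int(lo // cell)), min(n, int(hi // cell) + 1))

def populate(objects, n, width=1.0):
    # Phase 1: each processor would scan one distinct set of objects;
    # here a single loop plays every processor in turn.
    bins = {i: [] for i in range(n)}
    for obj in objects:
        for portion in overlapped_portions(obj, n, width):
            bins[portion].append(obj)
    # Phase 2: each processor now owns its portion's final population.
    return bins

grid = populate([(0.1, 0.3), (0.45, 0.55), (0.9, 0.95)], n=4)
print({k: v for k, v in grid.items() if v})
```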

  16. Seeing in parallel

    Energy Technology Data Exchange (ETDEWEB)

    Little, J.J.; Poggio, T.; Gamble, E.B. Jr.

    1988-01-01

    Computer algorithms have been developed for early vision processes that give separate cues to the distance from the viewer of three-dimensional surfaces, their shape, and their material properties. The MIT Vision Machine is a computer system that integrates several early vision modules to achieve high-performance recognition and navigation in unstructured environments. It is also an experimental environment for theoretical progress in early vision algorithms, their parallel implementation, and their integration. The Vision Machine consists of a movable, two-camera Eye-Head input device and an 8K Connection Machine. The authors have developed and implemented several parallel early vision algorithms that compute edge detection, stereopsis, motion, texture, and surface color in close to real time. The integration stage, based on coupled Markov random field models, leads to a cartoon-like map of the discontinuities in the scene, with partial labeling of the brightness edges in terms of their physical origin.

  17. Homology, convergence and parallelism.

    Science.gov (United States)

    Ghiselin, Michael T

    2016-01-05

    Homology is a relation of correspondence between parts of larger wholes. It is used when tracking objects of interest through space and time and in the context of explanatory historical narratives. Homologues can be traced through a genealogical nexus back to a common ancestral precursor. Homology being a transitive relation, homologues remain homologous however much they may come to differ. Analogy is a relationship of correspondence between parts of members of classes having no relationship of common ancestry. Although homology is often treated as an alternative to convergence, the latter is not a kind of correspondence: rather, it is one of a class of processes that also includes divergence and parallelism. These often give rise to misleading appearances (homoplasies). Parallelism can be particularly hard to detect, especially when not accompanied by divergences in some parts of the body. © 2015 The Author(s).

  18. Parallel Anisotropic Tetrahedral Adaptation

    Science.gov (United States)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

    An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000-1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error calculation without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.

  19. Assessing Measurement Error in Medicare Coverage

    Data.gov (United States)

    U.S. Department of Health & Human Services — Assessing Measurement Error in Medicare Coverage From the National Health Interview Survey Using linked administrative data, to validate Medicare coverage estimates...

  20. Xyce parallel electronic simulator.

    Energy Technology Data Exchange (ETDEWEB)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd S; Pawlowski, Roger P; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.

  1. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  2. Extracting Parallel Paragraphs from Common Crawl

    Directory of Open Access Journals (Sweden)

    Kúdela Jakub

    2017-04-01

    Full Text Available Most of the current methods for mining parallel texts from the web assume that the web pages of a web site share the same structure across languages. We believe that there still exists a non-negligible amount of parallel data spread across sources not satisfying this assumption. We propose an approach based on a combination of bivec (a bilingual extension of word2vec) and locality-sensitive hashing, which allows us to efficiently identify pairs of parallel segments located anywhere on the pages of a given web domain, regardless of their structure. We validate our method by realigning segments from a large parallel corpus. Another experiment with real-world data provided by the Common Crawl Foundation confirms that our solution scales to a set of web-crawled data hundreds of terabytes in size.
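The locality-sensitive hashing step described above can be sketched in a few lines: segments whose bilingual embeddings point in similar directions receive the same random-hyperplane bit signature with high probability, so candidate parallel pairs are found by bucket collision instead of all-pairs comparison. This is a hedged toy illustration, not the authors' pipeline: the tiny 4-dimensional embeddings and all function names are invented for the example, and real bivec vectors would be much higher-dimensional.

```python
import numpy as np

rng = np.random.default_rng(0)

def lsh_signature(vec, hyperplanes):
    """Sign pattern of the vector against random hyperplanes: vectors with
    high cosine similarity tend to share the same bit signature."""
    return tuple(bool(b) for b in (hyperplanes @ vec) > 0)

def candidate_pairs(src_vecs, tgt_vecs, n_bits=8, dim=4):
    """Bucket bilingual segment embeddings by LSH signature and return
    (source, target) index pairs that collide, i.e. likely parallel segments."""
    planes = rng.standard_normal((n_bits, dim))
    buckets = {}
    for i, v in enumerate(src_vecs):
        buckets.setdefault(lsh_signature(v, planes), []).append(i)
    pairs = []
    for j, v in enumerate(tgt_vecs):
        for i in buckets.get(lsh_signature(v, planes), []):
            pairs.append((i, j))
    return pairs

# Toy embeddings: the target segment has the same embedding as source 0,
# standing in for a translation pair mapped close together by bivec
src = [np.array([1.0, 0.2, -0.5, 0.3]), np.array([-1.0, 0.9, 0.1, -0.7])]
tgt = [np.array([1.0, 0.2, -0.5, 0.3])]
pairs = candidate_pairs(src, tgt)
```

Because only colliding buckets are compared, the cost stays near-linear in the number of segments, which is what makes the approach feasible at Common Crawl scale.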

  3. Massively Parallel Computing: A Sandia Perspective

    Energy Technology Data Exchange (ETDEWEB)

    Dosanjh, Sudip S.; Greenberg, David S.; Hendrickson, Bruce; Heroux, Michael A.; Plimpton, Steve J.; Tomkins, James L.; Womble, David E.

    1999-05-06

    The computing power available to scientists and engineers has increased dramatically in the past decade, due in part to progress in making massively parallel computing practical and available. The expectation for these machines has been great. The reality is that progress has been slower than expected. Nevertheless, massively parallel computing is beginning to realize its potential for enabling significant breakthroughs in science and engineering. This paper provides a perspective on the state of the field, colored by the authors' experiences using large scale parallel machines at Sandia National Laboratories. We address trends in hardware, system software and algorithms, and we also offer our view of the forces shaping the parallel computing industry.

  4. The Xyce Parallel Electronic Simulator - An Overview

    Energy Technology Data Exchange (ETDEWEB)

    HUTCHINSON,SCOTT A.; KEITER,ERIC R.; HOEKSTRA,ROBERT J.; WATTS,HERMAN A.; WATERS,ARLON J.; SCHELLS,REGINA L.; WIX,STEVEN D.

    2000-12-08

    The Xyce{trademark} Parallel Electronic Simulator has been written to support the simulation needs of the Sandia National Laboratories electrical designers. As such, the development has focused on providing the capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). In addition, it provides improved performance for numerical kernels using state-of-the-art algorithms, supports modeling circuit phenomena at a variety of abstraction levels, and uses object-oriented, modern coding practices that ensure the code will be maintainable and extensible far into the future. The code is a parallel code in the most general sense of the phrase--a message-passing parallel implementation--which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Furthermore, careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved even as the number of processors grows.

  5. Impact of pharmacists as immunizers on influenza vaccination coverage in Nova Scotia, Canada.

    Science.gov (United States)

    Isenor, Jennifer E; Alia, Tania A; Killen, Jessica L; Billard, Beverly A; Halperin, Beth A; Slayter, Kathryn L; McNeil, Shelly A; MacDougall, Donna; Bowles, Susan K

    2016-05-03

    Immunization coverage in Canada has continued to fall below national goals. The addition of pharmacists as immunizers may increase immunization coverage. This study aimed to compare estimated influenza vaccine coverage before and after pharmacists began administering publicly funded influenza immunizations in Nova Scotia, Canada. Vaccination coverage rates and recipient demographics for the influenza vaccination seasons 2010-2011 to 2012-2013 were compared with the 2013-2014 season, the first year pharmacists provided immunizations. In 2013-2014, the vaccination coverage rate for those ≥5 years of age increased by 6 percentage points, from 36% in 2012-2013 to 42%. Influenza vaccine coverage in Nova Scotia thus increased in 2013-2014 compared to previous years with a universal influenza program. Various factors may have contributed to the increased coverage, including the addition of pharmacists as immunizers and media coverage of influenza related fatalities. Future research will be necessary to fully determine the impact of pharmacists as immunizers.

  6. Technical support for universal health coverage pilots in Karnataka ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    The project team will provide technical assistance to these early adopter states to assist with UHC intervention activities. The project ... They provided technical assistance to help officials design and develop their universal health coverage action plans based on an extensive baseline assessment and top health priorities.

  7. Directions in parallel programming: HPF, shared virtual memory and object parallelism in pC++

    Science.gov (United States)

    Bodin, Francois; Priol, Thierry; Mehrotra, Piyush; Gannon, Dennis

    1994-01-01

    Fortran and C++ are the dominant programming languages used in scientific computation. Consequently, extensions to these languages are the most popular for programming massively parallel computers. We discuss two such approaches to parallel Fortran and one approach to C++. The High Performance Fortran Forum has designed HPF with the intent of supporting data parallelism on Fortran 90 applications. HPF works by asking the user to help the compiler distribute and align the data structures with the distributed memory modules in the system. Fortran-S takes a different approach in which the data distribution is managed by the operating system and the user provides annotations to indicate parallel control regions. In the case of C++, we look at pC++ which is based on a concurrent aggregate parallel model.

  8. Implications of employer coverage of contraception: Cost-effectiveness analysis of contraception coverage under an employer mandate.

    Science.gov (United States)

    Canestaro, W; Vodicka, E; Downing, D; Trussell, J

    2017-01-01

    Mandatory employer-based insurance coverage of contraception in the US has been a controversial component of the Affordable Care Act (ACA). Prior research has examined the cost-effectiveness of contraception in general; however, no studies have developed a formal decision model in the context of the new ACA provisions. As such, this study aims to estimate the relative cost-effectiveness of insurance coverage of contraception under employer-sponsored insurance coverage taking into consideration newer regulations allowing for religious exemptions. A decision model was developed from the employer perspective to simulate pregnancy costs and outcomes associated with insurance coverage. Method-specific estimates of contraception failure rates, outcomes and costs were derived from the literature. Uptake by marital status and age was drawn from a nationally representative database. Providing no contraception coverage resulted in 33 more unintended pregnancies per 1000 women (95% confidence range: 22.4; 44.0). This subsequently significantly increased the number of unintended births and terminations. Total costs were higher among uninsured women owing to higher costs of pregnancy outcomes. The effect of no insurance was greatest on unmarried women 20-29 years old. Denying female employees full coverage of contraceptives increases total costs from the employer perspective, as well as the total number of terminations. Insurance coverage was found to be significantly associated with women's choice of contraceptive method in a large nationally representative sample. Using a decision model to extrapolate to pregnancy outcomes, we found a large and statistically significant difference in unintended pregnancy and terminations. Denying women contraception coverage may have significant consequences for pregnancy outcomes. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Improving coverage measurement for reproductive, maternal, neonatal and child health: gaps and opportunities.

    Science.gov (United States)

    Munos, Melinda K; Stanton, Cynthia K; Bryce, Jennifer

    2017-06-01

    Regular monitoring of coverage for reproductive, maternal, neonatal, and child health (RMNCH) is central to assessing progress toward health goals. The objectives of this review were to describe the current state of coverage measurement for RMNCH, assess the extent to which current approaches to coverage measurement cover the spectrum of RMNCH interventions, and prioritize interventions for a novel approach to coverage measurement linking household surveys with provider assessments. We included 58 interventions along the RMNCH continuum of care for which there is evidence of effectiveness against cause-specific mortality and stillbirth. We reviewed household surveys and provider assessments used in low- and middle-income countries (LMICs) to determine whether these tools generate measures of intervention coverage, readiness, or quality. For facility-based interventions, we assessed the feasibility of linking provider assessments to household surveys to provide estimates of intervention coverage. Fewer than half (24 of 58) of included RMNCH interventions are measured in standard household surveys. The periconceptional, antenatal, and intrapartum periods were poorly represented. All but one of the interventions not measured in household surveys are facility-based, and 13 of these would be highly feasible to measure by linking provider assessments to household surveys. We found important gaps in coverage measurement for proven RMNCH interventions, particularly around the time of birth. Based on our findings, we propose three sets of actions to improve coverage measurement for RMNCH, focused on validation of coverage measures and development of new measurement approaches feasible for use at scale in LMICs.

  10. Parallel Architectures and Bioinspired Algorithms

    CERN Document Server

    Pérez, José; Lanchares, Juan

    2012-01-01

    This monograph presents examples of best practices when combining bioinspired algorithms with parallel architectures. The book includes recent work by leading researchers in the field and offers a map with the main paths already explored and new ways towards the future. Parallel Architectures and Bioinspired Algorithms will be of value to specialists in bioinspired algorithms and in parallel and distributed computing, as well as to computer science students trying to understand the present and the future of Parallel Architectures and Bioinspired Algorithms.

  11. Parallel Eclipse Project Checkout

    Science.gov (United States)

    Crockett, Thomas M.; Joswig, Joseph C.; Shams, Khawaja S.; Powell, Mark W.; Bachmann, Andrew G.

    2011-01-01

    Parallel Eclipse Project Checkout (PEPC) is a program written to leverage parallelism and to automate the checkout process of plug-ins created in Eclipse RCP (Rich Client Platform). Eclipse plug-ins can be aggregated in a feature project. This innovation digests a feature description (an XML file) and automatically checks out all of the plug-ins listed in the feature. This resolves the issue of manually checking out each plug-in required to work on the project. To minimize the amount of time necessary to check out the plug-ins, this program makes the plug-in checkouts parallel. After parsing the feature, a checkout request for each plug-in in the feature is inserted. These requests are handled by a thread pool with a configurable number of threads. By checking out the plug-ins in parallel, the checkout process is streamlined before getting started on the project. For instance, projects that took 30 minutes to check out now take less than 5 minutes. The effect is especially clear on a Mac, which has a network monitor displaying bandwidth use. When running the client from a developer's home, the checkout process now saturates the bandwidth in order to get all the plug-ins checked out as fast as possible. For comparison, a checkout process that ranged from 8-200 Kbps from a developer's home is now able to saturate a pipe of 1.3 Mbps, resulting in significantly faster checkouts. The Eclipse IDE (integrated development environment) tries to build a project as soon as it is downloaded. As part of another optimization, this innovation programmatically tells Eclipse to stop building while checkouts are happening, which dramatically reduces lock contention and enables plug-ins to continue downloading until all of them finish. Furthermore, the software re-enables automatic building, and forces Eclipse to do a clean build once it finishes checking out all of the plug-ins. This software is fully generic and does not contain any NASA-specific code. It can be applied to any
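The thread-pool pattern the abstract describes (one checkout request per plug-in, handled by a configurable pool of worker threads) can be sketched as follows. This is not PEPC's actual Java/Eclipse code; the `checkout` stub and the plug-in names are hypothetical stand-ins for real version-control operations.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def checkout(plugin):
    """Stand-in for a version-control checkout of one Eclipse plug-in
    (hypothetical; a real client would invoke svn/git here)."""
    time.sleep(0.01)            # simulate network latency
    return f"{plugin}: done"

def checkout_feature(plugins, max_workers=4):
    """Check out every plug-in listed in a feature in parallel, using a
    thread pool with a configurable number of workers."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # map preserves input order while the checkouts overlap in time
        return list(pool.map(checkout, plugins))

results = checkout_feature(["core", "ui", "net", "docs"])
```

Because checkouts are network-bound, overlapping them in threads is what lets the process saturate the available bandwidth rather than waiting on one download at a time.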

  12. Phase-constrained parallel MR image reconstruction.

    Science.gov (United States)

    Willig-Onwuachi, Jacob D; Yeh, Ernest N; Grant, Aaron K; Ohliger, Michael A; McKenzie, Charles A; Sodickson, Daniel K

    2005-10-01

    A generalized method for phase-constrained parallel MR image reconstruction is presented that combines and extends the concepts of partial-Fourier reconstruction and parallel imaging. It provides a framework for reconstructing images employing either or both techniques and for comparing image quality achieved by varying k-space sampling schemes. The method can be used as a parallel image reconstruction with a partial-Fourier reconstruction built in. It can also be used with trajectories not readily handled by straightforward combinations of partial-Fourier and SENSE-like parallel reconstructions, including variable-density, and non-Cartesian trajectories. The phase constraint specifies a better-conditioned inverse problem compared to unconstrained parallel MR reconstruction alone. This phase-constrained parallel MRI reconstruction offers a one-step alternative to the standard combination of homodyne and SENSE reconstructions with the added benefit of flexibility of sampling trajectory. The theory of the phase-constrained approach is outlined, and its calibration requirements and limitations are discussed. Simulations, phantom experiments, and in vivo experiments are presented.

  13. Armenian media coverage of science topics

    Science.gov (United States)

    Mkhitaryan, Marie

    2016-12-01

    The article discusses features and issues of Armenian media coverage of scientific topics and provides recommendations on how to promote scientific topics in the media. The media is more interested in social or public reaction than in scientific information itself. Medical science has a large share of the global media coverage. It is followed by articles about the environment, space, technology, physics and other areas. Armenian media mainly tend to focus on a scientific topic if at first sight it contains something revolutionary. Media primarily review whether a scientific study can affect the Armenian economy and only then decide to refer to it. Unfortunately, nowadays the perception of science is somewhat distorted in the media. We often see news headlines mentioning that a scientist has made "an invention". Nowadays it is hard to see the border between a scientist and an inventor. In fact, the technological term "invention" attracts the media by creating an illusory sensation and ensuring a large audience. The report also addresses the "Gitamard" ("A science-man") special project started in 2016 in Mediamax that tells about scientists and their motivations.

  14. Market Liquidity, Analysts Coverage, and Ownership Concentration: Evidence From ASE

    Directory of Open Access Journals (Sweden)

    Majd Iskandrani

    2016-06-01

    Full Text Available This research investigates the association between analyst coverage, ownership concentration and market liquidity in the Amman Stock Exchange (ASE). Using a unique dataset about information asymmetry, several proxies related to information asymmetry are used to clarify certain aspects of market liquidity. In a sample of 131 companies with comprehensive data collected from company guides and Datastream, information asymmetry measured by analyst coverage is found to be an important determinant of market liquidity. In particular, market liquidity is lower where firms have larger analyst coverage and where firms are denoted with a high degree of ownership concentration. The effect of analyst coverage is, however, found to be more marked in firms with high levels of ownership concentration. The study provides theoretical and empirical improvement of the market liquidity literature towards an understanding of the information asymmetry proxies in the ASE. Policymakers, after the 2007-2009 scandal, have formed governance codes that highlight the importance of disclosure requirements as a key responsibility of financial analysts. The link between analyst coverage and market liquidity established in this research provides evidence for insider investors on the roles and potential effectiveness of analysts in carrying out this responsibility.

  15. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack

    2014-02-04

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  16. Massively Parallel QCD

    Energy Technology Data Exchange (ETDEWEB)

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampapa, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-04-11

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results.

  17. Theory of Parallel Mechanisms

    CERN Document Server

    Huang, Zhen; Ding, Huafeng

    2013-01-01

    This book contains mechanism analysis and synthesis. In mechanism analysis, a mobility methodology is first systematically presented. This methodology, based on the author's screw theory, proposed in 1997, of which the generality and validity were only proved recently, is a very complex issue, researched by various scientists over the last 150 years. The principle of kinematic influence coefficient and its latest developments are described. This principle is suitable for kinematic analysis of various 6-DOF and lower-mobility parallel manipulators. The singularities are classified from a new point of view, and progress in position-singularity and orientation-singularity is stated. In addition, the concept of over-determinate input is proposed and a new method of force analysis based on screw theory is presented. In mechanism synthesis, the synthesis for spatial parallel mechanisms is discussed, and the synthesis method of difficult 4-DOF and 5-DOF symmetric mechanisms, which was first put forward by the a...

  18. Fast parallel event reconstruction

    CERN Multimedia

    CERN. Geneva

    2010-01-01

    On-line processing of large data volumes produced in modern HEP experiments requires using the maximum capabilities of modern and future many-core CPU and GPU architectures. One such powerful feature is a SIMD instruction set, which allows packing several data items into one register and operating on all of them at once, thus achieving more operations per clock cycle. Motivated by the idea of using the SIMD unit of modern processors, the KF-based track fit has been adapted for parallelism, including memory optimization, numerical analysis, vectorization with inline operator overloading, and optimization using SDKs. The speed of the algorithm has been increased by a factor of 120,000, to 0.1 ms/track, running in parallel on 16 SPEs of a Cell Blade computer. Running on a Nehalem CPU with 8 cores it shows a processing speed of 52 ns/track using the Intel Threading Building Blocks. The same KF algorithm running on an Nvidia GTX 280 in the CUDA framework provi...
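The structure-of-arrays vectorization underlying such SIMD track fits can be sketched with NumPy, where a single statement updates the state of every track at once instead of looping track by track. This is a toy scalar Kalman step with invented numbers, not the actual KF track-fit code; all variable names and noise values are illustrative.

```python
import numpy as np

# Structure-of-arrays layout: one array slot per track, so each vectorized
# statement operates on all tracks simultaneously, mirroring SIMD packing.
n_tracks = 1000
x  = np.zeros(n_tracks)             # track positions
vx = np.full(n_tracks, 0.5)         # track slopes
P  = np.full(n_tracks, 1.0)         # state variances
R  = 0.1                            # measurement variance (toy value)
dt = 2.0                            # propagation distance (toy value)

# Kalman prediction step for every track in one pass
x_pred = x + vx * dt                # all positions advance at once
P_pred = P + 0.01                   # simple additive process noise

# Measurement update, again fully vectorized (toy measurements)
z = np.full(n_tracks, 1.2)
K = P_pred / (P_pred + R)           # Kalman gain, one value per track
x_filt = x_pred + K * (z - x_pred)  # filtered positions for all tracks
```

Packing many tracks into contiguous arrays is exactly what lets a SIMD unit (or a GPU warp) process several of them per instruction, which is the source of the speedups quoted in the abstract.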

  19. Medicare Coverage: You Cannot Play the Game If You Do Not Know the Rules.

    Science.gov (United States)

    Schaum, Kathleen Dianne

    2013-12-01

    Wound care stakeholders should remember that Medicare reimbursement requires three parts: a relevant code, a published Medicare payment rate, and positive coverage or coverage based upon medical necessity. Qualified healthcare professionals, scientists, and manufacturers should establish a monthly routine in which they personally review revisions to pertinent National Coverage Determinations (NCDs) and Local Coverage Determinations (LCDs). These documents provide specific guidelines for positive coverage by the specific Medicare Administrative Contractor that processes Medicare claims in a given jurisdiction. When given an opportunity to comment on draft coverage determinations, wound care stakeholders should take the opportunity to educate the contractor medical director. After an LCD has become active, wound care stakeholders can and should request revisions, through the LCD Reconsideration Process, when new clinical evidence becomes available.

  20. Parallel finite-difference time-domain method

    CERN Document Server

    Yu, Wenhua

    2006-01-01

    The finite-difference time-domain (FDTD) method has revolutionized antenna design and electromagnetics engineering. This book raises the FDTD method to the next level by empowering it with the vast capabilities of parallel computing. It shows engineers how to exploit the natural parallel properties of FDTD to improve the existing FDTD method and to efficiently solve more complex and large problem sets. Professionals learn how to apply open source software to develop parallel software and hardware to run FDTD in parallel for their projects. The book features hands-on examples that illustrate the power of parallel FDTD and presents practical strategies for carrying out parallel FDTD. This detailed resource provides instructions on downloading, installing, and setting up the required open source software on either Windows or Linux systems, and includes a handy tutorial on parallel programming.
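As a minimal illustration of what an FDTD kernel looks like (and of why it parallelizes naturally), the sketch below runs a normalized 1D Yee leapfrog update; a domain-decomposed parallel version would split the `ez`/`hy` arrays into chunks and exchange one boundary value per neighbor each step. The grid size, step count, and Gaussian source are arbitrary illustrative choices, not anything from the book.

```python
import numpy as np

# Minimal 1D FDTD sketch in normalized units (Courant number 1, the
# "magic" time step). E and H live on a staggered Yee grid and are
# updated in a leapfrog fashion.
n = 200
ez = np.zeros(n)        # electric field samples
hy = np.zeros(n)        # magnetic field samples

for step in range(100):
    hy[:-1] += ez[1:] - ez[:-1]                    # update H from the curl of E
    ez[1:]  += hy[1:] - hy[:-1]                    # update E from the curl of H
    ez[100] += np.exp(-((step - 30) / 10.0) ** 2)  # soft Gaussian source at midpoint
```

Each update touches only nearest neighbors, so partitioning the grid across processes costs just one halo value per boundary per step, which is why FDTD scales so well in parallel.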

  1. Building high-coverage monolayers of covalently bound magnetic nanoparticles

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Mackenzie G.; Teplyakov, Andrew V., E-mail: andrewt@udel.edu

    2016-12-01

    Highlights: • A method for forming a layer of covalently bound nanoparticles is offered. • A nearly perfect monolayer of covalently bound magnetic nanoparticles was formed on gold. • Spectroscopic techniques confirmed covalent binding by the “click” reaction. • The influence of the functionalization scheme on surface coverage was investigated. Abstract: This work presents an approach for producing a high-coverage single monolayer of magnetic nanoparticles using “click chemistry” between complementarily functionalized nanoparticles and a flat substrate. This method highlights essential aspects of the functionalization scheme for substrate surface and nanoparticles to produce exceptionally high surface coverage without sacrificing selectivity or control over the layer produced. The deposition of one single layer of magnetic particles without agglomeration, over a large area, with a nearly 100% coverage is confirmed by electron microscopy. Spectroscopic techniques, supplemented by computational predictions, are used to interrogate the chemistry of the attachment and to confirm covalent binding, rather than attachment through self-assembly or weak van der Waals bonding. Density functional theory calculations for the surface intermediate of this copper-catalyzed process provide mechanistic insight into the effects of the functionalization scheme on surface coverage. Based on this analysis, it appears that steric limitations of the intermediate structure affect nanoparticle coverage on a flat solid substrate; however, this can be overcome by designing a functionalization scheme in such a way that the copper-based intermediate is formed on the spherical nanoparticles instead. This observation can be carried over to other approaches for creating highly controlled single- or multilayered nanostructures of a wide range of materials to result in high coverage and possibly, conformal filling.

  2. Searching the veterinary literature: a comparison of the coverage of veterinary journals by nine bibliographic databases

    OpenAIRE

    Grindlay, Douglas J.C.; Brennan, Marnie L.; Dean, Rachel S.

    2016-01-01

    A thorough search of the literature to find the best evidence is central to the practice of evidence-based veterinary medicine. This requires knowing which databases to search to maximize journal coverage. The aim of the present study was to compare the coverage of active veterinary journals by nine bibliographic databases to inform future systematic reviews and other evidence-based searches. Coverage was assessed using lists of included journals produced by the database providers. For 121 ac...

  3. Developmental parallelism in primates.

    Science.gov (United States)

    Sikorska-Piwowska, Z M; Dawidowicz, A L

    2017-01-01

    The authors examined a large random sample of skulls from two species of macaques: rhesus monkeys and cynomolgus monkeys. The skulls were measured, divided into age and sex groups and thoroughly analysed using statistical methods. The analysis shows that skulls of young rhesuses are considerably more domed, i.e. have better-developed neurocrania, than their adult counterparts. Male and female skulls, on the other hand, were found to be very similar, which means that sexual dimorphism of the rhesus macaque was suppressed. Both of these patterns are known from the human evolutionary pattern. No such parallelism to the development of Homo sapiens was found in the cynomolgus monkeys. The authors conclude that mosaic hominisation trends may have featured in the evolution of all primates. This would mean that apes were not a necessary step on the evolutionary way leading to the development of Homo sapiens, who may have started to evolve at an earlier stage of monkeys.

  4. Parallel Polarization State Generation

    CERN Document Server

    She, Alan

    2016-01-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by using a digital micromirror device to modulate spatially separated polarization components of a laser, which are subsequently beam-combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristi...
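The serial-versus-parallel contrast in this abstract (a product of matrices versus a sum of matrices) can be made concrete with standard Jones calculus. The element choices and the equal beam-splitting weights below are illustrative assumptions, not the paper's experimental configuration.

```python
import numpy as np

# Standard Jones matrices for two basic optical elements
H = np.array([[1, 0], [0, 0]], dtype=complex)    # horizontal linear polarizer
Q = np.array([[1, 0], [0, 1j]], dtype=complex)   # quarter-wave plate, fast axis horizontal

e_in = np.array([1, 1], dtype=complex) / np.sqrt(2)  # 45-degree linear input state

# Serial architecture: elements traversed one after another -> matrix product
e_serial = Q @ (H @ e_in)

# Parallel architecture (as in the abstract): split the beam, pass the
# separated components through different elements, recombine -> weighted sum
w1, w2 = 0.5, 0.5                                # illustrative splitting weights
e_parallel = (w1 * H + w2 * Q) @ e_in
```

In the serial case the output is fixed by the last polarizing element, while in the parallel case varying the weights `w1`, `w2` (e.g. with intensity modulators) steers the output state, which is the extra degree of freedom the paper exploits.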

  5. Parallel computing and quantum chromodynamics

    CERN Document Server

    Bowler, K C

    1999-01-01

    The study of Quantum Chromodynamics (QCD) remains one of the most challenging topics in elementary particle physics. The lattice formulation of QCD, in which space-time is treated as a four-dimensional hypercubic grid of points, provides the means for a numerical solution from first principles but makes extreme demands upon computational performance. High Performance Computing (HPC) offers us the tantalising prospect of a verification of QCD through the precise reproduction of the known masses of the strongly interacting particles. It is also leading to the development of a phenomenological tool capable of disentangling strong interaction effects from weak interaction effects in the decays of one kind of quark into another, crucial for determining parameters of the standard model of particle physics. The 1980s saw the first attempts to apply parallel architecture computers to lattice QCD. The SIMD and MIMD machines used in these pioneering efforts were the ICL DAP and the Cosmic Cube, respectively. These wer...

  6. The parallel adult education system

    DEFF Research Database (Denmark)

    Wahlgren, Bjarne

    2015-01-01

    …Or they can be organized based on the student’s (social and vocational) competences, on which research-based knowledge is built. University courses are traditionally organized according to the first principle. But in a lifelong learning perspective the last principle will provide the greatest opportunity for competence development. The Danish university educational system includes two parallel programs: a traditional academic track (candidatus) and an alternative practice-based track (master). The practice-based program was established in 2001 and organized as part time. The total program takes half the time… preconditions and some challenges in implementing the program. It describes the difficulties associated with measuring prior learning and the pedagogical problems related to combining vocational experiences with formal school-based knowledge. But experiences show that the difficulties can be overcome.

  7. Change of mobile network coverage in France from 29 August

    CERN Multimedia

    IT Department

    2016-01-01

    The change of mobile network coverage on the French part of the CERN site will take effect on 29 August and not on 11 July as previously announced.    From 29 August, the Swisscom transmitters in France will be deactivated and Orange France will thenceforth provide coverage on the French part of the CERN site.  This switch will result in changes to billing. You should also ensure that you can still be contacted by your colleagues when you are on the French part of the CERN site. Please consult the information and instructions in this official communication.

  8. Synthesis of volumetric ring antenna array for terrestrial coverage pattern.

    Science.gov (United States)

    Reyna, Alberto; Panduro, Marco A; Del Rio Bocio, Carlos

    2014-01-01

    This paper presents a synthesis of a volumetric ring antenna array for a terrestrial coverage pattern. The synthesis considers the spacing among the rings on the X-Y planes, the positions of the rings on the X-Z plane, and uniform and concentric excitations. The optimization is carried out by implementing particle swarm optimization. Comparison with previous designs shows that this geometry provides accurate coverage for satellite applications with a maximum reduction of the antenna hardware as well as a reduction of the side lobe level.

  9. Synthesis of Volumetric Ring Antenna Array for Terrestrial Coverage Pattern

    Directory of Open Access Journals (Sweden)

    Alberto Reyna

    2014-01-01

    Full Text Available This paper presents a synthesis of a volumetric ring antenna array for a terrestrial coverage pattern. The synthesis considers the spacing among the rings on the X-Y planes, the positions of the rings on the X-Z plane, and uniform and concentric excitations. The optimization is carried out by implementing particle swarm optimization. Comparison with previous designs shows that this geometry provides accurate coverage for satellite applications with a maximum reduction of the antenna hardware as well as a reduction of the side lobe level.
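The optimizer named in this record can be sketched in a few lines. Below is a minimal, generic particle swarm optimization (PSO) loop on a toy objective (the sphere function); in the paper the objective would instead score the coverage pattern and side lobe level of a candidate ring geometry. All parameters (swarm size, inertia, coefficients) are illustrative, not the authors' settings.

```python
# Minimal PSO sketch: each particle tracks a position, a velocity, and its
# personal best; the swarm shares a global best. Toy objective: sphere function.
import random

def pso(objective, dim=2, n_particles=20, iters=300, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best
    w, c1, c2 = 0.7, 1.5, 1.5                        # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:                   # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:                  # and possibly global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso(lambda x: sum(v * v for v in x))
```

Swapping the lambda for a function that simulates the array factor of a candidate ring layout turns this skeleton into the kind of synthesis loop the abstract describes.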

  10. Quad-Tree Visual-Calculus Analysis of Satellite Coverage

    Science.gov (United States)

    Lo, Martin W.; Hockney, George; Kwan, Bruce

    2003-01-01

    An improved method of analysis of coverage of areas of the Earth by a constellation of radio-communication or scientific-observation satellites has been developed. This method is intended to supplant an older method in which the global-coverage-analysis problem is solved from a ground-to-satellite perspective. The present method provides for rapid and efficient analysis. This method is derived from a satellite-to-ground perspective and involves a unique combination of two techniques for multiresolution representation of map features on the surface of a sphere.
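The multiresolution idea behind such a quad-tree analysis can be sketched with a simple example: estimate what fraction of a square ground region a circular satellite footprint covers, subdividing only the cells that straddle the footprint boundary. The geometry, flat-map simplification, and recursion depth below are illustrative, not taken from the cited method (which works on the sphere).

```python
# Quad-tree coverage sketch: recursively subdivide a square cell; prune cells
# fully inside or fully outside the circular footprint, recurse on the rest.
import math

def covered_fraction(cx, cy, r, x0, y0, size, depth):
    """Fraction of the cell [x0, x0+size] x [y0, y0+size] inside the circle."""
    corners = [(x0, y0), (x0 + size, y0), (x0, y0 + size), (x0 + size, y0 + size)]
    if all(math.hypot(x - cx, y - cy) <= r for x, y in corners):
        return 1.0                       # circle is convex: cell fully covered
    # Distance from the circle centre to the cell; if > r, cell is untouched.
    dx = max(x0 - cx, 0.0, cx - (x0 + size))
    dy = max(y0 - cy, 0.0, cy - (y0 + size))
    if math.hypot(dx, dy) > r:
        return 0.0
    if depth == 0:                       # leaf: classify by the cell centre
        return 1.0 if math.hypot(x0 + size / 2 - cx,
                                 y0 + size / 2 - cy) <= r else 0.0
    h = size / 2                         # recurse into the four child quadrants
    return 0.25 * sum(
        covered_fraction(cx, cy, r, x0 + i * h, y0 + j * h, h, depth - 1)
        for i in (0, 1) for j in (0, 1))

# Footprint of radius 0.5 centred on the unit square: true coverage is pi/4.
est = covered_fraction(0.5, 0.5, 0.5, 0.0, 0.0, 1.0, depth=8)
```

The payoff is the same as in the abstract: work concentrates on the footprint boundary, so resolution can be pushed high without visiting every cell.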

  11. Comparison of two next-generation sequencing kits for diagnosis of epileptic disorders with a user-friendly tool for displaying gene coverage, DeCovA

    Directory of Open Access Journals (Sweden)

    Sarra Dimassi

    2015-12-01

    Full Text Available In recent years, molecular genetics has been playing an increasing role in the diagnostic process of monogenic epilepsies. Knowing the genetic basis of one patient's epilepsy provides accurate genetic counseling and may guide therapeutic options. Genetic diagnosis of epilepsy syndromes has long been based on Sanger sequencing and the search for large rearrangements using MLPA or DNA arrays (array-CGH or SNP-array). Recently, next-generation sequencing (NGS) was demonstrated to be a powerful approach to overcome the wide clinical and genetic heterogeneity of epileptic disorders. Coverage is critical for assessing the quality and accuracy of results from NGS. However, it is often a difficult parameter to display in practice. The aim of the study was to compare two library-building methods (Haloplex, Agilent, and SeqCap EZ, Roche) for a targeted panel of 41 genes causing monogenic epileptic disorders. We included 24 patients, 20 of whom had known disease-causing mutations. For each patient both libraries were built in parallel and sequenced on an Ion Torrent Personal Genome Machine (PGM). To compare coverage and depth, we developed a simple homemade tool, named DeCovA (Depth and Coverage Analysis). DeCovA displays the sequencing depth of each base and the coverage of target genes for each genomic position. The fraction of each gene covered at different thresholds could be easily estimated. Neither of the two variant-calling tools used, namely NextGene and Ion Reporter, was able to identify all the known mutations/CNVs displayed by the 20 patients. Variant detection rate was globally similar for the two techniques, and DeCovA showed that failure to detect a mutation was mainly related to insufficient coverage.
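The bookkeeping such a tool performs — per-base depth over a target interval, then the fraction of the interval covered at given depth thresholds — is simple to sketch. This is a generic illustration, not DeCovA's implementation; the read coordinates and thresholds are made up.

```python
# Per-base depth and threshold coverage for a target interval.
# Intervals are half-open (start, end) in genomic coordinates.

def per_base_depth(target_start, target_end, reads):
    """Depth at each base of the target, from aligned read intervals."""
    depth = [0] * (target_end - target_start)
    for start, end in reads:
        for pos in range(max(start, target_start), min(end, target_end)):
            depth[pos - target_start] += 1
    return depth

def coverage_at(depth, threshold):
    """Fraction of target bases sequenced at depth >= threshold."""
    return sum(1 for d in depth if d >= threshold) / len(depth)

# Toy gene spanning positions 100-200 with three overlapping reads.
reads = [(90, 150), (120, 200), (130, 210)]
depth = per_base_depth(100, 200, reads)

cov_1x = coverage_at(depth, 1)    # fraction covered at least once
cov_2x = coverage_at(depth, 2)    # fraction covered at depth >= 2
```

Reporting `coverage_at` for several thresholds per gene is exactly the "fraction of each gene covered at different thresholds" the abstract mentions.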

  12. Measles vaccination coverage in high-incidence areas of the ...

    African Journals Online (AJOL)

    A community survey was conducted in the Western Cape to assess measles vaccination coverage attained by routine and campaign services, in children ... were consecutively visited and requested to participate in the survey. Within each ... analysis, in order to provide a pre- and post-campaign profile. Children without an ...

  13. The Coverage of the Holocaust in High School History Textbooks

    Science.gov (United States)

    Lindquist, David

    2009-01-01

    The Holocaust is now a regular part of high school history curricula throughout the United States and, as a result, coverage of the Holocaust has become a standard feature of high school textbooks. As with any major event, it is important for textbooks to provide a rigorously accurate and valid historical account. In dealing with the Holocaust,…

  14. Scalable Coverage Maintenance for Dense Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Jun Lu

    2007-06-01

    Full Text Available Owing to numerous potential applications, wireless sensor networks have been attracting significant research effort recently. The critical challenge that wireless sensor networks often face is to sustain long-term operation on limited battery energy. Coverage maintenance schemes can effectively prolong network lifetime by selecting and employing a subset of sensors in the network to provide sufficient sensing coverage over a target region. We envision future wireless sensor networks composed of a vast number of miniaturized sensors in exceedingly high density. Therefore, the key issue of coverage maintenance for future sensor networks is the scalability to sensor deployment density. In this paper, we propose a novel coverage maintenance scheme, scalable coverage maintenance (SCOM), which is scalable to sensor deployment density in terms of communication overhead (i.e., number of transmitted and received beacons) and computational complexity (i.e., time and space complexity). In addition, SCOM achieves high energy efficiency and load balancing over different sensors. We have validated our claims through both analysis and simulations.
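The core idea — activate only a subset of a dense deployment that still covers every point of interest — can be sketched with a greedy set-cover heuristic. This is a generic illustration, not the SCOM protocol itself (SCOM additionally bounds per-node communication and computation); positions, sensing range, and the target grid are made up.

```python
# Greedy selection of an active sensor subset that covers all target points.
import math, random

def covers(sensor, point, sensing_range):
    return math.hypot(sensor[0] - point[0], sensor[1] - point[1]) <= sensing_range

def greedy_active_set(sensors, targets, sensing_range):
    uncovered = set(range(len(targets)))
    active = []
    while uncovered:
        # Pick the sensor covering the most still-uncovered targets.
        best = max(range(len(sensors)),
                   key=lambda s: sum(1 for t in uncovered
                                     if covers(sensors[s], targets[t], sensing_range)))
        gained = {t for t in uncovered
                  if covers(sensors[best], targets[t], sensing_range)}
        if not gained:
            raise ValueError("some targets cannot be covered by any sensor")
        active.append(best)
        uncovered -= gained
    return active

# Dense deployment: a guaranteed-coverage grid plus 100 random extras.
rng = random.Random(0)
sensors = ([(x / 5, y / 5) for x in range(6) for y in range(6)]
           + [(rng.random(), rng.random()) for _ in range(100)])
targets = [(x / 9, y / 9) for x in range(10) for y in range(10)]
active = greedy_active_set(sensors, targets, sensing_range=0.25)
```

The denser the deployment, the larger the fraction of sensors this lets sleep, which is the lifetime gain the abstract describes.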

  15. Media coverage and public reaction to a celebrity cancer diagnosis.

    Science.gov (United States)

    Metcalfe, D; Price, C; Powell, J

    2011-03-01

    Celebrity diagnoses can have important effects on public behaviour. UK television celebrity Jade Goody died from cervical cancer in 2009. We investigated the impact of her illness on media coverage of cervical cancer prevention, health information seeking behaviour and cervical screening coverage. National UK newspaper articles containing the words 'Jade Goody' and 'cancer' were examined for public health messages. Google Insights for Search was used to quantify Internet searches as a measure of public health information seeking. Cervical screening coverage data were examined for temporal associations with this story. Of 1203 articles, 116 (9.6%) included a clear public health message. The majority highlighted screening (8.2%). Fewer articles provided advice about vaccination (3.0%), number of sexual partners (1.4%), smoking (0.6%) and condom use (0.4%). Key events were associated with increased Internet searches for 'cervical cancer' and 'smear test', although only weakly with searches for 'HPV'. Cervical screening coverage increased during this period. Increased public interest in disease prevention can follow a celebrity diagnosis. Although media coverage sometimes included public health information, articles typically focused on secondary instead of primary prevention. There is further potential to maximize the public health benefit of future celebrity diagnoses.

  16. Insurance coverage for male infertility care in the United States

    Directory of Open Access Journals (Sweden)

    James M Dupree

    2016-01-01

    Full Text Available Infertility is a common condition experienced by many men and women, and treatments are expensive. The World Health Organization and American Society of Reproductive Medicine define infertility as a disease, yet private companies infrequently offer insurance coverage for infertility treatments. This is despite the clear role that healthcare insurance plays in ensuring access to care and minimizing the financial burden of expensive services. In this review, we assess the current knowledge of how male infertility care is covered by insurance in the United States. We begin with an appraisal of the costs of male infertility care, then examine the state insurance laws relevant to male infertility, and close with a discussion of why insurance coverage for male infertility is important to both men and women. Importantly, we found that despite infertility being classified as a disease and males contributing to almost half of all infertility cases, coverage for male infertility is often excluded from health insurance laws. Excluding coverage for male infertility places an undue burden on their female partners. In addition, excluding care for male infertility risks missing opportunities to diagnose important health conditions and identify reversible or irreversible causes of male infertility. Policymakers should consider providing equal coverage for male and female infertility care in future health insurance laws.

  17. Insurance coverage for male infertility care in the United States

    Science.gov (United States)

    Dupree, James M

    2016-01-01

    Infertility is a common condition experienced by many men and women, and treatments are expensive. The World Health Organization and American Society of Reproductive Medicine define infertility as a disease, yet private companies infrequently offer insurance coverage for infertility treatments. This is despite the clear role that healthcare insurance plays in ensuring access to care and minimizing the financial burden of expensive services. In this review, we assess the current knowledge of how male infertility care is covered by insurance in the United States. We begin with an appraisal of the costs of male infertility care, then examine the state insurance laws relevant to male infertility, and close with a discussion of why insurance coverage for male infertility is important to both men and women. Importantly, we found that despite infertility being classified as a disease and males contributing to almost half of all infertility cases, coverage for male infertility is often excluded from health insurance laws. Excluding coverage for male infertility places an undue burden on their female partners. In addition, excluding care for male infertility risks missing opportunities to diagnose important health conditions and identify reversible or irreversible causes of male infertility. Policymakers should consider providing equal coverage for male and female infertility care in future health insurance laws. PMID:27030084

  18. About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN

    2009-01-01

    Full Text Available In recent years, efforts have been made to delineate a stable and unified framework in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far are not commensurate with the effort expended. This paper aims to make a small contribution to these efforts. We propose an overview of parallel programming, parallel execution and collaborative systems.

  19. Topic 7: parallel computer architecture and instruction level parallelism

    OpenAIRE

    Ayguadé Parra, Eduard; Karl, Wolfgang; De Bosschere, Koen; Collard, Jean-François

    2006-01-01

    We welcome you to the two Parallel Computer Architecture and Instruction Level Parallelism sessions of Euro-Par 2006 conference being held in Dresden, Germany. The call for papers for this Euro-Par topic area sought papers on all hardware/software aspects of parallel computer architecture, processor architecture and microarchitecture. This year 12 papers were submitted to this topic area. Among the submissions, 5 papers were accepted as full papers for the conference (41% acceptance rate). ...

  20. Message passing with parallel queue traversal

    Science.gov (United States)

    Underwood, Keith D [Albuquerque, NM]; Brightwell, Ronald B [Albuquerque, NM]; Hemmert, K Scott [Albuquerque, NM]

    2012-05-01

    In message passing implementations, associative matching structures are used to permit list entries to be searched in parallel fashion, thereby avoiding the delay of linear list traversal. List management capabilities are provided to support list entry turnover semantics and priority ordering semantics.

  1. Asymmetric k-Center with Minimum Coverage

    DEFF Research Database (Denmark)

    Gørtz, Inge Li

    2008-01-01

    In this paper we give approximation algorithms and inapproximability results for various asymmetric k-center with minimum coverage problems. In the k-center with minimum coverage problem, each center is required to serve a minimum number of clients. These problems have been studied by Lim et al. [A. Lim, B. Rodrigues, F. Wang, Z. Xu, k-center problems with minimum coverage, Theoret. Comput. Sci. 332 (1–3) (2005) 1–17] in the symmetric setting.

  2. Surveillance of Vaccination Coverage among Adult Populations - United States, 2015.

    Science.gov (United States)

    Williams, Walter W; Lu, Peng-Jun; O'Halloran, Alissa; Kim, David K; Grohskopf, Lisa A; Pilishvili, Tamara; Skoff, Tami H; Nelson, Noele P; Harpaz, Rafael; Markowitz, Lauri E; Rodriguez-Lainz, Alfonso; Fiebelkorn, Amy Parker

    2017-05-05

    Overall, the prevalence of illness attributable to vaccine-preventable diseases is greater among adults than among children. Adults are recommended to receive vaccinations based on their age, underlying medical conditions, lifestyle, prior vaccinations, and other considerations. Updated vaccination recommendations from CDC are published annually in the U.S. Adult Immunization Schedule. Despite longstanding recommendations for use of many vaccines, vaccination coverage among U.S. adults is low. August 2014-June 2015 (for influenza vaccination) and January-December 2015 (for pneumococcal, tetanus and diphtheria [Td] and tetanus and diphtheria with acellular pertussis [Tdap], hepatitis A, hepatitis B, herpes zoster, and human papillomavirus [HPV] vaccination). The National Health Interview Survey (NHIS) is a continuous, cross-sectional national household survey of the noninstitutionalized U.S. civilian population. In-person interviews are conducted throughout the year in a probability sample of households, and NHIS data are compiled and released annually. The survey objective is to monitor the health of the U.S. population and provide estimates of health indicators, health care use and access, and health-related behaviors. Compared with data from the 2014 NHIS, increases in vaccination coverage occurred for influenza vaccine among adults aged ≥19 years (a 1.6 percentage point increase compared with the 2013-14 season to 44.8%), pneumococcal vaccine among adults aged 19-64 years at increased risk for pneumococcal disease (a 2.8 percentage point increase to 23.0%), Tdap vaccine among adults aged ≥19 years and adults aged 19-64 years (a 3.1 percentage point and 3.3 percentage point increase to 23.1% and to 24.7%, respectively), herpes zoster vaccine among adults aged ≥60 years and adults aged ≥65 years (a 2.7 percentage point and 3.2 percentage point increase to 30.6% and to 34.2%, respectively), and hepatitis B vaccine among health care personnel (HCP) aged

  3. OpenCL parallel programming development cookbook

    CERN Document Server

    Tay, Raymond

    2013-01-01

    OpenCL Parallel Programming Development Cookbook will provide a set of advanced recipes that can be utilized to optimize existing code. This book is therefore ideal for experienced developers with a working knowledge of C/C++ and OpenCL. This book is intended for software developers who have often wondered what to do with that newly bought CPU or GPU other than use it for playing computer games; it is also for developers who have a working knowledge of C/C++ and who want to learn how to write parallel programs in OpenCL so that life isn't too boring.

  4. Parallel processing for artificial intelligence 2

    CERN Document Server

    Kumar, V; Suttner, CB

    1994-01-01

    With the increasing availability of parallel machines and the raising of interest in large scale and real world applications, research on parallel processing for Artificial Intelligence (AI) is gaining greater importance in the computer science environment. Many applications have been implemented and delivered but the field is still considered to be in its infancy. This book assembles diverse aspects of research in the area, providing an overview of the current state of technology. It also aims to promote further growth across the discipline. Contributions have been grouped according to their

  5. On the Deployment of a Connected Sensor Network for Confident Information Coverage

    Directory of Open Access Journals (Sweden)

    Huping Xu

    2015-05-01

    Full Text Available Coverage and connectivity are two important performance metrics in wireless sensor networks. In this paper, we study the sensor placement problem to achieve both coverage and connectivity. Instead of using the simplistic disk coverage model, we use our recently proposed confident information coverage model as the sensor coverage model. The grid approach is applied to discretize the sensing field, and our objective is to place the minimum number of sensors to form a connected network and to provide confident information coverage for all of the grid points. We first formulate the sensor placement problem as a constrained optimization problem. Then, two heuristic algorithms, namely the connected cover formation (CCF) algorithm and the cover formation and relay placement with redundancy removal (CFRP-RR) algorithm, are proposed to find approximate solutions for the sensor placement problem. The simulation results validate their effectiveness, and the CCF algorithm performs slightly better than the CFRP-RR algorithm.

  6. DNA barcoding in the media: does coverage of cool science reflect its social context?

    Science.gov (United States)

    Geary, Janis; Camicioli, Emma; Bubela, Tania

    2016-09-01

    Paul Hebert and colleagues first described DNA barcoding in 2003, which led to international efforts to promote and coordinate its use. Since its inception, DNA barcoding has generated considerable media coverage. We analysed whether this coverage reflected both the scientific and social mandates of international barcoding organizations. We searched newspaper databases to identify 900 English-language articles from 2003 to 2013. Coverage of the science of DNA barcoding was highly positive but lacked context for key topics. Coverage omissions pose challenges for public understanding of the science and applications of DNA barcoding; these included coverage of governance structures and issues related to the sharing of genetic resources across national borders. Our analysis provided insight into how barcoding communication efforts have translated into media coverage; more targeted communication efforts may focus media attention on previously omitted, but important topics. Our analysis is timely as the DNA barcoding community works to establish the International Society for the Barcode of Life.

  7. Enforcing consistency during the adaptation of a parallel component

    OpenAIRE

    Buisson, Jérémy; André, Françoise; Pazat, Jean-Louis

    2005-01-01

    International audience; As Grid architectures provide execution environments that are distributed, parallel and dynamic, applications require to be not only parallel and distributed, but also able to adapt themselves to their execution environment. This article presents a model for designing self-adaptable parallel components that can be assembled to build applications for Grid. This model includes the definition of a consistency criterion for the dynamic adaptation of SPMD components. We pro...

  8. Abstract Level Parallelization of Finite Difference Methods

    Directory of Open Access Journals (Sweden)

    Edwin Vollebregt

    1997-01-01

    Full Text Available A formalism is proposed for describing finite difference calculations in an abstract way. The formalism consists of index sets and stencils, for characterizing the structure of sets of data items and interactions between data items (“neighbouring relations”). The formalism provides a means for lifting programming to a more abstract level. This simplifies the tasks of performance analysis and verification of correctness, and opens the way for automatic code generation. The notation is particularly useful in parallelization, for the systematic construction of parallel programs in a process/channel programming paradigm (e.g., message passing). This is important because message passing, unfortunately, still is the only approach that leads to acceptable performance for many unstructured or irregular problems on parallel computers that have non-uniform memory access times. It will be shown that the use of index sets and stencils greatly simplifies the determination of which data must be exchanged between different computing processes.
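The last claim can be made concrete in a few lines: once the data items are named by an index set and the interactions by a stencil, the data each process must receive from others falls out mechanically as "stencil image of my indices, minus the indices I own". The partitioning and 3-point stencil below are illustrative, not the paper's notation.

```python
# Index-set/stencil sketch: derive halo (exchange) sets from a stencil,
# then apply the stencil (second difference) over the global index set.

N = 16
index_set = list(range(N))
stencil = (-1, 0, 1)                 # 3-point finite-difference stencil

# Split the global index set between two "processes".
part_a = set(range(0, N // 2))
part_b = set(range(N // 2, N))

def halo(own, stencil, universe):
    """Indices a process needs but does not own: what others must send it."""
    needed = {i + s for i in own for s in stencil}
    return (needed & set(universe)) - own

halo_a = halo(part_a, stencil, index_set)   # first point owned by part_b
halo_b = halo(part_b, stencil, index_set)   # last point owned by part_a

# Apply the stencil to f(i) = i^2; the second difference of i^2 is 2.
f = [i * i for i in index_set]
d2 = [f[i - 1] - 2 * f[i] + f[i + 1] for i in range(1, N - 1)]
```

In a real message-passing code the `halo` sets would drive the send/receive pattern; the point of the formalism is that this derivation is systematic rather than hand-written per program.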

  9. Parallel Harness for Informatic Stream Hashing

    Energy Technology Data Exchange (ETDEWEB)

    2012-09-11

    PHISH is a lightweight framework which a set of independent processes can use to exchange data as they run on the same desktop machine, on processors of a parallel machine, or on different machines across a network. This enables them to work in a coordinated parallel fashion to perform computations on either streaming, archived, or self-generated data. The PHISH distribution includes a simple, portable library for performing data exchanges in useful patterns either via MPI message-passing or ZMQ sockets. PHISH input scripts are used to describe a data-processing algorithm, and additional tools provided in the PHISH distribution convert the script into a form that can be launched as a parallel job.

  10. Distributed Parallel Architecture for "Big Data"

    Directory of Open Access Journals (Sweden)

    Catalin BOJA

    2012-01-01

    Full Text Available This paper is an extension to the "Distributed Parallel Architecture for Storing and Processing Large Datasets" paper presented at the WSEAS SEPADS’12 conference in Cambridge. In its original version the paper went over the benefits of using a distributed parallel architecture to store and process large datasets. This paper analyzes the problem of storing, processing and retrieving meaningful insight from petabytes of data. It provides a survey on current distributed and parallel data processing technologies and, based on them, will propose an architecture that can be used to solve the analyzed problem. In this version there is more emphasis put on distributed files systems and the ETL processes involved in a distributed environment.

  11. SKIRT: Hybrid parallelization of radiative transfer simulations

    Science.gov (United States)

    Verstocken, S.; Van De Putte, D.; Camps, P.; Baes, M.

    2017-07-01

    We describe the design, implementation and performance of the new hybrid parallelization scheme in our Monte Carlo radiative transfer code SKIRT, which has been used extensively for modelling the continuum radiation of dusty astrophysical systems including late-type galaxies and dusty tori. The hybrid scheme combines distributed memory parallelization, using the standard Message Passing Interface (MPI) to communicate between processes, and shared memory parallelization, providing multiple execution threads within each process to avoid duplication of data structures. The synchronization between multiple threads is accomplished through atomic operations without high-level locking (also called lock-free programming). This improves the scaling behaviour of the code and substantially simplifies the implementation of the hybrid scheme. The result is an extremely flexible solution that adjusts to the number of available nodes, processors and memory, and consequently performs well on a wide variety of computing architectures.

  12. A Two-Phase Coverage-Enhancing Algorithm for Hybrid Wireless Sensor Networks.

    Science.gov (United States)

    Zhang, Qingguo; Fok, Mable P

    2017-01-09

    Providing field coverage is a key task in many sensor network applications. In certain scenarios, the sensor field may have coverage holes due to random initial deployment of sensors; thus, the desired level of coverage cannot be achieved. A hybrid wireless sensor network is a cost-effective solution to this problem, which is achieved by repositioning a portion of the mobile sensors in the network to meet the network coverage requirement. This paper investigates how to redeploy mobile sensor nodes to improve network coverage in hybrid wireless sensor networks. We propose a two-phase coverage-enhancing algorithm for hybrid wireless sensor networks. In phase one, we use a differential evolution algorithm to compute candidate target positions for the mobile sensor nodes that could potentially improve coverage. In the second phase, we apply an optimization scheme to the candidate target positions calculated in phase one to reduce the accumulated potential moving distance of mobile sensors, such that the exact mobile sensor nodes that need to be moved, as well as their final target positions, can be determined. Experimental results show that the proposed algorithm provided significant improvement in terms of area coverage rate, average moving distance, area coverage-distance rate and the number of moved mobile sensors, when compared with other approaches.
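The phase-one optimizer named here can be sketched generically. Below is a minimal differential evolution (DE/rand/1/bin) loop on a toy objective — placing one mobile sensor as close as possible to a coverage hole at a hypothetical location (0.7, 0.3). The objective, bounds, and control parameters (F, CR, population size) are illustrative, not the authors' setup.

```python
# Minimal differential evolution sketch: mutate with a scaled difference of
# two random members, crossover with the current member, keep the better one.
import random

def differential_evolution(objective, bounds, pop_size=15, iters=150, seed=3):
    rng = random.Random(seed)
    dim = len(bounds)
    F, CR = 0.8, 0.9                     # differential weight, crossover rate
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [objective(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            trial = pop[i][:]
            jr = rng.randrange(dim)      # ensure at least one mutated gene
            for j in range(dim):
                if j == jr or rng.random() < CR:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    trial[j] = min(max(v, lo), hi)
            tf = objective(trial)
            if tf <= fit[i]:             # greedy selection
                pop[i], fit[i] = trial, tf
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

hole = (0.7, 0.3)                        # hypothetical coverage hole
pos, dist2 = differential_evolution(
    lambda p: (p[0] - hole[0]) ** 2 + (p[1] - hole[1]) ** 2,
    bounds=[(0.0, 1.0), (0.0, 1.0)])
```

In the paper's setting the objective would score the area-coverage gain of a whole set of candidate positions rather than distance to a single hole.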

  13. Patterns for Parallel Software Design

    CERN Document Server

    Ortega-Arjona, Jorge Luis

    2010-01-01

    Essential reading to understand patterns for parallel programming. Software patterns have revolutionized the way we think about how software is designed, built, and documented, and the design of parallel software requires you to consider other particular design aspects and special skills. From clusters to supercomputers, success heavily depends on the design skills of software developers. Patterns for Parallel Software Design presents a pattern-oriented software architecture approach to parallel software design. This approach is not a design method in the classic sense, but a new way of managing…

  14. Coverage-based constraints for IMRT optimization.

    Science.gov (United States)

    Mescher, H; Ulrich, S; Bangert, M

    2017-09-05

    Radiation therapy treatment planning requires an incorporation of uncertainties in order to guarantee an adequate irradiation of the tumor volumes. In current clinical practice, uncertainties are accounted for implicitly with an expansion of the target volume according to generic margin recipes. Alternatively, it is possible to account for uncertainties by explicit minimization of objectives that describe worst-case treatment scenarios, the expectation value of the treatment or the coverage probability of the target volumes during treatment planning. In this note we show that approaches relying on objectives to induce a specific coverage of the clinical target volumes are inevitably sensitive to variation of the relative weighting of the objectives. To address this issue, we introduce coverage-based constraints for intensity-modulated radiation therapy (IMRT) treatment planning. Our implementation follows the concept of coverage-optimized planning that considers explicit error scenarios to calculate and optimize patient-specific probabilities of covering a specific target volume fraction with a certain dose. Using a constraint-based reformulation of coverage-based objectives we eliminate the trade-off between coverage and competing objectives during treatment planning. In-depth convergence tests including 324 treatment plan optimizations demonstrate the reliability of coverage-based constraints for varying levels of probability, dose and volume. General clinical applicability of coverage-based constraints is demonstrated for two cases. A sensitivity analysis regarding penalty variations within this planning study based on IMRT treatment planning using (1) coverage-based constraints, (2) coverage-based objectives, (3) probabilistic optimization, (4) robust optimization and (5) conventional margins illustrates the potential benefit of coverage-based constraints that do not require tedious adjustment of target
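The coverage-probability quantity described above can be illustrated with a deliberately simple model: over a set of error scenarios (here, rigid shifts of a 1-D dose profile), compute the probability that at least a fraction v of the target receives dose at least d. The dose model, scenario list, and thresholds are all illustrative, not taken from the paper.

```python
# Scenario-based coverage probability on a 1-D phantom.

def frac_covered(dose, target, d):
    """Fraction of target voxels receiving at least dose d."""
    return sum(1 for i in target if dose[i] >= d) / len(target)

def coverage_probability(doses, target, v, d):
    """P(covered fraction >= v) over equally weighted error scenarios."""
    return sum(1 for dose in doses
               if frac_covered(dose, target, d) >= v) / len(doses)

n = 40
target = range(15, 25)                     # target voxels in a 1-D phantom

def shifted_dose(shift):
    # Flat 60 Gy plateau over voxels 13..26, rigidly shifted by the error.
    return [60.0 if 13 + shift <= i <= 26 + shift else 0.0 for i in range(n)]

scenarios = [-4, -2, -1, 0, 1, 2, 4]       # setup-error shifts, in voxels
doses = [shifted_dose(s) for s in scenarios]

# Probability that 95% of the target receives at least 57 Gy.
p = coverage_probability(doses, target, v=0.95, d=57.0)
```

Constraining `p` to exceed a prescribed level during optimization, instead of adding it as a weighted objective, is the reformulation the note proposes.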

  15. Universal health care coverage--pitfalls and promise of an employment-based approach.

    Science.gov (United States)

    Budetti, P

    1992-02-01

    America's patchwork quilt of health care coverage is coming apart at the seams. The system, such as it is, is built upon an inherently problematic base: employment. By definition, an employment-based approach, by itself, will not assure universal coverage of the entire population. If an employment-based approach is to be the centerpiece of a system that provides universal coverage, special attention must be paid to all the categories of individuals who are not employees--children, unemployed spouses or singles, the unemployable ill and disabled, persons between jobs, students, retirees, the elderly. Moreover, in a purely voluntary employment-based arrangement, some employers will not provide insurance at all, and others will provide inadequate coverage, necessitating other special provisions for coverage. As a consequence, about one out of six people now has no health coverage whatsoever, and even more have inadequate coverage. All the while, the rapidly-increasing transaction costs of sustaining this grossly inadequate pluralistic system eat up sufficient funds to provide basic benefits to the entire population. The time for systematic reforms has come and gone; what is now needed is action to prevent disaster, followed by a complete rebuilding of this country's health coverage system. Although perhaps more likely to be tried than more radical, completely nationalized, ones, stepwise reforms may not go far enough to cure the significant ills of the current employment-based system. Passage of inadequate reforms, then, could well set the stage for nationalized health care in the not too distant future.

  16. 42 CFR 416.48 - Condition for coverage-Pharmaceutical services.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Condition for coverage-Pharmaceutical services. 416... Coverage § 416.48 Condition for coverage—Pharmaceutical services. The ASC must provide drugs and... direction of an individual designated responsible for pharmaceutical services. (a) Standard: Administration...

  17. Genomic Selection Using Genotyping-By-Sequencing Data with Different Coverage Depth in Perennial Ryegrass

    DEFF Research Database (Denmark)

    Cericola, Fabio; Fé, Dario; Janss, Luc

    investigated how this reduction of the coverage depth affects the genomic relationship matrices used to estimated breeding value of F2 family pools in perennial ryegrass. A total of 995 families were genotyped via GBS providing more than 1.8M allele frequency estimates for each family with an average coverage...

  18. Lexical Threshold Revisited: Lexical Text Coverage, Learners' Vocabulary Size and Reading Comprehension

    Science.gov (United States)

    Laufer, Batia; Ravenhorst-Kalovski, Geke C.

    2010-01-01

    We explore the relationship between second language (L2) learners' vocabulary size, lexical text coverage that their vocabulary provides and their reading comprehension. We also conceptualize "adequate reading comprehension" and look for the lexical threshold for such reading in terms of coverage and vocabulary size. Vocabulary size was…

  19. Combined Scheduling and Mapping for Scalable Computing with Parallel Tasks

    Directory of Open Access Journals (Sweden)

    Jörg Dümmler

    2012-01-01

    Full Text Available Recent and future parallel clusters and supercomputers use symmetric multiprocessors (SMPs and multi-core processors as basic nodes, providing a huge amount of parallel resources. These systems often have hierarchically structured interconnection networks combining computing resources at different levels, starting with the interconnect within multi-core processors up to the interconnection network combining nodes of the cluster or supercomputer. The challenge for the programmer is that these computing resources should be utilized efficiently by exploiting the available degree of parallelism of the application program and by structuring the application in a way which is sensitive to the heterogeneous interconnect. In this article, we pursue a parallel programming method using parallel tasks to structure parallel implementations. A parallel task can be executed by multiple processors or cores and, for each activation of a parallel task, the actual number of executing cores can be adapted to the specific execution situation. In particular, we propose a new combined scheduling and mapping technique for parallel tasks with dependencies that takes the hierarchical structure of modern multi-core clusters into account. An experimental evaluation shows that the presented programming approach can lead to a significantly higher performance compared to standard data parallel implementations.

  20. 24 CFR 200.17 - Mortgage coverage.

    Science.gov (United States)

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Mortgage coverage. 200.17 Section... Generally Applicable to Multifamily and Health Care Facility Mortgage Insurance Programs; and Continuing Eligibility Requirements for Existing Projects Eligible Mortgage § 200.17 Mortgage coverage. The mortgage...

  1. Proteome coverage prediction with infinite Markov models

    Science.gov (United States)

    Claassen, Manfred; Aebersold, Ruedi; Buhmann, Joachim M.

    2009-01-01

    Motivation: Liquid chromatography tandem mass spectrometry (LC-MS/MS) is the predominant method to comprehensively characterize complex protein mixtures such as samples from prefractionated or complete proteomes. In order to maximize proteome coverage for the studied sample, i.e. identify as many traceable proteins as possible, LC-MS/MS experiments are typically repeated extensively and the results combined. Proteome coverage prediction is the task of estimating the number of peptide discoveries of future LC-MS/MS experiments. Proteome coverage prediction is important to enhance the design of efficient proteomics studies. To date, there does not exist any method to reliably estimate the increase of proteome coverage at an early stage. Results: We propose an extended infinite Markov model DiriSim to extrapolate the progression of proteome coverage based on a small number of already performed LC-MS/MS experiments. The method explicitly accounts for the uncertainty of peptide identifications. We tested DiriSim on a set of 37 LC-MS/MS experiments of a complete proteome sample and demonstrated that DiriSim correctly predicts the coverage progression already from a small subset of experiments. The predicted progression enabled us to specify maximal coverage for the test sample. We demonstrated that quality requirements on the final proteome map impose an upper bound on the number of useful experiment repetitions and limit the achievable proteome coverage. Contact: manfredc@inf.ethz.ch; jbuhmann@inf.ethz.ch PMID:19477982
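
The saturation behavior that DiriSim extrapolates can be illustrated with a much simpler simulation (a hedged stand-in, not the paper's infinite Markov model): repeated LC-MS/MS runs are modeled as random draws from a peptide pool with skewed, Zipf-like abundances, and the cumulative distinct-peptide count flattens as runs are repeated.

```python
import random

def accumulation_curve(pool_size, draws_per_run, n_runs, seed=0):
    """Simulate repeated LC-MS/MS runs as random draws from a peptide pool
    with Zipf-like abundances; return the cumulative number of distinct
    peptides observed after each run (a toy stand-in for coverage progression)."""
    rng = random.Random(seed)
    # Abundant peptides are sampled far more often than rare ones.
    weights = [1.0 / (rank + 1) for rank in range(pool_size)]
    seen = set()
    curve = []
    for _ in range(n_runs):
        seen.update(rng.choices(range(pool_size), weights=weights, k=draws_per_run))
        curve.append(len(seen))
    return curve
```

Plotting `curve` shows the diminishing returns that impose an upper bound on the number of useful experiment repetitions.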

  2. CDMA coverage under mobile heterogeneous network load

    NARCIS (Netherlands)

    Saban, D.; van den Berg, Hans Leo; Boucherie, Richardus J.; Endrayanto, A.I.

    2002-01-01

    We analytically investigate coverage (determined by the uplink) under non-homogeneous and moving traffic load of third generation UMTS mobile networks. In particular, for different call assignment policies, we investigate cell breathing and the movement of the coverage gap occurring between cells

  3. On optimal coverage with unreliable sensors

    NARCIS (Netherlands)

    Frasca, Paolo; Garin, Federica

    This paper regards the problem of placing unreliable sensors in a given one-dimensional environment, in such a way to optimize a given coverage cost. We specifically consider the disk-coverage cost, whose optimal solution for reliable sensors is simply an equally-spaced configuration of the sensors.
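
For reliable sensors, the equally-spaced optimum mentioned above is easy to verify numerically. A minimal sketch on the unit interval (the choice of [0, 1] is an illustrative assumption):

```python
def equally_spaced(n):
    """Optimal positions of n reliable sensors on [0, 1] for the disk-coverage
    cost (minimize the largest distance from any point to its nearest sensor).
    The optimum places sensor i at (2i - 1) / (2n), giving worst case 1/(2n)."""
    return [(2 * i - 1) / (2 * n) for i in range(1, n + 1)]

def worst_case_distance(positions):
    """Largest distance from any point of [0, 1] to the nearest sensor."""
    xs = sorted(positions)
    worst = max(xs[0], 1 - xs[-1])  # the two uncovered ends
    for a, b in zip(xs, xs[1:]):
        worst = max(worst, (b - a) / 2)  # midpoint between neighbors
    return worst
```

Any non-uniform placement of the same number of sensors yields a strictly larger worst-case distance.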

  4. 76 FR 7767 - Student Health Insurance Coverage

    Science.gov (United States)

    2011-02-11

    ... HUMAN SERVICES 45 CFR Parts 144 and 147 RIN 0950-AA20 Student Health Insurance Coverage AGENCY: Centers... proposed regulation that would establish rules for student health insurance coverage under the Public Health Service Act and the Affordable Care Act. The proposed rule would define ``student health insurance...

  5. A Semantic Framework for Test Coverage

    NARCIS (Netherlands)

    Brandan Briones, L.; Brinksma, Hendrik; Stoelinga, Mariëlle Ida Antoinette; Graf, Susanne; Zhang, Wenhui

    2006-01-01

    Since testing is inherently incomplete, test selection has vital importance. Coverage measures evaluate the quality of a test suite and help the tester select test cases with maximal impact at minimum cost. Existing coverage criteria for test suites are usually defined in terms of syntactic

  6. Earthquake Coverage by the Western Press.

    Science.gov (United States)

    Gaddy, Gary D.; Tanjong, Enoh

    1986-01-01

    Describes a study to determine the type and quantity of Western news coverage of Third World earthquakes. Finds little evidence of geographical bias in coverage studied, and suggests that care must be taken to examine the underlying news events before bias is alleged. (MS)

  7. An Educational Tool for Interactive Parallel and Distributed Processing

    DEFF Research Database (Denmark)

    Pagliarini, Luigi; Lund, Henrik Hautop

    2011-01-01

    In this paper we try to describe how the Modular Interactive Tiles System (MITS) can be a valuable tool for introducing students to interactive parallel and distributed processing programming. This is done by providing an educational hands-on tool that allows a change of representation of the abstract problems related to designing interactive parallel and distributed systems. Indeed, MITS seems to bring a series of goals into the education, such as parallel programming, distributedness, communication protocols, master dependency, software behavioral models, adaptive interactivity, feedback … parallel and distributed processing with different software behavioural models such as open loop, randomness based, rule based, user interaction based, AI and ALife based software.

  8. Parallel Adaptive Mesh Refinement

    Energy Technology Data Exchange (ETDEWEB)

    Diachin, L; Hornung, R; Plassmann, P; WIssink, A

    2005-03-04

    As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of a impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35]. Note the
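
The refinement idea described above can be sketched in one dimension (a toy indicator, not the full AMR machinery of the record): split only the cells whose local solution jump exceeds a tolerance, so resolution concentrates at the sharp feature while smooth regions stay coarse.

```python
def refine_1d(xs, f, tol):
    """One pass of 1D adaptive refinement: split any interval whose endpoint
    values of f differ by more than tol (a crude gradient indicator)."""
    out = [xs[0]]
    for a, b in zip(xs, xs[1:]):
        if abs(f(b) - f(a)) > tol:
            out.append((a + b) / 2)  # insert a midpoint: locally finer grid
        out.append(b)
    return out
```

Applied repeatedly, and with coarsening added, this is the dynamic grid-spacing adjustment AMR performs as transient features move.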

  9. Impact of invitation schemes on screening coverage

    DEFF Research Database (Denmark)

    Jacobsen, Katja Kemp; von Euler Chelpin, My; Vejborg, Ilse

    2017-01-01

    BACKGROUND: The purpose of mammography screening is to decrease breast cancer mortality. To achieve this, a high coverage by examination is needed. Within an organized screening programme, we examined the impact of changes in the invitation schedule on the interplay between coverage and participation. METHOD: We studied nine cohorts aged 50-51 when first targeted by mammography screening in Copenhagen, Denmark. Population data were retrieved from the Danish Civil Registration System; invitation and attendance data from the screening programme database. Data were linked using unique personal identification numbers. Coverage by invitation was defined as (number of invited women/number of targeted women), coverage by examination as (number of screened women/number of targeted women), and participation rate as (number of screened women/number of invited women). RESULTS: Coverage by invitation was close...
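
The three quantities defined in the record are simple ratios linked by an identity: coverage by examination equals coverage by invitation times participation. A minimal sketch with made-up counts:

```python
def screening_rates(targeted, invited, screened):
    """The three rates used in the record, all as fractions of the
    relevant denominator (counts here are illustrative, not the study's)."""
    return {
        "coverage_by_invitation": invited / targeted,
        "coverage_by_examination": screened / targeted,
        "participation": screened / invited,
    }
```

The identity explains the paper's interplay: a schedule change that raises invitations can raise coverage by examination even if participation among invitees stays flat.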

  10. The BLAZE language - A parallel language for scientific programming

    Science.gov (United States)

    Mehrotra, Piyush; Van Rosendale, John

    1987-01-01

    A Pascal-like scientific programming language, BLAZE, is described. BLAZE contains array arithmetic, forall loops, and APL-style accumulation operators, which allow natural expression of fine grained parallelism. It also employs an applicative or functional procedure invocation mechanism, which makes it easy for compilers to extract coarse grained parallelism using machine specific program restructuring. Thus BLAZE should allow one to achieve highly parallel execution on multiprocessor architectures, while still providing the user with conceptually sequential control flow. A central goal in the design of BLAZE is portability across a broad range of parallel architectures. The multiple levels of parallelism present in BLAZE code, in principle, allow a compiler to extract the types of parallelism appropriate for the given architecture while neglecting the remainder. The features of BLAZE are described and it is shown how this language would be used in typical scientific programming.
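
BLAZE's forall loops and APL-style accumulation operators have direct analogues in any language with elementwise comprehensions and associative reductions. A hedged Python sketch of the two idioms (Python syntax, not BLAZE's):

```python
def saxpy_forall(a, xs, ys):
    """Elementwise a*x + y written 'forall'-style: every element is
    independent, so a compiler is free to evaluate them in parallel."""
    return [a * x + y for x, y in zip(xs, ys)]

def accumulate_sum(xs):
    """APL-style accumulation (+/ xs): the reduction is associative,
    so it can be restructured into a parallel tree of partial sums."""
    total = 0
    for x in xs:
        total += x
    return total
```

The point of the BLAZE design is that such code reads sequentially while exposing the fine-grained parallelism to the compiler.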

  11. The BLAZE language: A parallel language for scientific programming

    Science.gov (United States)

    Mehrotra, P.; Vanrosendale, J.

    1985-01-01

    A Pascal-like scientific programming language, Blaze, is described. Blaze contains array arithmetic, forall loops, and APL-style accumulation operators, which allow natural expression of fine grained parallelism. It also employs an applicative or functional procedure invocation mechanism, which makes it easy for compilers to extract coarse grained parallelism using machine specific program restructuring. Thus Blaze should allow one to achieve highly parallel execution on multiprocessor architectures, while still providing the user with conceptually sequential control flow. A central goal in the design of Blaze is portability across a broad range of parallel architectures. The multiple levels of parallelism present in Blaze code, in principle, allow a compiler to extract the types of parallelism appropriate for the given architecture while neglecting the remainder. The features of Blaze are described, and it is shown how this language would be used in typical scientific programming.

  12. Towards a streaming model for nested data parallelism

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner; Filinski, Andrzej

    2013-01-01

    The language-integrated cost semantics for nested data parallelism pioneered by NESL provides an intuitive, high-level model for predicting performance and scalability of parallel algorithms with reasonable accuracy. However, this predictability, obtained through a uniform, parallelism-flattening execution strategy, comes at the price of potentially prohibitive space usage in the common case of computations with an excess of available parallelism, such as dense-matrix multiplication. We present a simple nested data-parallel functional language and associated cost semantics that retains NESL … -processable in a streaming fashion. This semantics is directly compatible with previously proposed piecewise execution models for nested data parallelism, but allows the expected space usage to be reasoned about directly at the source-language level. The language definition and implementation are still very much work …
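
As the record notes, dense-matrix multiplication written in nested data-parallel style exposes an excess of available parallelism. A minimal Python sketch of that nesting (comprehensions standing in for NESL-style parallel sequences; this is not NESL itself):

```python
def matmul_nested(A, B):
    """Dense matrix product as nested data-parallel comprehensions: the outer
    two levels (rows x columns) are fully independent, and each dot product is
    itself a parallel reduction. Flattening materializes all of this at once,
    which is exactly where space usage can blow up; a streaming semantics
    processes it piecewise."""
    cols = list(zip(*B))  # columns of B
    return [[sum(a * b for a, b in zip(row, col)) for col in cols] for row in A]
```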

  13. PARALLEL IMPORT: REALITY FOR RUSSIA

    Directory of Open Access Journals (Sweden)

    Т. А. Сухопарова

    2014-01-01

    Full Text Available The problem of parallel import is an urgent question today. Legalizing parallel import in Russia is expedient; this conclusion rests on an analysis of opposing expert opinions. At the same time, the negative consequences of this decision must be considered, and remedies applied to minimize them.

  14. Parallel context-free languages

    DEFF Research Database (Denmark)

    Skyum, Sven

    1974-01-01

    The relation between the family of context-free languages and the family of parallel context-free languages is examined in this paper. It is proved that the families are incomparable. Finally we prove that the family of languages of finite index is contained in the family of parallel context...

  15. The effect of health insurance coverage and the doctor-patient relationship on health care utilization in high poverty neighborhoods

    National Research Council Canada - National Science Library

    Destini A. Smith; Alan Akira; Kenneth Hudson; Andrea Hudson; Marcellus Hudson; Marcus Mitchell; Errol Crook

    2017-01-01

    .... We hypothesize that in low socioeconomic status neighborhoods, having health insurance coverage and a regular health care provider increases the likelihood of receiving diagnostic tests for cardio...

  16. Parallel programming with PCN. Revision 2

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.; Tuecke, S.

    1993-01-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

  17. Insurance cancellations in context: stability of coverage in the nongroup market prior to health reform.

    Science.gov (United States)

    Sommers, Benjamin D

    2014-05-01

    Recent cancellations of nongroup health insurance plans generated much policy debate and raised concerns that the Affordable Care Act (ACA) may increase the number of uninsured Americans in the short term. This article provides evidence on the stability of nongroup coverage using US census data for the period 2008-11, before ACA provisions took effect. The principal findings are threefold. First, this market was characterized by high turnover: Only 42 percent of people with nongroup coverage at the outset of the study period retained that coverage after twelve months. Second, 80 percent of people experiencing coverage changes acquired other insurance within a year, most commonly from an employer. Third, turnover varied across groups, with stable coverage more common for whites and self-employed people than for other groups. Turnover was particularly high among adults ages 19-35, with only 21 percent of young adults retaining continuous nongroup coverage for two years. Given estimates from 2012 that 10.8 million people were covered in this market, these results suggest that 6.2 million people leave nongroup coverage annually. This suggests that the nongroup market was characterized by frequent disruptions in coverage before the ACA and that the effects of the recent cancellations are not necessarily out of the norm. These results can serve as a useful pre-ACA baseline with which to evaluate the law's long-term impact on the stability of nongroup coverage.

  18. Seeing or moving in parallel

    DEFF Research Database (Denmark)

    Christensen, Mark Schram; Ehrsson, H Henrik; Nielsen, Jens Bo

    2013-01-01

    adduction-abduction movements symmetrically or in parallel with real-time congruent or incongruent visual feedback of the movements. One network, consisting of bilateral superior and middle frontal gyrus and supplementary motor area (SMA), was more active when subjects performed parallel movements, whereas...... a different network, involving bilateral dorsal premotor cortex (PMd), primary motor cortex, and SMA, was more active when subjects viewed parallel movements while performing either symmetrical or parallel movements. Correlations between behavioral instability and brain activity were present in right lateral...... cerebellum during the symmetric movements. These findings suggest the presence of different error-monitoring mechanisms for symmetric and parallel movements. The results indicate that separate areas within PMd and SMA are responsible for both perception and performance of ongoing movements...

  19. Predictability and Parallelism of Multitrait Adaptation.

    Science.gov (United States)

    Langerhans, Randall Brian

    2017-12-21

    Environments shape the traits of organisms. Environmental variation may rarely alter selection on only a few traits, but instead precipitate wholesale changes of the multidimensional selective regime-many traits might experience divergent selection across divergent environments. Such changes in selection can elicit multifarious evolution. How predictable (from theory) and how parallel (consistent occurrences) is multitrait divergence across replicated environments? Here, I address this question using the post-Pleistocene radiation of Bahamas mosquitofish (Gambusia hubbsi) inhabiting blue holes on Andros Island. These fish independently colonized numerous blue holes, some that harbor a major fish predator (bigmouth sleeper, Gobiomorus dormitor) and some that lack any major predators. I used 5 approaches to quantitatively explore the predictability and parallelism of multitrait divergence between predation regimes in Bahamas mosquitofish. Synthesizing data for 90 traits from 13 different types of character suites (e.g., body morphology, life history, genital morphology, coloration, mating preference, habitat use), I found widespread evidence for strong, predictable, and parallel divergence between predation regimes. Yet despite the great majority of traits showing predictable trajectories of change, and the majority of traits showing significant parallelism and strong magnitudes of predictable divergence, I uncovered that over half of the overall phenotypic variation among populations was not driven by variation in predation regime. Results suggest that focusing on few traits, or focusing on parallel aspects of divergence, can provide a misleading picture of adaptation, and nonparallel divergence appears widespread and warrants greater attention. Taking a multitrait perspective, and quantifying predictability and parallelism, can yield important insights. © The American Genetic Association 2017. All rights reserved. 

  20. Parallel integer sorting with medium and fine-scale parallelism

    Science.gov (United States)

    Dagum, Leonardo

    1993-01-01

    Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128 processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
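
Barrel-sort itself targets message-passing machines; the sketch below only illustrates its key-range bucketing idea in serial Python, with one list standing in for each processor's "barrel" (names and parameters are illustrative, not from the paper):

```python
def barrel_sort(keys, n_procs, key_max):
    """Sketch of the bucketing idea behind barrel-sort: partition the key
    range [0, key_max] into one contiguous barrel per processor, route each
    key to its barrel (one coarse communication phase, which suits machines
    with high message-passing overhead), then sort each barrel locally."""
    width = (key_max + n_procs) // n_procs  # roughly ceil(key range / n_procs)
    barrels = [[] for _ in range(n_procs)]
    for k in keys:
        barrels[min(k // width, n_procs - 1)].append(k)
    result = []
    for b in barrels:
        result.extend(sorted(b))  # local sort on each "processor"
    return result
```

Because the barrels cover disjoint, ordered key ranges, concatenating the locally sorted barrels yields a globally sorted sequence.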

  1. Template based parallel checkpointing in a massively parallel computer system

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
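
The template comparison can be sketched with per-block checksums, in the spirit of rsync (a simplification: the broadcast protocol and compression described above are omitted, and SHA-256 stands in for whatever checksum the system uses):

```python
import hashlib

def block_checksums(data, block_size):
    """Checksum of each fixed-size block of a checkpoint image."""
    return [hashlib.sha256(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)]

def delta_blocks(node_data, template_sums, block_size):
    """rsync-flavored delta: keep only the blocks whose checksum differs
    from the template's, i.e. the data that actually has to be transmitted
    and stored for this node's checkpoint."""
    delta = {}
    for idx, chk in enumerate(block_checksums(node_data, block_size)):
        if idx >= len(template_sums) or chk != template_sums[idx]:
            delta[idx] = node_data[idx * block_size:(idx + 1) * block_size]
    return delta
```

A node whose state matches the template transmits nothing; nodes that diverged transmit only the changed blocks, which is the source of the claimed savings.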

  2. EFFICIENT SCHEDULING OF PARALLEL JOBS ON MASSIVELY PARALLEL SYSTEMS

    Energy Technology Data Exchange (ETDEWEB)

    F. PETRINI; W. FENG

    1999-09-01

    We present buffered coscheduling, a new methodology to multitask parallel jobs in a message-passing environment and to develop parallel programs that can pave the way to the efficient implementation of a distributed operating system. Buffered coscheduling is based on three innovative techniques: communication buffering, strobing, and non-blocking communication. By leveraging these techniques, we can perform effective optimizations based on the global status of the parallel machine rather than on the limited knowledge available locally to each processor. The advantages of buffered coscheduling include higher resource utilization, reduced communication overhead, efficient implementation of low-control strategies and fault-tolerant protocols, accurate performance modeling, and a simplified yet still expressive parallel programming model. Preliminary experimental results show that buffered coscheduling is very effective in increasing the overall performance in the presence of load imbalance and communication-intensive workloads.

  3. Improving protein and proteome coverage through data-independent multiplexed peptide fragmentation.

    Science.gov (United States)

    Blackburn, Kevin; Mbeunkui, Flaubert; Mitra, Srijeet K; Mentzel, Tobias; Goshe, Michael B

    2010-07-02

    Performance differences in protein and proteome characterization achieved by data-independent acquisition (DIA) LC/MS(E) and data-dependent acquisition (DDA) LC/MS/MS approaches were investigated. LC/MS(E) is a novel mode of generating product ion data for all coeluting precursors in parallel as opposed to LC/MS/MS where coeluting precursors must be serially fragmented one at a time. During LC/MS(E) analysis, alternating MS scans of "normal" and "elevated" collision energy are collected at regular intervals, providing nearly a 100% duty cycle for precursor detection and fragmentation because all precursors are fragmented across their full chromatographic elution profile. This is in contrast to DDA-based MS/MS where serial selection of precursor ions is biased toward interrogation and detection of the highest abundance sample components by virtue of the intensity-driven interrogation scheme employed. Both modes of acquisition were applied to a simple four-protein standard mixture with a 16-fold dynamic range in concentration, an in-gel digest of the Arabidopsis thaliana protein FLS2 purified by immunoprecipitation, and a solution-digested tomato leaf proteome sample. Dramatic improvement for individual protein sequence coverage was obtained for all three samples analyzed by the DIA approach, particularly for the lowest abundance sample components. In many instances, precursors readily detected and identified during DIA were either interrogated by MS/MS during DDA at inopportune points in their chromatographic elution profiles resulting in poor quality product ion spectra or not interrogated at all. Detailed evaluation of both DDA and DIA raw data and timing of the MS-to-MS/MS switching events clearly revealed the fundamental limitations of serial MS/MS interrogation and the advantages of parallel fragmentation by DIA for more comprehensive protein identification and characterization which holds promise for enhanced isoform and post-translational modification

  4. Vaccination coverage among adults, excluding influenza vaccination - United States, 2013.

    Science.gov (United States)

    Williams, Walter W; Lu, Peng-Jun; O'Halloran, Alissa; Bridges, Carolyn B; Kim, David K; Pilishvili, Tamara; Hales, Craig M; Markowitz, Lauri E

    2015-02-06

    among the general population, and adult patients largely rely on health care provider recommendations for vaccination. The Community Preventive Services Task Force and the National Vaccine Advisory Committee have recommended that health care providers incorporate vaccination needs assessment, recommendation, and offer of vaccination into every clinical encounter with adult patients to improve vaccination rates and to narrow the widening racial/ethnic disparities in vaccination coverage.

  5. A Two-Phase Coverage-Enhancing Algorithm for Hybrid Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Qingguo Zhang

    2017-01-01

    Full Text Available Providing field coverage is a key task in many sensor network applications. In certain scenarios, the sensor field may have coverage holes due to random initial deployment of sensors; thus, the desired level of coverage cannot be achieved. A hybrid wireless sensor network is a cost-effective solution to this problem, which is achieved by repositioning a portion of the mobile sensors in the network to meet the network coverage requirement. This paper investigates how to redeploy mobile sensor nodes to improve network coverage in hybrid wireless sensor networks. We propose a two-phase coverage-enhancing algorithm for hybrid wireless sensor networks. In phase one, we use a differential evolution algorithm to compute the candidate target positions for the mobile sensor nodes that could potentially improve coverage. In the second phase, we use an optimization scheme on the candidate target positions calculated from phase one to reduce the accumulated potential moving distance of mobile sensors, such that the exact mobile sensor nodes that need to be moved as well as their final target positions can be determined. Experimental results show that the proposed algorithm provided significant improvement in terms of area coverage rate, average moving distance, area coverage–distance rate and the number of moved mobile sensors, when compared with other approaches.
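
Phase one can be sketched for a toy version of the problem: place k disk sensors in the unit square to cover a grid of sample points, using classic rand/1/bin differential evolution (the F, CR, and population settings are illustrative assumptions; the paper's actual encoding and its second-phase moving-distance optimization are omitted):

```python
import random

def coverage(sensors, points, radius):
    """Fraction of sample points within sensing radius of some sensor."""
    covered = sum(
        1 for px, py in points
        if any((px - sx) ** 2 + (py - sy) ** 2 <= radius ** 2 for sx, sy in sensors)
    )
    return covered / len(points)

def de_place_sensors(points, n_sensors, radius, pop_size=20, gens=60, seed=1):
    """Differential evolution (rand/1/bin) over flattened sensor coordinates
    in the unit square, maximizing point coverage."""
    rng = random.Random(seed)
    dim = 2 * n_sensors
    def fitness(v):
        return coverage(list(zip(v[0::2], v[1::2])), points, radius)
    pop = [[rng.random() for _ in range(dim)] for _ in range(pop_size)]
    fit = [fitness(v) for v in pop]
    F, CR = 0.7, 0.9  # standard DE control parameters (illustrative)
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # force at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    x = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    trial.append(min(1.0, max(0.0, x)))  # clamp to the square
                else:
                    trial.append(pop[i][j])
            tf = fitness(trial)
            if tf >= fit[i]:  # greedy selection: coverage never decreases
                pop[i], fit[i] = trial, tf
    best = max(range(pop_size), key=fit.__getitem__)
    return list(zip(pop[best][0::2], pop[best][1::2])), fit[best]
```

The greedy selection step makes the best coverage in the population monotonically nondecreasing, which is why DE is a natural fit for the candidate-position phase.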

  6. Coverage or costs: the role of health insurance in labor market reentry among early retirees.

    Science.gov (United States)

    Kail, Ben Lennox

    2012-01-01

    This study evaluated the impact of insurance coverage on the odds of returning to work after early retirement and the change in insurance coverage after returning to work. The Health and Retirement Study was used to estimate hierarchical linear models of transitions to full-time work and part-time work relative to remaining retired. A chi-square test was also used to assess change in insurance coverage after returning to work. Insurance coverage was unrelated to the odds of transitioning to full-time work. However, relative to employer-provided insurance, private nongroup insurance increased the odds of transitioning to part-time work, whereas public insurance reduced the odds of making this transition. Additionally, after returning to work, insurance coverage increased among those who were without employer-provided insurance in retirement. Results indicated that source of coverage may be more useful in explaining returns to part-time work than simply whether people have coverage at all. In other words, the mechanism underlying the positive relationship between insurance and returning to work appeared to be limited to those who return to work because of the cost of private nongroup insurance. Among these people, however, there was some evidence that they are able to secure new coverage once they return to work.

  7. SWAMP+: multiple subsequence alignment using associative massive parallelism

    Energy Technology Data Exchange (ETDEWEB)

    Steinfadt, Shannon Irene [Los Alamos National Laboratory]; Baker, Johnnie W [Kent State Univ.]

    2010-10-18

    A new parallel algorithm SWAMP+ incorporates the Smith-Waterman sequence alignment on an associative parallel model known as ASC. It is a highly sensitive parallel approach that expands traditional pairwise sequence alignment. This is the first parallel algorithm to provide multiple non-overlapping, non-intersecting subsequence alignments with the accuracy of Smith-Waterman. The efficient algorithm provides multiple alignments similar to BLAST while creating a better workflow for the end users. The parallel portions of the code run in O(m+n) time using m processors. When m = n, the algorithmic analysis becomes O(n) with a coefficient of two, yielding a linear speedup. Implementation of the algorithm on the SIMD ClearSpeed CSX620 confirms this theoretical linear speedup with real timings.
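For reference, the serial Smith-Waterman recurrence that SWAMP+ parallelizes can be sketched as follows. This is a minimal score-only version with linear gap penalties; the paper's algorithm additionally extracts multiple non-overlapping, non-intersecting subsequence alignments:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Score-only Smith-Waterman local alignment, O(m*n) serially.
    SWAMP+ evaluates the anti-diagonals of this recurrence in
    parallel, giving O(m+n) time on m processors."""
    m, n = len(a), len(b)
    H = [[0] * (n + 1) for _ in range(m + 1)]
    best, best_pos = 0, (0, 0)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,                       # local alignment floor
                          H[i - 1][j - 1] + s,     # match / mismatch
                          H[i - 1][j] + gap,       # gap in b
                          H[i][j - 1] + gap)       # gap in a
            if H[i][j] > best:
                best, best_pos = H[i][j], (i, j)
    return best, best_pos
```

All cells on one anti-diagonal (i + j constant) depend only on earlier anti-diagonals, which is what makes the wavefront parallelization possible.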

  8. MASSIVE HYBRID PARALLELISM FOR FULLY IMPLICIT MULTIPHYSICS

    Energy Technology Data Exchange (ETDEWEB)

    Cody J. Permann; David Andrs; John W. Peterson; Derek R. Gaston

    2013-05-01

    As hardware advances continue to modify the supercomputing landscape, traditional scientific software development practices will become more outdated, ineffective, and inefficient. The process of rewriting/retooling existing software for new architectures is a Sisyphean task, and results in substantial hours of development time, effort, and money. Software libraries which provide an abstraction of the resources provided by such architectures are therefore essential if the computational engineering and science communities are to continue to flourish in this modern computing environment. The Multiphysics Object Oriented Simulation Environment (MOOSE) framework enables complex multiphysics analysis tools to be built rapidly by scientists, engineers, and domain specialists, while also allowing them to both take advantage of current HPC architectures, and efficiently prepare for future supercomputer designs. MOOSE employs a hybrid shared-memory and distributed-memory parallel model and provides a complete and consistent interface for creating multiphysics analysis tools. In this paper, a brief discussion of the mathematical algorithms underlying the framework and the internal object-oriented hybrid parallel design are given. Representative massively parallel results from several applications areas are presented, and a brief discussion of future areas of research for the framework are provided.

  9. Numerical Algorithms for Parallel Computers

    Science.gov (United States)

    1989-08-31

    Numerical Algorithms for Parallel Computers, by Loyce M. Adams, Department of Applied Mathematics. At the Conference on Applied Linear Algebra (Madison, WI, May 1988), Loyce Adams presented the minisymposium talk "Preconditioners on Parallel Computers."

  10. Parallel education: what is it?

    OpenAIRE

    Amos, Michelle Peta

    2017-01-01

    In the history of education it has long been held that single-sex schooling and coeducation are the two models of education present in schools. With the introduction of parallel schools over the last 15 years, there has been very little research into this 'new model'. Many people do not understand what it means for a school to be parallel, or they confuse a parallel model with coeducation due to the presence of both boys and girls within the one institution. Therefore, the main obj...

  11. Parallel Event Analysis Under Unix

    Science.gov (United States)

    Looney, S.; Nilsson, B. S.; Oest, T.; Pettersson, T.; Ranjard, F.; Thibonnier, J.-P.

    The ALEPH experiment at LEP, the CERN CN division and Digital Equipment Corp. have, in a joint project, developed a parallel event analysis system. The parallel physics code is identical to ALEPH's standard analysis code, ALPHA, only the organisation of input/output is changed. The user may switch between sequential and parallel processing by simply changing one input "card". The initial implementation runs on an 8-node DEC 3000/400 farm, using the PVM software, and exhibits a near-perfect speed-up linearity, reducing the turn-around time by a factor of 8.

  12. Kalman Filter Tracking on Parallel Architectures

    Directory of Open Access Journals (Sweden)

    Cerati Giuseppe

    2016-01-01

    Full Text Available Power density constraints are limiting the performance improvements of modern CPUs. To address this we have seen the introduction of lower-power, multi-core processors such as GPGPU, ARM and Intel MIC. In order to achieve the theoretical performance gains of these processors, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High-Luminosity Large Hadron Collider (HL-LHC), for example, this will be by far the dominant problem. The need for greater parallelism has driven investigations of very different track finding techniques such as Cellular Automata or Hough Transforms. The most common track finding techniques in use today, however, are those based on a Kalman filter approach. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. They are known to provide high physics performance, are robust, and are in use today at the LHC. Given the utility of the Kalman filter in track finding, we have begun to port these algorithms to parallel architectures, namely Intel Xeon and Xeon Phi. We report here on our progress towards an end-to-end track reconstruction algorithm fully exploiting vectorization and parallelization techniques in a simplified experimental environment.
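The building block being ported here is the linear Kalman filter's predict/update cycle, which can be sketched in its generic textbook form (this is not the experiments' actual tracking code; state layout and matrices below are illustrative):

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.
    x, P : state estimate and its covariance
    z    : new measurement
    F, H : state-transition and measurement matrices
    Q, R : process- and measurement-noise covariances"""
    # Predict: propagate the state and covariance forward
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: blend in the measurement
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

In track finding, one such step is executed per detector layer for every track candidate; the vectorization effort described above comes from running many candidates' steps in lock-step across SIMD lanes.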

  13. Parallel ac-ac converters using master-slave control

    Science.gov (United States)

    Taufik

    The objective of this study is to develop a parallel ac-ac converter that can be implemented as a means of connecting ac generators in parallel. The most important goal of the parallel converter in this study is to obtain equal current sharing performance among the paralleled generators such that an economically optimal solution for production cost of electricity can be accomplished. At the same time, voltage regulation and stability of the parallel system upon changes of different sizes and load types are to be maintained. Use of the available parallel converter method and voltage inversion technique leads to the formulation of a new solution technique for parallel operation of ac generators. The proposed parallel ac-ac converter involves Sinusoidal Pulse Width Modulation inverter combined with a control scheme called the Stationary Master-Slave control that uses voltage feedback loop and designates one generator unit as the master. The study involves numerous computer simulations to verify the feasibility of the parallel ac-ac converters. In doing so, as many as five models are considered including Current Control Current Source model, PI model, PI-PWM model, PI-PWM model with MOSFETs, and PI-PWM model with real MOSFET model. The results of the simulations show that the parallel ac-ac converter not only successfully provides the current sharing property among the paralleled units but also maintains system stability during changes in load demand. Further research work is suggested to address extensions to parallel ac-dc and dc-ac systems, hardware setup of at least two turbogenerators to test and verify the proposed approach, dynamic analysis to ensure system stability, and implementations of other types of Master-Slave control schemes to improve redundancy.

  14. Vacuum Large Current Parallel Transfer Numerical Analysis

    Directory of Open Access Journals (Sweden)

    Enyuan Dong

    2014-01-01

    Full Text Available The stable operation and reliable breaking of a large generator current are a difficult problem in power systems. It can be solved by using parallel interrupters with a proper timing sequence based on phase-control technology, in which the breaker's control strategy is determined by the times of both the first-opening phase and the second-opening phase. A precise model of the transfer current can provide the proper timing sequence for breaking the generator circuit breaker. By analysing transfer-current experiments and data, the real vacuum arc resistance and a precise corrected model of the large transfer-current process are obtained in this paper. The transfer time calculated by the corrected model of the transfer current is very close to the actual transfer time. This can provide guidance for planning a proper timing sequence and breaking the vacuum generator circuit breaker with parallel interrupters.

  15. Incremental Parallelization of Non-Data-Parallel Programs Using the Charon Message-Passing Library

    Science.gov (United States)

    VanderWijngaart, Rob F.

    2000-01-01

    Message passing is among the most popular techniques for parallelizing scientific programs on distributed-memory architectures. The reasons for its success are wide availability (MPI), efficiency, and full tuning control provided to the programmer. A major drawback, however, is that incremental parallelization, as offered by compiler directives, is not generally possible, because all data structures have to be changed throughout the program simultaneously. Charon remedies this situation through mappings between distributed and non-distributed data. It allows breaking up the parallelization into small steps, guaranteeing correctness at every stage. Several tools are available to help convert legacy codes into high-performance message-passing programs. They usually target data-parallel applications, whose loops carrying most of the work can be distributed among all processors without much dependency analysis. Others do a full dependency analysis and then convert the code virtually automatically. Even more toolkits are available that aid construction from scratch of message passing programs. None, however, allows piecemeal translation of codes with complex data dependencies (i.e. non-data-parallel programs) into message passing codes. The Charon library (available in both C and Fortran) provides incremental parallelization capabilities by linking legacy code arrays with distributed arrays. During the conversion process, non-distributed and distributed arrays exist side by side, and simple mapping functions allow the programmer to switch between the two in any location in the program. Charon also provides wrapper functions that leave the structure of the legacy code intact, but that allow execution on truly distributed data. Finally, the library provides a rich set of communication functions that support virtually all patterns of remote data demands in realistic structured grid scientific programs, including transposition, nearest-neighbor communication, pipelining

  16. Socioeconomic inequalities and vaccination coverage: results of an immunisation coverage survey in 27 Brazilian capitals, 2007-2008.

    Science.gov (United States)

    Barata, Rita Barradas; Ribeiro, Manoel Carlos Sampaio de Almeida; de Moraes, José Cássio; Flannery, Brendan

    2012-10-01

    Since 1988, Brazil's Unified Health System has sought to provide universal and equal access to immunisations. Inequalities in immunisation may be examined by contrasting vaccination coverage among children in the highest versus the lowest socioeconomic strata. The authors examined coverage with routine infant immunisations from a survey of Brazilian children according to the socioeconomic stratum of the residence census tract. The authors conducted a household cluster survey in census tracts systematically selected from five socioeconomic strata, according to average household income and head-of-household education, in 26 Brazilian capitals and the federal district. The authors calculated coverage with recommended vaccinations among children up to 18 months of age, according to the socioeconomic quintile of the residence census tract, and examined factors associated with incomplete vaccination. Among 17,295 children with immunisation cards, 14,538 (82.6%) had received all recommended vaccinations by 18 months of age. Among children residing in census tracts in the highest socioeconomic stratum, 77.2% were completely immunised by 18 months of age versus 81.2%-86.2% of children residing in the four census tract quintiles with lower socioeconomic indicators, indicating higher immunisation coverage among poorer children. Strategies are needed to reach children in wealthier areas.

  17. Stable Satellite Orbits for Global Coverage of the Moon

    Science.gov (United States)

    Ely, Todd; Lieb, Erica

    2006-01-01

    A document proposes a constellation of spacecraft to be placed in orbit around the Moon to provide navigation and communication services with global coverage required for exploration of the Moon. There would be six spacecraft in inclined elliptical orbits: three in each of two orthogonal orbital planes, suggestive of a linked-chain configuration. The orbits have been chosen to (1) provide 99.999-percent global coverage for ten years and (2) to be stable under perturbation by Earth gravitation and solar-radiation pressure, so that no deterministic firing of thrusters would be needed to maintain the orbits. However, a minor amount of orbit control might be needed to correct for such unmodeled effects as outgassing of the spacecraft.

  18. Universal Health Coverage - The Critical Importance of Global Solidarity and Good Governance Comment on "Ethical Perspective: Five Unacceptable Trade-offs on the Path to Universal Health Coverage".

    Science.gov (United States)

    Reis, Andreas A

    2016-06-07

    This article provides a commentary on Ole Norheim's editorial entitled "Ethical perspective: Five unacceptable trade-offs on the path to universal health coverage." It reinforces its message that an inclusive, participatory process is essential for ethical decision-making and underlines the crucial importance of good governance in setting fair priorities in healthcare. Solidarity on both national and international levels is needed to make progress towards the goal of universal health coverage (UHC). © 2016 by Kerman University of Medical Sciences.

  19. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  20. A Parallel Compact Hash Table

    NARCIS (Netherlands)

    van der Vegt, Steven; Laarman, Alfons; Vojnar, Tomas

    2011-01-01

    We present the first parallel compact hash table algorithm. It delivers high performance and scalability due to its dynamic region-based locking scheme with only a fraction of the memory requirements of a regular hash table.
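The region-based locking idea can be illustrated with a minimal striped-lock hash set: threads contend only when their keys fall in the same region of the table, not on a single global lock. This is an illustrative sketch only; the paper's compact table additionally stores key quotients (rather than full keys) to achieve its memory savings, which is omitted here.

```python
import threading

class StripedHashSet:
    """Hash set with per-region ("striped") locks: concurrent inserts
    contend only when they land in the same region. Illustrative only;
    the compact table of the paper also stores key quotients to use a
    fraction of the memory of a regular table."""

    def __init__(self, capacity=1024, regions=16):
        self.buckets = [[] for _ in range(capacity)]
        self.locks = [threading.Lock() for _ in range(regions)]
        self.capacity = capacity
        self.regions = regions

    def add(self, key):
        """Insert key; return True if newly added, False if present."""
        b = hash(key) % self.capacity
        region = b * self.regions // self.capacity  # map bucket -> region lock
        with self.locks[region]:
            if key in self.buckets[b]:
                return False
            self.buckets[b].append(key)
            return True
```

Because the membership check and the append happen under the same region lock, concurrent inserters of the same key cannot both succeed, while inserts into different regions proceed fully in parallel.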

  1. 32 CFR 199.8 - Double coverage.

    Science.gov (United States)

    2010-07-01

    ... from the operation of a motor vehicle. (4) Exceptions. Double coverage plans do not include: (i) Plans... (for example, the Indian Health Service); or (v) State Victims of Crime Compensation Programs. (c...

  2. Media Coverage of Nuclear Energy after Fukushima

    Energy Technology Data Exchange (ETDEWEB)

    Oltra, C.; Roman, P.; Prades, A.

    2013-07-01

    This report presents the main findings of a content analysis of printed media coverage of nuclear energy in Spain before and after the Fukushima accident. Our main objective is to understand the changes in the presentation of nuclear fission and nuclear fusion as a result of the accident in Japan. We specifically analyze the volume of coverage and thematic content in the media coverage for nuclear fusion from a sample of Spanish print articles in more than 20 newspapers from 2008 to 2012. We also analyze the media coverage of nuclear energy (fission) in three main Spanish newspapers one year before and one year after the accident. The results illustrate how the media contributed to the presentation of nuclear power in the months before and after the accident. This could have implications for the public understanding of nuclear power. (Author)

  3. Fuzzy Clustering in Parallel Universes

    OpenAIRE

    Wiswedel, Bernd; Berthold, Michael R.

    2005-01-01

    We propose a modified fuzzy c-means algorithm that operates on different feature spaces, so-called parallel universes, simultaneously. The method assigns membership values of patterns to different universes, which are then adopted throughout the training. This leads to better clustering results, since patterns not contributing to clustering in a universe are (completely or partially) ignored. The outcome of the algorithm is a set of clusters distributed over different parallel universes, each modeling...
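For context, the standard single-universe fuzzy c-means update that the proposed method extends can be sketched as follows; the parallel-universes variant additionally maintains membership values per universe, which is not shown. All parameter choices here are illustrative.

```python
import math
import random

def dist(p, q):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def fcm(points, c=2, m=2.0, iters=50):
    """Classic fuzzy c-means in a single feature space ("universe").
    Alternates membership and center updates for a fixed number of
    iterations; the paper's method runs such updates across several
    universes with cross-universe membership weights."""
    centers = random.sample(points, c)
    U = []
    for _ in range(iters):
        # Membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        U = []
        for p in points:
            d = [max(1e-12, dist(p, ctr)) for ctr in centers]
            U.append([1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1.0)) for k in range(c))
                      for j in range(c)])
        # Center update: membership-weighted mean of the points
        centers = []
        for j in range(c):
            w = [U[i][j] ** m for i in range(len(points))]
            centers.append(tuple(
                sum(wi * p[dim] for wi, p in zip(w, points)) / sum(w)
                for dim in range(len(points[0]))))
    return centers, U
```

Each row of the membership matrix U sums to one, so a pattern distributes its full weight across the clusters of that universe.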

  4. Coverage and Capacity Analysis of Sigfox, LoRa, GPRS, and NB-IoT

    DEFF Research Database (Denmark)

    Vejlgaard, Benny; Lauridsen, Mads; Nguyen, Huan Cong

    2017-01-01

    In this paper the coverage and capacity of SigFox, LoRa, GPRS, and NB-IoT is compared using a real site deployment covering 8000 km2 in Northern Denmark. Using the existing Telenor cellular site grid it is shown that the four technologies have more than 99 % outdoor coverage, while GPRS...... uplink and downlink connectivity with less than 5 % failure rate, while SigFox is able to provide an unacknowledged uplink data service with about 12 % failure rate. Both GPRS and LoRa struggle to provide sufficient indoor coverage and capacity....

  5. Length and coverage of inhibitory decision rules

    KAUST Repository

    Alsolami, Fawaz

    2012-01-01

    Authors present algorithms for optimization of inhibitory rules relative to the length and coverage. Inhibitory rules have a relation "attribute ≠ value" on the right-hand side. The considered algorithms are based on extensions of dynamic programming. Paper contains also comparison of length and coverage of inhibitory rules constructed by a greedy algorithm and by the dynamic programming algorithm. © 2012 Springer-Verlag.

  6. Limited Deposit Insurance Coverage and Bank Competition

    OpenAIRE

    SHY, Oz; Stenbacka, Rune; Yankov, Vladimir

    2014-01-01

    Deposit insurance schemes in many countries place a limit on the coverage of deposits in each bank. However, no limits are placed on the number of accounts held with different banks. Therefore, under limited deposit insurance, some consumers open accounts with different banks to achieve higher or full deposit insurance coverage. We compare three regimes of deposit insurance: No deposit insurance, unlimited deposit insurance, and limited deposit insurance. We show that limited deposit insuranc...

  7. St. Lukes' Survey on vaccination coverage

    African Journals Online (AJOL)

    To confirm this very low coverage, a survey was done in the 5 km catchment area around the hospital. ... immunised; 13 (2.1%) had lost their card; 3 (0.5%) had partial immunisation and 2 (0.3%) had not received any ... St. Lukes hospital it was found that the already low estimated vaccine coverage of 57% for 1989, had.

  8. Dermal Coverage of Traumatic War Wounds

    Science.gov (United States)

    2017-01-01

    Final report for award W81XWH-13-2-0004, "Dermal Coverage of Traumatic War Wounds," Principal Investigator Dr. Leon Nesti; report date January 2017; dates covered 31 Oct 2012 - 30 Oct 2016. Outcome measures include healing/non-healing of the wound and donor site, graft loss, heterotopic ossification, infection, scar contracture, and durability (i.e., abrasions/injuries).

  9. Mold insurance: crafting coverage for a spreading problem.

    OpenAIRE

    Barrett, Julia R.

    2003-01-01

    Mold contamination is a growing concern for homeowners in terms of both physical health and insurance. Health experts, although they concede that exposure to mold can cause respiratory illnesses, are calling for further research into other mold-related health effects and for development of standards for mold sampling and data analysis. The insurance industry is grappling with how---and whether---to provide coverage for household damage caused by mold, while some state and federal legislators ...

  10. Consumption, labor income uncertainty, and economic news coverage

    OpenAIRE

    Garz, Marcel

    2014-01-01

    In the past decade, weak household consumption was an important reason for low rates of overall economic growth in Germany. Many explanations for the weakness have been provided and investigated in previous studies, but the role of media-driven uncertainty has not been addressed. Therefore, this study examines the link between economic news coverage and aggregate consumption. Consumption, information-processing, and decision-making theory all serve to derive hypotheses, which are evaluated us...

  11. Staff acceptance of tele-ICU coverage: a systematic review.

    Science.gov (United States)

    Young, Lance Brendan; Chan, Paul S; Cram, Peter

    2011-02-01

    Remote coverage of ICUs is increasing, but staff acceptance of this new technology is incompletely characterized. We conducted a systematic review to summarize existing research on acceptance of tele-ICU coverage among ICU staff. We searched for published articles pertaining to critical care telemedicine systems (aka, tele-ICU) between January 1950 and March 2010 using PubMed, Cumulative Index to Nursing and Allied Health Literature, Global Health, Web of Science, and the Cochrane Library and abstracts and presentations delivered at national conferences. Studies were included if they provided original qualitative or quantitative data on staff perceptions of tele-ICU coverage. Studies were imported into content analysis software and coded by tele-ICU configuration, methodology, participants, and findings (eg, positive and negative staff evaluations). Review of 3,086 citations yielded 23 eligible studies. Findings were grouped into four categories of staff evaluation: overall acceptance level of tele-ICU coverage (measured in 70% of studies), impact on patient care (measured in 96%), impact on staff (measured in 100%), and organizational impact (measured in 48%). Overall acceptance was high, despite initial ambivalence. Favorable impact on patient care was perceived by > 82% of participants. Staff impact referenced enhanced collaboration, autonomy, and training, although scrutiny, malfunctions, and contradictory advice were cited as potential barriers. Staff perceived the organizational impact to vary. An important limitation of available studies was a lack of rigorous methodology and validated survey instruments in many studies. Initial reports suggest high levels of staff acceptance of tele-ICU coverage, but more rigorous methodologic study is required.

  12. ELISA reagent coverage evaluation by affinity purification tandem mass spectrometry.

    Science.gov (United States)

    Henry, Scott M; Sutlief, Elissa; Salas-Solano, Oscar; Valliere-Douglass, John

    2017-10-01

    Host cell proteins (HCPs) must be adequately removed from recombinant therapeutics by downstream processing to ensure patient safety, product quality, and regulatory compliance. HCP process clearance is typically monitored by enzyme-linked immunosorbent assay (ELISA) using a polyclonal reagent. Recently, mass spectrometry (MS) has been used to identify specific HCP process impurities and monitor their clearance. Despite this capability, ELISA remains the preferred analytical approach due to its simplicity and throughput. There are, however, inherent difficulties reconciling the protein-centric results of MS characterization with ELISA, or providing assurance that ELISA has acceptable coverage against all process-specific HCP impurities that could pose safety or efficacy risks. Here, we describe efficient determination of ELISA reagent coverage by proteomic analysis following affinity purification with a polyclonal anti-HCP reagent (AP-MS). The resulting HCP identifications can be compared with the actual downstream process impurities for a given process to enable a highly focused assessment of ELISA reagent suitability. We illustrate the utility of this approach by performing coverage evaluation of an anti-HCP polyclonal against both an HCP immunogen and the downstream HCP impurities identified in a therapeutic monoclonal antibody after Protein A purification. The overall goal is to strategically implement affinity-based mass spectrometry as part of a holistic framework for evaluating HCP process clearance, ELISA reagent coverage, and process clearance risks. We envision coverage analysis by AP-MS will further enable a framework for HCP impurity analysis driven by characterization of actual product-specific process impurities, complementing analytical methods centered on consideration of the total host cell proteome.

  13. Universal health coverage in Turkey: enhancement of equity.

    Science.gov (United States)

    Atun, Rifat; Aydın, Sabahattin; Chakraborty, Sarbani; Sümer, Safir; Aran, Meltem; Gürol, Ipek; Nazlıoğlu, Serpil; Ozgülcü, Senay; Aydoğan, Ulger; Ayar, Banu; Dilmen, Uğur; Akdağ, Recep

    2013-07-06

    Turkey has successfully introduced health system changes and provided its citizens with the right to health to achieve universal health coverage, which helped to address inequities in financing, health service access, and health outcomes. We trace the trajectory of health system reforms in Turkey, with a particular emphasis on 2003-13, which coincides with the Health Transformation Program (HTP). The HTP rapidly expanded health insurance coverage and access to health-care services for all citizens, especially the poorest population groups, to achieve universal health coverage. We analyse the contextual drivers that shaped the transformations in the health system, explore the design and implementation of the HTP, identify the factors that enabled its success, and investigate its effects. Our findings suggest that the HTP was instrumental in achieving universal health coverage to enhance equity substantially, and led to quantifiable and beneficial effects on all health system goals, with an improved level and distribution of health, greater fairness in financing with better financial protection, and notably increased user satisfaction. After the HTP, five health insurance schemes were consolidated to create a unified General Health Insurance scheme with harmonised and expanded benefits. Insurance coverage for the poorest population groups in Turkey increased from 2·4 million people in 2003, to 10·2 million in 2011. Health service access increased across the country-in particular, access and use of key maternal and child health services improved to help to greatly reduce the maternal mortality ratio, and under-5, infant, and neonatal mortality, especially in socioeconomically disadvantaged groups. Several factors helped to achieve universal health coverage and improve outcomes. These factors include economic growth, political stability, a comprehensive transformation strategy led by a transformation team, rapid policy translation, flexible implementation with

  14. Performance Evaluation of a Dual Coverage System for Internet of Things Environments

    Directory of Open Access Journals (Sweden)

    Omar Said

    2016-01-01

    Full Text Available A dual coverage system for Internet of Things (IoT) environments is introduced. This system is used to connect IoT nodes regardless of their locations. The proposed system has three different architectures, which are based on satellites and High Altitude Platforms (HAPs). In case of Internet coverage problems, the Internet coverage is replaced with Satellite/HAP network coverage under specific restrictions such as loss and delay. According to IoT requirements, the proposed architectures should include multiple levels of satellites or HAPs, or a combination of both, to provide global coverage of Internet things. It was shown that the Satellite/HAP/HAP/Things architecture provides the largest coverage area. A network simulation package, NS2, was used to test the performance of the proposed multilevel architectures. The results indicated that the HAP/HAP/Things architecture has the best end-to-end delay, packet loss, throughput, energy consumption, and handover.

  15. Dynamic balancing of mechanisms and synthesizing of parallel robots

    CERN Document Server

    Wei, Bin

    2016-01-01

    This book covers the state-of-the-art technologies in dynamic balancing of mechanisms with minimum increase of mass and inertia. The synthesis of parallel robots based on the Decomposition and Integration concept is also covered in detail. The latest advances are described, including different balancing principles, design of reactionless mechanisms with minimum increase of mass and inertia, and synthesizing parallel robots. This is an ideal book for mechanical engineering students and researchers who are interested in the dynamic balancing of mechanisms and the synthesis of parallel robots. This book also: broadens reader understanding of the synthesis of parallel robots based on the Decomposition and Integration concept; reinforces basic principles with detailed coverage of different balancing principles, including input torque balancing mechanisms; and reviews exhaustively the key recent research into the design of reactionless mechanisms with minimum increase of mass a...

  16. Searching the veterinary literature: a comparison of the coverage of veterinary journals by nine bibliographic databases.

    Science.gov (United States)

    Grindlay, Douglas J C; Brennan, Marnie L; Dean, Rachel S

    2012-01-01

    A thorough search of the literature to find the best evidence is central to the practice of evidence-based veterinary medicine. This requires knowing which databases to search to maximize journal coverage. The aim of the present study was to compare the coverage of active veterinary journals by nine bibliographic databases to inform future systematic reviews and other evidence-based searches. Coverage was assessed using lists of included journals produced by the database providers. For 121 active veterinary journals in the "Basic List of Veterinary Medical Serials, Third Edition," the percentage coverage was the highest for Scopus (98.3%) and CAB Abstracts (97.5%). For an extensive list of 1,139 journals with significant veterinary content compiled from a variety of sources, coverage was much greater in CAB Abstracts (90.2%) than in any other database, the next highest coverage being in Scopus (58.3%). The maximum coverage of the extensive journal list that could be obtained in a search without including CAB Abstracts was 69.8%. It was concluded that to maximize journal coverage and avoid missing potentially relevant evidence, CAB Abstracts should be included in any veterinary literature search.
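The coverage comparison above reduces to set intersection over journal lists. The sketch below uses synthetic journal identifiers sized to mirror the reported percentages; the real analysis used the providers' actual included-journal lists, and nine databases rather than the two modelled here.

```python
# Synthetic journal identifiers sized to mirror the reported figures;
# the real study used the database providers' actual journal lists.
journals = {f"J{i:04d}" for i in range(1139)}              # extensive list, n = 1139
databases = {
    "CAB Abstracts": {f"J{i:04d}" for i in range(1027)},   # -> ~90.2 % coverage
    "Scopus":        {f"J{i:04d}" for i in range(664)},    # -> ~58.3 % coverage
}

def pct_coverage(db_journals, target):
    """Percentage of the target journal list indexed by a database."""
    return 100.0 * len(db_journals & target) / len(target)

for name, covered in databases.items():
    print(f"{name}: {pct_coverage(covered, journals):.1f} %")

# The study found that the union of all databases excluding CAB Abstracts
# reaches only 69.8 % of the extensive list; with just two databases
# modelled here, that union is simply the Scopus set.
```

The same intersection/union logic extends directly to all nine databases once their journal lists are loaded.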

  17. The likely effects of employer-mandated complementary health insurance on health coverage in France.

    Science.gov (United States)

    Pierre, Aurélie; Jusot, Florence

    2017-03-01

    In France, access to health care greatly depends on having complementary health insurance coverage (CHI). Thus, the generalisation of CHI became a core factor in the national health strategy created by the government in 2013. The first measure has been to compulsorily extend employer-sponsored CHI to all private sector employees on January 1st, 2016, and to improve its portability coverage for unemployed former employees for up to 12 months. Based on data from the 2012 Health, Health Care and Insurance survey, this article provides a simulation of the likely effects of this mandate on CHI coverage and related inequalities in the general population by age, health status, socio-economic characteristics and time and risk preferences. We show that the non-coverage rate that was estimated to be 5% in 2012 will drop to 4% following the generalisation of employer-sponsored CHI and to 3.7% after accounting for portability coverage. The most vulnerable populations are expected to remain more often without CHI, whereas non-coverage will significantly decrease among the less risk-averse and the more present-oriented. With its focus on private sector employees, the policy is thus likely to do little for populations that would benefit most from additional insurance coverage while expanding coverage for other populations that appear to place little value on CHI. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  18. Incisal coverage or not in ceramic laminate veneers: A systematic review and meta-analysis.

    Science.gov (United States)

    Albanesi, Rafael Borges; Pigozzo, Mônica Nogueira; Sesma, Newton; Laganá, Dalva Cruz; Morimoto, Susana

    2016-09-01

    There is no consensus on whether incisal coverage is a risk or a protective factor in preparations for ceramic veneers. The aim of this systematic review and meta-analysis was to evaluate the survival rates of preparation designs for ceramic veneers with and without incisal coverage. Primary clinical studies with the following characteristics were included: 1) studies related to ceramic laminate veneers and 2) prospective or retrospective studies conducted in humans. From the selected studies, the survival and failure rates for ceramic veneers were extracted according to preparation design, with or without incisal coverage. The Cochran Q test and the I² statistic were used to evaluate heterogeneity. Meta-regression and meta-analysis were performed. Two reviewers searched the MEDLINE (PubMed) and Cochrane Central Register of Controlled Trials (Central) electronic databases, from 1977 to June 5, 2016, without language restrictions. Eight studies out of 1145 articles initially identified were included for risk of bias and systematic assessment. No study was identified for crystalline ceramic veneers. The estimated survival rate for laminate veneers with incisal coverage was 88% and 91% for those without incisal coverage. Incisal coverage presented an OR of 1.25. Irrespective of the preparation designs, with or without incisal coverage, ceramic veneers showed high survival rates. As regards implications for future clinical research studies, randomized clinical studies are necessary to compare preparation designs with and without incisal coverage, and to provide clear descriptions of these preparation designs. Copyright © 2016 Elsevier Ltd. All rights reserved.
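
    The review's summary statistics can be illustrated with small stand-alone computations: I² derived from Cochran's Q, and an odds ratio from failure counts. The numbers below are illustrative, not the paper's data:

```python
# I^2: percentage of total variability attributable to heterogeneity rather
# than chance, computed from Cochran's Q and its degrees of freedom.
def i_squared(q, df):
    return max(0.0, 100.0 * (q - df) / q) if q > 0 else 0.0

# Odds ratio from a 2x2 table: (failures/survivals) with incisal coverage
# versus without. Counts here are made up for the example.
def odds_ratio(events_a, nonevents_a, events_b, nonevents_b):
    return (events_a / nonevents_a) / (events_b / nonevents_b)

print(i_squared(10.0, 4))                       # 60.0
print(round(odds_ratio(5, 95, 4, 96), 2))
```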

  19. Microwave tomography global optimization, parallelization and performance evaluation

    CERN Document Server

    Noghanian, Sima; Desell, Travis; Ashtari, Ali

    2014-01-01

    This book provides a detailed overview on the use of global optimization and parallel computing in microwave tomography techniques. The book focuses on techniques that are based on global optimization and electromagnetic numerical methods. The authors provide parallelization techniques on homogeneous and heterogeneous computing architectures on high performance and general purpose futuristic computers. The book also discusses the multi-level optimization technique, hybrid genetic algorithm and its application in breast cancer imaging.

  20. Parallel carbon nanotube quantum dots and their interactions

    OpenAIRE

    Goss K.; Leijnse M.; Smerat S.; Wegewijs M.R.; Schneider C.M.; Meyer C

    2012-01-01

    We present quantum transport measurements of interacting parallel quantum dots formed in the strands of a carbon nanotube rope. In this molecular quantum dot system, transport is dominated by one quantum dot, while additional resonances from parallel side dots appear, which exhibit a weak gate coupling. This differential gating effect provides tunability of the quantum dot system with only one gate electrode and control over the carbon nanotube strand that carries the current. By t...

  1. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.
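
    The final step of the claim, dividing data communications operations among a task's several endpoints, can be sketched schematically. This is a conceptual Python illustration, not the actual PAMI API; the function name and chunking scheme are invented for the example:

```python
# Conceptual sketch (not PAMI): a task that owns several endpoints splits one
# collective reduction across them, then combines the partial results.

def split_collective(data, n_endpoints):
    """Divide a buffer into per-endpoint chunks, reduce each, combine."""
    chunks = [data[i::n_endpoints] for i in range(n_endpoints)]
    partials = [sum(chunk) for chunk in chunks]  # each endpoint reduces its chunk
    return sum(partials)                         # task-level combine

result = split_collective(list(range(100)), 4)
```

The point of the division is that each endpoint can progress its chunk on a separate thread of execution, which is what the geometry in the abstract makes possible.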

  2. Multilingual interfaces for parallel coupling in multiphysics and multiscale systems.

    Energy Technology Data Exchange (ETDEWEB)

    Ong, E. T.; Larson, J. W.; Norris, B.; Jacob, R. L.; Tobis, M.; Steder, M.; Mathematics and Computer Science; Univ. of Wisconsin; Australian National Univ.; Univ. of Chicago

    2007-01-01

    Multiphysics and multiscale simulation systems are emerging as a new grand challenge in computational science, largely because of increased computing power provided by the distributed-memory parallel programming model on commodity clusters. These systems often present a parallel coupling problem in their intercomponent data exchanges. Another potential problem in these coupled systems is language interoperability between their various constituent codes. In anticipation of combined parallel coupling/language interoperability challenges, we have created a set of interlanguage bindings for a successful parallel coupling library, the Model Coupling Toolkit. We describe the method used for automatically generating the bindings using the Babel language interoperability tool, and illustrate with short examples how MCT can be used from the C++ and Python languages. We report preliminary performance results for the MCT interpolation benchmark. We conclude with a discussion of the significance of this work to the rapid prototyping of large parallel coupled systems.

  3. Intrinsic parallel rotation drive by electromagnetic ion temperature gradient turbulence

    CERN Document Server

    Peng, Shuitao; Pan, Yuan

    2016-01-01

    The quasilinear intrinsic parallel flow drive including parallel residual stress, kinetic stress, cross Maxwell stress and parallel turbulent acceleration by electromagnetic ion temperature gradient (ITG) turbulence is calculated analytically using electromagnetic gyrokinetic theory. Both the kinetic stress and cross Maxwell stress also enter the mean parallel flow velocity equation via their divergence, as for the usual residual stress. The turbulent acceleration driven by ion pressure gradient along the total magnetic field (including equilibrium magnetic field and fluctuating radial magnetic field) cannot be written as a divergence of stress, and so should be treated as a local source/sink. All these terms can provide intrinsic parallel rotation drive. Electromagnetic effects reduce the non-resonant electrostatic stress force and even reverse it, but enhance the resonant stress force. Both the non-resonant and resonant turbulent acceleration terms are also enhanced by electromagnetic effects. The possible ...

  4. Distributed and parallel Ada and the Ada 9X recommendations

    Science.gov (United States)

    Volz, Richard A.; Goldsack, Stephen J.; Theriault, R.; Waldrop, Raymond S.; Holzbacher-Valero, A. A.

    1992-01-01

    Recently, the DoD has sponsored work towards a new version of Ada, intended to support the construction of distributed systems. The revised version, often called Ada 9X, will become the new standard sometime in the 1990s. It is intended that Ada 9X should provide language features giving limited support for distributed system construction. The requirements for such features are given. Many of the most advanced computer applications involve embedded systems that are comprised of parallel processors or networks of distributed computers. If Ada is to become the widely adopted language envisioned by many, it is essential that suitable compilers and tools be available to facilitate the creation of distributed and parallel Ada programs for these applications. The major language issues impacting distributed and parallel programming are reviewed, and some principles upon which distributed/parallel language systems should be built are suggested. Based upon these, alternative language concepts for distributed/parallel programming are analyzed.

  5. Error Modeling and Design Optimization of Parallel Manipulators

    DEFF Research Database (Denmark)

    Wu, Guanglei

    Parallel mechanism based robotic manipulators feature higher performance in terms of accuracy, rigidity, speed and payload over serial manipulators, and they have found industrial applications in many fields. Nevertheless, the design and application of parallel manipulators face many... ...technique in the design procedure is a suitable approach to handle these complex tasks. As there is no unified design guideline for parallel manipulators, the study described in this thesis aims to provide a systematic analysis for this type of mechanisms in the early design stage, focusing on accuracy... ...theory and virtual spring approach, a general kinetostatic model of the spherical parallel manipulators is developed and validated with a Finite Element approach. This model is applied to the stiffness analysis of a special spherical parallel manipulator with unlimited rolling motion, and the obtained stiffness...

  6. Design Patterns: establishing a discipline of parallel software engineering

    CERN Multimedia

    CERN. Geneva

    2010-01-01

    Many core processors present us with a software challenge. We must turn our serial code into parallel code. To accomplish this wholesale transformation of our software ecosystem, we must define what established practice is in parallel programming and then develop tools to support that practice. This leads to design patterns supported by frameworks optimized at runtime with advanced autotuning compilers. In this talk I provide an update on my ongoing research with the ParLab at UC Berkeley to realize this vision. In particular, I will describe our draft parallel pattern language, our early experiments with software frameworks, and the associated runtime optimization tools. About the speaker: Tim Mattson is a parallel programmer (Ph.D. Chemistry, UCSC, 1985). He does linear algebra, finds oil, shakes molecules, solves differential equations, and models electrons in simple atomic systems. He has spent his career working with computer scientists to make sure the needs of parallel applications programmers are met. Tim has ...

  7. Compilation Techniques for Embedded Data Parallel Languages

    OpenAIRE

    Catanzaro, Bryan Christopher

    2011-01-01

    Contemporary parallel microprocessors exploit Chip Multiprocessing along with Single Instruction, Multiple Data parallelism to deliver high performance on applications that expose substantial fine-grained data parallelism. Although data parallelism is widely available in many computations, implementing data parallel algorithms in low-level efficiency languages such as C++ is often a difficult task, since the programmer is burdened with mapping data parallelism from an application onto the ha...

  8. Parallelizing Gaussian Process Calculations in R

    Directory of Open Access Journals (Sweden)

    Christopher J. Paciorek

    2015-02-01

    Full Text Available We consider parallel computation for Gaussian process calculations to overcome computational and memory constraints on the size of datasets that can be analyzed. Using a hybrid parallelization approach that uses both threading (shared memory) and message-passing (distributed memory), we implement the core linear algebra operations used in spatial statistics and Gaussian process regression in an R package called bigGP that relies on C and MPI. The approach divides the covariance matrix into blocks such that the computational load is balanced across processes while communication between processes is limited. The package provides an API enabling R programmers to implement Gaussian process-based methods by using the distributed linear algebra operations without any C or MPI coding. We illustrate the approach and software by analyzing an astrophysics dataset with n = 67,275 observations.
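
    The block-partitioning idea, dividing the covariance matrix into blocks so the load balances across processes, can be sketched as a round-robin assignment of the lower-triangular blocks of a symmetric matrix. This is an illustration of the principle, not bigGP's actual distribution scheme:

```python
# Only the lower triangle of a symmetric covariance matrix needs to be stored;
# assigning its blocks round-robin keeps per-process counts within one block.

def assign_blocks(n_blocks, n_procs):
    """Map lower-triangular block (i, j), j <= i, to a process id."""
    owner = {}
    k = 0
    for i in range(n_blocks):
        for j in range(i + 1):
            owner[(i, j)] = k % n_procs
            k += 1
    return owner

owners = assign_blocks(4, 3)   # 10 lower-triangular blocks over 3 processes
loads = [list(owners.values()).count(p) for p in range(3)]
```

A real scheme must also account for communication: blocks sharing a row or column exchange data during factorization, which is why bigGP balances load while limiting inter-process communication.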

  9. A parallel robot to assist vitreoretinal surgery

    Energy Technology Data Exchange (ETDEWEB)

    Nakano, Taiga; Sugita, Naohiko; Mitsuishi, Mamoru [University of Tokyo, School of Engineering, Tokyo (Japan); Ueta, Takashi; Tamaki, Yasuhiro [University of Tokyo, Graduate School of Medicine, Tokyo (Japan)

    2009-11-15

    This paper describes the development and evaluation of a parallel prototype robot for vitreoretinal surgery, where physiological hand tremor limits performance. The manipulator was specifically designed to meet requirements such as size, precision, and sterilization; it has a six-degree-of-freedom parallel architecture and provides positioning accuracy with micrometer resolution within the eye. The manipulator is controlled by an operator with a "master manipulator" consisting of multiple joints. Results of the in vitro experiments revealed that when compared to the manual procedure, a higher stability and accuracy of tool positioning could be achieved using the prototype robot. This microsurgical system that we have developed has superior operability compared to the traditional manual procedure and has sufficient potential to be used clinically for vitreoretinal surgery. (orig.)

  10. PARALLEL ALGORITHM FOR BAYESIAN NETWORK STRUCTURE LEARNING

    Directory of Open Access Journals (Sweden)

    S. A. Arustamov

    2013-03-01

    Full Text Available The article deals with implementation of a scalable parallel algorithm for structure learning of Bayesian network. Comparative analysis of sequential and parallel algorithms is done.
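
    One reason structure learning parallelizes well is that candidate parent sets for a node can be scored independently. A minimal sketch with a stand-in scoring function (not the article's algorithm); threads stand in for the parallel workers:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations

def score(parents):
    # Stand-in for a decomposable network score (e.g. BIC); peaks at {A, B}.
    return 1.0 if parents == frozenset("AB") else -float(len(parents))

def best_parent_set(candidates, n_workers=4):
    # Each candidate is scored independently, so the scoring step scales.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        scored = list(pool.map(lambda ps: (score(ps), ps), candidates))
    return max(scored)[1]

candidates = [frozenset(c) for r in range(3) for c in combinations("ABCD", r)]
best = best_parent_set(candidates)
```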

  11. Pattern-Driven Automatic Parallelization

    Directory of Open Access Journals (Sweden)

    Christoph W. Kessler

    1996-01-01

    Full Text Available This article describes a knowledge-based system for automatic parallelization of a wide class of sequential numerical codes operating on vectors and dense matrices, and for execution on distributed memory message-passing multiprocessors. Its main feature is a fast and powerful pattern recognition tool that locally identifies frequently occurring computations and programming concepts in the source code. This tool also works for dusty deck codes that have been "encrypted" by former machine-specific code transformations. Successful pattern recognition guides sophisticated code transformations including local algorithm replacement such that the parallelized code need not emerge from the sequential program structure by just parallelizing the loops. It allows access to an expert's knowledge on useful parallel algorithms, available machine-specific library routines, and powerful program transformations. The partially restored program semantics also supports local array alignment, distribution, and redistribution, and allows for faster and more exact prediction of the performance of the parallelized target code than is usually possible.
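
    The pattern-recognition idea, locally identifying frequently occurring computations such as reductions, can be shown in miniature. The toy detector below flags `acc += x` accumulations inside for-loops, the kind of pattern a parallelizer could replace with a parallel reduction; it is far simpler than the tool the article describes:

```python
import ast

def find_sum_reductions(source):
    """Return names of accumulators updated as `acc += ...` inside for-loops."""
    found = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.For):
            for stmt in node.body:
                if (isinstance(stmt, ast.AugAssign)
                        and isinstance(stmt.op, ast.Add)
                        and isinstance(stmt.target, ast.Name)):
                    found.add(stmt.target.id)
    return found

code = "total = 0\nfor v in data:\n    total += v\n"
print(find_sum_reductions(code))   # {'total'}
```

A recognized reduction can then be replaced by a library call or parallel loop, which is the "local algorithm replacement" the abstract refers to.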

  12. Parallel plasma fluid turbulence calculations

    Science.gov (United States)

    Leboeuf, J. N.; Carreras, B. A.; Charlton, L. A.; Drake, J. B.; Lynch, V. E.; Newman, D. E.; Sidikman, K. L.; Spong, D. A.

    The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated.

  13. Evaluating parallel optimization on transputers

    Directory of Open Access Journals (Sweden)

    A.G. Chalmers

    2003-12-01

    Full Text Available The faster processing power of modern computers and the development of efficient algorithms have made it possible for operations researchers to tackle a much wider range of problems than ever before. Further improvements in processing speed can be achieved utilising relatively inexpensive transputers to process components of an algorithm in parallel. The Davidon-Fletcher-Powell method is one of the most successful and widely used optimisation algorithms for unconstrained problems. This paper examines the algorithm and identifies the components that can be processed in parallel. The results of some experiments with these components are presented, which indicate under what conditions parallel processing with an inexpensive configuration is likely to be faster than the traditional sequential implementations. The performance of the whole algorithm with its parallel components is then compared with the original sequential algorithm. The implementation serves to illustrate the practicalities of speeding up typical OR algorithms in terms of difficulty, effort and cost. The results give an indication of the savings in time a given parallel implementation can be expected to yield.
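
    A typical component of quasi-Newton methods like Davidon-Fletcher-Powell that parallelizes naturally is the finite-difference gradient: each partial derivative needs independent function evaluations. A sketch, with threads standing in for transputers (this is not the paper's implementation):

```python
from concurrent.futures import ThreadPoolExecutor

def fd_gradient(f, x, h=1e-6):
    """Central-difference gradient; each component is evaluated independently."""
    def partial(i):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        return (f(xp) - f(xm)) / (2 * h)
    with ThreadPoolExecutor() as pool:
        return list(pool.map(partial, range(len(x))))

quad = lambda x: x[0] ** 2 + 3 * x[1] ** 2
grad = fd_gradient(quad, [1.0, 2.0])   # close to [2.0, 12.0]
```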

  14. Including "evidentiary balance" in news media coverage of vaccine risk.

    Science.gov (United States)

    Clarke, Christopher E; Dixon, Graham N; Holton, Avery; McKeever, Brooke Weberling

    2015-01-01

    Journalists communicating risk-related uncertainty must accurately convey scientific evidence supporting particular conclusions. Scholars have explored how "balanced" coverage of opposing risk claims shapes uncertainty judgments. In situations where a preponderance of evidence points to a particular conclusion, balanced coverage reduces confidence in such a consensus and heightens uncertainty about whether a risk exists. Using the autism-vaccine controversy as a case study, we describe how journalists can cover multiple sides of an issue and provide insight into where the strength of evidence lies by focusing on "evidentiary balance." Our results suggest that evidentiary balance shapes perceived certainty that vaccines are safe, effective, and not linked to autism through the mediating role of a perception that scientists are divided about whether a link exists. Deference toward science, moreover, moderates these relationships under certain conditions. We discuss implications for journalism practice and risk communication.

  15. Amnion membrane for coverage of gingival recession: A novel application

    Directory of Open Access Journals (Sweden)

    Rucha Shah

    2014-01-01

    Full Text Available Introduction: Amnion allograft has been used in the field of medicine for its exceptional wound-modulating properties. However, in the field of dentistry, only a limited number of reports have explored its potential in the healing of oral wounds. Materials and Methods: Amnion allograft in conjunction with a coronally advanced flap was used in the management of gingival recession. Results: Complete coverage along with excellent esthetics and an improvement in gingival biotype was observed at 6 months postoperatively. Discussion: Because of its inherent wound-modulating properties, amnion allograft may be used to enhance periodontal wound healing and enable tissue regeneration such as that in the coverage of gingival recession. Conclusion: Amnion allograft may provide an alternative to other conventional methods of treating gingival recession.

  16. Cholera in Haiti: Reproductive numbers and vaccination coverage estimates

    Science.gov (United States)

    Mukandavire, Zindoga; Smith, David L.; Morris, J. Glenn, Jr.

    2013-01-01

    Cholera reappeared in Haiti in October 2010 after decades of absence. Cases were first detected in the Artibonite region and in the ensuing months the disease spread to every department in the country. The rate of increase in the number of cases at the start of epidemics provides valuable information about the basic reproductive number (R0). Quantitative analysis of such data gives useful information for planning and evaluating disease control interventions, including vaccination. Using a mathematical model, we fitted data on the cumulative number of reported hospitalized cholera cases in Haiti. R0 varied by department, ranging from 1.06 to 2.63. At a national level, 46% vaccination coverage would result in an () cholera vaccines in endemic and non-endemic regions, our results suggest that moderate cholera vaccine coverage would be an important element of disease control in Haiti.
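
    The connection between R0 and the vaccination coverage needed for control can be sketched with the standard critical-coverage formula p_c = (1 - 1/R0)/efficacy. This textbook relation is illustrative and not necessarily the model the paper fitted; assuming a perfectly effective vaccine, a mid-range R0 of about 1.85 gives a threshold near the 46% figure quoted above:

```python
# Critical vaccination coverage from the basic reproductive number.
# efficacy=1.0 assumes a perfectly effective vaccine (a simplification).

def critical_coverage(r0, efficacy=1.0):
    if r0 <= 1:
        return 0.0            # an epidemic with R0 <= 1 cannot sustain itself
    return (1 - 1 / r0) / efficacy

for r0 in (1.06, 1.85, 2.63):
    print(f"R0={r0}: coverage >= {critical_coverage(r0):.0%}")
```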

  17. Brain maps and parallel computers.

    Science.gov (United States)

    Nelson, M E; Bower, J M

    1990-10-01

    It is well known that neural responses in many brain regions are organized in characteristic spatial patterns referred to as brain maps. It is likely that these patterns in some way reflect aspects of the neural computations being performed, but to date there are no general guiding principles for relating the structure of a brain map to the properties of the associated computation. In the field of parallel computing, maps similar to brain maps arise when computations are distributed across the multiple processors of a parallel computer. In this case, the relationship between maps and computations is well understood and general principles for optimally mapping computations onto parallel computers have been developed. In this paper we discuss how these principles may help illuminate the relationship between maps and computations in the nervous system.
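
    The well-understood principle referred to above, mapping a computation onto processors so that communicating units stay close together, can be shown with a toy example: a block mapping of a 1-D chain of communicating units cuts far fewer links than a cyclic mapping. The sizes below are illustrative:

```python
# Count chain links (i, i+1) whose endpoints land on different processors;
# each cut link represents inter-processor communication.

def cut_links(mapping):
    return sum(1 for i in range(len(mapping) - 1) if mapping[i] != mapping[i + 1])

n, p = 12, 4
block = [i // (n // p) for i in range(n)]   # contiguous blocks of neighbours
cyclic = [i % p for i in range(n)]          # round-robin assignment

print(cut_links(block), cut_links(cyclic))  # 3 11
```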

  18. Fast data parallel polygon rendering

    Energy Technology Data Exchange (ETDEWEB)

    Ortega, F.A.; Hansen, C.D.

    1993-09-01

    This paper describes a parallel method for polygonal rendering on a massively parallel SIMD machine. This method, based on a simple shading model, is targeted for applications which require very fast polygon rendering for extremely large sets of polygons such as is found in many scientific visualization applications. The algorithms described in this paper are incorporated into a library of 3D graphics routines written for the Connection Machine. The routines are implemented on both the CM-200 and the CM-5. This library enables scientists to display 3D shaded polygons directly from a parallel machine without the need to transmit huge amounts of data to a post-processing rendering system.

  19. Parallel artificial liquid membrane extraction

    DEFF Research Database (Denmark)

    Gjelstad, Astrid; Rasmussen, Knut Einar; Parmer, Marthe Petrine

    2013-01-01

    This paper reports development of a new approach towards analytical liquid-liquid-liquid membrane extraction termed parallel artificial liquid membrane extraction. A donor plate and acceptor plate create a sandwich, in which each sample (human plasma) and acceptor solution is separated by an artificial liquid membrane. Parallel artificial liquid membrane extraction is a modification of hollow-fiber liquid-phase microextraction, where the hollow fibers are replaced by flat membranes in a 96-well plate format.

  20. Innovative Uses of Parallel Computers

    Science.gov (United States)

    1990-05-01

    by the title of our Proposal to AFOSR: "Innovative Uses of Parallel Computers." It aims to use advanced computers in innovative ways that bypass both... Final Report, Nov 88 to 31 Oct 89, grant AFOSR-89-0119; Gerard Vichniac, Plasma Fusion Center.

  1. Cellular automata a parallel model

    CERN Document Server

    Mazoyer, J

    1999-01-01

    Cellular automata can be viewed both as computational models and as modelling systems of real processes. This volume emphasises the first aspect. In articles written by leading researchers, sophisticated massively parallel algorithms (firing squad, Life, Fischer's primes recognition) are treated. Their computational power and the specific complexity classes they determine are surveyed, while some recent results in relation to chaos from a new dynamic systems point of view are also presented. Audience: This book will be of interest to specialists in theoretical computer science and the parallelism challenge.
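
    The parallel character of cellular automata, every cell updating simultaneously from its neighbourhood, is easy to show with an elementary rule. Rule 90 (each cell becomes the XOR of its two neighbours, on a circular row) is used here as a generic illustration, not one of the volume's constructions:

```python
# One synchronous update step of elementary rule 90 on a circular row:
# every cell is recomputed at once from the *previous* configuration.

def step_rule90(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

row = [0, 0, 0, 1, 0, 0, 0]
row = step_rule90(row)   # the single 1 spawns 1s at both neighbours
```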

  2. Energy-efficient area coverage for intruder detection in sensor networks

    CERN Document Server

    He, Shibo; Li, Junkun

    2014-01-01

    This Springer Brief presents recent research results on area coverage for intruder detection from an energy-efficient perspective. These results cover a variety of topics, including environmental surveillance and security monitoring. The authors also provide the background and range of applications for area coverage and elaborate on system models such as the formal definition of area coverage and sensing models. Several chapters focus on energy-efficient intruder detection and intruder trapping under the well-known binary sensing model, along with intruder trapping under the probabilistic sens
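
    Under the binary sensing model mentioned above, a point is covered exactly when some sensor lies within its sensing radius. A sketch computing the covered fraction of a grid, with invented sensor positions and radius:

```python
from math import hypot

def covered_fraction(sensors, r, width, height):
    """Fraction of grid-cell centres within distance r of at least one sensor."""
    points = [(x + 0.5, y + 0.5) for x in range(width) for y in range(height)]
    covered = sum(
        1 for px, py in points
        if any(hypot(px - sx, py - sy) <= r for sx, sy in sensors)
    )
    return covered / len(points)

frac = covered_fraction([(2.5, 2.5)], r=10.0, width=5, height=5)  # fully covered
```

Energy-efficient schemes then ask which subset of sensors can sleep while the covered fraction (or a barrier around the region) is preserved.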

  3. An integrated approach to improving the parallel applications development process

    Energy Technology Data Exchange (ETDEWEB)

    Rasmussen, Craig E [Los Alamos National Laboratory; Watson, Gregory R [IBM; Tibbitts, Beth R [IBM

    2009-01-01

    The development of parallel applications is becoming increasingly important to a broad range of industries. Traditionally, parallel programming was a niche area that was primarily exploited by scientists trying to model extremely complicated physical phenomena. It is becoming increasingly clear, however, that continued hardware performance improvements through clock scaling and feature-size reduction are simply not going to be achievable for much longer. The hardware vendors' approach to addressing this issue is to employ parallelism through multi-processor and multi-core technologies. While there is little doubt that this approach produces scaling improvements, there are still many significant hurdles to be overcome before parallelism can be employed as a general replacement for more traditional programming techniques. The Parallel Tools Platform (PTP) Project was created in 2005 in an attempt to provide developers with new tools aimed at addressing some of the parallel development issues. Since then, the introduction of a new generation of peta-scale and multi-core systems has highlighted the need for such a platform. In this paper, we describe some of the challenges facing parallel application developers, present the current state of PTP, and provide a simple case study that demonstrates how PTP can be used to locate a potential deadlock situation in an MPI code.
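
    The kind of deadlock PTP helps locate can be modelled abstractly as a cycle in a wait-for graph: if rank 0 blocks waiting on rank 1 while rank 1 blocks waiting on rank 0 (e.g. two unmatched blocking receives), no progress is possible. The toy detector below illustrates the idea only; it is not how PTP works internally:

```python
# Detect a cycle in a wait-for graph given as {rank: rank_waited_on}.
# A cycle means the blocked ranks can never be released: deadlock.

def has_deadlock(waits_for):
    for start in waits_for:
        seen, node = set(), start
        while node in waits_for:
            if node in seen:
                return True
            seen.add(node)
            node = waits_for[node]
    return False

print(has_deadlock({0: 1, 1: 0}))   # True: mutual blocking receives
print(has_deadlock({0: 1}))         # False: rank 1 is free to proceed
```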

  4. Conceptualising the lack of health insurance coverage.

    Science.gov (United States)

    Davis, J B

    2000-01-01

    This paper examines the lack of health insurance coverage in the US as a public policy issue. It first compares the problem of health insurance coverage to the problem of unemployment to show that in terms of the numbers of individuals affected, lack of health insurance is a problem comparable in importance to the problem of unemployment. Second, the paper discusses the methodology involved in measuring health insurance coverage, and argues that the current method of estimation of the uninsured underestimates the extent to which individuals go without health insurance. Third, the paper briefly introduces Amartya Sen's functioning and capabilities framework to suggest a way of representing the extent to which individuals are uninsured. Fourth, the paper sketches a means of operationalizing the Sen representation of the uninsured in terms of the disability-adjusted life year (DALY) measure.
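
    The DALY measure mentioned above combines years of life lost to premature mortality (YLL) and years lived with disability (YLD). A simplest-form computation, without the discounting or age-weighting of the full measure, with made-up numbers:

```python
# DALY = YLL + YLD in its simplest form:
#   YLL = deaths x standard remaining life expectancy at age of death
#   YLD = cases x disability weight x average duration

def daly(deaths, life_expectancy_remaining, cases, disability_weight, duration):
    yll = deaths * life_expectancy_remaining
    yld = cases * disability_weight * duration
    return yll + yld

burden = daly(deaths=10, life_expectancy_remaining=30,
              cases=200, disability_weight=0.2, duration=5)   # 300 + 200 = 500
```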

  5. Resolution, coverage, and geometry beyond traditional limits

    Energy Technology Data Exchange (ETDEWEB)

    Ronen, Shuki; Ferber, Ralf

    1998-12-31

    The presentation relates to the optimization of the image of seismic data and improved resolution and coverage of acquired data. Non-traditional processing methods such as inversion to zero offset (IZO) are used. To realize the potential of saving acquisition cost by reducing in-fill, and to plan resolution improvement by processing, geometry QC methods such as the DMO Dip Coverage Spectrum (DDCS) and Bull's Eyes Analysis are used. The DDCS is a 2-D spectrum whose entries consist of the DMO (Dip Move Out) coverage for a particular reflector specified by its true time dip and reflector normal strike. The Bull's Eyes Analysis relies on real-time processing of synthetic data generated with the real geometry. 4 refs., 6 figs.

  6. Exploiting multi-level parallelism in streaming applications for heterogeneous platforms with GPUs

    NARCIS (Netherlands)

    Balevic, Ana

    2013-01-01

    Heterogeneous computing platforms support the traditional types of parallelism, such as e.g., instruction-level, data, task, and pipeline parallelism, and provide the opportunity to exploit a combination of different types of parallelism at different platform levels. The architectural diversity of

  7. Lemon : An MPI parallel I/O library for data encapsulation using LIME

    NARCIS (Netherlands)

    Deuzeman, Albert; Reker, Siebren; Urbach, Carsten

    We introduce Lemon, an MPI parallel I/O library that provides efficient parallel I/O of both binary and metadata on massively parallel architectures. Motivated by the demands of the lattice Quantum Chromodynamics community, the data is stored in the SciDAC Lattice QCD Interchange Message

  8. The nature of newspaper coverage of homicide.

    Science.gov (United States)

    Taylor, C A; Sorenson, S B

    2002-06-01

    Previous research has shown that some homicides are more likely than others to receive newspaper coverage (for example, homicides by strangers). The present investigation examined whether, once the decision has been made to report on a homicide, the nature of the coverage (that is, how much visibility is given to a story, what information is included, and how a story is written) differs according to two key variables: victim ethnicity and victim-suspect relationship. Setting: Los Angeles, California (USA). Homicide articles from the 1990-94 issues of the Los Angeles Times were stratified according to the predictors of interest (victim ethnicity and victim-suspect relationship) and a sample was drawn. Data characterizing two primary aspects of newspaper coverage, prominence and story framing (including background information, story focus, use of opinions, story tone, and "hook" or leading introductory lines), were abstracted from the articles. Descriptive statistics and cross-tabulations were generated. Multivariate analyses were conducted to examine the predictive value of victim ethnicity and victim-suspect relationship on the nature of the newspaper coverage. Newspaper coverage of homicide was generally factual, episodic, and unemotional in tone. Victim-suspect relationship, but not victim ethnicity, was related to how a story was covered, particularly the story frame. Homicides by intimates were consistently covered differently from other types of homicides; these stories were less likely to be opinion-dominated, to be emotional, and to begin with a "hook". Victim-suspect relationship was related to the nature of coverage of homicides in a large, metropolitan newspaper. Given the agenda-setting and issue-framing functions of the news media, these findings have implications for the manner in which the public and policy makers perceive homicides and, consequently, for the support afforded to various types of solutions for addressing and preventing violence.

  9. RCT: Module 2.11, Radiological Work Coverage, Course 8777

    Energy Technology Data Exchange (ETDEWEB)

    Hillmer, Kurt T. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-07-20

    Radiological work is usually approved and controlled by radiation protection personnel by using administrative and procedural controls, such as radiological work permits (RWPs). In addition, some jobs will require working in, or will have the potential for creating, very high radiation, contamination, or airborne radioactivity areas. Radiological control technicians (RCTs) providing job coverage have an integral role in controlling radiological hazards. This course will prepare the student with the skills necessary for RCT qualification by passing quizzes, tests, and the RCT Comprehensive Phase 1, Unit 2 Examination (TEST 27566) and will provide in-the-field skills.

  10. Handbook of infrared standards II with spectral coverage between

    CERN Document Server

    Meurant, Gerard

    1993-01-01

    This timely compilation of infrared standards has been developed for use by infrared researchers in chemistry, physics, engineering, astrophysics, and laser and atmospheric sciences. Providing maps of closely spaced molecular spectra along with their measured wavenumbers between 1.4 μm and 4 μm, this handbook will complement the 1986 Handbook of Infrared Standards that included spectral coverage between 3 and 2600 μm. It will serve as a necessary reference for all researchers conducting spectroscopic investigations in the near-infrared region. Key Features: - Provides all new spec

  11. CALTRANS: A parallel, deterministic, 3D neutronics code

    Energy Technology Data Exchange (ETDEWEB)

    Carson, L.; Ferguson, J.; Rogers, J.

    1994-04-01

    Our efforts to parallelize the deterministic solution of the neutron transport equation have culminated in a new neutronics code, CALTRANS, which has full 3D capability. In this article, we describe the layout and algorithms of CALTRANS and present performance measurements of the code on a variety of platforms. Explicit implementations of the parallel algorithms of CALTRANS using both the function calls of the Parallel Virtual Machine software package (PVM 3.2) and the Meiko CS-2 tagged message passing library (based on the Intel NX/2 interface) are provided in appendices.

  12. Prosodic structure as a parallel to musical structure

    Directory of Open Access Journals (Sweden)

    Christopher Cullen Heffner

    2015-12-01

    What structural properties do language and music share? Although early speculation identified a wide variety of possibilities, the literature has largely focused on the parallels between musical structure and syntactic structure. Here, we argue that parallels between musical structure and prosodic structure deserve more attention. We review the evidence for a link between musical and prosodic structure and find it to be strong. In fact, certain elements of prosodic structure may provide a parsimonious comparison with musical structure without sacrificing empirical findings related to the parallels between language and music. We then develop several predictions related to such a hypothesis.

  13. Tile-based Level of Detail for the Parallel Age

    Energy Technology Data Exchange (ETDEWEB)

    Niski, K; Cohen, J D

    2007-08-15

    Today's PCs incorporate multiple CPUs and GPUs and are easily arranged in clusters for high-performance, interactive graphics. We present an approach based on hierarchical, screen-space tiles to parallelizing rendering with level of detail. Adapt tiles, render tiles, and machine tiles are associated with CPUs, GPUs, and PCs, respectively, to efficiently parallelize the workload with good resource utilization. Adaptive tile sizes provide load balancing while our level of detail system allows total and independent management of the load on CPUs and GPUs. We demonstrate our approach on parallel configurations consisting of both single PCs and a cluster of PCs.

  14. 5 CFR 875.412 - When will my coverage terminate?

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false When will my coverage terminate? 875.412... REGULATIONS (CONTINUED) FEDERAL LONG TERM CARE INSURANCE PROGRAM Coverage § 875.412 When will my coverage terminate? Your coverage will terminate on the earliest of the following dates: (a) The date you specify to...

  15. 7 CFR 1737.31 - Area Coverage Survey (ACS).

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 11 2010-01-01 2010-01-01 false Area Coverage Survey (ACS). 1737.31 Section 1737.31... Studies-Area Coverage Survey and Loan Design § 1737.31 Area Coverage Survey (ACS). (a) The Area Coverage Survey (ACS) is a market forecast of service requirements of subscribers in a proposed service area. (b...

  16. 42 CFR 440.330 - Benchmark health benefits coverage.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Benchmark health benefits coverage. 440.330 Section... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS SERVICES: GENERAL PROVISIONS Benchmark Benefit and Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...

  17. Parallel Quantum Circuit in a Tunnel Junction

    Science.gov (United States)

    Faizy Namarvar, Omid; Dridi, Ghassen; Joachim, Christian; GNS theory Group Team

    Between 2 metallic nanopads, adding identical and independent electron transfer paths in parallel increases the effective electronic coupling between the 2 nanopads through the quantum circuit defined by those paths. Measuring this increase of effective coupling using the tunnelling current intensity leads, for example for 2 paths in parallel, to the now standard conductance superposition law G = G1 + G2 + 2√(G1·G2) (1). This is only valid in the tunnelling regime (2). For large electronic coupling to the nanopads (or at resonance), G can saturate and even decay as a function of the number of parallel paths added to the quantum circuit (3). We provide here the explanation of this phenomenon: the measurement of the effective Rabi oscillation frequency using the current intensity is constrained by the normalization principle of quantum mechanics. This limits the quantum conductance G, for example to the conductance quantum G0 when there is only one channel per metallic nanopad. This effect has important consequences for the design of Boolean logic gates at the atomic scale using atomic-scale or intramolecular circuits. This work was financially supported by the European PAMS project.
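    As a quick numerical illustration of the superposition law quoted above (an illustrative sketch, not code from the paper), two identical paths quadruple the single-path conductance rather than doubling it as a classical Kirchhoff sum would:

```python
import math

def parallel_conductance(g1: float, g2: float) -> float:
    """Effective conductance of two independent tunnelling paths in parallel,
    per the superposition law G = G1 + G2 + 2*sqrt(G1*G2) quoted in the abstract."""
    return g1 + g2 + 2.0 * math.sqrt(g1 * g2)

# Constructive interference: two identical paths give 4x the single-path value.
g_single = 0.5
g_two_paths = parallel_conductance(g_single, g_single)  # 4 * g_single = 2.0
```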

  18. Kalman Filter Tracking on Parallel Architectures

    Science.gov (United States)

    Cerati, Giuseppe; Elmer, Peter; Lantz, Steven; McDermott, Kevin; Riley, Dan; Tadel, Matevž; Wittich, Peter; Würthwein, Frank; Yagil, Avi

    2015-12-01

    Power density constraints are limiting the performance improvements of modern CPUs. To address this we have seen the introduction of lower-power, multi-core processors, but the future will be even more exciting. In order to stay within the power density limits but still obtain Moore's Law performance/price gains, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Example technologies today include Intel's Xeon Phi and GPGPUs. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High Luminosity LHC, for example, this will be by far the dominant problem. The need for greater parallelism has driven investigations of very different track finding techniques, including Cellular Automata or a return to the Hough Transform. The most common track finding techniques in use today are, however, those based on the Kalman Filter [2]. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. They are known to provide high physics performance, are robust, and are exactly those being used today for the design of the tracking system for the HL-LHC. Our previous investigations showed that, using optimized data structures, track fitting with the Kalman Filter can achieve large speedups on both Intel Xeon and Xeon Phi. We report here our further progress towards an end-to-end track reconstruction algorithm fully exploiting vectorization and parallelization techniques in a realistic simulation setup.
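    The predict/update cycle at the heart of Kalman-filter fitting can be sketched in a few lines. The scalar random-walk filter below is a toy illustration of that structure only, not the experiments' multi-dimensional, vectorized track fit:

```python
def kalman_1d(measurements, q=1e-4, r=0.1, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter with a random-walk state model.
    q: process noise variance, r: measurement noise variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: under a random walk the state mean is unchanged,
        # but the uncertainty grows by the process noise.
        p = p + q
        # Update: blend prediction and measurement via the Kalman gain.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates
```

Fed a stream of noisy measurements of a constant, the estimate converges toward the true value while the gain settles to a small steady-state value.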

  19. Parallelizing Particle Swarm Optimization in a Functional Programming Environment

    Directory of Open Access Journals (Sweden)

    Pablo Rabanal

    2014-10-01

    Many bioinspired methods are based on using several simple entities which search for a reasonable solution (somewhat) independently. This is the case of Particle Swarm Optimization (PSO), where many simple particles search for the optimum solution by using both their local information and the information of the best solution found so far by any of the other particles. Particles are partially independent, and we can take advantage of this fact to parallelize PSO programs. Unfortunately, providing good parallel implementations of each specific PSO program can be tricky and time-consuming for the programmer. In this paper we introduce several parallel functional skeletons which, given a sequential PSO implementation, automatically provide the corresponding parallel implementations of it. We use these skeletons and report some experimental results. We observe that, despite the low effort required of programmers to use these skeletons, empirical results show that the skeletons reach reasonable speedups.
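    The "particles are partially independent" observation can be made concrete with a toy sketch: below, the per-particle fitness evaluations are farmed out to a thread pool. This is an illustrative Python analogue of the idea, not the paper's functional skeletons; all parameter values are conventional PSO defaults chosen for the example.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def pso_minimize(f, dim, n_particles=20, iters=80, seed=0, workers=4):
    """Toy PSO minimizer whose independent fitness evaluations run in parallel."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        pbest_val = list(pool.map(f, pbest))
        g = min(range(n_particles), key=lambda i: pbest_val[i])
        gpos, gval = pbest[g][:], pbest_val[g]
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    # Standard velocity update: inertia + cognitive + social terms.
                    vel[i][d] = (0.7 * vel[i][d]
                                 + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                                 + 1.5 * rng.random() * (gpos[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
            # The evaluations are independent, so they can run concurrently.
            vals = list(pool.map(f, pos))
            for i, v in enumerate(vals):
                if v < pbest_val[i]:
                    pbest_val[i], pbest[i] = v, pos[i][:]
                    if v < gval:
                        gval, gpos = v, pos[i][:]
    return gpos, gval
```

On a smooth test function such as the sphere function, this converges to a value near the optimum within a few dozen iterations.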

  20. Parallel computing: numerics, applications, and trends

    National Research Council Canada - National Science Library

    Trobec, Roman; Vajteršic, Marián; Zinterhof, Peter

    2009-01-01

    ... and/or distributed systems. The contributions to this book are focused on topics most concerned in the trends of today's parallel computing. These range from parallel algorithmics, programming, tools, network computing to future parallel computing. Particular attention is paid to parallel numerics: linear algebra, differential equations, numerica...

  1. Experiments with parallel algorithms for combinatorial problems

    NARCIS (Netherlands)

    G.A.P. Kindervater (Gerard); H.W.J.M. Trienekens

    1985-01-01

    In the last decade many models for parallel computation have been proposed and many parallel algorithms have been developed. However, few of these models have been realized and most of these algorithms are supposed to run on idealized, unrealistic parallel machines. The parallel machines

  2. Reflections on parallel functional languages

    NARCIS (Netherlands)

    Vrancken, J.L.M.

    Are parallel functional languages feasible? The large majority of the current projects investigating this question are based on MIMD machines and the current set of implementation methods for functional languages which is graph rewriting and combinators. We regret that we have to come to a

  3. Parallel Sparse Matrix - Vector Product

    DEFF Research Database (Denmark)

    Alexandersen, Joe; Lazarov, Boyan Stefanov; Dammann, Bernd

    This technical report contains a case study of a sparse matrix-vector product routine, implemented for parallel execution on a compute cluster with both pure MPI and hybrid MPI-OpenMP solutions. C++ classes for sparse data types were developed and the report shows how these classes can be used...
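    The serial kernel underlying such a routine is compact; the sketch below shows a plain CSR (compressed sparse row) matrix-vector product in Python, meant only to illustrate the data layout, since each output row is independent, which is exactly what makes row-wise MPI/OpenMP partitioning natural (the report itself works with C++ classes):

```python
def csr_matvec(values, col_idx, row_ptr, x):
    """Compute y = A @ x for a CSR-stored sparse matrix.
    values/col_idx hold the nonzeros and their columns; row_ptr[i]:row_ptr[i+1]
    delimits row i's nonzeros."""
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):  # rows are independent -> trivially parallelizable
        acc = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[k] * x[col_idx[k]]
        y[i] = acc
    return y
```

For example, the matrix [[1, 0, 2], [0, 3, 0]] is stored as values [1, 2, 3], columns [0, 2, 1], and row pointers [0, 2, 3].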

  4. Elongation Cutoff Technique: Parallel Performance

    Directory of Open Access Journals (Sweden)

    Jacek Korchowiec

    2008-01-01

    It is demonstrated that the elongation cutoff technique (ECT) substantially speeds up the quantum-chemical calculation at the Hartree-Fock (HF) level of theory and is especially well suited for parallel performance. A comparison of ECT timings for water chains with the reference HF calculations is given. The analysis includes the overall CPU (central processing unit) time and its most time-consuming steps.

  5. Massively parallel quantum computer simulator

    NARCIS (Netherlands)

    De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.

    2007-01-01

    We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray

  6. Parallel and Distributed Databases: Introduction

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Kemper, Alfons; Prieto, Manuel; Szalay, Alex

    Euro-Par Topic 5 addresses data management issues in parallel and distributed computing. Advances in data management (storage, access, querying, retrieval, mining) are inherent to current and future information systems. Today, accessing large volumes of information is a reality: Data-intensive

  7. Lightweight Specifications for Parallel Correctness

    Science.gov (United States)

    2012-12-05

    The reads and writes occur inside the constructor of a temporary object created in each iteration.) To a naïve, traditional conflict...pp. 207–227. [10] Krste Asanovic et al. The Parallel Computing Laboratory at U.C. Berkeley: A Research Agenda Based on the Berkeley View. Tech. rep

  8. Matpar: Parallel Extensions for MATLAB

    Science.gov (United States)

    Springer, P. L.

    1998-01-01

    Matpar is a set of client/server software that allows a MATLAB user to take advantage of a parallel computer for very large problems. The user can replace calls to certain built-in MATLAB functions with calls to Matpar functions.

  9. Pricing of drugs with heterogeneous health insurance coverage.

    Science.gov (United States)

    Ferrara, Ida; Missios, Paul

    2012-03-01

    In this paper, we examine the role of insurance coverage in explaining the generic competition paradox in a two-stage game involving a single producer of brand-name drugs and n quantity-competing producers of generic drugs. Independently of brand loyalty, which some studies rely upon to explain the paradox, we show that heterogeneity in insurance coverage may result in higher prices of brand-name drugs following generic entry. With market segmentation based on insurance coverage present in both the pre- and post-entry stages, the paradox can arise when the two types of drugs are highly substitutable and the market is quite profitable but does not have to arise when the two types of drugs are highly differentiated. However, with market segmentation occurring only after generic entry, the paradox can arise when the two types of drugs are weakly substitutable, provided, however, that the industry is not very profitable. In both cases, that is, when market segmentation is present in the pre-entry stage and when it is not, the paradox becomes more likely to arise as the market expands and/or insurance companies decrease deductibles applied on the purchase of generic drugs. Copyright © 2012 Elsevier B.V. All rights reserved.

  10. Conventional sunscreen application does not lead to sufficient body coverage.

    Science.gov (United States)

    Jovanovic, Z; Schornstein, T; Sutor, A; Neufang, G; Hagens, R

    2017-10-01

    This study aimed to assess sunscreen application habits and relative body coverage after a single whole-body application. Fifty-two healthy volunteers were asked to use the test product once, following their usual sunscreen application routine. Standardized UV photographs, which were evaluated by image analysis, were taken before and immediately after product application to evaluate relative body coverage. In addition to these procedures, the volunteers completed an online self-assessment questionnaire to assess sunscreen usage habits. After product application, the front side showed significantly less non-covered skin (4.35%) than the backside (17.27%) (P < 0.0001). Females showed overall significantly less non-covered skin (8.98%) than males (13.16%) (P = 0.0381). On the backside, females showed significantly less non-covered skin (13.57%) than males (21.94%) (P = 0.0045), while on the front side, this difference between females (4.14%) and males (4.53%) was not significant. In most cases, the usual sunscreen application routine does not provide complete body coverage even though an extra-light sunscreen with good absorption properties was used. On average, 11% of the body surface was not covered by sunscreen at all. Therefore, appropriate consumer education is required to improve sunscreen application and to warrant effective sun protection. © 2017 Society of Cosmetic Scientists and the Société Française de Cosmétologie.

  11. Precision Agriculture without borders: Practical issues and improvements in farmland coverage with aerial vehicles.

    OpenAIRE

    Pereira Valente, Joao Ricardo; Barrientos Cruz, Antonio; Cerro Giner, Jaime del; Sanz Muñoz, David; Garzón Oviedo, Mario Andrei; Rossi, Claudio

    2011-01-01

    This work presents a solution for the aerial coverage of a field by using a fleet of aerial vehicles. The use of Unmanned Aerial Vehicles makes it possible to obtain high-resolution mosaics to be used in Precision Agriculture techniques. This report focuses on providing a solution for the full simultaneous coverage problem, taking into account restrictions such as the required spatial resolution and overlap, while maintaining similar light conditions and safe operation of the drones. Results obtained from rea...

  12. Universal Health Coverage – The Critical Importance of Global Solidarity and Good Governance

    Science.gov (United States)

    Reis, Andreas A.

    2016-01-01

    This article provides a commentary on Ole Norheim's editorial entitled "Ethical perspective: Five unacceptable trade-offs on the path to universal health coverage." It reinforces its message that an inclusive, participatory process is essential for ethical decision-making and underlines the crucial importance of good governance in setting fair priorities in healthcare. Solidarity on both national and international levels is needed to make progress towards the goal of universal health coverage (UHC). PMID:27694683

  13. Distributed parallel processing for multidimensional maximum entropy reconstruction.

    Science.gov (United States)

    Li, K B; Stern, A S; Hoch, J C

    1998-09-01

    We have developed a two-dimensional maximum entropy spectrum reconstruction program designed to run in parallel on workstation clusters. Test reconstructions of planes extracted from a three-dimensional NMR data set indicate that the parallel speedup is nearly equal to the number of processors provided that the individual processors have comparable performance and that there are at least as many planes as processors. The program also works well in a typical laboratory setting consisting of heterogeneous workstations. Copyright 1998 Academic Press.
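    The scaling condition stated above, near-linear speedup provided there are at least as many planes as processors, follows from a simple load-balance argument: wall time is set by the processor assigned the most planes. A toy model of that idealisation (not the paper's measured timings):

```python
import math

def plane_parallel_speedup(n_planes: int, n_procs: int) -> float:
    """Idealised speedup when whole planes are distributed over processors:
    the slowest processor holds ceil(n_planes / n_procs) planes."""
    planes_per_proc = math.ceil(n_planes / n_procs)
    return n_planes / planes_per_proc
```

With 8 planes on 4 processors the speedup is the full 4x; with 9 planes on 4 processors one processor gets 3 planes and the speedup drops to 3x; with fewer planes than processors the extra processors sit idle.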

  14. A parallelism viewpoint to analyze performance bottlenecks of parallelism-intensive software systems

    OpenAIRE

    Muhammad, Naeem; Boucké, Nelis; Berbers, Yolande

    2010-01-01

    The use of parallelism enhances the performance of a software system. However, its excessive use can degrade system performance. In this paper we propose a parallelism viewpoint to optimize the use of parallelism by eliminating unnecessarily used parallelism in legacy systems. The parallelism viewpoint describes the parallelism of the system in order to analyze the multiple overheads associated with its threads. We use the proposed viewpoint to find parallelism-specific performance overheads of a...

  15. [Is muscle the best coverage for leg Gustilo IIIb fractures? A retrospective comparative study].

    Science.gov (United States)

    Danino, A-M; Gras, M; Coeugniet, E; Jebrane, A; Harris, P G

    2008-12-01

    Well-vascularized muscle flaps have been the traditional gold standard for coverage of open fractures of the lower extremity. The last 15 years have brought the fasciocutaneous and perforator flaps and raised the issue of the type of coverage required for open fractures of the lower extremity. In recent years, in selected compromised patients, we have been using nonmuscular flaps for reconstruction. The goal of this study is to compare the results of fasciocutaneous reconstruction to those of classical muscular flaps. A comparative retrospective study was performed, including all patients from 2002 to 2006 requiring coverage of a Gustilo IIIb fracture of the lower extremity. The type of flaps, the fracture localization, the infection rate, the time required for consolidation of the fracture, and the complication rate were reviewed. An independent university laboratory verified the statistical analysis. Twenty patients underwent coverage by muscular flaps and 18 by fasciocutaneous flaps. We found a skin fistula and a chronic infection in the muscular-flap group, and two skin fistulae in the fasciocutaneous-flap group. The overall surgical results were comparable, except for the bony union delay, which was shorter in the fasciocutaneous-flap group. Muscle coverage is not mandatory to cover bone in the lower leg. Fasciocutaneous flaps can provide a good alternative to muscle-flap coverage. There is no significant difference, as far as consolidation and infection are concerned, between coverage by muscular or fasciocutaneous flaps.

  16. Performance Analysis of Improved Glowworm Swarm Optimization Algorithm and the Application in Coverage Optimization of WSNs

    Directory of Open Access Journals (Sweden)

    Xu Jingqi

    2014-04-01

    The performance of an improved glowworm swarm optimization (GSO) algorithm and its application to coverage optimization of WSNs are analyzed in this paper. A global convergence analysis of the basic GSO is made. In order to improve the GSO convergence efficiency, an improved GSO (IGSO) is presented and is proved to converge to the global optimum with probability one. Further, a new coverage optimization algorithm for WSNs, based on IGSO, is presented according to the analysis of GSO. A model of coverage optimization in WSNs is built up by taking node uniformity and network coverage rate as the criteria, and the relationship between node redundancy and network coverage rate, together with a node dormancy strategy, is presented. The deployment of nodes is then divided into different stages, and the IGSO is used to solve the model in each stage. Through testing on classical test functions and optimizing coverage problems in WSNs, the simulation results show that the IGSO achieves more reasonable results and can effectively provide the optimal solution of network coverage.
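    The network coverage rate that serves as one of the optimization criteria can be illustrated with a simple grid discretisation (a generic toy objective of the kind such a deployment algorithm would maximise, not the paper's exact model): the fraction of sample points lying within sensing radius of at least one node.

```python
def coverage_rate(sensors, radius, width, height, step=1.0):
    """Fraction of grid points in a width x height area that are within
    `radius` of at least one sensor in `sensors` (list of (x, y) tuples)."""
    r2 = radius * radius
    covered = total = 0
    y = 0.0
    while y <= height:
        x = 0.0
        while x <= width:
            total += 1
            if any((x - sx) ** 2 + (y - sy) ** 2 <= r2 for sx, sy in sensors):
                covered += 1
            x += step
        y += step
    return covered / total
```

A swarm optimizer would treat the sensor coordinates as the decision variables and this rate (possibly combined with a uniformity term) as the fitness.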

  17. Clustered lot quality assurance sampling: a pragmatic tool for timely assessment of vaccination coverage.

    Science.gov (United States)

    Greenland, K; Rondy, M; Chevez, A; Sadozai, N; Gasasira, A; Abanida, E A; Pate, M A; Ronveaux, O; Okayasu, H; Pedalino, B; Pezzoli, L

    2011-07-01

    To evaluate oral poliovirus vaccine (OPV) coverage of the November 2009 round in five Northern Nigeria states with ongoing wild poliovirus transmission using clustered lot quality assurance sampling (CLQAS). We selected four local government areas (LGAs) in each pre-selected state and sampled six clusters of 10 children in each LGA, defined as the lot area. We used three decision thresholds to classify OPV coverage: 75-90%, 55-70% and 35-50%. A full lot was completed, but we also assessed in retrospect the potential time-saving benefits of stopping sampling once a lot had been classified. We accepted two LGAs with vaccination coverage above 75%. Of the remaining 18 rejected LGAs, 11 also failed to reach 70% coverage, of which four also failed to reach 50%. The average time taken to complete a lot was 10 h. By stopping sampling when a decision was reached, we could have classified lots in 5.3, 7.7 and 7.3 h on average at the 90%, 70% and 50% coverage targets, respectively. Clustered lot quality assurance sampling was feasible and useful to estimate OPV coverage in Northern Nigeria. The multi-threshold approach provided useful information on the variation of OPV vaccination coverage. CLQAS is a very timely tool, allowing corrective actions to be taken directly in insufficiently covered areas. © 2011 Blackwell Publishing Ltd.
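    The classification step can be sketched as a simple decision rule on the sample proportion. This is a deliberately naive illustration: the study's real decision values come from binomial error calculations, and the thresholds below are only the band edges quoted in the abstract.

```python
def classify_lot(n_vaccinated: int, n_sampled: int,
                 upper: float, lower: float) -> str:
    """Toy LQAS-style classification of a lot from a cluster sample:
    accept if the sample proportion reaches the upper coverage target,
    reject below the lower threshold, otherwise leave it intermediate."""
    p = n_vaccinated / n_sampled
    if p >= upper:
        return "accept"
    if p < lower:
        return "reject"
    return "intermediate"
```

With the six-clusters-of-ten design, a lot of 60 children with 55 vaccinated would be accepted against a 90% target, while 40 of 60 would be rejected against a 70% floor; stopping as soon as the rule can no longer change outcome is what yields the time savings reported above.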

  18. National, state, and local area vaccination coverage among children aged 19-35 months--United States, 2007.

    Science.gov (United States)

    2008-09-05

    The National Immunization Survey (NIS) provides vaccination coverage estimates among children aged 19-35 months for each of the 50 states and selected urban areas. This report describes the results of the 2007 NIS, which provided coverage estimates among children born during January 2004-July 2006. Healthy People 2010 established vaccination coverage targets of 90% for each of the vaccines included in the combined 4:3:1:3:3:1 vaccine series and a target of 80% for the combined series. Findings from the 2007 NIS indicated that ≥90% coverage was achieved for most of the routinely recommended vaccines. The majority of parents were vaccinating their children, with less than 1% of children receiving no vaccines by age 19-35 months. The coverage level for the 4:3:1:3:3:1 series remained steady at 77.4%, compared with 76.9% in 2006. Among states and local areas, substantial variability continued, with estimated vaccination coverage ranging from 63.1% to 91.3%. Coverage remained high across all racial/ethnic groups and was not significantly different among racial/ethnic groups after adjusting for poverty status. However, for some vaccines, coverage remained lower among children living below the poverty level compared with children living at or above the poverty level. Maintaining high vaccination coverage and continued attention to reducing current poverty disparities are needed to limit the spread of vaccine-preventable diseases and ensure that children are protected.

  19. Automatic magnetometer calibration with small space coverage

    Science.gov (United States)

    Wahdan, Ahmed

    The use of a standalone Global Navigation Satellite System (GNSS) has proved to be insufficient when navigating indoors or in urban canyons due to multipath or obstruction. Recent technological advances in low-cost micro-electro-mechanical system (MEMS)-based sensors (like accelerometers, gyroscopes and magnetometers) enabled the development of sensor-based navigation systems. Although MEMS sensors are low-cost, lightweight, small in size, and have low power consumption, they have complex error characteristics. Accurate computation of the heading angle (azimuth) is one of the most important aspects of any navigation system. It can be computed either by gyroscopes or magnetometers. Gyroscopes are inertial sensors that can provide the angular rate from which the heading can be calculated; however, their outputs drift with time. Moreover, the accumulated errors due to the mathematical integration performed to obtain the heading angle lead to large heading errors. On the other hand, magnetometers do not suffer from drift, and the calculation of heading from them does not suffer from error accumulation. They can provide an absolute heading from magnetic north by sensing the earth's magnetic field. However, magnetometer readings are usually affected by magnetic fields other than the earth's magnetic field, and by other error sources; therefore magnetometer calibration is required to use a magnetometer as a reliable source of heading in navigation applications. In this thesis, a framework for fast magnetometer calibration is proposed. This framework requires little space coverage with no user involvement in the calibration process, and does not need specific movements to be performed. The proposed techniques are capable of performing both 2-dimensional (2D) and 3-dimensional (3D) calibration of magnetometers. They are developed to consider different scenarios suitable for different applications, and can benefit from natural device movements.
Some applications involve tethering the
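    A minimal 2D calibration in the spirit described above (a generic hard-iron offset sketch, not the thesis's actual algorithm) estimates the constant bias as the midpoint of the per-axis extremes of the horizontal field readings, then derives an absolute heading from the corrected components:

```python
import math

def hard_iron_offset(samples):
    """Estimate the constant (hard-iron) magnetometer bias from 2D samples
    (mx, my) as the midpoint of the per-axis extremes; usable once both
    extremes of each axis have been observed, i.e. with little space coverage."""
    xs = [s[0] for s in samples]
    ys = [s[1] for s in samples]
    return ((max(xs) + min(xs)) / 2.0, (max(ys) + min(ys)) / 2.0)

def heading_deg(mx, my, offset):
    """Heading (degrees from magnetic north, level device assumed)
    from bias-corrected horizontal field components."""
    return math.degrees(math.atan2(my - offset[1], mx - offset[0])) % 360.0
```

Rotating the device so the readings trace a circle around the bias point is enough for this sketch; more refined schemes fit an ellipse to also correct soft-iron distortion.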

  20. Coverage of space by random sets

    Indian Academy of Sciences (India)

    Consider the non-negative integer line. For each integer point we toss a coin. If the toss at location i is Heads, we place an interval (of random length) there and move to location i + 1; if it is Tails, we simply move to location i + 1.
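    The process is easy to simulate. The sketch below is an illustrative model only, with an assumed uniform distribution of integer interval lengths, and estimates the fraction of the first n sites that end up covered:

```python
import random

def covered_fraction(n: int, p: float, max_len: int, seed: int = 0) -> float:
    """Simulate the integer-line process: at each site toss a p-coin; on Heads,
    cover an interval of uniform random integer length starting at that site.
    Returns the fraction of the first n sites that are covered."""
    rng = random.Random(seed)
    covered = [False] * n
    for i in range(n):
        if rng.random() < p:
            length = rng.randint(1, max_len)
            for j in range(i, min(i + length, n)):
                covered[j] = True
    return sum(covered) / n
```

In the two degenerate cases the answer is immediate: with p = 1 every site covers at least itself, so the whole line is covered, and with p = 0 nothing is.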

  1. Tetanus Toxoid Vaccination Coverage And Differential Between ...

    African Journals Online (AJOL)

    Background: Government commitment and support from a range of partnerships have led to a massive increase in tetanus toxoid immunization coverage among women of childbearing age in Bangladesh, ensuring that both mothers and babies are protected against tetanus infection. In order to control and eliminate the ...

  2. The Sad State of Education Coverage.

    Science.gov (United States)

    Batory, Joseph P.

    1999-01-01

    A 1997 report by Public Agenda, a nonpartisan public-opinion research firm, confirmed that educators deplore the quality of press coverage of public education. While questioning journalistic effectiveness and credibility, the study offers objective insights about citizens' expectations. Superintendents must communicate concerns to editors and…

  3. Binning metagenomic contigs by coverage and composition

    NARCIS (Netherlands)

    Alneberg, J.; Bjarnason, B.S.; Bruijn, de I.; Schirmer, M.; Quick, J.; Ijaz, U.Z.; Lahti, L.M.; Loman, N.J.; Andersson, A.F.; Quince, C.

    2014-01-01

    Shotgun sequencing enables the reconstruction of genomes from complex microbial communities, but because assembly does not reconstruct entire genomes, it is necessary to bin genome fragments. Here we present CONCOCT, a new algorithm that combines sequence composition and coverage across multiple
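The abstract's core idea, clustering genome fragments in a joint composition-coverage feature space, can be sketched on synthetic data. The single GC-content composition feature and the tiny k-means loop below are simplifying assumptions, not CONCOCT's actual feature set or Gaussian mixture model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for coverage+composition binning: each contig gets one
# composition feature (GC fraction) and one log-coverage feature, and
# contigs from the same genome cluster together in that joint space.
# All values are synthetic.
gc = np.concatenate([rng.normal(0.35, 0.02, 100), rng.normal(0.60, 0.02, 100)])
logcov = np.concatenate([rng.normal(2.0, 0.1, 100), rng.normal(4.0, 0.1, 100)])
X = np.column_stack([gc, logcov])
X = (X - X.mean(axis=0)) / X.std(axis=0)       # standardise both features

# Minimal 2-means clustering (Lloyd's algorithm) in place of CONCOCT's
# Gaussian mixture model.
centres = X[[0, -1]].copy()
for _ in range(20):
    labels = ((X[:, None, :] - centres[None]) ** 2).sum(-1).argmin(axis=1)
    centres = np.array([X[labels == k].mean(axis=0) for k in range(2)])

# Fraction of the first genome's contigs that land in a single bin.
purity = max((labels[:100] == 0).mean(), (labels[:100] == 1).mean())
```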

  4. 5 CFR 792.103 - Coverage.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Coverage. 792.103 Section 792.103 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED) CIVIL SERVICE REGULATIONS (CONTINUED) FEDERAL EMPLOYEES' HEALTH AND COUNSELING PROGRAMS Regulatory Requirements for Alcoholism and Drug Abuse Programs and...

  5. The hunt for 100% sky coverage

    Science.gov (United States)

    Meimon, Serge; Fusco, Thierry; Clenet, Yann; Conan, Jean-Marc; Assémat, François; Michau, Vincent

    2010-07-01

    Tomographic AO (or Wide Field AO) systems use LGS to build a 3D model of turbulence, but rely on NGS for low-order sensing. To preserve a reasonable sky coverage, each photon coming from the NGS to sense tip-tilt has to be optimally exploited. That means a smart control law, low detection noise, concentration of the photons onto a small patch, and a wavefront sensor concept with favorable noise propagation. In this paper, we describe the system choices made during the E-ELT laser tomographic system ATLAS phase A study in order to get a sky coverage as close as possible to 100%. A correct estimation of the sky coverage is therefore a key issue. We have developed a sky coverage estimation strategy based on a Besançon-model starfield generation, a star selection tool, and a careful estimation of the residual anisoplanatism (after the reconstruction process between the NGSs), noise and temporal contributors. We describe the details of the procedure and derive the expected performance of ATLAS.

  6. Immunization coverage: role of sociodemographic variables.

    Science.gov (United States)

    Sharma, Bhuwan; Mahajan, Hemant; Velhal, G D

    2013-01-01

    Children are considered fully immunized if they receive one dose of BCG, three doses each of DPT and polio vaccine, and one measles vaccine. In India, only 44% of children aged 12-23 months are fully vaccinated and about 5% have not received any vaccination at all. Even if national immunization coverage levels are sufficiently high to block disease transmission, pockets of susceptibility may act as potential reservoirs of infection. This study was done to assess the immunization coverage in an urban slum area and to determine the various sociodemographic variables affecting it. A total of 210 children were selected from the study population using WHO's 30-cluster sampling method. Coverage of BCG was found to be the highest (97.1%) while that of measles was the lowest. The main reason given for noncompliance was the child's illness at the time of scheduled vaccination, followed by lack of knowledge regarding the importance of immunization. Low maternal education, high birth order, and place of delivery were found to be associated with low vaccination coverage. Regular IEC activities (group talks, role plays, posters, pamphlets, and competitions) should be conducted in the community to ensure that immunization becomes a "felt need" of the mothers in the community.

  7. 29 CFR 1603.101 - Coverage.

    Science.gov (United States)

    2010-07-01

    ... STATE AND LOCAL GOVERNMENT EMPLOYEE COMPLAINTS OF EMPLOYMENT DISCRIMINATION UNDER SECTION 304 OF THE GOVERNMENT EMPLOYEE RIGHTS ACT OF 1991 Administrative Process § 1603.101 Coverage. Section 304 of the Government Employee Rights Act of 1991 applies to employment, which includes application for employment, of...

  8. 5 CFR 430.202 - Coverage.

    Science.gov (United States)

    2010-01-01

    ... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERFORMANCE MANAGEMENT Performance Appraisal for General Schedule, Prevailing Rate, and Certain Other Employees § 430.202 Coverage. (a) Employees and agencies covered by statute. (1) Section 4301(1) of title 5, United States Code...

  9. EDITORIAL COVERAGE OF SCIENTIFIC RESEARCH BY THE ...

    African Journals Online (AJOL)

    hi-tech

    The New England Journal of Medicine published an article on coverage by the news media of the benefits and risks of medications by Moynihan et al(l). These authors studied 207 news media stories reporting on benefits and risks of pravastatin (a cholesterol lowering drug), alendronate (a biophosphonate for treatment ...

  10. Danish Media coverage of 22/7

    DEFF Research Database (Denmark)

    Hervik, Peter; Boisen, Sophie

    2013-01-01

    ’s Danish connections through an analysis of the first 100 days of Danish media coverage. We scrutinised 188 articles in the largest daily newspapers to find out how Danish actors related to ABB’s ideas. The key argument is that the discourses and opinions reflect pre-existing opinions and entrenched...

  11. 5 CFR 9701.505 - Coverage.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Coverage. 9701.505 Section 9701.505 Administrative Personnel DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM (DEPARTMENT OF HOMELAND SECURITY-OFFICE OF PERSONNEL MANAGEMENT) DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES...

  12. 5 CFR 9701.402 - Coverage.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Coverage. 9701.402 Section 9701.402 Administrative Personnel DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM (DEPARTMENT OF HOMELAND SECURITY-OFFICE OF PERSONNEL MANAGEMENT) DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES...

  13. 5 CFR 9701.302 - Coverage.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Coverage. 9701.302 Section 9701.302 Administrative Personnel DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM (DEPARTMENT OF HOMELAND SECURITY-OFFICE OF PERSONNEL MANAGEMENT) DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES...

  14. 5 CFR 9701.202 - Coverage.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Coverage. 9701.202 Section 9701.202 Administrative Personnel DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM (DEPARTMENT OF HOMELAND SECURITY-OFFICE OF PERSONNEL MANAGEMENT) DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES...

  15. Factors influencing immunisation coverage among children under ...

    African Journals Online (AJOL)

    Results. The correct vaccination coverage rate for children was found to be high. Children in urban and rural areas differed substantially in their correct vaccination rates and their receipt of each vaccine separately. Walking or travelling time to the place of vaccination was found to be longer in rural areas when compared ...

  16. True Public Access Defibrillator Coverage is Overestimated

    NARCIS (Netherlands)

    Sun, Christopher L.F.; Demirtas, Derya; Brooks, Steven C.; Morrison, Laurie J.; Chan, Timothy C.Y.

    2015-01-01

    Background: Out-of-hospital cardiac arrests (OHCAs) occur at all times of the day and night. Immediate access to an AED increases survival. However, most public-location AEDs are placed in buildings without 24 hour access. Objective: To measure fixed-location public AED coverage of OHCAs by time of

  17. Parallel Computing Methods For Particle Accelerator Design

    CERN Document Server

    Popescu, Diana Andreea; Hersch, Roger

    We present methods for parallelizing transport map construction for multi-core processors and for Graphics Processing Units (GPUs), and provide an efficient implementation of the map construction. We describe a method for multi-core processors using the OpenMP framework which brings a performance improvement over the serial version of the map construction. We also developed a novel and efficient multivariate polynomial multiplication algorithm for GPUs and implemented it using the CUDA framework. We show the benefits of using this multiplication algorithm in the map composition operation at high orders. Finally, we present an algorithm for map composition on GPUs.
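The kernel being parallelized here, multiplication of truncated polynomial maps, reduces in the univariate case to a truncated convolution of coefficient arrays; the order and coefficient values below are illustrative:

```python
import numpy as np

# Truncated power-series coefficients up to x^5 (illustrative values).
order = 5
a = np.array([1.0, 2.0, 0.0, 1.0, 0.0, 3.0])   # a(x)
b = np.array([0.5, 0.0, 1.0, 0.0, 2.0, 0.0])   # b(x)

# Multiplying the series is a convolution of the coefficient arrays,
# truncated back to the map order; in the multivariate GPU case the same
# operation runs over multi-index exponent tuples instead of integers.
prod = np.convolve(a, b)[: order + 1]
```

Each output coefficient `prod[k]` is the sum of `a[i] * b[k - i]`, which is the independent per-coefficient work a GPU thread can own.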

  18. The Challenge of Massively Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    WOMBLE,DAVID E.

    1999-11-03

    Since the mid-1980's, there have been a number of commercially available parallel computers with hundreds or thousands of processors. These machines have provided a new capability to the scientific community, and they have been used by scientists and engineers with varying degrees of success. One of the reasons for the limited success is the difficulty, or perceived difficulty, of developing code for these machines. In this paper we discuss many of the issues and challenges in developing scalable hardware, system software and algorithms for machines comprising hundreds or thousands of processors.

  19. Pair distribution functions of colloidal particles on a quartz collector in a parallel plate and stagnation point flow chamber

    NARCIS (Netherlands)

    Yang, JL; Busscher, HJ; Bos, R.R.M.

    2000-01-01

    Pair distribution functions of polystyrene particles adhering on a quartz collector surface are compared for a parallel plate (PP) and stagnation point (SP) flow chamber at a common Peclet number and identical surface coverage. Radial pair distribution functions of deposition patterns around the

  20. 75 FR 34571 - Group Health Plans and Health Insurance Coverage Rules Relating to Status as a Grandfathered...

    Science.gov (United States)

    2010-06-17

    ... Revenue Service 26 CFR Part 54 RIN 1545-BJ50 Group Health Plans and Health Insurance Coverage Rules... respect to group health plans and health insurance coverage offered in connection with a group health plan... temporary regulations provide guidance to employers, group health plans, and health insurance issuers...

  1. Evaluating Potential Bias in Media Coverage of the Public Debate over Acid Rain and Chlorofluorocarbons in the 1980s

    Science.gov (United States)

    Williams, Tiffany Dawn; Moore, Rebecca; Markewitz, Daniel

    2012-01-01

    This study evaluates media coverage of two important environmental issues from the 1980s (acid rain and chlorofluorocarbons), providing historical context for current media coverage analysis. Focusing on popular magazine articles, this study identifies key characteristics of content and presentation. Content-related characteristics are inclusion…

  2. Immunisation coverage of adults: a vaccination counselling campaign in the pharmacies in Switzerland.

    Science.gov (United States)

    Valeri, Fabio; Hatz, Christoph; Jordan, Dominique; Leuthold, Claudine; Czock, Astrid; Lang, Phung

    2014-01-01

    To assess vaccination coverage for adults living in Switzerland. Through a media campaign, the general population was invited during 1 month to bring their vaccination certificates to the pharmacies to have their immunisation status evaluated with the software viavac©, and to complete a questionnaire. A total of 496 pharmacies in Switzerland participated in the campaign, of which 284 (57%) submitted valid vaccination information. From a total of 3,634 participants in the campaign, there were 3,291 valid cases (participants born ≤ 1992) and 1,011 questionnaires completed. Vaccination coverage for the participants was 45.9% and 34.6% for five and six doses of diphtheria, 56.4% and 44.0% for tetanus and 66.3% and 48.0% for polio, respectively. Coverage estimates for one and two doses of measles vaccine were 76.5% and 49.4%, respectively, for the birth cohort 1967-1992 and 4.0% and 0.8%, respectively, for the cohort ≤ 1966. There was a significant difference in coverage for most vaccinations between the two aforementioned birth cohorts. A plot of the measles vaccine coverage over time shows that the increase in coverage correlated with policy changes in the Swiss Immunisation Schedule. Despite selection bias and low participation, this study indicates that vaccination coverage for the basic recommended immunisations in the adult population in Switzerland is suboptimal. More efforts using various means and methods are needed to increase immunisation coverage in adolescents before they leave school. An established method to determine vaccination coverage for the general population could provide invaluable insights into the effects of changes in vaccination policies and disease outbreaks.

  3. TIMELINESS AND LEVEL OF PRIMARY IMMUNIZATION COVERAGE AGAINST MEASLES AND RUBELLA IN MONTENEGRO

    Directory of Open Access Journals (Sweden)

    Branislav Tiodorović

    2009-10-01

    The aim of the paper was to determine the timeliness and level of primary immunization coverage against measles and rubella in Montenegro in the cohort born from January 1 to December 31, 2006. A cross-sectional study was conducted in the period from October to December 2008. All immunization points in Montenegro were visited and the immunization records of the entire cohort born in 2006 were reviewed. Timeliness of primary immunization coverage with MMR was 91.4% at the level of Montenegro, but in seven municipalities (33.3%) timely coverage was less than 90%, in one of them even less than 80%. After additional activities to vaccinate previously unvaccinated children, primary immunization coverage with MMR reached 96.1% at the level of Montenegro and exceeded 95% in the majority of municipalities. However, even after these additional immunization activities, in six of the 21 municipalities (28.6%) primary immunization coverage with MMR remained below 95%, and in one municipality below 90%. In the cohort born during 2006, timely primary immunization with MMR was below 90% in one third of Montenegrin municipalities. Supplemental immunization activities targeting unvaccinated children significantly increased primary immunization coverage with MMR in this cohort; yet in a number of municipalities the coverage still did not reach the required 95%. Compared with routine administrative reporting on immunization coverage, surveys that review immunization records after additional immunization activities provide a more realistic rate of completeness and timeliness of primary immunization coverage.

  4. Theoretical study of nonlinear optical properties of "parallel connection" chromophores containing parallel nonconjugated D-pi-A units.

    Science.gov (United States)

    Zhang, Chao-Zhi; Cao, Hui; Im, Chan; Lu, Guo-Yuan

    2009-11-05

    Chromophores containing two parallel nonconjugated D-pi-A units are effective chromophores with high hyperpolarizability and good optical transparency, which suggests a method for the design and synthesis of effective chromophores. The semiempirical method ZINDO was employed to study the relationship between the enhancement of the static first hyperpolarizability (beta0) per D-pi-A unit and the number of parallel nonconjugated D-pi-A units in a chromophore. The results show that chromophores containing two parallel nonconjugated D-pi-A units exhibit beta0 values higher than twice the beta0 value of the corresponding reference chromophore containing one D-pi-A unit. The chromophore containing three parallel nonconjugated D-pi-A units exhibits the highest enhancement of beta0 per D-pi-A unit, 10.1 times the beta0 value of the corresponding reference chromophore. However, the beta0 value of the chromophore containing four parallel nonconjugated D-pi-A units is very small, and the enhancement of beta0 per D-pi-A unit drops sharply, from 10.1 to 0.3, as the number of parallel D-pi-A units increases from 3 to 4. These results offer useful guidance for designing chromophores containing parallel nonconjugated D-pi-A units.

  5. A parallel Fast Fourier transform

    CERN Document Server

    Morante, S; Salina, G

    1999-01-01

    In this paper we discuss the general problem of implementing the multidimensional Fast Fourier Transform algorithm on parallel computers. We show that, on a machine with P processors and fully parallel node communications, the optimal asymptotic scaling of the total computational time with the number of data points N, given in d dimensions by a(N/P)log(N/P) + bN/P^((d-1)/d), can actually be achieved on realistic platforms. As a concrete realization of our strategy, we have produced codes running efficiently on machines of the APE family and on the Cray T3E. On the former, for asymptotic values of N, our codes attain the above optimal result. (16 refs).
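The row-column factorization underlying parallel multidimensional FFTs can be checked directly: each batch of 1-D transforms is independent of the others and can therefore be distributed over the P processors (performed serially here for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 64))

# The 2-D FFT factorises into independent 1-D FFTs along rows and then
# along columns; each batch is perfectly parallel across processors.
rows_done = np.fft.fft(x, axis=1)       # first batch: one FFT per row
full = np.fft.fft(rows_done, axis=0)    # second batch: one FFT per column

matches_library = np.allclose(full, np.fft.fft2(x))
```

On a distributed machine the data transpose between the two batches is what contributes the communication term in the scaling formula.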

  6. Merlin - Massively parallel heterogeneous computing

    Science.gov (United States)

    Wittie, Larry; Maples, Creve

    1989-01-01

    Hardware and software for Merlin, a new kind of massively parallel computing system, are described. Eight computers are linked as a 300-MIPS prototype to develop system software for a larger Merlin network with 16 to 64 nodes, totaling 600 to 3000 MIPS. These working prototypes help refine a mapped reflective memory technique that offers a new, very general way of linking many types of computer to form supercomputers. Processors share data selectively and rapidly on a word-by-word basis. Fast firmware virtual circuits are reconfigured to match topological needs of individual application programs. Merlin's low-latency memory-sharing interfaces solve many problems in the design of high-performance computing systems. The Merlin prototypes are intended to run parallel programs for scientific applications and to determine hardware and software needs for a future Teraflops Merlin network.

  7. Structural synthesis of parallel robots

    CERN Document Server

    Gogu, Grigore

    This book represents the fifth part of a larger work dedicated to the structural synthesis of parallel robots. The originality of this work resides in the fact that it combines new formulae for mobility, connectivity, redundancy and overconstraints with evolutionary morphology in a unified structural synthesis approach that yields interesting and innovative solutions for parallel robotic manipulators.  This is the first book on robotics that presents solutions for coupled, decoupled, uncoupled, fully-isotropic and maximally regular robotic manipulators with Schönflies motions systematically generated by using the structural synthesis approach proposed in Part 1.  Overconstrained non-redundant/overactuated/redundantly actuated solutions with simple/complex limbs are proposed. Many solutions are presented here for the first time in the literature. The author had to make a difficult and challenging choice between protecting these solutions through patents and releasing them directly into the public domain. T...

  8. GPU Parallel Bundle Block Adjustment

    Directory of Open Access Journals (Sweden)

    ZHENG Maoteng

    2017-09-01

    To deal with massive data in photogrammetry, we introduce GPU parallel computing technology. The preconditioned conjugate gradient and inexact Newton methods are also applied to decrease the number of iterations needed to solve the normal equation. A brand new bundle adjustment workflow is developed to utilize GPU parallel computing. Our method avoids the storage and inversion of the big normal matrix and computes the normal matrix in real time. The proposed method not only largely decreases the memory requirement of the normal matrix, but also largely improves the efficiency of bundle adjustment, while achieving the same accuracy as the conventional method. Preliminary experimental results show that the bundle adjustment of a dataset with about 4500 images and 9 million image points can be done in only 1.5 minutes while achieving sub-pixel accuracy.
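The preconditioned conjugate gradient step the abstract mentions can be sketched on a toy damped normal equation. The Jacobi preconditioner, the random Jacobian, and the matrix sizes below are illustrative assumptions, not the paper's GPU implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy damped normal equation N x = b with N = J^T J + lambda*I built from
# a random Jacobian; sizes and damping are illustrative. In bundle
# adjustment N would be applied on the fly, never stored densely.
J = rng.standard_normal((200, 50))
N = J.T @ J + 1e-3 * np.eye(50)
b = rng.standard_normal(50)

# Jacobi-preconditioned conjugate gradient.
Minv = 1.0 / np.diag(N)                # inverse of the diagonal of N
x = np.zeros(50)
r = b - N @ x
z = Minv * r
p = z.copy()
for _ in range(200):
    Np = N @ p
    alpha = (r @ z) / (p @ Np)
    x = x + alpha * p
    r_new = r - alpha * Np
    if np.linalg.norm(r_new) < 1e-10:  # converged
        r = r_new
        break
    z_new = Minv * r_new
    beta = (r_new @ z_new) / (r @ z)
    p = z_new + beta * p
    r, z = r_new, z_new

residual = np.linalg.norm(N @ x - b)
```

Every operation in the loop is a matrix-vector product or a vector update, which is exactly the structure that maps well to a GPU.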

  9. Xyce parallel electronic simulator : users' guide.

    Energy Technology Data Exchange (ETDEWEB)

    Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.; Santarelli, Keith R.; Fixel, Deborah A.; Coffey, Todd Stirling; Russo, Thomas V.; Schiek, Richard Louis; Warrender, Christina E.; Keiter, Eric Richard; Pawlowski, Roger Patrick

    2011-05-01

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: (1) Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). Note that this includes support for most popular parallel and serial computers; (2) Improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques. (3) Device models which are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and (4) Object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message passing parallel implementation - which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The development of Xyce provides a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods, parallel solver algorithms) research and development can be performed. As a result, Xyce is

  10. ACO-Based Sweep Coverage Scheme in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Peng Huang

    2015-01-01

    The coverage problem is one of the major issues in wireless sensor networks (WSN). In order to optimize network coverage, different coverage formulations have been proposed. Recently, a newly emerging coverage scheme, sweep coverage, which uses mobile sensors to monitor certain points of interest (POIs), was proposed. However, data delivery to the sink, an important problem in WSN, is not considered in the original sweep coverage formulation, and many existing works still do not consider it. In this work, a novel algorithm named ACOSC (ACO-based sweep coverage), which addresses periodical coverage of POIs and delivery of data simultaneously, is proposed. The evaluation results show that our algorithm performs better than existing schemes.
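The route-construction core of an ACO-based sweep scheme (a mobile sensor repeating a short cyclic tour over the POIs) can be sketched as follows. The ant count, evaporation rate, and pheromone/visibility weighting are generic ACO choices, not ACOSC's actual parameters:

```python
import math
import random

random.seed(2)

# Random POIs in the unit square; the sweep schedule is a cyclic tour the
# mobile sensor repeats each period, so shorter tours mean more frequent
# visits to every POI.
pois = [(random.random(), random.random()) for _ in range(12)]
n = len(pois)
dist = [[math.dist(p, q) or 1e-9 for q in pois] for p in pois]
tau = [[1.0] * n for _ in range(n)]            # pheromone matrix

def tour_length(tour):
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

best_tour, best_len = None, float("inf")
for _ in range(60):                            # colony iterations
    for _ in range(10):                        # ants per iteration
        tour, unvisited = [0], set(range(1, n))
        while unvisited:
            i = tour[-1]
            cand = list(unvisited)
            # Prefer edges with strong pheromone and short distance.
            weights = [tau[i][j] / dist[i][j] ** 2 for j in cand]
            j = random.choices(cand, weights)[0]
            tour.append(j)
            unvisited.remove(j)
        length = tour_length(tour)
        if length < best_len:
            best_tour, best_len = tour, length
    # Evaporate everywhere, then reinforce the best tour found so far.
    tau = [[0.9 * t for t in row] for row in tau]
    for i in range(n):
        a, b = best_tour[i], best_tour[(i + 1) % n]
        tau[a][b] += 1.0 / best_len
        tau[b][a] += 1.0 / best_len
```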

  11. Streaming for Functional Data-Parallel Languages

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner

    In this thesis, we investigate streaming as a general solution to the space inefficiency commonly found in functional data-parallel programming languages. The data-parallel paradigm maps well to parallel SIMD-style hardware. However, the traditional fully materializing execution strategy......, and the limited memory in these architectures, severely constrains the data sets that can be processed. Moreover, the language-integrated cost semantics for nested data parallelism pioneered by NESL depends on a parallelism-flattening execution strategy that only exacerbates the problem. This is because...... by extending two existing data-parallel languages: NESL and Accelerate. In the extensions we map bulk operations to data-parallel streams that can evaluate fully sequential, fully parallel or anything in between. By a dataflow, piecewise parallel execution strategy, the runtime system can adjust to any target...

  12. Combinatorics of spreads and parallelisms

    CERN Document Server

    Johnson, Norman

    2010-01-01

    Partitions of Vector Spaces Quasi-Subgeometry Partitions Finite Focal-SpreadsGeneralizing André SpreadsThe Going Up Construction for Focal-SpreadsSubgeometry Partitions Subgeometry and Quasi-Subgeometry Partitions Subgeometries from Focal-SpreadsExtended André SubgeometriesKantor's Flag-Transitive DesignsMaximal Additive Partial SpreadsSubplane Covered Nets and Baer Groups Partial Desarguesian t-Parallelisms Direct Products of Affine PlanesJha-Johnson SL(2,

  13. Fuzzy clustering in parallel universes

    OpenAIRE

    Wiswedel, Bernd; Berthold, Michael R.

    2007-01-01

    We present an extension of the fuzzy c-Means algorithm which operates simultaneously on different feature spaces, so-called parallel universes, and also incorporates noise detection. The method assigns membership values of patterns to the different universes, which are then adopted throughout the training. This leads to better clustering results, since patterns not contributing to clustering in a universe are (completely or partially) ignored. The method also uses an auxiliary universe to capt...

  14. Northeast Parallel Architectures Center (NPAC)

    Science.gov (United States)

    1992-07-01

    networks using parallel algorithms for detection of lineaments in remotely sensed LandSat data of the Canadian Shield and for detection of abnormalities in...University Ashok K. Joshi, Syracuse University, 211 Link Hall, Syracuse NY 13244. Abstract of Research: Remotely sensed data from satellite...are not readily visible in the imageries. Photo-interpretation of these satellite images is the more commonly used technique and not much emphasis

  15. Broader health coverage is good for the nation's health: evidence from country level panel data.

    Science.gov (United States)

    Moreno-Serra, Rodrigo; Smith, Peter C

    2015-01-01

    Progress towards universal health coverage involves providing people with access to needed health services without entailing financial hardship and is often advocated on the grounds that it improves population health. The paper offers econometric evidence on the effects of health coverage on mortality outcomes at the national level. We use a large panel data set of countries, examined by using instrumental variable specifications that explicitly allow for potential reverse causality and unobserved country-specific characteristics. We employ various proxies for the coverage level in a health system. Our results indicate that expanded health coverage, particularly through higher levels of publicly funded health spending, results in lower child and adult mortality, with the beneficial effect on child mortality being larger in poorer countries.

  16. BCG coverage and barriers to BCG vaccination in Guinea-Bissau

    DEFF Research Database (Denmark)

    Thysen, Sanne Marie; Byberg, Stine; Pedersen, Marie

    2014-01-01

    BACKGROUND: BCG vaccination is recommended at birth in low-income countries, but vaccination is often delayed. Often 20-dose vials of BCG are not opened unless at least ten children are present for vaccination ("restricted vial-opening policy"). BCG coverage is usually reported as 12-month coverage, which does not disclose the delay in vaccination. Several studies show that BCG at birth lowers neonatal mortality. We assessed BCG coverage at different ages and explored reasons for delay in BCG vaccination in rural Guinea-Bissau. METHODS: The Bandim Health Project (BHP) runs a health and demographic surveillance system in selected intervention regions. Factors associated with delayed BCG vaccination were evaluated using logistic regression models. Coverage in intervention and control regions was compared using log-binomial regression models providing prevalence ratios. RESULTS: Among 3951 children born in 2010...

  17. Designing Service Coverage and Measuring Accessibility and Serviceability of Rural and Small Urban Ambulance Systems

    Directory of Open Access Journals (Sweden)

    EunSu Lee

    2014-03-01

    This paper proposes a novel approach to analyzing potential accessibility to ambulance services by combining the demand-covered ratio and potential serviceability with the ambulance-covering ratio. A Geographic Information System (GIS)-based spatial analysis will assist ambulance service planners and designers to assess and provide rational service coverage based on simulated random incidents. The proposed analytical model is compared to the gravity-based two-step floating catchment area method. The study found that the proposed model could efficiently identify under-covered and overlapping ambulance service coverage to improve service quality, timeliness, and efficiency. The spatial accessibility and serviceability identified with geospatial random events show that the model is able to plan rational ambulance service coverage in consideration of households and travel time. The model can be applied to both regional and statewide coverage plans to aid the interpretation of those plans.
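The two-step floating catchment area (2SFCA) method used here as the comparison baseline can be sketched in its simplest binary-catchment form. The travel times, populations, station capacities, and the 10-minute threshold below are illustrative:

```python
import numpy as np

# Travel times (minutes) from 4 demand points to 3 ambulance stations,
# plus populations and station capacities; all values and the 10-minute
# catchment threshold are illustrative.
travel = np.array([[4.0, 12.0, 7.0],
                   [9.0, 3.0, 15.0],
                   [11.0, 8.0, 5.0],
                   [6.0, 14.0, 9.0]])
demand = np.array([500.0, 300.0, 400.0, 200.0])
supply = np.array([2.0, 1.0, 3.0])
within = travel <= 10.0                        # binary catchment membership

# Step 1: each station's supply-to-demand ratio over reachable population.
ratio = supply / (within * demand[:, None]).sum(axis=0)
# Step 2: each demand point sums the ratios of stations within its reach.
access = (within * ratio[None, :]).sum(axis=1)

# Sanity property of 2SFCA with binary weights: population-weighted
# accessibility totals the supply (6 ambulances here).
total = (access * demand).sum()
```

The gravity-based variant replaces the binary `within` matrix with a distance-decay weight, but the two aggregation steps are the same.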

  18. Parallel circuit simulation on supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Saleh, R.A.; Gallivan, K.A. (Illinois Univ., Urbana, IL (USA). Center for Supercomputing Research and Development); Chang, M.C. (Texas Instruments, Inc., Dallas, TX (USA)); Hajj, I.N.; Trick, T.N. (Illinois Univ., Urbana, IL (USA). Coordinated Science Lab.); Smart, D. (Semiconductor Div., Analog Devices, Wilmington, MA (US))

    1989-12-01

    Circuit simulation is a very time-consuming and numerically intensive application, especially when the problem size is large as in the case of VLSI circuits. To improve the performance of circuit simulators without sacrificing accuracy, a variety of parallel processing algorithms have been investigated due to the recent availability of a number of commercial multiprocessor machines. In this paper, research in the field of parallel circuit simulation is surveyed and the ongoing research in this area at the University of Illinois is described. Both standard and relaxation-based approaches are considered. In particular, the forms of parallelism available within the direct method approach, used in programs such as SPICE2 and SLATE, and within the relaxation-based approaches, such as waveform relaxation, iterated timing analysis, and waveform-relaxation-Newton, are described. The specific implementation issues addressed here are primarily related to general-purpose multiprocessors with a shared-memory architecture having a limited number of processors, although many of the comments also apply to a number of other architectures.

  19. An educational tool for interactive parallel and distributed processing

    DEFF Research Database (Denmark)

    Pagliarini, Luigi; Lund, Henrik Hautop

    2012-01-01

    In this article we try to describe how the modular interactive tiles system (MITS) can be a valuable tool for introducing students to interactive parallel and distributed processing programming. This is done by providing a hands-on educational tool that allows a change in the representation... of abstract problems related to designing interactive parallel and distributed systems. Indeed, the MITS seems to bring a series of goals into education, such as parallel programming, distributedness, communication protocols, master dependency, software behavioral models, adaptive interactivity, feedback... interactive parallel and distributed processing with different behavioral software models such as open loop, randomness-based, rule-based, user interaction-based, and AI- and ALife-based software.

  20. schwimmbad: A uniform interface to parallel processing pools in Python

    Science.gov (United States)

    Price-Whelan, Adrian M.; Foreman-Mackey, Daniel

    2017-09-01

    Many scientific and computing problems require doing some calculation on all elements of some data set. If the calculations can be executed in parallel (i.e. without any communication between calculations), these problems are said to be perfectly parallel. On computers with multiple processing cores, these tasks can be distributed and executed in parallel to greatly improve performance. A common paradigm for handling these distributed computing problems is to use a processing "pool": the "tasks" (the data) are passed in bulk to the pool, and the pool handles distributing the tasks to a number of worker processes when available. schwimmbad provides a uniform interface to parallel processing pools and enables switching easily between local development (e.g., serial processing or with multiprocessing) and deployment on a cluster or supercomputer (via, e.g., MPI or JobLib).
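    The pool paradigm described above can be sketched with Python's standard-library multiprocessing module; schwimmbad wraps the same map-style interface across serial, multiprocessing, and MPI back ends. The worker function and task values here are illustrative, not taken from the package:

    ```python
    from multiprocessing import Pool

    def worker(task):
        # A perfectly parallel task: it needs no communication with
        # the other tasks, only its own input.
        return task * task

    def run(tasks, processes=4):
        # The pool distributes tasks to worker processes and collects
        # the results in input order.
        with Pool(processes=processes) as pool:
            return pool.map(worker, tasks)

    if __name__ == "__main__":
        print(run(range(8)))
    ```

    Switching deployment targets then amounts to swapping the pool object while the worker code stays unchanged, which is exactly the uniformity schwimmbad provides.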

  1. A Prototype Embedded Microprocessor Interconnect for Distributed and Parallel Computing

    Directory of Open Access Journals (Sweden)

    Bryan Hughes

    2008-08-01

    Full Text Available Parallel computing is currently undergoing a transition from a niche use to widespread acceptance due to new, computationally intensive applications and multi-core processors. While parallel processing is an invaluable tool for increasing performance, more time and expertise are required to develop a parallel system than are required for sequential systems. This paper discusses a toolkit currently in development that will simplify both the hardware and software development of embedded distributed and parallel systems. The hardware interconnection mechanism uses the Serial Peripheral Interface as a physical medium and provides routing and management services for the system. The topics in this paper are primarily limited to the interconnection aspect of the toolkit.

  2. Strategies for retargeting of existing sequential programs for parallel processing

    Energy Technology Data Exchange (ETDEWEB)

    Leu, J.S.

    1987-01-01

    There are relative advantages and disadvantages of small-grain and large-grain parallelism. It is well established that, for MIMD machines, small-grain parallelism is not recommended because of the associated excessive interprocessor communication overhead. On the other hand, the large-grain approach does not provide an adequate degree of parallelism and may not provide the necessary speedup. The author adopted an optimal-grain approach such that the parallelism obtained at the small-grain level is retained while minimizing the communication overhead. The idea is to employ a systematic approach in partitioning an existing program into a set of large grains such that the best performance in terms of total execution time is achieved by evaluating the tradeoff between parallelism and communication cost. To do this effectively, a model is introduced that can accurately represent the possible communication between various computational units of a program, and can measure possible computational overlap between interacting computational units. The tradeoff between parallelism and communication cost leads to an improved performance.
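    The parallelism-versus-communication tradeoff can be illustrated with a deliberately simplified cost model (all numbers and the one-overhead-per-grain assumption are hypothetical, not the thesis's actual model): too few grains waste processors, too many grains pay too much communication, and an intermediate grain count minimizes total time.

    ```python
    import math

    def estimated_time(n_tasks, t_task, grains, n_procs, comm_cost):
        """Model total execution time when n_tasks small units are
        merged into `grains` large grains run on n_procs processors.
        Each grain pays one fixed communication overhead (a deliberate
        simplification)."""
        work_per_grain = math.ceil(n_tasks / grains) * t_task
        rounds = math.ceil(grains / n_procs)  # grains execute in rounds
        return rounds * (work_per_grain + comm_cost)

    # Sweep candidate grain counts for a hypothetical workload of 64
    # unit tasks on 8 processors with a per-grain overhead of 2.0.
    best_grains = min(range(1, 65),
                      key=lambda g: estimated_time(64, 1.0, g, 8, 2.0))
    ```

    For this toy workload the sweep settles on one grain per processor; richer communication models shift the optimum, which is the evaluation the paper's partitioning approach performs systematically.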

  3. Cervical cancer screening policies and coverage in Europe

    DEFF Research Database (Denmark)

    Anttila, Ahti; von Karsa, Lawrence; Aasmaa, Auni

    2009-01-01

    The aim of the study was to compare current policy, organisation and coverage of cervical cancer screening programmes in the European Union (EU) member states with European and other international recommendations. According to the questionnaire-based survey, there are large variations in cervical...... with education, training and communication among women, medical professionals and authorities are required, accordingly. The study indicates that, despite substantial efforts, the recommendations of the Council of the EU on organised population-based screening for cervical cancer are not yet fulfilled. Decision......-makers and health service providers should consider stronger measures or incentives in order to improve cervical cancer control in Europe....

  4. Syrian refugees in Lebanon: the search for universal health coverage.

    Science.gov (United States)

    Blanchet, Karl; Fouad, Fouad M; Pherali, Tejendra

    2016-01-01

    The crisis in Syria has forced more than 4 million people to find refuge outside Syria. In Lebanon, in 2015, the refugee population represented 30% of the total population. International health assistance has been provided to refugee populations in Lebanon. However, the current humanitarian system has also contributed to increased fragmentation of the Lebanese health system. Ensuring universal health coverage to vulnerable Lebanese, Syrian, and Palestinian refugees will require Lebanon to redistribute the key functions and responsibilities of the Ministry of Health and its partners to generate more coherence and efficiency.

  5. A QoS-Guaranteed Coverage Precedence Routing Algorithm for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Jiun-Chuan Lin

    2011-03-01

    Full Text Available For mission-critical applications of wireless sensor networks (WSNs involving extensive battlefield surveillance, medical healthcare, etc., it is crucial to have low-power, new protocols, methodologies and structures for transferring data and information in a network with full sensing coverage capability for an extended working period. The utmost mission is to ensure that the network is fully functional, providing reliable transmission of the sensed data without the risk of data loss. WSNs have been applied to various types of mission-critical applications. Coverage preservation is one of the most essential functions to guarantee quality of service (QoS in WSNs. However, a tradeoff exists between sensing coverage and network lifetime due to the limited energy supplies of sensor nodes. In this study, we propose a routing protocol to accommodate both energy balance and coverage preservation for sensor nodes in WSNs. The energy consumption for radio transmissions and the residual energy over the network are taken into account when the proposed protocol determines an energy-efficient route for a packet. The simulation results demonstrate that the proposed protocol is able to increase the duration of the on-duty network and provide up to 98.3% and 85.7% of extra service time with 100% sensing coverage ratio compared with the LEACH and LEACH-Coverage-U protocols, respectively.
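    The idea of weighing residual energy against transmission cost when choosing a route can be sketched as a next-hop scoring rule. This is a hypothetical illustration of the general principle; the function name, the linear score, and the weight `alpha` are assumptions, not the paper's actual metric:

    ```python
    def choose_next_hop(neighbors, alpha=0.5):
        """Pick the neighbor maximizing a score that rewards residual
        energy (to balance load) and penalizes radio transmission cost.
        `neighbors` maps node id -> (residual_energy, tx_cost)."""
        def score(node):
            energy, cost = neighbors[node]
            return alpha * energy - (1 - alpha) * cost
        return max(neighbors, key=score)

    # Node B is energy-rich at a modest extra transmission cost, so a
    # balanced weighting routes through it instead of draining C.
    hops = {"A": (2.0, 1.0), "B": (5.0, 2.0), "C": (1.0, 0.5)}
    ```

    Setting `alpha` to 0 recovers pure minimum-cost routing, which is exactly the behavior that shortens network lifetime by repeatedly draining the cheapest nodes.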

  6. First-line treatment with cephalosporins in spontaneous bacterial peritonitis provides poor antibiotic coverage

    DEFF Research Database (Denmark)

    Novovic, Srdan; Semb, Synne; Olsen, Henrik

    2012-01-01

    Objective. Spontaneous bacterial peritonitis is a common infection in cirrhosis, associated with a high mortality. Third-generation cephalosporins are recommended as first-line treatment. The aim was to evaluate the epidemiology of microbiological ascitic fluid findings and antimicrobial...... resistance in Denmark. Material and Methods. All patients with cirrhosis and a positive ascitic fluid culture, at three university hospitals in the Copenhagen area during a 7-year period, were retrospectively evaluated. Patients with apparent secondary peritonitis were excluded from the study. Results. One...

  7. Analysis of aeromedical retrieval coverage using elliptical isochrones: An evaluation of helicopter fleet size configurations in Scotland.

    Science.gov (United States)

    Dodds, Naomi; Emerson, Philip; Phillips, Stephanie; Green, David R; Jansen, Jan O

    2017-03-01

    Trauma systems in remote and rural regions often rely on helicopter emergency medical services to facilitate access to definitive care. The siting of such resources is key, but often relies on simplistic modeling of coverage, using circular isochrones. Scotland is in the process of implementing a national trauma network, and there have been calls for an expansion of aeromedical retrieval capacity. The aim of this study was to analyze population and area coverage of the current retrieval service configuration, with three aircraft, and a configuration with an additional helicopter, in the North East of Scotland, using a novel methodology. Both overall coverage and coverage by physician-staffed aircraft, with enhanced clinical capability, were analyzed. This was a geographical analysis based on calculation of elliptical isochrones, which consider the "open-jaw" configuration of many retrieval flights. Helicopters are not always based at hospitals. We modeled coverage based on different outbound and inbound flights. Areally referenced population data were obtained from the Scottish Government. The current helicopter network configuration provides 94.2% population coverage and 59.0% area coverage. The addition of a fourth helicopter would marginally increase population coverage to 94.4% and area coverage to 59.1%. However, when considering only physician-manned aircraft, the current configuration provides only 71.7% population coverage and 29.4% area coverage, which would be increased to 91.1% and 51.2%, respectively, with a second aircraft. Scotland's current helicopter network configuration provides good population coverage for retrievals to major trauma centers, which would only be increased minimally by the addition of a fourth aircraft in the North East. The coverage provided by the single physician-staffed aircraft is more limited, however, and would be increased considerably by a second physician-staffed aircraft in the North East. 

  8. Increasing coverage and decreasing inequity in insecticide-treated bed net use among rural Kenyan children.

    Directory of Open Access Journals (Sweden)

    Abdisalan M Noor

    2007-08-01

    Full Text Available Inexpensive and efficacious interventions that avert childhood deaths in sub-Saharan Africa have failed to reach effective coverage, especially among the poorest rural sectors. One particular example is insecticide-treated bed nets (ITNs). In this study, we present repeat observations of ITN coverage among rural Kenyan homesteads exposed at different times to a range of delivery models, and assess changes in coverage across socioeconomic groups. We undertook a study of annual changes in ITN coverage among a cohort of 3,700 children aged 0-4 y in four districts of Kenya (Bondo, Greater Kisii, Kwale, and Makueni) between 2004 and 2006. Cross-sectional surveys of ITN coverage were undertaken coincident with the incremental availability of commercial sector nets (2004), the introduction of heavily subsidized nets through clinics (2005), and the introduction of free mass distributed ITNs (2006). The changing prevalence of ITN coverage was examined with special reference to the degree of equity in each delivery approach. ITN coverage was only 7.1% in 2004 when the predominant source of nets was the commercial retail sector. By the end of 2005, following the expansion of the heavily subsidized clinic distribution system, ITN coverage rose to 23.5%. In 2006 a large-scale mass distribution of ITNs was mounted providing nets free of charge to children, resulting in a dramatic increase in ITN coverage to 67.3%. With each subsequent survey, socioeconomic inequity in net coverage sequentially decreased: 2004 (most poor [2.9%] versus least poor [15.6%]; concentration index 0.281); 2005 (most poor [17.5%] versus least poor [37.9%]; concentration index 0.131); and 2006 with near-perfect equality (most poor [66.3%] versus least poor [66.6%]; concentration index 0.000). The free mass distribution method achieved the highest coverage among the poorest children, the heavily subsidized clinic nets programme was marginally in favour of the least poor, and the commercial

  9. Public Health Workers and Vaccination Coverage in Eastern China: A Health Economic Analysis

    Directory of Open Access Journals (Sweden)

    Yu Hu

    2014-05-01

    Full Text Available Background: Vaccine-preventable diseases cause more than one million deaths among children under 5 years of age every year. Public Health Workers (PHWs are needed to provide immunization services, but the role of human resources for public health as a determinant of vaccination coverage at the population level has not been assessed in China. The objective of this study was to test whether PHW density was positively associated with childhood vaccination coverage in Zhejiang Province, East China. Methods: The vaccination coverage rates of Measles Containing Vaccine (MCV), Diphtheria, Tetanus and Pertussis combined vaccine (DTP), and Poliomyelitis Vaccine (PV) were chosen as the dependent variables. Vaccination coverage data of children aged 13–24 months for each county in Zhejiang Province were taken from the Zhejiang Immunization Information System (ZJIIS). Aggregate PHW density was an independent variable in one set of regressions, and Vaccine Personnel (VP) and other PHW densities were used separately in another set. Data on densities of PHW and VP were taken from a national investigation on EPI launched by the Ministry of Health of China in 2013. We controlled for other determinants that may influence vaccination coverage, such as Gross Domestic Product (GDP) per person, the proportion of migrant children aged <7 years, and land area. These data were taken from the Zhejiang Provincial Bureau of Statistics and ZJIIS. Results: PHW density significantly influenced the coverage rates of MCV (Adjusted Odds Ratio (AOR) = 4.29), DTP3 (AOR = 2.16), and PV3 (AOR = 3.30). However, when the effects of VPs and other PHWs were assessed separately, we found that VP density was significantly associated with coverage of all three vaccinations (MCV AOR = 7.05; DTP3 AOR = 1.82; PV3 AOR = 4.83), while other PHW density was not. The proportion of migrant children aged <7 years and land area were negative and significant determinants of vaccination coverage, while GDP per person had

  10. Medicare clarified support surface policies and coverage requirements.

    Science.gov (United States)

    Schaum, Kathleen D

    2010-07-01

    Before providers order pressure-reducing support surfaces for Medicare beneficiaries, they should obtain and read (1) the LCD and attached articles that pertain to their DME MAC jurisdiction and (2) the Special Edition SE1014 educational article released by the Medicare Learning Network of CMS. Providers should be sure that the patient's medical record contains the required order (including the dated and signed physician order) and documentation that proves medical necessity for the support surface ordered. The OIG report has identified that a large percentage of medical records are deficient in this area. Now CMS has provided special education about their order, coverage, and documentation requirements. The OIG report and the CMS educational article should serve as a warning that audits on this topic are likely. Providers should take time to review the pressure-reducing support documents and immediately refine their support surface ordering and documentation.

  11. Cost-effectiveness of full coverage of aromatase inhibitors for Medicare beneficiaries with early breast cancer.

    Science.gov (United States)

    Ito, Kouta; Elkin, Elena; Blinder, Victoria; Keating, Nancy; Choudhry, Niteesh

    2013-07-01

    Rates of nonadherence to aromatase inhibitors (AIs) among Medicare beneficiaries with hormone receptor-positive early breast cancer are high. Out-of-pocket drug costs appear to be an important contributor to this and may be addressed by eliminating copayments and other forms of patient cost sharing. The authors estimated the incremental cost-effectiveness of providing Medicare beneficiaries with full prescription coverage for AIs compared with usual prescription coverage under the Medicare Part D program. A Markov state-transition model was developed to simulate AI use and disease progression in a hypothetical cohort of postmenopausal Medicare beneficiaries with hormone receptor-positive early breast cancer. The analysis was conducted from the societal perspective and considered a lifetime horizon. The main outcome was an incremental cost-effectiveness ratio, which was measured as the cost per quality-adjusted life-year (QALY) gained. For patients receiving usual prescription coverage, average quality-adjusted survival was 11.35 QALYs, and lifetime costs were $83,002. For patients receiving full prescription coverage, average quality-adjusted survival was 11.38 QALYs, and lifetime costs were $82,728. Compared with usual prescription coverage, full prescription coverage would result in greater quality-adjusted survival (0.03 QALYs) and less resource use ($275) per beneficiary. From the perspective of Medicare, full prescription coverage was cost-effective (incremental cost-effectiveness ratio, $15,128 per QALY gained) but not cost saving. Providing full prescription coverage for AIs to Medicare beneficiaries with hormone receptor-positive early breast cancer would both improve health outcomes and save money from the societal perspective. Copyright © 2013 American Cancer Society.
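    The headline result above is an incremental cost-effectiveness calculation. A minimal sketch using the societal-perspective figures reported in the abstract itself (costs and QALYs as stated; the function is a generic ICER, not the authors' full Markov model):

    ```python
    def icer(cost_new, cost_old, qaly_new, qaly_old):
        """Incremental cost-effectiveness ratio: extra cost per extra
        QALY. A negative cost difference with a QALY gain means the new
        strategy is dominant (cheaper and more effective)."""
        return (cost_new - cost_old) / (qaly_new - qaly_old)

    # Societal-perspective figures from the abstract: full coverage
    # costs $82,728 for 11.38 QALYs; usual coverage costs $83,002 for
    # 11.35 QALYs.
    delta_cost = 82728 - 83002   # negative: full coverage is cheaper
    delta_qaly = 11.38 - 11.35   # positive: full coverage adds QALYs
    ```

    Because the cost difference is negative while quality-adjusted survival rises, full coverage dominates from the societal perspective; only from Medicare's narrower perspective does a positive ratio ($15,128 per QALY) appear and need comparing against a willingness-to-pay threshold.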

  12. A Survey of Coverage Problems in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Junbin LIANG

    2014-01-01

    Full Text Available The coverage problem is an important issue in wireless sensor networks, with a great impact on their performance. Given a sensor network, the coverage problem is to determine how well the sensing field is monitored or tracked by sensors. In this paper, we classify the coverage problem into three categories: area coverage, target coverage, and barrier coverage, and give detailed descriptions of algorithms belonging to these three categories. Moreover, we specify the advantages and disadvantages of the existing classic algorithms, which can give a useful direction in this area.

  13. WFC3 Infrared Spectroscopic Parallel Survey WISP: A Survey of Star Formation Across Cosmic Time

    Science.gov (United States)

    Malkan, Matthew

    2013-10-01

    Our WFC3 Infrared Spectroscopic Parallels {WISPs} have shown the power of slitless spectroscopy to probe galaxy evolution from z > 0.5. WISP is particularly sensitive to low-mass, metal-poor, galaxies with extreme star formation rates. These are missed by conventional continuum-selected surveys. The broad, continuous, spectral coverage of the G102 and G141 grisms {0.8-1.7 um} provides the best measurement of the de-reddened star formation rate, and the mass-metallicity relation, throughout this epoch, over which ground-based searches are severely limited. We propose to extend this cost-effective WFC3 Survey by using 375 pure parallel orbits for grism spectroscopy in 50 deep {4-5 orbit} and 50 shallow {3-orbit} fields. This will complete a sample of 6000 galaxies with [OII], [OIII], Ha, Hb, or [SII] in the redshift desert. Our primary science goals are: {1} Derive the extinction-corrected Ha luminosity function, and the resulting cosmic history of star formation, across 0.5 < z < 1 down to low masses, with the support of our ongoing ground-based follow-up. {3} Examine the role of metal-poor dwarfs and extreme starbursts in galaxy assembly. {4} Use the Balmer break and D4000 diagnostics to find and determine the ages of absorption-line galaxies down to J=24-25. {5} Search for rare objects such as Lya emitters at z>6, reddened AGN, close physical pairs of galaxies, T- and Y-dwarf stars {of which we have already found three}. The WISP value-added public data release is likely to be one of Hubble's major legacies of 0.8-1.7 um spectroscopy.

  14. Parallelizing TTree::Draw functionality with PROOF

    CERN Document Server

    Marinaci, Stefano

    2014-01-01

    In ROOT, the software context of this project, multi-threading is not currently an easy option, because ROOT is not by construction thread-aware and thread-safeness can only be achieved with heavy locking. Therefore, for a ROOT task, multi-processing is currently the most effective way to achieve concurrency. Multi-processing in ROOT is done via PROOF. PROOF is used to enable interactive analysis of large sets of ROOT files in parallel on clusters of computers or many-core machines. More generally PROOF can parallelize tasks that can be formulated as a set of independent sub-tasks. The PROOF technology is rather efficient at exploiting all the CPUs provided by many-core processors. A dedicated version of PROOF, PROOF-Lite, provides an out-of-the-box solution to take full advantage of the additional cores available in today's desktops or laptops. One of the items on the PROOF plan of work is to improve the integration of PROOF-Lite for local processing of ROOT trees. In this project we investigate the case of the Draw ...

  15. Parallel Task Processing on a Multicore Platform in a PC-based Control System for Parallel Kinematics

    Directory of Open Access Journals (Sweden)

    Yannick Dadji

    2009-02-01

    Full Text Available Multicore platforms have one physical processor chip with multiple cores interconnected via a chip-level bus. Because they deliver greater computing power through concurrency and offer greater system density, multicore platforms are well qualified to address the performance bottleneck encountered in PC-based control systems for parallel kinematic robots with heavy CPU load. Heavy-load control tasks are generated by new control approaches that include features like singularity prediction, structure control algorithms, vision data integration and similar tasks. In this paper we introduce the parallel task scheduling extension of a communication architecture specially tailored for the development of PC-based control of parallel kinematics. The scheduling is specially designed for processing on a multicore platform. It breaks down the serial task processing of the robot control cycle and extends it with parallel task processing paths in order to enhance the overall control performance.
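    The fork-join pattern behind such parallel task processing paths can be sketched with Python's standard-library executor: the independent heavy tasks of one control cycle run concurrently, then join before the next cycle begins. The task names and the use of threads here are illustrative assumptions; the paper targets its own PC-based scheduler, not this API:

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def control_cycle(tasks):
        """Run the independent heavy tasks of one control cycle in
        parallel, then join before the cycle deadline. `tasks` is a
        list of zero-argument callables; results come back in task
        order."""
        with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
            return list(pool.map(lambda task: task(), tasks))

    # Hypothetical per-cycle workloads (singularity prediction, vision
    # data integration, ...) stand in as simple callables here.
    cycle_tasks = [lambda: "singularity-ok", lambda: "vision-frame"]
    ```

    The join at the end of each cycle is what preserves the control loop's timing contract while the heavy work overlaps on separate cores.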

  16. PALNS - A software framework for parallel large neighborhood search

    DEFF Research Database (Denmark)

    Røpke, Stefan

    2009-01-01

    This paper proposes a simple, parallel, portable software framework for the metaheuristic named large neighborhood search (LNS). The aim is to provide a framework where the user has to set up a few data structures and implement a few functions and then the framework provides a metaheuristic where...
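    The division of labor the framework asks of the user can be sketched as a serial LNS skeleton: the user supplies cost, destroy, and repair functions, and the framework supplies the search loop. This is a sketch of the metaheuristic itself on a toy problem, not the PALNS API, and the accept-if-better rule is the simplest possible acceptance criterion:

    ```python
    import random

    def lns(initial, cost, destroy, repair, iterations=200, seed=1):
        """Generic large neighborhood search loop: destroy part of the
        incumbent, repair it, and accept the candidate if it improves
        the cost."""
        rng = random.Random(seed)
        best = initial
        for _ in range(iterations):
            candidate = repair(destroy(best, rng), rng)
            if cost(candidate) < cost(best):
                best = candidate
        return best

    # Toy problem: order a list to minimize the sum of adjacent gaps.
    def cost(perm):
        return sum(abs(b - a) for a, b in zip(perm, perm[1:]))

    def destroy(perm, rng):
        # Remove one randomly chosen element from the permutation.
        perm = perm[:]
        removed = perm.pop(rng.randrange(len(perm)))
        return perm, removed

    def repair(state, rng):
        # Greedy repair: reinsert the removed element at the slot
        # that yields the cheapest complete permutation.
        partial, removed = state
        slots = [partial[:i] + [removed] + partial[i:]
                 for i in range(len(partial) + 1)]
        return min(slots, key=cost)

    improved = lns([3, 1, 4, 1, 5, 9, 2, 6], cost, destroy, repair)
    ```

    A parallel variant, as in PALNS, runs several destroy/repair iterations concurrently against a shared incumbent, which is why the framework only needs these few user-supplied functions.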

  17. Coverage threshold for laser-induced lithography

    Science.gov (United States)

    Martins, Weliton S.; Oriá, Marcos; Passerat de Silans, Thierry; Chevrollier, Martine

    2017-05-01

    Recent experimental observations of laser-induced adsorption at the interface between an alkali vapor and a dielectric surface have demonstrated the possibility of growing metallic films of nanometric thickness on dielectric surfaces, with arbitrary shapes determined by the intensity profile of the light. The mechanisms directly responsible for the accumulation of atoms at the irradiated surface have been shown to involve photo-ionization of atoms very close to the surface. However, the existence of a vapor-pressure threshold for initiating the film growth still raises questions on the processes occurring at the surface. In this letter, we report on the observation that the vapor-pressure threshold corresponds to a minimum adatom coverage necessary for the surface to effectively neutralize the incoming ions and make possible the growth of a multilayer film. We discuss the hypothesis that the coverage threshold is a surface conductivity threshold.

  18. Interpregnancy intervals: impact of postpartum contraceptive effectiveness and coverage.

    Science.gov (United States)

    Thiel de Bocanegra, Heike; Chang, Richard; Howell, Mike; Darney, Philip

    2014-04-01

    The purpose of this study was to determine the use of contraceptive methods, defined by effectiveness and length of coverage, and their association with short interpregnancy intervals, controlling for provider type and client demographics. We identified a cohort of 117,644 women from the 2008 California Birth Statistical Master file with a second- or higher-order birth and at least 1 Medicaid (Family Planning, Access, Care, and Treatment [Family PACT] program or Medi-Cal) claim within 18 months after the index birth. We explored the effect of contraceptive method provision on the odds of having an optimal interpregnancy interval and controlled for covariates. The average length of contraceptive coverage was 3.81 months (SD = 4.84). Most women received user-dependent hormonal contraceptives as their most effective contraceptive method (55%; n = 65,103 women) and one-third (33%; n = 39,090 women) had no contraceptive claim. Women who used long-acting reversible contraceptive methods had 3.89 times the odds and women who used user-dependent hormonal methods had 1.89 times the odds of achieving an optimal birth interval compared with women who used barrier methods only; women with no method had 0.66 times the odds. When user-dependent methods are considered, the odds of having an optimal birth interval increased by 8% for each additional month of contraceptive coverage (odds ratio, 1.08; 95% confidence interval, 1.08-1.09). Women who were seen by Family PACT or by both Family PACT and Medi-Cal providers had significantly higher odds of optimal birth intervals compared with women who were served by Medi-Cal only. To achieve optimal birth spacing and ultimately to improve birth outcomes, attention should be given to contraceptive counseling and access to contraceptive methods in the postpartum period. Copyright © 2014 Mosby, Inc. All rights reserved.

  19. State Medicaid Expansion Tobacco Cessation Coverage and Number of Adult Smokers Enrolled in Expansion Coverage - United States, 2016.

    Science.gov (United States)

    DiGiulio, Anne; Haddix, Meredith; Jump, Zach; Babb, Stephen; Schecter, Anna; Williams, Kisha-Ann S; Asman, Kat; Armour, Brian S

    2016-12-09

    In 2015, 27.8% of adult Medicaid enrollees were current cigarette smokers, compared with 11.1% of adults with private health insurance, placing Medicaid enrollees at increased risk for smoking-related disease and death (1). In addition, smoking-related diseases are a major contributor to Medicaid costs, accounting for about 15% (>$39 billion) of annual Medicaid spending during 2006-2010 (2). Individual, group, and telephone counseling and seven Food and Drug Administration (FDA)-approved medications are effective treatments for helping tobacco users quit (3). Insurance coverage for tobacco cessation treatments is associated with increased quit attempts, use of cessation treatments, and successful smoking cessation (3); this coverage has the potential to reduce Medicaid costs (4). However, barriers such as requiring copayments and prior authorization for treatment can impede access to cessation treatments (3,5). As of July 1, 2016, 32 states (including the District of Columbia) have expanded Medicaid eligibility through the Patient Protection and Affordable Care Act (ACA),*,† which has increased access to health care services, including cessation treatments (5). CDC used data from the Centers for Medicare and Medicaid Services (CMS) Medicaid Budget and Expenditure System (MBES) and the Behavioral Risk Factor Surveillance System (BRFSS) to estimate the number of adult smokers enrolled in Medicaid expansion coverage. To assess cessation coverage among Medicaid expansion enrollees, the American Lung Association collected data on coverage of, and barriers to accessing, evidence-based cessation treatments. As of December 2015, approximately 2.3 million adult smokers were newly enrolled in Medicaid because of Medicaid expansion. As of July 1, 2016, all 32 states that have expanded Medicaid eligibility under ACA covered some cessation treatments for all Medicaid expansion enrollees, with nine states covering all nine cessation treatments for all Medicaid expansion

  20. Out-of-order parallel discrete event simulation for electronic system-level design

    CERN Document Server

    Chen, Weiwei

    2014-01-01

    This book offers readers a set of new approaches, tools, and techniques for facing the challenges of parallelization in the design of embedded systems. It provides an advanced parallel simulation infrastructure for efficient and effective system-level model validation and development so as to build better products in less time. Since parallel discrete event simulation (PDES) has the potential to exploit the underlying parallel computational capability in today's multi-core simulation hosts, the author begins by reviewing the parallelization of discrete event simulation, identifying
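    The loop that PDES parallelizes is the classic event-queue kernel: pop the earliest-timestamped event, run its handler, and schedule any events it produces. A minimal sequential sketch (not the book's infrastructure; the periodic "tick" handler is an illustrative assumption):

    ```python
    import heapq

    def simulate(initial_events, handler, horizon):
        """Minimal sequential discrete event simulation core. Events
        are (timestamp, name) tuples kept in a min-heap; the handler
        may return new events to schedule. PDES distributes this loop
        across cores while preserving timestamp order."""
        queue = list(initial_events)
        heapq.heapify(queue)
        trace = []
        while queue:
            timestamp, name = heapq.heappop(queue)
            if timestamp > horizon:
                break
            trace.append((timestamp, name))
            for event in handler(timestamp, name):
                heapq.heappush(queue, event)
        return trace

    # A periodic "tick" that reschedules itself every 10 time units.
    def tick_handler(timestamp, name):
        return [(timestamp + 10, name)]
    ```

    The difficulty the book addresses is visible even here: once events execute on different cores, naively processing them out of timestamp order breaks causality, which is what conservative and out-of-order PDES schemes are designed to manage safely.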

  1. Chemically grafted carbon nanotube surface coverage gradients.

    Science.gov (United States)

    Shearer, Cameron J; Ellis, Amanda V; Shapter, Joseph G; Voelcker, Nicolas H

    2010-12-07

    Two approaches to producing gradients of vertically aligned single-walled carbon nanotubes (SWCNTs) on silicon surfaces by chemical grafting are presented here. The first approach involves the use of a porous silicon (pSi) substrate featuring a pore size gradient, which is functionalized with 3-aminopropyltriethoxysilane (APTES). Carboxylated SWCNTs are then immobilized on the topography gradient via carbodiimide coupling. Our results show that as the pSi pore size and porosity increase across the substrate the SWCNT coverage decreases concurrently. In contrast, the second gradient is an amine-functionality gradient produced by means of vapor-phase diffusion of APTES from a reservoir onto a silicon wafer where APTES attachment changes as a function of distance from the APTES reservoir. Carboxylated SWCNTs are then immobilized via carbodiimide coupling to the amine-terminated silicon gradient. Our observations confirm that with decreasing APTES density on the surface the coverage of the attached SWCNTs also decreases. These gradient platforms pave the way for the time-efficient optimization of SWCNT coverage for applications ranging from field emission to water filtration to drug delivery.

  2. Coverage, continuity, and visual cortical architecture.

    Science.gov (United States)

    Keil, Wolfgang; Wolf, Fred

    2011-12-29

    The primary visual cortex of many mammals contains a continuous representation of visual space, with a roughly repetitive aperiodic map of orientation preferences superimposed. It was recently found that orientation preference maps (OPMs) obey statistical laws which are apparently invariant among species widely separated in eutherian evolution. Here, we examine whether one of the most prominent models for the optimization of cortical maps, the elastic net (EN) model, can reproduce this common design. The EN model generates representations which optimally trade off stimulus space coverage and map continuity. While this model has been used in numerous studies, no analytical results about the precise layout of the predicted OPMs have been obtained so far. We present a mathematical approach to analytically calculate the cortical representations predicted by the EN model for the joint mapping of stimulus position and orientation. We find that in all the previously studied regimes, predicted OPM layouts are perfectly periodic. An unbiased search through the EN parameter space identifies a novel regime of aperiodic OPMs with pinwheel densities lower than found in experiments. In an extreme limit, aperiodic OPMs quantitatively resembling experimental observations emerge. Stabilization of these layouts results from strong nonlocal interactions rather than from a coverage-continuity-compromise. Our results demonstrate that optimization models for stimulus representations dominated by nonlocal suppressive interactions are in principle capable of correctly predicting the common OPM design. They question that visual cortical feature representations can be explained by a coverage-continuity-compromise.

  3. Applied Parallel Computing Industrial Computation and Optimization

    DEFF Research Database (Denmark)

    Madsen, Kaj; Olesen, Dorte

    Proceedings of the Third International Workshop on Applied Parallel Computing in Industrial Problems and Optimization (PARA96).

  4. Massively parallel implementation of character recognition systems

    Science.gov (United States)

    Garris, Michael D.; Wilson, Charles L.; Blue, James L.; Candela, Gerald T.; Grother, Patrick J.; Janet, Stanley A.; Wilkinson, R. A.

    1992-08-01

    A massively parallel character recognition system has been implemented. The system is designed to study the feasibility of the recognition of handprinted text in a loosely constrained environment. The NIST handprint database, NIST Special Database 1, is used to provide test data for the recognition system. The system consists of eight functional components. The loading of the image into the system and storing the recognition results from the system are I/O components. In between are components responsible for image processing and recognition. The first image processing component is responsible for image correction for scale and rotation, data field isolation, and character data location within each field; the second performs character segmentation; and the third does character normalization. Three recognition components are responsible for feature extraction and character reconstruction, neural network-based character recognition, and low-confidence classification rejection. The image processing to load and isolate 34 fields on a scientific workstation takes 900 seconds. The same processing takes only 11 seconds using a massively parallel array processor. The image processing components, including the time to load the image data, use 94% of the system time. The segmentation time is 15 ms/character and segmentation accuracy is 89% for handprinted digits and alphas. Character recognition accuracy for medium quality machine print is 99.8%. On handprinted digits, the recognition accuracy is 96% and recognition speeds of 10,100 characters/second can be realized. The limiting factor in the recognition portion of the system is feature extraction, which occurs at 806 characters/second. Through the use of a massively parallel machine and neural recognition algorithms, significant improvements in both accuracy and speed have been achieved, making this technology effective as a replacement for key data entry in existing data capture systems.

  5. The parallel volume at large distances

    DEFF Research Database (Denmark)

    Kampf, Jürgen

    In this paper we examine the asymptotic behavior of the parallel volume of planar non-convex bodies as the distance tends to infinity. We show that the difference between the parallel volume of the convex hull of a body and the parallel volume of the body itself tends to 0. This yields a new proof for the fact that a planar body can only have polynomial parallel volume if it is convex. Extensions to Minkowski spaces and random sets are also discussed.

  7. Parallel Graph Transformation based on Merged Approach

    Directory of Open Access Journals (Sweden)

    Asmaa Aouat

    2013-01-01

    Graph transformation is one of the key concepts in graph grammar. To accelerate graph transformation, the concept of parallel graph transformation has been implemented in tools such as AGG. The theory of parallel graph transformation used by AGG, however, only clarifies the concepts of conflict and dependency between transformation rules. This work proposes an approach to parallel graph transformation that enables dependent transformation rules to be executed in parallel.

  8. 1990 point population coverage for the Conterminous United States

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This is a point coverage of the 1990 Census of Population and Housing for the conterminous United States. (Alaska and Hawaii are available separately). The coverage...

  9. 5 CFR 847.204 - Elections of FERS coverage.

    Science.gov (United States)

    2010-01-01

    ...) from an FERS-covered position to an NAFI may elect to continue FERS coverage. (b) An employee who elects FERS coverage under this section will be covered by FERS during all periods of future service not...

  10. Coverage and Compliance of Mass Drug Administration in Lymphatic Filariasis: A Comparative Analysis in a District of West Bengal, India

    Directory of Open Access Journals (Sweden)

    Tanmay Kanti Panja

    2012-01-01

    Background: Despite several rounds of Mass Drug Administration (MDA) as an elimination strategy for Lymphatic Filariasis (LF) in India, coverage still falls far short of the required level of 85%. Objectives: The present study was carried out to assess the coverage and compliance of MDA and their possible determinants. Methods: A cross-sectional community-based study was conducted in Paschim Midnapur district of West Bengal, India for two consecutive years following MDA. Study participants were chosen by the 30-cluster sampling technique. Data were collected using a pre-tested semi-structured proforma to assess the coverage and compliance of MDA, along with possible determinants of failing to attain the expected coverage. Results: In 2009, coverage, compliance, coverage-compliance gap (CCG) and effective coverage were 84.1%, 70.5%, 29.5% and 59.3%, respectively. In 2010, the results deteriorated further to 78.5%, 66.9%, 33.3% and 57%, respectively. The poor coverage and compliance were attributed to improper training of service providers and lack of community awareness regarding MDA. Conclusion: The study emphasized supervised consumption, retraining of service providers before MDA activities, and strengthening behaviour change communication for community awareness. Advocacy by programme managers and policy makers towards prioritization of the MDA programme will make filaria elimination a success.

  11. Parallel algorithms and cluster computing

    CERN Document Server

    Hoffmann, Karl Heinz

    2007-01-01

    This book presents major advances in high performance computing as well as major advances due to high performance computing. It contains a collection of papers in which results achieved in the collaboration of scientists from computer science, mathematics, physics, and mechanical engineering are presented. From the science problems to the mathematical algorithms and on to the effective implementation of these algorithms on massively parallel and cluster computers we present state-of-the-art methods and technology as well as exemplary results in these fields. This book shows that problems which seem superficially distinct become intimately connected on a computational level.

  12. Scalable Parallel Algebraic Multigrid Solvers

    Energy Technology Data Exchange (ETDEWEB)

    Bank, R; Lu, S; Tong, C; Vassilevski, P

    2005-03-23

    The authors propose a parallel algebraic multilevel algorithm (AMG), which has the novel feature that the subproblem residing in each processor is defined over the entire partition domain, although the vast majority of unknowns for each subproblem are associated with the partition owned by the corresponding processor. This feature ensures that a global coarse description of the problem is contained within each of the subproblems. The advantages of this approach are that interprocessor communication is minimized in the solution process while an optimal order of convergence rate is preserved; and the speed of local subproblem solvers can be maximized using the best existing sequential algebraic solvers.

  13. Utilizing health information technology to improve vaccine communication and coverage.

    Science.gov (United States)

    Stockwell, Melissa S; Fiks, Alexander G

    2013-08-01

    Vaccination coverage is still below the Healthy People 2010 and 2020 goals. Technology, including text messaging, email, the internet, social media and electronic health records, is widely used by both patients and providers in the US. Health information technology (IT) interventions can facilitate rapid or real-time identification of children in need of vaccination and provide the foundation for vaccine-oriented parental communication or clinical alerts in a flexible and tailored manner. A small but burgeoning field of work has integrated IT into vaccination interventions, including reminder/recall using non-traditional methods, clinical decision support for providers in the electronic health record, use of technology to affect workflow, and the use of social media. The aim of this review is to introduce and present current data regarding the effectiveness of a range of technology tools to promote vaccination, describe gaps in the literature and offer insights into future directions for research and intervention.

  14. Comparison of sampling techniques for parallel analysis of transcript and metabolite levels in Saccharomyces cerevisiae.

    Science.gov (United States)

    Martins, Ana Margarida; Sha, Wei; Evans, Clive; Martino-Catt, Susan; Mendes, Pedro; Shulaev, Vladimir

    2007-03-01

    Mathematical modelling of cellular processes is crucial for the understanding of the cell or organism as a whole. Genome-wide observations, at the levels of the transcriptome, proteome and metabolome, provide a high coverage of the molecular constituents of the system in study. Time-course experiments are important for gaining insight into a system's dynamics and are needed for mathematical modelling. In time-course experiments it is crucial to use efficient and fast sampling techniques. We evaluated several techniques to sample and process yeast cultures for parallel analysis of the transcriptome and metabolome. The evaluation was made by measuring the quality of the RNA obtained with UV-spectroscopy, capillary electrophoresis and microarray hybridization. The protocol developed involves rapid collection by spraying the sample into -40 degrees C tricine-buffered methanol (as previously described for yeast metabolome analysis), followed by the separation of cells from the culture medium in low-temperature rapid centrifugation. Removal of the residual methanol is carried out by freeze-drying the pellet at -35 degrees C. RNA and metabolites can then be extracted from the same freeze-dried sample obtained with this procedure.

  15. Automatic Parallelization Using OpenMP Based on STL Semantics

    Energy Technology Data Exchange (ETDEWEB)

    Liao, C; Quinlan, D J; Willcock, J J; Panas, T

    2008-06-03

    Automatic parallelization of sequential applications using OpenMP as a target has been attracting significant attention recently because of the popularity of multicore processors and the simplicity of using OpenMP to express parallelism for shared-memory systems. However, most previous research has only focused on C and Fortran applications operating on primitive data types. C++ applications using high level abstractions such as STL containers are largely ignored due to the lack of research compilers that are readily able to recognize high level object-oriented abstractions of STL. In this paper, we use ROSE, a multiple-language source-to-source compiler infrastructure, to build a parallelizer that can recognize such high level semantics and parallelize C++ applications using certain STL containers. The idea of our work is to automatically insert OpenMP constructs using extended conventional dependence analysis and the known domain-specific semantics of high-level abstractions with optional assistance from source code annotations. In addition, the parallelizer is followed by an OpenMP translator to translate the generated OpenMP programs into multi-threaded code targeted to a popular OpenMP runtime library. Our work extends the applicability of automatic parallelization and provides another way to take advantage of multicore processors.

  16. Parallel relative radiometric normalisation for remote sensing image mosaics

    Science.gov (United States)

    Chen, Chong; Chen, Zhenjie; Li, Manchun; Liu, Yongxue; Cheng, Liang; Ren, Yibin

    2014-12-01

    Relative radiometric normalisation (RRN) is a vital step in achieving radiometric consistency among remote sensing images. Geo-analysis over large areas often involves mosaicking massive remote sensing images, making RRN a data-intensive and computing-intensive task. This study implements a parallel RRN method based on the iteratively re-weighted multivariate alteration detection (IR-MAD) transformation and orthogonal regression. Parallelising IR-MAD and orthogonal regression raises two key problems: determining the normalisation path, and the task dependence involved in calculating normalisation coefficients. In this paper, the reference image and normalisation paths are determined with a shortest-distance algorithm to reduce normalisation error. Formulas for orthogonal regression are derived that account for the effect of the normalisation path, reducing the task dependence in the calculation of coefficients. A master-slave parallel mode is proposed to implement the method, with a task queue and a process queue used for task scheduling. Experiments show that the parallel RRN method provides good normalisation results and favourable parallel speed-up, efficiency and scalability, indicating that the parallel method can handle large volumes of remote sensing images efficiently.
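
    The master-slave scheme with queue-based task scheduling can be sketched in miniature as follows. Threads stand in for processes here, and a toy least-squares fit stands in for the IR-MAD/orthogonal-regression coefficient estimation; all names are hypothetical.

```python
import queue
import threading

def normalise_coeffs(pair):
    """Toy stand-in for per-image-pair coefficient estimation: an
    ordinary least-squares slope/intercept between sample values."""
    xs, ys = pair
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx if sxx else 0.0
    return slope, my - slope * mx

def worker(tasks, results):
    # Slave: pull tasks until the master sends a poison pill (None).
    while True:
        item = tasks.get()
        if item is None:
            tasks.task_done()
            break
        idx, pair = item
        results.put((idx, normalise_coeffs(pair)))
        tasks.task_done()

# Master: fill the task queue, start slaves, then collect results.
tasks, results = queue.Queue(), queue.Queue()
pairs = [([0, 1, 2, 3], [1, 3, 5, 7]),    # overlaps related by y = 2x + 1
         ([0, 1, 2], [2, 2, 2])]          # constant overlap: y = 2
for i, p in enumerate(pairs):
    tasks.put((i, p))
n_workers = 2
threads = [threading.Thread(target=worker, args=(tasks, results))
           for _ in range(n_workers)]
for t in threads:
    t.start()
for _ in range(n_workers):               # one poison pill per slave
    tasks.put(None)
tasks.join()
coeffs = dict(results.get() for _ in pairs)
```

    Indexing each task lets the master reassemble results in order regardless of which slave finished first, which is the property the paper relies on when normalisation coefficients depend on the path.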

  17. The probability of parallel genetic evolution from standing genetic variation.

    Science.gov (United States)

    MacPherson, A; Nuismer, S L

    2017-02-01

    Parallel evolution is often assumed to result from repeated adaptation to novel, yet ecologically similar, environments. Here, we develop and analyse a mathematical model that predicts the probability of parallel genetic evolution from standing genetic variation as a function of the strength of phenotypic selection and constraints imposed by genetic architecture. Our results show that the probability of parallel genetic evolution increases with the strength of natural selection and effective population size and is particularly likely to occur for genes with large phenotypic effects. Building on these results, we develop a Bayesian framework for estimating the strength of parallel phenotypic selection from genetic data. Using extensive individual-based simulations, we show that our estimator is robust across a wide range of genetic and evolutionary scenarios and provides a useful tool for rigorously testing the hypothesis that parallel genetic evolution is the result of adaptive evolution. An important result that emerges from our analyses is that existing studies of parallel genetic evolution frequently rely on data that are insufficient for distinguishing between adaptive evolution and neutral evolution driven by random genetic drift. Overcoming this challenge will require sampling more populations and the inclusion of larger numbers of loci. © 2016 European Society For Evolutionary Biology.
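
    The qualitative prediction, that parallel genetic evolution becomes more likely with stronger selection and larger effective population size, can be illustrated with a textbook diffusion approximation (Kimura's fixation probability). This is a simplified sketch, not the paper's model, which additionally couples genetic architecture to phenotypic selection.

```python
import math

def p_fix(p0, N, s):
    """Kimura's diffusion approximation for the fixation probability of an
    allele at initial (standing) frequency p0, with selection coefficient s
    and effective population size N. Neutral limit: p_fix = p0."""
    if s == 0:
        return p0
    return (1.0 - math.exp(-4.0 * N * s * p0)) / (1.0 - math.exp(-4.0 * N * s))

def p_parallel(p0, N, s, n_pops=2):
    """Probability that all n_pops replicate populations independently fix
    the same standing variant (fixation events assumed independent)."""
    return p_fix(p0, N, s) ** n_pops

weak = p_parallel(0.05, 1000, 0.001)    # weak selection on a standing variant
strong = p_parallel(0.05, 1000, 0.01)   # 10x stronger selection
```

    Under this sketch, strengthening selection raises the per-population fixation probability and hence the probability of the same variant fixing in every replicate, mirroring the paper's direction of effect.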

  18. A solution for automatic parallelization of sequential assembly code

    Directory of Open Access Journals (Sweden)

    Kovačević Đorđe

    2013-01-01

    Since modern multicore processors can execute existing sequential programs only on a single core, there is a strong need for automatic parallelization of program code. Relying on existing algorithms, this paper describes a new software tool for parallelization of sequential assembly code. The main goal of this paper is to develop a parallelizator which reads sequential assembly code and outputs parallelized code for a MIPS processor with multiple cores. The idea is the following: the parser translates the assembly input file into program objects suitable for further processing, after which static single assignment is performed. Based on the data-flow graph, the parallelization algorithm distributes instructions across the cores. Once the sequential code has been parallelized, registers are allocated with a linear allocation algorithm, and the end result is distributed assembly code for each of the cores. In the paper we evaluate the speedup of a matrix multiplication example processed by the parallelizator. The result is almost linear speedup of code execution, which increases with the number of cores: the speedup on two cores is 1.99, while on 16 cores it is 13.88.
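
    The reported speedups are consistent with an almost fully parallelizable workload under Amdahl's law; a quick back-of-the-envelope check (illustrative arithmetic, not from the paper):

```python
def amdahl_speedup(f, n):
    """Amdahl's law: speedup on n cores when a fraction f of the work
    parallelizes perfectly and the rest stays sequential."""
    return 1.0 / ((1.0 - f) + f / n)

def parallel_fraction(speedup, n):
    """Invert Amdahl's law to estimate f from a measured speedup on n cores."""
    return (1.0 - 1.0 / speedup) * n / (n - 1.0)

# The reported 13.88x on 16 cores implies roughly 99% of the work
# parallelizes...
f = parallel_fraction(13.88, 16)
# ...which in turn predicts close to the reported 1.99x on two cores.
two_core = amdahl_speedup(f, 2)
```

    That the two measurements pin down nearly the same parallel fraction is what makes the "almost linear" claim plausible.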

  19. Premium subsidies, the mandate, and Medicaid expansion: Coverage effects of the Affordable Care Act.

    Science.gov (United States)

    Frean, Molly; Gruber, Jonathan; Sommers, Benjamin D

    2017-05-01

    Using premium subsidies for private coverage, an individual mandate, and Medicaid expansion, the Affordable Care Act (ACA) has increased insurance coverage. We provide the first comprehensive assessment of these provisions' effects, using the 2012-2015 American Community Survey and a triple-difference estimation strategy that exploits variation by income, geography, and time. Overall, our model explains 60% of the coverage gains in 2014-2015. We find that coverage was moderately responsive to price subsidies, with larger gains in state-based insurance exchanges than the federal exchange. The individual mandate's exemptions and penalties had little impact on coverage rates. The law increased Medicaid among individuals gaining eligibility under the ACA and among previously-eligible populations ("woodwork effect") even in non-expansion states, with no resulting reductions in private insurance. Overall, exchange premium subsidies produced 40% of the coverage gains explained by our ACA policy measures, and Medicaid the other 60%, of which 1/2 occurred among previously-eligible individuals. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Utilizing collagen membranes for guided tissue regeneration-based root coverage.

    Science.gov (United States)

    Wang, Hom-Lay; Modarressi, Marmar; Fu, Jia-Hui

    2012-06-01

    Gingival recession is a common clinical problem that can result in hypersensitivity, pain, root caries and esthetic concerns. Conventional soft tissue procedures for root coverage require an additional surgical site, thereby causing additional trauma and donor site morbidity. In addition, the grafted tissues heal by repair, with formation of long junctional epithelium with some connective tissue attachment. Guided tissue regeneration-based root coverage was thus developed in an attempt to overcome these limitations while providing comparable clinical results. This paper addresses the biologic foundation of guided tissue regeneration-based root coverage, and describes the indications and contraindications for this technique, as well as the factors that influence outcomes. The step-by-step clinical techniques utilizing collagen membranes are also described. In comparison with conventional soft tissue procedures, the benefits of guided tissue regeneration-based root coverage procedures include new attachment formation, elimination of donor site morbidity, less chair-time, and unlimited availability and uniform thickness of the product. Collagen membranes, in particular, benefit from product biocompatibility with the host, while promoting chemotaxis, hemostasis, and exchange of gas and nutrients. Such characteristics lead to better wound healing by promoting primary wound coverage, angiogenesis, space creation and maintenance, and clot stability. In conclusion, collagen membranes are a reliable alternative for use in root coverage procedures. © 2012 John Wiley & Sons A/S.

  1. A biologically inspired controller to solve the coverage problem in robotics.

    Science.gov (United States)

    Rañó, Iñaki; Santos, José A

    2017-06-05

    The coverage problem consists of computing a path or trajectory for a robot to pass over all the points in some free area, with applications ranging from floor cleaning to demining. Coverage is solved either as a planning problem, which provides theoretical validation of the solution, or through heuristic techniques that rely on experimental validation. Through a combination of theoretical results and simulations, this paper presents a novel solution to the coverage problem that exploits the chaotic behaviour of a simple biologically inspired motion controller, the Braitenberg vehicle 2b. Although chaos has been used for coverage, our approach makes much less restrictive assumptions about the environment and can be implemented using on-board sensors. First, we prove theoretically that this vehicle, a well-known model of animal tropotaxis, behaves as a charge in an electromagnetic field. The motion equations can be reduced to a Hamiltonian system, and therefore the vehicle follows quasi-periodic or chaotic trajectories which pass arbitrarily close to any point in the workspace, i.e. it solves the coverage problem. Secondly, through a set of extensive simulations, we show that the trajectories cover regions of bounded workspaces, and full coverage is achieved when the perceptual range of the vehicle is short. We compare the performance of this new approach with different types of random motion controllers in the same bounded environments.
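
    The qualitative behaviour described above can be reproduced with a minimal differential-drive simulation of vehicle 2b. The sensor geometry, gain, and stimulus model below are illustrative assumptions, not the paper's equations.

```python
import math

def simulate(steps=2000, dt=0.01):
    """A differential-drive Braitenberg vehicle 2b (crossed excitatory
    sensor-motor connections) moving near a point stimulus at the origin.
    All parameters (sensor geometry, gain, wheel base) are illustrative."""
    x, y, heading = 1.0, 0.0, math.pi / 2      # start moving tangentially
    offset, half_angle, base, gain = 0.2, 0.5, 0.05, 1.0
    trail = []
    for _ in range(steps):
        def intensity(da):
            # Stimulus intensity sampled at a sensor mounted at angle
            # heading+da, a fixed offset from the vehicle centre.
            sx = x + offset * math.cos(heading + da)
            sy = y + offset * math.sin(heading + da)
            return 1.0 / (1.0 + sx * sx + sy * sy)
        s_left, s_right = intensity(half_angle), intensity(-half_angle)
        # Crossed (contralateral) connections: each wheel is driven by the
        # opposite-side sensor, so the vehicle turns toward the source.
        v_left, v_right = gain * s_right, gain * s_left
        v = 0.5 * (v_left + v_right)
        heading += dt * (v_right - v_left) / base
        x += dt * v * math.cos(heading)
        y += dt * v * math.sin(heading)
        trail.append((x, y))
    return trail

trail = simulate()
```

    Starting on a tangential heading, the tropotaxis steers the vehicle in toward the stimulus and then around it, rather than letting it escape; the bounded, wandering trajectories are what the coverage argument builds on.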

  2. The influence of patient positioning in breast CT on breast tissue coverage and patient comfort

    Energy Technology Data Exchange (ETDEWEB)

    Roessler, A.C.; Althoff, F.; Kalender, W. [Erlangen Univ. (Germany). Inst. of Medical Physics; Wenkel, E. [University Hospital of Erlangen (Germany). Radiological Inst.

    2015-02-15

    The presented study aimed at optimizing a patient table design for breast CT (BCT) systems with respect to breast tissue coverage and patient comfort. Additionally, the benefits and acceptance of an immobilization device for BCT using underpressure were evaluated. Three different study parts were carried out. In a positioning study women were investigated on an MRI tabletop with exchangeable inserts (flat and cone-shaped with different opening diameters) to evaluate their influence on breast coverage and patient comfort in various positioning alternatives. Breast length and volume were calculated to compare positioning modalities including various opening diameters and forms. In the second study part, an underpressure system was tested for its functionality and comfort on a stereotactic biopsy table mimicking a future CT scanner table. In the last study part, this system was tested regarding breast tissue coverage. Best results for breast tissue coverage were shown for cone-shaped table inserts with an opening of 180 mm. Flat inserts did not provide complete coverage of breast tissue. The underpressure system showed robust function and tended to pull more breast tissue into the field of view. Patient comfort was rated good for all table inserts, with highest ratings for cone-shaped inserts. Cone-shaped tabletops appeared to be adequate for BCT systems and to allow imaging of almost the complete breast. An underpressure system proved promising for the fixation of the breast during imaging and increased coverage. Patient comfort appears to be adequate.

  3. What hysteria? A systematic study of newspaper coverage of accused child molesters.

    Science.gov (United States)

    Cheit, Ross E

    2003-06-01

    There were three aims: First, to determine the extent to which those charged with child molestation receive newspaper coverage; second, to analyze the nature of that coverage; and third, to compare the universe of coverage to the nature of child molestation charges in the criminal justice system as a whole. Two databases were created. The first one identified all defendants charged with child molestation in Rhode Island in 1993. The database was updated after 5 years to include relevant information about case disposition. The second database was created by electronically searching the Providence Journal for every story that mentioned each defendant. Most defendants (56.1%) were not mentioned in the newspaper. Factors associated with a greater chance of coverage include: cases involving first-degree charges, cases with multiple counts, cases involving additional violence or multiple victims, and cases resulting in long prison sentences. The data indicate that the press exaggerates "stranger danger," while intra-familial cases are underreported. Newspaper accounts also minimize the extent to which guilty defendants avoid prison. Generalizing about the nature of child molestation cases in criminal court on the basis of newspaper coverage is inappropriate. The coverage is less extensive than often claimed, and it is skewed in ways that are typical of the mass media.

  4. Parallelism in G. V. Mona's UVulindlela

    African Journals Online (AJOL)

    2010-01-02

    Jan 2, 2010 ... tiful structural patterns and some musical effect in the poetry. The concept of parallelism. Parallelism is a stylistic device of repetition. ... ing of “phrases or sentences of similar construction and meaning placed side by side, balancing each other ”. Myers and Simms (1985: 223) define parallelism as:.

  5. Identifying, Quantifying, Extracting and Enhancing Implicit Parallelism

    Science.gov (United States)

    Agarwal, Mayank

    2009-01-01

    The shift of the microprocessor industry towards multicore architectures has placed a huge burden on the programmers by requiring explicit parallelization for performance. Implicit Parallelization is an alternative that could ease the burden on programmers by parallelizing applications "under the covers" while maintaining sequential semantics…

  6. Parallel line scanning ophthalmoscope for retinal imaging

    NARCIS (Netherlands)

    Vienola, K.V.; Damodaran, M.; Braaf, B.; Vermeer, K.A.; de Boer, J.F.

    2015-01-01

    A parallel line scanning ophthalmoscope (PLSO) is presented using a digital micromirror device (DMD) for parallel confocal line imaging of the retina. The posterior part of the eye is illuminated using up to seven parallel lines, which were projected at 100 Hz. The DMD offers a high degree of

  7. Inductive Information Retrieval Using Parallel Distributed Computation.

    Science.gov (United States)

    Mozer, Michael C.

    This paper reports on an application of parallel models to the area of information retrieval and argues that massively parallel, distributed models of computation, called connectionist, or parallel distributed processing (PDP) models, offer a new approach to the representation and manipulation of knowledge. Although this document focuses on…

  8. Second derivative parallel block backward differentiation type ...

    African Journals Online (AJOL)

    A class of second derivative parallel block backward differentiation type formulas is developed; the methods are inherently parallel and can be distributed over parallel processors. They are L-stable for block size k ≤ 6 with small error constants when compared to the conventional sequential linear multistep methods of ...

  9. Comparison of Parallel Viscosity with Neoclassical Theory

    OpenAIRE

    Ida, K.; Nakajima, N.

    1996-01-01

    Toroidal rotation profiles are measured with charge exchange spectroscopy for the plasma heated with tangential NBI in the CHS heliotron/torsatron device to estimate parallel viscosity. The parallel viscosity derived from the toroidal rotation velocity shows good agreement with the neoclassical parallel viscosity plus the perpendicular viscosity (μ⊥ = 2 m²/s).

  10. Improving operating room productivity via parallel anesthesia processing.

    Science.gov (United States)

    Brown, Michael J; Subramanian, Arun; Curry, Timothy B; Kor, Daryl J; Moran, Steven L; Rohleder, Thomas R

    2014-01-01

    Parallel processing of regional anesthesia may improve operating room (OR) efficiency in patients undergoing upper extremity surgical procedures. The purpose of this paper is to evaluate whether performing regional anesthesia outside the OR in parallel increases total cases per day and improves efficiency and productivity. Data from all adult patients who underwent regional anesthesia as their primary anesthetic for upper extremity surgery over a one-year period were used to develop a simulation model. The model evaluated pure operating modes of regional anesthesia performed within the OR and outside the OR in parallel. The scenarios were used to evaluate how many surgeries could be completed in a standard work day (555 minutes) and, assuming a standard three cases per day, the predicted end-of-day overtime. Modeling results show that parallel processing of regional anesthesia increases the average cases per day for all surgeons included in the study; the average increase was 0.42 surgeries per day. Where it was assumed that three cases per day would be performed by all surgeons, the days going to overtime were reduced by 43 percent with parallel block. The overtime with parallel anesthesia was also projected to be 40 minutes less per day per surgeon. Key limitations include the assumption that all cases used regional anesthesia in the comparisons; many days may have both regional and general anesthesia. Also, as a single-center case study, generalizability may be limited. Perioperative care providers should consider parallel administration of regional anesthesia where there is a desire to increase daily upper extremity surgical case capacity. Where there are sufficient resources for parallel anesthesia processing, efficiency and productivity can be significantly improved. Simulation modeling can be an effective tool to show practice-change effects at a system-wide level.
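
    The scheduling effect the simulation captures can be illustrated with a much simpler deterministic model. The durations below are invented for illustration; the study itself used a data-driven simulation.

```python
def cases_per_day(block, surgery, turnover, day=555, parallel=False):
    """Count how many cases fit in one OR day (minutes).

    In serial mode the regional block occupies the OR for every case.
    In parallel mode the next patient's block is performed outside the
    OR during the previous case, so after the first case only surgery
    and turnover consume OR time."""
    total, n = 0, 0
    while True:
        if parallel and n > 0:
            case = surgery + turnover          # block overlapped with prior case
        else:
            case = block + surgery + turnover  # block performed in the OR
        if total + case > day:
            break
        total += case
        n += 1
    return n

# Hypothetical durations: 40-min block, 100-min surgery, 20-min turnover.
serial = cases_per_day(40, 100, 20)
overlap = cases_per_day(40, 100, 20, parallel=True)
```

    With these (invented) durations, overlapping the block fits one extra case into the same 555-minute day, the same direction of effect as the study's 0.42 additional cases per day.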

  11. U.S. Media Coverage of Africa. A Media Source Guide, Issues for the '80s.

    Science.gov (United States)

    Wiley, David S.

    One of a series on topics of concern to the U.S. media, this guide is intended to provide journalists with a critical analysis of U.S. media coverage of Africa. Section I provides an overview of the folklore about Africa and the nature and sources of stereotypes and misconceptions about Africa and the Western world. Findings and interpretations of…

  12. Computer Security in the Introductory Business Information Systems Course: An Exploratory Study of Textbook Coverage

    Science.gov (United States)

    Sousa, Kenneth J.; MacDonald, Laurie E.; Fougere, Kenneth T.

    2005-01-01

    The authors conducted an evaluation of Management Information Systems (MIS) textbooks and found that computer security receives very little in-depth coverage. The textbooks provide, at best, superficial treatment of security issues. The research results suggest that MIS faculty need to provide material to supplement the textbook to provide…

  13. 42 CFR 436.128 - Coverage for certain qualified aliens.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Coverage for certain qualified aliens. 436.128... Mandatory Coverage of the Categorically Needy § 436.128 Coverage for certain qualified aliens. The agency... § 440.255(c) of this chapter to those aliens described in § 436.406(c) of this subpart. ...

  14. 42 CFR 435.350 - Coverage for certain aliens.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Coverage for certain aliens. 435.350 Section 435... ISLANDS, AND AMERICAN SAMOA Optional Coverage of the Medically Needy § 435.350 Coverage for certain aliens... treatment of an emergency medical condition, as defined in § 440.255(c) of this chapter, to those aliens...

  15. 26 CFR 54.4980B-5 - COBRA continuation coverage.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 17 2010-04-01 2010-04-01 false COBRA continuation coverage. 54.4980B-5 Section...) MISCELLANEOUS EXCISE TAXES (CONTINUED) PENSION EXCISE TAXES § 54.4980B-5 COBRA continuation coverage. The following questions-and-answers address the requirements for coverage to constitute COBRA continuation...

  16. 42 CFR 457.420 - Benchmark health benefits coverage.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Benchmark health benefits coverage. 457.420 Section 457.420 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... State Plan Requirements: Coverage and Benefits § 457.420 Benchmark health benefits coverage. Benchmark...

  17. Pap smear coverage among rural workers | London | South African ...

    African Journals Online (AJOL)

    Data describing Papanicolaou smear coverage and factors related to coverage are presented from 9 surveys of rural women workers in the food canning and processing industry in the Cape. Adequacy of Pap smear coverage was assessed according to whether the respondent had ever previously had a Pap smear, or had ...

  18. 48 CFR 9903.201-2 - Types of CAS coverage.

    Science.gov (United States)

    2010-10-01

    ... coverage. Full coverage requires that the business unit comply with all of the CAS specified in part 9904... later award of a CAS-covered contract. Full coverage applies to contractor business units that— (1... covered contract of less than $50 million awarded to a business unit that received less than $50 million...

  19. Advances in randomized parallel computing

    CERN Document Server

    Rajasekaran, Sanguthevar

    1999-01-01

    The technique of randomization has been employed to solve numerous problems of computing both sequentially and in parallel. Examples of randomized algorithms that are asymptotically better than their deterministic counterparts in solving various fundamental problems abound. Randomized algorithms have the advantages of simplicity and better performance both in theory and often in practice. This book is a collection of articles written by renowned experts in the area of randomized parallel computing. A brief introduction to randomized algorithms: In the analysis of algorithms, at least three different measures of performance can be used: the best case, the worst case, and the average case. Often, the average case run time of an algorithm is much smaller than the worst case. For instance, the worst case run time of Hoare's quicksort is O(n^2), whereas its average case run time is only O(n log n). The average case analysis is conducted with an assumption on the input space. The assumption made to arrive at t...
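The quicksort example in this introduction can be made concrete. A minimal randomized quicksort sketch in Python: choosing the pivot uniformly at random makes the expected running time O(n log n) on every input, whereas a fixed pivot choice can be forced into the O(n^2) worst case by an adversarial input:

```python
import random

def rqsort(a):
    """Randomized quicksort.  The random pivot choice is what moves the
    guarantee from 'average case over inputs' to 'expected time for
    every input' -- a standard benefit of randomization."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    more = [x for x in a if x > pivot]
    return rqsort(less) + equal + rqsort(more)

print(rqsort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]
```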

  20. Xyce parallel electronic simulator design.

    Energy Technology Data Exchange (ETDEWEB)

    Thornquist, Heidi K.; Rankin, Eric Lamont; Mei, Ting; Schiek, Richard Louis; Keiter, Eric Richard; Russo, Thomas V.

    2010-09-01

    This document is the Xyce Circuit Simulator developer guide. Xyce has been designed from the 'ground up' to be a SPICE-compatible, distributed memory parallel circuit simulator. While it is in many respects a research code, Xyce is intended to be a production simulator. As such, having software quality engineering (SQE) procedures in place to ensure a high level of code quality and robustness is essential. Version control, issue tracking, customer support, C++ style guidelines and the Xyce release process are all described. The Xyce Parallel Electronic Simulator has been under development at Sandia since 1999. Historically, Xyce has mostly been funded by ASC, and the original focus of Xyce development has primarily been related to circuits for nuclear weapons. However, this has not been the only focus and it is expected that the project will diversify. Like many ASC projects, Xyce is a group development effort, which involves a number of researchers, engineers, scientists, mathematicians and computer scientists. In addition to diversity of background, a certain amount of staff turnover is to be expected on long-term projects, as people move on to different projects. As a result, it is very important that the project maintain high software quality standards. The point of this document is to formally document a number of the software quality practices followed by the Xyce team in one place. Also, it is hoped that this document will be a good source of information for new developers.

  1. Performance limitations of parallel simulations

    Directory of Open Access Journals (Sweden)

    Liang Chen

    1998-01-01

    Full Text Available This study shows how the performance of a parallel simulation may be affected by the structure of the system being simulated. We consider a wide class of “linearly synchronous” simulations consisting of asynchronous and synchronous parallel simulations (or other distributed-processing systems), with conservative or optimistic protocols, in which the differences in the virtual times of the logical processes being simulated in real time t are of the order o(t) as t tends to infinity. Using a random time transformation idea, we show how a simulation's processing rate in real time is related to the throughput rates in virtual time of the system being simulated. This relation is the basis for establishing upper bounds on simulation processing rates. The bounds for the rates are tight and are close to the actual rates as numerical experiments indicate. We use the bounds to determine the maximum number of processors that a simulation can effectively use. The bounds also give insight into efficient assignment of processors to the logical processes in a simulation.

  2. Parallel hyperspectral compressive sensing method on GPU

    Science.gov (United States)

    Bernabé, Sergio; Martín, Gabriel; Nascimento, José M. P.

    2015-10-01

    Remote hyperspectral sensors collect large amounts of data per flight, usually with low spatial resolution. Because the bandwidth of the connection between the satellite/airborne platform and the ground station is limited, an onboard compression method is desirable to reduce the amount of data to be transmitted. This paper presents a parallel implementation of a compressive sensing method, called parallel hyperspectral coded aperture (P-HYCA), for graphics processing units (GPU) using the compute unified device architecture (CUDA). This method takes into account two main properties of hyperspectral datasets, namely the high correlation existing among the spectral bands and the generally low number of endmembers needed to explain the data, which largely reduces the number of measurements necessary to correctly reconstruct the original data. Experimental results conducted using synthetic and real hyperspectral datasets on two different GPU architectures by NVIDIA, GeForce GTX 590 and GeForce GTX TITAN, reveal that the use of GPUs can provide real-time compressive sensing performance. The achieved speedup is up to 20 times when compared with the processing time of HYCA running on one core of the Intel i7-2600 CPU (3.4GHz), with 16 Gbyte memory.

  3. Mesh-based parallel code coupling interface

    Energy Technology Data Exchange (ETDEWEB)

    Wolf, K.; Steckel, B. (eds.) [GMD - Forschungszentrum Informationstechnik GmbH, St. Augustin (DE). Inst. fuer Algorithmen und Wissenschaftliches Rechnen (SCAI)

    2001-04-01

    MpCCI (mesh-based parallel code coupling interface) is an interface for multidisciplinary simulations. It provides industrial end-users as well as commercial code-owners with the facility to combine different simulation tools in one environment. Thereby new solutions for multidisciplinary problems will be created. This opens new application dimensions for existent simulation tools. This Book of Abstracts gives a short overview about ongoing activities in industry and research - all presented at the 2{sup nd} MpCCI User Forum in February 2001 at GMD Sankt Augustin. (orig.) [German, translated] MpCCI (mesh-based parallel code coupling interface) defines an interface for multidisciplinary simulation applications. MpCCI gives both industrial users and commercial software vendors the ability to couple simulation tools from different disciplines. This creates new solutions for multidisciplinary problems, and new fields of application open up for established simulation tools. This Book of Abstracts provides an overview of work currently under way in industry and research, presented at the 2{sup nd} MpCCI User Forum in February 2001 at GMD Sankt Augustin. (orig.)

  4. Parallelizing SLPA for Scalable Overlapping Community Detection

    Directory of Open Access Journals (Sweden)

    Konstantin Kuzmin

    2015-01-01

    Full Text Available Communities in networks are groups of nodes whose connections to the nodes in a community are stronger than with the nodes in the rest of the network. Quite often nodes participate in multiple communities; that is, communities can overlap. In this paper, we first analyze what other researchers have done to utilize high performance computing to perform efficient community detection in social, biological, and other networks. We note that detection of overlapping communities is more computationally intensive than disjoint community detection, and the former presents new challenges that algorithm designers have to face. Moreover, the efficiency of many existing algorithms grows superlinearly with the network size making them unsuitable to process large datasets. We use the Speaker-Listener Label Propagation Algorithm (SLPA as the basis for our parallel overlapping community detection implementation. SLPA provides near linear time overlapping community detection and is well suited for parallelization. We explore the benefits of a multithreaded programming paradigm and show that it yields a significant performance gain over sequential execution while preserving the high quality of community detection. The algorithm was tested on four real-world datasets with up to 5.5 million nodes and 170 million edges. In order to assess the quality of community detection, at least 4 different metrics were used for each of the datasets.
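To illustrate the family of algorithms SLPA belongs to, here is a deliberately simplified, deterministic label-propagation sketch in Python. It is not SLPA itself: SLPA adds randomized speaker/listener rules and a per-node memory of received labels (which is what yields overlapping communities), and the paper's contribution is a multithreaded implementation of that algorithm. The graph below is a toy example:

```python
from collections import Counter

def label_propagation(adj, iters=10):
    """Toy label propagation: every node repeatedly adopts the most
    frequent label among its neighbours (ties broken by the largest
    label so the run is deterministic).  Converged label sets are the
    detected communities."""
    labels = {v: v for v in adj}
    for _ in range(iters):
        for v in sorted(adj):
            if adj[v]:
                counts = Counter(labels[u] for u in adj[v])
                best = max(counts.values())
                labels[v] = max(l for l, c in counts.items() if c == best)
    return labels

# Two triangles joined by a single edge: we expect two communities.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(label_propagation(adj))
```

On this graph the procedure converges to one label per triangle; SLPA's per-node memory would additionally let the bridge nodes 2 and 3 belong partially to both communities.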

  5. Multinode acoustic focusing for parallel flow cytometry

    Science.gov (United States)

    Piyasena, Menake E.; Suthanthiraraj, Pearlson P. Austin; Applegate, Robert W.; Goumas, Andrew M.; Woods, Travis A.; López, Gabriel P.; Graves, Steven W.

    2012-01-01

    Flow cytometry can simultaneously measure and analyze multiple properties of single cells or particles with high sensitivity and precision. Yet, conventional flow cytometers have fundamental limitations with regard to analyzing particles larger than about 70 microns, analyzing at flow rates greater than a few hundred microliters per minute, and providing analysis rates greater than 50,000 per second. To overcome these limits, we have developed multi-node acoustic focusing flow cells that can position particles (as small as a red blood cell and as large as 107 microns in diameter) into as many as 37 parallel flow streams. We demonstrate the potential of such flow cells for the development of high throughput, parallel flow cytometers by precision focusing of flow cytometry alignment microspheres and red blood cells, and by the analysis of a CD4+ cellular immunophenotyping assay. This approach will have significant impact towards the creation of high throughput flow cytometers for rare cell detection applications (e.g. circulating tumor cells), applications requiring large particle analysis, and high volume flow cytometry. PMID:22239072

  6. Validity of vaccination cards and parental recall to estimate vaccination coverage: a systematic review of the literature.

    Science.gov (United States)

    Miles, Melody; Ryman, Tove K; Dietz, Vance; Zell, Elizabeth; Luman, Elizabeth T

    2013-03-15

    Immunization programs frequently rely on household vaccination cards, parental recall, or both to calculate vaccination coverage. This information is used at both the global and national level for planning and allocating performance-based funds. However, the validity of household-derived coverage sources has not yet been widely assessed or discussed. To advance knowledge on the validity of different sources of immunization coverage, we undertook a global review of literature. We assessed concordance, sensitivity, specificity, positive and negative predictive value, and coverage percentage point difference when subtracting household vaccination source from a medical provider source. Median coverage difference per paper ranged from -61 to +1 percentage points between card versus provider sources and -58 to +45 percentage points between recall versus provider source. When card and recall sources were combined, median coverage difference ranged from -40 to +56 percentage points. Overall, concordance, sensitivity, specificity, positive and negative predictive value showed poor agreement, providing evidence that household vaccination information may not be reliable, and should be interpreted with care. While only 5 papers (11%) included in this review were from low-middle income countries, low-middle income countries often rely more heavily on household vaccination information for decision making. Recommended actions include strengthening quality of child-level data and increasing investments to improve vaccination card availability and card marking. There is also an urgent need for additional validation studies of vaccine coverage in low and middle income countries. Copyright © 2013. Published by Elsevier Ltd.
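The agreement statistics used in this review can be computed from paired child-level records, treating the provider record as the reference standard. The data below are invented for illustration, and the sign convention for the percentage-point difference (household source minus provider source) is an assumption, since the abstract's phrasing is ambiguous:

```python
def agreement(household, provider):
    """Concordance, sensitivity, specificity, PPV, NPV and the
    coverage difference for a household source (card or recall)
    versus a provider record (1 = vaccinated per that source)."""
    tp = sum(h and p for h, p in zip(household, provider))
    tn = sum(not h and not p for h, p in zip(household, provider))
    fp = sum(h and not p for h, p in zip(household, provider))
    fn = sum(not h and p for h, p in zip(household, provider))
    n = tp + tn + fp + fn
    return {
        "concordance": (tp + tn) / n,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        # percentage points, household minus provider (assumed convention)
        "coverage_diff_pp": 100 * (sum(household) - sum(provider)) / n,
    }

card = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]       # invented card records
provider = [1, 1, 1, 1, 0, 1, 1, 0, 0, 1]   # invented provider records
print(agreement(card, provider))
```

A negative `coverage_diff_pp` here corresponds to the household source undercounting relative to the provider source, consistent with the mostly negative differences the review reports.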

  7. Self-balanced modulation and magnetic rebalancing method for parallel multilevel inverters

    Energy Technology Data Exchange (ETDEWEB)

    Li, Hui; Shi, Yanjun

    2017-11-28

    A self-balanced modulation method and a closed-loop magnetic flux rebalancing control method for parallel multilevel inverters. The combination of the two methods provides for balancing of the magnetic flux of the inter-cell transformers (ICTs) of the parallel multilevel inverters without deteriorating the quality of the output voltage. In various embodiments a parallel multi-level inverter modulator is provided, including a multi-channel comparator to generate a multiplexed digitized ideal waveform for a parallel multi-level inverter and a finite state machine (FSM) module coupled to the multi-channel comparator, the FSM module to receive the multiplexed digitized ideal waveform and to generate a pulse width modulated gate-drive signal for each switching device of the parallel multi-level inverter. The system and method provide for optimization of the output voltage spectrum without influencing the magnetic balancing.

  8. Influenza Vaccination Coverage Among Pregnant Women - United States, 2016-17 Influenza Season.

    Science.gov (United States)

    Ding, Helen; Black, Carla L; Ball, Sarah; Fink, Rebecca V; Williams, Walter W; Fiebelkorn, Amy Parker; Lu, Peng-Jun; Kahn, Katherine E; D'Angelo, Denise V; Devlin, Rebecca; Greby, Stacie M

    2017-09-29

    Pregnant women and their infants are at increased risk for severe influenza-associated illness (1), and since 2004, the Advisory Committee on Immunization Practices (ACIP) has recommended influenza vaccination for all women who are or might be pregnant during the influenza season, regardless of the trimester of the pregnancy (2). To assess influenza vaccination coverage among pregnant women during the 2016-17 influenza season, CDC analyzed data from an Internet panel survey conducted during March 28-April 7, 2017. Among 1,893 survey respondents pregnant at any time during October 2016-January 2017, 53.6% reported having received influenza vaccination before (16.2%) or during (37.4%) pregnancy, similar to coverage during the preceding four influenza seasons. Also similar to the preceding influenza season, 67.3% of women reported receiving a provider offer for influenza vaccination, 11.9% reported receiving a recommendation but no offer, and 20.7% reported receiving no recommendation; among these women, reported influenza vaccination coverage was 70.5%, 43.7%, and 14.8%, respectively. Among women who received a provider offer for vaccination, vaccination coverage differed by race/ethnicity, education, insurance type, and other sociodemographic factors. Use of evidence-based practices such as provider reminders and standing orders could reduce missed opportunities for vaccination and increase vaccination coverage among pregnant women.

  9. Evaluation of potentially achievable vaccination coverage with simultaneous administration of vaccines among children in the United States.

    Science.gov (United States)

    Zhao, Zhen; Smith, Philip J; Hill, Holly A

    2016-06-08

    Routine administration of all age-appropriate doses of vaccines during the same visit is recommended for children by the National Vaccine Advisory Committee (NVAC) and the Advisory Committee on Immunization Practices (ACIP). Evaluate the potentially achievable vaccination coverage for ≥4 doses of diphtheria and tetanus toxoids and acellular pertussis vaccine (4+DTaP), ≥4 doses of pneumococcal conjugate vaccine (4+PCV), and the full series of Haemophilus influenzae type b vaccine (Hib-FS) with simultaneous administration of all recommended childhood vaccines. Compare the potentially achievable vaccination coverage to the reported vaccination coverage for calendar years 2001 through 2013, by state in the United States and by selected socio-demographic factors in 2013. The potentially achievable vaccination coverage was defined as the coverage possible for the recommended 4+DTaP, 4+PCV, and Hib-FS if missed opportunities for simultaneous administration of all age-appropriate doses of vaccines for children had been eliminated. Compared to the reported vaccination coverage, the potentially achievable vaccination coverage for 4+DTaP, 4+PCV, and Hib-FS could have increased significantly, and would have achieved the 90% target of Healthy People 2020 for the three vaccines beginning in 2005, 2008, and 2011, respectively. In 2013, the potentially achievable vaccination coverage increased significantly across all selected socio-demographic factors, and would have reached the 90% target for more than 51% of the states in the United States. The findings in this study suggest that full utilization of all opportunities for simultaneous administration of all age-eligible childhood doses of vaccines during the same vaccination visit is a critical strategy for achieving the vaccination coverage target of Healthy People 2020. Encouraging providers to deliver all recommended vaccines that are due at each visit by

  10. PDDP, A Data Parallel Programming Model

    Directory of Open Access Journals (Sweden)

    Karen H. Warren

    1996-01-01

    Full Text Available PDDP, the parallel data distribution preprocessor, is a data parallel programming model for distributed memory parallel computers. PDDP implements high-performance Fortran-compatible data distribution directives and parallelism expressed by the use of Fortran 90 array syntax, the FORALL statement, and the WHERE construct. Distributed data objects belong to a global name space; other data objects are treated as local and replicated on each processor. PDDP allows the user to program in a shared memory style and generates codes that are portable to a variety of parallel machines. For interprocessor communication, PDDP uses the fastest communication primitives on each platform.
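For readers unfamiliar with the Fortran 90 constructs PDDP builds on, this Python sketch mimics their semantics only; PDDP's actual contribution is distributing such operations across the processors of a distributed memory machine:

```python
def forall(indices, update, array):
    """FORALL semantics: every right-hand side is evaluated against the
    old array before any element is assigned, which is exactly what
    makes the statement safe to execute in parallel."""
    new = list(array)
    for i in indices:
        new[i] = update(i, array)  # reads only the old values
    return new

def where(mask, then_value, array):
    """WHERE semantics: elementwise masked assignment."""
    return [then_value(x) if m else x for m, x in zip(mask, array)]

a = [1, 2, 3, 4, 5]
# Roughly FORALL (i = 1:3) a(i) = a(i) + a(i+1)  (Fortran is 1-based):
b = forall(range(0, 3), lambda i, arr: arr[i] + arr[i + 1], a)
print(b)  # [3, 5, 7, 4, 5]
# Roughly WHERE (a > 3) a = 0:
c = where([x > 3 for x in a], lambda x: 0, a)
print(c)  # [1, 2, 3, 0, 0]
```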

  11. Design, analysis and control of cable-suspended parallel robots and its applications

    CERN Document Server

    Zi, Bin

    2017-01-01

    This book provides an essential overview of the authors’ work in the field of cable-suspended parallel robots, focusing on innovative design, mechanics, control, development and applications. It presents and analyzes several typical mechanical architectures of cable-suspended parallel robots in practical applications, including the feed cable-suspended structure for super antennae, hybrid-driven-based cable-suspended parallel robots, and cooperative cable parallel manipulators for multiple mobile cranes. It also addresses the fundamental mechanics of cable-suspended parallel robots on the basis of their typical applications, including the kinematics, dynamics and trajectory tracking control of the feed cable-suspended structure for super antennae. In addition it proposes a novel hybrid-driven-based cable-suspended parallel robot that uses integrated mechanism design methods to improve the performance of traditional cable-suspended parallel robots. A comparative study on error and performance indices of hybr...

  12. Parallel fluorescent probe synthesis based on the large-scale preparation of BODIPY FL propionic acid.

    Science.gov (United States)

    Katoh, Taisuke; Yoshikawa, Masato; Yamamoto, Takeshi; Arai, Ryosuke; Nii, Noriyuki; Tomata, Yoshihide; Suzuki, Shinkichi; Koyama, Ryoukichi; Negoro, Nobuyuki; Yogo, Takatoshi

    2017-03-01

    We describe a methodology for quick development of fluorescent probes with the desired potency for the target of interest by using a method of parallel synthesis, termed Parallel Fluorescent Probe Synthesis (Parallel-FPS). BODIPY FL propionic acid 1 is a widely used fluorophore, but it is difficult to prepare a large amount of 1, which hinders its use in parallel synthesis. Optimization of a synthetic scheme enabled us to obtain 50 g of 1 in one batch. With this large quantity of 1 in hand, we performed Parallel-FPS of BODIPY FL-labeled ligands for estrogen-related receptor-α (ERRα). An initial trial of the parallel synthesis with various linkers provided a potent ligand for ERRα (reporter IC50 = 80 nM), demonstrating the usefulness of Parallel-FPS. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. The good, the bad, and the ugly of medication coverage: Is altering a diagnosis to ensure medication coverage ethical?

    Directory of Open Access Journals (Sweden)

    Gillian Weston, BS

    2016-06-01

    Full Text Available Recently, a patient presented to the dermatology clinic suffering from disabling, recurrent palmoplantar vesicles and pustules. Biopsy demonstrated nondiagnostic histologic findings without unequivocal evidence for psoriasis. The localized rash was recalcitrant to a host of standard therapies. An anti-tumor necrosis factor biologic was considered, and experience suggested that this expensive medication would only be approved for coverage if a diagnosis was submitted for a Food and Drug Administration–approved indication as psoriasis. All health-care providers face similar dilemmas in caring for their own patients. To whom is the physician’s primary responsibility when what is best for the patient may not align with the realities of our health-care system? Should a physician alter or exaggerate a medical diagnosis to obtain insurance coverage for a needed medication? What are the ethical implications of this action? If the physician’s fiduciary duty to the patient had no limits, there would be multiple potential consequences including compromise of the health-care provider’s integrity and relationships with patients, other providers, and third-party payers as well as the risk to an individual patient’s health and creation of injustices within the health-care system.

  14. Design, Implementation and Evaluation of Parallel Pipelined STAP on Parallel Computers

    Science.gov (United States)

    1998-04-01

    parallel computers. In particular, the paper describes the issues involved in parallelization, our approach to parallelization, and performance results ... on an Intel Paragon. The paper also discusses the process of developing software for such an application on parallel computers when latency and

  15. Systematic approach for deriving feasible mappings of parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir; Imre, Kayhan M.

    2017-01-01

    The need for high-performance computing together with the increasing trend from single processor to parallel computer architectures has leveraged the adoption of parallel computing. To benefit from parallel computing power, usually parallel algorithms are defined that can be mapped and executed

  16. Using LTE Networks for UAV Command and Control Link: A Rural-Area Coverage Analysis

    DEFF Research Database (Denmark)

    Nguyen, Huan Cong; Amorim, Rafhael Medeiros de; Wigard, Jeroen

    2017-01-01

    In this paper we investigate the ability of Long-Term Evolution (LTE) network to provide coverage for Unmanned Aerial Vehicles (UAVs) in a rural area, in particular for the Command and Control (C2) downlink. The study takes into consideration the dependency of the large-scale path loss on the hei...

  17. The global coverage of prevalence data for mental disorders in children and adolescents.

    Science.gov (United States)

    Erskine, H E; Baxter, A J; Patton, G; Moffitt, T E; Patel, V; Whiteford, H A; Scott, J G

    2017-08-01

    Children and adolescents make up almost a quarter of the world's population with 85% living in low- and middle-income countries (LMICs). Globally, mental (and substance use) disorders are the leading cause of disability in young people; however, the representativeness or 'coverage' of the prevalence data is unknown. Coverage refers to the proportion of the target population (ages 5-17 years) represented by the available data. Prevalence data for conduct disorder (CD), attention-deficit/hyperactivity disorder (ADHD), autism spectrum disorders (ASDs), eating disorders (EDs), depression, and anxiety disorders were sourced from systematic reviews conducted for the Global Burden of Disease Study 2010 (GBD 2010) and 2013 (GBD 2013). For each study, the location proportion was multiplied by the age proportion to give study coverage. Location proportion was calculated by dividing the study location population by the total national population. Age proportion was calculated by dividing the national population within the age range of the study sample by the national population aged 5-17 years. If a study only sampled one sex, study coverage was halved. Coverage across studies was then summed for each country to give coverage by country. This method was repeated at the region and global level, and separately for GBD 2013 and GBD 2010. Mean global coverage of prevalence data for mental disorders in ages 5-17 years was 6.7% (CD: 5.0%, ADHD: 5.5%, ASDs: 16.1%, EDs: 4.4%, depression: 6.2%, anxiety: 3.2%). Of 187 countries, 124 had no data for any disorder. Many LMICs were poorly represented in the available prevalence data; for example, no region in sub-Saharan Africa had more than 2% coverage for any disorder. While coverage increased between GBD 2010 and GBD 2013, this differed greatly between disorders and few new countries provided data. The global coverage of prevalence data for mental disorders in children and
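A sketch of the coverage calculation, under one plausible reading of the method (location proportion = study-location population / national population; age proportion = national population inside the sampled age range / national population aged 5-17); all populations below are invented:

```python
def study_coverage(loc_pop, national_pop, ages_in_range_pop, target_pop,
                   one_sex=False):
    """Coverage contributed by a single prevalence study: location
    proportion times age proportion, halved if only one sex was
    sampled.  Denominator choices are assumptions (the abstract's
    wording is garbled at this point)."""
    cov = (loc_pop / national_pop) * (ages_in_range_pop / target_pop)
    return cov / 2 if one_sex else cov

# Hypothetical country with two studies:
studies = [
    # regional study covering the whole 5-17 age range
    dict(loc_pop=2e6, national_pop=50e6, ages_in_range_pop=6e6,
         target_pop=12e6),
    # national study of one sex, narrower age range
    dict(loc_pop=50e6, national_pop=50e6, ages_in_range_pop=3e6,
         target_pop=12e6, one_sex=True),
]
country_cov = sum(study_coverage(**s) for s in studies)
print(f"{100 * country_cov:.1f}%")  # 14.5%
```

Summing such per-study contributions per country, then averaging over countries, gives the 6.7% mean global coverage figure reported above.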

  18. An equity dashboard to monitor vaccination coverage

    Science.gov (United States)

    Harper, Sam; Nandi, Arijit; Rodríguez, José M Mendoza; Hansen, Peter M; Johri, Mira

    2017-01-01

    Abstract Equity monitoring is a priority for Gavi, the Vaccine Alliance, and for those implementing The 2030 agenda for sustainable development. For its new phase of operations, Gavi reassessed its approach to monitoring equity in vaccination coverage. To help inform this effort, we made a systematic analysis of inequalities in vaccination coverage across 45 Gavi-supported countries and compared results from different measurement approaches. Based on our findings, we formulated recommendations for Gavi’s equity monitoring approach. The approach involved defining the vulnerable populations, choosing appropriate measures to quantify inequalities, and defining equity benchmarks that reflect the ambitions of the sustainable development agenda. In this article, we explain the rationale for the recommendations and for the development of an improved equity monitoring tool. Gavi’s previous approach to measuring equity was the difference in vaccination coverage between a country’s richest and poorest wealth quintiles. In addition to the wealth index, we recommend monitoring other dimensions of vulnerability (maternal education, place of residence, child sex and the multidimensional poverty index). For dimensions with multiple subgroups, measures of inequality that consider information on all subgroups should be used. We also recommend that both absolute and relative measures of inequality be tracked over time. Finally, we propose that equity benchmarks target complete elimination of inequalities. To facilitate equity monitoring, we recommend the use of a data display tool – the equity dashboard – to support decision-making in the sustainable development period. We highlight its key advantages using data from Côte d’Ivoire and Haiti. PMID:28250513

  19. Parallel Tensor Compression for Large-Scale Scientific Data.

    Energy Technology Data Exchange (ETDEWEB)

    Kolda, Tamara G. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ballard, Grey [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Austin, Woody Nathan [Univ. of Texas, Austin, TX (United States)

    2015-10-01

    As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10000 on real-world data sets with negligible loss in accuracy. So that we can operate on such massive data, we present the first-ever distributed memory parallel implementation for the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
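The storage saving behind the quoted compression ratios follows directly from the size of the Tucker format: a core tensor plus one factor matrix per mode. A sketch using the five-way tensor from the abstract, with hypothetical multilinear ranks (the ranks actually achievable depend on the data and the accuracy tolerance):

```python
from math import prod

def tucker_storage(dims, ranks):
    """Entries stored by a Tucker decomposition: the core tensor
    (product of the ranks) plus one dims[k] x ranks[k] factor matrix
    per mode."""
    return prod(ranks) + sum(d * r for d, r in zip(dims, ranks))

# The five-way tensor from the abstract: 512^3 grid x 64 variables x 128 steps.
dims = (512, 512, 512, 64, 128)
full = prod(dims)  # ~1.1e12 entries, i.e. ~8 TB in double precision
# Hypothetical multilinear ranks after truncation (illustration only).
ranks = (60, 60, 60, 20, 25)
compressed = tucker_storage(dims, ranks)
print(f"compression ratio: {full / compressed:.0f}x")
```

With these assumed ranks the ratio comes out on the order of the 10^4 figure quoted for the real combustion data sets.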

  20. Parallel Careers and their Consequences for Companies in Brazil

    Directory of Open Access Journals (Sweden)

    Maria Candida Baumer Azevedo

    2014-04-01

    Full Text Available Given the relevance of the need to manage parallel careers to attract and retain people in organizations, this paper provides insight into this phenomenon from an organizational perspective. The parallel career concept, introduced by Alboher (2007) and recently addressed by Schuiling (2012), has previously been examined only from the perspective of the parallel career holder (PC holder). The paper provides insight from both individual and organizational perspectives on the phenomenon of parallel careers and considers how it can function as an important tool for attracting and retaining people by contributing to human development. This paper employs a qualitative approach that includes 30 semi-structured one-on-one interviews. The organizational perspective arises from the 15 interviews with human resources (HR) executives from different companies. The individual viewpoint originates from the interviews with 15 executives who are also PC holders. An inductive content analysis approach was used to examine Brazilian companies and the Brazilian offices of multinationals. Companies that are concerned about having the best talent on their teams can benefit from a deeper understanding of parallel careers, which can be used to attract, develop, and retain talent. Limitations and directions for future research are discussed.