WorldWideScience

Sample records for providing parallel coverage

  1. Providing Universal Health Insurance Coverage in Nigeria.

    Science.gov (United States)

    Okebukola, Peter O; Brieger, William R

    2016-07-07

    Despite a stated goal of achieving universal coverage, the National Health Insurance Scheme of Nigeria had achieved only 4% coverage 12 years after it was launched. This study assessed the plans of the National Health Insurance Scheme to achieve universal health insurance coverage in Nigeria by 2015 and discusses the challenges facing the scheme in achieving insurance coverage. In-depth interviews from various levels of the health-care system in the country, including providers, were conducted. The results of the analysis suggest that challenges to extending coverage include the difficulty in convincing autonomous state governments to buy into the scheme and an inadequate health workforce that might not be able to meet increased demand. Recommendations for increasing the scheme's coverage include increasing decentralization and strengthening human resources for health in the service delivery systems. Strong political will is needed as a catalyst to achieving these goals. © The Author(s) 2016.

  2. Patient Experience Of Provider Refusal Of Medicaid Coverage And Its Implications.

    Science.gov (United States)

    Bhandari, Neeraj; Shi, Yunfeng; Jung, Kyoungrae

    2016-01-01

    Previous studies show that many physicians do not accept new patients with Medicaid coverage, but no study has examined Medicaid enrollees' actual experience of provider refusal of their coverage and its implications. Using the 2012 National Health Interview Survey, we estimate provider refusal of health insurance coverage reported by 23,992 adults with continuous coverage for the past 12 months. We find that among Medicaid enrollees, 6.73% reported their coverage being refused by a provider in 2012, a rate higher than that in Medicare and private insurance by 4.07 (p<.01) and 3.68 (p<.001) percentage points, respectively. Refusal of Medicaid coverage is associated with delaying needed care, using the emergency room (ER) as a usual source of care, and perceiving current coverage as worse than last year. In view of the Affordable Care Act's (ACA) Medicaid expansion, future studies should continue monitoring enrollees' experience of coverage refusal.

  3. 42 CFR 423.464 - Coordination of benefits with other providers of prescription drug coverage.

    Science.gov (United States)

    2010-10-01

    ... fees. CMS may impose user fees on Part D plans for the transmittal of information necessary for benefit...) Provides supplemental drug coverage to individuals based on financial need, age, or medical condition, and... effective exchange of information and coordination between such plan and SPAPs and entities providing other...

  4. A NEPA compliance strategy plan for providing programmatic coverage to agency problems

    International Nuclear Information System (INIS)

    Eccleston, C.H.

    1994-04-01

    The National Environmental Policy Act (NEPA) of 1969, requires that all federal actions be reviewed before making a final decision to pursue a proposed action or one of its reasonable alternatives. The NEPA process is expected to begin early in the planning process. This paper discusses an approach for providing efficient and comprehensive NEPA coverage to large-scale programs. Particular emphasis has been given to determining bottlenecks and developing workarounds to such problems. Specifically, the strategy is designed to meet four specific goals: (1) provide comprehensive coverage, (2) reduce compliance cost/time, (3) prevent project delays, and (4) reduce document obsolescence

  5. The generation of chromosomal deletions to provide extensive coverage and subdivision of the Drosophila melanogaster genome.

    Science.gov (United States)

    Cook, R Kimberley; Christensen, Stacey J; Deal, Jennifer A; Coburn, Rachel A; Deal, Megan E; Gresens, Jill M; Kaufman, Thomas C; Cook, Kevin R

    2012-01-01

    Chromosomal deletions are used extensively in Drosophila melanogaster genetics research. Deletion mapping is the primary method used for fine-scale gene localization. Effective and efficient deletion mapping requires both extensive genomic coverage and a high density of molecularly defined breakpoints across the genome. A large-scale resource development project at the Bloomington Drosophila Stock Center has improved the choice of deletions beyond that provided by previous projects. FLP-mediated recombination between FRT-bearing transposon insertions was used to generate deletions, because it is efficient and provides single-nucleotide resolution in planning deletion screens. The 793 deletions generated pushed coverage of the euchromatic genome to 98.4%. Gaps in coverage contain haplolethal and haplosterile genes, but the sizes of these gaps were minimized by flanking these genes as closely as possible with deletions. In improving coverage, a complete inventory of haplolethal and haplosterile genes was generated and extensive information on other haploinsufficient genes was compiled. To aid mapping experiments, a subset of deletions was organized into a Deficiency Kit to provide maximal coverage efficiently. To improve the resolution of deletion mapping, screens were planned to distribute deletion breakpoints evenly across the genome. The median chromosomal interval between breakpoints now contains only nine genes and 377 intervals contain only single genes. Drosophila melanogaster now has the most extensive genomic deletion coverage and breakpoint subdivision as well as the most comprehensive inventory of haploinsufficient genes of any multicellular organism. The improved selection of chromosomal deletion strains will be useful to nearly all Drosophila researchers.

  6. Defining the essential anatomical coverage provided by military body armour against high energy projectiles.

    Science.gov (United States)

    Breeze, John; Lewis, E A; Fryer, R; Hepper, A E; Mahoney, Peter F; Clasper, Jon C

    2016-08-01

    Body armour is a type of equipment worn by military personnel that aims to prevent or reduce the damage caused by ballistic projectiles to structures within the thorax and abdomen. Such injuries remain the leading cause of potentially survivable deaths on the modern battlefield. Recent developments in computer modelling in conjunction with a programme to procure the next generation of UK military body armour has provided the impetus to re-evaluate the optimal anatomical coverage provided by military body armour against high energy projectiles. A systematic review of the literature was undertaken to identify those anatomical structures within the thorax and abdomen that if damaged were highly likely to result in death or significant long-term morbidity. These structures were superimposed upon two designs of ceramic plate used within representative body armour systems using a computerised representation of human anatomy. Those structures requiring essential medical coverage by a plate were demonstrated to be the heart, great vessels, liver and spleen. For the 50th centile male anthropometric model used in this study, the front and rear plates from the Enhanced Combat Body Armour system only provide limited coverage, but do fulfil their original requirement. The plates from the current Mark 4a OSPREY system cover all of the structures identified in this study as requiring coverage except for the abdominal sections of the aorta and inferior vena cava. Further work on sizing of plates is recommended due to its potential to optimise essential medical coverage. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  7. 75 FR 27141 - Group Health Plans and Health Insurance Issuers Providing Dependent Coverage of Children to Age...

    Science.gov (United States)

    2010-05-13

    ... Group Health Plans and Health Insurance Issuers Providing Dependent Coverage of Children to Age 26 Under... Information and Insurance Oversight of the U.S. Department of Health and Human Services are issuing substantially similar interim final regulations with respect to group health plans and health insurance coverage...

  8. 75 FR 41787 - Requirement for Group Health Plans and Health Insurance Issuers To Provide Coverage of Preventive...

    Science.gov (United States)

    2010-07-19

    ... Requirement for Group Health Plans and Health Insurance Issuers To Provide Coverage of Preventive Services... Insurance Oversight of the U.S. Department of Health and Human Services are issuing substantially similar interim final regulations with respect to group health plans and health insurance coverage offered in...

  9. A PC parallel port button box provides millisecond response time accuracy under Linux.

    Science.gov (United States)

    Stewart, Neil

    2006-02-01

    For psychologists, it is sometimes necessary to measure people's reaction times to the nearest millisecond. This article describes how to use the PC parallel port to receive signals from a button box to achieve millisecond response time accuracy. The workings of the parallel port, the corresponding port addresses, and a simple Linux program for controlling the port are described. A test of the speed and reliability of button box signal detection is reported. If the reader is moderately familiar with Linux, this article should provide sufficient instruction for him or her to build and test his or her own parallel port button box. This article also describes how the parallel port could be used to control an external apparatus.
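    As a rough, hypothetical illustration only (the article's own program is not reproduced here), the sketch below polls the standard LPT1 status register at I/O address 0x379 through Linux's /dev/port device; it assumes root privileges, legacy parallel-port hardware, and a button wired to one of the status pins.

      import os
      import time

      STATUS_PORT = 0x379  # status register of LPT1 (base address 0x378) in the standard PC port layout

      fd = os.open("/dev/port", os.O_RDONLY)

      def read_status() -> int:
          # /dev/port exposes x86 I/O ports as a byte-addressable file (requires root)
          os.lseek(fd, STATUS_PORT, os.SEEK_SET)
          return os.read(fd, 1)[0]

      idle = read_status()
      t0 = time.perf_counter_ns()
      while read_status() == idle:  # busy-waiting keeps detection latency well below a millisecond
          pass
      print("response time: %.3f ms" % ((time.perf_counter_ns() - t0) / 1e6))
      os.close(fd)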

  10. The health and healthcare impact of providing insurance coverage to uninsured children: A prospective observational study

    Directory of Open Access Journals (Sweden)

    Glenn Flores

    2017-05-01

    Full Text Available Abstract Background Of the 4.8 million uninsured children in America, 62–72% are eligible for but not enrolled in Medicaid or CHIP. Not enough is known, however, about the impact of health insurance on outcomes and costs for previously uninsured children, which has never been examined prospectively. Methods This prospective observational study of uninsured Medicaid/CHIP-eligible minority children compared children obtaining coverage vs. those remaining uninsured. Subjects were recruited at 97 community sites, and 11 outcomes monitored monthly for 1 year. Results In this sample of 237 children, those obtaining coverage were significantly (P […]); being uninsured for >6 months at baseline was associated with remaining uninsured for the entire year. In multivariable analysis, children who had been uninsured for >6 months at baseline (odds ratio [OR], 3.8; 95% confidence interval [CI], 1.4–10.3) and African-American children (OR, 2.8; 95% CI, 1.1–7.3) had significantly higher odds of remaining uninsured for the entire year. Insurance saved $2886/insured child/year, with mean healthcare costs = $5155/uninsured vs. $2269/insured child (P = .04). Conclusions Providing health insurance to Medicaid/CHIP-eligible uninsured children improves health, healthcare access and quality, and parental satisfaction; reduces unmet needs and out-of-pocket costs; and saves $2886/insured child/year. African-American children and those who have been uninsured for >6 months are at greatest risk for remaining uninsured. Extrapolation of the savings realized by insuring uninsured, Medicaid/CHIP-eligible children suggests that America potentially could save $8.7–$10.1 billion annually by providing health insurance to all Medicaid/CHIP-eligible uninsured children.

  11. Rotational electrical impedance tomography using electrodes with limited surface coverage provides window for multimodal sensing

    Science.gov (United States)

    Lehti-Polojärvi, Mari; Koskela, Olli; Seppänen, Aku; Figueiras, Edite; Hyttinen, Jari

    2018-02-01

    Electrical impedance tomography (EIT) is an imaging method that could become a valuable tool in multimodal applications. One challenge in simultaneous multimodal imaging is that typically the EIT electrodes cover a large portion of the object surface. This paper investigates the feasibility of rotational EIT (rEIT) in applications where electrodes cover only a limited angle of the surface of the object. In the studied rEIT, the object is rotated a full 360° during a set of measurements to increase the information content of the data. We call this approach limited angle full revolution rEIT (LAFR-rEIT). We test LAFR-rEIT setups in two-dimensional geometries with computational and experimental data. We use up to 256 rotational measurement positions, which requires a new way to solve the forward and inverse problem of rEIT. For this, we provide a modification, available for EIDORS, in the supplementary material. The computational results demonstrate that LAFR-rEIT with eight electrodes produces the same image quality as conventional 16-electrode rEIT, when data from an adequate number of rotational measurement positions are used. Both computational and experimental results indicate that the novel LAFR-rEIT provides good EIT results from setups with limited surface coverage and a small number of electrodes.

  12. Radiographic Underestimation of In Vivo Cup Coverage Provided by Total Hip Arthroplasty for Dysplasia.

    Science.gov (United States)

    Nie, Yong; Wang, HaoYang; Huang, ZeYu; Shen, Bin; Kraus, Virginia Byers; Zhou, Zongke

    2018-01-01

    The accuracy of using 2-dimensional anteroposterior pelvic radiography to assess acetabular cup coverage among patients with developmental dysplasia of the hip after total hip arthroplasty (THA) remains unclear in retrospective clinical studies. A group of 20 patients with developmental dysplasia of the hip (20 hips) underwent cementless THA. During surgery but after acetabular reconstruction, bone wax was pressed onto the uncovered surface of the acetabular cup. A surface model of the bone wax was generated with 3-dimensional scanning. The percentage of the acetabular cup that was covered by intact host acetabular bone in vivo was calculated with modeling software. Acetabular cup coverage also was determined from a postoperative supine anteroposterior pelvic radiograph. The height of the hip center (distance from the center of the femoral head perpendicular to the inter-teardrop line) also was determined from radiographs. Radiographic cup coverage was a mean of 6.93% (SD, 2.47%) lower than in vivo cup coverage for these 20 patients with developmental dysplasia of the hip (P […]). […] cup coverage (Pearson r=0.761, P […]). The size of the cup (P=.001), but not the position of the hip center (high vs normal), was significantly associated with the difference between radiographic and in vivo cup coverage. Two-dimensional radiographically determined cup coverage conservatively reflects in vivo cup coverage and remains an important index (taking 7% underestimation errors and the effect of greater underestimation of larger cup size into account) for assessing the stability of the cup and monitoring for adequate ingrowth of bone. [Orthopedics. 2018; 41(1):e46-e51.]. Copyright 2017, SLACK Incorporated.

  13. Achieving universal health coverage in small island states: could importing health services provide a solution?

    Science.gov (United States)

    Walls, Helen; Smith, Richard

    2018-01-01

    Background Universal health coverage (UHC) is difficult to achieve in settings short of medicines, health workers and health facilities. These characteristics define the majority of the small island developing states (SIDS), where population size negates the benefits of economies of scale. One option to alleviate this constraint is to import health services, rather than focus on domestic production. This paper provides empirical analysis of the potential impact of this option. Methods Analysis was based on publicly accessible data for 14 SIDS, covering health-related travel and health indicators for the period 2003–2013, together with in-depth review of medical travel schemes for the two highest importing SIDS—the Maldives and Tuvalu. Findings Medical travel from SIDS is accelerating. The SIDS studied generally lacked health infrastructure and technologies, and the majority of them had lower than the recommended number of physicians in a country, which limits their capacity for achieving UHC. Tuvalu and the Maldives were the highest importers of healthcare and notably have public schemes that facilitate medical travel and help lower the out-of-pocket expenditure on medical travel. Although different in approach, design and performance, the medical travel schemes in Tuvalu and the Maldives are both examples of measures used to increase access to health services that cannot feasibly be provided in SIDS. Interpretation Our findings suggest that importing health services (through schemes to facilitate medical travel) is a potential mechanism to help achieve universal healthcare for SIDS but requires due diligence over cost, equity and quality control. PMID:29527349

  14. HPV vaccination coverage of teen girls: the influence of health care providers.

    Science.gov (United States)

    Smith, Philip J; Stokley, Shannon; Bednarczyk, Robert A; Orenstein, Walter A; Omer, Saad B

    2016-03-18

    Between 2010 and 2014, the percentage of 13-17 year-old girls administered ≥3 doses of the human papilloma virus (HPV) vaccine ("fully vaccinated") increased by 7.7 percentage points to 39.7%, and the percentage not administered any doses of the HPV vaccine ("not immunized") decreased by 11.3 percentage points to 40.0%. To evaluate the complex interactions between parents' vaccine-related beliefs, demographic factors, and HPV immunization status. Vaccine-related parental beliefs and sociodemographic data collected by the 2010 National Immunization Survey-Teen among teen girls (n=8490) were analyzed. HPV vaccination status was determined from teens' health care provider (HCP) records. Among teen girls either unvaccinated or fully vaccinated against HPV, teen girls whose parent was positively influenced to vaccinate their teen daughter against HPV were 48.2 percentage points more likely to be fully vaccinated. Parents who reported being positively influenced to vaccinate against HPV were 28.9 percentage points more likely to report that their daughter's HCP talked about the HPV vaccine, 27.2 percentage points more likely to report that their daughter's HCP gave enough time to discuss the HPV shot, and 43.4 percentage points more likely to report that their daughter's HCP recommended the HPV vaccine (p […]). Among teen girls administered 1-2 doses of the HPV vaccine, 87.0% had missed opportunities for HPV vaccine administration. Results suggest that an important pathway to achieving higher ≥3 dose HPV vaccine coverage is increasing HPV vaccination series initiation through HCPs talking to parents about the HPV vaccine, giving parents time to discuss the vaccine, and making a strong recommendation for the HPV vaccine. Also, HPV vaccination series completion rates may be increased by eliminating missed opportunities to vaccinate against HPV and scheduling additional follow-up visits to administer missing HPV vaccine doses. Published by Elsevier Ltd.

  15. Medicaid and CHIP Provide Coverage to More than Half of All Children in D.C. Policy Snapshot

    Science.gov (United States)

    DC Action for Children, 2011

    2011-01-01

    Medicaid and CHIP are crucial parts of the social safety net, providing health insurance coverage to more than half of all children ages 0-21 in D.C. and a third of children nationally. Without these two programs, more than 97,000 children in the District would have been uninsured in 2010. New research indicates that compared with the uninsured,…

  16. Coverage and quality of antenatal care provided at primary health care facilities in the 'Punjab' province of 'Pakistan'.

    Directory of Open Access Journals (Sweden)

    Muhammad Ashraf Majrooh

    Full Text Available BACKGROUND: Antenatal care is a very important component of maternal health services. It provides the opportunity to learn about risks associated with pregnancy and guides planning of the place of delivery, thereby preventing maternal and infant morbidity and mortality. In 'Pakistan', antenatal services for the rural population are provided through a network of primary health care facilities designated as 'Basic Health Units' and 'Rural Health Centers'. Pakistan is a developing country consisting of four provinces and federally administered areas. Each province is administratively subdivided into 'Divisions' and 'Districts'. By population, 'Punjab' is the largest province of Pakistan, with 36 districts. This study was conducted to assess the coverage and quality of antenatal care in the primary health care facilities in the 'Punjab' province of 'Pakistan'. METHODS: Quantitative and qualitative methods were used to collect data. Using a multistage sampling technique, nine out of thirty-six districts were selected, and 19 primary health care facilities of the public sector (seventeen Basic Health Units and two Rural Health Centers) were randomly selected from each district. Focus group discussions and in-depth interviews were conducted with clients, providers and health managers. RESULTS: The overall enrollment for antenatal checkup was 55.9% and the drop-out rate in subsequent visits was 32.9%. The quality of services regarding assessment, treatment and counseling was extremely poor. The reasons for low coverage and quality were the distant location of facilities, deficiency of facility resources, and the indifferent attitude and non-availability of the staff. Moreover, lack of client awareness about the importance of antenatal care and of self-empowerment for decision making to seek care were also responsible for low coverage. CONCLUSION: The coverage and quality of the antenatal care services in 'Punjab' are extremely compromised. Only half of the expected pregnancies are enrolled and

  17. Providing Coverage for the Unique Lifelong Health Care Needs of Living Kidney Donors Within the Framework of Financial Neutrality.

    Science.gov (United States)

    Gill, J S; Delmonico, F; Klarenbach, S; Capron, A M

    2017-05-01

    Organ donation should neither enrich donors nor impose financial burdens on them. We described the scope of health care required for all living kidney donors, reflecting contemporary understanding of long-term donor health outcomes; proposed an approach to identify donor health conditions that should be covered within the framework of financial neutrality; and proposed strategies to pay for this care. Despite the Affordable Care Act in the United States, donors continue to have inadequate coverage for important health conditions that are donation related or that may compromise postdonation kidney function. Amendment of Medicare regulations is needed to clarify that surveillance and treatment of conditions that may compromise postdonation kidney function following donor nephrectomy will be covered without expense to the donor. In other countries lacking health insurance for all residents, sufficient data exist to allow the creation of a compensation fund or donor insurance policies to ensure appropriate care. Providing coverage for donation-related sequelae as well as care to preserve postdonation kidney function ensures protection against the financial burdens of health care encountered by donors throughout their lives. Providing coverage for this care should thus be cost-effective, even without considering the health care cost savings that occur for living donor transplant recipients. © 2016 The American Society of Transplantation and the American Society of Transplant Surgeons.

  18. Chinese newspaper coverage of (unproven) stem cell therapies and their providers.

    Science.gov (United States)

    Ogbogu, Ubaka; Du, Li; Rachul, Christen; Bélanger, Lisa; Caulfield, Timothy

    2013-04-01

    China is a primary destination for stem cell tourism, the phenomenon whereby patients travel abroad to receive unproven stem cell-based treatments that have not been approved in their home countries. Yet, much remains unknown about the state of the stem cell treatment industry in China and about how the Chinese view treatments and providers. Given the media's crucial role in science/health communication and in framing public dialogue, this study sought to examine Chinese newspaper portrayal and perceptions of stem cell treatments and their providers. Based on a content analysis of over 300 newspaper articles, the study revealed that while Chinese newspaper reporting is generally neutral in tone, it is also inaccurate, overly positive, heavily influenced by "interested" treatment providers and focused on the therapeutic uses of stem cells to address the health needs of the local population. The study findings suggest a need to counterbalance providers' influence on media reporting through strategies that encourage media uptake of accurate information about stem cell research and treatments.

  19. Parallel processing from applications to systems

    CERN Document Server

    Moldovan, Dan I

    1993-01-01

    This text provides one of the broadest presentations of parallel processing available, including the structure of parallel processors and parallel algorithms. The emphasis is on mapping algorithms to highly parallel computers, with extensive coverage of array and multiprocessor architectures. Early chapters provide insightful coverage on the analysis of parallel algorithms and program transformations, effectively integrating a variety of material previously scattered throughout the literature. Theory and practice are well balanced across diverse topics in this concise presentation. For exceptional cla…

  20. Microparticles shed from multidrug resistant breast cancer cells provide a parallel survival pathway through immune evasion.

    Science.gov (United States)

    Jaiswal, Ritu; Johnson, Michael S; Pokharel, Deep; Krishnan, S Rajeev; Bebawy, Mary

    2017-02-06

    Breast cancer is the most frequently diagnosed cancer in women. Resident macrophages at distant sites provide a highly responsive and immunologically dynamic innate immune response against foreign infiltrates. Despite extensive characterization of the role of macrophages and other immune cells in malignant tissues, there is very little known about the mechanisms which facilitate metastatic breast cancer spread to distant sites of immunological integrity. The mechanisms by which a key healthy defense mechanism fails to protect distant sites from infiltration by metastatic cells in cancer patients remain undefined. Breast tumors, typical of many tumor types, shed membrane vesicles called microparticles (MPs), ranging in size from 0.1-1 μm in diameter. MPs serve as vectors in the intercellular transfer of functional proteins and nucleic acids and in drug sequestration. In addition, MPs are also emerging to be important players in the evasion of cancer cell immune surveillance. A comparative analysis of effects of MPs isolated from human breast cancer cells and non-malignant human brain endothelial cells were examined on THP-1 derived macrophages in vitro. MP-mediated effects on cell phenotype and functionality was assessed by cytokine analysis, cell chemotaxis and phagocytosis, immunolabelling, flow cytometry and confocal imaging. Student's t-test or a one-way analysis of variance (ANOVA) was used for comparison and statistical analysis. In this paper we report on the discovery of a new cellular basis for immune evasion, which is mediated by breast cancer derived MPs. MPs shed from multidrug resistant (MDR) cells were shown to selectively polarize macrophage cells to a functionally incapacitated state and facilitate their engulfment by foreign cells. We propose this mechanism may serve to physically disrupt the inherent immune response prior to cancer cell colonization whilst releasing mediators required for the recruitment of distant immune cells. These findings

  1. Variation in hepatitis B immunization coverage rates associated with provider practices after the temporary suspension of the birth dose

    Directory of Open Access Journals (Sweden)

    Mullooly John P

    2006-11-01

    Full Text Available Abstract Background In 1999, the American Academy of Pediatrics and U.S. Public Health Service recommended suspending the birth dose of hepatitis B vaccine due to concerns about potential mercury exposure. A previous report found that overall national hepatitis B vaccination coverage rates decreased in association with the suspension. It is unknown whether this underimmunization occurred uniformly or was associated with how providers changed their practices for the timing of hepatitis B vaccine doses. We evaluate the impact of the birth dose suspension on underimmunization for the hepatitis B vaccine series among 24-month-olds in five large provider groups and describe provider practices potentially associated with underimmunization following the suspension. Methods Retrospective cohort study of children enrolled in five large provider groups in the United States (A-E). Logistic regression was used to evaluate the association between the birth dose suspension and a child's probability of being underimmunized at 24 months for the hepatitis B vaccine series. Results Prior to July 1999, the percent of children who received a hepatitis B vaccination at birth varied widely (3% to 90%) across the five provider groups. After the national recommendation to suspend the hepatitis B birth dose, the percent of children who received a hepatitis B vaccination at birth decreased in all provider groups, and this trend persisted after the policy was reversed. The most substantial decreases were observed in the two provider groups that shifted the first hepatitis B dose from birth to 5–6 months of age. Accounting for temporal trend, children in these two provider groups were significantly more likely to be underimmunized for the hepatitis B series at 24 months of age if they were in the birth dose suspension cohort compared with baseline (Group D OR 2.7, 95% CI 1.7 – 4.4; Group E OR 3.1, 95% CI 2.3 – 4.2). This represented 6% more children in Group D and 9

  2. Assessment of systems for paying health care providers in Vietnam: implications for equity, efficiency and expanding effective health coverage.

    Science.gov (United States)

    Phuong, Nguyen Khanh; Oanh, Tran Thi Mai; Phuong, Hoang Thi; Tien, Tran Van; Cashin, Cheryl

    2015-01-01

    Provider payment arrangements are currently a core concern for Vietnam's health sector and a key lever for expanding effective coverage and improving the efficiency and equity of the health system. This study describes how different provider payment systems are designed and implemented in practice across a sample of provinces and districts in Vietnam. Key informant interviews were conducted with over 100 health policy-makers, purchasers and providers using a structured interview guide. The results of the different payment methods were scored by respondents and assessed against a set of health system performance criteria. Overall, the public health insurance agency, Vietnam Social Security (VSS), is focused on managing expenditures through a complicated set of reimbursement policies and caps, but the incentives for providers are unclear and do not consistently support Vietnam's health system objectives. The results of this study are being used by the Ministry of Health and VSS to reform the provider payment systems to be more consistent with international definitions and good practices and to better support Vietnam's health system objectives.

  3. Diagnostic imaging, a 'parallel' discipline. Can current technology provide a reliable digital diagnostic radiology department

    International Nuclear Information System (INIS)

    Moore, C.J.; Eddleston, B.

    1985-01-01

    Only recently has any detailed criticism been voiced about the practicalities of the introduction of generalised, digital, imaging complexes in diagnostic radiology. Although attendant technological problems are highlighted the authors argue that the fundamental causes of current difficulties are not in the generation but in the processing, filing and subsequent retrieval for display of digital image records. In the real world, looking at images is a parallel process of some complexity and so it is perhaps untimely to expect versatile handling of vast image data bases by existing computer hardware and software which, by their current nature, perform tasks serially. (author)

  4. A model for determining when an analysis contains sufficient detail to provide adequate NEPA coverage for a proposed action

    International Nuclear Information System (INIS)

    Eccleston, C.H.

    1994-11-01

    Neither the National Environmental Policy Act (NEPA) nor its subsequent regulations provide substantive guidance for determining the level of detail, discussion, and analysis that is sufficient to adequately cover a proposed action. Yet, decisionmakers are routinely confronted with the problem of making such determinations. Experience has shown that no two decisionmakers are likely to completely agree on the amount of discussion that is sufficient to adequately cover a proposed action. One decisionmaker may determine that a certain level of analysis is adequate, while another may conclude the exact opposite. Achieving a consensus within the agency and among the public can be problematic. Lacking definitive guidance, decisionmakers and critics alike may point to a universe of potential factors as the basis for defending their claim that an action is or is not adequately covered. Experience indicates that assertions are often based on ambiguous opinions that can be neither proved nor disproved. Lack of definitive guidance slows the decisionmaking process and can result in project delays. Furthermore, it can also lead to inconsistencies in decisionmaking, inappropriate levels of NEPA documentation, and increased risk of a project being challenged for inadequate coverage. A more systematic and less subjective approach for making such determinations is obviously needed. A paradigm for reducing the degree of subjectivity inherent in such decisions is presented in the following paper. The model is specifically designed to expedite the decisionmaking process by providing a systematic approach for making these determinations. In many cases, agencies may find that using this model can reduce the analysis and size of NEPA documents.

  5. Comparison of NIS and NHIS/NIPRCS vaccination coverage estimates. National Immunization Survey. National Health Interview Survey/National Immunization Provider Record Check Study.

    Science.gov (United States)

    Bartlett, D L; Ezzati-Rice, T M; Stokley, S; Zhao, Z

    2001-05-01

    The National Immunization Survey (NIS) and the National Health Interview Survey (NHIS) produce national coverage estimates for children aged 19 months to 35 months. The NIS is a cost-effective, random-digit-dialing telephone survey that produces national and state-level vaccination coverage estimates. The National Immunization Provider Record Check Study (NIPRCS) is conducted in conjunction with the annual NHIS, which is a face-to-face household survey. As the NIS is a telephone survey, potential coverage bias exists as the survey excludes children living in nontelephone households. To assess the validity of estimates of vaccine coverage from the NIS, we compared 1995 and 1996 NIS national estimates with results from the NHIS/NIPRCS for the same years. Both the NIS and the NHIS/NIPRCS produce similar results. The NHIS/NIPRCS supports the findings of the NIS.

  6. PROVIDING QUALITY OF ELECTRIC POWER IN ELECTRIC POWER SYSTEM IN PARALLEL OPERATION WITH WIND TURBINE

    Directory of Open Access Journals (Sweden)

    Yu. A. Rolik

    2016-01-01

    Full Text Available The problem of providing electric power quality in electric power systems (EPS) that are equipped with sufficiently long overhead or cable transmission lines is under consideration. This problem is of particular relevance to EPS in which the source of electrical energy is a wind turbine generator, since wind itself is an unstable primary energy source. Determining the degree of automation of voltage regulation in the EPS reduces to the choice of methods and means of regulating power quality parameters. The concept of voltage loss and its causes are explained using the simplest power system, represented by a single-line diagram. It is suggested to regulate voltage by changing the parameters of the network, using the method of reducing line voltage loss by reducing the line reactance. The latter is achieved by longitudinal (series) capacitive compensation of the inductive reactance of the line. The effect is illustrated by vector diagrams of currents and voltages in the equivalent circuits of transmission lines with and without longitudinal capacitive compensation. Analysis of the derived formulas demonstrates that this method of regulation is useful only in power supply systems with a relatively low power factor (cosφ < 0.7 to 0.9). This power factor is typical when a wind turbine with an asynchronous generator is connected to the network, since the wind speed is unstable. Voltage regulation by the proposed method makes it possible to provide the required quality of voltage at the consumers' busbars in this situation. In turn, this creates the necessary conditions for economical transmission of electric power with the lowest expenditure of reactive power and the lowest active power losses.
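    For reference, the line voltage loss the abstract refers to can be written in the standard per-phase approximation (generic symbols, not taken from the paper):

      \Delta U \approx I\,(R\cos\varphi + X\sin\varphi) = \frac{PR + QX}{U}, \qquad X = X_L - X_C

    Inserting a series capacitor lowers the effective reactance X = X_L - X_C, and the resulting reduction in \Delta U is largest when \sin\varphi is large, i.e. at the low power factors associated with asynchronous wind-turbine generators.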

  7. runjags: An R Package Providing Interface Utilities, Model Templates, Parallel Computing Methods and Additional Distributions for MCMC Models in JAGS

    Directory of Open Access Journals (Sweden)

    Matthew J. Denwood

    2016-07-01

    Full Text Available The runjags package provides a set of interface functions to facilitate running Markov chain Monte Carlo models in JAGS from within R. Automated calculation of appropriate convergence and sample length diagnostics, user-friendly access to commonly used graphical outputs and summary statistics, and parallelized methods of running JAGS are provided. Template model specifications can be generated using a standard lme4-style formula interface to assist users less familiar with the BUGS syntax. Automated simulation study functions are implemented to facilitate model performance assessment, as well as drop-k type cross-validation studies, using high performance computing clusters such as those provided by parallel. A module extension for JAGS is also included within runjags, providing the Pareto family of distributions and a series of minimally-informative priors including the DuMouchel and half-Cauchy priors. This paper outlines the primary functions of this package, and gives an illustration of a simulation study to assess the sensitivity of two equivalent model formulations to different prior distributions.

  8. Climate Feedback: Bringing the Scientific Community to Provide Direct Feedback on the Credibility of Climate Media Coverage

    Science.gov (United States)

    Vincent, E. M.; Matlock, T.; Westerling, A. L.

    2015-12-01

    While most scientists recognize climate change as a major societal and environmental issue, social and political will to tackle the problem is still lacking. One of the biggest obstacles is inaccurate reporting or even outright misinformation in climate change coverage that result in the confusion of the general public on the issue. In today's era of instant access to information, what we read online usually falls outside our field of expertise and it is a real challenge to evaluate what is credible. The emerging technology of web annotation could be a game changer as it allows knowledgeable individuals to attach notes to any piece of text of a webpage and to share them with readers, who will be able to see the annotations in context, like comments on a PDF. Here we present the Climate Feedback initiative that is bringing together a community of climate scientists who collectively evaluate the scientific accuracy of influential climate change media coverage. Scientists annotate articles sentence by sentence and assess whether they are consistent with scientific knowledge, allowing readers to see where and why the coverage is, or is not, based on science. Scientists also summarize the essence of their critical commentary in the form of a simple article-level overall credibility rating that quickly informs readers about the credibility of the entire piece. Web annotation allows readers to 'hear' directly from the experts and to sense the consensus in a personal way, as one can literally see how many scientists agree with a given statement. It also allows a broad population of scientists to interact with the media, notably early career scientists. In this talk, we will present results on the impacts annotations have on readers (regarding their evaluation of the trustworthiness of the information they read) and on journalists (regarding their reception of scientists' comments). Several dozen scientists have contributed to this effort to date and the system offers potential to

  9. Maiden immunization coverage survey in the republic of South Sudan: a cross-sectional study providing baselines for future performance measurement

    Science.gov (United States)

    Mbabazi, William; Lako, Anthony K; Ngemera, Daniel; Laku, Richard; Yehia, Mostafah; Nshakira, Nathan

    2013-01-01

    Introduction Since the comprehensive peace agreement was signed in 2005, institutionalization of immunization services in South Sudan has remained a priority. Routine administrative reporting systems were established and showed that national coverage rates for DTP-3 rose from 20% in 2002 to 80% in 2011. This survey was conducted as part of an overall review of progress in implementation of the first EPI Multi-Year Plan for South Sudan 2007-2011. This report provides maiden community coverage estimates for immunization. Methods A cross-sectional community survey was conducted between January and May 2012. Ten cluster surveys were conducted to generate state-specific coverage estimates. The WHO 30x7 cluster sampling method was employed. Data were collected using pre-tested, interviewer-guided, structured questionnaires through house-to-house visits. Results Only 7.3% of children were fully immunized. Coverage for specific antigens was: BCG (28.3%), DTP-1 (25.9%), DTP-3 (22.0%), and measles (16.8%). The drop-out rate between the first and third doses of DTP was 21.3%. Immunization coverage estimates based on card and history were higher, at 45.7% for DTP-3, 45.8% for MCV and 32.2% for full immunization. The majority of immunizations (80.8%) were received at health facilities compared to community service points (19.2%). The major reason for missed immunizations was inadequate information (41.1%). Conclusion The proportion of card-verified, fully vaccinated children aged 12-23 months is very low at 7.3%. Future efforts to improve vaccination quality and coverage should prioritize training of vaccinators and program communication to levels equivalent to or higher than the investments made in EPI cold chain systems since 2007. PMID:24876899

  10. Parallel experimental design and multivariate analysis provides efficient screening of cell culture media supplements to improve biosimilar product quality.

    Science.gov (United States)

    Brühlmann, David; Sokolov, Michael; Butté, Alessandro; Sauer, Markus; Hemberger, Jürgen; Souquet, Jonathan; Broly, Hervé; Jordan, Martin

    2017-07-01

    Rational and high-throughput optimization of mammalian cell culture media has a great potential to modulate recombinant protein product quality. We present a process design method based on parallel design-of-experiment (DoE) of CHO fed-batch cultures in 96-deepwell plates to modulate monoclonal antibody (mAb) glycosylation using medium supplements. To reduce the risk of losing valuable information in an intricate joint screening, 17 compounds were separated into five different groups, considering their mode of biological action. The concentration ranges of the medium supplements were defined according to information encountered in the literature and in-house experience. The screening experiments produced wide glycosylation pattern ranges. Multivariate analysis including principal component analysis and decision trees was used to select the best performing glycosylation modulators. Subsequent D-optimal quadratic design with four factors (three promising compounds and temperature shift) in shake tubes confirmed the outcome of the selection process and provided a solid basis for sequential process development at a larger scale. The glycosylation profile with respect to the specifications for biosimilarity was greatly improved in shake tube experiments: 75% of the conditions were equally close or closer to the specifications for biosimilarity than the best 25% in 96-deepwell plates. Biotechnol. Bioeng. 2017;114: 1448-1458. © 2017 Wiley Periodicals, Inc.
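    As a loose, hypothetical illustration of the multivariate selection step described above (toy data; the variable names are assumptions, not the authors' dataset), one could summarize each culture's glycan profile with PCA and rank supplements by a decision tree's feature importances:

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.tree import DecisionTreeRegressor

      rng = np.random.default_rng(0)
      supplements = ["supplement_%d" % i for i in range(5)]
      X = rng.uniform(0.0, 1.0, size=(96, 5))    # supplement concentrations, one row per deepwell culture
      Y = rng.uniform(0.0, 100.0, size=(96, 4))  # measured glycan species (% of total), toy values

      pc1 = PCA(n_components=2).fit_transform(Y)[:, 0]  # first principal component of the glycan profile
      tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, pc1)
      ranking = sorted(zip(supplements, tree.feature_importances_), key=lambda t: -t[1])
      print(ranking)  # supplements ordered by apparent influence on the profile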

  11. Reporting by multiple employer welfare arrangements and certain other entities that offer or provide coverage for medical care to the employees of two or more employers. Final rule.

    Science.gov (United States)

    2003-04-09

    This document contains a final rule governing certain reporting requirements under Title I of the Employee Retirement Income Security Act of 1974 (ERISA) for multiple employer welfare arrangements (MEWAs) and certain other entities that offer or provide coverage for medical care to the employees of two or more employers. The final rule generally requires the administrator of a MEWA, and certain other entities, to file a form with the Secretary of Labor for the purpose of determining whether the requirements of certain recent health care laws are being met.

  12. The impacts of DRG-based payments on health care provider behaviors under a universal coverage system: a population-based study.

    Science.gov (United States)

    Cheng, Shou-Hsia; Chen, Chi-Chen; Tsai, Shu-Ling

    2012-10-01

    To examine the impacts of diagnosis-related group (DRG) payments on health care providers' behavior under a universal coverage system in Taiwan. This study employed a population-based natural experiment study design. Patients who underwent coronary artery bypass graft surgery or percutaneous transluminal coronary angioplasty, which were incorporated in the Taiwan version of DRG payments in 2010, were defined as the intervention group. The comparison group consisted of patients who underwent cardiovascular procedures which were paid for by fee-for-service schemes and were selected by propensity score matching from patients treated by the same group of surgeons. The generalized estimating equations model and difference-in-difference analysis were used in this study. The introduction of DRG payment resulted in a 10% decrease (p […]). DRG-based payment resulted in reduced intensity of care and shortened length of stay. The findings might be valuable to other countries that are developing or reforming their payment system under a universal coverage system. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
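    A minimal sketch of the difference-in-difference idea (the study itself used generalized estimating equations; the simulated data and variable names below are illustrative assumptions only):

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      n = 2000
      df = pd.DataFrame({
          "drg": rng.integers(0, 2, n),   # 1 = procedure reimbursed under the DRG scheme
          "post": rng.integers(0, 2, n),  # 1 = admission after the DRG payments were introduced
      })
      base = 10.0 * np.exp(rng.normal(0.0, 0.2, n))
      df["los"] = base * np.where((df["drg"] == 1) & (df["post"] == 1), 0.9, 1.0)  # simulate a 10% shorter stay

      # the drg:post interaction term is the difference-in-difference estimate on log length of stay
      fit = smf.ols("np.log(los) ~ drg * post", data=df).fit()
      print(fit.params["drg:post"])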

  13. Parallel rendering

    Science.gov (United States)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.
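    As an illustration of the image-space data decomposition discussed above (the shading function and tile layout are invented for this example), a frame can be split into horizontal tiles that independent worker processes render before a final assembly step:

      from multiprocessing import Pool

      WIDTH, HEIGHT, TILES = 640, 480, 8

      def render_tile(tile_index):
          # render one horizontal band with a trivial per-pixel shading function
          rows = HEIGHT // TILES
          y0 = tile_index * rows
          return [[(x ^ (y0 + y)) & 0xFF for x in range(WIDTH)] for y in range(rows)]

      if __name__ == "__main__":
          with Pool(processes=4) as pool:
              tiles = pool.map(render_tile, range(TILES))  # one task per tile: a static load balance
          image = [row for tile in tiles for row in tile]  # image assembly
          print(len(image), "rows rendered")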

  14. More Rhode Island Adults Have Dental Coverage After the Medicaid Expansion: Did More Adults Receive Dental Services? Did More Dentists Provide Services?

    Science.gov (United States)

    Zwetchkenbaum, Samuel; Oh, Junhie

    2017-10-02

    Under the Affordable Care Act (ACA) Medicaid expansion since 2014, 68,000 more adults under age 65 years were enrolled in Rhode Island Medicaid as of December 2015, a 78% increase from 2013 enrollment. This report assesses changes in dental utilization associated with this expansion. Medicaid enrollment and dental claims for calendar years 2012-2015 were extracted from the RI Medicaid Management Information System. Among adults aged 18-64 years, annual numbers and percentages of Medicaid enrollees who received any dental service were summarized. Additionally, dental service claims were assessed by provider type (private practice or health center). Although 15,000 more adults utilized dental services by the end of 2015, the annual percentage of Medicaid enrollees who received any dental services decreased over the reporting periods, compared to pre-ACA years (2012-13: 39%, 2014: 35%, 2015: 32%). From 2012 to 2015, dental patient increases in community health centers were larger than in private dental offices (78% vs. 34%). Contrary to the Medicaid population increase, the number of dentists that submitted Medicaid claims decreased, particularly among dentists in private dental offices; the percentage of RI private dentists who provided any dental service to adult Medicaid enrollees decreased from 29% in 2012 to 21% in 2015. Implementation of Medicaid expansion has played a critical role in increasing the number of Rhode Islanders with dental coverage, particularly among low-income adults under age 65. However, policymakers must address the persistent and worsening shortage of dental providers that accept Medicaid to provide a more accessible source of oral healthcare for all Rhode Islanders. [Full article available at http://rimed.org/rimedicaljournal-2017-10.asp].

  15. Regulating the for-profit private healthcare providers towards universal health coverage: A qualitative study of legal and organizational framework in Mongolia.

    Science.gov (United States)

    Tsevelvaanchig, Uranchimeg; Narula, Indermohan S; Gouda, Hebe; Hill, Peter S

    2018-01-01

    Regulating the behavior of private providers in the context of mixed health systems has become increasingly important and challenging in many developing countries moving towards universal health coverage including Mongolia. This study examines the current regulatory architecture for private healthcare in Mongolia exploring its role for improving accessibility, affordability, and quality of private care and identifies gaps in policy design and implementation. Qualitative research methods were used including documentary review, analysis, and in-depth interviews with 45 representatives of key actors involved in and affected by regulations in Mongolia's mixed health system, along with long-term participant observation. There has been extensive legal documentation developed regulating private healthcare, with specific organizations assigned to conduct health regulations and inspections. However, the regulatory architecture for healthcare in Mongolia is not optimally designed to improve affordability and quality of private care. This is not limited only to private care: important regulatory functions targeted to quality of care do not exist at the national level. The imprecise content and details of regulations in laws inviting increased political interference, governance issues, unclear roles, and responsibilities of different government regulatory bodies have contributed to failures in implementation of existing regulations. Copyright © 2017 John Wiley & Sons, Ltd.

  16. An Enumerative Combinatorics Model for Fragmentation Patterns in RNA Sequencing Provides Insights into Nonuniformity of the Expected Fragment Starting-Point and Coverage Profile.

    Science.gov (United States)

    Prakash, Celine; Haeseler, Arndt Von

    2017-03-01

    RNA sequencing (RNA-seq) has emerged as the method of choice for measuring the expression of RNAs in a given cell population. In most RNA-seq technologies, sequencing the full length of RNA molecules requires fragmentation into smaller pieces. Unfortunately, the issue of nonuniform sequencing coverage across a genomic feature has been a concern in RNA-seq and is attributed to biases for certain fragments in RNA-seq library preparation and sequencing. To investigate the expected coverage obtained from fragmentation, we develop a simple fragmentation model that is independent of bias from the experimental method and is not specific to the transcript sequence. Essentially, we enumerate all configurations for maximal placement of a given fragment length, F, on transcript length, T, to represent every possible fragmentation pattern, from which we compute the expected coverage profile across a transcript. We extend this model to incorporate general empirical attributes such as read length, fragment length distribution, and number of molecules of the transcript. We further introduce the fragment starting-point, fragment coverage, and read coverage profiles. We find that the expected profiles are not uniform and that factors such as fragment length to transcript length ratio, read length to fragment length ratio, fragment length distribution, and number of molecules influence the variability of coverage across a transcript. Finally, we explore a potential application of the model where, with simulations, we show that it is possible to correctly estimate the transcript copy number for any transcript in the RNA-seq experiment.
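    The enumeration underlying the model can be sketched in a few lines (a simplified illustration under the stated assumptions, ignoring read length and the fragment-length distribution): every maximal placement of a fragment of length F on a transcript of length T is enumerated, and the placements are averaged into an expected per-base coverage profile.

      def expected_fragment_coverage(T, F):
          # enumerate all T - F + 1 maximal placements of a fragment of length F on a transcript of length T
          starts = T - F + 1
          profile = [0] * T
          for s in range(starts):
              for i in range(s, s + F):
                  profile[i] += 1
          return [count / starts for count in profile]  # expected fragment coverage at each base

      # the ramps at both ends show why the expected profile is not uniform
      print(expected_fragment_coverage(T=20, F=5))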

  17. Percent Coverage

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Percent Coverage is a spreadsheet that keeps track of and compares the number of vessels that have departed with and without observers to the numbers of vessels...

  18. Providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer

    Energy Technology Data Exchange (ETDEWEB)

    Archer, Charles J.; Faraj, Daniel A.; Inglett, Todd A.; Ratterman, Joseph D.

    2018-01-30

    Methods, apparatus, and products are disclosed for providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: receiving a network packet in a compute node, the network packet specifying a destination compute node; selecting, in dependence upon the destination compute node, at least one of the links for the compute node along which to forward the network packet toward the destination compute node; and forwarding the network packet along the selected link to the adjacent compute node connected to the compute node through the selected link.
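    The abstract states the selection step only abstractly; as a toy sketch of the idea for a tree-shaped combining network (class and method names here are hypothetical, not from the patent), a node forwards along the single link whose side of the tree contains the destination:

      class Node:
          def __init__(self, rank, parent=None):
              self.rank, self.parent, self.children = rank, parent, []
              if parent is not None:
                  parent.children.append(self)

          def subtree_ranks(self):
              ranks = {self.rank}
              for child in self.children:
                  ranks |= child.subtree_ranks()
              return ranks

          def select_link(self, destination):
              # pick the adjacent node (link) along which to forward the packet
              for child in self.children:
                  if destination in child.subtree_ranks():
                      return child   # descend into the subtree that holds the destination
              return self.parent     # otherwise forward upward, toward the root

      root = Node(0); a = Node(1, root); b = Node(2, root); c = Node(3, a)
      assert root.select_link(3) is a
      assert c.select_link(2) is a   # first hop from node 3 is back toward the root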

  19. Coverage Metrics for Model Checking

    Science.gov (United States)

    Penix, John; Visser, Willem; Norvig, Peter (Technical Monitor)

    2001-01-01

    When using model checking to verify programs in practice, it is not usually possible to achieve complete coverage of the system. In this position paper we describe ongoing research within the Automated Software Engineering group at NASA Ames on the use of test coverage metrics to measure partial coverage and provide heuristic guidance for program model checking. We are specifically interested in applying and developing coverage metrics for concurrent programs that might be used to support certification of next generation avionics software.

  20. Immunization Coverage

    Science.gov (United States)

    Fact sheet on immunization coverage; related data are available from the WHO Global Health Observatory (GHO).

  1. Functional coverages

    NARCIS (Netherlands)

    Donchyts, G.; Baart, F.; Jagers, H.R.A.; Van Dam, A.

    2011-01-01

    A new Application Programming Interface (API) is presented which simplifies working with geospatial coverages as well as many other data structures of a multi-dimensional nature. The main idea extends the Common Data Model (CDM) developed at the University Corporation for Atmospheric Research

  2. Practical parallel computing

    CERN Document Server

    Morse, H Stephen

    1994-01-01

    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment.Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi

  3. Multiwavelength Study of Quiescent States of Mrk 421 with Unprecedented Hard X-Ray Coverage Provided by NuSTAR in 2013

    CERN Document Server

    Baloković, M.; Madejski, G.; Furniss, A.; Chiang, J.; Ajello, M.; Alexander, D.M.; Barret, D.; Blandford, R.; Boggs, S.E.; Christensen, F.E.; Craig, W.W.; Forster, K.; Giommi, P.; Grefenstette, B.W.; Hailey, C.J.; Harrison, F.A.; Hornstrup, A.; Kitaguchi, T.; Koglin, J.E.; Madsen, K.K.; Mao, P.H.; Miyasaka, H.; Mori, K.; Perri, M.; Pivovaroff, M.J.; Puccetti, S.; Rana, V.; Stern, D.; Tagliaferri, G.; Urry, C.M.; Westergaard, N.J.; Zhang, W.W.; Zoglauer, A.; Archambault, S.; Archer, A.A.; Barnacka, A.; Benbow, W.; Bird, R.; Buckley, J.; Bugaev, V.; Cerruti, M.; Chen, X.; Ciupik, L.; Connolly, M.P.; Cui, W.; Dickinson, H.J.; Dumm, J.; Eisch, J.D.; Falcone, A.; Feng, Q.; Finley, J.P.; Fleischhack, H.; Fortson, L.; Griffin, S.; Griffiths, S.T.; Grube, J.; Gyuk, G.; Huetten, M.; Haakansson, N.; Holder, J.; Humensky, T.B.; Johnson, C.A.; Kaaret, P.; Kertzman, M.; Khassen, Y.; Kieda, D.; Krause, M.; Krennrich, F.; Lang, M.J.; Maier, G.; McArthur, S.; Meagher, K.; Moriarty, P.; Nelson, T.; Nieto, D.; Ong, R.A.; Park, N.; Pohl, M.; Popkow, A.; Pueschel, E.; Reynolds, P.T.; Richards, G.T.; Roache, E.; Santander, M.; Sembroski, G.H.; Shahinyan, K.; Smith, A.W.; Staszak, D.; Telezhinsky, I.; Todd, N.W.; Tucci, J.V.; Tyler, J.; Vincent, S.; Weinstein, A.; Wilhelm, A.; Williams, D.A.; Zitzer, B.; Ahnen, M.L.; Ansoldi, S.; Antonelli, L.A.; Antoranz, P.; Babic, A.; Banerjee, B.; Bangale, P.; Barres de Almeida, U.; Barrio, J.; Becerra González, J.; Bednarek, W.; Bernardini, E.; Biasuzzi, B.; Biland, A.; Blanch, O.; Bonnefoy, S.; Bonnoli, G.; Borracci, F.; Bretz, T.; Carmona, E.; Carosi, A.; Chatterjee, A.; Clavero, R.; Colin, P.; Colombo, E.; Contreras, J.L.; Cortina, J.; Covino, S.; Da Vela, P.; Dazzi, F.; de Angelis, A.; De Lotto, B.; Wilhelmi, E. D. de Oña; Delgado Mendez, C.; Di Pierro, F.; Dominis Prester, D.; Dorner, D.; Doro, M.; Einecke, S.; Elsaesser, D.; Fernández-Barral, A.; Fidalgo, D.; Fonseca, M.V.; Font, L.; Frantzen, K.; Fruck, C.; Galindo, D.; López, R. J. García; Garczarczyk, M.; Garrido Terrats, D.; Gaug, M.; Giammaria, P.; Eisenacher, D.; Godinović, N.; González Muñoz, A.; Guberman, D.; Hahn, A.; Hanabata, Y.; Hayashida, M.; Herrera, J.; Hose, J.; Hrupec, D.; Hughes, G.; Idec, W.; Kodani, K.; Konno, Y.; Kubo, H.; Kushida, J.; La Barbera, A.; Lelas, D.; Lindfors, E.; Lombardi, S.; Longo, F.; López, M.; López-Coto, R.; López-Oramas, A.; Lorenz, E.; Majumdar, P.; Makariev, M.; Mallot, K.; Maneva, G.; Manganaro, M.; Mannheim, K.; Maraschi, L.; Marcote, B.; Mariotti, M.; Martínez, M.; Mazin, D.; Menzel, U.; Miranda, J.M.; Mirzoyan, R.; Moralejo, A.; Moretti, E.; Nakajima, D.; Neustroev, V.; Niedzwiecki, A.; Nievas-Rosillo, M.; Nilsson, K.; Nishijima, K.; Noda, K.; Orito, R.; Overkemping, A.; Paiano, S.; Palacio, S.; Palatiello, M.; Paoletti, R.; Paredes, J.M.; Paredes-Fortuny, X.; Persic, M.; Poutanen, J.; Prada Moroni, P. G.; Prandini, E.; Puljak, I.; Rhode, W.; Ribó, M.; Rico, J.; Garcia, J. 
Rodriguez; Saito, T.; Satalecka, K.; Scapin, V.; Schultz, C.; Schweizer, T.; Shore, S.N.; Sillanpää, A.; Sitarek, J.; Snidaric, I.; Sobczynska, D.; Stamerra, A.; Steinbring, T.; Strzys, M.; Takalo, L.O.; Takami, H.; Tavecchio, F.; Temnikov, P.; Terzić, T.; Tescaro, D.; Teshima, M.; Thaele, J.; Torres, D.F.; Toyama, T.; Treves, A.; Verguilov, V.; Vovk, I.; Ward, J.E.; Will, M.; Wu, M.H.; Zanin, R.; Perkins, J.; Verrecchia, F.; Leto, C.; Böttcher, M.; Villata, M.; Raiteri, C.M.; Acosta-Pulido, J.A.; Bachev, R.; Berdyugin, A.; Blinov, D.A.; Carnerero, M.I.; Chen, W.P.; Chinchilla, P.; Damljanovic, G.; Eswaraiah, C.; Grishina, T.S.; Ibryamov, S.; Jordan, B.; Jorstad, S.G.; Joshi, M.; Kopatskaya, E.N.; Kurtanidze, O.M.; Kurtanidze, S.O.; Larionova, E.G.; Larionova, L.V.; Larionov, V.M.; Latev, G.; Lin, H.C.; Marscher, A.P.; Mokrushina, A.A.; Morozova, D.A.; Nikolashvili, M.G.; Semkov, E.; Strigachev, A.; Troitskaya, Yu. V.; Troitsky, I.S.; Vince, O.; Barnes, J.; Güver, T.; Moody, J.W.; Sadun, A.C.; Sun, S.; Hovatta, T.; Richards, J.L.; Max-Moerbeck, W.; Readhead, A.C.; Lähteenmäki, A.; Tornikoski, M.; Tammi, J.; Ramakrishnan, V.; Reinthal, R.; Angelakis, E.; Fuhrmann, L.; Myserlis, I.; Karamanavis, V.; Sievers, A.; Ungerechts, H.; Zensus, J.A.

    2016-01-01

    We present coordinated multiwavelength observations of the bright, nearby BL Lac object Mrk 421 taken in 2013 January-March, involving GASP-WEBT, Swift, NuSTAR, Fermi-LAT, MAGIC, VERITAS, and other collaborations and instruments, providing data from radio to very-high-energy (VHE) gamma-ray bands. NuSTAR yielded previously unattainable sensitivity in the 3-79 keV range, revealing that the spectrum softens when the source is dimmer until the X-ray spectral shape saturates into a steep power law with a photon index of approximately 3, with no evidence for an exponential cutoff or additional hard components up to about 80 keV. For the first time, we observed both the synchrotron and the inverse-Compton peaks of the spectral energy distribution (SED) simultaneously shifted to frequencies below the typical quiescent state by an order of magnitude. The fractional variability as a function of photon energy shows a double-bump structure which relates to the two bumps of the broadband SED. In each bump, the variabilit...

  4. 29 CFR 95.31 - Insurance coverage.

    Science.gov (United States)

    2010-07-01

    ... recipient. Federally-owned property need not be insured unless required by the terms and conditions of the... § 95.31 Insurance coverage. Recipients shall, at a minimum, provide the equivalent insurance coverage...

  5. Medicare Coverage Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Medicare Coverage Database (MCD) contains all National Coverage Determinations (NCDs) and Local Coverage Determinations (LCDs), local articles, and proposed NCD...

  6. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  7. Application Portable Parallel Library

    Science.gov (United States)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

    Application Portable Parallel Library (APPL) computer program is subroutine-based message-passing software library intended to provide consistent interface to variety of multiprocessor computers on market today. Minimizes effort needed to move application program from one computer to another. User develops application program once and then easily moves application program from parallel computer on which created to another parallel computer. ("Parallel computer" also includes heterogeneous collections of networked computers). Written in C language with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C language or FORTRAN 77.

  8. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  9. CaZF, a plant transcription factor functions through and parallel to HOG and calcineurin pathways in Saccharomyces cerevisiae to provide osmotolerance.

    Directory of Open Access Journals (Sweden)

    Deepti Jain

    Full Text Available Salt-sensitive yeast mutants were deployed to characterize a gene encoding a C2H2 zinc finger protein (CaZF) that is differentially expressed in a drought-tolerant variety of chickpea (Cicer arietinum) and provides salinity tolerance in transgenic tobacco. In Saccharomyces cerevisiae, most of the cellular responses to hyper-osmotic stress are regulated by two interconnected pathways involving the high osmolarity glycerol mitogen-activated protein kinase (Hog1p) and Calcineurin (CAN), a Ca(2+)/calmodulin-regulated protein phosphatase 2B. In this study, we report that heterologous expression of CaZF provides osmotolerance in S. cerevisiae through Hog1p- and Calcineurin-dependent as well as independent pathways. CaZF partially suppresses salt-hypersensitive phenotypes of hog1, can and hog1can mutants and, in conjunction, stimulates HOG and CAN pathway genes with subsequent accumulation of glycerol in the absence of Hog1p and CAN. CaZF directly binds to the stress response element (STRE) to activate STRE-containing promoters in yeast. Transactivation and salt tolerance assays of CaZF deletion mutants showed that, other than the transactivation domain, a C-terminal domain composed of acidic and basic amino acids is also required for its function. Altogether, results from this study suggest that CaZF is a potential plant salt-tolerance determinant and also provide evidence that in budding yeast the expression of HOG and CAN pathway genes can be stimulated in the absence of their regulatory enzymes to provide osmotolerance.

  10. Women's Health Insurance Coverage

    Science.gov (United States)

    Women's Health Insurance Coverage. Published: Oct 31, 2017. ... that many women continue to face. Sources of Health Insurance Coverage: Employer-Sponsored Insurance: Approximately 57.9 million ...

  11. Coverage of the Stanford Prison Experiment in Introductory Psychology Courses

    Science.gov (United States)

    Bartels, Jared M.; Milovich, Marilyn M.; Moussier, Sabrina

    2016-01-01

    The present study examined the coverage of the Stanford Prison Experiment (SPE), including criticisms of the study, in introductory psychology courses through an online survey of introductory psychology instructors (N = 117). Results largely paralleled those of the recently published textbook analyses, with ethical issues garnering the most coverage,…

  12. 14 CFR 1260.131 - Insurance coverage.

    Science.gov (United States)

    2010-01-01

    ... coverage. Recipients shall, at a minimum, provide the equivalent insurance coverage for real property and equipment acquired with Federal funds as provided for property owned by the recipient. Federally-owned property need not be insured unless required by the terms and conditions of the award. ...

  13. 2 CFR 215.31 - Insurance coverage.

    Science.gov (United States)

    2010-01-01

    ... Insurance coverage. Recipients shall, at a minimum, provide the equivalent insurance coverage for real property and equipment acquired with Federal funds as provided to property owned by the recipient. Federally-owned property need not be insured unless required by the terms and conditions of the award. ...

  14. 36 CFR 1210.31 - Insurance coverage.

    Science.gov (United States)

    2010-07-01

    ....31 Insurance coverage. Recipients shall, at a minimum, provide the equivalent insurance coverage for real property and equipment acquired with NHPRC funds as provided to property owned by the recipient. Federally-owned property need not be insured unless required by the terms and conditions of the award. ...

  15. Parallel computation

    International Nuclear Information System (INIS)

    Jejcic, A.; Maillard, J.; Maurel, G.; Silva, J.; Wolff-Bacha, F.

    1997-01-01

    Work in the field of parallel processing has developed through research activities using several numerical Monte Carlo simulations related to current basic and applied problems of nuclear and particle physics. For applications using the GEANT code, development and improvement work was done on the parts simulating low-energy physical phenomena such as radiation, transport and interaction. The problem of actinide burning by means of accelerators was approached using a simulation with the GEANT code. A program for neutron tracking in the range of low energies up to the thermal region has been developed; it is coupled to the GEANT code and permits, in a single pass, the simulation of a hybrid reactor core receiving a proton burst. Other work in this field concerns simulations for nuclear medicine applications, such as the development of biological probes, the evaluation and characterization of gamma cameras (collimators, crystal thickness), and methods for dosimetric calculations. In particular, these calculations are suited to a geometrical parallelization approach especially adapted to parallel machines of the TN310 type. Further work in the same field concerns simulation of electron channelling in crystals and of the beam-beam interaction effect in colliders. The GEANT code was also used to simulate the operation of germanium detectors designed for natural and artificial radioactivity monitoring of the environment.

  16. Parallel R

    CERN Document Server

    McCallum, Ethan

    2011-01-01

    It's tough to argue with R as a high-quality, cross-platform, open source statistical software product, unless you're in the business of crunching Big Data. This concise book introduces you to several strategies for using R to analyze large datasets. You'll learn the basics of Snow, Multicore, Parallel, and some Hadoop-related tools, including how to find them, how to use them, when they work well, and when they don't. With these packages, you can overcome R's single-threaded nature by spreading work across multiple CPUs, or offloading work to multiple machines to address R's memory barrier.

  17. Summary of DOD Acquisition Program Audit Coverage

    National Research Council Canada - National Science Library

    2001-01-01

    This report will provide the DoD audit community with information to support their planning efforts and provide management with information on the extent of audit coverage of DoD acquisition programs...

  18. 5 CFR 531.402 - Employee coverage.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Employee coverage. 531.402 Section 531... GENERAL SCHEDULE Within-Grade Increases § 531.402 Employee coverage. (a) Except as provided in paragraph (b) of this section, this subpart applies to employees who— (1) Are classified and paid under the...

  19. Massively parallel multicanonical simulations

    Science.gov (United States)

    Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard

    2018-03-01

    Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial computationally. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with of the order of 10^4 parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as starting point and reference for practitioners in the field.
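
    A toy sketch of the communication pattern described above: several walkers sample independently with the same multicanonical weights, their energy histograms are merged, and the weights are updated from the merged histogram. The double-well "energy", the bin layout, and the serial loop standing in for GPU threads are all illustrative assumptions, not the paper's Ising implementation:

        # Parallel-multicanonical communication pattern on a toy 1D double well.
        import math
        import random

        def energy(x):
            return (x * x - 1.0) ** 2                 # double well with minima at x = +/-1

        def bin_index(e, bins):
            for i, edge in enumerate(bins):
                if e < edge:
                    return i
            return len(bins) - 1

        def run_walker(weights, bins, steps, x0=0.0, step=0.2):
            """One walker: Metropolis sampling with multicanonical weights ln W(E)."""
            hist = [0] * len(bins)
            x = x0
            for _ in range(steps):
                x_new = x + random.uniform(-step, step)
                b_old = bin_index(energy(x), bins)
                b_new = bin_index(energy(x_new), bins)
                if random.random() < math.exp(weights[b_new] - weights[b_old]):
                    x = x_new                          # accept with prob W(E_new)/W(E_old)
                hist[bin_index(energy(x), bins)] += 1
            return hist

        bins = [0.25 * i for i in range(1, 21)]        # energy bin upper edges
        weights = [0.0] * len(bins)                    # ln W(E), start flat

        for iteration in range(10):                    # weight-update iterations
            merged = [0] * len(bins)
            for _ in range(8):                         # 8 "parallel" walkers (serial here)
                h = run_walker(weights, bins, steps=2000)
                merged = [m + hi for m, hi in zip(merged, h)]
            # standard multicanonical update: ln W <- ln W - ln H for visited bins
            weights = [w - math.log(m) if m > 0 else w for w, m in zip(weights, merged)]
            print("iteration", iteration, "visited bins:", sum(1 for m in merged if m > 0))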

  20. Parallel Lines

    Directory of Open Access Journals (Sweden)

    James G. Worner

    2017-05-01

    Full Text Available James Worner is an Australian-based writer and scholar currently pursuing a PhD at the University of Technology Sydney. His research seeks to expose masculinities lost in the shadow of Australia’s Anzac hegemony while exploring new opportunities for contemporary historiography. He is the recipient of the Doctoral Scholarship in Historical Consciousness at the university’s Australian Centre of Public History and will be hosted by the University of Bologna during 2017 on a doctoral research writing scholarship.   ‘Parallel Lines’ is one of a collection of stories, The Shapes of Us, exploring liminal spaces of modern life: class, gender, sexuality, race, religion and education. It looks at lives, like lines, that do not meet but which travel in proximity, simultaneously attracted and repelled. James’ short stories have been published in various journals and anthologies.

  1. Distributed and cloud computing from parallel processing to the Internet of Things

    CERN Document Server

    Hwang, Kai; Fox, Geoffrey C

    2012-01-01

    Distributed and Cloud Computing, named a 2012 Outstanding Academic Title by the American Library Association's Choice publication, explains how to create high-performance, scalable, reliable systems, exposing the design principles, architecture, and innovative applications of parallel, distributed, and cloud computing systems. Starting with an overview of modern distributed models, the book provides comprehensive coverage of distributed and cloud computing, including: Facilitating management, debugging, migration, and disaster recovery through virtualization Clustered systems for resear

  2. Terrorism and nuclear damage coverage

    International Nuclear Information System (INIS)

    Horbach, N. L. J. T.; Brown, O. F.; Vanden Borre, T.

    2004-01-01

    This paper deals with nuclear terrorism and the manner in which nuclear operators can insure themselves against it, based on the international nuclear liability conventions. It concludes that terrorism is currently not covered under the treaty exoneration provisions on 'war-like events', based on an analysis of the concept of 'terrorism' and the travaux preparatoires. Consequently, operators remain liable for nuclear damage resulting from terrorist acts, for which mandatory insurance is applicable. Since the nuclear insurance industry is looking to exclude such coverage from its policies in the near future, this article suggests alternative means of insurance in order to ensure adequate compensation for innocent victims. The September 11, 2001 attacks on the World Trade Center in New York City and the Pentagon in Washington, DC resulted in the largest loss in the history of insurance, inevitably leading to concerns about nuclear damage coverage should future assaults of this kind target a nuclear power plant or other nuclear installation. Since the attacks, some insurers have signalled their intention to exclude coverage for terrorism from their nuclear liability and property insurance policies. Other insurers are maintaining coverage for terrorism but are establishing aggregate limits or sublimits and increasing premiums. Additional changes by insurers are likely to occur. Highlighted by the September 11th events, and most recently by those in Madrid on 11 March 2004, are questions about how to define acts of terrorism and the extent to which such acts are covered under the international nuclear liability conventions and various domestic nuclear liability laws. Of particular concern to insurers is the possibility of coordinated simultaneous attacks on multiple nuclear facilities. This paper provides a survey of the issues, and recommendations for future clarifications and coverage options. (author)

  3. 22 CFR 518.31 - Insurance coverage.

    Science.gov (United States)

    2010-04-01

    ... property owned by the recipient. Federally-owned property need not be insured unless required by the terms... Requirements Property Standards § 518.31 Insurance coverage. Recipients shall, at a minimum, provide the...

  4. 7 CFR 3019.31 - Insurance coverage.

    Science.gov (United States)

    2010-01-01

    ... recipient. Federally-owned property need not be insured unless required by the terms and conditions of the... Standards § 3019.31 Insurance coverage. Recipients shall, at a minimum, provide the equivalent insurance...

  5. 34 CFR 74.31 - Insurance coverage.

    Science.gov (United States)

    2010-07-01

    ... by the recipient. Federally-owned property need not be insured unless required by the terms and... Property Standards § 74.31 Insurance coverage. Recipients shall, at a minimum, provide the equivalent...

  6. 49 CFR 19.31 - Insurance coverage.

    Science.gov (United States)

    2010-10-01

    ... property owned by the recipient. Federally-owned property need not be insured unless required by the terms... Requirements Property Standards § 19.31 Insurance coverage. Recipients shall, at a minimum, provide the...

  7. 10 CFR 600.131 - Insurance coverage.

    Science.gov (United States)

    2010-01-01

    ... provided to property owned by the recipient. Federally-owned property need not be insured unless required... Nonprofit Organizations Post-Award Requirements § 600.131 Insurance coverage. Recipients shall, at a minimum...

  8. 20 CFR 435.31 - Insurance coverage.

    Science.gov (United States)

    2010-04-01

    ... funds as provided to property owned by the recipient. Federally-owned property need not be insured... ORGANIZATIONS Post-Award Requirements Property Standards § 435.31 Insurance coverage. Recipients must, at a...

  9. 28 CFR 70.31 - Insurance coverage.

    Science.gov (United States)

    2010-07-01

    ... with Federal funds as provided to property owned by the recipient. Federally-owned property need not be...-PROFIT ORGANIZATIONS Post-Award Requirements Property Standards § 70.31 Insurance coverage. Recipients...

  10. Development and application of a 6.5 million feature affymetrix genechip® for massively parallel discovery of single position polymorphisms in lettuce (Lactuca spp.)

    OpenAIRE

    Stoffel, Kevin; van Leeuwen, Hans; Kozik, Alexander; Caldwell, David; Ashrafi, Hamid; Cui, Xinping; Tan, Xiaoping; Hill, Theresa; Reyes-Chin-Wo, Sebastian; Truco, Maria-Jose; Michelmore, Richard W; Van Deynze, Allen

    2012-01-01

    Abstract Background High-resolution genetic maps are needed in many crops to help characterize the genetic diversity that determines agriculturally important traits. Hybridization to microarrays to detect single feature polymorphisms is a powerful technique for marker discovery and genotyping because of its highly parallel nature. However, microarrays designed for gene expression analysis rarely provide sufficient gene coverage for optimal detection o...

  11. Understanding of the characteristics of the local newspapers providing media coverage on the matters of nuclear energy in the regions where nuclear facilities are located. Based on analysis of the media reports and interviews with journalists

    International Nuclear Information System (INIS)

    Tsuchida, Tatsuro; Kimura, Hiroshi

    2011-01-01

    Taking into consideration the influence of media coverage, this research analyzes the characteristics of the local newspapers that cover diverse events relevant to nuclear energy in regional areas where nuclear facilities are located (hereinafter called the 'regions'). According to previous surveys, local residents in the regions are more interested in nuclear energy matters than those who live in urban areas, and the local newspapers report more nuclear energy news and from a wider variety of angles. Interviews with executives and journalists of the local newspaper companies in the regions reveal that the local newspapers tend not to report news sensationally but rather take a supportive stance toward development in their regions. The interviewees hope that the various activities of the nuclear industry will promote education, employment and cooperation among government, industry and academia, and that these activities will help to increase benefits in their regions. This awareness appears to be reflected in the articles of the local newspapers. The surveys conducted for this research suggest that the journalists expect their regions to make particularly qualitative progress in the future. (author)

  12. Cooperative Cloud Service Aware Mobile Internet Coverage Connectivity Guarantee Protocol Based on Sensor Opportunistic Coverage Mechanism

    Directory of Open Access Journals (Sweden)

    Qin Qin

    2015-01-01

    Full Text Available In order to improve the Internet coverage ratio and provide a connectivity guarantee, we propose a coverage connectivity guarantee protocol for the mobile Internet based on a sensor opportunistic coverage mechanism and cooperative cloud services. In this scheme, a network coverage algorithm with high reliability and real-time security is derived from the opportunistic covering rules by exploiting both sensor nodes and mobile Internet nodes. A cloud service business support platform is then created on top of the Internet application service management capabilities and the wireless sensor network communication service capabilities, forming the architecture of the cloud support layer, and a cooperative cloud service awareness model is proposed. Finally, we present the mobile Internet coverage connectivity guarantee protocol. Experimental results demonstrate that the proposed algorithm performs well in terms of Internet security and stability as well as coverage connectivity.

  13. ε-Net Approach to Sensor k-Coverage

    Directory of Open Access Journals (Sweden)

    Fusco Giordano

    2010-01-01

    Full Text Available Wireless sensors rely on battery power, and in many applications it is difficult or prohibitive to replace them. Hence, in order to prolong the system's lifetime, some sensors can be kept inactive while others perform all the tasks. In this paper, we study the k-coverage problem of activating the minimum number of sensors to ensure that every point in the area is covered by at least k sensors. This ensures higher fault tolerance and robustness, and improves many operations, among them position detection and intrusion detection. The k-coverage problem is trivially NP-complete, and hence we can only provide approximation algorithms. In this paper, we present an algorithm based on an extension of the classical ε-net technique. This method gives a provable approximation guarantee expressed in terms of the number of sensors in an optimal solution. We do not make any particular assumption on the shape of the areas covered by each sensor, besides that they must be closed, connected, and without holes.
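
    To make the problem statement concrete, a brute-force k-coverage check over a grid of sample points with disk-shaped sensing ranges is sketched below; both simplifications are assumptions of this sketch (the paper makes no shape assumption and uses an ε-net construction rather than exhaustive testing):

        # Brute-force check: is every sample point covered by at least k active sensors?
        def is_k_covered(sensors, radius, k, width, height, step=1.0):
            y = 0.0
            while y <= height:
                x = 0.0
                while x <= width:
                    covering = sum(1 for (sx, sy) in sensors
                                   if (sx - x) ** 2 + (sy - y) ** 2 <= radius ** 2)
                    if covering < k:
                        return False, (x, y)        # witness point that is under-covered
                    x += step
                y += step
            return True, None

        sensors = [(1, 1), (1, 3), (3, 1), (3, 3)]
        print(is_k_covered(sensors, radius=3.2, k=2, width=4, height=4))   # (True, None)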

  14. 7 CFR 1737.31 - Area Coverage Survey (ACS).

    Science.gov (United States)

    2010-01-01

    ... an ACS are provided in RUS Telecommunications Engineering and Construction Manual section 205. (e... Studies-Area Coverage Survey and Loan Design § 1737.31 Area Coverage Survey (ACS). (a) The Area Coverage... the borrower's records contain sufficient information as to subscriber development to enable cost...

  15. Advanced parallel processing with supercomputer architectures

    International Nuclear Information System (INIS)

    Hwang, K.

    1987-01-01

    This paper investigates advanced parallel processing techniques and innovative hardware/software architectures that can be applied to boost the performance of supercomputers. Critical issues on architectural choices, parallel languages, compiling techniques, resource management, concurrency control, programming environment, parallel algorithms, and performance enhancement methods are examined and the best answers are presented. The authors cover advanced processing techniques suitable for supercomputers, high-end mainframes, minisupers, and array processors. The coverage emphasizes vectorization, multitasking, multiprocessing, and distributed computing. In order to achieve these operation modes, parallel languages, smart compilers, synchronization mechanisms, load balancing methods, mapping parallel algorithms, operating system functions, application library, and multidiscipline interactions are investigated to ensure high performance. At the end, they assess the potentials of optical and neural technologies for developing future supercomputers

  16. The Galley Parallel File System

    Science.gov (United States)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  17. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Michael [Iowa State Univ., Ames, IA (United States)

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  18. [Medical coverage of a road bicycle race].

    Science.gov (United States)

    Reifferscheid, Florian; Stuhr, Markus; Harding, Ulf; Schüler, Christine; Thoms, Jürgen; Püschel, Klaus; Kappus, Stefan

    2010-07-01

    Major sport events require adequate expertise and experience concerning medical coverage and support. Medical and ambulance services need to cover both participants and spectators. Likewise, residents at the venue need to be provided for. Concepts have to include the possibility of major incidents related to the event. Using the example of the Hamburg Cyclassics, a road bicycle race and major event for professional and amateur cyclists, this article describes the medical coverage, number of patients, types of injuries and emergencies. Objectives regarding the planning of future events and essential medical coverage are consequently discussed. (c) Georg Thieme Verlag Stuttgart-New York.

  19. Ultrascalable petaflop parallel supercomputer

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  20. Stability of parallel flows

    CERN Document Server

    Betchov, R

    2012-01-01

    Stability of Parallel Flows provides information pertinent to hydrodynamical stability. This book explores the stability problems that occur in various fields, including electronics, mechanics, oceanography, administration, economics, as well as naval and aeronautical engineering. Organized into two parts encompassing 10 chapters, this book starts with an overview of the general equations of a two-dimensional incompressible flow. This text then explores the stability of a laminar boundary layer and presents the equation of the inviscid approximation. Other chapters present the general equation

  1. "Feeling" Series and Parallel Resistances.

    Science.gov (United States)

    Morse, Robert A.

    1993-01-01

    Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)
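
    The combination rules behind the activity can be written down directly; a small sketch (the resistor values are arbitrary examples):

        # Equivalent resistance of resistors in series and in parallel.
        # Series: resistances add.  Parallel: reciprocals add.
        def series(resistances):
            return sum(resistances)

        def parallel(resistances):
            return 1.0 / sum(1.0 / r for r in resistances)

        print(series([100, 220, 330]))    # 650 ohms
        print(parallel([100, 100]))       # 50.0 ohms: two equal resistors halve the resistance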

  2. Insurance Coverage Policies for Personalized Medicine

    Directory of Open Access Journals (Sweden)

    Andrew Hresko

    2012-10-01

    Full Text Available Adoption of personalized medicine in practice has been slow, in part due to the lack of evidence of clinical benefit provided by these technologies. Coverage by insurers is a critical step in achieving widespread adoption of personalized medicine. Insurers consider a variety of factors when formulating medical coverage policies for personalized medicine, including the overall strength of evidence for a test, the availability of clinical guidelines, and health technology assessments by independent organizations. In this study, we reviewed coverage policies of the largest U.S. insurers for genomic (disease-related) and pharmacogenetic (PGx) tests to determine the extent to which these tests were covered and the evidence basis for the coverage decisions. We identified 41 coverage policies for 49 unique tests: 22 tests for disease diagnosis, prognosis and risk, and 27 PGx tests. Fifty percent (or less) of the tests reviewed were covered by insurers. Lack of evidence of clinical utility appears to be a major factor in decisions of non-coverage. The inclusion of PGx information in drug package inserts appears to be a common theme of PGx tests that are covered. This analysis highlights the variability of coverage determinations and the factors considered, suggesting that the adoption of personalized medicine will be affected by numerous factors but will continue to be slowed by the lack of demonstrated clinical benefit.

  3. Aspects of coverage in medical DNA sequencing

    Directory of Open Access Journals (Sweden)

    Wilson Richard K

    2008-05-01

    Full Text Available Abstract Background DNA sequencing is now emerging as an important component in biomedical studies of diseases like cancer. Short-read, highly parallel sequencing instruments are expected to be used heavily for such projects, but many design specifications have yet to be conclusively established. Perhaps the most fundamental of these is the redundancy required to detect sequence variations, which bears directly upon genomic coverage and the consequent resolving power for discerning somatic mutations. Results We address the medical sequencing coverage problem via an extension of the standard mathematical theory of haploid coverage. The expected diploid multi-fold coverage, as well as its generalization for aneuploidy are derived and these expressions can be readily evaluated for any project. The resulting theory is used as a scaling law to calibrate performance to that of standard BAC sequencing at 8× to 10× redundancy, i.e. for expected coverages that exceed 99% of the unique sequence. A differential strategy is formalized for tumor/normal studies wherein tumor samples are sequenced more deeply than normal ones. In particular, both tumor alleles should be detected at least twice, while both normal alleles are detected at least once. Our theory predicts these requirements can be met for tumor and normal redundancies of approximately 26× and 21×, respectively. We explain why these values do not differ by a factor of 2, as might intuitively be expected. Future technology developments should prompt even deeper sequencing of tumors, but the 21× value for normal samples is essentially a constant. Conclusion Given the assumptions of standard coverage theory, our model gives pragmatic estimates for required redundancy. The differential strategy should be an efficient means of identifying potential somatic mutations for further study.
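
    A deliberately simplified way to see the qualitative point about the factor of 2, assuming per-allele read depth at a position is Poisson with mean R/2 for total redundancy R; the absolute redundancies produced by such a toy model do not reproduce the 26x and 21x above, which come from the fuller coverage theory in the paper:

        # Differential tumor/normal redundancy under a toy Poisson read-depth model.
        # Normal sample: both alleles seen at least once; tumor: both at least twice.
        import math

        def p_allele_at_least(k, lam):
            """P[Poisson(lam) >= k]."""
            return 1.0 - sum(math.exp(-lam) * lam ** i / math.factorial(i) for i in range(k))

        def min_redundancy(k, target=0.99):
            """Smallest total redundancy R so that both alleles are seen >= k times
            at a given position with probability >= target."""
            R = 1
            while p_allele_at_least(k, R / 2.0) ** 2 < target:
                R += 1
            return R

        print("normal (both alleles >= 1):", min_redundancy(1))
        print("tumor  (both alleles >= 2):", min_redundancy(2))
        # The tumor requirement is well below twice the normal one, which is the
        # qualitative point the abstract makes about the two redundancies.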

  4. Aspects of computation on asynchronous parallel processors

    International Nuclear Information System (INIS)

    Wright, M.

    1989-01-01

    The increasing availability of asynchronous parallel processors has provided opportunities for original and useful work in scientific computing. However, the field of parallel computing is still in a highly volatile state, and researchers display a wide range of opinion about many fundamental questions such as models of parallelism, approaches for detecting and analyzing parallelism of algorithms, and tools that allow software developers and users to make effective use of diverse forms of complex hardware. This volume collects the work of researchers specializing in different aspects of parallel computing, who met to discuss the framework and the mechanics of numerical computing. The far-reaching impact of high-performance asynchronous systems is reflected in the wide variety of topics, which include scientific applications (e.g. linear algebra, lattice gauge simulation, ordinary and partial differential equations), models of parallelism, parallel language features, task scheduling, automatic parallelization techniques, tools for algorithm development in parallel environments, and system design issues

  5. Non-Cartesian parallel imaging reconstruction.

    Science.gov (United States)

    Wright, Katherine L; Hamilton, Jesse I; Griswold, Mark A; Gulani, Vikas; Seiberlich, Nicole

    2014-11-01

    Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be used to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the nonhomogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian generalized autocalibrating partially parallel acquisition (GRAPPA), and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several potential promising clinical applications of non-Cartesian parallel imaging will be covered. © 2014 Wiley Periodicals, Inc.
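
    To make "non-Cartesian trajectory" concrete, below is a brief sketch that generates sample coordinates for a golden-angle radial trajectory, one common non-Cartesian scheme; the spoke count, sample count, and k-space normalization are arbitrary illustrative choices, not values from the review:

        # Radial (non-Cartesian) k-space sample coordinates.  Each spoke is a diameter
        # through the k-space center; rotating successive spokes by the golden angle
        # (about 111.25 degrees) spreads them nearly uniformly, one reason radial
        # sampling covers k-space efficiently even when undersampled.
        import math

        def radial_trajectory(n_spokes, samples_per_spoke, k_max=0.5):
            golden_angle = math.pi / ((1.0 + math.sqrt(5.0)) / 2.0)   # ~1.9416 rad
            points = []
            for s in range(n_spokes):
                theta = s * golden_angle
                for i in range(samples_per_spoke):
                    # sample positions run from -k_max to +k_max along the spoke
                    r = -k_max + 2.0 * k_max * i / (samples_per_spoke - 1)
                    points.append((r * math.cos(theta), r * math.sin(theta)))
            return points

        traj = radial_trajectory(n_spokes=8, samples_per_spoke=64)
        print(len(traj), traj[0], traj[-1])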

  6. Overview of the Force Scientific Parallel Language

    Directory of Open Access Journals (Sweden)

    Gita Alaghband

    1994-01-01

    Full Text Available The Force parallel programming language, designed for large-scale shared-memory multiprocessors, is presented. The language provides a number of parallel constructs as extensions to the ordinary Fortran language and is implemented as a two-level macro preprocessor to support portability across shared memory multiprocessors. The global parallelism model on which the Force is based provides a powerful parallel language. The parallel constructs, generic synchronization, and freedom from process management supported by the Force have resulted in structured parallel programs that are ported to the many multiprocessors on which the Force is implemented. Two new parallel constructs for looping and functional decomposition are discussed. Several programming examples to illustrate some parallel programming approaches using the Force are also presented.

  7. Balanced, parallel operation of flashlamps

    International Nuclear Information System (INIS)

    Carder, B.M.; Merritt, B.T.

    1979-01-01

    A new energy store, the Compensated Pulsed Alternator (CPA), promises to be a cost effective substitute for capacitors to drive flashlamps that pump large Nd:glass lasers. Because the CPA is large and discrete, it will be necessary that it drive many parallel flashlamp circuits, presenting a problem in equal current distribution. Current division to +- 20% between parallel flashlamps has been achieved, but this is marginal for laser pumping. A method is presented here that provides equal current sharing to about 1%, and it includes fused protection against short circuit faults. The method was tested with eight parallel circuits, including both open-circuit and short-circuit fault tests

  8. An Introduction to Parallel Computation R

    Indian Academy of Sciences (India)

    How are they programmed? This article provides an introduction. A parallel computer is a network of processors built for ... and have been used to solve problems much faster than a single ... in parallel computer design is to select an organization which ..... The most ambitious approach to parallel computing is to develop.

  9. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell , Stephen

    2012-01-01

    Optimize code for multi-core processors with Intel's Parallel Studio Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the

  10. Contraceptive Coverage and the Affordable Care Act.

    Science.gov (United States)

    Tschann, Mary; Soon, Reni

    2015-12-01

    A major goal of the Patient Protection and Affordable Care Act is reducing healthcare spending by shifting the focus of healthcare toward preventive care. Preventive services, including all FDA-approved contraception, must be provided to patients without cost-sharing under the ACA. No-cost contraception has been shown to increase uptake of highly effective birth control methods and reduce unintended pregnancy and abortion; however, some institutions and corporations argue that providing contraceptive coverage infringes on their religious beliefs. The contraceptive coverage mandate is evolving due to legal challenges, but it has already demonstrated success in reducing costs and improving access to contraception. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Parallel kinematics type, kinematics, and optimal design

    CERN Document Server

    Liu, Xin-Jun

    2014-01-01

    Parallel Kinematics- Type, Kinematics, and Optimal Design presents the results of 15 year's research on parallel mechanisms and parallel kinematics machines. This book covers the systematic classification of parallel mechanisms (PMs) as well as providing a large number of mechanical architectures of PMs available for use in practical applications. It focuses on the kinematic design of parallel robots. One successful application of parallel mechanisms in the field of machine tools, which is also called parallel kinematics machines, has been the emerging trend in advanced machine tools. The book describes not only the main aspects and important topics in parallel kinematics, but also references novel concepts and approaches, i.e. type synthesis based on evolution, performance evaluation and optimization based on screw theory, singularity model taking into account motion and force transmissibility, and others.   This book is intended for researchers, scientists, engineers and postgraduates or above with interes...

  12. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable distributed graph container and a collection of commonly used parallel graph algorithms. The library introduces pGraph pViews that separate algorithm design from the container implementation. It supports three graph processing algorithmic paradigms, level-synchronous, asynchronous and coarse-grained, and provides common graph algorithms based on them. Experimental results demonstrate improved scalability in performance and data size over existing graph libraries on more than 16,000 cores and on internet-scale graphs containing over 16 billion vertices and 250 billion edges. © Springer-Verlag Berlin Heidelberg 2013.

  13. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, for architectures such as linear arrays, mesh-connected computers, and cube-connected computers. Another example where the algorithms can be applied is shared-memory SIMD (single instruction stream multiple data stream) computers, in which the whole sequence to be sorted can fit in the

  14. Technical support for universal health coverage pilots in Karnataka ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Technical support for universal health coverage pilots in Karnataka and Kerala. This project will provide evidence-based support to implement universal health coverage (UHC) pilot activities in two Indian states: Kerala and Karnataka. The project team will provide technical assistance to these early adopter states to assist ...

  15. Introduction to parallel programming

    CERN Document Server

    Brawer, Steven

    1989-01-01

    Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race

  16. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  17. 42 CFR 436.330 - Coverage for certain aliens.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Coverage for certain aliens. 436.330 Section 436... Coverage of the Medically Needy § 436.330 Coverage for certain aliens. If an agency provides Medicaid to... condition, as defined in § 440.255(c) of this chapter to those aliens described in § 436.406(c) of this...

  18. Writing parallel programs that work

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Serial algorithms typically run inefficiently on parallel machines. This may sound like an obvious statement, but it is the root cause of why parallel programming is considered to be difficult. The current state of the computer industry is still that almost all programs in existence are serial. This talk will describe the techniques used in the Intel Parallel Studio to provide a developer with the tools necessary to understand the behaviors and limitations of the existing serial programs. Once the limitations are known the developer can refactor the algorithms and reanalyze the resulting programs with the tools in the Intel Parallel Studio to create parallel programs that work. About the speaker Paul Petersen is a Sr. Principal Engineer in the Software and Solutions Group (SSG) at Intel. He received a Ph.D. degree in Computer Science from the University of Illinois in 1993. After UIUC, he was employed at Kuck and Associates, Inc. (KAI) working on auto-parallelizing compiler (KAP), and was involved in th...

  19. Parallel Computing in SCALE

    International Nuclear Information System (INIS)

    DeHart, Mark D.; Williams, Mark L.; Bowman, Stephen M.

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  20. Parallel Atomistic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed, the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories, those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains are discussed.
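
    As a concrete illustration of the spatial decomposition mentioned above, the sketch below assigns atoms to processors by slabs along one axis; the 1D slab layout and the absence of ghost-atom exchange are simplifying assumptions, not how production codes are organized:

        # Spatial decomposition for parallel molecular dynamics: the simulation box is
        # cut into equal slabs along x and each atom is assigned to the processor that
        # owns the slab containing it.  Real implementations use 3D domains and also
        # communicate "ghost" atoms near boundaries; this only shows the ownership map.
        def assign_to_domains(positions, box_length, n_procs):
            slab = box_length / n_procs
            domains = [[] for _ in range(n_procs)]
            for idx, (x, y, z) in enumerate(positions):
                owner = min(int(x / slab), n_procs - 1)   # guard the x == box_length edge
                domains[owner].append(idx)
            return domains

        atoms = [(0.5, 1.0, 1.0), (3.2, 0.1, 2.0), (7.9, 4.0, 4.0), (5.1, 2.2, 0.3)]
        print(assign_to_domains(atoms, box_length=8.0, n_procs=4))
        # -> [[0], [1], [3], [2]]  (atom indices owned by each of the 4 processors)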

  1. PKA increases in the olfactory bulb act as unconditioned stimuli and provide evidence for parallel memory systems: pairing odor with increased PKA creates intermediate- and long-term, but not short-term, memories.

    Science.gov (United States)

    Grimes, Matthew T; Harley, Carolyn W; Darby-King, Andrea; McLean, John H

    2012-02-21

    Neonatal odor-preference memory in rat pups is a well-defined associative mammalian memory model dependent on cAMP. Previous work from this laboratory demonstrates three phases of neonatal odor-preference memory: short-term (translation-independent), intermediate-term (translation-dependent), and long-term (transcription- and translation-dependent). Here, we use neonatal odor-preference learning to explore the role of olfactory bulb PKA in these three phases of mammalian memory. PKA activity increased normally in learning animals 10 min after a single training trial. Inhibition of PKA by Rp-cAMPs blocked intermediate-term and long-term memory, with no effect on short-term memory. PKA inhibition also prevented learning-associated CREB phosphorylation, a transcription factor implicated in long-term memory. When long-term memory was rescued through increased β-adrenoceptor activation, CREB phosphorylation was restored. Intermediate-term and long-term, but not short-term odor-preference memories were generated by pairing odor with direct PKA activation using intrabulbar Sp-cAMPs, which bypasses β-adrenoceptor activation. Higher levels of Sp-cAMPs enhanced memory by extending normal 24-h retention to 48-72 h. These results suggest that increased bulbar PKA is necessary and sufficient for the induction of intermediate-term and long-term odor-preference memory, and suggest that PKA activation levels also modulate memory duration. However, short-term memory appears to use molecular mechanisms other than the PKA/CREB pathway. These mechanisms, which are also recruited by β-adrenoceptor activation, must operate in parallel with PKA activation.

  2. Delaunay Triangulation as a New Coverage Measurement Method in Wireless Sensor Network

    Science.gov (United States)

    Chizari, Hassan; Hosseini, Majid; Poston, Timothy; Razak, Shukor Abd; Abdullah, Abdul Hanan

    2011-01-01

    Sensing and communication coverage are among the most important trade-offs in Wireless Sensor Network (WSN) design. A minimum bound of sensing coverage is vital in scheduling, target tracking and redeployment phases, as well as providing communication coverage. Some methods measure the coverage as a percentage value, but detailed information has been missing. Two scenarios with equal coverage percentage may not have the same Quality of Coverage (QoC). In this paper, we propose a new coverage measurement method using Delaunay Triangulation (DT). This can provide the value for all coverage measurement tools. Moreover, it categorizes sensors as ‘fat’, ‘healthy’ or ‘thin’ to show the dense, optimal and scattered areas. It can also yield the largest empty area of sensors in the field. Simulation results show that the proposed DT method can achieve accurate coverage information, and provides many tools to compare QoC between different scenarios. PMID:22163792

  3. PSHED: a simplified approach to developing parallel programs

    International Nuclear Information System (INIS)

    Mahajan, S.M.; Ramesh, K.; Rajesh, K.; Somani, A.; Goel, M.

    1992-01-01

    This paper presents a simplified approach in the form of a tree-structured computational model for parallel application programs. An attempt is made to provide a standard user interface to execute programs on BARC Parallel Processing System (BPPS), a scalable distributed memory multiprocessor. The interface package called PSHED provides a basic framework for representing and executing parallel programs on different parallel architectures. The PSHED package incorporates concepts from a broad range of previous research in programming environments and parallel computations. (author). 6 refs

  4. Parallel auto-correlative statistics with VTK.

    Energy Technology Data Exchange (ETDEWEB)

    Pebay, Philippe Pierre; Bennett, Janine Camille

    2013-08-01

    This report summarizes existing statistical engines in VTK and presents both the serial and parallel auto-correlative statistics engines. It is a sequel to [PT08, BPRT09b, PT09, BPT09, PT10], which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, and order statistics engines. The ease of use of the new parallel auto-correlative statistics engine is illustrated by means of C++ code snippets, and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the auto-correlative statistics engine.
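
    Independent of the VTK plumbing, the statistic such an engine computes can be sketched in a few lines; the hypothetical C++ example below estimates the lag-k autocorrelation of a series and parallelizes the O(n) sums with OpenMP reductions. It does not use the VTK engine's API.

        // Sketch: lag-k autocorrelation of a time series, with the O(n) sums
        // parallelized via OpenMP reductions. Not VTK code; illustration only.
        #include <cmath>
        #include <cstdio>
        #include <vector>

        double autocorrelation(const std::vector<double>& x, int lag) {
            const int n = static_cast<int>(x.size());
            double mean = 0.0, var = 0.0, cov = 0.0;

            #pragma omp parallel for reduction(+:mean)
            for (int i = 0; i < n; ++i) mean += x[i];
            mean /= n;

            #pragma omp parallel for reduction(+:var,cov)
            for (int i = 0; i < n; ++i) {
                var += (x[i] - mean) * (x[i] - mean);
                if (i + lag < n) cov += (x[i] - mean) * (x[i + lag] - mean);
            }
            return cov / var;   // r_k in [-1, 1]
        }

        int main() {
            std::vector<double> series(10000);
            for (std::size_t i = 0; i < series.size(); ++i)
                series[i] = std::sin(0.01 * i);   // smooth signal: strong short-lag autocorrelation
            std::printf("r_1 = %f, r_100 = %f\n",
                        autocorrelation(series, 1), autocorrelation(series, 100));
            return 0;
        }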

  5. 42 CFR 435.139 - Coverage for certain aliens.

    Science.gov (United States)

    2010-10-01

    42 CFR 435.139 (Public Health, 2010 edition): Coverage for certain aliens. The agency must provide services necessary for the treatment of an emergency medical condition, as defined in § 440.255(c) of this chapter, to those aliens...

  7. Compiler Technology for Parallel Scientific Computation

    Directory of Open Access Journals (Sweden)

    Can Özturan

    1994-01-01

    Full Text Available There is a need for compiler technology that, given the source program, will generate efficient parallel codes for different architectures with minimal user involvement. Parallel computation is becoming indispensable in solving large-scale problems in science and engineering. Yet, the use of parallel computation is limited by the high costs of developing the needed software. To overcome this difficulty we advocate a comprehensive approach to the development of scalable architecture-independent software for scientific computation based on our experience with equational programming language (EPL. Our approach is based on a program decomposition, parallel code synthesis, and run-time support for parallel scientific computation. The program decomposition is guided by the source program annotations provided by the user. The synthesis of parallel code is based on configurations that describe the overall computation as a set of interacting components. Run-time support is provided by the compiler-generated code that redistributes computation and data during object program execution. The generated parallel code is optimized using techniques of data alignment, operator placement, wavefront determination, and memory optimization. In this article we discuss annotations, configurations, parallel code generation, and run-time support suitable for parallel programs written in the functional parallel programming language EPL and in Fortran.

  8. Design considerations for parallel graphics libraries

    Science.gov (United States)

    Crockett, Thomas W.

    1994-01-01

    Applications which run on parallel supercomputers are often characterized by massive datasets. Converting these vast collections of numbers to visual form has proven to be a powerful aid to comprehension. For a variety of reasons, it may be desirable to provide this visual feedback at runtime. One way to accomplish this is to exploit the available parallelism to perform graphics operations in place. In order to do this, we need appropriate parallel rendering algorithms and library interfaces. This paper provides a tutorial introduction to some of the issues which arise in designing parallel graphics libraries and their underlying rendering algorithms. The focus is on polygon rendering for distributed memory message-passing systems. We illustrate our discussion with examples from PGL, a parallel graphics library which has been developed on the Intel family of parallel systems.

  9. Parallelization in Modern C++

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The traditionally used and well established parallel programming models OpenMP and MPI are both targeting lower level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable amount of standardization proposals and technical specifications being developed. Those efforts however have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuation s...
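
    As a small, hedged illustration of the higher-level standard abstractions referred to above, C++17 execution policies let standard algorithms run in parallel without explicit thread management (a standard library with parallel-algorithm support, e.g. a TBB-backed libstdc++, is assumed):

        // C++17 parallel algorithms: the execution policy expresses the parallelism,
        // the library chooses how to schedule it. Requires a standard library with
        // parallel-algorithm support (e.g. GCC/libstdc++ linked against TBB).
        #include <algorithm>
        #include <cstdio>
        #include <execution>
        #include <numeric>
        #include <random>
        #include <vector>

        int main() {
            std::vector<double> v(1 << 22);
            std::mt19937 gen(42);
            std::uniform_real_distribution<double> dist(0.0, 1.0);
            for (auto& x : v) x = dist(gen);

            // Parallel sort: same call as std::sort, plus a policy.
            std::sort(std::execution::par, v.begin(), v.end());

            // Parallel map + reduce in one call: sum of squares.
            double sum_sq = std::transform_reduce(std::execution::par,
                                                  v.begin(), v.end(), 0.0,
                                                  std::plus<>(),
                                                  [](double x) { return x * x; });
            std::printf("sum of squares = %f\n", sum_sq);
            return 0;
        }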

  10. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  11. Modelling the implications of moving towards universal coverage in Tanzania.

    Science.gov (United States)

    Borghi, Josephine; Mtei, Gemini; Ally, Mariam

    2012-03-01

    A model was developed to assess the impact of possible moves towards universal coverage in Tanzania over a 15-year time frame. Three scenarios were considered: maintaining the current situation ('the status quo'); expanded health insurance coverage (the estimated maximum achievable coverage in the absence of premium subsidies, coverage restricted to those who can pay); universal coverage to all (government revenues used to pay the premiums for the poor). The model estimated the costs of delivering public health services and all health services to the population as a proportion of Gross Domestic Product (GDP), and forecast revenue from user fees and insurance premiums. Under the status quo, financial protection is provided to 10% of the population through health insurance schemes, with the remaining population benefiting from subsidized user charges in public facilities. Seventy-six per cent of the population would benefit from financial protection through health insurance under the expanded coverage scenario, and 100% of the population would receive such protection through a mix of insurance cover and government funding under the universal coverage scenario. The expanded and universal coverage scenarios have a significant effect on utilization levels, especially for public outpatient care. Universal coverage would require an initial doubling in the proportion of GDP going to the public health system. Government health expenditure would increase to 18% of total government expenditure. The results are sensitive to the cost of health system strengthening, the level of real GDP growth, provider reimbursement rates and administrative costs. Promoting greater cross-subsidization between insurance schemes would provide sufficient resources to finance universal coverage. Alternately, greater tax funding for health could be generated through an increase in the rate of Value-Added Tax (VAT) or expanding the income tax base. The feasibility and sustainability of efforts to

  12. A parallel buffer tree

    DEFF Research Database (Denmark)

    Sitchinava, Nodar; Zeh, Norbert

    2012-01-01

    We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of... in the optimal O(sort_P(N) + K/PB) parallel I/O complexity, where K is the size of the output reported in the process and sort_P(N) is the parallel I/O complexity of sorting N elements using P processors.

  13. Parallel MR imaging.

    Science.gov (United States)

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A; Seiberlich, Nicole

    2012-07-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the undersampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. Copyright © 2012 Wiley Periodicals, Inc.

  14. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
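
    Two of the patterns named above, a reduction and an inclusive prefix scan, can be expressed directly with C++17 standard algorithms; the snippet below is a generic illustration and is not taken from the presentation.

        // Reduction and inclusive prefix scan, two common parallel patterns.
        // std::reduce and std::inclusive_scan may be parallelized by passing an
        // execution policy; the serial calls shown here have the same semantics.
        #include <cstdio>
        #include <numeric>
        #include <vector>

        int main() {
            std::vector<int> data = {3, 1, 4, 1, 5, 9, 2, 6};

            // Reduction: combine all elements with an associative operator.
            int total = std::reduce(data.begin(), data.end(), 0);      // 31

            // Inclusive prefix scan: prefix[i] = data[0] + ... + data[i].
            std::vector<int> prefix(data.size());
            std::inclusive_scan(data.begin(), data.end(), prefix.begin());

            std::printf("total = %d, last prefix = %d\n", total, prefix.back());
            return 0;
        }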

  15. Development and application of a 6.5 million feature Affymetrix Genechip® for massively parallel discovery of single position polymorphisms in lettuce (Lactuca spp.)

    OpenAIRE

    Stoffel, Kevin; Kozik, Alexander; Ashrafi, Hamid; Cui, Xinping; Tan, Xiaoping; Hill, Theresa; Reyes-Chin-Wo, Sebastian; Truco, Maria-Jose; Michelmore, Richard W; Van Deynze, Allen

    2012-01-01

    Background: High-resolution genetic maps are needed in many crops to help characterize the genetic diversity that determines agriculturally important traits. Hybridization to microarrays to detect single feature polymorphisms is a powerful technique for marker discovery and genotyping because of its highly parallel nature. However, microarrays designed for gene expression analysis rarely provide sufficient gene coverage for optimal detection of nucleotide polymorphisms, which limits u...

  16. Dental Care Coverage and Use: Modeling Limitations and Opportunities

    Science.gov (United States)

    Moeller, John F.; Chen, Haiyan

    2014-01-01

    Objectives. We examined why older US adults without dental care coverage and use would have lower use rates if offered coverage than do those who currently have coverage. Methods. We used data from the 2008 Health and Retirement Study to estimate a multinomial logistic model to analyze the influence of personal characteristics in the grouping of older US adults into those with and those without dental care coverage and dental care use. Results. Compared with persons with no coverage and no dental care use, users of dental care with coverage were more likely to be younger, female, wealthier, college graduates, married, in excellent or very good health, and not missing all their permanent teeth. Conclusions. Providing dental care coverage to uninsured older US adults without use will not necessarily result in use rates similar to those with prior coverage and use. We have offered a model using modifiable factors that may help policy planners facilitate programs to increase dental care coverage uptake and use. PMID:24328635

  17. State contraceptive coverage laws: creative responses to questions of "conscience".

    Science.gov (United States)

    Dailard, C

    1999-08-01

    The Federal Employees Health Benefits Program (FEHBP) guaranteed contraceptive coverage for employees of the federal government. However, opponents of the FEHBP contraceptive coverage questioned the viability of the conscience clause, while supporters of the coverage pressed for the narrowest exemption, one permitting only religious plans that clearly state a religious objection to contraception. Six of the nine states that have enacted contraceptive coverage laws aimed at the private sector included a conscience-clause provision in their statutes. Such exemptions are more contentious in the private sector because almost all employees work for employers who offer only one plan. The scope of the exemption for employers was an issue in five of the states that have enacted contraceptive coverage. Hawaii and California, for example, provide that if an employer is exempted from contraceptive coverage on religious grounds, an employee is entitled to purchase the coverage directly from the plan. Questions remain about how an insurer that objects on religious grounds to a plan with contraceptive coverage can function in a marketplace where most private sector employers provide such coverage.

  18. ATLAS FTK a - very complex - custom parallel supercomputer

    CERN Document Server

    Kimura, Naoki; The ATLAS collaboration

    2016-01-01

    In the ever-increasing pile-up LHC environment, advanced techniques of analysing the data are implemented in order to increase the rate of relevant physics processes with respect to background processes. The Fast TracKer (FTK) is a track finding implementation at hardware level that is designed to deliver full-scan tracks with $p_{T}$ above 1GeV to the ATLAS trigger system for every L1 accept (at a maximum rate of 100kHz). In order to achieve this performance a highly parallel system was designed and now it is under installation in ATLAS. In the beginning of 2016 it will provide tracks for the trigger system in a region covering the central part of the ATLAS detector, and during the year its coverage will be extended to the full detector coverage. The system relies on matching hits coming from the silicon tracking detectors against 1 billion patterns stored in specially designed ASIC chips (Associative Memory - AM06). In a first stage coarse resolution hits are matched against the patterns and the accepted h...

  19. 42 CFR 436.321 - Medically needy coverage of the blind.

    Science.gov (United States)

    2010-10-01

    42 CFR 436.321 (Public Health, 2010 edition): Medically needy coverage of the blind. Optional Coverage of the Medically Needy, § 436.321. If the agency provides Medicaid to the medically needy, it may provide Medicaid to blind individuals who meet— (a) The...

  20. Computer-Aided Parallelizer and Optimizer

    Science.gov (United States)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.
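
    The kind of directive such a tool emits can be illustrated with a hand-written example; the hypothetical loop below (shown in C++ rather than FORTRAN, for consistency with the other snippets here) is one where 'tmp' must be classified as private and 'sum' as a reduction variable before the OpenMP pragma is inserted.

        // Hand-written illustration of the kind of OpenMP directive a tool such as
        // CAPO emits after dependence analysis: the outermost loop is parallelized,
        // 'tmp' is classified as private and 'sum' as a reduction variable.
        #include <cstdio>
        #include <vector>

        int main() {
            const int n = 1 << 20;
            std::vector<double> a(n, 1.0), b(n, 2.0);
            double sum = 0.0;
            double tmp = 0.0;   // scratch variable reused every iteration in the serial code

            #pragma omp parallel for private(tmp) reduction(+:sum)
            for (int i = 0; i < n; ++i) {
                tmp = a[i] * b[i];   // each thread needs its own copy -> private
                sum += tmp;          // accumulated across threads -> reduction
            }
            std::printf("dot product = %f\n", sum);   // 2 * 2^20
            return 0;
        }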

  1. Parallel discrete event simulation

    NARCIS (Netherlands)

    Overeinder, B.J.; Hertzberger, L.O.; Sloot, P.M.A.; Withagen, W.J.

    1991-01-01

    In simulating applications for execution on specific computing systems, the simulation performance figures must be known in a short period of time. One basic approach to the problem of reducing the required simulation time is the exploitation of parallelism. However, in parallelizing the simulation

  2. Parallel reservoir simulator computations

    International Nuclear Information System (INIS)

    Hemanth-Kumar, K.; Young, L.C.

    1995-01-01

    The adaptation of a reservoir simulator for parallel computations is described. The simulator was originally designed for vector processors. It performs approximately 99% of its calculations in vector/parallel mode and, relative to scalar calculations, it achieves speedups of 65 and 81 for black oil and EOS simulations, respectively, on the CRAY C-90.

  3. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  4. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  5. Massively parallel mathematical sieves

    Energy Technology Data Exchange (ETDEWEB)

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
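
    A much smaller shared-memory analogue of the idea (not the hypercube implementation described above) is sketched below in C++/OpenMP: the primes up to sqrt(N) are found serially, and each thread then strikes composites out of its own block of the array.

        // Sketch of a segmented, shared-memory Sieve of Eratosthenes. The primes up
        // to sqrt(N) are found serially; each OpenMP thread then strikes composites
        // out of its own block, so no two threads write to the same part of the array.
        #include <algorithm>
        #include <cmath>
        #include <cstdio>
        #include <vector>

        int main() {
            const long long N = 10'000'000;
            const long long root = static_cast<long long>(std::sqrt(static_cast<double>(N)));

            // Serial sieve up to sqrt(N) to collect the base primes.
            std::vector<char> base_sieve(root + 1, 1);
            base_sieve[0] = base_sieve[1] = 0;
            std::vector<long long> base;
            for (long long p = 2; p <= root; ++p)
                if (base_sieve[p]) {
                    base.push_back(p);
                    for (long long q = p * p; q <= root; q += p) base_sieve[q] = 0;
                }

            // Parallel sieve of the full range, one block per loop iteration.
            std::vector<char> is_prime(N + 1, 1);
            is_prime[0] = is_prime[1] = 0;
            const long long block = 1 << 16;
            #pragma omp parallel for schedule(dynamic)
            for (long long lo = 2; lo <= N; lo += block) {
                const long long hi = std::min(lo + block - 1, N);
                for (long long p : base) {
                    long long start = std::max(p * p, ((lo + p - 1) / p) * p);
                    for (long long q = start; q <= hi; q += p) is_prime[q] = 0;
                }
            }

            long long count = 0;
            #pragma omp parallel for reduction(+:count)
            for (long long i = 2; i <= N; ++i) count += is_prime[i];
            std::printf("primes up to %lld: %lld\n", N, count);   // 664579 for N = 10^7
            return 0;
        }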

  6. Scaling up machine learning: parallel and distributed approaches

    National Research Council Canada - National Science Library

    Bekkerman, Ron; Bilenko, Mikhail; Langford, John

    2012-01-01

    ... presented in the book cover a range of parallelization platforms from FPGAs and GPUs to multi-core systems and commodity clusters; concurrent programming frameworks that include CUDA, MPI, MapReduce, and DryadLINQ; and various learning settings: supervised, unsupervised, semi-supervised, and online learning. Extensive coverage of parallelizat...

  7. Parallel fuzzy connected image segmentation on GPU

    OpenAIRE

    Zhuge, Ying; Cao, Yong; Udupa, Jayaram K.; Miller, Robert W.

    2011-01-01

    Purpose: Image segmentation techniques using fuzzy connectedness (FC) principles have shown their effectiveness in segmenting a variety of objects in several large applications. However, one challenge in these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides a highly parallel computing environment. In this paper, the authors present a parallel fuzzy connected image segmentation algorithm impleme...

  8. Assuring Access to Affordable Coverage

    Data.gov (United States)

    U.S. Department of Health & Human Services — Under the Affordable Care Act, millions of uninsured Americans will gain access to affordable coverage through Affordable Insurance Exchanges and improvements in...

  9. Inequity between male and female coverage in state infertility laws.

    Science.gov (United States)

    Dupree, James M; Dickey, Ryan M; Lipshultz, Larry I

    2016-06-01

    To analyze state insurance laws mandating coverage for male factor infertility and identify possible inequities between male and female coverage in state insurance laws. We identified states with laws or codes related to infertility insurance coverage using the National Conference of States Legislatures' and the National Infertility Association's websites. We performed a primary, systematic analysis of the laws or codes to specifically identify coverage for male factor infertility services. Not applicable. Not applicable. Not applicable. The presence or absence of language in state insurance laws mandating coverage for male factor infertility care. There are 15 states with laws mandating insurance coverage for female factor infertility. Only eight of those states (California, Connecticut, Massachusetts, Montana, New Jersey, New York, Ohio, and West Virginia) have mandates for male factor infertility evaluation or treatment. Insurance coverage for male factor infertility is most specific in Massachusetts, New Jersey, and New York, yet significant differences exist in the male factor policies in all eight states. Three states (Massachusetts, New Jersey, and New York) exempt coverage for vasectomy reversal. Despite national recommendations that male and female partners begin infertility evaluations together, only 8 of 15 states with laws mandating infertility coverage include coverage for the male partner. Excluding men from infertility coverage places an undue burden on female partners and risks missing opportunities to diagnose serious male health conditions, correct reversible causes of infertility, and provide cost-effective treatments that can downgrade the intensity of intervention required to achieve a pregnancy. Copyright © 2016 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.

  10. Practical tools to implement massive parallel pyrosequencing of PCR products in next generation molecular diagnostics.

    Directory of Open Access Journals (Sweden)

    Kim De Leeneer

    Full Text Available Despite improvements in terms of sequence quality and price per basepair, Sanger sequencing remains restricted to screening of individual disease genes. The development of massively parallel sequencing (MPS technologies heralded an era in which molecular diagnostics for multigenic disorders becomes reality. Here, we outline different PCR amplification based strategies for the screening of a multitude of genes in a patient cohort. We performed a thorough evaluation in terms of set-up, coverage and sequencing variants on the data of 10 GS-FLX experiments (over 200 patients. Crucially, we determined the actual coverage that is required for reliable diagnostic results using MPS, and provide a tool to calculate the number of patients that can be screened in a single run. Finally, we provide an overview of factors contributing to false negative or false positive mutation calls and suggest ways to maximize sensitivity and specificity, both important in a routine setting. By describing practical strategies for screening of multigenic disorders in a multitude of samples and providing answers to questions about minimum required coverage, the number of patients that can be screened in a single run and the factors that may affect sensitivity and specificity we hope to facilitate the implementation of MPS technology in molecular diagnostics.
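
    The 'patients per run' calculation mentioned above reduces to simple arithmetic; the sketch below uses an assumed formulation (usable reads per run divided by amplicons per patient times the minimum reads required per amplicon) with placeholder numbers, and is not the authors' tool or their validated coverage thresholds.

        // Hypothetical back-of-the-envelope "patients per run" estimate:
        // usable reads per run / (amplicons per patient * minimum reads per amplicon).
        // All numbers below are placeholders, not validated values from the study.
        #include <cstdio>

        int main() {
            const double reads_per_run      = 1'000'000;  // reads produced by one run (assumed)
            const double usable_fraction    = 0.8;        // fraction passing quality filters (assumed)
            const int    amplicons_per_case = 120;        // amplicons needed per patient (assumed)
            const int    min_reads_per_amp  = 40;         // minimum coverage per amplicon (assumed)

            const double patients_per_run =
                reads_per_run * usable_fraction /
                (amplicons_per_case * static_cast<double>(min_reads_per_amp));
            std::printf("approx. patients per run: %.1f\n", patients_per_run);
            return 0;
        }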

  11. [Falsified medicines in parallel trade].

    Science.gov (United States)

    Muckenfuß, Heide

    2017-11-01

    The number of falsified medicines on the German market has distinctly increased over the past few years. In particular, stolen pharmaceutical products, a form of falsified medicines, have increasingly been introduced into the legal supply chain via parallel trading. The reasons why parallel trading serves as a gateway for falsified medicines are most likely the complex supply chains and routes of transport. It is hardly possible for national authorities to trace the history of a medicinal product that was bought and sold by several intermediaries in different EU member states. In addition, the heterogeneous outward appearance of imported and relabelled pharmaceutical products facilitates the introduction of illegal products onto the market. Official batch release at the Paul-Ehrlich-Institut offers the possibility of checking some aspects that might provide an indication of a falsified medicine. In some circumstances, this may allow the identification of falsified medicines before they come onto the German market. However, this control is only possible for biomedicinal products that have not received a waiver regarding official batch release. For improved control of parallel trade, better networking among the EU member states would be beneficial. European-wide regulations, e. g., for disclosure of the complete supply chain, would help to minimise the risks of parallel trading and hinder the marketing of falsified medicines.

  12. Improving Health Care Coverage, Equity, And Financial Protection Through A Hybrid System: Malaysia's Experience.

    Science.gov (United States)

    Rannan-Eliya, Ravindra P; Anuranga, Chamara; Manual, Adilius; Sararaks, Sondi; Jailani, Anis S; Hamid, Abdul J; Razif, Izzanie M; Tan, Ee H; Darzi, Ara

    2016-05-01

    Malaysia has made substantial progress in providing access to health care for its citizens and has been more successful than many other countries that are better known as models of universal health coverage. Malaysia's health care coverage and outcomes are now approaching levels achieved by member nations of the Organization for Economic Cooperation and Development. Malaysia's results are achieved through a mix of public services (funded by general revenues) and parallel private services (predominantly financed by out-of-pocket spending). We examined the distributional aspects of health financing and delivery and assessed financial protection in Malaysia's hybrid system. We found that this system has been effective for many decades in equalizing health care use and providing protection from financial risk, despite modest government spending. Our results also indicate that a high out-of-pocket share of total financing is not a consistent proxy for financial protection; greater attention is needed to the absolute level of out-of-pocket spending. Malaysia's hybrid health system presents continuing unresolved policy challenges, but the country's experience nonetheless provides lessons for other emerging economies that want to expand access to health care despite limited fiscal resources. Project HOPE—The People-to-People Health Foundation, Inc.

  13. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed but it also raises some fundamental questions, including: (i) which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode. (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria. (iii) How can we design new algorithms specifically for parallel systems. (iv) For multi-processor systems how can we handle the software aspects of the interprocessor communications. Aspects of these questions illustrated by examples are considered in these lectures. (orig.)

  14. Parallelism and array processing

    International Nuclear Information System (INIS)

    Zacharov, V.

    1983-01-01

    Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and that of applications that can exploit such systems. The lectures seek to discuss parallelism in a historical context, and to identify all the main aspects of concurrency in computation right up to the present time. Included will be consideration of the important question as to what use parallelism might be in the field of data processing. (orig.)

  15. Parallel magnetic resonance imaging

    International Nuclear Information System (INIS)

    Larkman, David J; Nunes, Rita G

    2007-01-01

    Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. How to recognize potential failure modes and their associated artefacts are shown. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed. (invited topical review)
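
    The theoretical limit mentioned here is commonly summarized by the g-factor relation SNR_parallel = SNR_full / (g * sqrt(R)) for acceleration factor R; the short sketch below simply evaluates that relation for a few illustrative (assumed) g values.

        // SNR penalty of parallel imaging: SNR_parallel = SNR_full / (g * sqrt(R)),
        // where R is the undersampling (acceleration) factor and g >= 1 is the
        // coil-geometry-dependent g-factor. The g values below are illustrative only.
        #include <cmath>
        #include <cstdio>

        int main() {
            const double snr_full = 100.0;                 // arbitrary reference SNR
            const double R[] = {2.0, 3.0, 4.0};            // acceleration factors
            const double g[] = {1.05, 1.2, 1.5};           // assumed g-factors for a coil array

            for (int i = 0; i < 3; ++i) {
                double snr = snr_full / (g[i] * std::sqrt(R[i]));
                std::printf("R = %.0f, g = %.2f -> relative SNR = %.1f%%\n",
                            R[i], g[i], 100.0 * snr / snr_full);
            }
            return 0;
        }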

  16. Parallel R-matrix computation

    International Nuclear Information System (INIS)

    Heggarty, J.W.

    1999-06-01

    For almost thirty years, sequential R-matrix computation has been used by atomic physics research groups, from around the world, to model collision phenomena involving the scattering of electrons or positrons with atomic or molecular targets. As considerable progress has been made in the understanding of fundamental scattering processes, new data, obtained from more complex calculations, is of current interest to experimentalists. Performing such calculations, however, places considerable demands on the computational resources to be provided by the target machine, in terms of both processor speed and memory requirement. Indeed, in some instances the computational requirements are so great that the proposed R-matrix calculations are intractable, even when utilising contemporary classic supercomputers. Historically, increases in the computational requirements of R-matrix computation were accommodated by porting the problem codes to a more powerful classic supercomputer. Although this approach has been successful in the past, it is no longer considered to be a satisfactory solution due to the limitations of current (and future) Von Neumann machines. As a consequence, there has been considerable interest in the high performance multicomputers, that have emerged over the last decade which appear to offer the computational resources required by contemporary R-matrix research. Unfortunately, developing codes for these machines is not as simple a task as it was to develop codes for successive classic supercomputers. The difficulty arises from the considerable differences in the computing models that exist between the two types of machine and results in the programming of multicomputers to be widely acknowledged as a difficult, time consuming and error-prone task. Nevertheless, unless parallel R-matrix computation is realised, important theoretical and experimental atomic physics research will continue to be hindered. This thesis describes work that was undertaken in

  17. Parallel S/sub n/ iteration schemes

    International Nuclear Information System (INIS)

    Wienke, B.R.; Hiromoto, R.E.

    1986-01-01

    The iterative, multigroup, discrete ordinates (S/sub n/) technique for solving the linear transport equation enjoys widespread usage and appeal. Serial iteration schemes and numerical algorithms developed over the years provide a timely framework for parallel extension. On the Denelcor HEP, the authors investigate three parallel iteration schemes for solving the one-dimensional S/sub n/ transport equation. The multigroup representation and serial iteration methods are also reviewed. This analysis represents a first attempt to extend serial S/sub n/ algorithms to parallel environments and provides good baseline estimates on ease of parallel implementation, relative algorithm efficiency, comparative speedup, and some future directions. The authors examine ordered and chaotic versions of these strategies, with and without concurrent rebalance and diffusion acceleration. Two strategies efficiently support high degrees of parallelization and appear to be robust parallel iteration techniques. The third strategy is a weaker parallel algorithm. Chaotic iteration, difficult to simulate on serial machines, holds promise and converges faster than ordered versions of the schemes. Actual parallel speedup and efficiency are high and payoff appears substantial

  18. The Coverage of Campaign Advertising by the Prestige Press in 1972.

    Science.gov (United States)

    Bowers, Thomas A.

    The nature and extent of the news media coverage of political advertising in the presidential campaign of 1972 was shallow and spotty at best. The candidates' political advertising strategies received limited coverage by reporters and commentators. Even the "prestige" press--16 major newspapers--provided limited coverage to the nature…

  19. 76 FR 46677 - Requirements for Group Health Plans and Health Insurance Issuers Relating to Coverage of...

    Science.gov (United States)

    2011-08-03

    ... Requirements for Group Health Plans and Health Insurance Issuers Relating to Coverage of Preventive Services... regulations published July 19, 2010 with respect to group health plans and health insurance coverage offered... plans, and health insurance issuers providing group health insurance coverage. The text of those...

  20. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,; Fidel, Adam; Amato, Nancy M.; Rauchwerger, Lawrence

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable

  1. Massachusetts health reform: employer coverage from employees' perspective.

    Science.gov (United States)

    Long, Sharon K; Stockley, Karen

    2009-01-01

    The national health reform debate continues to draw on Massachusetts' 2006 reform initiative, with a focus on sustaining employer-sponsored insurance. This study provides an update on employers' responses under health reform in fall 2008, using data from surveys of working-age adults. Results show that concerns about employers' dropping coverage or scaling back benefits under health reform have not been realized. Access to employer coverage has increased, as has the scope and quality of their coverage as assessed by workers. However, premiums and out-of-pocket costs have become more of an issue for employees in small firms.

  2. Mediating Trust in Terrorism Coverage

    DEFF Research Database (Denmark)

    Mogensen, Kirsten

    Mass mediated risk communication can contribute to perceptions of threats and fear of “others” and/or to perceptions of trust in fellow citizens and society to overcome problems. This paper outlines a cross-disciplinary holistic framework for research in mediated trust building during an acute crisis. While the framework is presented in the context of television coverage of a terror-related crisis situation, it can equally be used in connection with all other forms of mediated trust. Key words: National crisis, risk communication, crisis management, television coverage, mediated trust.

  3. Monitoring intervention coverage in the context of universal health coverage.

    Directory of Open Access Journals (Sweden)

    Ties Boerma

    2014-09-01

    Full Text Available Monitoring universal health coverage (UHC focuses on information on health intervention coverage and financial protection. This paper addresses monitoring intervention coverage, related to the full spectrum of UHC, including health promotion and disease prevention, treatment, rehabilitation, and palliation. A comprehensive core set of indicators most relevant to the country situation should be monitored on a regular basis as part of health progress and systems performance assessment for all countries. UHC monitoring should be embedded in a broad results framework for the country health system, but focus on indicators related to the coverage of interventions that most directly reflect the results of UHC investments and strategies in each country. A set of tracer coverage indicators can be selected, divided into two groups-promotion/prevention, and treatment/care-as illustrated in this paper. Disaggregation of the indicators by the main equity stratifiers is critical to monitor progress in all population groups. Targets need to be set in accordance with baselines, historical rate of progress, and measurement considerations. Critical measurement gaps also exist, especially for treatment indicators, covering issues such as mental health, injuries, chronic conditions, surgical interventions, rehabilitation, and palliation. Consequently, further research and proxy indicators need to be used in the interim. Ideally, indicators should include a quality of intervention dimension. For some interventions, use of a single indicator is feasible, such as management of hypertension; but in many areas additional indicators are needed to capture quality of service provision. The monitoring of UHC has significant implications for health information systems. Major data gaps will need to be filled. At a minimum, countries will need to administer regular household health surveys with biological and clinical data collection. Countries will also need to improve the

  4. Newspaper coverage of biobanks

    Directory of Open Access Journals (Sweden)

    Ubaka Ogbogu

    2014-07-01

    Full Text Available Background. Biobanks are an important research resource that provide researchers with biological samples, tools and data, but have also been associated with a range of ethical, legal and policy issues and concerns. Although there have been studies examining the views of different stakeholders, such as donors, researchers and the general public, the media portrayal of biobanks has been absent from this body of research. This study therefore examines how biobanking has been represented in major print newspapers from Australia, Canada, the United Kingdom and the United States to identify the issues and concerns surrounding biobanks that have featured most prominently in the print media discourse. Methods. Using Factiva, articles published in major broadsheet newspapers in Canada, the US, the UK, and Australia were identified using specified search terms. The final sample size consisted of 163 articles. Results. The majority of articles mentioned or discussed the benefits of biobanking, with medical research being the most prevalent benefit mentioned. Fewer articles discussed risks associated with biobanking. Researchers were the group of people most quoted in the articles, followed by biobank employees. Biobanking was portrayed as mostly neutral or positive, with few articles portraying biobanking in a negative manner. Conclusion. Reporting on biobanks in the print media heavily favours discussions of related benefits over risks. Members of the scientific research community appear to be a primary source of this positive tone. Under-reporting of risks and a downtrend in reporting on legal and regulatory issues suggest that the print media views such matters as less newsworthy than the perceived benefits of biobanking.

  5. Building a parallel file system simulator

    International Nuclear Information System (INIS)

    Molina-Estolano, E; Maltzahn, C; Brandt, S A; Bent, J

    2009-01-01

    Parallel file systems are gaining in popularity in high-end computing centers as well as commercial data centers. High-end computing systems are expected to scale exponentially and to pose new challenges to their storage scalability in terms of cost and power. To address these challenges scientists and file system designers will need a thorough understanding of the design space of parallel file systems. Yet there exist few systematic studies of parallel file system behavior at petabyte- and exabyte scale. An important reason is the significant cost of getting access to large-scale hardware to test parallel file systems. To contribute to this understanding we are building a parallel file system simulator that can simulate parallel file systems at very large scale. Our goal is to simulate petabyte-scale parallel file systems on a small cluster or even a single machine in reasonable time and fidelity. With this simulator, file system experts will be able to tune existing file systems for specific workloads, scientists and file system deployment engineers will be able to better communicate workload requirements, file system designers and researchers will be able to try out design alternatives and innovations at scale, and instructors will be able to study very large-scale parallel file system behavior in the class room. In this paper we describe our approach and provide preliminary results that are encouraging both in terms of fidelity and simulation scalability.

  6. Recommendation system for immunization coverage and monitoring.

    Science.gov (United States)

    Bhatti, Uzair Aslam; Huang, Mengxing; Wang, Hao; Zhang, Yu; Mehmood, Anum; Di, Wu

    2018-01-02

    Immunization averts an expected 2 to 3 million deaths every year from diphtheria, tetanus, pertussis (whooping cough), and measles; however, an additional 1.5 million deaths could be avoided if vaccination coverage was improved worldwide (data source: http://www.who.int/mediacentre/factsheets/fs378/en/). New vaccination technologies provide earlier diagnoses, personalized treatments and a wide range of other benefits for both patients and health care professionals. Childhood diseases that were commonplace less than a generation ago have become rare because of vaccines. However, 100% vaccination coverage is still the target to avoid further mortality. Governments have launched special campaigns to create an awareness of vaccination. In this paper, we have focused on data mining algorithms for big data using a collaborative approach for vaccination datasets to resolve problems with planning vaccinations in children, stocking vaccines, and tracking and monitoring non-vaccinated children appropriately. Geographical mapping of vaccination records helps to tackle red zone areas, where vaccination rates are poor, while green zone areas, where vaccination rates are good, can be monitored to enable health care staff to plan the administration of vaccines. Our recommendation algorithm assists in these processes by using deep data mining and by accessing records of other hospitals to highlight locations with lower rates of vaccination. The overall performance of the model is good. The model has been implemented in hospitals to control vaccination across the coverage area.
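
    The red/green zone mapping described above amounts to thresholding coverage rates per area; the sketch below is a hypothetical classification with invented thresholds and data, not the recommendation algorithm from the paper.

        // Hypothetical zone classification for vaccination coverage monitoring:
        // areas below a low threshold are "red" (priority for follow-up), areas above
        // a high threshold are "green", everything in between is "amber".
        // Thresholds and example data are invented for illustration.
        #include <cstdio>
        #include <string>
        #include <vector>

        struct Area { std::string name; int vaccinated; int eligible; };

        const char* zone(double coverage) {
            if (coverage < 0.80) return "red";
            if (coverage < 0.95) return "amber";
            return "green";
        }

        int main() {
            std::vector<Area> areas = {
                {"District A", 950, 1000}, {"District B", 620, 1000}, {"District C", 880, 1000}};
            for (const Area& a : areas) {
                double cov = static_cast<double>(a.vaccinated) / a.eligible;
                std::printf("%-10s coverage %5.1f%% -> %s zone\n",
                            a.name.c_str(), 100.0 * cov, zone(cov));
            }
            return 0;
        }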

  7. Shared Variable Oriented Parallel Precompiler for SPMD Model

    Institute of Scientific and Technical Information of China (English)

    1995-01-01

    For the moment, commercial parallel computer systems with distributed memory architecture are usually provided with parallel FORTRAN or parallel C compilers, which are just traditional sequential FORTRAN or C compilers expanded with communication statements. Programmers suffer from writing parallel programs with communication statements. The Shared Variable Oriented Parallel Precompiler (SVOPP) proposed in this paper can automatically generate appropriate communication statements based on shared variables for the SPMD (Single Program Multiple Data) computation model and greatly ease parallel programming with high communication efficiency. The core function of the parallel C precompiler has been successfully verified on a transputer-based parallel computer. Its prominent performance shows that SVOPP is probably a break-through in parallel programming techniques.

  8. A Tutorial on Parallel and Concurrent Programming in Haskell

    Science.gov (United States)

    Peyton Jones, Simon; Singh, Satnam

    This practical tutorial introduces the features available in Haskell for writing parallel and concurrent programs. We first describe how to write semi-explicit parallel programs by using annotations to express opportunities for parallelism and to help control the granularity of parallelism for effective execution on modern operating systems and processors. We then describe the mechanisms provided by Haskell for writing explicitly parallel programs with a focus on the use of software transactional memory to help share information between threads. Finally, we show how nested data parallelism can be used to write deterministically parallel programs which allows programmers to use rich data types in data parallel programs which are automatically transformed into flat data parallel versions for efficient execution on multi-core processors.

  9. Automatic Parallelization Tool: Classification of Program Code for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Mustafa Basthikodi

    2016-04-01

    Full Text Available Performance growth of single-core processors came to a halt in the past decade, but was re-enabled by the introduction of parallelism in processors. Multicore frameworks, along with Graphical Processing Units, have broadly enhanced parallelism. Several compilers have been updated to address the emerging challenges of synchronization and threading. Appropriate program and algorithm classification can greatly help software engineers identify opportunities for effective parallelization. In the present work we investigate existing species for the classification of algorithms; related work on classification is discussed along with a comparison of the issues that challenge classification. A set of algorithms is chosen whose structure matches different issues and performs a given task. We have tested these algorithms utilizing an existing automatic species-extraction tool along with the Bones compiler. We have added functionality to the existing tool, providing a more detailed characterization. The contributions of our work include support for pointer arithmetic, conditional and incremental statements, user-defined types, constants, and mathematical functions. With this, we can retain significant data that is not captured by the original species of algorithms. We implemented these new ideas in the tool, enabling automatic characterization of program code.

  10. SPINning parallel systems software

    International Nuclear Information System (INIS)

    Matlin, O.S.; Lusk, E.; McCune, W.

    2002-01-01

    We describe our experiences in using Spin to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes and connections among them are created and destroyed as MPD is initialized, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the Spin/Promela framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with Spin

  11. Parallel programming with Python

    CERN Document Server

    Palach, Jan

    2014-01-01

    A fast, easy-to-follow and clear tutorial to help you develop Parallel computing systems using Python. Along with explaining the fundamentals, the book will also introduce you to slightly advanced concepts and will help you in implementing these techniques in the real world. If you are an experienced Python programmer and are willing to utilize the available computing resources by parallelizing applications in a simple way, then this book is for you. You are required to have a basic knowledge of Python development to get the most of this book.

  12. A possibility of parallel and anti-parallel diffraction measurements on ...

    Indian Academy of Sciences (India)

    However, a bent perfect crystal (BPC) monochromator at monochromatic focusing condition can provide a quite flat and equal resolution property at both parallel and anti-parallel positions and thus one can have a chance to use both sides for the diffraction experiment. From the data of the FWHM and the / measured ...

  13. Performance of the Galley Parallel File System

    Science.gov (United States)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the input/output (I/O) needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. This interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. Initial experiments, reported in this paper, indicate that Galley is capable of providing high-performance I/O to applications that access data in patterns that have been observed to be common.

  14. Extending Coverage and Lifetime of K-coverage Wireless Sensor Networks Using Improved Harmony Search

    Directory of Open Access Journals (Sweden)

    Shohreh Ebrahimnezhad

    2011-07-01

    Full Text Available K-coverage wireless sensor networks try to provide facilities such that each hotspot region is covered by at least k sensors. Because the fundamental evaluation metrics of such networks are coverage and lifetime, an approach that extends both of them simultaneously is of great interest. In this article, it is supposed that two kinds of nodes are available: static and mobile. The proposed method first tries to balance energy among sensor nodes using the Improved Harmony Search (IHS) algorithm in a k-coverage and connected wireless sensor network in order to achieve a sensor node deployment. The method also proposes a suitable place for a gateway node (Sink) that collects data from all sensors. Second, in order to prolong the network lifetime, some of the high energy-consuming mobile nodes are moved to the closest positions of low energy-consuming ones and vice versa after a while. This increases the lifetime of the network while connectivity and k-coverage are preserved. Through computer simulations, experimental results verified that the proposed IHS-based algorithm finds better solutions than some related methods.
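    For readers unfamiliar with harmony search, the sketch below shows the bare algorithmic loop (harmony memory, memory consideration, pitch adjustment, random selection) on a stand-in objective; the paper's energy-balancing, k-coverage objective and the specific refinements of the Improved Harmony Search are not reproduced here, and all parameter values are illustrative.

```python
# Bare-bones harmony search on a toy objective (sphere function); in the paper's
# setting the vector would encode node placements and the objective would score
# energy balance under k-coverage and connectivity constraints (assumption).
import random

def harmony_search(objective, dim, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iters=2000, seed=0):
    random.seed(seed)
    lo, hi = bounds
    memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if random.random() < hmcr:                # memory consideration
                value = random.choice(memory)[d]
                if random.random() < par:             # pitch adjustment
                    value += random.uniform(-bw, bw) * (hi - lo)
            else:                                     # random selection
                value = random.uniform(lo, hi)
            new.append(min(max(value, lo), hi))
        worst = max(range(hms), key=lambda i: scores[i])
        score = objective(new)
        if score < scores[worst]:                     # replace the worst harmony
            memory[worst], scores[worst] = new, score
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

best, value = harmony_search(lambda x: sum(v * v for v in x), dim=4, bounds=(-5, 5))
print(best, value)
```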

  15. Multistage parallel-serial time averaging filters

    International Nuclear Information System (INIS)

    Theodosiou, G.E.

    1980-01-01

    Here, a new time averaging circuit design, the 'parallel filter' is presented, which can reduce the time jitter, introduced in time measurements using counters of large dimensions. This parallel filter could be considered as a single stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter type as a result. The main advantages of such a filter over a serial one are much less electronic gate jitter and time delay for the same amount of total time uncertainty reduction. (orig.)

  16. Expressing Parallelism with ROOT

    Energy Technology Data Exchange (ETDEWEB)

    Piparo, D. [CERN; Tejedor, E. [CERN; Guiraud, E. [CERN; Ganis, G. [CERN; Mato, P. [CERN; Moneta, L. [CERN; Valls Pla, X. [CERN; Canal, P. [Fermilab

    2017-11-22

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  17. Expressing Parallelism with ROOT

    Science.gov (United States)

    Piparo, D.; Tejedor, E.; Guiraud, E.; Ganis, G.; Mato, P.; Moneta, L.; Valls Pla, X.; Canal, P.

    2017-10-01

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  18. Parallel Fast Legendre Transform

    NARCIS (Netherlands)

    Alves de Inda, M.; Bisseling, R.H.; Maslen, D.K.

    1998-01-01

    We discuss a parallel implementation of a fast algorithm for the discrete polynomial Legendre transform. We give an introduction to the Driscoll-Healy algorithm using polynomial arithmetic and present experimental results on the efficiency and accuracy of our implementation. The algorithms were ...

  19. Practical parallel programming

    CERN Document Server

    Bauer, Barr E

    2014-01-01

    This is the book that will teach programmers to write faster, more efficient code for parallel processors. The reader is introduced to a vast array of procedures and paradigms on which actual coding may be based. Examples and real-life simulations using these devices are presented in C and FORTRAN.

  20. Parallel universes beguile science

    CERN Multimedia

    2007-01-01

    A staple of mind-bending science fiction, the possibility of multiple universes has long intrigued hard-nosed physicists, mathematicians and cosmologists too. We may not be able -- at least not yet -- to prove they exist, many serious scientists say, but there are plenty of reasons to think that parallel dimensions are more than figments of eggheaded imagination.

  1. Parallel k-means++

    Energy Technology Data Exchange (ETDEWEB)

    2017-04-04

    A parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms allow people to cluster multidimensional data, by attempting to minimize the mean distance of data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVidia's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than it could be done before.
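    The D-squared seeding rule at the heart of k-means++ can be illustrated in a few lines of NumPy, where the per-point distance update is vectorized rather than written as an explicit loop; this is only a sketch of the Arthur-Vassilvitskii procedure, not the authors' CUDA/OpenMP/XMT code.

```python
# Sketch of k-means++ seed selection: each new seed is drawn with probability
# proportional to the squared distance to the nearest seed chosen so far.
import numpy as np

def kmeans_pp_seeds(points, k, rng=None):
    rng = rng or np.random.default_rng(0)
    n = points.shape[0]
    seeds = [points[rng.integers(n)]]                            # first seed: uniform at random
    d2 = np.full(n, np.inf)
    for _ in range(k - 1):
        diff = points - seeds[-1]                                # distances to the newest seed,
        d2 = np.minimum(d2, np.einsum("ij,ij->i", diff, diff))   # updated for all points at once
        seeds.append(points[rng.choice(n, p=d2 / d2.sum())])     # D^2 sampling
    return np.array(seeds)

data = np.random.default_rng(1).normal(size=(1000, 2))
centers = kmeans_pp_seeds(data, k=5)
print(centers)
```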

  2. Parallel plate detectors

    International Nuclear Information System (INIS)

    Gardes, D.; Volkov, P.

    1981-01-01

    A 5×3 cm² (timing only) and a 15×5 cm² (timing and position) parallel plate avalanche counter (PPAC) are considered. The theory of operation and timing resolution is given. The measurement set-up and the curves of experimental results illustrate the possibilities of the two counters [fr

  3. Parallel hierarchical global illumination

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Quinn O. [Iowa State Univ., Ames, IA (United States)

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  4. Adapting algorithms to massively parallel hardware

    CERN Document Server

    Sioulas, Panagiotis

    2016-01-01

    In the recent years, the trend in computing has shifted from delivering processors with faster clock speeds to increasing the number of cores per processor. This marks a paradigm shift towards parallel programming in which applications are programmed to exploit the power provided by multi-cores. Usually there is gain in terms of the time-to-solution and the memory footprint. Specifically, this trend has sparked an interest towards massively parallel systems that can provide a large number of processors, and possibly computing nodes, as in the GPUs and MPPAs (Massively Parallel Processor Arrays). In this project, the focus was on two distinct computing problems: k-d tree searches and track seeding cellular automata. The goal was to adapt the algorithms to parallel systems and evaluate their performance in different cases.

  5. Optimisation of a parallel ocean general circulation model

    OpenAIRE

    M. I. Beare; D. P. Stevens

    1997-01-01

    This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by...

  6. Introduction to parallel algorithms and architectures arrays, trees, hypercubes

    CERN Document Server

    Leighton, F Thomson

    1991-01-01

    Introduction to Parallel Algorithms and Architectures: Arrays, Trees, Hypercubes provides an introduction to the expanding field of parallel algorithms and architectures. This book focuses on parallel computation involving the most popular network architectures, namely, arrays, trees, hypercubes, and some closely related networks. Organized into three chapters, this book begins with an overview of the simplest architectures of arrays and trees. This text then presents the structures and relationships between the dominant network architectures, as well as the most efficient parallel algorithms for

  7. Newspaper coverage of mental illness in England 2008-2011.

    Science.gov (United States)

    Thornicroft, Amalia; Goulden, Robert; Shefer, Guy; Rhydderch, Danielle; Rose, Diana; Williams, Paul; Thornicroft, Graham; Henderson, Claire

    2013-04-01

    Better newspaper coverage of mental health-related issues is a target for the Time to Change (TTC) anti-stigma programme in England, whose population impact may be influenced by how far concurrent media coverage perpetuates stigma and discrimination. To compare English newspaper coverage of mental health-related topics each year of the TTC social marketing campaign (2009-2011) with baseline coverage in 2008. Content analysis was performed on articles in 27 local and national newspapers on two randomly chosen days each month. There was a significant increase in the proportion of anti-stigmatising articles between 2008 and 2011. There was no concomitant proportional decrease in stigmatising articles, and the contribution of mixed or neutral elements decreased. These findings provide promising results on improvements in press reporting of mental illness during the TTC programme in 2009-2011, and a basis for guidance to newspaper journalists and editors on reporting mental illness.

  8. Land and federal mineral ownership coverage for northwestern Colorado

    Science.gov (United States)

    Biewick, L.H.; Mercier, T.J.; Levitt, Pam; Deikman, Doug; Vlahos, Bob

    1999-01-01

    This Arc/Info coverage contains land status and Federal mineral ownership for approximately 26,800 square miles in northwestern Colorado. The polygon coverage (which is also provided here as a shapefile) contains two attributes of ownership information for each polygon. One attribute indicates where the surface is State owned, privately owned, or, if Federally owned, which Federal agency manages the land surface. The other attribute indicates which minerals, if any, are owned by the Federal government. This coverage is based on land status and Federal mineral ownership data compiled by the U.S. Geological Survey (USGS) and three Colorado State Bureau of Land Management (BLM) former district offices at a scale of 1:24,000. This coverage was compiled primarily to serve the USGS National Oil and Gas Resource Assessment Project in the Uinta-Piceance Basin Province and the USGS National Coal Resource Assessment Project in the Colorado Plateau.

  9. [Gaps in effective coverage by socioeconomic status and poverty condition].

    Science.gov (United States)

    Gutiérrez, Juan Pablo

    2013-01-01

    To analyze, in the context of increased health protection in Mexico, the gaps by socioeconomic status and poverty condition on effective coverage of selected preventive interventions. Data from the National Health & Nutrition Survey 2012 and 2006, using previously defined indicators of effective coverage and stratifying them by socioeconomic (SE) status and multidimensional poverty condition. For vaccination interventions, immunological equity has been maintained in Mexico. For indicators related to preventive interventions provided at the clinical setting, effective coverage is lower among those in the lowest SE quintile and among people living in multidimensional poverty. Comparing 2006 and 2012, there is no evidence on gap reduction. While health protection has significantly increased in Mexico, thus reducing SE gaps, those gaps are still important in magnitude for effective coverage of preventive interventions.

  10. .NET 4.5 parallel extensions

    CERN Document Server

    Freeman, Bryan

    2013-01-01

    This book contains practical recipes on everything you will need to create task-based parallel programs using C#, .NET 4.5, and Visual Studio. The book is packed with illustrated code examples to create scalable programs. This book is intended to help experienced C# developers write applications that leverage the power of modern multicore processors. It provides the necessary knowledge for an experienced C# developer to work with .NET parallelism APIs. Previous experience of writing multithreaded applications is not necessary.

  11. Mental Health Insurance Parity and Provider Wages.

    Science.gov (United States)

    Golberstein, Ezra; Busch, Susan H

    2017-06-01

    Policymakers frequently mandate that employers or insurers provide insurance benefits deemed to be critical to individuals' well-being. However, in the presence of private market imperfections, mandates that increase demand for a service can lead to price increases for that service, without necessarily affecting the quantity being supplied. We test this idea empirically by looking at mental health parity mandates. This study evaluated whether implementation of parity laws was associated with changes in mental health provider wages. A quasi-experimental analysis of average wages by state and year was conducted for six mental health care-related occupations: Clinical, Counseling, and School Psychologists; Substance Abuse and Behavioral Disorder Counselors; Marriage and Family Therapists; Mental Health Counselors; Mental Health and Substance Abuse Social Workers; and Psychiatrists. Data from 1999-2013 were used to estimate the association between the implementation of state mental health parity laws and the Paul Wellstone and Pete Domenici Mental Health Parity and Addiction Equity Act and average mental health provider wages. Mental health parity laws were associated with a significant increase in mental health care provider wages, controlling for changes in mental health provider wages in states not exposed to parity (3.5 percent [95% CI: 0.3%, 6.6%]; p < .05). Health insurance benefit expansions may lead to increased prices for health services when the private market that supplies the service is imperfect or constrained. In the context of mental health parity, this work suggests that part of the value of expanding insurance benefits for mental health coverage was captured by providers. Given historically low wage levels of mental health providers, this increase may be a first step in bringing mental health provider wages in line with parallel health professions, potentially reducing turnover rates and improving treatment quality.

  12. Root coverage with bridge flap

    Directory of Open Access Journals (Sweden)

    Pushpendra Kumar Verma

    2013-01-01

    Full Text Available Gingival recession in anterior teeth is a common concern for esthetic reasons or because of root sensitivity, and recession affecting multiple anterior teeth is of particular esthetic concern. Various mucogingival surgeries are available for root coverage. This case report presents a new bridge flap technique, which allows the dentist not only to cover previously denuded root surfaces but also to increase the zone of attached gingiva in a single step. In this case, a coronally advanced flap along with a vestibular deepening technique was used as the root coverage procedure for the treatment of multiple recession-type defects; the vestibular deepening technique is used to increase the width of the attached gingiva. The predictability of this procedure results in an esthetically healthy periodontium, along with a gain in keratinized tissue and good patient acceptance.

  13. [Quantification of acetabular coverage in normal adult].

    Science.gov (United States)

    Lin, R M; Yang, C Y; Yu, C Y; Yang, C R; Chang, G L; Chou, Y L

    1991-03-01

    Quantification of acetabular coverage is important and can be expressed by superimposition of cartilage tracings on the maximum cross-sectional area of the femoral head. A practical AutoLISP program on PC AutoCAD has been developed by us to quantify acetabular coverage through numerical expression of computed tomography images. Thirty adults (60 hips) with normal center-edge angle and acetabular index on plain X-ray were randomly selected for serial CT sections. These slices were prepared with a fixed coordinate system and in continuous sections of 5 mm thickness. The contours of the cartilage of each section were digitized into a PC computer and processed by AutoCAD programs to quantify and characterize the acetabular coverage of normal and dysplastic adult hips. We found that a total coverage ratio of greater than 80%, an anterior coverage ratio of greater than 75% and a posterior coverage ratio of greater than 80% can be categorized as normal. Polar edge distance is a good indicator for the evaluation of preoperative and postoperative coverage conditions. For standardization and evaluation of acetabular coverage, the most suitable parameters are the total coverage ratio, anterior coverage ratio, posterior coverage ratio and polar edge distance. However, medial coverage and lateral coverage ratios are indispensable in cases of dysplastic hip because variations between them are so great that acetabuloplasty may be impossible. This program can also be used to classify precisely the type of dysplastic hip.

  14. Parallel grid population

    Science.gov (United States)

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
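    A rough Python rendering of this two-phase scheme is sketched below for a one-dimensional grid whose objects are intervals; the portion shapes, the process-pool executor, and all names are illustrative choices rather than the patented implementation.

```python
# Phase 1: workers map objects to the grid portions that at least partially bound them.
# Phase 2: each worker owns one portion and populates it with the matching objects.
from concurrent.futures import ProcessPoolExecutor

GRID_MIN, GRID_MAX, N_PORTIONS = 0.0, 100.0, 4
WIDTH = (GRID_MAX - GRID_MIN) / N_PORTIONS

def portions_for(obj):
    lo, hi = obj
    first = max(0, int((lo - GRID_MIN) // WIDTH))
    last = min(N_PORTIONS - 1, int((hi - GRID_MIN) // WIDTH))
    return [(p, obj) for p in range(first, last + 1)]

def populate(args):
    portion, pairs = args
    return portion, [obj for p, obj in pairs if p == portion]

if __name__ == "__main__":
    objects = [(3.0, 7.0), (24.0, 51.0), (80.0, 99.0), (49.0, 52.0)]
    with ProcessPoolExecutor(max_workers=N_PORTIONS) as ex:
        pairs = [pr for sub in ex.map(portions_for, objects) for pr in sub]
        grid = dict(ex.map(populate, [(p, pairs) for p in range(N_PORTIONS)]))
    print(grid)   # portion index -> objects it at least partially bounds
```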

  15. More parallel please

    DEFF Research Database (Denmark)

    Gregersen, Frans; Josephson, Olle; Kristoffersen, Gjert

    Abstract [en] More parallel, please is the result of the work of an Inter-Nordic group of experts on language policy financed by the Nordic Council of Ministers 2014-17. The book presents all that is needed to plan, practice and revise a university language policy which takes as its point of departure that English may be used in parallel with the various local, in this case Nordic, languages. As such, the book integrates the challenge of internationalization faced by any university with the wish to improve quality in research, education and administration based on the local language(s). There are three layers in the text: First, you may read the extremely brief version of the in total 11 recommendations for best practice. Second, you may acquaint yourself with the extended version of the recommendations and finally, you may study the reasoning behind each of them. At the end of the text, we give ...

  16. PARALLEL MOVING MECHANICAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Florian Ion Tiberius Petrescu

    2014-09-01

    Full Text Available Parallel moving mechanical systems are solid, fast, and accurate. Among parallel systems, Stewart platforms stand out as the oldest, being fast, solid and precise. The work outlines a few main elements of Stewart platforms, beginning with the platform geometry and its kinematic elements, and then presenting a few items of dynamics. The primary dynamic element is the determination of the kinetic energy of the entire Stewart platform. The kinematics of the mobile platform are then recorded by a rotation matrix method. If a structural moto-element consists of two moving elements that translate relative to each other, it is more convenient, for the drive train and especially for the dynamics, to represent the moto-element as a single moving component. We thus have seven moving parts (the six moto-elements, or feet, plus the mobile platform 7) and one fixed part.

  17. Medicare Provider Data - Hospice Providers

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Hospice Utilization and Payment Public Use File provides information on services provided to Medicare beneficiaries by hospice providers. The Hospice PUF...

  18. Xyce parallel electronic simulator.

    Energy Technology Data Exchange (ETDEWEB)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd S; Pawlowski, Roger P; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is (to the extent possible) to exhaustively list device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.

  19. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  20. Assessing Measurement Error in Medicare Coverage

    Data.gov (United States)

    U.S. Department of Health & Human Services — Assessing Measurement Error in Medicare Coverage From the National Health Interview Survey Using linked administrative data, to validate Medicare coverage estimates...

  1. 76 FR 7767 - Student Health Insurance Coverage

    Science.gov (United States)

    2011-02-11

    ... Student Health Insurance Coverage AGENCY: Centers for Medicare & Medicaid Services (CMS), HHS. ACTION... health insurance coverage under the Public Health Service Act and the Affordable Care Act. The proposed rule would define "student health insurance...

  2. 3D Hyperpolarized C-13 EPI with Calibrationless Parallel Imaging

    DEFF Research Database (Denmark)

    Gordon, Jeremy W.; Hansen, Rie Beck; Shin, Peter J.

    2018-01-01

    With the translation of metabolic MRI with hyperpolarized 13C agents into the clinic, imaging approaches will require large volumetric FOVs to support clinical applications. Parallel imaging techniques will be crucial to increasing volumetric scan coverage while minimizing RF requirements and temporal footprint. We developed calibrationless parallel imaging strategies to accelerate and undersample hyperpolarized 13C data using 3D blipped EPI acquisitions and multichannel receive coils, and demonstrated their application in a human study of [1-13C]pyruvate metabolism.

  3. Current and future state of FDA-CMS parallel reviews.

    Science.gov (United States)

    Messner, D A; Tunis, S R

    2012-03-01

    The US Food and Drug Administration (FDA) and the Centers for Medicare and Medicaid Services (CMS) recently proposed a partial alignment of their respective review processes for new medical products. The proposed "parallel review" not only offers an opportunity for some products to reach the market with Medicare coverage more quickly but may also create new incentives for product developers to conduct studies designed to address simultaneously the information needs of regulators, payers, patients, and clinicians.

  4. Massively Parallel Computing: A Sandia Perspective

    Energy Technology Data Exchange (ETDEWEB)

    Dosanjh, Sudip S.; Greenberg, David S.; Hendrickson, Bruce; Heroux, Michael A.; Plimpton, Steve J.; Tomkins, James L.; Womble, David E.

    1999-05-06

    The computing power available to scientists and engineers has increased dramatically in the past decade, due in part to progress in making massively parallel computing practical and available. The expectation for these machines has been great. The reality is that progress has been slower than expected. Nevertheless, massively parallel computing is beginning to realize its potential for enabling significant break-throughs in science and engineering. This paper provides a perspective on the state of the field, colored by the authors' experiences using large scale parallel machines at Sandia National Laboratories. We address trends in hardware, system software and algorithms, and we also offer our view of the forces shaping the parallel computing industry.

  5. Parallel processor programs in the Federal Government

    Science.gov (United States)

    Schneck, P. B.; Austin, D.; Squires, S. L.; Lehmann, J.; Mizell, D.; Wallgren, K.

    1985-01-01

    In 1982, a report dealing with the nation's research needs in high-speed computing called for increased access to supercomputing resources for the research community, research in computational mathematics, and increased research in the technology base needed for the next generation of supercomputers. Since that time a number of programs addressing future generations of computers, particularly parallel processors, have been started by U.S. government agencies. The present paper provides a description of the largest government programs in parallel processing. Established in fiscal year 1985 by the Institute for Defense Analyses for the National Security Agency, the Supercomputing Research Center will pursue research to advance the state of the art in supercomputing. Attention is also given to the DOE applied mathematical sciences research program, the NYU Ultracomputer project, the DARPA multiprocessor system architectures program, NSF research on multiprocessor systems, ONR activities in parallel computing, and NASA parallel processor projects.

  6. 5 CFR 890.1106 - Coverage.

    Science.gov (United States)

    2010-01-01

    ... family member is an individual whose relationship to the enrollee meets the requirements of 5 U.S.C. 8901... EMPLOYEES HEALTH BENEFITS PROGRAM Temporary Continuation of Coverage § 890.1106 Coverage. (a) Type of enrollment. An individual who enrolls under this subpart may elect coverage for self alone or self and family...

  7. 40 CFR 51.356 - Vehicle coverage.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 2 2010-07-01 2010-07-01 false Vehicle coverage. 51.356 Section 51.356....356 Vehicle coverage. The performance standard for enhanced I/M programs assumes coverage of all 1968 and later model year light duty vehicles and light duty trucks up to 8,500 pounds GVWR, and includes...

  8. 29 CFR 801.3 - Coverage.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Coverage. 801.3 Section 801.3 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR OTHER LAWS APPLICATION OF THE EMPLOYEE POLYGRAPH PROTECTION ACT OF 1988 General § 801.3 Coverage. (a) The coverage of the Act extends to “any...

  9. Towards a streaming model for nested data parallelism

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner; Filinski, Andrzej

    2013-01-01

    The language-integrated cost semantics for nested data parallelism pioneered by NESL provides an intuitive, high-level model for predicting performance and scalability of parallel algorithms with reasonable accuracy. However, this predictability is obtained through a uniform, parallelism-flattening execution model ... processable in a streaming fashion. This semantics is directly compatible with previously proposed piecewise execution models for nested data parallelism, but allows the expected space usage to be reasoned about directly at the source-language level. The language definition and implementation are still very much work in progress.

  10. Hospital emergency on-call coverage: is there a doctor in the house?

    Science.gov (United States)

    O'Malley, Ann S; Draper, Debra A; Felland, Laurie E

    2007-11-01

    The nation's community hospitals face increasing problems obtaining emergency on-call coverage from specialist physicians, according to findings from the Center for Studying Health System Change's (HSC) 2007 site visits to 12 nationally representative metropolitan communities. The diminished willingness of specialist physicians to provide on-call coverage is occurring as hospital emergency departments confront an ever-increasing demand for services. Factors influencing physician reluctance to provide on-call coverage include decreased dependence on hospital admitting privileges as more services shift to non-hospital settings; payment for emergency care, especially for uninsured patients; and medical liability concerns. Hospital strategies to secure on-call coverage include enforcing hospital medical staff bylaws that require physicians to take call, contracting with physicians to provide coverage, paying physicians stipends, and employing physicians. Nonetheless, many hospitals continue to struggle with inadequate on-call coverage, which threatens patients' timely access to high-quality emergency care and may raise health care costs.

  11. Resistor Combinations for Parallel Circuits.

    Science.gov (United States)

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
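    The underlying relation is 1/R_total = 1/R1 + 1/R2, i.e. R_total = R1*R2/(R1+R2). The short Python loop below (an illustration, not the teacher's published tables) enumerates two-resistor combinations whose parallel total is a whole number:

```python
# Enumerate resistor pairs (in ohms) whose parallel combination is an integer.
for r1 in range(1, 21):
    for r2 in range(r1, 21):
        total = r1 * r2 / (r1 + r2)          # standard parallel-resistance formula
        if total.is_integer():
            print(f"{r1} || {r2} = {int(total)} ohms")
# e.g. 3 || 6 = 2 ohms, 4 || 12 = 3 ohms, 10 || 15 = 6 ohms
```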

  12. SOFTWARE FOR DESIGNING PARALLEL APPLICATIONS

    Directory of Open Access Journals (Sweden)

    M. K. Bouza

    2017-01-01

    Full Text Available The object of this research is tooling to support the development of parallel programs in C/C++. Methods and software that automate the process of designing parallel applications are proposed.

  13. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

    In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking, which leads to efficient solutions to problems on trees, such as computing lowest ... an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.

  14. Parallel inter channel interaction mechanisms

    International Nuclear Information System (INIS)

    Jovic, V.; Afgan, N.; Jovic, L.

    1995-01-01

    Interactions between parallel channels are examined. Results of experimental research on non-stationary flow regimes in three parallel vertical channels are presented, with an analysis of the phenomena and the mechanisms of parallel-channel interaction under adiabatic conditions for single-phase fluid and two-phase mixture flow. (author)

  15. Parallel processing of structural integrity analysis codes

    International Nuclear Information System (INIS)

    Swami Prasad, P.; Dutta, B.K.; Kushwaha, H.S.

    1996-01-01

    Structural integrity analysis plays an important role in assessing and demonstrating the safety of nuclear reactor components. This analysis is performed using analytical tools such as the Finite Element Method (FEM) with the help of digital computers. The complexity of the problems involved in nuclear engineering demands high-speed computation facilities to obtain solutions in a reasonable amount of time. Parallel processing systems such as ANUPAM provide an efficient platform for realising such high-speed computation. The development and implementation of software on parallel processing systems is an interesting and challenging task. The data and algorithm structure of the codes plays an important role in exploiting the capabilities of a parallel processing system. Structural analysis codes based on FEM can be divided into two categories with respect to their implementation on parallel processing systems. Codes in the first category, such as those used for harmonic analysis and mechanistic fuel performance, do not require parallelisation of their individual modules. Codes in the second category, such as conventional FEM codes, do require parallelisation of individual modules; here, parallelisation of the equation solution module poses major difficulties. Different solution schemes such as the domain decomposition method (DDM), a parallel active column solver and the substructuring method are currently used on parallel processing systems. Two codes, FAIR and TABS, belonging to these two categories respectively, have been implemented on ANUPAM. The implementation details of these codes and the performance of different equation solvers are highlighted. (author). 5 refs., 12 figs., 1 tab

  16. Armenian media coverage of science topics

    Science.gov (United States)

    Mkhitaryan, Marie

    2016-12-01

    The article discusses the features and issues of Armenian media coverage of scientific topics and provides recommendations on how to promote scientific topics in the media. The media is more interested in social or public reaction than in the scientific information itself. Medical science has a large share of global media coverage, followed by articles about the environment, space, technology, physics and other areas. Armenian media mainly tend to cover a scientific topic if, at first sight, it contains something revolutionary. The media primarily consider whether a scientific study can affect the Armenian economy and only then decide to refer to it. Unfortunately, the perception of science in the media is nowadays somewhat distorted. We often see news headlines stating that a scientist has made "an invention"; nowadays it is hard to see the border between a scientist and an inventor. In fact, the technological term "invention" attracts the media by creating an illusory sensation and ensuring a large audience. The report also addresses the "Gitamard" ("A science-man") special project started in 2016 at Mediamax, which tells about scientists and their motivations.

  17. Is expanding Medicare coverage cost-effective?

    Directory of Open Access Journals (Sweden)

    Muennig Peter

    2005-03-01

    Full Text Available Abstract Background Proposals to expand Medicare coverage tend to be expensive, but the value of services purchased is not known. This study evaluates the efficiency of the average private supplemental insurance plan for Medicare recipients. Methods Data from the National Health Interview Survey, the National Death Index, and the Medical Expenditure Panel Survey were analyzed to estimate the costs, changes in life expectancy, and health-related quality of life gains associated with providing private supplemental insurance coverage for Medicare beneficiaries. Model inputs included socio-demographic, health, and health behavior characteristics. Parameter estimates from regression models were used to predict quality-adjusted life years (QALYs and costs associated with private supplemental insurance relative to Medicare only. Markov decision analysis modeling was then employed to calculate incremental cost-effectiveness ratios. Results Medicare supplemental insurance is associated with increased health care utilization, but the additional costs associated with this utilization are offset by gains in quality-adjusted life expectancy. The incremental cost-effectiveness of private supplemental insurance is approximately $24,000 per QALY gained relative to Medicare alone. Conclusion Supplemental insurance for Medicare beneficiaries is a good value, with an incremental cost-effectiveness ratio comparable to medical interventions commonly deemed worthwhile.
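    The headline figure is an incremental cost-effectiveness ratio, ICER = (incremental cost) / (incremental QALYs). The snippet below uses made-up lifetime costs and QALYs chosen only so the arithmetic lands on a $24,000-per-QALY ratio; they are not the study's inputs.

```python
# Hypothetical inputs, purely to illustrate how an ICER is computed.
cost_medicare_only, cost_with_supplement = 60_000.0, 72_000.0   # lifetime costs ($)
qaly_medicare_only, qaly_with_supplement = 10.0, 10.5           # lifetime QALYs

delta_cost = cost_with_supplement - cost_medicare_only          # $12,000
delta_qaly = qaly_with_supplement - qaly_medicare_only          # 0.5 QALYs
icer = delta_cost / delta_qaly
print(f"ICER = ${icer:,.0f} per QALY gained")                   # ICER = $24,000 per QALY gained
```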

  18. Massively Parallel QCD

    International Nuclear Information System (INIS)

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampap, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-01-01

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results

  19. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing

    2014-01-01

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(Nd) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r2Nd logN). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms, and an analogue of a three-dimensional generalized Radon transform were, respectively, observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  20. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack

    2014-02-04

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(Nd) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r2Nd logN). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms, and an analogue of a three-dimensional generalized Radon transform were, respectively, observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  1. Fast parallel event reconstruction

    CERN Multimedia

    CERN. Geneva

    2010-01-01

    On-line processing of the large data volumes produced in modern HEP experiments requires using the maximum capabilities of modern and future many-core CPU and GPU architectures. One such powerful feature is the SIMD instruction set, which allows packing several data items in one register and operating on all of them at once, thus achieving more operations per clock cycle. Motivated by the idea of using the SIMD unit of modern processors, the KF-based track fit has been adapted for parallelism, including memory optimization, numerical analysis, vectorization with inline operator overloading, and optimization using SDKs. The speed of the algorithm has been increased by a factor of 120,000, reaching 0.1 ms/track when running in parallel on 16 SPEs of a Cell Blade computer. Running on a Nehalem CPU with 8 cores it shows a processing speed of 52 ns/track using the Intel Threading Building Blocks. The same KF algorithm running on an Nvidia GTX 280 in the CUDA framework provi...

  2. PALNS - A software framework for parallel large neighborhood search

    DEFF Research Database (Denmark)

    Røpke, Stefan

    2009-01-01

    This paper proposes a simple, parallel, portable software framework for the metaheuristic named large neighborhood search (LNS). The aim is to provide a framework where the user has to set up a few data structures and implement a few functions, and the framework then provides a metaheuristic where ... parallelization "comes for free". We apply the parallel LNS heuristic to two different problems: the traveling salesman problem with pickup and delivery (TSPPD) and the capacitated vehicle routing problem (CVRP).
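    The division of labour such a framework relies on (user-supplied destroy and repair operators around a generic search loop) can be sketched in a few lines of Python; the toy problem, the operators, and the acceptance rule below are placeholders rather than PALNS code, and the parallel dispatch of operators is omitted.

```python
# Generic LNS loop: repeatedly destroy part of the current solution, repair it,
# and keep improvements. The toy problem orders numbers to minimise the sum of
# adjacent differences; destroy/repair are deliberately simple stand-ins.
import random

def lns(initial, cost, destroy, repair, iters=2000, seed=0):
    random.seed(seed)
    best = current = initial
    for _ in range(iters):
        candidate = repair(*destroy(current))
        if cost(candidate) <= cost(current):      # simple acceptance rule
            current = candidate
        if cost(current) < cost(best):
            best = current
    return best

def cost(seq):
    return sum(abs(a - b) for a, b in zip(seq, seq[1:]))

def destroy(seq, k=3):                            # remove k random elements
    removed = random.sample(seq, k)
    return [x for x in seq if x not in removed], removed

def repair(partial, removed):                     # greedy cheapest reinsertion
    for x in removed:
        pos = min(range(len(partial) + 1),
                  key=lambda i: cost(partial[:i] + [x] + partial[i:]))
        partial = partial[:pos] + [x] + partial[pos:]
    return partial

print(lns(random.sample(range(20), 20), cost, destroy, repair))
```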

  3. Building high-coverage monolayers of covalently bound magnetic nanoparticles

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Mackenzie G.; Teplyakov, Andrew V., E-mail: andrewt@udel.edu

    2016-12-01

    - Highlights: • A method for forming a layer of covalently bound nanoparticles is offered. • A nearly perfect monolayer of covalently bound magnetic nanoparticles was formed on gold. • Spectroscopic techniques confirmed covalent binding by the “click” reaction. • The influence of the functionalization scheme on surface coverage was investigated. - Abstract: This work presents an approach for producing a high-coverage single monolayer of magnetic nanoparticles using “click chemistry” between complementarily functionalized nanoparticles and a flat substrate. This method highlights essential aspects of the functionalization scheme for substrate surface and nanoparticles to produce exceptionally high surface coverage without sacrificing selectivity or control over the layer produced. The deposition of one single layer of magnetic particles without agglomeration, over a large area, with a nearly 100% coverage is confirmed by electron microscopy. Spectroscopic techniques, supplemented by computational predictions, are used to interrogate the chemistry of the attachment and to confirm covalent binding, rather than attachment through self-assembly or weak van der Waals bonding. Density functional theory calculations for the surface intermediate of this copper-catalyzed process provide mechanistic insight into the effects of the functionalization scheme on surface coverage. Based on this analysis, it appears that steric limitations of the intermediate structure affect nanoparticle coverage on a flat solid substrate; however, this can be overcome by designing a functionalization scheme in such a way that the copper-based intermediate is formed on the spherical nanoparticles instead. This observation can be carried over to other approaches for creating highly controlled single- or multilayered nanostructures of a wide range of materials to result in high coverage and possibly, conformal filling.

  4. Parallel Polarization State Generation.

    Science.gov (United States)

    She, Alan; Capasso, Federico

    2016-05-17

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.

  5. Parallel imaging microfluidic cytometer.

    Science.gov (United States)

    Ehrlich, Daniel J; McKenna, Brian K; Evans, James G; Belkina, Anna C; Denis, Gerald V; Sherr, David H; Cheung, Man Ching

    2011-01-01

    By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of fluorescence-activated flow cytometry (FCM) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in ∼6-10 min, about 30 times the speed of most current FCM systems. In 1D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of charge-coupled device (CCD)-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take. Copyright © 2011 Elsevier Inc. All rights reserved.

  6. Prevalence, Characteristics, and Perception of Nursery Antibiotic Stewardship Coverage in the United States.

    Science.gov (United States)

    Cantey, Joseph B; Vora, Niraj; Sunkara, Mridula

    2017-09-01

    Prolonged or unnecessary antibiotic use is associated with adverse outcomes in infants. Antibiotic stewardship programs (ASPs) aim to prevent these adverse outcomes and optimize antibiotic prescribing. However, data evaluating ASP coverage of nurseries are limited. The objectives of this study were to describe the characteristics of nurseries with and without ASP coverage and to determine perceptions of and barriers to nursery ASP coverage. The 2014 American Hospital Association annual survey was used to randomly select a level III neonatal intensive care unit from all 50 states. A level I and level II nursery from the same city as the level III nursery were then randomly selected. Hospital, nursery, and ASP characteristics were collected. Nursery and ASP providers (pharmacists or infectious disease providers) were interviewed using a semistructured template. Transcribed interviews were analyzed for themes. One hundred forty-six centers responded; 104 (71%) provided nursery ASP coverage. In multivariate analysis, level of nursery, university affiliation, and number of full-time equivalent ASP staff were the main predictors of nursery ASP coverage. Several themes were identified from interviews: unwanted coverage, unnecessary coverage, jurisdiction issues, need for communication, and a focus on outcomes. Most providers had a favorable view of nursery ASP coverage. Larger, higher-acuity nurseries in university-affiliated hospitals are more likely to have ASP coverage. Low ASP staffing and a perceived lack of importance were frequently cited as barriers to nursery coverage. Most nursery ASP coverage is viewed favorably by providers, but nursery providers regard it as less important than ASP providers. © The Author 2016. Published by Oxford University Press on behalf of the Pediatric Infectious Diseases Society. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  7. Sideline coverage of youth football.

    Science.gov (United States)

    Rizzone, Katie; Diamond, Alex; Gregory, Andrew

    2013-01-01

    Youth football is a popular sport in the United States and has been for some time. There are currently more than 3 million participants in youth football leagues according to USA Football. While the number of participants and overall injuries may be higher in other sports, football has a higher rate of injuries. Most youth sporting events do not have medical personnel on the sidelines in event of an injury or emergency. Therefore it is necessary for youth sports coaches to undergo basic medical training in order to effectively act in these situations. In addition, an argument could be made that appropriate medical personnel should be on the sideline for collision sports at all levels, from youth to professional. This article will discuss issues pertinent to sideline coverage of youth football, including coaching education, sideline personnel, emergency action plans, age and size divisions, tackle versus flag football, and injury prevention.

  8. About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN

    2009-01-01

    Full Text Available In recent years, efforts have been made to delineate a stable and unified framework in which the problems of parallel logical processing can find solutions, at least at the level of imperative languages. The results obtained so far have not matched those efforts. This paper aims to make a small contribution to that work. We propose an overview of parallel programming, parallel execution and collaborative systems.

  9. Neural Parallel Engine: A toolbox for massively parallel neural signal processing.

    Science.gov (United States)

    Tam, Wing-Kin; Yang, Zhi

    2018-05-01

    Large-scale neural recordings provide detailed information on neuronal activities and can help elicit the underlying neural mechanisms of the brain. However, the computational burden is also formidable when we try to process the huge data stream generated by such recordings. In this study, we report the development of Neural Parallel Engine (NPE), a toolbox for massively parallel neural signal processing on graphical processing units (GPUs). It offers a selection of the most commonly used routines in neural signal processing such as spike detection and spike sorting, including advanced algorithms such as exponential-component-power-component (EC-PC) spike detection and binary pursuit spike sorting. We also propose a new method for detecting peaks in parallel through a parallel compact operation. Our toolbox is able to offer a 5× to 110× speedup compared with its CPU counterparts depending on the algorithms. A user-friendly MATLAB interface is provided to allow easy integration of the toolbox into existing workflows. Previous efforts on GPU neural signal processing only focus on a few rudimentary algorithms, are not well-optimized and often do not provide a user-friendly programming interface to fit into existing workflows. There is a strong need for a comprehensive toolbox for massively parallel neural signal processing. A new toolbox for massively parallel neural signal processing has been created. It can offer significant speedup in processing signals from large-scale recordings up to thousands of channels. Copyright © 2018 Elsevier B.V. All rights reserved.
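
    The toolbox described above targets GPUs; as a language-agnostic illustration of the data-parallel idea behind peak detection, the following is a minimal NumPy sketch (not the EC-PC or parallel-compaction algorithms from the paper) that scans all channels of a recording at once for threshold-crossing local maxima. The threshold value and array shapes are illustrative assumptions.

        import numpy as np

        def detect_peaks(signals, threshold):
            # signals: (channels, samples) array; marks samples that exceed `threshold`
            # and are local maxima relative to both neighbours, across all channels at once.
            x = np.asarray(signals, dtype=float)
            mid = x[:, 1:-1]
            is_peak = (mid > threshold) & (mid > x[:, :-2]) & (mid >= x[:, 2:])
            peaks = np.zeros(x.shape, dtype=bool)
            peaks[:, 1:-1] = is_peak
            return peaks

        # Illustrative use: 1000 channels, 30,000 samples of synthetic noise
        rng = np.random.default_rng(0)
        data = rng.normal(0.0, 1.0, size=(1000, 30000))
        print(detect_peaks(data, threshold=4.0).sum(), "putative peaks found")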

  10. Synthesis of Volumetric Ring Antenna Array for Terrestrial Coverage Pattern

    Directory of Open Access Journals (Sweden)

    Alberto Reyna

    2014-01-01

    Full Text Available This paper presents a synthesis of a volumetric ring antenna array for a terrestrial coverage pattern. This synthesis regards the spacing among the rings on the planes X-Y, the positions of the rings on the plane X-Z, and uniform and concentric excitations. The optimization is carried out by implementing the particle swarm optimization. The synthesis is compared with previous designs; the results show that this geometry performs well, providing accurate coverage for satellite applications while maximizing the reduction of antenna hardware and reducing the side lobe level.

  11. Synthesis of Volumetric Ring Antenna Array for Terrestrial Coverage Pattern

    Science.gov (United States)

    Reyna, Alberto; Panduro, Marco A.; Del Rio Bocio, Carlos

    2014-01-01

    This paper presents a synthesis of a volumetric ring antenna array for a terrestrial coverage pattern. This synthesis regards the spacing among the rings on the planes X-Y, the positions of the rings on the plane X-Z, and uniform and concentric excitations. The optimization is carried out by implementing the particle swarm optimization. The synthesis is compared with previous designs; the results show that this geometry performs well, providing accurate coverage for satellite applications while maximizing the reduction of antenna hardware and reducing the side lobe level. PMID:24701150

  12. Quad-Tree Visual-Calculus Analysis of Satellite Coverage

    Science.gov (United States)

    Lo, Martin W.; Hockney, George; Kwan, Bruce

    2003-01-01

    An improved method of analysis of coverage of areas of the Earth by a constellation of radio-communication or scientific-observation satellites has been developed. This method is intended to supplant an older method in which the global-coverage-analysis problem is solved from a ground-to-satellite perspective. The present method provides for rapid and efficient analysis. This method is derived from a satellite-to-ground perspective and involves a unique combination of two techniques for multiresolution representation of map features on the surface of a sphere.

  13. Estimating IBD tracts from low coverage NGS data

    DEFF Research Database (Denmark)

    Garrett Vieira, Filipe Jorge; Albrechtsen, Anders; Nielsen, Rasmus

    2016-01-01

    … method for estimating inbreeding IBD tracts from low coverage NGS data. Contrary to other methods that use genotype data, the one presented here uses genotype likelihoods to take the uncertainty of the data into account. We benchmark it under a wide range of biologically relevant conditions and show that the new method provides a marked increase in accuracy even at low coverage. AVAILABILITY AND IMPLEMENTATION: The methods presented in this work were implemented in C/C++ and are freely available for non-commercial use from https://github.com/fgvieira/ngsF-HMM CONTACT: fgvieira@snm.ku.dk SUPPLEMENTARY …

  14. Change of mobile network coverage in France from 29 August

    CERN Multimedia

    IT Department

    2016-01-01

    The change of mobile network coverage on the French part of the CERN site will take effect on 29 August and not on 11 July as previously announced.    From 29 August, the Swisscom transmitters in France will be deactivated and Orange France will thenceforth provide coverage on the French part of the CERN site.  This switch will result in changes to billing. You should also ensure that you can still be contacted by your colleagues when you are on the French part of the CERN site. Please consult the information and instructions in this official communication.

  15. Insurance coverage for male infertility care in the United States.

    Science.gov (United States)

    Dupree, James M

    2016-01-01

    Infertility is a common condition experienced by many men and women, and treatments are expensive. The World Health Organization and American Society of Reproductive Medicine define infertility as a disease, yet private companies infrequently offer insurance coverage for infertility treatments. This is despite the clear role that healthcare insurance plays in ensuring access to care and minimizing the financial burden of expensive services. In this review, we assess the current knowledge of how male infertility care is covered by insurance in the United States. We begin with an appraisal of the costs of male infertility care, then examine the state insurance laws relevant to male infertility, and close with a discussion of why insurance coverage for male infertility is important to both men and women. Importantly, we found that despite infertility being classified as a disease and males contributing to almost half of all infertility cases, coverage for male infertility is often excluded from health insurance laws. Excluding coverage for male infertility places an undue burden on their female partners. In addition, excluding care for male infertility risks missing opportunities to diagnose important health conditions and identify reversible or irreversible causes of male infertility. Policymakers should consider providing equal coverage for male and female infertility care in future health insurance laws.

  16. Scalable Coverage Maintenance for Dense Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Jun Lu

    2007-06-01

    Full Text Available Owing to numerous potential applications, wireless sensor networks have been attracting significant research effort recently. The critical challenge that wireless sensor networks often face is to sustain long-term operation on limited battery energy. Coverage maintenance schemes can effectively prolong network lifetime by selecting and employing a subset of sensors in the network to provide sufficient sensing coverage over a target region. We envision future wireless sensor networks composed of a vast number of miniaturized sensors in exceedingly high density. Therefore, the key issue of coverage maintenance for future sensor networks is the scalability to sensor deployment density. In this paper, we propose a novel coverage maintenance scheme, scalable coverage maintenance (SCOM), which is scalable to sensor deployment density in terms of communication overhead (i.e., number of transmitted and received beacons) and computational complexity (i.e., time and space complexity). In addition, SCOM achieves high energy efficiency and load balancing over different sensors. We have validated our claims through both analysis and simulations.

  17. Contraception coverage and methods used among women in South ...

    African Journals Online (AJOL)

    its convenience for providers and women, cost effectiveness, and high acceptability ... Using data from the 2012 SA National HIV Prevalence, Incidence ... Data on contraceptive coverage and service gaps could help to shape these initiatives. ... 7 London School of Hygiene and Tropical Medicine, University of London, UK.

  18. Media coverage of chronic diseases in the Netherlands.

    NARCIS (Netherlands)

    van der Wardt, E.M.; van der Wardt, Elly M.; Taal, Erik; Rasker, Johannes J.; Wiegman, O.

    1999-01-01

    Objective: Little is known about the quantity or quality of information on rheumatic diseases provided by the mass media. The aim of this study was to gain insight into the media coverage of rheumatic diseases compared with other chronic diseases in the Netherlands. - Materials and Methods:

  19. The Coverage of the Holocaust in High School History Textbooks

    Science.gov (United States)

    Lindquist, David

    2009-01-01

    The Holocaust is now a regular part of high school history curricula throughout the United States and, as a result, coverage of the Holocaust has become a standard feature of high school textbooks. As with any major event, it is important for textbooks to provide a rigorously accurate and valid historical account. In dealing with the Holocaust,…

  20. 78 FR 54986 - Information Reporting of Minimum Essential Coverage

    Science.gov (United States)

    2013-09-09

    ... employees, and offer that coverage to spouses and dependents, all with no employee contribution, to forgo... health benefits provided through a contribution to a health savings account. Health savings accounts are... agenda will be available free of charge at the hearing. Drafting Information The principal authors of...

  1. Comparison of two next-generation sequencing kits for diagnosis of epileptic disorders with a user-friendly tool for displaying gene coverage, DeCovA

    Directory of Open Access Journals (Sweden)

    Sarra Dimassi

    2015-12-01

    Full Text Available In recent years, molecular genetics has been playing an increasing role in the diagnostic process of monogenic epilepsies. Knowing the genetic basis of one patient's epilepsy provides accurate genetic counseling and may guide therapeutic options. Genetic diagnosis of epilepsy syndromes has long been based on Sanger sequencing and search for large rearrangements using MLPA or DNA arrays (array-CGH or SNP-array). Recently, next-generation sequencing (NGS) was demonstrated to be a powerful approach to overcome the wide clinical and genetic heterogeneity of epileptic disorders. Coverage is critical for assessing the quality and accuracy of results from NGS. However, it is often a difficult parameter to display in practice. The aim of the study was to compare two library-building methods (Haloplex, Agilent and SeqCap EZ, Roche) for a targeted panel of 41 genes causing monogenic epileptic disorders. We included 24 patients, 20 of whom had known disease-causing mutations. For each patient both libraries were built in parallel and sequenced on an Ion Torrent Personal Genome Machine (PGM). To compare coverage and depth, we developed a simple homemade tool, named DeCovA (Depth and Coverage Analysis). DeCovA displays the sequencing depth of each base and the coverage of target genes for each genomic position. The fraction of each gene covered at different thresholds could be easily estimated. Neither of the two methods used, namely NextGene and Ion Reporter, was able to identify all the known mutations/CNVs displayed by the 20 patients. Variant detection rate was globally similar for the two techniques and DeCovA showed that failure to detect a mutation was mainly related to insufficient coverage.
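
    As a rough illustration of the kind of per-gene coverage summary the abstract attributes to DeCovA (this is not DeCovA itself; the depth values and thresholds below are made up), the fraction of a target covered at several depth thresholds can be computed directly from a per-base depth vector:

        import numpy as np

        def coverage_fractions(depth_per_base, thresholds=(10, 20, 50, 100)):
            # depth_per_base: 1-D array of sequencing depths for every base of one target gene.
            # Returns {threshold: fraction of bases covered at or above that depth}.
            d = np.asarray(depth_per_base)
            return {t: float((d >= t).mean()) for t in thresholds}

        # Hypothetical depths for a 1,500 bp gene target
        depths = np.random.default_rng(1).poisson(lam=60, size=1500)
        print(coverage_fractions(depths))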

  2. Message passing with parallel queue traversal

    Science.gov (United States)

    Underwood, Keith D [Albuquerque, NM; Brightwell, Ronald B [Albuquerque, NM; Hemmert, K Scott [Albuquerque, NM

    2012-05-01

    In message passing implementations, associative matching structures are used to permit list entries to be searched in parallel fashion, thereby avoiding the delay of linear list traversal. List management capabilities are provided to support list entry turnover semantics and priority ordering semantics.

  3. A PARALLEL EXTENSION OF THE UAL ENVIRONMENT

    International Nuclear Information System (INIS)

    MALITSKY, N.; SHISHLO, A.

    2001-01-01

    The deployment of the Unified Accelerator Library (UAL) environment on the parallel cluster is presented. The approach is based on the Message-Passing Interface (MPI) library and the Perl adapter that allows one to control and mix together the existing conventional UAL components with the new MPI-based parallel extensions. In the paper, we provide timing results and describe the application of the new environment to the SNS Ring complex beam dynamics studies, particularly, simulations of several physical effects, such as space charge, field errors, fringe fields, and others

  4. Parallel processing for artificial intelligence 2

    CERN Document Server

    Kumar, V; Suttner, CB

    1994-01-01

    With the increasing availability of parallel machines and the rising interest in large scale and real world applications, research on parallel processing for Artificial Intelligence (AI) is gaining greater importance in the computer science environment. Many applications have been implemented and delivered but the field is still considered to be in its infancy. This book assembles diverse aspects of research in the area, providing an overview of the current state of technology. It also aims to promote further growth across the discipline. Contributions have been grouped according to their

  5. CEOS Ocean Variables Enabling Research and Applications for Geo (COVERAGE)

    Science.gov (United States)

    Tsontos, V. M.; Vazquez, J.; Zlotnicki, V.

    2017-12-01

    The CEOS Ocean Variables Enabling Research and Applications for GEO (COVERAGE) initiative seeks to facilitate joint utilization of different satellite data streams on ocean physics, better integrated with biological and in situ observations, including near real-time data streams in support of oceanographic and decision support applications for societal benefit. COVERAGE aligns with programmatic objectives of CEOS (the Committee on Earth Observation Satellites) and the missions of GEO-MBON (Marine Biodiversity Observation Network) and GEO-Blue Planet, which are to advance and exploit synergies among the many observational programs devoted to ocean and coastal waters. COVERAGE is conceived of as a three-year pilot project involving international collaboration. It focuses on implementing technologies, including cloud based solutions, to provide a data rich, web-based platform for integrated ocean data delivery and access: multi-parameter observations, easily discoverable and usable, organized by disciplines, available in near real-time, collocated to a common grid and including climatologies. These will be complemented by a set of value-added data services available via the COVERAGE portal including an advanced Web-based visualization interface, subsetting/extraction, data collocation/matchup and other relevant on demand processing capabilities. COVERAGE development will be organized around priority use cases and applications identified by GEO and agency partners. The initial phase will be to develop co-located 25km products from the four Ocean Virtual Constellations (VCs), Sea Surface Temperature, Sea Level, Ocean Color, and Sea Surface Winds. This aims to stimulate work among the ocean VCs while developing products and system functionality based on community recommendations. Products such as anomalies from a time mean would build on the theme of applications with a relevance to the CEOS/GEO mission and vision. Here we provide an overview of the COVERAGE initiative with an

  6. Parallel Framework for Cooperative Processes

    Directory of Open Access Journals (Sweden)

    Mitică Craus

    2005-01-01

    Full Text Available This paper describes the work of an object oriented framework designed to be used in the parallelization of a set of related algorithms. The idea behind the system we are describing is to have a re-usable framework for running several sequential algorithms in a parallel environment. The algorithms that the framework can be used with have several things in common: they have to run in cycles, and the work must be divisible between several "processing units". The parallel framework uses the message-passing communication paradigm and is organized as a master-slave system. Two applications are presented: an Ant Colony Optimization (ACO) parallel algorithm for the Travelling Salesman Problem (TSP) and an Image Processing (IP) parallel algorithm for the Symmetrical Neighborhood Filter (SNF). The implementations of these applications by means of the parallel framework prove to have good performances: approximately linear speedup and low communication cost.
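
    The cycle-based master-slave organization described above can be pictured in a few lines of message-passing code. The following is a minimal, hypothetical sketch using mpi4py (not the framework from the paper; the "work" is a placeholder increment). It assumes mpi4py is installed and the script is launched with at least two MPI processes, e.g. `mpiexec -n 4 python demo.py`.

        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
        N_CYCLES = 10

        if rank == 0:
            # Master: split the state among the slaves each cycle, then gather the partial results.
            state = list(range(100))
            for _ in range(N_CYCLES):
                chunks = [state[i::size - 1] for i in range(size - 1)]
                for worker, chunk in enumerate(chunks, start=1):
                    comm.send(chunk, dest=worker)
                parts = [comm.recv(source=w) for w in range(1, size)]
                state = [x for part in parts for x in part]
            for worker in range(1, size):
                comm.send(None, dest=worker)  # shutdown signal
            print("master finished with", len(state), "items")
        else:
            # Slave: repeatedly receive a chunk, process it (placeholder work), and send it back.
            while True:
                chunk = comm.recv(source=0)
                if chunk is None:
                    break
                comm.send([x + 1 for x in chunk], dest=0)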

  7. Parallel Monte Carlo reactor neutronics

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Brown, F.B.

    1994-01-01

    The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved

  8. DNA barcoding in the media: does coverage of cool science reflect its social context?

    Science.gov (United States)

    Geary, Janis; Camicioli, Emma; Bubela, Tania

    2016-09-01

    Paul Hebert and colleagues first described DNA barcoding in 2003, which led to international efforts to promote and coordinate its use. Since its inception, DNA barcoding has generated considerable media coverage. We analysed whether this coverage reflected both the scientific and social mandates of international barcoding organizations. We searched newspaper databases to identify 900 English-language articles from 2003 to 2013. Coverage of the science of DNA barcoding was highly positive but lacked context for key topics. Coverage omissions pose challenges for public understanding of the science and applications of DNA barcoding; these included coverage of governance structures and issues related to the sharing of genetic resources across national borders. Our analysis provided insight into how barcoding communication efforts have translated into media coverage; more targeted communication efforts may focus media attention on previously omitted, but important topics. Our analysis is timely as the DNA barcoding community works to establish the International Society for the Barcode of Life.

  9. Distributed Parallel Architecture for "Big Data"

    Directory of Open Access Journals (Sweden)

    Catalin BOJA

    2012-01-01

    Full Text Available This paper is an extension to the "Distributed Parallel Architecture for Storing and Processing Large Datasets" paper presented at the WSEAS SEPADS’12 conference in Cambridge. In its original version the paper went over the benefits of using a distributed parallel architecture to store and process large datasets. This paper analyzes the problem of storing, processing and retrieving meaningful insight from petabytes of data. It provides a survey on current distributed and parallel data processing technologies and, based on them, proposes an architecture that can be used to solve the analyzed problem. In this version there is more emphasis put on distributed file systems and the ETL processes involved in a distributed environment.

  10. Java parallel secure stream for grid computing

    International Nuclear Information System (INIS)

    Chen, J.; Akers, W.; Chen, Y.; Watson, W.

    2001-01-01

    The emergence of high speed wide area networks makes grid computing a reality. However, grid applications that need reliable data transfer still have difficulty achieving optimal TCP performance because the TCP window size must be tuned to improve bandwidth and reduce latency on a high speed wide area network. The authors present a pure Java package called JPARSS (Java Parallel Secure Stream) that divides data into partitions that are sent over several parallel Java streams simultaneously and allows Java or Web applications to achieve optimal TCP performance in a grid environment without the necessity of tuning the TCP window size. Several experimental results are provided to show that using parallel streams is more effective than tuning the TCP window size. In addition, an X.509 certificate based single sign-on mechanism and SSL based connection establishment are integrated into this package. Finally, a few applications using this package will be discussed.

  11. Abstract Level Parallelization of Finite Difference Methods

    Directory of Open Access Journals (Sweden)

    Edwin Vollebregt

    1997-01-01

    Full Text Available A formalism is proposed for describing finite difference calculations in an abstract way. The formalism consists of index sets and stencils, for characterizing the structure of sets of data items and interactions between data items (“neighbouring relations”). The formalism provides a means for lifting programming to a more abstract level. This simplifies the tasks of performance analysis and verification of correctness, and opens the way for automatic code generation. The notation is particularly useful in parallelization, for the systematic construction of parallel programs in a process/channel programming paradigm (e.g., message passing). This is important because message passing, unfortunately, still is the only approach that leads to acceptable performance for many more unstructured or irregular problems on parallel computers that have non-uniform memory access times. It will be shown that the use of index sets and stencils greatly simplifies the determination of which data must be exchanged between different computing processes.
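
    The last point, determining which data must be exchanged, can be made concrete with a small sketch. Assuming a 1-D grid split between two processes and a three-point stencil (both assumptions are illustrative, not taken from the paper), the halo, i.e. the set of indices a process needs but does not own, falls out of simple set arithmetic on index sets and stencil offsets:

        def halo_indices(owned, stencil, global_n):
            # owned: set of global indices assigned to this process.
            # stencil: integer offsets, e.g. (-1, 0, 1) for a three-point stencil.
            # Returns the sorted indices this process must receive from other processes.
            needed = {i + s for i in owned for s in stencil if 0 <= i + s < global_n}
            return sorted(needed - set(owned))

        n = 10
        p0, p1 = set(range(0, 5)), set(range(5, 10))
        print(halo_indices(p0, (-1, 0, 1), n))  # [5] -> must come from process 1
        print(halo_indices(p1, (-1, 0, 1), n))  # [4] -> must come from process 0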

  12. Anti-parallel triplexes

    DEFF Research Database (Denmark)

    Kosbar, Tamer R.; Sofan, Mamdouh A.; Waly, Mohamed A.

    2015-01-01

    The phosphoramidites of DNA monomers of 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine (Y) and 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine LNA (Z) are synthesized, and the thermal stability at pH 7.2 and 8.2 of anti-parallel triplexes modified with these two monomers is determined. When the anti-… about 6.1 °C when the TFO strand was modified with Z and the Watson-Crick strand with adenine-LNA (AL). The molecular modeling results showed that, in the case of nucleobases Y and Z, a hydrogen bond (1.69 and 1.72 Å, respectively) was formed between the protonated 3-aminopropyn-1-yl chain and one of the phosphate groups in the Watson-Crick strand. Also, it was shown that the nucleobase Y made good stacking and binding with the other nucleobases in the TFO and Watson-Crick duplex, respectively. In contrast, the nucleobase Z with the LNA moiety was forced to twist out of the plane of the Watson-Crick base pair, which …

  13. Parallel consensual neural networks.

    Science.gov (United States)

    Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H

    1997-01-01

    A new type of a neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.
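
    The consensus step, weighting and combining the outputs of the stage networks, is easy to picture in code. The sketch below is only a schematic of that final combination (the stage networks themselves, the data transforms and the weight optimization described in the paper are not shown); the weights and scores are hypothetical.

        import numpy as np

        def consensual_decision(stage_outputs, weights):
            # stage_outputs: (n_stages, n_samples, n_classes) class scores from the stage networks.
            # weights: one non-negative weight per stage (e.g. derived from validation accuracy).
            w = np.asarray(weights, dtype=float)
            w = w / w.sum()
            combined = np.tensordot(w, np.asarray(stage_outputs), axes=1)  # (n_samples, n_classes)
            return combined.argmax(axis=1)  # consensual class label per sample

        # Three hypothetical stage networks scoring 4 samples over 3 classes
        scores = np.random.default_rng(2).random((3, 4, 3))
        print(consensual_decision(scores, weights=[0.5, 0.3, 0.2]))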

  14. Effect of stone coverage on soil erosion

    Science.gov (United States)

    Jomaa, S.; Barry, D. A.; Heng, B. P.; Brovelli, A.; Sander, G. C.; Parlange, J.

    2010-12-01

    Soil surface coverage has a significant impact on water infiltration, runoff and soil erosion yields. In particular, surface stones protect the soils from raindrop detachment, they retard the overland flow therefore decreasing its sediment transport capacity, and they prevent surface sealing. Several physical and environmental factors control to what extent stones on the soil surface modify the erosion rates and the related hydrological response. Among the most important factors are the moisture content of the topsoil, stone size, emplacement, coverage density and soil texture. Owing to the different inter-related processes, there is ambiguity concerning the quantitative effect of stones, and process-based understanding is limited. Experiments were performed (i) to quantify how stone features affect sediment yields, (ii) to understand the local effect of isolated surface stones, that is, the changes of the soil particle size distribution in the vicinity of a stone and (iii) to determine how stones attenuate the development of surface sealing and in turn how this affects the local infiltration rate. A series of experiments using the EPFL 6-m × 2-m erosion flume were conducted at different rainfall intensities (28 and 74 mm h-1) and stone coverage (20 and 40%). The total sediment concentration, the concentration of the individual size classes and the flow discharge were measured. In order to analyze the measurements, the Hairsine and Rose (HR) erosion model was adapted to account for the shielding effect of the stone cover. This was done by suitably adjusting the parameters based on the area not covered by stones. It was found that the modified HR model predictions agreed well with the measured sediment concentrations especially for the long time behavior. Changes in the bulk density of the topsoil due to raindrop-induced compaction with and without stone protection revealed that the stones protect the upper soil surface against the structural seals resulting in

  15. A Parallel Particle Swarm Optimizer

    National Research Council Canada - National Science Library

    Schutte, J. F; Fregly, B .J; Haftka, R. T; George, A. D

    2003-01-01

    .... Motivated by a computationally demanding biomechanical system identification problem, we introduce a parallel implementation of a stochastic population based global optimizer, the Particle Swarm...
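
    A common way to parallelize a particle swarm optimizer, and the one most naturally suggested by an expensive objective function, is to farm the fitness evaluations out to worker processes each iteration. The sketch below does exactly that with Python's multiprocessing; it is a generic illustration, not the implementation from the cited report, and the sphere objective, swarm size and PSO coefficients are placeholders.

        from multiprocessing import Pool
        import random

        def fitness(x):
            # Placeholder for an expensive objective evaluated independently per particle.
            return sum(v * v for v in x)

        def update_swarm(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
            for i in range(len(positions)):
                velocities[i] = [w * v + c1 * random.random() * (pb - x) + c2 * random.random() * (gb - x)
                                 for v, x, pb, gb in zip(velocities[i], positions[i], pbest[i], gbest)]
                positions[i] = [x + v for x, v in zip(positions[i], velocities[i])]

        if __name__ == "__main__":
            dim, n_particles = 5, 16
            positions = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
            velocities = [[0.0] * dim for _ in range(n_particles)]
            pbest = [p[:] for p in positions]
            pbest_val = [float("inf")] * n_particles
            gbest, gbest_val = positions[0][:], float("inf")
            with Pool() as pool:
                for _ in range(50):
                    values = pool.map(fitness, positions)   # fitness evaluations run in parallel
                    for i, val in enumerate(values):
                        if val < pbest_val[i]:
                            pbest[i], pbest_val[i] = positions[i][:], val
                        if val < gbest_val:
                            gbest, gbest_val = positions[i][:], val
                    update_swarm(positions, velocities, pbest, gbest)
            print("best value found:", gbest_val)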

  16. Patterns for Parallel Software Design

    CERN Document Server

    Ortega-Arjona, Jorge Luis

    2010-01-01

    Essential reading to understand patterns for parallel programming Software patterns have revolutionized the way we think about how software is designed, built, and documented, and the design of parallel software requires you to consider other particular design aspects and special skills. From clusters to supercomputers, success heavily depends on the design skills of software developers. Patterns for Parallel Software Design presents a pattern-oriented software architecture approach to parallel software design. This approach is not a design method in the classic sense, but a new way of managin

  17. Seeing or moving in parallel

    DEFF Research Database (Denmark)

    Christensen, Mark Schram; Ehrsson, H Henrik; Nielsen, Jens Bo

    2013-01-01

    a different network, involving bilateral dorsal premotor cortex (PMd), primary motor cortex, and SMA, was more active when subjects viewed parallel movements while performing either symmetrical or parallel movements. Correlations between behavioral instability and brain activity were present in right lateral...... adduction-abduction movements symmetrically or in parallel with real-time congruent or incongruent visual feedback of the movements. One network, consisting of bilateral superior and middle frontal gyrus and supplementary motor area (SMA), was more active when subjects performed parallel movements, whereas...

  18. Increasing Coverage of Hepatitis B Vaccination in China

    Science.gov (United States)

    Wang, Shengnan; Smith, Helen; Peng, Zhuoxin; Xu, Biao; Wang, Weibing

    2016-01-01

    Abstract This study used a system evaluation method to summarize China's experience on improving the coverage of hepatitis B vaccine, especially the strategies employed to improve the uptake of the timely birth dose. Identifying successful methods and strategies will provide strong evidence for policy makers and health workers in other countries with high hepatitis B prevalence. We conducted a literature review that included English- and Chinese-language studies carried out in mainland China, using PubMed, the Cochrane databases, Web of Knowledge, China National Knowledge Infrastructure, Wanfang data, and other relevant databases. Nineteen articles about the effectiveness and impact of interventions on improving the coverage of hepatitis B vaccine were included. Strong or moderate evidence showed that reinforcing health education, training and supervision, providing subsidies for facility birth, strengthening the coordination among health care providers, and using out-of-cold-chain storage for vaccines were all important to improving vaccination coverage. We found evidence that community education was the most commonly used intervention, and outreach programs such as the out-of-cold-chain strategy were more effective in increasing the coverage of vaccination in remote areas where the facility birth rate was relatively low. The essential impact factors were found to be strong government commitment and the cooperation of the different government departments. Public interventions relying on basic health care systems combined with outreach care services were critical elements in improving the hepatitis B vaccination rate in China. This success could not have occurred without exceptional national commitment. PMID:27175710

  19. Coverage-based constraints for IMRT optimization

    Science.gov (United States)

    Mescher, H.; Ulrich, S.; Bangert, M.

    2017-09-01

    Radiation therapy treatment planning requires an incorporation of uncertainties in order to guarantee an adequate irradiation of the tumor volumes. In current clinical practice, uncertainties are accounted for implicitly with an expansion of the target volume according to generic margin recipes. Alternatively, it is possible to account for uncertainties by explicit minimization of objectives that describe worst-case treatment scenarios, the expectation value of the treatment or the coverage probability of the target volumes during treatment planning. In this note we show that approaches relying on objectives to induce a specific coverage of the clinical target volumes are inevitably sensitive to variation of the relative weighting of the objectives. To address this issue, we introduce coverage-based constraints for intensity-modulated radiation therapy (IMRT) treatment planning. Our implementation follows the concept of coverage-optimized planning that considers explicit error scenarios to calculate and optimize patient-specific probabilities q(\hat{d}, \hat{v}) of covering a specific target volume fraction \hat{v} with a certain dose \hat{d}. Using a constraint-based reformulation of coverage-based objectives we eliminate the trade-off between coverage and competing objectives during treatment planning. In-depth convergence tests including 324 treatment plan optimizations demonstrate the reliability of coverage-based constraints for varying levels of probability, dose and volume. General clinical applicability of coverage-based constraints is demonstrated for two cases. A sensitivity analysis regarding penalty variations within this planning study based on IMRT treatment planning using (1) coverage-based constraints, (2) coverage-based objectives, (3) probabilistic optimization, (4) robust optimization and (5) conventional margins illustrates the potential benefit of coverage-based constraints that do not require tedious adjustment of target volume objectives.
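
    For readers unfamiliar with the notation, a plausible formalization of the coverage probability used above, consistent with the abstract's notation (the precise definition should be taken from the cited work), is

        q(\hat{d}, \hat{v}) \;=\; \Pr_{s \in \mathcal{S}}\!\left[ \frac{\bigl|\{\, x \in T : D_s(x) \ge \hat{d} \,\}\bigr|}{|T|} \;\ge\; \hat{v} \right],

    where \mathcal{S} is the set of explicit error scenarios, T the clinical target volume, and D_s(x) the dose delivered to point x under scenario s; a coverage-based constraint then requires q(\hat{d}, \hat{v}) to meet or exceed a prescribed probability level instead of trading it off against other objectives.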

  20. 75 FR 69577 - Deposit Insurance Regulations; Unlimited Coverage for Noninterest-Bearing Transaction Accounts

    Science.gov (United States)

    2010-11-15

    ..., contending that providing such coverage for these accounts promotes moral hazard. Four commenters suggested... withdrawals at any time, whether held by a business, an individual or other type of depositor. Unlike the... for unlimited separate coverage as a noninterest-bearing transaction account. One issue raised during...

  1. An Analysis of Television's Coverage of the "Iran Crisis": 5 November 1979 to 15 January 1980.

    Science.gov (United States)

    Miller, Christine

    The three television networks, acting under severe restrictions imposed by the Iranian government, all provided comprehensive coverage of the hostage crisis. A study was conducted to examine what, if any, salient differences arose or existed in this coverage from November 5, 1979, until January 15, 1980. A research procedure combining qualitative…

  2. 7 CFR 457.146 - Northern potato crop insurance-storage coverage endorsement.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 6 2010-01-01 2010-01-01 false Northern potato crop insurance-storage coverage... Northern potato crop insurance—storage coverage endorsement. The Northern Potato Crop Insurance Storage... for insurance provider) Both FCIC and reinsured policies: Northern Potato Crop Insurance Storage...

  3. 42 CFR 416.48 - Condition for coverage-Pharmaceutical services.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Condition for coverage-Pharmaceutical services. 416... Coverage § 416.48 Condition for coverage—Pharmaceutical services. The ASC must provide drugs and... direction of an individual designated responsible for pharmaceutical services. (a) Standard: Administration...

  4. 42 CFR 436.308 - Medically needy coverage of individuals under age 21.

    Science.gov (United States)

    2010-10-01

    ... THE VIRGIN ISLANDS Optional Coverage of the Medically Needy § 436.308 Medically needy coverage of... (b) of this section: (1) Who would not be covered under the mandatory medically needy group of... nursing facility services are provided under the plan to individuals within the age group selected under...

  5. CDMA coverage under mobile heterogeneous network load

    NARCIS (Netherlands)

    Saban, D.; van den Berg, Hans Leo; Boucherie, Richardus J.; Endrayanto, A.I.

    2002-01-01

    We analytically investigate coverage (determined by the uplink) under non-homogeneous and moving traffic load of third generation UMTS mobile networks. In particular, for different call assignment policies, we investigate cell breathing and the movement of the coverage gap occurring between cells

  6. 22 CFR 226.31 - Insurance coverage.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Insurance coverage. 226.31 Section 226.31 Foreign Relations AGENCY FOR INTERNATIONAL DEVELOPMENT ADMINISTRATION OF ASSISTANCE AWARDS TO U.S. NON-GOVERNMENTAL ORGANIZATIONS Post-award Requirements Property Standards § 226.31 Insurance coverage. Recipients...

  7. Coverage matters: insurance and health care

    National Research Council Canada - National Science Library

    Board on Health Care Services Staff; Institute of Medicine Staff; Institute of Medicine; National Academy of Sciences

    2001-01-01

    ...? How does the system of insurance coverage in the U.S. operate, and where does it fail? The first of six Institute of Medicine reports that will examine in detail the consequences of having a large uninsured population, Coverage Matters...

  8. Legislating health care coverage for the unemployed.

    Science.gov (United States)

    Palley, H A; Feldman, G; Gallner, I; Tysor, M

    1985-01-01

    Because the unemployed and their families are often likely to develop stress-related health problems, ensuring them access to health care is a public health issue. Congressional efforts thus far to legislate health coverage for the unemployed have proposed a system that recognizes people's basic need for coverage but has several limitations.

  9. PARALLEL IMPORT: REALITY FOR RUSSIA

    Directory of Open Access Journals (Sweden)

    Т. А. Сухопарова

    2014-01-01

    Full Text Available The problem of parallel import is currently an urgent question. Legalizing parallel import in Russia is expedient. This statement is based on an analysis of opposing expert opinions. At the same time, it is necessary to consider the negative consequences of such a decision and to apply remedies to minimize them.

  10. Optimisation of a parallel ocean general circulation model

    Science.gov (United States)

    Beare, M. I.; Stevens, D. P.

    1997-10-01

    This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.

  11. Survey on present status and trend of parallel programming environments

    International Nuclear Information System (INIS)

    Takemiya, Hiroshi; Higuchi, Kenji; Honma, Ichiro; Ohta, Hirofumi; Kawasaki, Takuji; Imamura, Toshiyuki; Koide, Hiroshi; Akimoto, Masayuki.

    1997-03-01

    This report intends to provide useful information on software tools for parallel programming through a survey of the parallel programming environments of the following six parallel computers, Fujitsu VPP300/500, NEC SX-4, Hitachi SR2201, Cray T94, IBM SP, and Intel Paragon, all of which are installed at the Japan Atomic Energy Research Institute (JAERI). Moreover, the present status of R&D on parallel software, including parallel languages, compilers, debuggers, performance evaluation tools, and integrated tools, is reported. This survey has been made as a part of our project of developing basic software for a parallel programming environment, which is designed on the concept of STA (Seamless Thinking Aid to programmers). (author)

  12. The BLAZE language - A parallel language for scientific programming

    Science.gov (United States)

    Mehrotra, Piyush; Van Rosendale, John

    1987-01-01

    A Pascal-like scientific programming language, BLAZE, is described. BLAZE contains array arithmetic, forall loops, and APL-style accumulation operators, which allow natural expression of fine grained parallelism. It also employs an applicative or functional procedure invocation mechanism, which makes it easy for compilers to extract coarse grained parallelism using machine specific program restructuring. Thus BLAZE should allow one to achieve highly parallel execution on multiprocessor architectures, while still providing the user with conceptually sequential control flow. A central goal in the design of BLAZE is portability across a broad range of parallel architectures. The multiple levels of parallelism present in BLAZE code, in principle, allow a compiler to extract the types of parallelism appropriate for the given architecture while neglecting the remainder. The features of BLAZE are described and it is shown how this language would be used in typical scientific programming.

  13. The BLAZE language: A parallel language for scientific programming

    Science.gov (United States)

    Mehrotra, P.; Vanrosendale, J.

    1985-01-01

    A Pascal-like scientific programming language, Blaze, is described. Blaze contains array arithmetic, forall loops, and APL-style accumulation operators, which allow natural expression of fine grained parallelism. It also employs an applicative or functional procedure invocation mechanism, which makes it easy for compilers to extract coarse grained parallelism using machine specific program restructuring. Thus Blaze should allow one to achieve highly parallel execution on multiprocessor architectures, while still providing the user with conceptually sequential control flow. A central goal in the design of Blaze is portability across a broad range of parallel architectures. The multiple levels of parallelism present in Blaze code, in principle, allow a compiler to extract the types of parallelism appropriate for the given architecture while neglecting the remainder. The features of Blaze are described, and it is shown how this language would be used in typical scientific programming.

  14. Parallelization of the FLAPW method

    International Nuclear Information System (INIS)

    Canning, A.; Mannstadt, W.; Freeman, A.J.

    1999-01-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about one hundred atoms due to a lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel computer

  15. Parallelization of the FLAPW method

    Science.gov (United States)

    Canning, A.; Mannstadt, W.; Freeman, A. J.

    2000-08-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining structural, electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work, we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel supercomputer.

  16. Parallel SN transport calculations on a transputer network

    International Nuclear Information System (INIS)

    Kim, Yong Hee; Cho, Nam Zin

    1994-01-01

    A parallel computing algorithm for the neutron transport problems has been implemented on a transputer network and two reactor benchmark problems (a fixed-source problem and an eigenvalue problem) are solved. We have shown that the parallel calculations provided significant reduction in execution time over the sequential calculations

  17. Experimental study on influence of vegetation coverage on runoff in wind-water erosion crisscross region

    Science.gov (United States)

    Wang, Jinhua; Zhang, Ronggang; Sun, Juan

    2018-02-01

    Using an artificial rainfall simulation method, 23 simulation experiments were carried out in the wind-water erosion crisscross region in order to analyze the influence of vegetation coverage on runoff and sediment yield. The experimental plots are standard plots with a length of 20 m, a width of 5 m and a slope of 15 degrees. The simulation experiments were conducted in plots with different vegetation coverage under three different rainfall intensities. According to the experimental observation data, the influence of vegetation coverage on runoff and infiltration was analyzed. Vegetation coverage has a significant impact on runoff: the higher the vegetation coverage, the smaller the runoff. Under a rainfall intensity of 0.6 mm/min, the runoff volume from the plot with 18% vegetation coverage was 1.2 times the runoff from the plot with 30% vegetation coverage. Moreover, the difference in runoff is more pronounced at higher rainfall intensities: when the rainfall intensity reaches 1.32 mm/min, the runoff from the plot with 11% vegetation coverage is about 2 times as large as the runoff from the plot with 53% vegetation coverage. Under low rainfall intensity, runoff starts later in plots with higher vegetation coverage than in plots with low vegetation coverage; under heavy rainfall intensity, there is no obvious difference in the time at which runoff begins. In addition, the higher the vegetation coverage, the deeper the rainfall infiltration depth. The results can provide a reference for ecological construction in wind-water erosion crisscross regions with serious soil erosion.

  18. Acceleration of cardiovascular MRI using parallel imaging: basic principles, practical considerations, clinical applications and future directions

    International Nuclear Information System (INIS)

    Niendorf, T.; Sodickson, D.

    2006-01-01

    Cardiovascular Magnetic Resonance (CVMR) imaging has proven to be of clinical value for non-invasive diagnostic imaging of cardiovascular diseases. CVMR requires rapid imaging; however, the speed of conventional MRI is fundamentally limited due to its sequential approach to image acquisition, in which data points are collected one after the other in the presence of sequentially-applied magnetic field gradients. Parallel imaging instead uses arrays of radiofrequency coils to acquire multiple data points simultaneously, and thereby increases imaging speed and efficiency beyond the limits of purely gradient-based approaches. The resulting improvements in imaging speed can be used in various ways, including shortening long examinations, improving spatial resolution and anatomic coverage, improving temporal resolution, enhancing image quality, overcoming physiological constraints, detecting and correcting for physiologic motion, and streamlining work flow. Examples of these strategies will be provided in this review, after some of the fundamentals of parallel imaging methods now in use for cardiovascular MRI are outlined. The emphasis will rest upon basic principles and clinical state-of-the-art cardiovascular MRI applications. In addition, practical aspects such as signal-to-noise ratio considerations, tailored parallel imaging protocols and potential artifacts will be discussed, and current trends and future directions will be explored. (orig.)

  19. A massively parallel strategy for STR marker development, capture, and genotyping.

    Science.gov (United States)

    Kistler, Logan; Johnson, Stephen M; Irwin, Mitchell T; Louis, Edward E; Ratan, Aakrosh; Perry, George H

    2017-09-06

    Short tandem repeat (STR) variants are highly polymorphic markers that facilitate powerful population genetic analyses. STRs are especially valuable in conservation and ecological genetic research, yielding detailed information on population structure and short-term demographic fluctuations. Massively parallel sequencing has not previously been leveraged for scalable, efficient STR recovery. Here, we present a pipeline for developing STR markers directly from high-throughput shotgun sequencing data without a reference genome, and an approach for highly parallel target STR recovery. We employed our approach to capture a panel of 5000 STRs from a test group of diademed sifakas (Propithecus diadema, n = 3), endangered Malagasy rainforest lemurs, and we report extremely efficient recovery of targeted loci: 97.3-99.6% of STRs characterized with ≥10x non-redundant sequence coverage. We then tested our STR capture strategy on P. diadema fecal DNA, and report robust initial results and suggestions for future implementations. In addition to STR targets, this approach also generates large, genome-wide single nucleotide polymorphism (SNP) panels from flanking regions. Our method provides a cost-effective and scalable solution for rapid recovery of large STR and SNP datasets in any species without needing a reference genome, and can be used even with suboptimal DNA more easily acquired in conservation and ecological studies. Published by Oxford University Press on behalf of Nucleic Acids Research 2017.

  20. Network television news coverage of environmental risks

    International Nuclear Information System (INIS)

    Greenberg, M.R.; Sandman, P.M.; Sachsman, D.V.; Salomone, K.L.

    1989-01-01

    Despite the criticisms that surround television coverage of environmental risk, there have been relatively few attempts to measure what and whom television shows. Most research has focused analysis on a few weeks of coverage of major stories like the gas leak at Bhopal, the Three Mile Island nuclear accident, or the Mount St. Helens eruption. To advance the research into television coverage of environmental risk, an analysis has been made of all environmental risk coverage by the network nightly news broadcasts for a period of more than two years. Researchers have analyzed all environmental risk coverage (564 stories in 26 months) presented on ABC, CBS, and NBC's evening news broadcasts from January 1984 through February 1986. The quantitative information from the 564 stories was balanced by a more qualitative analysis of the television coverage of two case studies: the dioxin contamination in Times Beach, Missouri, and the suspected methyl isocyanate emissions from the Union Carbide plant in Institute, West Virginia. Both qualitative and quantitative data contributed to the analysis of the role played by experts and environmental advocacy sources in coverage of environmental risk and to the suggestions for increasing that role.

  1. Is Monte Carlo embarrassingly parallel?

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)

    2012-07-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Also other time losses in the parallel calculation are identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
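
    The cycle-end rendez-vous described above is easy to see in a toy sketch. The following hypothetical mpi4py fragment (not the program analyzed in the paper; the "transport" is a stand-in random process and all numbers are arbitrary) shows how every process must stop at a collective operation once per cycle so the global fission source tally and multiplication factor can be formed, which is precisely where parallel efficiency is lost at high processor counts:

        from mpi4py import MPI   # run with e.g. `mpiexec -n 8 python mc_cycles.py`
        import random

        comm = MPI.COMM_WORLD

        local_histories = 10_000
        k_estimate = None

        for cycle in range(50):
            # Embarrassingly parallel part: each process tracks its own histories independently.
            local_offspring = sum(random.choice((0, 1, 2)) for _ in range(local_histories))

            # Rendez-vous point: a collective reduction synchronizes all processes every cycle
            # to build the global tally needed for the next cycle's source and for k-effective.
            total_offspring = comm.allreduce(local_offspring, op=MPI.SUM)
            total_histories = comm.allreduce(local_histories, op=MPI.SUM)
            k_estimate = total_offspring / total_histories

        if comm.Get_rank() == 0:
            print("toy k-effective estimate:", k_estimate)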

  2. Is Monte Carlo embarrassingly parallel?

    International Nuclear Information System (INIS)

    Hoogenboom, J. E.

    2012-01-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Also other time losses in the parallel calculation are identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)

  3. Parallel integer sorting with medium and fine-scale parallelism

    Science.gov (United States)

    Dagum, Leonardo

    1993-01-01

    Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128 processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
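
    Queue-sort and barrel-sort are described only at a high level here; as a generic illustration of the range-partitioning idea behind bucket-style parallel integer sorting (not a reimplementation of either algorithm), the following sketch scatters keys into per-range buckets and sorts the buckets in parallel worker processes. The bucket count and key range are arbitrary choices.

        from multiprocessing import Pool
        import random

        def sort_bucket(bucket):
            return sorted(bucket)

        def parallel_bucket_sort(keys, n_buckets, key_max):
            # Phase 1: scatter each key into the bucket owning its value range.
            width = key_max // n_buckets + 1
            buckets = [[] for _ in range(n_buckets)]
            for k in keys:
                buckets[k // width].append(k)
            # Phase 2: sort every bucket in parallel, then concatenate in range order.
            with Pool(processes=n_buckets) as pool:
                sorted_buckets = pool.map(sort_bucket, buckets)
            return [k for b in sorted_buckets for k in b]

        if __name__ == "__main__":
            data = [random.randrange(1_000_000) for _ in range(100_000)]
            assert parallel_bucket_sort(data, n_buckets=8, key_max=1_000_000) == sorted(data)
            print("parallel bucket sort matches the sequential result")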

  4. Template based parallel checkpointing in a massively parallel computer system

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
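
    The template idea, comparing each node's checkpoint blocks against checksums of a previously stored template and keeping only the blocks that differ, can be sketched in a few lines of C++. The block layout, the FNV-style checksum and the function names below are assumptions made for illustration; they are not the patented implementation.

        // template_checkpoint.cpp -- illustrative sketch of diffing checkpoint data
        // block by block against a previously stored template.
        #include <cstdint>
        #include <cstdio>
        #include <vector>

        // A simple FNV-1a hash stands in for whatever digest the real system uses.
        static std::uint32_t checksum(const std::vector<std::uint8_t>& block) {
            std::uint32_t h = 2166136261u;
            for (std::uint8_t b : block) { h ^= b; h *= 16777619u; }
            return h;
        }

        struct DeltaBlock { std::size_t index; std::vector<std::uint8_t> data; };

        // Return only the blocks whose checksum differs from the template's.
        std::vector<DeltaBlock> delta_against_template(
                const std::vector<std::vector<std::uint8_t>>& node_blocks,
                const std::vector<std::uint32_t>& template_checksums) {
            std::vector<DeltaBlock> delta;
            for (std::size_t i = 0; i < node_blocks.size(); ++i)
                if (i >= template_checksums.size() ||
                    checksum(node_blocks[i]) != template_checksums[i])
                    delta.push_back({i, node_blocks[i]});
            return delta;   // only this delta needs to be transmitted and stored
        }

        int main() {
            std::vector<std::vector<std::uint8_t>> blocks = {{1, 2, 3}, {4, 5, 6}};
            std::vector<std::uint32_t> tmpl = {checksum({1, 2, 3}), checksum({9, 9, 9})};
            std::printf("blocks to save: %zu of %zu\n",
                        delta_against_template(blocks, tmpl).size(), blocks.size());
            return 0;
        }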

  5. State Mandated Benefits and Employer Provided Health Insurance

    OpenAIRE

    Jonathan Gruber

    1992-01-01

    One popular explanation for this low rate of employee coverage is the presence of numerous state regulations which mandate that group health insurance plans must include certain benefits. By raising the minimum costs of providing any health insurance coverage, these mandated benefits make it impossible for firms which would have desired to offer minimal health insurance at a low cost to do so. I use data on insurance coverage among employees in small firms to investigate whether this problem ...

  6. Parallel education: what is it?

    OpenAIRE

    Amos, Michelle Peta

    2017-01-01

    In the history of education it has long been discussed that single-sex and coeducation are the two models of education present in schools. With the introduction of parallel schools over the last 15 years, there has been very little research into this 'new model'. Many people do not understand what it means for a school to be parallel or they confuse a parallel model with co-education, due to the presence of both boys and girls within the one institution. Therefore, the main obj...

  7. 42 CFR 440.330 - Benchmark health benefits coverage.

    Science.gov (United States)

    2010-10-01

    ...) Federal Employees Health Benefit Plan Equivalent Coverage (FEHBP—Equivalent Health Insurance Coverage). A... coverage. Health benefits coverage that is offered and generally available to State employees in the State... 42 Public Health 4 2010-10-01 2010-10-01 false Benchmark health benefits coverage. 440.330 Section...

  8. Space Shuttle Communications Coverage Analysis for Thermal Tile Inspection

    Science.gov (United States)

    Kroll, Quin D.; Hwu, Shian U.; Upanavage, Matthew; Boster, John P.; Chavez, Mark A.

    2009-01-01

    The space shuttle ultra-high frequency Space-to-Space Communication System has to provide adequate communication coverage for astronauts who are performing thermal tile inspection and repair on the underside of the space shuttle orbiter (SSO). Careful planning and quantitative assessment are necessary to ensure successful system operations and mission safety in this work environment. This study assesses communication systems performance for astronauts who are working in the underside, non-line-of-sight shadow region on the space shuttle. All of the space shuttle and International Space Station (ISS) transmitting antennas are blocked by the SSO structure. To ensure communication coverage at planned inspection worksites, the signal strength and link margin between the SSO/ISS antennas and the extravehicular activity astronauts, whose line-of-sight is blocked by vehicle structure, was analyzed. Investigations were performed using rigorous computational electromagnetic modeling techniques. Signal strength was obtained by computing the reflected and diffracted fields along the signal propagation paths between transmitting and receiving antennas. Radio frequency (RF) coverage was determined for thermal tile inspection and repair missions using the results of this computation. Analysis results from this paper are important in formulating the limits on reliable communication range and RF coverage at planned underside inspection and repair worksites.

  9. Print News Coverage of School-Based HPV Vaccine Mandate

    Science.gov (United States)

    Casciotti, Dana; Smith, Katherine C.; Andon, Lindsay; Vernick, Jon; Tsui, Amy; Klassen, Ann C.

    2015-01-01

    BACKGROUND In 2007, legislation was proposed in 24 states and the District of Columbia for school-based HPV vaccine mandates, and mandates were enacted in Texas, Virginia, and the District of Columbia. Media coverage of these events was extensive, and media messages both reflected and contributed to controversy surrounding these legislative activities. Messages communicated through the media are an important influence on adolescent and parent understanding of school-based vaccine mandates. METHODS We conducted structured text analysis of newspaper coverage, including quantitative analysis of 169 articles published in mandate jurisdictions from 2005-2009, and qualitative analysis of 63 articles from 2007. Our structured analysis identified topics, key stakeholders and sources, tone, and the presence of conflict. Qualitative thematic analysis identified key messages and issues. RESULTS Media coverage was often incomplete, providing little context about cervical cancer or screening. Skepticism and autonomy concerns were common. Messages reflected conflict and distrust of government activities, which could negatively impact this and other youth-focused public health initiatives. CONCLUSIONS If school health professionals are aware of the potential issues raised in media coverage of school-based health mandates, they will be more able to convey appropriate health education messages, and promote informed decision-making by parents and students. PMID:25099421

  10. Chernobyl coverage: how the US media treated the nuclear industry

    International Nuclear Information System (INIS)

    Friedman, S.M.; Gorney, C.M.; Egolf, B.P.

    1992-01-01

    This study attempted to uncover whether enough background information about nuclear power and the nuclear industries in the USA, USSR and Eastern and Western Europe had been included during the first two weeks of US coverage of the Chernobyl accident so that Americans would not be misled in their understanding of and attitudes toward nuclear power in general. It also sought to determine if reporters took advantage of the Chernobyl accident to attack nuclear technology or the nuclear industry in general. Coverage was analysed in five US newspapers and on the evening newscasts of the three major US television networks. Despite heavy coverage of the accident, no more than 25% of the coverage was devoted to information on safety records, history of accidents and current status of nuclear industries. Not enough information was provided to improve the public's understanding of nuclear power or to put the Chernobyl accident in context. However, articles and newscasts generally balanced use of pro- and anti-nuclear statements, and did not include excessive amounts of fear-inducing and negative information. (author)

  11. Massive hybrid parallelism for fully implicit multiphysics

    International Nuclear Information System (INIS)

    Gaston, D. R.; Permann, C. J.; Andrs, D.; Peterson, J. W.

    2013-01-01

    As hardware advances continue to modify the supercomputing landscape, traditional scientific software development practices will become more outdated, ineffective, and inefficient. The process of rewriting/retooling existing software for new architectures is a Sisyphean task, and results in substantial hours of development time, effort, and money. Software libraries which provide an abstraction of the resources provided by such architectures are therefore essential if the computational engineering and science communities are to continue to flourish in this modern computing environment. The Multiphysics Object Oriented Simulation Environment (MOOSE) framework enables complex multiphysics analysis tools to be built rapidly by scientists, engineers, and domain specialists, while also allowing them to both take advantage of current HPC architectures, and efficiently prepare for future supercomputer designs. MOOSE employs a hybrid shared-memory and distributed-memory parallel model and provides a complete and consistent interface for creating multiphysics analysis tools. In this paper, a brief discussion of the mathematical algorithms underlying the framework and the internal object-oriented hybrid parallel design are given. Representative massively parallel results from several applications areas are presented, and a brief discussion of future areas of research for the framework are provided. (authors)
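
    MOOSE builds on existing solver libraries, but the hybrid shared-memory plus distributed-memory model it employs can be pictured with a generic C++ sketch in which MPI ranks own slices of the data and OpenMP threads work within each slice. Everything in the sketch (problem size, the reduction being computed) is an assumption for illustration and none of it is MOOSE code.

        // hybrid_hello.cpp -- generic hybrid MPI + OpenMP illustration; build with,
        // e.g., mpicxx -fopenmp hybrid_hello.cpp
        #include <mpi.h>
        #include <omp.h>
        #include <cstdio>
        #include <vector>

        int main(int argc, char** argv) {
            int provided = 0;
            MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
            int rank, nranks;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &nranks);

            // Each rank owns a slice of a global vector (distributed memory) ...
            const long n_global = 1000000;
            const long n_local  = n_global / nranks;     // remainder ignored for brevity
            std::vector<double> u(n_local, 1.0);

            // ... and OpenMP threads share that slice (shared memory).
            double local_sum = 0.0;
            #pragma omp parallel for reduction(+ : local_sum)
            for (long i = 0; i < n_local; ++i) local_sum += u[i] * u[i];

            double global_sum = 0.0;
            MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
            if (rank == 0) std::printf("threads per rank: %d, ||u||^2 approx %.1f\n",
                                       omp_get_max_threads(), global_sum);
            MPI_Finalize();
            return 0;
        }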

  12. Massive hybrid parallelism for fully implicit multiphysics

    Energy Technology Data Exchange (ETDEWEB)

    Gaston, D. R.; Permann, C. J.; Andrs, D.; Peterson, J. W. [Idaho National Laboratory, 2525 N. Fremont Ave., Idaho Falls, ID 83415 (United States)

    2013-07-01

    As hardware advances continue to modify the supercomputing landscape, traditional scientific software development practices will become more outdated, ineffective, and inefficient. The process of rewriting/retooling existing software for new architectures is a Sisyphean task, and results in substantial hours of development time, effort, and money. Software libraries which provide an abstraction of the resources provided by such architectures are therefore essential if the computational engineering and science communities are to continue to flourish in this modern computing environment. The Multiphysics Object Oriented Simulation Environment (MOOSE) framework enables complex multiphysics analysis tools to be built rapidly by scientists, engineers, and domain specialists, while also allowing them to both take advantage of current HPC architectures, and efficiently prepare for future supercomputer designs. MOOSE employs a hybrid shared-memory and distributed-memory parallel model and provides a complete and consistent interface for creating multiphysics analysis tools. In this paper, a brief discussion of the mathematical algorithms underlying the framework and the internal object-oriented hybrid parallel design are given. Representative massively parallel results from several applications areas are presented, and a brief discussion of future areas of research for the framework are provided. (authors)

  13. MASSIVE HYBRID PARALLELISM FOR FULLY IMPLICIT MULTIPHYSICS

    Energy Technology Data Exchange (ETDEWEB)

    Cody J. Permann; David Andrs; John W. Peterson; Derek R. Gaston

    2013-05-01

    As hardware advances continue to modify the supercomputing landscape, traditional scientific software development practices will become more outdated, ineffective, and inefficient. The process of rewriting/retooling existing software for new architectures is a Sisyphean task, and results in substantial hours of development time, effort, and money. Software libraries which provide an abstraction of the resources provided by such architectures are therefore essential if the computational engineering and science communities are to continue to flourish in this modern computing environment. The Multiphysics Object Oriented Simulation Environment (MOOSE) framework enables complex multiphysics analysis tools to be built rapidly by scientists, engineers, and domain specialists, while also allowing them to both take advantage of current HPC architectures, and efficiently prepare for future supercomputer designs. MOOSE employs a hybrid shared-memory and distributed-memory parallel model and provides a complete and consistent interface for creating multiphysics analysis tools. In this paper, a brief discussion of the mathematical algorithms underlying the framework and the internal object-oriented hybrid parallel design are given. Representative massively parallel results from several applications areas are presented, and a brief discussion of future areas of research for the framework are provided.

  14. A Soft Parallel Kinematic Mechanism.

    Science.gov (United States)

    White, Edward L; Case, Jennifer C; Kramer-Bottiglio, Rebecca

    2018-02-01

    In this article, we describe a novel holonomic soft robotic structure based on a parallel kinematic mechanism. The design is based on the Stewart platform, which uses six sensors and actuators to achieve full six-degree-of-freedom motion. Our design is much less complex than a traditional platform, since it replaces the 12 spherical and universal joints found in a traditional Stewart platform with a single highly deformable elastomer body and flexible actuators. This reduces the total number of parts in the system and simplifies the assembly process. Actuation is achieved through coiled-shape memory alloy actuators. State observation and feedback is accomplished through the use of capacitive elastomer strain gauges. The main structural element is an elastomer joint that provides antagonistic force. We report the response of the actuators and sensors individually, then report the response of the complete assembly. We show that the completed robotic system is able to achieve full position control, and we discuss the limitations associated with using responsive material actuators. We believe that control demonstrated on a single body in this work could be extended to chains of such bodies to create complex soft robots.

  15. A Two-Phase Coverage-Enhancing Algorithm for Hybrid Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Qingguo Zhang

    2017-01-01

    Full Text Available Providing field coverage is a key task in many sensor network applications. In certain scenarios, the sensor field may have coverage holes due to random initial deployment of sensors; thus, the desired level of coverage cannot be achieved. A hybrid wireless sensor network is a cost-effective solution to this problem, which is achieved by repositioning a portion of the mobile sensors in the network to meet the network coverage requirement. This paper investigates how to redeploy mobile sensor nodes to improve network coverage in hybrid wireless sensor networks. We propose a two-phase coverage-enhancing algorithm for hybrid wireless sensor networks. In phase one, we use a differential evolution algorithm to compute candidate target positions for the mobile sensor nodes that could potentially improve coverage. In the second phase, we use an optimization scheme on the candidate target positions calculated from phase one to reduce the accumulated potential moving distance of mobile sensors, such that the exact mobile sensor nodes that need to be moved, as well as their final target positions, can be determined. Experimental results show that the proposed algorithm provided significant improvement in terms of area coverage rate, average moving distance, area coverage–distance rate and the number of moved mobile sensors, when compared with other approaches.
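
    The second phase, deciding which mobile sensors actually move and where, can be pictured with a much simpler greedy sketch that assigns each candidate target position to its nearest unused mobile sensor. This is not the paper's optimization scheme (which reduces the accumulated moving distance more carefully); the toy coordinates and the greedy rule are assumptions for illustration only.

        // coverage_redeploy.cpp -- greedy assignment of mobile sensors to candidate
        // target positions (illustrative stand-in for the paper's second phase).
        #include <cmath>
        #include <cstdio>
        #include <limits>
        #include <vector>

        struct Point { double x, y; };

        static double dist(const Point& a, const Point& b) {
            return std::hypot(a.x - b.x, a.y - b.y);
        }

        int main() {
            // Assumed toy data: current mobile-sensor positions and candidate
            // target positions produced by a coverage-optimizing first phase.
            std::vector<Point> sensors = {{0, 0}, {5, 5}, {9, 1}};
            std::vector<Point> targets = {{1, 1}, {8, 2}};
            std::vector<bool> used(sensors.size(), false);

            double total = 0.0;
            for (const Point& t : targets) {   // each target gets its nearest free sensor
                int best = -1;
                double best_d = std::numeric_limits<double>::infinity();
                for (std::size_t s = 0; s < sensors.size(); ++s) {
                    const double d = dist(sensors[s], t);
                    if (!used[s] && d < best_d) { best_d = d; best = (int)s; }
                }
                if (best >= 0) {
                    used[best] = true;
                    total += best_d;
                    std::printf("move sensor %d to (%.1f, %.1f), distance %.2f\n",
                                best, t.x, t.y, best_d);
                }
            }
            std::printf("total moving distance: %.2f\n", total);
            return 0;
        }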

  16. SWAMP+: multiple subsequence alignment using associative massive parallelism

    Energy Technology Data Exchange (ETDEWEB)

    Steinfadt, Shannon Irene [Los Alamos National Laboratory; Baker, Johnnie W [KENT STATE UNIV.

    2010-10-18

    A new parallel algorithm SWAMP+ incorporates the Smith-Waterman sequence alignment on an associative parallel model known as ASC. It is a highly sensitive parallel approach that expands traditional pairwise sequence alignment. This is the first parallel algorithm to provide multiple non-overlapping, non-intersecting subsequence alignments with the accuracy of Smith-Waterman. The efficient algorithm provides multiple alignments similar to BLAST while creating a better workflow for the end users. The parallel portions of the code run in O(m+n) time using m processors. When m = n, the algorithmic analysis becomes O(n) with a coefficient of two, yielding a linear speedup. Implementation of the algorithm on the SIMD ClearSpeed CSX620 confirms this theoretical linear speedup with real timings.
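
    The Smith-Waterman recurrence that SWAMP+ parallelizes can also be organized on ordinary shared memory by sweeping anti-diagonals of the scoring matrix, since all cells on one anti-diagonal are independent. The sketch below uses OpenMP rather than the associative ASC model of the paper, and the sequences, scoring values and linear gap penalty are assumptions for illustration.

        // sw_wavefront.cpp -- Smith-Waterman scoring with anti-diagonal (wavefront)
        // parallelism; illustrative, not the ASC/SWAMP+ algorithm. Build with, e.g.,
        // g++ -fopenmp sw_wavefront.cpp
        #include <algorithm>
        #include <cstdio>
        #include <string>
        #include <vector>

        int main() {
            const std::string a = "ACACACTA", b = "AGCACACA";   // assumed toy sequences
            const int match = 2, mismatch = -1, gap = -1;
            const int m = (int)a.size(), n = (int)b.size();
            std::vector<std::vector<int>> H(m + 1, std::vector<int>(n + 1, 0));
            int best = 0;

            // Cells on the same anti-diagonal d = i + j do not depend on each other.
            for (int d = 2; d <= m + n; ++d) {
                const int ilo = std::max(1, d - n), ihi = std::min(m, d - 1);
                int diag_best = 0;
                #pragma omp parallel for reduction(max : diag_best)
                for (int i = ilo; i <= ihi; ++i) {
                    const int j = d - i;
                    const int s = (a[i - 1] == b[j - 1]) ? match : mismatch;
                    H[i][j] = std::max({0, H[i - 1][j - 1] + s,
                                        H[i - 1][j] + gap, H[i][j - 1] + gap});
                    diag_best = std::max(diag_best, H[i][j]);
                }
                best = std::max(best, diag_best);
            }
            std::printf("best local alignment score: %d\n", best);
            return 0;
        }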

  17. Kalman Filter Tracking on Parallel Architectures

    International Nuclear Information System (INIS)

    Cerati, Giuseppe; Elmer, Peter; Krutelyov, Slava; Lantz, Steven; Lefebvre, Matthieu; McDermott, Kevin; Riley, Daniel; Tadel, Matevž; Wittich, Peter; Würthwein, Frank; Yagil, Avi

    2016-01-01

    Power density constraints are limiting the performance improvements of modern CPUs. To address this we have seen the introduction of lower-power, multi-core processors such as GPGPU, ARM and Intel MIC. In order to achieve the theoretical performance gains of these processors, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High-Luminosity Large Hadron Collider (HL-LHC), for example, this will be by far the dominant problem. The need for greater parallelism has driven investigations of very different track finding techniques such as Cellular Automata or Hough Transforms. The most common track finding techniques in use today, however, are those based on a Kalman filter approach. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. They are known to provide high physics performance, are robust, and are in use today at the LHC. Given the utility of the Kalman filter in track finding, we have begun to port these algorithms to parallel architectures, namely Intel Xeon and Xeon Phi. We report here on our progress towards an end-to-end track reconstruction algorithm fully exploiting vectorization and parallelization techniques in a simplified experimental environment
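
    The fit between Kalman filtering and wide vector units comes from pushing many independent track candidates through the same predict/update arithmetic at once. A minimal sketch, assuming a scalar state per track and a structure-of-arrays layout so the loop can be vectorized, is given below; it is not the experiments' code, and the noise parameters are placeholders.

        // kf_simd.cpp -- many independent scalar Kalman filters updated in a
        // structure-of-arrays layout; illustrative only. Build with, e.g.,
        // g++ -O2 -fopenmp-simd kf_simd.cpp
        #include <cstdio>
        #include <vector>

        int main() {
            const int ntracks = 1024;                 // assumed number of track candidates
            std::vector<double> x(ntracks, 0.0);      // state estimates
            std::vector<double> P(ntracks, 1.0);      // state variances
            std::vector<double> z(ntracks, 0.5);      // one measurement per track
            const double Q = 0.01, R = 0.1;           // process / measurement noise (assumed)

            #pragma omp simd
            for (int t = 0; t < ntracks; ++t) {
                // Predict step (identity dynamics in this sketch).
                const double xp = x[t];
                const double Pp = P[t] + Q;
                // Update step with the measurement z[t].
                const double K = Pp / (Pp + R);       // Kalman gain
                x[t] = xp + K * (z[t] - xp);
                P[t] = (1.0 - K) * Pp;
            }
            std::printf("track 0: x = %.3f, P = %.3f\n", x[0], P[0]);
            return 0;
        }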

  18. Parallel algorithms on the ASTRA SIMD machine

    International Nuclear Information System (INIS)

    Odor, G.; Rohrbach, F.; Vesztergombi, G.; Varga, G.; Tatrai, F.

    1996-01-01

    In view of the tremendous computing power jump of modern RISC processors the interest in parallel computing seems to be thinning out. Why use a complicated system of parallel processors, if the problem can be solved by a single powerful micro-chip? It is a general law, however, that exponential growth will always end in some kind of saturation, and then parallelism will again become a hot topic. We try to prepare ourselves for this eventuality. The MPPC project started in 1990 in the heyday of parallelism and produced four ASTRA machines (presented at CHEP'92) with 4k processors (which are expandable to 16k) based on yesterday's chip technology (chip presented at CHEP'91). These machines now provide excellent test-beds for algorithmic developments in a complete, real environment. We are developing, for example, fast pattern-recognition algorithms which could be used in high-energy physics experiments at the LHC (planned to be operational after 2004 at CERN) for triggering and data reduction. The basic feature of our ASP (Associative String Processor) approach is to use extremely simple (thus very cheap) processor elements but in huge quantities (up to millions of processors) connected together by a very simple string-like communication chain. In this paper we present powerful algorithms based on this architecture, indicating the performance perspectives if the hardware quality reaches present or even future technology levels. (author)

  19. Mapping robust parallel multigrid algorithms to scalable memory architectures

    Science.gov (United States)

    Overman, Andrea; Vanrosendale, John

    1993-01-01

    The convergence rate of standard multigrid algorithms degenerates on problems with stretched grids or anisotropic operators. The usual cure for this is the use of line or plane relaxation. However, multigrid algorithms based on line and plane relaxation have limited and awkward parallelism and are quite difficult to map effectively to highly parallel architectures. Newer multigrid algorithms that overcome anisotropy through the use of multiple coarse grids rather than relaxation are better suited to massively parallel architectures because they require only simple point-relaxation smoothers. In this paper, we look at the parallel implementation of a V-cycle multiple semicoarsened grid (MSG) algorithm on distributed-memory architectures such as the Intel iPSC/860 and Paragon computers. The MSG algorithms provide two levels of parallelism: parallelism within the relaxation or interpolation on each grid and across the grids on each multigrid level. Both levels of parallelism must be exploited to map these algorithms effectively to parallel architectures. This paper describes a mapping of an MSG algorithm to distributed-memory architectures that demonstrates how both levels of parallelism can be exploited. The result is a robust and effective multigrid algorithm for distributed-memory machines.

  20. Vacuum Large Current Parallel Transfer Numerical Analysis

    Directory of Open Access Journals (Sweden)

    Enyuan Dong

    2014-01-01

    Full Text Available The stable operation and reliable breaking of large generator currents is a difficult problem in power systems. It can be solved successfully with parallel interrupters and a proper timing sequence based on phase-control technology, in which the breaker control strategy is decided by the times of both the first-opening phase and the second-opening phase. A precise model of the transfer current can provide the proper timing sequence for breaking the generator circuit breaker. By analysing transfer-current experiments and data, the real vacuum arc resistance and a precise corrected model of the large-current transfer process are obtained in this paper. The transfer time calculated with the corrected model is very close to the actual transfer time. This can provide guidance for planning the proper timing sequence and breaking the vacuum generator circuit breaker with parallel interrupters.

  1. Workspace Analysis for Parallel Robot

    Directory of Open Access Journals (Sweden)

    Ying Sun

    2013-05-01

    Full Text Available As a comparatively new type of robot, the parallel robot possesses advantages that the serial robot does not, such as high rigidity, great load-carrying capacity, small error, high precision, small self-weight/load ratio, good dynamic behavior and easy control; hence its range of application keeps expanding. In order to find the workspace of a parallel mechanism, a numerical boundary-searching algorithm based on the inverse kinematic solution and the limits on link length has been introduced. This paper analyses the position workspace and the orientation workspace of a parallel robot with six degrees of freedom. The results show that changing the length of the branches of the parallel mechanism is the main means of enlarging or reducing its workspace, and that the radius of the moving platform has no effect on the size of the workspace but does change its position.
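
    The numerical boundary-searching idea, sampling candidate platform poses, solving the inverse kinematics for the leg lengths and keeping the poses whose legs stay within their stroke limits, can be sketched for a simplified Stewart-type geometry. The anchor layout, stroke limits and fixed platform orientation below are assumptions for illustration, not the mechanism studied in the paper.

        // workspace_scan.cpp -- illustrative workspace search for a simplified
        // Stewart-type parallel mechanism with a fixed platform orientation.
        #include <array>
        #include <cmath>
        #include <cstdio>

        struct P3 { double x, y, z; };

        static double len(const P3& a) { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

        int main() {
            const double PI = 3.141592653589793;
            // Assumed geometry: six base anchors and six platform anchors.
            std::array<P3, 6> base, plat;
            for (int i = 0; i < 6; ++i) {
                const double ab = 2.0 * PI * i / 6.0, ap = ab + 0.3;
                base[i] = {0.8 * std::cos(ab), 0.8 * std::sin(ab), 0.0};
                plat[i] = {0.4 * std::cos(ap), 0.4 * std::sin(ap), 0.0};
            }
            const double lmin = 0.8, lmax = 1.3;   // assumed leg stroke limits

            int reachable = 0, total = 0;
            for (double z = 0.5; z <= 1.5; z += 0.05)
                for (double x = -0.5; x <= 0.5; x += 0.05)
                    for (double y = -0.5; y <= 0.5; y += 0.05) {
                        ++total;
                        bool ok = true;
                        for (int i = 0; i < 6 && ok; ++i) {
                            const P3 leg = {plat[i].x + x - base[i].x,
                                            plat[i].y + y - base[i].y,
                                            plat[i].z + z - base[i].z};
                            const double L = len(leg);   // inverse kinematics: leg length
                            ok = (L >= lmin && L <= lmax);
                        }
                        if (ok) ++reachable;
                    }
            std::printf("reachable samples: %d of %d\n", reachable, total);
            return 0;
        }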

  2. Parallel encoders for pixel detectors

    International Nuclear Information System (INIS)

    Nikityuk, N.M.

    1991-01-01

    A new method of fast encoding and determining the multiplicity and coordinates of fired pixels is described. A specific example of the construction of parallel encoders and MCC for n=49 and t=2 is given. 16 refs.; 6 figs.; 2 tabs

  3. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  4. Event monitoring of parallel computations

    Directory of Open Access Journals (Sweden)

    Gruzlikov Alexander M.

    2015-06-01

    Full Text Available The paper considers the monitoring of parallel computations for detection of abnormal events. It is assumed that computations are organized according to an event model, and monitoring is based on specific test sequences

  5. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo; Kronbichler, Martin; Bangerth, Wolfgang

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  6. NOAA Weather Radio - County Coverage by State

    Science.gov (United States)

  7. Media Coverage of Nuclear Energy after Fukushima

    International Nuclear Information System (INIS)

    Oltra, C.; Roman, P.; Prades, A.

    2013-01-01

    This report presents the main findings of a content analysis of printed media coverage of nuclear energy in Spain before and after the Fukushima accident. Our main objective is to understand the changes in the presentation of nuclear fission and nuclear fusion as a result of the accident in Japan. We specifically analyze the volume of coverage and thematic content in the media coverage for nuclear fusion from a sample of Spanish print articles in more than 20 newspapers from 2008 to 2012. We also analyze the media coverage of nuclear energy (fission) in three main Spanish newspapers one year before and one year after the accident. The results illustrate how the media contributed to the presentation of nuclear power in the months before and after the accident. This could have implications for the public understanding of nuclear power. (Author)

  8. Media Coverage of Nuclear Energy after Fukushima

    Energy Technology Data Exchange (ETDEWEB)

    Oltra, C.; Roman, P.; Prades, A.

    2013-07-01

    This report presents the main findings of a content analysis of printed media coverage of nuclear energy in Spain before and after the Fukushima accident. Our main objective is to understand the changes in the presentation of nuclear fission and nuclear fusion as a result of the accident in Japan. We specifically analyze the volume of coverage and thematic content in the media coverage for nuclear fusion from a sample of Spanish print articles in more than 20 newspapers from 2008 to 2012. We also analyze the media coverage of nuclear energy (fission) in three main Spanish newspapers one year before and one year after the accident. The results illustrate how the media contributed to the presentation of nuclear power in the months before and after the accident. This could have implications for the public understanding of nuclear power. (Author)

  9. Coverage for SCS Pre-1941 Aerial Photography

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — This shapefile was generated by the U.S. Bureau of Land Management (BLM) at the New Mexico State Office to show the coverage for the Pre-1941 aerial photography...

  10. Base drive for paralleled inverter systems

    Science.gov (United States)

    Nagano, S. (Inventor)

    1980-01-01

    In a paralleled inverter system, a positive feedback current derived from the total current from all of the modules of the inverter system is applied to the base drive of each of the power transistors of all modules, thereby to provide all modules protection against open or short circuit faults occurring in any of the modules, and force equal current sharing among the modules during turn on of the power transistors.

  11. Exploiting Symmetry on Parallel Architectures.

    Science.gov (United States)

    Stiller, Lewis Benjamin

    1995-01-01

    This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs, and it discovered a number of results. Second, parallel algorithms for Fourier transforms for finite groups are developed, and preliminary parallel implementations for group transforms of dihedral and of symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques, and were used in the investigation of various physical phenomena.

  12. Parallel algorithms for continuum dynamics

    International Nuclear Information System (INIS)

    Hicks, D.L.; Liebrock, L.M.

    1987-01-01

    Simply porting existing parallel programs to a new parallel processor may not achieve the full speedup possible; to achieve the maximum efficiency may require redesigning the parallel algorithms for the specific architecture. The authors discuss here parallel algorithms that were developed first for the HEP processor and then ported to the CRAY X-MP/4, the ELXSI/10, and the Intel iPSC/32. Focus is mainly on the most recent parallel processing results produced, i.e., those on the Intel Hypercube. The applications are simulations of continuum dynamics in which the momentum and stress gradients are important. Examples of these are inertial confinement fusion experiments, severe breaks in the coolant system of a reactor, weapons physics, shock-wave physics. Speedup efficiencies on the Intel iPSC Hypercube are very sensitive to the ratio of communication to computation. Great care must be taken in designing algorithms for this machine to avoid global communication. This is much more critical on the iPSC than it was on the three previous parallel processors

  13. Length and coverage of inhibitory decision rules

    KAUST Repository

    Alsolami, Fawaz

    2012-01-01

    The authors present algorithms for optimization of inhibitory rules relative to length and coverage. Inhibitory rules have a relation "attribute ≠ value" on the right-hand side. The considered algorithms are based on extensions of dynamic programming. The paper also contains a comparison of the length and coverage of inhibitory rules constructed by a greedy algorithm and by the dynamic programming algorithm. © 2012 Springer-Verlag.

  14. Universal Health Coverage - The Critical Importance of Global Solidarity and Good Governance Comment on "Ethical Perspective: Five Unacceptable Trade-offs on the Path to Universal Health Coverage".

    Science.gov (United States)

    Reis, Andreas A

    2016-06-07

    This article provides a commentary to Ole Norheim's editorial entitled "Ethical perspective: Five unacceptable trade-offs on the path to universal health coverage." It reinforces its message that an inclusive, participatory process is essential for ethical decision-making and underlines the crucial importance of good governance in setting fair priorities in healthcare. Solidarity on both national and international levels is needed to make progress towards the goal of universal health coverage (UHC). © 2016 by Kerman University of Medical Sciences.

  15. Comparison of gene coverage of mouse oligonucleotide microarray platforms

    Directory of Open Access Journals (Sweden)

    Medrano Juan F

    2006-03-01

    reveals that the commercial microarray Sentrix, which is based on the MEEBO public oligoset, showed the best mouse genome coverage currently available. We also suggest the creation of guidelines to standardize the minimum set of information that vendors should provide to allow researchers to accurately evaluate the advantages and disadvantages of using a given platform.

  16. Universal health coverage in Turkey: enhancement of equity.

    Science.gov (United States)

    Atun, Rifat; Aydın, Sabahattin; Chakraborty, Sarbani; Sümer, Safir; Aran, Meltem; Gürol, Ipek; Nazlıoğlu, Serpil; Ozgülcü, Senay; Aydoğan, Ulger; Ayar, Banu; Dilmen, Uğur; Akdağ, Recep

    2013-07-06

    Turkey has successfully introduced health system changes and provided its citizens with the right to health to achieve universal health coverage, which helped to address inequities in financing, health service access, and health outcomes. We trace the trajectory of health system reforms in Turkey, with a particular emphasis on 2003-13, which coincides with the Health Transformation Program (HTP). The HTP rapidly expanded health insurance coverage and access to health-care services for all citizens, especially the poorest population groups, to achieve universal health coverage. We analyse the contextual drivers that shaped the transformations in the health system, explore the design and implementation of the HTP, identify the factors that enabled its success, and investigate its effects. Our findings suggest that the HTP was instrumental in achieving universal health coverage to enhance equity substantially, and led to quantifiable and beneficial effects on all health system goals, with an improved level and distribution of health, greater fairness in financing with better financial protection, and notably increased user satisfaction. After the HTP, five health insurance schemes were consolidated to create a unified General Health Insurance scheme with harmonised and expanded benefits. Insurance coverage for the poorest population groups in Turkey increased from 2·4 million people in 2003, to 10·2 million in 2011. Health service access increased across the country-in particular, access and use of key maternal and child health services improved to help to greatly reduce the maternal mortality ratio, and under-5, infant, and neonatal mortality, especially in socioeconomically disadvantaged groups. Several factors helped to achieve universal health coverage and improve outcomes. These factors include economic growth, political stability, a comprehensive transformation strategy led by a transformation team, rapid policy translation, flexible implementation with

  17. Increasing Coverage of Hepatitis B Vaccination in China

    OpenAIRE

    Wang, Shengnan; Smith, Helen; Peng, Zhuoxin; Xu, Biao; Wang, Weibing

    2016-01-01

    Abstract This study used a system evaluation method to summarize China's experience in improving the coverage of hepatitis B vaccine, especially the strategies employed to improve the uptake of the timely birth dose. Identifying successful methods and strategies will provide strong evidence for policy makers and health workers in other countries with high hepatitis B prevalence. We conducted a literature review that included English- or Chinese-language studies carried out in mainland China, using PubMed, ...

  18. Performance Evaluation of a Dual Coverage System for Internet of Things Environments

    Directory of Open Access Journals (Sweden)

    Omar Said

    2016-01-01

    Full Text Available A dual coverage system for Internet of Things (IoT environments is introduced. This system is used to connect IoT nodes regardless of their locations. The proposed system has three different architectures, which are based on satellites and High Altitude Platforms (HAPs. In case of Internet coverage problems, the Internet coverage will be replaced with the Satellite/HAP network coverage under specific restrictions such as loss and delay. According to IoT requirements, the proposed architectures should include multiple levels of satellites or HAPs, or a combination of both, to cover the global Internet things. It was shown that the Satellite/HAP/HAP/Things architecture provides the largest coverage area. A network simulation package, NS2, was used to test the performance of the proposed multilevel architectures. The results indicated that the HAP/HAP/Things architecture has the best end-to-end delay, packet loss, throughput, energy consumption, and handover.

  19. Dynamic balancing of mechanisms and synthesizing of parallel robots

    CERN Document Server

    Wei, Bin

    2016-01-01

    This book covers the state-of-the-art technologies in dynamic balancing of mechanisms with minimum increase of mass and inertia. The synthesis of parallel robots based on the Decomposition and Integration concept is also covered in detail. The latest advances are described, including different balancing principles, design of reactionless mechanisms with minimum increase of mass and inertia, and synthesizing parallel robots. This is an ideal book for mechanical engineering students and researchers who are interested in the dynamic balancing of mechanisms and synthesizing of parallel robots. This book also: ·       Broadens reader understanding of the synthesis of parallel robots based on the Decomposition and Integration concept ·       Reinforces basic principles with detailed coverage of different balancing principles, including input torque balancing mechanisms ·       Reviews exhaustively the key recent research into the design of reactionless mechanisms with minimum increase of mass a...

  20. Evaluation of the Defense Contract Audit Agency Audit Coverage of Tricare Contracts

    National Research Council Canada - National Science Library

    Brannin, Patricia

    2000-01-01

    Our objective was to evaluate the adequacy of the Defense Contract Audit Agency (DCAA) audit coverage of contracts for health care provided under TRICARE and the former Civilian Health Care and Medical Program of the Uniformed Services...

  1. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  2. Variation in Private Payer Coverage of Rheumatoid Arthritis Drugs.

    Science.gov (United States)

    Chambers, James D; Wilkinson, Colby L; Anderson, Jordan E; Chenoweth, Matthew D

    2016-10-01

    1.4 clinical guidelines, 1.1 clinical reviews, 0.8 other clinical studies, and 0.5 technology assessments per policy. Only 1 payer reported reviewing cost-effectiveness analyses. The evidence base that the payers reported reviewing varied in terms of volume and composition. Payers most often covered rheumatoid arthritis drugs more restrictively than the corresponding FDA label indication and the ACR treatment recommendations. Payers reported reviewing a varied evidence base in their coverage policies. Funding for this study was provided by Genentech. Chambers has participated in a Sanofi advisory board, unrelated to this study. The authors report no other potential conflicts of interest. Study concept and design were contributed by Chambers. Anderson, Wilkinson, and Chenoweth collected the data, assisted by Chambers, and data interpretation was primarily performed by Chambers, along with Anderson and with assistance from Wilkinson and Chenoweth. The manuscript was written primarily by Chambers, along with Wilkinson and with assistance from Anderson and Chenoweth. Chambers, Chenoweth, Wilkinson, and Anderson revised the manuscript.

  3. Parallel Implicit Algorithms for CFD

    Science.gov (United States)

    Keyes, David E.

    1998-01-01

    The main goal of this project was efficient distributed parallel and workstation cluster implementations of Newton-Krylov-Schwarz (NKS) solvers for implicit Computational Fluid Dynamics (CFD). "Newton" refers to a quadratically convergent nonlinear iteration using gradient information based on the true residual, "Krylov" to an inner linear iteration that accesses the Jacobian matrix only through highly parallelizable sparse matrix-vector products, and "Schwarz" to a domain decomposition form of preconditioning the inner Krylov iterations with primarily neighbor-only exchange of data between the processors. Prior experience has established that Newton-Krylov methods are competitive solvers in the CFD context and that Krylov-Schwarz methods port well to distributed memory computers. The combination of the techniques into Newton-Krylov-Schwarz was implemented on 2D and 3D unstructured Euler codes on the parallel testbeds that used to be at LaRC and on several other parallel computers operated by other agencies or made available by the vendors. Early implementations were made directly with the Message Passing Interface (MPI), using parallel solvers we adapted from legacy NASA codes and enhanced for full NKS functionality. Later implementations were made in the framework of the PETSc library from Argonne National Laboratory, which now includes pseudo-transient continuation Newton-Krylov-Schwarz solver capability (as a result of demands we made upon PETSc during our early porting experiences). A secondary project pursued with funding from this contract was parallel implicit solvers in acoustics, specifically in the Helmholtz formulation. A 2D acoustic inverse problem has been solved in parallel within the PETSc framework.
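
    The Newton-Krylov combination can be illustrated with a small Jacobian-free sketch: an outer Newton loop whose linear systems are solved by a matrix-free conjugate-gradient iteration that only needs finite-difference Jacobian-vector products. The toy 1D problem below is an assumption for illustration; the Schwarz domain-decomposition preconditioning and the distributed-memory parallelism of the actual project are omitted.

        // jfnk_sketch.cpp -- minimal serial Jacobian-free Newton-Krylov sketch.
        #include <cmath>
        #include <cstdio>
        #include <vector>

        using Vec = std::vector<double>;

        // Residual of a toy 1D problem: -u'' + u^3 = 1, homogeneous Dirichlet BCs,
        // standard 3-point finite differences on a unit interval.
        Vec residual(const Vec& u) {
            const int n = (int)u.size();
            const double h = 1.0 / (n + 1), h2 = h * h;
            Vec F(n);
            for (int i = 0; i < n; ++i) {
                const double ul = (i > 0) ? u[i - 1] : 0.0;
                const double ur = (i + 1 < n) ? u[i + 1] : 0.0;
                F[i] = (2.0 * u[i] - ul - ur) + h2 * (u[i] * u[i] * u[i] - 1.0);
            }
            return F;
        }

        double dot(const Vec& a, const Vec& b) {
            double s = 0.0;
            for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
            return s;
        }

        // Matrix-free J(u)*v via a first-order finite difference of the residual.
        Vec jacvec(const Vec& u, const Vec& F0, const Vec& v) {
            const double eps = 1e-7;
            Vec up(u);
            for (std::size_t i = 0; i < u.size(); ++i) up[i] += eps * v[i];
            Vec Fp = residual(up);
            for (std::size_t i = 0; i < u.size(); ++i) Fp[i] = (Fp[i] - F0[i]) / eps;
            return Fp;
        }

        int main() {
            const int n = 64;
            Vec u(n, 0.0);
            for (int newton = 0; newton < 20; ++newton) {
                Vec F = residual(u);
                const double rnorm = std::sqrt(dot(F, F));
                std::printf("Newton %d  ||F|| = %.3e\n", newton, rnorm);
                if (rnorm < 1e-10) break;

                // Inner Krylov solve: CG on J dx = -F (the Jacobian is SPD here).
                Vec dx(n, 0.0), r(n), p, Ap;
                for (int i = 0; i < n; ++i) r[i] = -F[i];
                p = r;
                double rr = dot(r, r);
                for (int it = 0; it < 200 && rr > 1e-20; ++it) {
                    Ap = jacvec(u, F, p);
                    const double alpha = rr / dot(p, Ap);
                    for (int i = 0; i < n; ++i) { dx[i] += alpha * p[i]; r[i] -= alpha * Ap[i]; }
                    const double rr_new = dot(r, r);
                    for (int i = 0; i < n; ++i) p[i] = r[i] + (rr_new / rr) * p[i];
                    rr = rr_new;
                }
                for (int i = 0; i < n; ++i) u[i] += dx[i];
            }
            return 0;
        }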

  4. Second derivative parallel block backward differentiation type ...

    African Journals Online (AJOL)

    Second derivative parallel block backward differentiation type formulas for Stiff ODEs. ... and the methods are inherently parallel and can be distributed over parallel processors. They are ...

  5. A Parallel Approach to Fractal Image Compression

    OpenAIRE

    Lubomir Dedera

    2004-01-01

    The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms in terms of both the achieved coding and decoding times and the effectiveness of the parallelization.

  6. 45 CFR 148.124 - Certification and disclosure of coverage.

    Science.gov (United States)

    2010-10-01

    ... method of counting creditable coverage, and the requesting entity may identify specific information that... a payroll deduction for health coverage, a health insurance identification card, a certificate of...

  7. Microwave tomography global optimization, parallelization and performance evaluation

    CERN Document Server

    Noghanian, Sima; Desell, Travis; Ashtari, Ali

    2014-01-01

    This book provides a detailed overview on the use of global optimization and parallel computing in microwave tomography techniques. The book focuses on techniques that are based on global optimization and electromagnetic numerical methods. The authors provide parallelization techniques on homogeneous and heterogeneous computing architectures on high performance and general purpose futuristic computers. The book also discusses the multi-level optimization technique, hybrid genetic algorithm and its application in breast cancer imaging.

  8. Why not private health insurance? 2. Actuarial principles meet provider dreams.

    Science.gov (United States)

    Deber, R; Gildiner, A; Baranek, P

    1999-09-07

    What do insurers and employers feel about proposals to expand Canadian health care financing through private insurance, in either a parallel stream or a supplementary tier? The authors conducted 10 semistructured, open-ended interviews in the autumn and early winter of 1996 with representatives of the insurance industry and benefits managers working with large employers; respondents were identified using a snowball sampling technique. The respondents felt that proposals for parallel private plans within a competitive market are incompatible with insurance principles, as long as a well-functioning and relatively comprehensive public system continues to exist; the maintenance of a strong public system was both socially and economically desirable. With the exception of serving the niche market for the private management of return-to-work strategies, respondents showed little interest in providing parallel coverage. They were receptive to a larger role for supplementary insurance but cautioned that they are not willing to cover all delisted services. As business executives they stated that they are willing to insure only services and clients that will be profitable.

  9. Design Patterns: establishing a discipline of parallel software engineering

    CERN Multimedia

    CERN. Geneva

    2010-01-01

    Many core processors present us with a software challenge. We must turn our serial code into parallel code. To accomplish this wholesale transformation of our software ecosystem, we must define what established practice is in parallel programming and then develop tools to support that practice. This leads to design patterns supported by frameworks optimized at runtime with advanced autotuning compilers. In this talk I provide an update of my ongoing research with the ParLab at UC Berkeley to realize this vision. In particular, I will describe our draft parallel pattern language, our early experiments with software frameworks, and the associated runtime optimization tools. About the speaker: Tim Mattson is a parallel programmer (Ph.D. Chemistry, UCSC, 1985). He does linear algebra, finds oil, shakes molecules, solves differential equations, and models electrons in simple atomic systems. He has spent his career working with computer scientists to make sure the needs of parallel applications programmers are met. Tim has ...

  10. A Parallel Prefix Algorithm for Almost Toeplitz Tridiagonal Systems

    Science.gov (United States)

    Sun, Xian-He; Joslin, Ronald D.

    1995-01-01

    A compact scheme is a discretization scheme that is advantageous in obtaining highly accurate solutions. However, the resulting systems from compact schemes are tridiagonal systems that are difficult to solve efficiently on parallel computers. Considering the almost symmetric Toeplitz structure, a parallel algorithm, simple parallel prefix (SPP), is proposed. The SPP algorithm requires less memory than the conventional LU decomposition and is efficient on parallel machines. It consists of a prefix communication pattern and AXPY operations. Both the computation and the communication can be truncated without degrading the accuracy when the system is diagonally dominant. A formal accuracy study has been conducted to provide a simple truncation formula. Experimental results have been measured on a MasPar MP-1 SIMD machine and on a Cray 2 vector machine. Experimental results show that the simple parallel prefix algorithm is a good algorithm for symmetric, almost symmetric Toeplitz tridiagonal systems and for the compact scheme on high-performance computers.
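
    The prefix communication pattern at the heart of the SPP solver can be illustrated with an ordinary blocked parallel prefix sum: each processor scans its own block, a short scan over the per-block totals follows, and each processor then adds the offset of the blocks before it. The OpenMP sketch below shows only this pattern and is not the SPP tridiagonal solver itself.

        // prefix_scan.cpp -- blocked parallel inclusive prefix sum; illustrative only.
        // Build with, e.g., g++ -fopenmp prefix_scan.cpp
        #include <cstdio>
        #include <omp.h>
        #include <vector>

        int main() {
            const int n = 1 << 20;
            std::vector<double> a(n, 1.0);
            std::vector<double> partial;

            #pragma omp parallel
            {
                const int nt = omp_get_num_threads();
                const int t  = omp_get_thread_num();
                #pragma omp single
                partial.assign(nt + 1, 0.0);

                // Phase 1: each thread scans its own block.
                const int lo = (int)((long)n * t / nt), hi = (int)((long)n * (t + 1) / nt);
                double sum = 0.0;
                for (int i = lo; i < hi; ++i) { sum += a[i]; a[i] = sum; }
                partial[t + 1] = sum;
                #pragma omp barrier

                // Phase 2: scan the per-block totals (done by one thread).
                #pragma omp single
                for (int i = 1; i <= nt; ++i) partial[i] += partial[i - 1];

                // Phase 3: each thread adds the offset of all blocks before it.
                for (int i = lo; i < hi; ++i) a[i] += partial[t];
            }
            std::printf("last prefix sum: %.0f (expected %d)\n", a[n - 1], n);
            return 0;
        }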

  11. Parallel Ada benchmarks for the SVMS

    Science.gov (United States)

    Collard, Philippe E.

    1990-01-01

    The use of parallel processing paradigm to design and develop faster and more reliable computers appear to clearly mark the future of information processing. NASA started the development of such an architecture: the Spaceborne VHSIC Multi-processor System (SVMS). Ada will be one of the languages used to program the SVMS. One of the unique characteristics of Ada is that it supports parallel processing at the language level through the tasking constructs. It is important for the SVMS project team to assess how efficiently the SVMS architecture will be implemented, as well as how efficiently Ada environment will be ported to the SVMS. AUTOCLASS II, a Bayesian classifier written in Common Lisp, was selected as one of the benchmarks for SVMS configurations. The purpose of the R and D effort was to provide the SVMS project team with the version of AUTOCLASS II, written in Ada, that would make use of Ada tasking constructs as much as possible so as to constitute a suitable benchmark. Additionally, a set of programs was developed that would measure Ada tasking efficiency on parallel architectures as well as determine the critical parameters influencing tasking efficiency. All this was designed to provide the SVMS project team with a set of suitable tools in the development of the SVMS architecture.

  12. Multibus-based parallel processor for simulation

    Science.gov (United States)

    Ogrady, E. P.; Wang, C.-H.

    1983-01-01

    A Multibus-based parallel processor simulation system is described. The system is intended to serve as a vehicle for gaining hands-on experience, testing system and application software, and evaluating parallel processor performance during development of a larger system based on the horizontal/vertical-bus interprocessor communication mechanism. The prototype system consists of up to seven Intel iSBC 86/12A single-board computers which serve as processing elements, a multiple transmission controller (MTC) designed to support system operation, and an Intel Model 225 Microcomputer Development System which serves as the user interface and input/output processor. All components are interconnected by a Multibus/IEEE 796 bus. An important characteristic of the system is that it provides a mechanism for a processing element to broadcast data to other selected processing elements. This parallel transfer capability is provided through the design of the MTC and a minor modification to the iSBC 86/12A board. The operation of the MTC, the basic hardware-level operation of the system, and pertinent details about the iSBC 86/12A and the Multibus are described.

  13. A multitransputer parallel processing system (MTPPS)

    International Nuclear Information System (INIS)

    Jethra, A.K.; Pande, S.S.; Borkar, S.P.; Khare, A.N.; Ghodgaonkar, M.D.; Bairi, B.R.

    1993-01-01

    This report describes the design and implementation of a 16-node Multi Transputer Parallel Processing System (MTPPS), which is a platform for parallel program development. It is a MIMD machine based on the message passing paradigm. The basic compute engine is an Inmos transputer IMS T800-20. A transputer with local memory constitutes the processing element (NODE) of this MIMD architecture. Multiple NODEs can be connected to each other in an identifiable network topology through the high speed serial links of the transputer. A Network Configuration Unit (NCU) incorporates the necessary hardware to provide software-controlled network configuration. The system is modularly expandable, and more NODEs can be added to the system to achieve the required processing power. The system is a back end to the IBM-PC, which has been integrated into the system to provide the user I/O interface. PC resources are available to the programmer. The interface hardware between the PC and the network of transputers is INMOS compatible. Therefore, all the commercially available development software compatible with INMOS products can run on this system. While giving the details of design and implementation, this report briefly summarises MIMD architectures, transputer architecture and parallel processing software development issues. A LINPACK performance evaluation of the system and solutions of neutron physics and plasma physics problems are discussed along with results. (author). 12 refs., 22 figs., 3 tabs., 3 appendixes

  14. Parallel Computing for Brain Simulation.

    Science.gov (United States)

    Pastur-Romay, L A; Porto-Pazos, A B; Cedron, F; Pazos, A

    2017-01-01

    The human brain is the most complex system in the known universe, and it is therefore one of the greatest mysteries. It provides human beings with extraordinary abilities. However, until now it has not been understood how and why most of these abilities are produced. For decades, researchers have been trying to make computers reproduce these abilities, focusing both on understanding the nervous system and on processing data in a more efficient way than before. Their aim is to make computers process information similarly to the brain. Important technological developments and vast multidisciplinary projects have allowed the creation of the first simulations with a number of neurons similar to that of a human brain. This paper presents an up-to-date review of the main research projects that are trying to simulate and/or emulate the human brain. They employ different types of computational models using parallel computing: digital models, analog models and hybrid models. This review includes the current applications of these works, as well as future trends. It covers works that seek advanced progress in Neuroscience and others that seek new discoveries in Computer Science (neuromorphic hardware, machine learning techniques). Their most outstanding characteristics are summarized and the latest advances and future plans are presented. In addition, this review points out the importance of considering not only neurons: computational models of the brain should also include glial cells, given the proven importance of astrocytes in information processing. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  15. Parallel fabrication of macroporous scaffolds.

    Science.gov (United States)

    Dobos, Andrew; Grandhi, Taraka Sai Pavan; Godeshala, Sudhakar; Meldrum, Deirdre R; Rege, Kaushal

    2018-07-01

    Scaffolds generated from naturally occurring and synthetic polymers have been investigated in several applications because of their biocompatibility and tunable chemo-mechanical properties. Existing methods for generation of 3D polymeric scaffolds typically cannot be parallelized, suffer from low throughputs, and do not allow for quick and easy removal of the fragile structures that are formed. Current molds used in hydrogel and scaffold fabrication using solvent casting and porogen leaching are often single-use and do not facilitate 3D scaffold formation in parallel. Here, we describe a simple device and related approaches for the parallel fabrication of macroporous scaffolds. This approach was employed for the generation of macroporous and non-macroporous materials in parallel, in higher throughput and allowed for easy retrieval of these 3D scaffolds once formed. In addition, macroporous scaffolds with interconnected as well as non-interconnected pores were generated, and the versatility of this approach was employed for the generation of 3D scaffolds from diverse materials including an aminoglycoside-derived cationic hydrogel ("Amikagel"), poly(lactic-co-glycolic acid) or PLGA, and collagen. Macroporous scaffolds generated using the device were investigated for plasmid DNA binding and cell loading, indicating the use of this approach for developing materials for different applications in biotechnology. Our results demonstrate that the device-based approach is a simple technology for generating scaffolds in parallel, which can enhance the toolbox of current fabrication techniques. © 2018 Wiley Periodicals, Inc.

  16. Parallel plasma fluid turbulence calculations

    International Nuclear Information System (INIS)

    Leboeuf, J.N.; Carreras, B.A.; Charlton, L.A.; Drake, J.B.; Lynch, V.E.; Newman, D.E.; Sidikman, K.L.; Spong, D.A.

    1994-01-01

    The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated

  17. Evaluating parallel optimization on transputers

    Directory of Open Access Journals (Sweden)

    A.G. Chalmers

    2003-12-01

    Full Text Available The faster processing power of modern computers and the development of efficient algorithms have made it possible for operations researchers to tackle a much wider range of problems than ever before. Further improvements in processing speed can be achieved by utilising relatively inexpensive transputers to process components of an algorithm in parallel. The Davidon-Fletcher-Powell method is one of the most successful and widely used optimisation algorithms for unconstrained problems. This paper examines the algorithm and identifies the components that can be processed in parallel. The results of some experiments with these components are presented, which indicate under what conditions parallel processing with an inexpensive configuration is likely to be faster than the traditional sequential implementations. The performance of the whole algorithm with its parallel components is then compared with the original sequential algorithm. The implementation serves to illustrate the practicalities of speeding up typical OR algorithms in terms of difficulty, effort and cost. The results give an indication of the savings in time a given parallel implementation can be expected to yield.

  18. Pattern-Driven Automatic Parallelization

    Directory of Open Access Journals (Sweden)

    Christoph W. Kessler

    1996-01-01

    Full Text Available This article describes a knowledge-based system for automatic parallelization of a wide class of sequential numerical codes operating on vectors and dense matrices, and for execution on distributed memory message-passing multiprocessors. Its main feature is a fast and powerful pattern recognition tool that locally identifies frequently occurring computations and programming concepts in the source code. This tool also works for dusty deck codes that have been "encrypted" by former machine-specific code transformations. Successful pattern recognition guides sophisticated code transformations including local algorithm replacement such that the parallelized code need not emerge from the sequential program structure by just parallelizing the loops. It allows access to an expert's knowledge on useful parallel algorithms, available machine-specific library routines, and powerful program transformations. The partially restored program semantics also supports local array alignment, distribution, and redistribution, and allows for faster and more exact prediction of the performance of the parallelized target code than is usually possible.

  19. Parallel interactive data analysis with PROOF

    International Nuclear Information System (INIS)

    Ballintijn, Maarten; Biskup, Marek; Brun, Rene; Canal, Philippe; Feichtinger, Derek; Ganis, Gerardo; Kickinger, Guenter; Peters, Andreas; Rademakers, Fons

    2006-01-01

    The Parallel ROOT Facility, PROOF, enables the analysis of much larger data sets on a shorter time scale. It exploits the inherent parallelism in data of uncorrelated events via a multi-tier architecture that optimizes I/O and CPU utilization in heterogeneous clusters with distributed storage. The system provides transparent and interactive access to gigabytes today. Being part of the ROOT framework, PROOF inherits the benefits of a performant object storage system and a wealth of statistical and visualization tools. This paper describes the data analysis model of ROOT and the latest developments on closer integration of PROOF into that model and the ROOT user environment, e.g. support for PROOF-based browsing of trees stored remotely, and the popular TTree::Draw() interface. We also outline the ongoing developments aimed at improving the flexibility and user-friendliness of the system.

  20. Flexibility and Performance of Parallel File Systems

    Science.gov (United States)

    Kotz, David; Nieuwejaar, Nils

    1996-01-01

    As we gain experience with parallel file systems, it becomes increasingly clear that a single solution does not suit all applications. For example, it appears to be impossible to find a single appropriate interface, caching policy, file structure, or disk-management strategy. Furthermore, the proliferation of file-system interfaces and abstractions makes applications difficult to port. We propose that the traditional functionality of parallel file systems be separated into two components: a fixed core that is standard on all platforms, encapsulating only primitive abstractions and interfaces, and a set of high-level libraries that provide a variety of abstractions and application-programmer interfaces (APIs). We present our current and next-generation file systems as examples of this structure. Their features, such as a three-dimensional file structure, strided read and write interfaces, and I/O-node programs, are specifically designed with the flexibility and performance necessary to support a wide range of applications.

  1. Frontiers of massively parallel scientific computation

    International Nuclear Information System (INIS)

    Fischer, J.R.

    1987-07-01

    Practical applications using massively parallel computer hardware first appeared during the 1980s. Their development was motivated by the need for computing power orders of magnitude beyond that available today for tasks such as numerical simulation of complex physical and biological processes, generation of interactive visual displays, satellite image analysis, and knowledge based systems. Representative of the first generation of this new class of computers is the Massively Parallel Processor (MPP). A team of scientists was provided the opportunity to test and implement their algorithms on the MPP. The first results are presented. The research spans a broad variety of applications including Earth sciences, physics, signal and image processing, computer science, and graphics. The performance of the MPP was very good. Results obtained using the Connection Machine and the Distributed Array Processor (DAP) are presented

  2. A parallel robot to assist vitreoretinal surgery

    Energy Technology Data Exchange (ETDEWEB)

    Nakano, Taiga; Sugita, Naohiko; Mitsuishi, Mamoru [University of Tokyo, School of Engineering, Tokyo (Japan); Ueta, Takashi; Tamaki, Yasuhiro [University of Tokyo, Graduate School of Medicine, Tokyo (Japan)

    2009-11-15

    This paper describes the development and evaluation of a parallel prototype robot for vitreoretinal surgery, where physiological hand tremor limits performance. The manipulator was specifically designed to meet requirements such as size, precision, and sterilization; it has a six-degree-of-freedom parallel architecture and provides positioning accuracy with micrometer resolution within the eye. The manipulator is controlled by an operator with a "master manipulator" consisting of multiple joints. Results of the in vitro experiments revealed that, when compared to the manual procedure, a higher stability and accuracy of tool positioning could be achieved using the prototype robot. The microsurgical system that we have developed has superior operability compared to the traditional manual procedure and has sufficient potential to be used clinically for vitreoretinal surgery. (orig.)

  3. Multi-petascale highly efficient parallel supercomputer

    Science.gov (United States)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen-Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2018-05-15

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaflop-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC). The ASIC nodes are interconnected by a five-dimensional torus network that maximizes the throughput of packet communications between nodes and minimizes latency. The network implements a collective network and a global asynchronous network that provides global barrier and notification functions. The node design also integrates a list-based prefetcher. The memory system implements transactional memory, thread-level speculation, and a multiversioning cache that improves the soft error rate while supporting DMA functionality, allowing for parallel processing message-passing.

  4. 75 FR 70159 - Group Health Plans and Health Insurance Coverage Rules Relating to Status as a Grandfathered...

    Science.gov (United States)

    2010-11-17

    ... Group Health Plans and Health Insurance Coverage Rules Relating to Status as a Grandfathered Health Plan... contracts of insurance. The temporary regulations provide guidance to employers, group health plans, and health insurance issuers providing group health insurance coverage. The IRS is issuing the temporary...

  5. Patient choice of providers in a preferred provider organization.

    Science.gov (United States)

    Wouters, A V; Hester, J

    1988-03-01

    This article is an analysis of patient choice of providers by the employees of the Security Pacific Bank of California and their dependents who have access to the Med Network Preferred Provider Organization (PPO). The empirical results show that not only is the PPO used by individuals who require relatively little medical care (as measured by predicted office visit charges) but that the PPO is most intensively used for low-risk services such as treatment for minor illness and preventive care. Also, the most likely Security Pacific Health Care beneficiary to use a PPO provider is a recently hired employee who lives in the south urban region, has a relatively low income, does not have supplemental insurance coverage, and is without previous attachments to non-PPO primary care providers. In order to maximize their ability to reduce plan paid benefits, insurers who contract with PPOs should focus on increasing PPO utilization among poorer health risks.

  6. Cholera in Haiti: Reproductive numbers and vaccination coverage estimates

    Science.gov (United States)

    Mukandavire, Zindoga; Smith, David L.; Morris, J. Glenn, Jr.

    2013-01-01

    Cholera reappeared in Haiti in October, 2010 after decades of absence. Cases were first detected in Artibonite region and in the ensuing months the disease spread to every department in the country. The rate of increase in the number of cases at the start of epidemics provides valuable information about the basic reproductive number (R0). Quantitative analysis of such data gives useful information for planning and evaluating disease control interventions, including vaccination. Using a mathematical model, we fitted data on the cumulative number of reported hospitalized cholera cases in Haiti. R0 varied by department, ranging from 1.06 to 2.63. At a national level, 46% vaccination coverage would bring the effective reproductive number below 1. Combined with data on the efficacy of cholera vaccines in endemic and non-endemic regions, our results suggest that moderate cholera vaccine coverage would be an important element of disease control in Haiti.
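
    The link between the estimated reproductive numbers and the 46% coverage figure can be illustrated with the standard herd-immunity threshold. The calculation below is a generic textbook sketch, not taken from the paper itself, and it assumes a perfectly effective vaccine.

```latex
% Critical vaccination coverage p_c for a perfectly effective vaccine:
% require R_eff = (1 - p) R_0 < 1, hence
\[
  p_c \;=\; 1 - \frac{1}{R_0}.
\]
% Plugging in the departmental range quoted above:
%   R_0 = 1.06  ->  p_c \approx 0.06   (about  6% coverage)
%   R_0 = 2.63  ->  p_c \approx 0.62   (about 62% coverage)
% A value of R_0 \approx 1.85 would correspond to the ~46% national figure;
% with a vaccine of efficacy e < 1 the required coverage rises to p_c / e.
```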

  7. The WET Coverage - How Well Do We Do?

    Directory of Open Access Journals (Sweden)

    Solheim Jan-Erik

    2003-06-01

    Full Text Available The Whole Earth Telescope collaboration is built solidly on the interest of the participants. One of the goals of the collaboration is to produce a high signal-to-noise, as continuous as possible, light curve for a selected target. During its nearly 15 years of existence, the operation of the network has been based on the local funds that the members have been able to provide for their own participation, in addition to NSF grants to run the headquarters activities. This has led to a very uneven geographical distribution of participating groups and observatories. An analysis of the coverage of some of the last WET runs shows that we still have large holes in the coverage, and this leads to aliasing and loss of precision in our final products.

  8. Parallel artificial liquid membrane extraction

    DEFF Research Database (Denmark)

    Gjelstad, Astrid; Rasmussen, Knut Einar; Parmer, Marthe Petrine

    2013-01-01

    This paper reports development of a new approach towards analytical liquid-liquid-liquid membrane extraction termed parallel artificial liquid membrane extraction. A donor plate and acceptor plate create a sandwich, in which each sample (human plasma) and acceptor solution is separated by an artificial liquid membrane. Parallel artificial liquid membrane extraction is a modification of hollow-fiber liquid-phase microextraction, where the hollow fibers are replaced by flat membranes in a 96-well plate format.

  9. Parallel algorithms for mapping pipelined and parallel computations

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm³) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm²) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.

  10. Cellular automata a parallel model

    CERN Document Server

    Mazoyer, J

    1999-01-01

    Cellular automata can be viewed both as computational models and modelling systems of real processes. This volume emphasises the first aspect. In articles written by leading researchers, sophisticated massive parallel algorithms (firing squad, life, Fischer's primes recognition) are treated. Their computational power and the specific complexity classes they determine are surveyed, while some recent results in relation to chaos from a new dynamic systems point of view are also presented. Audience: This book will be of interest to specialists of theoretical computer science and the parallelism challenge.

  11. Conceptualising the lack of health insurance coverage.

    Science.gov (United States)

    Davis, J B

    2000-01-01

    This paper examines the lack of health insurance coverage in the US as a public policy issue. It first compares the problem of health insurance coverage to the problem of unemployment to show that, in terms of the numbers of individuals affected, lack of health insurance is a problem comparable in importance to the problem of unemployment. Secondly, the paper discusses the methodology involved in measuring health insurance coverage, and argues that the current method of estimating the uninsured underestimates the extent to which individuals go without health insurance. Third, the paper briefly introduces Amartya Sen's functioning and capabilities framework to suggest a way of representing the extent to which individuals are uninsured. Fourth, the paper sketches a means of operationalizing the Sen representation of the uninsured in terms of the disability-adjusted life year (DALY) measure.

  12. Resolution, coverage, and geometry beyond traditional limits

    Energy Technology Data Exchange (ETDEWEB)

    Ronen, Shuki; Ferber, Ralf

    1998-12-31

    The presentation relates to the optimization of the image of seismic data and improved resolution and coverage of acquired data. Non-traditional processing methods such as inversion to zero offset (IZO) are used. To realize the potential of saving acquisition cost by reducing in-fill and to plan resolution improvement by processing, geometry QC methods such as the DMO Dip Coverage Spectrum (DDCS) and Bull's Eyes Analysis are used. The DDCS is a 2-D spectrum whose entries consist of the DMO (Dip Move Out) coverage for a particular reflector specified by its true time dip and reflector normal strike. The Bull's Eyes Analysis relies on real-time processing of synthetic data generated with the real geometry. 4 refs., 6 figs.

  13. Energy-efficient area coverage for intruder detection in sensor networks

    CERN Document Server

    He, Shibo; Li, Junkun

    2014-01-01

    This Springer Brief presents recent research results on area coverage for intruder detection from an energy-efficient perspective. These results cover a variety of topics, including environmental surveillance and security monitoring. The authors also provide the background and range of applications for area coverage and elaborate on system models such as the formal definition of area coverage and sensing models. Several chapters focus on energy-efficient intruder detection and intruder trapping under the well-known binary sensing model, along with intruder trapping under the probabilistic sensing model.

  14. Quality and extent of locum tenens coverage in pediatric surgical practices.

    Science.gov (United States)

    Nolan, Tracy L; Kandel, Jessica J; Nakayama, Don K

    2015-04-01

    The prevalence and quality of locum tenens coverage in pediatric surgery have not been determined. An Internet-based survey of American Pediatric Surgical Association members was conducted: 1) practice description; 2) use and frequency of locum tenens coverage; 4) whether the surgeon provided such coverage; and 5) Likert scale responses (strongly disagree, disagree, neutral, agree, strongly agree) to statements addressing its acceptability and quality (two × five contingency table and χ² analyses, significance at P < .05). ... view it as a stopgap solution to the surgical workforce shortage.

  15. Evaluation of Coverage and Barriers to Access to MAM Treatment in West Pokot County, Kenya

    International Nuclear Information System (INIS)

    Basquin, Cecile; Imelda, Awino; Gallagher, Maureen

    2014-01-01

    Full text: Despite an increased number of nutrition treatment coverage assessments conducted, they often focus on Severe Acute Malnutrition (SAM) treatment. In a recent experience in Kenya, Action Against Hunger | ACF International (ACF) conducted a coverage assessment to evaluate access to SAM and Moderate Acute Malnutrition (MAM) treatment. ACF has supported the Ministry of Health (MoH) in delivering SAM and MAM treatment at health facility level through an Integrated Management of Acute Malnutrition (IMAM) programme in West Pokot county since 2011. In order to evaluate the coverage of the Outpatient Therapeutic Programme (OTP) and Supplementary Feeding Programme (SFP) components, the Simplified Lot Quality Assurance Sampling Evaluation of Access and Coverage (SLEAC) methodology was used. The goals of the coverage assessment were i) to estimate coverage for OTP and SFP; ii) to identify barriers to access to SAM and MAM treatment; iii) to evaluate whether any differences exist between barriers to access to SAM versus to MAM treatment, as SFP coverage and uptake of MAM services had never been assessed before; and iv) to build local capacities in assessing coverage and to provide recommendations for the MoH-led IMAM programme. With the support of the Coverage Monitoring Network (CMN), ACF led the SLEAC assessment as part of an on-the-job training exercise for MoH and partners in July 2013, covering all of West Pokot county. SLEAC is a rapid and low-resource survey method that uses a three-tier classification approach to evaluate and classify coverage, i.e., low coverage: < 20%; moderate: 20%-50%; and high coverage: > 50%. In the first sampling stage, villages in each of the four sub-counties were randomly selected using systematic sampling. In the second sampling stage, in order to also assess MAM coverage, a house-to-house approach was applied to identify all or nearly all acutely malnourished children using Mid Upper Arm Circumference (MUAC) tape and identification of bilateral pitting oedema.

  16. 29 CFR 2.13 - Audiovisual coverage prohibited.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 1 2010-07-01 2010-07-01 true Audiovisual coverage prohibited. 2.13 Section 2.13 Labor Office of the Secretary of Labor GENERAL REGULATIONS Audiovisual Coverage of Administrative Hearings § 2.13 Audiovisual coverage prohibited. The Department shall not permit audiovisual coverage of the...

  17. 28 CFR 55.6 - Coverage under section 203(c).

    Science.gov (United States)

    2010-07-01

    ... THE VOTING RIGHTS ACT REGARDING LANGUAGE MINORITY GROUPS Nature of Coverage § 55.6 Coverage under section 203(c). (a) Coverage formula. There are four ways in which a political subdivision can become subject to section 203(c). 2 2 The criteria for coverage are contained in section 203(b). (1) Political...

  18. Microstrip Antenna Design for Femtocell Coverage Optimization

    Directory of Open Access Journals (Sweden)

    Afaz Uddin Ahmed

    2014-01-01

    Full Text Available A microstrip antenna is designed for multielement antenna coverage optimization in a femtocell network. Interference is the foremost concern for cellular operators in vast commercial deployments of femtocells. Many techniques at the physical, data-link, and network layers have been analysed and developed to settle the interference issues. A multielement technique with self-configuration features is analyzed here for coverage optimization of the femtocell. It also focuses on the execution of the microstrip antenna for the multielement configuration. The antenna is designed for LTE Band 7 by using standard FR4 dielectric substrate. The performance of the proposed antenna in the femtocell application is discussed along with results.

  19. RCT: Module 2.11, Radiological Work Coverage, Course 8777

    Energy Technology Data Exchange (ETDEWEB)

    Hillmer, Kurt T. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-07-20

    Radiological work is usually approved and controlled by radiation protection personnel by using administrative and procedural controls, such as radiological work permits (RWPs). In addition, some jobs will require working in, or will have the potential for creating, very high radiation, contamination, or airborne radioactivity areas. Radiological control technicians (RCTs) providing job coverage have an integral role in controlling radiological hazards. This course will prepare the student with the skills necessary for RCT qualification by passing quizzes, tests, and the RCT Comprehensive Phase 1, Unit 2 Examination (TEST 27566) and will provide in-the-field skills.

  20. Handbook of infrared standards II with spectral coverage between

    CERN Document Server

    Meurant, Gerard

    1993-01-01

    This timely compilation of infrared standards has been developed for use by infrared researchers in chemistry, physics, engineering, astrophysics, and laser and atmospheric sciences. Providing maps of closely spaced molecular spectra along with their measured wavenumbers between 1.4 µm and 4 µm, this handbook will complement the 1986 Handbook of Infrared Standards that included special coverage between 3 and 2600 µm. It will serve as a necessary reference for all researchers conducting spectroscopic investigations in the near-infrared region. Key Features: - Provides all new spec

  1. A Parallel Encryption Algorithm Based on Piecewise Linear Chaotic Map

    Directory of Open Access Journals (Sweden)

    Xizhong Wang

    2013-01-01

    Full Text Available We introduce a parallel chaos-based encryption algorithm for taking advantage of multicore processors. The chaotic cryptosystem is generated by the piecewise linear chaotic map (PWLCM). The parallel algorithm is designed with a master/slave communication model with the Message Passing Interface (MPI). The algorithm is suitable not only for multicore processors but also for the single-processor architecture. The experimental results show that the chaos-based cryptosystem possesses good statistical properties. The parallel algorithm provides much better performance than the serial ones and would be useful for encrypting/decrypting large files or multimedia.
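
    To make the map concrete, the sketch below iterates a piecewise linear chaotic map and XORs the resulting keystream with a message buffer. It is a minimal single-process illustration only: the parameter names, seed values, and byte-extraction step are assumptions for illustration, not the authors' scheme, and the MPI master/slave decomposition described in the abstract is only noted in a comment.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Piecewise linear chaotic map (PWLCM) on (0,1) with control parameter p in (0,0.5).
static double pwlcm(double x, double p) {
    if (x < p)   return x / p;
    if (x < 0.5) return (x - p) / (0.5 - p);
    return pwlcm(1.0 - x, p);   // the map is symmetric about x = 0.5
}

// XOR a buffer with a PWLCM-derived keystream; XOR is its own inverse, so the
// same call decrypts. In a parallel (MPI master/slave) setting, each block of
// the file would get its own initial condition and be handled by one rank.
void pwlcm_xor(std::vector<uint8_t>& buf, double x0, double p) {
    double x = x0;
    for (uint8_t& b : buf) {
        x = pwlcm(x, p);
        // Illustrative keystream byte: take low bits of the scaled state.
        uint8_t ks = static_cast<uint8_t>(static_cast<uint64_t>(x * 1e9) & 0xFFu);
        b ^= ks;
    }
}

int main() {
    std::vector<uint8_t> msg = {'h', 'e', 'l', 'l', 'o'};
    pwlcm_xor(msg, 0.3141592, 0.23);   // encrypt
    pwlcm_xor(msg, 0.3141592, 0.23);   // decrypt with the same key (x0, p)
    std::printf("%.*s\n", static_cast<int>(msg.size()),
                reinterpret_cast<const char*>(msg.data()));
    return 0;
}
```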

  2. pcircle - A Suite of Scalable Parallel File System Tools

    Energy Technology Data Exchange (ETDEWEB)

    2015-10-01

    Most software related to file systems is written for conventional local file systems; it is serialized and cannot take advantage of a large-scale parallel file system. "pcircle" builds on top of ubiquitous MPI in a cluster computing environment and a "work-stealing" pattern to provide a scalable, high-performance suite of file system tools. In particular, it implements parallel data copy and parallel data checksumming, with advanced features such as asynchronous progress reporting, checkpoint and restart, as well as integrity checking.

  3. CALTRANS: A parallel, deterministic, 3D neutronics code

    Energy Technology Data Exchange (ETDEWEB)

    Carson, L.; Ferguson, J.; Rogers, J.

    1994-04-01

    Our efforts to parallelize the deterministic solution of the neutron transport equation has culminated in a new neutronics code CALTRANS, which has full 3D capability. In this article, we describe the layout and algorithms of CALTRANS and present performance measurements of the code on a variety of platforms. Explicit implementation of the parallel algorithms of CALTRANS using both the function calls of the Parallel Virtual Machine software package (PVM 3.2) and the Meiko CS-2 tagged message passing library (based on the Intel NX/2 interface) are provided in appendices.

  4. Parallel quantum computing in a single ensemble quantum computer

    International Nuclear Information System (INIS)

    Long Guilu; Xiao, L.

    2004-01-01

    We propose a parallel quantum computing mode for the ensemble quantum computer. In this mode, some qubits are in pure states while other qubits are in mixed states. It enables a single ensemble quantum computer to perform 'single-instruction multiple-data' type parallel computation. Parallel quantum computing can provide additional speedup in Grover's algorithm and Shor's algorithm. In addition, it also makes fuller use of qubit resources in an ensemble quantum computer. As a result, some qubits discarded in the preparation of an effective pure state in the Schulman-Vazirani and the Cleve-DiVincenzo algorithms can be reutilized.

  5. Prosodic structure as a parallel to musical structure

    Directory of Open Access Journals (Sweden)

    Christopher Cullen Heffner

    2015-12-01

    Full Text Available What structural properties do language and music share? Although early speculation identified a wide variety of possibilities, the literature has largely focused on the parallels between musical structure and syntactic structure. Here, we argue that parallels between musical structure and prosodic structure deserve more attention. We review the evidence for a link between musical and prosodic structure and find it to be strong. In fact, certain elements of prosodic structure may provide a parsimonious comparison with musical structure without sacrificing empirical findings related to the parallels between language and music. We then develop several predictions related to such a hypothesis.

  6. Lemon : An MPI parallel I/O library for data encapsulation using LIME

    NARCIS (Netherlands)

    Deuzeman, Albert; Reker, Siebren; Urbach, Carsten

    We introduce Lemon, an MPI parallel I/O library that provides efficient parallel I/O of both binary data and metadata on massively parallel architectures. Motivated by the demands of the lattice Quantum Chromodynamics community, the data is stored in the SciDAC Lattice QCD Interchange Message Encapsulation (LIME) format.

  7. Parallel Sparse Matrix - Vector Product

    DEFF Research Database (Denmark)

    Alexandersen, Joe; Lazarov, Boyan Stefanov; Dammann, Bernd

    This technical report contains a case study of a sparse matrix-vector product routine, implemented for parallel execution on a compute cluster with both pure MPI and hybrid MPI-OpenMP solutions. C++ classes for sparse data types were developed, and the report shows how these classes can be used
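
    As a rough illustration of what such a routine looks like, here is a generic compressed sparse row (CSR) matrix-vector product with an OpenMP-parallel row loop. This is a sketch of the standard kernel only, not the report's own classes; all type and function names are invented for the example.

```cpp
#include <cstdio>
#include <vector>

// Sparse matrix in compressed sparse row (CSR) form.
struct CsrMatrix {
    int nrows;
    std::vector<int>    row_ptr;  // size nrows + 1
    std::vector<int>    col_idx;  // size nnz
    std::vector<double> val;      // size nnz
};

// y = A * x. The row loop is shared among OpenMP threads (compile with
// -fopenmp); each thread owns a disjoint set of rows, so writes to y never
// conflict and no synchronisation is needed.
void spmv(const CsrMatrix& A, const std::vector<double>& x, std::vector<double>& y) {
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < A.nrows; ++i) {
        double sum = 0.0;
        for (int k = A.row_ptr[i]; k < A.row_ptr[i + 1]; ++k)
            sum += A.val[k] * x[A.col_idx[k]];
        y[i] = sum;
    }
}

int main() {
    // 2x2 example: [[2, 0], [1, 3]]
    CsrMatrix A{2, {0, 1, 3}, {0, 0, 1}, {2.0, 1.0, 3.0}};
    std::vector<double> x{1.0, 1.0}, y(2);
    spmv(A, x, y);
    std::printf("y = [%g, %g]\n", y[0], y[1]);   // expected: [2, 4]
    return 0;
}
```

    A pure MPI or hybrid MPI-OpenMP version, as studied in the report, would additionally partition the rows across ranks and exchange the needed entries of x before the local products.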

  8. The parallel adult education system

    DEFF Research Database (Denmark)

    Wahlgren, Bjarne

    2015-01-01

    for competence development. The Danish university educational system includes two parallel programs: a traditional academic track (candidatus) and an alternative practice-based track (master). The practice-based program was established in 2001 and organized as part time. The total program takes half the time...

  9. Where are the parallel algorithms?

    Science.gov (United States)

    Voigt, R. G.

    1985-01-01

    Four paradigms that can be useful in developing parallel algorithms are discussed. These include computational complexity analysis, changing the order of computation, asynchronous computation, and divide and conquer. Each is illustrated with an example from scientific computation, and it is shown that computational complexity must be used with great care or an inefficient algorithm may be selected.

  10. Parallel imaging with phase scrambling.

    Science.gov (United States)

    Zaitsev, Maxim; Schultz, Gerrit; Hennig, Juergen; Gruetter, Rolf; Gallichan, Daniel

    2015-04-01

    Most existing methods for accelerated parallel imaging in MRI require additional data, which are used to derive information about the sensitivity profile of each radiofrequency (RF) channel. In this work, a method is presented to avoid the acquisition of separate coil calibration data for accelerated Cartesian trajectories. Quadratic phase is imparted to the image to spread the signals in k-space (aka phase scrambling). By rewriting the Fourier transform as a convolution operation, a window can be introduced to the convolved chirp function, allowing a low-resolution image to be reconstructed from phase-scrambled data without prominent aliasing. This image (for each RF channel) can be used to derive coil sensitivities to drive existing parallel imaging techniques. As a proof of concept, the quadratic phase was applied by introducing an offset to the x² - y² shim and the data were reconstructed using adapted versions of the image space-based sensitivity encoding and GeneRalized Autocalibrating Partially Parallel Acquisitions algorithms. The method is demonstrated in a phantom (1 × 2, 1 × 3, and 2 × 2 acceleration) and in vivo (2 × 2 acceleration) using a 3D gradient echo acquisition. Phase scrambling can be used to perform parallel imaging acceleration without acquisition of separate coil calibration data, demonstrated here for a 3D-Cartesian trajectory. Further research is required to prove the applicability to other 2D and 3D sampling schemes. © 2014 Wiley Periodicals, Inc.
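
    The "Fourier transform as a convolution" step can be written out explicitly. The identity below is the standard completing-the-square manipulation for a 1D object ρ(x) with quadratic-phase coefficient α; the notation is assumed here and not taken from the paper.

```latex
% Signal from a 1D object rho(x) after imposing quadratic phase alpha*x^2:
\[
  s(k) \;=\; \int \rho(x)\, e^{i\alpha x^{2}}\, e^{-ikx}\, dx
       \;=\; e^{-i k^{2}/(4\alpha)} \int \rho(x)\, e^{i\alpha\left(x - \frac{k}{2\alpha}\right)^{2}} dx
       \;=\; e^{-i k^{2}/(4\alpha)} \,\bigl(\rho * c\bigr)\!\left(\frac{k}{2\alpha}\right),
  \qquad c(x) = e^{i\alpha x^{2}}.
\]
% Windowing the chirp kernel c before this convolution yields a low-resolution,
% essentially alias-free image per coil, from which sensitivities can be derived.
```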

  11. Parallel plate transmission line transformer

    NARCIS (Netherlands)

    Voeten, S.J.; Brussaard, G.J.H.; Pemen, A.J.M.

    2011-01-01

    A Transmission Line Transformer (TLT) can be used to transform high-voltage nanosecond pulses. These transformers rely on the fact that the length of the pulse is shorter than the transmission lines used. This allows connecting the transmission lines in parallel at the input and in series at the output.

  12. Matpar: Parallel Extensions for MATLAB

    Science.gov (United States)

    Springer, P. L.

    1998-01-01

    Matpar is a set of client/server software that allows a MATLAB user to take advantage of a parallel computer for very large problems. The user can replace calls to certain built-in MATLAB functions with calls to Matpar functions.

  13. Massively parallel quantum computer simulator

    NARCIS (Netherlands)

    De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.

    2007-01-01

    We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, and a Cray

  14. Parallel computing: numerics, applications, and trends

    National Research Council Canada - National Science Library

    Trobec, Roman; Vajteršic, Marián; Zinterhof, Peter

    2009-01-01

    ... and/or distributed systems. The contributions to this book focus on the topics of most concern in today's parallel computing trends. These range from parallel algorithmics, programming, tools, and network computing to future parallel computing. Particular attention is paid to parallel numerics: linear algebra, differential equations, numerica...

  15. Experiments with parallel algorithms for combinatorial problems

    NARCIS (Netherlands)

    G.A.P. Kindervater (Gerard); H.W.J.M. Trienekens

    1985-01-01

    In the last decade many models for parallel computation have been proposed and many parallel algorithms have been developed. However, few of these models have been realized and most of these algorithms are supposed to run on idealized, unrealistic parallel machines. The parallel machines

  16. Parallel fuzzy connected image segmentation on GPU.

    Science.gov (United States)

    Zhuge, Ying; Cao, Yong; Udupa, Jayaram K; Miller, Robert W

    2011-07-01

    Image segmentation techniques using fuzzy connectedness (FC) principles have shown their effectiveness in segmenting a variety of objects in several large applications. However, one challenge in these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides a highly parallel computing environment. In this paper, the authors present a parallel fuzzy connected image segmentation algorithm implemented on NVIDIA's Compute Unified Device Architecture (CUDA) platform for segmenting medical image data sets. In the FC algorithm, there are two major computational tasks: (i) computing the fuzzy affinity relations and (ii) computing the fuzzy connectedness relations. These two tasks are implemented as CUDA kernels and executed on the GPU. A dramatic improvement in speed for both tasks is achieved as a result. Our experiments based on three data sets of small, medium, and large data size demonstrate the efficiency of the parallel algorithm, which achieves speed-up factors of 24.4x, 18.1x, and 10.3x, respectively, for the three data sets on the NVIDIA Tesla C1060 over the implementation of the algorithm on CPU, and takes 0.25, 0.72, and 15.04 s, respectively, for the three data sets. The authors developed a parallel algorithm of the widely used fuzzy connected image segmentation method on NVIDIA GPUs, which are far more cost- and speed-effective than both clusters of workstations and multiprocessing systems. A near-interactive speed of segmentation has been achieved, even for the large data set.
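
    To give a feel for the first of the two tasks, here is a toy, CPU-side sketch of an intensity-homogeneity affinity computed for neighbouring pixel pairs with an OpenMP-parallel loop. The Gaussian kernel, the parameter sigma, and all names are illustrative assumptions, not the authors' CUDA implementation.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Illustrative fuzzy affinity between two adjacent voxels: high when the pair
// is homogeneous (small intensity difference), following the general FC recipe.
// sigma is an assumed homogeneity parameter, not a value from the paper.
inline double affinity(double fa, double fb, double sigma) {
    double d = fa - fb;
    return std::exp(-(d * d) / (2.0 * sigma * sigma));
}

// Compute affinities for all horizontal neighbour pairs of a 2D image in parallel
// (task (i) of the abstract; task (ii), propagating connectedness, would follow).
std::vector<double> horizontal_affinities(const std::vector<double>& img,
                                          int nx, int ny, double sigma) {
    std::vector<double> aff((nx - 1) * ny, 0.0);
    #pragma omp parallel for
    for (int y = 0; y < ny; ++y)
        for (int x = 0; x + 1 < nx; ++x)
            aff[y * (nx - 1) + x] = affinity(img[y * nx + x], img[y * nx + x + 1], sigma);
    return aff;
}

int main() {
    std::vector<double> img = {10, 11, 50, 10, 12, 52};  // 3x2 toy image
    auto aff = horizontal_affinities(img, 3, 2, 5.0);
    std::printf("aff(10,11)=%.3f aff(11,50)=%.3f\n", aff[0], aff[1]);
    return 0;
}
```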

  17. Parallel, Asynchronous Executive (PAX): System concepts, facilities, and architecture

    Science.gov (United States)

    Jones, W. H.

    1983-01-01

    The Parallel, Asynchronous Executive (PAX) is a software operating system simulation that allows many computers to work on a single problem at the same time. PAX is currently implemented on a UNIVAC 1100/42 computer system. Independent UNIVAC runstreams are used to simulate independent computers. Data are shared among independent UNIVAC runstreams through shared mass-storage files. PAX has achieved the following: (1) applied several computing processes simultaneously to a single, logically unified problem; (2) resolved most parallel processor conflicts by careful work assignment; (3) resolved by means of worker requests to PAX all conflicts not resolved by work assignment; (4) provided fault isolation and recovery mechanisms to meet the problems of an actual parallel, asynchronous processing machine. Additionally, one real-life problem has been constructed for the PAX environment. This is CASPER, a collection of aerodynamic and structural dynamic problem simulation routines. CASPER is not discussed in this report except to provide examples of parallel-processing techniques.

  18. Conventional sunscreen application does not lead to sufficient body coverage.

    Science.gov (United States)

    Jovanovic, Z; Schornstein, T; Sutor, A; Neufang, G; Hagens, R

    2017-10-01

    This study aimed to assess sunscreen application habits and relative body coverage after a single whole-body application. Fifty-two healthy volunteers were asked to use the test product once, following their usual sunscreen application routine. Standardized UV photographs, which were evaluated by Image Analysis, were taken before and immediately after product application to evaluate relative body coverage. In addition to these procedures, the volunteers completed an online self-assessment questionnaire to assess sunscreen usage habits. After product application, the front side showed significantly less non-covered skin (4.35%) than the backside (17.27%) (P = 0.0000). Females showed overall significantly less non-covered skin (8.98%) than males (13.16%) (P = 0.0381). On the backside, females showed significantly less non-covered skin (13.57%) (P = 0.0045) than males (21.94%), while on the front side, this difference between females (4.14%) and males (4.53%) was not significant. In most cases, the usual sunscreen application routine does not provide complete body coverage even though an extra light sunscreen with good absorption properties was used. On average, 11% of the body surface was not covered by sunscreen at all. Therefore, appropriate consumer education is required to improve sunscreen application and to warrant effective sun protection. © 2017 Society of Cosmetic Scientists and the Société Française de Cosmétologie.

  19. Pricing of drugs with heterogeneous health insurance coverage.

    Science.gov (United States)

    Ferrara, Ida; Missios, Paul

    2012-03-01

    In this paper, we examine the role of insurance coverage in explaining the generic competition paradox in a two-stage game involving a single producer of brand-name drugs and n quantity-competing producers of generic drugs. Independently of brand loyalty, which some studies rely upon to explain the paradox, we show that heterogeneity in insurance coverage may result in higher prices of brand-name drugs following generic entry. With market segmentation based on insurance coverage present in both the pre- and post-entry stages, the paradox can arise when the two types of drugs are highly substitutable and the market is quite profitable but does not have to arise when the two types of drugs are highly differentiated. However, with market segmentation occurring only after generic entry, the paradox can arise when the two types of drugs are weakly substitutable, provided, however, that the industry is not very profitable. In both cases, that is, when market segmentation is present in the pre-entry stage and when it is not, the paradox becomes more likely to arise as the market expands and/or insurance companies decrease deductibles applied on the purchase of generic drugs. Copyright © 2012 Elsevier B.V. All rights reserved.

  20. The numerical parallel computing of photon transport

    International Nuclear Information System (INIS)

    Huang Qingnan; Liang Xiaoguang; Zhang Lifa

    1998-12-01

    The parallel computing of photon transport is investigated; the parallel algorithm and the parallelization of programs on parallel computers both with shared memory and with distributed memory are discussed. By analyzing the inherent laws of the mathematical and physical model of photon transport according to the structural features of parallel computers, using the strategy of 'divide and conquer', adjusting the algorithm structure of the program, dissolving the data dependencies, finding parallelizable components and creating large-grain parallel subtasks, the sequential computing of photon transport is efficiently transformed into parallel and vector computing. The program was run on various HP parallel computers such as the HY-1 (PVP), the Challenge (SMP) and the YH-3 (MPP) and very good parallel speedup has been obtained

  1. Measuring coverage in MNCH: a validation study linking population survey derived coverage to maternal, newborn, and child health care records in rural China.

    Directory of Open Access Journals (Sweden)

    Li Liu

    Full Text Available Accurate data on coverage of key maternal, newborn, and child health (MNCH) interventions are crucial for monitoring progress toward the Millennium Development Goals 4 and 5. Coverage estimates are primarily obtained from routine population surveys through self-reporting, the validity of which is not well understood. We aimed to examine the validity of the coverage of selected MNCH interventions in Gongcheng County, China. We conducted a validation study by comparing women's self-reported coverage of MNCH interventions relating to antenatal and postnatal care, mode of delivery, and child vaccinations in a community survey with their paper- and electronic-based health care records, treating the health care records as the reference standard. Of 936 women recruited, 914 (97.6%) completed the survey. Results show that self-reported coverage of these interventions had moderate to high sensitivity (0.57 [95% confidence interval (CI): 0.50-0.63] to 0.99 [95% CI: 0.98-1.00]) and low to high specificity (0 to 0.83 [95% CI: 0.80-0.86]). Despite varying overall validity, with the area under the receiver operating characteristic curve (AUC) ranging between 0.49 [95% CI: 0.39-0.57] and 0.90 [95% CI: 0.88-0.92], bias in the coverage estimates at the population level was small to moderate, with the test to actual positive (TAP) ratio ranging between 0.8 and 1.5 for 24 of the 28 indicators examined. Our ability to accurately estimate validity was affected by several caveats associated with the reference standard. Caution should be exercised when generalizing the results to other settings. The overall validity of self-reported coverage was moderate across selected MNCH indicators. However, at the population level, self-reported coverage appears to have a small to moderate degree of bias. Accuracy of the coverage was particularly high for indicators with high recorded coverage or low recorded coverage but high specificity. The study provides insights into the accuracy of

  2. Circuit and bond polytopes on series–parallel graphs

    OpenAIRE

    Borne , Sylvie; Fouilhoux , Pierre; Grappe , Roland; Lacroix , Mathieu; Pesneau , Pierre

    2015-01-01

    In this paper, we describe the circuit polytope on series–parallel graphs. We first show the existence of a compact extended formulation. Though not explicit, its construction process helps us to inductively provide the description in the original space. As a consequence, using the link between bonds and circuits in planar graphs, we also describe the bond polytope on series–parallel graphs.

  3. Dynamic grid refinement for partial differential equations on parallel computers

    International Nuclear Information System (INIS)

    Mccormick, S.; Quinlan, D.

    1989-01-01

    The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids to provide adaptive resolution and fast solution of PDEs. An asynchronous version of FAC, called AFAC, that completely eliminates the bottleneck to parallelism is presented. This paper describes the advantage that this algorithm has in adaptive refinement for moving singularities on multiprocessor computers. This work is applicable to the parallel solution of two- and three-dimensional shock tracking problems. 6 refs

  4. Numerical kinematic transformation calculations for a parallel link manipulator

    International Nuclear Information System (INIS)

    Killough, S.M.

    1993-01-01

    Parallel link manipulators are often considered for particular robotic applications because of the unique advantages they provide. Unfortunately, they have significant disadvantages with respect to calculating the kinematic transformations because of the high-order equations that must be solved. Presented is a manipulator design that exploits the mechanical advantages of parallel links yet also has a corresponding numerical kinematic solution that can be solved in real time on common microcomputers

  5. Constructing and Using Broad-coverage Lexical Resource for Enhancing Morphological Analysis of Arabic

    OpenAIRE

    Sawalha, M.; Atwell, E.S.

    2010-01-01

    Broad-coverage language resources which provide prior linguistic knowledge must improve the accuracy and the performance of NLP applications. We are constructing a broad-coverage lexical resource to improve the accuracy of morphological analyzers and part-of-speech taggers of Arabic text. Over the past 1200 years, many different kinds of Arabic language lexicons were constructed; these lexicons are different in ordering, size and aim or goal of construction. We collected 23 machine-readable l...

  6. Universal Health Coverage – The Critical Importance of Global Solidarity and Good Governance

    Science.gov (United States)

    Reis, Andreas A.

    2016-01-01

    This article provides a commentary to Ole Norheim’ s editorial entitled "Ethical perspective: Five unacceptable trade-offs on the path to universal health coverage." It reinforces its message that an inclusive, participatory process is essential for ethical decision-making and underlines the crucial importance of good governance in setting fair priorities in healthcare. Solidarity on both national and international levels is needed to make progress towards the goal of universal health coverage (UHC). PMID:27694683

  7. Coverage Extension via Side-Lobe Transmission in Multibeam Satellite System

    OpenAIRE

    Gharanjik, Ahmad; Kmieciak, Jarek; Shankar, Bhavani; Ottersten, Björn

    2017-01-01

    In this paper, we study the feasibility of coverage extension of a multibeam satellite network by providing low-rate communications to terminals located outside the coverage of the main beams. Focusing on the MEO satellite network, and using realistic link budgets from O3b networks, we investigate the performance of both forward and return links for terminals stationed in the side lobes of the main beams. In particular, multi-carrier transmission for the forward link and single-carrier transmission for the return link...

  8. Clustered lot quality assurance sampling to assess immunisation coverage: increasing rapidity and maintaining precision.

    Science.gov (United States)

    Pezzoli, Lorenzo; Andrews, Nick; Ronveaux, Olivier

    2010-05-01

    Vaccination programmes targeting disease elimination aim to achieve very high coverage levels (e.g. 95%). We calculated the precision of different clustered lot quality assurance sampling (LQAS) designs in computer-simulated surveys to provide local health officers in the field with preset LQAS plans to simply and rapidly assess programmes with high coverage targets. We calculated the sample size (N), decision value (d) and misclassification errors (alpha and beta) of several LQAS plans by running 10 000 simulations. We kept the upper coverage threshold (UT) at 90% or 95% and decreased the lower threshold (LT) progressively by 5%. We measured the proportion of simulations with more than d unvaccinated individuals when the coverage was LT% (pLT) to calculate alpha (1-pLT). We divided N into clusters (between 5 and 10) and recalculated the errors, hypothesising that the coverage would vary across the clusters according to a binomial distribution with preset standard deviations of 0.05 and 0.1 from the mean lot coverage. We selected the plans fulfilling these criteria and recommend LQAS plans dividing the lot into five clusters with N = 50 (5 x 10) and d = 4 to evaluate programmes with a 95% coverage target and d = 7 to evaluate programmes with a 90% target. These plans will considerably increase the feasibility and the rapidity of conducting the LQAS in the field.
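
    The misclassification errors of a single (unclustered) plan can be estimated by simple Monte Carlo simulation, as sketched below. The plan parameters and thresholds used here are illustrative examples, not the preset plans derived in the paper.

```cpp
#include <cstdio>
#include <random>

// Monte-Carlo estimate of LQAS misclassification errors for a plan (N, d):
// a lot is "accepted" (classified as reaching the target) when the number of
// unvaccinated individuals in the sample is <= d.
//   alpha: probability of accepting a lot whose true coverage is only LT
//   beta : probability of rejecting a lot whose true coverage is UT
int main() {
    const int    N = 50, d = 4;          // illustrative plan
    const double UT = 0.95, LT = 0.80;   // illustrative thresholds
    const int    sims = 10000;

    std::mt19937 rng(42);
    auto accept_rate = [&](double coverage) {
        std::binomial_distribution<int> unvaccinated(N, 1.0 - coverage);
        int accepted = 0;
        for (int s = 0; s < sims; ++s)
            if (unvaccinated(rng) <= d) ++accepted;
        return static_cast<double>(accepted) / sims;
    };

    double alpha = accept_rate(LT);        // falsely accepting a low-coverage lot
    double beta  = 1.0 - accept_rate(UT);  // falsely rejecting a high-coverage lot
    std::printf("alpha = %.3f, beta = %.3f\n", alpha, beta);
    return 0;
}
```

    The clustered extension described in the abstract would additionally draw a per-cluster coverage around the lot mean before sampling each cluster of 10 individuals.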

  9. 24 CFR 51.302 - Coverage.

    Science.gov (United States)

    2010-04-01

    ... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Coverage. 51.302 Section 51.302 Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development... significantly prolongs the physical or economic life of existing facilities or which, in the case of Accident...

  10. 5 CFR 880.304 - FEGLI coverage.

    Science.gov (United States)

    2010-01-01

    ... under § 880.205, FEGLI premiums and benefits will be computed using the date of death established under...) RETIREMENT AND INSURANCE BENEFITS DURING PERIODS OF UNEXPLAINED ABSENCE Continuation of Benefits § 880.304 FEGLI coverage. (a) FEGLI premiums will not be collected during periods when an annuitant is a missing...

  11. 44 CFR 17.610 - Coverage.

    Science.gov (United States)

    2010-10-01

    ... SECURITY GENERAL GOVERNMENTWIDE REQUIREMENTS FOR DRUG-FREE WORKPLACE (GRANTS) § 17.610 Coverage. (a) This... covered by this subpart, except where specifically modified by this subpart. In the event of any conflict... are deemed to control with respect to the implementation of drug-free workplace requirements...

  12. 77 FR 16453 - Student Health Insurance Coverage

    Science.gov (United States)

    2012-03-21

    ... eliminating annual and lifetime dollar limits would result in dramatic premium hikes for student plans and.... Industry and university commenters noted that student health insurance coverage benefits typically... duplication of benefits and makes student plans more affordable. Industry commenters noted that student health...

  13. Coverage of space by random sets

    Indian Academy of Sciences (India)

    Consider the non-negative integer line. For each integer point we toss a coin. If the toss at location i is Heads, we place an interval (of random length) there and move to location i + 1; if it is Tails, we move to location i + 1.

  14. 5 CFR 610.402 - Coverage.

    Science.gov (United States)

    2010-01-01

    ... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS HOURS OF DUTY Flexible and Compressed Work Schedules § 610.402 Coverage. The regulations contained in this subpart apply only to flexible work schedules and compressed work schedules established under subchapter 11 of chapter 61 of...

  15. 14 CFR 205.5 - Minimum coverage.

    Science.gov (United States)

    2010-01-01

    ... 18,000 pounds maximum payload capacity, carriers need only maintain coverage of $2,000,000 per... than 30 seats or 7,500 pounds maximum cargo payload capacity, and a maximum authorized takeoff weight... not be contingent upon the financial condition, solvency, or freedom from bankruptcy of the carrier...

  16. 5 CFR 734.401 - Coverage.

    Science.gov (United States)

    2010-01-01

    ...) POLITICAL ACTIVITIES OF FEDERAL EMPLOYEES Employees in Certain Agencies and Positions § 734.401 Coverage. (a... Criminal Investigation of the Internal Revenue Service. (11) The Office of Investigative Programs of the... Firearms; (13) The Criminal Division of the Department of Justice; (14) The Central Imagery Office; (15...

  17. Danish Media coverage of 22/7

    DEFF Research Database (Denmark)

    Hervik, Peter; Boisen, Sophie

    2013-01-01

    The article examines ABB's Danish connections through an analysis of the first 100 days of Danish media coverage. We scrutinised 188 articles in the largest daily newspapers to find out how Danish actors related to ABB's ideas. The key argument is that the discourses and opinions reflect pre-existing opinions and entrenched...

  18. Binning metagenomic contigs by coverage and composition

    NARCIS (Netherlands)

    Alneberg, J.; Bjarnason, B.S.; Bruijn, de I.; Schirmer, M.; Quick, J.; Ijaz, U.Z.; Lahti, L.M.; Loman, N.J.; Andersson, A.F.; Quince, C.

    2014-01-01

    Shotgun sequencing enables the reconstruction of genomes from complex microbial communities, but because assembly does not reconstruct entire genomes, it is necessary to bin genome fragments. Here we present CONCOCT, a new algorithm that combines sequence composition and coverage across multiple samples.

  19. Compressing Data Cube in Parallel OLAP Systems

    Directory of Open Access Journals (Sweden)

    Frank Dehne

    2007-03-01

    Full Text Available This paper proposes an efficient algorithm to compress the cubes during parallel data cube generation. This low-overhead compression mechanism provides block-by-block and record-by-record compression by using tuple difference coding techniques, thereby maximizing the compression ratio and minimizing the decompression penalty at run-time. The experimental results demonstrate that the typical compression ratio is about 30:1 without sacrificing running time. This paper also demonstrates that the compression method is suitable for the Hilbert Space Filling Curve, a mechanism widely used in multi-dimensional indexing.
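
    The general idea behind tuple difference coding can be illustrated in a few lines: within a sorted run of cube records, each tuple is stored as the element-wise difference from its predecessor, so most stored values are small and compress well. This is a sketch of the general technique only, not the block/record layout used by the paper; all names are invented for the example.

```cpp
#include <cstdio>
#include <vector>

using Tuple = std::vector<int>;

// Store the first tuple in full and every later tuple as the element-wise
// difference from its predecessor.
std::vector<Tuple> diff_encode(const std::vector<Tuple>& rows) {
    std::vector<Tuple> out = rows;
    for (size_t r = rows.size(); r-- > 1; )
        for (size_t c = 0; c < rows[r].size(); ++c)
            out[r][c] = rows[r][c] - rows[r - 1][c];
    return out;
}

// Invert the encoding by running prefix sums down the rows.
std::vector<Tuple> diff_decode(const std::vector<Tuple>& enc) {
    std::vector<Tuple> out = enc;
    for (size_t r = 1; r < enc.size(); ++r)
        for (size_t c = 0; c < enc[r].size(); ++c)
            out[r][c] += out[r - 1][c];
    return out;
}

int main() {
    std::vector<Tuple> rows = {{2001, 1, 10}, {2001, 1, 12}, {2001, 2, 3}};
    auto enc = diff_encode(rows);   // {{2001,1,10},{0,0,2},{0,1,-9}}
    auto dec = diff_decode(enc);    // round-trips to the original
    std::printf("round trip ok: %d\n", dec == rows);
    return 0;
}
```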

  20. Parallel Algorithms for Switching Edges in Heterogeneous Graphs.

    Science.gov (United States)

    Bhuiyan, Hasanuzzaman; Khan, Maleq; Chen, Jiangzhuo; Marathe, Madhav

    2017-06-01

    An edge switch is an operation on a graph (or network) where two edges are selected randomly and one end vertex of each is swapped with the other. Edge switch operations have important applications in graph theory and network analysis, such as in generating random networks with a given degree sequence, modeling and analyzing dynamic networks, and in studying various dynamic phenomena over a network. The recent growth of real-world networks motivates the need for efficient parallel algorithms. The dependencies among successive edge switch operations and the requirement to keep the graph simple (i.e., no self-loops or parallel edges) as the edges are switched lead to significant challenges in designing a parallel algorithm. Addressing these challenges requires complex synchronization and communication among the processors, leading to difficulties in achieving a good speedup by parallelization. In this paper, we present distributed memory parallel algorithms for switching edges in massive networks. These algorithms provide good speedup and scale well to a large number of processors. A harmonic mean speedup of 73.25 is achieved on eight different networks with 1024 processors. One of the steps in our edge switch algorithms requires the computation of multinomial random variables in parallel. This paper presents the first non-trivial parallel algorithm for the problem, achieving a speedup of 925 using 1024 processors.
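
    The basic operation, together with the simplicity check that makes parallelization hard, is easy to state in code. The sketch below is a sequential illustration of a single edge-switch attempt on an undirected simple graph; it is not the paper's distributed-memory algorithm, and all names are invented for the example.

```cpp
#include <cstdio>
#include <random>
#include <set>
#include <utility>
#include <vector>

using Edge = std::pair<int, int>;                 // undirected, stored as (min,max)
static Edge norm(int u, int v) { return {std::min(u, v), std::max(u, v)}; }

// One edge-switch attempt: pick two edges (u,v) and (x,y) at random and try to
// replace them with (u,y) and (x,v). The switch is rejected if it would create
// a self-loop or a parallel edge, so the graph stays simple.
bool edge_switch(std::vector<Edge>& edges, std::set<Edge>& present, std::mt19937& rng) {
    std::uniform_int_distribution<size_t> pick(0, edges.size() - 1);
    size_t i = pick(rng), j = pick(rng);
    if (i == j) return false;
    auto [u, v] = edges[i];
    auto [x, y] = edges[j];
    Edge a = norm(u, y), b = norm(x, v);
    if (u == y || x == v) return false;                     // would create a self-loop
    if (present.count(a) || present.count(b)) return false; // would create a parallel edge
    present.erase(edges[i]); present.erase(edges[j]);
    present.insert(a); present.insert(b);
    edges[i] = a; edges[j] = b;
    return true;
}

int main() {
    std::vector<Edge> edges = {norm(0,1), norm(2,3), norm(1,2), norm(3,0)};
    std::set<Edge> present(edges.begin(), edges.end());
    std::mt19937 rng(7);
    int ok = 0;
    for (int t = 0; t < 100; ++t) ok += edge_switch(edges, present, rng);
    std::printf("successful switches: %d of 100\n", ok);
    return 0;
}
```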

  1. An Automatic Instruction-Level Parallelization of Machine Code

    Directory of Open Access Journals (Sweden)

    MARINKOVIC, V.

    2018-02-01

    Full Text Available Prevailing multicores and novel manycores have made the parallelization of embedded software, which is still largely written as sequential code, a great challenge of the modern day. In this paper, automatic code parallelization is considered, focusing on developing a parallelization tool at the binary level as well as on the validation of this approach. A novel instruction-level parallelization algorithm for assembly code is developed: it uses the register names after SSA conversion to find independent blocks of code and then schedules those blocks using METIS to achieve good load balance. Sequential consistency is verified, and validation is done by measuring the program execution time on the target architecture. Great speedup, taken as the performance measure in the validation process, and optimal load balancing are achieved for multicore RISC processors with 2 to 16 cores (e.g. MIPS, MicroBlaze, etc.). In particular, for 16 cores, the average speedup is 7.92x, while in some cases it reaches 14x. The approach to automatic parallelization provided by this paper is useful to researchers and developers in the area of parallelization as the basis for further optimizations, as the back-end of a compiler, or as the code parallelization tool for an embedded system.
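    As a toy illustration of the scheduling step described above, in which independent code blocks are assigned to cores for load balance, the sketch below uses a simple greedy longest-processing-time heuristic in place of METIS partitioning; the block names and instruction counts are hypothetical.

        import heapq

        def schedule_blocks(block_costs, num_cores):
            """Greedily assign independent blocks to the least-loaded core (LPT heuristic)."""
            cores = [(0, c, []) for c in range(num_cores)]        # (load, core id, assigned blocks)
            heapq.heapify(cores)
            for block, cost in sorted(block_costs.items(), key=lambda kv: -kv[1]):
                load, core, blocks = heapq.heappop(cores)
                blocks.append(block)
                heapq.heappush(cores, (load + cost, core, blocks))
            return sorted(cores, key=lambda c: c[1])

        # Hypothetical instruction counts for independent blocks found after SSA analysis.
        print(schedule_blocks({"B0": 40, "B1": 25, "B2": 35, "B3": 20}, num_cores=2))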

  2. Proton Therapy Coverage for Prostate Cancer Treatment

    International Nuclear Information System (INIS)

    Vargas, Carlos; Wagner, Marcus; Mahajan, Chaitali; Indelicato, Daniel; Fryer, Amber; Falchook, Aaron; Horne, David C.; Chellini, Angela; McKenzie, Craig C.; Lawlor, Paula C.; Li Zuofeng; Lin Liyong; Keole, Sameer

    2008-01-01

    Purpose: To determine the impact of prostate motion on dose coverage in proton therapy. Methods and Materials: A total of 120 prostate positions were analyzed on 10 treatment plans for 10 prostate patients treated using our low-risk proton therapy prostate protocol (University of Florida Proton Therapy Institute 001). Computed tomography and magnetic resonance imaging T2-weighted turbo spin-echo scans were registered for all cases. The planning target volume included the prostate with a 5-mm axial and 8-mm superoinferior expansion. The prostate was repositioned using 5- and 10-mm one-dimensional vectors and 10-mm multidimensional vectors (Points A-D). The beam was realigned for the 5- and 10-mm displacements. The prescription dose was 78 Gy equivalent (GE). Results: The mean percentage of rectum receiving 70 Gy (V70) was 7.9%, the bladder V70 was 14.0%, the femoral head/neck V50 was 0.1%, and the mean pelvic dose was 4.6 GE. The percentage of prostate receiving 78 Gy (V78) with the 5-mm movements changed by -0.2% (range, 0.006-0.5%, p > 0.7). However, the prostate V78 after a 10-mm displacement changed significantly (p 78 coverage had a large and significant reduction of 17.4% (range, 13.5-17.4%, p 78 coverage of the clinical target volume. The minimal prostate dose was reduced 33% (25.8 GE), on average, for Points A-D. The prostate minimal dose improved from 69.3 GE to 78.2 GE (p < 0.001) with realignment for 10-mm movements. Conclusion: The good dose coverage and low normal doses achieved for the initial plan were maintained with movements of ≤5 mm. Beam realignment improved coverage for 10-mm displacements

  3. Policy Choices for Progressive Realization of Universal Health Coverage Comment on "Ethical Perspective: Five Unacceptable Trade-offs on the Path to Universal Health Coverage".

    Science.gov (United States)

    Tangcharoensathien, Viroj; Patcharanarumol, Walaiporn; Panichkriangkrai, Warisa; Sommanustweechai, Angkana

    2016-07-31

    In response to Norheim's editorial, this commentary offers reflections from Thailand on how the five unacceptable trade-offs applied to the universal health coverage (UHC) reforms between 1975 and 2002, by which time the whole population of 64 million people was covered by one of the three public health insurance systems. The commentary aims to generate global discussion on how UHC can best be gradually achieved. Beyond the proposed five discrete trade-offs within each dimension, there are also trade-offs between the three dimensions of UHC: population coverage, service coverage and cost coverage. Findings from Thai UHC show that equity guided the extension of population coverage: low-income households and the informal sector were the priority groups for coverage extension through different prepayment schemes in 1975 and 1984, respectively. The exception was public sector employees, who were historically covered as part of their fringe benefits, well before the poor. Private sector employees were covered last, in 1990. Historically, Thailand applied a comprehensive benefit package in which a few items are excluded using a negative list; once capacity for health technology assessment improved, cost-effectiveness was used to decide on the inclusion of new interventions into the benefit package. Not only cost-effectiveness but also long-term budget impact, equity and ethical considerations are taken into account. Cost coverage is mostly determined by fiscal capacity. A close-ended budget with a mix of provider payment methods is used as a tool to trade off service coverage against financial risk protection. Introducing copayment in the context of fee-for-service can be harmful to beneficiaries because of supplier-induced demand, inefficiency and unpredictable out-of-pocket payments by households. UHC achieved favorable outcomes because it was implemented when there was full geographical coverage of primary healthcare in all districts and sub

  4. Cosmic Shear With ACS Pure Parallels

    Science.gov (United States)

    Rhodes, Jason

    2002-07-01

    Small distortions in the shapes of background galaxies by foreground mass provide a powerful method of directly measuring the amount and distribution of dark matter. Several groups have recently detected this weak lensing by large-scale structure, also called cosmic shear. The high resolution and sensitivity of HST/ACS provide a unique opportunity to measure cosmic shear accurately on small scales. Using 260 parallel orbits in Sloan F775W we will measure for the first time: the cosmic shear variance on small scales, Omega_m^0.5 with signal-to-noise (s/n) 20, and the mass density Omega_m with s/n=4. They will be done at small angular scales where non-linear effects dominate the power spectrum, providing a test of the gravitational instability paradigm for structure formation. Measurements on these scales are not possible from the ground, because of the systematic effects induced by PSF smearing from seeing. Having many independent lines of sight reduces the uncertainty due to cosmic variance, making parallel observations ideal.

  5. Structural synthesis of parallel robots

    CERN Document Server

    Gogu, Grigore

    This book represents the fifth part of a larger work dedicated to the structural synthesis of parallel robots. The originality of this work resides in the fact that it combines new formulae for mobility, connectivity, redundancy and overconstraints with evolutionary morphology in a unified structural synthesis approach that yields interesting and innovative solutions for parallel robotic manipulators.  This is the first book on robotics that presents solutions for coupled, decoupled, uncoupled, fully-isotropic and maximally regular robotic manipulators with Schönflies motions systematically generated by using the structural synthesis approach proposed in Part 1.  Overconstrained non-redundant/overactuated/redundantly actuated solutions with simple/complex limbs are proposed. Many solutions are presented here for the first time in the literature. The author had to make a difficult and challenging choice between protecting these solutions through patents and releasing them directly into the public domain. T...

  6. GPU Parallel Bundle Block Adjustment

    Directory of Open Access Journals (Sweden)

    ZHENG Maoteng

    2017-09-01

    Full Text Available To deal with massive data in photogrammetry, we introduce the GPU parallel computing technology. The preconditioned conjugate gradient and inexact Newton method are also applied to decrease the iteration times while solving the normal equation. A brand new workflow of bundle adjustment is developed to utilize GPU parallel computing technology. Our method can avoid the storage and inversion of the big normal matrix, and compute the normal matrix in real time. The proposed method can not only largely decrease the memory requirement of normal matrix, but also largely improve the efficiency of bundle adjustment. It also achieves the same accuracy as the conventional method. Preliminary experiment results show that the bundle adjustment of a dataset with about 4500 images and 9 million image points can be done in only 1.5 minutes while achieving sub-pixel accuracy.
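    The normal-equation step described above relies on a matrix-free preconditioned conjugate gradient rather than storing and inverting the full normal matrix. A generic Jacobi-preconditioned CG can be sketched roughly as follows; the tiny 2x2 system stands in for the bundle-adjustment normal equations, and this is not the paper's GPU code.

        import numpy as np

        def pcg(apply_A, b, M_inv_diag, tol=1e-8, max_iter=200):
            """Jacobi-preconditioned conjugate gradient; apply_A(v) returns A @ v without storing A."""
            x = np.zeros_like(b)
            r = b - apply_A(x)
            z = M_inv_diag * r
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = apply_A(p)
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    break
                z = M_inv_diag * r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        # Toy symmetric positive-definite system standing in for a normal matrix J^T J.
        A = np.array([[4.0, 1.0], [1.0, 3.0]])
        b = np.array([1.0, 2.0])
        x = pcg(lambda v: A @ v, b, M_inv_diag=1.0 / np.diag(A))
        print(x, A @ x)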

  7. A tandem parallel plate analyzer

    International Nuclear Information System (INIS)

    Hamada, Y.; Fujisawa, A.; Iguchi, H.; Nishizawa, A.; Kawasumi, Y.

    1996-11-01

    By a new modification of a parallel plate analyzer the second-order focus is obtained in an arbitrary injection angle. This kind of an analyzer with a small injection angle will have an advantage of small operational voltage, compared to the Proca and Green analyzer where the injection angle is 30 degrees. Thus, the newly proposed analyzer will be very useful for the precise energy measurement of high energy particles in MeV range. (author)

  8. High-speed parallel counter

    International Nuclear Information System (INIS)

    Gus'kov, B.N.; Kalinnikov, V.A.; Krastev, V.R.; Maksimov, A.N.; Nikityuk, N.M.

    1985-01-01

    This paper describes a high-speed parallel counter that contains 31 inputs and 15 outputs and is implemented by integrated circuits of series 500. The counter is designed for fast sampling of events according to the number of particles that pass simultaneously through the hodoscopic plane of the detector. The minimum delay of the output signals relative to the input is 43 nsec. The duration of the output signals can be varied from 75 to 120 nsec

  9. An anthropologist in parallel structure

    Directory of Open Access Journals (Sweden)

    Noelle Molé Liston

    2016-08-01

    Full Text Available The essay examines the parallels between Molé Liston’s studies on labor and precarity in Italy and the United States’ anthropology job market. Probing the way economic shift reshaped the field of anthropology of Europe in the late 2000s, the piece explores how the neoliberalization of the American academy increased the value in studying the hardships and daily lives of non-western populations in Europe.

  10. Combinatorics of spreads and parallelisms

    CERN Document Server

    Johnson, Norman

    2010-01-01

    Partitions of Vector Spaces; Quasi-Subgeometry Partitions; Finite Focal-Spreads; Generalizing André Spreads; The Going Up Construction for Focal-Spreads; Subgeometry Partitions; Subgeometry and Quasi-Subgeometry Partitions; Subgeometries from Focal-Spreads; Extended André Subgeometries; Kantor's Flag-Transitive Designs; Maximal Additive Partial Spreads; Subplane Covered Nets and Baer Groups; Partial Desarguesian t-Parallelisms; Direct Products of Affine Planes; Jha-Johnson SL(2,

  11. New algorithms for parallel MRI

    International Nuclear Information System (INIS)

    Anzengruber, S; Ramlau, R; Bauer, F; Leitao, A

    2008-01-01

    Magnetic Resonance Imaging with parallel data acquisition requires algorithms for reconstructing the patient's image from a small number of measured lines of the Fourier domain (k-space). In contrast to well-known algorithms such as SENSE and GRAPPA and their flavors, we treat the task as a non-linear inverse problem. To avoid cost-intensive derivatives we use Landweber-Kaczmarz iteration and, to improve the overall results, additional sparsity constraints.
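    For orientation, the plain Landweber iteration underlying the Landweber-Kaczmarz method mentioned above can be sketched for a linear toy problem (a random matrix stands in for the undersampled imaging operator); the nonlinear, multi-coil, sparsity-constrained version used for parallel MRI is considerably more involved.

        import numpy as np

        def landweber(A, y, step, n_iter=500):
            """Plain Landweber iteration x_{k+1} = x_k + step * A^T (y - A x_k)."""
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x = x + step * A.T @ (y - A @ x)
            return x

        # Toy under-determined system standing in for undersampled k-space data.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((20, 40))
        x_true = np.zeros(40)
        x_true[[3, 17, 25]] = [1.0, -2.0, 0.5]          # sparse ground truth (hypothetical)
        y = A @ x_true
        step = 1.0 / np.linalg.norm(A, 2) ** 2          # step size below 2/||A||^2 for convergence
        x_rec = landweber(A, y, step)
        print(np.round(x_rec[[3, 17, 25]], 2))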

  12. Wakefield calculations on parallel computers

    International Nuclear Information System (INIS)

    Schoessow, P.

    1990-01-01

    The use of parallelism in the solution of wakefield problems is illustrated for two different computer architectures (SIMD and MIMD). Results are given for finite difference codes which have been implemented on a Connection Machine and an Alliant FX/8 and which are used to compute wakefields in dielectric loaded structures. Benchmarks on code performance are presented for both cases. 4 refs., 3 figs., 2 tabs

  13. Parallel processing of genomics data

    Science.gov (United States)

    Agapito, Giuseppe; Guzzi, Pietro Hiram; Cannataro, Mario

    2016-10-01

    The availability of high-throughput experimental platforms for the analysis of biological samples, such as mass spectrometry, microarrays and Next Generation Sequencing, has made it possible to analyze a whole genome in a single experiment. Such platforms produce an enormous volume of data per single experiment, thus the analysis of this enormous flow of data poses several challenges in terms of data storage, preprocessing, and analysis. To face those issues, efficient, possibly parallel, bioinformatics software needs to be used to preprocess and analyze data, for instance to highlight genetic variation associated with complex diseases. In this paper we present a parallel algorithm for the preprocessing and statistical analysis of genomics data that can handle high-dimensional data with good response times. The proposed system is able to find statistically significant biological markers able to discriminate classes of patients that respond to drugs in different ways. Experiments performed on real and synthetic genomic datasets show good speed-up and scalability.

  14. Parallel Aircraft Trajectory Optimization with Analytic Derivatives

    Science.gov (United States)

    Falck, Robert D.; Gray, Justin S.; Naylor, Bret

    2016-01-01

    Trajectory optimization is an integral component for the design of aerospace vehicles, but emerging aircraft technologies have introduced new demands on trajectory analysis that current tools are not well suited to address. Designing aircraft with technologies such as hybrid electric propulsion and morphing wings requires consideration of the operational behavior as well as the physical design characteristics of the aircraft. The addition of operational variables can dramatically increase the number of design variables, which motivates the use of gradient-based optimization with analytic derivatives to solve the larger optimization problems. In this work we develop an aircraft trajectory analysis tool using a Legendre-Gauss-Lobatto-based collocation scheme, providing analytic derivatives via the OpenMDAO multidisciplinary optimization framework. This collocation method uses an implicit time integration scheme that provides a high degree of sparsity and thus several potential options for parallelization. The performance of the new implementation was investigated via a series of single and multi-trajectory optimizations using a combination of parallel computing and constraint aggregation. The computational performance results show that in order to take full advantage of the sparsity in the problem it is vital to parallelize both the non-linear analysis evaluations and the derivative computations themselves. The constraint aggregation results showed a significant numerical challenge due to difficulty in achieving tight convergence tolerances. Overall, the results demonstrate the value of applying analytic derivatives to trajectory optimization problems and lay the foundation for future application of this collocation-based method to the design of aircraft where operational scheduling of technologies is key to achieving good performance.

  15. Xyce parallel electronic simulator : users' guide.

    Energy Technology Data Exchange (ETDEWEB)

    Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.; Santarelli, Keith R.; Fixel, Deborah A.; Coffey, Todd Stirling; Russo, Thomas V.; Schiek, Richard Louis; Warrender, Christina E.; Keiter, Eric Richard; Pawlowski, Roger Patrick

    2011-05-01

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: (1) Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). Note that this includes support for most popular parallel and serial computers; (2) Improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques. (3) Device models which are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and (4) Object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message passing parallel implementation - which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The development of Xyce provides a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods, parallel solver algorithms) research and development can be performed. As a result, Xyce is

  16. Effects of coverage gap reform on adherence to diabetes medications.

    Science.gov (United States)

    Zeng, Feng; Patel, Bimal V; Brunetti, Louis

    2013-04-01

    To investigate the impact of Part D coverage gap reform on diabetes medication adherence. Retrospective data analysis based on pharmacy claims data from a national pharmacy benefit manager. We used a difference-in-difference-in-difference method to evaluate the impact of coverage gap reform on adherence to diabetes medications. Two cohorts (2010 and 2011) were constructed to represent the last year before Affordable Care Act (ACA) reform and the first year after reform, respectively. Each patient had 2 observations: 1 before and 1 after entering the coverage gap. Patients in each cohort were divided into groups based on type of gap coverage: no coverage, partial coverage (generics only), and full coverage. Following ACA reform, patients with no gap coverage and patients with partial gap coverage experienced substantial drops in copayments in the coverage gap in 2011. Their adherence to diabetes medications in the gap, measured by percentage of days covered, improved correspondingly (2.99 percentage points, 95% confidence interval [CI] 0.49-5.48, P = .019 for patients with no coverage; 6.46 percentage points, 95% CI 3.34-9.58, P gap in 2011. However, their adherence did not increase (-0.13 percentage point, P = .8011). In the first year of ACA coverage gap reform, copayments in the gap decreased substantially for all patients. Patients with no coverage and patients with partial coverage in the gap had better adherence in the gap in 2011.
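    The adherence measure used in this study, percentage of days covered (PDC), can be illustrated with a minimal sketch; the claim dates and days-supply values below are hypothetical, and overlapping fills are handled in a simplified way.

        from datetime import date, timedelta

        def percentage_of_days_covered(fills, start, end):
            """PDC = covered days / days in observation window; fills are (fill_date, days_supply)."""
            window = {start + timedelta(d) for d in range((end - start).days + 1)}
            covered = set()
            for fill_date, days_supply in fills:
                for d in range(days_supply):
                    covered.add(fill_date + timedelta(d))
            return 100.0 * len(covered & window) / len(window)

        fills = [(date(2011, 1, 1), 30), (date(2011, 2, 5), 30)]      # hypothetical claims
        print(round(percentage_of_days_covered(fills, date(2011, 1, 1), date(2011, 3, 31)), 1))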

  17. Optimisation of a parallel ocean general circulation model

    Directory of Open Access Journals (Sweden)

    M. I. Beare

    1997-10-01

    Full Text Available This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.

  19. Broader health coverage is good for the nation's health: evidence from country level panel data.

    Science.gov (United States)

    Moreno-Serra, Rodrigo; Smith, Peter C

    2015-01-01

    Progress towards universal health coverage involves providing people with access to needed health services without entailing financial hardship and is often advocated on the grounds that it improves population health. The paper offers econometric evidence on the effects of health coverage on mortality outcomes at the national level. We use a large panel data set of countries, examined by using instrumental variable specifications that explicitly allow for potential reverse causality and unobserved country-specific characteristics. We employ various proxies for the coverage level in a health system. Our results indicate that expanded health coverage, particularly through higher levels of publicly funded health spending, results in lower child and adult mortality, with the beneficial effect on child mortality being larger in poorer countries.

  20. Evaluation of fault coverage for digitalized system in nuclear power plants using VHDL

    International Nuclear Information System (INIS)

    Kim, Suk Joon; Lee, Jun Suk; Seong, Poong Hyun

    2003-01-01

    Fault coverage of digital systems is found to be one of the most important factors in the safety analysis of nuclear power plants. Several axiomatic models for the estimation of fault coverage of digital systems have been proposed, but to apply those axiomatic models to real digital systems, parameters that the axiomatic models require should be approximated using analytic methods, empirical methods or expert opinions. In this paper, we apply the fault injection method to VHDL computer simulation model of a real digital system which provides the protection function to nuclear power plants, for the approximation of fault detection coverage of the digital system. As a result, the fault detection coverage of the digital system could be obtained

  1. Progress towards the Conventionon Biological Diversity terrestrial2010 and marine 2012 targets forprotected area coverage

    DEFF Research Database (Denmark)

    Coad, Lauren; Burgess, Neil David; Fish, Lucy

    2010-01-01

    Protected area coverage targets set by the Convention on Biological Diversity (CBD) for both terrestrial and marine environments provide a major incentive for governments to review and upgrade their protected area systems. Assessing progress towards these targets will form an important component of the work of the Xth CBD Conference of Parties meeting to be held in Japan in 2010. The World Database on Protected Areas (WDPA) is the largest assembly of data on the world's terrestrial and marine protected areas and, as such, represents a fundamental tool in tracking progress towards protected area coverage targets. National protected areas data from the WDPA have been used to measure progress in protected areas coverage at global, regional and national scale. The mean protected area coverage per nation was 12.2% for terrestrial area, and only 5.1% for near-shore marine area. Variation in protected...

  2. 75 FR 2562 - Publication of Model Notices for Health Care Continuation Coverage Provided Pursuant to the...

    Science.gov (United States)

    2010-01-15

    ... DEPARTMENT OF LABOR Employee Benefits Security Administration Publication of Model Notices for... AGENCY: Employee Benefits Security Administration, Department of Labor. ACTION: Notice of the..., contact the Department's Employee Benefits Security Administration's Benefits Advisors at 1-866-444-3272...

  3. Cross-sample validation provides enhanced proteome coverage in rat vocal fold mucosa.

    Directory of Open Access Journals (Sweden)

    Nathan V Welham

    2011-03-01

    Full Text Available The vocal fold mucosa is a biomechanically unique tissue comprised of a densely cellular epithelium, superficial to an extracellular matrix (ECM)-rich lamina propria. Such ECM-rich tissues are challenging to analyze using proteomic assays, primarily due to extensive crosslinking and glycosylation of the majority of high M(r) ECM proteins. In this study, we implemented an LC-MS/MS-based strategy to characterize the rat vocal fold mucosa proteome. Our sample preparation protocol successfully solubilized both proteins and certain high M(r) glycoconjugates and resulted in the identification of hundreds of mucosal proteins. A straightforward approach to the treatment of protein identifications attributed to single peptide hits allowed the retention of potentially important low abundance identifications (validated by a cross-sample match and de novo interpretation of relevant spectra) while still eliminating potentially spurious identifications (global single peptide hits with no cross-sample match). The resulting vocal fold mucosa proteome was characterized by a wide range of cellular and extracellular proteins spanning 12 functional categories.

  4. First-line treatment with cephalosporins in spontaneous bacterial peritonitis provides poor antibiotic coverage

    DEFF Research Database (Denmark)

    Novovic, Srdan; Semb, Synne; Olsen, Henrik

    2012-01-01

    Abstract Objective. Spontaneous bacterial peritonitis is a common infection in cirrhosis, associated with a high mortality. Third-generation cephalosporins are recommended as first-line treatment. The aim was to evaluate the epidemiology of microbiological ascitic fluid findings and antimicrobial...... resistance in Denmark. Material and Methods. All patients with cirrhosis and a positive ascitic fluid culture, at three university hospitals in the Copenhagen area during a 7-year period, were retrospectively evaluated. Patients with apparent secondary peritonitis were excluded from the study. Results. One...

  5. Factors influencing media coverage of a radiological incident

    International Nuclear Information System (INIS)

    Bernhardt, R.K.; O'Neill, L.J.

    1986-01-01

    Most organizations have an existing policy for interactions with the media. This policy often requires that interactions be with or through a professional group of public information officers or the Office of Public Affairs. This policy tends to give individual members of an organization the belief that they are not responsible or in some instances, even allowed to interact with the media. To achieve good media relationships and/or coverage, individual interactions are necessary and required. The guidelines for media interactions provided in the Federal Emergency Management Agency (FEMA) sponsored Radiological Emergency Response course are relatively straightforward and simple to adopt

  6. A QoS-Guaranteed Coverage Precedence Routing Algorithm for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Jiun-Chuan Lin

    2011-03-01

    Full Text Available For mission-critical applications of wireless sensor networks (WSNs) involving extensive battlefield surveillance, medical healthcare, etc., it is crucial to have low-power, new protocols, methodologies and structures for transferring data and information in a network with full sensing coverage capability for an extended working period. The utmost mission is to ensure that the network is fully functional, providing reliable transmission of the sensed data without the risk of data loss. WSNs have been applied to various types of mission-critical applications. Coverage preservation is one of the most essential functions to guarantee quality of service (QoS) in WSNs. However, a tradeoff exists between sensing coverage and network lifetime due to the limited energy supplies of sensor nodes. In this study, we propose a routing protocol to accommodate both energy-balance and coverage-preservation for sensor nodes in WSNs. The energy consumption for radio transmissions and the residual energy over the network are taken into account when the proposed protocol determines an energy-efficient route for a packet. The simulation results demonstrate that the proposed protocol is able to increase the duration of the on-duty network and provide up to 98.3% and 85.7% of extra service time with 100% sensing coverage ratio compared with the LEACH and LEACH-Coverage-U protocols, respectively.
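    A very rough sketch of the kind of route-cost rule described above; this is an illustrative stand-in rather than the actual protocol, and the node fields, weights and candidate values are all hypothetical. Next hops are scored by residual energy and penalized when they are critical for sensing coverage or costly to reach.

        def next_hop_score(node, tx_cost, alpha=0.5, beta=0.5):
            """Higher is better: favor residual energy, penalize coverage-critical nodes and radio cost."""
            energy_ratio = node["residual_energy"] / node["initial_energy"]
            coverage_penalty = 1.0 if node["coverage_critical"] else 0.0
            return alpha * energy_ratio - beta * coverage_penalty - tx_cost

        candidates = [
            {"id": 1, "residual_energy": 0.9, "initial_energy": 1.0, "coverage_critical": True},
            {"id": 2, "residual_energy": 0.6, "initial_energy": 1.0, "coverage_critical": False},
        ]
        best = max(candidates, key=lambda n: next_hop_score(n, tx_cost=0.1))
        print("forward packet to node", best["id"])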

  7. Increasing coverage and decreasing inequity in insecticide-treated bed net use among rural Kenyan children.

    Directory of Open Access Journals (Sweden)

    Abdisalan M Noor

    2007-08-01

    Full Text Available Inexpensive and efficacious interventions that avert childhood deaths in sub-Saharan Africa have failed to reach effective coverage, especially among the poorest rural sectors. One particular example is insecticide-treated bed nets (ITNs). In this study, we present repeat observations of ITN coverage among rural Kenyan homesteads exposed at different times to a range of delivery models, and assess changes in coverage across socioeconomic groups. We undertook a study of annual changes in ITN coverage among a cohort of 3,700 children aged 0-4 y in four districts of Kenya (Bondo, Greater Kisii, Kwale, and Makueni) annually between 2004 and 2006. Cross-sectional surveys of ITN coverage were undertaken coincidentally with the incremental availability of commercial sector nets (2004), the introduction of heavily subsidized nets through clinics (2005), and the introduction of free mass distributed ITNs (2006). The changing prevalence of ITN coverage was examined with special reference to the degree of equity in each delivery approach. ITN coverage was only 7.1% in 2004 when the predominant source of nets was the commercial retail sector. By the end of 2005, following the expansion of the heavily subsidized clinic distribution system, ITN coverage rose to 23.5%. In 2006 a large-scale mass distribution of ITNs was mounted providing nets free of charge to children, resulting in a dramatic increase in ITN coverage to 67.3%. With each subsequent survey socioeconomic inequity in net coverage sequentially decreased: 2004 (most poor [2.9%] versus least poor [15.6%]; concentration index 0.281); 2005 (most poor [17.5%] versus least poor [37.9%]; concentration index 0.131); and 2006, with near-perfect equality (most poor [66.3%] versus least poor [66.6%]; concentration index 0.000). The free mass distribution method achieved highest coverage among the poorest children, the highly subsidised clinic nets programme was marginally in favour of the least poor, and the commercial

  8. Conceptual design of multiple parallel switching controller

    International Nuclear Information System (INIS)

    Ugolini, D.; Yoshikawa, S.; Ozawa, K.

    1996-01-01

    This paper discusses the conceptual design and the development of a preliminary model of a multiple parallel switching (MPS) controller. The introduction of several advanced controllers has widened and improved the control capability of nonlinear dynamical systems. However, it is not possible to uniquely define a controller that always outperforms the others, and, in many situations, the controller providing the best control action depends on the operating conditions and on the intrinsic properties and behavior of the controlled dynamical system. The desire to combine the actions of several controllers so as to continuously attain the best control action has motivated the development of the MPS controller. The MPS controller consists of a number of single controllers acting in parallel and of an artificial intelligence (AI) based selecting mechanism. The AI selecting mechanism analyzes the output of each controller and implements the one providing the best control performance. An inherent property of the MPS controller is the possibility to discard unreliable controllers while still being able to perform the control action. To demonstrate the feasibility and the capability of the MPS controller, the simulation of the on-line operation control of a fast breeder reactor (FBR) evaporator is presented. (author)
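    A minimal sketch of the multiple-parallel-switching idea: several controllers run in parallel and the one whose predicted performance is best is applied at each step. The controllers, plant model and cost function below are hypothetical; the paper's AI-based selecting mechanism is far more elaborate.

        def p_controller(error):
            return 2.0 * error                       # proportional controller

        def pi_controller_factory():
            integral = 0.0
            def pi_controller(error):
                nonlocal integral
                integral += error
                return 1.5 * error + 0.1 * integral  # proportional-integral controller
            return pi_controller

        def mps_step(controllers, error, predict_cost):
            """Evaluate every controller in parallel (conceptually) and apply the best action."""
            actions = {name: ctrl(error) for name, ctrl in controllers.items()}
            best_name = min(actions, key=lambda n: predict_cost(error, actions[n]))
            return best_name, actions[best_name]

        controllers = {"P": p_controller, "PI": pi_controller_factory()}
        # Hypothetical one-step-ahead cost: the control is assumed to reduce the error by 0.1*u.
        cost = lambda err, u: abs(err - 0.1 * u) + 0.01 * abs(u)
        print(mps_step(controllers, error=-1.0, predict_cost=cost))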

  9. QDP++: Data Parallel Interface for QCD

    Energy Technology Data Exchange (ETDEWEB)

    Robert Edwards

    2003-03-01

    This is a user's guide for the C++ binding for the QDP Data Parallel Applications Programmer Interface developed under the auspices of the US Department of Energy Scientific Discovery through Advanced Computing (SciDAC) program. The QDP Level 2 API has the following features: (1) Provides data parallel operations (logically SIMD) on all sites across the lattice or subsets of these sites. (2) Operates on lattice objects, which have an implementation-dependent data layout that is not visible above this API. (3) Hides details of how the implementation maps onto a given architecture, namely how the logical problem grid (i.e., lattice) is mapped onto the machine architecture. (4) Allows asynchronous (non-blocking) shifts of lattice-level objects over any permutation map of sites onto sites. However, from the user's view these instructions appear blocking and in fact may be so in some implementations. (5) Provides broadcast operations (filling a lattice quantity from a scalar value(s)), global reduction operations, and lattice-wide operations on various data-type primitives, such as matrices, vectors, and tensor products of matrices (propagators). (6) Operator syntax that supports complex expression construction.

  10. [Options for flap coverage in pressure sores].

    Science.gov (United States)

    Nae, S; Antohi, N; Stîngu, C; Stan, V; Parasca, S

    2010-01-01

    Despite improvements in reconstructive techniques for pressure sores, recurrences are still seen frequently, and success rate remains variable. During 2003 - 2007, at the Emergency Hospital for Plastic Surgery and Burns in Bucharest, 27 patients underwent surgical repair of 45 pressure sores located at sacral (22 ulcers), ischial (12 ulcers) and trochanteric (11 ulcers) regions. The mean patient age was 57.1 years (range 26 to 82 years). Mean postoperative follow-up was 6 months (range 2 months - 2 years). There were 18 complications for the 45 sores (40%). At 6 months postoperatively, recurrence was noted in 12 ulcers (27%). Details regarding indications, contraindications, advantages and disadvantages for different coverage options are outlined. The authors advocate the importance of surgical coverage in reducing morbidity, mortality and treatment costs.

  11. Parallel Task Processing on a Multicore Platform in a PC-based Control System for Parallel Kinematics

    Directory of Open Access Journals (Sweden)

    Harald Michalik

    2009-02-01

    Full Text Available Multicore platforms have one physical processor chip with multiple cores interconnected via a chip-level bus. Because they deliver greater computing power through concurrency and offer greater system density, multicore platforms are well suited to address the performance bottleneck encountered in PC-based control systems for parallel kinematic robots with heavy CPU load. Heavy-load control tasks are generated by new control approaches that include features such as singularity prediction, structure control algorithms, vision data integration and similar tasks. In this paper we introduce the parallel task scheduling extension of a communication architecture specially tailored for the development of PC-based control of parallel kinematics. The scheduling is specially designed for processing on a multicore platform. It breaks down the serial task processing of the robot control cycle and extends it with parallel task processing paths in order to enhance the overall control performance.

  12. Worker Sorting, Taxes and Health Insurance Coverage

    OpenAIRE

    Kevin Lang; Hong Kang

    2007-01-01

    We develop a model in which firms hire heterogeneous workers but must offer all workers insurance benefits under similar terms. In equilibrium, some firms offer free health insurance, some require an employee premium payment and some do not offer insurance. Making the employee contribution pre-tax lowers the cost to workers of a given employee premium and encourages more firms to charge. This increases the offer rate, lowers the take-up rate, increases (decreases) coverage among high (low) de...

  13. Cooperative storage of shared files in a parallel computing system with dynamic block size

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-11-10

    Improved techniques are provided for parallel writing of data to a shared object in a parallel computing system. A method is provided for storing data generated by a plurality of parallel processes to a shared object in a parallel computing system. The method is performed by at least one of the processes and comprises: dynamically determining a block size for storing the data; exchanging a determined amount of the data with at least one additional process to achieve a block of the data having the dynamically determined block size; and writing the block of the data having the dynamically determined block size to a file system. The determined block size comprises, e.g., a total amount of the data to be stored divided by the number of parallel processes. The file system comprises, for example, a log structured virtual parallel file system, such as a Parallel Log-Structured File System (PLFS).
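    The block-size rule quoted in the abstract (total amount of data divided by the number of parallel processes) is easy to sketch. The redistribution and file-system write steps below are placeholders rather than the patented PLFS mechanism, and the per-process sizes are hypothetical.

        def dynamic_block_size(total_bytes, num_processes):
            """Block size = total amount of data divided by the number of parallel processes."""
            return -(-total_bytes // num_processes)      # ceiling division so all data is covered

        def plan_writes(process_sizes):
            """Return (offset, length) each process should write into the shared object."""
            total = sum(process_sizes)
            block = dynamic_block_size(total, len(process_sizes))
            plan, offset = [], 0
            for _ in process_sizes:
                length = min(block, total - offset)
                plan.append((offset, length))
                offset += length
            return block, plan

        # Four processes holding uneven amounts of data (hypothetical sizes, in bytes).
        print(plan_writes([100, 250, 50, 200]))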

  14. State Medicaid Expansion Tobacco Cessation Coverage and Number of Adult Smokers Enrolled in Expansion Coverage - United States, 2016.

    Science.gov (United States)

    DiGiulio, Anne; Haddix, Meredith; Jump, Zach; Babb, Stephen; Schecter, Anna; Williams, Kisha-Ann S; Asman, Kat; Armour, Brian S

    2016-12-09

    In 2015, 27.8% of adult Medicaid enrollees were current cigarette smokers, compared with 11.1% of adults with private health insurance, placing Medicaid enrollees at increased risk for smoking-related disease and death (1). In addition, smoking-related diseases are a major contributor to Medicaid costs, accounting for about 15% (>$39 billion) of annual Medicaid spending during 2006-2010 (2). Individual, group, and telephone counseling and seven Food and Drug Administration (FDA)-approved medications are effective treatments for helping tobacco users quit (3). Insurance coverage for tobacco cessation treatments is associated with increased quit attempts, use of cessation treatments, and successful smoking cessation (3); this coverage has the potential to reduce Medicaid costs (4). However, barriers such as requiring copayments and prior authorization for treatment can impede access to cessation treatments (3,5). As of July 1, 2016, 32 states (including the District of Columbia) have expanded Medicaid eligibility through the Patient Protection and Affordable Care Act (ACA),* ,† which has increased access to health care services, including cessation treatments (5). CDC used data from the Centers for Medicare and Medicaid Services (CMS) Medicaid Budget and Expenditure System (MBES) and the Behavioral Risk Factor Surveillance System (BRFSS) to estimate the number of adult smokers enrolled in Medicaid expansion coverage. To assess cessation coverage among Medicaid expansion enrollees, the American Lung Association collected data on coverage of, and barriers to accessing, evidence-based cessation treatments. As of December 2015, approximately 2.3 million adult smokers were newly enrolled in Medicaid because of Medicaid expansion. As of July 1, 2016, all 32 states that have expanded Medicaid eligibility under ACA covered some cessation treatments for all Medicaid expansion enrollees, with nine states covering all nine cessation treatments for all Medicaid expansion

  15. Indonesia's road to universal health coverage: a political journey.

    Science.gov (United States)

    Pisani, Elizabeth; Olivier Kok, Maarten; Nugroho, Kharisma

    2017-03-01

    In 2013 Indonesia, the world's fourth most populous country, declared that it would provide affordable health care for all its citizens within seven years. This crystallised an ambition first enshrined in law over five decades earlier, but never previously realised. This paper explores Indonesia's journey towards universal health coverage (UHC) from independence to the launch of a comprehensive health insurance scheme in January 2014. We find that Indonesia's path has been determined largely by domestic political concerns – different groups obtained access to healthcare as their socio-political importance grew. A major inflection point occurred following the Asian financial crisis of 1997. To stave off social unrest, the government provided health coverage for the poor for the first time, creating a path dependency that influenced later policy choices. The end of this programme coincided with decentralisation, leading to experimentation with several different models of health provision at the local level. When direct elections for local leaders were introduced in 2005, popular health schemes led to success at the polls. UHC became an electoral asset, moving up the political agenda. It also became contested, with national policy-makers appropriating health insurance programmes that were first developed locally, and taking credit for them. The Indonesian experience underlines the value of policy experimentation, and of a close understanding of the contextual and political factors that drive successful UHC models at the local level. Specific drivers of success and failure should be taken into account when scaling UHC to the national level. In the Indonesian example, UHC became possible when the interests of politically and economically influential groups were either satisfied or neutralised. While technical considerations took a back seat to political priorities in developing the structures for health coverage nationally, they will have to be addressed going forward

  16. Interpregnancy intervals: impact of postpartum contraceptive effectiveness and coverage.

    Science.gov (United States)

    Thiel de Bocanegra, Heike; Chang, Richard; Howell, Mike; Darney, Philip

    2014-04-01

    The purpose of this study was to determine the use of contraceptive methods, which was defined by effectiveness, length of coverage, and their association with short interpregnancy intervals, when controlling for provider type and client demographics. We identified a cohort of 117,644 women from the 2008 California Birth Statistical Master file with second or higher order birth and at least 1 Medicaid (Family Planning, Access, Care, and Treatment [Family PACT] program or Medi-Cal) claim within 18 months after index birth. We explored the effect of contraceptive method provision on the odds of having an optimal interpregnancy interval and controlled for covariates. The average length of contraceptive coverage was 3.81 months (SD = 4.84). Most women received user-dependent hormonal contraceptives as their most effective contraceptive method (55%; n = 65,103 women) and one-third (33%; n = 39,090 women) had no contraceptive claim. Women who used long-acting reversible contraceptive methods had 3.89 times the odds and women who used user-dependent hormonal methods had 1.89 times the odds of achieving an optimal birth interval compared with women who used barrier methods only; women with no method had 0.66 times the odds. When user-dependent methods are considered, the odds of having an optimal birth interval increased for each additional month of contraceptive coverage by 8% (odds ratio, 1.08; 95% confidence interval, 1.08-1.09). Women who were seen by Family PACT or by both Family PACT and Medi-Cal providers had significantly higher odds of optimal birth intervals compared with women who were served by Medi-Cal only. To achieve optimal birth spacing and ultimately to improve birth outcomes, attention should be given to contraceptive counseling and access to contraceptive methods in the postpartum period. Copyright © 2014 Mosby, Inc. All rights reserved.

  17. Portable programming on parallel/networked computers using the Application Portable Parallel Library (APPL)

    Science.gov (United States)

    Quealy, Angela; Cole, Gary L.; Blech, Richard A.

    1993-01-01

    The Application Portable Parallel Library (APPL) is a subroutine-based library of communication primitives that is callable from applications written in FORTRAN or C. APPL provides a consistent programmer interface to a variety of distributed and shared-memory multiprocessor MIMD machines. The objective of APPL is to minimize the effort required to move parallel applications from one machine to another, or to a network of homogeneous machines. APPL encompasses many of the message-passing primitives that are currently available on commercial multiprocessor systems. This paper describes APPL (version 2.3.1) and its usage, reports the status of the APPL project, and indicates possible directions for the future. Several applications using APPL are discussed, as well as performance and overhead results.

  18. Automatic Loop Parallelization via Compiler Guided Refactoring

    DEFF Research Database (Denmark)

    Larsen, Per; Ladelsky, Razya; Lidman, Jacob

    For many parallel applications, performance relies not on instruction-level parallelism, but on loop-level parallelism. Unfortunately, many modern applications are written in ways that obstruct automatic loop parallelization. Since we cannot identify sufficient parallelization opportunities...... for these codes in a static, off-line compiler, we developed an interactive compilation feedback system that guides the programmer in iteratively modifying application source, thereby improving the compiler’s ability to generate loop-parallel code. We use this compilation system to modify two sequential...... benchmarks, finding that the code parallelized in this way runs up to 8.3 times faster on an octo-core Intel Xeon 5570 system and up to 12.5 times faster on a quad-core IBM POWER6 system. Benchmark performance varies significantly between the systems. This suggests that semi-automatic parallelization should...

  19. Applied Parallel Computing Industrial Computation and Optimization

    DEFF Research Database (Denmark)

    Madsen, Kaj; NA NA NA Olesen, Dorte

    Proceedings of the Third International Workshop on Applied Parallel Computing in Industrial Problems and Optimization (PARA96).

  20. Parallelizing AT with MatlabMPI

    International Nuclear Information System (INIS)

    2011-01-01

    The Accelerator Toolbox (AT) is a high-level collection of tools and scripts specifically oriented toward solving problems dealing with computational accelerator physics. It is integrated into the MATLAB environment, which provides an accessible, intuitive interface for accelerator physicists, allowing researchers to focus the majority of their efforts on simulations and calculations, rather than programming and debugging difficulties. Efforts toward parallelization of AT have been put in place to upgrade its performance to modern standards of computing. We utilized the packages MatlabMPI and pMatlab, which were developed by MIT Lincoln Laboratory, to set up a message-passing environment that could be called within MATLAB, setting up the necessary prerequisites for multithreaded processing. On local quad-core CPUs, we were able to demonstrate processor efficiencies of roughly 95% and speed increases of nearly 380%. By exploiting the efficacy of modern-day parallel computing, we were able to demonstrate incredibly efficient speed increments per processor in AT's beam-tracking functions. Extrapolating from prediction, we can expect to reduce week-long computation runtimes to less than 15 minutes. This is a huge performance improvement and has enormous implications for the future computing power of the accelerator physics group at SSRL. However, one of the downfalls of parringpass is its current lack of transparency; the pMatlab and MatlabMPI packages must first be well-understood by the user before the system can be configured to run the scripts. In addition, the instantiation of argument parameters requires internal modification of the source code. Thus, parringpass cannot be directly run from the MATLAB command line, which detracts from its flexibility and user-friendliness. Future work in AT's parallelization will focus on development of external functions and scripts that can be called from within MATLAB and configured on multiple nodes, while

  1. Out-of-order parallel discrete event simulation for electronic system-level design

    CERN Document Server

    Chen, Weiwei

    2014-01-01

    This book offers readers a set of new approaches, tools and techniques for facing the challenges of parallelization in the design of embedded systems. It provides an advanced parallel simulation infrastructure for efficient and effective system-level model validation and development so as to build better products in less time. Since parallel discrete event simulation (PDES) has the potential to exploit the underlying parallel computational capability in today's multi-core simulation hosts, the author begins by reviewing the parallelization of discrete event simulation, identifyin

  2. A class of parallel algorithms for computation of the manipulator inertia matrix

    Science.gov (United States)

    Fijany, Amir; Bejczy, Antal K.

    1989-01-01

    Parallel and parallel/pipeline algorithms for computation of the manipulator inertia matrix are presented. An algorithm based on composite rigid-body spatial inertia method, which provides better features for parallelization, is used for the computation of the inertia matrix. Two parallel algorithms are developed which achieve the time lower bound in computation. Also described is the mapping of these algorithms with topological variation on a two-dimensional processor array, with nearest-neighbor connection, and with cardinality variation on a linear processor array. An efficient parallel/pipeline algorithm for the linear array was also developed, but at significantly higher efficiency.

  3. Parallel algorithms and cluster computing

    CERN Document Server

    Hoffmann, Karl Heinz

    2007-01-01

    This book presents major advances in high performance computing as well as major advances due to high performance computing. It contains a collection of papers in which results achieved in the collaboration of scientists from computer science, mathematics, physics, and mechanical engineering are presented. From the science problems to the mathematical algorithms and on to the effective implementation of these algorithms on massively parallel and cluster computers we present state-of-the-art methods and technology as well as exemplary results in these fields. This book shows that problems which seem superficially distinct become intimately connected on a computational level.

  4. Parallel computation of rotating flows

    DEFF Research Database (Denmark)

    Lundin, Lars Kristian; Barker, Vincent A.; Sørensen, Jens Nørkær

    1999-01-01

    This paper deals with the simulation of 3‐D rotating flows based on the velocity‐vorticity formulation of the Navier‐Stokes equations in cylindrical coordinates. The governing equations are discretized by a finite difference method. The solution is advanced to a new time level by a two‐step process...... is that of solving a singular, large, sparse, over‐determined linear system of equations, and the iterative method CGLS is applied for this purpose. We discuss some of the mathematical and numerical aspects of this procedure and report on the performance of our software on a wide range of parallel computers. Darbe...

  5. Vector and parallel processors in computational science. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Duff, I S; Reid, J K

    1985-01-01

    This volume contains papers from most of the invited talks and from several of the contributed talks and poster sessions presented at VAPP II. The contents present an extensive coverage of all important aspects of vector and parallel processors, including hardware, languages, numerical algorithms and applications. The topics covered include descriptions of new machines (both research and commercial machines), languages and software aids, and general discussions of whole classes of machines and their uses. Numerical methods papers include Monte Carlo algorithms, iterative and direct methods for solving large systems, finite elements, optimization, random number generation and mathematical software. The specific applications covered include neutron diffusion calculations, molecular dynamics, weather forecasting, lattice gauge calculations, fluid dynamics, flight simulation, cartography, image processing and cryptography. Most machines and architecture types are being used for these applications. many refs.

  6. The parallel volume at large distances

    DEFF Research Database (Denmark)

    Kampf, Jürgen

    In this paper we examine the asymptotic behavior of the parallel volume of planar non-convex bodies as the distance tends to infinity. We show that the difference between the parallel volume of the convex hull of a body and the parallel volume of the body itself tends to 0. This yields a new proof for the fact that a planar body can only have polynomial parallel volume, if it is convex. Extensions to Minkowski spaces and random sets are also discussed.

  8. Assessing Requirements Quality through Requirements Coverage

    Science.gov (United States)

    Rajan, Ajitha; Heimdahl, Mats; Woodham, Kurt

    2008-01-01

    In model-based development, the development effort is centered around a formal description of the proposed software system: the model. This model is derived from some high-level requirements describing the expected behavior of the software. For validation and verification purposes, this model can then be subjected to various types of analysis, for example, completeness and consistency analysis [6], model checking [3], theorem proving [1], and test-case generation [4, 7]. This development paradigm is making rapid inroads in certain industries, e.g., automotive, avionics, space applications, and medical technology. This shift towards model-based development naturally leads to changes in the verification and validation (V&V) process. The model validation problem, that of determining whether the model accurately captures the customer's high-level requirements, has received little attention, and the sufficiency of the validation activities has been largely determined through ad-hoc methods. Since the model serves as the central artifact, its correctness with respect to the user's needs is absolutely crucial. In our investigation, we attempt to answer the following two questions with respect to validation: (1) Are the requirements sufficiently defined for the system? and (2) How well does the model implement the behaviors specified by the requirements? The second question can be addressed using formal verification. Nevertheless, the size and complexity of many industrial systems make formal verification infeasible even if we have a formal model and formalized requirements. Thus, presently, there is no objective way of answering these two questions. To this end, we propose an approach based on testing that, when given a set of formal requirements, explores the relationship between requirements-based structural test-adequacy coverage and model-based structural test-adequacy coverage. The proposed technique uses requirements coverage metrics defined in [9] on formal high-level software

  9. An analysis of the policy coverage and examination of ...

    African Journals Online (AJOL)

    ... topics in subjects such as Life Sciences, Physical Sciences, Life Orientation, ... The aim of the research reported here was to investigate the coverage and ... In analysing the coverage and examination of environmental-impact topics, ...

  10. Assessment of Effective Coverage of Voluntary Counseling and ...

    African Journals Online (AJOL)

    Assessment of Effective Coverage of Voluntary Counseling and Testing ... The objective of this study was to assess effective coverage level for Voluntary Counseling and testing services in major health facilities ... AJOL African Journals Online.

  11. Determinants of vaccination coverage among pastoralists in north ...

    African Journals Online (AJOL)

    Determinants of vaccination coverage among pastoralists in north eastern Kenya. ... Attitudes, and Practices (KAPs) on vaccination coverage among settled and ... We used a structured instrument to survey pastoralist mothers with children ...

  12. A Parallel Approach to Fractal Image Compression

    Directory of Open Access Journals (Sweden)

    Lubomir Dedera

    2004-01-01

    Full Text Available The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms from the point of view of both the achieved coding and decoding time and the effectiveness of parallelization.

  13. Parallel Computing Using Web Servers and "Servlets".

    Science.gov (United States)

    Lo, Alfred; Bloor, Chris; Choi, Y. K.

    2000-01-01

    Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…

  14. Comparison of parallel viscosity with neoclassical theory

    International Nuclear Information System (INIS)

    Ida, K.; Nakajima, N.

    1996-04-01

    Toroidal rotation profiles are measured with charge exchange spectroscopy for plasma heated with tangential NBI in the CHS heliotron/torsatron device to estimate the parallel viscosity. The parallel viscosity derived from the toroidal rotation velocity shows good agreement with the neoclassical parallel viscosity plus the perpendicular viscosity (μ⊥ = 2 m²/s). (author)

  15. Development and application of a 6.5 million feature Affymetrix Genechip® for massively parallel discovery of single position polymorphisms in lettuce (Lactuca spp.)

    Directory of Open Access Journals (Sweden)

    Stoffel Kevin

    2012-05-01

    Full Text Available Background: High-resolution genetic maps are needed in many crops to help characterize the genetic diversity that determines agriculturally important traits. Hybridization to microarrays to detect single feature polymorphisms is a powerful technique for marker discovery and genotyping because of its highly parallel nature. However, microarrays designed for gene expression analysis rarely provide sufficient gene coverage for optimal detection of nucleotide polymorphisms, which limits utility in species with low rates of polymorphism such as lettuce (Lactuca sativa). Results: We developed a 6.5 million feature Affymetrix GeneChip® for efficient polymorphism discovery and genotyping, as well as for analysis of gene expression in lettuce. Probes on the microarray were designed from 26,809 unigenes from cultivated lettuce and an additional 8,819 unigenes from four related species (L. serriola, L. saligna, L. virosa and L. perennis). Where possible, probes were tiled with a 2 bp stagger, alternating on each DNA strand; providing an average of 187 probes covering approximately 600 bp for each of over 35,000 unigenes; resulting in up to 13 fold redundancy in coverage per nucleotide. We developed protocols for hybridization of genomic DNA to the GeneChip® and refined custom algorithms that utilized coverage from multiple, high quality probes to detect single position polymorphisms in 2 bp sliding windows across each unigene. This allowed us to detect greater than 18,000 polymorphisms between the parental lines of our core mapping population, as well as numerous polymorphisms between cultivated lettuce and wild species in the lettuce genepool. Using marker data from our diversity panel comprised of 52 accessions from the five species listed above, we were able to separate accessions by species using both phylogenetic and principal component analyses. Additionally, we estimated the diversity between different types of cultivated lettuce and

  16. Development and application of a 6.5 million feature Affymetrix Genechip® for massively parallel discovery of single position polymorphisms in lettuce (Lactuca spp.).

    Science.gov (United States)

    Stoffel, Kevin; van Leeuwen, Hans; Kozik, Alexander; Caldwell, David; Ashrafi, Hamid; Cui, Xinping; Tan, Xiaoping; Hill, Theresa; Reyes-Chin-Wo, Sebastian; Truco, Maria-Jose; Michelmore, Richard W; Van Deynze, Allen

    2012-05-14

    High-resolution genetic maps are needed in many crops to help characterize the genetic diversity that determines agriculturally important traits. Hybridization to microarrays to detect single feature polymorphisms is a powerful technique for marker discovery and genotyping because of its highly parallel nature. However, microarrays designed for gene expression analysis rarely provide sufficient gene coverage for optimal detection of nucleotide polymorphisms, which limits utility in species with low rates of polymorphism such as lettuce (Lactuca sativa). We developed a 6.5 million feature Affymetrix GeneChip® for efficient polymorphism discovery and genotyping, as well as for analysis of gene expression in lettuce. Probes on the microarray were designed from 26,809 unigenes from cultivated lettuce and an additional 8,819 unigenes from four related species (L. serriola, L. saligna, L. virosa and L. perennis). Where possible, probes were tiled with a 2 bp stagger, alternating on each DNA strand; providing an average of 187 probes covering approximately 600 bp for each of over 35,000 unigenes; resulting in up to 13 fold redundancy in coverage per nucleotide. We developed protocols for hybridization of genomic DNA to the GeneChip® and refined custom algorithms that utilized coverage from multiple, high quality probes to detect single position polymorphisms in 2 bp sliding windows across each unigene. This allowed us to detect greater than 18,000 polymorphisms between the parental lines of our core mapping population, as well as numerous polymorphisms between cultivated lettuce and wild species in the lettuce genepool. Using marker data from our diversity panel comprised of 52 accessions from the five species listed above, we were able to separate accessions by species using both phylogenetic and principal component analyses. Additionally, we estimated the diversity between different types of cultivated lettuce and distinguished morphological types. By hybridizing
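
    The custom detection algorithm itself is not given in the record; the sketch below only illustrates the general idea of scanning a unigene in 2 bp windows and flagging positions where probe-level signals from two genotypes disagree consistently. The thresholds, the probe data model, and the function names are assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' algorithm): call single position
# polymorphisms by comparing per-probe hybridization signals of two lines
# in 2 bp sliding windows along a unigene.

def call_spps(signals_a, signals_b, window=2, min_probes=3, threshold=2.0):
    """signals_a / signals_b: dict position -> list of log2 probe intensities
    covering that position (redundant probe coverage). Returns positions
    where the mean signal difference exceeds `threshold` with enough
    probe support in the window."""
    calls = []
    positions = sorted(set(signals_a) & set(signals_b))
    for start in positions:
        window_pos = [p for p in positions if start <= p < start + window]
        a = [x for p in window_pos for x in signals_a[p]]
        b = [x for p in window_pos for x in signals_b[p]]
        if min(len(a), len(b)) < min_probes:
            continue  # not enough high-quality probes in this window
        diff = abs(sum(a) / len(a) - sum(b) / len(b))
        if diff >= threshold:
            calls.append(start)
    return calls
```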

  17. The Acoustic and Perceptual Effects of Series and Parallel Processing

    Directory of Open Access Journals (Sweden)

    Melinda C. Anderson

    2009-01-01

    Full Text Available Temporal envelope (TE) cues provide a great deal of speech information. This paper explores how spectral subtraction and dynamic-range compression gain modifications affect TE fluctuations for parallel and series configurations. In parallel processing, algorithms compute gains based on the same input signal, and the gains in dB are summed. In series processing, output from the first algorithm forms the input to the second algorithm. Acoustic measurements show that the parallel arrangement produces more gain fluctuations, introducing more changes to the TE than the series configurations. Intelligibility tests for normal-hearing (NH) and hearing-impaired (HI) listeners show (1) parallel processing gives significantly poorer speech understanding than an unprocessed (UNP) signal and the series arrangement and (2) series processing and UNP yield similar results. Speech quality tests show that UNP is preferred to both parallel and series arrangements, although spectral subtraction is the most preferred. No significant differences exist in sound quality between the series and parallel arrangements, or between the NH group and the HI group. These results indicate that gain modifications affect intelligibility and sound quality differently. Listeners appear to have a higher tolerance for gain modifications with regard to intelligibility, while judgments of sound quality appear to be more affected by smaller amounts of gain modification.
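
    To make the series/parallel distinction concrete, here is a minimal numerical sketch. The two gain rules are arbitrary placeholders standing in for spectral subtraction and dynamic-range compression (they are assumptions, not the algorithms used in the study); the dB arithmetic, summing gains in the parallel case and cascading stages in the series case, follows the description above.

```python
import numpy as np

# Two placeholder gain rules (in dB), each a function of its input signal.
def gain_a_db(x):
    return -3.0 * np.ones_like(x)            # hypothetical fixed attenuation

def gain_b_db(x):
    level = 20 * np.log10(np.abs(x) + 1e-12)
    return np.where(level > -20, -0.5 * (level + 20), 0.0)  # toy compressor

def parallel_processing(x):
    # Both gains are computed from the SAME input and summed in dB.
    total_db = gain_a_db(x) + gain_b_db(x)
    return x * 10 ** (total_db / 20)

def series_processing(x):
    # Output of the first stage forms the input to the second stage.
    y = x * 10 ** (gain_a_db(x) / 20)
    return y * 10 ** (gain_b_db(y) / 20)

x = np.linspace(0.01, 1.0, 8)
print(parallel_processing(x) - series_processing(x))  # the two strategies differ
```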

  18. A solution for automatic parallelization of sequential assembly code

    Directory of Open Access Journals (Sweden)

    Kovačević Đorđe

    2013-01-01

    Full Text Available Since modern multicore processors can execute existing sequential programs only on a single core, there is a strong need for automatic parallelization of program code. Relying on existing algorithms, this paper describes a new software tool for parallelization of sequential assembly code. The main goal of this paper is to develop a parallelizator that reads sequential assembler code and at the output provides parallelized code for a MIPS processor with multiple cores. The idea is the following: the parser translates the assembler input file into program objects suitable for further processing. After that, static single assignment is performed. Based on the data flow graph, the parallelization algorithm distributes instructions onto different cores. Once the sequential code is parallelized by the parallelization algorithm, registers are allocated with the algorithm for linear allocation, and the result at the end is distributed assembler code on each of the cores. In the paper we evaluate the speedup of the matrix multiplication example, which was processed by the parallelizator of assembly code. The result is almost linear speedup of code execution, which increases with the number of cores. The speedup on two cores is 1.99, while on 16 cores the speedup is 13.88.
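
    The paper's parallelizator is not reproduced here; the following sketch only illustrates the central step it describes, partitioning instructions across cores based on a data-flow (dependence) graph, using a toy instruction format and a greedy assignment. The instruction encoding and the heuristic are assumptions for illustration only.

```python
# Toy sketch of dependence-based instruction partitioning (not the tool
# described above). Instructions are triples (dest, op, sources); an
# instruction depends on the most recent writer of each source register.

from collections import defaultdict

def partition(instructions, n_cores):
    last_writer = {}
    deps = defaultdict(set)
    for i, (dest, _op, srcs) in enumerate(instructions):
        for s in srcs:
            if s in last_writer:
                deps[i].add(last_writer[s])
        last_writer[dest] = i

    # Greedy assignment: keep an instruction with its single producer's core
    # when possible, otherwise place it on the least-loaded core.
    core_of, load = {}, [0] * n_cores
    for i, _ in enumerate(instructions):
        producer_cores = {core_of[d] for d in deps[i]}
        core = producer_cores.pop() if len(producer_cores) == 1 else load.index(min(load))
        core_of[i] = core
        load[core] += 1
    return core_of

prog = [("r1", "li", []), ("r2", "li", []), ("r3", "add", ["r1", "r2"])]
print(partition(prog, 2))  # e.g. {0: 0, 1: 1, 2: 0}
```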

  19. Parallelization of MCNP4 code by using simple FORTRAN algorithms

    International Nuclear Information System (INIS)

    Yazid, P.I.; Takano, Makoto; Masukawa, Fumihiro; Naito, Yoshitaka.

    1993-12-01

    Simple FORTRAN algorithms, which rely only on open, close, read and write statements, together with disk files and some UNIX commands, have been applied to the parallelization of MCNP4. The code, named MCNPNFS, maintains almost all capabilities of MCNP4 in solving shielding problems. It is able to perform parallel computing on any set of UNIX workstations connected by a network, regardless of the heterogeneity of the hardware, provided that all processors produce binary files in the same format. Further, it is confirmed that MCNPNFS can also be executed on the Monte-4 vector-parallel computer. MCNPNFS has been tested intensively by executing 5 photon-neutron benchmark problems, a spent-fuel cask problem and 17 sample problems included in the original code package of MCNP4. Three different workstations, connected by a network, were used to execute MCNPNFS in parallel. By measuring CPU time, the parallel efficiency is determined to be 58% to 99%, and 86% on average. On Monte-4, MCNPNFS has been executed using 4 processors concurrently and has achieved a parallel efficiency of 79% on average. (author)
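
    For reference, parallel efficiency figures like those quoted above are usually computed from measured CPU times as speedup divided by the number of processors; the exact definition used in the report is not spelled out, so the helper below is an assumption about that convention rather than the authors' formula.

```python
def parallel_efficiency(t_serial, t_parallel, n_processors):
    """Efficiency = speedup / N = (T_1 / T_N) / N."""
    return (t_serial / t_parallel) / n_processors

# Example: a run that is about 3.16x faster on 4 processors gives ~79%
# efficiency, comparable to the Monte-4 figure quoted above.
print(parallel_efficiency(100.0, 31.6, 4))  # ~0.79
```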

  20. Coverage and Compliance of Mass Drug Administration in Lymphatic Filariasis: A Comparative Analysis in a District of West Bengal, India

    Directory of Open Access Journals (Sweden)

    Tanmay Kanti Panja

    2012-01-01

    Full Text Available Background: Despite several rounds of Mass Drug Administration (MDA) as an elimination strategy for Lymphatic Filariasis (LF) in India, coverage is still far behind the required level of 85%. Objectives: The present study was carried out with the objectives of assessing the coverage and compliance of MDA and their possible determinants. Methods: A cross-sectional community-based study was conducted in Paschim Midnapur district of West Bengal, India for two consecutive years following MDA. Study participants were chosen by the 30-cluster sampling technique. Data were collected using a pre-tested semi-structured proforma to assess the coverage and compliance of MDA along with possible determinants of not attaining the expected coverage. Results: In the year 2009, coverage, compliance, the coverage-compliance gap (CCG) and effective coverage were 84.1%, 70.5%, 29.5% and 59.3% respectively. In 2010, the results further deteriorated to 78.5%, 66.9%, 33.3% and 57% respectively. The poor coverage and compliance were attributed to improper training of service providers and lack of community awareness regarding MDA. Conclusion: The study emphasized supervised consumption, retraining of service providers before MDA activities, and strengthening the behaviour change communication strategy for community awareness. Advocacy by programme managers and policy makers towards prioritization of the MDA programme will make the story of filaria elimination a success.
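
    Definitions of these indicators vary between MDA surveys. Under one common convention, which is an assumption here and not necessarily the authors' exact formulas, effective coverage is the product of coverage and compliance among those covered, and the coverage-compliance gap is the share of covered individuals who did not actually consume the drug. With those definitions the 2009 figures above are approximately reproduced:

```python
# Hedged illustration of one common convention for MDA indicators
# (the study's exact definitions may differ):
#   effective coverage      = coverage * compliance(among those covered)
#   coverage-compliance gap = 1 - compliance(among those covered)

def mda_indicators(coverage, compliance_among_covered):
    effective = coverage * compliance_among_covered
    ccg = 1.0 - compliance_among_covered
    return effective, ccg

# 2009 figures from the study: coverage 84.1%, compliance 70.5%.
effective, ccg = mda_indicators(0.841, 0.705)
print(round(effective * 100, 1), round(ccg * 100, 1))  # ~59.3 and 29.5
```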

  1. Parallel altitudinal clines reveal trends in adaptive evolution of genome size in Zea mays

    Science.gov (United States)

    Berg, Jeremy J.; Birchler, James A.; Grote, Mark N.; Lorant, Anne; Quezada, Juvenal

    2018-01-01

    While the vast majority of genome size variation in plants is due to differences in repetitive sequence, we know little about how selection acts on repeat content in natural populations. Here we investigate parallel changes in intraspecific genome size and repeat content of domesticated maize (Zea mays) landraces and their wild relative teosinte across altitudinal gradients in Mesoamerica and South America. We combine genotyping, low coverage whole-genome sequence data, and flow cytometry to test for evidence of selection on genome size and individual repeat abundance. We find that population structure alone cannot explain the observed variation, implying that clinal patterns of genome size are maintained by natural selection. Our modeling additionally provides evidence of selection on individual heterochromatic knob repeats, likely due to their large individual contribution to genome size. To better understand the phenotypes driving selection on genome size, we conducted a growth chamber experiment using a population of highland teosinte exhibiting extensive variation in genome size. We find weak support for a positive correlation between genome size and cell size, but stronger support for a negative correlation between genome size and the rate of cell production. Reanalyzing published data of cell counts in maize shoot apical meristems, we then identify a negative correlation between cell production rate and flowering time. Together, our data suggest a model in which variation in genome size is driven by natural selection on flowering time across altitudinal clines, connecting intraspecific variation in repetitive sequence to important differences in adaptive phenotypes. PMID:29746459

  2. Mobile-robot navigation with complete coverage of unstructured environments

    OpenAIRE

    García Armada, Elena; González de Santos, Pablo

    2004-01-01

    There are some mobile-robot applications that require the complete coverage of an unstructured environment. Examples are humanitarian de-mining and floor-cleaning tasks. A complete-coverage algorithm is then used, a path-planning technique that allows the robot to pass over all points in the environment, avoiding unknown obstacles. Different coverage algorithms exist, but they fail working in unstructured environments. This paper details a complete-coverage algorithm for unstructured environm...

  3. Analysis of multigrid methods on massively parallel computers: Architectural implications

    Science.gov (United States)

    Matheson, Lesley R.; Tarjan, Robert E.

    1993-01-01

    We study the potential performance of multigrid algorithms running on massively parallel computers with the intent of discovering whether presently envisioned machines will provide an efficient platform for such algorithms. We consider the domain parallel version of the standard V cycle algorithm on model problems, discretized using finite difference techniques in two and three dimensions on block structured grids of size 10^6 and 10^9, respectively. Our models of parallel computation were developed to reflect the computing characteristics of the current generation of massively parallel multicomputers. These models are based on an interconnection network of 256 to 16,384 message passing, 'workstation size' processors executing in an SPMD mode. The first model accomplishes interprocessor communications through a multistage permutation network. The communication cost is a logarithmic function which is similar to the costs in a variety of different topologies. The second model allows single stage communication costs only. Both models were designed with information provided by machine developers and utilize implementation derived parameters. With the medium grain parallelism of the current generation and the high fixed cost of an interprocessor communication, our analysis suggests an efficient implementation requires the machine to support the efficient transmission of long messages, (up to 1000 words) or the high initiation cost of a communication must be significantly reduced through an alternative optimization technique. Furthermore, with variable length message capability, our analysis suggests the low diameter multistage networks provide little or no advantage over a simple single stage communications network.
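
    A toy version of the kind of communication model described above: a V-cycle on a block-structured grid where, on each level, every processor exchanges halo data with its neighbours at a fixed per-message start-up (latency) cost plus a per-word cost. The parameter values and the halo geometry are illustrative assumptions, not the machine-derived parameters of the study.

```python
# Toy V-cycle communication model: per level, each processor exchanges a
# halo with its neighbours; cost = messages * latency + words * per_word.

def vcycle_comm_time(n_points_per_proc, levels, neighbours,
                     latency, per_word, dim=2):
    total = 0.0
    n = n_points_per_proc
    for _ in range(levels):
        halo_words = int(n ** ((dim - 1) / dim))   # one face of the local block
        total += neighbours * (latency + per_word * halo_words)
        n = max(n // (2 ** dim), 1)                # coarsen the local grid
    return total

# With a high fixed message cost, latency dominates on the coarse levels,
# which is the effect highlighted in the analysis above.
print(vcycle_comm_time(10**6 // 256, levels=8, neighbours=4,
                       latency=1e-4, per_word=1e-7))
```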

  4. Advances in randomized parallel computing

    CERN Document Server

    Rajasekaran, Sanguthevar

    1999-01-01

    The technique of randomization has been employed to solve numerous problems of computing both sequentially and in parallel. Examples of randomized algorithms that are asymptotically better than their deterministic counterparts in solving various fundamental problems abound. Randomized algorithms have the advantages of simplicity and better performance both in theory and often in practice. This book is a collection of articles written by renowned experts in the area of randomized parallel computing. A brief introduction to randomized algorithms: In the analysis of algorithms, at least three different measures of performance can be used: the best case, the worst case, and the average case. Often, the average case run time of an algorithm is much smaller than the worst case. For instance, the worst case run time of Hoare's quicksort is O(n^2), whereas its average case run time is only O(n log n). The average case analysis is conducted with an assumption on the input space. The assumption made to arrive at t...
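
    As a concrete instance of the randomization technique mentioned above, here is a standard randomized quicksort sketch: choosing the pivot uniformly at random makes the expected running time O(n log n) on every input, which removes the dependence on an assumed input distribution that plain average-case analysis requires.

```python
import random

def randomized_quicksort(a):
    """Expected O(n log n) comparisons for any input ordering, because the
    pivot is chosen uniformly at random rather than at a fixed position."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

print(randomized_quicksort([3, 1, 4, 1, 5, 9, 2, 6]))
```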

  5. Xyce parallel electronic simulator design.

    Energy Technology Data Exchange (ETDEWEB)

    Thornquist, Heidi K.; Rankin, Eric Lamont; Mei, Ting; Schiek, Richard Louis; Keiter, Eric Richard; Russo, Thomas V.

    2010-09-01

    This document is the Xyce Circuit Simulator developer guide. Xyce has been designed from the 'ground up' to be a SPICE-compatible, distributed memory parallel circuit simulator. While it is in many respects a research code, Xyce is intended to be a production simulator. As such, having software quality engineering (SQE) procedures in place to ensure a high level of code quality and robustness is essential. Version control, issue tracking, customer support, C++ style guidelines and the Xyce release process are all described. The Xyce Parallel Electronic Simulator has been under development at Sandia since 1999. Historically, Xyce has mostly been funded by ASC, and the original focus of Xyce development has primarily been related to circuits for nuclear weapons. However, this has not been the only focus and it is expected that the project will diversify. Like many ASC projects, Xyce is a group development effort, which involves a number of researchers, engineers, scientists, mathematicians and computer scientists. In addition to diversity of background, it is to be expected on long-term projects that there will be a certain amount of staff turnover, as people move on to different projects. As a result, it is very important that the project maintain high software quality standards. The point of this document is to formally document a number of the software quality practices followed by the Xyce team in one place. Also, it is hoped that this document will be a good source of information for new developers.

  6. What hysteria? A systematic study of newspaper coverage of accused child molesters.

    Science.gov (United States)

    Cheit, Ross E

    2003-06-01

    There were three aims: First, to determine the extent to which those charged with child molestation receive newspaper coverage; second, to analyze the nature of that coverage; and third, to compare the universe of coverage to the nature of child molestation charges in the criminal justice system as a whole. Two databases were created. The first one identified all defendants charged with child molestation in Rhode Island in 1993. The database was updated after 5 years to include relevant information about case disposition. The second database was created by electronically searching the Providence Journal for every story that mentioned each defendant. Most defendants (56.1%) were not mentioned in the newspaper. Factors associated with a greater chance of coverage include: cases involving first-degree charges, cases with multiple counts, cases involving additional violence or multiple victims, and cases resulting in long prison sentences. The data indicate that the press exaggerates "stranger danger," while intra-familial cases are underreported. Newspaper accounts also minimize the extent to which guilty defendants avoid prison. Generalizing about the nature of child molestation cases in criminal court on the basis of newspaper coverage is inappropriate. The coverage is less extensive than often claimed, and it is skewed in ways that are typical of the mass media.

  7. Federally-Assisted Healthcare Coverage among Male State Prisoners with Chronic Health Problems.

    Directory of Open Access Journals (Sweden)

    David L Rosen

    Full Text Available Prisoners have higher rates of chronic diseases such as substance dependence, mental health conditions and infectious disease, as compared to the general population. We projected the number of male state prisoners with a chronic health condition who at release would be eligible or ineligible for healthcare coverage under the Affordable Care Act (ACA. We used ACA income guidelines in conjunction with reported pre-arrest social security benefits and income from a nationally representative sample of prisoners to estimate the number eligible for healthcare coverage at release. There were 643,290 US male prisoners aged 18-64 with a chronic health condition. At release, 73% in Medicaid-expansion states would qualify for Medicaid or tax credits. In non-expansion states, 54% would qualify for tax credits, but 22% (n = 69,827 had incomes of ≤ 100% the federal poverty limit and thus would be ineligible for ACA-mediated healthcare coverage. These prisoners comprise 11% of all male prisoners with a chronic condition. The ACA was projected to provide coverage to most male state prisoners with a chronic health condition; however, roughly 70,000 fall in the "coverage gap" and may require non-routine care at emergency departments. Mechanisms are needed to secure coverage for this at risk group and address barriers to routine utilization of health services.

  8. A biologically inspired controller to solve the coverage problem in robotics.

    Science.gov (United States)

    Rañó, Iñaki; Santos, José A

    2017-06-05

    The coverage problem consists of computing a path or trajectory for a robot to pass over all the points in some free area and has applications ranging from floor cleaning to demining. Coverage is solved either as a planning problem, providing theoretical validation of the solution, or through heuristic techniques that rely on experimental validation. Through a combination of theoretical results and simulations, this paper presents a novel solution to the coverage problem that exploits the chaotic behaviour of a simple biologically inspired motion controller, the Braitenberg vehicle 2b. Although chaos has been used for coverage, our approach has much less restrictive assumptions about the environment and can be implemented using on-board sensors. First, we prove theoretically that this vehicle, a well-known model of animal tropotaxis, behaves as a charge in an electro-magnetic field. The motion equations can be reduced to a Hamiltonian system, and therefore the vehicle follows quasi-periodic or chaotic trajectories, which pass arbitrarily close to any point in the work-space, i.e. it solves the coverage problem. Secondly, through a set of extensive simulations, we show that the trajectories cover regions of bounded workspaces, and full coverage is achieved when the perceptual range of the vehicle is short. We compare the performance of this new approach with different types of random motion controllers in the same bounded environments.
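
    The paper's Hamiltonian analysis is not reproduced here; the sketch below only implements the standard Braitenberg vehicle 2b update, each sensor exciting the contralateral wheel, for a point robot and a single stimulus source, so the qualitative steering-toward-the-stimulus behaviour can be observed. All parameter values and the stimulus model are arbitrary assumptions.

```python
import math

# Minimal Braitenberg vehicle 2b: crossed excitatory sensor-to-wheel
# connections. Sensor readings decay with distance to a stimulus source.

def stimulus(x, y, sx=0.0, sy=0.0):
    return 1.0 / (1.0 + math.hypot(x - sx, y - sy))

def step(state, dt=0.05, base=0.2, gain=2.0, wheelbase=0.1, sensor_offset=0.05):
    x, y, heading = state
    # Left/right sensor positions, slightly ahead and to each side.
    lx = x + sensor_offset * math.cos(heading + 0.5)
    ly = y + sensor_offset * math.sin(heading + 0.5)
    rx = x + sensor_offset * math.cos(heading - 0.5)
    ry = y + sensor_offset * math.sin(heading - 0.5)
    # Vehicle 2b: the left sensor drives the RIGHT wheel and vice versa,
    # so the vehicle turns toward the stronger stimulus.
    v_right = base + gain * stimulus(lx, ly)
    v_left = base + gain * stimulus(rx, ry)
    v = 0.5 * (v_left + v_right)
    omega = (v_right - v_left) / wheelbase
    return (x + v * math.cos(heading) * dt,
            y + v * math.sin(heading) * dt,
            heading + omega * dt)

state = (1.0, 1.0, 0.0)
for _ in range(200):
    state = step(state)
print(state)
```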

  9. The influence of patient positioning in breast CT on breast tissue coverage and patient comfort

    Energy Technology Data Exchange (ETDEWEB)

    Roessler, A.C.; Althoff, F.; Kalender, W. [Erlangen Univ. (Germany). Inst. of Medical Physics; Wenkel, E. [University Hospital of Erlangen (Germany). Radiological Inst.

    2015-02-15

    The presented study aimed at optimizing a patient table design for breast CT (BCT) systems with respect to breast tissue coverage and patient comfort. Additionally, the benefits and acceptance of an immobilization device for BCT using underpressure were evaluated. Three different study parts were carried out. In a positioning study women were investigated on an MRI tabletop with exchangeable inserts (flat and cone-shaped with different opening diameters) to evaluate their influence on breast coverage and patient comfort in various positioning alternatives. Breast length and volume were calculated to compare positioning modalities including various opening diameters and forms. In the second study part, an underpressure system was tested for its functionality and comfort on a stereotactic biopsy table mimicking a future CT scanner table. In the last study part, this system was tested regarding breast tissue coverage. Best results for breast tissue coverage were shown for cone-shaped table inserts with an opening of 180 mm. Flat inserts did not provide complete coverage of breast tissue. The underpressure system showed robust function and tended to pull more breast tissue into the field of view. Patient comfort was rated good for all table inserts, with highest ratings for cone-shaped inserts. Cone-shaped tabletops appeared to be adequate for BCT systems and to allow imaging of almost the complete breast. An underpressure system proved promising for the fixation of the breast during imaging and increased coverage. Patient comfort appears to be adequate.

  10. A framework to estimate the coverage of AOPs in nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jinkyun; Jung, Wondea [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

    In this paper, a framework to estimate the coverage of AOPs in NPPs is proposed based on an SPV (Single Point Vulnerability) model. It is apparent that sufficient coverage of AOPs is one of the prerequisites for improving the operational safety of NPPs because they provide a series of proper actions to be conducted by human operators, which are crucial for coping with off-normal conditions caused by the failure of critical components. In this light, the catalog of BEs (i.e., SPV components) identified from an SPV model could be a good source of information to enhance the coverage of AOPs. Unfortunately, because of the sheer number of corresponding MCSs, a screening process that allows us to select critical MCSs had to be developed. For this reason, the MCSC score is defined along with the DIF concept. Based on the MCSC score, a framework that allows us to systematically investigate the coverage of AOPs is proposed in Ref. As a result, it is estimated that the coverage of AOPs being used in OPR1000 is about 63%. It should be noted that there are a couple of limitations in this study. For example, the precision of the abovementioned coverage entirely depends on that of the SPV model being scrutinized by the proposed framework. This implies that independent reviews by SMEs (Subject Matter Experts) who have sufficient knowledge of both the configuration and operation of NPPs are needed to confirm the appropriateness of the suggested framework.

  11. Federally-Assisted Healthcare Coverage among Male State Prisoners with Chronic Health Problems.

    Science.gov (United States)

    Rosen, David L; Grodensky, Catherine A; Holley, Tara K

    2016-01-01

    Prisoners have higher rates of chronic diseases such as substance dependence, mental health conditions and infectious disease, as compared to the general population. We projected the number of male state prisoners with a chronic health condition who at release would be eligible or ineligible for healthcare coverage under the Affordable Care Act (ACA). We used ACA income guidelines in conjunction with reported pre-arrest social security benefits and income from a nationally representative sample of prisoners to estimate the number eligible for healthcare coverage at release. There were 643,290 US male prisoners aged 18-64 with a chronic health condition. At release, 73% in Medicaid-expansion states would qualify for Medicaid or tax credits. In non-expansion states, 54% would qualify for tax credits, but 22% (n = 69,827) had incomes of ≤ 100% the federal poverty limit and thus would be ineligible for ACA-mediated healthcare coverage. These prisoners comprise 11% of all male prisoners with a chronic condition. The ACA was projected to provide coverage to most male state prisoners with a chronic health condition; however, roughly 70,000 fall in the "coverage gap" and may require non-routine care at emergency departments. Mechanisms are needed to secure coverage for this at risk group and address barriers to routine utilization of health services.

  12. A methodology for extending domain coverage in SemRep.

    Science.gov (United States)

    Rosemblat, Graciela; Shin, Dongwook; Kilicoglu, Halil; Sneiderman, Charles; Rindflesch, Thomas C

    2013-12-01

    We describe a domain-independent methodology to extend SemRep coverage beyond the biomedical domain. SemRep, a natural language processing application originally designed for biomedical texts, uses the knowledge sources provided by the Unified Medical Language System (UMLS©). Ontological and terminological extensions to the system are needed in order to support other areas of knowledge. We extended SemRep's application by developing a semantic representation of a previously unsupported domain. This was achieved by adapting well-known ontology engineering phases and integrating them with the UMLS knowledge sources on which SemRep crucially depends. While the process to extend SemRep coverage has been successfully applied in earlier projects, this paper presents in detail the step-wise approach we followed and the mechanisms implemented. A case study in the field of medical informatics illustrates how the ontology engineering phases have been adapted for optimal integration with the UMLS. We provide qualitative and quantitative results, which indicate the validity and usefulness of our methodology. Published by Elsevier Inc.

  13. Vaccination coverage among children in kindergarten - United States, 2013-14 school year.

    Science.gov (United States)

    Seither, Ranee; Masalovich, Svetlana; Knighton, Cynthia L; Mellerson, Jenelle; Singleton, James A; Greby, Stacie M

    2014-10-17

    State and local vaccination requirements for school entry are implemented to maintain high vaccination coverage and protect schoolchildren from vaccine-preventable diseases. Each year, to assess state and national vaccination coverage and exemption levels among kindergartners, CDC analyzes school vaccination data collected by federally funded state, local, and territorial immunization programs. This report describes vaccination coverage in 49 states and the District of Columbia (DC) and vaccination exemption rates in 46 states and DC for children enrolled in kindergarten during the 2013-14 school year. Median vaccination coverage was 94.7% for 2 doses of measles, mumps, and rubella (MMR) vaccine; 95.0% for varying local requirements for diphtheria, tetanus toxoid, and acellular pertussis (DTaP) vaccine; and 93.3% for 2 doses of varicella vaccine among those states with a 2-dose requirement. The median total exemption rate was 1.8%. High exemption levels and suboptimal vaccination coverage leave children vulnerable to vaccine-preventable diseases. Although vaccination coverage among kindergartners for the majority of reporting states was at or near the 95% national Healthy People 2020 targets for 4 doses of DTaP, 2 doses of MMR, and 2 doses of varicella vaccine, low vaccination coverage and high exemption levels can cluster within communities. Immunization programs might have access to school vaccination coverage and exemption rates at a local level for counties, school districts, or schools that can identify areas where children are more vulnerable to vaccine-preventable diseases. Health promotion efforts in these local areas can be used to help parents understand the risks for vaccine-preventable diseases and the protection that vaccinations provide to their children.

  14. Mesh-based parallel code coupling interface

    Energy Technology Data Exchange (ETDEWEB)

    Wolf, K.; Steckel, B. (eds.) [GMD - Forschungszentrum Informationstechnik GmbH, St. Augustin (DE). Inst. fuer Algorithmen und Wissenschaftliches Rechnen (SCAI)

    2001-04-01

    MpCCI (mesh-based parallel code coupling interface) is an interface for multidisciplinary simulations. It provides industrial end-users as well as commercial code-owners with the facility to combine different simulation tools in one environment. Thereby new solutions for multidisciplinary problems will be created. This opens new application dimensions for existing simulation tools. This Book of Abstracts gives a short overview of ongoing activities in industry and research - all presented at the 2nd MpCCI User Forum in February 2001 at GMD Sankt Augustin. (orig.)

  15. Parallel asynchronous systems and image processing algorithms

    Science.gov (United States)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse-coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently, as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity-dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision-system-type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after the raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.

  16. Parallel Numerical Simulations of Water Reservoirs

    Science.gov (United States)

    Torres, Pedro; Mangiavacchi, Norberto

    2010-11-01

    The study of water flow and scalar transport in water reservoirs is important for the determination of water quality during the initial stages of reservoir filling and during the life of the reservoir. For this purpose, a parallel 2D finite element code for solving the incompressible Navier-Stokes equations coupled with scalar transport was implemented using the message-passing programming model, in order to perform simulations of hydropower water reservoirs in a computer cluster environment. The spatial discretization is based on the MINI element, which satisfies the Babuska-Brezzi (BB) condition and thus provides sufficient conditions for a stable mixed formulation. All the distributed data structures needed in the different stages of the code, such as preprocessing, solving and post-processing, were implemented using the PETSc library. The resulting linear systems for the velocity and pressure fields were solved using the projection method, implemented by an approximate block LU factorization. In order to increase parallel performance in the solution of the linear systems, we employ the static condensation method, solving the intermediate velocity at vertex and centroid nodes separately. We compare performance results of the static condensation method with the approach of solving the complete system. In our tests the static condensation method shows better performance for large problems, at the cost of increased memory usage. Performance results for other intensive parts of the code on a computer cluster are also presented.
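
    Algebraically, the static condensation mentioned above amounts to eliminating one block of unknowns (here, the centroid/bubble degrees of freedom of the MINI element) through a Schur complement before solving for the remaining unknowns. The dense NumPy sketch below shows that algebra on a small generic block system; it is not the PETSc implementation used in the code, and the block structure is assumed for illustration.

```python
import numpy as np

# Block system [[A, B], [C, D]] [u_v, u_c]^T = [f_v, f_c]^T, where u_c are
# the locally eliminable (e.g. centroid/bubble) unknowns. Static condensation
# solves the Schur-complement system for u_v, then recovers u_c.

def solve_by_static_condensation(A, B, C, D, f_v, f_c):
    D_inv = np.linalg.inv(D)                    # cheap when D is block-diagonal
    S = A - B @ D_inv @ C                       # Schur complement
    rhs = f_v - B @ D_inv @ f_c
    u_v = np.linalg.solve(S, rhs)
    u_c = D_inv @ (f_c - C @ u_v)
    return u_v, u_c

rng = np.random.default_rng(0)
n, m = 6, 4
A = rng.random((n, n)) + n * np.eye(n)
B, C = rng.random((n, m)), rng.random((m, n))
D = rng.random((m, m)) + m * np.eye(m)
f_v, f_c = rng.random(n), rng.random(m)
u_v, u_c = solve_by_static_condensation(A, B, C, D, f_v, f_c)
full = np.linalg.solve(np.block([[A, B], [C, D]]), np.concatenate([f_v, f_c]))
print(np.allclose(np.concatenate([u_v, u_c]), full))  # True
```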

  17. 42 CFR 457.410 - Health benefits coverage options.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Health benefits coverage options. 457.410 Section 457.410 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... State Plan Requirements: Coverage and Benefits § 457.410 Health benefits coverage options. (a) Types of...

  18. 7 CFR 457.172 - Coverage Enhancement Option.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 6 2010-01-01 2010-01-01 false Coverage Enhancement Option. 457.172 Section 457.172..., DEPARTMENT OF AGRICULTURE COMMON CROP INSURANCE REGULATIONS § 457.172 Coverage Enhancement Option. The Coverage Enhancement Option for the 2009 and succeeding crop years are as follows: FCIC policies: United...

  19. 20 CFR 701.401 - Coverage under state compensation programs.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Coverage under state compensation programs...; DEFINITIONS AND USE OF TERMS Coverage Under State Compensation Programs § 701.401 Coverage under state compensation programs. (a) Exclusions from the definition of “employee” under § 701.301(a)(12), and the...

  20. 20 CFR 404.1065 - Self-employment coverage.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Self-employment coverage. 404.1065 Section... INSURANCE (1950- ) Employment, Wages, Self-Employment, and Self-Employment Income Self-Employment § 404.1065 Self-employment coverage. For an individual to have self-employment coverage under social security, the...

  1. 42 CFR 435.350 - Coverage for certain aliens.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Coverage for certain aliens. 435.350 Section 435... ISLANDS, AND AMERICAN SAMOA Optional Coverage of the Medically Needy § 435.350 Coverage for certain aliens... treatment of an emergency medical condition, as defined in § 440.255(c) of this chapter, to those aliens...

  2. 42 CFR 436.128 - Coverage for certain qualified aliens.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Coverage for certain qualified aliens. 436.128... Mandatory Coverage of the Categorically Needy § 436.128 Coverage for certain qualified aliens. The agency... § 440.255(c) of this chapter to those aliens described in § 436.406(c) of this subpart. [55 FR 36820...

  3. 29 CFR 2.12 - Audiovisual coverage permitted.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 1 2010-07-01 2010-07-01 true Audiovisual coverage permitted. 2.12 Section 2.12 Labor Office of the Secretary of Labor GENERAL REGULATIONS Audiovisual Coverage of Administrative Hearings § 2.12 Audiovisual coverage permitted. The following are the types of hearings where the Department...

  4. Computer Security in the Introductory Business Information Systems Course: An Exploratory Study of Textbook Coverage

    Science.gov (United States)

    Sousa, Kenneth J.; MacDonald, Laurie E.; Fougere, Kenneth T.

    2005-01-01

    The authors conducted an evaluation of Management Information Systems (MIS) textbooks and found that computer security receives very little in-depth coverage. The textbooks provide, at best, superficial treatment of security issues. The research results suggest that MIS faculty need to provide material to supplement the textbook to provide…

  5. 77 FR 70374 - Servicemembers' Group Life Insurance-Stillborn Child Coverage

    Science.gov (United States)

    2012-11-26

    ... is the biological mother of a stillborn and if both the surrogate and the stillborn's biological... the coverage of the child's SGLI-insured biological mother. This final rule will provide consistency... proceeds would be paid to the child's SGLI- insured mother. We provided a 60-day public-comment period...

  6. PDDP, A Data Parallel Programming Model

    Directory of Open Access Journals (Sweden)

    Karen H. Warren

    1996-01-01

    Full Text Available PDDP, the parallel data distribution preprocessor, is a data parallel programming model for distributed memory parallel computers. PDDP implements high-performance Fortran-compatible data distribution directives and parallelism expressed by the use of Fortran 90 array syntax, the FORALL statement, and the WHERE construct. Distributed data objects belong to a global name space; other data objects are treated as local and replicated on each processor. PDDP allows the user to program in a shared memory style and generates codes that are portable to a variety of parallel machines. For interprocessor communication, PDDP uses the fastest communication primitives on each platform.

  7. Parallelization of quantum molecular dynamics simulation code

    International Nuclear Information System (INIS)

    Kato, Kaori; Kunugi, Tomoaki; Shibahara, Masahiko; Kotake, Susumu

    1998-02-01

    A quantum molecular dynamics simulation code has been developed at the Kansai Research Establishment for the analysis of the thermalization of photon energies in molecules and materials. The simulation code has been parallelized for both a scalar massively parallel computer (Intel Paragon XP/S75) and a vector parallel computer (Fujitsu VPP300/12). Scalable speed-up has been obtained on both parallel computers by distributing particle groups across processor units. By distributing the work not only by particle group but also over the fine-grained per-particle calculations, high parallelization performance is achieved on the Intel Paragon XP/S75. (author)

  8. Implementation and performance of parallelized elegant

    International Nuclear Information System (INIS)

    Wang, Y.; Borland, M.

    2008-01-01

    The program elegant is widely used for design and modeling of linacs for free-electron lasers and energy recovery linacs, as well as storage rings and other applications. As part of a multi-year effort, we have parallelized many aspects of the code, including single-particle dynamics, wakefields, and coherent synchrotron radiation. We report on the approach used for gradual parallelization, which proved very beneficial in getting parallel features into the hands of users quickly. We also report details of parallelization of collective effects. Finally, we discuss performance of the parallelized code in various applications.

  9. Unified Singularity Modeling and Reconfiguration of 3rTPS Metamorphic Parallel Mechanisms with Parallel Constraint Screws

    Directory of Open Access Journals (Sweden)

    Yufeng Zhuang

    2015-01-01

    Full Text Available This paper presents a unified singularity modeling and reconfiguration analysis of variable topologies of a class of metamorphic parallel mechanisms with parallel constraint screws. The new parallel mechanisms consist of three reconfigurable rTPS limbs that have two working phases stemming from the reconfigurable Hooke (rT) joint. While one phase has full mobility, the other supplies a constraint force to the platform. Based on these, the platform constraint screw systems show that the new metamorphic parallel mechanisms have four topologies by altering the limb phases, with mobility change among 1R2T (one rotation with two translations), 2R2T, and 3R2T, and mobility 6. Geometric conditions of the mechanism design are investigated, with some special topologies illustrated considering the limb arrangement. Following this and the actuation scheme analysis, a unified Jacobian matrix is formed using screw theory to include the change between geometric constraints and actuation constraints in the topology reconfiguration. Various singular configurations are identified by analyzing screw dependency in the Jacobian matrix. The work in this paper provides a basis for singularity-free workspace analysis and optimal design of the class of metamorphic parallel mechanisms with parallel constraint screws, which show simple geometric constraints with potentially simple kinematics and dynamics properties.

  10. Parallelization of 2-D lattice Boltzmann codes

    International Nuclear Information System (INIS)

    Suzuki, Soichiro; Kaburaki, Hideo; Yokokawa, Mitsuo.

    1996-03-01

    Lattice Boltzmann (LB) codes to simulate two dimensional fluid flow are developed on the vector parallel computer Fujitsu VPP500 and the scalar parallel computer Intel Paragon XP/S. While a 2-D domain decomposition method is used for the scalar parallel LB code, a 1-D domain decomposition method is used for the vector parallel LB code so that it can be vectorized along the axis perpendicular to the direction of the decomposition. High parallel efficiency of 95.1% by the vector parallel calculation on 16 processors with a 1152x1152 grid and 88.6% by the scalar parallel calculation on 100 processors with an 800x800 grid are obtained. Performance models are developed to analyze the performance of the LB codes. It is shown by our performance models that the execution speed of the vector parallel code is about one hundred times faster than that of the scalar parallel code with the same number of processors, up to 100 processors. We also analyze the scalability when the available memory of one processor element is kept at its maximum. Our performance model predicts that the execution time of the vector parallel code increases about 3% on 500 processors. Although the 1-D domain decomposition method generally has a drawback in interprocessor communication, the vector parallel LB code is still suitable for large scale and/or high resolution simulations. (author)

  11. Parallelization of 2-D lattice Boltzmann codes

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Soichiro; Kaburaki, Hideo; Yokokawa, Mitsuo

    1996-03-01

    Lattice Boltzmann (LB) codes to simulate two dimensional fluid flow are developed on the vector parallel computer Fujitsu VPP500 and the scalar parallel computer Intel Paragon XP/S. While a 2-D domain decomposition method is used for the scalar parallel LB code, a 1-D domain decomposition method is used for the vector parallel LB code so that it can be vectorized along the axis perpendicular to the direction of the decomposition. High parallel efficiency of 95.1% by the vector parallel calculation on 16 processors with a 1152x1152 grid and 88.6% by the scalar parallel calculation on 100 processors with an 800x800 grid are obtained. Performance models are developed to analyze the performance of the LB codes. It is shown by our performance models that the execution speed of the vector parallel code is about one hundred times faster than that of the scalar parallel code with the same number of processors, up to 100 processors. We also analyze the scalability when the available memory of one processor element is kept at its maximum. Our performance model predicts that the execution time of the vector parallel code increases about 3% on 500 processors. Although the 1-D domain decomposition method generally has a drawback in interprocessor communication, the vector parallel LB code is still suitable for large scale and/or high resolution simulations. (author).
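
    The communication drawback of 1-D decomposition noted at the end of the two records above can be seen from a simple halo-size count. The sketch assumes a square N x N grid, one-cell-wide halos, and ignores latency; it is an illustration of the general trade-off, not a model taken from the papers.

```python
# Halo (ghost-cell) words exchanged per processor and per time step for an
# N x N grid split over P processors, assuming one-cell-wide halos.

import math

def halo_1d(N, P):
    # Strips of N x (N/P): two halo rows of length N for interior strips.
    return 2 * N

def halo_2d(N, P):
    # Square blocks of side N/sqrt(P): four halo edges.
    side = N / math.sqrt(P)
    return 4 * side

N = 1152
for P in (16, 64, 256):
    print(P, halo_1d(N, P), round(halo_2d(N, P)))
# 1-D halos stay at 2N regardless of P, while 2-D halos shrink as P grows,
# which is why 1-D decomposition eventually loses on communication.
```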

  12. Self-balanced modulation and magnetic rebalancing method for parallel multilevel inverters

    Science.gov (United States)

    Li, Hui; Shi, Yanjun

    2017-11-28

    A self-balanced modulation method and a closed-loop magnetic flux rebalancing control method for parallel multilevel inverters. The combination of the two methods provides for balancing of the magnetic flux of the inter-cell transformers (ICTs) of the parallel multilevel inverters without deteriorating the quality of the output voltage. In various embodiments a parallel multi-level inverter modulator is provided, including a multi-channel comparator to generate a multiplexed digitized ideal waveform for a parallel multi-level inverter and a finite state machine (FSM) module coupled to the parallel multi-channel comparator, the FSM module to receive the multiplexed digitized ideal waveform and to generate a pulse width modulated gate-drive signal for each switching device of the parallel multi-level inverter. The system and method provide for optimization of the output voltage spectrum without influencing the magnetic balancing.

  13. Systematic approach for deriving feasible mappings of parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir; Imre, Kayhan M.

    2017-01-01

    The need for high-performance computing together with the increasing trend from single processor to parallel computer architectures has leveraged the adoption of parallel computing. To benefit from parallel computing power, usually parallel algorithms are defined that can be mapped and executed

  14. Experiences in Data-Parallel Programming

    Directory of Open Access Journals (Sweden)

    Terry W. Clark

    1997-01-01

    Full Text Available To efficiently parallelize a scientific application with a data-parallel compiler requires certain structural properties in the source program, and conversely, the absence of others. A recent parallelization effort of ours reinforced this observation and motivated this correspondence. Specifically, we have transformed a Fortran 77 version of GROMOS, a popular dusty-deck program for molecular dynamics, into Fortran D, a data-parallel dialect of Fortran. During this transformation we have encountered a number of difficulties that probably are neither limited to this particular application nor do they seem likely to be addressed by improved compiler technology in the near future. Our experience with GROMOS suggests a number of points to keep in mind when developing software that may at some time in its life cycle be parallelized with a data-parallel compiler. This note presents some guidelines for engineering data-parallel applications that are compatible with Fortran D or High Performance Fortran compilers.

  15. Streaming for Functional Data-Parallel Languages

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner

    In this thesis, we investigate streaming as a general solution to the space inefficiency commonly found in functional data-parallel programming languages. The data-parallel paradigm maps well to parallel SIMD-style hardware. However, the traditional fully materializing execution strategy ... by extending two existing data-parallel languages: NESL and Accelerate. In the extensions we map bulk operations to data-parallel streams that can evaluate fully sequential, fully parallel or anything in between. By a dataflow, piecewise parallel execution strategy, the runtime system can adjust to any target ... flattening necessitates all sub-computations to materialize at the same time. For example, naive n by n matrix multiplication requires n^3 space in NESL because the algorithm contains n^3 independent scalar multiplications. For large values of n, this is completely unacceptable. We address the problem ...
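
    The space problem described above can be illustrated in miniature outside NESL or Accelerate: materializing all n^3 elementwise products of a naive matrix multiplication before reducing them costs cubic space, whereas streaming the same computation keeps only the running result. A plain-Python sketch of the two strategies (not the thesis' implementation) follows.

```python
# Naive n x n matrix multiplication, written two ways.

def matmul_materializing(A, B):
    n = len(A)
    # Build all n^3 scalar products up front (cubic space), then reduce.
    products = [(i, j, A[i][k] * B[k][j])
                for i in range(n) for j in range(n) for k in range(n)]
    C = [[0.0] * n for _ in range(n)]
    for i, j, p in products:
        C[i][j] += p
    return C

def matmul_streaming(A, B):
    n = len(A)
    # Same computation, but each product is consumed as soon as it is
    # produced, so only O(n^2) space for the result is needed.
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            C[i][j] = sum(A[i][k] * B[k][j] for k in range(n))
    return C

A = [[1.0, 2.0], [3.0, 4.0]]
print(matmul_materializing(A, A) == matmul_streaming(A, A))  # True
```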

  16. Design, analysis and control of cable-suspended parallel robots and its applications

    CERN Document Server

    Zi, Bin

    2017-01-01

    This book provides an essential overview of the authors’ work in the field of cable-suspended parallel robots, focusing on innovative design, mechanics, control, development and applications. It presents and analyzes several typical mechanical architectures of cable-suspended parallel robots in practical applications, including the feed cable-suspended structure for super antennae, hybrid-driven-based cable-suspended parallel robots, and cooperative cable parallel manipulators for multiple mobile cranes. It also addresses the fundamental mechanics of cable-suspended parallel robots on the basis of their typical applications, including the kinematics, dynamics and trajectory tracking control of the feed cable-suspended structure for super antennae. In addition it proposes a novel hybrid-driven-based cable-suspended parallel robot that uses integrated mechanism design methods to improve the performance of traditional cable-suspended parallel robots. A comparative study on error and performance indices of hybr...

  17. A possibility of parallel and anti-parallel diffraction measurements on a neutron diffractometer employing a bent perfect crystal monochromator at the monochromatic focusing condition

    Science.gov (United States)

    Choi, Yong Nam; Kim, Shin Ae; Kim, Sung Kyu; Kim, Sung Baek; Lee, Chang-Hee; Mikula, Pavel

    2004-07-01

    In a conventional diffractometer having a single monochromator, only one position, the parallel position, is used for the diffraction experiment (i.e. detection) because the resolution property of the other one, the anti-parallel position, is very poor. However, a bent perfect crystal (BPC) monochromator at the monochromatic focusing condition can provide a quite flat and equal resolution property at both the parallel and anti-parallel positions, and thus one has the chance to use both sides for the diffraction experiment. From the FWHM and Δd/d data measured in three diffraction geometries (symmetric, asymmetric compression and asymmetric expansion), we can conclude that simultaneous diffraction measurements in both the parallel and anti-parallel positions can be achieved.

  18. Massively parallel diffuse optical tomography

    Energy Technology Data Exchange (ETDEWEB)

    Sandusky, John V.; Pitts, Todd A.

    2017-09-05

    Diffuse optical tomography systems and methods are described herein. In a general embodiment, the diffuse optical tomography system comprises a plurality of sensor heads, the plurality of sensor heads comprising respective optical emitter systems and respective sensor systems. A sensor head in the plurality of sensor heads is caused to act as an illuminator, such that its optical emitter system transmits a transillumination beam towards a portion of a sample. Other sensor heads in the plurality of sensor heads act as observers, detecting portions of the transillumination beam that radiate from the sample in the fields of view of the respective sensor systems of the other sensor heads. Thus, the sensor heads in the plurality of sensor heads generate sensor data in parallel.
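
    A rough sketch of the acquisition pattern described above, with each head taking a turn as illuminator while the remaining heads observe in parallel (a hypothetical Python stand-in, not the patented implementation; `detect` is a placeholder):

        from concurrent.futures import ThreadPoolExecutor

        def detect(observer, illuminator):
            # Placeholder for reading the observer's sensor system while
            # `illuminator` transmits its transillumination beam.
            return (observer, illuminator, 0.0)

        heads = list(range(8))                     # hypothetical sensor-head IDs
        measurements = []
        with ThreadPoolExecutor() as pool:
            for illuminator in heads:              # each head takes a turn illuminating
                observers = [h for h in heads if h != illuminator]
                # the remaining heads acquire their views of the beam in parallel
                measurements += list(pool.map(lambda o, src=illuminator: detect(o, src),
                                              observers))

        print(len(measurements))                   # 8 * 7 = 56 observer/illuminator pairs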

  19. Embodied and Distributed Parallel DJing.

    Science.gov (United States)

    Cappelen, Birgitta; Andersson, Anders-Petter

    2016-01-01

    Everyone has a right to take part in cultural events and activities, such as music performances and music making. Enforcing that right within Universal Design is often limited to a focus on physical access to public areas, hearing aids etc., or to groups of persons with special needs performing in traditional ways, for example people with disabilities performing as musicians on traditional instruments or as actors in theatre. In this paper we focus on the innovative potential of including people with special needs when creating new cultural activities. In our project RHYME our goal was to create health-promoting activities for children with severe disabilities by developing new musical and multimedia technologies. Because of the users' extreme demands and rich contributions, we ended up creating both a new genre of musical instruments and a new art form. We call this new art form Embodied and Distributed Parallel DJing, and the new genre of instruments Empowering Multi-Sensorial Things.

  20. Device for balancing parallel strings

    Science.gov (United States)

    Mashikian, Matthew S.

    1985-01-01

    A battery plant is described which features magnetic circuit means in association with each of the battery strings in the battery plant for balancing the electrical current flow through the battery strings by equalizing the voltage across each of the battery strings. Each of the magnetic circuit means generally comprises means for sensing the electrical current flow through one of the battery strings, and a saturable reactor having a main winding connected electrically in series with the battery string, a bias winding connected to a source of alternating current and a control winding connected to a variable source of direct current controlled by the sensing means. Each of the battery strings is formed by a plurality of batteries connected electrically in series, and these battery strings are connected electrically in parallel across common bus conductors.

  1. Linear parallel processing machines I

    Energy Technology Data Exchange (ETDEWEB)

    Von Kunze, M

    1984-01-01

    As is well known, non-context-free grammars for generating formal languages possess an intrinsic computational power that presents serious difficulties both for efficient parsing algorithms and for the development of an algebraic theory of context-sensitive languages. In this paper a framework is given for investigating the computational power of formal grammars, in order to start a thorough analysis of grammars consisting of derivation rules of the form aB --> A_1 ... A_n b_1 ... b_m. These grammars may be thought of as automata operating by parallel processing, if one considers the variables as operators acting on the terminals while reading them right-to-left. This kind of automaton and its 2-dimensional programming language prove useful by allowing a concise linear-time algorithm for integer multiplication. Linear parallel processing machines (LP-machines), which are in their general form equivalent to Turing machines, include finite automata and pushdown automata (with states encoded) as special cases. Bounded LP-machines yield deterministic accepting automata for nondeterministic context-free languages, and they define an interesting class of context-sensitive languages. A characterization of this class in terms of generating grammars is established by using derivation trees with crossings as a helpful tool. From the algebraic point of view, deterministic LP-machines are effectively represented semigroups with distinguished subsets. Concerning the dualism between generating and accepting devices of formal languages within the algebraic setting, the concept of accepting automata turns out to reduce essentially to embeddability in an effectively represented extension monoid, even in the classical cases.

  2. Parallel computing in enterprise modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent-based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class, where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale needed for realistic results. With the recent upheavals in the financial markets and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  3. Parallel computing for event reconstruction in high-energy physics

    International Nuclear Information System (INIS)

    Wolbers, S.

    1993-01-01

    Parallel computing has been recognized as a solution to large computing problems. In high-energy physics, offline event reconstruction of detector data is a very large computing problem that has been solved with parallel computing techniques. A review is given of the parallel programming package CPS (Cooperative Processes Software), developed and used at Fermilab for offline reconstruction of terabytes of data requiring the delivery of hundreds of VAX-years per experiment. The Fermilab UNIX farms, consisting of 180 Silicon Graphics workstations and 144 IBM RS6000 workstations, are used to provide the computing power for the experiments. Fermilab has had a long history of providing production parallel computing, starting with the ACP (Advanced Computer Project) farms in 1986. The Fermilab UNIX farms have been in production for over 2 years, with 24 hour/day service to experimental user groups. Additional tools for managing, controlling and monitoring these large systems are described. Possible future directions for parallel computing in high-energy physics are given
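
    The farm model sketched above, in which independent events are dispatched to many workers, can be illustrated with a few lines of Python (a generic stand-in using a process pool, not the CPS package itself; `reconstruct` and the event layout are placeholders):

        from multiprocessing import Pool

        def reconstruct(event):
            # Placeholder for the per-event reconstruction code; events are
            # independent, which is what makes the farm approach effective.
            return {"id": event["id"], "tracks": len(event["hits"])}

        if __name__ == "__main__":
            events = [{"id": i, "hits": list(range(i % 50))} for i in range(10000)]
            with Pool(processes=8) as farm:         # 8 hypothetical worker nodes
                results = farm.map(reconstruct, events, chunksize=100)
            print(len(results), "events reconstructed")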

  4. Parallel Careers and their Consequences for Companies in Brazil

    Directory of Open Access Journals (Sweden)

    Maria Candida Baumer Azevedo

    2014-04-01

    Full Text Available Given the relevance of the need to manage parallel careers to attract and retain people in organizations, this paper provides insight into this phenomenon from an organizational perspective. The parallel career concept, introduced by Alboher (2007) and recently addressed by Schuiling (2012), has previously been examined only from the perspective of the parallel career holder (PC holder). The paper provides insight from both individual and organizational perspectives on the phenomenon of parallel careers and considers how it can function as an important tool for attracting and retaining people by contributing to human development. This paper employs a qualitative approach that includes 30 semi-structured one-on-one interviews. The organizational perspective arises from 15 interviews with human resources (HR) executives from different companies. The individual viewpoint originates from interviews with 15 executives who are also PC holders. An inductive content analysis approach was used to examine Brazilian companies and the Brazilian offices of multinationals. Companies that are concerned about having the best talent on their teams can benefit from a deeper understanding of parallel careers, which can be used to attract, develop, and retain talent. Limitations and directions for future research are discussed.

  5. Parallel Tensor Compression for Large-Scale Scientific Data.

    Energy Technology Data Exchange (ETDEWEB)

    Kolda, Tamara G. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ballard, Grey [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Austin, Woody Nathan [Univ. of Texas, Austin, TX (United States)

    2015-10-01

    As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five-way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10000 on real-world data sets with negligible loss in accuracy. So that we can operate on such massive data, we present the first distributed-memory parallel implementation of the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
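
    For readers unfamiliar with the Tucker decomposition, the following single-node sketch computes it via the standard HOSVD construction in NumPy (this illustrates only the decomposition itself, not the distributed-memory implementation described above; the tensor size and truncation ranks are arbitrary):

        import numpy as np

        def unfold(T, mode):
            # Matricize tensor T along the given mode.
            return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

        def mode_product(T, M, mode):
            # Multiply tensor T by matrix M along the given mode.
            Tm = np.moveaxis(T, mode, 0)
            shp = Tm.shape
            out = (M @ Tm.reshape(shp[0], -1)).reshape((M.shape[0],) + shp[1:])
            return np.moveaxis(out, 0, mode)

        X = np.random.rand(40, 40, 40, 8)          # small stand-in for simulation data
        ranks = (10, 10, 10, 4)                    # arbitrary truncation ranks

        # HOSVD: leading left singular vectors of each unfolding ...
        factors = [np.linalg.svd(unfold(X, m), full_matrices=False)[0][:, :r]
                   for m, r in enumerate(ranks)]
        # ... and the core tensor obtained by projecting onto them.
        core = X
        for m, U in enumerate(factors):
            core = mode_product(core, U.T, m)

        stored = core.size + sum(U.size for U in factors)
        print("compression ratio ~", round(X.size / stored, 1))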

  6. Financing universal coverage in Malaysia: a case study.

    Science.gov (United States)

    Chua, Hong Teck; Cheah, Julius Chee Ho

    2012-01-01

    One of the challenges in maintaining an agenda for universal coverage and an equitable health system is to develop effective structuring and management of health financing. Global experience with different systems of health financing suggests that a strong public role in health financing is essential for health systems to protect the poor, and that health systems with the strongest state role are likely to be the more equitable and to achieve better aggregate health outcomes. Using Malaysia as a case study, this paper seeks to evaluate the progress and capacity of a middle-income country in terms of health financing for universal coverage, and also to highlight some of the key underlying health systems challenges. The WHO Health Financing Strategy for the Asia Pacific Region (2010-2015) was used as the framework to evaluate the Malaysian healthcare financing system in terms of the provision of universal coverage for the population, and the Malaysian National Health Accounts (2008) provided the latest Malaysian data on health spending. Measured against the four target indicators outlined, Malaysia fared credibly, with total health expenditure close to 5% of its GDP (4.75%), out-of-pocket payment below 40% of total health expenditure (30.7%), comprehensive social safety nets for vulnerable populations, and a tax-based financing system that fundamentally functions as a national risk-pooled scheme for the population. Nonetheless, within a holistic systems framework, the financing component interacts synergistically with other health system spheres. In Malaysia, outmigration of public health workers, particularly specialist doctors, remains an issue, and financing strategies critically need to incorporate a comprehensive workforce compensation strategy to improve the health workforce skill mix. Health expenditure information is systematically collated, but feedback from the private sector remains a challenge. Service delivery-wise, there is a need to enhance financing capacity to expand preventive

  7. Analytical methodologies for broad metabolite coverage of exhaled breath condensate.

    Science.gov (United States)

    Aksenov, Alexander A; Zamuruyev, Konstantin O; Pasamontes, Alberto; Brown, Joshua F; Schivo, Michael; Foutouhi, Soraya; Weimer, Bart C; Kenyon, Nicholas J; Davis, Cristina E

    2017-09-01

    Breath analysis has been gaining popularity as a non-invasive technique that is amenable to a broad range of medical uses. One of the persistent problems hampering the wide application of breath analysis is measurement variability of metabolite abundances stemming from differences in both the sampling and the analysis methodologies used in various studies. Mass spectrometry has been a method of choice for comprehensive metabolomic analysis. In the present study, we juxtapose, for the first time, the most commonly employed mass spectrometry-based analysis methodologies and directly compare the resultant coverages of detected compounds in exhaled breath condensate in order to guide methodology choices for exhaled breath condensate analysis studies. Four methods were explored to broaden the range of measured compounds across both the volatile and non-volatile domains. Liquid-phase sampling with a polyacrylate Solid-Phase MicroExtraction (SPME) fiber, liquid-phase extraction with a polydimethylsiloxane patch, and headspace sampling using Carboxen/polydimethylsiloxane SPME followed by gas chromatography mass spectrometry were tested for the analysis of the volatile fraction. Hydrophilic interaction liquid chromatography and reversed-phase high performance liquid chromatography mass spectrometry were used for analysis of the non-volatile fraction. We found that liquid-phase breath condensate extraction was notably superior to headspace extraction and that differences in the sorbents employed produced different metabolite coverages. The most pronounced effect was substantially enhanced capture of larger, higher-boiling compounds using polyacrylate SPME liquid-phase sampling. The analysis of the non-volatile fraction of breath condensate by hydrophilic and reversed-phase high performance liquid chromatography mass spectrometry indicated orthogonal metabolite coverage by these chromatography modes. We found that the metabolite coverage

  8. Breast Health Services: Accuracy of Benefit Coverage Information in the Individual Insurance Marketplace.

    Science.gov (United States)

    Hamid, Mariam S; Kolenic, Giselle E; Dozier, Jessica; Dalton, Vanessa K; Carlos, Ruth C

    2017-04-01

    The aim of this study was to determine if breast health coverage information provided by customer service representatives employed by insurers offering plans in the 2015 federal and state health insurance marketplaces is consistent with Patient Protection and Affordable Care Act (ACA) and state-specific legislation. One hundred fifty-eight unique customer service numbers were identified for insurers offering plans through the federal marketplace, augmented with four additional numbers representing the Connecticut state-run exchange. Using a standardized patient biography and the mystery-shopper technique, a single investigator posed as a purchaser and contacted each number, requesting information on breast health services coverage. Consistency of information provided by the representative with the ACA mandates (BRCA testing in high-risk women) or state-specific legislation (screening ultrasound in women with dense breasts) was determined. Insurer representatives gave BRCA test coverage information that was not consistent with the ACA mandate in 60.8% of cases, and 22.8% could not provide any information regarding coverage. Nearly half (48.1%) of insurer representatives gave coverage information about ultrasound screening for dense breasts that was not consistent with state-specific legislation, and 18.5% could not provide any information. Insurance customer service representatives in the federal and state marketplaces frequently provide inaccurate coverage information about breast health services that should be covered under the ACA and state-specific legislation. Misinformation can inadvertently lead to the purchase of a plan that does not meet the needs of the insured. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  9. Summarizing an Ontology: A "Big Knowledge" Coverage Approach.

    Science.gov (United States)

    Zheng, Ling; Perl, Yehoshua; Elhanan, Gai; Ochs, Christopher; Geller, James; Halper, Michael

    2017-01-01

    Maintenance and use of a large ontology, consisting of thousands of knowledge assertions, are hampered by its scope and complexity. It is important to provide tools for summarization of ontology content in order to facilitate user "big picture" comprehension. We present a parameterized methodology for the semi-automatic summarization of major topics in an ontology, based on a compact summary of the ontology, called an "aggregate partial-area taxonomy", followed by manual enhancement. An experiment is presented to test the effectiveness of such summarization measured by coverage of a given list of major topics of the corresponding application domain. SNOMED CT's Specimen hierarchy is the test-bed. A domain-expert provided a list of topics that serves as a gold standard. The enhanced results show that the aggregate taxonomy covers most of the domain's main topics.

  10. Parallelization of the Coupled Earthquake Model

    Science.gov (United States)

    Block, Gary; Li, P. Peggy; Song, Yuhe T.

    2007-01-01

    This Web-based tsunami simulation system allows users to remotely run a model on JPL's supercomputers for a given undersea earthquake. At the time of this reporting, predicting tsunamis over the Internet had never been done before. This new code directly couples the earthquake model and the ocean model on parallel computers, improving simulation speed. Seismometers can only detect information from earthquakes; they cannot detect whether or not a tsunami may occur as a result of the earthquake. When earthquake-tsunami models are coupled with the improved computational speed of modern, high-performance computers and constrained by remotely sensed data, they are able to provide early warnings for those coastal regions at risk. The software is capable of testing NASA's satellite observations of tsunamis. It has been successfully tested for several historical tsunamis, has passed all alpha and beta testing, and is well documented for users.

  11. Parallel discrete event simulation using shared memory

    Science.gov (United States)

    Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.

    1988-01-01

    With traditional event-list techniques, evaluating a detailed discrete-event simulation model can often require hours or even days of computation time. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared-memory experiments using the Chandy-Misra distributed-simulation algorithm to simulate networks of queues is presented. Parameters of the study include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.
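
    The causality constraint at the heart of the Chandy-Misra approach, namely that a logical process may only consume events up to the minimum clock of its input channels, can be sketched as follows (a single-threaded Python illustration with made-up parameters, not the shared-memory experiments of the paper):

        import random

        # Minimal sketch of the conservative (Chandy-Misra-style) rule: a logical
        # process may only consume an event whose timestamp does not exceed the
        # minimum clock of all its input channels. Clock advances without a job
        # play the role of null messages that keep the channels from stalling.
        random.seed(0)

        class Channel:
            def __init__(self):
                self.queue = []        # pending timestamped jobs
                self.clock = 0.0       # no future message can precede this time

        chans = [Channel(), Channel()]           # two upstream LPs feed one server
        server_clock, processed = 0.0, 0

        for step in range(50):
            # Each upstream LP either sends a job or just advances its channel clock.
            for ch in chans:
                ch.clock += random.expovariate(1.0)
                if random.random() < 0.7:
                    ch.queue.append(ch.clock)    # a real arrival at this timestamp

            # Safe horizon: the server may process anything up to the smallest
            # channel clock without risking a causality violation.
            horizon = min(ch.clock for ch in chans)
            pending = sorted(t for ch in chans for t in ch.queue if t <= horizon)
            for ch in chans:
                ch.queue = [t for t in ch.queue if t > horizon]
            for t in pending:
                server_clock = max(server_clock, t) + 0.1   # 0.1 = service time
                processed += 1

        print("processed", processed, "jobs; server clock", round(server_clock, 2))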

  12. Coverage maximization for a poisson field of drone cells

    KAUST Repository

    Azari, Mohammad Mahdi

    2018-02-15

    The use of drone base stations to provide wireless connectivity for ground terminals is becoming a promising part of future technologies. The design of such aerial networks differs from that of cellular 2D networks, however, as the drone antennas point downward and the channel model becomes height-dependent. In this paper, we study the effect of antenna patterns and height-dependent shadowing. We consider a random network topology to capture the effect of dynamic changes of the flying base stations. First, we characterize the aggregate interference imposed by the co-channel neighboring drones. Then we derive the link coverage probability between a ground user and its associated drone base station. The result is used to obtain the optimum system parameters in terms of drone antenna beamwidth, density and altitude. We also derive the average LoS probability of the associated drone and show that it is a good approximation and simplification of the coverage probability at low altitudes, up to 500 m, according to the required signal-to-interference-plus-noise ratio (SINR).
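
    The kind of coverage analysis described above can also be explored numerically; the following Monte Carlo sketch estimates the coverage probability of a ground user served by the strongest drone of a Poisson field (the path-loss, beam and interference assumptions here are our own simplifications, not the model or parameter values from the paper):

        import numpy as np

        rng = np.random.default_rng(0)

        def coverage_probability(density, altitude, beamwidth_deg, sinr_threshold_db,
                                 trials=2000, region=2000.0, alpha=3.0, noise=1e-13):
            """Estimate P[SINR > threshold] for a user at the origin, assuming
            simple distance-based path loss and that drones whose beam does not
            cover the user neither serve nor interfere (a crude simplification)."""
            thr = 10 ** (sinr_threshold_db / 10)
            half_beam = np.radians(beamwidth_deg) / 2
            covered = 0
            for _ in range(trials):
                n = rng.poisson(density * region * region)
                if n == 0:
                    continue
                xy = rng.uniform(-region / 2, region / 2, size=(n, 2))
                horiz2 = (xy ** 2).sum(axis=1)
                d = np.sqrt(horiz2 + altitude ** 2)                 # 3-D distances
                in_beam = np.arctan(np.sqrt(horiz2) / altitude) <= half_beam
                p_rx = np.where(in_beam, d ** (-alpha), 0.0)        # received powers
                signal = p_rx.max()
                interference = p_rx.sum() - signal
                if signal > 0 and signal / (interference + noise) > thr:
                    covered += 1
            return covered / trials

        print(coverage_probability(density=5e-6, altitude=100.0,
                                   beamwidth_deg=60.0, sinr_threshold_db=0.0))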

  13. From coverage to care: addressing the issue of churn.

    Science.gov (United States)

    Milligan, Charles

    2015-02-01

    In any given year, a significant number of individuals will move between Medicaid and qualified health plans (QHP). Known as "churn," this movement could disrupt continuity of health care services, even when no gap in insurance coverage exists. The number of people who churn in any given year is significant, and they often are significant utilizers of health care services. They could experience disruption in care in several ways: (1) changing carrier; (2) changing provider because of network differences; (3) a disruption in ongoing services, even when the benefit is covered in both programs (e.g., surgery that has been authorized but not yet performed; ongoing prescription medications for chronic illness; or some but not all therapy or counseling sessions have been completed); and (4) the loss of coverage for a service that is not a covered benefit in the new program. Many strategies are available to states to reduce the disruption caused by churn. The specific option, intervention, and set of policies in a given state will depend on its context. Policy makers would benefit from an examination and discussion of these issues. Copyright © 2015 by Duke University Press.

  14. Personality Disorder Models and their Coverage of Interpersonal Problems

    Science.gov (United States)

    Williams, Trevor F.; Simms, Leonard J.

    2015-01-01

    Interpersonal dysfunction is a defining feature of personality disorders (PDs) and can serve as a criterion for comparing PD models. In this study, the interpersonal coverage of four competing PD models was examined using a sample of 628 current or recent psychiatric patients who completed the NEO Personality Inventory-3 First Half (NEO-PI-3FH; McCrae & Costa, 2007), Personality Inventory for the DSM-5 (PID-5; Krueger et al., 2012), Computerized Adaptive Test of Personality Disorder-Static Form (CAT-PD-SF; Simms et al., 2011), and Structured Clinical Interview for DSM-IV Personality Questionnaire (SCID-II PQ; First, Spitzer, Gibbon, & Williams, 1995). Participants also completed the Inventory of Interpersonal Problems-Short Circumplex (IIP-SC; Soldz, Budman, Demby, & Merry, 1995) to assess interpersonal dysfunction. Analyses compared the severity and style of interpersonal problems that characterize PD models. Previous research with DSM-5 Section II and III models was generally replicated. Extraversion and Agreeableness facets related to the most well defined interpersonal problems across normal-range and pathological traits. Pathological trait models provided more coverage of dominance problems, whereas normal-range traits covered nonassertiveness better. These results suggest that more work may be needed to reconcile descriptions of personality pathology at the level of specific constructs. PMID:26168406

  15. Universal Health Coverage for Schizophrenia: A Global Mental Health Priority.

    Science.gov (United States)

    Patel, Vikram

    2016-07-01

    The growing momentum towards a global consensus on universal health coverage, alongside an acknowledgment of the urgency and importance of a comprehensive mental health action plan, offers a unique opportunity for a substantial scale-up of evidence-based interventions and packages of care for a range of mental disorders in all countries. There is a robust evidence base testifying to the effectiveness of drug and psychosocial interventions for people with schizophrenia and to the feasibility, acceptability and cost-effectiveness of the delivery of these interventions through a collaborative care model in low-resource settings. While there are a number of barriers to scaling up this evidence, e.g., the finances needed to train and deploy community-based workers and the lack of agency for people with schizophrenia, the experiences of some upper-middle-income countries show that sustained political commitment, allocation of transitional financial resources to develop community services, a commitment to an integrated approach with a strong role for community-based institutions and providers, and a progressive realization of coverage are the key ingredients for scale-up of services for schizophrenia. © The Author 2015. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center.

  16. Device evaluation and coverage policy in workers' compensation: examples from Washington State.

    Science.gov (United States)

    Franklin, G M; Lifka, J; Milstein, J

    1998-09-25

    Workers' compensation health benefits are broader than general health benefits and include payment for medical and rehabilitation costs, associated indemnity (lost time) costs, and vocational rehabilitation (return-to-work) costs. In addition, cost liability is for the life of the claim (injury), rather than for each plan year. We examined device evaluation and coverage policy in workers' compensation over a 10-year period in Washington State. Most requests for device coverage in workers' compensation relate to the diagnosis, prognosis, or treatment of chronic musculoskeletal conditions. A number of specific problems have been recognized in making device coverage decisions within workers' compensation: (1) invasive devices with a high adverse event profile and history of poor outcomes could significantly increase both indemnity and medical costs; (2) many noninvasive devices, while having a low adverse event profile, have not proved effective for managing chronic musculoskeletal conditions relevant to injured workers; (3) some devices are marketed and billed as surrogate diagnostic tests for generally accepted, and more clearly proven, standard tests; (4) quality oversight of technology use among physicians may be inadequate; and (5) insurers' access to efficacy data adequate to make timely and appropriate coverage decisions in workers' compensation is often lacking. Emerging technology may substantially increase the costs of workers' compensation without significant evidence of health benefit for injured workers. To prevent ever-rising costs, we need to increase provider education and patient education and consent, involve the state medical society in coverage policy, and collect relevant outcomes data from healthcare providers.

  17. Investigation of growth, coverage and effectiveness of plasma assisted nano-films of fluorocarbon

    International Nuclear Information System (INIS)

    Joshi, Pratik P.; Pulikollu, Rajasekhar; Higgins, Steven R.; Hu Xiaoming; Mukhopadhyay, S.M.

    2006-01-01

    Plasma-assisted functional films have significant potential in various engineering applications. They can be tailored to impart desired properties by bonding specific molecular groups to the substrate surface. The aim of this investigation was to develop a fundamental understanding of the atomic-level growth, coverage and functional effectiveness of plasma nano-films on flat surfaces and to explore their application potential for complex and unevenly shaped nano-materials. In this paper, results on plasma-assisted nano-scale fluorocarbon films, which are known for imparting inertness or hydrophobicity to the surface, are discussed. The film deposition was studied as a function of time on flat single-crystal surfaces of silicon, sapphire and graphite, using microwave plasma. X-ray photoelectron spectroscopy (XPS) was used for detailed study of the composition and chemistry of the substrate and coating atoms at all stages of deposition. Atomic force microscopy (AFM) was performed in parallel to study the coverage and growth morphology of these films at each stage. Combined XPS and AFM results indicated complete coverage of all the substrates at the nanometer scale. It was also shown that these films grow in a layer-by-layer fashion. The nano-films were also applied to complex and unevenly shaped nano-structured and porous materials, such as microcellular porous foam and nanofibers. It was seen that these nano-films can be a viable approach for effective surface modification of complex or unevenly shaped nano-materials

  18. Construction Morphology and the Parallel Architecture of Grammar

    Science.gov (United States)

    Booij, Geert; Audring, Jenny

    2017-01-01

    This article presents a systematic exposition of how the basic ideas of Construction Grammar (CxG) (Goldberg, 2006) and the Parallel Architecture (PA) of grammar (Jackendoff, 2002) provide the framework for a proper account of morphological phenomena, in particular word formation. This framework is referred to as Construction Morphology (CxM). As…

  19. Using parallel computing in modeling and optimization of mineral ...

    African Journals Online (AJOL)

    To solve the ultimate pit limit problem, one must find a subgraph of a given graph whose sum of vertex weights is maximal. One possible way of solving this problem is to use genetic algorithms. We use a ... Details of the implementation of a parallel genetic algorithm for searching for open pit limits are provided. Comparison with ...
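
    A rough sketch of how such a parallel genetic algorithm might be organized, with fitness evaluation farmed out to a process pool (a generic Python illustration; the bit-string encoding over weighted blocks is our own simplification and ignores the slope/precedence constraints of a real pit graph):

        import random
        from multiprocessing import Pool

        random.seed(42)
        N_BLOCKS = 60
        WEIGHTS = [random.uniform(-1.0, 2.0) for _ in range(N_BLOCKS)]   # block values

        def fitness(individual):
            # Sum of weights of the selected blocks; a real model would also
            # enforce the precedence (slope) constraints of the pit graph.
            return sum(w for bit, w in zip(individual, WEIGHTS) if bit)

        def crossover(a, b):
            cut = random.randrange(1, N_BLOCKS)
            return a[:cut] + b[cut:]

        def mutate(ind, rate=0.02):
            return [1 - g if random.random() < rate else g for g in ind]

        if __name__ == "__main__":
            pop = [[random.randint(0, 1) for _ in range(N_BLOCKS)] for _ in range(200)]
            with Pool() as pool:                       # fitness evaluated in parallel
                for generation in range(50):
                    scores = pool.map(fitness, pop)
                    ranked = [ind for _, ind in sorted(zip(scores, pop),
                                                       key=lambda p: p[0], reverse=True)]
                    parents = ranked[:50]              # elitist selection
                    pop = parents + [mutate(crossover(random.choice(parents),
                                                      random.choice(parents)))
                                     for _ in range(len(pop) - len(parents))]
                best = max(pool.map(fitness, pop))
            print("best pit value found:", round(best, 2))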

  20. Small file aggregation in a parallel computing system

    Science.gov (United States)

    Faibish, Sorin; Bent, John M.; Tzelnic, Percy; Grider, Gary; Zhang, Jingwang

    2014-09-02

    Techniques are provided for small file aggregation in a parallel computing system. An exemplary method for storing a plurality of files generated by a plurality of processes in a parallel computing system comprises aggregating the plurality of files into a single aggregated file; and generating metadata for the single aggregated file. The metadata comprises an offset and a length of each of the plurality of files in the single aggregated file. The metadata can be used to unpack one or more of the files from the single aggregated file.
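
    The offset/length bookkeeping described in the abstract can be illustrated with a short sketch (generic Python, not the patented mechanism; in a real parallel file system the aggregation would be done by the I/O middleware):

        import json, os, tempfile

        def aggregate(paths, bundle_path, index_path):
            """Concatenate many small files into one bundle and record, for each
            original file, its offset and length inside the bundle."""
            index, offset = {}, 0
            with open(bundle_path, "wb") as bundle:
                for p in paths:
                    with open(p, "rb") as f:
                        data = f.read()
                    bundle.write(data)
                    index[os.path.basename(p)] = {"offset": offset, "length": len(data)}
                    offset += len(data)
            with open(index_path, "w") as f:
                json.dump(index, f)

        def extract(name, bundle_path, index_path):
            """Unpack a single original file from the bundle using the metadata."""
            with open(index_path) as f:
                entry = json.load(f)[name]
            with open(bundle_path, "rb") as bundle:
                bundle.seek(entry["offset"])
                return bundle.read(entry["length"])

        # Tiny demonstration with throwaway files.
        tmp = tempfile.mkdtemp()
        small_files = []
        for i in range(4):
            p = os.path.join(tmp, f"rank{i}.out")
            with open(p, "wb") as f:
                f.write(f"output of process {i}\n".encode())
            small_files.append(p)
        aggregate(small_files, os.path.join(tmp, "bundle.bin"), os.path.join(tmp, "bundle.idx"))
        print(extract("rank2.out", os.path.join(tmp, "bundle.bin"), os.path.join(tmp, "bundle.idx")))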