WorldWideScience

Sample records for providing parallel coverage

  1. Providing Universal Health Insurance Coverage in Nigeria.

    Science.gov (United States)

    Okebukola, Peter O; Brieger, William R

    2016-07-07

    Despite a stated goal of achieving universal coverage, the National Health Insurance Scheme of Nigeria had achieved only 4% coverage 12 years after it was launched. This study assessed the plans of the National Health Insurance Scheme to achieve universal health insurance coverage in Nigeria by 2015 and discusses the challenges facing the scheme in achieving insurance coverage. In-depth interviews from various levels of the health-care system in the country, including providers, were conducted. The results of the analysis suggest that challenges to extending coverage include the difficulty in convincing autonomous state governments to buy into the scheme and an inadequate health workforce that might not be able to meet increased demand. Recommendations for increasing the scheme's coverage include increasing decentralization and strengthening human resources for health in the service delivery systems. Strong political will is needed as a catalyst to achieving these goals. © The Author(s) 2016.

  2. Parallel processing from applications to systems

    CERN Document Server

    Moldovan, Dan I

    1993-01-01

This text provides one of the broadest presentations of parallel processing available, including the structure of parallel processors and parallel algorithms. The emphasis is on mapping algorithms to highly parallel computers, with extensive coverage of array and multiprocessor architectures. Early chapters provide insightful coverage on the analysis of parallel algorithms and program transformations, effectively integrating a variety of material previously scattered throughout the literature. Theory and practice are well balanced across diverse topics in this concise presentation. For exceptional cla

  3. Patient Experience Of Provider Refusal Of Medicaid Coverage And Its Implications.

    Science.gov (United States)

    Bhandari, Neeraj; Shi, Yunfeng; Jung, Kyoungrae

    2016-01-01

    Previous studies show that many physicians do not accept new patients with Medicaid coverage, but no study has examined Medicaid enrollees' actual experience of provider refusal of their coverage and its implications. Using the 2012 National Health Interview Survey, we estimate provider refusal of health insurance coverage reported by 23,992 adults with continuous coverage for the past 12 months. We find that among Medicaid enrollees, 6.73% reported their coverage being refused by a provider in 2012, a rate higher than that in Medicare and private insurance by 4.07 (p<.01) and 3.68 (p<.001) percentage points, respectively. Refusal of Medicaid coverage is associated with delaying needed care, using emergency room (ER) as a usual source of care, and perceiving current coverage as worse than last year. In view of the Affordable Care Act's (ACA) Medicaid expansion, future studies should continue monitoring enrollees' experience of coverage refusal.

  4. A NEPA compliance strategy plan for providing programmatic coverage to agency problems

    International Nuclear Information System (INIS)

    Eccleston, C.H.

    1994-04-01

The National Environmental Policy Act (NEPA) of 1969 requires that all federal actions be reviewed before a final decision is made to pursue a proposed action or one of its reasonable alternatives. The NEPA process is expected to begin early in the planning process. This paper discusses an approach for providing efficient and comprehensive NEPA coverage to large-scale programs. Particular emphasis is given to identifying bottlenecks and developing workarounds for such problems. Specifically, the strategy is designed to meet four goals: (1) provide comprehensive coverage, (2) reduce compliance cost/time, (3) prevent project delays, and (4) reduce document obsolescence

  5. Defining the essential anatomical coverage provided by military body armour against high energy projectiles.

    Science.gov (United States)

    Breeze, John; Lewis, E A; Fryer, R; Hepper, A E; Mahoney, Peter F; Clasper, Jon C

    2016-08-01

Body armour is a type of equipment worn by military personnel that aims to prevent or reduce the damage caused by ballistic projectiles to structures within the thorax and abdomen. Such injuries remain the leading cause of potentially survivable deaths on the modern battlefield. Recent developments in computer modelling, in conjunction with a programme to procure the next generation of UK military body armour, have provided the impetus to re-evaluate the optimal anatomical coverage provided by military body armour against high energy projectiles. A systematic review of the literature was undertaken to identify those anatomical structures within the thorax and abdomen that, if damaged, were highly likely to result in death or significant long-term morbidity. These structures were superimposed upon two designs of ceramic plate used within representative body armour systems using a computerised representation of human anatomy. The structures requiring essential medical coverage by a plate were demonstrated to be the heart, great vessels, liver and spleen. For the 50th centile male anthropometric model used in this study, the front and rear plates from the Enhanced Combat Body Armour system provide only limited coverage, but do fulfil their original requirement. The plates from the current Mark 4a OSPREY system cover all of the structures identified in this study as requiring coverage except for the abdominal sections of the aorta and inferior vena cava. Further work on sizing of plates is recommended due to its potential to optimise essential medical coverage. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  6. The generation of chromosomal deletions to provide extensive coverage and subdivision of the Drosophila melanogaster genome.

    Science.gov (United States)

    Cook, R Kimberley; Christensen, Stacey J; Deal, Jennifer A; Coburn, Rachel A; Deal, Megan E; Gresens, Jill M; Kaufman, Thomas C; Cook, Kevin R

    2012-01-01

    Chromosomal deletions are used extensively in Drosophila melanogaster genetics research. Deletion mapping is the primary method used for fine-scale gene localization. Effective and efficient deletion mapping requires both extensive genomic coverage and a high density of molecularly defined breakpoints across the genome. A large-scale resource development project at the Bloomington Drosophila Stock Center has improved the choice of deletions beyond that provided by previous projects. FLP-mediated recombination between FRT-bearing transposon insertions was used to generate deletions, because it is efficient and provides single-nucleotide resolution in planning deletion screens. The 793 deletions generated pushed coverage of the euchromatic genome to 98.4%. Gaps in coverage contain haplolethal and haplosterile genes, but the sizes of these gaps were minimized by flanking these genes as closely as possible with deletions. In improving coverage, a complete inventory of haplolethal and haplosterile genes was generated and extensive information on other haploinsufficient genes was compiled. To aid mapping experiments, a subset of deletions was organized into a Deficiency Kit to provide maximal coverage efficiently. To improve the resolution of deletion mapping, screens were planned to distribute deletion breakpoints evenly across the genome. The median chromosomal interval between breakpoints now contains only nine genes and 377 intervals contain only single genes. Drosophila melanogaster now has the most extensive genomic deletion coverage and breakpoint subdivision as well as the most comprehensive inventory of haploinsufficient genes of any multicellular organism. The improved selection of chromosomal deletion strains will be useful to nearly all Drosophila researchers.

  7. 42 CFR 423.464 - Coordination of benefits with other providers of prescription drug coverage.

    Science.gov (United States)

    2010-10-01

    ... fees. CMS may impose user fees on Part D plans for the transmittal of information necessary for benefit...) Provides supplemental drug coverage to individuals based on financial need, age, or medical condition, and... effective exchange of information and coordination between such plan and SPAPs and entities providing other...

  8. 75 FR 27141 - Group Health Plans and Health Insurance Issuers Providing Dependent Coverage of Children to Age...

    Science.gov (United States)

    2010-05-13

    ... Group Health Plans and Health Insurance Issuers Providing Dependent Coverage of Children to Age 26 Under... Information and Insurance Oversight of the U.S. Department of Health and Human Services are issuing substantially similar interim final regulations with respect to group health plans and health insurance coverage...

  9. 75 FR 41787 - Requirement for Group Health Plans and Health Insurance Issuers To Provide Coverage of Preventive...

    Science.gov (United States)

    2010-07-19

    ... Requirement for Group Health Plans and Health Insurance Issuers To Provide Coverage of Preventive Services... Insurance Oversight of the U.S. Department of Health and Human Services are issuing substantially similar interim final regulations with respect to group health plans and health insurance coverage offered in...

  10. A PC parallel port button box provides millisecond response time accuracy under Linux.

    Science.gov (United States)

    Stewart, Neil

    2006-02-01

    For psychologists, it is sometimes necessary to measure people's reaction times to the nearest millisecond. This article describes how to use the PC parallel port to receive signals from a button box to achieve millisecond response time accuracy. The workings of the parallel port, the corresponding port addresses, and a simple Linux program for controlling the port are described. A test of the speed and reliability of button box signal detection is reported. If the reader is moderately familiar with Linux, this article should provide sufficient instruction for him or her to build and test his or her own parallel port button box. This article also describes how the parallel port could be used to control an external apparatus.
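    The article's own program is written for Linux; as a hedged illustration of the same idea, the sketch below polls the LPT1 status register through Linux's /dev/port interface (this requires root; the address 0x379 is the conventional status register for a port at base 0x378, and the bit mask assumes the button drives the nACK line — both are assumptions, not details from the article):

```python
import os

LPT1_STATUS = 0x379  # status register for a parallel port at base 0x378

def wait_for_button(poll_mask: int = 0x40) -> None:
    """Busy-wait until the nACK status line (bit 6) changes state.

    Busy-waiting rather than sleeping is what makes millisecond timing
    feasible: the loop samples the port as fast as the OS allows.
    """
    fd = os.open("/dev/port", os.O_RDONLY)  # x86 I/O port space (needs root)
    try:
        os.lseek(fd, LPT1_STATUS, os.SEEK_SET)
        initial = os.read(fd, 1)[0] & poll_mask
        while True:
            os.lseek(fd, LPT1_STATUS, os.SEEK_SET)
            if (os.read(fd, 1)[0] & poll_mask) != initial:
                return  # take the timestamp immediately on return
    finally:
        os.close(fd)
```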

  11. Coverage of the Stanford Prison Experiment in Introductory Psychology Courses

    Science.gov (United States)

    Bartels, Jared M.; Milovich, Marilyn M.; Moussier, Sabrina

    2016-01-01

    The present study examined the coverage of Stanford prison experiment (SPE), including criticisms of the study, in introductory psychology courses through an online survey of introductory psychology instructors (N = 117). Results largely paralleled those of the recently published textbook analyses with ethical issues garnering the most coverage,…

  12. Advanced parallel processing with supercomputer architectures

    International Nuclear Information System (INIS)

    Hwang, K.

    1987-01-01

    This paper investigates advanced parallel processing techniques and innovative hardware/software architectures that can be applied to boost the performance of supercomputers. Critical issues on architectural choices, parallel languages, compiling techniques, resource management, concurrency control, programming environment, parallel algorithms, and performance enhancement methods are examined and the best answers are presented. The authors cover advanced processing techniques suitable for supercomputers, high-end mainframes, minisupers, and array processors. The coverage emphasizes vectorization, multitasking, multiprocessing, and distributed computing. In order to achieve these operation modes, parallel languages, smart compilers, synchronization mechanisms, load balancing methods, mapping parallel algorithms, operating system functions, application library, and multidiscipline interactions are investigated to ensure high performance. At the end, they assess the potentials of optical and neural technologies for developing future supercomputers

  13. Non-Cartesian parallel imaging reconstruction.

    Science.gov (United States)

    Wright, Katherine L; Hamilton, Jesse I; Griswold, Mark A; Gulani, Vikas; Seiberlich, Nicole

    2014-11-01

    Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be used to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the nonhomogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian generalized autocalibrating partially parallel acquisition (GRAPPA), and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several potential promising clinical applications of non-Cartesian parallel imaging will be covered. © 2014 Wiley Periodicals, Inc.
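    As a sketch of the standard framing that CG SENSE builds on (and that GRAPPA and SPIRiT reformulate in k-space) — the notation below is the generic parallel-imaging formulation, not the review's own equations — reconstruction is posed as a least-squares inverse problem over the coil array:

```latex
% Generic parallel-imaging inverse problem: x is the image, y_c the k-space
% samples from coil c (possibly on a non-Cartesian trajectory), S_c the coil
% sensitivity map, and F the (non-uniform) Fourier sampling operator.
\hat{x} = \arg\min_{x} \sum_{c=1}^{N_c} \left\| F S_c x - y_c \right\|_2^2
% CG SENSE solves the normal equations E^{H} E x = E^{H} y by conjugate
% gradients, where E stacks F S_c over all N_c coils; for non-Cartesian
% trajectories F is applied via gridding or a NUFFT.
```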

  14. Radiographic Underestimation of In Vivo Cup Coverage Provided by Total Hip Arthroplasty for Dysplasia.

    Science.gov (United States)

    Nie, Yong; Wang, HaoYang; Huang, ZeYu; Shen, Bin; Kraus, Virginia Byers; Zhou, Zongke

    2018-01-01

The accuracy of using 2-dimensional anteroposterior pelvic radiography to assess acetabular cup coverage among patients with developmental dysplasia of the hip after total hip arthroplasty (THA) remains unclear in retrospective clinical studies. A group of 20 patients with developmental dysplasia of the hip (20 hips) underwent cementless THA. During surgery but after acetabular reconstruction, bone wax was pressed onto the uncovered surface of the acetabular cup. A surface model of the bone wax was generated with 3-dimensional scanning. The percentage of the acetabular cup that was covered by intact host acetabular bone in vivo was calculated with modeling software. Acetabular cup coverage also was determined from a postoperative supine anteroposterior pelvic radiograph. The height of the hip center (distance from the center of the femoral head perpendicular to the inter-teardrop line) also was determined from radiographs. Radiographic cup coverage was a mean of 6.93% (SD, 2.47%) lower than in vivo cup coverage for these 20 patients with developmental dysplasia of the hip (P<.001) and correlated strongly with in vivo cup coverage (Pearson r=0.761, P<.001). The size of the cup (P=.001), but not the position of the hip center (high vs normal), was significantly associated with the difference between radiographic and in vivo cup coverage. Two-dimensional radiographically determined cup coverage conservatively reflects in vivo cup coverage and remains an important index (taking 7% underestimation errors and the effect of greater underestimation of larger cup size into account) for assessing the stability of the cup and monitoring for adequate ingrowth of bone. [Orthopedics. 2018; 41(1):e46-e51.]. Copyright 2017, SLACK Incorporated.

  15. Coverage and quality of antenatal care provided at primary health care facilities in the 'Punjab' province of 'Pakistan'.

    Directory of Open Access Journals (Sweden)

    Muhammad Ashraf Majrooh

BACKGROUND: Antenatal care is a very important component of maternal health services. It provides the opportunity to learn about risks associated with pregnancy and guides planning of the place of delivery, thereby preventing maternal and infant morbidity and mortality. In Pakistan, antenatal services for the rural population are provided through a network of primary health care facilities designated as Basic Health Units and Rural Health Centers. Pakistan is a developing country consisting of four provinces and federally administered areas; each province is administratively subdivided into divisions and districts. By population, Punjab is the largest province of Pakistan, having 36 districts. This study was conducted to assess the coverage and quality of antenatal care in the primary health care facilities of the Punjab province of Pakistan. METHODS: Quantitative and qualitative methods were used to collect data. Using a multistage sampling technique, nine out of thirty-six districts were selected, and 19 public-sector primary health care facilities (seventeen Basic Health Units and two Rural Health Centers) were randomly selected from each district. Focus group discussions and in-depth interviews were conducted with clients, providers and health managers. RESULTS: The overall enrollment for antenatal checkups was 55.9%, and dropout across subsequent visits was 32.9%. The quality of services regarding assessment, treatment and counseling was extremely poor. The reasons for low coverage and quality were the distant location of facilities, deficient facility resources, and the indifferent attitude and non-availability of staff. Moreover, lack of client awareness about the importance of antenatal care and lack of self-empowerment in decision-making to seek care were also responsible for low coverage. CONCLUSION: The coverage and quality of antenatal care services in Punjab are extremely compromised. Only half of the expected pregnancies are enrolled and

  16. Distributed and cloud computing from parallel processing to the Internet of Things

    CERN Document Server

    Hwang, Kai; Fox, Geoffrey C

    2012-01-01

Distributed and Cloud Computing, named a 2012 Outstanding Academic Title by the American Library Association's Choice publication, explains how to create high-performance, scalable, reliable systems, exposing the design principles, architecture, and innovative applications of parallel, distributed, and cloud computing systems. Starting with an overview of modern distributed models, the book provides comprehensive coverage of distributed and cloud computing, including: facilitating management, debugging, migration, and disaster recovery through virtualization; clustered systems for resear

  17. Practical parallel computing

    CERN Document Server

    Morse, H Stephen

    1994-01-01

    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment.Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi

  18. Comparison of NIS and NHIS/NIPRCS vaccination coverage estimates. National Immunization Survey. National Health Interview Survey/National Immunization Provider Record Check Study.

    Science.gov (United States)

    Bartlett, D L; Ezzati-Rice, T M; Stokley, S; Zhao, Z

    2001-05-01

The National Immunization Survey (NIS) and the National Health Interview Survey (NHIS) produce national coverage estimates for children aged 19 months to 35 months. The NIS is a cost-effective, random-digit-dialing telephone survey that produces national and state-level vaccination coverage estimates. The National Immunization Provider Record Check Study (NIPRCS) is conducted in conjunction with the annual NHIS, which is a face-to-face household survey. Because the NIS is a telephone survey, potential coverage bias exists: the survey excludes children living in nontelephone households. To assess the validity of estimates of vaccine coverage from the NIS, we compared 1995 and 1996 NIS national estimates with results from the NHIS/NIPRCS for the same years. Both the NIS and the NHIS/NIPRCS produce similar results. The NHIS/NIPRCS supports the findings of the NIS.

  19. Medicaid and CHIP Provide Coverage to More than Half of All Children in D.C. Policy Snapshot

    Science.gov (United States)

    DC Action for Children, 2011

    2011-01-01

    Medicaid and CHIP are crucial parts of the social safety net, providing health insurance coverage to more than half of all children ages 0-21 in D.C. and a third of children nationally. Without these two programs, more than 97,000 children in the District would have been uninsured in 2010. New research indicates that compared with the uninsured,…

  20. Scaling up machine learning: parallel and distributed approaches

    National Research Council Canada - National Science Library

    Bekkerman, Ron; Bilenko, Mikhail; Langford, John

    2012-01-01

    ... presented in the book cover a range of parallelization platforms from FPGAs and GPUs to multi-core systems and commodity clusters; concurrent programming frameworks that include CUDA, MPI, MapReduce, and DryadLINQ; and various learning settings: supervised, unsupervised, semi-supervised, and online learning. Extensive coverage of parallelizat...

  1. ATLAS FTK a - very complex - custom parallel supercomputer

    CERN Document Server

    Kimura, Naoki; The ATLAS collaboration

    2016-01-01

In the ever-increasing pile-up environment of the LHC, advanced data-analysis techniques are implemented in order to increase the rate of relevant physics processes with respect to background processes. The Fast TracKer (FTK) is a hardware-level track-finding implementation designed to deliver full-scan tracks with $p_{T}$ above 1 GeV to the ATLAS trigger system for every L1 accept (at a maximum rate of 100 kHz). In order to achieve this performance a highly parallel system was designed, and it is now under installation in ATLAS. In the beginning of 2016 it will provide tracks to the trigger system in a region covering the central part of the ATLAS detector, and during the year its coverage will be extended to the full detector. The system relies on matching hits from the silicon tracking detectors against 1 billion patterns stored in specially designed ASIC chips (Associative Memory, AM06). In a first stage, coarse-resolution hits are matched against the patterns and the accepted h...
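    As a toy illustration of associative-memory pattern matching (a drastic simplification with invented numbers: the real AM06 ASICs compare every incoming hit against the full pattern bank in parallel in hardware, and FTK uses more detector layers than the four shown here):

```python
# A "pattern" is a tuple of coarse hit IDs (superstrips), one per layer.
N_LAYERS = 4
pattern_bank = {
    (3, 7, 2, 9),
    (3, 7, 2, 8),
    (1, 4, 4, 0),
}

def coarsen(hit: int, granularity: int = 8) -> int:
    """Map a full-resolution hit to its coarse superstrip ID."""
    return hit // granularity

def match_event(hits_per_layer: list) -> set:
    """Return stored patterns for which every layer has a matching coarse hit."""
    coarse = [{coarsen(h) for h in layer} for layer in hits_per_layer]
    return {p for p in pattern_bank
            if all(p[i] in coarse[i] for i in range(N_LAYERS))}

# Matched patterns ("roads") then seed a second, full-resolution track fit.
roads = match_event([[24, 31], [56, 57], [16], [72, 77]])  # -> {(3, 7, 2, 9)}
```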

  2. Providing Coverage for the Unique Lifelong Health Care Needs of Living Kidney Donors Within the Framework of Financial Neutrality.

    Science.gov (United States)

    Gill, J S; Delmonico, F; Klarenbach, S; Capron, A M

    2017-05-01

    Organ donation should neither enrich donors nor impose financial burdens on them. We described the scope of health care required for all living kidney donors, reflecting contemporary understanding of long-term donor health outcomes; proposed an approach to identify donor health conditions that should be covered within the framework of financial neutrality; and proposed strategies to pay for this care. Despite the Affordable Care Act in the United States, donors continue to have inadequate coverage for important health conditions that are donation related or that may compromise postdonation kidney function. Amendment of Medicare regulations is needed to clarify that surveillance and treatment of conditions that may compromise postdonation kidney function following donor nephrectomy will be covered without expense to the donor. In other countries lacking health insurance for all residents, sufficient data exist to allow the creation of a compensation fund or donor insurance policies to ensure appropriate care. Providing coverage for donation-related sequelae as well as care to preserve postdonation kidney function ensures protection against the financial burdens of health care encountered by donors throughout their lives. Providing coverage for this care should thus be cost-effective, even without considering the health care cost savings that occur for living donor transplant recipients. © 2016 The American Society of Transplantation and the American Society of Transplant Surgeons.

  3. Maiden immunization coverage survey in the republic of South Sudan: a cross-sectional study providing baselines for future performance measurement

    Science.gov (United States)

    Mbabazi, William; Lako, Anthony K; Ngemera, Daniel; Laku, Richard; Yehia, Mostafah; Nshakira, Nathan

    2013-01-01

Introduction Since the comprehensive peace agreement was signed in 2005, institutionalization of immunization services in South Sudan has remained a priority. Routine administrative reporting systems were established and showed that national coverage rates for DTP-3 rose from 20% in 2002 to 80% in 2011. This survey was conducted as part of an overall review of progress in implementation of the first EPI Multi-Year Plan for South Sudan 2007-2011. This report provides maiden community coverage estimates for immunization. Methods A cross-sectional community survey was conducted between January and May 2012. Ten cluster surveys were conducted to generate state-specific coverage estimates. The WHO 30x7 cluster sampling method was employed. Data were collected using pre-tested, interviewer-guided, structured questionnaires through house-to-house visits. Results The proportion of fully immunized children was 7.3%. Coverage rates for specific antigens were: BCG (28.3%), DTP-1 (25.9%), DTP-3 (22.0%), measles (16.8%). The drop-out rate between the first and third doses of DTP was 21.3%. Immunization coverage estimates based on card and history were higher, at 45.7% for DTP-3, 45.8% for MCV and 32.2% for full immunization. The majority of immunizations (80.8%) were received at health facilities compared to community service points (19.2%). The major reason for missed immunizations was inadequate information (41.1%). Conclusion The proportion of card-verified, fully vaccinated children aged 12-23 months is very low at 7.3%. Future efforts to improve vaccination quality and coverage should prioritize training of vaccinators and program communication to levels equivalent to or higher than the investments in EPI cold chain systems since 2007. PMID:24876899

  4. Coverage Metrics for Model Checking

    Science.gov (United States)

    Penix, John; Visser, Willem; Norvig, Peter (Technical Monitor)

    2001-01-01

    When using model checking to verify programs in practice, it is not usually possible to achieve complete coverage of the system. In this position paper we describe ongoing research within the Automated Software Engineering group at NASA Ames on the use of test coverage metrics to measure partial coverage and provide heuristic guidance for program model checking. We are specifically interested in applying and developing coverage metrics for concurrent programs that might be used to support certification of next generation avionics software.
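    As a minimal sketch of the underlying idea — measuring how much of a system's behaviour a partial exploration has touched — the toy below computes transition coverage for random walks over a small state graph (the state machine and the metric are our illustration, not the group's tools):

```python
import random

transitions = {          # toy concurrent system: state -> successor states
    "s0": ["s1", "s2"],
    "s1": ["s0", "s3"],
    "s2": ["s3"],
    "s3": [],
}

def transition_coverage(n_runs: int, max_depth: int, seed: int = 0) -> float:
    """Fraction of transitions exercised by random partial exploration."""
    rng = random.Random(seed)
    covered = set()                      # (state, successor) pairs seen
    total = sum(len(v) for v in transitions.values())
    for _ in range(n_runs):
        state = "s0"
        for _ in range(max_depth):
            succs = transitions[state]
            if not succs:
                break
            nxt = rng.choice(succs)
            covered.add((state, nxt))
            state = nxt
    return len(covered) / total          # partial-coverage ratio in [0, 1]

print(f"transition coverage: {transition_coverage(10, 5):.0%}")
```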

  5. 14 CFR 1260.131 - Insurance coverage.

    Science.gov (United States)

    2010-01-01

    ... coverage. Recipients shall, at a minimum, provide the equivalent insurance coverage for real property and equipment acquired with Federal funds as provided for property owned by the recipient. Federally-owned property need not be insured unless required by the terms and conditions of the award. ...

  6. 2 CFR 215.31 - Insurance coverage.

    Science.gov (United States)

    2010-01-01

    ... Insurance coverage. Recipients shall, at a minimum, provide the equivalent insurance coverage for real property and equipment acquired with Federal funds as provided to property owned by the recipient. Federally-owned property need not be insured unless required by the terms and conditions of the award. ...

  7. 36 CFR 1210.31 - Insurance coverage.

    Science.gov (United States)

    2010-07-01

    ....31 Insurance coverage. Recipients shall, at a minimum, provide the equivalent insurance coverage for real property and equipment acquired with NHPRC funds as provided to property owned by the recipient. Federally-owned property need not be insured unless required by the terms and conditions of the award. ...

  8. Practical tools to implement massive parallel pyrosequencing of PCR products in next generation molecular diagnostics.

    Directory of Open Access Journals (Sweden)

    Kim De Leeneer

Despite improvements in terms of sequence quality and price per basepair, Sanger sequencing remains restricted to screening of individual disease genes. The development of massively parallel sequencing (MPS) technologies heralded an era in which molecular diagnostics for multigenic disorders becomes reality. Here, we outline different PCR amplification based strategies for the screening of a multitude of genes in a patient cohort. We performed a thorough evaluation in terms of set-up, coverage and sequencing variants on the data of 10 GS-FLX experiments (over 200 patients). Crucially, we determined the actual coverage that is required for reliable diagnostic results using MPS, and provide a tool to calculate the number of patients that can be screened in a single run. Finally, we provide an overview of factors contributing to false negative or false positive mutation calls and suggest ways to maximize sensitivity and specificity, both important in a routine setting. By describing practical strategies for screening of multigenic disorders in a multitude of samples and providing answers to questions about minimum required coverage, the number of patients that can be screened in a single run and the factors that may affect sensitivity and specificity we hope to facilitate the implementation of MPS technology in molecular diagnostics.
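    The back-of-envelope logic behind such a patients-per-run calculation can be sketched as follows (the function name, parameters, and the usable-read fraction are our assumptions; the paper's tool works from its empirically determined minimum coverage):

```python
import math

def patients_per_run(total_reads: float,
                     n_amplicons: int,
                     min_coverage: int,
                     usable_fraction: float = 0.7) -> int:
    """Estimate how many patients fit in one sequencing run.

    Divide the usable read budget evenly across all amplicons of all
    patients, requiring `min_coverage` reads per amplicon per patient.
    """
    reads_needed_per_patient = n_amplicons * min_coverage
    return math.floor(total_reads * usable_fraction / reads_needed_per_patient)

# e.g. a run of ~1e6 reads, 300 amplicons per patient, 40x minimum coverage
print(patients_per_run(total_reads=1e6, n_amplicons=300, min_coverage=40))
```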

  9. Aspects of coverage in medical DNA sequencing

    Directory of Open Access Journals (Sweden)

    Wilson Richard K

    2008-05-01

Background: DNA sequencing is now emerging as an important component in biomedical studies of diseases like cancer. Short-read, highly parallel sequencing instruments are expected to be used heavily for such projects, but many design specifications have yet to be conclusively established. Perhaps the most fundamental of these is the redundancy required to detect sequence variations, which bears directly upon genomic coverage and the consequent resolving power for discerning somatic mutations. Results: We address the medical sequencing coverage problem via an extension of the standard mathematical theory of haploid coverage. The expected diploid multi-fold coverage, as well as its generalization for aneuploidy, are derived, and these expressions can be readily evaluated for any project. The resulting theory is used as a scaling law to calibrate performance to that of standard BAC sequencing at 8× to 10× redundancy, i.e. for expected coverages that exceed 99% of the unique sequence. A differential strategy is formalized for tumor/normal studies wherein tumor samples are sequenced more deeply than normal ones. In particular, both tumor alleles should be detected at least twice, while both normal alleles are detected at least once. Our theory predicts these requirements can be met for tumor and normal redundancies of approximately 26× and 21×, respectively. We explain why these values do not differ by a factor of 2, as might intuitively be expected. Conclusion: Given the assumptions of standard coverage theory, our model gives pragmatic estimates for required redundancy. The differential strategy should be an efficient means of identifying potential somatic mutations for further study.
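    The standard haploid theory the abstract extends treats the depth at a fixed base as approximately Poisson with mean equal to the redundancy c; a reconstruction of that starting point and its diploid extension (our notation, not the paper's exact expressions) is:

```latex
% Haploid coverage (Lander-Waterman style): with redundancy c, the depth D
% at a fixed base is approximately Poisson(c), so
P(D \ge k) = 1 - e^{-c} \sum_{j=0}^{k-1} \frac{c^{j}}{j!}
% Diploid extension: each allele is sampled at rate c/2, so requiring depth
% >= k on both alleles (k = 2 for tumor, k = 1 for normal in the paper's
% differential strategy) gives, per position,
P_{\mathrm{both}}(k) = \left( 1 - e^{-c/2} \sum_{j=0}^{k-1} \frac{(c/2)^{j}}{j!} \right)^{2}
% The two requirements scale differently in c, which is why the implied
% tumor and normal redundancies (about 26x and 21x) do not differ by 2x.
```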

  10. 29 CFR 95.31 - Insurance coverage.

    Science.gov (United States)

    2010-07-01

    ... recipient. Federally-owned property need not be insured unless required by the terms and conditions of the... § 95.31 Insurance coverage. Recipients shall, at a minimum, provide the equivalent insurance coverage...

  11. Variation in hepatitis B immunization coverage rates associated with provider practices after the temporary suspension of the birth dose

    Directory of Open Access Journals (Sweden)

    Mullooly John P

    2006-11-01

Background: In 1999, the American Academy of Pediatrics and U.S. Public Health Service recommended suspending the birth dose of hepatitis B vaccine due to concerns about potential mercury exposure. A previous report found that overall national hepatitis B vaccination coverage rates decreased in association with the suspension. It is unknown whether this underimmunization occurred uniformly or was associated with how providers changed their practices for the timing of hepatitis B vaccine doses. We evaluate the impact of the birth dose suspension on underimmunization for the hepatitis B vaccine series among 24-month-olds in five large provider groups and describe provider practices potentially associated with underimmunization following the suspension. Methods: Retrospective cohort study of children enrolled in five large provider groups (A-E) in the United States. Logistic regression was used to evaluate the association between the birth dose suspension and a child's probability of being underimmunized at 24 months for the hepatitis B vaccine series. Results: Prior to July 1999, the percentage of children who received a hepatitis B vaccination at birth varied widely (3% to 90%) across the five provider groups. After the national recommendation to suspend the hepatitis B birth dose, the percentage of children who received a hepatitis B vaccination at birth decreased in all provider groups, and this trend persisted after the policy was reversed. The most substantial decreases were observed in the two provider groups that shifted the first hepatitis B dose from birth to 5-6 months of age. Accounting for temporal trend, children in these two provider groups were significantly more likely to be underimmunized for the hepatitis B series at 24 months of age if they were in the birth dose suspension cohort compared with baseline (Group D OR 2.7, 95% CI 1.7-4.4; Group E OR 3.1, 95% CI 2.3-4.2). This represented 6% more children in Group D and 9

  12. Improving Health Care Coverage, Equity, And Financial Protection Through A Hybrid System: Malaysia's Experience.

    Science.gov (United States)

    Rannan-Eliya, Ravindra P; Anuranga, Chamara; Manual, Adilius; Sararaks, Sondi; Jailani, Anis S; Hamid, Abdul J; Razif, Izzanie M; Tan, Ee H; Darzi, Ara

    2016-05-01

    Malaysia has made substantial progress in providing access to health care for its citizens and has been more successful than many other countries that are better known as models of universal health coverage. Malaysia's health care coverage and outcomes are now approaching levels achieved by member nations of the Organization for Economic Cooperation and Development. Malaysia's results are achieved through a mix of public services (funded by general revenues) and parallel private services (predominantly financed by out-of-pocket spending). We examined the distributional aspects of health financing and delivery and assessed financial protection in Malaysia's hybrid system. We found that this system has been effective for many decades in equalizing health care use and providing protection from financial risk, despite modest government spending. Our results also indicate that a high out-of-pocket share of total financing is not a consistent proxy for financial protection; greater attention is needed to the absolute level of out-of-pocket spending. Malaysia's hybrid health system presents continuing unresolved policy challenges, but the country's experience nonetheless provides lessons for other emerging economies that want to expand access to health care despite limited fiscal resources. Project HOPE—The People-to-People Health Foundation, Inc.

  13. Cooperative Cloud Service Aware Mobile Internet Coverage Connectivity Guarantee Protocol Based on Sensor Opportunistic Coverage Mechanism

    Directory of Open Access Journals (Sweden)

    Qin Qin

    2015-01-01

In order to improve the Internet coverage ratio and provide a connectivity guarantee, we propose a coverage connectivity guarantee protocol for the mobile Internet based on a sensor opportunistic coverage mechanism and cooperative cloud services. In this scheme, based on opportunistic covering rules, a network coverage algorithm with high reliability and real-time security is achieved by using the opportunities offered by sensor nodes and mobile Internet nodes. A cloud service business support platform is then created on top of the Internet application service management capabilities and the wireless sensor network communication service capabilities, forming the architecture of the cloud support layer, and a cooperative cloud service awareness model is proposed. Finally, we present the mobile Internet coverage connectivity guarantee protocol. Experimental results demonstrate that the proposed algorithm performs well in terms of Internet security, stability, and coverage connectivity.

  14. Dynamic balancing of mechanisms and synthesizing of parallel robots

    CERN Document Server

    Wei, Bin

    2016-01-01

This book covers the state-of-the-art technologies in dynamic balancing of mechanisms with minimum increase of mass and inertia. The synthesis of parallel robots based on the Decomposition and Integration concept is also covered in detail. The latest advances are described, including different balancing principles, design of reactionless mechanisms with minimum increase of mass and inertia, and synthesizing parallel robots. This is an ideal book for mechanical engineering students and researchers who are interested in the dynamic balancing of mechanisms and the synthesizing of parallel robots. This book also: broadens reader understanding of the synthesis of parallel robots based on the Decomposition and Integration concept; reinforces basic principles with detailed coverage of different balancing principles, including input torque balancing mechanisms; and reviews exhaustively the key recent research into the design of reactionless mechanisms with minimum increase of mass a...

  15. The health and healthcare impact of providing insurance coverage to uninsured children: A prospective observational study

    Directory of Open Access Journals (Sweden)

    Glenn Flores

    2017-05-01

Background: Of the 4.8 million uninsured children in America, 62-72% are eligible for but not enrolled in Medicaid or CHIP. Not enough is known, however, about the impact of health insurance on outcomes and costs for previously uninsured children, which has never been examined prospectively. Methods: This prospective observational study of uninsured Medicaid/CHIP-eligible minority children compared children obtaining coverage vs. those remaining uninsured. Subjects were recruited at 97 community sites, and 11 outcomes were monitored monthly for 1 year. Results: In this sample of 237 children, those obtaining coverage were significantly (P < .05) better off on the monitored outcomes; longer periods without insurance (>6 months) at baseline were associated with remaining uninsured for the entire year. In multivariable analysis, children who had been uninsured for >6 months at baseline (odds ratio [OR], 3.8; 95% confidence interval [CI], 1.4-10.3) and African-American children (OR, 2.8; 95% CI, 1.1-7.3) had significantly higher odds of remaining uninsured for the entire year. Insurance saved $2886/insured child/year, with mean healthcare costs = $5155/uninsured vs. $2269/insured child (P = .04). Conclusions: Providing health insurance to Medicaid/CHIP-eligible uninsured children improves health, healthcare access and quality, and parental satisfaction; reduces unmet needs and out-of-pocket costs; and saves $2886/insured child/year. African-American children and those who have been uninsured for >6 months are at greatest risk for remaining uninsured. Extrapolation of the savings realized by insuring uninsured, Medicaid/CHIP-eligible children suggests that America potentially could save $8.7-$10.1 billion annually by providing health insurance to all Medicaid/CHIP-eligible uninsured children.

  16. 7 CFR 1737.31 - Area Coverage Survey (ACS).

    Science.gov (United States)

    2010-01-01

    ... an ACS are provided in RUS Telecommunications Engineering and Construction Manual section 205. (e... Studies-Area Coverage Survey and Loan Design § 1737.31 Area Coverage Survey (ACS). (a) The Area Coverage... the borrower's records contain sufficient information as to subscriber development to enable cost...

  17. Why not private health insurance? 2. Actuarial principles meet provider dreams.

    Science.gov (United States)

    Deber, R; Gildiner, A; Baranek, P

    1999-09-07

    What do insurers and employers feel about proposals to expand Canadian health care financing through private insurance, in either a parallel stream or a supplementary tier? The authors conducted 10 semistructured, open-ended interviews in the autumn and early winter of 1996 with representatives of the insurance industry and benefits managers working with large employers; respondents were identified using a snowball sampling technique. The respondents felt that proposals for parallel private plans within a competitive market are incompatible with insurance principles, as long as a well-functioning and relatively comprehensive public system continues to exist; the maintenance of a strong public system was both socially and economically desirable. With the exception of serving the niche market for the private management of return-to-work strategies, respondents showed little interest in providing parallel coverage. They were receptive to a larger role for supplementary insurance but cautioned that they are not willing to cover all delisted services. As business executives they stated that they are willing to insure only services and clients that will be profitable.

  18. Development and application of a 6.5 million feature affymetrix genechip® for massively parallel discovery of single position polymorphisms in lettuce (Lactuca spp.)

    OpenAIRE

    Stoffel, Kevin; van Leeuwen, Hans; Kozik, Alexander; Caldwell, David; Ashrafi, Hamid; Cui, Xinping; Tan, Xiaoping; Hill, Theresa; Reyes-Chin-Wo, Sebastian; Truco, Maria-Jose; Michelmore, Richard W; Van Deynze, Allen

    2012-01-01

Background: High-resolution genetic maps are needed in many crops to help characterize the genetic diversity that determines agriculturally important traits. Hybridization to microarrays to detect single feature polymorphisms is a powerful technique for marker discovery and genotyping because of its highly parallel nature. However, microarrays designed for gene expression analysis rarely provide sufficient gene coverage for optimal detection o...

  19. Parallel rendering

    Science.gov (United States)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  20. 5 CFR 531.402 - Employee coverage.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Employee coverage. 531.402 Section 531... GENERAL SCHEDULE Within-Grade Increases § 531.402 Employee coverage. (a) Except as provided in paragraph (b) of this section, this subpart applies to employees who— (1) Are classified and paid under the...

  1. 42 CFR 436.321 - Medically needy coverage of the blind.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Medically needy coverage of the blind. 436.321... Optional Coverage of the Medically Needy § 436.321 Medically needy coverage of the blind. If the agency provides Medicaid to the medically needy, it may provide Medicaid to blind individuals who meet— (a) The...

  2. Development and application of a 6.5 million feature Affymetrix Genechip® for massively parallel discovery of single position polymorphisms in lettuce (Lactuca spp.)

    OpenAIRE

    Stoffel, Kevin; Kozik, Alexander; Ashrafi, Hamid; Cui, Xinping; Tan, Xiaoping; Hill, Theresa; Reyes-Chin-Wo, Sebastian; Truco, Maria-Jose; Michelmore, Richard W; Van Deynze, Allen

    2012-01-01

Background: High-resolution genetic maps are needed in many crops to help characterize the genetic diversity that determines agriculturally important traits. Hybridization to microarrays to detect single feature polymorphisms is a powerful technique for marker discovery and genotyping because of its highly parallel nature. However, microarrays designed for gene expression analysis rarely provide sufficient gene coverage for optimal detection of nucleotide polymorphisms, which limits u...

  3. 42 CFR 436.330 - Coverage for certain aliens.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Coverage for certain aliens. 436.330 Section 436... Coverage of the Medically Needy § 436.330 Coverage for certain aliens. If an agency provides Medicaid to... condition, as defined in § 440.255(c) of this chapter to those aliens described in § 436.406(c) of this...

  4. Delaunay Triangulation as a New Coverage Measurement Method in Wireless Sensor Network

    Science.gov (United States)

    Chizari, Hassan; Hosseini, Majid; Poston, Timothy; Razak, Shukor Abd; Abdullah, Abdul Hanan

    2011-01-01

    Sensing and communication coverage are among the most important trade-offs in Wireless Sensor Network (WSN) design. A minimum bound of sensing coverage is vital in scheduling, target tracking and redeployment phases, as well as providing communication coverage. Some methods measure the coverage as a percentage value, but detailed information has been missing. Two scenarios with equal coverage percentage may not have the same Quality of Coverage (QoC). In this paper, we propose a new coverage measurement method using Delaunay Triangulation (DT). This can provide the value for all coverage measurement tools. Moreover, it categorizes sensors as ‘fat’, ‘healthy’ or ‘thin’ to show the dense, optimal and scattered areas. It can also yield the largest empty area of sensors in the field. Simulation results show that the proposed DT method can achieve accurate coverage information, and provides many tools to compare QoC between different scenarios. PMID:22163792
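    A hedged sketch of the core computation (the sensing range and classification thresholds below are invented for illustration; the paper defines its own criteria for 'fat', 'healthy' and 'thin'):

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)
sensors = rng.uniform(0, 100, size=(50, 2))  # 50 sensors in a 100x100 field
tri = Delaunay(sensors)

def circumradius(p0, p1, p2) -> float:
    """Circumradius of a triangle: R = abc / (4 * area)."""
    a = np.linalg.norm(p1 - p0)
    b = np.linalg.norm(p2 - p1)
    c = np.linalg.norm(p0 - p2)
    area = 0.5 * abs((p1 - p0)[0] * (p2 - p0)[1] - (p1 - p0)[1] * (p2 - p0)[0])
    return a * b * c / (4 * area)

SENSING_RANGE = 12.0  # assumed sensing radius
for simplex in tri.simplices:
    r = circumradius(*sensors[simplex])
    # A circumradius beyond the sensing range implies a point (the
    # circumcenter) no vertex sensor covers: a hole, i.e. a "thin" area.
    if r > SENSING_RANGE:
        label = "thin"
    elif r < 0.5 * SENSING_RANGE:
        label = "fat"      # densely covered
    else:
        label = "healthy"  # near-optimal spacing
```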

  5. Neural Parallel Engine: A toolbox for massively parallel neural signal processing.

    Science.gov (United States)

    Tam, Wing-Kin; Yang, Zhi

    2018-05-01

    Large-scale neural recordings provide detailed information on neuronal activities and can help elicit the underlying neural mechanisms of the brain. However, the computational burden is also formidable when we try to process the huge data stream generated by such recordings. In this study, we report the development of Neural Parallel Engine (NPE), a toolbox for massively parallel neural signal processing on graphical processing units (GPUs). It offers a selection of the most commonly used routines in neural signal processing such as spike detection and spike sorting, including advanced algorithms such as exponential-component-power-component (EC-PC) spike detection and binary pursuit spike sorting. We also propose a new method for detecting peaks in parallel through a parallel compact operation. Our toolbox is able to offer a 5× to 110× speedup compared with its CPU counterparts depending on the algorithms. A user-friendly MATLAB interface is provided to allow easy integration of the toolbox into existing workflows. Previous efforts on GPU neural signal processing only focus on a few rudimentary algorithms, are not well-optimized and often do not provide a user-friendly programming interface to fit into existing workflows. There is a strong need for a comprehensive toolbox for massively parallel neural signal processing. A new toolbox for massively parallel neural signal processing has been created. It can offer significant speedup in processing signals from large-scale recordings up to thousands of channels. Copyright © 2018 Elsevier B.V. All rights reserved.
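    A CPU sketch of the parallel peak-detection idea, with NumPy standing in for the GPU (the simple amplitude-threshold rule and all names are our assumptions, not the toolbox's API): every sample is tested independently, and a compaction step then gathers the indices of the hits into a dense array.

```python
import numpy as np

def detect_peaks(signal: np.ndarray, threshold: float) -> np.ndarray:
    """Return indices of local maxima exceeding `threshold`.

    Each interior sample is tested independently (embarrassingly
    parallel), producing a boolean mask; the compact step gathers the
    indices of the True entries, mirroring a GPU stream compaction.
    """
    mid = signal[1:-1]
    is_peak = (mid > threshold) & (mid >= signal[:-2]) & (mid > signal[2:])
    return np.flatnonzero(is_peak) + 1   # +1: mask is offset by one sample

t = np.linspace(0, 1, 2000)
trace = np.sin(2 * np.pi * 8 * t) + 0.1 * np.random.default_rng(0).standard_normal(2000)
peaks = detect_peaks(trace, threshold=0.8)
```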

  6. runjags: An R Package Providing Interface Utilities, Model Templates, Parallel Computing Methods and Additional Distributions for MCMC Models in JAGS

    Directory of Open Access Journals (Sweden)

    Matthew J. Denwood

    2016-07-01

The runjags package provides a set of interface functions to facilitate running Markov chain Monte Carlo models in JAGS from within R. Automated calculation of appropriate convergence and sample length diagnostics, user-friendly access to commonly used graphical outputs and summary statistics, and parallelized methods of running JAGS are provided. Template model specifications can be generated using a standard lme4-style formula interface to assist users less familiar with the BUGS syntax. Automated simulation study functions are implemented to facilitate model performance assessment, as well as drop-k type cross-validation studies, using high performance computing clusters such as those provided by parallel. A module extension for JAGS is also included within runjags, providing the Pareto family of distributions and a series of minimally-informative priors including the DuMouchel and half-Cauchy priors. This paper outlines the primary functions of this package, and gives an illustration of a simulation study to assess the sensitivity of two equivalent model formulations to different prior distributions.

  7. A possibility of parallel and anti-parallel diffraction measurements on ...

    Indian Academy of Sciences (India)

However, a bent perfect crystal (BPC) monochromator at the monochromatic focusing condition can provide a quite flat and equal resolution property at both parallel and anti-parallel positions, and thus one has the chance to use both sides for the diffraction experiment. From the data of the FWHM and the Δd/d measured ...

  8. Technical support for universal health coverage pilots in Karnataka ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Technical support for universal health coverage pilots in Karnataka and Kerala. This project will provide evidence-based support to implement universal health coverage (UHC) pilot activities in two Indian states: Kerala and Karnataka. The project team will provide technical assistance to these early adopter states to assist ...

  9. Quantification of massively parallel sequencing libraries - a comparative study of eight methods

    DEFF Research Database (Denmark)

    Hussing, Christian; Kampmann, Marie-Louise; Mogensen, Helle Smidt

    2018-01-01

Quantification of massively parallel sequencing libraries is important for acquisition of monoclonal beads or clusters prior to clonal amplification and to avoid large variations in library coverage when multiple samples are included in one sequencing analysis. No gold standard for quantification ... estimates followed by Qubit and electrophoresis-based instruments (Bioanalyzer, TapeStation, GX Touch, and Fragment Analyzer), while SYBR Green and TaqMan based qPCR assays gave the lowest estimates. qPCR gave more accurate predictions of sequencing coverage than Qubit and TapeStation did. Costs, time consumption, workflow simplicity, and ability to quantify multiple samples are discussed. Technical specifications, advantages, and disadvantages of the various methods are pointed out.

  10. Modelling the implications of moving towards universal coverage in Tanzania.

    Science.gov (United States)

    Borghi, Josephine; Mtei, Gemini; Ally, Mariam

    2012-03-01

A model was developed to assess the impact of possible moves towards universal coverage in Tanzania over a 15-year time frame. Three scenarios were considered: maintaining the current situation ('the status quo'); expanded health insurance coverage (the estimated maximum achievable coverage in the absence of premium subsidies, coverage restricted to those who can pay); universal coverage to all (government revenues used to pay the premiums for the poor). The model estimated the costs of delivering public health services and all health services to the population as a proportion of Gross Domestic Product (GDP), and forecast revenue from user fees and insurance premiums. Under the status quo, financial protection is provided to 10% of the population through health insurance schemes, with the remaining population benefiting from subsidized user charges in public facilities. Seventy-six per cent of the population would benefit from financial protection through health insurance under the expanded coverage scenario, and 100% of the population would receive such protection through a mix of insurance cover and government funding under the universal coverage scenario. The expanded and universal coverage scenarios have a significant effect on utilization levels, especially for public outpatient care. Universal coverage would require an initial doubling in the proportion of GDP going to the public health system. Government health expenditure would increase to 18% of total government expenditure. The results are sensitive to the cost of health system strengthening, the level of real GDP growth, provider reimbursement rates and administrative costs. Promoting greater cross-subsidization between insurance schemes would provide sufficient resources to finance universal coverage. Alternatively, greater tax funding for health could be generated through an increase in the rate of Value-Added Tax (VAT) or expanding the income tax base. The feasibility and sustainability of efforts to
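    The flavor of this scenario bookkeeping can be sketched as follows (all numbers are hypothetical placeholders chosen only to show how coverage-driven utilization feeds the cost-to-GDP ratio; the actual model is far more detailed):

```python
# Toy projection: coverage scenarios scale utilization, which scales cost.
scenarios = {
    "status quo":        {"insured_share": 0.10, "visits_per_capita": 1.0},
    "expanded coverage": {"insured_share": 0.76, "visits_per_capita": 1.6},
    "universal":         {"insured_share": 1.00, "visits_per_capita": 2.0},
}

POP = 45e6         # population (hypothetical)
UNIT_COST = 12.0   # cost per outpatient-equivalent visit, USD (hypothetical)
GDP = 28e9         # USD (hypothetical)

for name, s in scenarios.items():
    cost = POP * s["visits_per_capita"] * UNIT_COST  # total service cost
    print(f"{name:18s} cost = {100 * cost / GDP:.1f}% of GDP")
```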

  11. Insurance Coverage Policies for Personalized Medicine

    Directory of Open Access Journals (Sweden)

    Andrew Hresko

    2012-10-01

Adoption of personalized medicine in practice has been slow, in part due to the lack of evidence of clinical benefit provided by these technologies. Coverage by insurers is a critical step in achieving widespread adoption of personalized medicine. Insurers consider a variety of factors when formulating medical coverage policies for personalized medicine, including the overall strength of evidence for a test, availability of clinical guidelines and health technology assessments by independent organizations. In this study, we reviewed coverage policies of the largest U.S. insurers for genomic (disease-related) and pharmacogenetic (PGx) tests to determine the extent to which these tests were covered and the evidence basis for the coverage decisions. We identified 41 coverage policies for 49 unique tests: 22 tests for disease diagnosis, prognosis and risk, and 27 PGx tests. Fifty percent (or less) of the tests reviewed were covered by insurers. Lack of evidence of clinical utility appears to be a major factor in decisions of non-coverage. The inclusion of PGx information in drug package inserts appears to be a common theme of PGx tests that are covered. This analysis highlights the variability of coverage determinations and the factors considered, suggesting that the adoption of personalized medicine will be affected by numerous factors, but will continue to be slowed due to lack of demonstrated clinical benefit.

  12. Dental Care Coverage and Use: Modeling Limitations and Opportunities

    Science.gov (United States)

    Moeller, John F.; Chen, Haiyan

    2014-01-01

    Objectives. We examined why older US adults without dental care coverage and use would have lower use rates if offered coverage than do those who currently have coverage. Methods. We used data from the 2008 Health and Retirement Study to estimate a multinomial logistic model to analyze the influence of personal characteristics in the grouping of older US adults into those with and those without dental care coverage and dental care use. Results. Compared with persons with no coverage and no dental care use, users of dental care with coverage were more likely to be younger, female, wealthier, college graduates, married, in excellent or very good health, and not missing all their permanent teeth. Conclusions. Providing dental care coverage to uninsured older US adults without use will not necessarily result in use rates similar to those with prior coverage and use. We have offered a model using modifiable factors that may help policy planners facilitate programs to increase dental care coverage uptake and use. PMID:24328635
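
    A minimal sketch of the modeling approach named in the abstract, a multinomial logistic model assigning adults to coverage/use groups, is given below. The synthetic data, the predictor list and the scikit-learn call are assumptions for illustration, not the study's code.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 1000
      # Hypothetical predictors: age, female, wealth, college, married, health, teeth
      X = np.column_stack([
          rng.integers(55, 90, n),   # age
          rng.integers(0, 2, n),     # female
          rng.normal(0, 1, n),       # wealth index
          rng.integers(0, 2, n),     # college graduate
          rng.integers(0, 2, n),     # married
          rng.integers(0, 2, n),     # excellent/very good health
          rng.integers(0, 2, n),     # missing all permanent teeth
      ])
      # Outcome groups: 0 = no coverage/no use, 1 = coverage only, 2 = use only, 3 = both
      y = rng.integers(0, 4, n)

      model = LogisticRegression(multi_class="multinomial", max_iter=1000).fit(X, y)
      print(model.coef_.shape)  # one coefficient vector per outcome group: (4, 7)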

  13. Prevalence, Characteristics, and Perception of Nursery Antibiotic Stewardship Coverage in the United States.

    Science.gov (United States)

    Cantey, Joseph B; Vora, Niraj; Sunkara, Mridula

    2017-09-01

    Prolonged or unnecessary antibiotic use is associated with adverse outcomes in infants. Antibiotic stewardship programs (ASPs) aim to prevent these adverse outcomes and optimize antibiotic prescribing. However, data evaluating ASP coverage of nurseries are limited. The objectives of this study were to describe the characteristics of nurseries with and without ASP coverage and to determine perceptions of and barriers to nursery ASP coverage. The 2014 American Hospital Association annual survey was used to randomly select a level III neonatal intensive care unit from all 50 states. A level I and level II nursery from the same city as the level III nursery were then randomly selected. Hospital, nursery, and ASP characteristics were collected. Nursery and ASP providers (pharmacists or infectious disease providers) were interviewed using a semistructured template. Transcribed interviews were analyzed for themes. One hundred forty-six centers responded; 104 (71%) provided nursery ASP coverage. In multivariate analysis, level of nursery, university affiliation, and number of full-time equivalent ASP staff were the main predictors of nursery ASP coverage. Several themes were identified from interviews: unwanted coverage, unnecessary coverage, jurisdiction issues, need for communication, and a focus on outcomes. Most providers had a favorable view of nursery ASP coverage. Larger, higher-acuity nurseries in university-affiliated hospitals are more likely to have ASP coverage. Low ASP staffing and a perceived lack of importance were frequently cited as barriers to nursery coverage. Most nursery ASP coverage is viewed favorably by providers, but nursery providers regard it as less important than ASP providers.

  14. The impacts of DRG-based payments on health care provider behaviors under a universal coverage system: a population-based study.

    Science.gov (United States)

    Cheng, Shou-Hsia; Chen, Chi-Chen; Tsai, Shu-Ling

    2012-10-01

    To examine the impacts of diagnosis-related group (DRG) payments on health care providers' behavior under a universal coverage system in Taiwan. This study employed a population-based natural experiment design. Patients who underwent coronary artery bypass graft surgery or percutaneous transluminal coronary angioplasty, which were incorporated in the Taiwan version of DRG payments in 2010, were defined as the intervention group. The comparison group consisted of patients who underwent cardiovascular procedures that were paid for by fee-for-service schemes, selected by propensity score matching from patients treated by the same group of surgeons. Generalized estimating equations models and difference-in-difference analysis were used in this study. The introduction of DRG payments resulted in an approximately 10% decrease in the intensity of care and a shortened length of stay. The findings might be valuable to other countries that are developing or reforming their payment systems under a universal coverage system.
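
    The study's design, a difference-in-difference estimate fitted with generalized estimating equations, can be sketched as follows. The data, variable names and effect size are fabricated, and statsmodels is assumed purely for illustration.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      n = 2000
      df = pd.DataFrame({
          "treated": rng.integers(0, 2, n),  # DRG-paid procedure vs fee-for-service
          "post": rng.integers(0, 2, n),     # before vs after the 2010 DRG rollout
          "surgeon": rng.integers(0, 50, n), # clustering unit for the GEE
      })
      # Synthetic length of stay with a built-in treatment effect of -0.6 days
      df["los"] = 6 - 0.6 * df.treated * df.post + rng.normal(0, 1, n)

      # The treated:post interaction is the difference-in-difference estimate.
      model = smf.gee("los ~ treated * post", groups="surgeon", data=df,
                      cov_struct=sm.cov_struct.Exchangeable()).fit()
      print(model.params["treated:post"])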

  15. Current and future state of FDA-CMS parallel reviews.

    Science.gov (United States)

    Messner, D A; Tunis, S R

    2012-03-01

    The US Food and Drug Administration (FDA) and the Centers for Medicare and Medicaid Services (CMS) recently proposed a partial alignment of their respective review processes for new medical products. The proposed "parallel review" not only offers an opportunity for some products to reach the market with Medicare coverage more quickly but may also create new incentives for product developers to conduct studies designed to address simultaneously the information needs of regulators, payers, patients, and clinicians.

  16. Rotational electrical impedance tomography using electrodes with limited surface coverage provides window for multimodal sensing

    Science.gov (United States)

    Lehti-Polojärvi, Mari; Koskela, Olli; Seppänen, Aku; Figueiras, Edite; Hyttinen, Jari

    2018-02-01

    Electrical impedance tomography (EIT) is an imaging method that could become a valuable tool in multimodal applications. One challenge in simultaneous multimodal imaging is that typically the EIT electrodes cover a large portion of the object surface. This paper investigates the feasibility of rotational EIT (rEIT) in applications where electrodes cover only a limited angle of the surface of the object. In the studied rEIT, the object is rotated a full 360° during a set of measurements to increase the information content of the data. We call this approach limited angle full revolution rEIT (LAFR-rEIT). We test LAFR-rEIT setups in two-dimensional geometries with computational and experimental data. We use up to 256 rotational measurement positions, which requires a new way to solve the forward and inverse problem of rEIT. For this, we provide a modification, available for EIDORS, in the supplementary material. The computational results demonstrate that LAFR-rEIT with eight electrodes produces the same image quality as conventional 16-electrode rEIT, when data from an adequate number of rotational measurement positions are used. Both computational and experimental results indicate that the novel LAFR-rEIT provides good EIT performance in setups with limited surface coverage and a small number of electrodes.

  17. Extending Coverage and Lifetime of K-coverage Wireless Sensor Networks Using Improved Harmony Search

    Directory of Open Access Journals (Sweden)

    Shohreh Ebrahimnezhad

    2011-07-01

    Full Text Available K-coverage wireless sensor networks provide facilities such that each hotspot region is covered by at least k sensors. Because the fundamental evaluation metrics of such networks are coverage and lifetime, an approach that extends both simultaneously is of considerable interest. In this article, it is supposed that two kinds of nodes are available: static and mobile. The proposed method first tries to balance energy among sensor nodes using the Improved Harmony Search (IHS) algorithm in a k-covered and connected wireless sensor network in order to achieve a sensor node deployment. This method also proposes a suitable place for a gateway node (sink) that collects data from all sensors. Second, in order to prolong the network lifetime, some of the high energy-consuming mobile nodes are moved to the closest positions of low energy-consuming ones, and vice versa, after a while. This increases the lifetime of the network while connectivity and k-coverage are preserved. Through computer simulations, experimental results verified that the proposed IHS-based algorithm finds better solutions than some related methods.
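
    As a generic illustration of the metaheuristic the paper builds on (not the authors' IHS variant), a bare-bones harmony search minimizing an arbitrary fitness function might look like this:

      import random

      def harmony_search(fitness, dim, bounds, memory_size=20, iters=500,
                         hmcr=0.9, par=0.3, bw=0.05):
          lo, hi = bounds
          memory = [[random.uniform(lo, hi) for _ in range(dim)]
                    for _ in range(memory_size)]
          for _ in range(iters):
              new = []
              for d in range(dim):
                  if random.random() < hmcr:            # draw from harmony memory
                      value = random.choice(memory)[d]
                      if random.random() < par:         # pitch adjustment
                          value += random.uniform(-bw, bw) * (hi - lo)
                  else:                                 # random consideration
                      value = random.uniform(lo, hi)
                  new.append(min(hi, max(lo, value)))
              worst = max(range(memory_size), key=lambda i: fitness(memory[i]))
              if fitness(new) < fitness(memory[worst]): # replace the worst harmony
                  memory[worst] = new
          return min(memory, key=fitness)

      # Toy fitness standing in for an energy-balance objective over 10 nodes.
      best = harmony_search(lambda x: sum((v - 0.5) ** 2 for v in x),
                            dim=10, bounds=(0.0, 1.0))
      print(best)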

  18. Medicare Coverage Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Medicare Coverage Database (MCD) contains all National Coverage Determinations (NCDs) and Local Coverage Determinations (LCDs), local articles, and proposed NCD...

  19. Application Portable Parallel Library

    Science.gov (United States)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

    Application Portable Parallel Library (APPL) computer program is subroutine-based message-passing software library intended to provide consistent interface to variety of multiprocessor computers on market today. Minimizes effort needed to move application program from one computer to another. User develops application program once and then easily moves application program from parallel computer on which created to another parallel computer ("parallel computer" here also includes heterogeneous collections of networked computers). Written in C language with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C language or FORTRAN 77.

  20. Overview of the Force Scientific Parallel Language

    Directory of Open Access Journals (Sweden)

    Gita Alaghband

    1994-01-01

    Full Text Available The Force parallel programming language designed for large-scale shared-memory multiprocessors is presented. The language provides a number of parallel constructs as extensions to the ordinary Fortran language and is implemented as a two-level macro preprocessor to support portability across shared memory multiprocessors. The global parallelism model on which the Force is based provides a powerful parallel language. The parallel constructs, generic synchronization, and freedom from process management supported by the Force have resulted in structured parallel programs that are portable across the many multiprocessors on which the Force is implemented. Two new parallel constructs for looping and functional decomposition are discussed. Several programming examples to illustrate some parallel programming approaches using the Force are also presented.

  1. Assessment of systems for paying health care providers in Vietnam: implications for equity, efficiency and expanding effective health coverage.

    Science.gov (United States)

    Phuong, Nguyen Khanh; Oanh, Tran Thi Mai; Phuong, Hoang Thi; Tien, Tran Van; Cashin, Cheryl

    2015-01-01

    Provider payment arrangements are currently a core concern for Vietnam's health sector and a key lever for expanding effective coverage and improving the efficiency and equity of the health system. This study describes how different provider payment systems are designed and implemented in practice across a sample of provinces and districts in Vietnam. Key informant interviews were conducted with over 100 health policy-makers, purchasers and providers using a structured interview guide. The results of the different payment methods were scored by respondents and assessed against a set of health system performance criteria. Overall, the public health insurance agency, Vietnam Social Security (VSS), is focused on managing expenditures through a complicated set of reimbursement policies and caps, but the incentives for providers are unclear and do not consistently support Vietnam's health system objectives. The results of this study are being used by the Ministry of Health and VSS to reform the provider payment systems to be more consistent with international definitions and good practices and to better support Vietnam's health system objectives.

  2. ReadDepth: a parallel R package for detecting copy number alterations from short sequencing reads.

    Directory of Open Access Journals (Sweden)

    Christopher A Miller

    2011-01-01

    Full Text Available Copy number alterations are important contributors to many genetic diseases, including cancer. We present the readDepth package for R, which can detect these aberrations by measuring the depth of coverage obtained by massively parallel sequencing of the genome. In addition to achieving higher accuracy than existing packages, our tool runs much faster by utilizing multi-core architectures to parallelize the processing of these large data sets. In contrast to other published methods, readDepth does not require the sequencing of a reference sample, and uses a robust statistical model that accounts for overdispersed data. It includes a method for effectively increasing the resolution obtained from low-coverage experiments by utilizing breakpoint information from paired end sequencing to do positional refinement. We also demonstrate a method for inferring copy number using reads generated by whole-genome bisulfite sequencing, thus enabling integrative study of epigenomic and copy number alterations. Finally, we apply this tool to two genomes, showing that it performs well on genomes sequenced to both low and high coverage. The readDepth package runs on Linux and MacOSX, is released under the Apache 2.0 license, and is available at http://code.google.com/p/readdepth/.
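
    The core idea, inferring copy number from windowed depth of coverage, can be sketched in a few lines of Python. This is a schematic stand-in for illustration, not the readDepth R API; the window size and call threshold are arbitrary.

      import numpy as np

      def call_copy_number(read_starts, genome_length, window=10_000):
          """Bin read starts, normalize to the median depth, report CN outliers."""
          edges = np.arange(0, genome_length + window, window)
          depth, _ = np.histogram(read_starts, bins=edges)
          median = np.median(depth[depth > 0])
          copy_number = 2.0 * depth / median  # diploid baseline
          return [(int(edges[i]), float(cn)) for i, cn in enumerate(copy_number)
                  if abs(cn - 2.0) > 0.75]    # crude gain/loss threshold

      rng = np.random.default_rng(2)
      reads = rng.integers(0, 1_000_000, 200_000)  # uniform background coverage
      reads = np.concatenate([reads, rng.integers(300_000, 350_000, 10_000)])  # a gain
      print(call_copy_number(reads, 1_000_000))    # flags windows near 300-350 kb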

  3. [Medical coverage of a road bicycle race].

    Science.gov (United States)

    Reifferscheid, Florian; Stuhr, Markus; Harding, Ulf; Schüler, Christine; Thoms, Jürgen; Püschel, Klaus; Kappus, Stefan

    2010-07-01

    Major sport events require adequate expertise and experience concerning medical coverage and support. Medical and ambulance services need to cover both participants and spectators. Likewise, residents at the venue need to be provided for. Concepts have to include the possibility of major incidents related to the event. Using the example of the Hamburg Cyclassics, a road bicycle race and major event for professional and amateur cyclists, this article describes the medical coverage, number of patients, types of injuries and emergencies. Objectives regarding the planning of future events and essential medical coverage are consequently discussed.

  4. Design considerations for parallel graphics libraries

    Science.gov (United States)

    Crockett, Thomas W.

    1994-01-01

    Applications which run on parallel supercomputers are often characterized by massive datasets. Converting these vast collections of numbers to visual form has proven to be a powerful aid to comprehension. For a variety of reasons, it may be desirable to provide this visual feedback at runtime. One way to accomplish this is to exploit the available parallelism to perform graphics operations in place. In order to do this, we need appropriate parallel rendering algorithms and library interfaces. This paper provides a tutorial introduction to some of the issues which arise in designing parallel graphics libraries and their underlying rendering algorithms. The focus is on polygon rendering for distributed memory message-passing systems. We illustrate our discussion with examples from PGL, a parallel graphics library which has been developed on the Intel family of parallel systems.

  5. Hospital emergency on-call coverage: is there a doctor in the house?

    Science.gov (United States)

    O'Malley, Ann S; Draper, Debra A; Felland, Laurie E

    2007-11-01

    The nation's community hospitals face increasing problems obtaining emergency on-call coverage from specialist physicians, according to findings from the Center for Studying Health System Change's (HSC) 2007 site visits to 12 nationally representative metropolitan communities. The diminished willingness of specialist physicians to provide on-call coverage is occurring as hospital emergency departments confront an ever-increasing demand for services. Factors influencing physician reluctance to provide on-call coverage include decreased dependence on hospital admitting privileges as more services shift to non-hospital settings; payment for emergency care, especially for uninsured patients; and medical liability concerns. Hospital strategies to secure on-call coverage include enforcing hospital medical staff bylaws that require physicians to take call, contracting with physicians to provide coverage, paying physicians stipends, and employing physicians. Nonetheless, many hospitals continue to struggle with inadequate on-call coverage, which threatens patients' timely access to high-quality emergency care and may raise health care costs.

  6. The Galley Parallel File System

    Science.gov (United States)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  7. Parallel S/sub n/ iteration schemes

    International Nuclear Information System (INIS)

    Wienke, B.R.; Hiromoto, R.E.

    1986-01-01

    The iterative, multigroup, discrete ordinates (S/sub n/) technique for solving the linear transport equation enjoys widespread usage and appeal. Serial iteration schemes and numerical algorithms developed over the years provide a timely framework for parallel extension. On the Denelcor HEP, the authors investigate three parallel iteration schemes for solving the one-dimensional S/sub n/ transport equation. The multigroup representation and serial iteration methods are also reviewed. This analysis represents a first attempt to extend serial S/sub n/ algorithms to parallel environments and provides good baseline estimates on ease of parallel implementation, relative algorithm efficiency, comparative speedup, and some future directions. The authors examine ordered and chaotic versions of these strategies, with and without concurrent rebalance and diffusion acceleration. Two strategies efficiently support high degrees of parallelization and appear to be robust parallel iteration techniques. The third strategy is a weaker parallel algorithm. Chaotic iteration, difficult to simulate on serial machines, holds promise and converges faster than ordered versions of the schemes. Actual parallel speedup and efficiency are high and payoff appears substantial
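
    The contrast between ordered and chaotic iteration can be shown on a toy fixed-point problem. This is a serial stand-in for the unordered updates a parallel chaotic scheme permits, not the Sn transport solver itself.

      import random

      def ordered_sweeps(n=8, iters=30):
          """Jacobi-style: every component is updated from the previous iterate."""
          x = [0.0] * n
          for _ in range(iters):
              x = [0.5 * x[(i + 1) % n] + 1.0 for i in range(n)]
          return x

      def chaotic_sweeps(n=8, iters=30):
          """Chaotic-style: components updated in place, in random order."""
          x = [0.0] * n
          for _ in range(iters):
              for i in random.sample(range(n), n):
                  x[i] = 0.5 * x[(i + 1) % n] + 1.0  # uses whatever value is current
          return x

      # Both converge to the fixed point x = 2; the chaotic sweep often gets
      # there in fewer passes because updates propagate within a sweep.
      print(ordered_sweeps()[:3], chaotic_sweeps()[:3])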

  8. 3D Hyperpolarized C-13 EPI with Calibrationless Parallel Imaging

    DEFF Research Database (Denmark)

    Gordon, Jeremy W.; Hansen, Rie Beck; Shin, Peter J.

    2018-01-01

    With the translation of metabolic MRI with hyperpolarized 13C agents into the clinic, imaging approaches will require large volumetric FOVs to support clinical applications. Parallel imaging techniques will be crucial to increasing volumetric scan coverage while minimizing RF requirements and tem... We developed strategies to accelerate and undersample hyperpolarized 13C data using 3D blipped EPI acquisitions and multichannel receive coils, and demonstrated their application in a human study of [1-13C]pyruvate metabolism.

  9. Compiler Technology for Parallel Scientific Computation

    Directory of Open Access Journals (Sweden)

    Can Özturan

    1994-01-01

    Full Text Available There is a need for compiler technology that, given the source program, will generate efficient parallel codes for different architectures with minimal user involvement. Parallel computation is becoming indispensable in solving large-scale problems in science and engineering. Yet, the use of parallel computation is limited by the high costs of developing the needed software. To overcome this difficulty we advocate a comprehensive approach to the development of scalable architecture-independent software for scientific computation based on our experience with equational programming language (EPL. Our approach is based on a program decomposition, parallel code synthesis, and run-time support for parallel scientific computation. The program decomposition is guided by the source program annotations provided by the user. The synthesis of parallel code is based on configurations that describe the overall computation as a set of interacting components. Run-time support is provided by the compiler-generated code that redistributes computation and data during object program execution. The generated parallel code is optimized using techniques of data alignment, operator placement, wavefront determination, and memory optimization. In this article we discuss annotations, configurations, parallel code generation, and run-time support suitable for parallel programs written in the functional parallel programming language EPL and in Fortran.

  10. Summary of DOD Acquisition Program Audit Coverage

    National Research Council Canada - National Science Library

    2001-01-01

    This report will provide the DoD audit community with information to support their planning efforts and provide management with information on the extent of audit coverage of DoD acquisition programs...

  11. 42 CFR 435.139 - Coverage for certain aliens.

    Science.gov (United States)

    2010-10-01

    The agency must provide services necessary for the treatment of an emergency medical condition, as defined in § 440.255(c) of this chapter, to those aliens...

  12. The Coverage of Campaign Advertising by the Prestige Press in 1972.

    Science.gov (United States)

    Bowers, Thomas A.

    The nature and extent of the news media coverage of political advertising in the presidential campaign of 1972 was shallow and spotty at best. The candidates' political advertising strategies received limited coverage by reporters and commentators. Even the "prestige" press--16 major newspapers--provided limited coverage to the nature…

  13. Development and application of a 6.5 million feature Affymetrix Genechip® for massively parallel discovery of single position polymorphisms in lettuce (Lactuca spp.).

    Science.gov (United States)

    Stoffel, Kevin; van Leeuwen, Hans; Kozik, Alexander; Caldwell, David; Ashrafi, Hamid; Cui, Xinping; Tan, Xiaoping; Hill, Theresa; Reyes-Chin-Wo, Sebastian; Truco, Maria-Jose; Michelmore, Richard W; Van Deynze, Allen

    2012-05-14

    High-resolution genetic maps are needed in many crops to help characterize the genetic diversity that determines agriculturally important traits. Hybridization to microarrays to detect single feature polymorphisms is a powerful technique for marker discovery and genotyping because of its highly parallel nature. However, microarrays designed for gene expression analysis rarely provide sufficient gene coverage for optimal detection of nucleotide polymorphisms, which limits utility in species with low rates of polymorphism such as lettuce (Lactuca sativa). We developed a 6.5 million feature Affymetrix GeneChip® for efficient polymorphism discovery and genotyping, as well as for analysis of gene expression in lettuce. Probes on the microarray were designed from 26,809 unigenes from cultivated lettuce and an additional 8,819 unigenes from four related species (L. serriola, L. saligna, L. virosa and L. perennis). Where possible, probes were tiled with a 2 bp stagger, alternating on each DNA strand; providing an average of 187 probes covering approximately 600 bp for each of over 35,000 unigenes; resulting in up to 13 fold redundancy in coverage per nucleotide. We developed protocols for hybridization of genomic DNA to the GeneChip® and refined custom algorithms that utilized coverage from multiple, high quality probes to detect single position polymorphisms in 2 bp sliding windows across each unigene. This allowed us to detect greater than 18,000 polymorphisms between the parental lines of our core mapping population, as well as numerous polymorphisms between cultivated lettuce and wild species in the lettuce genepool. Using marker data from our diversity panel comprised of 52 accessions from the five species listed above, we were able to separate accessions by species using both phylogenetic and principal component analyses. Additionally, we estimated the diversity between different types of cultivated lettuce and distinguished morphological types. By hybridizing
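
    A toy version of the windowed polymorphism call described above, flagging 2 bp windows where hybridization log-ratios between two genotypes diverge, might look like the following; the threshold and the synthetic intensities are invented for illustration.

      import numpy as np

      def spp_windows(intensity_a, intensity_b, threshold=1.5):
          """Flag 2 bp sliding windows where both positions show divergent signal."""
          log_ratio = np.log2(intensity_a) - np.log2(intensity_b)
          return [pos for pos in range(len(log_ratio) - 1)
                  if np.all(np.abs(log_ratio[pos:pos + 2]) > threshold)]

      rng = np.random.default_rng(3)
      a = rng.lognormal(5, 0.1, 600)          # probe intensities, genotype A
      b = a * rng.lognormal(0, 0.1, 600)      # genotype B: same signal plus noise
      b[300:302] *= 8                         # simulate a polymorphism at 300-301
      print(spp_windows(a, b))                # -> [300]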

  14. 42 CFR 416.48 - Condition for coverage-Pharmaceutical services.

    Science.gov (United States)

    2010-10-01

    The ASC must provide drugs and... under the direction of an individual designated responsible for pharmaceutical services. (a) Standard: Administration...

  15. Parallel auto-correlative statistics with VTK.

    Energy Technology Data Exchange (ETDEWEB)

    Pebay, Philippe Pierre; Bennett, Janine Camille

    2013-08-01

    This report summarizes existing statistical engines in VTK and presents both the serial and parallel auto-correlative statistics engines. It is a sequel to [PT08, BPRT09b, PT09, BPT09, PT10], which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, and order statistics engines. The ease of use of the new parallel auto-correlative statistics engine is illustrated by means of C++ code snippets, and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the auto-correlative statistics engine.
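
    What an auto-correlative statistics engine computes can be shown in miniature as a plain lag-k autocorrelation; NumPy stands in here for the VTK C++ API.

      import numpy as np

      def autocorrelation(x, lag):
          """Pearson correlation between a series and a lagged copy of itself."""
          x = np.asarray(x, dtype=float)
          x0, xk = x[:-lag], x[lag:]
          return ((x0 - x0.mean()) * (xk - xk.mean())).mean() / (x0.std() * xk.std())

      rng = np.random.default_rng(4)
      t = np.arange(1000)
      signal = np.sin(2 * np.pi * t / 50) + 0.3 * rng.normal(size=t.size)
      for lag in (1, 25, 50):
          print(lag, round(autocorrelation(signal, lag), 3))  # peaks at the 50-sample period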

  16. PSHED: a simplified approach to developing parallel programs

    International Nuclear Information System (INIS)

    Mahajan, S.M.; Ramesh, K.; Rajesh, K.; Somani, A.; Goel, M.

    1992-01-01

    This paper presents a simplified approach in the form of a tree-structured computational model for parallel application programs. An attempt is made to provide a standard user interface to execute programs on the BARC Parallel Processing System (BPPS), a scalable distributed memory multiprocessor. The interface package called PSHED provides a basic framework for representing and executing parallel programs on different parallel architectures. The PSHED package incorporates concepts from a broad range of previous research in programming environments and parallel computations. (author). 6 refs

  17. 28 CFR 70.31 - Insurance coverage.

    Science.gov (United States)

    2010-07-01

    Recipients shall, at a minimum, provide the equivalent insurance coverage for real property and equipment acquired with Federal funds as provided to property owned by the recipient. Federally-owned property need not be insured unless required by the terms and conditions of the award.

  18. Scalable Coverage Maintenance for Dense Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Jun Lu

    2007-06-01

    Full Text Available Owing to numerous potential applications, wireless sensor networks have been attracting significant research effort recently. The critical challenge that wireless sensor networks often face is to sustain long-term operation on limited battery energy. Coverage maintenance schemes can effectively prolong network lifetime by selecting and employing a subset of sensors in the network to provide sufficient sensing coverage over a target region. We envision future wireless sensor networks composed of a vast number of miniaturized sensors in exceedingly high density. Therefore, the key issue of coverage maintenance for future sensor networks is the scalability to sensor deployment density. In this paper, we propose a novel coverage maintenance scheme, scalable coverage maintenance (SCOM, which is scalable to sensor deployment density in terms of communication overhead (i.e., number of transmitted and received beacons and computational complexity (i.e., time and space complexity. In addition, SCOM achieves high energy efficiency and load balancing over different sensors. We have validated our claims through both analysis and simulations.

  19. 34 CFR 74.31 - Insurance coverage.

    Science.gov (United States)

    2010-07-01

    Recipients shall, at a minimum, provide the equivalent insurance coverage for real property and equipment acquired with Federal funds as provided to property owned by the recipient. Federally-owned property need not be insured unless required by the terms and conditions of the award.

  20. 10 CFR 600.131 - Insurance coverage.

    Science.gov (United States)

    2010-01-01

    Recipients shall, at a minimum, provide the equivalent insurance coverage for real property and equipment acquired with Federal funds as provided to property owned by the recipient. Federally-owned property need not be insured unless required by the terms and conditions of the award.

  1. 20 CFR 435.31 - Insurance coverage.

    Science.gov (United States)

    2010-04-01

    Recipients must, at a minimum, provide the equivalent insurance coverage for real property and equipment acquired with Federal funds as provided to property owned by the recipient. Federally-owned property need not be insured unless required by the terms and conditions of the award.

  2. Massachusetts health reform: employer coverage from employees' perspective.

    Science.gov (United States)

    Long, Sharon K; Stockley, Karen

    2009-01-01

    The national health reform debate continues to draw on Massachusetts' 2006 reform initiative, with a focus on sustaining employer-sponsored insurance. This study provides an update on employers' responses under health reform in fall 2008, using data from surveys of working-age adults. Results show that concerns about employers' dropping coverage or scaling back benefits under health reform have not been realized. Access to employer coverage has increased, as has the scope and quality of their coverage as assessed by workers. However, premiums and out-of-pocket costs have become more of an issue for employees in small firms.

  3. Contraceptive Coverage and the Affordable Care Act.

    Science.gov (United States)

    Tschann, Mary; Soon, Reni

    2015-12-01

    A major goal of the Patient Protection and Affordable Care Act is reducing healthcare spending by shifting the focus of healthcare toward preventive care. Preventive services, including all FDA-approved contraception, must be provided to patients without cost-sharing under the ACA. No-cost contraception has been shown to increase uptake of highly effective birth control methods and reduce unintended pregnancy and abortion; however, some institutions and corporations argue that providing contraceptive coverage infringes on their religious beliefs. The contraceptive coverage mandate is evolving due to legal challenges, but it has already demonstrated success in reducing costs and improving access to contraception.

  4. 22 CFR 518.31 - Insurance coverage.

    Science.gov (United States)

    2010-04-01

    Recipients shall, at a minimum, provide the equivalent insurance coverage for real property and equipment acquired with Federal funds as provided to property owned by the recipient. Federally-owned property need not be insured unless required by the terms and conditions of the award.

  5. 7 CFR 3019.31 - Insurance coverage.

    Science.gov (United States)

    2010-01-01

    Recipients shall, at a minimum, provide the equivalent insurance coverage for real property and equipment acquired with Federal funds as provided to property owned by the recipient. Federally-owned property need not be insured unless required by the terms and conditions of the award.

  6. 49 CFR 19.31 - Insurance coverage.

    Science.gov (United States)

    2010-10-01

    Recipients shall, at a minimum, provide the equivalent insurance coverage for real property and equipment acquired with Federal funds as provided to property owned by the recipient. Federally-owned property need not be insured unless required by the terms and conditions of the award.

  7. An Introduction to Parallel Computation R

    Indian Academy of Sciences (India)

    How are they programmed? This article provides an introduction. A parallel computer is a network of processors built for ... and have been used to solve problems much faster than a single ... in parallel computer design is to select an organization which ..... The most ambitious approach to parallel computing is to develop.

  8. HPV vaccination coverage of teen girls: the influence of health care providers.

    Science.gov (United States)

    Smith, Philip J; Stokley, Shannon; Bednarczyk, Robert A; Orenstein, Walter A; Omer, Saad B

    2016-03-18

    Between 2010 and 2014, the percentage of 13-17 year-old girls administered ≥3 doses of the human papillomavirus (HPV) vaccine ("fully vaccinated") increased by 7.7 percentage points to 39.7%, and the percentage not administered any doses of the HPV vaccine ("not immunized") decreased by 11.3 percentage points to 40.0%. To evaluate the complex interactions between parents' vaccine-related beliefs, demographic factors, and HPV immunization status, vaccine-related parental beliefs and sociodemographic data collected by the 2010 National Immunization Survey-Teen among teen girls (n=8490) were analyzed. HPV vaccination status was determined from teens' health care provider (HCP) records. Among teen girls who were either unvaccinated or fully vaccinated against HPV, those whose parent was positively influenced to vaccinate their teen daughter against HPV were 48.2 percentage points more likely to be fully vaccinated. Parents who reported being positively influenced to vaccinate against HPV were 28.9 percentage points more likely to report that their daughter's HCP talked about the HPV vaccine, 27.2 percentage points more likely to report that their daughter's HCP gave enough time to discuss the HPV shot, and 43.4 percentage points more likely to report that their daughter's HCP recommended the HPV vaccine. Among teen girls administered 1-2 doses of the HPV vaccine, 87.0% had missed opportunities for HPV vaccine administration. Results suggest that an important pathway to achieving higher ≥3-dose HPV vaccine coverage is increasing HPV vaccination series initiation through HCPs talking to parents about the HPV vaccine, giving parents time to discuss the vaccine, and making a strong recommendation for the HPV vaccine. HPV vaccination series completion rates may also be increased by eliminating missed opportunities to vaccinate against HPV and scheduling additional follow-up visits to administer missing HPV vaccine doses.

  9. Towards a streaming model for nested data parallelism

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner; Filinski, Andrzej

    2013-01-01

    The language-integrated cost semantics for nested data parallelism pioneered by NESL provides an intuitive, high-level model for predicting the performance and scalability of parallel algorithms with reasonable accuracy. However, this predictability, obtained through a uniform, parallelism-flattening execution model, ...-processable in a streaming fashion. This semantics is directly compatible with previously proposed piecewise execution models for nested data parallelism, but allows the expected space usage to be reasoned about directly at the source-language level. The language definition and implementation are still very much work in progress.

  10. A Two-Phase Coverage-Enhancing Algorithm for Hybrid Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Qingguo Zhang

    2017-01-01

    Full Text Available Providing field coverage is a key task in many sensor network applications. In certain scenarios, the sensor field may have coverage holes due to the random initial deployment of sensors; thus, the desired level of coverage cannot be achieved. A hybrid wireless sensor network is a cost-effective solution to this problem, achieved by repositioning a portion of the mobile sensors in the network to meet the network coverage requirement. This paper investigates how to redeploy mobile sensor nodes to improve network coverage in hybrid wireless sensor networks. We propose a two-phase coverage-enhancing algorithm for hybrid wireless sensor networks. In phase one, we use a differential evolution algorithm to compute candidate target positions for the mobile sensor nodes that could potentially improve coverage. In the second phase, we apply an optimization scheme to the candidate target positions computed in phase one to reduce the accumulated moving distance of the mobile sensors, so that the exact mobile sensor nodes that need to be moved, as well as their final target positions, can be determined. Experimental results show that the proposed algorithm provides significant improvement in terms of area coverage rate, average moving distance, area coverage-distance rate and the number of moved mobile sensors, when compared with other approaches.
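
    Phase one, using differential evolution to propose target positions that maximize coverage, can be sketched as below. The grid resolution, sensing radius and sensor count are illustrative assumptions, and SciPy's stock optimizer stands in for the authors' algorithm.

      import numpy as np
      from scipy.optimize import differential_evolution

      SENSING_RADIUS, N_MOBILE, SIDE = 0.15, 6, 1.0
      grid = np.stack(np.meshgrid(np.linspace(0, SIDE, 25),
                                  np.linspace(0, SIDE, 25)), -1).reshape(-1, 2)

      def uncovered_fraction(flat_positions):
          """Share of grid points not within sensing range of any mobile sensor."""
          sensors = flat_positions.reshape(N_MOBILE, 2)
          dists = np.linalg.norm(grid[:, None, :] - sensors[None, :, :], axis=2)
          return 1.0 - (dists <= SENSING_RADIUS).any(axis=1).mean()

      result = differential_evolution(uncovered_fraction,
                                      bounds=[(0, SIDE)] * (2 * N_MOBILE),
                                      maxiter=50, seed=5)
      print("coverage rate:", 1.0 - result.fun)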

  11. State contraceptive coverage laws: creative responses to questions of "conscience".

    Science.gov (United States)

    Dailard, C

    1999-08-01

    The Federal Employees Health Benefits Program (FEHBP) guarantees contraceptive coverage for employees of the federal government. However, opponents of the FEHBP contraceptive coverage questioned the viability of the conscience clause. Supporters of the contraceptive coverage pressed for the narrowest exemption, one permitting only religious plans that clearly state a religious objection to contraception. Six of the nine states that have enacted contraceptive coverage laws aimed at the private sector included a conscience-clause provision in their statutes. The private sector objects to the plan, since almost all employees work for employers who offer only one plan. The scope of the exemption for employers was an issue in five of the states that have enacted contraceptive coverage laws. Hawaii and California exemplify the approach under which, if an employer is exempted from contraceptive coverage on religious grounds, an employee is entitled to purchase coverage directly from the plan. Questions remain about how an insurer who objects on religious grounds to a plan with contraceptive coverage can function in a marketplace where such coverage is provided by most private-sector employers.

  12. Performance of the Galley Parallel File System

    Science.gov (United States)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the input/output (I/O) needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. This interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. Initial experiments, reported in this paper, indicate that Galley is capable of providing high-performance I/O to applications that access data in patterns that have been observed to be common.

  13. Computer-Aided Parallelizer and Optimizer

    Science.gov (United States)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.

  14. Women's Health Insurance Coverage

    Science.gov (United States)

    Women's Health Policy: Women's Health Insurance Coverage. Published Oct 31, 2017. ... that many women continue to face. Sources of Health Insurance Coverage: Employer-Sponsored Insurance: approximately 57.9 million ...

  15. Achieving universal health coverage in small island states: could importing health services provide a solution?

    Science.gov (United States)

    Walls, Helen; Smith, Richard

    2018-01-01

    Background Universal health coverage (UHC) is difficult to achieve in settings short of medicines, health workers and health facilities. These characteristics define the majority of the small island developing states (SIDS), where population size negates the benefits of economies of scale. One option to alleviate this constraint is to import health services, rather than focus on domestic production. This paper provides empirical analysis of the potential impact of this option. Methods Analysis was based on publicly accessible data for 14 SIDS, covering health-related travel and health indicators for the period 2003–2013, together with in-depth review of medical travel schemes for the two highest importing SIDS—the Maldives and Tuvalu. Findings Medical travel from SIDS is accelerating. The SIDS studied generally lacked health infrastructure and technologies, and the majority of them had lower than the recommended number of physicians in a country, which limits their capacity for achieving UHC. Tuvalu and the Maldives were the highest importers of healthcare and notably have public schemes that facilitate medical travel and help lower the out-of-pocket expenditure on medical travel. Although different in approach, design and performance, the medical travel schemes in Tuvalu and the Maldives are both examples of measures used to increase access to health services that cannot feasibly be provided in SIDS. Interpretation Our findings suggest that importing health services (through schemes to facilitate medical travel) is a potential mechanism to help achieve universal healthcare for SIDS but requires due diligence over cost, equity and quality control. PMID:29527349

  16. Increasing Coverage of Hepatitis B Vaccination in China

    Science.gov (United States)

    Wang, Shengnan; Smith, Helen; Peng, Zhuoxin; Xu, Biao; Wang, Weibing

    2016-01-01

    Abstract This study used a system evaluation method to summarize China's experience on improving the coverage of hepatitis B vaccine, especially the strategies employed to improve the uptake of timely birth dosage. Identifying successful methods and strategies will provide strong evidence for policy makers and health workers in other countries with high hepatitis B prevalence. We conducted a literature review included English or Chinese literature carried out in mainland China, using PubMed, the Cochrane databases, Web of Knowledge, China National Knowledge Infrastructure, Wanfang data, and other relevant databases. Nineteen articles about the effectiveness and impact of interventions on improving the coverage of hepatitis B vaccine were included. Strong or moderate evidence showed that reinforcing health education, training and supervision, providing subsidies for facility birth, strengthening the coordination among health care providers, and using out-of-cold-chain storage for vaccines were all important to improving vaccination coverage. We found evidence that community education was the most commonly used intervention, and out-reach programs such as out-of-cold chain strategy were more effective in increasing the coverage of vaccination in remote areas where the facility birth rate was respectively low. The essential impact factors were found to be strong government commitment and the cooperation of the different government departments. Public interventions relying on basic health care systems combined with outreach care services were critical elements in improving the hepatitis B vaccination rate in China. This success could not have occurred without exceptional national commitment. PMID:27175710

  17. Climate Feedback: Bringing the Scientific Community to Provide Direct Feedback on the Credibility of Climate Media Coverage

    Science.gov (United States)

    Vincent, E. M.; Matlock, T.; Westerling, A. L.

    2015-12-01

    While most scientists recognize climate change as a major societal and environmental issue, social and political will to tackle the problem is still lacking. One of the biggest obstacles is inaccurate reporting, or even outright misinformation, in climate change coverage, which results in confusion of the general public on the issue. In today's era of instant access to information, what we read online usually falls outside our field of expertise, and it is a real challenge to evaluate what is credible. The emerging technology of web annotation could be a game changer, as it allows knowledgeable individuals to attach notes to any piece of text on a webpage and to share them with readers, who will be able to see the annotations in context, like comments on a PDF. Here we present the Climate Feedback initiative, which is bringing together a community of climate scientists who collectively evaluate the scientific accuracy of influential climate change media coverage. Scientists annotate articles sentence by sentence and assess whether they are consistent with scientific knowledge, allowing readers to see where and why the coverage is, or is not, based on science. Scientists also summarize the essence of their critical commentary in the form of a simple article-level overall credibility rating that quickly informs readers about the credibility of the entire piece. Web annotation allows readers to 'hear' directly from the experts and to sense the consensus in a personal way, as one can literally see how many scientists agree with a given statement. It also allows a broad population of scientists to interact with the media, notably early-career scientists. In this talk, we will present results on the impact annotations have on readers, regarding their evaluation of the trustworthiness of the information they read, and on journalists, regarding their reception of scientists' comments. Several dozen scientists have contributed to this effort to date, and the system offers potential to...

  18. Adapting algorithms to massively parallel hardware

    CERN Document Server

    Sioulas, Panagiotis

    2016-01-01

    In recent years, the trend in computing has shifted from delivering processors with faster clock speeds to increasing the number of cores per processor. This marks a paradigm shift towards parallel programming, in which applications are programmed to exploit the power provided by multi-cores. There are usually gains in terms of time-to-solution and memory footprint. Specifically, this trend has sparked an interest in massively parallel systems that can provide a large number of processors, and possibly computing nodes, as in GPUs and MPPAs (Massively Parallel Processor Arrays). In this project, the focus was on two distinct computing problems: k-d tree searches and track-seeding cellular automata. The goal was to adapt the algorithms to parallel systems and evaluate their performance in different cases.

  19. A massively parallel strategy for STR marker development, capture, and genotyping.

    Science.gov (United States)

    Kistler, Logan; Johnson, Stephen M; Irwin, Mitchell T; Louis, Edward E; Ratan, Aakrosh; Perry, George H

    2017-09-06

    Short tandem repeat (STR) variants are highly polymorphic markers that facilitate powerful population genetic analyses. STRs are especially valuable in conservation and ecological genetic research, yielding detailed information on population structure and short-term demographic fluctuations. Massively parallel sequencing has not previously been leveraged for scalable, efficient STR recovery. Here, we present a pipeline for developing STR markers directly from high-throughput shotgun sequencing data without a reference genome, and an approach for highly parallel target STR recovery. We employed our approach to capture a panel of 5000 STRs from a test group of diademed sifakas (Propithecus diadema, n = 3), endangered Malagasy rainforest lemurs, and we report extremely efficient recovery of targeted loci (97.3-99.6% of STRs characterized with ≥10x non-redundant sequence coverage). We then tested our STR capture strategy on P. diadema fecal DNA, and report robust initial results and suggestions for future implementations. In addition to STR targets, this approach also generates large, genome-wide single nucleotide polymorphism (SNP) panels from flanking regions. Our method provides a cost-effective and scalable solution for rapid recovery of large STR and SNP datasets in any species without needing a reference genome, and can be used even with suboptimal DNA more easily acquired in conservation and ecological studies.
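
    A toy fragment of the underlying idea, scanning shotgun reads for short tandem repeats with a regular expression, is shown below; the motif-length bounds and repeat-count threshold are illustrative, and the published pipeline is considerably more involved.

      import re

      def find_strs(read, min_copies=4):
          """Return (motif, copies, offset) for tandem repeats of 2-6 bp motifs."""
          hits = []
          for match in re.finditer(r"(([ACGT]{2,6}?)\2{%d,})" % (min_copies - 1), read):
              repeat, motif = match.group(1), match.group(2)
              if len(set(motif)) > 1:  # skip homopolymer-like motifs
                  hits.append((motif, len(repeat) // len(motif), match.start()))
          return hits

      read = "TTGACATACATACATACATACGGTCAGCAGCAGCAGCAGTTA"
      print(find_strs(read))  # -> [('ACAT', 4, 3), ('CAG', 5, 24)]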

  20. Terrorism and nuclear damage coverage

    International Nuclear Information System (INIS)

    Horbach, N. L. J. T.; Brown, O. F.; Vanden Borre, T.

    2004-01-01

    This paper deals with nuclear terrorism and the manner in which nuclear operators can insure themselves against it, based on the international nuclear liability conventions. It concludes that terrorism is currently not covered under the treaty exoneration provisions on 'war-like events', based on an analysis of the concept of 'terrorism' and the travaux preparatoires. Consequently, operators remain liable for nuclear damage resulting from terrorist acts, for which mandatory insurance is applicable. Since the nuclear insurance industry is looking at excluding such coverage from its policies in the near future, this article aims to suggest alternative means of insurance, in order to ensure adequate compensation for innocent victims. The September 11, 2001 attacks at the World Trade Center in New York City and the Pentagon in Washington, DC resulted in the largest loss in the history of insurance, inevitably leading to concerns about nuclear damage coverage, should future such assaults target a nuclear power plant or other nuclear installation. Since the attacks, some insurers have signalled their intentions to exclude coverage for terrorism from their nuclear liability and property insurance policies. Other insurers are maintaining coverage for terrorism, but are establishing aggregate limits or sublimits and are increasing premiums. Additional changes by insurers are likely to occur. Highlighted by the September 11th events, and most recently by those in Madrid on 11 March 2004, are questions about how to define acts of terrorism and the extent to which such acts are covered under the international nuclear liability conventions and various domestic nuclear liability laws. Of particular concern to insurers is the possibility of coordinated simultaneous attacks on multiple nuclear facilities. This paper provides a survey of the issues, and recommendations for future clarifications and coverage options. (author)

  1. Writing parallel programs that work

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Serial algorithms typically run inefficiently on parallel machines. This may sound like an obvious statement, but it is the root cause of why parallel programming is considered to be difficult. The current state of the computer industry is still that almost all programs in existence are serial. This talk will describe the techniques used in the Intel Parallel Studio to provide a developer with the tools necessary to understand the behaviors and limitations of the existing serial programs. Once the limitations are known the developer can refactor the algorithms and reanalyze the resulting programs with the tools in the Intel Parallel Studio to create parallel programs that work. About the speaker Paul Petersen is a Sr. Principal Engineer in the Software and Solutions Group (SSG) at Intel. He received a Ph.D. degree in Computer Science from the University of Illinois in 1993. After UIUC, he was employed at Kuck and Associates, Inc. (KAI) working on auto-parallelizing compiler (KAP), and was involved in th...

  2. Massively parallel multicanonical simulations

    Science.gov (United States)

    Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard

    2018-03-01

    Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial computationally. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with of the order of 104 parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as starting point and reference for practitioners in the field.
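
    A stylized, scalar version of the scheme is sketched below: independent walkers sample with a shared weight function, their histograms are merged, and the weights are updated between rounds. A 1D double-well "energy" stands in for the 2D Ising model, and every parameter is an illustrative assumption.

      import numpy as np

      rng = np.random.default_rng(6)
      e_bins = np.linspace(0.0, 9.0, 31)
      log_w = np.zeros(e_bins.size - 1)  # multicanonical log-weights over energy

      def energy(x):
          return (x ** 2 - 1.0) ** 2     # toy double-well energy landscape

      def ebin(x):
          return min(int(np.digitize(energy(x), e_bins)) - 1, log_w.size - 1)

      def walker(steps, x=0.0):
          """One independent walker; returns its energy-bin visit histogram."""
          hist = np.zeros(log_w.size)
          for _ in range(steps):
              y = x + rng.normal(0.0, 0.25)
              if -2.0 < y < 2.0 and rng.random() < np.exp(
                      min(0.0, log_w[ebin(y)] - log_w[ebin(x)])):
                  x = y
              hist[ebin(x)] += 1
          return hist

      for _ in range(8):                                # weight-update rounds
          merged = sum(walker(2000) for _ in range(4))  # merge 4 independent walkers
          log_w -= np.log(np.maximum(merged, 1.0))      # push toward a flat histogram
          log_w -= log_w.max()
      print(np.round(log_w, 1))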

  3. Energy-efficient area coverage for intruder detection in sensor networks

    CERN Document Server

    He, Shibo; Li, Junkun

    2014-01-01

    This Springer Brief presents recent research results on area coverage for intruder detection from an energy-efficient perspective. These results cover a variety of topics, including environmental surveillance and security monitoring. The authors also provide the background and range of applications for area coverage and elaborate on system models such as the formal definition of area coverage and sensing models. Several chapters focus on energy-efficient intruder detection and intruder trapping under the well-known binary sensing model, along with intruder trapping under the probabilistic sens

  4. Inequity between male and female coverage in state infertility laws.

    Science.gov (United States)

    Dupree, James M; Dickey, Ryan M; Lipshultz, Larry I

    2016-06-01

    To analyze state insurance laws mandating coverage for male factor infertility and identify possible inequities between male and female coverage in state insurance laws. We identified states with laws or codes related to infertility insurance coverage using the National Conference of State Legislatures' and the National Infertility Association's websites. We performed a primary, systematic analysis of the laws or codes to specifically identify coverage for male factor infertility services. Not applicable. Not applicable. Not applicable. The presence or absence of language in state insurance laws mandating coverage for male factor infertility care. There are 15 states with laws mandating insurance coverage for female factor infertility. Only eight of those states (California, Connecticut, Massachusetts, Montana, New Jersey, New York, Ohio, and West Virginia) have mandates for male factor infertility evaluation or treatment. Insurance coverage for male factor infertility is most specific in Massachusetts, New Jersey, and New York, yet significant differences exist in the male factor policies in all eight states. Three states (Massachusetts, New Jersey, and New York) exempt coverage for vasectomy reversal. Despite national recommendations that male and female partners begin infertility evaluations together, only 8 of 15 states with laws mandating infertility coverage include coverage for the male partner. Excluding men from infertility coverage places an undue burden on female partners and risks missing opportunities to diagnose serious male health conditions, correct reversible causes of infertility, and provide cost-effective treatments that can downgrade the intensity of intervention required to achieve a pregnancy.

  5. Development and application of a 6.5 million feature Affymetrix GeneChip® for massively parallel discovery of single position polymorphisms in lettuce (Lactuca spp.)

    Directory of Open Access Journals (Sweden)

    Stoffel Kevin

    2012-05-01

    Background: High-resolution genetic maps are needed in many crops to help characterize the genetic diversity that determines agriculturally important traits. Hybridization to microarrays to detect single feature polymorphisms is a powerful technique for marker discovery and genotyping because of its highly parallel nature. However, microarrays designed for gene expression analysis rarely provide sufficient gene coverage for optimal detection of nucleotide polymorphisms, which limits utility in species with low rates of polymorphism such as lettuce (Lactuca sativa). Results: We developed a 6.5 million feature Affymetrix GeneChip® for efficient polymorphism discovery and genotyping, as well as for analysis of gene expression in lettuce. Probes on the microarray were designed from 26,809 unigenes from cultivated lettuce and an additional 8,819 unigenes from four related species (L. serriola, L. saligna, L. virosa and L. perennis). Where possible, probes were tiled with a 2 bp stagger, alternating on each DNA strand, providing an average of 187 probes covering approximately 600 bp for each of over 35,000 unigenes and resulting in up to 13-fold redundancy in coverage per nucleotide. We developed protocols for hybridization of genomic DNA to the GeneChip® and refined custom algorithms that utilized coverage from multiple, high quality probes to detect single position polymorphisms in 2 bp sliding windows across each unigene. This allowed us to detect greater than 18,000 polymorphisms between the parental lines of our core mapping population, as well as numerous polymorphisms between cultivated lettuce and wild species in the lettuce genepool. Using marker data from our diversity panel comprised of 52 accessions from the five species listed above, we were able to separate accessions by species using both phylogenetic and principal component analyses. Additionally, we estimated the diversity between different types of cultivated lettuce and
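
    The 2 bp sliding-window idea can be illustrated with a toy caller that compares per-position probe signal between two genotypes and flags windows where the difference is consistently extreme; this is only a hedged stand-in for the paper's probe-level algorithm, and the z-score summary and thresholds are assumptions:

      import numpy as np

      def call_spps(signal_a, signal_b, win=2, z_thresh=4.0):
          """Toy single position polymorphism caller: signal_a/b are per-position
          summaries of log2 probe intensity for two genotypes (each position
          already averaged over the redundant probes covering it)."""
          z = signal_a - signal_b
          z = (z - z.mean()) / z.std()
          return [i for i in range(len(z) - win + 1)
                  if np.all(np.abs(z[i:i + win]) > z_thresh)]

      rng = np.random.default_rng(2)
      a = rng.normal(0, 1, 600)
      b = a + rng.normal(0, 0.1, 600)
      b[300:302] += 5.0                  # implant a polymorphism signature
      print(call_spps(a, b))             # -> window starting at position 300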

  6. 7 CFR 457.146 - Northern potato crop insurance-storage coverage endorsement.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 6 2010-01-01 2010-01-01 false Northern potato crop insurance-storage coverage... Northern potato crop insurance—storage coverage endorsement. The Northern Potato Crop Insurance Storage... for insurance provider) Both FCIC and reinsured policies: Northern Potato Crop Insurance Storage...

  7. Shared Variable Oriented Parallel Precompiler for SPMD Model

    Institute of Scientific and Technical Information of China (English)

    1995-01-01

    At present, commercial parallel computer systems with distributed memory architecture are usually provided with parallel FORTRAN or parallel C compilers, which are just traditional sequential FORTRAN or C compilers expanded with communication statements. Programmers suffer from writing parallel programs with explicit communication statements. The Shared Variable Oriented Parallel Precompiler (SVOPP) proposed in this paper can automatically generate appropriate communication statements based on shared variables for the SPMD (Single Program Multiple Data) computation model and greatly eases parallel programming while achieving high communication efficiency. The core function of the parallel C precompiler has been successfully verified on a transputer-based parallel computer. Its prominent performance shows that SVOPP is probably a breakthrough in parallel programming technique.

  8. SWAMP+: multiple subsequence alignment using associative massive parallelism

    Energy Technology Data Exchange (ETDEWEB)

    Steinfadt, Shannon Irene [Los Alamos National Laboratory]; Baker, Johnnie W [Kent State Univ.]

    2010-10-18

    A new parallel algorithm SWAMP+ incorporates the Smith-Waterman sequence alignment on an associative parallel model known as ASC. It is a highly sensitive parallel approach that expands traditional pairwise sequence alignment. This is the first parallel algorithm to provide multiple non-overlapping, non-intersecting subsequence alignments with the accuracy of Smith-Waterman. The efficient algorithm provides multiple alignments similar to BLAST while creating a better workflow for the end users. The parallel portions of the code run in O(m+n) time using m processors. When m = n, the algorithmic analysis becomes O(n) with a coefficient of two, yielding a linear speedup. Implementation of the algorithm on the SIMD ClearSpeed CSX620 confirms this theoretical linear speedup with real timings.
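
    For reference, the recurrence that SWAMP+ parallelizes is the standard Smith-Waterman local alignment, shown here in a plain serial Python version; the ASC mapping itself, which evaluates the independent cells of each anti-diagonal concurrently, is not reproduced:

      def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
          """Serial Smith-Waterman local alignment score. SWAMP+ evaluates the
          same recurrence, but cells on an anti-diagonal are independent and
          can be computed in parallel."""
          H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
          best = 0
          for i in range(1, len(a) + 1):
              for j in range(1, len(b) + 1):
                  s = match if a[i-1] == b[j-1] else mismatch
                  H[i][j] = max(0, H[i-1][j-1] + s, H[i-1][j] + gap, H[i][j-1] + gap)
                  best = max(best, H[i][j])
          return best

      print(smith_waterman("ACACACTA", "AGCACACA"))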

  9. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable distributed graph container and a collection of commonly used parallel graph algorithms. The library introduces pGraph pViews that separate algorithm design from the container implementation. It supports three graph processing algorithmic paradigms, level-synchronous, asynchronous and coarse-grained, and provides common graph algorithms based on them. Experimental results demonstrate improved scalability in performance and data size over existing graph libraries on more than 16,000 cores and on internet-scale graphs containing over 16 billion vertices and 250 billion edges. © Springer-Verlag Berlin Heidelberg 2013.

  10. A bumpy ride on the diagnostic bench of massive parallel sequencing, the case of the mitochondrial genome.

    Directory of Open Access Journals (Sweden)

    Kim Vancampenhout

    Full Text Available The advent of massive parallel sequencing (MPS has revolutionized the field of human molecular genetics, including the diagnostic study of mitochondrial (mt DNA dysfunction. The analysis of the complete mitochondrial genome using MPS platforms is now common and will soon outrun conventional sequencing. However, the development of a robust and reliable protocol is rather challenging. A previous pilot study for the re-sequencing of human mtDNA revealed an uneven coverage, affecting predominantly part of the plus strand. In an attempt to address this problem, we undertook a comparative study of standard and modified protocols for the Ion Torrent PGM system. We could not improve strand representation by altering the recommended shearing methodology of the standard workflow or omitting the DNA polymerase amplification step from the library construction process. However, we were able to associate coverage bias of the plus strand with a specific sequence motif. Additionally, we compared coverage and variant calling across technologies. The same samples were also sequenced on a MiSeq device which showed that coverage and heteroplasmic variant calling were much improved.

  11. An Enumerative Combinatorics Model for Fragmentation Patterns in RNA Sequencing Provides Insights into Nonuniformity of the Expected Fragment Starting-Point and Coverage Profile.

    Science.gov (United States)

    Prakash, Celine; Haeseler, Arndt Von

    2017-03-01

    RNA sequencing (RNA-seq) has emerged as the method of choice for measuring the expression of RNAs in a given cell population. In most RNA-seq technologies, sequencing the full length of RNA molecules requires fragmentation into smaller pieces. Unfortunately, the issue of nonuniform sequencing coverage across a genomic feature has been a concern in RNA-seq and is attributed to biases for certain fragments in RNA-seq library preparation and sequencing. To investigate the expected coverage obtained from fragmentation, we develop a simple fragmentation model that is independent of bias from the experimental method and is not specific to the transcript sequence. Essentially, we enumerate all configurations for maximal placement of a given fragment length, F, on transcript length, T, to represent every possible fragmentation pattern, from which we compute the expected coverage profile across a transcript. We extend this model to incorporate general empirical attributes such as read length, fragment length distribution, and number of molecules of the transcript. We further introduce the fragment starting-point, fragment coverage, and read coverage profiles. We find that the expected profiles are not uniform and that factors such as fragment length to transcript length ratio, read length to fragment length ratio, fragment length distribution, and number of molecules influence the variability of coverage across a transcript. Finally, we explore a potential application of the model where, with simulations, we show that it is possible to correctly estimate the transcript copy number for any transcript in the RNA-seq experiment.
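
    The edge effect at the heart of the model can be reproduced in a few lines: even for a single uniformly placed fragment, positions near the transcript ends are reachable by fewer placements, so the expected profile cannot be flat. This is a simplification of the paper's enumeration over maximal fragmentation configurations, not its full model:

      def expected_coverage(T, F):
          """Expected per-position coverage when one fragment of length F is
          placed uniformly at random on a transcript of length T."""
          placements = T - F + 1
          cov = []
          for p in range(T):
              # starts s with s <= p <= s + F - 1 and 0 <= s <= T - F
              lo, hi = max(0, p - F + 1), min(p, T - F)
              cov.append(max(0, hi - lo + 1) / placements)
          return cov

      profile = expected_coverage(T=50, F=10)
      print(profile[:5])    # ramps up at the 5' end ...
      print(profile[25])    # ... and is flat over the transcript body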

  12. CONTRIBUTION OF QUADRATIC RESIDUE DIFFUSERS TO EFFICIENCY OF TILTED PROFILE PARALLEL HIGHWAY NOISE BARRIERS

    Directory of Open Access Journals (Sweden)

    M. R. Monazzam, P. Nassiri

    2009-10-01

    This paper presents the results of an investigation of the acoustic performance of tilted profile parallel barriers with quadratic residue diffuser (QRD) tops and faces. A 2D boundary element method (BEM) is used to predict the barrier insertion loss. Results for rigid barriers and for barriers with absorptive coverage are also calculated for comparison. Using a QRD on the top surface and faces of all the tilted profile parallel barrier models introduced here is found to improve barrier efficiency, compared with the equivalent rigid parallel barrier, at the examined receiver positions. Applying a QRD with a design frequency of 400 Hz to a 5-degree tilted parallel barrier improves the overall performance over its equivalent rigid barrier by 1.8 dB(A). Increasing the treated surface area with reactive elements shifts the effective performance toward lower frequencies. It is found that tilting the barriers from 0 to 10 degrees in a parallel setup reduces the degradation effects in parallel barriers, but the absorption effect of the fibrous materials and the diffusivity of the quadratic residue diffuser are reduced significantly. In this case all the designed barriers perform better with 10 degrees of tilt in a parallel setup. The most economical traffic noise parallel barrier that still produces significantly high performance is achieved by covering only the top surface of the barrier closest to the receiver with a QRD of 400 Hz design frequency and a tilt angle of 10 degrees. The average A-weighted insertion loss of this barrier is predicted to be 16.3 dB(A).
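
    The quadratic residue diffusers referred to above follow the standard Schroeder construction: well n of an N-well period has depth (n^2 mod N) * lambda0 / (2N), where lambda0 is the design wavelength. A quick illustrative calculation for the 400 Hz design frequency used in the study (the well count N = 7 is an assumption, not the authors' geometry):

      c = 343.0        # speed of sound, m/s
      f0 = 400.0       # design frequency, Hz (as in the barrier study)
      N = 7            # prime number of wells per period (assumed)
      lam = c / f0

      depths = [((n * n) % N) * lam / (2 * N) for n in range(N)]
      for n, d in enumerate(depths):
          print(f"well {n}: {d * 100:5.1f} cm")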

  13. Aspects of computation on asynchronous parallel processors

    International Nuclear Information System (INIS)

    Wright, M.

    1989-01-01

    The increasing availability of asynchronous parallel processors has provided opportunities for original and useful work in scientific computing. However, the field of parallel computing is still in a highly volatile state, and researchers display a wide range of opinion about many fundamental questions such as models of parallelism, approaches for detecting and analyzing parallelism of algorithms, and tools that allow software developers and users to make effective use of diverse forms of complex hardware. This volume collects the work of researchers specializing in different aspects of parallel computing, who met to discuss the framework and the mechanics of numerical computing. The far-reaching impact of high-performance asynchronous systems is reflected in the wide variety of topics, which include scientific applications (e.g. linear algebra, lattice gauge simulation, ordinary and partial differential equations), models of parallelism, parallel language features, task scheduling, automatic parallelization techniques, tools for algorithm development in parallel environments, and system design issues

  14. Land and federal mineral ownership coverage for northwestern Colorado

    Science.gov (United States)

    Biewick, L.H.; Mercier, T.J.; Levitt, Pam; Deikman, Doug; Vlahos, Bob

    1999-01-01

    This Arc/Info coverage contains land status and Federal mineral ownership for approximately 26,800 square miles in northwestern Colorado. The polygon coverage (which is also provided here as a shapefile) contains two attributes of ownership information for each polygon. One attribute indicates where the surface is State owned, privately owned, or, if Federally owned, which Federal agency manages the land surface. The other attribute indicates which minerals, if any, are owned by the Federal government. This coverage is based on land status and Federal mineral ownership data compiled by the U.S. Geological Survey (USGS) and three Colorado State Bureau of Land Management (BLM) former district offices at a scale of 1:24,000. This coverage was compiled primarily to serve the USGS National Oil and Gas Resource Assessment Project in the Uinta-Piceance Basin Province and the USGS National Coal Resource Assessment Project in the Colorado Plateau.

  15. Newspaper coverage of mental illness in England 2008-2011.

    Science.gov (United States)

    Thornicroft, Amalia; Goulden, Robert; Shefer, Guy; Rhydderch, Danielle; Rose, Diana; Williams, Paul; Thornicroft, Graham; Henderson, Claire

    2013-04-01

    Better newspaper coverage of mental health-related issues is a target for the Time to Change (TTC) anti-stigma programme in England, whose population impact may be influenced by how far concurrent media coverage perpetuates stigma and discrimination. To compare English newspaper coverage of mental health-related topics each year of the TTC social marketing campaign (2009-2011) with baseline coverage in 2008. Content analysis was performed on articles in 27 local and national newspapers on two randomly chosen days each month. There was a significant increase in the proportion of anti-stigmatising articles between 2008 and 2011. There was no concomitant proportional decrease in stigmatising articles, and the contribution of mixed or neutral elements decreased. These findings provide promising results on improvements in press reporting of mental illness during the TTC programme in 2009-2011, and a basis for guidance to newspaper journalists and editors on reporting mental illness.

  16. Internet of THings Area Coverage Analyzer (ITHACA) for Complex Topographical Scenarios

    Directory of Open Access Journals (Sweden)

    Raúl Parada

    2017-10-01

    The number of connected devices is increasing worldwide, not only in contexts like the Smart City, but also in rural areas, to provide advanced features like smart farming or smart logistics. Thus, wireless network technologies to efficiently allocate Internet of Things (IoT) and Machine to Machine (M2M) communications are necessary. Traditional cellular networks like the Global System for Mobile communications (GSM) are widely used worldwide for IoT environments. Nevertheless, Low Power Wide Area Networks (LP-WAN) are becoming widespread as infrastructure for present and future IoT and M2M applications. Based also on a subscription service, the LP-WAN technology SIGFOX™ may compete with cellular networks in the M2M and IoT communications market, for instance in those projects where deploying the whole communications infrastructure is too complex or expensive. For decision makers choosing the most suitable technology for each specific application, signal coverage is among the key features. Unfortunately, besides simulated coverage maps, decision makers do not have real coverage maps for SIGFOX™, as they can be found for cellular networks. We therefore propose the Internet of THings Area Coverage Analyzer (ITHACA), a signal analyzer prototype to provide automated signal coverage maps and analytics for LP-WAN. Experiments performed on the Gran Canaria Island, Spain (with both urban and complex topographic rural environments), returned a real SIGFOX™ service availability above 97% and above 11% more coverage with respect to the company-provided simulated maps. We expect that ITHACA may help decision makers to deploy the most suitable technologies for future IoT and M2M projects.

  17. Parallel kinematics type, kinematics, and optimal design

    CERN Document Server

    Liu, Xin-Jun

    2014-01-01

    Parallel Kinematics - Type, Kinematics, and Optimal Design presents the results of 15 years' research on parallel mechanisms and parallel kinematics machines. This book covers the systematic classification of parallel mechanisms (PMs) as well as providing a large number of mechanical architectures of PMs available for use in practical applications. It focuses on the kinematic design of parallel robots. One successful application of parallel mechanisms in the field of machine tools, also called parallel kinematics machines, has been the emerging trend in advanced machine tools. The book describes not only the main aspects and important topics in parallel kinematics, but also references novel concepts and approaches, i.e. type synthesis based on evolution, performance evaluation and optimization based on screw theory, singularity models taking into account motion and force transmissibility, and others. This book is intended for researchers, scientists, engineers and postgraduates or above with interes...

  18. [Gaps in effective coverage by socioeconomic status and poverty condition].

    Science.gov (United States)

    Gutiérrez, Juan Pablo

    2013-01-01

    To analyze, in the context of increased health protection in Mexico, the gaps by socioeconomic status and poverty condition on effective coverage of selected preventive interventions. Data from the National Health & Nutrition Survey 2012 and 2006, using previously defined indicators of effective coverage and stratifying them by socioeconomic (SE) status and multidimensional poverty condition. For vaccination interventions, immunological equity has been maintained in Mexico. For indicators related to preventive interventions provided at the clinical setting, effective coverage is lower among those in the lowest SE quintile and among people living in multidimensional poverty. Comparing 2006 and 2012, there is no evidence on gap reduction. While health protection has significantly increased in Mexico, thus reducing SE gaps, those gaps are still important in magnitude for effective coverage of preventive interventions.

  19. Parallel R-matrix computation

    International Nuclear Information System (INIS)

    Heggarty, J.W.

    1999-06-01

    For almost thirty years, sequential R-matrix computation has been used by atomic physics research groups, from around the world, to model collision phenomena involving the scattering of electrons or positrons with atomic or molecular targets. As considerable progress has been made in the understanding of fundamental scattering processes, new data, obtained from more complex calculations, is of current interest to experimentalists. Performing such calculations, however, places considerable demands on the computational resources to be provided by the target machine, in terms of both processor speed and memory requirement. Indeed, in some instances the computational requirements are so great that the proposed R-matrix calculations are intractable, even when utilising contemporary classic supercomputers. Historically, increases in the computational requirements of R-matrix computation were accommodated by porting the problem codes to a more powerful classic supercomputer. Although this approach has been successful in the past, it is no longer considered to be a satisfactory solution due to the limitations of current (and future) Von Neumann machines. As a consequence, there has been considerable interest in the high performance multicomputers, that have emerged over the last decade which appear to offer the computational resources required by contemporary R-matrix research. Unfortunately, developing codes for these machines is not as simple a task as it was to develop codes for successive classic supercomputers. The difficulty arises from the considerable differences in the computing models that exist between the two types of machine and results in the programming of multicomputers to be widely acknowledged as a difficult, time consuming and error-prone task. Nevertheless, unless parallel R-matrix computation is realised, important theoretical and experimental atomic physics research will continue to be hindered. This thesis describes work that was undertaken in

  20. Quality and extent of locum tenens coverage in pediatric surgical practices.

    Science.gov (United States)

    Nolan, Tracy L; Kandel, Jessica J; Nakayama, Don K

    2015-04-01

    The prevalence and quality of locum tenens coverage in pediatric surgery have not been determined. An Internet-based survey of American Pediatric Surgical Association members was conducted covering: 1) practice description; 2) use and frequency of locum tenens coverage; 3) whether the surgeon provided such coverage; and 4) Likert scale responses (strongly disagree, disagree, neutral, agree, strongly agree) to statements addressing its acceptability and quality (two × five contingency table and χ² analyses, significance at P < .05). Respondents view locum tenens coverage as a stopgap solution to the surgical workforce shortage.

  1. Comparison of two next-generation sequencing kits for diagnosis of epileptic disorders with a user-friendly tool for displaying gene coverage, DeCovA

    Directory of Open Access Journals (Sweden)

    Sarra Dimassi

    2015-12-01

    In recent years, molecular genetics has been playing an increasing role in the diagnostic process for monogenic epilepsies. Knowing the genetic basis of a patient's epilepsy provides accurate genetic counseling and may guide therapeutic options. Genetic diagnosis of epilepsy syndromes has long been based on Sanger sequencing and the search for large rearrangements using MLPA or DNA arrays (array-CGH or SNP-array). Recently, next-generation sequencing (NGS) was demonstrated to be a powerful approach to overcome the wide clinical and genetic heterogeneity of epileptic disorders. Coverage is critical for assessing the quality and accuracy of results from NGS. However, it is often a difficult parameter to display in practice. The aim of the study was to compare two library-building methods (Haloplex, Agilent, and SeqCap EZ, Roche) for a targeted panel of 41 genes causing monogenic epileptic disorders. We included 24 patients, 20 of whom had known disease-causing mutations. For each patient both libraries were built in parallel and sequenced on an Ion Torrent Personal Genome Machine (PGM). To compare coverage and depth, we developed a simple homemade tool, named DeCovA (Depth and Coverage Analysis). DeCovA displays the sequencing depth of each base and the coverage of target genes at each genomic position. The fraction of each gene covered at different thresholds can easily be estimated. Neither of the two methods used, namely NextGene and Ion Reporter, was able to identify all the known mutations/CNVs displayed by the 20 patients. The variant detection rate was globally similar for the two techniques, and DeCovA showed that failure to detect a mutation was mainly related to insufficient coverage.
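
    The per-gene summary DeCovA reports, i.e. the fraction of a target covered at given depth thresholds, reduces to a simple computation over per-base depths; the sketch below illustrates the metric only and is not the tool's code:

      def coverage_fractions(depths, thresholds=(1, 10, 20, 50)):
          """Fraction of target positions whose sequencing depth reaches each
          threshold."""
          n = len(depths)
          return {t: sum(d >= t for d in depths) / n for t in thresholds}

      # toy per-base depths for one gene
      depths = [0, 3, 12, 25, 25, 60, 18, 9, 0, 40]
      print(coverage_fractions(depths))  # {1: 0.8, 10: 0.6, 20: 0.4, 50: 0.1}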

  2. Optimisation of a parallel ocean general circulation model

    OpenAIRE

    M. I. Beare; D. P. Stevens

    1997-01-01

    This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by...

  3. Quad-Tree Visual-Calculus Analysis of Satellite Coverage

    Science.gov (United States)

    Lo, Martin W.; Hockney, George; Kwan, Bruce

    2003-01-01

    An improved method of analysis of coverage of areas of the Earth by a constellation of radio-communication or scientific-observation satellites has been developed. This method is intended to supplant an older method in which the global-coverage-analysis problem is solved from a ground-to-satellite perspective. The present method provides for rapid and efficient analysis. This method is derived from a satellite-to-ground perspective and involves a unique combination of two techniques for multiresolution representation of map features on the surface of a sphere.

  4. Breast Health Services: Accuracy of Benefit Coverage Information in the Individual Insurance Marketplace.

    Science.gov (United States)

    Hamid, Mariam S; Kolenic, Giselle E; Dozier, Jessica; Dalton, Vanessa K; Carlos, Ruth C

    2017-04-01

    The aim of this study was to determine if breast health coverage information provided by customer service representatives employed by insurers offering plans in the 2015 federal and state health insurance marketplaces is consistent with the Patient Protection and Affordable Care Act (ACA) and state-specific legislation. One hundred fifty-eight unique customer service numbers were identified for insurers offering plans through the federal marketplace, augmented with four additional numbers representing the Connecticut state-run exchange. Using a standardized patient biography and the mystery-shopper technique, a single investigator posed as a purchaser and contacted each number, requesting information on breast health services coverage. Consistency of information provided by the representative with the ACA mandates (BRCA testing in high-risk women) or state-specific legislation (screening ultrasound in women with dense breasts) was determined. Insurer representatives gave BRCA test coverage information that was not consistent with the ACA mandate in 60.8% of cases, and 22.8% could not provide any information regarding coverage. Nearly half (48.1%) of insurer representatives gave coverage information about ultrasound screening for dense breasts that was not consistent with state-specific legislation, and 18.5% could not provide any information. Insurance customer service representatives in the federal and state marketplaces frequently provide inaccurate coverage information about breast health services that should be covered under the ACA and state-specific legislation. Misinformation can inadvertently lead to the purchase of a plan that does not meet the needs of the insured. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  5. Balanced, parallel operation of flashlamps

    International Nuclear Information System (INIS)

    Carder, B.M.; Merritt, B.T.

    1979-01-01

    A new energy store, the Compensated Pulsed Alternator (CPA), promises to be a cost-effective substitute for capacitors to drive flashlamps that pump large Nd:glass lasers. Because the CPA is large and discrete, it will be necessary for it to drive many parallel flashlamp circuits, presenting a problem of equal current distribution. Current division to ±20% between parallel flashlamps has been achieved, but this is marginal for laser pumping. A method is presented here that provides equal current sharing to about 1%, and it includes fused protection against short-circuit faults. The method was tested with eight parallel circuits, including both open-circuit and short-circuit fault tests

  6. Massively Parallel Computing: A Sandia Perspective

    Energy Technology Data Exchange (ETDEWEB)

    Dosanjh, Sudip S.; Greenberg, David S.; Hendrickson, Bruce; Heroux, Michael A.; Plimpton, Steve J.; Tomkins, James L.; Womble, David E.

    1999-05-06

    The computing power available to scientists and engineers has increased dramatically in the past decade, due in part to progress in making massively parallel computing practical and available. The expectation for these machines has been great. The reality is that progress has been slower than expected. Nevertheless, massively parallel computing is beginning to realize its potential for enabling significant breakthroughs in science and engineering. This paper provides a perspective on the state of the field, colored by the authors' experiences using large scale parallel machines at Sandia National Laboratories. We address trends in hardware, system software and algorithms, and we also offer our view of the forces shaping the parallel computing industry.

  7. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Michael [Iowa State Univ., Ames, IA (United States)]

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  8. Contribution of diffuser surfaces to efficiency of tilted T shape parallel highway noise barriers

    Directory of Open Access Journals (Sweden)

    N. Javid Rouzi

    2009-04-01

    Background and aims: The paper presents the results of an investigation of the acoustic performance of tilted profile parallel barriers with quadratic residue diffuser tops and faces. Methods: A 2D boundary element method (BEM) is used to predict the barrier insertion loss. Results for rigid barriers and for barriers with absorptive coverage are also calculated for comparison. Using a QRD on the top surface and faces of all the tilted profile parallel barrier models introduced here is found to improve barrier efficiency compared with the equivalent rigid parallel barrier at the examined receiver positions. Results: Applying a QRD with a design frequency of 400 Hz to a 5-degree tilted parallel barrier improves the overall performance over its equivalent rigid barrier by 1.8 dB(A). Increasing the treated surface area with reactive elements shifts the effective performance toward lower frequencies. It is found that tilting the barriers from 0 to 10 degrees in a parallel setup reduces the degradation effects in parallel barriers, but the absorption effect of the fibrous materials and the diffusivity of the quadratic residue diffuser are reduced significantly. In this case all the designed barriers perform better with 10 degrees of tilt in a parallel setup. Conclusion: The most economical traffic noise parallel barrier that still produces significantly high performance is achieved by covering only the top surface of the barrier closest to the receiver with a QRD of 400 Hz design frequency and a tilt angle of 10 degrees. The average A-weighted insertion loss of this barrier is predicted to be 16.3 dB(A).

  9. Policy Choices for Progressive Realization of Universal Health Coverage Comment on "Ethical Perspective: Five Unacceptable Trade-offs on the Path to Universal Health Coverage".

    Science.gov (United States)

    Tangcharoensathien, Viroj; Patcharanarumol, Walaiporn; Panichkriangkrai, Warisa; Sommanustweechai, Angkana

    2016-07-31

    In response to Norheim's editorial, this commentary offers reflections from Thailand on how the five unacceptable trade-offs were applied to the universal health coverage (UHC) reforms between 1975 and 2002, when the whole 64 million people came to be covered by one of the three public health insurance systems. This commentary aims to generate global discussion on how best UHC can be gradually achieved. Beyond the proposed five discrete trade-offs within each dimension, there are also trade-offs between the three dimensions of UHC: population coverage, service coverage and cost coverage. Findings from Thai UHC show that equity was applied to the extension of population coverage, as low-income households and the informal sector were the priority population groups for coverage extension by different prepayment schemes in 1975 and 1984, respectively. The exception was public sector employees, who were historically covered as part of fringe benefits well before the poor; private sector employees were covered last, in 1990. Historically, Thailand applied a comprehensive benefit package in which a few items were excluded using a negative list, until improved capacity for technology assessment allowed cost-effectiveness to be used for the inclusion of new interventions into the benefit package. Not only cost-effectiveness but also long-term budget impact, equity and ethical considerations are taken into account. Cost coverage is mostly determined by fiscal capacity. A close-ended budget with a mix of provider payment methods is used as a tool to trade off service coverage against financial risk protection. Introducing copayment in the context of fee-for-service can be harmful to beneficiaries owing to supplier-induced demand, inefficiency and unpredictable out-of-pocket payments by households. UHC achieves favorable outcomes as it was implemented when there was full geographical coverage of primary healthcare in all districts and sub-districts.

  10. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell , Stephen

    2012-01-01

    Optimize code for multi-core processors with Intel's Parallel Studio Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the

  11. Measuring coverage in MNCH: a validation study linking population survey derived coverage to maternal, newborn, and child health care records in rural China.

    Directory of Open Access Journals (Sweden)

    Li Liu

    Accurate data on coverage of key maternal, newborn, and child health (MNCH) interventions are crucial for monitoring progress toward Millennium Development Goals 4 and 5. Coverage estimates are primarily obtained from routine population surveys through self-reporting, the validity of which is not well understood. We aimed to examine the validity of the coverage of selected MNCH interventions in Gongcheng County, China. We conducted a validation study by comparing women's self-reported coverage of MNCH interventions relating to antenatal and postnatal care, mode of delivery, and child vaccinations in a community survey with their paper- and electronic-based health care records, treating the health care records as the reference standard. Of 936 women recruited, 914 (97.6%) completed the survey. Results show that self-reported coverage of these interventions had moderate to high sensitivity (0.57 [95% confidence interval (CI): 0.50-0.63] to 0.99 [95% CI: 0.98-1.00]) and low to high specificity (0 to 0.83 [95% CI: 0.80-0.86]). Despite varying overall validity, with the area under the receiver operating characteristic curve (AUC) ranging between 0.49 [95% CI: 0.39-0.57] and 0.90 [95% CI: 0.88-0.92], bias in the coverage estimates at the population level was small to moderate, with the test to actual positive (TAP) ratio ranging between 0.8 and 1.5 for 24 of the 28 indicators examined. Our ability to accurately estimate validity was affected by several caveats associated with the reference standard. Caution should be exercised when generalizing the results to other settings. The overall validity of self-reported coverage was moderate across selected MNCH indicators. However, at the population level, self-reported coverage appears to have a small to moderate degree of bias. Accuracy of the coverage was particularly high for indicators with high recorded coverage or low recorded coverage but high specificity. The study provides insights into the accuracy of
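
    The validation metrics used above are standard; in particular, the test-to-actual-positive (TAP) ratio compares the number of survey-reported positives with the number of record-based positives to indicate population-level bias. A small sketch with illustrative counts (not the study's data):

      def validation_metrics(tp, fp, fn, tn):
          """Sensitivity, specificity and TAP ratio for a self-reported indicator
          validated against a reference record."""
          sens = tp / (tp + fn)
          spec = tn / (tn + fp)
          tap = (tp + fp) / (tp + fn)    # reported positives / actual positives
          return sens, spec, tap

      sens, spec, tap = validation_metrics(tp=420, fp=60, fn=40, tn=394)
      print(f"sensitivity={sens:.2f} specificity={spec:.2f} TAP={tap:.2f}")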

  12. Multistage parallel-serial time averaging filters

    International Nuclear Information System (INIS)

    Theodosiou, G.E.

    1980-01-01

    Here a new time-averaging circuit design, the 'parallel filter', is presented, which can reduce the time jitter introduced in time measurements using counters of large dimensions. This parallel filter can be considered a single-stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter. The main advantages of such a filter over a serial one are much less electronic gate jitter and time delay for the same amount of total time uncertainty reduction. (orig.)

  13. 5 CFR 890.1106 - Coverage.

    Science.gov (United States)

    2010-01-01

    ... family member is an individual whose relationship to the enrollee meets the requirements of 5 U.S.C. 8901... EMPLOYEES HEALTH BENEFITS PROGRAM Temporary Continuation of Coverage § 890.1106 Coverage. (a) Type of enrollment. An individual who enrolls under this subpart may elect coverage for self alone or self and family...

  14. A QoS-Guaranteed Coverage Precedence Routing Algorithm for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Jiun-Chuan Lin

    2011-03-01

    For mission-critical applications of wireless sensor networks (WSNs) involving extensive battlefield surveillance, medical healthcare, etc., it is crucial to have low-power, new protocols, methodologies and structures for transferring data and information in a network with full sensing coverage capability over an extended working period. The utmost mission is to ensure that the network is fully functional, providing reliable transmission of the sensed data without the risk of data loss. WSNs have been applied to various types of mission-critical applications. Coverage preservation is one of the most essential functions for guaranteeing quality of service (QoS) in WSNs. However, a tradeoff exists between sensing coverage and network lifetime due to the limited energy supplies of sensor nodes. In this study, we propose a routing protocol that accommodates both energy balance and coverage preservation for sensor nodes in WSNs. The energy consumed by radio transmissions and the residual energy over the network are taken into account when the proposed protocol determines an energy-efficient route for a packet. The simulation results demonstrate that the proposed protocol is able to increase the duration of the on-duty network and provide up to 98.3% and 85.7% extra service time with a 100% sensing coverage ratio, compared with the LEACH and LEACH-Coverage-U protocols, respectively.
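
    The trade-off the protocol manages can be sketched as a next-hop score that blends transmission cost with residual energy. This is a hedged toy heuristic for illustration only; the actual protocol additionally protects nodes whose loss would open a hole in the sensing coverage:

      def next_hop(sink, neighbors, alpha=0.5):
          """neighbors: (x, y, residual) tuples with residual battery in [0, 1].
          Minimise a blend of normalised radio cost (squared distance to the
          sink, a proxy for transmission energy) and depleted energy."""
          d2 = [(x - sink[0]) ** 2 + (y - sink[1]) ** 2 for x, y, _ in neighbors]
          dmax = max(d2) or 1.0
          scores = [alpha * d / dmax + (1 - alpha) * (1 - e)
                    for d, (_, _, e) in zip(d2, neighbors)]
          return neighbors[scores.index(min(scores))]

      sink = (0.0, 0.0)
      neighbors = [(3.0, 4.0, 0.9), (2.0, 2.0, 0.1), (2.5, 3.0, 0.6)]
      print(next_hop(sink, neighbors))   # balances progress against battery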

  15. 29 CFR 801.3 - Coverage.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Coverage. 801.3 Section 801.3 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR OTHER LAWS APPLICATION OF THE EMPLOYEE POLYGRAPH PROTECTION ACT OF 1988 General § 801.3 Coverage. (a) The coverage of the Act extends to “any...

  16. 42 CFR 436.308 - Medically needy coverage of individuals under age 21.

    Science.gov (United States)

    2010-10-01

    ... THE VIRGIN ISLANDS Optional Coverage of the Medically Needy § 436.308 Medically needy coverage of... (b) of this section: (1) Who would not be covered under the mandatory medically needy group of... nursing facility services are provided under the plan to individuals within the age group selected under...

  17. A Tutorial on Parallel and Concurrent Programming in Haskell

    Science.gov (United States)

    Peyton Jones, Simon; Singh, Satnam

    This practical tutorial introduces the features available in Haskell for writing parallel and concurrent programs. We first describe how to write semi-explicit parallel programs by using annotations to express opportunities for parallelism and to help control the granularity of parallelism for effective execution on modern operating systems and processors. We then describe the mechanisms provided by Haskell for writing explicitly parallel programs with a focus on the use of software transactional memory to help share information between threads. Finally, we show how nested data parallelism can be used to write deterministically parallel programs which allows programmers to use rich data types in data parallel programs which are automatically transformed into flat data parallel versions for efficient execution on multi-core processors.

  18. PALNS - A software framework for parallel large neighborhood search

    DEFF Research Database (Denmark)

    Røpke, Stefan

    2009-01-01

    This paper proposes a simple, parallel, portable software framework for the metaheuristic known as large neighborhood search (LNS). The aim is to provide a framework where the user has to set up only a few data structures and implement a few functions, and the framework then provides a metaheuristic where parallelization "comes for free". We apply the parallel LNS heuristic to two different problems: the traveling salesman problem with pickup and delivery (TSPPD) and the capacitated vehicle routing problem (CVRP).
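
    The LNS core being parallelized is compact: the user supplies destroy and repair functions and the framework runs iterations across threads that share the best-known solution. A serial illustrative sketch in Python (the function names and toy instance are assumptions, not the framework's API):

      import copy, random

      def lns(initial, cost, destroy, repair, iters=500, seed=0):
          """Serial core of large neighborhood search: destroy part of the
          incumbent, repair it, keep improvements."""
          rng = random.Random(seed)
          best = copy.deepcopy(initial)
          for _ in range(iters):
              cand = repair(destroy(copy.deepcopy(best), rng), rng)
              if cost(cand) < cost(best):
                  best = cand
          return best

      # toy instance: order the numbers 0..9 (cost counts displacement)
      cost = lambda s: sum(abs(v - i) for i, v in enumerate(s))

      def destroy(sol, rng):             # remove three random elements
          removed = [sol.pop(rng.randrange(len(sol))) for _ in range(3)]
          return sol, removed

      def repair(state, rng):            # greedily reinsert at the cheapest slot
          sol, removed = state
          for v in removed:
              k = min(range(len(sol) + 1), key=lambda i: cost(sol[:i] + [v] + sol[i:]))
              sol.insert(k, v)
          return sol

      print(lns(list(range(9, -1, -1)), cost, destroy, repair))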

  19. Coverage-based constraints for IMRT optimization

    Science.gov (United States)

    Mescher, H.; Ulrich, S.; Bangert, M.

    2017-09-01

    Radiation therapy treatment planning requires an incorporation of uncertainties in order to guarantee an adequate irradiation of the tumor volumes. In current clinical practice, uncertainties are accounted for implicitly with an expansion of the target volume according to generic margin recipes. Alternatively, it is possible to account for uncertainties by explicit minimization of objectives that describe worst-case treatment scenarios, the expectation value of the treatment or the coverage probability of the target volumes during treatment planning. In this note we show that approaches relying on objectives to induce a specific coverage of the clinical target volumes are inevitably sensitive to variation of the relative weighting of the objectives. To address this issue, we introduce coverage-based constraints for intensity-modulated radiation therapy (IMRT) treatment planning. Our implementation follows the concept of coverage-optimized planning that considers explicit error scenarios to calculate and optimize patient-specific probabilities q(d̂, v̂) of covering a specific target volume fraction v̂ with a certain dose d̂. Using a constraint-based reformulation of coverage-based objectives we eliminate the trade-off between coverage and competing objectives during treatment planning. In-depth convergence tests including 324 treatment plan optimizations demonstrate the reliability of coverage-based constraints for varying levels of probability, dose and volume. General clinical applicability of coverage-based constraints is demonstrated for two cases. A sensitivity analysis regarding penalty variations within this planning study, based on IMRT treatment planning using (1) coverage-based constraints, (2) coverage-based objectives, (3) probabilistic optimization, (4) robust optimization and (5) conventional margins, illustrates the potential benefit of coverage-based constraints that do not require tedious adjustment of target volume objectives.

  20. Monitoring intervention coverage in the context of universal health coverage.

    Directory of Open Access Journals (Sweden)

    Ties Boerma

    2014-09-01

    Monitoring universal health coverage (UHC) focuses on information on health intervention coverage and financial protection. This paper addresses monitoring intervention coverage, related to the full spectrum of UHC, including health promotion and disease prevention, treatment, rehabilitation, and palliation. A comprehensive core set of indicators most relevant to the country situation should be monitored on a regular basis as part of health progress and systems performance assessment for all countries. UHC monitoring should be embedded in a broad results framework for the country health system, but focus on indicators related to the coverage of interventions that most directly reflect the results of UHC investments and strategies in each country. A set of tracer coverage indicators can be selected, divided into two groups - promotion/prevention and treatment/care - as illustrated in this paper. Disaggregation of the indicators by the main equity stratifiers is critical to monitor progress in all population groups. Targets need to be set in accordance with baselines, historical rates of progress, and measurement considerations. Critical measurement gaps also exist, especially for treatment indicators, covering issues such as mental health, injuries, chronic conditions, surgical interventions, rehabilitation, and palliation. Consequently, further research and proxy indicators need to be used in the interim. Ideally, indicators should include a quality-of-intervention dimension. For some interventions, use of a single indicator is feasible, such as management of hypertension; but in many areas additional indicators are needed to capture the quality of service provision. The monitoring of UHC has significant implications for health information systems. Major data gaps will need to be filled. At a minimum, countries will need to administer regular household health surveys with biological and clinical data collection. Countries will also need to improve the

  1. Estimating IBD tracts from low coverage NGS data

    DEFF Research Database (Denmark)

    Garrett Vieira, Filipe Jorge; Albrechtsen, Anders; Nielsen, Rasmus

    2016-01-01

    We present a method for estimating inbreeding IBD tracts from low coverage NGS data. Contrary to other methods that use genotype data, the one presented here uses genotype likelihoods to take the uncertainty of the data into account. We benchmark it under a wide range of biologically relevant conditions and show that the new method provides a marked increase in accuracy even at low coverage. AVAILABILITY AND IMPLEMENTATION: The methods presented in this work were implemented in C/C++ and are freely available for non-commercial use from https://github.com/fgvieira/ngsF-HMM CONTACT: fgvieira@snm.ku.dk
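
    The genotype-likelihood idea that separates this method from hard-call approaches fits in a few lines: at low depth, instead of committing to a genotype, every read contributes a probability under each candidate genotype. A simplified biallelic sketch with a symmetric error model (the HMM over IBD tracts is not reproduced):

      def genotype_likelihoods(bases, ref, alt, err=0.01):
          """Diploid genotype likelihoods P(reads | g) for g = 0, 1, 2 copies of
          the alternate allele, assuming independent reads; reads matching
          neither allele are ignored in this toy model."""
          gls = []
          for g in (0, 1, 2):
              p_alt = (g / 2) * (1 - err) + (1 - g / 2) * err  # P(read shows alt | g)
              lik = 1.0
              for b in bases:
                  if b == alt:
                      lik *= p_alt
                  elif b == ref:
                      lik *= 1 - p_alt
              gls.append(lik)
          return gls

      print(genotype_likelihoods("AACA", ref="A", alt="C"))  # 4 reads, one alt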

  2. CEOS Ocean Variables Enabling Research and Applications for Geo (COVERAGE)

    Science.gov (United States)

    Tsontos, V. M.; Vazquez, J.; Zlotnicki, V.

    2017-12-01

    The CEOS Ocean Variables Enabling Research and Applications for GEO (COVERAGE) initiative seeks to facilitate joint utilization of different satellite data streams on ocean physics, better integrated with biological and in situ observations, including near real-time data streams in support of oceanographic and decision support applications for societal benefit. COVERAGE aligns with programmatic objectives of CEOS (the Committee on Earth Observation Satellites) and the missions of GEO-MBON (Marine Biodiversity Observation Network) and GEO-Blue Planet, which are to advance and exploit synergies among the many observational programs devoted to ocean and coastal waters. COVERAGE is conceived of as a three-year pilot project involving international collaboration. It focuses on implementing technologies, including cloud-based solutions, to provide a data-rich, web-based platform for integrated ocean data delivery and access: multi-parameter observations, easily discoverable and usable, organized by discipline, available in near real-time, collocated to a common grid and including climatologies. These will be complemented by a set of value-added data services available via the COVERAGE portal, including an advanced web-based visualization interface, subsetting/extraction, data collocation/matchup and other relevant on-demand processing capabilities. COVERAGE development will be organized around priority use cases and applications identified by GEO and agency partners. The initial phase will be to develop co-located 25 km products from the four Ocean Virtual Constellations (VCs): Sea Surface Temperature, Sea Level, Ocean Color, and Sea Surface Winds. This aims to stimulate work among the ocean VCs while developing products and system functionality based on community recommendations. Such products, for example anomalies from a time mean, would build on the theme of applications with relevance to the CEOS/GEO mission and vision. Here we provide an overview of the COVERAGE initiative with an

  3. ε-Net Approach to Sensor k-Coverage

    Directory of Open Access Journals (Sweden)

    Fusco Giordano

    2010-01-01

    Wireless sensors rely on battery power, and in many applications it is difficult or prohibitive to replace them. Hence, in order to prolong the system's lifetime, some sensors can be kept inactive while others perform all the tasks. In this paper, we study the k-coverage problem of activating the minimum number of sensors to ensure that every point in the area is covered by at least k sensors. This ensures higher fault tolerance and robustness, and improves many operations, among which position detection and intrusion detection. The k-coverage problem is trivially NP-complete, and hence we can only provide approximation algorithms. In this paper, we present an algorithm based on an extension of the classical ε-net technique. This method gives an approximation guarantee expressed in terms of the number of sensors in an optimal solution. We do not make any particular assumption on the shape of the areas covered by each sensor, besides that they must be closed, connected, and without holes.
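
    The k-coverage objective can be made concrete with a simple greedy baseline that repeatedly activates the sensor supplying the most still-needed coverage. This is only a reference point for the problem statement; the paper's ε-net algorithm is a different and more refined construction:

      def greedy_k_coverage(points, sensors, covers, k):
          """Activate sensors until every point is covered by at least k of them.
          covers(s, p) -> bool is the sensing predicate, e.g. a binary disk."""
          need = {p: k for p in points}
          chosen, remaining = [], list(sensors)
          while any(need.values()) and remaining:
              gain = lambda s: sum(need[p] > 0 and covers(s, p) for p in points)
              s = max(remaining, key=gain)
              if gain(s) == 0:
                  break                  # k-coverage infeasible with what's left
              remaining.remove(s)
              chosen.append(s)
              for p in points:
                  if covers(s, p):
                      need[p] = max(0, need[p] - 1)
          return chosen

      covers = lambda s, p: (s[0] - p[0]) ** 2 + (s[1] - p[1]) ** 2 <= 2.0 ** 2
      points = [(1, 1), (2, 3), (4, 2)]
      sensors = [(0, 0), (1, 2), (3, 3), (4, 1), (2, 2)]
      print(greedy_k_coverage(points, sensors, covers, k=2))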

  4. Self-balanced modulation and magnetic rebalancing method for parallel multilevel inverters

    Science.gov (United States)

    Li, Hui; Shi, Yanjun

    2017-11-28

    A self-balanced modulation method and a closed-loop magnetic flux rebalancing control method for parallel multilevel inverters. The combination of the two methods provides for balancing of the magnetic flux of the inter-cell transformers (ICTs) of the parallel multilevel inverters without deteriorating the quality of the output voltage. In various embodiments a parallel multilevel inverter modulator is provided, including a multi-channel comparator to generate a multiplexed digitized ideal waveform for a parallel multilevel inverter and a finite state machine (FSM) module coupled to the multi-channel comparator, the FSM module to receive the multiplexed digitized ideal waveform and to generate a pulse width modulated gate-drive signal for each switching device of the parallel multilevel inverter. The system and method provide for optimization of the output voltage spectrum without influencing the magnetic balancing.

  5. [Quantification of acetabular coverage in normal adult].

    Science.gov (United States)

    Lin, R M; Yang, C Y; Yu, C Y; Yang, C R; Chang, G L; Chou, Y L

    1991-03-01

    Quantification of acetabular coverage is important and can be expressed by superimposition of cartilage tracings on the maximum cross-sectional area of the femoral head. A practical AutoLISP program on PC AutoCAD has been developed by us to quantify the acetabular coverage through numerical expression of the images of computed tomography. Thirty adults (60 hips) with normal center-edge angle and acetabular index on plain X-ray were randomly selected for serial sections. These slices were prepared with a fixed coordination and in continuous sections of 5 mm in thickness. The contours of the cartilage of each section were digitized into a PC computer and processed by AutoCAD programs to quantify and characterize the acetabular coverage of normal and dysplastic adult hips. We found that a total coverage ratio of greater than 80%, an anterior coverage ratio of greater than 75% and a posterior coverage ratio of greater than 80% can be categorized as normal. Polar edge distance is a good indicator for the evaluation of preoperative and postoperative coverage conditions. For standardization and evaluation of acetabular coverage, the most suitable parameters are the total coverage ratio, anterior coverage ratio, posterior coverage ratio and polar edge distance. However, medial and lateral coverage ratios are indispensable in cases of dysplastic hip because variations between them are so great that acetabuloplasty may be impossible. This program can also be used to classify precisely the type of dysplastic hip.

  6. Building high-coverage monolayers of covalently bound magnetic nanoparticles

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Mackenzie G.; Teplyakov, Andrew V., E-mail: andrewt@udel.edu

    2016-12-01

    Highlights: • A method for forming a layer of covalently bound nanoparticles is offered. • A nearly perfect monolayer of covalently bound magnetic nanoparticles was formed on gold. • Spectroscopic techniques confirmed covalent binding by the "click" reaction. • The influence of the functionalization scheme on surface coverage was investigated. Abstract: This work presents an approach for producing a high-coverage single monolayer of magnetic nanoparticles using "click chemistry" between complementarily functionalized nanoparticles and a flat substrate. This method highlights essential aspects of the functionalization scheme for the substrate surface and nanoparticles to produce exceptionally high surface coverage without sacrificing selectivity or control over the layer produced. The deposition of one single layer of magnetic particles without agglomeration, over a large area, with a nearly 100% coverage is confirmed by electron microscopy. Spectroscopic techniques, supplemented by computational predictions, are used to interrogate the chemistry of the attachment and to confirm covalent binding, rather than attachment through self-assembly or weak van der Waals bonding. Density functional theory calculations for the surface intermediate of this copper-catalyzed process provide mechanistic insight into the effects of the functionalization scheme on surface coverage. Based on this analysis, it appears that steric limitations of the intermediate structure affect nanoparticle coverage on a flat solid substrate; however, this can be overcome by designing the functionalization scheme in such a way that the copper-based intermediate is formed on the spherical nanoparticles instead. This observation can be carried over to other approaches for creating highly controlled single- or multilayered nanostructures of a wide range of materials to result in high coverage and possibly, conformal filling.

  7. 76 FR 46677 - Requirements for Group Health Plans and Health Insurance Issuers Relating to Coverage of...

    Science.gov (United States)

    2011-08-03

    ... Requirements for Group Health Plans and Health Insurance Issuers Relating to Coverage of Preventive Services... regulations published July 19, 2010 with respect to group health plans and health insurance coverage offered... plans, and health insurance issuers providing group health insurance coverage. The text of those...

  8. Automatic Parallelization Tool: Classification of Program Code for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Mustafa Basthikodi

    2016-04-01

    Performance growth of single-core processors came to a halt in the past decade, but was re-enabled by the introduction of parallelism in processors. Multicore frameworks, along with graphical processing units, have empowered broad enhancement of parallelism. Compilers are being updated to address the emerging challenges of synchronization and threading. Appropriate program and algorithm classification will be of great advantage to software engineers seeking opportunities for effective parallelization. In the present work we investigated current species for classification of algorithms; related work on classification is discussed along with a comparison of the issues that challenge classification. The set of algorithms chosen matches the structure with different issues and performs the given task. We tested these algorithms utilizing existing automatic species extraction tools along with the Bones compiler. We added functionalities to the existing tool, providing a more detailed characterization. The contributions of our work include support for pointer arithmetic, conditional and incremental statements, user-defined types, constants and mathematical functions. With this, we can retain significant data which is not captured by the original species of algorithms. We implemented new theories into the tool, enabling automatic characterization of program code.

  9. Ultrascalable petaflop parallel supercomputer

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing, including a Torus, a collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm to optimize processing performance. A DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.
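
    A minimal mpi4py sketch of the communication pattern such machines accelerate in hardware: nearest-neighbour exchange on a periodic Cartesian (torus) topology, followed by a global barrier. This is illustrative only and unrelated to the patented design; run it under an MPI launcher, e.g. mpiexec -n 8 python torus_demo.py:

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
# Arrange the ranks as a periodic 3-D grid, i.e. a torus.
dims = MPI.Compute_dims(comm.Get_size(), 3)
cart = comm.Create_cart(dims, periods=[True, True, True])

value = cart.Get_rank()
for axis in range(3):
    src, dst = cart.Shift(axis, 1)
    # Exchange payloads with the +1 neighbour along this torus axis.
    received = cart.sendrecv(value, dest=dst, source=src)
    value += received  # stand-in for a parallel-algorithm combine step

cart.Barrier()  # global synchronisation, cf. the global barrier network
if cart.Get_rank() == 0:
    print("combined value on rank 0:", value)
```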

  10. Vector and parallel processors in computational science. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Duff, I S; Reid, J K

    1985-01-01

    This volume contains papers from most of the invited talks and from several of the contributed talks and poster sessions presented at VAPP II. The contents provide extensive coverage of all important aspects of vector and parallel processors, including hardware, languages, numerical algorithms and applications. The topics covered include descriptions of new machines (both research and commercial machines), languages and software aids, and general discussions of whole classes of machines and their uses. Numerical methods papers include Monte Carlo algorithms, iterative and direct methods for solving large systems, finite elements, optimization, random number generation and mathematical software. The specific applications covered include neutron diffusion calculations, molecular dynamics, weather forecasting, lattice gauge calculations, fluid dynamics, flight simulation, cartography, image processing and cryptography. Most machine and architecture types are in use for these applications.

  11. Universal health coverage in Turkey: enhancement of equity.

    Science.gov (United States)

    Atun, Rifat; Aydın, Sabahattin; Chakraborty, Sarbani; Sümer, Safir; Aran, Meltem; Gürol, Ipek; Nazlıoğlu, Serpil; Ozgülcü, Senay; Aydoğan, Ulger; Ayar, Banu; Dilmen, Uğur; Akdağ, Recep

    2013-07-06

    Turkey has successfully introduced health system changes and provided its citizens with the right to health to achieve universal health coverage, which helped to address inequities in financing, health service access, and health outcomes. We trace the trajectory of health system reforms in Turkey, with a particular emphasis on 2003-13, which coincides with the Health Transformation Program (HTP). The HTP rapidly expanded health insurance coverage and access to health-care services for all citizens, especially the poorest population groups, to achieve universal health coverage. We analyse the contextual drivers that shaped the transformations in the health system, explore the design and implementation of the HTP, identify the factors that enabled its success, and investigate its effects. Our findings suggest that the HTP was instrumental in achieving universal health coverage to enhance equity substantially, and led to quantifiable and beneficial effects on all health system goals, with an improved level and distribution of health, greater fairness in financing with better financial protection, and notably increased user satisfaction. After the HTP, five health insurance schemes were consolidated to create a unified General Health Insurance scheme with harmonised and expanded benefits. Insurance coverage for the poorest population groups in Turkey increased from 2·4 million people in 2003, to 10·2 million in 2011. Health service access increased across the country; in particular, access to and use of key maternal and child health services improved, helping to greatly reduce the maternal mortality ratio and under-5, infant, and neonatal mortality, especially in socioeconomically disadvantaged groups. Several factors helped to achieve universal health coverage and improve outcomes. These factors include economic growth, political stability, a comprehensive transformation strategy led by a transformation team, rapid policy translation, flexible implementation with

  12. Health-system reform and universal health coverage in Latin America.

    Science.gov (United States)

    Atun, Rifat; de Andrade, Luiz Odorico Monteiro; Almeida, Gisele; Cotlear, Daniel; Dmytraczenko, T; Frenz, Patricia; Garcia, Patrícia; Gómez-Dantés, Octavio; Knaul, Felicia M; Muntaner, Carles; de Paula, Juliana Braga; Rígoli, Felix; Serrate, Pastor Castell-Florit; Wagstaff, Adam

    2015-03-28

    Starting in the late 1980s, many Latin American countries began social sector reforms to alleviate poverty, reduce socioeconomic inequalities, improve health outcomes, and provide financial risk protection. In particular, starting in the 1990s, reforms aimed at strengthening health systems to reduce inequalities in health access and outcomes focused on expansion of universal health coverage, especially for poor citizens. In Latin America, health-system reforms have produced a distinct approach to universal health coverage, underpinned by the principles of equity, solidarity, and collective action to overcome social inequalities. In most of the countries studied, government financing enabled the introduction of supply-side interventions to expand insurance coverage for uninsured citizens, with defined and enlarged benefits packages, and to scale up delivery of health services. Countries such as Brazil and Cuba introduced tax-financed universal health systems. These changes were combined with demand-side interventions aimed at alleviating poverty (targeting many social determinants of health) and improving access of the most disadvantaged populations. Hence, the distinguishing features of health-system strengthening for universal health coverage and lessons from the Latin American experience are relevant for countries advancing universal health coverage. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Acceleration of cardiovascular MRI using parallel imaging: basic principles, practical considerations, clinical applications and future directions

    International Nuclear Information System (INIS)

    Niendorf, T.; Sodickson, D.

    2006-01-01

    Cardiovascular Magnetic Resonance (CVMR) imaging has proven to be of clinical value for non-invasive diagnostic imaging of cardiovascular diseases. CVMR requires rapid imaging; however, the speed of conventional MRI is fundamentally limited by its sequential approach to image acquisition, in which data points are collected one after the other in the presence of sequentially-applied magnetic field gradients. Parallel imaging methods instead use arrays of radiofrequency coils to acquire multiple data points simultaneously, and thereby increase imaging speed and efficiency beyond the limits of purely gradient-based approaches. The resulting improvements in imaging speed can be used in various ways, including shortening long examinations, improving spatial resolution and anatomic coverage, improving temporal resolution, enhancing image quality, overcoming physiological constraints, detecting and correcting for physiologic motion, and streamlining work flow. Examples of these strategies will be provided in this review, after some of the fundamentals of parallel imaging methods now in use for cardiovascular MRI are outlined. The emphasis will rest upon basic principles and state-of-the-art clinical cardiovascular MRI applications. In addition, practical aspects such as signal-to-noise ratio considerations, tailored parallel imaging protocols and potential artifacts will be discussed, and current trends and future directions will be explored. (orig.)

  14. Concurrent Collections (CnC): A new approach to parallel programming

    CERN Multimedia

    CERN. Geneva

    2010-01-01

    A common approach in designing parallel languages is to provide some high level handles to manipulate the use of the parallel platform. This exposes some aspects of the target platform, for example, shared vs. distributed memory. It may expose some but not all types of parallelism, for example, data parallelism but not task parallelism. This approach must find a balance between the desire to provide a simple view for the domain expert and provide sufficient power for tuning. This is hard for any given architecture and harder if the language is to apply to a range of architectures. Either simplicity or power is lost. Instead of viewing the language design problem as one of providing the programmer with high level handles, we view the problem as one of designing an interface. On one side of this interface is the programmer (domain expert) who knows the application but needs no knowledge of any aspects of the platform. On the other side of the interface is the performance expert (programmer o...

  15. .NET 4.5 parallel extensions

    CERN Document Server

    Freeman, Bryan

    2013-01-01

    This book contains practical recipes on everything you will need to create task-based parallel programs using C#, .NET 4.5, and Visual Studio. The book is packed with illustrated code examples to create scalable programs. This book is intended to help experienced C# developers write applications that leverage the power of modern multicore processors. It provides the necessary knowledge for an experienced C# developer to work with .NET parallelism APIs. Previous experience of writing multithreaded applications is not necessary.
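
    The book's recipes target C# and the .NET Task Parallel Library; purely as a cross-language illustration of the task-based pattern it teaches (submit independent work items, then harvest each result as it completes), here is a small Python sketch in which the task function and URLs are invented placeholders:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch_size(url: str) -> int:
    # Placeholder for real work such as an HTTP fetch; returns a fake size.
    return len(url) * 1000

urls = [f"https://example.com/page{i}" for i in range(8)]

# Fan the tasks out to a pool, then consume results as they complete.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = {pool.submit(fetch_size, u): u for u in urls}
    for fut in as_completed(futures):
        print(futures[fut], "->", fut.result(), "bytes")
```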

  16. 75 FR 69577 - Deposit Insurance Regulations; Unlimited Coverage for Noninterest-Bearing Transaction Accounts

    Science.gov (United States)

    2010-11-15

    ..., contending that providing such coverage for these accounts promotes moral hazard. Four commenters suggested... withdrawals at any time, whether held by a business, an individual or other type of depositor. Unlike the... for unlimited separate coverage as a noninterest-bearing transaction account. One issue raised during...

  17. Coverage Extension via Side-Lobe Transmission in Multibeam Satellite System

    OpenAIRE

    Gharanjik, Ahmad; Kmieciak, Jarek; Shankar, Bhavani; Ottersten, Björn

    2017-01-01

    In this paper, we study the feasibility of extending the coverage of a multibeam satellite network by providing low-rate communications to terminals located outside the coverage of the main beams. Focusing on the MEO satellite network, and using realistic link budgets from O3b networks, we investigate the performance of both forward- and return-links for terminals stationed in the side lobes of the main beams. Particularly, multi-carrier transmission for forward-link and single carrier transmission for re...

  18. State Mandated Benefits and Employer Provided Health Insurance

    OpenAIRE

    Jonathan Gruber

    1992-01-01

    One popular explanation for this low rate of employee coverage is the presence of numerous state regulations which mandate that group health insurance plans must include certain benefits. By raising the minimum costs of providing any health insurance coverage, these mandated benefits make it impossible for firms which would have desired to offer minimal health insurance at a low cost to do so. I use data on insurance coverage among employees in small firms to investigate whether this problem ...

  19. Parallel Task Processing on a Multicore Platform in a PC-based Control System for Parallel Kinematics

    Directory of Open Access Journals (Sweden)

    Harald Michalik

    2009-02-01

    Full Text Available Multicore platforms have one physical processor chip with multiple cores interconnected via a chip-level bus. Because they deliver greater computing power through concurrency and offer greater system density, multicore platforms are well qualified to address the performance bottleneck encountered in PC-based control systems for parallel kinematic robots with heavy CPU load. Heavy control-task loads are generated by new control approaches that include features such as singularity prediction, structure control algorithms, vision data integration and similar tasks. In this paper we introduce the parallel task-scheduling extension of a communication architecture specially tailored for the development of PC-based control of parallel kinematics. The scheduling is specially designed for processing on a multicore platform. It breaks down the serial task processing of the robot control cycle and extends it with parallel task-processing paths in order to enhance the overall control performance.
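
    A minimal Python sketch of the fan-out/fan-in idea described above: the independent heavy tasks of one control cycle run in parallel and are joined before the cycle result is computed. The task names are hypothetical stand-ins, not the paper's control algorithms:

```python
from concurrent.futures import ThreadPoolExecutor

def vision_update(state: int) -> int:       # heavy sensor-processing path
    return state + 1

def singularity_check(state: int) -> int:   # heavy prediction path
    return state * 2

state = 42
with ThreadPoolExecutor() as pool:
    # Fan out the independent heavy tasks of one control cycle...
    f_vision = pool.submit(vision_update, state)
    f_singular = pool.submit(singularity_check, state)
    # ...and fan back in before computing the actuator command.
    command = f_vision.result() + f_singular.result()
print("actuator command:", command)
```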

  20. Unified Singularity Modeling and Reconfiguration of 3rTPS Metamorphic Parallel Mechanisms with Parallel Constraint Screws

    Directory of Open Access Journals (Sweden)

    Yufeng Zhuang

    2015-01-01

    Full Text Available This paper presents a unified singularity modeling and reconfiguration analysis of variable topologies of a class of metamorphic parallel mechanisms with parallel constraint screws. The new parallel mechanisms consist of three reconfigurable rTPS limbs that have two working phases stemming from the reconfigurable Hooke (rT) joint. While one phase has full mobility, the other supplies a constraint force to the platform. Based on these, the platform constraint screw systems show that the new metamorphic parallel mechanisms have four topologies, obtained by altering the limb phases, with the mobility changing among 1R2T (one rotation and two translations), 2R2T, and 3R2T, and mobility 6. Geometric conditions of the mechanism design are investigated, with some special topologies illustrated considering the limb arrangement. Following this and the actuation scheme analysis, a unified Jacobian matrix is formed using screw theory to include the change between geometric constraints and actuation constraints in the topology reconfiguration. Various singular configurations are identified by analyzing screw dependency in the Jacobian matrix. The work in this paper provides a basis for singularity-free workspace analysis and optimal design of this class of metamorphic parallel mechanisms with parallel constraint screws, which shows simple geometric constraints with potentially simple kinematics and dynamics properties.

  1. 29 CFR 2.13 - Audiovisual coverage prohibited.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 1 2010-07-01 2010-07-01 true Audiovisual coverage prohibited. 2.13 Section 2.13 Labor Office of the Secretary of Labor GENERAL REGULATIONS Audiovisual Coverage of Administrative Hearings § 2.13 Audiovisual coverage prohibited. The Department shall not permit audiovisual coverage of the...

  2. Investigation of growth, coverage and effectiveness of plasma assisted nano-films of fluorocarbon

    International Nuclear Information System (INIS)

    Joshi, Pratik P.; Pulikollu, Rajasekhar; Higgins, Steven R.; Hu Xiaoming; Mukhopadhyay, S.M.

    2006-01-01

    Plasma-assisted functional films have significant potential in various engineering applications. They can be tailored to impart desired properties by bonding specific molecular groups to the substrate surface. The aim of this investigation was to develop a fundamental understanding of the atomic-level growth, coverage and functional effectiveness of plasma nano-films on flat surfaces and to explore their application potential for complex and unevenly shaped nanomaterials. In this paper, results on plasma-assisted nano-scale fluorocarbon films, which are known for imparting inertness or hydrophobicity to the surface, are discussed. The film deposition was studied as a function of time on flat single-crystal surfaces of silicon, sapphire and graphite, using microwave plasma. X-ray photoelectron spectroscopy (XPS) was used for a detailed study of the composition and chemistry of the substrate and coating atoms at all stages of deposition. Atomic force microscopy (AFM) was performed in parallel to study the coverage and growth morphology of these films at each stage. Combined XPS and AFM results indicated complete coverage of all the substrates at the nanometer scale. It was also shown that these films grew in a layer-by-layer fashion. The nano-films were also applied to complex and unevenly shaped nano-structured and porous materials, such as microcellular porous foam and nanofibers. It was seen that these nano-films can be a viable approach for effective surface modification of complex or unevenly shaped nanomaterials.

  3. "Feeling" Series and Parallel Resistances.

    Science.gov (United States)

    Morse, Robert A.

    1993-01-01

    Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)
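
    The formulas the activity illustrates are R = R1 + R2 + ... for series and 1/R = 1/R1 + 1/R2 + ... for parallel; a few lines of Python make the comparison concrete:

```python
def series(*rs):
    # Series: resistances add directly, R = R1 + R2 + ...
    return sum(rs)

def parallel(*rs):
    # Parallel: conductances add, 1/R = 1/R1 + 1/R2 + ...
    return 1.0 / sum(1.0 / r for r in rs)

# Two 100-ohm resistors: 200 ohms in series, 50 ohms in parallel.
print(series(100, 100), parallel(100, 100))
```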

  4. Building a parallel file system simulator

    International Nuclear Information System (INIS)

    Molina-Estolano, E; Maltzahn, C; Brandt, S A; Bent, J

    2009-01-01

    Parallel file systems are gaining in popularity in high-end computing centers as well as commercial data centers. High-end computing systems are expected to scale exponentially and to pose new challenges to their storage scalability in terms of cost and power. To address these challenges, scientists and file system designers will need a thorough understanding of the design space of parallel file systems. Yet there exist few systematic studies of parallel file system behavior at petabyte and exabyte scales. An important reason is the significant cost of getting access to large-scale hardware to test parallel file systems. To contribute to this understanding we are building a parallel file system simulator that can simulate parallel file systems at very large scale. Our goal is to simulate petabyte-scale parallel file systems on a small cluster or even a single machine in reasonable time and fidelity. With this simulator, file system experts will be able to tune existing file systems for specific workloads, scientists and file system deployment engineers will be able to better communicate workload requirements, file system designers and researchers will be able to try out design alternatives and innovations at scale, and instructors will be able to study very large-scale parallel file system behavior in the classroom. In this paper we describe our approach and provide preliminary results that are encouraging both in terms of fidelity and simulation scalability.
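
    As a toy illustration of the kind of question such a simulator answers, the sketch below models a file striped round-robin across N servers, with a read completing when the slowest server finishes. The bandwidth and jitter figures are arbitrary assumptions, and the model ignores caching, contention and metadata costs entirely:

```python
import random

def read_time(file_mb, stripe_mb, n_servers, mb_per_s=100.0):
    """Toy model: a file striped round-robin across servers completes
    when the most-loaded server finishes streaming its chunks."""
    chunks = int(file_mb // stripe_mb)
    load = [0.0] * n_servers
    for c in range(chunks):
        # Per-chunk service time with some random server-side jitter.
        load[c % n_servers] += (stripe_mb / mb_per_s) * random.uniform(1.0, 1.3)
    return max(load)

random.seed(1)
for servers in (4, 16, 64):
    print(servers, "servers:", round(read_time(4096, 4, servers), 2), "s")
```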

  5. Performance Evaluation of a Dual Coverage System for Internet of Things Environments

    Directory of Open Access Journals (Sweden)

    Omar Said

    2016-01-01

    Full Text Available A dual coverage system for Internet of Things (IoT) environments is introduced. This system is used to connect IoT nodes regardless of their locations. The proposed system has three different architectures, which are based on satellites and High Altitude Platforms (HAPs). In the case of Internet coverage problems, Internet coverage is replaced with Satellite/HAP network coverage under specific restrictions such as loss and delay. According to IoT requirements, the proposed architectures should include multiple levels of satellites or HAPs, or a combination of both, to provide global coverage of Internet things. It was shown that the Satellite/HAP/HAP/Things architecture provides the largest coverage area. A network simulation package, NS2, was used to test the performance of the proposed multilevel architectures. The results indicated that the HAP/HAP/Things architecture has the best end-to-end delay, packet loss, throughput, energy consumption, and handover.

  6. Mental Health Insurance Parity and Provider Wages.

    Science.gov (United States)

    Golberstein, Ezra; Busch, Susan H

    2017-06-01

    Policymakers frequently mandate that employers or insurers provide insurance benefits deemed to be critical to individuals' well-being. However, in the presence of private market imperfections, mandates that increase demand for a service can lead to price increases for that service, without necessarily affecting the quantity being supplied. We test this idea empirically by looking at mental health parity mandates. This study evaluated whether implementation of parity laws was associated with changes in mental health provider wages. Quasi-experimental analyses of average wages by state and year were conducted for six mental health care-related occupations: Clinical, Counseling, and School Psychologists; Substance Abuse and Behavioral Disorder Counselors; Marriage and Family Therapists; Mental Health Counselors; Mental Health and Substance Abuse Social Workers; and Psychiatrists. Data from 1999-2013 were used to estimate the association between the implementation of state mental health parity laws and the Paul Wellstone and Pete Domenici Mental Health Parity and Addiction Equity Act and average mental health provider wages. Mental health parity laws were associated with a significant increase in mental health care provider wages, controlling for changes in mental health provider wages in states not exposed to parity (3.5 percent [95% CI: 0.3%, 6.6%]; p < .05). Health insurance benefit expansions may lead to increased prices for health services when the private market that supplies the service is imperfect or constrained. In the context of mental health parity, this work suggests that part of the value of expanding insurance benefits for mental health coverage was captured by providers. Given historically low wage levels of mental health providers, this increase may be a first step in bringing mental health provider wages in line with parallel health professions, potentially reducing turnover rates and improving treatment quality.

  7. The exploitation of "Exploitation" in the tenofovir prep trial in Cameroon: Lessons learned from media coverage of an HIV prevention trial.

    Science.gov (United States)

    Mack, Natasha; Robinson, Elizabeth T; MacQueen, Kathleen M; Moffett, Jill; Johnson, Laura M

    2010-06-01

    Media coverage influences how clinical trials are perceived internationally and in communities where trials occur, affecting recruitment, retention, and political support for research. We conducted a discourse analysis of news coverage from 2004-2005 of a trial in Cameroon on oral PrEP for HIV prevention, to identify messages, communication techniques, and sources of messages that were amplified via media. We identified two parallel discourses: one on ethical concerns about the Cameroon trial, and a second, more general "science exploitation" discourse concerned with the potential for trials with vulnerable participant populations to be conducted unethically, benefiting only wealthy populations. Researchers should overtly address exploitation as an integral, ongoing component of research, particularly where historical or cultural conditions set the stage for controversy to emerge.

  8. Evaluation of fault coverage for digitalized system in nuclear power plants using VHDL

    International Nuclear Information System (INIS)

    Kim, Suk Joon; Lee, Jun Suk; Seong, Poong Hyun

    2003-01-01

    Fault coverage of digital systems is one of the most important factors in the safety analysis of nuclear power plants. Several axiomatic models for the estimation of fault coverage of digital systems have been proposed, but to apply those models to real digital systems, the parameters they require must be approximated using analytic methods, empirical methods or expert opinion. In this paper, we apply the fault injection method to a VHDL simulation model of a real digital system that provides the protection function in nuclear power plants, in order to approximate the fault detection coverage of the digital system. As a result, the fault detection coverage of the digital system could be obtained.
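
    Conceptually, a fault-injection campaign estimates coverage as the fraction of injected faults that the system detects. A hedged Python sketch of that Monte Carlo estimate, in which the 90% detection probability is an invented stand-in for the outcome of one VHDL simulation run:

```python
import random

def detected(fault_id: int) -> bool:
    # Stand-in for simulating the VHDL model with one injected fault and
    # observing whether the protection logic flags it (assumed 90% here).
    return random.random() < 0.90

random.seed(7)
n_injected = 10_000
n_caught = sum(detected(f) for f in range(n_injected))
coverage = n_caught / n_injected
# Normal-approximation 95% confidence half-width for the estimate.
half_width = 1.96 * (coverage * (1 - coverage) / n_injected) ** 0.5
print(f"estimated fault-detection coverage: {coverage:.3f} +/- {half_width:.3f}")
```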

  9. Parallel computing for event reconstruction in high-energy physics

    International Nuclear Information System (INIS)

    Wolbers, S.

    1993-01-01

    Parallel computing has been recognized as a solution to large computing problems. In High Energy Physics, offline event reconstruction of detector data is a very large computing problem that has been solved with parallel computing techniques. A review is given of the parallel programming package CPS (Cooperative Processes Software), developed and used at Fermilab for offline reconstruction of terabytes of data requiring the delivery of hundreds of VAX-years per experiment. The Fermilab UNIX farms, consisting of 180 Silicon Graphics workstations and 144 IBM RS6000 workstations, are used to provide the computing power for the experiments. Fermilab has had a long history of providing production parallel computing, starting with the ACP (Advanced Computer Project) farms in 1986. The Fermilab UNIX farms have been in production for over 2 years with 24 hour/day service to experimental user groups. Additional tools for managing, controlling and monitoring these large systems are also described. Possible future directions for parallel computing in High Energy Physics are given.
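
    The farm model works because detector events are independent, so they can be distributed to workers with no inter-worker communication. A minimal Python sketch of the worker-farm pattern, with a placeholder computation standing in for real per-event reconstruction:

```python
from multiprocessing import Pool

def reconstruct(event):
    # Placeholder for CPU-heavy per-event reconstruction; events are
    # independent, which is what makes farm-style parallelism trivial.
    return sum(hit % 7 for hit in event)

if __name__ == "__main__":
    events = [list(range(i, i + 50)) for i in range(1000)]
    with Pool(processes=8) as farm:
        results = farm.map(reconstruct, events)  # events fan out to workers
    print("reconstructed", len(results), "events")
```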

  10. 42 CFR 440.330 - Benchmark health benefits coverage.

    Science.gov (United States)

    2010-10-01

    ...) Federal Employees Health Benefit Plan Equivalent Coverage (FEHBP—Equivalent Health Insurance Coverage). A... coverage. Health benefits coverage that is offered and generally available to State employees in the State... 42 Public Health 4 2010-10-01 2010-10-01 false Benchmark health benefits coverage. 440.330 Section...

  11. Introduction to parallel algorithms and architectures arrays, trees, hypercubes

    CERN Document Server

    Leighton, F Thomson

    1991-01-01

    Introduction to Parallel Algorithms and Architectures: Arrays Trees Hypercubes provides an introduction to the expanding field of parallel algorithms and architectures. This book focuses on parallel computation involving the most popular network architectures, namely, arrays, trees, hypercubes, and some closely related networks. Organized into three chapters, this book begins with an overview of the simplest architectures of arrays and trees. This text then presents the structures and relationships between the dominant network architectures, as well as the most efficient parallel algorithms for

  12. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed. Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  13. Insurance coverage for male infertility care in the United States.

    Science.gov (United States)

    Dupree, James M

    2016-01-01

    Infertility is a common condition experienced by many men and women, and treatments are expensive. The World Health Organization and American Society of Reproductive Medicine define infertility as a disease, yet private companies infrequently offer insurance coverage for infertility treatments. This is despite the clear role that healthcare insurance plays in ensuring access to care and minimizing the financial burden of expensive services. In this review, we assess the current knowledge of how male infertility care is covered by insurance in the United States. We begin with an appraisal of the costs of male infertility care, then examine the state insurance laws relevant to male infertility, and close with a discussion of why insurance coverage for male infertility is important to both men and women. Importantly, we found that despite infertility being classified as a disease and males contributing to almost half of all infertility cases, coverage for male infertility is often excluded from health insurance laws. Excluding coverage for male infertility places an undue burden on their female partners. In addition, excluding care for male infertility risks missing opportunities to diagnose important health conditions and identify reversible or irreversible causes of male infertility. Policymakers should consider providing equal coverage for male and female infertility care in future health insurance laws.

  14. A possibility of parallel and anti-parallel diffraction measurements on neutron diffractometer employing bent perfect crystal monochromator at the monochromatic focusing condition

    Science.gov (United States)

    Choi, Yong Nam; Kim, Shin Ae; Kim, Sung Kyu; Kim, Sung Baek; Lee, Chang-Hee; Mikula, Pavel

    2004-07-01

    In a conventional diffractometer having a single monochromator, only one position, the parallel position, is used for the diffraction experiment (i.e. detection), because the resolution property of the other, the anti-parallel position, is very poor. However, a bent perfect crystal (BPC) monochromator at the monochromatic focusing condition can provide a quite flat and equal resolution property at both the parallel and anti-parallel positions, and thus one has a chance to use both sides for the diffraction experiment. From the FWHM and Δd/d data measured in three diffraction geometries (symmetric, asymmetric compression and asymmetric expansion), we conclude that simultaneous diffraction measurement in both the parallel and anti-parallel positions can be achieved.

  15. Microwave tomography global optimization, parallelization and performance evaluation

    CERN Document Server

    Noghanian, Sima; Desell, Travis; Ashtari, Ali

    2014-01-01

    This book provides a detailed overview of the use of global optimization and parallel computing in microwave tomography techniques. The book focuses on techniques that are based on global optimization and electromagnetic numerical methods. The authors present parallelization techniques for homogeneous and heterogeneous computing architectures, spanning general-purpose machines and future high-performance computers. The book also discusses the multi-level optimization technique, the hybrid genetic algorithm and its application in breast cancer imaging.

  16. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms for architectures such as linear arrays, mesh-connected computers, and cube-connected computers. Another example where algorithms can be applied is the shared-memory SIMD (single instruction stream, multiple data stream) computer, in which the whole sequence to be sorted can fit in the
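
    A recurring pattern in this literature, sketched here in Python: sort independent chunks in parallel, then k-way merge the sorted runs. This is a generic sketch of the idea, not a specific algorithm from the book:

```python
from concurrent.futures import ProcessPoolExecutor
from heapq import merge

def parallel_sort(data, workers=4):
    """Sort chunks in parallel processes, then k-way merge the runs."""
    step = -(-len(data) // workers)  # ceiling division
    chunks = [data[i:i + step] for i in range(0, len(data), step)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        runs = list(pool.map(sorted, chunks))
    return list(merge(*runs))

if __name__ == "__main__":
    import random
    xs = [random.randrange(10**6) for _ in range(100_000)]
    assert parallel_sort(xs) == sorted(xs)
    print("ok")
```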

  17. 75 FR 70159 - Group Health Plans and Health Insurance Coverage Rules Relating to Status as a Grandfathered...

    Science.gov (United States)

    2010-11-17

    ... Group Health Plans and Health Insurance Coverage Rules Relating to Status as a Grandfathered Health Plan... contracts of insurance. The temporary regulations provide guidance to employers, group health plans, and health insurance issuers providing group health insurance coverage. The IRS is issuing the temporary...

  18. Enhancing Coverage in Narrow Band-IoT Using Machine Learning

    OpenAIRE

    Chafii, Marwa; Bader, Faouzi; Palicot, Jacques

    2018-01-01

    Narrow Band-Internet of Things (NB-IoT) is a technology recently proposed by 3GPP in Release 13. It provides low energy consumption and wide coverage in order to meet the requirements of its diverse applications that span social, industrial and environmental aspects. Increasing the number of repetitions of the transmission has been selected as a promising approach to enhance the coverage in NB-IoT up to 164 dB in terms of maximum coupling loss for uplink transmissions,...

  19. 40 CFR 51.356 - Vehicle coverage.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 2 2010-07-01 2010-07-01 false Vehicle coverage. 51.356 Section 51.356....356 Vehicle coverage. The performance standard for enhanced I/M programs assumes coverage of all 1968 and later model year light duty vehicles and light duty trucks up to 8,500 pounds GVWR, and includes...

  20. Chernobyl coverage: how the US media treated the nuclear industry

    International Nuclear Information System (INIS)

    Friedman, S.M.; Gorney, C.M.; Egolf, B.P.

    1992-01-01

    This study attempted to uncover whether enough background information about nuclear power and the nuclear industries in the USA, USSR and Eastern and Western Europe had been included during the first two weeks of US coverage of the Chernobyl accident so that Americans would not be misled in their understanding of and attitudes toward nuclear power in general. It also sought to determine if reporters took advantage of the Chernobyl accident to attack nuclear technology or the nuclear industry in general. Coverage was analysed in five US newspapers and on the evening newscasts of the three major US television networks. Despite heavy coverage of the accident, no more than 25% of the coverage was devoted to information on safety records, history of accidents and the current status of nuclear industries. Not enough information was provided to improve the public's understanding of nuclear power or to put the Chernobyl accident in context. However, articles and newscasts generally balanced the use of pro- and anti-nuclear statements, and did not include excessive amounts of fear-inducing or negative information. (author)

  1. Distributed Parallel Architecture for "Big Data"

    Directory of Open Access Journals (Sweden)

    Catalin BOJA

    2012-01-01

    Full Text Available This paper is an extension of the "Distributed Parallel Architecture for Storing and Processing Large Datasets" paper presented at the WSEAS SEPADS’12 conference in Cambridge. In its original version, the paper reviewed the benefits of using a distributed parallel architecture to store and process large datasets. This paper analyzes the problem of storing, processing and retrieving meaningful insight from petabytes of data. It provides a survey of current distributed and parallel data-processing technologies and, based on them, proposes an architecture that can be used to solve the analyzed problem. This version places more emphasis on distributed file systems and the ETL processes involved in a distributed environment.
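
    The split-process-merge idea behind most distributed data-processing architectures fits in a few lines; here is a hedged single-machine Python analogue of the map/reduce pattern, with multiprocessing standing in for a cluster of nodes:

```python
from collections import Counter
from multiprocessing import Pool

def map_count(partition: str) -> Counter:
    # Map phase: local word counts on one data partition.
    return Counter(partition.split())

def reduce_counts(partials) -> Counter:
    # Reduce phase: merge the partial results into a global count.
    total = Counter()
    for p in partials:
        total.update(p)
    return total

if __name__ == "__main__":
    partitions = ["big data big insight", "data moves to code", "big big data"]
    with Pool(3) as pool:
        partials = pool.map(map_count, partitions)
    print(reduce_counts(partials).most_common(3))
```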

  2. Media coverage of chronic diseases in the Netherlands.

    NARCIS (Netherlands)

    van der Wardt, E.M.; van der Wardt, Elly M.; Taal, Erik; Rasker, Johannes J.; Wiegman, O.

    1999-01-01

    Objective: Little is known about the quantity or quality of information on rheumatic diseases provided by the mass media. The aim of this study was to gain insight into the media coverage of rheumatic diseases compared with other chronic diseases in the Netherlands. - Materials and Methods:

  3. Parallel Tensor Compression for Large-Scale Scientific Data.

    Energy Technology Data Exchange (ETDEWEB)

    Kolda, Tamara G. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ballard, Grey [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Austin, Woody Nathan [Univ. of Texas, Austin, TX (United States)

    2015-10-01

    As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five-way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10000 on real-world data sets with negligible loss in accuracy. So that we can operate on such massive data, we present the first-ever distributed-memory parallel implementation of the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
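
    At small scale, the Tucker decomposition the authors parallelize can be computed sequentially with a truncated higher-order SVD; the NumPy sketch below shows the structure of the computation and is not the paper's distributed implementation:

```python
import numpy as np

def unfold(t, mode):
    """Matricize tensor t along the given mode."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def hosvd(t, ranks):
    """Truncated higher-order SVD: one factor matrix per mode, then a core."""
    factors = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(t, mode), full_matrices=False)
        factors.append(u[:, :r])
    core = t
    for u in factors:
        # Contracting axis 0 each time cycles through the modes in order.
        core = np.tensordot(core, u.conj(), axes=(0, 0))
    return core, factors

rng = np.random.default_rng(0)
t = rng.standard_normal((20, 20, 20))
core, factors = hosvd(t, (5, 5, 5))
print(core.shape, [f.shape for f in factors])  # (5, 5, 5) and three (20, 5)s
```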

  4. Massive hybrid parallelism for fully implicit multiphysics

    International Nuclear Information System (INIS)

    Gaston, D. R.; Permann, C. J.; Andrs, D.; Peterson, J. W.

    2013-01-01

    As hardware advances continue to modify the supercomputing landscape, traditional scientific software development practices will become more outdated, ineffective, and inefficient. The process of rewriting/retooling existing software for new architectures is a Sisyphean task, and results in substantial hours of development time, effort, and money. Software libraries which provide an abstraction of the resources provided by such architectures are therefore essential if the computational engineering and science communities are to continue to flourish in this modern computing environment. The Multiphysics Object Oriented Simulation Environment (MOOSE) framework enables complex multiphysics analysis tools to be built rapidly by scientists, engineers, and domain specialists, while also allowing them to both take advantage of current HPC architectures, and efficiently prepare for future supercomputer designs. MOOSE employs a hybrid shared-memory and distributed-memory parallel model and provides a complete and consistent interface for creating multiphysics analysis tools. In this paper, a brief discussion of the mathematical algorithms underlying the framework and the internal object-oriented hybrid parallel design are given. Representative massively parallel results from several applications areas are presented, and a brief discussion of future areas of research for the framework are provided. (authors)

  5. Massive hybrid parallelism for fully implicit multiphysics

    Energy Technology Data Exchange (ETDEWEB)

    Gaston, D. R.; Permann, C. J.; Andrs, D.; Peterson, J. W. [Idaho National Laboratory, 2525 N. Fremont Ave., Idaho Falls, ID 83415 (United States)

    2013-07-01

    As hardware advances continue to modify the supercomputing landscape, traditional scientific software development practices will become more outdated, ineffective, and inefficient. The process of rewriting/retooling existing software for new architectures is a Sisyphean task, and results in substantial hours of development time, effort, and money. Software libraries which provide an abstraction of the resources provided by such architectures are therefore essential if the computational engineering and science communities are to continue to flourish in this modern computing environment. The Multiphysics Object Oriented Simulation Environment (MOOSE) framework enables complex multiphysics analysis tools to be built rapidly by scientists, engineers, and domain specialists, while also allowing them to both take advantage of current HPC architectures, and efficiently prepare for future supercomputer designs. MOOSE employs a hybrid shared-memory and distributed-memory parallel model and provides a complete and consistent interface for creating multiphysics analysis tools. In this paper, a brief discussion of the mathematical algorithms underlying the framework and the internal object-oriented hybrid parallel design are given. Representative massively parallel results from several applications areas are presented, and a brief discussion of future areas of research for the framework are provided. (authors)

  6. MASSIVE HYBRID PARALLELISM FOR FULLY IMPLICIT MULTIPHYSICS

    Energy Technology Data Exchange (ETDEWEB)

    Cody J. Permann; David Andrs; John W. Peterson; Derek R. Gaston

    2013-05-01

    As hardware advances continue to modify the supercomputing landscape, traditional scientific software development practices will become more outdated, ineffective, and inefficient. The process of rewriting/retooling existing software for new architectures is a Sisyphean task, and results in substantial hours of development time, effort, and money. Software libraries which provide an abstraction of the resources provided by such architectures are therefore essential if the computational engineering and science communities are to continue to flourish in this modern computing environment. The Multiphysics Object Oriented Simulation Environment (MOOSE) framework enables complex multiphysics analysis tools to be built rapidly by scientists, engineers, and domain specialists, while also allowing them to both take advantage of current HPC architectures, and efficiently prepare for future supercomputer designs. MOOSE employs a hybrid shared-memory and distributed-memory parallel model and provides a complete and consistent interface for creating multiphysics analysis tools. In this paper, a brief discussion of the mathematical algorithms underlying the framework and the internal object-oriented hybrid parallel design are given. Representative massively parallel results from several applications areas are presented, and a brief discussion of future areas of research for the framework are provided.

  7. Impact of insurance coverage on utilization of pre-exposure prophylaxis for HIV prevention.

    Science.gov (United States)

    Patel, Rupa R; Mena, Leandro; Nunn, Amy; McBride, Timothy; Harrison, Laura C; Oldenburg, Catherine E; Liu, Jingxia; Mayer, Kenneth H; Chan, Philip A

    2017-01-01

    Pre-exposure prophylaxis (PrEP) can reduce U.S. HIV incidence. We assessed insurance coverage and its association with PrEP utilization. We reviewed patient data at three PrEP clinics (Jackson, Mississippi; St. Louis, Missouri; Providence, Rhode Island) from 2014-2015. The outcome, PrEP utilization, was defined as patient PrEP use at three months. Multivariable logistic regression was performed to determine the association between insurance coverage and PrEP utilization. Of 201 patients (Jackson: 34%; St. Louis: 28%; Providence: 28%), 91% were male, 51% were White, median age was 29 years, and 21% were uninsured; 82% of patients reported taking PrEP at three months. Insurance coverage was significantly associated with PrEP utilization. After adjusting for Medicaid-expansion and individual socio-demographics, insured patients were four times as likely to use PrEP services compared to the uninsured (OR: 4.49, 95% CI: 1.68-12.01; p = 0.003). Disparities in insurance coverage are important considerations in implementation programs and may impede PrEP utilization.

  8. Impact of insurance coverage on utilization of pre-exposure prophylaxis for HIV prevention.

    Directory of Open Access Journals (Sweden)

    Rupa R Patel

    Full Text Available Pre-exposure prophylaxis (PrEP) can reduce U.S. HIV incidence. We assessed insurance coverage and its association with PrEP utilization. We reviewed patient data at three PrEP clinics (Jackson, Mississippi; St. Louis, Missouri; Providence, Rhode Island) from 2014-2015. The outcome, PrEP utilization, was defined as patient PrEP use at three months. Multivariable logistic regression was performed to determine the association between insurance coverage and PrEP utilization. Of 201 patients (Jackson: 34%; St. Louis: 28%; Providence: 28%), 91% were male, 51% were White, median age was 29 years, and 21% were uninsured; 82% of patients reported taking PrEP at three months. Insurance coverage was significantly associated with PrEP utilization. After adjusting for Medicaid-expansion and individual socio-demographics, insured patients were four times as likely to use PrEP services compared to the uninsured (OR: 4.49, 95% CI: 1.68-12.01; p = 0.003). Disparities in insurance coverage are important considerations in implementation programs and may impede PrEP utilization.

  9. Parallel SN transport calculations on a transputer network

    International Nuclear Information System (INIS)

    Kim, Yong Hee; Cho, Nam Zin

    1994-01-01

    A parallel computing algorithm for the neutron transport problems has been implemented on a transputer network and two reactor benchmark problems (a fixed-source problem and an eigenvalue problem) are solved. We have shown that the parallel calculations provided significant reduction in execution time over the sequential calculations

  10. Optimisation of a parallel ocean general circulation model

    Science.gov (United States)

    Beare, M. I.; Stevens, D. P.

    1997-10-01

    This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.

  11. A biologically inspired controller to solve the coverage problem in robotics.

    Science.gov (United States)

    Rañó, Iñaki; Santos, José A

    2017-06-05

    The coverage problem consists of computing a path or trajectory for a robot to pass over all the points in some free area, with applications ranging from floor cleaning to demining. Coverage is solved either as a planning problem, providing theoretical validation of the solution, or through heuristic techniques that rely on experimental validation. Through a combination of theoretical results and simulations, this paper presents a novel solution to the coverage problem that exploits the chaotic behaviour of a simple biologically inspired motion controller, the Braitenberg vehicle 2b. Although chaos has been used for coverage, our approach makes much less restrictive assumptions about the environment and can be implemented using on-board sensors. First, we prove theoretically that this vehicle, a well-known model of animal tropotaxis, behaves as a charge in an electromagnetic field. The motion equations can be reduced to a Hamiltonian system, and therefore the vehicle follows quasi-periodic or chaotic trajectories, which pass arbitrarily close to any point in the workspace, i.e. it solves the coverage problem. Secondly, through a set of extensive simulations, we show that the trajectories cover regions of bounded workspaces, and full coverage is achieved when the perceptual range of the vehicle is short. We compare the performance of this new approach with different types of random-motion controllers in the same bounded environments.
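
    For intuition, a Braitenberg vehicle 2b is simulated below: two light sensors with crossed excitatory connections driving the wheels of a differential-drive robot. All parameters (sensor geometry, wheel base, source model) are arbitrary choices made for this illustration, not the paper's experimental setup:

```python
import math

def intensity(x, y):
    # One light source at the origin; inverse-square-like stimulus.
    return 1.0 / (0.1 + x * x + y * y)

x, y, heading = 2.0, 0.0, 1.0   # initial pose
base, dt = 0.2, 0.01            # wheel separation, integration step
for step in range(20001):
    # Sensor readings at the front-left / front-right of the body.
    s_left = intensity(x + 0.1 * math.cos(heading + 0.5), y + 0.1 * math.sin(heading + 0.5))
    s_right = intensity(x + 0.1 * math.cos(heading - 0.5), y + 0.1 * math.sin(heading - 0.5))
    # Vehicle 2b: crossed excitatory wiring, left sensor drives right wheel.
    v_right, v_left = s_left, s_right
    v, omega = (v_right + v_left) / 2.0, (v_right - v_left) / base
    x += v * math.cos(heading) * dt
    y += v * math.sin(heading) * dt
    heading += omega * dt
    if step % 5000 == 0:
        print(f"t={step * dt:6.1f}  x={x:+.2f}  y={y:+.2f}")
```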

  12. Parallel MR imaging.

    Science.gov (United States)

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A; Seiberlich, Nicole

    2012-07-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the undersampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. Copyright © 2012 Wiley Periodicals, Inc.
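
    A stripped-down, noise-free illustration of SENSE-type unfolding in one dimension: at acceleration R = 2, each aliased pixel is a sensitivity-weighted sum of two true pixels, so reconstruction solves a small least-squares system per pixel pair. The coil-sensitivity model below is invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)
N, coils = 128, 4
truth = np.zeros(N)
truth[40:88] = rng.random(48) + 0.5                         # 1-D "image"
sens = np.array([np.exp(-(((np.arange(N) - c * N / coils) / N) ** 2) * 8)
                 for c in range(coils)])                    # smooth coil maps

# R = 2 undersampling folds pixel i onto pixel i + N/2 in every coil image.
folded = sens[:, :N // 2] * truth[:N // 2] + sens[:, N // 2:] * truth[N // 2:]

recon = np.zeros(N)
for i in range(N // 2):
    # SENSE unfolding: solve S @ [p_i, p_{i+N/2}] = folded pixel (all coils).
    S = sens[:, [i, i + N // 2]]            # coils x 2 sensitivity matrix
    p, *_ = np.linalg.lstsq(S, folded[:, i], rcond=None)
    recon[[i, i + N // 2]] = p

print("max reconstruction error:", float(np.max(np.abs(recon - truth))))
```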

  13. 77 FR 70374 - Servicemembers' Group Life Insurance-Stillborn Child Coverage

    Science.gov (United States)

    2012-11-26

    ... is the biological mother of a stillborn and if both the surrogate and the stillborn's biological... the coverage of the child's SGLI-insured biological mother. This final rule will provide consistency... proceeds would be paid to the child's SGLI- insured mother. We provided a 60-day public-comment period...

  14. BCG coverage and barriers to BCG vaccination in Guinea-Bissau

    DEFF Research Database (Denmark)

    Thysen, Sanne Marie; Byberg, Stine; Pedersen, Marie

    2014-01-01

    , not disclosing the delay in vaccination. Several studies show that BCG at birth lowers neonatal mortality. We assessed BCG coverage at different ages and explored reasons for delay in BCG vaccination in rural Guinea-Bissau. METHODS: Bandim Health Project (BHP) runs a health and demographic surveillance system...... covering women and their children in 182 randomly selected village clusters in rural Guinea-Bissau. BCG coverage was assessed for children born in 2010, when the restricted vial-opening policy was universally implemented, and in 2012-2013, where BHP provided BCG to all children at monthly visits...

  15. DNA barcoding in the media: does coverage of cool science reflect its social context?

    Science.gov (United States)

    Geary, Janis; Camicioli, Emma; Bubela, Tania

    2016-09-01

    Paul Hebert and colleagues first described DNA barcoding in 2003, which led to international efforts to promote and coordinate its use. Since its inception, DNA barcoding has generated considerable media coverage. We analysed whether this coverage reflected both the scientific and social mandates of international barcoding organizations. We searched newspaper databases to identify 900 English-language articles from 2003 to 2013. Coverage of the science of DNA barcoding was highly positive but lacked context for key topics. Coverage omissions pose challenges for public understanding of the science and applications of DNA barcoding; these included coverage of governance structures and issues related to the sharing of genetic resources across national borders. Our analysis provided insight into how barcoding communication efforts have translated into media coverage; more targeted communication efforts may focus media attention on previously omitted, but important topics. Our analysis is timely as the DNA barcoding community works to establish the International Society for the Barcode of Life.

  16. High-Performance Psychometrics: The Parallel-E Parallel-M Algorithm for Generalized Latent Variable Models. Research Report. ETS RR-16-34

    Science.gov (United States)

    von Davier, Matthias

    2016-01-01

    This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…
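
    The parallel-E step rests on the fact that the E step decomposes over data shards whose sufficient statistics can be summed before a closed-form M step. A hedged Python sketch for a two-component one-dimensional Gaussian mixture, not the report's latent-variable models or code:

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def e_step(args):
    """E step on one data shard: return its sufficient statistics."""
    x, mu, var, pi = args
    log_p = -0.5 * ((x[:, None] - mu) ** 2 / var + np.log(2 * np.pi * var)) + np.log(pi)
    r = np.exp(log_p - log_p.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)            # responsibilities
    return r.sum(axis=0), r.T @ x, r.T @ x**2    # N_k, sum_k, sum-of-squares_k

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = np.concatenate([rng.normal(-2, 1, 5000), rng.normal(3, 0.5, 5000)])
    shards = np.array_split(data, 4)
    mu, var, pi = np.array([-1.0, 1.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])
    with ProcessPoolExecutor(max_workers=4) as pool:
        for _ in range(30):
            # Parallel E on shards, serial reduction, closed-form M step.
            stats = list(pool.map(e_step, [(s, mu, var, pi) for s in shards]))
            N, S, Q = (sum(t[i] for t in stats) for i in range(3))
            mu, var, pi = S / N, Q / N - (S / N) ** 2, N / N.sum()
    print("means:", mu.round(2), "vars:", var.round(2), "weights:", pi.round(2))
```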

  17. Print News Coverage of School-Based HPV Vaccine Mandate

    Science.gov (United States)

    Casciotti, Dana; Smith, Katherine C.; Andon, Lindsay; Vernick, Jon; Tsui, Amy; Klassen, Ann C.

    2015-01-01

    BACKGROUND In 2007, legislation was proposed in 24 states and the District of Columbia for school-based HPV vaccine mandates, and mandates were enacted in Texas, Virginia, and the District of Columbia. Media coverage of these events was extensive, and media messages both reflected and contributed to controversy surrounding these legislative activities. Messages communicated through the media are an important influence on adolescent and parent understanding of school-based vaccine mandates. METHODS We conducted structured text analysis of newspaper coverage, including quantitative analysis of 169 articles published in mandate jurisdictions from 2005-2009, and qualitative analysis of 63 articles from 2007. Our structured analysis identified topics, key stakeholders and sources, tone, and the presence of conflict. Qualitative thematic analysis identified key messages and issues. RESULTS Media coverage was often incomplete, providing little context about cervical cancer or screening. Skepticism and autonomy concerns were common. Messages reflected conflict and distrust of government activities, which could negatively impact this and other youth-focused public health initiatives. CONCLUSIONS If school health professionals are aware of the potential issues raised in media coverage of school-based health mandates, they will be more able to convey appropriate health education messages, and promote informed decision-making by parents and students. PMID:25099421

  18. Space Shuttle Communications Coverage Analysis for Thermal Tile Inspection

    Science.gov (United States)

    Kroll, Quin D.; Hwu, Shian U.; Upanavage, Matthew; Boster, John P.; Chavez, Mark A.

    2009-01-01

    The space shuttle ultra-high frequency Space-to-Space Communication System has to provide adequate communication coverage for astronauts who are performing thermal tile inspection and repair on the underside of the space shuttle orbiter (SSO). Careful planning and quantitative assessment are necessary to ensure successful system operations and mission safety in this work environment. This study assesses communication systems performance for astronauts who are working in the underside, non-line-of-sight shadow region on the space shuttle. All of the space shuttle and International Space Station (ISS) transmitting antennas are blocked by the SSO structure. To ensure communication coverage at planned inspection worksites, the signal strength and link margin between the SSO/ISS antennas and the extravehicular activity astronauts, whose line-of-sight is blocked by vehicle structure, was analyzed. Investigations were performed using rigorous computational electromagnetic modeling techniques. Signal strength was obtained by computing the reflected and diffracted fields along the signal propagation paths between transmitting and receiving antennas. Radio frequency (RF) coverage was determined for thermal tile inspection and repair missions using the results of this computation. Analysis results from this paper are important in formulating the limits on reliable communication range and RF coverage at planned underside inspection and repair worksites.
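
    The quantitative core of such an assessment is a link budget: received power compared against receiver sensitivity. The sketch below uses the free-space Friis path loss plus a lump-sum blockage/diffraction penalty; every number in it (powers, gains, frequency, losses) is a made-up placeholder, not a shuttle system parameter:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB (Friis): 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Hypothetical UHF space-to-space link numbers, for illustration only.
tx_power_dbm, tx_gain_dbi, rx_gain_dbi = 30.0, 3.0, 0.0
sensitivity_dbm = -95.0
diffraction_loss_db = 18.0  # assumed penalty for the blocked line-of-sight

rx_dbm = (tx_power_dbm + tx_gain_dbi + rx_gain_dbi
          - fspl_db(120.0, 414e6) - diffraction_loss_db)
print(f"received: {rx_dbm:.1f} dBm, margin: {rx_dbm - sensitivity_dbm:.1f} dB")
```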

  19. Parallel fuzzy connected image segmentation on GPU

    OpenAIRE

    Zhuge, Ying; Cao, Yong; Udupa, Jayaram K.; Miller, Robert W.

    2011-01-01

    Purpose: Image segmentation techniques using fuzzy connectedness (FC) principles have shown their effectiveness in segmenting a variety of objects in several large applications. However, one challenge in these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides a highly parallel computing environment. In this paper, the authors present a parallel fuzzy connected image segmentation algorithm impleme...

  20. What hysteria? A systematic study of newspaper coverage of accused child molesters.

    Science.gov (United States)

    Cheit, Ross E

    2003-06-01

    There were three aims: First, to determine the extent to which those charged with child molestation receive newspaper coverage; second, to analyze the nature of that coverage; and third, to compare the universe of coverage to the nature of child molestation charges in the criminal justice system as a whole. Two databases were created. The first one identified all defendants charged with child molestation in Rhode Island in 1993. The database was updated after 5 years to include relevant information about case disposition. The second database was created by electronically searching the Providence Journal for every story that mentioned each defendant. Most defendants (56.1%) were not mentioned in the newspaper. Factors associated with a greater chance of coverage include: cases involving first-degree charges, cases with multiple counts, cases involving additional violence or multiple victims, and cases resulting in long prison sentences. The data indicate that the press exaggerates "stranger danger," while intra-familial cases are underreported. Newspaper accounts also minimize the extent to which guilty defendants avoid prison. Generalizing about the nature of child molestation cases in criminal court on the basis of newspaper coverage is inappropriate. The coverage is less extensive than often claimed, and it is skewed in ways that are typical of the mass media.

  1. Percent Coverage

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Percent Coverage is a spreadsheet that keeps track of and compares the number of vessels that have departed with and without observers to the numbers of vessels...

  2. Optimisation of a parallel ocean general circulation model

    Directory of Open Access Journals (Sweden)

    M. I. Beare

    1997-10-01

    This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.
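
    A minimal sketch of this modular, high-level message-passing idea, assuming an MPI binding such as mpi4py (routine names are illustrative, not from the paper): the ocean model calls only high-level routines, and a serial build simply omits the message-passing backend.

        try:
            from mpi4py import MPI
            _comm = MPI.COMM_WORLD
        except ImportError:
            _comm = None  # serial build: the same code runs on a single workstation

        def global_sum(local_value):
            # Sum a scalar across all processes (identity in the serial build).
            if _comm is None:
                return local_value
            return _comm.allreduce(local_value, op=MPI.SUM)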

  3. Optimisation of a parallel ocean general circulation model

    Directory of Open Access Journals (Sweden)

    M. I. Beare

    This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.

  4. Cost-effectiveness of increasing cervical cancer screening coverage in the Middle East: An example from Lebanon.

    Science.gov (United States)

    Sharma, Monisha; Seoud, Muhieddine; Kim, Jane J

    2017-01-23

    Most cervical cancer (CC) cases in Lebanon are detected at later stages and associated with high mortality. There is no national organized CC screening program so screening is opportunistic and limited to women who can pay out-of-pocket. Therefore, a small percentage of women receive repeated screenings while most are under- or never screened. We evaluated the cost-effectiveness of increasing screening coverage and extending intervals. We used an individual-based Monte Carlo model simulating HPV and CC natural history and screening. We calibrated the model to epidemiological data from Lebanon, including CC incidence and HPV type distribution. We evaluated cytology and HPV DNA screening for women aged 25-65 years, varying coverage from 20 to 70% and frequency from 1 to 5 years. At 20% coverage, annual cytologic screening reduced lifetime CC risk by 14% and had an incremental cost-effectiveness ratio of I$80,670/year of life saved (YLS), far exceeding Lebanon's gross domestic product (GDP) per capita (I$17,460), a commonly cited cost-effectiveness threshold. By comparison, increasing cytologic screening coverage to 50% and extending screening intervals to 3 and 5 years provided greater CC reduction (26.1% and 21.4%, respectively) at lower costs compared to 20% coverage with annual screening. Screening every 5 years with HPV DNA testing at 50% coverage provided greater CC reductions than cytology at the same frequency (23.4%) and was cost-effective assuming a cost of I$18 per HPV test administered (I$12,210/YLS); HPV DNA testing every 4 years at 50% coverage was also cost-effective at the same cost per test (I$16,340). Increasing coverage of annual cytology was not found to be cost-effective. Current practice of repeated cytology in a small percentage of women is inefficient. Increasing coverage to 50% with extended screening intervals provides greater health benefits at a reasonable cost and can more equitably distribute health gains. Novel HPV DNA strategies offer greater
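
    The comparisons above all rest on the same incremental ratio. A hypothetical helper (not the study's microsimulation code) makes the arithmetic explicit, using the I$17,460 GDP-per-capita benchmark cited:

        GDP_PER_CAPITA = 17460  # I$, the cost-effectiveness benchmark cited above

        def icer(delta_cost, delta_yls):
            # Incremental cost-effectiveness ratio: extra cost per extra
            # year of life saved, versus the next-best strategy.
            return delta_cost / delta_yls

        def is_cost_effective(delta_cost, delta_yls, threshold=GDP_PER_CAPITA):
            return icer(delta_cost, delta_yls) < threshold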

  5. Optimization approaches to mpi and area merging-based parallel buffer algorithm

    Directory of Open Access Journals (Sweden)

    Junfu Fan

    On buffer zone construction, the rasterization-based dilation method inevitably introduces errors, and the double-sided parallel line method involves a series of complex operations. In this paper, we proposed a parallel buffer algorithm based on area merging and MPI (Message Passing Interface) to improve the performance of buffer analyses on processing large datasets. Experimental results reveal that there are three major performance bottlenecks which significantly impact the serial and parallel buffer construction efficiencies, including the area merging strategy, the task load balance method and the MPI inter-process results merging strategy. Corresponding optimization approaches involving a tree-like area merging strategy, the vertex number oriented parallel task partition method and the inter-process results merging strategy were suggested to overcome these bottlenecks. Experiments were carried out to examine the performance efficiency of the optimized parallel algorithm. The estimation results suggested that the optimization approaches could provide high performance and processing ability for buffer construction in a cluster parallel environment. Our method could provide insights into the parallelization of spatial analysis algorithms.
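
    The tree-like inter-process merging strategy can be sketched as a binomial reduction, assuming mpi4py; here union stands in for whatever associative merge the application needs (polygon union for buffers). This is our illustration of the pattern, not the authors' code.

        from mpi4py import MPI

        def tree_merge(comm, local_result, union):
            # Merge per-process results up a binary tree: the number of active
            # processes halves each round, so the merge takes log2(P) steps
            # instead of P-1 sequential merges on one process.
            rank, size = comm.Get_rank(), comm.Get_size()
            result, step = local_result, 1
            while step < size:
                if rank % (2 * step) == step:
                    comm.send(result, dest=rank - step)
                    return None                      # this process is done
                if rank % (2 * step) == 0 and rank + step < size:
                    result = union(result, comm.recv(source=rank + step))
                step *= 2
            return result                            # rank 0 holds the final merge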

  6. Applying FDTD to the Coverage Prediction of WiMAX Femtocells

    Directory of Open Access Journals (Sweden)

    Valcarce Alvaro

    2009-01-01

    Femtocells, or home base stations, are a potential future solution for operators to increase indoor coverage and reduce network cost. In a real WiMAX femtocell deployment in residential areas covered by WiMAX macrocells, interference is very likely to occur both in the streets and certain indoor regions. Propagation models that take into account both the outdoor and indoor channel characteristics are thus necessary for the purpose of WiMAX network planning in the presence of femtocells. In this paper, the finite-difference time-domain (FDTD) method is adapted for the computation of radiowave propagation predictions at WiMAX frequencies. This model is particularly suitable for the study of hybrid indoor/outdoor scenarios and thus well adapted for the case of WiMAX femtocells in residential environments. Two optimization methods are proposed for the reduction of the FDTD simulation time: the reduction of the simulation frequency for problem simplification and a parallel implementation on graphics processing units (GPUs). The calibration of the model is then thoroughly described. First, the calibration of the absorbing boundary condition, necessary for proper coverage predictions, is presented. Then a calibration of the material parameters that minimizes the error function between simulation and real measurements is proposed. Finally, some mobile WiMAX system-level simulations that make use of the presented propagation model are presented to illustrate the applicability of the model for the study of femto-to-macro interference.
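
    For reference, the kernel such GPU implementations accelerate is the FDTD update loop itself. A minimal 2-D TMz sketch in normalized (c = 1) units follows; grid size, step count, and source are illustrative, and the paper's model adds calibrated materials and absorbing boundaries.

        import numpy as np

        nx, ny, steps = 200, 200, 100
        dx = 1.0
        dt = 0.5 * dx  # safely below the 2-D Courant limit dx/sqrt(2)
        ez = np.zeros((nx, ny))
        hx = np.zeros((nx, ny - 1))
        hy = np.zeros((nx - 1, ny))

        for n in range(steps):
            hx -= (dt / dx) * np.diff(ez, axis=1)
            hy += (dt / dx) * np.diff(ez, axis=0)
            ez[1:-1, 1:-1] += (dt / dx) * (np.diff(hy, axis=0)[:, 1:-1]
                                           - np.diff(hx, axis=1)[1:-1, :])
            ez[nx // 2, ny // 2] += np.sin(2 * np.pi * 0.05 * n)  # soft point source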

  7. A SPECT reconstruction method for extending parallel to non-parallel geometries

    International Nuclear Information System (INIS)

    Wen Junhai; Liang Zhengrong

    2010-01-01

    Due to its simplicity, parallel-beam geometry is usually assumed for the development of image reconstruction algorithms. The established reconstruction methodologies are then extended to fan-beam, cone-beam and other non-parallel geometries for practical application. This situation occurs for quantitative SPECT (single photon emission computed tomography) imaging in inverting the attenuated Radon transform. Novikov reported an explicit parallel-beam formula for the inversion of the attenuated Radon transform in 2000. Thereafter, a formula for fan-beam geometry was reported by Bukhgeim and Kazantsev (2002 Preprint N. 99 Sobolev Institute of Mathematics). At the same time, we presented a formula for varying focal-length fan-beam geometry. In some non-parallel geometries, however, the formula is so implicit that no explicit reconstruction formula can be obtained. In this work, we propose a unified reconstruction framework for extending parallel-beam geometry to any non-parallel geometry using ray-driven techniques. Studies by computer simulations demonstrated the accuracy of the presented unified reconstruction framework for extending parallel-beam to non-parallel geometries in inverting the attenuated Radon transform.
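
    The ray-driven idea is that geometry enters only through the ray endpoints, so one sampling driver serves parallel-, fan-, or cone-beam layouts alike. A simplified 2-D line-integral sketch (our illustration; the attenuation weighting of the actual transform is omitted):

        import numpy as np

        def ray_integral(image, p0, p1, n_samples=256):
            # Approximate the integral of 'image' along the ray from p0 to p1
            # by nearest-neighbor sampling at evenly spaced points.
            t = np.linspace(0.0, 1.0, n_samples)
            xs = p0[0] + t * (p1[0] - p0[0])
            ys = p0[1] + t * (p1[1] - p0[1])
            ix = np.clip(xs.astype(int), 0, image.shape[0] - 1)
            iy = np.clip(ys.astype(int), 0, image.shape[1] - 1)
            step = np.hypot(p1[0] - p0[0], p1[1] - p0[1]) / (n_samples - 1)
            return image[ix, iy].sum() * step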

  8. The language parallel Pascal and other aspects of the massively parallel processor

    Science.gov (United States)

    Reeves, A. P.; Bruner, J. D.

    1982-01-01

    A high level language for the Massively Parallel Processor (MPP) was designed. This language, called Parallel Pascal, is described in detail. A description of the language design, a description of the intermediate language, Parallel P-Code, and details for the MPP implementation are included. Formal descriptions of Parallel Pascal and Parallel P-Code are given. A compiler was developed which converts programs in Parallel Pascal into the intermediate Parallel P-Code language. The code generator to complete the compiler for the MPP is being developed independently. A Parallel Pascal to Pascal translator was also developed. The architecture design for a VLSI version of the MPP was completed with a description of fault tolerant interconnection networks. The memory arrangement aspects of the MPP are discussed and a survey of other high level languages is given.

  9. Parallel Atomistic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories, those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.
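
    As a small illustration of the spatial decomposition mentioned above, each processor can own a fixed cell of the simulation box together with the atoms currently inside it, so that communication is limited to neighboring cells. The sketch below (made-up parameters, not code from the review) shows the owner assignment:

        import numpy as np

        def spatial_decomposition(positions, box_length, n_per_side):
            # Map each atom to the rank owning its spatial cell; the 3-D cell
            # index is flattened into a single processor rank.
            cell = box_length / n_per_side
            idx = np.floor(positions / cell).astype(int) % n_per_side
            return idx[:, 0] * n_per_side**2 + idx[:, 1] * n_per_side + idx[:, 2]

        pos = np.random.rand(1000, 3) * 10.0
        owner = spatial_decomposition(pos, box_length=10.0, n_per_side=2)  # 8 ranks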

  10. Universal Health Coverage - The Critical Importance of Global Solidarity and Good Governance Comment on "Ethical Perspective: Five Unacceptable Trade-offs on the Path to Universal Health Coverage".

    Science.gov (United States)

    Reis, Andreas A

    2016-06-07

    This article provides a commentary to Ole Norheim's editorial entitled "Ethical perspective: Five unacceptable trade-offs on the path to universal health coverage." It reinforces its message that an inclusive, participatory process is essential for ethical decision-making and underlines the crucial importance of good governance in setting fair priorities in healthcare. Solidarity on both national and international levels is needed to make progress towards the goal of universal health coverage (UHC). © 2016 by Kerman University of Medical Sciences.

  11. Survey on present status and trend of parallel programming environments

    International Nuclear Information System (INIS)

    Takemiya, Hiroshi; Higuchi, Kenji; Honma, Ichiro; Ohta, Hirofumi; Kawasaki, Takuji; Imamura, Toshiyuki; Koide, Hiroshi; Akimoto, Masayuki.

    1997-03-01

    This report intends to provide useful information on software tools for parallel programming through a survey of the parallel programming environments of six parallel computers installed at the Japan Atomic Energy Research Institute (JAERI): Fujitsu VPP300/500, NEC SX-4, Hitachi SR2201, Cray T94, IBM SP, and Intel Paragon. In addition, the present status of R&D on parallel software, including parallel languages, compilers, debuggers, performance evaluation tools, and integrated tools, is reported. This survey has been made as part of our project to develop basic software for a parallel programming environment, which is designed on the concept of STA (Seamless Thinking Aid to programmers). (author)

  12. CALTRANS: A parallel, deterministic, 3D neutronics code

    Energy Technology Data Exchange (ETDEWEB)

    Carson, L.; Ferguson, J.; Rogers, J.

    1994-04-01

    Our efforts to parallelize the deterministic solution of the neutron transport equation have culminated in a new neutronics code CALTRANS, which has full 3D capability. In this article, we describe the layout and algorithms of CALTRANS and present performance measurements of the code on a variety of platforms. Explicit implementation of the parallel algorithms of CALTRANS using both the function calls of the Parallel Virtual Machine software package (PVM 3.2) and the Meiko CS-2 tagged message passing library (based on the Intel NX/2 interface) are provided in appendices.

  13. Network television news coverage of environmental risks

    International Nuclear Information System (INIS)

    Greenberg, M.R.; Sandman, P.M.; Sachsman, D.V.; Salomone, K.L.

    1989-01-01

    Despite the criticisms that surround television coverage of environmental risk, there have been relatively few attempts to measure what and whom television shows. Most research has focused analysis on a few weeks of coverage of major stories like the gas leak at Bhopal, the Three Mile Island nuclear accident, or the Mount St. Helens eruption. To advance the research into television coverage of environmental risk, an analysis has been made of all environmental risk coverage by the network nightly news broadcasts for a period of more than two years. Researchers have analyzed all environmental risk coverage (564 stories in 26 months) presented on ABC, CBS, and NBC's evening news broadcasts from January 1984 through February 1986. The quantitative information from the 564 stories was balanced by a more qualitative analysis of the television coverage of two case studies: the dioxin contamination in Times Beach, Missouri, and the suspected methyl isocyanate emissions from the Union Carbide plant in Institute, West Virginia. Both qualitative and quantitative data contributed to the analysis of the role played by experts and environmental advocacy sources in coverage of environmental risk and to the suggestions for increasing that role

  14. The BLAZE language - A parallel language for scientific programming

    Science.gov (United States)

    Mehrotra, Piyush; Van Rosendale, John

    1987-01-01

    A Pascal-like scientific programming language, BLAZE, is described. BLAZE contains array arithmetic, forall loops, and APL-style accumulation operators, which allow natural expression of fine grained parallelism. It also employs an applicative or functional procedure invocation mechanism, which makes it easy for compilers to extract coarse grained parallelism using machine specific program restructuring. Thus BLAZE should allow one to achieve highly parallel execution on multiprocessor architectures, while still providing the user with conceptually sequential control flow. A central goal in the design of BLAZE is portability across a broad range of parallel architectures. The multiple levels of parallelism present in BLAZE code, in principle, allow a compiler to extract the types of parallelism appropriate for the given architecture while neglecting the remainder. The features of BLAZE are described and it is shown how this language would be used in typical scientific programming.
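
    A rough NumPy analogue conveys the flavor of this fine-grained parallelism (whole-array arithmetic standing in for forall loops, a reduction standing in for the APL-style accumulation; this is an analogy, not BLAZE syntax):

        import numpy as np

        a = np.arange(1_000_000, dtype=np.float64)
        b = np.sqrt(a) + 2.0 * a   # array arithmetic: an implicit forall over elements
        total = np.add.reduce(b)   # accumulation operator, +/ in APL terms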

  15. The BLAZE language: A parallel language for scientific programming

    Science.gov (United States)

    Mehrotra, P.; Vanrosendale, J.

    1985-01-01

    A Pascal-like scientific programming language, Blaze, is described. Blaze contains array arithmetic, forall loops, and APL-style accumulation operators, which allow natural expression of fine grained parallelism. It also employs an applicative or functional procedure invocation mechanism, which makes it easy for compilers to extract coarse grained parallelism using machine specific program restructuring. Thus Blaze should allow one to achieve highly parallel execution on multiprocessor architectures, while still providing the user with conceptually sequential control flow. A central goal in the design of Blaze is portability across a broad range of parallel architectures. The multiple levels of parallelism present in Blaze code, in principle, allow a compiler to extract the types of parallelism appropriate for the given architecture while neglecting the remainder. The features of Blaze are described, and it is shown how this language would be used in typical scientific programming.

  16. 20 CFR 404.1065 - Self-employment coverage.

    Science.gov (United States)

    2010-04-01

    20 CFR 404.1065 (Employment, Wages, Self-Employment, and Self-Employment Income): Self-employment coverage. For an individual to have self-employment coverage under social security, the...

  17. Variation in Private Payer Coverage of Rheumatoid Arthritis Drugs.

    Science.gov (United States)

    Chambers, James D; Wilkinson, Colby L; Anderson, Jordan E; Chenoweth, Matthew D

    2016-10-01

    1.4 clinical guidelines, 1.1 clinical reviews, 0.8 other clinical studies, and 0.5 technology assessments per policy. Only 1 payer reported reviewing cost-effectiveness analyses. The evidence base that the payers reported reviewing varied in terms of volume and composition. Payers most often covered rheumatoid arthritis drugs more restrictively than the corresponding FDA label indication and the ACR treatment recommendations. Payers reported reviewing a varied evidence base in their coverage policies. Funding for this study was provided by Genentech. Chambers has participated in a Sanofi advisory board, unrelated to this study. The authors report no other potential conflicts of interest. Study concept and design were contributed by Chambers. Anderson, Wilkinson, and Chenoweth collected the data, assisted by Chambers, and data interpretation was primarily performed by Chambers, along with Anderson and with assistance from Wilkinson and Chenoweth. The manuscript was written primarily by Chambers, along with Wilkinson and with assistance from Anderson and Chenoweth. Chambers, Chenoweth, Wilkinson, and Anderson revised the manuscript.

  18. Portable programming on parallel/networked computers using the Application Portable Parallel Library (APPL)

    Science.gov (United States)

    Quealy, Angela; Cole, Gary L.; Blech, Richard A.

    1993-01-01

    The Application Portable Parallel Library (APPL) is a subroutine-based library of communication primitives that is callable from applications written in FORTRAN or C. APPL provides a consistent programmer interface to a variety of distributed and shared-memory multiprocessor MIMD machines. The objective of APPL is to minimize the effort required to move parallel applications from one machine to another, or to a network of homogeneous machines. APPL encompasses many of the message-passing primitives that are currently available on commercial multiprocessor systems. This paper describes APPL (version 2.3.1) and its usage, reports the status of the APPL project, and indicates possible directions for the future. Several applications using APPL are discussed, as well as performance and overhead results.

  19. Assessing Measurement Error in Medicare Coverage

    Data.gov (United States)

    U.S. Department of Health & Human Services — Assessing Measurement Error in Medicare Coverage From the National Health Interview Survey Using linked administrative data, to validate Medicare coverage estimates...

  20. Mapping robust parallel multigrid algorithms to scalable memory architectures

    Science.gov (United States)

    Overman, Andrea; Vanrosendale, John

    1993-01-01

    The convergence rate of standard multigrid algorithms degenerates on problems with stretched grids or anisotropic operators. The usual cure for this is the use of line or plane relaxation. However, multigrid algorithms based on line and plane relaxation have limited and awkward parallelism and are quite difficult to map effectively to highly parallel architectures. Newer multigrid algorithms that overcome anisotropy through the use of multiple coarse grids rather than relaxation are better suited to massively parallel architectures because they require only simple point-relaxation smoothers. In this paper, we look at the parallel implementation of a V-cycle multiple semicoarsened grid (MSG) algorithm on distributed-memory architectures such as the Intel iPSC/860 and Paragon computers. The MSG algorithms provide two levels of parallelism: parallelism within the relaxation or interpolation on each grid and across the grids on each multigrid level. Both levels of parallelism must be exploited to map these algorithms effectively to parallel architectures. This paper describes a mapping of an MSG algorithm to distributed-memory architectures that demonstrates how both levels of parallelism can be exploited. The result is a robust and effective multigrid algorithm for distributed-memory machines.

  1. Synthesis of Volumetric Ring Antenna Array for Terrestrial Coverage Pattern

    Directory of Open Access Journals (Sweden)

    Alberto Reyna

    2014-01-01

    This paper presents a synthesis of a volumetric ring antenna array for a terrestrial coverage pattern. This synthesis regards the spacing among the rings on the planes X-Y, the positions of the rings on the plane X-Z, and uniform and concentric excitations. The optimization is carried out by implementing the particle swarm optimization. The synthesis is compared with previous designs; the results show that this geometry provides accurate terrestrial coverage for satellite applications while maximizing the reduction of the antenna hardware and the side lobe level.

  2. Synthesis of Volumetric Ring Antenna Array for Terrestrial Coverage Pattern

    Science.gov (United States)

    Reyna, Alberto; Panduro, Marco A.; Del Rio Bocio, Carlos

    2014-01-01

    This paper presents a synthesis of a volumetric ring antenna array for a terrestrial coverage pattern. This synthesis regards the spacing among the rings on the planes X-Y, the positions of the rings on the plane X-Z, and uniform and concentric excitations. The optimization is carried out by implementing the particle swarm optimization. The synthesis is compared with previous designs; the results show that this geometry provides accurate terrestrial coverage for satellite applications while maximizing the reduction of the antenna hardware and the side lobe level. PMID:24701150
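
    A minimal particle swarm optimization loop of the kind used for such a synthesis is sketched below; the toy objective is a stand-in, whereas the paper's objective couples the array factor, the terrestrial coverage requirement, and the side lobe level.

        import numpy as np

        def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
            # Standard global-best PSO: each particle tracks its personal best,
            # and the swarm tracks the best position found so far.
            x = np.random.uniform(-1.0, 1.0, (n_particles, dim))
            v = np.zeros_like(x)
            pbest = x.copy()
            pbest_f = np.apply_along_axis(objective, 1, x)
            gbest = pbest[pbest_f.argmin()].copy()
            for _ in range(iters):
                r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x += v
                f = np.apply_along_axis(objective, 1, x)
                improved = f < pbest_f
                pbest[improved], pbest_f[improved] = x[improved], f[improved]
                gbest = pbest[pbest_f.argmin()].copy()
            return gbest

        best = pso(lambda z: np.sum(z ** 2), dim=8)  # toy objective for illustration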

  3. Change of mobile network coverage in France from 29 August

    CERN Multimedia

    IT Department

    2016-01-01

    The change of mobile network coverage on the French part of the CERN site will take effect on 29 August and not on 11 July as previously announced.    From 29 August, the Swisscom transmitters in France will be deactivated and Orange France will thenceforth provide coverage on the French part of the CERN site.  This switch will result in changes to billing. You should also ensure that you can still be contacted by your colleagues when you are on the French part of the CERN site. Please consult the information and instructions in this official communication.

  4. Device evaluation and coverage policy in workers' compensation: examples from Washington State.

    Science.gov (United States)

    Franklin, G M; Lifka, J; Milstein, J

    1998-09-25

    Workers' compensation health benefits are broader than general health benefits and include payment for medical and rehabilitation costs, associated indemnity (lost time) costs, and vocational rehabilitation (return-to-work) costs. In addition, cost liability is for the life of the claim (injury), rather than for each plan year. We examined device evaluation and coverage policy in workers' compensation over a 10-year period in Washington State. Most requests for device coverage in workers' compensation relate to the diagnosis, prognosis, or treatment of chronic musculoskeletal conditions. A number of specific problems have been recognized in making device coverage decisions within workers' compensation: (1) invasive devices with a high adverse event profile and history of poor outcomes could significantly increase both indemnity and medical costs; (2) many noninvasive devices, while having a low adverse event profile, have not proved effective for managing chronic musculoskeletal conditions relevant to injured workers; (3) some devices are marketed and billed as surrogate diagnostic tests for generally accepted, and more clearly proven, standard tests; (4) quality oversight of technology use among physicians may be inadequate; and (5) insurers' access to efficacy data adequate to make timely and appropriate coverage decisions in workers' compensation is often lacking. Emerging technology may substantially increase the costs of workers' compensation without significant evidence of health benefit for injured workers. To prevent ever-rising costs, we need to increase provider education and patient education and consent, involve the state medical society in coverage policy, and collect relevant outcomes data from healthcare providers.

  5. Parallel Computing in SCALE

    International Nuclear Information System (INIS)

    DeHart, Mark D.; Williams, Mark L.; Bowman, Stephen M.

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  6. Parallel integer sorting with medium and fine-scale parallelism

    Science.gov (United States)

    Dagum, Leonardo

    1993-01-01

    Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128 processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
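
    The key idea of barrel-sort can be illustrated serially: keys are binned by range so that each (simulated) processor ends up owning a contiguous key interval; on a real machine the binning step becomes an all-to-all message exchange. A sketch, not the authors' implementation:

        import numpy as np

        def barrel_style_sort(keys, n_procs=4):
            lo, hi = int(keys.min()), int(keys.max()) + 1
            width = (hi - lo + n_procs - 1) // n_procs    # key range per "processor"
            bins = [keys[(keys >= lo + p * width) & (keys < lo + (p + 1) * width)]
                    for p in range(n_procs)]
            # Each bin can now be sorted independently (in parallel on a real machine).
            return np.concatenate([np.sort(b) for b in bins])

        out = barrel_style_sort(np.random.randint(0, 10_000, size=100_000))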

  7. 7 CFR 457.172 - Coverage Enhancement Option.

    Science.gov (United States)

    2010-01-01

    7 CFR 457.172 (Department of Agriculture, Common Crop Insurance Regulations): Coverage Enhancement Option. The Coverage Enhancement Option for the 2009 and succeeding crop years are as follows: FCIC policies: United...

  8. 29 CFR 2.12 - Audiovisual coverage permitted.

    Science.gov (United States)

    2010-07-01

    29 CFR 2.12 (Office of the Secretary of Labor, General Regulations, Audiovisual Coverage of Administrative Hearings): Audiovisual coverage permitted. The following are the types of hearings where the Department...

  9. Vaccination coverage among children in kindergarten - United States, 2013-14 school year.

    Science.gov (United States)

    Seither, Ranee; Masalovich, Svetlana; Knighton, Cynthia L; Mellerson, Jenelle; Singleton, James A; Greby, Stacie M

    2014-10-17

    State and local vaccination requirements for school entry are implemented to maintain high vaccination coverage and protect schoolchildren from vaccine-preventable diseases. Each year, to assess state and national vaccination coverage and exemption levels among kindergartners, CDC analyzes school vaccination data collected by federally funded state, local, and territorial immunization programs. This report describes vaccination coverage in 49 states and the District of Columbia (DC) and vaccination exemption rates in 46 states and DC for children enrolled in kindergarten during the 2013-14 school year. Median vaccination coverage was 94.7% for 2 doses of measles, mumps, and rubella (MMR) vaccine; 95.0% for varying local requirements for diphtheria, tetanus toxoid, and acellular pertussis (DTaP) vaccine; and 93.3% for 2 doses of varicella vaccine among those states with a 2-dose requirement. The median total exemption rate was 1.8%. High exemption levels and suboptimal vaccination coverage leave children vulnerable to vaccine-preventable diseases. Although vaccination coverage among kindergartners for the majority of reporting states was at or near the 95% national Healthy People 2020 targets for 4 doses of DTaP, 2 doses of MMR, and 2 doses of varicella vaccine, low vaccination coverage and high exemption levels can cluster within communities. Immunization programs might have access to school vaccination coverage and exemption rates at a local level for counties, school districts, or schools that can identify areas where children are more vulnerable to vaccine-preventable diseases. Health promotion efforts in these local areas can be used to help parents understand the risks for vaccine-preventable diseases and the protection that vaccinations provide to their children.

  10. State Medicaid Expansion Tobacco Cessation Coverage and Number of Adult Smokers Enrolled in Expansion Coverage - United States, 2016.

    Science.gov (United States)

    DiGiulio, Anne; Haddix, Meredith; Jump, Zach; Babb, Stephen; Schecter, Anna; Williams, Kisha-Ann S; Asman, Kat; Armour, Brian S

    2016-12-09

    In 2015, 27.8% of adult Medicaid enrollees were current cigarette smokers, compared with 11.1% of adults with private health insurance, placing Medicaid enrollees at increased risk for smoking-related disease and death (1). In addition, smoking-related diseases are a major contributor to Medicaid costs, accounting for about 15% (>$39 billion) of annual Medicaid spending during 2006-2010 (2). Individual, group, and telephone counseling and seven Food and Drug Administration (FDA)-approved medications are effective treatments for helping tobacco users quit (3). Insurance coverage for tobacco cessation treatments is associated with increased quit attempts, use of cessation treatments, and successful smoking cessation (3); this coverage has the potential to reduce Medicaid costs (4). However, barriers such as requiring copayments and prior authorization for treatment can impede access to cessation treatments (3,5). As of July 1, 2016, 32 states (including the District of Columbia) have expanded Medicaid eligibility through the Patient Protection and Affordable Care Act (ACA),* ,† which has increased access to health care services, including cessation treatments (5). CDC used data from the Centers for Medicare and Medicaid Services (CMS) Medicaid Budget and Expenditure System (MBES) and the Behavioral Risk Factor Surveillance System (BRFSS) to estimate the number of adult smokers enrolled in Medicaid expansion coverage. To assess cessation coverage among Medicaid expansion enrollees, the American Lung Association collected data on coverage of, and barriers to accessing, evidence-based cessation treatments. As of December 2015, approximately 2.3 million adult smokers were newly enrolled in Medicaid because of Medicaid expansion. As of July 1, 2016, all 32 states that have expanded Medicaid eligibility under ACA covered some cessation treatments for all Medicaid expansion enrollees, with nine states covering all nine cessation treatments for all Medicaid expansion

  11. Functional coverages

    NARCIS (Netherlands)

    Donchyts, G.; Baart, F.; Jagers, H.R.A.; Van Dam, A.

    2011-01-01

    A new Application Programming Interface (API) is presented which simplifies working with geospatial coverages as well as many other data structures of a multi-dimensional nature. The main idea extends the Common Data Model (CDM) developed at the University Corporation for Atmospheric Research

  12. Parallel, Asynchronous Executive (PAX): System concepts, facilities, and architecture

    Science.gov (United States)

    Jones, W. H.

    1983-01-01

    The Parallel, Asynchronous Executive (PAX) is a software operating system simulation that allows many computers to work on a single problem at the same time. PAX is currently implemented on a UNIVAC 1100/42 computer system. Independent UNIVAC runstreams are used to simulate independent computers. Data are shared among independent UNIVAC runstreams through shared mass-storage files. PAX has achieved the following: (1) applied several computing processes simultaneously to a single, logically unified problem; (2) resolved most parallel processor conflicts by careful work assignment; (3) resolved by means of worker requests to PAX all conflicts not resolved by work assignment; (4) provided fault isolation and recovery mechanisms to meet the problems of an actual parallel, asynchronous processing machine. Additionally, one real-life problem has been constructed for the PAX environment. This is CASPER, a collection of aerodynamic and structural dynamic problem simulation routines. CASPER is not discussed in this report except to provide examples of parallel-processing techniques.

  13. About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN

    2009-01-01

    In recent years, efforts have been made to delineate a stable and unified framework in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far do not match the effort invested. This paper aims to make a small contribution to these efforts. We propose an overview of parallel programming, parallel execution, and collaborative systems.

  14. Parallel Careers and their Consequences for Companies in Brazil

    Directory of Open Access Journals (Sweden)

    Maria Candida Baumer Azevedo

    2014-04-01

    Given the relevance of the need to manage parallel careers to attract and retain people in organizations, this paper provides insight into this phenomenon from an organizational perspective. The parallel career concept, introduced by Alboher (2007) and recently addressed by Schuiling (2012), has previously been examined only from the perspective of the parallel career holder (PC holder). The paper provides insight from both individual and organizational perspectives on the phenomenon of parallel careers and considers how it can function as an important tool for attracting and retaining people by contributing to human development. This paper employs a qualitative approach that includes 30 semi-structured one-on-one interviews. The organizational perspective arises from the 15 interviews with human resources (HR) executives from different companies. The individual viewpoint originates from the interviews with 15 executives who are also PC holders. An inductive content analysis approach was used to examine Brazilian companies and the Brazilian offices of multinationals. Companies that are concerned about having the best talent on their teams can benefit from a deeper understanding of parallel careers, which can be used to attract, develop, and retain talent. Limitations and directions for future research are discussed.

  15. A novel six-degrees-of-freedom series-parallel manipulator

    Energy Technology Data Exchange (ETDEWEB)

    Gallardo-Alvarado, J.; Rodriguez-Castro, R.; Aguilar-Najera, C. R.; Perez-Gonzalez, L. [Instituto Tecnologico de Celaya, Celaya (Mexico)

    2012-06-15

    This paper addresses the description and kinematic analyses of a new non-redundant series-parallel manipulator. The primary feature of the robot is its decoupled topology, consisting of a lower parallel manipulator, for controlling the orientation of the coupler platform, assembled in series connection with an upper parallel manipulator, for controlling the position of the output platform, capable of providing arbitrary poses of the output platform with respect to the fixed platform. The forward displacement analysis is carried out in semi-closed form solutions by resorting to simple closure equations. On the other hand, the velocity, acceleration and singularity analyses of the manipulator are approached by means of the theory of screws. Simple and compact expressions are derived here for solving the infinitesimal kinematics by taking advantage of the concept of reciprocal screws. Furthermore, the analysis of the Jacobians of the robot shows that the lower parallel manipulator is practically free of singularities. In order to illustrate the performance of the manipulator, a numerical example which consists of solving the inverse/forward kinematics of the series-parallel manipulator as well as its singular configurations is provided.

  16. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  17. The WET Coverage - How Well Do We Do?

    Directory of Open Access Journals (Sweden)

    Solheim Jan-Erik

    2003-06-01

    The Whole Earth Telescope collaboration is built solidly on the interest of the participants. One of the goals of the collaboration is to produce a high signal-to-noise, as continuous as possible, light curve for a selected target. During its nearly 15 years of existence, the operation of the network has been based on the local funds that members have been able to provide for their own participation, in addition to NSF grants to run the headquarters activities. This has led to a very uneven geographical distribution of participating groups and observatories. An analysis of the coverage of some of the last WET runs shows that we still have large holes in the coverage, and this leads to aliasing and loss of precision in our final products.

  18. Parallel processor programs in the Federal Government

    Science.gov (United States)

    Schneck, P. B.; Austin, D.; Squires, S. L.; Lehmann, J.; Mizell, D.; Wallgren, K.

    1985-01-01

    In 1982, a report dealing with the nation's research needs in high-speed computing called for increased access to supercomputing resources for the research community, research in computational mathematics, and increased research in the technology base needed for the next generation of supercomputers. Since that time a number of programs addressing future generations of computers, particularly parallel processors, have been started by U.S. government agencies. The present paper provides a description of the largest government programs in parallel processing. Established in fiscal year 1985 by the Institute for Defense Analyses for the National Security Agency, the Supercomputing Research Center will pursue research to advance the state of the art in supercomputing. Attention is also given to the DOE applied mathematical sciences research program, the NYU Ultracomputer project, the DARPA multiprocessor system architectures program, NSF research on multiprocessor systems, ONR activities in parallel computing, and NASA parallel processor projects.

  19. Contraception coverage and methods used among women in South ...

    African Journals Online (AJOL)

    its convenience for providers and women, cost effectiveness, and high acceptability ... Using data from the 2012 SA National HIV Prevalence, Incidence ... Data on contraceptive coverage and service gaps could help to shape these initiatives.

  20. 78 FR 54986 - Information Reporting of Minimum Essential Coverage

    Science.gov (United States)

    2013-09-09

    ... employees, and offer that coverage to spouses and dependents, all with no employee contribution, to forgo... health benefits provided through a contribution to a health savings account. Health savings accounts are... agenda will be available free of charge at the hearing. Drafting Information The principal authors of...

  1. Coverage and Capacity Analysis of Sigfox, LoRa, GPRS, and NB-IoT

    DEFF Research Database (Denmark)

    Vejlgaard, Benny; Lauridsen, Mads; Nguyen, Huan Cong

    2017-01-01

    In this paper the coverage and capacity of SigFox, LoRa, GPRS, and NB-IoT are compared using a real site deployment covering 8000 km2 in Northern Denmark. Using the existing Telenor cellular site grid, it is shown that the four technologies have more than 99 % outdoor coverage, while GPRS is challenged for indoor coverage. Furthermore, the study analyzes the capacity of the four technologies assuming a traffic growth from 1 to 10 IoT devices per user. The conclusion is that the 95 %-tile uplink failure rate for outdoor users is below 5 % for all technologies. For indoor users only NB-IoT provides...

  2. Parallel processing for artificial intelligence 2

    CERN Document Server

    Kumar, V; Suttner, CB

    1994-01-01

    With the increasing availability of parallel machines and the rising interest in large-scale and real-world applications, research on parallel processing for Artificial Intelligence (AI) is gaining greater importance in the computer science environment. Many applications have been implemented and delivered but the field is still considered to be in its infancy. This book assembles diverse aspects of research in the area, providing an overview of the current state of technology. It also aims to promote further growth across the discipline. Contributions have been grouped according to their

  3. Multibus-based parallel processor for simulation

    Science.gov (United States)

    Ogrady, E. P.; Wang, C.-H.

    1983-01-01

    A Multibus-based parallel processor simulation system is described. The system is intended to serve as a vehicle for gaining hands-on experience, testing system and application software, and evaluating parallel processor performance during development of a larger system based on the horizontal/vertical-bus interprocessor communication mechanism. The prototype system consists of up to seven Intel iSBC 86/12A single-board computers which serve as processing elements, a multiple transmission controller (MTC) designed to support system operation, and an Intel Model 225 Microcomputer Development System which serves as the user interface and input/output processor. All components are interconnected by a Multibus/IEEE 796 bus. An important characteristic of the system is that it provides a mechanism for a processing element to broadcast data to other selected processing elements. This parallel transfer capability is provided through the design of the MTC and a minor modification to the iSBC 86/12A board. The operation of the MTC, the basic hardware-level operation of the system, and pertinent details about the iSBC 86/12A and the Multibus are described.

  4. Comparison of gene coverage of mouse oligonucleotide microarray platforms

    Directory of Open Access Journals (Sweden)

    Medrano Juan F

    2006-03-01

    reveals that the commercial microarray Sentrix, which is based on the MEEBO public oligoset, showed the best mouse genome coverage currently available. We also suggest the creation of guidelines to standardize the minimum set of information that vendors should provide to allow researchers to accurately evaluate the advantages and disadvantages of using a given platform.

  5. Federally-Assisted Healthcare Coverage among Male State Prisoners with Chronic Health Problems.

    Directory of Open Access Journals (Sweden)

    David L Rosen

    Prisoners have higher rates of chronic diseases such as substance dependence, mental health conditions and infectious disease, as compared to the general population. We projected the number of male state prisoners with a chronic health condition who at release would be eligible or ineligible for healthcare coverage under the Affordable Care Act (ACA). We used ACA income guidelines in conjunction with reported pre-arrest social security benefits and income from a nationally representative sample of prisoners to estimate the number eligible for healthcare coverage at release. There were 643,290 US male prisoners aged 18-64 with a chronic health condition. At release, 73% in Medicaid-expansion states would qualify for Medicaid or tax credits. In non-expansion states, 54% would qualify for tax credits, but 22% (n = 69,827) had incomes of ≤ 100% the federal poverty limit and thus would be ineligible for ACA-mediated healthcare coverage. These prisoners comprise 11% of all male prisoners with a chronic condition. The ACA was projected to provide coverage to most male state prisoners with a chronic health condition; however, roughly 70,000 fall in the "coverage gap" and may require non-routine care at emergency departments. Mechanisms are needed to secure coverage for this at risk group and address barriers to routine utilization of health services.

  6. Federally-Assisted Healthcare Coverage among Male State Prisoners with Chronic Health Problems.

    Science.gov (United States)

    Rosen, David L; Grodensky, Catherine A; Holley, Tara K

    2016-01-01

    Prisoners have higher rates of chronic diseases such as substance dependence, mental health conditions and infectious disease, as compared to the general population. We projected the number of male state prisoners with a chronic health condition who at release would be eligible or ineligible for healthcare coverage under the Affordable Care Act (ACA). We used ACA income guidelines in conjunction with reported pre-arrest social security benefits and income from a nationally representative sample of prisoners to estimate the number eligible for healthcare coverage at release. There were 643,290 US male prisoners aged 18-64 with a chronic health condition. At release, 73% in Medicaid-expansion states would qualify for Medicaid or tax credits. In non-expansion states, 54% would qualify for tax credits, but 22% (n = 69,827) had incomes of ≤ 100% the federal poverty limit and thus would be ineligible for ACA-mediated healthcare coverage. These prisoners comprise 11% of all male prisoners with a chronic condition. The ACA was projected to provide coverage to most male state prisoners with a chronic health condition; however, roughly 70,000 fall in the "coverage gap" and may require non-routine care at emergency departments. Mechanisms are needed to secure coverage for this at risk group and address barriers to routine utilization of health services.
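
    The eligibility logic described above reduces to a few income thresholds. A hypothetical helper (not study code; thresholds are the standard ACA cutoffs of 138% FPL for expansion Medicaid and 100-400% FPL for premium tax credits):

        def projected_eligibility(income_pct_fpl, state_expanded):
            # Classify a releasee's projected ACA coverage pathway from income
            # expressed as a percent of the federal poverty level (FPL).
            if state_expanded and income_pct_fpl <= 138:
                return "Medicaid"
            if 100 <= income_pct_fpl <= 400:
                return "premium tax credits"
            if not state_expanded and income_pct_fpl < 100:
                return "coverage gap"  # ineligible for ACA-mediated coverage
            return "other/ineligible"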

  7. A model for determining when an analysis contains sufficient detail to provide adequate NEPA coverage for a proposed action

    International Nuclear Information System (INIS)

    Eccleston, C.H.

    1994-11-01

    Neither the National Environmental Policy Act (NEPA) nor its subsequent regulations provide substantive guidance for determining the level of detail, discussion, and analysis that is sufficient to adequately cover a proposed action. Yet, decisionmakers are routinely confronted with the problem of making such determinations. Experience has shown that no two decisionmakers are likely to completely agree on the amount of discussion that is sufficient to adequately cover a proposed action. One decisionmaker may determine that a certain level of analysis is adequate, while another may conclude the exact opposite. Achieving a consensus within the agency and among the public can be problematic. Lacking definitive guidance, decisionmakers and critics alike may point to a universe of potential factors as the basis for defending their claim that an action is or is not adequately covered. Experience indicates that assertions are often based on ambiguous opinions that can be neither proved nor disproved. Lack of definitive guidance slows the decisionmaking process and can result in project delays. Furthermore, it can also lead to inconsistencies in decisionmaking, inappropriate levels of NEPA documentation, and increased risk of a project being challenged for inadequate coverage. A more systematic and less subjective approach for making such determinations is obviously needed. A paradigm for reducing the degree of subjectivity inherent in such decisions is presented in the following paper. The model is specifically designed to expedite the decisionmaking process by providing a systematic approach for making these determinations. In many cases, agencies may find that using this model can reduce the analysis and size of NEPA documents

  8. Prosodic structure as a parallel to musical structure

    Directory of Open Access Journals (Sweden)

    Christopher Cullen Heffner

    2015-12-01

    What structural properties do language and music share? Although early speculation identified a wide variety of possibilities, the literature has largely focused on the parallels between musical structure and syntactic structure. Here, we argue that parallels between musical structure and prosodic structure deserve more attention. We review the evidence for a link between musical and prosodic structure and find it to be strong. In fact, certain elements of prosodic structure may provide a parsimonious comparison with musical structure without sacrificing empirical findings related to the parallels between language and music. We then develop several predictions related to such a hypothesis.

  9. Program Transformation to Identify List-Based Parallel Skeletons

    Directory of Open Access Journals (Sweden)

    Venkatesh Kannan

    2016-07-01

    Algorithmic skeletons are used as building-blocks to ease the task of parallel programming by abstracting the details of parallel implementation from the developer. Most existing libraries provide implementations of skeletons that are defined over flat data types such as lists or arrays. However, skeleton-based parallel programming is still very challenging as it requires intricate analysis of the underlying algorithm and often uses inefficient intermediate data structures. Further, the algorithmic structure of a given program may not match those of list-based skeletons. In this paper, we present a method to automatically transform any given program to one that is defined over a list and is more likely to contain instances of list-based skeletons. This facilitates the parallel execution of a transformed program using existing implementations of list-based parallel skeletons. Further, by using an existing transformation called distillation in conjunction with our method, we produce transformed programs that contain fewer inefficient intermediate data structures.
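
    For concreteness, list-based skeletons of the kind targeted by the transformation can be sketched as Python stand-ins for map and reduce; a program rewritten over a list can then be parallelized simply by instantiating these skeletons.

        from functools import reduce
        from multiprocessing import Pool

        def par_map(f, xs):
            # Apply f to every element independently: a parallel map skeleton.
            with Pool() as pool:
                return pool.map(f, xs)

        def par_reduce(op, xs, unit):
            # Combine results with an associative operator; shown serially here.
            return reduce(op, xs, unit)

        def square(x):  # worker functions must be module-level to be picklable
            return x * x

        if __name__ == "__main__":
            total = par_reduce(lambda a, b: a + b, par_map(square, range(10)), 0)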

  10. CUBESIM, Hypercube and Denelcor Hep Parallel Computer Simulation

    International Nuclear Information System (INIS)

    Dunigan, T.H.

    1988-01-01

    1 - Description of program or function: CUBESIM is a set of subroutine libraries and programs for the simulation of message-passing parallel computers and shared-memory parallel computers. Subroutines are supplied to simulate the Intel hypercube and the Denelcor HEP parallel computers. The system permits a user to develop and test parallel programs written in C or FORTRAN on a single processor. The user may alter such hypercube parameters as message startup times, packet size, and the computation-to-communication ratio. The simulation generates a trace file that can be used for debugging, performance analysis, or graphical display. 2 - Method of solution: The CUBESIM simulator is linked with the user's parallel application routines to run as a single UNIX process. The simulator library provides a small operating system to perform process and message management. 3 - Restrictions on the complexity of the problem: Up to 128 processors can be simulated with a virtual memory limit of 6 million bytes. Up to 1000 processes can be simulated

  11. Radiology 24/7 In-House Attending Coverage: Do Benefits Outweigh Cost?

    Science.gov (United States)

    Coleman, Stephanie; Holalkere, Nagaraj Setty; O'Malley, Julie; Doherty, Gemma; Norbash, Alexander; Kadom, Nadja

    2016-01-01

    Many radiology practices, including academic centers, are moving to in-house 24/7 attending coverage. This could be costly and may not be easily accepted by radiology trainees and attending radiologists. In this article, we evaluated the effects of 24/7 in-house attending coverage on patient care, costs, and qualitative aspects such as trainee education. We retrospectively collected report turnaround times (TAT) and work relative value units (wRVU). We compared these parameters between the years before and after the implementation of 24/7 in-house attending coverage. The cost to provide additional attending coverage was estimated from departmental financial reports. A qualitative survey of radiology residents and faculty was performed to study perceived effects on trainee education. There were decreases in report TAT following 24/7 attending implementation: 69% reduction in computed tomography, 43% reduction in diagnostic radiography, 7% reduction in magnetic resonance imaging, and 43% reduction in ultrasound. There was an average daytime wRVU decrease of 9%, although this was compounded by a decrease in total RVUs of the 2013 calendar year. The financial investment by the institution was estimated at $850,000. Qualitative data demonstrated overall positive feedback from trainees and faculty in radiology, although loss of independence was reported as a negative effect. TAT and wRVU metrics changed with implementation of 24/7 attending coverage, although these metrics do not directly relate to patient outcomes. Additional clinical benefits may include fewer discrepancies between preliminary and final reports that may improve emergency and inpatient department workflows and liability exposure. Radiologists reported the impression that clinicians appreciated 24/7 in-house attending coverage, particularly surgical specialists. Loss of trainee independence on call was a perceived disadvantage of 24/7 attending coverage and raised a concern that residency education

  12. Attaining higher coverage: obstacles to overcome. English-speaking Caribbean and Suriname.

    Science.gov (United States)

    1984-12-01

    In 1983, 8 (42%) of the 19 English-speaking Caribbean countries (including Suriname) achieved at least 50% coverage with 3 doses of diphtheria-pertussis-tetanus (DPT) vaccine among children under 1 year of age, and 6 countries (32%) had at least 50% coverage with 3 doses of trivalent oral polio vaccine (TOPV). In addition, 10 countries (53%) achieved over 75% DPT coverage and 11 (58%) achieved over 75% TOPV coverage. Despite this record of progress, several factors continue to impede further gains in immunization coverage. Of particular concern is the high dropout rate: as many as 25% of infants receive their 1st dose of DPT and TOPV but do not return to complete their course of immunization. There is also a need for each health center to estimate its annual target population for immunization every year through analysis of the total live births from the previous year in the health center's catchment area (minus infant mortality). Monthly target figures can thus be computed and coverage monitored. A further problem has been a reluctance on the part of some health workers to administer vaccines simultaneously; simultaneous administration does not reduce effectiveness or increase the risk of complications, and it reduces the number of visits needed to complete the immunization schedule. An unresolved question is whether to immunize ill or malnourished children. Decisions on this matter should take into account the availability and accessibility of health care services, the ability to follow up children who are not immunized, and the likelihood that children will return for subsequent immunizations. Finally, a number of immunizations performed by private practitioners and institutions are not reported. Both public and private health care providers should agree on a standardized reporting format to allow better estimation of coverage.
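
    The target-setting arithmetic described above is simple enough to sketch in code; all figures below are invented for illustration.

        # Estimate an immunization target population for one health center,
        # following the rule described above: previous year's live births in the
        # catchment area minus infant deaths. All numbers are illustrative.
        live_births = 1200
        infant_deaths = 60

        annual_target = live_births - infant_deaths      # 1140 infants to immunize
        monthly_target = annual_target / 12              # 95 infants per month

        doses_given_this_month = 80
        coverage_this_month = doses_given_this_month / monthly_target
        print(f"Monthly target: {monthly_target:.0f}, coverage: {coverage_this_month:.0%}")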

  13. Experimental study on influence of vegetation coverage on runoff in wind-water erosion crisscross region

    Science.gov (United States)

    Wang, Jinhua; Zhang, Ronggang; Sun, Juan

    2018-02-01

    Using an artificial rainfall simulation method, 23 simulation experiments were carried out in the wind-water erosion crisscross region in order to analyze the influence of vegetation coverage on runoff and sediment yield. The experimental plots are standard plots with a length of 20 m, a width of 5 m, and a slope of 15 degrees. The simulation experiments were conducted in plots with different vegetation coverage under three different rainfall intensities. According to the experimental observation data, the influence of vegetation coverage on runoff and infiltration was analyzed. Vegetation coverage has a significant impact on runoff: the higher the vegetation coverage, the smaller the runoff. Under a rainfall intensity of 0.6 mm/min, the runoff volume from the experimental plot with 18% vegetation coverage was 1.2 times the runoff from the plot with 30% vegetation coverage. Moreover, the difference in runoff is more pronounced at higher rainfall intensities: if the rainfall intensity reaches 1.32 mm/min, the runoff from the plot with 11% vegetation coverage is about 2 times as large as the runoff from the plot with 53% vegetation coverage. Under small rainfall intensities, runoff starts later in plots with higher vegetation coverage than in plots with low vegetation coverage; under heavy rainfall intensities, there is no obvious difference in the starting time of runoff. In addition, the higher the vegetation coverage, the deeper the rainfall infiltration depth. The results can provide a reference for ecological construction in the wind-water erosion crisscross region, where soil erosion is serious.

  14. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  15. Renal magnetic resonance angiography at 3.0 Tesla using a 32-element phased-array coil system and parallel imaging in 2 directions.

    Science.gov (United States)

    Fenchel, Michael; Nael, Kambiz; Deshpande, Vibhas S; Finn, J Paul; Kramer, Ulrich; Miller, Stephan; Ruehm, Stefan; Laub, Gerhard

    2006-09-01

    The aim of the present study was to assess the feasibility of renal magnetic resonance angiography at 3.0 T using a phased-array coil system with 32 coil elements. Specifically, high parallel imaging factors were used for increased spatial resolution and anatomic coverage of the whole abdomen. Signal-to-noise values and the g-factor distribution of the 32-element coil were examined in phantom studies for the magnetic resonance angiography (MRA) sequence. Eleven volunteers (6 men, median age of 30.0 years) were examined on a 3.0-T MR scanner (Magnetom Trio, Siemens Medical Solutions, Malvern, PA) using a 32-element phased-array coil (prototype from In vivo Corp.). Contrast-enhanced 3D-MRA (TR 2.95 milliseconds, TE 1.12 milliseconds, flip angle 25-30 degrees, bandwidth 650 Hz/pixel) was acquired with integrated generalized autocalibrating partially parallel acquisition (GRAPPA) in both the phase- and slice-encoding directions. Images were assessed by 2 independent observers with regard to image quality, noise, and presence of artifacts. Signal-to-noise levels of 22.2 +/- 22.0 and 57.9 +/- 49.0 were measured with parallel imaging (GRAPPA x6) and without it, respectively. The mean g-factor of the 32-element coil for GRAPPA with an acceleration of 3 in the phase-encoding and 2 in the slice-encoding direction was 1.61. High image quality was found in 9 of 11 volunteers (2.6 +/- 0.8) with good overall interobserver agreement (k = 0.87). Relatively low image quality with higher noise levels was encountered in 2 volunteers. MRA at 3.0 T using a 32-element phased-array coil is feasible in healthy volunteers. High diagnostic image quality and extended anatomic coverage could be achieved with application of high parallel imaging factors.
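
    For reference, the SNR penalty of parallel-imaging acceleration quoted above follows the standard relation (textbook material rather than a formula from this paper), with R the total acceleration factor (here R = 3 x 2 = 6) and g the coil-geometry noise-amplification factor:

        \mathrm{SNR}_{\mathrm{accelerated}} = \frac{\mathrm{SNR}_{\mathrm{full}}}{g\,\sqrt{R}}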

  16. LTE-A cellular networks multi-hop relay for coverage, capacity and performance enhancement

    CERN Document Server

    Yahya, Abid

    2017-01-01

    In this book, three different methods are presented to enhance the capacity and coverage area in LTE-A cellular networks. The scope involves the evaluation of the effect of the RN location in terms of capacity and the determination of the optimum location of the relay that provides the maximum achievable data rate for users with limited interference at the cell boundaries. This book presents a new model to enhance both capacity and coverage area in an LTE-A cellular network by determining the optimum location for the RN with limited interference. The new model is designed to enhance the capacity of the relay link by employing two antennas in the RN. This design enables the relay link to absorb more users at cell edge regions. An algorithm called the Balance Power Algorithm (BPA) is developed to reduce MR power consumption. The book is intended for postgraduate students and researchers in wireless & mobile communications. Provides a variety of methods for enhancing capacity and coverage in LTE-A cellular networks; Develop...

  17. An Automatic Instruction-Level Parallelization of Machine Code

    Directory of Open Access Journals (Sweden)

    MARINKOVIC, V.

    2018-02-01

    Full Text Available Prevailing multicores and novel manycores have made a great challenge of the modern day: parallelization of embedded software that is still written as sequential. In this paper, automatic code parallelization is considered, focusing on developing a parallelization tool at the binary level and on validating this approach. A novel instruction-level parallelization algorithm for assembly code is developed; it uses register names after SSA conversion to find independent blocks of code and then schedules those blocks using METIS to achieve good load balance. Sequential consistency is verified, and validation is done by measuring program execution time on the target architecture. Great speedup, taken as the performance measure in the validation process, and optimal load balancing are achieved for multicore RISC processors with 2 to 16 cores (e.g. MIPS, MicroBlaze, etc.). In particular, for 16 cores, the average speedup is 7.92x, while in some cases it reaches 14x. The approach to automatic parallelization provided by this paper is useful to researchers and developers in the area of parallelization as the basis for further optimizations, as the back-end of a compiler, or as the code parallelization tool for an embedded system.

  18. Circuit and bond polytopes on series–parallel graphs

    OpenAIRE

    Borne , Sylvie; Fouilhoux , Pierre; Grappe , Roland; Lacroix , Mathieu; Pesneau , Pierre

    2015-01-01

    In this paper, we describe the circuit polytope on series–parallel graphs. We first show the existence of a compact extended formulation. Though it is not explicit, its construction process helps us to inductively provide the description in the original space. As a consequence, using the link between bonds and circuits in planar graphs, we also describe the bond polytope on series–parallel graphs.

  19. Lemon : An MPI parallel I/O library for data encapsulation using LIME

    NARCIS (Netherlands)

    Deuzeman, Albert; Reker, Siebren; Urbach, Carsten

    We introduce Lemon, an MPI parallel I/O library that provides efficient parallel I/O of both binary data and metadata on massively parallel architectures. Motivated by the demands of the lattice Quantum Chromodynamics community, the data is stored in the SciDAC Lattice QCD Interchange Message Encapsulation (LIME) format.

  20. Functional Coverage of the Human Genome by Existing Structures, Structural Genomics Targets, and Homology Models.

    Directory of Open Access Journals (Sweden)

    2005-08-01

    Full Text Available The bias in protein structure and function space resulting from experimental limitations and targeting of particular functional classes of proteins by structural biologists has long been recognized, but never continuously quantified. Using the Enzyme Commission and the Gene Ontology classifications as a reference frame, and integrating structure data from the Protein Data Bank (PDB), target sequences from the structural genomics projects, structure homology derived from the SUPERFAMILY database, and genome annotations from Ensembl and NCBI, we provide a quantified view, both at the domain and whole-protein levels, of the current and projected coverage of protein structure and function space relative to the human genome. Protein structures currently provide at least one domain that covers 37% of the functional classes identified in the genome; whole structure coverage exists for 25% of the genome. If all the structural genomics targets were solved (twice the current number of structures in the PDB), it is estimated that structures of one domain would cover 69% of the functional classes identified and complete structure coverage would be 44%. Homology models from existing experimental structures extend the 37% coverage to 56% of the genome as single domains and 25% to 31% for complete structures. Coverage from homology models is not evenly distributed by protein family, reflecting differing degrees of sequence and structure divergence within families. While these data provide coverage, conversely, they also systematically highlight functional classes of proteins for which structures should be determined. Current key functional families without structure representation are highlighted here; updated information on the "most wanted list" that should be solved is available on a weekly basis from http://function.rcsb.org:8080/pdb/function_distribution/index.html.

  1. Default Parallels Plesk Panel Page

    Science.gov (United States)

    Promotional text from a default Parallels Plesk Panel web page rather than a research abstract: Parallels software provides key building blocks of cloud services (virtualized servers) that small businesses want and need, and its Service Provider products include Parallels® Automation for hosting, SaaS, and cloud computing, described as the leading hosting automation software. The page is shown because there is no Web site at the requested address.

  2. Coverage matters: insurance and health care

    National Research Council Canada - National Science Library

    Board on Health Care Services Staff; Institute of Medicine Staff; Institute of Medicine; National Academy of Sciences

    2001-01-01

    ...? How does the system of insurance coverage in the U.S. operate, and where does it fail? The first of six Institute of Medicine reports that will examine in detail the consequences of having a large uninsured population, Coverage Matters...

  3. Pricing of drugs with heterogeneous health insurance coverage.

    Science.gov (United States)

    Ferrara, Ida; Missios, Paul

    2012-03-01

    In this paper, we examine the role of insurance coverage in explaining the generic competition paradox in a two-stage game involving a single producer of brand-name drugs and n quantity-competing producers of generic drugs. Independently of brand loyalty, which some studies rely upon to explain the paradox, we show that heterogeneity in insurance coverage may result in higher prices of brand-name drugs following generic entry. With market segmentation based on insurance coverage present in both the pre- and post-entry stages, the paradox can arise when the two types of drugs are highly substitutable and the market is quite profitable but does not have to arise when the two types of drugs are highly differentiated. However, with market segmentation occurring only after generic entry, the paradox can arise when the two types of drugs are weakly substitutable, provided, however, that the industry is not very profitable. In both cases, that is, when market segmentation is present in the pre-entry stage and when it is not, the paradox becomes more likely to arise as the market expands and/or insurance companies decrease deductibles applied on the purchase of generic drugs. Copyright © 2012 Elsevier B.V. All rights reserved.

  4. Universal Health Coverage – The Critical Importance of Global Solidarity and Good Governance

    Science.gov (United States)

    Reis, Andreas A.

    2016-01-01

    This article provides a commentary on Ole Norheim's editorial entitled "Ethical perspective: Five unacceptable trade-offs on the path to universal health coverage." It reinforces its message that an inclusive, participatory process is essential for ethical decision-making and underlines the crucial importance of good governance in setting fair priorities in healthcare. Solidarity on both national and international levels is needed to make progress towards the goal of universal health coverage (UHC). PMID:27694683

  5. Constructing and Using Broad-coverage Lexical Resource for Enhancing Morphological Analysis of Arabic

    OpenAIRE

    Sawalha, M.; Atwell, E.S.

    2010-01-01

    Broad-coverage language resources that provide prior linguistic knowledge can improve the accuracy and performance of NLP applications. We are constructing a broad-coverage lexical resource to improve the accuracy of morphological analyzers and part-of-speech taggers of Arabic text. Over the past 1200 years, many different kinds of Arabic language lexicons were constructed; these lexicons differ in ordering, size, and the aim or goal of their construction. We collected 23 machine-readable l...

  6. Coverage and Capacity Analysis of LTE-M and NB-IoT in a Rural Area

    DEFF Research Database (Denmark)

    Lauridsen, Mads; Kovács, István; Mogensen, Preben Elgaard

    2016-01-01

    The 3GPP has introduced the LTE-M and NB-IoT User Equipment categories and made amendments to LTE release 13 to support the cellular Internet of Things. The contribution of this paper is to analyze the coverage probability, the number of supported devices, and the device battery life in networks equipped with either of the newly standardized technologies. The study is made for a site-specific network deployment of a Danish operator, and the simulation is calibrated using drive test measurements. The results show that LTE-M can provide coverage for 99.9% of outdoor and indoor devices, if the latter are experiencing 10 dB additional loss. However, for deep indoor users NB-IoT is required and provides coverage for about 95% of the users. The cost is support for more than 10 times fewer devices and a 2-6 times higher device power consumption. Thus both LTE-M and NB-IoT provide extended support for the cellular Internet of Things.

  7. Parallel phase model : a programming model for high-end parallel machines with manycores.

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Junfeng (Syracuse University, Syracuse, NY); Wen, Zhaofang; Heroux, Michael Allen; Brightwell, Ronald Brian

    2009-04-01

    This paper presents a parallel programming model, Parallel Phase Model (PPM), for next-generation high-end parallel machines based on a distributed memory architecture consisting of a networked cluster of nodes with a large number of cores on each node. PPM has a unified high-level programming abstraction that facilitates the design and implementation of parallel algorithms to exploit both the parallelism of the many cores and the parallelism at the cluster level. The programming abstraction will be suitable for expressing both fine-grained and coarse-grained parallelism. It includes a few high-level parallel programming language constructs that can be added as an extension to an existing (sequential or parallel) programming language such as C; and the implementation of PPM also includes a light-weight runtime library that runs on top of an existing network communication software layer (e.g. MPI). Design philosophy of PPM and details of the programming abstraction are also presented. Several unstructured applications that inherently require high-volume random fine-grained data accesses have been implemented in PPM with very promising results.

  8. Systematic approach for deriving feasible mappings of parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir; Imre, Kayhan M.

    2017-01-01

    The need for high-performance computing together with the increasing trend from single processor to parallel computer architectures has leveraged the adoption of parallel computing. To benefit from parallel computing power, usually parallel algorithms are defined that can be mapped and executed

  9. Immunization Coverage

    Science.gov (United States)

    WHO fact sheet on immunization coverage (www.who.int/news-room/fact-sheets/detail/immunization-coverage). Only page-navigation fragments were captured for this record: language links, a pointer to Global Health Observatory (GHO) immunization data, and the truncated headline "1 in 10 ...".

  10. Reporting by multiple employer welfare arrangements and certain other entities that offer or provide coverage for medical care to the employees of two or more employers. Final rule.

    Science.gov (United States)

    2003-04-09

    This document contains a final rule governing certain reporting requirements under Title I of the Employee Retirement Income Security Act of 1974 (ERISA) for multiple employer welfare arrangements (MEWAs) and certain other entities that offer or provide coverage for medical care to the employees of two or more employers. The final rule generally requires the administrator of a MEWA, and certain other entities, to file a form with the Secretary of Labor for the purpose of determining whether the requirements of certain recent health care laws are being met.

  11. [Falsified medicines in parallel trade].

    Science.gov (United States)

    Muckenfuß, Heide

    2017-11-01

    The number of falsified medicines on the German market has distinctly increased over the past few years. In particular, stolen pharmaceutical products, a form of falsified medicines, have increasingly been introduced into the legal supply chain via parallel trading. The reasons why parallel trading serves as a gateway for falsified medicines are most likely the complex supply chains and routes of transport. It is hardly possible for national authorities to trace the history of a medicinal product that was bought and sold by several intermediaries in different EU member states. In addition, the heterogeneous outward appearance of imported and relabelled pharmaceutical products facilitates the introduction of illegal products onto the market. Official batch release at the Paul-Ehrlich-Institut offers the possibility of checking some aspects that might provide an indication of a falsified medicine. In some circumstances, this may allow the identification of falsified medicines before they come onto the German market. However, this control is only possible for biomedicinal products that have not received a waiver regarding official batch release. For improved control of parallel trade, better networking among the EU member states would be beneficial. European-wide regulations, e. g., for disclosure of the complete supply chain, would help to minimise the risks of parallel trading and hinder the marketing of falsified medicines.

  12. An Educational Tool for Interactive Parallel and Distributed Processing

    DEFF Research Database (Denmark)

    Pagliarini, Luigi; Lund, Henrik Hautop

    2011-01-01

    In this paper we try to describe how the Modular Interactive Tiles System (MITS) can be a valuable tool for introducing students to interactive parallel and distributed processing programming. This is done by providing an educational hands-on tool that allows a change of representation of the abstract problems related to designing interactive parallel and distributed systems. Indeed, MITS seems to bring a series of goals into the education, such as parallel programming, distributedness, communication protocols, master dependency, software behavioral models, adaptive interactivity, feedback, connectivity, topology, island modeling, and user and multiuser interaction, which can hardly be found in other tools. Finally, we introduce the system of modular interactive tiles as a tool for easy, fast, and flexible hands-on exploration of these issues, and through examples show how to implement interactive

  13. Broader health coverage is good for the nation's health: evidence from country level panel data.

    Science.gov (United States)

    Moreno-Serra, Rodrigo; Smith, Peter C

    2015-01-01

    Progress towards universal health coverage involves providing people with access to needed health services without entailing financial hardship and is often advocated on the grounds that it improves population health. The paper offers econometric evidence on the effects of health coverage on mortality outcomes at the national level. We use a large panel data set of countries, examined by using instrumental variable specifications that explicitly allow for potential reverse causality and unobserved country-specific characteristics. We employ various proxies for the coverage level in a health system. Our results indicate that expanded health coverage, particularly through higher levels of publicly funded health spending, results in lower child and adult mortality, with the beneficial effect on child mortality being larger in poorer countries.

  14. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  15. Out-of-order parallel discrete event simulation for electronic system-level design

    CERN Document Server

    Chen, Weiwei

    2014-01-01

    This book offers readers a set of new approaches, tools, and techniques for facing the challenges of parallelization in the design of embedded systems. It provides an advanced parallel simulation infrastructure for efficient and effective system-level model validation and development so as to build better products in less time. Since parallel discrete event simulation (PDES) has the potential to exploit the underlying parallel computational capability in today's multi-core simulation hosts, the author begins by reviewing the parallelization of discrete event simulation, identifyin

  16. pcircle - A Suite of Scalable Parallel File System Tools

    Energy Technology Data Exchange (ETDEWEB)

    2015-10-01

    Most software related to file systems is written for conventional local file systems; it is serialized and cannot take advantage of a large-scale parallel file system. "pcircle" builds on top of ubiquitous MPI in cluster computing environments and the "work-stealing" pattern to provide a scalable, high-performance suite of file system tools. In particular, it implements parallel data copy and parallel data checksumming, with advanced features such as asynchronous progress reporting, checkpoint and restart, and integrity checking.
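
    As a rough illustration of the chunked, parallel-checksumming idea (a conceptual sketch, not pcircle's MPI implementation; a shared process pool stands in for true work-stealing, and the chunk size and digest-combination scheme are assumptions):

        import hashlib
        import os
        from concurrent.futures import ProcessPoolExecutor

        CHUNK = 4 * 1024 * 1024  # 4 MiB work units; an assumed chunk size

        def chunk_digest(task):
            # Each worker independently checksums one (path, offset) chunk.
            path, offset = task
            with open(path, "rb") as f:
                f.seek(offset)
                return offset, hashlib.sha1(f.read(CHUNK)).hexdigest()

        def parallel_checksum(path, workers=8):
            size = os.path.getsize(path)
            tasks = [(path, off) for off in range(0, size, CHUNK)]
            with ProcessPoolExecutor(max_workers=workers) as pool:
                digests = sorted(pool.map(chunk_digest, tasks))
            # Combine the ordered per-chunk digests into one file-level signature.
            combined = "".join(d for _, d in digests)
            return hashlib.sha1(combined.encode()).hexdigest()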

  17. Parallel Ada benchmarks for the SVMS

    Science.gov (United States)

    Collard, Philippe E.

    1990-01-01

    The use of the parallel processing paradigm to design and develop faster and more reliable computers appears to clearly mark the future of information processing. NASA started the development of such an architecture: the Spaceborne VHSIC Multi-processor System (SVMS). Ada will be one of the languages used to program the SVMS. One of the unique characteristics of Ada is that it supports parallel processing at the language level through its tasking constructs. It is important for the SVMS project team to assess how efficiently the SVMS architecture will be implemented, as well as how efficiently the Ada environment will be ported to the SVMS. AUTOCLASS II, a Bayesian classifier written in Common Lisp, was selected as one of the benchmarks for SVMS configurations. The purpose of the R and D effort was to provide the SVMS project team with a version of AUTOCLASS II, written in Ada, that would make use of Ada tasking constructs as much as possible so as to constitute a suitable benchmark. Additionally, a set of programs was developed to measure Ada tasking efficiency on parallel architectures and to determine the critical parameters influencing tasking efficiency. All this was designed to provide the SVMS project team with a set of suitable tools for the development of the SVMS architecture.

  18. 22 CFR 226.31 - Insurance coverage.

    Science.gov (United States)

    2010-04-01

    Title 22 (Foreign Relations), Vol. 1, 2010-04-01 edition. Agency for International Development, Administration of Assistance Awards to U.S. Non-Governmental Organizations, Post-award Requirements, Property Standards. § 226.31 Insurance coverage. Recipients...

  19. Existence of parallel spinors on non-simply-connected Riemannian manifolds

    International Nuclear Information System (INIS)

    McInnes, B.

    1997-04-01

    It is well known, and important for applications, that Ricci-flat Riemannian manifolds of non-generic holonomy always admit a parallel [covariant constant] spinor if they are simply connected. The non-simply-connected case is much more subtle, however. We show that a parallel spinor can still be found in this case provided that the [real] dimension is not a multiple of four, and provided that the spin structure is carefully chosen. (author). 10 refs

  20. Clustered lot quality assurance sampling to assess immunisation coverage: increasing rapidity and maintaining precision.

    Science.gov (United States)

    Pezzoli, Lorenzo; Andrews, Nick; Ronveaux, Olivier

    2010-05-01

    Vaccination programmes targeting disease elimination aim to achieve very high coverage levels (e.g. 95%). We calculated the precision of different clustered lot quality assurance sampling (LQAS) designs in computer-simulated surveys to provide local health officers in the field with preset LQAS plans to simply and rapidly assess programmes with high coverage targets. We calculated the sample size (N), decision value (d), and misclassification errors (alpha and beta) of several LQAS plans by running 10 000 simulations. We kept the upper coverage threshold (UT) at 90% or 95% and decreased the lower threshold (LT) progressively by 5%. We measured the proportion of simulations with more than d unvaccinated individuals if the coverage was LT% (pLT) to calculate alpha (1-pLT). We divided N into clusters (between 5 and 10) and recalculated the errors hypothesising that the coverage would vary in the clusters according to a binomial distribution with preset standard deviations of 0.05 and 0.1 from the mean lot coverage. We selected the plans fulfilling preset alpha and beta criteria and recommend LQAS plans dividing the lot into five clusters with N = 50 (5 x 10) and d = 4 to evaluate programmes with a 95% coverage target and d = 7 to evaluate programmes with a 90% target. These plans will considerably increase the feasibility and the rapidity of conducting the LQAS in the field.
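
    The simulation procedure is straightforward to reproduce in outline: draw many simulated lots at a given true coverage and count how often the decision rule (more than d unvaccinated among the N sampled) rejects the lot. The sketch below ignores the clustering adjustment and uses the recommended plan N = 50, d = 4; it is an illustration, not the authors' code.

        import random

        def rejection_rate(coverage, n=50, d=4, sims=10_000, seed=1):
            # Proportion of simulated lots with more than d unvaccinated
            # individuals among n sampled, given the true coverage level.
            rng = random.Random(seed)
            reject = 0
            for _ in range(sims):
                unvaccinated = sum(rng.random() >= coverage for _ in range(n))
                if unvaccinated > d:
                    reject += 1
            return reject / sims

        # alpha: chance of rejecting a lot that actually meets the 95% target.
        print("alpha ~", rejection_rate(0.95))
        # power: chance of correctly rejecting a lot whose true coverage is 75%.
        print("power ~", rejection_rate(0.75))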

  1. Effects of coverage gap reform on adherence to diabetes medications.

    Science.gov (United States)

    Zeng, Feng; Patel, Bimal V; Brunetti, Louis

    2013-04-01

    To investigate the impact of Part D coverage gap reform on diabetes medication adherence. Retrospective data analysis based on pharmacy claims data from a national pharmacy benefit manager. We used a difference-in-difference-in-difference method to evaluate the impact of coverage gap reform on adherence to diabetes medications. Two cohorts (2010 and 2011) were constructed to represent the last year before Affordable Care Act (ACA) reform and the first year after reform, respectively. Each patient had 2 observations: 1 before and 1 after entering the coverage gap. Patients in each cohort were divided into groups based on type of gap coverage: no coverage, partial coverage (generics only), and full coverage. Following ACA reform, patients with no gap coverage and patients with partial gap coverage experienced substantial drops in copayments in the coverage gap in 2011. Their adherence to diabetes medications in the gap, measured by percentage of days covered, improved correspondingly (2.99 percentage points, 95% confidence interval [CI] 0.49-5.48, P = .019 for patients with no coverage; 6.46 percentage points, 95% CI 3.34-9.58, P < .001 for patients with partial coverage). Copayments also dropped for patients with full gap coverage in the gap in 2011. However, their adherence did not increase (-0.13 percentage point, P = .8011). In the first year of ACA coverage gap reform, copayments in the gap decreased substantially for all patients. Patients with no coverage and patients with partial coverage in the gap had better adherence in the gap in 2011.

  2. Recommendation system for immunization coverage and monitoring.

    Science.gov (United States)

    Bhatti, Uzair Aslam; Huang, Mengxing; Wang, Hao; Zhang, Yu; Mehmood, Anum; Di, Wu

    2018-01-02

    Immunization averts an expected 2 to 3 million deaths every year from diphtheria, tetanus, pertussis (whooping cough), and measles; however, an additional 1.5 million deaths could be avoided if vaccination coverage were improved worldwide (data source: http://www.who.int/mediacentre/factsheets/fs378/en/). New vaccination technologies provide earlier diagnoses, personalized treatments, and a wide range of other benefits for both patients and health care professionals. Childhood diseases that were commonplace less than a generation ago have become rare because of vaccines. However, 100% vaccination coverage is still the target to avoid further mortality. Governments have launched special campaigns to create awareness of vaccination. In this paper, we have focused on data mining algorithms for big data, using a collaborative approach for vaccination datasets to resolve problems with planning vaccinations in children, stocking vaccines, and tracking and monitoring non-vaccinated children appropriately. Geographical mapping of vaccination records helps to tackle red zone areas, where vaccination rates are poor, while green zone areas, where vaccination rates are good, can be monitored to enable health care staff to plan the administration of vaccines. Our recommendation algorithm assists in these processes by using deep data mining and by accessing records of other hospitals to highlight locations with lower rates of vaccination. The overall performance of the model is good. The model has been implemented in hospitals to control vaccination across the coverage area.
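
    The red-zone/green-zone mapping described above reduces to thresholding area-level coverage rates; a minimal sketch, with thresholds and records invented for illustration:

        # Classify catchment areas by vaccination coverage so field staff can
        # prioritize follow-up. Thresholds and records are illustrative only.
        areas = {"A": 0.96, "B": 0.78, "C": 0.55}   # area -> coverage rate

        def zone(coverage, green_at=0.90, red_below=0.70):
            if coverage >= green_at:
                return "green"          # on track; routine monitoring
            if coverage < red_below:
                return "red"            # poor coverage; target campaigns here
            return "amber"              # in between; watch closely

        for area, cov in areas.items():
            print(area, f"{cov:.0%}", zone(cov))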

  3. An Analysis of Television's Coverage of the "Iran Crisis": 5 November 1979 to 15 January 1980.

    Science.gov (United States)

    Miller, Christine

    The three television networks, acting under severe restrictions imposed by the Iranian government, all provided comprehensive coverage of the hostage crisis. A study was conducted to examine what, if any, salient differences arose or existed in this coverage from November 5, 1979, until January 15, 1980. A research procedure combining qualitative…

  4. Parallel Algorithms for Switching Edges in Heterogeneous Graphs.

    Science.gov (United States)

    Bhuiyan, Hasanuzzaman; Khan, Maleq; Chen, Jiangzhuo; Marathe, Madhav

    2017-06-01

    An edge switch is an operation on a graph (or network) in which two edges are selected at random and one end vertex of each is swapped with the other. Edge switch operations have important applications in graph theory and network analysis, such as in generating random networks with a given degree sequence, modeling and analyzing dynamic networks, and in studying various dynamic phenomena over a network. The recent growth of real-world networks motivates the need for efficient parallel algorithms. The dependencies among successive edge switch operations and the requirement to keep the graph simple (i.e., no self-loops or parallel edges) as the edges are switched lead to significant challenges in designing a parallel algorithm. Addressing these challenges requires complex synchronization and communication among the processors, leading to difficulties in achieving a good speedup by parallelization. In this paper, we present distributed memory parallel algorithms for switching edges in massive networks. These algorithms provide good speedup and scale well to a large number of processors. A harmonic mean speedup of 73.25 is achieved on eight different networks with 1024 processors. One of the steps in our edge switch algorithms requires the computation of multinomial random variables in parallel. This paper presents the first non-trivial parallel algorithm for the problem, achieving a speedup of 925 using 1024 processors.
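
    For reference, a single (sequential) edge-switch step on a simple graph can be sketched as follows; the distributed-memory algorithms in the paper must additionally coordinate many such steps across processors, which this toy version does not attempt.

        import random

        def edge_switch(edges):
            # edges: set of 2-tuples (u, v) with u < v, i.e. a simple undirected graph.
            (u, v), (x, y) = random.sample(sorted(edges), 2)
            a = tuple(sorted((u, y)))   # first rewired edge
            b = tuple(sorted((x, v)))   # second rewired edge
            # Keep the graph simple: reject self-loops and duplicate (parallel) edges.
            if u == y or x == v or a == b or a in edges or b in edges:
                return False
            edges.difference_update({(u, v), (x, y)})
            edges.update({a, b})
            return True

        g = {(1, 2), (3, 4), (1, 3), (2, 4)}
        edge_switch(g)       # performs one switch attempt in place
        print(sorted(g))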

  5. 28 CFR 55.6 - Coverage under section 203(c).

    Science.gov (United States)

    2010-07-01

    Implementation of the Provisions of the Voting Rights Act Regarding Language Minority Groups, Nature of Coverage. § 55.6 Coverage under section 203(c). (a) Coverage formula. There are four ways in which a political subdivision can become subject to section 203(c). (The criteria for coverage are contained in section 203(b).) (1) Political...

  6. Armenian media coverage of science topics

    Science.gov (United States)

    Mkhitaryan, Marie

    2016-12-01

    The article discusses features and issues of Armenian media coverage of scientific topics and provides recommendations on how to promote scientific topics in the media. The media is more interested in social or public reaction than in scientific information itself. Medical science has a large share of the global media coverage, followed by articles about the environment, space, technology, physics, and other areas. Armenian media mainly tend to focus on a scientific topic if, at first sight, it contains something revolutionary. The media first review whether a scientific study can affect the Armenian economy and only then decide to refer to it. Unfortunately, the perception of science is somewhat distorted in today's media. We often see news headlines mentioning that a scientist has made "an invention"; nowadays it is hard to see the border between a scientist and an inventor. In fact, the technological term "invention" attracts the media by creating an illusory sensation and ensuring a large audience. The report also addresses the "Gitamard" ("A science-man") special project started in 2016 in Mediamax that tells about scientists and their motivations.

  7. A methodology for extending domain coverage in SemRep.

    Science.gov (United States)

    Rosemblat, Graciela; Shin, Dongwook; Kilicoglu, Halil; Sneiderman, Charles; Rindflesch, Thomas C

    2013-12-01

    We describe a domain-independent methodology to extend SemRep coverage beyond the biomedical domain. SemRep, a natural language processing application originally designed for biomedical texts, uses the knowledge sources provided by the Unified Medical Language System (UMLS©). Ontological and terminological extensions to the system are needed in order to support other areas of knowledge. We extended SemRep's application by developing a semantic representation of a previously unsupported domain. This was achieved by adapting well-known ontology engineering phases and integrating them with the UMLS knowledge sources on which SemRep crucially depends. While the process to extend SemRep coverage has been successfully applied in earlier projects, this paper presents in detail the step-wise approach we followed and the mechanisms implemented. A case study in the field of medical informatics illustrates how the ontology engineering phases have been adapted for optimal integration with the UMLS. We provide qualitative and quantitative results, which indicate the validity and usefulness of our methodology. Published by Elsevier Inc.

  8. Insurance premiums and insurance coverage of near-poor children.

    Science.gov (United States)

    Hadley, Jack; Reschovsky, James D; Cunningham, Peter; Kenney, Genevieve; Dubay, Lisa

    States increasingly are using premiums for near-poor children in their public insurance programs (Medicaid/SCHIP) to limit private insurance crowd-out and constrain program costs. Using national data from four rounds of the Community Tracking Study Household Surveys spanning the seven years from 1996 to 2003, this study estimates a multinomial logistic regression model examining how public and private insurance premiums affect insurance coverage outcomes (Medicaid/SCHIP coverage, private coverage, and no coverage). Higher public premiums are significantly associated with a lower probability of public coverage and higher probabilities of private coverage and uninsurance; higher private premiums are significantly related to a lower probability of private coverage and higher probabilities of public coverage and uninsurance. The results imply that uninsurance rates will rise if both public and private premiums increase, and suggest that states that impose or increase public insurance premiums for near-poor children will succeed in discouraging crowd-out of private insurance, but at the expense of higher rates of uninsurance. Sustained increases in private insurance premiums will continue to create enrollment pressures on state insurance programs for children.

  9. 42 CFR 457.410 - Health benefits coverage options.

    Science.gov (United States)

    2010-10-01

    Title 42 (Public Health), Vol. 4, 2010-10-01 edition. Centers for Medicare & Medicaid Services, Department of Health and Human Services; State Plan Requirements: Coverage and Benefits. § 457.410 Health benefits coverage options. (a) Types of...

  10. Parallel algorithms for mapping pipelined and parallel computations

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work, first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements are reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.

  11. Cooperative storage of shared files in a parallel computing system with dynamic block size

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-11-10

    Improved techniques are provided for parallel writing of data to a shared object in a parallel computing system. A method is provided for storing data generated by a plurality of parallel processes to a shared object in a parallel computing system. The method is performed by at least one of the processes and comprises: dynamically determining a block size for storing the data; exchanging a determined amount of the data with at least one additional process to achieve a block of the data having the dynamically determined block size; and writing the block of the data having the dynamically determined block size to a file system. The determined block size comprises, e.g., a total amount of the data to be stored divided by the number of parallel processes. The file system comprises, for example, a log structured virtual parallel file system, such as a Parallel Log-Structured File System (PLFS).
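
    The block-size rule quoted in the abstract (total data divided by the number of processes) and the surplus-exchange step can be sketched schematically; this is an illustration of the idea only, not PLFS code, and the per-process buffer sizes are invented.

        # Schematic: each process holds an unequal share of the data; the block
        # size is the total divided by the process count, and neighbors exchange
        # the surplus so every process writes one equal-sized block.
        shares = [130, 70, 100, 100]            # bytes held by processes 0..3
        block = sum(shares) // len(shares)      # dynamically determined: 100

        transfers = []
        for rank in range(len(shares) - 1):
            surplus = shares[rank] - block      # positive: send right; negative: pull
            shares[rank] -= surplus
            shares[rank + 1] += surplus
            transfers.append((rank, rank + 1, surplus))

        print(shares)      # [100, 100, 100, 100]
        print(transfers)   # e.g. (0, 1, 30): rank 0 passes 30 bytes to rank 1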

  12. Improving matrix-vector product performance and multi-level preconditioning for the parallel PCG package

    Energy Technology Data Exchange (ETDEWEB)

    McLay, R.T.; Carey, G.F.

    1996-12-31

    In this study we consider parallel solution of sparse linear systems arising from discretized PDEs. As part of our continuing work on our parallel PCG solver package, we have made improvements in two areas. The first is improving the performance of the matrix-vector product: on regular finite-difference grids, we are able to use the cache memory more efficiently for smaller domains or where there are multiple degrees of freedom. The second problem of interest in the present work is the construction of preconditioners in the context of the parallel PCG solver we are developing. Here the problem is partitioned over a set of processor subdomains and the matrix-vector product for PCG is carried out in parallel for overlapping grid subblocks. For problems of scaled speedup, the actual rate of convergence of the unpreconditioned system deteriorates as the mesh is refined. Multigrid and subdomain strategies provide a logical approach to resolving the problem. We consider the parallel trade-offs between communication and computation and provide a complexity analysis of a representative algorithm. Some preliminary calculations using the parallel package and comparisons with other preconditioners are provided together with parallel performance results.
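
    For orientation, the unpreconditioned conjugate-gradient iteration at the core of a PCG solver fits in a few lines; in the parallel package the matrix-vector product A @ p is the step distributed across processor subdomains, which this single-process NumPy sketch (with an invented 2x2 test system) does not attempt.

        import numpy as np

        def cg(A, b, tol=1e-8, max_iter=200):
            # Plain conjugate gradients for a symmetric positive definite A.
            x = np.zeros_like(b)
            r = b - A @ x
            p = r.copy()
            rs = r @ r
            for _ in range(max_iter):
                Ap = A @ p                 # the matrix-vector product to parallelize
                alpha = rs / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs) * p
                rs = rs_new
            return x

        A = np.array([[4.0, 1.0], [1.0, 3.0]])
        b = np.array([1.0, 2.0])
        print(cg(A, b))  # approx [0.0909, 0.6364]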

  13. Xyce parallel electronic simulator : users' guide.

    Energy Technology Data Exchange (ETDEWEB)

    Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.; Santarelli, Keith R.; Fixel, Deborah A.; Coffey, Todd Stirling; Russo, Thomas V.; Schiek, Richard Louis; Warrender, Christina E.; Keiter, Eric Richard; Pawlowski, Roger Patrick

    2011-05-01

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: (1) Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). Note that this includes support for most popular parallel and serial computers; (2) Improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques. (3) Device models which are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and (4) Object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message passing parallel implementation - which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The development of Xyce provides a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods, parallel solver algorithms) research and development can be performed. As a result, Xyce is

  14. Parallel processing of structural integrity analysis codes

    International Nuclear Information System (INIS)

    Swami Prasad, P.; Dutta, B.K.; Kushwaha, H.S.

    1996-01-01

    Structural integrity analysis plays an important role in assessing and demonstrating the safety of nuclear reactor components. This analysis is performed using analytical tools such as the Finite Element Method (FEM) with the help of digital computers. The complexity of the problems involved in nuclear engineering demands high speed computation facilities to obtain solutions in a reasonable amount of time. Parallel processing systems such as ANUPAM provide an efficient platform for realising such high speed computation. The development and implementation of software on parallel processing systems is an interesting and challenging task. The data and algorithm structure of the codes plays an important role in exploiting the parallel processing system capabilities. Structural analysis codes based on FEM can be divided into two categories with respect to their implementation on parallel processing systems. Codes in the first category, such as those used for harmonic analysis and mechanistic fuel performance, do not require parallelisation of individual modules. Codes in the second category, such as conventional FEM codes, require parallelisation of individual modules; in this category, parallelisation of the equation solution module poses major difficulties. Different solution schemes such as the domain decomposition method (DDM), parallel active column solver and substructuring method are currently used on parallel processing systems. Two codes, FAIR and TABS, belonging to each of these categories have been implemented on ANUPAM. The implementation details of these codes and the performance of different equation solvers are highlighted. (author). 5 refs., 12 figs., 1 tab

  15. Message passing with parallel queue traversal

    Science.gov (United States)

    Underwood, Keith D [Albuquerque, NM; Brightwell, Ronald B [Albuquerque, NM; Hemmert, K Scott [Albuquerque, NM

    2012-05-01

    In message passing implementations, associative matching structures are used to permit list entries to be searched in parallel fashion, thereby avoiding the delay of linear list traversal. List management capabilities are provided to support list entry turnover semantics and priority ordering semantics.

  16. Cosmic Shear With ACS Pure Parallels

    Science.gov (United States)

    Rhodes, Jason

    2002-07-01

    Small distortions in the shapes of background galaxies by foreground mass provide a powerful method of directly measuring the amount and distribution of dark matter. Several groups have recently detected this weak lensing by large-scale structure, also called cosmic shear. The high resolution and sensitivity of HST/ACS provide a unique opportunity to measure cosmic shear accurately on small scales. Using 260 parallel orbits in the Sloan F775W filter we will measure for the first time: the cosmic shear variance on small scales; the parameter combination sigma_8 Omega_m^0.5 with signal-to-noise (s/n) of 20; and the mass density Omega_m with s/n = 4. These measurements will be made at small angular scales where non-linear effects dominate the power spectrum, providing a test of the gravitational instability paradigm for structure formation. Measurements on these scales are not possible from the ground, because of the systematic effects induced by PSF smearing from seeing. Having many independent lines of sight reduces the uncertainty due to cosmic variance, making parallel observations ideal.

  17. Feed-forward volume rendering algorithm for moderately parallel MIMD machines

    Science.gov (United States)

    Yagel, Roni

    1993-01-01

    Algorithms for direct volume rendering on parallel and vector processors are investigated. Volumes are transformed efficiently on parallel processors by dividing the data into slices and beams of voxels. Equal sized sets of slices along one axis are distributed to processors. Parallelism is achieved at two levels. Because each slice can be transformed independently of others, processors transform their assigned slices with no communication, thus providing maximum possible parallelism at the first level. Within each slice, consecutive beams are incrementally transformed using coherency in the transformation computation. Also, coherency across slices can be exploited to further enhance performance. This coherency yields the second level of parallelism through the use of the vector processing or pipelining. Other ongoing efforts include investigations into image reconstruction techniques, load balancing strategies, and improving performance.

  18. Stakeholders apply the GRADE evidence-to-decision framework to facilitate coverage decisions.

    Science.gov (United States)

    Dahm, Philipp; Oxman, Andrew D; Djulbegovic, Benjamin; Guyatt, Gordon H; Murad, M Hassan; Amato, Laura; Parmelli, Elena; Davoli, Marina; Morgan, Rebecca L; Mustafa, Reem A; Sultan, Shahnaz; Falck-Ytter, Yngve; Akl, Elie A; Schünemann, Holger J

    2017-06-01

    Coverage decisions are complex and require the consideration of many factors. A well-defined, transparent process could improve decision-making and facilitate decision-maker accountability. We surveyed key US-based stakeholders regarding their current approaches for coverage decisions. Then, we held a workshop to test an evidence-to-decision (EtD) framework for coverage based on the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) criteria. A total of 42 individuals (including 19 US stakeholders as well as international health policymakers and GRADE working group members) attended the workshop. Of the 19 stakeholders, 14 (74%) completed the survey before the workshop. Almost all of their organizations (13 of 14; 93%) used systematic reviews for coverage decision-making; few (2 of 14; 14%) developed their own evidence synthesis; a majority (9 of 14; 64%) rated the certainty of evidence (using various systems); almost all (13 of 14; 93%) denied formal consideration of resource use; and half (7 of 14; 50%) reported explicit criteria for decision-making. At the workshop, stakeholders successfully applied the EtD framework to four case studies and provided narrative feedback, which centered on contextual factors affecting coverage decisions in the United States, the need for reliable data on subgroups of patients, and the challenge of decision-making without formal consideration of resource use. Stakeholders successfully applied the EtD framework to four case studies and highlighted contextual factors affecting coverage decisions and affirmed its value. Their input informed the further development of a revised EtD framework, now publicly available (http://gradepro.org/). Published by Elsevier Inc.

  19. A framework to estimate the coverage of AOPs in nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jinkyun; Jung, Wondea [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

    In this paper, a framework to estimate the coverage of AOPs (abnormal operating procedures) in NPPs is proposed based on a SPV (Single Point Vulnerability) model. It is apparent that sufficient coverage of AOPs is one of the prerequisites for improving the operational safety of NPPs because they provide a series of proper actions to be conducted by human operators, which are crucial for coping with off-normal conditions caused by the failure of critical components. In this light, the catalog of BEs (i.e., SPV components) identified from an SPV model could be a good source of information for enhancing the coverage of AOPs. Unfortunately, because the number of corresponding MCSs (minimal cut sets) grows explosively, it is necessary to develop a screening process that allows us to select critical MCSs. For this reason, the MCSC score is defined along with the DIF concept. Based on the MCSC score, a framework that allows us to systematically investigate the coverage of AOPs is proposed in Ref. As a result, it is estimated that the coverage of AOPs being used in OPR1000 is about 63%. It should be noted that there are a couple of limitations in this study. For example, the precision of the abovementioned coverage depends entirely on that of the SPV model being scrutinized by the proposed framework. This implies that independent reviews by SMEs (Subject Matter Experts) who have sufficient knowledge of both the configuration and operation of NPPs are indispensable for confirming the appropriateness of the suggested framework.
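
    A loud-assumption sketch of the screening-and-coverage idea: the abstract does not define the MCSC score, so a generic importance score stands in for it here, and an AOP is taken to "cover" a critical MCS when every component in that cut set is addressed by some procedure; all names and the threshold are illustrative.

        def aop_coverage(mcs_scores, aop_components, threshold):
            # Screen: keep only MCSs whose (assumed) importance score passes the cut.
            critical = [mcs for mcs, score in mcs_scores.items() if score >= threshold]
            # Cover: an MCS counts as covered if all its components appear in AOPs.
            covered = [mcs for mcs in critical if set(mcs) <= aop_components]
            return len(covered) / len(critical) if critical else 1.0

        mcs_scores = {("PUMP-A", "PUMP-B"): 0.9, ("VALVE-1",): 0.7, ("BUS-2", "DG-1"): 0.2}
        aop_components = {"PUMP-A", "PUMP-B", "VALVE-1"}
        print(aop_coverage(mcs_scores, aop_components, threshold=0.5))  # -> 1.0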

  20. PARALLEL IMPLEMENTATION OF MORPHOLOGICAL PROFILE BASED SPECTRAL-SPATIAL CLASSIFICATION SCHEME FOR HYPERSPECTRAL IMAGERY

    Directory of Open Access Journals (Sweden)

    B. Kumar

    2016-06-01

    Full Text Available Extended morphological profile (EMP) is a good technique for extracting spectral-spatial information from images, but the large size of hyperspectral images is an important concern when creating EMPs. However, with the availability of modern multi-core processors and commodity parallel processing systems like graphics processing units (GPUs) at the desktop level, parallel computing provides a viable option to significantly accelerate execution of such computations. In this paper, a parallel implementation of an EMP based spectral-spatial classification method for hyperspectral imagery is presented. The parallel implementation is done both on multi-core CPU and GPU. The impact of parallelization on speed up and classification accuracy is analyzed. For GPU, the implementation is done in compute unified device architecture (CUDA) C. The experiments are carried out on two well-known hyperspectral images. It is observed from the experimental results that the GPU implementation provides a speed up of about 7 times, while parallel implementation on the multi-core CPU resulted in a speed up of about 3 times. It is also observed that parallel implementation has no adverse impact on the classification accuracy.
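
    As a rough illustration of the multi-core CPU variant, the sketch below computes per-band opening/closing profiles in parallel worker processes; the use of scipy.ndimage and concurrent.futures is my assumption, not the authors' implementation, and the structuring-element sizes are arbitrary.

        import numpy as np
        from scipy import ndimage
        from concurrent.futures import ProcessPoolExecutor

        SIZES = (3, 5, 7)  # structuring-element sizes for the profile

        def band_profile(band):
            # Morphological profile of one band: the band itself plus openings
            # and closings with growing structuring elements.
            layers = [band]
            for s in SIZES:
                layers.append(ndimage.grey_opening(band, size=(s, s)))
                layers.append(ndimage.grey_closing(band, size=(s, s)))
            return np.stack(layers)

        def extended_morphological_profile(bands, workers=4):
            # bands: (n_bands, rows, cols), e.g. a few principal components.
            with ProcessPoolExecutor(max_workers=workers) as pool:
                profiles = list(pool.map(band_profile, bands))
            return np.concatenate(profiles)  # stacked EMP feature cube

        if __name__ == "__main__":
            pcs = np.random.rand(3, 64, 64)
            print(extended_morphological_profile(list(pcs)).shape)  # (21, 64, 64)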

  1. Evolving provider payment models and patient access to innovative medical technology.

    Science.gov (United States)

    Long, Genia; Mortimer, Richard; Sanzenbacher, Geoffrey

    2014-12-01

    Objective: To investigate the evolving use and expected impact of pay-for-performance (P4P) and risk-based provider reimbursement on patient access to innovative medical technology. Methods: Structured interviews with leading private payers representing over 110 million commercially-insured lives exploring current and planned use of P4P provider payment models, evidence requirements for technology assessment and new technology coverage, and the evolving relationship between the two topics. Results: Respondents reported rapid increases in the use of P4P and risk-sharing programs, with roughly half of commercial lives affected 3 years ago, just under two-thirds today, and an expected three-quarters in 3 years. All reported well-established systems for evaluating new technology coverage. Five of nine reported becoming more selective in the past 3 years in approving new technologies; four anticipated that in the next 3 years there will be a higher evidence requirement for new technology access. Similarly, four expected it will become more difficult for clinically appropriate but costly technologies to gain coverage. All reported planning to rely more on these types of provider payment incentives to control costs, but did not see them as a substitute for payer technology reviews and coverage limitations; they each have a role to play. Limitations: Interviews limited to nine leading payers with models in place; self-reported data. Conclusions: Likely implications include a more uncertain payment environment for providers, and indirectly for innovative medical technology and future investment, greater reliance on quality and financial metrics, and increased evidence requirements for favorable coverage and utilization decisions. Increasing provider financial risk may challenge the traditional technology adoption paradigm, where payers assumed a 'gatekeeping' role and providers a countervailing patient advocacy role with regard to access to new technology. Increased provider financial risk may result in an

  2. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C³P), a five year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations?'' As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  3. Diagnostic imaging, a 'parallel' discipline. Can current technology provide a reliable digital diagnostic radiology department

    International Nuclear Information System (INIS)

    Moore, C.J.; Eddleston, B.

    1985-01-01

    Only recently has any detailed criticism been voiced about the practicalities of the introduction of generalised, digital, imaging complexes in diagnostic radiology. Although attendant technological problems are highlighted the authors argue that the fundamental causes of current difficulties are not in the generation but in the processing, filing and subsequent retrieval for display of digital image records. In the real world, looking at images is a parallel process of some complexity and so it is perhaps untimely to expect versatile handling of vast image data bases by existing computer hardware and software which, by their current nature, perform tasks serially. (author)

  4. A PARALLEL EXTENSION OF THE UAL ENVIRONMENT

    International Nuclear Information System (INIS)

    MALITSKY, N.; SHISHLO, A.

    2001-01-01

    The deployment of the Unified Accelerator Library (UAL) environment on the parallel cluster is presented. The approach is based on the Message-Passing Interface (MPI) library and the Perl adapter that allows one to control and mix together the existing conventional UAL components with the new MPI-based parallel extensions. In the paper, we provide timing results and describe the application of the new environment to the SNS Ring complex beam dynamics studies, particularly, simulations of several physical effects, such as space charge, field errors, fringe fields, and others
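
    An illustrative sketch (not UAL code) of the pattern described: a conventional serial tracking component is wrapped so that an MPI layer scatters particles across ranks and gathers them for diagnostics. Requires mpi4py and a launcher such as mpiexec; the tracking kernel is a placeholder.

        import numpy as np
        from mpi4py import MPI

        def track(particles):
            # Placeholder for a conventional (serial) UAL-style element map.
            return particles * 0.999

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        bunch = np.arange(1_000_000, dtype=float) if rank == 0 else None
        chunks = np.array_split(bunch, size) if rank == 0 else None
        local = comm.scatter(chunks, root=0)   # distribute the bunch
        local = track(local)                   # each rank tracks its share
        gathered = comm.gather(local, root=0)  # collect for diagnostics
        if rank == 0:
            print(sum(len(g) for g in gathered), "particles tracked")

    Run with, e.g., mpiexec -n 4 python track_parallel.py (script name hypothetical).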

  5. Parallel quantum computing in a single ensemble quantum computer

    International Nuclear Information System (INIS)

    Long Guilu; Xiao, L.

    2004-01-01

    We propose a parallel quantum computing mode for an ensemble quantum computer. In this mode, some qubits are in pure states while other qubits are in mixed states. It enables a single ensemble quantum computer to perform 'single-instruction, multiple-data' type parallel computation. Parallel quantum computing can provide additional speedup in Grover's algorithm and Shor's algorithm. In addition, it also makes fuller use of qubit resources in an ensemble quantum computer. As a result, some qubits discarded in the preparation of an effective pure state in the Schulman-Vazirani and the Cleve-DiVincenzo algorithms can be reutilized

  6. MARBLE: A system for executing expert systems in parallel

    Science.gov (United States)

    Myers, Leonard; Johnson, Coe; Johnson, Dean

    1990-01-01

    This paper details the MARBLE 2.0 system which provides a parallel environment for cooperating expert systems. The work has been done in conjunction with the development of an intelligent computer-aided design system, ICADS, by the CAD Research Unit of the Design Institute at California Polytechnic State University. MARBLE (Multiple Accessed Rete Blackboard Linked Experts) is a system built on the C Language Integrated Production System (CLIPS) expert system tool. A copied blackboard is used for communication between the shells to establish an architecture which supports cooperating expert systems that execute in parallel. The design of MARBLE is simple, but it provides support for a rich variety of configurations, while making it relatively easy to demonstrate the correctness of its parallel execution features. In its most elementary configuration, individual CLIPS expert systems execute on their own processors and communicate with each other through a modified blackboard. Control of the system as a whole, and specifically of writing to the blackboard, is provided by one of the CLIPS expert systems, an expert control system.
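
    A toy sketch of the copied-blackboard idea, with Python multiprocessing standing in for CLIPS shells: each "expert" process reads a snapshot of the shared blackboard and posts its conclusions, and writes are serialized, echoing MARBLE's control expert. Rules and fact names are invented for illustration.

        from multiprocessing import Manager, Process

        def expert(rule, blackboard, lock):
            # Read a private copy of the blackboard, fire the rule, post results.
            with lock:  # writing to the blackboard is centrally controlled
                facts = dict(blackboard)
                for fact, value in rule(facts):
                    blackboard[fact] = value

        def lighting_rule(facts):
            if facts.get("time") == "night":
                yield "lights", "on"

        def hvac_rule(facts):
            if facts.get("occupied"):
                yield "heating", "comfort"

        if __name__ == "__main__":
            with Manager() as mgr:
                blackboard = mgr.dict({"time": "night", "occupied": True})
                lock = mgr.Lock()
                experts = [Process(target=expert, args=(rule, blackboard, lock))
                           for rule in (lighting_rule, hvac_rule)]
                for p in experts:
                    p.start()
                for p in experts:
                    p.join()
                print(dict(blackboard))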

  7. Design Patterns: establishing a discipline of parallel software engineering

    CERN Multimedia

    CERN. Geneva

    2010-01-01

    Many-core processors present us with a software challenge. We must turn our serial code into parallel code. To accomplish this wholesale transformation of our software ecosystem, we must define what established practice is in parallel programming and then develop tools to support that practice. This leads to design patterns supported by frameworks optimized at runtime with advanced autotuning compilers. In this talk I provide an update of my ongoing research with the ParLab at UC Berkeley to realize this vision. In particular, I will describe our draft parallel pattern language, our early experiments with software frameworks, and the associated runtime optimization tools. About the speaker: Tim Mattson is a parallel programmer (Ph.D. Chemistry, UCSC, 1985). He does linear algebra, finds oil, shakes molecules, solves differential equations, and models electrons in simple atomic systems. He has spent his career working with computer scientists to make sure the needs of parallel applications programmers are met. Tim has ...

  8. A Parallel Prefix Algorithm for Almost Toeplitz Tridiagonal Systems

    Science.gov (United States)

    Sun, Xian-He; Joslin, Ronald D.

    1995-01-01

    A compact scheme is a discretization scheme that is advantageous in obtaining highly accurate solutions. However, the resulting systems from compact schemes are tridiagonal systems that are difficult to solve efficiently on parallel computers. Considering the almost symmetric Toeplitz structure, a parallel algorithm, simple parallel prefix (SPP), is proposed. The SPP algorithm requires less memory than the conventional LU decomposition and is efficient on parallel machines. It consists of a prefix communication pattern and AXPY operations. Both the computation and the communication can be truncated without degrading the accuracy when the system is diagonally dominant. A formal accuracy study has been conducted to provide a simple truncation formula. Experimental results have been measured on a MasPar MP-1 SIMD machine and on a Cray 2 vector machine. Experimental results show that the simple parallel prefix algorithm is a good algorithm for symmetric, almost symmetric Toeplitz tridiagonal systems and for the compact scheme on high-performance computers.
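
    The sketch below illustrates only the prefix communication pattern that gives SPP its name, not the full solver: a Hillis-Steele inclusive scan finishes in O(log n) data-parallel steps, and in SPP the analogous steps carry the recurrence terms of the tridiagonal solve and may be truncated when the system is diagonally dominant.

        import numpy as np

        def inclusive_scan(x):
            # Hillis-Steele scan: log2(n) sweeps, each one a data-parallel update.
            y, shift = x.astype(float).copy(), 1
            while shift < len(y):
                y[shift:] += y[:-shift].copy()  # one data-parallel step
                shift *= 2
            return y

        print(inclusive_scan(np.ones(8)))  # [1. 2. 3. 4. 5. 6. 7. 8.]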

  9. ASME nuclear codes and standards: Scope of coverage and current initiatives

    International Nuclear Information System (INIS)

    Eisenberg, G. M.

    1995-01-01

    The objective of this paper is to address the broad scope of coverage of nuclear codes, standards and guides produced and administered by the American Society of Mechanical Engineers (ASME). Background information is provided regarding the evolution of the present activities. Details are provided on current initiatives intended to permit ASME to meet the needs of a changing nuclear industry on a worldwide scale. During the early years of commercial nuclear power, ASME produced a code for the construction of nuclear vessels used in the reactor coolant pressure boundary, containment and auxiliary systems. In response to industry growth, ASME Code coverage soon broadened to include rules for construction of other nuclear components, and inservice inspection of nuclear reactor coolant systems. In the years following this, the scope of ASME nuclear codes, standards and guides has been broadened significantly to include air cleaning activities for nuclear power reactors, operation and maintenance of nuclear power plants, quality assurance programs, cranes for nuclear facilities, qualification of mechanical equipment, and concrete reactor vessels and containments. ASME focuses on globalization of its codes, standards and guides by encouraging and promoting their use in the international community and by actively seeking participation of international members on its technical and supervisory committees and in accreditation activities. Details are provided on current international representation. Initiatives are underway to separate the technical requirements from administrative and enforcement requirements, to convert to hard metric units, to provide for non-U. S. materials, and to provide for translations into non-English languages. ASME activity as an accredited ISO 9000 registrar for suppliers of mechanical equipment is described. Rules are being developed for construction of containment systems for nuclear spent fuel and high-level waste transport packagings. Intensive

  10. Performance assessment of the SIMFAP parallel cluster at IFIN-HH Bucharest

    International Nuclear Information System (INIS)

    Adam, Gh.; Adam, S.; Ayriyan, A.; Dushanov, E.; Hayryan, E.; Korenkov, V.; Lutsenko, A.; Mitsyn, V.; Sapozhnikova, T.; Sapozhnikov, A; Streltsova, O.; Buzatu, F.; Dulea, M.; Vasile, I.; Sima, A.; Visan, C.; Busa, J.; Pokorny, I.

    2008-01-01

    Performance assessment and case study outputs of the parallel SIMFAP cluster at IFIN-HH Bucharest point to its effective and reliable operation. A comparison with results on the supercomputing system in LIT-JINR Dubna adds insight on resource allocation for problem solving by parallel computing. The solution of models asking for very large numbers of knots in the discretization mesh needs the migration to high performance computing based on parallel cluster architectures. The acquisition of ready-to-use parallel computing facilities being beyond limited budgetary resources, the solution at IFIN-HH was to buy the hardware and the inter-processor network, and to implement by own efforts the open software concerning both the operating system and the parallel computing standard. The present paper provides a report demonstrating the successful solution of these tasks. The implementation of the well-known HPL (High Performance LINPACK) Benchmark points to the effective and reliable operation of the cluster. The comparison of HPL outputs obtained on parallel clusters of different magnitudes shows that there is an optimum range of the order N of the linear algebraic system over which a given parallel cluster provides optimum parallel solutions. For the SIMFAP cluster, this range can be inferred to correspond to about 1 to 2 x 10^4 linear algebraic equations. For an algorithm of polynomial complexity N^α, the task sharing among p processors within a parallel solution mainly follows an (N/p)^α behaviour under peak performance achievement. Thus, while the problem complexity remains the same, a substantial decrease of the coefficient of the leading order of the polynomial complexity is achieved. (authors)
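
    A back-of-the-envelope use of the (N/p)^α scaling noted above, with invented constants: at peak performance the predicted wall time falls roughly as t(p) ≈ c·(N/p)^α, so the leading-order coefficient shrinks by p^α while the complexity order stays the same.

        def predicted_time(N, p, alpha=3.0, c=1e-9):
            # alpha=3 suits a dense LU-type solve; c is an arbitrary machine constant.
            return c * (N / p) ** alpha

        N = 20_000  # inside the cluster's optimum range of ~1-2 x 10^4 equations
        for p in (1, 4, 16):
            print(p, round(predicted_time(N, p), 3), "s (model)")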

  11. Health-financing reforms in southeast Asia: challenges in achieving universal coverage.

    Science.gov (United States)

    Tangcharoensathien, Viroj; Patcharanarumol, Walaiporn; Ir, Por; Aljunid, Syed Mohamed; Mukti, Ali Ghufron; Akkhavong, Kongsap; Banzon, Eduardo; Huong, Dang Boi; Thabrany, Hasbullah; Mills, Anne

    2011-03-05

    In this sixth paper of the Series, we review health-financing reforms in seven countries in southeast Asia that have sought to reduce dependence on out-of-pocket payments, increase pooled health finance, and expand service use as steps towards universal coverage. Laos and Cambodia, both resource-poor countries, have mostly relied on donor-supported health equity funds to reach the poor, and reliable funding and appropriate identification of the eligible poor are two major challenges for nationwide expansion. For Thailand, the Philippines, Indonesia, and Vietnam, social health insurance financed by payroll tax is commonly used for formal sector employees (excluding Malaysia), with varying outcomes in terms of financial protection. Alternative payment methods have different implications for provider behaviour and financial protection. Two alternative approaches for financial protection of the non-poor outside the formal sector have emerged-contributory arrangements and tax-financed schemes-with different abilities to achieve high population coverage rapidly. Fiscal space and mobilisation of payroll contributions are both important in accelerating financial protection. Expanding coverage of good-quality services and ensuring adequate human resources are also important to achieve universal coverage. As health-financing reform is complex, institutional capacity to generate evidence and inform policy is essential and should be strengthened. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. 42 CFR 435.350 - Coverage for certain aliens.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Coverage for certain aliens. 435.350 Section 435... ISLANDS, AND AMERICAN SAMOA Optional Coverage of the Medically Needy § 435.350 Coverage for certain aliens... treatment of an emergency medical condition, as defined in § 440.255(c) of this chapter, to those aliens...

  13. A class of parallel algorithms for computation of the manipulator inertia matrix

    Science.gov (United States)

    Fijany, Amir; Bejczy, Antal K.

    1989-01-01

    Parallel and parallel/pipeline algorithms for computation of the manipulator inertia matrix are presented. An algorithm based on the composite rigid-body spatial inertia method, which provides better features for parallelization, is used for the computation of the inertia matrix. Two parallel algorithms are developed which achieve the time lower bound in computation. Also described is the mapping of these algorithms with topological variation on a two-dimensional processor array, with nearest-neighbor connection, and with cardinality variation on a linear processor array. An efficient parallel/pipeline algorithm for the linear array was also developed, achieving significantly higher efficiency.

  14. Parallel hyperbolic PDE simulation on clusters: Cell versus GPU

    Science.gov (United States)

    Rostrup, Scott; De Sterck, Hans

    2010-12-01

    Increasingly, high-performance computing is looking towards data-parallel computational devices to enhance computational performance. Two technologies that have received significant attention are IBM's Cell Processor and NVIDIA's CUDA programming model for graphics processing unit (GPU) computing. In this paper we investigate the acceleration of parallel hyperbolic partial differential equation simulation on structured grids with explicit time integration on clusters with Cell and GPU backends. The message passing interface (MPI) is used for communication between nodes at the coarsest level of parallelism. Optimizations of the simulation code at the several finer levels of parallelism that the data-parallel devices provide are described in terms of data layout, data flow and data-parallel instructions. Optimized Cell and GPU performance are compared with reference code performance on a single x86 central processing unit (CPU) core in single and double precision. We further compare the CPU, Cell and GPU platforms on a chip-to-chip basis, and compare performance on single cluster nodes with two CPUs, two Cell processors or two GPUs in a shared memory configuration (without MPI). We finally compare performance on clusters with 32 CPUs, 32 Cell processors, and 32 GPUs using MPI. Our GPU cluster results use NVIDIA Tesla GPUs with GT200 architecture, but some preliminary results on recently introduced NVIDIA GPUs with the next-generation Fermi architecture are also included. This paper provides computational scientists and engineers who are considering porting their codes to accelerator environments with insight into how structured grid based explicit algorithms can be optimized for clusters with Cell and GPU accelerators. It also provides insight into the speed-up that may be gained on current and future accelerator architectures for this class of applications. Program summary: Program title: SWsolver; Catalogue identifier: AEGY_v1_0.
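
    A minimal sketch (unrelated to SWsolver's internals) of the kind of kernel being accelerated: one explicit upwind step for 1D linear advection on a structured grid. On Cell or GPU clusters the same update is expressed with data-parallel instructions, with MPI halo exchanges at subdomain boundaries.

        import numpy as np

        def upwind_step(u, c, dt, dx):
            # Explicit first-order upwind update for u_t + c u_x = 0 (c > 0).
            un = u.copy()
            un[1:] = u[1:] - c * dt / dx * (u[1:] - u[:-1])  # vectorized interior
            un[0] = un[-1]  # periodic boundary; stands in for an MPI halo exchange
            return un

        x = np.linspace(0.0, 1.0, 200)
        u = np.where(np.abs(x - 0.5) < 0.1, 1.0, 0.0)  # square pulse
        for _ in range(100):
            u = upwind_step(u, c=1.0, dt=0.002, dx=x[1] - x[0])  # CFL ~ 0.4
        print(float(u.max()))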

  15. Template based parallel checkpointing in a massively parallel computer system

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
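
    A conceptual sketch of the template comparison, under stated assumptions: only blocks whose checksums differ from a previously stored template are kept, in the spirit of the rsync-like scheme the patent describes; the block size and the use of MD5 are arbitrary choices.

        import hashlib

        BLOCK = 4096

        def block_checksums(data):
            return [hashlib.md5(data[i:i + BLOCK]).digest()
                    for i in range(0, len(data), BLOCK)]

        def delta_checkpoint(node_state, template):
            # Keep only the blocks that differ from the template checkpoint file.
            tmpl = block_checksums(template)
            delta = {}
            for i, digest in enumerate(block_checksums(node_state)):
                if i >= len(tmpl) or digest != tmpl[i]:
                    delta[i] = node_state[i * BLOCK:(i + 1) * BLOCK]
            return delta

        template = bytes(16 * BLOCK)
        state = bytearray(template)
        state[5 * BLOCK] = 0xFF  # dirty exactly one block
        print(sorted(delta_checkpoint(bytes(state), template)))  # -> [5]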

  16. Dental Use and Expenditures for Older Uninsured Americans: The Simulated Impact of Expanded Coverage

    Science.gov (United States)

    Manski, Richard J; Moeller, John F; Chen, Haiyan; Schimmel, Jody; Pepper, John V; St Clair, Patricia A

    2015-01-01

    Objective To determine if providing dental insurance to older Americans would close the current gaps in dental use and expenditure between insured and uninsured older Americans. Data Sources/Study Setting We used data from the 2008 Health and Retirement Survey (HRS) supplemented by data from the 2006 Medical Expenditure Panel Survey (MEPS). Study Design We compared the simulated dental use and expenditures rates of newly insured persons against the corresponding rates for those previously insured. Data Collection/Extraction Methods The HRS is a nationally representative survey administered by the Institute for Social Research (ISR). The MEPS is a nationally representative household survey sponsored by the Agency for Healthcare Research and Quality (AHRQ). Principal Findings We found that expanding dental coverage to older uninsured Americans would close previous gaps in dental use and expense between uninsured and insured noninstitutionalized Americans 55 years and older. Conclusions Providing dental coverage to previously uninsured older adults would produce estimated monthly costs net of markups for administrative costs that comport closely to current market rates. Estimates also suggest that the total cost of providing dental coverage targeted specifically to nonusers of dental care may be less than similar costs for prior users. PMID:25040355

  17. Coverage and Compliance of Mass Drug Administration in Lymphatic Filariasis: A Comparative Analysis in a District of West Bengal, India

    Directory of Open Access Journals (Sweden)

    Tanmay Kanti Panja

    2012-01-01

    Full Text Available Background: Despite several rounds of Mass Drug Administration (MDA) as an elimination strategy for Lymphatic Filariasis (LF) in India, coverage still falls far behind the required level of 85%. Objectives: The present study was carried out with the objectives to assess the coverage and compliance of MDA and their possible determinants. Methods: A cross-sectional community based study was conducted in Paschim Midnapur district of West Bengal, India for two consecutive years following MDA. Study participants were chosen by the 30-cluster sampling technique. Data were collected using a pre-tested semi-structured proforma to assess the coverage and compliance of MDA along with possible determinants of not attaining the expected coverage. Results: In the year 2009, coverage, compliance, coverage compliance gap (CCG) and effective coverage were seen to be 84.1%, 70.5%, 29.5% and 59.3% respectively. In 2010, the results further deteriorated to 78.5%, 66.9%, 33.3% and 57% respectively. The poor coverage and compliance were attributed to improper training of service providers and lack of community awareness regarding MDA. Conclusion: The study emphasized supervised consumption, retraining of service providers before MDA activities, and strengthening the behaviour change communication strategy for community awareness. Advocacy by the program managers and policy makers towards prioritization of the MDA program will make the story of filaria elimination a success.

  18. Parallelization Experience with Four Canonical Econometric Models Using ParMitISEM

    NARCIS (Netherlands)

    N. Basturk (Nalan); S. Grassi (Stefano); L.F. Hoogerheide (Lennart); H.K. van Dijk (Herman)

    2016-01-01

    This paper presents the parallel computing implementation of the MitISEM algorithm, labeled Parallel MitISEM. The basic MitISEM algorithm, introduced by Hoogerheide, Opschoor and Van Dijk (2012), provides an automatic and flexible method to approximate a non-elliptical target density.

  19. Parallel embedded systems: where real-time and low-power meet

    DEFF Research Database (Denmark)

    Karakehayov, Zdravko; Guo, Yu

    2008-01-01

    This paper introduces a combination of models and proofs for optimal power management via Dynamic Frequency Scaling and Dynamic Voltage Scaling. The approach is suitable for systems on a chip or microcontrollers where processors run in parallel with embedded peripherals. We have developed a software tool, called CASTLE, to provide computer assistance in the design process of energy-aware embedded systems. The tool considers single processor and parallel architectures. An example shows an energy reduction of 23% when the tool allocates two microcontrollers for parallel execution.

  20. A multitransputer parallel processing system (MTPPS)

    International Nuclear Information System (INIS)

    Jethra, A.K.; Pande, S.S.; Borkar, S.P.; Khare, A.N.; Ghodgaonkar, M.D.; Bairi, B.R.

    1993-01-01

    This report describes the design and implementation of a 16 node Multi Transputer Parallel Processing System (MTPPS) which is a platform for parallel program development. It is a MIMD machine based on the message-passing paradigm. The basic compute engine is an Inmos Transputer IMS T800-20. A transputer with local memory constitutes the processing element (NODE) of this MIMD architecture. Multiple NODES can be connected to each other in an identifiable network topology through the high speed serial links of the transputer. A Network Configuration Unit (NCU) incorporates the necessary hardware to provide software controlled network configuration. The system is modularly expandable and more NODES can be added to achieve the required processing power. The system is a backend to the IBM-PC, which has been integrated into the system to provide the user I/O interface. PC resources are available to the programmer. Interface hardware between the PC and the network of transputers is INMOS compatible. Therefore, all the commercially available development software compatible with INMOS products can run on this system. While giving the details of design and implementation, this report briefly summarises MIMD architectures, transputer architecture and parallel processing software development issues. LINPACK performance evaluation of the system and solutions of neutron physics and plasma physics problems have been discussed along with results. (author). 12 refs., 22 figs., 3 tabs., 3 appendixes

  1. More Rhode Island Adults Have Dental Coverage After the Medicaid Expansion: Did More Adults Receive Dental Services? Did More Dentists Provide Services?

    Science.gov (United States)

    Zwetchkenbaum, Samuel; Oh, Junhie

    2017-10-02

    Under the Affordable Care Act (ACA) Medicaid expansion since 2014, 68,000 more adults under age 65 years were enrolled in Rhode Island Medicaid as of December 2015, a 78% increase from 2013 enrollment. This report assesses changes in dental utilization associated with this expansion. Medicaid enrollment and dental claims for calendar years 2012-2015 were extracted from the RI Medicaid Management Information System. Among adults aged 18-64 years, annual numbers and percentages of Medicaid enrollees who received any dental service were summarized. Additionally, dental service claims were assessed by provider type (private practice or health center). Although 15,000 more adults utilized dental services by the end of 2015, the annual percentage of Medicaid enrollees who received any dental services decreased over the reporting periods, compared to pre-ACA years (2012-13: 39%, 2014: 35%, 2015: 32%). From 2012 to 2015, dental patient increases in community health centers were larger than in private dental offices (78% vs. 34%). Contrary to the Medicaid population increase, the number of dentists that submitted Medicaid claims decreased, particularly among dentists in private dental offices; the percentage of RI private dentists who provided any dental service to adult Medicaid enrollees decreased from 29% in 2012 to 21% in 2015. Implementation of Medicaid expansion has played a critical role in increasing the number of Rhode Islanders with dental coverage, particularly among low-income adults under age 65. However, policymakers must address the persistent and worsening shortage of dental providers that accept Medicaid to provide a more accessible source of oral healthcare for all Rhode Islanders. [Full article available at http://rimed.org/rimedicaljournal-2017-10.asp].

  2. A road map for universal coverage: finding a pass through the financial mountains.

    Science.gov (United States)

    Sessions, Samuel Y; Lee, Philip R

    2008-04-01

    Government already pays for more than half of U.S. health care costs, and nearly all universal health insurance proposals assume continued government involvement through tax subsidies and other means. The question of what specific taxes could be used to finance universal coverage is, however, seldom carefully examined, in part due to efforts by health care reform proponents to downplay tax issues. In this article we undertake such an examination. We argue that the challenges of relying on taxes for universal coverage are even greater than is generally appreciated, but that they can nevertheless be met. A proposal to fund a universal health insurance voucher system with a value-added tax illustrates issues that would arise for tax-financed plans in general and provides a broad framework for a bipartisan approach to universal coverage. We discuss significant problems that such an approach would face and suggest solutions. We outline a long-term political and legislative strategy for enacting universal coverage that draws upon precedents set by comparable legislative initiatives, including tax reform and Medicare. The results are an improved understanding of the relationship between systemic health care finance reform and taxation and a politically realistic plan for universal coverage that employs undisguised taxes.

  3. Dansylation isotope labeling liquid chromatography mass spectrometry for parallel profiling of human urinary and fecal submetabolomes

    Energy Technology Data Exchange (ETDEWEB)

    Su, Xiaoling [State Key Laboratory and Collaborative Innovation Center for Diagnosis and Treatment of Infectious Diseases, The First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou 310003 (China); Wang, Nan [State Key Laboratory and Collaborative Innovation Center for Diagnosis and Treatment of Infectious Diseases, The First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou 310003 (China); Department of Chemistry, University of Alberta, Edmonton, Alberta T6G 2G2 (Canada); Chen, Deying [State Key Laboratory and Collaborative Innovation Center for Diagnosis and Treatment of Infectious Diseases, The First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou 310003 (China); Li, Yunong [Department of Chemistry, University of Alberta, Edmonton, Alberta T6G 2G2 (Canada); Lu, Yingfeng [State Key Laboratory and Collaborative Innovation Center for Diagnosis and Treatment of Infectious Diseases, The First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou 310003 (China); Huan, Tao [Department of Chemistry, University of Alberta, Edmonton, Alberta T6G 2G2 (Canada); Xu, Wei [State Key Laboratory and Collaborative Innovation Center for Diagnosis and Treatment of Infectious Diseases, The First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou 310003 (China); Li, Liang, E-mail: Liang.Li@ualberta.ca [State Key Laboratory and Collaborative Innovation Center for Diagnosis and Treatment of Infectious Diseases, The First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou 310003 (China); Department of Chemistry, University of Alberta, Edmonton, Alberta T6G 2G2 (Canada); Li, Lanjuan, E-mail: ljli@zju.edu.cn [State Key Laboratory and Collaborative Innovation Center for Diagnosis and Treatment of Infectious Diseases, The First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou 310003 (China)

    2016-01-15

    Human urine and feces can be non-invasively collected for metabolomics-based disease biomarker discovery research. Because urinary and fecal metabolomes are thought to be different, analysis of both biospecimens may generate a more comprehensive metabolomic profile that can be better related to the health state of an individual. Herein we describe a method of using differential chemical isotope labeling (CIL) liquid chromatography mass spectrometry (LC-MS) for parallel metabolomic profiling of urine and feces. Dansylation labeling was used to quantify the amine/phenol submetabolome changes among different samples based on ¹²C-labeling of individual samples and ¹³C-labeling of a pooled urine or pooled feces and subsequent analysis of the ¹³C-/¹²C-labeled mixture by LC-MS. The pooled urine and pooled feces are further differentially labeled, mixed and then analyzed by LC-MS in order to relate the metabolite concentrations of the common metabolites found in both biospecimens. This method offers a means of direct comparison of urinary and fecal submetabolomes. We evaluated the analytical performance and demonstrated the utility of this method in the analysis of urine and feces collected daily from three healthy individuals for 7 days. On average, 2534 ± 113 (n = 126) peak pairs or metabolites could be detected from a urine sample, while 2507 ± 77 (n = 63) peak pairs were detected from a fecal sample. In total, 5372 unique peak pairs were detected from all the samples combined; 3089 and 3012 pairs were found in urine and feces, respectively. These results reveal that the urine and fecal metabolomes are very different, thereby justifying the consideration of using both biospecimens to increase the probability of finding specific biomarkers of diseases. Furthermore, the CIL LC-MS method described can be used to perform parallel quantitative analysis of urine and feces, resulting in more complete coverage of the human metabolome.

  4. Dansylation isotope labeling liquid chromatography mass spectrometry for parallel profiling of human urinary and fecal submetabolomes

    International Nuclear Information System (INIS)

    Su, Xiaoling; Wang, Nan; Chen, Deying; Li, Yunong; Lu, Yingfeng; Huan, Tao; Xu, Wei; Li, Liang; Li, Lanjuan

    2016-01-01

    Human urine and feces can be non-invasively collected for metabolomics-based disease biomarker discovery research. Because urinary and fecal metabolomes are thought to be different, analysis of both biospecimens may generate a more comprehensive metabolomic profile that can be better related to the health state of an individual. Herein we describe a method of using differential chemical isotope labeling (CIL) liquid chromatography mass spectrometry (LC-MS) for parallel metabolomic profiling of urine and feces. Dansylation labeling was used to quantify the amine/phenol submetabolome changes among different samples based on ¹²C-labeling of individual samples and ¹³C-labeling of a pooled urine or pooled feces and subsequent analysis of the ¹³C-/¹²C-labeled mixture by LC-MS. The pooled urine and pooled feces are further differentially labeled, mixed and then analyzed by LC-MS in order to relate the metabolite concentrations of the common metabolites found in both biospecimens. This method offers a means of direct comparison of urinary and fecal submetabolomes. We evaluated the analytical performance and demonstrated the utility of this method in the analysis of urine and feces collected daily from three healthy individuals for 7 days. On average, 2534 ± 113 (n = 126) peak pairs or metabolites could be detected from a urine sample, while 2507 ± 77 (n = 63) peak pairs were detected from a fecal sample. In total, 5372 unique peak pairs were detected from all the samples combined; 3089 and 3012 pairs were found in urine and feces, respectively. These results reveal that the urine and fecal metabolomes are very different, thereby justifying the consideration of using both biospecimens to increase the probability of finding specific biomarkers of diseases. Furthermore, the CIL LC-MS method described can be used to perform parallel quantitative analysis of urine and feces, resulting in more complete coverage of the human metabolome.

  5. Environmental conditions in health care facilities in low- and middle-income countries: Coverage and inequalities.

    Science.gov (United States)

    Cronk, Ryan; Bartram, Jamie

    2018-04-01

    Safe environmental conditions and the availability of standard precaution items are important to prevent and treat infection in health care facilities (HCFs) and to achieve Sustainable Development Goal (SDG) targets for health and water, sanitation, and hygiene. Baseline coverage estimates for HCFs have yet to be formed for the SDGs; and there is little evidence describing inequalities in coverage. To address this, we produced the first coverage estimates of environmental conditions and standard precaution items in HCFs in low- and middle-income countries (LMICs); and explored factors associated with low coverage. Data from monitoring reports and peer-reviewed literature were systematically compiled; and information on conditions, service levels, and inequalities tabulated. We used logistic regression to identify factors associated with low coverage. Data for 21 indicators of environmental conditions and standard precaution items were compiled from 78 LMICs which were representative of 129,557 HCFs. 50% of HCFs lack piped water, 33% lack improved sanitation, 39% lack handwashing soap, 39% lack adequate infectious waste disposal, 73% lack sterilization equipment, and 59% lack reliable energy services. Using nationally representative data from six countries, 2% of HCFs provide all four of water, sanitation, hygiene, and waste management services. Statistically significant inequalities in coverage exist between HCFs by: urban-rural setting, managing authority, facility type, and sub-national administrative unit. We identified important, previously undocumented inequalities and environmental health challenges faced by HCFs in LMICs. The information and analyses provide evidence for those engaged in improving HCF conditions to develop evidence-based policies and efficient programs, enhance service delivery systems, and make better use of available resources. Copyright © 2018 The Authors. Published by Elsevier GmbH. All rights reserved.

  6. [Coverage by health insurance or discount cards: a household survey in the coverage area of the Family Health Strategy].

    Science.gov (United States)

    Fontenelle, Leonardo Ferreira; Camargo, Maria Beatriz Junqueira de; Bertoldi, Andréa Dâmaso; Gonçalves, Helen; Maciel, Ethel Leonor Noia; Barros, Aluísio J D

    2017-10-26

    This study was designed to assess the reasons for health insurance coverage in a population covered by the Family Health Strategy in Brazil. We describe overall health insurance coverage and according to types, and analyze its association with health-related and socio-demographic characteristics. Among the 31.3% of persons (95%CI: 23.8-39.9) who reported "health insurance" coverage, 57.0% (95%CI: 45.2-68.0) were covered only by discount cards, which do not offer any kind of coverage for medical care, but only discounts in pharmacies, clinics, and hospitals. Both for health insurance and discount cards, the most frequently cited reasons for such coverage were "to be on the safe side" and "to receive better care". Both types of coverage were associated statistically with age (+65 vs. 15-24 years: adjusted odds ratios, aOR = 2.98, 95%CI: 1.28-6.90; and aOR = 3.67; 95%CI: 2.22-6.07, respectively) and socioeconomic status (additional standard deviation: aOR = 2.25, 95%CI: 1.62-3.14; and aOR = 1.96, 95%CI: 1.34-2.97). In addition, health insurance coverage was associated with schooling (aOR = 7.59, 95%CI: 4.44-13.00) for complete University Education and aOR = 3.74 (95%CI: 1.61-8.68) for complete Secondary Education, compared to less than complete Primary Education. Meanwhile, neither health insurance nor discount card was associated with health status or number of diagnosed diseases. In conclusion, studies that aim to assess private health insurance should be planned to distinguish between discount cards and formal health insurance.

  7. Providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer

    Energy Technology Data Exchange (ETDEWEB)

    Archer, Charles J.; Faraj, Daniel A.; Inglett, Todd A.; Ratterman, Joseph D.

    2018-01-30

    Methods, apparatus, and products are disclosed for providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: receiving a network packet in a compute node, the network packet specifying a destination compute node; selecting, in dependence upon the destination compute node, at least one of the links for the compute node along which to forward the network packet toward the destination compute node; and forwarding the network packet along the selected link to the adjacent compute node connected to the compute node through the selected link.
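
    A hedged toy version of the link-selection step: the global combining network is tree-like, so this sketch uses heap indexing (children of node n are 2n and 2n+1) as a stand-in for the patent's topology, climbing toward the root until the destination lies in a child subtree.

        def select_link(node, dest):
            # Return which link of `node` a packet for `dest` should take.
            d = dest
            while d > node:
                if d // 2 == node:  # dest sits in one of our child subtrees
                    return "child-left" if d % 2 == 0 else "child-right"
                d //= 2
            return "deliver" if d == node else "parent"

        print(select_link(node=2, dest=11))  # 11 -> 5 -> 2, so "child-right"
        print(select_link(node=2, dest=3))   # not in our subtree: "parent"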

  8. Mobile-robot navigation with complete coverage of unstructured environments

    OpenAIRE

    García Armada, Elena; González de Santos, Pablo

    2004-01-01

    There are some mobile-robot applications that require the complete coverage of an unstructured environment. Examples are humanitarian de-mining and floor-cleaning tasks. A complete-coverage algorithm is then used, a path-planning technique that allows the robot to pass over all points in the environment, avoiding unknown obstacles. Different coverage algorithms exist, but they fail when working in unstructured environments. This paper details a complete-coverage algorithm for unstructured environments.

  9. A model for optimizing file access patterns using spatio-temporal parallelism

    Energy Technology Data Exchange (ETDEWEB)

    Boonthanome, Nouanesengsy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Patchett, John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Geveci, Berk [Kitware Inc., Clifton Park, NY (United States); Ahrens, James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bauer, Andy [Kitware Inc., Clifton Park, NY (United States); Chaudhary, Aashish [Kitware Inc., Clifton Park, NY (United States); Miller, Ross G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Shipman, Galen M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Williams, Dean N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-01-01

    For many years now, I/O read time has been recognized as the primary bottleneck for parallel visualization and analysis of large-scale data. In this paper, we introduce a model that can estimate the read time for a file stored in a parallel filesystem when given the file access pattern. Read times ultimately depend on how the file is stored and the access pattern used to read the file. The file access pattern will be dictated by the type of parallel decomposition used. We employ spatio-temporal parallelism, which combines both spatial and temporal parallelism, to provide greater flexibility to possible file access patterns. Using our model, we were able to configure the spatio-temporal parallelism to design optimized read access patterns that resulted in a speedup factor of approximately 400 over traditional file access patterns.
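
    A toy version of such a model, with assumed parameters: the time for an access pattern is estimated as per-request latency plus bytes over bandwidth, with the requests divided among the readers that the spatio-temporal decomposition provides and the shared bandwidth split between them.

        def estimated_read_time(requests, latency_s=0.01, bw_bytes_s=2e9, readers=1):
            # requests: byte sizes of the reads issued by the access pattern.
            per_reader = [requests[i::readers] for i in range(readers)]
            return max((len(r) * latency_s + sum(r) / (bw_bytes_s / readers)
                        for r in per_reader), default=0.0)

        contiguous = [256 * 2**20]      # one 256 MiB read
        strided = [2**20] * 256         # 256 separate 1 MiB reads
        print(estimated_read_time(contiguous))          # latency paid once
        print(estimated_read_time(strided, readers=8))  # latency paid per request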

  10. 76 FR 7767 - Student Health Insurance Coverage

    Science.gov (United States)

    2011-02-11

    ... Student Health Insurance Coverage AGENCY: Centers for Medicare & Medicaid Services (CMS), HHS. ACTION... health insurance coverage under the Public Health Service Act and the Affordable Care Act. The proposed rule would define ``student health insurance coverage''...

  11. Parallelization experience with four canonical econometric models using ParMitISEM

    NARCIS (Netherlands)

    Baştürk, N.; Grassi, S.; Hoogerheide, L.; van Dijk, H.K.

    2016-01-01

    This paper presents the parallel computing implementation of the MitISEM algorithm, labeled Parallel MitISEM. The basic MitISEM algorithm, introduced by Hoogerheide et al. (2012), provides an automatic and flexible method to approximate a non-elliptical target density using adaptive mixtures of Student-t densities.

  12. Introduction to parallel programming

    CERN Document Server

    Brawer, Steven

    1989-01-01

    Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race

  13. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  14. Chinese newspaper coverage of (unproven) stem cell therapies and their providers.

    Science.gov (United States)

    Ogbogu, Ubaka; Du, Li; Rachul, Christen; Bélanger, Lisa; Caulfield, Timothy

    2013-04-01

    China is a primary destination for stem cell tourism, the phenomenon whereby patients travel abroad to receive unproven stem cell-based treatments that have not been approved in their home countries. Yet, much remains unknown about the state of the stem cell treatment industry in China and about how the Chinese view treatments and providers. Given the media's crucial role in science/health communication and in framing public dialogue, this study sought to examine Chinese newspaper portrayal and perceptions of stem cell treatments and their providers. Based on a content analysis of over 300 newspaper articles, the study revealed that while Chinese newspaper reporting is generally neutral in tone, it is also inaccurate, overly positive, heavily influenced by "interested" treatment providers and focused on the therapeutic uses of stem cells to address the health needs of the local population. The study findings suggest a need to counterbalance providers' influence on media reporting through strategies that encourage media uptake of accurate information about stem cell research and treatments.

  15. Cholera in Haiti: Reproductive numbers and vaccination coverage estimates

    Science.gov (United States)

    Mukandavire, Zindoga; Smith, David L.; Morris, J. Glenn, Jr.

    2013-01-01

    Cholera reappeared in Haiti in October, 2010 after decades of absence. Cases were first detected in Artibonite region and in the ensuing months the disease spread to every department in the country. The rate of increase in the number of cases at the start of epidemics provides valuable information about the basic reproductive number (R0). Quantitative analysis of such data gives useful information for planning and evaluating disease control interventions, including vaccination. Using a mathematical model, we fitted data on the cumulative number of reported hospitalized cholera cases in Haiti. R0 varied by department, ranging from 1.06 to 2.63. At a national level, 46% vaccination coverage would result in an effective reproductive number below one. Given the demonstrated effectiveness of oral cholera vaccines in endemic and non-endemic regions, our results suggest that moderate cholera vaccine coverage would be an important element of disease control in Haiti.
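
    A worked example of the standard relation behind such estimates (my illustration; the paper's model is more detailed): the critical coverage that pushes the effective reproductive number below one is Vc = (1 - 1/R0)/VE for a vaccine with efficacy VE, whose value here is an assumption.

        def critical_coverage(r0, efficacy=0.78):
            # Herd-protection threshold adjusted for an imperfect vaccine.
            return (1.0 - 1.0 / r0) / efficacy

        for r0 in (1.06, 2.63):  # departmental extremes reported above
            print(f"R0 = {r0}: vaccinate {critical_coverage(r0):.0%}")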

  16. The commercial health insurance industry in an era of eroding employer coverage.

    Science.gov (United States)

    Robinson, James C

    2006-01-01

    This paper analyzes the commercial health insurance industry in an era of weakening employer commitment to providing coverage and strengthening interest by public programs to offer coverage through private plans. It documents the willingness of the industry to accept erosion of employment-based enrollment rather than to sacrifice earnings, the movement of Medicaid beneficiaries into managed care, and the distribution of market shares in the employment-based, Medicaid, and Medicare markets. The profitability of the commercial health insurance industry, exceptionally strong over the past five years, will henceforth be linked to the budgetary cycles and political fluctuations of state and federal governments.

  17. Media Coverage of Nuclear Energy after Fukushima

    International Nuclear Information System (INIS)

    Oltra, C.; Roman, P.; Prades, A.

    2013-01-01

    This report presents the main findings of a content analysis of printed media coverage of nuclear energy in Spain before and after the Fukushima accident. Our main objective is to understand the changes in the presentation of nuclear fission and nuclear fusion as a result of the accident in Japan. We specifically analyze the volume of coverage and thematic content in the media coverage for nuclear fusion from a sample of Spanish print articles in more than 20 newspapers from 2008 to 2012. We also analyze the media coverage of nuclear energy (fission) in three main Spanish newspapers one year before and one year after the accident. The results illustrate how the media contributed to the presentation of nuclear power in the months before and after the accident. This could have implications for the public understanding of nuclear power. (Author)

  18. Media Coverage of Nuclear Energy after Fukushima

    Energy Technology Data Exchange (ETDEWEB)

    Oltra, C.; Roman, P.; Prades, A.

    2013-07-01

    This report presents the main findings of a content analysis of printed media coverage of nuclear energy in Spain before and after the Fukushima accident. Our main objective is to understand the changes in the presentation of nuclear fission and nuclear fusion as a result of the accident in Japan. We specifically analyze the volume of coverage and thematic content in the media coverage for nuclear fusion from a sample of Spanish print articles in more than 20 newspapers from 2008 to 2012. We also analyze the media coverage of nuclear energy (fission) in three main Spanish newspapers one year before and one year after the accident. The results illustrate how the media contributed to the presentation of nuclear power in the months before and after the accident. This could have implications for the public understanding of nuclear power. (Author)

  19. Patient choice of providers in a preferred provider organization.

    Science.gov (United States)

    Wouters, A V; Hester, J

    1988-03-01

    This article is an analysis of patient choice of providers by the employees of the Security Pacific Bank of California and their dependents who have access to the Med Network Preferred Provider Organization (PPO). The empirical results show that not only is the PPO used by individuals who require relatively little medical care (as measured by predicted office visit charges) but that the PPO is most intensively used for low-risk services such as treatment for minor illness and preventive care. Also, the most likely Security Pacific Health Care beneficiary to use a PPO provider is a recently hired employee who lives in the south urban region, has a relatively low income, does not have supplemental insurance coverage, and is without previous attachments to non-PPO primary care providers. In order to maximize their ability to reduce plan paid benefits, insurers who contract with PPOs should focus on increasing PPO utilization among poorer health risks.

  20. Multi-petascale highly efficient parallel supercomputer

    Science.gov (United States)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen-Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2018-05-15

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaflop scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC). The ASIC nodes are interconnected by a five-dimensional torus network that maximizes the throughput of packet communications between nodes and minimizes latency. The network implements a collective network and a global asynchronous network that provides global barrier and notification functions. The node design also integrates a list-based prefetcher. The memory system implements transactional memory, thread-level speculation, and a multiversioning cache that improves the soft error rate while supporting DMA functionality for parallel message passing.

  1. Improving parallel imaging by jointly reconstructing multi-contrast data.

    Science.gov (United States)

    Bilgic, Berkin; Kim, Tae Hyung; Liao, Congyu; Manhard, Mary Kate; Wald, Lawrence L; Haldar, Justin P; Setsompop, Kawin

    2018-08-01

    To develop parallel imaging techniques that simultaneously exploit coil sensitivity encoding, image phase prior information, similarities across multiple images, and complementary k-space sampling for highly accelerated data acquisition. We introduce joint virtual coil (JVC)-generalized autocalibrating partially parallel acquisitions (GRAPPA) to jointly reconstruct data acquired with different contrast preparations, and show its application in 2D, 3D, and simultaneous multi-slice (SMS) acquisitions. We extend the joint parallel imaging concept to exploit limited support and smooth phase constraints through Joint (J-) LORAKS formulation. J-LORAKS allows joint parallel imaging from limited autocalibration signal region, as well as permitting partial Fourier sampling and calibrationless reconstruction. We demonstrate highly accelerated 2D balanced steady-state free precession with phase cycling, SMS multi-echo spin echo, 3D multi-echo magnetization-prepared rapid gradient echo, and multi-echo gradient recalled echo acquisitions in vivo. Compared to conventional GRAPPA, proposed joint acquisition/reconstruction techniques provide more than 2-fold reduction in reconstruction error. JVC-GRAPPA takes advantage of additional spatial encoding from phase information and image similarity, and employs different sampling patterns across acquisitions. J-LORAKS achieves a more parsimonious low-rank representation of local k-space by considering multiple images as additional coils. Both approaches provide dramatic improvement in artifact and noise mitigation over conventional single-contrast parallel imaging reconstruction. Magn Reson Med 80:619-632, 2018. © 2018 International Society for Magnetic Resonance in Medicine.

  2. Handbook of infrared standards II with spectral coverage between

    CERN Document Server

    Meurant, Gerard

    1993-01-01

    This timely compilation of infrared standards has been developed for use by infrared researchers in chemistry, physics, engineering, astrophysics, and laser and atmospheric sciences. Providing maps of closely spaced molecular spectra along with their measured wavenumbers between 1.4 μm and 4 μm, this handbook will complement the 1986 Handbook of Infrared Standards that included spectral coverage between 3 and 2600 μm. It will serve as a necessary reference for all researchers conducting spectroscopic investigations in the near-infrared region. Key Features: Provides all new spec

  3. Coverage and Financial Risk Protection for Institutional Delivery: How Universal Is Provision of Maternal Health Care in India?

    Science.gov (United States)

    Prinja, Shankar; Bahuguna, Pankaj; Gupta, Rakesh; Sharma, Atul; Rana, Saroj Kumar; Kumar, Rajesh

    2015-01-01

    …socio-economic groups and must be strengthened. The success of the public sector in providing high coverage and financial risk protection in maternal health provides encouragement for the role that the public sector can play in universalizing health care.

  4. Analysis of multigrid methods on massively parallel computers: Architectural implications

    Science.gov (United States)

    Matheson, Lesley R.; Tarjan, Robert E.

    1993-01-01

    We study the potential performance of multigrid algorithms running on massively parallel computers with the intent of discovering whether presently envisioned machines will provide an efficient platform for such algorithms. We consider the domain parallel version of the standard V cycle algorithm on model problems, discretized using finite difference techniques in two and three dimensions on block structured grids of size 10^6 and 10^9, respectively. Our models of parallel computation were developed to reflect the computing characteristics of the current generation of massively parallel multicomputers. These models are based on an interconnection network of 256 to 16,384 message-passing, 'workstation size' processors executing in an SPMD mode. The first model accomplishes interprocessor communications through a multistage permutation network; the communication cost is a logarithmic function similar to the costs in a variety of different topologies. The second model allows single stage communication costs only. Both models were designed with information provided by machine developers and utilize implementation-derived parameters. With the medium grain parallelism of the current generation and the high fixed cost of an interprocessor communication, our analysis suggests that an efficient implementation requires the machine to support the efficient transmission of long messages (up to 1000 words), or the high initiation cost of a communication must be significantly reduced through an alternative optimization technique. Furthermore, with variable length message capability, our analysis suggests the low diameter multistage networks provide little or no advantage over a simple single stage communications network.
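
    The trade-off identified above between message length and fixed start-up cost can be illustrated with the standard latency-bandwidth model T(m) = α + βm. The sketch below uses invented parameter values, not the paper's machine-derived ones, to show why short messages waste most of the link:

        def transfer_time(m_words, alpha=100e-6, beta=0.1e-6):
            """Time to send one m-word message: start-up cost plus per-word cost."""
            return alpha + beta * m_words

        def effective_bandwidth(m_words):
            """Words per second actually delivered for a message of m words."""
            return m_words / transfer_time(m_words)

        for m in (1, 10, 100, 1000):
            # Start-up cost dominates short messages; long messages approach 1/beta.
            print(f"{m:5d} words -> {effective_bandwidth(m):12.0f} words/s")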

  5. Accuracy and impact of spatial aids based upon satellite enumeration to improve indoor residual spraying spatial coverage.

    Science.gov (United States)

    Bridges, Daniel J; Pollard, Derek; Winters, Anna M; Winters, Benjamin; Sikaala, Chadwick; Renn, Silvia; Larsen, David A

    2018-02-23

    Indoor residual spraying (IRS) is a key tool in the fight to control, eliminate and ultimately eradicate malaria. IRS protection is based on a communal effect such that an individual's protection primarily relies on the community-level coverage of IRS with limited protection being provided by household-level coverage. To ensure a communal effect is achieved through IRS, achieving high and uniform community-level coverage should be the ultimate priority of an IRS campaign. Ensuring high community-level coverage of IRS in malaria-endemic areas is challenging given the lack of information available about both the location and number of households needing IRS in any given area. A process termed 'mSpray' has been developed and implemented and involves use of satellite imagery for enumeration for planning IRS and a mobile application to guide IRS implementation. This study assessed (1) the accuracy of the satellite enumeration and (2) how various degrees of spatial aid provided through the mSpray process affected community-level IRS coverage during the 2015 spray campaign in Zambia. A 2-stage sampling process was applied to assess accuracy of satellite enumeration to determine number and location of sprayable structures. Results indicated an overall sensitivity of 94% for satellite enumeration compared to finding structures on the ground. After adjusting for structure size, roof, and wall type, households in Nchelenge District where all types of satellite-based spatial aids (paper-based maps plus use of the mobile mSpray application) were used were more likely to have received IRS than Kasama district where maps used were not based on satellite enumeration. The probability of a household being sprayed in Nchelenge district where tablet-based maps were used, did not differ statistically from that of a household in Samfya District, where detailed paper-based spatial aids based on satellite enumeration were provided. IRS coverage from the 2015 spray season benefited from

  6. Parallelization experience with four canonical econometric models using ParMitISEM

    NARCIS (Netherlands)

    Bastürk, Nalan; Grassi, S.; Hoogerheide, L.; van Dijk, Herman K.

    2016-01-01

    This paper presents the parallel computing implementation of the MitISEM algorithm, labeled Parallel MitISEM. The basic MitISEM algorithm provides an automatic and flexible method to approximate a non-elliptical target density using adaptive mixtures of Student-t densities, where only a kernel of

  7. Design, analysis and control of cable-suspended parallel robots and its applications

    CERN Document Server

    Zi, Bin

    2017-01-01

    This book provides an essential overview of the authors’ work in the field of cable-suspended parallel robots, focusing on innovative design, mechanics, control, development and applications. It presents and analyzes several typical mechanical architectures of cable-suspended parallel robots in practical applications, including the feed cable-suspended structure for super antennae, hybrid-driven-based cable-suspended parallel robots, and cooperative cable parallel manipulators for multiple mobile cranes. It also addresses the fundamental mechanics of cable-suspended parallel robots on the basis of their typical applications, including the kinematics, dynamics and trajectory tracking control of the feed cable-suspended structure for super antennae. In addition it proposes a novel hybrid-driven-based cable-suspended parallel robot that uses integrated mechanism design methods to improve the performance of traditional cable-suspended parallel robots. A comparative study on error and performance indices of hybr...

  8. Model-driven product line engineering for mapping parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir

    2016-01-01

    Mapping parallel algorithms to parallel computing platforms requires several activities such as the analysis of the parallel algorithm, the definition of the logical configuration of the platform, the mapping of the algorithm to the logical configuration platform and the implementation of the

  9. Parallelization in Modern C++

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The traditionally used and well established parallel programming models OpenMP and MPI are both targeting lower level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable amount of standardization proposals and technical specifications being developed. Those efforts however have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuation s...

  10. Evaluation of the Defense Contract Audit Agency Audit Coverage of Tricare Contracts

    National Research Council Canada - National Science Library

    Brannin, Patricia

    2000-01-01

    Our objective was to evaluate the adequacy of the Defense Contract Audit Agency (DCAA) audit coverage of contracts for health care provided under TRICARE and the former Civilian Health and Medical Program of the Uniformed Services...

  11. Numerical kinematic transformation calculations for a parallel link manipulator

    International Nuclear Information System (INIS)

    Killough, S.M.

    1993-01-01

    Parallel link manipulators are often considered for particular robotic applications because of the unique advantages they provide. Unfortunately, they have significant disadvantages with respect to calculating the kinematic transformations because of the high-order equations that must be solved. Presented is a manipulator design that exploits the mechanical advantages of parallel links yet also has a corresponding numerical kinematic solution that can be solved in real time on common microcomputers

  12. Massively parallel whole genome amplification for single-cell sequencing using droplet microfluidics.

    Science.gov (United States)

    Hosokawa, Masahito; Nishikawa, Yohei; Kogawa, Masato; Takeyama, Haruko

    2017-07-12

    Massively parallel single-cell genome sequencing is required to further understand genetic diversities in complex biological systems. Whole genome amplification (WGA) is the first step for single-cell sequencing, but its throughput and accuracy are insufficient in conventional reaction platforms. Here, we introduce single droplet multiple displacement amplification (sd-MDA), a method that enables massively parallel amplification of single cell genomes while maintaining sequence accuracy and specificity. Tens of thousands of single cells are compartmentalized in millions of picoliter droplets and then subjected to lysis and WGA by passive droplet fusion in microfluidic channels. Because single cells are isolated in compartments, their genomes are amplified to saturation without contamination. This enables the high-throughput acquisition of contamination-free and cell specific sequence reads from single cells (21,000 single-cells/h), resulting in enhancement of the sequence data quality compared to conventional methods. This method allowed WGA of both single bacterial cells and human cancer cells. The obtained sequencing coverage rivals those of conventional techniques with superior sequence quality. In addition, we also demonstrate de novo assembly of uncultured soil bacteria and obtain draft genomes from single cell sequencing. This sd-MDA is promising for flexible and scalable use in single-cell sequencing.

  13. Increasing Coverage of Hepatitis B Vaccination in China

    OpenAIRE

    Wang, Shengnan; Smith, Helen; Peng, Zhuoxin; Xu, Biao; Wang, Weibing

    2016-01-01

    Abstract This study used a system evaluation method to summarize China's experience in improving the coverage of hepatitis B vaccine, especially the strategies employed to improve the uptake of the timely birth dose. Identifying successful methods and strategies will provide strong evidence for policy makers and health workers in other countries with high hepatitis B prevalence. We conducted a literature review that included English- or Chinese-language studies carried out in mainland China, using PubMed, ...

  14. Is expanding Medicare coverage cost-effective?

    Directory of Open Access Journals (Sweden)

    Muennig Peter

    2005-03-01

    Background: Proposals to expand Medicare coverage tend to be expensive, but the value of services purchased is not known. This study evaluates the efficiency of the average private supplemental insurance plan for Medicare recipients. Methods: Data from the National Health Interview Survey, the National Death Index, and the Medical Expenditure Panel Survey were analyzed to estimate the costs, changes in life expectancy, and health-related quality of life gains associated with providing private supplemental insurance coverage for Medicare beneficiaries. Model inputs included socio-demographic, health, and health behavior characteristics. Parameter estimates from regression models were used to predict quality-adjusted life years (QALYs) and costs associated with private supplemental insurance relative to Medicare only. Markov decision analysis modeling was then employed to calculate incremental cost-effectiveness ratios. Results: Medicare supplemental insurance is associated with increased health care utilization, but the additional costs associated with this utilization are offset by gains in quality-adjusted life expectancy. The incremental cost-effectiveness of private supplemental insurance is approximately $24,000 per QALY gained relative to Medicare alone. Conclusion: Supplemental insurance for Medicare beneficiaries is a good value, with an incremental cost-effectiveness ratio comparable to medical interventions commonly deemed worthwhile.
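
    The incremental cost-effectiveness ratio reported above is simply the cost difference divided by the QALY difference between the two coverage options. A minimal sketch, with hypothetical inputs chosen only to reproduce a $24,000-per-QALY figure:

        def icer(cost_new, cost_old, qaly_new, qaly_old):
            """Incremental cost per quality-adjusted life year gained."""
            return (cost_new - cost_old) / (qaly_new - qaly_old)

        # Hypothetical: supplemental coverage costs $12,000 more and adds
        # 0.5 QALYs over the model horizon -> $24,000 per QALY gained.
        print(icer(cost_new=62_000, cost_old=50_000, qaly_new=10.5, qaly_old=10.0))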

  15. A massive parallel sequencing workflow for diagnostic genetic testing of mismatch repair genes

    Science.gov (United States)

    Hansen, Maren F; Neckmann, Ulrike; Lavik, Liss A S; Vold, Trine; Gilde, Bodil; Toft, Ragnhild K; Sjursen, Wenche

    2014-01-01

    The purpose of this study was to develop a massive parallel sequencing (MPS) workflow for diagnostic analysis of mismatch repair (MMR) genes using the GS Junior system (Roche). A pathogenic variant in one of four MMR genes (MLH1, PMS2, MSH6, and MSH2) is the cause of Lynch Syndrome (LS), which mainly predisposes to colorectal cancer. We used an amplicon-based sequencing method allowing specific and preferential amplification of the MMR genes, including PMS2, of which several pseudogenes exist. The amplicons were pooled at different ratios to obtain coverage uniformity and maximize the throughput of a single GS Junior run. In total, 60 previously identified and distinct variants (substitutions and indels) were sequenced by MPS and successfully detected. The heterozygote detection range was from 19% to 63% and dependent on sequence context and coverage. We were able to distinguish between false-positive and true-positive calls in homopolymeric regions by cross-sample comparison and evaluation of flow signal distributions. In addition, we filtered variants according to a predefined status, which facilitated variant annotation. Our study shows that implementation of MPS in routine diagnostics of LS can accelerate sample throughput and reduce costs without compromising sensitivity, compared to Sanger sequencing. PMID:24689082

  16. Massively parallel mathematical sieves

    Energy Technology Data Exchange (ETDEWEB)

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
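
    A segmented Sieve of Eratosthenes makes the decomposition idea concrete: base primes up to √N are found serially, after which each block of the range can be sieved independently and assigned to a different processing element. The sketch below is a serial illustration of that block decomposition, not the hypercube code measured above:

        import math

        def base_primes(limit):
            """Serial sieve for the small base primes up to `limit`."""
            is_prime = bytearray([1]) * (limit + 1)
            is_prime[0:2] = b"\x00\x00"
            for p in range(2, math.isqrt(limit) + 1):
                if is_prime[p]:
                    is_prime[p * p :: p] = bytearray(len(is_prime[p * p :: p]))
            return [i for i, f in enumerate(is_prime) if f]

        def sieve_block(lo, hi, primes):
            """Sieve the block [lo, hi); independent of every other block."""
            flags = bytearray([1]) * (hi - lo)
            for p in primes:
                start = max(p * p, (lo + p - 1) // p * p)
                for multiple in range(start, hi, p):
                    flags[multiple - lo] = 0
            return [lo + i for i, f in enumerate(flags) if f and lo + i > 1]

        N, block = 1_000, 100
        primes = base_primes(math.isqrt(N))
        result = [q for lo in range(2, N, block)
                    for q in sieve_block(lo, min(lo + block, N), primes)]
        print(len(result), result[:5])  # 168 primes below 1000: [2, 3, 5, 7, 11]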

  17. CDMA coverage under mobile heterogeneous network load

    NARCIS (Netherlands)

    Saban, D.; van den Berg, Hans Leo; Boucherie, Richardus J.; Endrayanto, A.I.

    2002-01-01

    We analytically investigate coverage (determined by the uplink) under non-homogeneous and moving traffic load of third generation UMTS mobile networks. In particular, for different call assignment policies, we investigate cell breathing and the movement of the coverage gap occurring between cells

  18. 20 CFR 404.1913 - Precluding dual coverage.

    Science.gov (United States)

    2010-04-01

    ... precluding dual coverage to avoid inequitable or anomalous coverage situations for certain workers. However... 404.1913 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND DISABILITY...) General. Employment or self-employment or services recognized as equivalent under the Act or the social...

  19. Modular high-temperature gas-cooled reactor simulation using parallel processors

    International Nuclear Information System (INIS)

    Ball, S.J.; Conklin, J.C.

    1989-01-01

    The MHPP (Modular HTGR Parallel Processor) code has been developed to simulate modular high-temperature gas-cooled reactor (MHTGR) transients and accidents. MHPP incorporates a very detailed model for predicting the dynamics of the reactor core, vessel, and cooling systems over a wide variety of scenarios ranging from expected transients to very-low-probability severe accidents. The simulation routines, which had originally been developed entirely as serial code, were readily adapted to parallel-processing Fortran, and the resulting parallelization enhanced simulation speed significantly. Workstation interfaces are being developed to provide for user (operator) interaction. In this paper the benefits realized by adapting previous MHTGR codes to run on a parallel processor are discussed, along with results of typical accident analyses.

  20. Kalman Filter Tracking on Parallel Architectures

    International Nuclear Information System (INIS)

    Cerati, Giuseppe; Elmer, Peter; Krutelyov, Slava; Lantz, Steven; Lefebvre, Matthieu; McDermott, Kevin; Riley, Daniel; Tadel, Matevž; Wittich, Peter; Würthwein, Frank; Yagil, Avi

    2016-01-01

    Power density constraints are limiting the performance improvements of modern CPUs. To address this we have seen the introduction of lower-power, multi-core processors such as GPGPU, ARM and Intel MIC. In order to achieve the theoretical performance gains of these processors, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High-Luminosity Large Hadron Collider (HL-LHC), for example, this will be by far the dominant problem. The need for greater parallelism has driven investigations of very different track finding techniques such as Cellular Automata or Hough Transforms. The most common track finding techniques in use today, however, are those based on a Kalman filter approach. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. They are known to provide high physics performance, are robust, and are in use today at the LHC. Given the utility of the Kalman filter in track finding, we have begun to port these algorithms to parallel architectures, namely Intel Xeon and Xeon Phi. We report here on our progress towards an end-to-end track reconstruction algorithm fully exploiting vectorization and parallelization techniques in a simplified experimental environment
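
    The Kalman filter update itself is a natural target for vectorization, since the same small arithmetic kernel is applied to many track candidates at once. A minimal sketch of a one-dimensional, NumPy-vectorized update step; this is illustrative only, not the Xeon/Xeon Phi implementation above, which handles full track states and covariance matrices:

        import numpy as np

        def kalman_update(x, P, z, R):
            """Update N scalar track states x (variance P) with N measurements z (noise R)."""
            K = P / (P + R)          # Kalman gain, elementwise over all tracks
            x_new = x + K * (z - x)  # state pulled toward each measurement
            P_new = (1.0 - K) * P    # reduced uncertainty after the update
            return x_new, P_new

        rng = np.random.default_rng(0)
        x = np.zeros(10_000)               # predicted states for 10,000 tracks
        P = np.full_like(x, 4.0)           # predicted variances
        z = rng.normal(1.0, 2.0, x.shape)  # simulated hits
        x, P = kalman_update(x, P, z, R=4.0)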

  1. Socio-economic inequality in oral healthcare coverage

    DEFF Research Database (Denmark)

    Hosseinpoor, A R; Itani, L; Petersen, P E

    2012-01-01

    The objective of this study was to assess socio-economic inequality in oral healthcare coverage among adults with expressed need living in 52 countries. Data on 60,332 adults aged 18 years or older were analyzed from 52 countries participating in the 2002-2004 World Health Survey. Oral healthcare coverage was defined as the proportion of individuals who received any medical care from a dentist or other oral health specialist during a period of 12 months prior to the survey, among those who expressed any mouth and/or teeth problems during that period. In addition to assessment of the coverage across wealth quintiles in each country, a wealth-based relative index of inequality was used to measure socio-economic inequality. The index was adjusted for sex, age, marital status, education, employment, overall health status, and urban/rural residence. Pro-rich inequality in oral healthcare coverage…

  2. Data communications in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-11-12

    Data communications in a parallel active messaging interface (`PAMI`) of a parallel computer composed of compute nodes that execute a parallel application, each compute node including application processors that execute the parallel application and at least one management processor dedicated to gathering information regarding data communications. The PAMI is composed of data communications endpoints, each endpoint composed of a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes and the endpoints coupled for data communications through the PAMI and through data communications resources. Embodiments function by gathering call site statistics describing data communications resulting from execution of data communications instructions and identifying, in dependence upon the call site statistics, a data communications algorithm for use in executing a data communications instruction at a call site in the parallel application.

  3. Parallelization of MCNP4 code by using simple FORTRAN algorithms

    International Nuclear Information System (INIS)

    Yazid, P.I.; Takano, Makoto; Masukawa, Fumihiro; Naito, Yoshitaka.

    1993-12-01

    Simple FORTRAN algorithms, relying only on open, close, read and write statements together with disk files and some UNIX commands, have been applied to the parallelization of MCNP4. The code, named MCNPNFS, maintains almost all capabilities of MCNP4 in solving shielding problems. It is able to perform parallel computing on any set of UNIX workstations connected by a network, regardless of the heterogeneity of the hardware, provided that all processors produce a binary file in the same format. Further, it is confirmed that MCNPNFS can also be executed on the Monte-4 vector-parallel computer. MCNPNFS has been tested intensively by executing 5 photon-neutron benchmark problems, a spent-fuel cask problem and 17 sample problems included in the original code package of MCNP4. Three different workstations, connected by a network, were used to execute MCNPNFS in parallel. By measuring CPU time, the parallel efficiency was determined to be 58% to 99%, with an average of 86%. On Monte-4, MCNPNFS has been executed using 4 processors concurrently and achieved a parallel efficiency of 79% on average. (author)

  4. Progress towards the Convention on Biological Diversity terrestrial 2010 and marine 2012 targets for protected area coverage

    DEFF Research Database (Denmark)

    Coad, Lauren; Burgess, Neil David; Fish, Lucy

    2010-01-01

    Protected area coverage targets set by the Convention on Biological Diversity (CBD) for both terrestrial and marine environments provide a major incentive for governments to review and upgrade their protected area systems. Assessing progress towards these targets will form an important component of the work of the Xth CBD Conference of Parties meeting to be held in Japan in 2010. The World Database on Protected Areas (WDPA) is the largest assembly of data on the world's terrestrial and marine protected areas and, as such, represents a fundamental tool in tracking progress towards protected area coverage targets. National protected areas data from the WDPA have been used to measure progress in protected areas coverage at global, regional and national scale. The mean protected area coverage per nation was 12.2% for terrestrial area, and only 5.1% for near-shore marine area. Variation in protected…

  5. Collaborative Event-Driven Coverage and Rate Allocation for Event Miss-Ratio Assurances in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Ozgur Sanli H

    2010-01-01

    Wireless sensor networks are often required to provide event miss-ratio assurance for a given event type. To meet such assurances along with minimum energy consumption, this paper shows how a node's activation and rate assignment is dependent on its distance to event sources, and proposes a practical coverage and rate allocation (CORA) protocol to exploit this dependency in realistic environments. Both uniform event distribution and nonuniform event distribution are considered and the notion of ideal correlation distance around a clusterhead is introduced for on-duty node selection. In correlation distance guided CORA, rate assignment assists coverage scheduling by determining which nodes should be activated for minimizing data redundancy in transmission. Coverage scheduling assists rate assignment by controlling the amount of overlap among sensing regions of neighboring nodes, thereby providing sufficient data correlation for rate assignment. Extensive simulation results show that CORA meets the required event miss-ratios in realistic environments. CORA's joint coverage scheduling and rate allocation reduce the total energy expenditure by 85%, average battery energy consumption by 25%, and the overhead of source coding up to 90% as compared to existing rate allocation techniques.

  6. A parallel buffer tree

    DEFF Research Database (Denmark)

    Sitchinava, Nodar; Zeh, Norbert

    2012-01-01

    We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of available processors, resulting in the optimal O(sort_P(N) + K/PB) parallel I/O complexity, where K is the size of the output reported in the process and sort_P(N) is the parallel I/O complexity of sorting N elements using P processors.

  7. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a PowerPoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of such problems include: sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation; the topic really deserves its own detailed discussion, which Gabe Rockefeller would like to develop.
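
    One of the patterns named above, the prefix scan, has a classic data-parallel formulation: after about log2(n) sweeps, each of which may run concurrently over the whole array, every element holds the sum of all elements up to and including itself. A sketch using the Hillis-Steele doubling scheme, with NumPy slicing standing in for the concurrent step:

        import numpy as np

        def inclusive_scan(a):
            """Inclusive prefix sum in ceil(log2(n)) data-parallel passes."""
            out = np.array(a, dtype=np.int64)
            shift = 1
            while shift < len(out):
                # One conceptual parallel step: every element adds its
                # neighbour `shift` positions to the left, simultaneously.
                out[shift:] = out[shift:] + out[:-shift]
                shift *= 2
            return out

        print(inclusive_scan([3, 1, 7, 0, 4, 1, 6, 3]))  # [3 4 11 11 15 16 22 25]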

  8. Heterogeneous Multicore Parallel Programming for Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Francois Bodin

    2009-01-01

    Hybrid parallel multicore architectures based on graphics processing units (GPUs) can provide tremendous computing power. Current NVIDIA and AMD Graphics Product Group hardware display a peak performance of hundreds of gigaflops. However, exploiting GPUs from existing applications is a difficult task that requires non-portable rewriting of the code. In this paper, we present HMPP, a Heterogeneous Multicore Parallel Programming workbench with compilers, developed by CAPS entreprise, that allows the integration of heterogeneous hardware accelerators in an unintrusive manner while preserving the legacy code.

  9. Mediating Trust in Terrorism Coverage

    DEFF Research Database (Denmark)

    Mogensen, Kirsten

    Mass mediated risk communication can contribute to perceptions of threats and fear of "others" and/or to perceptions of trust in fellow citizens and society to overcome problems. This paper outlines a cross-disciplinary holistic framework for research in mediated trust building during an acute crisis. While the framework is presented in the context of television coverage of a terror-related crisis situation, it can equally be used in connection with all other forms of mediated trust. Key words: National crisis, risk communication, crisis management, television coverage, mediated trust.

  10. 28 CFR 55.5 - Coverage under section 4(f)(4).

    Science.gov (United States)

    2010-07-01

    ... THE VOTING RIGHTS ACT REGARDING LANGUAGE MINORITY GROUPS Nature of Coverage § 55.5 Coverage under section 4(f)(4). (a) Coverage formula. Section 4(f)(4) applies to any State or political subdivision in...) Coverage may be determined with regard to section 4(f)(4) on a statewide or political subdivision basis. (1...

  11. [Media coverage of suicide: From the epidemiological observations to prevention avenues].

    Science.gov (United States)

    Notredame, Charles-Édouard; Pauwels, Nathalie; Walter, Michel; Danel, Thierry; Vaiva, Guillaume

    2015-12-01

    Media coverage of suicide can result in increased suicide morbidity and mortality, due to an imitation process in those who are particularly vulnerable. This phenomenon is known as the "Werther effect". The Werther effect's magnitude depends on several qualitative and quantitative characteristics of the media coverage, in a dose-effect relationship. Extensive (in terms of audience and story repetition) and salient coverage (glorification of suicide, description of the suicidal method, etc.) increases the risk of contagion. Celebrities' suicides are particularly at risk of producing a Werther effect. The media may also have a preventive role with respect to suicide. Indeed, according to the "Papageno effect", journalists could, under certain conditions, help prevent suicide when reporting suicide stories. Two main theories in the field of social psychology have been proposed to account for the Werther and Papageno effects: social learning theory and differential identification. The identification of the Werther and Papageno effects uncovers new responsibilities and potentialities for journalists in terms of public health. Their description provides a basis for promising targeted prevention actions. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  12. Target Coverage in Wireless Sensor Networks with Probabilistic Sensors

    Science.gov (United States)

    Shan, Anxing; Xu, Xianghua; Cheng, Zongmao

    2016-01-01

    Sensing coverage is a fundamental problem in wireless sensor networks (WSNs), which has attracted considerable attention. Conventional research on this topic focuses on the 0/1 coverage model, which is only a coarse approximation to the practical sensing model. In this paper, we study the target coverage problem, where the objective is to find the least number of sensor nodes in randomly-deployed WSNs based on the probabilistic sensing model. We analyze the joint detection probability of target with multiple sensors. Based on the theoretical analysis of the detection probability, we formulate the minimum ϵ-detection coverage problem. We prove that the minimum ϵ-detection coverage problem is NP-hard and present an approximation algorithm called the Probabilistic Sensor Coverage Algorithm (PSCA) with provable approximation ratios. To evaluate our design, we analyze the performance of PSCA theoretically and also perform extensive simulations to demonstrate the effectiveness of our proposed algorithm. PMID:27618902
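
    The joint detection probability underlying the ϵ-detection coverage condition is easy to state: a target observed by sensors with independent detection probabilities p_i is detected with probability 1 - ∏(1 - p_i). A small sketch with invented probabilities:

        from math import prod

        def joint_detection(p_list):
            """Probability that at least one of the sensors detects the target."""
            return 1.0 - prod(1.0 - p for p in p_list)

        sensors = [0.5, 0.4, 0.3]   # per-sensor detection probabilities (made up)
        epsilon = 0.75              # required detection threshold
        print(joint_detection(sensors))             # ~0.79
        print(joint_detection(sensors) >= epsilon)  # target is epsilon-covered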

  13. Adapting high-level language programs for parallel processing using data flow

    Science.gov (United States)

    Standley, Hilda M.

    1988-01-01

    EASY-FLOW, a very high-level data flow language, is introduced for the purpose of adapting programs written in a conventional high-level language to a parallel environment. The level of parallelism provided is of the large-grained variety in which parallel activities take place between subprograms or processes. A program written in EASY-FLOW is a set of subprogram calls as units, structured by iteration, branching, and distribution constructs. A data flow graph may be deduced from an EASY-FLOW program.

  14. Energization of Long HVAC Cables in Parallel - Analysis and Estimation Formulas

    DEFF Research Database (Denmark)

    Silva, Filipe Faria Da; Bak, Claus Leth

    2012-01-01

    The installation of long HVAC cables has recently become more common, and this tendency is expected to continue over the coming years. Consequently, the energization of long HVAC cables in parallel is also becoming a more common condition. The energization of HVAC cables in parallel resembles the energization of capacitor... has several simplifications and does not always provide accurate results. This paper proposes a new formula that can be used for the estimation of these two quantities for two HVAC cables in parallel.

  15. On-line event reconstruction using a parallel in-memory data base

    OpenAIRE

    Argante, E; Van der Stok, P D V; Willers, Ian Malcolm

    1995-01-01

    PORS is a system designed for on-line event reconstruction in high energy physics (HEP) experiments. It uses the CPREAD reconstruction program. Central to the system is a parallel in-memory database which is used as communication medium between parallel workers. A farming control structure is implemented with PORS in a natural way. The database provides structured storage of data with a short life time. PORS serves as a case study for the construction of a methodology on how to apply parallel...

  16. Investing in Nurses is a Prerequisite for Ensuring Universal Health Coverage.

    Science.gov (United States)

    Kurth, Ann E; Jacob, Sheena; Squires, Allison P; Sliney, Anne; Davis, Sheila; Stalls, Suzanne; Portillo, Carmen J

    2016-01-01

    Nurses and midwives constitute the majority of the global health workforce and the largest health care expenditure. Efficient production, successful deployment, and ongoing retention based on carefully constructed policies regarding the career opportunities of nurses, midwives, and other providers in health care systems are key to ensuring universal health coverage. Yet nurses are constrained by practice regulations, workplaces, and career ladder barriers from contributing to primary health care delivery. Evidence shows that quality HIV care, comparable to that of physicians, is provided by trained nurses and associate clinicians, but many African countries' health systems remain dependent on limited numbers of physicians and fail to meet the demand for treatment. The World Health Organization endorses task sharing to ensure universal health coverage in HIV and maternal health, which requires an investment in nursing education, retention, and professional growth opportunities. Exemplars from Haiti, Rwanda, Republic of Georgia, and multi-country efforts are described. Copyright © 2016 Association of Nurses in AIDS Care. Published by Elsevier Inc. All rights reserved.

  17. Coverage of private sector community midwife services in rural Punjab, Pakistan: development and demand.

    Science.gov (United States)

    Mumtaz, Zubia; Levay, Adrienne V; Jhangri, Gian S; Bhatti, Afshan

    2015-11-25

    In 2007, the Government of Pakistan introduced a new cadre of community midwives (CMWs) to address low skilled birth attendance rates in rural areas; this workforce is located in the private sector. There are concerns about the effectiveness of the programme for increasing skilled birth attendance, as previous experience with private-sector programmes has been sub-optimal. Indonesia first promoted private-sector midwifery care, but the initiative failed to provide universal coverage and reduce maternal mortality rates. A clustered, stratified survey was conducted in the districts of Jhelum and Layyah, Punjab. A total of 1,457 women who gave birth in the 2 years prior to the survey were interviewed. χ² analyses were performed to assess variation in coverage of maternal health services between the two districts. Logistic regression models were developed to explore whether differentials in coverage between the two districts could be explained by differential levels of development and demand for skilled birth attendance. The mean cost of childbirth care by type of provider was also calculated. Overall, 7.9% of women surveyed reported a CMW-attended birth. Women in Jhelum were six times more likely to report a CMW-attended birth than women in Layyah. The mean cost of a CMW-attended birth compared favourably with that of a dai-attended birth. The CMWs were, however, having difficulty garnering community trust. The majority of women, when asked why they had not sought care from their neighbourhood CMW, cited a lack of trust in CMWs' competency and a preference for a different provider. The CMWs have yet to emerge as a significant maternity care provider in rural Punjab. Levels of overall community development determined uptake, and hence coverage, of CMW care. The CMWs were able to insert themselves into the maternal health marketplace in Jhelum because of an existing demand. A lower demand in Layyah meant there was less 'space' for the CMWs to enter the market. To ensure universal

  18. Determining the dimensions of essential medical coverage required by military body armour plates utilising Computed Tomography.

    Science.gov (United States)

    Breeze, J; Lewis, E A; Fryer, R

    2016-09-01

    Military body armour is designed to prevent the penetration of ballistic projectiles into the most vulnerable structures within the thorax and abdomen. Currently the OSPREY and VIRTUS body armour systems issued to United Kingdom (UK) Armed Forces personnel are provided with a single size front and rear ceramic plate regardless of the individual's body dimensions. Currently limited information exists to determine whether these plates overprotect some members of the military population, and no method exists to accurately size plates to an individual. Computed Tomography (CT) scans of 120 male Caucasian UK Armed Forces personnel were analysed to measure the dimensions of internal thoraco-abdominal anatomical structures that had been defined as requiring essential medical coverage. The boundaries of these structures were related to three potential anthropometric landmarks on the skin surface and statistical analysis was undertaken to validate the results. The range of heights of each individual used in this study was comparable to previous anthropometric surveys, confirming that a representative sample had been used. The vertical dimension of essential medical coverage demonstrated good correlation to torso height (suprasternal notch to iliac crest) but not to stature (r² = 0.53 versus 0.04). Horizontal coverage did not correlate to either measure of height. Surface landmarks utilised in this study were proven to be reliable surrogate markers for the boundaries of the underlying anatomical structures potentially requiring essential protection by a plate. Providing a range of plate sizes, particularly multiple heights, should optimise the medical coverage and thus effectiveness of body armour for UK Armed Forces personnel. The results of this work provide evidence that a single width of plate if chosen correctly will provide the essential medical coverage for the entire military population, whilst recognising that it still could overprotect the smallest individuals.

  19. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  20. Parallel grid generation algorithm for distributed memory computers

    Science.gov (United States)

    Moitra, Stuti; Moitra, Anutosh

    1994-01-01

    A parallel grid-generation algorithm and its implementation on the Intel iPSC/860 computer are described. The grid-generation scheme is based on an algebraic formulation of homotopic relations. Methods for utilizing the inherent parallelism of the grid-generation scheme are described, and implementation of multiple levels of parallelism on multiple-instruction multiple-data machines is indicated. The algorithm is capable of providing near orthogonality and spacing control at solid boundaries while requiring minimal interprocessor communications. Results obtained on the Intel hypercube for a blended wing-body configuration are used to demonstrate the effectiveness of the algorithm. Fortran implementations based on the native programming model of the iPSC/860 computer and the Express system of software tools are reported. Computational gains in execution time speed-up ratios are given.

  1. A solution for automatic parallelization of sequential assembly code

    Directory of Open Access Journals (Sweden)

    Kovačević Đorđe

    2013-01-01

    Since modern multicore processors can execute existing sequential programs only on a single core, there is a strong need for automatic parallelization of program code. Relying on existing algorithms, this paper describes a new software tool for parallelization of sequential assembly code. The main goal is to develop a parallelizer that reads sequential assembly code and outputs parallelized code for a MIPS processor with multiple cores. The idea is the following: the parser translates the assembly input file into program objects suitable for further processing, after which static single assignment form is constructed. Based on the data-flow graph, the parallelization algorithm distributes instructions across cores. Once the sequential code has been parallelized, registers are allocated with a linear allocation algorithm, and the final output is distributed assembly code for each of the cores. We evaluate the speedup on a matrix multiplication example processed by the parallelizer. The result is an almost linear speedup of code execution that increases with the number of cores: the speedup on two cores is 1.99, while on 16 cores it is 13.88.
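
    The pipeline described above (parse, build a data-flow graph, split independent instructions across cores) can be miniaturized as follows. The sketch levelizes a dependency DAG and deals each level's instructions round-robin to cores; the instruction names and the scheduling policy are illustrative simplifications, not the paper's algorithm:

        from collections import defaultdict

        def schedule(deps, n_cores):
            """deps: instr -> set of instrs it depends on. Returns core -> [instrs]."""
            levels = {}
            def level(i):  # length of the longest dependency chain below i
                if i not in levels:
                    levels[i] = 1 + max((level(d) for d in deps[i]), default=-1)
                return levels[i]
            by_level = defaultdict(list)
            for i in deps:
                by_level[level(i)].append(i)
            cores = defaultdict(list)
            for lvl in sorted(by_level):
                # Instructions within one level are mutually independent,
                # so they may run concurrently on different cores.
                for k, i in enumerate(by_level[lvl]):
                    cores[k % n_cores].append(i)
            return dict(cores)

        # i0 and i1 are independent; i2 needs both; i3 needs i2.
        print(schedule({"i0": set(), "i1": set(), "i2": {"i0", "i1"}, "i3": {"i2"}}, 2))
        # {0: ['i0', 'i2', 'i3'], 1: ['i1']}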

  2. 12 CFR 334.20 - Coverage and definitions.

    Science.gov (United States)

    2010-01-01

    ... FAIR CREDIT REPORTING Affiliate Marketing § 334.20 Coverage and definitions. (a) Coverage. Subpart C of... account numbers, names, or addresses. (4) Pre-existing business relationship. (i) In general. The term “pre-existing business relationship” means a relationship between a person, or a person's licensed...

  3. 12 CFR 571.20 - Coverage and definitions.

    Science.gov (United States)

    2010-01-01

    ... Affiliate Marketing § 571.20 Coverage and definitions. (a) Coverage. Subpart C of this part applies to... account numbers, names, or addresses. (4) Pre-existing business relationship. (i) In general. The term “pre-existing business relationship” means a relationship between a person, or a person's licensed...

  4. A Parallel Encryption Algorithm Based on Piecewise Linear Chaotic Map

    Directory of Open Access Journals (Sweden)

    Xizhong Wang

    2013-01-01

    We introduce a parallel chaos-based encryption algorithm that takes advantage of multicore processors. The chaotic cryptosystem is generated by the piecewise linear chaotic map (PWLCM). The parallel algorithm is designed with a master/slave communication model with the Message Passing Interface (MPI). The algorithm is suitable not only for multicore processors but also for the single-processor architecture. The experimental results show that the chaos-based cryptosystem possesses good statistical properties. The parallel algorithm provides much better performance than the serial ones and would be useful for encrypting/decrypting large files or multimedia.
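
    A commonly used definition of the PWLCM, with control parameter 0 < p < 0.5, maps the unit interval piecewise linearly onto itself. The sketch below drives a toy XOR keystream with it; this is a generic illustration of a PWLCM-based stream cipher, not the paper's MPI master/slave design, and it is not secure cryptography:

        def pwlcm(x, p):
            """Piecewise linear chaotic map on [0, 1] with parameter 0 < p < 0.5."""
            if x < p:
                return x / p
            if x <= 0.5:
                return (x - p) / (0.5 - p)
            return pwlcm(1.0 - x, p)  # the map is symmetric about x = 0.5

        def keystream_xor(data, x0=0.2718, p=0.3):
            """Encrypt/decrypt `data` by XOR with a PWLCM-driven keystream."""
            x, out = x0, bytearray()
            for byte in data:
                x = pwlcm(x, p)
                out.append(byte ^ int(x * 255.999))  # quantize state to one byte
            return bytes(out)

        msg = b"parallel chaos"
        ct = keystream_xor(msg)
        assert keystream_xor(ct) == msg  # XOR keystream is its own inverse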

  5. Health insurance coverage and impact: a survey in three cities in China.

    Science.gov (United States)

    Fang, Kuangnan; Shia, BenChang; Ma, Shuangge

    2012-01-01

    China has one of the world's largest health insurance systems, composed of government-run basic health insurance and commercial health insurance. The basic health insurance has undergone system-wide reform in recent years. Meanwhile, there is also significant development in the commercial health insurance sector. A phone call survey was conducted in three major cities in China in July and August, 2011. The goal was to provide an updated description of the effect of health insurance on the population covered. Of special interest were insurance coverage, gross and out-of-pocket medical cost and coping strategies. Records on 5,097 households were collected. Analysis showed that smaller households, higher income, lower expense, presence of at least one inpatient treatment and living in rural areas were significantly associated with a lower overall coverage rate. In the separate analysis of basic and commercial health insurance, similar factors were found to have significant associations. Higher income, presence of chronic disease, presence of inpatient treatment, higher coverage rates and living in urban areas were significantly associated with higher gross medical cost. A similar set of factors were significantly associated with higher out-of-pocket cost. Households with lower income, inpatient treatment, higher commercial insurance coverage, and living in rural areas were significantly more likely to pursue coping strategies other than salary. The surveyed cities and surrounding rural areas had socioeconomic status far above China's average. However, there was still a need to further improve coverage. Even for households with coverage, there was considerable out-of-pocket medical cost, particularly for households with inpatient treatments and/or chronic diseases. A small percentage of households were unable to self-finance out-of-pocket medical cost. Such observations suggest possible targets for further improving the health insurance system.

  6. 5 CFR 890.401 - Temporary extension of coverage and conversion.

    Science.gov (United States)

    2010-01-01

    ... exercise the right of conversion provided for by this part. The 31-day extension of coverage and the right... benefits with respect to that person while he or she is entitled to any inpatient benefits under the prior... Program contract on the day after the day all inpatient benefits have been exhausted under the prior plan...

  7. The Coverage of the Holocaust in High School History Textbooks

    Science.gov (United States)

    Lindquist, David

    2009-01-01

    The Holocaust is now a regular part of high school history curricula throughout the United States and, as a result, coverage of the Holocaust has become a standard feature of high school textbooks. As with any major event, it is important for textbooks to provide a rigorously accurate and valid historical account. In dealing with the Holocaust,…

  8. 20 CFR 701.401 - Coverage under state compensation programs.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Coverage under state compensation programs...; DEFINITIONS AND USE OF TERMS Coverage Under State Compensation Programs § 701.401 Coverage under state compensation programs. (a) Exclusions from the definition of “employee” under § 701.301(a)(12), and the...

  9. Java parallel secure stream for grid computing

    International Nuclear Information System (INIS)

    Chen, J.; Akers, W.; Chen, Y.; Watson, W.

    2001-01-01

    The emergence of high speed wide area networks makes grid computing a reality. However, grid applications that need reliable data transfer still have difficulty achieving optimal TCP performance, because doing so requires tuning the TCP window size to improve bandwidth and reduce latency on a high speed wide area network. The authors present a pure Java package called JPARSS (Java Parallel Secure Stream) that divides data into partitions that are sent over several parallel Java streams simultaneously, allowing Java or Web applications to achieve optimal TCP performance in a grid environment without the necessity of tuning the TCP window size. Several experimental results are provided to show that using parallel streams is more effective than tuning the TCP window size. In addition, an X.509 certificate-based single sign-on mechanism and SSL-based connection establishment are integrated into this package. Finally, a few applications using this package are discussed.

  10. Abstract Level Parallelization of Finite Difference Methods

    Directory of Open Access Journals (Sweden)

    Edwin Vollebregt

    1997-01-01

    A formalism is proposed for describing finite difference calculations in an abstract way. The formalism consists of index sets and stencils, for characterizing the structure of sets of data items and interactions between data items ("neighbouring relations"). The formalism provides a means for lifting programming to a more abstract level. This simplifies the tasks of performance analysis and verification of correctness, and opens the way for automatic code generation. The notation is particularly useful in parallelization, for the systematic construction of parallel programs in a process/channel programming paradigm (e.g., message passing). This is important because message passing, unfortunately, is still the only approach that leads to acceptable performance for many unstructured or irregular problems on parallel computers that have non-uniform memory access times. It will be shown that the use of index sets and stencils greatly simplifies the determination of which data must be exchanged between different computing processes.
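
    The index-set-and-stencil idea can be made concrete in a few lines: the items a process owns form an index set, a stencil is a set of neighbour offsets, and the image of the owned set under the stencil, minus the owned set itself, is exactly the data that must be exchanged. A toy one-dimensional sketch; the names and setting are illustrative, not the paper's notation:

        def exchange_set(owned, stencil, global_indices):
            """Indices needed from other processes: stencil image minus owned set."""
            needed = {i + s for i in owned for s in stencil}
            return (needed - owned) & global_indices

        n = 16
        global_indices = set(range(n))
        owned = set(range(4, 8))   # this process owns items 4..7
        stencil = {-1, 0, 1}       # three-point finite difference stencil
        print(sorted(exchange_set(owned, stencil, global_indices)))  # [3, 8]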

  11. A Parallel Numerical Micromagnetic Code Using FEniCS

    Science.gov (United States)

    Nagy, L.; Williams, W.; Mitchell, L.

    2013-12-01

    Many problems in the geosciences depend on understanding the ability of magnetic minerals to provide stable paleomagnetic recordings. Numerical micromagnetic modelling allows us to calculate the domain structures found in naturally occurring magnetic materials. However, the computational cost rises exceedingly quickly with respect to the size and complexity of the geometries that we wish to model. This problem is compounded by the fact that modern processor design no longer focuses on the speed at which calculations are performed, but rather on the number of computational units amongst which we may distribute our calculations. Consequently, to better exploit modern computational resources our micromagnetic simulations must "go parallel". We present a parallel and scalable micromagnetics code written using FEniCS. FEniCS is a multinational collaboration involving several institutions (University of Cambridge, University of Chicago, the Simula Research Laboratory, etc.) that aims to provide a set of tools for writing scientific software, in particular software that employs the finite element method. The advantages of this approach are the leveraging of pre-existing projects from the world of scientific computing (PETSc, Trilinos, Metis/Parmetis, etc.) and exposing these so that researchers may pose problems in a manner closer to the mathematical language of their domain. Our code provides a scriptable interface (in Python) that allows users not only to run micromagnetic models in parallel, but also to perform pre/post processing of data.

  12. Evaluation of Coverage and Barriers to Access to MAM Treatment in West Pokot County, Kenya

    International Nuclear Information System (INIS)

    Basquin, Cecile; Imelda, Awino; Gallagher, Maureen

    2014-01-01

    Full text: Despite an increased number of nutrition treatment coverage assessments conducted, they often focus on Severe Acute Malnutrition (SAM) treatment. In a recent experience in Kenya, Action Against Hunger | ACF International (ACF) conducted a coverage assessment to evaluate access to SAM and Moderate Acute Malnutrition (MAM) treatment. ACF has supported the Ministry of Health (MoH) in delivering SAM and MAM treatment at the health facility level through an Integrated Management of Acute Malnutrition (IMAM) programme in West Pokot county since 2011. In order to evaluate the coverage of the Outpatient Therapeutic Programme (OTP) and Supplementary Feeding Programme (SFP) components, the Simplified Lot Quality Assurance Sampling Evaluation of Access and Coverage (SLEAC) methodology was used. The goals of the coverage assessment were i) to estimate coverage for OTP and SFP; ii) to identify barriers to access to SAM and MAM treatment; iii) to evaluate whether any differences exist between barriers to access to SAM versus MAM treatment, as SFP coverage and uptake of MAM services had never been assessed before; and iv) to build local capacities in assessing coverage and to provide recommendations for the MoH-led IMAM programme. With the support of the Coverage Monitoring Network (CMN), ACF led the SLEAC assessment as part of an on-the-job training exercise for MoH and partners in July 2013, covering all of West Pokot county. SLEAC is a rapid and low-resource survey method that uses a three-tier classification approach to evaluate and classify coverage, i.e., low coverage: <20%; moderate: 20%-50%; and high coverage: >50%. In a first sampling stage, villages in each of the four sub-counties were randomly selected using systematic sampling. In a second sampling stage, in order to also assess MAM coverage, a house-to-house approach was applied to identify all or nearly all acutely malnourished children using Mid-Upper Arm Circumference (MUAC) tape and identification of bilateral

  13. 28 CFR 55.7 - Termination of coverage.

    Science.gov (United States)

    2010-07-01

    ... VOTING RIGHTS ACT REGARDING LANGUAGE MINORITY GROUPS Nature of Coverage § 55.7 Termination of coverage. (a) Section 4(f)(4). A covered State, a political subdivision of a covered State, or a separately covered political subdivision may terminate the application of section 4(f)(4) by obtaining the...

  14. Stability of parallel flows

    CERN Document Server

    Betchov, R

    2012-01-01

    Stability of Parallel Flows provides information pertinent to hydrodynamical stability. This book explores the stability problems that occur in various fields, including electronics, mechanics, oceanography, administration, economics, as well as naval and aeronautical engineering. Organized into two parts encompassing 10 chapters, this book starts with an overview of the general equations of a two-dimensional incompressible flow. This text then explores the stability of a laminar boundary layer and presents the equation of the inviscid approximation. Other chapters present the general equation

  15. On Connected Target k-Coverage in Heterogeneous Wireless Sensor Networks.

    Science.gov (United States)

    Yu, Jiguo; Chen, Ying; Ma, Liran; Huang, Baogui; Cheng, Xiuzhen

    2016-01-15

    Coverage and connectivity are two important performance evaluation indices for wireless sensor networks (WSNs). In this paper, we focus on the connected target k-coverage (CTCk) problem in heterogeneous wireless sensor networks (HWSNs). A centralized connected target k-coverage algorithm (CCTCk) and a distributed connected target k-coverage algorithm (DCTCk) are proposed so as to generate connected cover sets for energy-efficient connectivity and coverage maintenance. To be specific, our proposed algorithms aim at achieving minimum connected target k-coverage, where each target in the monitored region is covered by at least k active sensor nodes. In addition, these two algorithms strive to minimize the total number of active sensor nodes and guarantee that each sensor node is connected to a sink, such that the sensed data can be forwarded to the sink. Our theoretical analysis and simulation results show that our proposed algorithms outperform a state-of-the-art connected k-coverage protocol for HWSNs.
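
    The abstract does not reproduce the CCTCk/DCTCk algorithms themselves; purely as an illustration of the k-coverage objective (and omitting the connectivity-to-sink constraint the paper's algorithms also guarantee), a greedy heuristic can activate sensors until every target has at least k active coverers.

        # Illustrative greedy heuristic for target k-coverage; NOT the paper's
        # CCTCk/DCTCk algorithms, and connectivity to the sink is ignored.
        def greedy_k_cover(targets, sensors, covers, k):
            """covers[s] is the set of targets that sensor s can sense."""
            deficit = {t: k for t in targets}   # remaining coverage needed per target
            active = set()
            while any(d > 0 for d in deficit.values()):
                # activate the sensor covering the most still-deficient targets
                best = max((s for s in sensors if s not in active),
                           key=lambda s: sum(1 for t in covers[s] if deficit[t] > 0),
                           default=None)
                if best is None or all(deficit[t] == 0 for t in covers[best]):
                    raise ValueError("k-coverage infeasible with the given sensors")
                active.add(best)
                for t in covers[best]:
                    if deficit[t] > 0:
                        deficit[t] -= 1
            return active

        covers = {"s1": {"t1", "t2"}, "s2": {"t2", "t3"}, "s3": {"t3"}}
        print(greedy_k_cover({"t1", "t2", "t3"}, list(covers), covers, k=1))  # e.g. {'s1', 's2'}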

  16. Small file aggregation in a parallel computing system

    Science.gov (United States)

    Faibish, Sorin; Bent, John M.; Tzelnic, Percy; Grider, Gary; Zhang, Jingwang

    2014-09-02

    Techniques are provided for small file aggregation in a parallel computing system. An exemplary method for storing a plurality of files generated by a plurality of processes in a parallel computing system comprises aggregating the plurality of files into a single aggregated file; and generating metadata for the single aggregated file. The metadata comprises an offset and a length of each of the plurality of files in the single aggregated file. The metadata can be used to unpack one or more of the files from the single aggregated file.
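
    A toy Python illustration of the offset/length metadata idea described above (hypothetical helper names, not the patented implementation): many small files are packed into one aggregated file, and a sidecar metadata record allows any one of them to be unpacked later.

        import json, os

        def aggregate(paths, out_path):
            # pack the files back to back, recording (offset, length) per file
            meta = {}
            with open(out_path, "wb") as out:
                for p in paths:
                    with open(p, "rb") as src:
                        data = src.read()
                    meta[os.path.basename(p)] = {"offset": out.tell(), "length": len(data)}
                    out.write(data)
            with open(out_path + ".meta.json", "w") as m:
                json.dump(meta, m)

        def unpack(out_path, name):
            # use the recorded offset/length to read one file back out
            with open(out_path + ".meta.json") as m:
                rec = json.load(m)[name]
            with open(out_path, "rb") as f:
                f.seek(rec["offset"])
                return f.read(rec["length"])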

  17. A high performance parallel approach to medical imaging

    International Nuclear Information System (INIS)

    Frieder, G.; Frieder, O.; Stytz, M.R.

    1988-01-01

    Research into medical imaging using general purpose parallel processing architectures is described and a review of the performance of previous medical imaging machines is provided. Results demonstrating that general purpose parallel architectures can achieve performance comparable to other, specialized, medical imaging machine architectures are presented. A new back-to-front hidden-surface removal algorithm is described. Results demonstrating the computational savings obtained by using the modified back-to-front hidden-surface removal algorithm are presented. Performance figures for forming a full-scale medical image on a mesh interconnected multiprocessor are presented

  18. The island dynamics model on parallel quadtree grids

    Science.gov (United States)

    Mistani, Pouria; Guittet, Arthur; Bochkov, Daniil; Schneider, Joshua; Margetis, Dionisios; Ratsch, Christian; Gibou, Frederic

    2018-05-01

    We introduce an approach for simulating epitaxial growth by use of an island dynamics model on a forest of quadtree grids, and in a parallel environment. To this end, we use a parallel framework introduced in the context of the level-set method. This framework utilizes: discretizations that achieve a second-order accurate level-set method on non-graded adaptive Cartesian grids for solving the associated free boundary value problem for surface diffusion; and an established library for the partitioning of the grid. We consider the cases with: irreversible aggregation, which amounts to applying Dirichlet boundary conditions at the island boundary; and an asymmetric (Ehrlich-Schwoebel) energy barrier for attachment/detachment of atoms at the island boundary, which entails the use of a Robin boundary condition. We provide the scaling analyses performed on the Stampede supercomputer and numerical examples that illustrate the capability of our methodology to efficiently simulate different aspects of epitaxial growth. The combination of adaptivity and parallelism in our approach enables simulations that are several orders of magnitude faster than those reported in the recent literature and, thus, provides a viable framework for the systematic study of mound formation on crystal surfaces.

  19. Vacuum Large Current Parallel Transfer Numerical Analysis

    Directory of Open Access Journals (Sweden)

    Enyuan Dong

    2014-01-01

    Full Text Available The stable operation and reliable breaking of a large generator current is a difficult problem in power systems. It can be solved successfully by parallel interrupters and a proper timing sequence with phase-control technology, in which the breaker's control strategy is decided by the opening times of both the first-opening phase and the second-opening phase. A precise transfer-current model can provide the proper timing sequence for breaking the generator circuit breaker. By analysing transfer-current experiments and data, the real vacuum arc resistance and a precise corrected model of the large transfer-current process are obtained in this paper. The transfer time calculated by the corrected transfer-current model is very close to the actual transfer time. It can provide guidance for planning a proper timing sequence and breaking the vacuum generator circuit breaker with parallel interrupters.

  20. Expanding Kenya's protected areas under the Convention on Biological Diversity to maximize coverage of plant diversity.

    Science.gov (United States)

    Scherer, Laura; Curran, Michael; Alvarez, Miguel

    2017-04-01

    Biodiversity is highly valuable and critically threatened by anthropogenic degradation of the natural environment. In response, governments have pledged enhanced protected-area coverage, which requires scarce biological data to identify conservation priorities. To assist this effort, we mapped conservation priorities in Kenya based on maximizing alpha (species richness) and beta diversity (species turnover) of plant communities while minimizing economic costs. We used plant-cover percentages from vegetation surveys of over 2000 plots to build separate models for each type of diversity. Opportunity and management costs were based on literature data and interviews with conservation organizations. Species richness was predicted to be highest in a belt from Lake Turkana through Mount Kenya and in a belt parallel to the coast, and species turnover was predicted to be highest in western Kenya and along the coast. Our results suggest the expanding reserve network should focus on the coast and northeastern provinces of Kenya, where new biological surveys would also fill biological data gaps. Meeting the Convention on Biological Diversity target of 17% terrestrial coverage by 2020 would increase representation of Kenya's plant communities by 75%. However, this would require about 50 times more funds than Kenya has received thus far from the Global Environment Facility. © 2016 Society for Conservation Biology.

  1. Parallel implementation of the PHOENIX generalized stellar atmosphere program. II. Wavelength parallelization

    International Nuclear Information System (INIS)

    Baron, E.; Hauschildt, Peter H.

    1998-01-01

    We describe an important addition to the parallel implementation of our generalized nonlocal thermodynamic equilibrium (NLTE) stellar atmosphere and radiative transfer computer program PHOENIX. In a previous paper in this series we described data and task parallel algorithms we have developed for radiative transfer, spectral line opacity, and NLTE opacity and rate calculations. These algorithms divided the work spatially or by spectral lines, that is, distributing the radial zones, individual spectral lines, or characteristic rays among different processors and employ, in addition, task parallelism for logically independent functions (such as atomic and molecular line opacities). For finite, monotonic velocity fields, the radiative transfer equation is an initial value problem in wavelength, and hence each wavelength point depends upon the previous one. However, for sophisticated NLTE models of both static and moving atmospheres needed to accurately describe, e.g., novae and supernovae, the number of wavelength points is very large (200,000 - 300,000) and hence parallelization over wavelength can lead both to considerable speedup in calculation time and the ability to make use of the aggregate memory available on massively parallel supercomputers. Here, we describe an implementation of a pipelined design for the wavelength parallelization of PHOENIX, where the necessary data from the processor working on a previous wavelength point is sent to the processor working on the succeeding wavelength point as soon as it is known. Our implementation uses a MIMD design based on a relatively small number of standard message passing interface (MPI) library calls and is fully portable between serial and parallel computers. copyright 1998 The American Astronomical Society
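
    A skeletal mpi4py sketch of the pipelined wavelength idea (illustrative only: the real code passes full radiative-transfer state and overlaps successive iterations, not a single scalar sweep): rank r starts its block of wavelength points as soon as the upstream state from rank r-1 arrives, then forwards its final state to rank r+1.

        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        def my_wavelengths(rank, size, n_total=1000):
            # contiguous block of wavelength indices owned by this rank
            per = n_total // size
            lo = rank * per
            return range(lo, n_total if rank == size - 1 else lo + per)

        def solve_wavelength_point(state, lam):
            # stand-in for the radiative-transfer solve at one wavelength point
            return state + 1.0e-6 * lam

        state = comm.recv(source=rank - 1) if rank > 0 else 0.0  # upstream state
        for lam in my_wavelengths(rank, size):
            state = solve_wavelength_point(state, lam)
        if rank < size - 1:
            comm.send(state, dest=rank + 1)                      # feed the pipeline
        else:
            print("final state:", state)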

  2. User experience with a health insurance coverage and benefit-package access: implications for policy implementation towards expansion in Nigeria.

    Science.gov (United States)

    Mohammed, Shafiu; Aji, Budi; Bermejo, Justo Lorenzo; Souares, Aurelia; Dong, Hengjin; Sauerborn, Rainer

    2016-04-01

    Developing countries are devising strategies and mechanisms to expand coverage and benefit-package access for their citizens through national health insurance schemes (NHIS). In Nigeria, the scheme aims to provide affordable healthcare services to insured persons and their dependants. However, inclusion of dependants is restricted to four biological children and a spouse per user. This study assesses the progress of implementation of the NHIS in Nigeria, relating to coverage and benefit-package access, and examines individual factors associated with the implementation, according to users' perspectives. A retrospective, cross-sectional survey was done between October 2010 and March 2011 in Kaduna state, and 796 users were randomly interviewed. Questions regarding coverage of immediate-family members and access to the benefit package for treatment were analysed. Indicators of coverage and benefit-package access were each further aggregated and assessed by unit-weighted composite. An additive ordinary least squares regression model was used to identify user factors that may influence coverage and benefit-package access. With respect to coverage, immediate dependants were included for 62.3% of the users, and 49.6% of users rated this inclusion as 'good'. In contrast, 60.2% supported the abolishment of the policy restriction against including enrolees' additional children and spouses. With respect to benefit-package access, 82.7% of users had received full treatments, and 77.6% of them rated this as 'good'. Also, 14.4% of users had been refused treatments because they could not afford them. The coverage of immediate dependants was associated with age, sex, educational status, children and enrolment duration. The benefit-package access was associated with types of providers, marital status and duration of enrolment. This study revealed that coverage of family members was relatively poor, while benefit-package access was more adequate. Non-inclusion of family members could

  3. 12 CFR 717.20 - Coverage and definitions

    Science.gov (United States)

    2010-01-01

    ... REPORTING Affiliate Marketing § 717.20 Coverage and definitions (a) Coverage. Subpart C of this part applies...-existing business relationship. (i) In general. The term “pre-existing business relationship” means a relationship between a person, or a person's licensed agent, and a consumer based on— (A) A financial contract...

  4. Determinants of vaccination coverage among pastoralists in north ...

    African Journals Online (AJOL)

    Determinants of vaccination coverage among pastoralists in north eastern Kenya. ... Attitudes, and Practices (KAPs) on vaccination coverage among settled and ... We used a structured instrument to survey pastoralist mothers with children ...

  5. Societal Implications of Health Insurance Coverage for Medically Necessary Services in the U.S. Transgender Population: A Cost-Effectiveness Analysis.

    Science.gov (United States)

    Padula, William V; Heru, Shiona; Campbell, Jonathan D

    2016-04-01

    Recently, the Massachusetts Group Insurance Commission (GIC) prioritized research on the implications of a clause expressly prohibiting the denial of health insurance coverage for transgender-related services. These medically necessary services include primary and preventive care as well as transitional therapy. To analyze the cost-effectiveness of insurance coverage for medically necessary transgender-related services. Markov model with 5- and 10-year time horizons from a U.S. societal perspective, discounted at 3% (USD 2013). Data on outcomes were abstracted from the 2011 National Transgender Discrimination Survey (NTDS). U.S. transgender population starting before transitional therapy. No health benefits compared to health insurance coverage for medically necessary services. This coverage can lead to hormone replacement therapy, sex reassignment surgery, or both. Cost per quality-adjusted life year (QALY) for successful transition or negative outcomes (e.g. HIV, depression, suicidality, drug abuse, mortality) dependent on insurance coverage or no health benefit at a willingness-to-pay threshold of $100,000/QALY. Budget impact interpreted as the U.S. per-member-per-month cost. Compared to no health benefits for transgender patients ($23,619; 6.49 QALYs), insurance coverage for medically necessary services came at a greater cost and effectiveness ($31,816; 7.37 QALYs), with an incremental cost-effectiveness ratio (ICER) of $9314/QALY. The budget impact of this coverage is approximately $0.016 per member per month. Although the cost for transitions is $10,000-22,000 and the cost of provider coverage is $2175/year, these additional expenses hold good value for reducing the risk of negative endpoints--HIV, depression, suicidality, and drug abuse. Results were robust to uncertainty. The probabilistic sensitivity analysis showed that provider coverage was cost-effective in 85% of simulations. Health insurance coverage for the U.S. transgender population is affordable
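
    The reported ICER follows directly from the cost and QALY figures quoted in the abstract; a two-line check (values as printed above):

        # ICER = (cost_new - cost_old) / (QALY_new - QALY_old)
        cost_none, qaly_none = 23619.0, 6.49        # no health benefits
        cost_cov, qaly_cov = 31816.0, 7.37          # insurance coverage

        icer = (cost_cov - cost_none) / (qaly_cov - qaly_none)
        print(round(icer))   # ~9315 with these rounded inputs; the paper reports $9314/QALY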

  6. Interpregnancy intervals: impact of postpartum contraceptive effectiveness and coverage.

    Science.gov (United States)

    Thiel de Bocanegra, Heike; Chang, Richard; Howell, Mike; Darney, Philip

    2014-04-01

    The purpose of this study was to determine the use of contraceptive methods, defined by effectiveness and length of coverage, and their association with short interpregnancy intervals, when controlling for provider type and client demographics. We identified a cohort of 117,644 women from the 2008 California Birth Statistical Master file with a second- or higher-order birth and at least 1 Medicaid (Family Planning, Access, Care, and Treatment [Family PACT] program or Medi-Cal) claim within 18 months after the index birth. We explored the effect of contraceptive method provision on the odds of having an optimal interpregnancy interval, controlling for covariates. The average length of contraceptive coverage was 3.81 months (SD = 4.84). Most women received user-dependent hormonal contraceptives as their most effective contraceptive method (55%; n = 65,103 women) and one-third (33%; n = 39,090 women) had no contraceptive claim. Women who used long-acting reversible contraceptive methods had 3.89 times the odds and women who used user-dependent hormonal methods had 1.89 times the odds of achieving an optimal birth interval compared with women who used barrier methods only; women with no method had 0.66 times the odds. When user-dependent methods are considered, the odds of having an optimal birth interval increased by 8% for each additional month of contraceptive coverage (odds ratio, 1.08; 95% confidence interval, 1.08-1.09). Women who were seen by Family PACT or by both Family PACT and Medi-Cal providers had significantly higher odds of optimal birth intervals compared with women who were served by Medi-Cal only. To achieve optimal birth spacing and ultimately to improve birth outcomes, attention should be given to contraceptive counseling and access to contraceptive methods in the postpartum period. Copyright © 2014 Mosby, Inc. All rights reserved.

  7. Indonesia's road to universal health coverage: a political journey.

    Science.gov (United States)

    Pisani, Elizabeth; Olivier Kok, Maarten; Nugroho, Kharisma

    2017-03-01

    In 2013 Indonesia, the world's fourth most populous country, declared that it would provide affordable health care for all its citizens within seven years. This crystallised an ambition first enshrined in law over five decades earlier, but never previously realised. This paper explores Indonesia's journey towards universal health coverage (UHC) from independence to the launch of a comprehensive health insurance scheme in January 2014. We find that Indonesia's path has been determined largely by domestic political concerns – different groups obtained access to healthcare as their socio-political importance grew. A major inflection point occurred following the Asian financial crisis of 1997. To stave off social unrest, the government provided health coverage for the poor for the first time, creating a path dependency that influenced later policy choices. The end of this programme coincided with decentralisation, leading to experimentation with several different models of health provision at the local level. When direct elections for local leaders were introduced in 2005, popular health schemes led to success at the polls. UHC became an electoral asset, moving up the political agenda. It also became contested, with national policy-makers appropriating health insurance programmes that were first developed locally, and taking credit for them. The Indonesian experience underlines the value of policy experimentation, and of a close understanding of the contextual and political factors that drive successful UHC models at the local level. Specific drivers of success and failure should be taken into account when scaling UHC to the national level. In the Indonesian example, UHC became possible when the interests of politically and economically influential groups were either satisfied or neutralised. While technical considerations took a back seat to political priorities in developing the structures for health coverage nationally, they will have to be addressed going forward

  8. Parallel k-means++

    Energy Technology Data Exchange (ETDEWEB)

    2017-04-04

    A parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms allow people to cluster multidimensional data, by attempting to minimize the mean distance of data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVidia's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than it could be done before.
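
    The seeding rule that these ports parallelize can be stated compactly: each new seed is drawn with probability proportional to its squared distance from the nearest already-chosen seed. A serial numpy sketch follows (the distance computations in this loop are what the GPU/OpenMP/XMT versions distribute):

        import numpy as np

        def kmeanspp_seeds(X, k, rng=np.random.default_rng()):
            n = X.shape[0]
            seeds = [X[rng.integers(n)]]                 # first seed: uniform
            for _ in range(k - 1):
                # squared distance of every point to its nearest chosen seed
                d2 = np.min([np.sum((X - s) ** 2, axis=1) for s in seeds], axis=0)
                seeds.append(X[rng.choice(n, p=d2 / d2.sum())])  # D^2 weighting
            return np.stack(seeds)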

  9. Parallel magnetic resonance imaging

    International Nuclear Information System (INIS)

    Larkman, David J; Nunes, Rita G

    2007-01-01

    Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section (SENSE, SMASH, g-SMASH and GRAPPA), selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. How to recognize potential failure modes and their associated artefacts is shown. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed. (invited topical review)
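
    As a toy illustration of the core SENSE idea only (far simplified from a practical reconstruction): with acceleration factor R = 2, each coil's aliased pixel is a sensitivity-weighted sum of two true pixels, so the pair can be recovered by a per-location least-squares solve.

        import numpy as np

        def sense_unfold(aliased, sens):
            """aliased: (ncoils,) measured values at one aliased location;
            sens: (ncoils, 2) coil sensitivities at the two overlapped locations."""
            x, *_ = np.linalg.lstsq(sens, aliased, rcond=None)
            return x                         # the two unfolded pixel values

        S = np.array([[1.0, 0.2],
                      [0.3, 0.9]])           # 2 coils x 2 overlapped positions
        true_pix = np.array([5.0, 2.0])
        print(sense_unfold(S @ true_pix, S)) # -> [5. 2.]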

  10. 77 FR 8667 - Summary of Benefits and Coverage and Uniform Glossary

    Science.gov (United States)

    2012-02-14

    ... health plan. There are three general scenarios under which an SBC will be provided: (1) By a group health... enrolled, a plan or issuer generally has up to seven business days (rather than seven calendar days, as... comments regarding the coverage examples. Some comments supported the general approach in the proposed...

  11. Experiences in Data-Parallel Programming

    Directory of Open Access Journals (Sweden)

    Terry W. Clark

    1997-01-01

    Full Text Available To efficiently parallelize a scientific application with a data-parallel compiler requires certain structural properties in the source program, and conversely, the absence of others. A recent parallelization effort of ours reinforced this observation and motivated this correspondence. Specifically, we have transformed a Fortran 77 version of GROMOS, a popular dusty-deck program for molecular dynamics, into Fortran D, a data-parallel dialect of Fortran. During this transformation we have encountered a number of difficulties that probably are neither limited to this particular application nor do they seem likely to be addressed by improved compiler technology in the near future. Our experience with GROMOS suggests a number of points to keep in mind when developing software that may at some time in its life cycle be parallelized with a data-parallel compiler. This note presents some guidelines for engineering data-parallel applications that are compatible with Fortran D or High Performance Fortran compilers.

  12. 42 CFR 436.128 - Coverage for certain qualified aliens.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Coverage for certain qualified aliens. 436.128... Mandatory Coverage of the Categorically Needy § 436.128 Coverage for certain qualified aliens. The agency... § 440.255(c) of this chapter to those aliens described in § 436.406(c) of this subpart. [55 FR 36820...

  13. Assessing the value of the NHIS for studying changes in state coverage policies: the case of New York.

    Science.gov (United States)

    Long, Sharon K; Graves, John A; Zuckerman, Stephen

    2007-12-01

    (1) To assess the effects of New York's Health Care Reform Act of 2000 on the insurance coverage of eligible adults and (2) to explore the feasibility of using the National Health Interview Survey (NHIS), as opposed to the Current Population Survey (CPS), to conduct evaluations of state health reform initiatives. We take advantage of the natural experiment that occurred in New York to compare health insurance coverage for adults before and after the state implemented its coverage initiative, using a difference-in-differences framework. We estimate the effects of New York's initiative on insurance coverage using the NHIS, comparing the results to estimates based on the CPS, the most widely used data source for studies of state coverage policy changes. Although the sample sizes are smaller in the NHIS, the NHIS addresses a key limitation of the CPS for such evaluations by providing a better measure of health insurance status. Given the complexity of the timing of the expansion efforts in New York (which encompassed the September 11, 2001 terrorist attacks), we allow for differences in the effects of the state's policy changes over time. In particular, we allow for differences between the period of Disaster Relief Medicaid (DRM), a temporary program implemented immediately after September 11th, and the original components of the state's reform efforts: Family Health Plus (FHP), an expansion of direct Medicaid coverage, and Healthy New York (HNY), an effort to make private coverage more affordable. 2000-2004 CPS; 1999-2004 NHIS. We find evidence of a significant reduction in uninsurance for parents in New York, particularly in the period following DRM. For childless adults, for whom the coverage expansion was more circumscribed, the program effects are less promising, as we find no evidence of a significant decline in uninsurance. The success of New York at reducing uninsurance for parents through expansions of both public and private coverage offers hope for new

  14. Declarative Parallel Programming in Spreadsheet End-User Development

    DEFF Research Database (Denmark)

    Biermann, Florian

    2016-01-01

    Spreadsheets are first-order functional languages and are widely used in research and industry as a tool to conveniently perform all kinds of computations. Because cells on a spreadsheet are immutable, there are possibilities for implicit parallelization of spreadsheet computations. In this literature study, we provide an overview of the publications on spreadsheet end-user programming and declarative array programming to inform further research on parallel programming in spreadsheets. Our results show that there is a clear overlap between spreadsheet programming and array programming and we can directly apply results from functional array programming to a spreadsheet model of computations.

  15. Legislating health care coverage for the unemployed.

    Science.gov (United States)

    Palley, H A; Feldman, G; Gallner, I; Tysor, M

    1985-01-01

    Because the unemployed and their families are often likely to develop stress-related health problems, ensuring them access to health care is a public health issue. Congressional efforts thus far to legislate health coverage for the unemployed have proposed a system that recognizes people's basic need for coverage but has several limitations.

  16. Influence of Paralleling Dies and Paralleling Half-Bridges on Transient Current Distribution in Multichip Power Modules

    DEFF Research Database (Denmark)

    Li, Helong; Zhou, Wei; Wang, Xiongfei

    2018-01-01

    This paper addresses the transient current distribution in the multichip half-bridge power modules, where two types of paralleling connections with different current commutation mechanisms are considered: paralleling dies and paralleling half-bridges. It reveals that with paralleling dies, both t...

  17. Assessment of Effective Coverage of Voluntary Counseling and ...

    African Journals Online (AJOL)

    Assessment of Effective Coverage of Voluntary Counseling and Testing ... The objective of this study was to assess effective coverage level for Voluntary Counseling and testing services in major health facilities ...

  18. Increasing Coverage of Hepatitis B Vaccination in China: A Systematic Review of Interventions and Implementation Experiences.

    Science.gov (United States)

    Wang, Shengnan; Smith, Helen; Peng, Zhuoxin; Xu, Biao; Wang, Weibing

    2016-05-01

    This study used a systematic evaluation method to summarize China's experience in improving the coverage of hepatitis B vaccine, especially the strategies employed to improve the uptake of the timely birth dose. Identifying successful methods and strategies will provide strong evidence for policy makers and health workers in other countries with high hepatitis B prevalence. We conducted a literature review that included English- or Chinese-language studies carried out in mainland China, using PubMed, the Cochrane databases, Web of Knowledge, China National Knowledge Infrastructure, Wanfang Data, and other relevant databases. Nineteen articles about the effectiveness and impact of interventions on improving the coverage of hepatitis B vaccine were included. Strong or moderate evidence showed that reinforcing health education, training and supervision, providing subsidies for facility birth, strengthening the coordination among health care providers, and using out-of-cold-chain storage for vaccines were all important to improving vaccination coverage. We found evidence that community education was the most commonly used intervention, and that outreach programs such as the out-of-cold-chain strategy were more effective in increasing vaccination coverage in remote areas where the facility birth rate was relatively low. The essential impact factors were found to be strong government commitment and the cooperation of different government departments. Public interventions relying on basic health care systems combined with outreach care services were critical elements in improving the hepatitis B vaccination rate in China. This success could not have occurred without exceptional national commitment.

  19. Regulating the for-profit private healthcare providers towards universal health coverage: A qualitative study of legal and organizational framework in Mongolia.

    Science.gov (United States)

    Tsevelvaanchig, Uranchimeg; Narula, Indermohan S; Gouda, Hebe; Hill, Peter S

    2018-01-01

    Regulating the behavior of private providers in the context of mixed health systems has become increasingly important and challenging in many developing countries moving towards universal health coverage, including Mongolia. This study examines the current regulatory architecture for private healthcare in Mongolia, exploring its role in improving the accessibility, affordability, and quality of private care, and identifies gaps in policy design and implementation. Qualitative research methods were used, including documentary review and analysis and in-depth interviews with 45 representatives of key actors involved in and affected by regulations in Mongolia's mixed health system, along with long-term participant observation. Extensive legal documentation regulating private healthcare has been developed, with specific organizations assigned to conduct health regulations and inspections. However, the regulatory architecture for healthcare in Mongolia is not optimally designed to improve the affordability and quality of private care. This is not limited to private care: important regulatory functions targeted at quality of care do not exist at the national level. The imprecise content and details of regulations in laws, inviting increased political interference, governance issues, and unclear roles and responsibilities of different government regulatory bodies have contributed to failures in the implementation of existing regulations. Copyright © 2017 John Wiley & Sons, Ltd.

  20. A Laboratory Preparation of Aspartame Analogs Using Simultaneous Multiple Parallel Synthesis Methodology

    Science.gov (United States)

    Qvit, Nir; Barda, Yaniv; Gilon, Chaim; Shalev, Deborah E.

    2007-01-01

    This laboratory experiment provides a unique opportunity for students to synthesize three analogues of aspartame, a commonly used artificial sweetener. The students are introduced to the powerful and useful method of parallel synthesis while synthesizing three dipeptides in parallel using solid-phase peptide synthesis (SPPS) and simultaneous…

  1. Tuning HDF5 subfiling performance on parallel file systems

    Energy Technology Data Exchange (ETDEWEB)

    Byna, Suren [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chaarawi, Mohamad [Intel Corp. (United States); Koziol, Quincey [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Mainzer, John [The HDF Group (United States); Willmore, Frank [The HDF Group (United States)

    2017-05-12

    Subfiling is a technique used on parallel file systems to reduce locking and contention issues when multiple compute nodes interact with the same storage target node. Subfiling provides a compromise between the single-shared-file approach, which instigates lock contention problems on parallel file systems, and having one file per process, which results in generating a massive and unmanageable number of files. In this paper, we evaluate and tune the performance of the recently implemented subfiling feature in HDF5. Specifically, we explain the implementation strategy of the subfiling feature in HDF5, provide examples of using the feature, and evaluate and tune the parallel I/O performance of this feature on the parallel file systems of the Cray XC40 system at NERSC (Cori), which include burst buffer storage and Lustre disk-based storage. We also evaluate I/O performance on the Cray XC30 system, Edison, at NERSC. Our results show performance benefits of 1.2x to 6x with subfiling compared to writing a single shared HDF5 file. We present our exploration of configurations, such as the number of subfiles and the number of Lustre storage targets for storing files, as optimization parameters to obtain superior I/O performance. Based on this exploration, we discuss recommendations for achieving good I/O performance as well as limitations of using the subfiling feature.
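
    For context, the single-shared-file baseline that subfiling is compared against looks like the following with h5py built against a parallel HDF5 (a generic parallel-I/O sketch, not the subfiling API itself): every rank writes its own slab of one dataset in one shared file via MPI-IO.

        from mpi4py import MPI
        import h5py
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
        n = 1024                                     # elements per rank

        # one shared file, one dataset, disjoint slabs per rank
        with h5py.File("shared.h5", "w", driver="mpio", comm=comm) as f:
            dset = f.create_dataset("x", (size * n,), dtype="f8")
            dset[rank * n:(rank + 1) * n] = np.full(n, rank, dtype="f8")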

  2. US Media Coverage of Tobacco Industry Corporate Social Responsibility Initiatives.

    Science.gov (United States)

    McDaniel, Patricia A; Lown, E Anne; Malone, Ruth E

    2018-02-01

    Media coverage of tobacco industry corporate social responsibility (CSR) initiatives represents a competitive field where tobacco control advocates and the tobacco industry vie to shape public and policymaker understandings about tobacco control and the industry. Through a content analysis of 649 US news items, we examined US media coverage of tobacco industry CSR and identified characteristics of media items associated with positive coverage. Most coverage appeared in local newspapers, and CSR initiatives unrelated to tobacco, with non-controversial beneficiaries, were most commonly mentioned. Coverage was largely positive. Tobacco control advocates were infrequently cited as sources and rarely authored opinion pieces; however, when their voices were included, coverage was less likely to have a positive slant. Media items published in the South, home to several tobacco company headquarters, were more likely than those published in the West to have a positive slant. The absence of tobacco control advocates from media coverage represents a missed opportunity to influence opinion regarding the negative public health implications of tobacco industry CSR. Countering the media narrative of virtuous companies doing good deeds could be particularly beneficial in the South, where the burdens of tobacco-caused disease are greatest, and coverage of tobacco companies more positive.

  3. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    Science.gov (United States)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues in parallel architectures and parallel algorithms for integrated vision systems are addressed.

  4. Dynamic partitioning and coverage control with asynchronous one-to-base-station communication

    NARCIS (Netherlands)

    Patel, Rushabh; Frasca, Paolo; Durham, Joseph W.; Carli, Ruggero; Bullo, Francesco

    We propose algorithms to automatically deploy a group of mobile robots and provide coverage of a non-convex environment with communication limitations. In settings such as hilly terrain or for underwater ocean gliders, peer-to-peer communication can be impossible and frequent communication to a

  5. 29 CFR 1620.7 - “Enterprise” coverage.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 4 2010-07-01 2010-07-01 false “Enterprise” coverage. 1620.7 Section 1620.7 Labor Regulations Relating to Labor (Continued) EQUAL EMPLOYMENT OPPORTUNITY COMMISSION THE EQUAL PAY ACT § 1620.7 “Enterprise” coverage. (a) The terms “enterprise” and “enterprise engaged in commerce or in the production of...

  6. High Temporal and Spatial Resolution Coverage of Earth from Commercial AVSTAR Systems in Geostationary Orbit

    Science.gov (United States)

    Lecompte, M. A.; Heaps, J. F.; Williams, F. H.

    Imaging the Earth from geostationary Earth orbit (GEO) allows frequent updates of environmental conditions within an observable hemisphere at time and spatial scales appropriate to the most transient observable terrestrial phenomena. Coverage provided by current GEO meteorological satellites (METSATs) fails to fully exploit this advantage, due primarily to obsolescent technology and also to institutional inertia. With the full benefit of GEO-based imaging unrealized, rapidly evolving phenomena, occurring at the smallest spatial and temporal scales, that frequently have significant environmental impact remain unobserved. These phenomena may be precursors of the most destructive natural processes that adversely affect society. Timely distribution of information derived from "real-time" observations thus may provide opportunities to mitigate much of the damage to life and property that would otherwise occur. AstroVision International's AVStar Earth monitoring system is designed to overcome the current limitations of GEO Earth coverage and to provide real-time monitoring of changes to the Earth's complete atmospheric, land, and marine surface environments, including fires, volcanic events, lightning, and meteoritic events, on a "live," true color, and multispectral basis. The understanding of severe storm dynamics and their coupling to the Earth's electrosphere will be greatly enhanced by observations at unprecedented sampling frequencies and spatial resolution. Better understanding of these natural phenomena and AVStar operational real-time coverage may also benefit society through improvements in severe weather prediction and warning. AstroVision's AVStar system, designed to provide this capability with the first of a constellation of GEO-based commercial environmental monitoring satellites to be launched in late 2003, will be discussed, including spatial and temporal resolution, spectral coverage with applications, and an inventory of the potential benefits to society

  7. Pattern-Driven Automatic Parallelization

    Directory of Open Access Journals (Sweden)

    Christoph W. Kessler

    1996-01-01

    Full Text Available This article describes a knowledge-based system for automatic parallelization of a wide class of sequential numerical codes operating on vectors and dense matrices, and for execution on distributed memory message-passing multiprocessors. Its main feature is a fast and powerful pattern recognition tool that locally identifies frequently occurring computations and programming concepts in the source code. This tool also works for dusty deck codes that have been "encrypted" by former machine-specific code transformations. Successful pattern recognition guides sophisticated code transformations including local algorithm replacement such that the parallelized code need not emerge from the sequential program structure by just parallelizing the loops. It allows access to an expert's knowledge on useful parallel algorithms, available machine-specific library routines, and powerful program transformations. The partially restored program semantics also supports local array alignment, distribution, and redistribution, and allows for faster and more exact prediction of the performance of the parallelized target code than is usually possible.
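
    The flavor of transformation such pattern recognizers automate, shown as a Python analogy rather than the system's Fortran output: a sequential accumulation loop is recognized as an inner product and replaced by an optimized, parallelizable library kernel.

        import numpy as np

        a, b = np.random.rand(10**5), np.random.rand(10**5)

        # "dusty deck" form: an explicit accumulation loop
        s = 0.0
        for i in range(len(a)):
            s += a[i] * b[i]

        # recognized concept: inner product -> replaced by a library kernel
        s_fast = float(np.dot(a, b))
        assert abs(s - s_fast) < 1e-6 * abs(s_fast)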

  8. MC/DC and Toggle Coverage Measurement Tool for FBD Program Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Eui Sub; Jung, Se Jin; Kim, Jae Yeob; Yoo, Jun Beom [Konkuk University, Seoul (Korea, Republic of)

    2016-05-15

    The functional verification of an FBD program can be implemented with various techniques such as testing and simulation. Simulation is preferable for verifying an FBD program because it also replicates the operation of the PLC. The PLC executes repeatedly, based on its scan time, for as long as the controlled system is running; likewise, the simulation technique operates continuously and sequentially. Although engineers try to verify the functionality fully, it is difficult to find residual errors in the design. Even if 100% functional coverage is accomplished, code coverage may reach only 50%, which might indicate that the scenario is missing some key features of the design. Unfortunately, errors and bugs are often found at these missing points. To assure a high quality of functional verification, code coverage is as important as functional coverage. We developed a pair of tools, 'FBDSim' and 'FBDCover', for FBD simulation and coverage measurement. 'FBDSim' automatically simulates a set of FBD simulation scenarios. While 'FBDSim' simulates the FBD program, it calculates MC/DC and toggle coverage and identifies unstimulated points. After FBD simulation is done, 'FBDCover' reads the coverage results and shows the coverage graphically and the uncovered points in a tree view. The coverages and uncovered points can help engineers improve the quality of simulation. We deal with both coverages only briefly here, but the coverage measurement itself is handled in a concrete and rigorous manner.
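
    As a minimal illustration of the toggle-coverage notion only (not the FBDSim/FBDCover implementation): a boolean signal counts as toggled once it has been observed at both 0 and 1 over the simulated scan cycles, and coverage is the toggled fraction.

        def toggle_coverage(traces):
            """traces: dict signal -> list of 0/1 values, one per scan cycle."""
            toggled = [s for s, vals in traces.items() if {0, 1} <= set(vals)]
            return len(toggled) / len(traces), sorted(set(traces) - set(toggled))

        cov, untoggled = toggle_coverage({"pump_on": [0, 1, 1, 0],
                                          "alarm":   [0, 0, 0, 0]})
        print(cov, untoggled)   # 0.5 ['alarm']  -> 'alarm' was never stimulated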

  9. Defining the minimum anatomical coverage required to protect the axilla and arm against penetrating ballistic projectiles.

    Science.gov (United States)

    Breeze, Johno; Fryer, R; Lewis, E A; Clasper, J

    2016-08-01

    Defining the minimum anatomical structural coverage required to protect from ballistic threats is necessary to enable objective comparisons between body armour designs. Current protection for the axilla and arm is in the form of brassards, but no evidence exists to justify the coverage that should be provided by them. A systematic review was undertaken to ascertain which anatomical components within the arm or axilla would be highly likely to lead to either death within 60 min or would cause significant long-term morbidity. Haemorrhage from vascular damage to the axillary or brachial vessels was demonstrated to be the principal cause of mortality from arm trauma on combat operations. Peripheral nerve injuries are the primary cause of long-term morbidity and functional disability following upper extremity arterial trauma. Haemorrhage is managed through direct pressure and the application of a tourniquet. It is therefore recommended that the minimum coverage should be the most proximal extent to which a tourniquet can be applied. Superimposition of OSPREY brassards over these identified anatomical structures demonstrates that current coverage provided by the brassards could potentially be reduced. Published by the BMJ Publishing Group Limited.

  10. The Research of the Parallel Computing Development from the Angle of Cloud Computing

    Science.gov (United States)

    Peng, Zhensheng; Gong, Qingge; Duan, Yanyu; Wang, Yun

    2017-10-01

    Cloud computing is the development of parallel computing, distributed computing, and grid computing. The development of cloud computing makes parallel computing come into people's lives. Firstly, this paper expounds the concept of cloud computing and introduces several traditional parallel programming models. Secondly, it analyzes and studies the principles, advantages, and disadvantages of OpenMP, MPI, and MapReduce, respectively. Finally, it compares the MPI and OpenMP models with MapReduce from the angle of cloud computing. The results of this paper are intended to provide a reference for the development of parallel computing.
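
    A tiny mpi4py example of the MPI model discussed above, with explicit ranks and explicit communication, which is what distinguishes it from shared-memory OpenMP or framework-managed MapReduce (run with, e.g., mpirun -np 4 python partial_sum.py):

        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        # each rank sums a disjoint strided slice of the series 1/i^2
        local = sum(1.0 / (i * i) for i in range(rank + 1, 10**6, size))

        total = comm.reduce(local, op=MPI.SUM, root=0)
        if rank == 0:
            print(total)      # ~ pi^2 / 6, assembled from per-rank partial sums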

  11. Data communications in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-10-29

    Data communications in a parallel active messaging interface ('PAMI') of a parallel computer, the parallel computer including a plurality of compute nodes that execute a parallel application, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes and the endpoints coupled for data communications through the PAMI and through data communications resources, including receiving in an origin endpoint of the PAMI a data communications instruction, the instruction characterized by an instruction type, the instruction specifying a transmission of transfer data from the origin endpoint to a target endpoint, and transmitting, in accordance with the instruction type, the transfer data from the origin endpoint to the target endpoint.

  12. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,; Fidel, Adam; Amato, Nancy M.; Rauchwerger, Lawrence

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable

  13. Increasing coverage and decreasing inequity in insecticide-treated bed net use among rural Kenyan children.

    Directory of Open Access Journals (Sweden)

    Abdisalan M Noor

    2007-08-01

    Full Text Available Inexpensive and efficacious interventions that avert childhood deaths in sub-Saharan Africa have failed to reach effective coverage, especially among the poorest rural sectors. One particular example is insecticide-treated bed nets (ITNs). In this study, we present repeat observations of ITN coverage among rural Kenyan homesteads exposed at different times to a range of delivery models, and assess changes in coverage across socioeconomic groups. We undertook a study of annual changes in ITN coverage among a cohort of 3,700 children aged 0-4 y in four districts of Kenya (Bondo, Greater Kisii, Kwale, and Makueni) annually between 2004 and 2006. Cross-sectional surveys of ITN coverage were undertaken to coincide with the incremental availability of commercial sector nets (2004), the introduction of heavily subsidized nets through clinics (2005), and the introduction of free mass-distributed ITNs (2006). The changing prevalence of ITN coverage was examined with special reference to the degree of equity in each delivery approach. ITN coverage was only 7.1% in 2004, when the predominant source of nets was the commercial retail sector. By the end of 2005, following the expansion of the heavily subsidized clinic distribution system, ITN coverage rose to 23.5%. In 2006 a large-scale mass distribution of ITNs was mounted, providing nets free of charge to children and resulting in a dramatic increase in ITN coverage to 67.3%. With each subsequent survey, socioeconomic inequity in net coverage sequentially decreased: 2004 (most poor [2.9%] versus least poor [15.6%]; concentration index 0.281); 2005 (most poor [17.5%] versus least poor [37.9%]; concentration index 0.131); and 2006, with near-perfect equality (most poor [66.3%] versus least poor [66.6%]; concentration index 0.000). The free mass distribution method achieved the highest coverage among the poorest children, the highly subsidised clinic nets programme was marginally in favour of the least poor, and the commercial

  14. Parallel Computation of the Jacobian Matrix for Nonlinear Equation Solvers Using MATLAB

    Science.gov (United States)

    Rose, Geoffrey K.; Nguyen, Duc T.; Newman, Brett A.

    2017-01-01

    Demonstrating speedup for parallel code on a multicore shared-memory PC can be challenging in MATLAB due to underlying parallel operations that are often opaque to the user. This can limit the potential for improvement of serial code, even for so-called embarrassingly parallel applications. One such application is the computation of the Jacobian matrix inherent to most nonlinear equation solvers. Computation of this matrix represents the primary bottleneck in nonlinear solver speed, such that commercial finite element (FE) and multi-body-dynamic (MBD) codes attempt to minimize computations. A timing study using MATLAB's Parallel Computing Toolbox was performed for numerical computation of the Jacobian. Several approaches for implementing parallel code were investigated, but only the single program multiple data (spmd) method using composite objects provided positive results. Parallel code speedup is demonstrated, but the goal of linear speedup through the addition of processors was not achieved due to PC architecture.
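
    The embarrassingly parallel structure described here, sketched in Python with a process pool rather than MATLAB spmd: each forward-difference Jacobian column requires one independent function evaluation, so the columns can be farmed out to workers.

        import numpy as np
        from multiprocessing import Pool

        def f(x):                                  # example system, R^2 -> R^2
            return np.array([x[0] ** 2 + x[1], np.sin(x[1])])

        def jac_column(args):
            x, fx, j, h = args
            xp = x.copy()
            xp[j] += h
            return (f(xp) - fx) / h                # one forward difference

        def parallel_jacobian(x, h=1e-6, procs=2):
            fx = f(x)
            with Pool(procs) as pool:
                cols = pool.map(jac_column, [(x, fx, j, h) for j in range(x.size)])
            return np.column_stack(cols)

        if __name__ == "__main__":
            print(parallel_jacobian(np.array([1.0, 0.5])))
            # [[2. 1.], [0. cos(0.5)]] up to O(h) truncation error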

  15. Frontiers of massively parallel scientific computation

    International Nuclear Information System (INIS)

    Fischer, J.R.

    1987-07-01

    Practical applications using massively parallel computer hardware first appeared during the 1980s. Their development was motivated by the need for computing power orders of magnitude beyond that available today for tasks such as numerical simulation of complex physical and biological processes, generation of interactive visual displays, satellite image analysis, and knowledge based systems. Representative of the first generation of this new class of computers is the Massively Parallel Processor (MPP). A team of scientists was provided the opportunity to test and implement their algorithms on the MPP. The first results are presented. The research spans a broad variety of applications including Earth sciences, physics, signal and image processing, computer science, and graphics. The performance of the MPP was very good. Results obtained using the Connection Machine and the Distributed Array Processor (DAP) are presented

  16. Leveraging Cloud Heterogeneity for Cost-Efficient Execution of Parallel Applications

    OpenAIRE

    Roloff, Eduardo; Diener, Matthias; Diaz Carreño, Emmanuell; Gaspary, Luciano Paschoal; Navaux, Philippe O.A.

    2017-01-01

    Public cloud providers offer a wide range of instance types, with different processing and interconnection speeds, as well as varying prices. Furthermore, the tasks of many parallel applications show different computational demands due to load imbalance. These differences can be exploited for improving the cost efficiency of parallel applications in many cloud environments by matching application requirements to instance types. In this paper, we introduce the concept of heterogeneous cloud sy...

  17. Parallelism and array processing

    International Nuclear Information System (INIS)

    Zacharov, V.

    1983-01-01

    Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and that of applications that can exploit such systems. The lectures seek to discuss parallelism in a historical context, and to identify all the main aspects of concurrency in computation right up to the present time. Included will be consideration of the important question as to what use parallelism might be in the field of data processing. (orig.)

  18. Diderot: a Domain-Specific Language for Portable Parallel Scientific Visualization and Image Analysis.

    Science.gov (United States)

    Kindlmann, Gordon; Chiw, Charisee; Seltzer, Nicholas; Samuels, Lamont; Reppy, John

    2016-01-01

    Many algorithms for scientific visualization and image analysis are rooted in the world of continuous scalar, vector, and tensor fields, but are programmed in low-level languages and libraries that obscure their mathematical foundations. Diderot is a parallel domain-specific language that is designed to bridge this semantic gap by providing the programmer with a high-level, mathematical programming notation that allows direct expression of mathematical concepts in code. Furthermore, Diderot provides parallel performance that takes advantage of modern multicore processors and GPUs. The high-level notation allows a concise and natural expression of the algorithms and the parallelism allows efficient execution on real-world datasets.

  19. RCT: Module 2.11, Radiological Work Coverage, Course 8777

    Energy Technology Data Exchange (ETDEWEB)

    Hillmer, Kurt T. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-07-20

    Radiological work is usually approved and controlled by radiation protection personnel by using administrative and procedural controls, such as radiological work permits (RWPs). In addition, some jobs will require working in, or will have the potential for creating, very high radiation, contamination, or airborne radioactivity areas. Radiological control technicians (RCTs) providing job coverage have an integral role in controlling radiological hazards. This course will prepare the student with the skills necessary for RCT qualification, assessed by passing quizzes, tests, and the RCT Comprehensive Phase 1, Unit 2 Examination (TEST 27566), and will provide in-the-field skills.

  20. Greedy Sparse Approaches for Homological Coverage in Location Unaware Sensor Networks

    Science.gov (United States)

    2017-12-08

    [Full text garbled in extraction; the recoverable fragments describe sparse greedy approaches to homological coverage in location-unaware sensor networks, covering coverage hole detection, coverage verification (ensuring there is no coverage gap or hole), and hole localization, together with a distributed rule in which a node that broadcasts itself as a candidate for collapse is deemed unneeded when all of its neighbors broadcast themselves as non-candidates.]

  1. Frame-Based and Subpicture-Based Parallelization Approaches of the HEVC Video Encoder

    Directory of Open Access Journals (Sweden)

    Héctor Migallón

    2018-05-01

    Full Text Available The most recent video coding standard, High Efficiency Video Coding (HEVC), is able to significantly improve the compression performance at the expense of a huge computational complexity increase with respect to its predecessor, H.264/AVC. Parallel versions of the HEVC encoder may help to reduce the overall encoding time in order to make it more suitable for practical applications. In this work, we study two parallelization strategies. One of them follows a coarse-grain approach, where parallelization is based on frames, and the other one follows a fine-grain approach, where parallelization is performed at subpicture level. Two different frame-based approaches have been developed. The first one only uses MPI and the second one is a hybrid MPI/OpenMP algorithm. An exhaustive experimental test was carried out to study the performance of both approaches in order to find out the best setup in terms of parallel efficiency and coding performance. Both frame-based and subpicture-based approaches are compared under the same hardware platform. Although subpicture-based schemes provide an excellent performance with high-resolution video sequences, scalability is limited by resolution, and the coding performance worsens by increasing the number of processes. Conversely, the proposed frame-based approaches provide the best results with respect to both parallel performance (increasing scalability) and coding performance (not degrading the rate/distortion behavior).

  2. Vectorization, parallelization and porting of nuclear codes (vectorization and parallelization). Progress report fiscal 1998

    International Nuclear Information System (INIS)

    Ishizuki, Shigeru; Kawai, Wataru; Nemoto, Toshiyuki; Ogasawara, Shinobu; Kume, Etsuo; Adachi, Masaaki; Kawasaki, Nobuo; Yatake, Yo-ichi

    2000-03-01

    Several computer codes in the nuclear field have been vectorized, parallelized and ported to the FUJITSU VPP500 system, the AP3000 system and the Paragon system at the Center for Promotion of Computational Science and Engineering at the Japan Atomic Energy Research Institute. We dealt with 12 codes in fiscal 1998. These results are reported in 3 parts, i.e., the vectorization and parallelization on vector processors part, the parallelization on scalar processors part and the porting part. In this report, we describe the vectorization and parallelization on vector processors. In this vectorization and parallelization on vector processors part, the vectorization of General Tokamak Circuit Simulation Program code GTCSP, the vectorization and parallelization of Molecular Dynamics NTV (n-particle, Temperature and Velocity) Simulation code MSP2, Eddy Current Analysis code EDDYCAL, Thermal Analysis Code for Test of Passive Cooling System by HENDEL T2 code THANPACST2 and MHD Equilibrium code SELENEJ on the VPP500 are described. In the parallelization on scalar processors part, the parallelization of Monte Carlo N-Particle Transport code MCNP4B2, Plasma Hydrodynamics code using Cubic Interpolated Propagation Method PHCIP and Vectorized Monte Carlo code (continuous energy model / multi-group model) MVP/GMVP on the Paragon are described. In the porting part, the porting of Monte Carlo N-Particle Transport code MCNP4B2 and Reactor Safety Analysis code RELAP5 on the AP3000 are described. (author)

  3. Footprints: A Visual Search Tool that Supports Discovery and Coverage Tracking.

    Science.gov (United States)

    Isaacs, Ellen; Domico, Kelly; Ahern, Shane; Bart, Eugene; Singhal, Mudita

    2014-12-01

    Searching a large document collection to learn about a broad subject involves the iterative process of figuring out what to ask, filtering the results, identifying useful documents, and deciding when one has covered enough material to stop searching. We are calling this activity "discoverage," discovery of relevant material and tracking coverage of that material. We built a visual analytic tool called Footprints that uses multiple coordinated visualizations to help users navigate through the discoverage process. To support discovery, Footprints displays topics extracted from documents that provide an overview of the search space and are used to construct searches visuospatially. Footprints allows users to triage their search results by assigning a status to each document (To Read, Read, Useful), and those status markings are shown on interactive histograms depicting the user's coverage through the documents across dates, sources, and topics. Coverage histograms help users notice biases in their search and fill any gaps in their analytic process. To create Footprints, we used a highly iterative, user-centered approach in which we conducted many evaluations during both the design and implementation stages and continually modified the design in response to feedback.

  4. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

    In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking which leads to efficient solutions to problems on trees, such as computing lowest… an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.
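
    List ranking, the fundamental problem named above, is classically parallelized by pointer jumping. The Python sketch below runs the rounds sequentially for clarity; within a round every update is independent, which is what a PEM-style algorithm distributes across processors. It illustrates the generic technique, not the paper's PEM construction.

      # Pointer jumping: rank[v] = distance from v to the tail of the list.
      def list_rank(succ):
          n = len(succ)
          rank = [0 if succ[i] == i else 1 for i in range(n)]
          nxt = list(succ)
          for _ in range(max(1, n.bit_length())):  # O(log n) rounds suffice
              rank = [rank[i] + rank[nxt[i]] for i in range(n)]
              nxt = [nxt[nxt[i]] for i in range(n)]
          return rank

      # List 0 -> 1 -> 2 -> 3, where node 3 (the tail) points to itself.
      print(list_rank([1, 2, 3, 3]))  # -> [3, 2, 1, 0]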

  5. Global yellow fever vaccination coverage from 1970 to 2016: an adjusted retrospective analysis.

    Science.gov (United States)

    Shearer, Freya M; Moyes, Catherine L; Pigott, David M; Brady, Oliver J; Marinho, Fatima; Deshpande, Aniruddha; Longbottom, Joshua; Browne, Annie J; Kraemer, Moritz U G; O'Reilly, Kathleen M; Hombach, Joachim; Yactayo, Sergio; de Araújo, Valdelaine E M; da Nóbrega, Aglaêr A; Mosser, Jonathan F; Stanaway, Jeffrey D; Lim, Stephen S; Hay, Simon I; Golding, Nick; Reiner, Robert C

    2017-11-01

    Substantial outbreaks of yellow fever in Angola and Brazil in the past 2 years, combined with global shortages in vaccine stockpiles, highlight a pressing need to assess present control strategies. The aims of this study were to estimate global yellow fever vaccination coverage from 1970 through to 2016 at high spatial resolution and to calculate the number of individuals still requiring vaccination to reach population coverage thresholds for outbreak prevention. For this adjusted retrospective analysis, we compiled data from a range of sources (eg, WHO reports and health-service-provider registries) reporting on yellow fever vaccination activities between May 1, 1939, and Oct 29, 2016. To account for uncertainty in how vaccine campaigns were targeted, we calculated three population coverage values to encompass alternative scenarios. We combined these data with demographic information and tracked vaccination coverage through time to estimate the proportion of the population who had ever received a yellow fever vaccine for each second-level administrative division across countries at risk of yellow fever virus transmission from 1970 to 2016. Overall, substantial increases in vaccine coverage have occurred since 1970, but notable gaps still exist in contemporary coverage within yellow fever risk zones. We estimate that between 393·7 million and 472·9 million people still require vaccination in areas at risk of yellow fever virus transmission to achieve the 80% population coverage threshold recommended by WHO; this represents between 43% and 52% of the population within yellow fever risk zones, compared with between 66% and 76% of the population who would have required vaccination in 1970. Our results highlight important gaps in yellow fever vaccination coverage, can contribute to improved quantification of outbreak risk, and help to guide planning of future vaccination efforts and emergency stockpiling. The Rhodes Trust, Bill & Melinda Gates Foundation, the

  6. Parallelization of the MAAP-A code neutronics/thermal hydraulics coupling

    International Nuclear Information System (INIS)

    Froehle, P.H.; Wei, T.Y.C.; Weber, D.P.; Henry, R.E.

    1998-01-01

    A major new feature, one-dimensional space-time kinetics, has been added to a developmental version of the MAAP code through the introduction of the DIF3D-K module. This code is referred to as MAAP-A. To reduce the overall job time required, a capability has been provided to run the MAAP-A code in parallel. The parallel version of MAAP-A utilizes two machines running in parallel, with the DIF3D-K module executing on one machine and the rest of the MAAP-A code executing on the other machine. Timing results obtained during the development of the capability indicate that reductions in time of 30-40% are possible. The parallel version can be run on two SPARC 20 (SUN OS 5.5) workstations connected through the ethernet. MPI (Message Passing Interface standard) needs to be implemented on the machines. If necessary the parallel version can also be run on only one machine. The results obtained running in this one-machine mode identically match the results obtained from the serial version of the code.
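
    A minimal sketch of the two-machine arrangement using mpi4py (an assumption for illustration; the original work used the MPI standard directly between two workstations). Rank 0 stands in for the kinetics module and rank 1 for the rest of the code; the "physics" is a placeholder.

      # Run with: mpiexec -n 2 python coupled.py
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      power = 1.0
      for step in range(5):
          if rank == 0:                              # stand-in kinetics module
              power *= 1.01
              comm.send(power, dest=1, tag=step)     # pass power to other code
              feedback = comm.recv(source=1, tag=step)
          else:                                      # stand-in systems code
              power = comm.recv(source=0, tag=step)
              comm.send(-0.001 * power, dest=0, tag=step)  # thermal feedback

      if rank == 0:
          print("final power:", power)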

  7. MEDUSA - An overset grid flow solver for network-based parallel computer systems

    Science.gov (United States)

    Smith, Merritt H.; Pallis, Jani M.

    1993-01-01

    Continuing improvement in processing speed has made it feasible to solve the Reynolds-Averaged Navier-Stokes equations for simple three-dimensional flows on advanced workstations. Combining multiple workstations into a network-based heterogeneous parallel computer allows the application of programming principles learned on MIMD (Multiple Instruction Multiple Data) distributed memory parallel computers to the solution of larger problems. An overset-grid flow solution code has been developed which uses a cluster of workstations as a network-based parallel computer. Inter-process communication is provided by the Parallel Virtual Machine (PVM) software. Solution speed equivalent to one-third of a Cray-YMP processor has been achieved from a cluster of nine commonly used engineering workstation processors. Load imbalance and communication overhead are the principal impediments to parallel efficiency in this application.

  8. AdiosStMan: Parallelizing Casacore Table Data System using Adaptive IO System

    Science.gov (United States)

    Wang, R.; Harris, C.; Wicenec, A.

    2016-07-01

    In this paper, we investigate the Casacore Table Data System (CTDS) used in the casacore and CASA libraries, and methods to parallelize it. CTDS provides a storage manager plugin mechanism for third-party developers to design and implement their own CTDS storage managers. Having this in mind, we looked into various storage backend techniques that can possibly enable parallel I/O for CTDS by implementing new storage managers. After carrying out benchmarks showing the excellent parallel I/O throughput of the Adaptive IO System (ADIOS), we implemented an ADIOS based parallel CTDS storage manager. We then applied the CASA MSTransform frequency split task to verify the ADIOS Storage Manager. We also ran a series of performance tests to examine the I/O throughput in a massively parallel scenario.

  9. Coverage or Cover-up: A Comparison of Newspaper Coverage of the 19th Amendment and the Equal Rights Amendment.

    Science.gov (United States)

    Smith, Linda Lazier

    A study compared newspaper coverage of the women's suffrage movement in the 1920s with coverage of efforts to pass the Equal Rights Amendment in the 1970s and early 1980s, to see if the similar movements with different outcomes were treated similarly or differently by the press. A content analysis of relevant articles in the "New York…

  10. SystemVerilog assertions and functional coverage guide to language, methodology and applications

    CERN Document Server

    Mehta, Ashok B

    2016-01-01

    This book provides a hands-on, application-oriented guide to the language and methodology of both SystemVerilog Assertions and SystemVerilog Functional Coverage. Readers will benefit from the step-by-step approach to functional hardware verification using SystemVerilog Assertions and Functional Coverage, which will enable them to uncover hidden and hard to find bugs, point directly to the source of the bug, provide for a clean and easy way to model complex timing checks and objectively answer the question ‘have we functionally verified everything’. Written by a professional end-user of ASIC/SoC/CPU and FPGA design and Verification, this book explains each concept with easy to understand examples, simulation logs and applications derived from real projects. Readers will be empowered to tackle the modeling of complex checkers for functional verification, thereby drastically reducing their time to design and debug. This updated second edition addresses the latest functional set released in IEEE-1800 (2012) L...

  11. Parallel inter channel interaction mechanisms

    International Nuclear Information System (INIS)

    Jovic, V.; Afgan, N.; Jovic, L.

    1995-01-01

    Interactions between parallel channels are examined. Results of phenomenon analysis and of experimental research on nonstationary flow regimes in three parallel vertical channels are presented, covering the mechanisms of parallel-channel interaction under adiabatic conditions for single-phase fluid and two-phase mixture flow. (author)

  12. Initial Assessment of Parallelization of Monte Carlo Calculation using Graphics Processing Units

    International Nuclear Information System (INIS)

    Choi, Sung Hoon; Joo, Han Gyu

    2009-01-01

    Monte Carlo (MC) simulation is an effective tool for calculating neutron transport in complex geometry. However, because Monte Carlo simulates each neutron behavior one by one, it takes a very long computing time if enough neutrons are used for high precision of calculation. Accordingly, methods that reduce the computing time are required. In a Monte Carlo code, parallel calculation is well-suited since it simulates the behavior of each neutron independently and thus parallel computation is natural. The parallelization of Monte Carlo codes, however, was previously done using multiple CPUs. Driven by the global demand for high-quality 3D graphics, the Graphics Processing Unit (GPU) has developed into a highly parallel, multi-core processor. This parallel processing capability of GPUs becomes available to engineering computing once a suitable interface is provided. Recently, NVIDIA introduced CUDA™, a general-purpose parallel computing architecture. CUDA is a software environment that allows developers to manage GPUs using C/C++ or other languages. In this work, a GPU-based Monte Carlo code is developed and an initial assessment of its parallel performance is investigated.
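
    The independence of histories is the property being exploited. As a language-neutral illustration, the NumPy sketch below processes a whole batch of histories in lockstep array operations, the same data-parallel pattern a CUDA kernel expresses with one thread per history. The slab geometry and cross sections are invented for the example.

      import numpy as np

      # Toy data-parallel Monte Carlo: neutron transmission through a 1D slab.
      rng = np.random.default_rng(0)
      n, thickness, sigma_t, absorb_prob = 100_000, 2.0, 1.0, 0.5

      x = np.zeros(n)                       # positions
      mu = np.ones(n)                       # direction cosines
      alive = np.ones(n, dtype=bool)
      transmitted = np.zeros(n, dtype=bool)

      while alive.any():
          step = -np.log(rng.random(n)) / sigma_t        # sampled free flights
          x = np.where(alive, x + mu * step, x)
          escaped = alive & (x > thickness)
          transmitted |= escaped
          alive &= ~escaped & (x >= 0.0)                 # drop escapes and leaks
          alive &= ~(rng.random(n) < absorb_prob)        # absorption at collision
          mu = np.where(alive, rng.uniform(-1, 1, n), mu)  # isotropic scatter

      print("transmission fraction:", transmitted.mean())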

  13. Expanding the universe of universal coverage: the population health argument for increasing coverage for immigrants.

    Science.gov (United States)

    Nandi, Arijit; Loue, Sana; Galea, Sandro

    2009-12-01

    As the US recession deepens, furthering the debate about healthcare reform is now even more important than ever. Few plans aimed at facilitating universal coverage make any mention of increasing access for uninsured non-citizens living in the US, many of whom are legally restricted from certain types of coverage. We conducted a critical review of the public health literature concerning the health status and access to health services among immigrant populations in the US. Using examples from infectious and chronic disease epidemiology, we argue that access to health services is at the intersection of the health of uninsured immigrants and the general population and that extending access to healthcare to all residents of the US, including undocumented immigrants, is beneficial from a population health perspective. Furthermore, from a health economics perspective, increasing access to care for immigrant populations may actually reduce net costs by increasing primary prevention and reducing the emphasis on emergency care for preventable conditions. It is unlikely that proposals for universal coverage will accomplish their objectives of improving population health and reducing social disparities in health if they do not address the substantial proportion of uninsured non-citizens living in the US.

  14. Detecting Boundary Nodes and Coverage Holes in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Li-Hui Zhao

    2016-01-01

    Full Text Available The emergence of coverage holes in wireless sensor networks (WSNs) means that some special events have broken out and the function of WSNs will be seriously influenced. Therefore, the issues of coverage holes have attracted considerable attention. In this paper, we focus on the identification of boundary nodes and coverage holes, which is crucially important to preventing the enlargement of coverage holes and ensuring the transmission of data. We define the problem of coverage holes and propose two novel algorithms to identify the coverage holes in WSNs. The first algorithm, Distributed Sector Cover Scanning (DSCS), can be used to identify the nodes on hole borders and the outer boundary of WSNs. The second scheme, Directional Walk (DW), can locate the coverage holes based on the boundary nodes identified with DSCS. We implement the algorithms in various scenarios and fully evaluate their performance. The simulation results show that the boundary nodes can be accurately detected by DSCS and the holes enclosed by the detected boundary nodes can be identified by DW. The comparisons confirm that the proposed algorithms outperform the existing ones.
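
    A rough sketch of a sector-based boundary test in the spirit of DSCS, under our own simplifying assumption (not necessarily the paper's exact criterion) that a node is interior when its neighbors leave no large angular gap around it.

      import math

      # Flag a node as a boundary node if its neighbors leave an angular gap
      # wider than max_gap around it (assumed criterion, for illustration).
      def is_boundary(node, neighbors, max_gap=math.pi):
          if len(neighbors) < 2:
              return True
          angles = sorted(math.atan2(y - node[1], x - node[0]) for x, y in neighbors)
          gaps = [b - a for a, b in zip(angles, angles[1:])]
          gaps.append(2 * math.pi - (angles[-1] - angles[0]))  # wrap-around gap
          return max(gaps) > max_gap

      center = (0.0, 0.0)
      ring = [(math.cos(t), math.sin(t)) for t in (0.0, 2.0, 4.0)]
      print(is_boundary(center, ring))      # False: neighbors surround the node
      print(is_boundary(center, ring[:2]))  # True: all neighbors on one side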

  15. Using Coarrays to Parallelize Legacy Fortran Applications: Strategy and Case Study

    Directory of Open Access Journals (Sweden)

    Hari Radhakrishnan

    2015-01-01

    Full Text Available This paper summarizes a strategy for parallelizing a legacy Fortran 77 program using the object-oriented (OO) and coarray features that entered Fortran in the 2003 and 2008 standards, respectively. OO programming (OOP) facilitates the construction of an extensible suite of model-verification and performance tests that drive the development. Coarray parallel programming facilitates a rapid evolution from a serial application to a parallel application capable of running on multicore processors and many-core accelerators in shared and distributed memory. We delineate 17 code modernization steps used to refactor and parallelize the program and study the resulting performance. Our initial studies were done using the Intel Fortran compiler on a 32-core shared memory server. Scaling behavior was very poor, and profile analysis using TAU showed that the bottleneck in the performance was due to our implementation of a collective, sequential summation procedure. We were able to improve the scalability and achieve nearly linear speedup by replacing the sequential summation with a parallel, binary tree algorithm. We also tested the Cray compiler, which provides its own collective summation procedure. Intel provides no collective reductions. With Cray, the program shows linear speedup even in distributed-memory execution. We anticipate similar results with other compilers once they support the new collective procedures proposed for Fortran 2015.
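
    The bottleneck fix described above, replacing a sequential summation with a binary tree reduction, can be stated compactly. The Python sketch below illustrates the idea (the paper itself works in Fortran with coarrays): every level combines disjoint pairs, so a level's combines can run in parallel and the depth drops from O(n) to O(log n).

      from concurrent.futures import ThreadPoolExecutor
      import operator

      def tree_reduce(values, combine=operator.add):
          """Pairwise tree reduction; each level's combines are independent."""
          values = list(values)
          with ThreadPoolExecutor() as pool:
              while len(values) > 1:
                  pairs = list(zip(values[0::2], values[1::2]))
                  leftover = [values[-1]] if len(values) % 2 else []
                  values = list(pool.map(lambda p: combine(*p), pairs)) + leftover
          return values[0]

      print(tree_reduce(range(1, 9)))  # -> 36, same as sum(range(1, 9))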

  16. The influence of patient positioning in breast CT on breast tissue coverage and patient comfort

    Energy Technology Data Exchange (ETDEWEB)

    Roessler, A.C.; Althoff, F.; Kalender, W. [Erlangen Univ. (Germany). Inst. of Medical Physics; Wenkel, E. [University Hospital of Erlangen (Germany). Radiological Inst.

    2015-02-15

    The presented study aimed at optimizing a patient table design for breast CT (BCT) systems with respect to breast tissue coverage and patient comfort. Additionally, the benefits and acceptance of an immobilization device for BCT using underpressure were evaluated. Three different study parts were carried out. In a positioning study women were investigated on an MRI tabletop with exchangeable inserts (flat and cone-shaped with different opening diameters) to evaluate their influence on breast coverage and patient comfort in various positioning alternatives. Breast length and volume were calculated to compare positioning modalities including various opening diameters and forms. In the second study part, an underpressure system was tested for its functionality and comfort on a stereotactic biopsy table mimicking a future CT scanner table. In the last study part, this system was tested regarding breast tissue coverage. Best results for breast tissue coverage were shown for cone-shaped table inserts with an opening of 180 mm. Flat inserts did not provide complete coverage of breast tissue. The underpressure system showed robust function and tended to pull more breast tissue into the field of view. Patient comfort was rated good for all table inserts, with highest ratings for cone-shaped inserts. Cone-shaped tabletops appeared to be adequate for BCT systems and to allow imaging of almost the complete breast. An underpressure system proved promising for the fixation of the breast during imaging and increased coverage. Patient comfort appears to be adequate.

  17. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Lober, R.R.; Tautges, T.J.; Vaughan, C.T.

    1997-03-01

    Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm and demonstrate its capabilities on both two dimensional and three dimensional surface geometries and compare the resulting parallel produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

  18. Strategies to improve treatment coverage in community-based public health programs: A systematic review of the literature.

    Directory of Open Access Journals (Sweden)

    Katrina V Deardorff

    2018-02-01

    Full Text Available Community-based public health campaigns, such as those used in mass deworming, vitamin A supplementation and child immunization programs, provide key healthcare interventions to targeted populations at scale. However, these programs often fall short of established coverage targets. The purpose of this systematic review was to evaluate the impact of strategies used to increase treatment coverage in community-based public health campaigns. We systematically searched CAB Direct, Embase, and PubMed archives for studies utilizing specific interventions to increase coverage of community-based distribution of drugs, vaccines, or other public health services. We identified 5,637 articles, from which 79 full texts were evaluated according to pre-defined inclusion and exclusion criteria. Twenty-eight articles met inclusion criteria and data were abstracted regarding strategy-specific changes in coverage from these sources. Strategies used to increase coverage included community-directed treatment (n = 6, pooled percent change in coverage: +26.2%), distributor incentives (n = 2, +25.3%), distribution along kinship networks (n = 1, +24.5%), intensified information, education, and communication activities (n = 8, +21.6%), fixed-point delivery (n = 1, +21.4%), door-to-door delivery (n = 1, +14.0%), integrated service distribution (n = 9, +12.7%), conversion from school- to community-based delivery (n = 3, +11.9%), and management by a non-governmental organization (n = 1, +5.8%). Strategies that target improving community member ownership of distribution appear to have a large impact on increasing treatment coverage. However, all strategies used to increase coverage successfully did so. These results may be useful to National Ministries, programs, and implementing partners in optimizing treatment coverage in community-based public health programs.

  19. Length and coverage of inhibitory decision rules

    KAUST Repository

    Alsolami, Fawaz

    2012-01-01

    The authors present algorithms for the optimization of inhibitory rules relative to length and coverage. Inhibitory rules have a relation "attribute ≠ value" on the right-hand side. The considered algorithms are based on extensions of dynamic programming. The paper also contains a comparison of the length and coverage of inhibitory rules constructed by a greedy algorithm and by the dynamic programming algorithm. © 2012 Springer-Verlag.

  20. A Novel Trip Coverage Index for Transit Accessibility Assessment Using Mobile Phone Data

    Directory of Open Access Journals (Sweden)

    Zhengyi Cai

    2017-01-01

    Full Text Available Transit accessibility is an important measure of the service performance of transit systems. To assess whether the public transit service is well accessible for trips of specific origins, destinations, and origin-destination (OD) pairs, a novel measure, the Trip Coverage Index (TCI), is proposed in this paper. TCI considers both the transit trip coverage and the spatial distribution of individual travel demands. Massive trips between cellular base stations are estimated by using over four million mobile phone users. An easy-to-implement method is also developed to extract the transit information and driving routes for millions of requests. Then the trip coverage of each OD pair is calculated. For demonstrative purposes, TCI is applied to the transit network of Hangzhou, China. The results show that TCI represents the better transit trip coverage and provides a more powerful assessment tool of transit quality of service. Since the calculation is based on trips of all modes, and not only the transit trips, TCI offers an overall accessibility for the transit system performance. It enables decision makers to assess transit accessibility in a finer-grained manner on the individual trip level and can be well transformed to measure transit services of other cities.
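
    A compact sketch of how a trip-weighted index of this kind can be computed. The OD records and the binary "transit available" flag are illustrative assumptions; the paper's TCI combines trip coverage with demand distribution in a more refined way.

      # Hypothetical OD records: (origin, destination, trips, transit_available).
      od_pairs = [
          ("A", "B", 1200, True),
          ("A", "C",  300, False),
          ("B", "C",  800, True),
          ("C", "A",  150, False),
      ]

      def trip_coverage_index(records):
          """Share of all trips (all modes) whose OD pair is servable by transit."""
          total = sum(trips for _, _, trips, _ in records)
          covered = sum(trips for _, _, trips, ok in records if ok)
          return covered / total

      print(f"TCI = {trip_coverage_index(od_pairs):.3f}")  # -> TCI = 0.816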

  1. Resolution, coverage, and geometry beyond traditional limits

    Energy Technology Data Exchange (ETDEWEB)

    Ronen, Shuki; Ferber, Ralf

    1998-12-31

    The presentation relates to the optimization of the image of seismic data and improved resolution and coverage of acquired data. Non-traditional processing methods such as inversion to zero offset (IZO) are used. To realize the potential of saving acquisition cost by reducing in-fill and to plan resolution improvement by processing, geometry QC methods such as DMO Dip Coverage Spectrum (DDCS) and Bull's Eyes Analysis are used. The DDCS is a 2-D spectrum whose entries consist of the DMO (Dip Move Out) coverage for a particular reflector specified by its true time dip and reflector normal strike. The Bull's Eyes Analysis relies on real time processing of synthetic data generated with the real geometry. 4 refs., 6 figs.

  2. SWATHtoMRM: Development of High-Coverage Targeted Metabolomics Method Using SWATH Technology for Biomarker Discovery.

    Science.gov (United States)

    Zha, Haihong; Cai, Yuping; Yin, Yandong; Wang, Zhuozhong; Li, Kang; Zhu, Zheng-Jiang

    2018-03-20

    The complexity of the metabolome presents a great analytical challenge for quantitative metabolite profiling, and restricts the application of metabolomics in biomarker discovery. Targeted metabolomics using the multiple-reaction monitoring (MRM) technique has excellent capability for quantitative analysis, but suffers from limited metabolite coverage. To address this challenge, we developed a new strategy, namely, SWATHtoMRM, which utilizes the broad coverage of SWATH-MS technology to develop a high-coverage targeted metabolomics method. Specifically, the SWATH-MS technique was first utilized to profile one pooled biological sample in an untargeted manner and to acquire the MS2 spectra for all metabolites. Then, SWATHtoMRM was used to extract the large-scale MRM transitions for targeted analysis with coverage as high as 1000-2000 metabolites. Next, we demonstrated the advantages of the SWATHtoMRM method in quantitative analysis such as coverage, reproducibility, sensitivity, and dynamic range. Finally, we applied our SWATHtoMRM approach to discover potential metabolite biomarkers for colorectal cancer (CRC) diagnosis. A high-coverage targeted metabolomics method with 1303 metabolites in one injection was developed to profile colorectal cancer tissues from CRC patients. A total of 20 potential metabolite biomarkers were discovered and validated for CRC diagnosis. In plasma samples from CRC patients, 17 out of 20 potential biomarkers were further validated to be associated with tumor resection, which may have a great potential in assessing the prognosis of CRC patients after tumor resection. Together, the SWATHtoMRM strategy provides a new way to develop high-coverage targeted metabolomics methods, and facilitates the application of targeted metabolomics in disease biomarker discovery. The SWATHtoMRM program is freely available on the Internet (http://www.zhulab.cn/software.php).

  3. Evaluation of target coverage and margins adequacy during CyberKnife Lung Optimized Treatment.

    Science.gov (United States)

    Ricotti, Rosalinda; Seregni, Matteo; Ciardo, Delia; Vigorito, Sabrina; Rondi, Elena; Piperno, Gaia; Ferrari, Annamaria; Zerella, Maria Alessia; Arculeo, Simona; Francia, Claudia Maria; Sibio, Daniela; Cattani, Federica; De Marinis, Filippo; Spaggiari, Lorenzo; Orecchia, Roberto; Riboldi, Marco; Baroni, Guido; Jereczek-Fossa, Barbara Alicja

    2018-04-01

    Evaluation of target coverage and verification of safety margins for the motion management strategies implemented by the Lung Optimized Treatment (LOT) module of the CyberKnife system. Three fiducial-less motion management strategies provided by LOT can be selected according to tumor visibility in the X-ray images acquired during treatment. In 2-view modality the tumor is visible in both X-ray images and full motion tracking is performed. In 1-view modality the tumor is visible in a single X-ray image; therefore, motion tracking is combined with an internal target volume (ITV)-based margin expansion. In 0-view modality the lesion is not visible; consequently, the treatment relies entirely on an ITV-based approach. Data from 30 patients treated in 2-view modality were selected, providing information on the three-dimensional tumor motion in correspondence to each X-ray image. Treatments in 1-view and 0-view modalities were simulated by processing log files and planning volumes. Planning target volume (PTV) margins were defined according to the tracking modality: end-exhale clinical target volume (CTV) + 3 mm in 2-view and ITV + 5 mm in 0-view. In the 1-view scenario, the ITV encompasses only tumor motion along the non-visible direction. Then, non-uniform ITV to PTV margins were applied: 3 mm and 5 mm in the visible and non-visible direction, respectively. We defined the coverage of each voxel of the CTV as the percentage of X-ray images where such voxel was included in the PTV. In 2-view modality coverage was calculated as the intersection between the CTV centred on the imaged target position and the PTV centred on the predicted target position, as recorded in log files. In 1-view modality, coverage was calculated as the intersection between the CTV centred on the imaged target position and the PTV centred on the projected predictor data. In 0-view modality coverage was calculated as the intersection between the CTV centred on the imaged target position and the non

  4. Seeing or moving in parallel

    DEFF Research Database (Denmark)

    Christensen, Mark Schram; Ehrsson, H Henrik; Nielsen, Jens Bo

    2013-01-01

    Subjects performed adduction-abduction movements symmetrically or in parallel with real-time congruent or incongruent visual feedback of the movements. One network, consisting of bilateral superior and middle frontal gyrus and supplementary motor area (SMA), was more active when subjects performed parallel movements, whereas a different network, involving bilateral dorsal premotor cortex (PMd), primary motor cortex, and SMA, was more active when subjects viewed parallel movements while performing either symmetrical or parallel movements. Correlations between behavioral instability and brain activity were present in right lateral…

  5. An educational tool for interactive parallel and distributed processing

    DEFF Research Database (Denmark)

    Pagliarini, Luigi; Lund, Henrik Hautop

    2012-01-01

    In this article we try to describe how the modular interactive tiles system (MITS) can be a valuable tool for introducing students to interactive parallel and distributed processing programming. This is done by providing a hands-on educational tool that allows a change in the representation of abstract problems related to designing interactive parallel and distributed systems. Indeed, the MITS seems to bring a series of goals into education, such as parallel programming, distributedness, communication protocols, master dependency, software behavioral models, adaptive interactivity, feedback, connectivity, topology, island modeling, and user and multi-user interaction, which can rarely be found in other tools. Finally, we introduce the system of modular interactive tiles as a tool for easy, fast, and flexible hands-on exploration of these issues, and through examples we show how to implement

  6. The Acoustic and Perceptual Effects of Series and Parallel Processing

    Directory of Open Access Journals (Sweden)

    Melinda C. Anderson

    2009-01-01

    Full Text Available Temporal envelope (TE) cues provide a great deal of speech information. This paper explores how spectral subtraction and dynamic-range compression gain modifications affect TE fluctuations for parallel and series configurations. In parallel processing, algorithms compute gains based on the same input signal, and the gains in dB are summed. In series processing, output from the first algorithm forms the input to the second algorithm. Acoustic measurements show that the parallel arrangement produces more gain fluctuations, introducing more changes to the TE than the series configurations. Intelligibility tests for normal-hearing (NH) and hearing-impaired (HI) listeners show (1) parallel processing gives significantly poorer speech understanding than an unprocessed (UNP) signal and the series arrangement and (2) series processing and UNP yield similar results. Speech quality tests show that UNP is preferred to both parallel and series arrangements, although spectral subtraction is the most preferred. No significant differences exist in sound quality between the series and parallel arrangements, or between the NH group and the HI group. These results indicate that gain modifications affect intelligibility and sound quality differently. Listeners appear to have a higher tolerance for gain modifications with regard to intelligibility, while judgments of sound quality appear to be more affected by smaller amounts of gain modification.
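
    The two configurations can be written down directly: in parallel processing both gain functions see the same input and their dB gains are summed, while in series the first stage's output feeds the second. The NumPy sketch below uses trivial threshold-based stand-ins for the real spectral-subtraction and compression gain rules.

      import numpy as np

      def g1_db(x):
          return -6.0 * (np.abs(x) > 0.7)   # stand-in gain rule 1 (dB)

      def g2_db(x):
          return -6.0 * (np.abs(x) > 0.5)   # stand-in gain rule 2 (dB)

      def apply_db(x, gain_db):
          return x * 10.0 ** (gain_db / 20.0)

      x = np.array([0.9, 0.6, 0.3, -0.8])

      # Parallel: gains computed from the same input and summed in dB.
      parallel = apply_db(x, g1_db(x) + g2_db(x))

      # Series: stage 1 output becomes stage 2 input, so stage 1's attenuation
      # can keep a sample below stage 2's threshold.
      stage1 = apply_db(x, g1_db(x))
      series = apply_db(stage1, g2_db(stage1))

      print(np.round(parallel, 3), np.round(series, 3))  # parallel swings more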

  7. An efficient implementation of a backpropagation learning algorithm on quadrics parallel supercomputer

    International Nuclear Information System (INIS)

    Taraglio, S.; Massaioli, F.

    1995-08-01

    A parallel implementation of a library to build and train Multi-Layer Perceptrons via the Back Propagation algorithm is presented. The target machine is the SIMD massively parallel supercomputer Quadrics. Performance measures are provided on three different machines with different numbers of processors, for two network examples. A sample source code is given.

  8. The environment in the headlines. Newspaper coverage of climate change and eutrophication in Finland

    Energy Technology Data Exchange (ETDEWEB)

    Lyytimaeki, J.

    2009-07-01

    Media representations are an important part of the dynamics of contemporary socio-ecological systems. The media agenda influences and interacts with the public and the policy agenda and all of these are connected to the changes of the state of the environment. Partly as a result of media debate, some issues are considered serious environmental problems, some risks are amplified while others are attenuated, and some proposals for remedies are highlighted and others downplayed. Research on environmental media coverage has focused predominantly on the English-speaking industrialised countries. This thesis presents an analysis of Finnish environmental coverage, focusing on representations of climate change and eutrophication from 1990- 2010. The main source of material is Helsingin Sanomat (HS), the most widely-read newspaper in Finland. The analysis adopts the perspective of contextual constructivism and the agenda-setting function of the mass media. Selected models describing the evolution of environmental coverage are applied within an interdisciplinary emphasis. The results show that the amount of newspaper content on eutrophication and climate change has generally increased, although both debates have been characterised by intense fluctuations. The volume of the coverage on climate change has been higher than that of eutrophication, especially since 2006. Eutrophication was highlighted most during the late 1990s while the peaks of climate coverage occurred between 2007 and 2009. Two key factors have shaped the coverage of eutrophication. First, the coverage is shaped by ecological factors, especially by the algal occurrences that are largely dependent on weather conditions. Second, the national algal monitoring and communication system run by environmental authorities has provided the media with easy-to-use data on the algal situation during the summertime. The peaks of climate coverage have been caused by an accumulation of several contributing factors. The two

  9. Controlling coverage of solution cast materials with unfavourable surface interactions

    KAUST Repository

    Burlakov, V. M.; Eperon, G. E.; Snaith, H. J.; Chapman, S. J.; Goriely, A.

    2014-01-01

    Creating uniform coatings of a solution-cast material is of central importance to a broad range of applications. Here, a robust and generic theoretical framework for calculating surface coverage by a solid film of material de-wetting a substrate is presented. Using experimental data from semiconductor thin films as an example, we calculate surface coverage for a wide range of annealing temperatures and film thicknesses. The model generally predicts that for each value of the annealing temperature there is a range of film thicknesses leading to poor surface coverage. The model accurately reproduces solution-cast thin film coverage for organometal halide perovskites, key modern photovoltaic materials, and identifies processing windows for both high and low levels of surface coverage. © 2014 AIP Publishing LLC.

  11. The numerical parallel computing of photon transport

    International Nuclear Information System (INIS)

    Huang Qingnan; Liang Xiaoguang; Zhang Lifa

    1998-12-01

    The parallel computing of photon transport is investigated; the parallel algorithm and the parallelization of programs on parallel computers, both with shared memory and with distributed memory, are discussed. By analyzing the inherent structure of the mathematical and physical model of photon transport in light of the architecture of parallel computers, using a divide-and-conquer strategy, adjusting the algorithm structure of the program, breaking data dependences, identifying parallelizable components, and creating large-grain parallel subtasks, the sequential computation of photon transport is efficiently transformed into parallel and vector computation. The program was run on several high-performance parallel computers, such as the HY-1 (PVP), the Challenge (SMP) and the YH-3 (MPP), and very good parallel speedup has been obtained.

  12. Hypergraph partitioning implementation for parallelizing matrix-vector multiplication using CUDA GPU-based parallel computing

    Science.gov (United States)

    Murni, Bustamam, A.; Ernastuti, Handhika, T.; Kerami, D.

    2017-07-01

    Calculation of the matrix-vector multiplication in real-world problems often involves large matrices of arbitrary size. Therefore, parallelization is needed to speed up the calculation process, which usually takes a long time. Graph partitioning techniques discussed in previous studies cannot be used to parallelize matrix-vector multiplication for matrices of arbitrary size, because graph partitioning assumes a square, symmetric matrix. Hypergraph partitioning techniques overcome this shortcoming of graph partitioning. This paper addresses the efficient parallelization of matrix-vector multiplication through hypergraph partitioning techniques using CUDA GPU-based parallel computing. CUDA (compute unified device architecture) is a parallel computing platform and programming model that was created by NVIDIA and implemented by the GPU (graphics processing unit).
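
    As a baseline for what partitioning buys, the sketch below parallelizes y = A @ x by naively splitting rows across worker processes; a hypergraph partitioner, as discussed above, would instead choose the split to minimize the data each worker must communicate. The matrix here is dense and random purely for brevity.

      import numpy as np
      from multiprocessing import Pool

      def _block_times_x(args):
          block, x = args
          return block @ x

      def parallel_matvec(A, x, workers=3):
          """Row-block parallel matrix-vector product (naive partitioning)."""
          blocks = np.array_split(A, workers, axis=0)
          with Pool(workers) as pool:
              parts = pool.map(_block_times_x, [(b, x) for b in blocks])
          return np.concatenate(parts)

      if __name__ == "__main__":
          A = np.random.default_rng(1).random((6, 4))  # non-square on purpose
          x = np.ones(4)
          print(np.allclose(parallel_matvec(A, x), A @ x))  # -> True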

  13. Parallel Framework for Cooperative Processes

    Directory of Open Access Journals (Sweden)

    Mitică Craus

    2005-01-01

    Full Text Available This paper describes the work of an object-oriented framework designed to be used in the parallelization of a set of related algorithms. The idea behind the system we are describing is to have a re-usable framework for running several sequential algorithms in a parallel environment. The algorithms that the framework can be used with have several things in common: they have to run in cycles and it should be possible to split the work between several "processing units". The parallel framework uses the message-passing communication paradigm and is organized as a master-slave system. Two applications are presented: an Ant Colony Optimization (ACO) parallel algorithm for the Travelling Salesman Problem (TSP) and an Image Processing (IP) parallel algorithm for the Symmetrical Neighborhood Filter (SNF). The implementations of these applications by means of the parallel framework prove to have good performances: approximately linear speedup and low communication cost.
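
    The master-slave, cycle-oriented structure reduces to a small message-passing loop. The sketch below substitutes Python queues for the framework's transport and a toy per-cycle computation, purely to illustrate the split-and-gather pattern.

      from multiprocessing import Process, Queue

      def worker(tasks, results):
          for chunk in iter(tasks.get, None):          # None is the stop signal
              results.put(sum(v * v for v in chunk))   # toy per-cycle work

      if __name__ == "__main__":
          tasks, results = Queue(), Queue()
          procs = [Process(target=worker, args=(tasks, results)) for _ in range(3)]
          for p in procs:
              p.start()
          chunks = [list(range(12))[i::3] for i in range(3)]  # master splits work
          for c in chunks:
              tasks.put(c)
          total = sum(results.get() for _ in chunks)          # master gathers
          for _ in procs:
              tasks.put(None)
          for p in procs:
              p.join()
          print("cycle result:", total)  # sum of squares of 0..11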

  14. Modelling and parallel calculation of a kinetic boundary layer

    International Nuclear Information System (INIS)

    Perlat, Jean Philippe

    1998-01-01

    This research thesis addresses reliability and cost issues in the numerical simulation of flows in the transition regime. The first step has been to reduce calculation cost and memory space for the Monte Carlo method, which is known to provide performance and reliability for rarefied regimes. Vector and parallel computers allow this objective to be reached. Here, a MIMD (multiple instruction, multiple data) machine has been used which implements parallel calculation at different levels of parallelization. Parallelization procedures have been adapted, and results showed that parallelization by calculation domain decomposition was far more efficient. Due to reliability issues related to the statistical nature of Monte Carlo methods, a new deterministic model was necessary to simulate gas molecules in the transition regime. New models and hyperbolic systems have therefore been studied. One is chosen which allows the thermodynamic values (density, average velocity, temperature, deformation tensor, heat flow) present in the Navier-Stokes equations to be determined, and the equations of evolution of the thermodynamic values are described for the mono-atomic case. The numerical resolution of the system is reported. A kinetic scheme is developed which complies with the structure of all the systems and which naturally expresses the boundary conditions. The validation of the obtained 14-moment model is performed on shock problems and on Couette flows [fr]

  15. Simplifying the parallelization of scientific codes by a function-centric approach in Python

    International Nuclear Information System (INIS)

    Nilsen, Jon K; Cai Xing; Langtangen, Hans Petter; Hoeyland, Bjoern

    2010-01-01

    The purpose of this paper is to show how existing scientific software can be parallelized using a separate thin layer of Python code where all parallelization-specific tasks are implemented. We provide specific examples of such a Python code layer, which can act as templates for parallelizing a wide set of serial scientific codes. The use of Python for parallelization is motivated by the fact that the language is well suited for reusing existing serial codes programmed in other languages. The extreme flexibility of Python with regard to handling functions makes it very easy to wrap up decomposed computational tasks of a serial scientific application as Python functions. Many parallelization-specific components can be implemented as generic Python functions, which may take as input those wrapped functions that perform concrete computational tasks. The overall programming effort needed by this parallelization approach is limited, and the resulting parallel Python scripts have a compact and clean structure. The usefulness of the parallelization approach is exemplified by three different classes of application in natural and social sciences.
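
    A condensed sketch of the function-centric idea: the serial kernel is wrapped as an ordinary Python function, and a thin, reusable layer owns all parallelization-specific work. The layer below uses multiprocessing for brevity, where the paper's examples target message-passing decompositions, so this is the pattern rather than the authors' code.

      from multiprocessing import Pool

      def parallel_map_reduce(kernel, subdomains, combine, workers=4):
          """Generic thin layer: run a wrapped serial kernel per subdomain."""
          with Pool(workers) as pool:
              return combine(pool.map(kernel, subdomains))

      def kernel(subdomain):
          """Wrapped 'serial' computation on one piece of the decomposition."""
          lo, hi = subdomain
          return sum(i * i for i in range(lo, hi))

      if __name__ == "__main__":
          subdomains = [(0, 250), (250, 500), (500, 750), (750, 1000)]
          total = parallel_map_reduce(kernel, subdomains, combine=sum)
          print(total == sum(i * i for i in range(1000)))  # -> True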

  16. Parallel computing: numerics, applications, and trends

    National Research Council Canada - National Science Library

    Trobec, Roman; Vajteršic, Marián; Zinterhof, Peter

    2009-01-01

    ... and/or distributed systems. The contributions to this book are focused on the topics of most concern in today's parallel computing. These range from parallel algorithmics, programming, tools, and network computing to future parallel computing. Particular attention is paid to parallel numerics: linear algebra, differential equations, numerica...

  17. Parallel Computing Strategies for Irregular Algorithms

    Science.gov (United States)

    Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.

  18. Parallel interactive data analysis with PROOF

    International Nuclear Information System (INIS)

    Ballintijn, Maarten; Biskup, Marek; Brun, Rene; Canal, Philippe; Feichtinger, Derek; Ganis, Gerardo; Kickinger, Guenter; Peters, Andreas; Rademakers, Fons

    2006-01-01

    The Parallel ROOT Facility, PROOF, enables the analysis of much larger data sets on a shorter time scale. It exploits the inherent parallelism in data of uncorrelated events via a multi-tier architecture that optimizes I/O and CPU utilization in heterogeneous clusters with distributed storage. The system provides transparent and interactive access to gigabytes today. Being part of the ROOT framework PROOF inherits the benefits of a performant object storage system and a wealth of statistical and visualization tools. This paper describes the data analysis model of ROOT and the latest developments on closer integration of PROOF into that model and the ROOT user environment, e.g. support for PROOF-based browsing of trees stored remotely, and the popular TTree::Draw() interface. We also outline the ongoing developments aimed to improve the flexibility and user-friendliness of the system

  19. 45 CFR 148.124 - Certification and disclosure of coverage.

    Science.gov (United States)

    2010-10-01

    ... method of counting creditable coverage, and the requesting entity may identify specific information that... a payroll deduction for health coverage, a health insurance identification card, a certificate of...

  20. Genomic Selection Using Genotyping-By-Sequencing Data with Different Coverage Depth in Perennial Ryegrass

    DEFF Research Database (Denmark)

    Cericola, Fabio; Fé, Dario; Janss, Luc

    2015-01-01

    Genotyping by sequencing (GBS) allows generating up to millions of molecular markers with a cost per sample which is proportional to the level of multiplexing. Increasing the sample multiplexing decreases the genotyping price but also reduces the number of reads per marker. In this work we investigated how this reduction of the coverage depth affects the genomic relationship matrices used to estimate breeding values of F2 family pools in perennial ryegrass. A total of 995 families were genotyped via GBS, providing more than 1.8M allele frequency estimates for each family with an average coverage… Firstly we corrected the diagonal elements by estimating the amount of genetic variance caused by the reduction of the coverage depth. Secondly we developed a method to scale the relationship matrix by taking into account the overall amount of pairwise non-missing loci between all families. Rust resistance and heading date were…

  1. [Vaccination coverage in young, middle age and elderly adults in Mexico].

    Science.gov (United States)

    Cruz-Hervert, Luis Pablo; Ferreira-Guerrero, Elizabeth; Díaz-Ortega, José Luis; Trejo-Valdivia, Belem; Téllez-Rojo, Martha María; Mongua-Rodríguez, Norma; Hernández-Serrato, María I; Montoya-Rodríguez, Airain Alejandra; García-García, Lourdes

    2013-01-01

    To estimate vaccination coverage in adults 20 years of age and older. Analysis of data obtained from the National Health and Nutrition Survey 2012. Among adults 20-59 years old, coverage with the complete scheme, measles and rubella (MR) and tetanus toxoid and diphtheria toxoid (Td) was 44.7, 49. and 67.3%, respectively. Coverage and percentage of vaccination were significantly higher among women than men. Among women 20-49 years, coverages with the complete scheme, MR and Td were 48.3, 53.2 and 69.8%, respectively. Among adults 60-64 years old, coverage with the complete scheme, Td and influenza vaccine were 46.5, 66.2 and 56.0%, respectively. Among adults >65 years, coverages for the complete scheme, Td, influenza vaccine and pneumococcal vaccine were 44.0, 69.0, 63.3 and 62.0%, respectively. Vaccination coverage among the adult population, as obtained from vaccination cards or self-report, is below optimal values, although the data may be underestimated. Recommendations for improvement are proposed.

  2. The Glasgow Parallel Reduction Machine: Programming Shared-memory Many-core Systems using Parallel Task Composition

    Directory of Open Access Journals (Sweden)

    Ashkan Tousimojarad

    2013-12-01

    Full Text Available We present the Glasgow Parallel Reduction Machine (GPRM, a novel, flexible framework for parallel task-composition based many-core programming. We allow the programmer to structure programs into task code, written as C++ classes, and communication code, written in a restricted subset of C++ with functional semantics and parallel evaluation. In this paper we discuss the GPRM, the virtual machine framework that enables the parallel task composition approach. We focus the discussion on GPIR, the functional language used as the intermediate representation of the bytecode running on the GPRM. Using examples in this language we show the flexibility and power of our task composition framework. We demonstrate the potential using an implementation of a merge sort algorithm on a 64-core Tilera processor, as well as on a conventional Intel quad-core processor and an AMD 48-core processor system. We also compare our framework with OpenMP tasks in a parallel pointer chasing algorithm running on the Tilera processor. Our results show that the GPRM programs outperform the corresponding OpenMP codes on all test platforms, and can greatly facilitate writing of parallel programs, in particular non-data parallel algorithms such as reductions.

  3. Streaming for Functional Data-Parallel Languages

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner

    In this thesis, we investigate streaming as a general solution to the space inefficiency commonly found in functional data-parallel programming languages. The data-parallel paradigm maps well to parallel SIMD-style hardware. However, the traditional fully materializing execution strategy… flattening necessitates all sub-computations to materialize at the same time. For example, naive n by n matrix multiplication requires n^3 space in NESL because the algorithm contains n^3 independent scalar multiplications. For large values of n, this is completely unacceptable. We address the problem by extending two existing data-parallel languages: NESL and Accelerate. In the extensions we map bulk operations to data-parallel streams that can evaluate fully sequentially, fully in parallel, or anything in between. By a dataflow, piecewise parallel execution strategy, the runtime system can adjust to any target…

  4. QDP++: Data Parallel Interface for QCD

    Energy Technology Data Exchange (ETDEWEB)

    Robert Edwards

    2003-03-01

    This is a user's guide for the C++ binding for the QDP Data Parallel Applications Programmer Interface developed under the auspices of the US Department of Energy Scientific Discovery through Advanced Computing (SciDAC) program. The QDP Level 2 API has the following features: (1) Provides data-parallel operations (logically SIMD) on all sites across the lattice or subsets of these sites. (2) Operates on lattice objects, which have an implementation-dependent data layout that is not visible above this API. (3) Hides details of how the implementation maps onto a given architecture, namely how the logical problem grid (i.e., lattice) is mapped onto the machine architecture. (4) Allows asynchronous (non-blocking) shifts of lattice-level objects over any permutation map of sites onto sites. However, from the user's view these instructions appear blocking and in fact may be so in some implementations. (5) Provides broadcast operations (filling a lattice quantity from a scalar value(s)), global reduction operations, and lattice-wide operations on various data-type primitives, such as matrices, vectors, and tensor products of matrices (propagators). (6) Operator syntax that supports complex expression constructions.

  5. Whole brain CT perfusion in acute anterior circulation ischemia: coverage size matters

    International Nuclear Information System (INIS)

    Emmer, B.J.; Rijkee, M.; Walderveen, M.A.A. van; Niesten, J.M.; Velthuis, B.K.; Wermer, M.J.H.

    2014-01-01

    Our aim was to compare infarct core volume on whole brain CT perfusion (CTP) with several limited coverage sizes (i.e., 3, 4, 6, and 8 cm), as currently used in routine clinical practice. In total, 40 acute ischemic stroke patients with non-contrast CT (NCCT) and CTP imaging of anterior circulation ischemia were included. Imaging was performed using a 320-multislice CT. Average volumes of infarct core of all simulated partial coverage sizes were calculated. Infarct core volume of each partial brain coverage was compared with infarct core volume of whole brain coverage and expressed using a percentage. To determine the optimal starting position for each simulated CTP coverage, the percentage of infarct coverage was calculated for every possible starting position of the simulated partial coverage in relation to Alberta Stroke Program Early CT Score in Acute Stroke Triage (ASPECTS 1) level. Whole brain CTP coverage further increased the percentage of infarct core volume depicted by 10 % as compared to the 8-cm coverage when the bottom slice was positioned at the ASPECTS 1 level. Optimization of the position of the region of interest (ROI) in 3 cm, 4 cm, and 8 cm improved the percentage of infarct depicted by 4 % for the 8-cm, 7 % for the 4-cm, and 13 % for the 3-cm coverage size. This study shows that whole brain CTP is the optimal coverage for CTP with a substantial improvement in accuracy in quantifying infarct core size. In addition, our results suggest that the optimal position of the ROI in limited coverage depends on the size of the coverage. (orig.)

  6. Whole brain CT perfusion in acute anterior circulation ischemia: coverage size matters

    Energy Technology Data Exchange (ETDEWEB)

    Emmer, B.J. [Erasmus Medical Centre, Department of Radiology, Postbus 2040, Rotterdam (Netherlands); Rijkee, M.; Walderveen, M.A.A. van [Leiden University Medical Centre, Department of Radiology, Leiden (Netherlands); Niesten, J.M.; Velthuis, B.K. [University Medical Centre Utrecht, Department of Radiology, Utrecht (Netherlands); Wermer, M.J.H. [Leiden University Medical Centre, Department of Neurology, Leiden (Netherlands)

    2014-12-15

    Our aim was to compare infarct core volume on whole brain CT perfusion (CTP) with several limited coverage sizes (i.e., 3, 4, 6, and 8 cm), as currently used in routine clinical practice. In total, 40 acute ischemic stroke patients with non-contrast CT (NCCT) and CTP imaging of anterior circulation ischemia were included. Imaging was performed using a 320-multislice CT. Average volumes of infarct core of all simulated partial coverage sizes were calculated. Infarct core volume of each partial brain coverage was compared with infarct core volume of whole brain coverage and expressed using a percentage. To determine the optimal starting position for each simulated CTP coverage, the percentage of infarct coverage was calculated for every possible starting position of the simulated partial coverage in relation to Alberta Stroke Program Early CT Score in Acute Stroke Triage (ASPECTS 1) level. Whole brain CTP coverage further increased the percentage of infarct core volume depicted by 10 % as compared to the 8-cm coverage when the bottom slice was positioned at the ASPECTS 1 level. Optimization of the position of the region of interest (ROI) in 3 cm, 4 cm, and 8 cm improved the percentage of infarct depicted by 4 % for the 8-cm, 7 % for the 4-cm, and 13 % for the 3-cm coverage size. This study shows that whole brain CTP is the optimal coverage for CTP with a substantial improvement in accuracy in quantifying infarct core size. In addition, our results suggest that the optimal position of the ROI in limited coverage depends on the size of the coverage. (orig.)

  7. Coverage-maximization in networks under resource constraints.

    Science.gov (United States)

    Nandi, Subrata; Brusch, Lutz; Deutsch, Andreas; Ganguly, Niloy

    2010-06-01

    Efficient coverage algorithms are essential for information search or dispersal in all kinds of networks. We define an extended coverage problem which accounts for constrained resources of consumed bandwidth B and time T. Our solution to the network challenge is studied here for regular grids only. Using methods from statistical mechanics, we develop a coverage algorithm with proliferating message packets and a temporally modulated proliferation rate. The algorithm performs as efficiently as a single random walker but O(B^((d-2)/d)) times faster, resulting in a significant service speed-up on a regular grid of dimension d. The algorithm is numerically compared to a class of generalized proliferating random walk strategies and is shown on regular grids to perform best in terms of the product metric of speed and efficiency.
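
    A minimal simulation sketch of the strategy class being studied (our construction, with an assumed proliferation schedule, not the paper's derived optimum): random walkers on a periodic 2D grid replicate at a time-modulated rate while coverage and consumed bandwidth are tracked.

```cpp
// Illustrative sketch of a proliferating random-walk coverage strategy on a
// 2D periodic grid. The proliferation schedule gamma(t) is an assumed
// placeholder, not the paper's derived optimal modulation.
#include <cstdio>
#include <random>
#include <utility>
#include <vector>

int main() {
    const int n = 64;                       // grid is n x n with periodic wrap
    const int steps = 2000;
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> dir(0, 3);
    std::uniform_real_distribution<double> u(0.0, 1.0);

    std::vector<char> visited(n * n, 0);
    std::vector<std::pair<int, int>> walkers{{n / 2, n / 2}};
    long covered = 0, moves = 0;            // moves ~ consumed bandwidth B

    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    for (int t = 1; t <= steps && covered < n * n; ++t) {
        double gamma = 0.02 * t / steps;    // assumed rising proliferation rate
        std::vector<std::pair<int, int>> born;
        for (auto& w : walkers) {
            int d = dir(rng);
            w.first  = (w.first  + dx[d] + n) % n;
            w.second = (w.second + dy[d] + n) % n;
            ++moves;
            char& v = visited[w.first * n + w.second];
            if (!v) { v = 1; ++covered; }
            if (u(rng) < gamma) born.push_back(w);   // walker replicates
        }
        walkers.insert(walkers.end(), born.begin(), born.end());
    }
    std::printf("covered %ld/%d sites using %ld moves, %zu walkers\n",
                covered, n * n, moves, walkers.size());
}
```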

  8. Patterns for Parallel Software Design

    CERN Document Server

    Ortega-Arjona, Jorge Luis

    2010-01-01

    Essential reading to understand patterns for parallel programming. Software patterns have revolutionized the way we think about how software is designed, built, and documented, and the design of parallel software requires you to consider other particular design aspects and special skills. From clusters to supercomputers, success heavily depends on the design skills of software developers. Patterns for Parallel Software Design presents a pattern-oriented software architecture approach to parallel software design. This approach is not a design method in the classic sense, but a new way of managing...

  9. High performance parallel I/O

    CERN Document Server

    Prabhat

    2014-01-01

    Gain Critical Insight into the Parallel I/O Ecosystem. Parallel I/O is an integral component of modern high performance computing (HPC), especially in storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem. The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O hardware...

  10. Parallel transport of long mean-free-path plasma along open magnetic field lines: Parallel heat flux

    International Nuclear Information System (INIS)

    Guo Zehua; Tang Xianzhu

    2012-01-01

    In a long mean-free-path plasma where temperature anisotropy can be sustained, the parallel heat flux has two components: one associated with the parallel thermal energy and the other with the perpendicular thermal energy. Due to the large deviation of the distribution function from a local Maxwellian in an open-field-line plasma with low collisionality, the conventional perturbative calculation of the parallel heat flux closure in its local or non-local form is no longer applicable. Here, a non-perturbative calculation is presented for a collisionless plasma in a two-dimensional flux expander bounded by absorbing walls. Specifically, closures of previously unfamiliar form are obtained for ions and electrons, which relate the two distinct components of the species' parallel heat flux to the lower-order fluid moments, such as density, parallel flow, and parallel and perpendicular temperatures, and to field quantities such as the magnetic field strength and the electrostatic potential. The plasma source and the boundary condition at the absorbing wall enter explicitly in the closure calculation. Although the closure calculation does not take into account wave-particle interactions, the results, based on passing orbits from the steady-state collisionless drift-kinetic equation, show remarkable agreement with fully kinetic-Maxwell simulations. As an example of the physical implications of the theory, the parallel heat flux closures are found to predict a surprising observation in the kinetic-Maxwell simulation of the 2D magnetic flux expander problem, where the parallel heat flux of the parallel thermal energy flows from the low to the high parallel-temperature region.
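
    For orientation, the two heat-flux components referred to are plausibly the standard drift-kinetic moments below; the notation is our assumption, not necessarily the paper's.

```latex
% Assumed standard kinetic definitions of the two parallel heat-flux
% components (our notation, not necessarily the paper's):
\begin{align}
  q_{\parallel\parallel} &= m \int \mathrm{d}^3 v \,
      f(\mathbf{v}) \, (v_\parallel - u_\parallel) \,
      \tfrac{1}{2}(v_\parallel - u_\parallel)^2 ,
      && \text{flux of parallel thermal energy} \\
  q_{\parallel\perp} &= m \int \mathrm{d}^3 v \,
      f(\mathbf{v}) \, (v_\parallel - u_\parallel) \,
      \tfrac{1}{2} v_\perp^2 ,
      && \text{flux of perpendicular thermal energy}
\end{align}
```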

  11. Dynamic grid refinement for partial differential equations on parallel computers

    International Nuclear Information System (INIS)

    McCormick, S.; Quinlan, D.

    1989-01-01

    The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids to provide adaptive resolution and fast solution of PDEs. An asynchronous version of FAC, called AFAC, that completely eliminates the bottleneck to parallelism is presented. This paper describes the advantage that this algorithm has in adaptive refinement for moving singularities on multiprocessor computers. This work is applicable to the parallel solution of two- and three-dimensional shock tracking problems. 6 refs
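
    As a structural sketch of the asynchronous idea (our construction, not a faithful FAC/AFAC solver): each refinement level can be smoothed independently and the resulting corrections combined, so no level waits on another.

```cpp
// Structural sketch of AFAC's core idea: process all refinement levels
// concurrently and combine their corrections, instead of a sequential
// level-by-level sweep. Toy 1D Jacobi-style smoothing; not a real solver.
#include <cstdio>
#include <future>
#include <vector>

struct Level {
    double h;                     // mesh spacing of this level
    std::vector<double> rhs;      // residual restricted to this level (toy data)
    // A few relaxation sweeps on -u'' = rhs with zero boundary values.
    std::vector<double> smooth() const {
        std::vector<double> u(rhs.size(), 0.0), next(rhs.size(), 0.0);
        for (int sweep = 0; sweep < 50; ++sweep) {
            for (std::size_t i = 1; i + 1 < u.size(); ++i)
                next[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * rhs[i]);
            u.swap(next);
        }
        return u;
    }
};

int main() {
    std::vector<Level> levels = {
        {1.0 / 8,  std::vector<double>(9, 1.0)},   // coarse grid
        {1.0 / 16, std::vector<double>(9, 1.0)},   // refined patch (same toy size)
        {1.0 / 32, std::vector<double>(9, 1.0)},
    };
    // AFAC-style: every level smooths at the same time; no level waits
    // for a finer level's result.
    std::vector<std::future<std::vector<double>>> jobs;
    for (const Level& L : levels)
        jobs.push_back(std::async(std::launch::async, [&L] { return L.smooth(); }));
    double combined_mid = 0.0;
    for (auto& j : jobs) combined_mid += j.get()[4];  // sum corrections (toy)
    std::printf("combined correction at midpoint: %g\n", combined_mid);
}
```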

  12. Is Monte Carlo embarrassingly parallel?

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)

    2012-07-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)

  13. Is Monte Carlo embarrassingly parallel?

    International Nuclear Information System (INIS)

    Hoogenboom, J. E.

    2012-01-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
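
    A minimal MPI sketch (ours, not the paper's program) of the per-cycle rendezvous the abstract identifies: every rank must stop at global reductions each cycle so the fission source normalization and the k_eff estimate can be formed before the next cycle starts.

```cpp
// Minimal sketch (not the paper's code) of the per-cycle rendezvous point in
// a parallel Monte Carlo criticality loop: all ranks synchronize every cycle
// to form the global fission-source weight and the k_eff estimate.
#include <cstdio>
#include <random>
#include <mpi.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    std::mt19937 rng(1234 + rank);
    std::uniform_real_distribution<double> u(0.9, 1.1);

    const int cycles = 100, histories_per_rank = 10000;
    double local_source = histories_per_rank;   // toy source weight per rank

    for (int c = 0; c < cycles; ++c) {
        double produced = 0.0;
        for (int h = 0; h < histories_per_rank; ++h)
            produced += u(rng);                 // stand-in for tracked histories

        // Rendezvous: no rank starts cycle c+1 until the global sums exist.
        double global_produced = 0.0, global_source = 0.0;
        MPI_Allreduce(&produced, &global_produced, 1, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);
        MPI_Allreduce(&local_source, &global_source, 1, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);
        double k_eff = global_produced / global_source;            // cycle estimate
        local_source = produced * (global_source / global_produced); // population control
        if (rank == 0 && c % 20 == 0)
            std::printf("cycle %d: k_eff = %.5f\n", c, k_eff);
    }
    MPI_Finalize();
}
```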

  14. Conventional sunscreen application does not lead to sufficient body coverage.

    Science.gov (United States)

    Jovanovic, Z; Schornstein, T; Sutor, A; Neufang, G; Hagens, R

    2017-10-01

    This study aimed to assess sunscreen application habits and relative body coverage after a single whole-body application. Fifty-two healthy volunteers were asked to use the test product once, following their usual sunscreen application routine. Standardized UV photographs, evaluated by image analysis, were taken before and immediately after product application to determine relative body coverage. In addition, the volunteers completed an online self-assessment questionnaire on their sunscreen usage habits. After product application, the front side showed significantly less non-covered skin (4.35%) than the backside (17.27%) (P = 0.0000). Females showed overall significantly less non-covered skin (8.98%) than males (13.16%) (P = 0.0381). On the backside, females showed significantly less non-covered skin (13.57%) than males (21.94%) (P = 0.0045), while on the front side this difference between females (4.14%) and males (4.53%) was not significant. In most cases, the usual sunscreen application routine does not provide complete body coverage, even though an extra-light sunscreen with good absorption properties was used. On average, 11% of the body surface was not covered by sunscreen at all. Therefore, appropriate consumer education is required to improve sunscreen application and to warrant effective sun protection. © 2017 Society of Cosmetic Scientists and the Société Française de Cosmétologie.

  15. Cosmic Shear With ACS Pure Parallels. Targeted Portion.

    Science.gov (United States)

    Rhodes, Jason

    2002-07-01

    Small distortions in the shapes of background galaxies by foreground mass provide a powerful method of directly measuring the amount and distribution of dark matter. Several groups have recently detected this weak lensing by large-scale structure, also called cosmic shear. The high resolution and sensitivity of HST/ACS provide a unique opportunity to measure cosmic shear accurately on small scales. Using 260 parallel orbits in Sloan i {F775W} we will measure for the first time the cosmic shear variance on small scales, constraining a parameter combination that scales roughly as Omega_m^0.5 with signal-to-noise (s/n) of 20, and the mass density Omega_m with s/n = 4. These measurements will be made at small angular scales where non-linear effects dominate the power spectrum, providing a test of the gravitational instability paradigm for structure formation. Measurements on these scales are not possible from the ground, because of the systematic effects induced by PSF smearing from seeing. Having many independent lines of sight reduces the uncertainty due to cosmic variance, making parallel observations ideal.

  16. Computer Security in the Introductory Business Information Systems Course: An Exploratory Study of Textbook Coverage

    Science.gov (United States)

    Sousa, Kenneth J.; MacDonald, Laurie E.; Fougere, Kenneth T.

    2005-01-01

    The authors conducted an evaluation of Management Information Systems (MIS) textbooks and found that computer security receives very little in-depth coverage. The textbooks provide, at best, superficial treatment of security issues. The research results suggest that MIS faculty need to provide material to supplement the textbook to provide…

  17. CRITERIA FOR SELECTION OF THE REINSURANCE COVERAGE FOR EXCESS OF LOSS TREATY

    Directory of Open Access Journals (Sweden)

    V. Veretnov

    2014-03-01

    The optimal reinsurance coverage selected by a cedent is influenced by a number of internal, external, objective, and subjective factors. Whether these are accounted for or ignored depends on the insurer's individual circumstances, its knowledge of specific risk profiles, and the professional experience of those deciding on the forms and methods of reinsurance coverage. We justify selection criteria for reinsurance protection under an excess-of-loss treaty. Using these criteria makes it possible not only to optimize the reinsurance protection but also to ensure a balance of interests in the long-term relationship between the ceding company and the reinsurer. The article also gives examples of insurance classes for which it is advisable to use an obligatory excess-of-loss treaty to protect the portfolio.
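
    For readers unfamiliar with the treaty type: under an excess-of-loss (XL) layer, the reinsurer pays the part of each loss above a retention, up to a layer limit. A minimal sketch with illustrative figures (not the article's data):

```cpp
// Minimal sketch of excess-of-loss (XL) layer recovery: the reinsurer pays
// the part of each loss above the retention, capped at the layer limit.
// The retention/limit/loss figures below are illustrative, not the article's.
#include <algorithm>
#include <cstdio>

double xl_recovery(double loss, double retention, double limit) {
    return std::min(limit, std::max(0.0, loss - retention));
}

int main() {
    const double retention = 1.0e6;   // cedent keeps the first 1M of each loss
    const double limit = 4.0e6;       // reinsurer covers up to 4M above that ("4M xs 1M")
    const double losses[] = {0.5e6, 2.0e6, 8.0e6};
    for (double L : losses)
        std::printf("loss %.1fM -> reinsurer pays %.1fM, cedent keeps %.1fM\n",
                    L / 1e6, xl_recovery(L, retention, limit) / 1e6,
                    (L - xl_recovery(L, retention, limit)) / 1e6);
}
```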

  18. Assessing Requirements Quality through Requirements Coverage

    Science.gov (United States)

    Rajan, Ajitha; Heimdahl, Mats; Woodham, Kurt

    2008-01-01

    In model-based development, the development effort is centered around a formal description of the proposed software system: the model. This model is derived from some high-level requirements describing the expected behavior of the software. For validation and verification purposes, this model can then be subjected to various types of analysis, for example, completeness and consistency analysis [6], model checking [3], theorem proving [1], and test-case generation [4, 7]. This development paradigm is making rapid inroads in certain industries, e.g., automotive, avionics, space applications, and medical technology. This shift towards model-based development naturally leads to changes in the verification and validation (V&V) process. The model validation problem, determining that the model accurately captures the customer's high-level requirements, has received little attention, and the sufficiency of the validation activities has been largely determined through ad-hoc methods. Since the model serves as the central artifact, its correctness with respect to the user's needs is absolutely crucial. In our investigation, we attempt to answer the following two questions with respect to validation: (1) Are the requirements sufficiently defined for the system? and (2) How well does the model implement the behaviors specified by the requirements? The second question can be addressed using formal verification. Nevertheless, the size and complexity of many industrial systems make formal verification infeasible even if we have a formal model and formalized requirements. Thus, presently, there is no objective way of answering these two questions. To this end, we propose an approach based on testing that, when given a set of formal requirements, explores the relationship between requirements-based structural test-adequacy coverage and model-based structural test-adequacy coverage. The proposed technique uses requirements coverage metrics defined in [9] on formal high-level software...
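
    To make the idea concrete, here is a simplified stand-in for a requirements coverage metric (not the metrics of [9], which we do not reproduce): each requirement is a trigger/response pair over test-trace states, and a requirement counts as covered when some test exercises its trigger and observes its response.

```cpp
// Simplified illustration of a requirements coverage metric (a stand-in,
// not the metrics of [9]): a requirement "whenever trigger holds, response
// must hold next step" counts as covered by a test suite if some trace
// exercises its trigger and observes the response.
#include <cstdio>
#include <functional>
#include <string>
#include <vector>

struct State { bool request; bool grant; };
using Pred = std::function<bool(const State&)>;

struct Requirement { std::string name; Pred trigger; Pred response; };

bool covers(const std::vector<State>& trace, const Requirement& r) {
    for (std::size_t i = 0; i + 1 < trace.size(); ++i)
        if (r.trigger(trace[i]) && r.response(trace[i + 1]))
            return true;   // antecedent exercised and obligation observed
    return false;
}

int main() {
    std::vector<Requirement> reqs = {
        {"req1: request leads to grant",
         [](const State& s) { return s.request; },
         [](const State& s) { return s.grant; }},
        {"req2: no request leads to no grant",
         [](const State& s) { return !s.request; },
         [](const State& s) { return !s.grant; }},
    };
    // One toy test trace; a real suite would contain many.
    std::vector<std::vector<State>> suite = {
        {{true, false}, {true, true}, {false, true}},
    };
    int covered = 0;
    for (const auto& r : reqs) {
        bool c = false;
        for (const auto& tr : suite) c = c || covers(tr, r);
        std::printf("%s: %s\n", r.name.c_str(), c ? "covered" : "uncovered");
        covered += c;
    }
    std::printf("requirements coverage: %d/%zu\n", covered, reqs.size());
}
```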

  19. Physics-Aware Informative Coverage Planning for Autonomous Vehicles

    Science.gov (United States)

    2014-06-01

    Physics-Aware Informative Coverage Planning for Autonomous Vehicles. Michael J. Kuhlman, Student Member, IEEE; Petr Švec, Member, IEEE; Krishnanand...

  20. Parallel algorithms for nuclear reactor analysis via domain decomposition method

    International Nuclear Information System (INIS)

    Kim, Yong Hee

    1995-02-01

    In this thesis, the neutron diffusion equation in reactor physics is discretized by the finite difference method and is solved on a parallel computer network composed of T-800 transputers. The T-800 transputer is a message-passing MIMD (multiple instruction streams and multiple data streams) architecture. A parallel variant of the Schwarz alternating procedure for overlapping subdomains is developed with domain decomposition. The thesis provides a convergence analysis of the algorithm and improvements to its convergence. The convergence of the parallel Schwarz algorithms with DN (or ND), DD, NN, and mixed pseudo-boundary conditions (a weighted combination of Dirichlet and Neumann conditions) is analyzed for both continuous and discrete models in the two-subdomain case, and various underlying features are explored. The analysis shows that the convergence rate of the algorithm depends strongly on the pseudo-boundary conditions, and the theoretically best ones are the mixed boundary conditions (MM conditions). It is also shown that there may exist a significant discrepancy between the continuous model analysis and the discrete model analysis. In order to accelerate the convergence of the parallel Schwarz algorithm, relaxation of the pseudo-boundary conditions is introduced, and the convergence analysis of the algorithm for the two-subdomain case is carried out. The analysis shows that under-relaxation of the pseudo-boundary conditions accelerates the convergence of the parallel Schwarz algorithm if the convergence rate without relaxation is negative, and any relaxation (under or over) decelerates convergence if the convergence rate without relaxation is positive. Numerical implementation of the parallel Schwarz algorithm on an MIMD system requires multi-level iterations: two levels for fixed-source problems, three levels for eigenvalue problems. Performance of the algorithm turns out to be very sensitive to the iteration strategy. In general, multi-level iterations provide good performance when...
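
    A minimal 1D sketch (ours, not the thesis code) of the overlapping Schwarz alternating idea with relaxed Dirichlet pseudo-boundary conditions; the relaxation factor, overlap, and model problem are illustrative assumptions.

```cpp
// Minimal 1D sketch (not the thesis code) of an overlapping Schwarz iteration
// for -u'' = 1 on (0,1), u(0)=u(1)=0: two overlapping subdomains exchange
// Dirichlet pseudo-boundary values, under-relaxed as the abstract describes.
#include <cmath>
#include <cstdio>
#include <vector>

// Relax -u'' = 1 on grid points (lo..hi) with fixed end values u[lo], u[hi].
static void solve_patch(std::vector<double>& u, int lo, int hi, double h) {
    for (int sweep = 0; sweep < 400; ++sweep)
        for (int i = lo + 1; i < hi; ++i)
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h);
}

int main() {
    const int n = 64;                      // grid points 0..n
    const double h = 1.0 / n;
    const double omega = 0.8;              // pseudo-boundary under-relaxation (assumed)
    std::vector<double> u(n + 1, 0.0);
    const int right_lo = 24, left_hi = 40; // subdomains [0..40], [24..n]; overlap [24..40]
    double bc_left = 0.0, bc_right = 0.0;  // pseudo-boundary values at left_hi, right_lo

    for (int it = 0; it < 30; ++it) {
        // Left solve: its right pseudo-boundary value comes from the latest
        // right-subdomain solution, blended with the previous value.
        bc_left = omega * u[left_hi] + (1.0 - omega) * bc_left;
        u[left_hi] = bc_left;
        solve_patch(u, 0, left_hi, h);

        // Right solve: its left pseudo-boundary value comes from the left solve.
        bc_right = omega * u[right_lo] + (1.0 - omega) * bc_right;
        u[right_lo] = bc_right;
        solve_patch(u, right_lo, n, h);
    }
    // Exact solution is u(x) = x(1-x)/2; report the midpoint for reference.
    double err = std::fabs(u[n / 2] - 0.125);
    std::printf("u(0.5) = %.6f (exact 0.125), error %.2e\n", u[n / 2], err);
}
```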