WorldWideScience

Sample records for largescale sucker catostomus

  1. Contaminants of legacy and emerging concern in largescale suckers (Catostomus macrocheilus) and the foodweb in the lower Columbia River, Oregon and Washington, USA

    Science.gov (United States)

    Nilsen, Elena B.; Zaugg, Steven D.; Alvarez, David A.; Morace, Jennifer L.; Waite, Ian R.; Counihan, Timothy D.; Hardiman, Jill M.; Torres, Leticia; Patino, Reynaldo; Mesa, Matthew G.; Grove, Robert

    2014-01-01

We investigated occurrence, transport pathways, and effects of polybrominated diphenyl ether (PBDE) flame retardants and other endocrine disrupting chemicals (EDCs) in aquatic media and the foodweb in the lower Columbia River. In 2009 and 2010, foodweb sampling at three sites along a gradient of contaminant exposure near Skamania (Washington), Columbia City (Oregon) and Longview (Washington) included water (via passive samplers), bed sediment, invertebrate biomass residing in sediment, a resident fish species (largescale suckers [Catostomus macrocheilus]), and eggs from osprey (Pandion haliaetus). This paper primarily reports fish tissue concentrations. In 2009, composites of fish brain, fillet, liver, stomach, and gonad tissues revealed that overall contaminant concentrations were highest in livers, followed by brain, stomach, gonad, and fillet. Halogenated compounds were detected in tissue samples from all three sites; the findings indicate that contaminants in the environment lead to bioaccumulation and potential negative effects at multiple levels of the foodweb.

  2. Health status of Largescale Sucker (Catostomus macrocheilus) collected along an organic contaminant gradient in the lower Columbia River, Oregon and Washington, USA

    Science.gov (United States)

    Torres, Leticia; Nilsen, Elena B.; Grove, Robert A.; Patino, Reynaldo

    2014-01-01

    The health of Largescale Sucker (Catostomus macrocheilus) in the lower Columbia River (USA) was evaluated using morphometric and histopathological approaches, and its association with organic contaminants accumulated in liver was evaluated in males. Fish were sampled from three sites along a contaminant gradient. In 2009, body length and mass, condition factor, gonadosomatic index, and hematocrit were measured in males and females; liver and gonad tissue were collected from males for histological analyses; and organ composites were analyzed for contaminant content in males. In 2010, additional data were collected for males and females, including external fish condition assessment, histopathologies of spleen, kidney and gill and, for males, liver contaminant content. Multivariate analysis of variance indicated that biological traits in males, but not females, differed among sites in 2009 and 2010. Discriminant function analysis indicated that site-related differences among male populations were relatively small in 2009, but in 2010, when more variables were analyzed, males differed among sites with regard to kidney, spleen, and liver histopathologies and gill parasites. Kidney tubular hyperplasia, liver and spleen macrophage aggregations, and gill parasites were generally more severe in the downstream sites compared to the reference location. The contaminant content of male livers was also generally higher downstream, and the legacy pesticide hexachlorobenzene and flame retardants BDE-47 and BDE-154 were the primary drivers for site discrimination. However, bivariate correlations between biological variables and liver contaminants retained in the discriminant models failed to reveal associations between the two variable sets.
In conclusion, whereas certain non-reproductive biological traits and liver contaminant contents of male Largescale Sucker differed according to an upstream-downstream gradient in the lower Columbia River, results from this study did not reveal direct associations between contaminant exposure and fish health.

  3. Assessing reproductive and endocrine parameters in male largescale suckers (Catostomus macrocheilus) along a contaminant gradient in the lower Columbia River, USA

    Science.gov (United States)

    Jenkins, Jill A.; Olivier, H.M.; Draugelis-Dale, R. O.; Eilts, B.E.; Torres, L.; Patiño, R.; Nilsen, Elena B.; Goodbred, Steven L.

    2014-01-01

    Persistent organochlorine pollutants such as polychlorinated biphenyls (PCBs), dichlorodiphenyldichloroethylene (p,p′-DDE), and polybrominated diphenyl ethers (PBDEs) are stable, bioaccumulative, and widely found in the environment, wildlife, and the human population. To explore the hypothesis that reproduction in male fish is associated with environmental exposures in the lower Columbia River (LCR), reproductive and endocrine parameters were studied in male resident, non-anadromous largescale sucker (Catostomus macrocheilus) (LSS) in the same habitats as anadromous salmonids having conservation status. Testes, thyroid tissue and plasma collected in 2010 from Longview (LV), Columbia City (CC), and Skamania (SK; reference) were studied. Sperm morphologies and thyrocyte heights were measured by light microscopy, sperm motilities by computer-assisted sperm motion analysis, sperm adenosine triphosphate (ATP) with luciferase, and plasma vitellogenin (VTG), thyroxine (T4), and triiodothyronine (T3) by immunoassay. Sperm apoptosis, viability, mitochondrial membrane potential, nuclear DNA fragmentation, and reproductive stage were measured by flow cytometry. Sperm quality parameters (except counts) and VTG were significantly different among sites, with correlations between VTG and 7 sperm parameters. Thyrocyte heights, T4, T3, gonadosomatic index and Fulton's condition factor differed among sites, but not significantly. Sperm quality was significantly lower and VTG higher where liver contaminants and water estrogen equivalents were highest (LV site). Total PCBs (specifically PCB-138, -146, -151, -170, -174, -177, -180, -183, -187, -194, and -206) and total PBDEs (specifically BDE-47, -100, -153, and -154) were negatively correlated with sperm motility. PCB-206 and BDE-154 were positively correlated with DNA fragmentation, and pentachloroanisole and VTG were positively correlated with sperm apoptosis and negatively correlated with ATP. BDE-99 was positively correlated

  4. Age, growth, and maturity of the longnose sucker Catostomus catostomus, of western Lake Superior

    Science.gov (United States)

    Bailey, Merryll M.

    1969-01-01

    Studies of age, growth, and maturity were based on 1760 fish collected in western Lake Superior in 1964-65. The body:scale relation was curvilinear and the curve had an intercept of 1.65 inches on the length axis. The weight increased as the 2.85 power of the length. Some fish formed an annulus before May 18 in 1965; all had completed annuli by late September. Longnose suckers grew 3.6 inches the 1st year, reached 12 inches in the 6th year, and 18 inches in the 11th year. Fish from Pikes Bay grew faster than those from Gull Island Shoal. Over 6 years were required for weight to reach 1 lb and nearly 10 years to reach 2 lb. Minimum length at maturity was 10.5 inches for males and 11.5 inches for females. The youngest mature male belonged to age-group IV and the youngest mature female to age-group V. All males were mature at 14.5-14.9 inches (age-group VIII) and all females at 15.0-15.4 inches (age-group IX). Finclipped longnose suckers returned to spawn in the Brule River in successive years. One fish returned to spawn in 4 successive years. Many of the fish were not recaptured until 2 or 3 years after marking. The time of the Brule River spawning migration depended more on water temperature than on length of day. The average water temperature during the peak of the spawning runs of 1958-64 was 55.4 F. Larval suckers apparently spend little time in the Brule River and adjacent streams and drift downstream to the lake soon after hatching. The number of eggs in the ovaries of eight suckers ranged from 14 to 35 thousand and averaged 24 thousand for fish 13.9-17.7 inches long.
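
The length-weight relation reported above (weight increasing as the 2.85 power of length) can be sketched numerically. The coefficient `a` below is a hypothetical scaling constant chosen so that a 12-inch fish weighs about 1 lb, roughly consistent with the abstract's growth figures; it is not a value from the paper.

```python
def weight_lb(length_in, a=1.0 / 12**2.85, b=2.85):
    """Length-weight power relation W = a * L**b (lb, inches)."""
    return a * length_in**b

# Relative weight gain is independent of `a`: an 18-inch fish weighs
# (18/12)**2.85, roughly 3.2 times as much as a 12-inch fish.
ratio = weight_lb(18.0) / weight_lb(12.0)
print(round(ratio, 2))
```

This also illustrates why reaching 2 lb took nearly four more years than reaching 1 lb: doubling weight requires only a 2**(1/2.85), about 1.27-fold, increase in length, but length growth slows sharply after year 6.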

  5. Mitochondrial population structure and post-glacial dispersal of longnose sucker Catostomus catostomus in Labrador, Canada: evidence for multiple refugial origins and limited ongoing gene flow.

    Science.gov (United States)

    Langille, B L; Perry, R; Keefe, D; Barker, O; Marshall, H D

    2016-08-01

    Two hundred and eighty-seven longnose sucker Catostomus catostomus were collected from 14 lakes in Labrador, 52 from three lakes in Ontario, 43 from two lakes in British Columbia and 32 from a lake in Yukon; a total of 414 in all. The resulting 34 haplotypes (20 in Labrador) contained moderate haplotypic diversity (h = 0·657) and relatively low nucleotide diversity (π = 3·730 × 10⁻³). Mean ϕST was high (0·453), and a haplotype network revealed one major and two minor clades within Labrador that were assigned to the Atlantic, Beringian and Mississippian refugia, respectively, with tests of neutrality and mismatch distribution indicative of a recent population expansion in Labrador, dated between c. 3500 and 8300 years ago. © 2016 The Fisheries Society of the British Isles.
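
The haplotypic diversity statistic h used above is Nei's unbiased gene diversity, h = (n/(n−1))(1 − Σ pᵢ²), where pᵢ are haplotype frequencies in a sample of n sequences. A minimal sketch, using hypothetical haplotype counts (the study's reported value of h = 0·657 came from its 414 sequences and 34 haplotypes):

```python
from collections import Counter

def haplotype_diversity(haplotypes):
    """Nei's unbiased haplotype diversity h = n/(n-1) * (1 - sum(p_i^2))."""
    n = len(haplotypes)
    freqs = [count / n for count in Counter(haplotypes).values()]
    return n / (n - 1) * (1.0 - sum(p * p for p in freqs))

# Hypothetical sample: 10 sequences carrying 3 haplotypes
sample = ["A"] * 5 + ["B"] * 3 + ["C"] * 2
print(round(haplotype_diversity(sample), 3))
```

Nucleotide diversity π is computed analogously but weights each haplotype pair by the proportion of sites at which the two sequences differ, which is why π can be low even when h is moderate, as in the Labrador data.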

  6. Characterization of a novel hepadnavirus in the white sucker (Catostomus commersonii) from the Great Lakes Region of the USA

    Science.gov (United States)

    Hahn, Cassidy M.; Iwanowicz, Luke R.; Cornman, Robert S.; Conway, Carla M.; Winton, James R.; Blazer, Vicki S.

    2015-01-01

    The white sucker Catostomus commersonii is a freshwater teleost often utilized as a resident sentinel. Here, we sequenced the full genome of a hepatitis B-like virus that infects white suckers from the Great Lakes Region of the USA. Dideoxy sequencing confirmed the white sucker hepatitis B virus (WSHBV) has a circular genome (3542 bp) with the prototypical codon organization of hepadnaviruses. Electron microscopy demonstrated that complete virions of approximately 40 nm were present in the plasma of infected fish. Compared to avi- and orthohepadnaviruses, sequence conservation of the core, polymerase and surface proteins was low and ranged from 16-27% at the amino acid level. An X protein homologue common to the orthohepadnaviruses was not present. The WSHBV genome included an atypical, presumptively non-coding region absent in previously described hepadnaviruses. Phylogenetic analyses confirmed WSHBV as distinct from previously documented hepadnaviruses. The level of divergence in protein sequences between WSHBV and other hepadnaviruses, and the identification of an HBV-like sequence in an African cichlid, provide evidence that a novel genus of the family Hepadnaviridae may need to be established that includes these hepatitis B-like viruses in fishes. Viral transcription was observed in 9.5% (16 of 169) of white suckers evaluated. The prevalence of hepatic tumors in these fish was 4.9%, of which only 2.4% were positive for both virus and hepatic tumors. These results are not sufficient to draw inferences regarding the association of WSHBV and carcinogenesis in white sucker.

  7. Chemical and Ecological Health of White Sucker (Catostomus Commersoni) in Rock Creek Park, Washington, D.C., 2003-04

    Science.gov (United States)

    Miller, Cherie V.; Weyers, Holly S.; Blazer, Vicki; Freeman, Mary E.

    2006-01-01

    Several classes of chemicals that are known or suspected contaminants were found in bed sediment in Rock Creek, including polyaromatic hydrocarbons (PAHs), phthalate esters, organochlorine pesticides, dioxins and furans, trace metals and metalloids (mercury, arsenic, cadmium, chromium, cobalt, copper, lead, nickel, silver, and zinc), and polychlorinated biphenyls (total PCBs and selected aroclors). Concentrations of many of these chemicals consistently exceeded threshold or chronic-effects guidelines for the protection of aquatic life and often exceeded probable effects levels (PELs). Exceedance of PELs was dependent on the amount of total organic carbon in the sediments. Concurrent with the collection of sediment-quality data, white sucker (Catostomus commersoni) were evaluated for gross-external and internal-organ anomalies, whole-body burdens of chemical contaminants, and gut contents to determine prey. The histopathology of internal tissues of white sucker was compared to contaminant levels in fish tissue and bed sediment. Gut contents were examined to determine preferential prey and thus potential pathways for the bioaccumulation of chemicals from bed sediments. Male and female fish were tested separately. Lesions and other necroses were observed in all fish collected during both years of sample collection, indicating that fish in Rock Creek have experienced some form of environmental stress. No direct cause and effect was determined for chemical exposure and compromised fish health, but a substantial weight of evidence indicates that white sucker, which are bottom-feeding fish and low-order consumers in Rock Creek, are experiencing some reduction in vitality, possibly due to immunosuppression. Abnormalities observed in gonads of both sexes of white sucker and observations of abnormal behavior during spawning indicated some interruption in reproductive success.

  9. Characterization of a Novel Hepadnavirus in the White Sucker (Catostomus commersonii) from the Great Lakes Region of the United States.

    Science.gov (United States)

    Hahn, Cassidy M; Iwanowicz, Luke R; Cornman, Robert S; Conway, Carla M; Winton, James R; Blazer, Vicki S

    2015-12-01

    The white sucker Catostomus commersonii is a freshwater teleost often utilized as a resident sentinel. Here, we sequenced the full genome of a hepatitis B-like virus that infects white suckers from the Great Lakes Region of the United States. Dideoxy sequencing confirmed that the white sucker hepatitis B virus (WSHBV) has a circular genome (3,542 bp) with the prototypical codon organization of hepadnaviruses. Electron microscopy demonstrated that complete virions of approximately 40 nm were present in the plasma of infected fish. Compared to avi- and orthohepadnaviruses, sequence conservation of the core, polymerase, and surface proteins was low and ranged from 16 to 27% at the amino acid level. An X protein homologue common to the orthohepadnaviruses was not present. The WSHBV genome included an atypical, presumptively noncoding region absent in previously described hepadnaviruses. Phylogenetic analyses confirmed WSHBV as distinct from previously documented hepadnaviruses. The level of divergence in protein sequences between WSHBV and other hepadnaviruses and the identification of an HBV-like sequence in an African cichlid provide evidence that a novel genus of the family Hepadnaviridae may need to be established that includes these hepatitis B-like viruses in fishes. Viral transcription was observed in 9.5% (16 of 169) of white suckers evaluated. The prevalence of hepatic tumors in these fish was 4.9%, and only 2.4% of fish were positive for both virus and hepatic tumors. These results are not sufficient to draw inferences regarding the association of WSHBV and carcinogenesis in white sucker. We report the first full-length genome of a hepadnavirus from fishes. Phylogenetic analysis of this genome indicates divergence from genomes of previously described hepadnaviruses from mammalian and avian hosts and supports the creation of a novel genus. 
The discovery of this novel virus may better our understanding of the evolutionary history of hepatitis B-like viruses of fishes.

  10. Hazard evaluation of inorganics, singly and in mixtures, to Flannelmouth Sucker Catostomus latipinnis in the San Juan River, New Mexico

    Science.gov (United States)

    Hamilton, S.J.; Buhl, K.J.

    1997-01-01

    Larval flannelmouth sucker (Catostomus latipinnis) were exposed to arsenate, boron, copper, molybdenum, selenate, selenite, uranium, vanadium, and zinc singly, and to five mixtures of five to nine inorganics. The exposures were conducted in reconstituted water representative of the San Juan River near Shiprock, New Mexico. The mixtures simulated environmental ratios reported for sites along the San Juan River (San Juan River backwater, Fruitland marsh, Hogback East Drain, Mancos River, and McElmo Creek). The rank order of the individual inorganics, from most to least toxic, was: copper > zinc > vanadium > selenite > selenate > arsenate > uranium > boron > molybdenum. All five mixtures exhibited additive toxicity to flannelmouth sucker. In a limited number of tests, 44-day-old and 13-day-old larvae exhibited no difference in sensitivity to three mixtures. Copper was the major toxic component in four mixtures (San Juan backwater, Hogback East Drain, Mancos River, and McElmo Creek), whereas zinc was the major toxic component in the Fruitland marsh mixture, which did not contain copper. The Hogback East Drain was the most toxic mixture tested. Comparison of 96-h LC50 values with reported environmental water concentrations from the San Juan River revealed low hazard ratios for arsenic, boron, molybdenum, selenate, selenite, uranium, and vanadium, moderate hazard ratios for zinc and the Fruitland marsh mixture, and high hazard ratios for copper at three sites and four environmental mixtures representing a San Juan backwater, Hogback East Drain, Mancos River, and McElmo Creek. The high hazard ratios suggest that inorganic contaminants could adversely affect larval flannelmouth sucker in the San Juan River at four sites receiving elevated inorganics.
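
The hazard-ratio and additive-toxicity arithmetic underlying this kind of evaluation can be sketched briefly: a hazard ratio is the environmental concentration divided by the 96-h LC50, and a mixture behaves additively when its components' toxic units (each Cᵢ/LC50ᵢ) sum to roughly 1 at the mixture's own LC50. All concentrations below are hypothetical illustration values, not data from the San Juan River study.

```python
def hazard_ratio(env_conc, lc50):
    """Hazard ratio: ambient concentration relative to the 96-h LC50."""
    return env_conc / lc50

def mixture_toxic_units(concs, lc50s):
    """Sum of toxic units C_i / LC50_i; a sum near 1 implies additive toxicity."""
    return sum(c / l for c, l in zip(concs, lc50s))

# Hypothetical copper exposure: 30 ug/L ambient vs. a 15 ug/L LC50 -> ratio 2.0
print(hazard_ratio(30.0, 15.0))

# Hypothetical three-component mixture, each component at half its own LC50
print(mixture_toxic_units([5.0, 10.0, 20.0], [10.0, 20.0, 40.0]))
```

Under strict additivity the second mixture (1.5 toxic units) would be expected to kill more than half the test organisms; ratios well above or below 1 at the observed mixture LC50 would instead indicate synergism or antagonism.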

  11. Correlation of gene expression and contaminant concentrations in wild largescale suckers: a field-based study

    Science.gov (United States)

    Christiansen, Helena E.; Mehinto, Alvine C.; Yu, Fahong; Perry, Russell W.; Denslow, Nancy D.; Maule, Alec G.; Mesa, Matthew G.

    2014-01-01

    Toxic compounds such as organochlorine pesticides (OCs), polychlorinated biphenyls (PCBs), and polybrominated diphenyl ether flame retardants (PBDEs) have been detected in fish, birds, and aquatic mammals that live in the Columbia River or use food resources from within the river. We developed a custom microarray for largescale suckers (Catostomus macrocheilus) and used it to investigate the molecular effects of contaminant exposure on wild fish in the Columbia River. Using Significance Analysis of Microarrays (SAM) we identified 72 probes representing 69 unique genes with expression patterns that correlated with hepatic tissue levels of OCs, PCBs, or PBDEs. These genes were involved in many biological processes previously shown to respond to contaminant exposure, including drug and lipid metabolism, apoptosis, cellular transport, oxidative stress, and cellular chaperone function. The relation between gene expression and contaminant concentration suggests that these genes may respond to environmental contaminant exposure and are promising candidates for further field and laboratory studies to develop biomarkers for monitoring exposure of wild fish to contaminant mixtures found in the Columbia River Basin. The array developed in this study could also be a useful tool for studies involving endangered sucker species and other sucker species used in contaminant research.
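
The core of the analysis described above is correlating per-gene expression values with hepatic contaminant concentrations across fish, then flagging genes whose association is strong. SAM itself uses a permutation-based false discovery rate; plain Pearson correlation is shown here as a simplified stand-in, and all numbers are hypothetical.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

pbde_ng_g = [5.0, 12.0, 20.0, 33.0, 41.0]          # liver PBDE per fish (hypothetical)
expression = {"cyp1a": [1.1, 1.9, 2.6, 3.8, 4.4],  # rises with exposure
              "actb":  [2.0, 1.8, 2.1, 1.9, 2.0]}  # housekeeping gene, flat

# Flag genes whose expression tracks contaminant burden
hits = [g for g, e in expression.items() if abs(pearson(pbde_ng_g, e)) > 0.9]
print(hits)  # ['cyp1a']
```

In a real microarray analysis the same correlation would be computed for thousands of probes, making multiple-testing control (as SAM provides) essential before calling any probe a candidate biomarker.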

  12. White sucker Catostomus commersonii respond to conspecific and sea lamprey Petromyzon marinus alarm cues but not potential predator cues

    Science.gov (United States)

    Jordbro, Ethan J.; Di Rocco, Richard T.; Imre, Istvan; Johnson, Nicholas; Brown, Grant E.

    2016-01-01

    Recent studies proposed the use of chemosensory alarm cues to control the distribution of invasive sea lamprey Petromyzon marinus populations in the Laurentian Great Lakes, necessitating the evaluation of sea lamprey chemosensory alarm cues on valuable sympatric species such as white sucker. In two laboratory experiments, 10 replicate groups (10 animals each) of migratory white suckers were exposed to deionized water (control), conspecific whole-body extract, heterospecific whole-body extract (sea lamprey) and two potential predator cues (2-phenylethylamine HCl (PEA HCl) and human saliva) during the day, and exposed to the first four of the above cues at night. White suckers avoided the conspecific and the sea lamprey whole-body extract both during the day and at night to the same extent. Human saliva did not induce avoidance during the day. PEA HCl did not induce avoidance at a higher concentration during the day, or at night at the minimum concentration that was previously shown to induce maximum avoidance by sea lamprey under laboratory conditions. Our findings suggest that human saliva and PEA HCl may be potential species-specific predator cues for sea lamprey.

  13. Effects of Chiloquin Dam on spawning distribution and larval emigration of Lost River, shortnose, and Klamath largescale suckers in the Williamson and Sprague Rivers, Oregon

    Science.gov (United States)

    Martin, Barbara A.; Hewitt, David A.; Ellsworth, Craig M.

    2013-01-01

    Chiloquin Dam was constructed in 1914 on the Sprague River near the town of Chiloquin, Oregon. The dam was identified as a barrier that potentially inhibited or prevented the upstream spawning migrations and other movements of endangered Lost River (Deltistes luxatus) and shortnose (Chasmistes brevirostris) suckers, as well as other fish species. In 2002, the Bureau of Reclamation led a working group that examined several alternatives to improve fish passage at Chiloquin Dam. Ultimately it was decided that dam removal was the best alternative and the dam was removed in the summer of 2008. The U.S. Geological Survey conducted a long-term study on the spawning ecology of Lost River, shortnose, and Klamath largescale suckers (Catostomus snyderi) in the Sprague and lower Williamson Rivers from 2004 to 2010. The objective of this study was to evaluate shifts in spawning distribution following the removal of Chiloquin Dam. Radio telemetry was used in conjunction with larval production data and detections of fish tagged with passive integrated transponders (PIT tags) to evaluate whether dam removal resulted in increased utilization of spawning habitat farther upstream in the Sprague River. Increased densities of drifting larvae were observed at a site in the lower Williamson River after the dam was removed, but no substantial changes occurred upstream of the former dam site. Adult spawning migrations primarily were influenced by water temperature and did not change with the removal of the dam. Emigration of larvae consistently occurred about 3-4 weeks after adults migrated into a section of river. Detections of PIT-tagged fish showed increases in the numbers of all three suckers that migrated upstream of the dam site following removal, but the increases for Lost River and shortnose suckers were relatively small compared to the total number of fish that made a spawning migration in a given season. Increases for Klamath largescale suckers were more substantial.
Post-dam removal monitoring

  14. Population characteristics and the influence of discharge on Bluehead Sucker and Flannelmouth Sucker

    Science.gov (United States)

    Klein, Zachary B.; Breen, Matthew J.; Quist, Michael C.

    2017-01-01

    Rivers are among the most complex and important ecosystems in the world. Unfortunately, many fishes endemic to rivers have suffered declines in abundance and distribution, suggesting that alterations to lotic environments have negatively influenced native fish populations. Of the 35 fishes native to the Colorado River basin (CRB), seven are considered either endangered, threatened, or species of special concern. As such, the conservation of fishes native to the CRB is a primary interest for natural resource management agencies. One of the major factors limiting the conservation and management of fishes endemic to the CRB is the lack of basic information on their ecology and population characteristics. We sought to describe the population dynamics and demographics of three populations of Bluehead Suckers (Catostomus discobolus) and Flannelmouth Suckers (C. latipinnis) in Utah. Additionally, we evaluated the potential influence of altered flow regimes on the recruitment and growth of Bluehead Suckers and Flannelmouth Suckers. Mortality of Bluehead Suckers and Flannelmouth Suckers from the Green, Strawberry, and White rivers was comparable to other populations. Growth of Bluehead Suckers and Flannelmouth Suckers was higher in the Green, Strawberry, and White rivers when compared to other populations in the CRB. Similarly, recruitment indices suggested that Bluehead Suckers and Flannelmouth Suckers in the Green, Strawberry, and White rivers had more stable recruitment than other populations in the CRB. Models relating growth and recruitment to hydrological indices provided little explanatory power. Notwithstanding, our results indicate that Bluehead Suckers and Flannelmouth Suckers in the Green, Strawberry, and White rivers represent fairly stable populations and provide baseline information that will be valuable for the effective management and conservation of the species.

  15. Spring and Summer Spatial Distribution of Endangered Juvenile Lost River and Shortnose Suckers in Relation to Environmental Variables in Upper Klamath Lake, Oregon: 2007 Annual Report

    Science.gov (United States)

    Burdick, Summer M.; VanderKooi, Scott P.; Anderson, Greer O.

    2009-01-01

    on post-wintering juvenile sucker distribution and habitat use studies. Only 34 percent of nets set during spring sampling (April 2 to May 29) caught juvenile suckers; catch rates were low (0.038 to 0.405 suckers/hour) and catches were widely distributed throughout shoreline areas. Of 13 suckers sacrificed for identification, only one was determined to be a Lost River sucker. All others were either shortnose suckers or Klamath largescale Catostomus snyderi suckers, but were not identified to species. Suckers caught during the spring averaged 93 ± 2 millimeters (mm) standard length (SL; mean ± SE) and were all estimated to be a year old. Spring catches did not vary in respect to nearness to tributary streams or rivers, substrate type, area of the lake, or distance from shore. On the other hand, a higher percentage of nets caught at least one sucker when they were set within 50 meters (m) of a wetland edge (60 percent) compared to nets set 200 m from a wetland (30 percent) or in other shoreline areas (29 percent). Our results also suggest that in the spring age-1 suckers use habitats less than 2 m deep at a greater frequency than deeper environments, a trend that was reversed in the summer. Temporal trends in summer catch rates of age-0 suckers generally were similar to those in previous years, with a peak during the week of August 5. In contrast, age-1 sucker catches were relatively high until the week of July 16, but rapidly declined each week for the rest of the sampling season. Age-0 suckers were caught at higher rates than age-1 suckers through the summer, but both age groups were captured at a similar percentage of sites (age-0, 26.5 percent and age-1, 27.4 percent). Age-0 catches were composed of slightly more Lost River suckers (53.2 percent) than shortnose suckers (42.1 percent). In contrast, most age-1 suckers were shortnose suckers (72.7 percent). Our summer sampling indicates age-0 suckers within Upper Klamath Lake primarily are habitat generalists, whe

  16. Sucker rods

    Energy Technology Data Exchange (ETDEWEB)

    Rylov, B M; Kostur, I N; Shcheigiy, B I; Sukhanov, V S

    1983-01-01

    As an addendum to A.s. USSR patent No 769087, this particular sucker rod utilizes a differential piston spring that has been attached outside the body of the auxiliary pump. The pump cylinder is attached to the intake line of the main pump. The lower part of the auxiliary pump is equipped with vertical slits, while the differential piston is equipped with a perforated pusher and support under the spring; it can also be shifted as necessary with respect to the vertical slits.

  17. Tumours in white suckers from Lake Michigan tributaries: Pathology and prevalence

    Science.gov (United States)

    Blazer, Vicki S.; Walsh, H.L.; Braham, R.P.; Hahn, C. M.; Mazik, P.; McIntyre, P.B.

    2016-01-01

    The prevalence and histopathology of neoplastic lesions were assessed in white sucker Catostomus commersonii captured at two Lake Michigan Areas of Concern (AOCs), the Sheboygan River and Milwaukee Estuary. Findings were compared to those observed at two non-AOC sites, the Root and Kewaunee rivers. At each site, approximately 200 adult suckers were collected during their spawning migration. Raised skin lesions were observed at all sites and included discrete white spots, mucoid plaques on the body surface and fins and large papillomatous lesions on lips and body. Microscopically, hyperplasia, papilloma and squamous cell carcinoma were documented. Liver neoplasms were also observed at all sites and included both hepatocellular and biliary tumours. Based on land use, the Kewaunee River was the site least impacted by human activities previously associated with fish tumours and had significantly fewer liver neoplasms when compared to the other sites. The proportion of white suckers with liver tumours followed the same patterns as the proportion of urban land use in the watershed: the Milwaukee Estuary had the highest prevalence, followed by the Root, Sheboygan and Kewaunee rivers. The overall skin neoplasm (papilloma and carcinoma) prevalence did not follow the same pattern, although the percentage of white suckers with squamous cell carcinoma exhibited a similar relationship to land use. Testicular tumours (seminoma) were observed at both AOC sites but not at the non-AOC sites. Both skin and liver tumours were significantly and positively associated with age but not sex.

  18. Sucker rod motor

    Energy Technology Data Exchange (ETDEWEB)

    Radzalov, N N; Radzhabov, N A

    1983-01-01

    The motor consists of rollers mounted on the wellmouth and connected by a flexible link. The reciprocating mechanism takes the form of a horizontal, stationary, single-acting cylinder inside which a plunger and rod are mounted. The working chamber of the hydrocylinder is connected to a gas-hydraulic battery; on the forward stroke it is connected via the plunger to the high-pressure source, and on the reverse stroke it is connected to a safety valve and an automatic control unit. The unit is equipped with a reducer and a mechanical transformer consisting of a screw and nut, which is shut off with a single-side lining. The plunger rod includes an auger-like unit. The high-pressure source is provided by the injection line of the sucker rod pump, which is equipped with a check valve.

  19. Modeling and simulation performance of sucker rod beam pump

    Energy Technology Data Exchange (ETDEWEB)

    Aditsania, Annisa, E-mail: annisaaditsania@gmail.com [Department of Computational Sciences, Institut Teknologi Bandung (Indonesia); Rahmawati, Silvy Dewi, E-mail: silvyarahmawati@gmail.com; Sukarno, Pudjo, E-mail: psukarno@gmail.com [Department of Petroleum Engineering, Institut Teknologi Bandung (Indonesia); Soewono, Edy, E-mail: esoewono@math.itb.ac.id [Department of Mathematics, Institut Teknologi Bandung (Indonesia)

    2015-09-30

    Artificial lift is a mechanism for lifting hydrocarbons, generally petroleum, from a well to the surface. It is used when the natural reservoir pressure has declined significantly. Sucker rod beam pumping is one method of artificial lift. In this research, the sucker rod beam pump is modeled as a function of the geometry of the surface unit, the size of the sucker rod string, and the fluid properties. Besides its length, a sucker rod string is also classified as tapered or untapered. At the beginning of this research, for ease of modeling, the sucker rod string was assumed to be untapered; that assumption proved unrealistic, so a tapered sucker rod string model was built as well. The numerical solution of the sucker rod beam pump model is computed using the finite difference method. The numerical results show that the peak polished rod load for sucker rod beam pump unit C-456-D-256-120 is 38504.2 lb for an untapered rod string and 25723.3 lb for a tapered one. To keep the sucker rod string from breaking under overload, the use of a tapered sucker rod string is therefore recommended.
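
    The rod-string dynamics behind such models are conventionally described by a damped one-dimensional wave equation (the Gibbs formulation), solved by finite differences as the abstract states. The following is a minimal sketch of an explicit scheme for that equation; the rod length, propagation speed, damping factor, grid, and polished-rod motion are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hedged sketch: explicit finite-difference solution of the damped wave
# equation u_tt = v^2 * u_xx - c * u_t used (after Gibbs) for sucker rod
# string dynamics. All parameter values are illustrative assumptions.

def simulate_rod_string(length_ft=4000.0, v=16000.0, damping=0.5,
                        nx=81, dt=5e-4, n_steps=2000,
                        surface_motion=None):
    """Return the rod displacement field u[x] after n_steps.

    Driven at the surface (x = 0) by the polished-rod motion; an
    approximate zero-stress (free) condition is used at the pump end.
    """
    dx = length_ft / (nx - 1)
    assert v * dt / dx < 1.0, "CFL stability condition violated"
    u_prev = np.zeros(nx)
    u = np.zeros(nx)
    for n in range(n_steps):
        t = n * dt
        u_new = np.empty(nx)
        # interior points: central differences in space and time
        u_new[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                       + (v * dt / dx) ** 2 * (u[2:] - 2 * u[1:-1] + u[:-2])
                       - damping * dt * (u[1:-1] - u_prev[1:-1]))
        # surface boundary: prescribed polished-rod displacement
        u_new[0] = surface_motion(t) if surface_motion else 0.0
        # downhole boundary: u_x = 0 (free end)
        u_new[-1] = u_new[-2]
        u_prev, u = u, u_new
    return u

# illustrative sinusoidal polished-rod motion, 10 strokes per minute
u = simulate_rod_string(surface_motion=lambda t: 5.0 * np.sin(2 * np.pi * t / 6.0))
```

From a field like `u`, rod loads follow by differencing in space (stress is proportional to the axial strain `u_x`), which is how a peak polished rod load such as the one quoted above would be extracted.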

  20. Modeling and simulation performance of sucker rod beam pump

    International Nuclear Information System (INIS)

    Aditsania, Annisa; Rahmawati, Silvy Dewi; Sukarno, Pudjo; Soewono, Edy

    2015-01-01

    Artificial lift is a mechanism for lifting hydrocarbons, generally petroleum, from a well to the surface. It is used when the natural reservoir pressure has declined significantly. Sucker rod beam pumping is one method of artificial lift. In this research, the sucker rod beam pump is modeled as a function of the geometry of the surface unit, the size of the sucker rod string, and the fluid properties. Besides its length, a sucker rod string is also classified as tapered or untapered. At the beginning of this research, for ease of modeling, the sucker rod string was assumed to be untapered; that assumption proved unrealistic, so a tapered sucker rod string model was built as well. The numerical solution of the sucker rod beam pump model is computed using the finite difference method. The numerical results show that the peak polished rod load for sucker rod beam pump unit C-456-D-256-120 is 38504.2 lb for an untapered rod string and 25723.3 lb for a tapered one. To keep the sucker rod string from breaking under overload, the use of a tapered sucker rod string is therefore recommended.

  1. A portrait of a sucker using landscape genetics: how colonization and life history undermine the idealized dendritic metapopulation.

    Science.gov (United States)

    Salisbury, Sarah J; McCracken, Gregory R; Keefe, Donald; Perry, Robert; Ruzzante, Daniel E

    2016-09-01

    Dendritic metapopulations have been attributed unique properties by in silico studies, including an elevated genetic diversity relative to a panmictic population of equal total size. These predictions have not been rigorously tested in nature, nor has there been full consideration of the interacting effects among contemporary landscape features, colonization history and life history traits of the target species. We tested for the effects of dendritic structure as well as the relative importance of life history, environmental barriers and historical colonization on the neutral genetic structure of a longnose sucker (Catostomus catostomus) metapopulation in the Kogaluk watershed of northern Labrador, Canada. Fish were sampled from eight lakes, genotyped with 17 microsatellites, and aged using opercula. Lakes varied in differentiation, historical and contemporary connectivity, and life history traits. Isolation by distance was detected only after removing two highly genetically differentiated lakes, suggesting a lack of migration-drift equilibrium and the lingering influence of historical factors on genetic structure. Bayesian analyses supported colonization via the Kogaluk's headwaters. The historical concentration of genetic diversity in headwaters inferred by this result was supported by high historical and contemporary effective sizes of the headwater lake, T-Bone. Alternatively, reduced allelic richness in headwaters confirmed the dendritic structure's influence on gene flow, but this did not translate to an elevated metapopulation effective size. A lack of equilibrium and upstream migration may have dampened the effects of dendritic structure. We suggest that interacting historical and contemporary factors prevent the achievement of the idealized traits of a dendritic metapopulation in nature. © 2016 John Wiley & Sons Ltd.

  2. Juvenile Lost River and shortnose sucker year class strength, survival, and growth in Upper Klamath Lake, Oregon, and Clear Lake Reservoir, California—2016 Monitoring Report

    Science.gov (United States)

    Burdick, Summer M.; Ostberg, Carl O.; Hoy, Marshal S.

    2018-04-20

    Executive Summary: The largest populations of federally endangered Lost River (Deltistes luxatus) and shortnose suckers (Chasmistes brevirostris) exist in Upper Klamath Lake, Oregon, and Clear Lake Reservoir, California. Upper Klamath Lake populations are decreasing because adult mortality, which is relatively low, is not being balanced by recruitment of young adult suckers into known spawning aggregations. Most Upper Klamath Lake juvenile sucker mortality appears to occur within the first year of life. Annual production of juvenile suckers in Clear Lake Reservoir appears to be highly variable and may not occur at all in very dry years. However, juvenile sucker survival is much higher in Clear Lake, with non-trivial numbers of suckers surviving to join spawning aggregations. Long-term monitoring of juvenile sucker populations is needed to (1) determine whether there are annual and species-specific differences in production, survival, and growth, (2) identify the season (summer or winter) in which most mortality occurs, and (3) help identify potential causes of high juvenile sucker mortality, particularly in Upper Klamath Lake. We initiated an annual juvenile sucker monitoring program in 2015 to track cohorts in three months (June, August, and September) annually in Upper Klamath Lake and Clear Lake Reservoir. We tracked annual variability in age-0 sucker apparent production, juvenile sucker apparent survival, and apparent growth. Using genetic markers, we were able to classify suckers as one of three taxa: shortnose or Klamath largescale suckers, Lost River suckers, or suckers with genetic markers of both species (Intermediate Prob[LRS]). Using catch data, we generated taxa-specific indices of year class strength, August–September apparent survival, and overwinter apparent survival. We also examined prevalence and severity of afflictions such as parasites, wounds, and deformities. Indices of year class strength in Upper Klamath Lake were similar for shortnose suckers in 2015

  3. Protector predominantly for pump sucker rods

    Energy Technology Data Exchange (ETDEWEB)

    Razhetdinov, U.Z.; Prokopov, O.I.; Sharafutdinov, I.G.; Valishim, Yu.G.

    1982-01-01

    A protector is proposed that includes a cylindrical housing with threaded connecting sections on its ends and a rim with spheres attached to the outer surface of the housing. To improve the operating reliability of the protector by reducing wear of the rocking supports, the rim on the outer surface of the housing is installed at an angle to its axis, allowing the spheres to move in the rim around the sucker rods as they interact with the pump-compressor pipes.

  4. Structure and mechanical properties of Octopus vulgaris suckers.

    Science.gov (United States)

    Tramacere, Francesca; Kovalev, Alexander; Kleinteich, Thomas; Gorb, Stanislav N; Mazzolai, Barbara

    2014-02-06

    In this study, we investigate the morphology and mechanical features of Octopus vulgaris suckers, which may serve as a model for the creation of a new generation of attachment devices. Octopus suckers attach to a wide range of substrates in wet conditions, including rough surfaces. This remarkable feature is made possible by the sucker's tissues, which conform to the substrate profile. Previous studies have described a peculiar internal structure that plays a fundamental role in the attachment and detachment processes of the sucker. In this work, we present a mechanical characterization of the tissues involved in the attachment process, performed using microindentation tests. We evaluated the elastic modulus and viscoelastic parameters of the natural tissues (E ∼ 10 kPa) and measured the mechanical properties of some artificial materials that have previously been used in soft robotics. Such a comparison of biological prototypes and artificial materials that mimic octopus-sucker tissue is crucial for the design of innovative artificial suction cups for use in wet environments. We conclude that the properties of the common elastomers generally used in soft robotics are quite dissimilar to those of biological suckers.
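
    Moduli from microindentation, like the E ∼ 10 kPa quoted above, are commonly extracted by fitting force-depth curves to the Hertz contact model for a spherical tip. The sketch below shows that reduction on synthetic data; the tip radius, depths, and Poisson ratio of 0.5 (incompressible soft tissue) are illustrative assumptions, not the study's actual protocol.

```python
import numpy as np

# Hedged sketch: estimating an elastic modulus from spherical
# microindentation data via the Hertz model,
#   F = (4/3) * E / (1 - nu^2) * sqrt(R) * d^(3/2).
# Synthetic data and parameters below are illustrative assumptions.

def hertz_modulus(force_N, depth_m, radius_m, poisson=0.5):
    """Least-squares fit of E (Pa) from force-depth data, fit through the origin."""
    x = (4.0 / 3.0) * np.sqrt(radius_m) * depth_m ** 1.5 / (1.0 - poisson ** 2)
    # slope of F versus x gives E
    return float(np.dot(x, force_N) / np.dot(x, x))

R = 50e-6                       # 50 um spherical tip (assumed)
d = np.linspace(0, 20e-6, 50)   # indentation depths up to 20 um
E_true = 10e3                   # 10 kPa, the order reported for sucker tissue
F = (4 / 3) * E_true / (1 - 0.5 ** 2) * np.sqrt(R) * d ** 1.5

print(round(hertz_modulus(F, d, R) / 1e3, 2), "kPa")  # recovers 10.0 kPa
```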

  5. The morphology and adhesion mechanism of Octopus vulgaris suckers.

    Directory of Open Access Journals (Sweden)

    Francesca Tramacere

    Full Text Available The octopus sucker represents a fascinating natural system performing adhesion on different terrains and substrates. Octopuses use suckers to anchor the body to the substrate or to grasp, investigate and manipulate objects, just to mention a few of their functions. Our study focuses on the morphology and adhesion mechanism of suckers in Octopus vulgaris. We use three different techniques (MRI, ultrasonography, and histology) and a 3D reconstruction approach to contribute knowledge on both morphology and functionality of the sucker structure in O. vulgaris. The results of our investigation are two-fold. First, we observe some morphological differences with respect to the octopus species previously studied (i.e., Octopus joubini, Octopus maya, Octopus bimaculoides/bimaculatus and Eledone cirrosa). In particular, in O. vulgaris the acetabular chamber, which is a hollow spherical cavity in other octopuses, shows an ellipsoidal cavity whose roof bears a prominent protuberance with surface roughness. Second, based on our findings, we propose a hypothesis on the sucker adhesion mechanism in O. vulgaris. We hypothesize that continuous adhesion is achieved by sealing the orifice between the acetabulum and infundibulum portions via the acetabular protuberance. We suggest that this takes place while the infundibular part assumes a completely flat shape, and that adhesion is sustained through preservation of the sucker configuration. In vivo ultrasonographic recordings support our proposed adhesion model by showing the sucker in action. Such an underlying physical mechanism offers innovative potential cues for developing bioinspired artificial adhesion systems. Furthermore, we think that it could represent a useful approach for investigating potential differences in the ecology and adhesion performance of different species.

  6. The morphology and adhesion mechanism of Octopus vulgaris suckers.

    Science.gov (United States)

    Tramacere, Francesca; Beccai, Lucia; Kuba, Michael; Gozzi, Alessandro; Bifone, Angelo; Mazzolai, Barbara

    2013-01-01

    The octopus sucker represents a fascinating natural system performing adhesion on different terrains and substrates. Octopuses use suckers to anchor the body to the substrate or to grasp, investigate and manipulate objects, just to mention a few of their functions. Our study focuses on the morphology and adhesion mechanism of suckers in Octopus vulgaris. We use three different techniques (MRI, ultrasonography, and histology) and a 3D reconstruction approach to contribute knowledge on both morphology and functionality of the sucker structure in O. vulgaris. The results of our investigation are two-fold. First, we observe some morphological differences with respect to the octopus species previously studied (i.e., Octopus joubini, Octopus maya, Octopus bimaculoides/bimaculatus and Eledone cirrosa). In particular, in O. vulgaris the acetabular chamber, which is a hollow spherical cavity in other octopuses, shows an ellipsoidal cavity whose roof bears a prominent protuberance with surface roughness. Second, based on our findings, we propose a hypothesis on the sucker adhesion mechanism in O. vulgaris. We hypothesize that continuous adhesion is achieved by sealing the orifice between the acetabulum and infundibulum portions via the acetabular protuberance. We suggest that this takes place while the infundibular part assumes a completely flat shape, and that adhesion is sustained through preservation of the sucker configuration. In vivo ultrasonographic recordings support our proposed adhesion model by showing the sucker in action. Such an underlying physical mechanism offers innovative potential cues for developing bioinspired artificial adhesion systems. Furthermore, we think that it could represent a useful approach for investigating potential differences in the ecology and adhesion performance of different species.

  7. Sediment Dynamics Affecting the Threatened Santa Ana Sucker in the Highly-modified Santa Ana River and Inset Channel, Southern California, USA

    Science.gov (United States)

    Minear, J. T.; Wright, S. A.

    2015-12-01

    In this study, we investigate the sediment dynamics of the low-flow channel of the Santa Ana River that is formed by wastewater discharges and contains some of the last remaining habitat of the Santa Ana Sucker (Catostomus santaanae). The Santa Ana River is a highly-modified river draining the San Bernardino Mountains and Inland Empire metropolitan area east of Los Angeles. Home to over 4 million people, the watershed provides habitat for the federally-threatened Santa Ana Sucker, which presently resides within the mainstem Santa Ana River in a reach supported by year-round constant discharges from water treatment plants. The nearly constant low-flow wastewater discharges and infrequent runoff events create a small, approximately 8 m wide, inset channel within the approximately 300 m wide mainstem channel that is typically dry except during large flood flows. The sediment dynamics within the inset channel are characterized by constantly evolving bed substrate and sediment transport rates, and occasional channel avulsions. The sediment dynamics have a large influence on the Sucker, which relies on coarse substrate (gravel and cobble) for its food production. From WY 2013 to the present, we investigated the sediment dynamics of the inset channel using repeat bathymetric and substrate surveys, bedload sampling, and discharge measurements. We found two distinct phases of the inset channel behavior: 1. 'Reset' flows, where sediment-laden mainstem discharges from upstream runoff events result in sand deposition in the inset channel or avulse the inset channel onto previously dry riverbed; and 2. 'Winnowing' flows, whereby the sand within the inset channel is removed by clear-water low flows from the wastewater treatment plant discharges. 
Thus, in contrast to many regulated rivers where high flows are required to flush fine sediments from the bed (for example, downstream from dams), in the Santa Ana River the low flows from wastewater treatment plants serve as the flushing

  8. Small nonnative fishes as predators of larval razorback suckers

    Science.gov (United States)

    Carpenter, J.; Mueller, G.A.

    2008-01-01

    The razorback sucker (Xyrauchen texanus), an endangered big-river fish of the Colorado River basin, has demonstrated no sustainable recruitment in 4 decades, despite presence of spawning adults and larvae. Lack of adequate recruitment has been attributed to several factors, including predation by nonnative fishes. Substantial funding and effort have been expended on mechanically removing nonnative game fishes, typically targeting large predators. As a result, abundance of larger predators has declined, but the abundance of small nonnative fishes has increased in some areas. We conducted laboratory experiments to determine if small nonnative fishes would consume larval razorback suckers. We tested adults of three small species (threadfin shad, Dorosoma petenense; red shiner, Cyprinella lutrensis; fathead minnow, Pimephales promelas) and juveniles of six larger species (common carp, Cyprinus carpio; yellow bullhead, Ameiurus natalis; channel catfish, Ictalurus punctatus; rainbow trout, Oncorhynchus mykiss; green sunfish, Lepomis cyanellus; bluegill, L. macrochirus). These nonnative fishes span a broad ecological range and are abundant within the historical range of the razorback sucker. All nine species fed on larval razorback suckers (total length, 9-16 mm). Our results suggest that predation by small nonnative fishes could be responsible for limiting recovery of this endangered species.

  9. Juvenile sucker cohort tracking data summary and assessment of monitoring program, 2015

    Science.gov (United States)

    Burdick, Summer M.; Ostberg, Carl O.; Hereford, Mark E.; Hoy, Marshal S.

    2016-09-22

    Populations of federally endangered Lost River (Deltistes luxatus) and shortnose suckers (Chasmistes brevirostris) in Upper Klamath Lake, Oregon, are experiencing long-term declines in abundance. Upper Klamath Lake populations are decreasing because adult mortality, which is relatively low, is not being balanced by recruitment of young adult suckers into known adult spawning aggregations. Previous sampling for juvenile suckers indicated that most juvenile sucker mortality in Upper Klamath Lake likely occurs within the first year of life. The importance of juvenile sucker mortality to the dynamics of Clear Lake Reservoir populations is less clear, and factors other than juvenile mortality (such as access to spawning habitat) play a substantial role. For example, production of age-0 juvenile suckers, as determined by fin ray annuli and fin development, has not been detected since 2013 in Clear Lake Reservoir, whereas it is detected annually in Upper Klamath Lake.

  10. Sucker rod string design of the pumping systems

    OpenAIRE

    Hua. L, C

    2015-01-01

    The existing design of sucker rod strings mainly rests on the simplifying assumption that the rod string is exposed to simple tension loading, with the goal of equal modified stress at the top of each taper. The improved rod design aims for the same degree of safety at each section and uses a dynamic force distribution that is proportional along the whole string. Moreover, the available procedures did not provide the desired accuracy of the pertinent analysis, and the operators...

  11. Features of electric drive sucker rod pumps for oil production

    Science.gov (United States)

    Gizatullin, F. A.; Khakimyanov, M. I.; Khusainov, F. F.

    2018-01-01

    This article concerns the modes of operation of electric drives for downhole sucker rod pumps. Downhole oil production processes are very energy intensive, and oil fields contain many wells, many of which operate in inefficient modes with significant additional losses. The authors propose technical solutions to improve the energy performance of pump unit drives: counterweight balancing, reducing electric motor power, replacing induction motors with permanent magnet motors, replacing balancer drives with chain drives, and using variable frequency drives.

  12. Colonial waterbird predation on Lost River and Shortnose suckers in the Upper Klamath Basin

    Science.gov (United States)

    Evans, Allen F.; Hewitt, David A.; Payton, Quinn; Cramer, Bradley M.; Collis, Ken; Roby, Daniel D.

    2016-01-01

    We evaluated predation on Lost River Suckers Deltistes luxatus and Shortnose Suckers Chasmistes brevirostris by American white pelicans Pelecanus erythrorhynchos and double-crested cormorants Phalacrocorax auritus nesting at mixed-species colonies in the Upper Klamath Basin of Oregon and California during 2009–2014. Predation was evaluated by recovering (detecting) PIT tags from tagged fish on bird colonies and calculating minimum predation rates, as the percentage of available suckers consumed, adjusted for PIT tag detection probabilities but not deposition probabilities (i.e., probability an egested tag was deposited on- or off-colony). Results indicate that impacts of avian predation varied by sucker species, age-class (adult, juvenile), bird colony location, and year, demonstrating dynamic predator–prey interactions. Tagged suckers ranging in size from 72 to 730 mm were susceptible to cormorant or pelican predation; all but the largest Lost River Suckers were susceptible to bird predation. Minimum predation rate estimates ranged annually from <0.1% to 4.6% of the available PIT-tagged Lost River Suckers and from <0.1% to 4.2% of the available Shortnose Suckers, and predation rates were consistently higher on suckers in Clear Lake Reservoir, California, than on suckers in Upper Klamath Lake, Oregon. There was evidence that bird predation on juvenile suckers (species unknown) in Upper Klamath Lake was higher than on adult suckers in Upper Klamath Lake, where minimum predation rates ranged annually from 5.7% to 8.4% of available juveniles. Results suggest that avian predation is a factor limiting the recovery of populations of Lost River and Shortnose suckers, particularly juvenile suckers in Upper Klamath Lake and adult suckers in Clear Lake Reservoir. Additional research is needed to measure predator-specific PIT tag deposition probabilities (which, based on other published studies, could increase predation rates presented herein by a factor of roughly 2
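
    The minimum-predation-rate arithmetic described above, tag recoveries divided by available tags and corrected for detection probability but not deposition probability, can be sketched as follows. The numbers are illustrative, not values from the study.

```python
# Hedged sketch of the rate calculation: a minimum predation rate is the
# percentage of available PIT-tagged fish whose tags are recovered on a
# colony, adjusted for the probability of detecting a deposited tag.
# All numbers below are illustrative, not data from the study.

def minimum_predation_rate(tags_recovered, tags_available, detection_prob):
    """Percent of available tagged fish consumed (a minimum estimate)."""
    if not 0 < detection_prob <= 1:
        raise ValueError("detection probability must be in (0, 1]")
    return 100.0 * tags_recovered / (tags_available * detection_prob)

rate = minimum_predation_rate(tags_recovered=42, tags_available=2000,
                              detection_prob=0.75)
print(round(rate, 2))        # 2.8 (percent)

# further adjusting by a deposition probability of ~0.5, as the abstract
# notes, would roughly double the estimate
print(round(rate / 0.5, 2))  # 5.6
```

Because off-colony deposition is not corrected for, such estimates are minimums, which is why the abstract flags the roughly twofold adjustment.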

  13. The quality of Metroxylon Sago sucker: morphology and uptake of manganese and iron

    International Nuclear Information System (INIS)

    Nashriyah Mat; Abdul Khalik Wood; Ramli Ishak

    2001-01-01

    Metroxylon sago, or sagopalm, is, apart from rice, an important source of carbohydrate in South East Asian countries. In Malaysia, wild sagopalm grows in Sarawak in its natural habitat, the coastal peat swamp. The quality of suckers growing vegetatively on sagopalm was studied at the Sungai Talau experimental station, the Dalat plantation, and the Oya/Mukah plantation in Sarawak. The coefficients of variability (C) and indices of similarity (I) were calculated based on sucker morphology and uptake of manganese and iron. The matrix of hypothetical exact interpoint distances (indices of similarity and dissimilarity) shows that the sucker on a mature sagopalm at the Sungai Talau experimental station was of high quality, the sucker on a 5-year-old sagopalm at the Mukah sago plantation was approximately one-third as good, whereas the sucker on a 1.5-year-old sagopalm at the Oya/Dalat sago plantation was of inferior quality. (Author)

  14. Musculoskeletal determinants of pelvic sucker function in Hawaiian stream gobiid fishes: interspecific comparisons and allometric scaling.

    Science.gov (United States)

    Maie, Takashi; Schoenfuss, Heiko L; Blob, Richard W

    2013-07-01

    Gobiid fishes possess a distinctive ventral sucker, formed from fusion of the pelvic fins. This sucker is used to adhere to a wide range of substrates including, in some species, the vertical cliffs of waterfalls that are climbed during upstream migrations. Previous studies of waterfall-climbing goby species have found that pressure differentials and adhesive forces generated by the sucker increase with positive allometry as fish grow in size, despite isometry or negative allometry of sucker area. To produce such scaling patterns for pressure differential and adhesive force, waterfall-climbing gobies might exhibit allometry for other muscular or skeletal components of the pelvic sucker that contribute to its adhesive function. In this study, we used anatomical dissections and modeling to evaluate the potential for allometric growth in the cross-sectional area, effective mechanical advantage (EMA), and force generating capacity of major protractor and retractor muscles of the pelvic sucker (m. protractor ischii and m. retractor ischii) that help to expand the sealed volume of the sucker to produce pressure differentials and adhesive force. We compared patterns for three Hawaiian gobiid species: a nonclimber (Stenogobius hawaiiensis), an ontogenetically limited climber (Awaous guamensis), and a proficient climber (Sicyopterus stimpsoni). Scaling patterns were relatively similar for all three species, typically exhibiting isometric or negatively allometric scaling for the muscles and lever systems examined. Although these scaling patterns do not help to explain the positive allometry of pressure differentials and adhesive force as climbing gobies grow, the best climber among the species we compared, S. stimpsoni, does exhibit the highest calculated estimates of EMA, muscular input force, and output force for pelvic sucker retraction at any body size, potentially facilitating its adhesive ability. Copyright © 2013 Wiley Periodicals, Inc.
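
    The lever mechanics behind effective mechanical advantage (EMA) can be made concrete with a small sketch: muscle output force at the sucker is muscle stress times cross-sectional area times the in-lever to out-lever ratio, so under strictly isometric growth EMA stays constant while force scales with length squared. All values below are illustrative assumptions, not measurements from the study.

```python
# Hedged sketch of EMA lever arithmetic. The muscle stress, cross-sectional
# area (CSA), and lever lengths are illustrative assumptions.

def output_force(csa_mm2, in_lever_mm, out_lever_mm, stress_kPa=200.0):
    """Force (N) transmitted to the sucker by a muscle acting through a lever."""
    ema = in_lever_mm / out_lever_mm            # effective mechanical advantage
    input_force_N = stress_kPa * 1e3 * csa_mm2 * 1e-6  # stress (Pa) * area (m^2)
    return input_force_N * ema

# isometric scaling: CSA grows as length^2, lever arms as length^1,
# so EMA is constant and output force grows as length^2
for L in (1.0, 2.0):
    print(round(output_force(csa_mm2=3.0 * L ** 2,
                             in_lever_mm=1.2 * L,
                             out_lever_mm=4.0 * L), 3))  # prints 0.18 then 0.72
```

Doubling body length here quadruples output force at constant EMA, which is why positive allometry of adhesive force requires allometry somewhere in the muscle or lever system, the question the study examines.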

  15. Inspiration, simulation and design for smart robot manipulators from the sucker actuation mechanism of cephalopods.

    Science.gov (United States)

    Grasso, Frank W; Setlur, Pradeep

    2007-12-01

    Octopus arms house 200-300 independently controlled suckers that can alternately afford an octopus fine manipulation of small objects and produce high adhesion forces on virtually any non-porous surface. Octopuses use their suckers to grasp, rotate and reposition soft objects (e.g., octopus eggs) without damaging them and to provide strong, reversible adhesion forces to anchor the octopus to hard substrates (e.g., rock) during wave surge. The biological 'design' of the sucker system is understood to be divided anatomically into three functional groups: the infundibulum that produces a surface seal that conforms to arbitrary surface geometry; the acetabulum that generates negative pressures for adhesion; and the extrinsic muscles that allow adhered surfaces to be rotated relative to the arm. The effector underlying these abilities is the muscular hydrostat. Guided by sensory input, the thousands of muscle fibers within the muscular hydrostats of the sucker act in coordination to provide stiffness or force when and where needed. The mechanical malleability of octopus suckers, the interdigitated arrangement of their muscle fibers and the flexible interconnections of its parts make direct studies of their control challenging. We developed a dynamic simulator (ABSAMS) that models the general functioning of muscular hydrostat systems built from assemblies of biologically constrained muscular hydrostat models. We report here on simulation studies of octopus-inspired and artificial suckers implemented in this system. These simulations reproduce aspects of octopus sucker performance and squid tentacle extension. Simulations run with these models using parameters from man-made actuators and materials can serve as tools for designing soft robotic implementations of man-made artificial suckers and soft manipulators.

  16. Experimental Investigation on the Morphology and Adhesion Mechanism of Leech Posterior Suckers.

    Directory of Open Access Journals (Sweden)

    Huashan Feng

    Full Text Available The posterior sucker of a leech represents a fascinating natural system that allows the leech to adhere to different terrains and substrates. However, the mechanism of adhesion and detachment has not yet been elucidated. To better understand how adhesion is performed, we analyzed the surface structure, adhesion movements, muscle distribution, physical characteristics, and adhesion force of leech posterior suckers by experimental investigation. Three conclusions can be drawn from the obtained experimental results. First, the adhesion of the posterior sucker is wet adhesion, because the surface of the posterior sucker is smooth and sealing can only be achieved on wet surfaces. Second, the deformable texture, consisting of soft collagen tissues and highly ductile epidermal tissues, plays a key role in adhering to rough surfaces. Finally, adhesion and detachment are achieved by the synergetic operation of six muscle fibers working in different directions. More concretely, directional deformation of the collagen/epidermal interface driven by spatially distributed muscle fibers facilitates the excretion of fluids at the sucker venter, thus allowing liquid sealing. Furthermore, we found that the adhesion strength is directly related to the size of the contact surface, which is generated and affected by the sucker deformation. Such an underlying physical mechanism offers potential cues for developing innovative bio-inspired artificial adhesion systems.

  17. Colonial waterbird predation on Lost River and shortnose suckers based on recoveries of passive integrated transponder tags

    Science.gov (United States)

    Evans, Allen; Payton, Quinn; Cramer, Bradley D.; Collis, Ken; Hewitt, David A.; Roby, Daniel D.

    2015-01-01

    We evaluated predation on Lost River suckers (Deltistes luxatus) and shortnose suckers (Chasmistes brevirostris), both listed under the Endangered Species Act (ESA), from American white pelicans (Pelecanus erythrorhynchos) and double-crested cormorants (Phalacrocorax auritus) nesting at mixed species colonies on Clear Lake Reservoir, CA and Upper Klamath Lake, OR during 2009-2014. Predation was evaluated by recovering passive integrated transponder (PIT) tags that were implanted in suckers, subsequently consumed by pelicans or cormorants, and deposited on the birds’ nesting colonies. Data from PIT tag recoveries were used to estimate predation rates (proportion of available tagged suckers consumed) by birds to evaluate the relative susceptibility of suckers to avian predation in Upper Klamath Basin. Data on the size of pelican and cormorant colonies (number of breeding adults) at Clear Lake and Upper Klamath Lake were also collected and reported in the context of predation on suckers.

  18. Sperm quality assessments for endangered razorback suckers Xyrauchen Texanus

    Science.gov (United States)

    Jenkins, Jill A.; Eilts, Bruce E.; Guitreau, Amy M.; Figiel, Chester R.; Draugelis-Dale, Rassa O.; Tiersch, Terrence R.

    2011-01-01

    Flow cytometry (FCM) and computer-assisted sperm motion analysis (CASA) methods were developed and validated for use with endangered razorback suckers Xyrauchen texanus collected (n=64) during the 2006 spawning season. Sperm motility could be activated within osmolality ranges noted during milt collections (here 167–343 mOsm/kg). We hypothesized that sperm quality of milt collected into isoosmotic (302 mOsm/kg) or hyperosmotic (500 mOsm/kg) Hanks' balanced salt solution would not differ. Pre-freeze viabilities were similar between osmolalities (79%±6 (S.E.M.) and 76%±7); however, post-thaw values were greater in hyperosmotic buffer (27%±3 and 12%±2; P=0.0065), as was mitochondrial membrane potential (33%±4 and 13%±2; P=0.0048). Visual estimates of pre-freeze motility correlated with total (r=0.7589; range 23–82%) and progressive motility (r=0.7449) by CASA and were associated with greater viability (r=0.5985; Pr=-0.83; P=0.0116) and mitochondrial function (r=-0.91; P=0.0016). By FCM-based assessments of DNA integrity, whereby increased fluorochrome binding indicated more fragmentation, higher levels were negatively correlated with count (r=-0.77; Pr=-0.66; P=0.0004). Fragmentation was higher in isotonic buffer (P=0.0234). To increase reproductive capacity of natural populations, the strategy and protocols developed can serve as a template for use with other imperiled fish species, biomonitoring, and genome banking.

  19. Development of steel head joints with fiberglass sucker rod on the base of contact stresses investigation

    Energy Technology Data Exchange (ETDEWEB)

    Kopey, B.V.; Kopey, L.B. [Ivano-Frankivsk State Technical Oil and Gas University (Ukraine); Maksymuk, A.V.; Shcherbyna, N.M. [National Ukrainian Academy of Sciences (Ukraine)

    1998-12-31

The methods of calculation of contact stresses during the interaction of a cylindrical shell tube with a steel bandage are presented. Timoshenko's generalized theory of shells serves as the basis for investigating the strength of the steel head to fiberglass sucker rod joint; this theory makes it possible to account for the mechanical performance of composite materials. The problem is reduced to solving a Fredholm integral equation of the second kind, and a numerical analysis is performed. Several joints of a composite body with a steel head are proposed. Full-size sucker rod fatigue tests were performed to determine the fatigue limit under bending and axial cyclic loads in oil well fluids. (orig.)

  20. A method for designing fiberglass sucker-rod strings with API RP 11L

    International Nuclear Information System (INIS)

    Jennings, J.W.; Laine, R.E.

    1991-01-01

This paper presents a method for using the API recommended practice for the design of sucker-rod pumping systems with fiberglass composite rod strings. The API method is useful for obtaining quick, approximate, preliminary design calculations. Equations for calculating all the composite material factors needed in the API calculations are given.

  1. Robust technology and system for management of sucker rod pumping units in oil wells

    Science.gov (United States)

    Aliev, T. A.; Rzayev, A. H.; Guluyev, G. A.; Alizada, T. A.; Rzayeva, N. E.

    2018-01-01

We propose a technology for calculating the robust, normalized correlation functions of the signal from the force sensor on the rod string attached to the hanger of the sucker rod pumping unit. The robust normalized correlation functions are used to form sets of informative attribute combinations, each of which corresponds to a technical condition of the sucker rod pumping unit. We demonstrate how these sets can be used to solve identification and management problems in the oil production process in real time using inexpensive controllers. The results obtained from using the system on real objects are also presented in this paper. The energy savings and prolonged overhaul periods were found to substantially increase cost-effectiveness.
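The authors' robust estimators are not specified in the abstract; as a sketch of the underlying idea, the following computes an ordinary (non-robust) normalized autocorrelation of a synthetic force signal. A value near 1 at the lag of the pumping period is the kind of informative attribute such a diagnostic system builds on (the signal and its period are invented for illustration):

```python
from math import sin, pi

def normalized_autocorr(x, lag):
    """Sample normalized autocorrelation r(lag) = c(lag)/c(0) of a 1-D signal."""
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x) / n
    c = sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag)) / (n - lag)
    return c / c0

# Synthetic "force sensor" signal with a dominant pumping period of 8 samples
force = [sin(2 * pi * i / 8) for i in range(64)]
r0 = normalized_autocorr(force, 0)   # 1 by construction
r8 = normalized_autocorr(force, 8)   # near 1 at the pumping period
```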

  2. Microclonal Multiplication of wild Cherry (Prunus avium L.) from Shoot Tips and Root Sucker Buds

    OpenAIRE

    Pevalek-Kozlina, Branka; Michler, Charles H.; Jelaska, Sibila

    1994-01-01

    The effects of different combinations and concentrations of the growth regulators: 6-benzylaminopurine (BA), 6-furfurylaminopurine (KIN), N6- (2-isopentenyl) adenine (2iP), indole-3-butyric acid (IBA), indole-3-acetic acid (IAA) and a-naphthaleneacetic acid (NAA) on axillary shoot multiplication rates for wild cherry (Prunus avium L.) shoot explants were determined. Apical shoot tips and axillary buds from juvenile trees (5-year old) and from root suckers of mature trees (55-year old) were us...

  3. Inter-annual variability in apparent relative production, survival, and growth of juvenile Lost River and shortnose suckers in Upper Klamath Lake, Oregon, 2001–15

    Science.gov (United States)

    Burdick, Summer M.; Martin, Barbara A.

    2017-06-15

Executive Summary: Populations of the once abundant Lost River (Deltistes luxatus) and shortnose suckers (Chasmistes brevirostris) of the Upper Klamath Basin decreased so substantially throughout the 20th century that they were listed under the Endangered Species Act in 1988. Major landscape alterations, deterioration of water quality, and competition with and predation by exotic species are listed as primary causes of the decreases in populations. Upper Klamath Lake populations are decreasing because fish lost to adult mortality, which is relatively low for adult Lost River suckers and variable for adult shortnose suckers, are not replaced by new young adult suckers recruiting into known adult spawning aggregations. Catch-at-age and size data indicate that most adult suckers presently in Upper Klamath Lake spawning populations were hatched around 1991. While a lack of egg production and emigration of young fish (especially larvae) may contribute, catch-at-length and age data indicate that high mortality during the first summer or winter of life may be the primary limitation on the recruitment of young adults. The causes of juvenile sucker mortality are unknown. We compiled and analyzed catch, length, age, and species data on juvenile suckers from Upper Klamath Lake from eight prior studies conducted from 2001 to 2015 to examine annual variation in apparent production, survival, and growth of young suckers. We used a combination of qualitative assessments, general linear models, and linear regression to make inferences about annual differences in juvenile sucker dynamics. The intent of this exercise is to provide information that can be compared to annual variability in environmental conditions in the hope of understanding what drives juvenile sucker population dynamics. Age-0 Lost River suckers generally grew faster than age-0 shortnose suckers, but the difference in growth rates between the two species varied among years. 
This unsynchronized annual variation in

  4. Self-recognition mechanism between skin and suckers prevents octopus arms from interfering with each other.

    Science.gov (United States)

    Nesher, Nir; Levy, Guy; Grasso, Frank W; Hochner, Binyamin

    2014-06-02

Controlling movements of flexible arms is a challenging task for the octopus because of the virtually infinite number of degrees of freedom (DOFs) [1, 2]. Octopuses simplify this control by using stereotypical motion patterns that reduce the DOFs in the control space to a workable few [2]. These movements are triggered by the brain and are generated by motor programs embedded in the peripheral neuromuscular system of the arm [3-5]. The hundreds of suckers along each arm have a tendency to stick to almost any object they contact [6-9]. The existence of this reflex could pose significant problems with unplanned interactions between the arms if not appropriately managed. This problem is likely to be accentuated because it is accepted that octopuses are "not aware of their arms" [10-14]. Here we report a self-recognition mechanism that has a novel role in motor control, restraining the arms from interfering with each other. We show that the suckers of amputated arms never attach to octopus skin because a chemical in the skin inhibits the attachment reflex of the suckers. The peripheral mechanism appears to be overridden by central control because, in contrast to amputated arms, behaving octopuses sometimes grab amputated arms. Surprisingly, octopuses seem to identify their own amputated arms, as they treat arms of other octopuses like food more often than their own. This self-recognition mechanism is a novel peripheral component in the embodied organization of the adaptive interactions between the octopus's brain, body, and environment [15, 16]. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Irrigation drainwater effects on the endangered larval razorback sucker and bonytail in the middle Green River

    International Nuclear Information System (INIS)

    Hamilton, S.J.; Buhl, K.J.

    1994-01-01

The Department of the Interior (DOI) irrigation drainwater investigation of the middle Green River of Utah reported that concentrations of boron, selenium, and zinc in water, bottom sediment, and biological tissues were sufficiently elevated to be potentially harmful to fish and wildlife. The major focus of the DOI study was the Ashley Creek-Stewart Lake area near Jensen, Utah. The middle Green River provides sensitive habitat for the endangered Colorado squawfish, razorback sucker, and bonytail. The authors conducted two 90-day chronic toxicity studies, one with razorback sucker and the other with bonytail. Swim-up larvae were exposed in a reconstituted water simulating the middle Green River. The toxicant mixture simulated the environmental ratio and concentrations of inorganics reported in the DOI study for the mouth of the Ashley Creek-Stewart Lake outflow on the Green River, and was composed of arsenic, boron, copper, molybdenum, uranium, vanadium, selenate, selenite, and zinc. The mixture was tested at 1X, 2X, 4X, 8X, and 16X, where X was the average expected environmental concentration. Razorback suckers had reduced survival after 40 days of exposure to the inorganic mixture at 16X and after 60 days at 8X, whereas growth was reduced after 30 days at 8X and after 60 days at 4X. Bonytail had reduced survival after 20 days of exposure at 16X, whereas growth was reduced after 60 days at 8X. These studies show that, at environmentally realistic concentrations, the inorganic mixture simulating the Ashley Creek-Stewart Lake outfall adversely affects larval endangered fish.

  6. Health and condition of endangered young-of-the-year Lost River and Shortnose suckers relative to water quality in Upper Klamath Lake, Oregon, 2014–2015

    Science.gov (United States)

    Burdick, Summer M.; Conway, Carla M.; Elliott, Diane G.; Hoy, Marshal S.; Dolan-Caret, Amari; Ostberg, Carl O.

    2017-10-19

Most mortality of endangered Lost River (Deltistes luxatus) and shortnose (Chasmistes brevirostris) suckers in Upper Klamath Lake, Oregon, occurs within the first year of life. Juvenile suckers in Clear Lake Reservoir, California, survive longer and may even recruit to the spawning populations. In a previous (2013–2014) study, the health and condition of juvenile suckers and the dynamics of water quality between Upper Klamath Lake and Clear Lake Reservoir were compared. That study found that apparent signs of stress or exposure to irritants, such as peribiliary cuffing in liver tissue and mild inflammation and necrosis in gill tissues, were present in suckers from both lakes and were unlikely to be clues to the cause of differential mortality between lakes. Seasonal trends in energy storage as glycogen and triglycerides were also similar between lakes, indicating prey limitation was not a likely factor in differential mortality. To better understand the relationship between juvenile sucker health and water quality, we examined suckers collected in 2014–2015 from Upper Klamath Lake, where water quality can be dynamic and, at times, extreme. While there were notable differences in water quality and fish health between years, we were not able to identify any specific water-quality-related causes for differential fish condition. Water quality was generally better in 2014 than in 2015. When considered together, afflictions and abnormalities generally indicated healthier suckers in 2014 than in 2015. Low dissolved-oxygen events (water temperatures were warmer, particularly in July and September; and concentrations of microcystin in both large and small fractions of samples were lower in 2014 than in 2015. Total and therefore also un-ionized ammonia were low in 2014–2015 relative to concentrations known to affect suckers. Petechial hemorrhages of the skin, attached Lernaea spp., and eosinophilic hyaline droplets in the kidney tubules were less prevalent in 2014 than in

  7. Passive restoration augments active restoration in deforested landscapes: the role of root suckering adjacent to planted stands of Acacia koa

    Science.gov (United States)

    Paul G. Scowcroft; Justin T. Yeh

    2013-01-01

    Active forest restoration in Hawaii’s Hakalau Forest National Wildlife Refuge has produced a network of Acacia koa tree corridors and islands in deforested grasslands. Passive restoration by root suckering has potential to expand tree cover and close gaps between planted stands. This study documents rates of encroachment into grassland, clonal...

  8. Characterization of plasma vitellogenin and sex hormone concentrations during the annual reproductive cycle of the endangered razorback sucker

    Science.gov (United States)

    Hinck, Jo Ellen; Papoulias, Diana M.; Annis, Mandy L.; Tillitt, Donald E.; Marr, Carrie; Denslow, Nancy D.; Kroll, Kevin J.; Nachtmann, Jason

    2011-01-01

    Population declines of the endangered razorback sucker Xyrauchen texanus in the Colorado River basin have been attributed to predation by and competition with nonnative fishes, habitat alteration, and dam construction. The reproductive health and seasonal variation of the reproductive end points of razorback sucker populations are currently unknown. Using nonlethal methods, we characterized the plasma hormonal fluctuations of reproductively mature female and male razorback suckers over a 12-month period in a hatchery by measuring their vitellogenin (VTG) and three sex hormones: 17β-estradiol (E2), testosterone (T), and 11-ketotestosterone (KT). Fish were identified as reproductive or nonreproductive based on their body weight, VTG, and sex hormone profiles. In reproductive females, the E2 concentration increased in the fall and winter, and increases in T and VTG concentrations were generally associated with the spawning period. Mean T concentrations were consistently greater in reproductive females than in nonreproductive females, but this pattern was even more pronounced during the spawning period (spring). Consistently low T concentrations (the spawning period may indicate reproductive impairment. In reproductive males, spring increases in KT and T concentrations were associated with spawning; concentrations of E2 (the study. In addition, the E2 : KT ratio and T were the best metrics by which to distinguish female from male adult razorback suckers throughout the year. These metrics of reproductive health and condition may be particularly important to recovery efforts of razorback suckers given that the few remaining wild populations are located in a river where water quality and quantity issues are well documented. 
In addition to the size, age, and recruitment information currently considered in the recovery goals of this endangered species, reproductive end points could be included as recovery metrics with which to monitor seasonal trends and determine whether

  9. Chronic toxicity and hazard assessment of an inorganic mixture simulating irrigation drainwater to razorback sucker and bonytail

    Science.gov (United States)

    Hamilton, Steven J.; Buhl, Kevin J.; Bullard, Fern A.; Little, Edward E.

    2000-01-01

    We conducted two 90 day chronic toxicity studies with two endangered fish, razorback sucker and bonytail. Swim-up larvae were exposed in a reconstituted water simulating the middle Green River. The toxicant mixture simulated the environmental ratio and concentrations of inorganics reported in a Department of the Interior study for the mouth of Ashley Creek on the Green River, and was composed of nine elements. The mixture was tested at 1X, 2X, 4X, 8X, and 16X where X was the measured environmental concentration (2 μg/L arsenic, 630 μg/L boron, 10 μg/L copper, 5 μg/L molybdenum, 51 μg/L selenate, 8 μg/L selenite, 33 μg/L uranium, 2 μg/L vanadium, and 20 μg/L zinc). Razorback sucker had reduced survival after 60 days exposure to the inorganic mixture at 8X, whereas growth was reduced after 30 and 60 days at 2X and after 90 days at 4X. Bonytail had reduced survival after 30 days exposure at 16X, whereas growth was reduced after 30, 60, and 90 days at 8X. Swimming performance of razorback sucker and bonytail were reduced after 60 and 90 days of exposure at 8X. Whole-body residues of copper, selenium, and zinc increased in a concentration-response manner and seemed to be regulated at 90 days of exposure at 4X and lower treatments for razorback sucker, and at 8X and lower for bonytail. Adverse effects occurred in fish with whole-body residues of copper, selenium, and zinc similar to those causing similar effects in other fish species. Comparison of adverse effect concentrations with measured environmental concentrations showed a high hazard to the two endangered fish. Irrigation activities may be a contributing factor to the decline of these endangered fishes in the middle Green River. 
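The treatment series above is a simple integer scaling of the measured 1X environmental concentrations. A sketch of that bookkeeping, using the element names and baseline values listed in the abstract:

```python
# Baseline (1X) concentrations in micrograms per liter, as listed in the abstract
BASE_UG_L = {
    "arsenic": 2, "boron": 630, "copper": 10, "molybdenum": 5,
    "selenate": 51, "selenite": 8, "uranium": 33, "vanadium": 2, "zinc": 20,
}

def mixture(multiple):
    """Scale every element of the 1X environmental mixture by an integer multiple."""
    return {element: conc * multiple for element, conc in BASE_UG_L.items()}

# The five exposure treatments used in the study design: 1X, 2X, 4X, 8X, 16X
treatments = {m: mixture(m) for m in (1, 2, 4, 8, 16)}
```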

  10. Propolis and Herba Epimedii extracts enhance the non-specific immune response and disease resistance of Chinese sucker, Myxocyprinus asiaticus.

    Science.gov (United States)

    Zhang, Guobin; Gong, Shiyuan; Yu, Denghang; Yuan, Hanwen

    2009-03-01

The effect of a traditional Chinese medicine (TCM) formulated from propolis and Herba Epimedii extracts at a ratio of 3:1 (w/w) on the non-specific immune response of Chinese sucker (Myxocyprinus asiaticus) was investigated. Fish were fed diets containing 0 (control), 0.1%, 0.5% or 1.0% TCM extracts for five weeks. The respiratory burst and phagocytic activities of blood leukocytes, and lysozyme and natural haemolytic complement activities in plasma, were measured weekly. After five weeks of feeding, fish were infected with Aeromonas hydrophila and mortalities were recorded. Results of this study showed that feeding Chinese sucker different dosages of TCM extracts stimulated respiratory burst activity, phagocytosis by phagocytic cells in blood, and lysozyme activity in plasma. The extracts had no effect on plasma natural haemolytic complement activity. All dosages of the treated groups showed reduced mortality following A. hydrophila infection. Feed containing 0.5% TCM extracts was the most effective, with the mortality of the fish significantly reduced by 35% compared to the control. The results indicate that propolis and Herba Epimedii extracts in combination enhance the non-specific immune response and disease resistance of Chinese sucker against A. hydrophila.

  11. Enrichment in the Sucker and Weaner Phase Altered the Performance of Pigs in Three Behavioural Tests.

    Science.gov (United States)

    Ralph, Cameron; Hebart, Michelle; Cronin, Greg M

    2018-05-14

We tested the hypothesis that provision of enrichment in the form of enrichment blocks during the sucker and weaner phases would affect the behaviour of pigs. We measured the performance of pigs in an open field/novel object test, a maze test, and an executive function test, and the cortisol response of the pigs after exposure to an open field test. The provision of enrichment blocks altered the behaviour of the pigs in all three tests, and these changes suggest an increased willingness to explore and possibly an increased ability to learn. The behavioural tests highlighted that young pigs have the capacity to learn complex tasks. Our findings support the notion that the benefits of enrichment cannot be evaluated by measuring the interactions the animal has with the enrichments in the home pen; it may simply be beneficial to live in a more complex environment. We have highlighted that the early rearing environment is important and that management and husbandry at an early age can have long-term implications for pigs. The enrichment we used in this study was very simple, an enrichment block, and we provide evidence suggesting that the provision of enrichment affected pig behavioural responses. Even the simplest of enrichments may have benefits for the welfare and development of young pigs, and there is merit in developing enrichment devices that are suitable for use in pig production.

  12. Model for Sucker-Rod Pumping Unit Operating Modes Analysis Based on SimMechanics Library

    Science.gov (United States)

    Zyuzev, A. M.; Bubnov, M. V.

    2018-01-01

The article provides basic information about the development of a sucker-rod pumping unit (SRPU) model by means of the SimMechanics library in the MATLAB Simulink environment. The model is designed for the development of optimal pump-productivity management algorithms; sensorless diagnostics of the plunger pump and pumpjack; acquisition of the dynamometer card and determination of the dynamic fluid level in the well; normalization of faulty unit operation before troubleshooting is performed by staff; and determination of the equilibrium ratio from energy indicators, with output of manual balancing recommendations to achieve optimal power consumption efficiency. Particular attention is given to the application of various blocks from the SimMechanics library to account for the principal characteristics of the pumpjack construction and to obtain an adequate model. The article explains in depth the features of the developed tools for collecting and analyzing simulated mechanism data. Conclusions are drawn about the practical applicability of the SRPU modelling results and areas for further investigation.

  13. Matching watershed and otolith chemistry to establish natal origin of an endangered desert lake sucker

    Science.gov (United States)

    Strohm, Deanna D.; Budy, Phaedra; Crowl, Todd A.

    2017-01-01

    Stream habitat restoration and supplemental stocking of hatchery-reared fish have increasingly become key components of recovery plans for imperiled freshwater fish; however, determining when to discontinue stocking efforts, prioritizing restoration areas, and evaluating restoration success present a conservation challenge. In this study, we demonstrate that otolith microchemistry is an effective tool for establishing natal origin of the June Sucker Chasmistes liorus, an imperiled potamodromous fish. This approach allows us to determine whether a fish is of wild or hatchery origin in order to assess whether habitat restoration enhances recruitment and to further identify areas of critical habitat. Our specific objectives were to (1) quantify and characterize chemical variation among three main spawning tributaries; (2) understand the relationship between otolith microchemistry and tributary chemistry; and (3) develop and validate a classification model to identify stream origin using otolith microchemistry data. We quantified molar ratios of Sr:Ca, Ba:Ca, and Mg:Ca for water and otolith chemistry from three main tributaries to Utah Lake, Utah, during the summer of 2013. Water chemistry (loge transformed Sr:Ca, Ba:Ca, and Mg:Ca ratios) differed significantly across all three spawning tributaries. We determined that Ba:Ca and Sr:Ca ratios were the most important variables driving our classification models, and we observed a strong linear relationship between water and otolith values for Sr:Ca and Ba:Ca but not for Mg:Ca. Classification models derived from otolith element : Ca signatures accurately sorted individuals to their experimental tributary of origin (classification tree: 89% accuracy; random forest model: 91% accuracy) and determined wild versus hatchery origin with 100% accuracy. Overall, this study aids in evaluating the effectiveness of restoration, tracking progress toward recovery, and prioritizing future restoration plans for fishes of conservation
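The study's classification tree and random forest models are not reproduced here; as a toy illustration of the general idea of assigning a fish to a natal tributary from log-transformed element:Ca otolith signatures, a nearest-centroid classifier is sketched below. The tributary names and all numeric signatures are hypothetical placeholders, not the study's data:

```python
from math import dist

# Hypothetical training signatures: (loge Sr:Ca, loge Ba:Ca) per tributary.
# All names and numbers are invented for illustration.
TRAINING = {
    "Provo River": [(-5.1, -9.0), (-5.0, -9.2), (-5.2, -8.9)],
    "Spanish Fork": [(-4.2, -8.1), (-4.3, -8.0), (-4.1, -8.2)],
    "Hobble Creek": [(-6.0, -7.5), (-5.9, -7.6), (-6.1, -7.4)],
}

def centroid(points):
    """Component-wise mean of a list of equal-length tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

CENTROIDS = {trib: centroid(pts) for trib, pts in TRAINING.items()}

def classify(signature):
    """Assign an otolith signature to the tributary with the nearest centroid."""
    return min(CENTROIDS, key=lambda trib: dist(signature, CENTROIDS[trib]))
```

A real implementation would use a tree ensemble and cross-validation, as the study did; the nearest-centroid rule is only the simplest stand-in for "sort individuals to their tributary of origin."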

  14. Study of the mechanical metallic coating properties on the steel used on sucker rods; Estudo das propriedades mecanicas de revetimentos mtalicos aplicados sobre o aco utilizado em hastes de bombeio

    Energy Technology Data Exchange (ETDEWEB)

    Santos, A.O.; Cavalcanti, E.B.; Cruz, M.C.P.; Souza, L.P. [Universidade Tiradentes, Aracaju, SE (Brazil). Inst. de Tecnologia e Pesquisa. Lab. de Energia e Materiais; Araujo, P.M.M. [Universidade Federal de Sergipe (UFS), Sao Cristovao, SE (Brazil). Nucleo de Engenharia Mecanica], e-mail: paubaumma@yahoo.com.br

    2008-07-01

The mechanical properties, specifically hardness and modulus of elasticity, of coatings of NiCr 80/20, NiCr 20/80, 95 MXC, aluminum, and copper applied to the carbon steel used in sucker rods were studied. Because sucker rods are subjected to tensile stresses, it is important that the applied coatings not be more rigid than the carbon steel; otherwise, deformation problems arise in the material applied to the rods. It was observed that the coatings are less rigid than the carbon steel used in sucker rods, presenting smaller moduli of elasticity than the rods themselves. This property is of interest because sucker rods are subjected to tensile and compressive stresses when operating in the oil well. With regard to hardness, it was verified that the copper and aluminum coatings are less hard than the carbon steel and are therefore more susceptible to degradation by abrasive wear. (author)

  15. Distribution, Health, and Development of Larval and Juvenile Lost River and Shortnose Suckers in the Williamson River Delta Restoration Project and Upper Klamath Lake, Oregon: 2008 Annual Data Summary

    Science.gov (United States)

    Burdick, Summer M.; Ottinger, Christopher; Brown, Daniel T.; VanderKooi, Scott P.; Robertson, Laura; Iwanowicz, Deborah

    2009-01-01

    Federally endangered Lost River sucker Deltistes luxatus and shortnose sucker Chasmistes brevirostris were once abundant throughout their range but populations have declined; they have been extirpated from several lakes, and may no longer reproduce in others. Poor recruitment into the adult spawning populations is one of several reasons cited for the decline and lack of recovery of these species, and may be the consequence of high mortality during juvenile life stages. High larval and juvenile sucker mortality may be exacerbated by an insufficient quantity of suitable rearing habitat. Within Upper Klamath Lake, a lack of marshes also may allow larval suckers to be swept from suitable rearing areas downstream into the seasonally anoxic waters of the Keno Reservoir. The Nature Conservancy (TNC) flooded about 3,600 acres to the north of the Williamson River mouth (Tulana Unit) in October 2007, and about 1,400 acres to the south and east of the Williamson River mouth (Goose Bay Unit) a year later, to retain larval suckers in Upper Klamath Lake, create nursery habitat for suckers, and improve water quality. In collaboration with TNC, the Bureau of Reclamation, and Oregon State University, we began a long-term collaborative research and monitoring program in 2008 to assess the effects of the Williamson River Delta restoration on the early life-history stages of Lost River and shortnose suckers. Our approach includes two equally important aspects. One component is to describe habitat use and colonization processes by larval and juvenile suckers and non-sucker fish species. The second is to evaluate the effects of the restored habitat on the health and condition of juvenile suckers. This report contains a summary of the first year of data collected as a part of this monitoring effort.

  16. Cost cutting by adjusting crank counterbalanced sucker rod pumping units; Einstellung der Gegengewichte von Tiefenpumpenantrieben mit dynamischem Momentausgleich

    Energy Technology Data Exchange (ETDEWEB)

    Huber, M. [Montanuniv. Leoben (Austria). Inst. fuer Foerdertechnik und Konstruktionslehre; Kessler, F. [Montanuniv. Leoben (Austria). Inst. fuer Foerdertechnik und Konstruktionslehre

    1995-04-01

Oil production by means of sucker rod pumping units is the oldest and most frequently used method. Adjusting the counterweights to reduce the torque at the gearbox is still a significant problem. Depending on the procedure, counterbalancing produces various costs due to the manhours required. Because the commonly used procedures are very time-consuming, precise positioning of the counterweights has often been neglected in the past. This, however, carries the risk of much higher costs as a result of gearbox damage. Providing a simple and time-saving method of adjusting the counterweights is therefore of greatest importance to all users. In this article a PC program is presented that is easy to handle and needs no expensive hardware. Compared to other methods, the program described here reduces the costs of correct counterweight adjustment by around 60% on average. Finally, very simple steps to reduce costs while running sucker rod pumping units are explained. (orig.) [German original, translated] Adjusting the counterweights is among the most cost-intensive tasks on conventional pumping units. The measurement-based methods currently in most frequent use, such as the motor-current method or the rotational-speed method, are very time-consuming. The computational procedures generally place high demands on personnel and on the hardware used. This article presents a PC program that is particularly easy to operate and requires no expensive equipment. Compared with the measurement-based methods, use of this program reduces the cost of a counterweight adjustment by around 60% on average. Finally, simple measures for reducing costs in the operation of pumping units that are easily implemented in practice are presented. (orig.)

  17. Host specificity of Lepeophtheirus crassus (Wilson and Bere) (Copepoda: Caligidae) parasitic on the marlin sucker Remora osteochir (Cuvier) in the Atlantic Ocean.

    Science.gov (United States)

    Ho, Ju-shey; Collete, Bruce B; Madinabeitia, Ione

    2006-10-01

Three species of remoras--Remora brachyptera (Lowe), Remora osteochir (Cuvier), and Remora remora (Linnaeus)--were collected from 4 species of billfishes--Istiophorus platypterus (Shaw), Makaira nigricans Lacepède, Tetrapturus albidus Poey, and Tetrapturus pfluegeri Robins and de Sylva--on board the Japanese long-liner Shoyo Maru during her 2002 cruise across the Atlantic. However, only the marlin sucker (R. osteochir) was found to carry a parasitic copepod, Lepeophtheirus crassus (Wilson and Bere, 1936). Although 12 species of parasitic copepods have been reported from billfishes in the world's oceans, none of them is L. crassus. Thus, L. crassus is considered a parasite specific to the marlin sucker.

  18. Predicting the impact of a northern pike (Esox lucius) invasion on endangered June sucker (Chasmistes liorus) and sport fishes in Utah Lake, UT

    OpenAIRE

    Reynolds, Jamie

    2017-01-01

    Invasive species introductions are associated with negative economic and environmental impacts, including reductions in native species populations. Successful invasive species populations often grow rapidly and a new food web equilibrium is established. Invasive, predatory northern pike (Esox lucius; hereafter pike) were detected in 2010 in Utah Lake, UT, a highly-degraded ecosystem home to the endemic, endangered June sucker (Chasmistes liorus). Here we test whether pike predation could hind...

  19. Like a glove: do the dimensions of male adanal suckers and tritonymphal female docking papillae correlate in the Proctophyllodidae (Astigmata: Analgoidea)?

    OpenAIRE

    Byers , K.A.; Proctor , H.C.

    2014-01-01

Precopulatory guarding of tritonymphal females by adult males is common in feather mites (Acari: Astigmata). Within the Proctophyllodidae (Astigmata: Analgoidea), some genera possess morphological features in both sexes that have been suggested to enhance male attachment. One such structure in tritonymphal females is the development of a pair of fleshy lobe-like docking papillae, while males possess a pair of ventral adanal suckers that are proposed to fit over top of ...

  20. Particle-tracking investigation of the retention of sucker larvae emerging from spawning grounds in Upper Klamath Lake, Oregon

    Science.gov (United States)

    Wood, Tamara M.; Wherry, Susan A.; Simon, David C.; Markle, Douglas F.

    2014-01-01

    This study had two objectives: (1) to use the results of an individual-based particle-tracking model of larval sucker dispersal through the Williamson River delta and Upper Klamath Lake, Oregon, to interpret field data collected throughout Upper Klamath and Agency Lakes, and (2) to use the model to investigate the retention of sucker larvae in the system as a function of Williamson River flow, wind, and lake elevation. This is a follow-up study to work reported in Wood and others (2014) in which the hydrodynamic model of Upper Klamath Lake was combined with an individual-based, particle-tracking model of larval fish entering the lake from spawning areas in the Williamson River. In the previous study, the performance of the model was evaluated through comparison with field data comprising larval sucker distribution collected in 2009 by The Nature Conservancy, Oregon State University (OSU), and the U.S. Geological Survey, primarily from the (at that time) recently reconnected Williamson River Delta and along the eastern shoreline of Upper Klamath Lake, surrounding the old river mouth. The previous study demonstrated that the validation of the model with field data was moderately successful and that the model was useful for describing the broad patterns of larval dispersal from the river, at least in the areas surrounding the river channel immediately downstream of the spawning areas and along the shoreline where larvae enter the lake. In this study, field data collected by OSU throughout the main body of Upper Klamath Lake, and not just around the Williamson River Delta, were compared to model simulation results. Because the field data were collected throughout the lake, it was necessary to include in the simulations larvae spawned at eastern shoreline springs that were not included in the earlier studies. A complicating factor was that the OSU collected data throughout the main body of the lake in 2011 and 2012, after the end of several years of larval drift

  1. Bioaccumulation of the pharmaceutical 17α-ethinylestradiol in shorthead redhorse suckers (Moxostoma macrolepidotum) from the St. Clair River, Canada

    International Nuclear Information System (INIS)

    Al-Ansari, Ahmed M.; Saleem, Ammar; Kimpe, Linda E.; Sherry, Jim P.; McMaster, Mark E.; Trudeau, Vance L.; Blais, Jules M.

    2010-01-01

    17α-ethinylestradiol (EE2), a synthetic estrogen prescribed as a contraceptive, was measured in shorthead redhorse suckers (Moxostoma macrolepidotum) collected near a wastewater treatment plant (WWTP) in the St. Clair River (Ontario, Canada). We detected EE2 in 50% of the fish samples caught near the WWTP (Stag Island), averaging 1.6 ± 0.6 ng/g (wet weight) in males and 1.43 ± 0.96 ng/g in females. No EE2 was detected in samples from the reference site (Port Lambton), 26 km downstream of the Stag Island site. Only males from Stag Island showed vitellogenin (VTG) induction, suggesting the Corunna WWTP effluent as a likely source of environmental estrogen. EE2 concentrations were correlated with total body lipid content (R2 = 0.512, p < 0.05) and with δ15N (R2 = 0.436, p < 0.05, n = 10), suggesting higher EE2 exposures in carnivores. Our data support the hypothesis of EE2 bioaccumulation in wild fish. - Ethinylestradiol accumulation in wild fish.
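    The correlations reported above (e.g., R2 = 0.512 between EE2 and lipid content) are coefficients of determination from simple linear fits. A minimal sketch of that computation, using made-up lipid/EE2 pairs rather than the study's data:

```python
def r_squared(x, y):
    """Coefficient of determination for a simple linear fit y ~ a + b*x,
    computed as the squared Pearson correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    return (sxy ** 2) / (sxx * syy)

# hypothetical lipid content (%) paired with EE2 (ng/g wet weight)
lipid = [2.1, 3.4, 4.0, 5.2, 6.8]
ee2 = [0.8, 1.2, 1.5, 1.9, 2.6]
print(f"R^2 = {r_squared(lipid, ee2):.3f}")
```

    A high R2, as in this toy example, indicates that lipid content explains most of the variation in tissue EE2, which is consistent with the lipophilic-partitioning interpretation in the abstract.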

  2. Toxicity of inorganic contaminants, individually and in environmental mixtures, to three endangered fishes (Colorado squawfish, bonytail, and razorback sucker)

    Science.gov (United States)

    Buhl, Kevin J.; Hamilton, S.J.

    1996-01-01

    Two life stages of three federally-listed endangered fishes, Colorado squawfish (Ptychocheilus lucius), bonytail (Gila elegans), and razorback sucker (Xyrauchen texanus) were exposed to copper, selenate, selenite, and zinc individually, and to mixtures of nine inorganics in a reconstituted water that simulated the water quality of the middle Green River, Utah. The mixtures simulated environmental ratios of arsenate, boron, copper, molybdenum, selenate, selenite, uranium, vanadium, and zinc in two tributaries, Ashley Creek and Stewart Lake outlet, of the middle Green River. The rank order of toxicity of the individual inorganics, from most to least toxic, was: copper > zinc > selenite > selenate. Colorado squawfish larvae were more sensitive to all four inorganics and the two mixtures than the juveniles, whereas there was no consistent response between the two life stages for the other two species. There was no consistent difference in sensitivity to the inorganics among the three endangered fishes. Both mixtures exhibited either additive or greater than additive toxicity to these fishes. The primary toxic components in the mixtures, based on toxic units, were copper and zinc. Acute toxicity values were compared to measured environmental concentrations in the two tributaries to derive margins of uncertainty. Margins of uncertainty were low for both mixtures (9–22 for the Stewart Lake outlet mixture, and 12–32 for the Ashley Creek mixture), indicating that mixtures of inorganics derived from irrigation activities may pose a hazard to endangered fishes in the Green River.
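    The mixture results above are naturally expressed in toxic units (a component's environmental concentration divided by its LC50, summed across the mixture; under concentration addition a sum near or above 1 flags an acutely hazardous mixture) and margins of uncertainty (an acute toxicity value divided by the measured environmental concentration). A minimal sketch with hypothetical concentrations and LC50s, not values from the study:

```python
def toxic_units(concentrations, lc50s):
    """Sum of toxic units for a mixture: TU_i = C_i / LC50_i.
    Under concentration addition, a sum >= 1 suggests the mixture is
    expected to be acutely toxic."""
    return sum(c / lc50 for c, lc50 in zip(concentrations, lc50s))

def margin_of_uncertainty(acute_value, env_concentration):
    """Ratio of an acute toxicity value to the measured environmental
    concentration; small margins indicate potential hazard."""
    return acute_value / env_concentration

# hypothetical environmental concentrations and LC50s (ug/L)
env = [12.0, 150.0, 40.0]       # copper, zinc, selenite
lc50 = [30.0, 500.0, 4000.0]
tu = toxic_units(env, lc50)
mou = margin_of_uncertainty(300.0, 15.0)  # hypothetical mixture values
print(f"mixture toxic units: {tu:.2f}")
print(f"margin of uncertainty: {mou:.0f}")
```

    In this toy mixture, copper and zinc dominate the toxic-unit sum, mirroring the abstract's finding that those two metals were the primary toxic components.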

  3. Assessing genetic diversity of wild and hatchery samples of the Chinese sucker (Myxocyprinus asiaticus) by the mitochondrial DNA control region.

    Science.gov (United States)

    Wu, Jiayun; Wu, Bo; Hou, Feixia; Chen, Yongbai; Li, Chong; Song, Zhaobin

    2016-01-01

    To restore the natural populations of Chinese sucker (Myxocyprinus asiaticus), a hatchery release program has been underway for nearly 10 years. Using DNA sequences of the mitochondrial control region, we assessed the genetic diversity and genetic structure among samples collected from three sites of the wild population as well as from three hatcheries. The haplotype diversity of the wild samples (h = 0.899-0.975) was significantly higher than that of the hatchery ones (h = 0.296-0.666), but the nucleotide diversity was almost identical between them (π = 0.0170-0.0280). Relatively high gene flow was detected between the hatchery and wild samples. Analysis of effective population size indicated that M. asiaticus living in the Yangtze River has been expanding following a bottleneck in the recent past. Our results suggest the hatchery release programs for M. asiaticus have not reduced the genetic diversity, but have influenced the genetic structure of the species in the upper Yangtze River.
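    The diversity statistics reported above can be computed directly from aligned control-region sequences: haplotype diversity h is Nei's gene diversity, and nucleotide diversity π is the mean pairwise proportion of differing sites. A sketch using toy 5-bp haplotypes, not the study's sequences:

```python
from collections import Counter
from itertools import combinations

def haplotype_diversity(haplotypes):
    """Nei's haplotype (gene) diversity: h = n/(n-1) * (1 - sum p_i^2)."""
    n = len(haplotypes)
    counts = Counter(haplotypes)
    return n / (n - 1) * (1.0 - sum((c / n) ** 2 for c in counts.values()))

def nucleotide_diversity(seqs):
    """pi: mean pairwise proportion of differing sites (equal-length
    aligned sequences)."""
    pairs = list(combinations(seqs, 2))
    diffs = sum(sum(a != b for a, b in zip(s, t)) / len(s)
                for s, t in pairs)
    return diffs / len(pairs)

# toy aligned control-region haplotypes from a small sample
sample = ["AACGT", "AACGT", "AATGT", "GACGT", "AACGA"]
print(f"h  = {haplotype_diversity(sample):.3f}")
print(f"pi = {nucleotide_diversity(sample):.4f}")
```

    The contrast described in the abstract (hatchery h much lower than wild h while π stays similar) arises when hatchery broodstock carry few distinct haplotypes that are nonetheless divergent from one another.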

  4. Threats, conservation strategies, and prognosis for suckers (Catostomidae) in North America: insights from regional case studies of a diverse family of non-game fishes

    Science.gov (United States)

    Cooke, Steven J.; Bunt, Christopher M.; Hamilton, Steven J.; Jennings, Cecil A.; Pearson, Micheal P.; Cooperman, Michael S.; Markle, Douglas F.

    2005-01-01

    Catostomid fishes are a diverse family of 76+ freshwater species distributed across North America in many different habitats. This group faces a variety of impacts and conservation issues that are somewhat unique relative to more economically valuable and heavily managed fish species. Here, we present a brief series of case studies to highlight the threats facing catostomids in different regions, such as migration barriers, flow regulation, environmental contamination, habitat degradation, exploitation, and impacts from introduced (non-native) species. Collectively, the case studies reveal that individual species usually are not threatened by a single, isolated factor; instead, species generally face numerous stressors that threaten multiple stages of their life history. Several factors have retarded sucker conservation, including the widespread inability of field workers to distinguish some species, a lack of basic knowledge of natural history and life-history ecology, and the misconception that suckers are tolerant of degraded conditions and of little social or ecological value. Without a specific constituent group lobbying for the conservation of non-game fishes, all such species, including members of the catostomid family, will continue to face serious risks because of neglect, ignorance, and misunderstanding. We suggest that conservation strategies incorporate research and education/outreach components. Other strategies that would be effective for protecting suckers include freshwater protected areas for critical habitat, restoration of degraded habitat, and design of catostomid-friendly fish bypass facilities. We believe that the plight of the catostomids is representative of the threats facing many other non-game freshwater fishes with diverse life-history strategies globally.

  5. Transforming growth factor-β1 expression in endangered age-0 shortnose suckers (Chasmistes brevirostris) from Upper Klamath Lake, OR relative to histopathology, meristic, spatial, and temporal data.

    Science.gov (United States)

    Ottinger, Christopher A; Densmore, Christine L; Robertson, Laura S; Iwanowicz, Deborah D; VanderKooi, Scott P

    2016-02-01

    During July-September of 2008, 2009, and 2010, endangered age-0 juvenile shortnose suckers were sampled from Upper Klamath Lake, OR in a health evaluation that included measurement of transforming growth factor-beta (TGF-β) expression in the spleen in combination with a histopathology assessment. This analysis was performed to determine whether the expression of this immunoregulator could be used as a component of a larger health evaluation intended to identify potential risk factors that may help explain why very few of these fish survive to age-1. Potential associations between TGF-β1 expression, histopathological findings, and meristic, temporal, and spatial data were evaluated using analysis of variance. In this analysis, the absence or presence of opercular deformity and hepatic cell necrosis were identified as significant factors accounting for the variance in TGF-β1 expression observed in age-0 shortnose suckers (n = 122, squared multiple R = 0.989). Location of sample collection and the absence or presence of anchor worms (Lernaea spp.) were identified as significant cofactors. The actual mechanisms involved in these relationships have yet to be determined. The strength of our findings, however, supports the concept of using TGF-β1 expression as part of a broader fish health assessment and suggests the potential for using additional immunologic measures in future studies. Specifically, our results indicate that measuring TGF-β1 expression in age-0 shortnose sucker health assessments can facilitate the identification of disease risks associated with the documented lack of recruitment into the adult population. Published by Elsevier Ltd.

  6. Replacement of fish oil with soybean oil in diets for juvenile Chinese sucker (Myxocyprinus asiaticus): effects on liver lipid peroxidation and biochemical composition.

    Science.gov (United States)

    Yu, Deng-Hang; Chang, Jia-Zhi; Dong, Gui-Fang; Liu, Jun

    2017-10-01

    This study was designed to evaluate the effect of the replacement of fish oil (FO) by soybean oil (SO) on growth performance, liver lipid peroxidation, and biochemical composition in juvenile Chinese sucker, Myxocyprinus asiaticus. Fish (13.7 ± 0.2 g) in triplicate groups were fed five experimental diets in which 0% (FO, control), 40% (SO40), 60% (SO60), 80% (SO80), and 100% (SO100) of the FO was replaced by SO. The body weight gain of fish fed the SO40, SO60, or SO80 diet was similar to that of the FO group, but the diet with 100% soybean oil as the dietary lipid significantly reduced fish growth (P < 0.05). Fatty acid composition changed in the liver of fish fed diets that contained SO: eicosapentaenoic acid (EPA), docosahexaenoic acid (DHA), and the n-3/n-6 ratio were significantly reduced by the inclusion of dietary SO (P < 0.05). However, the diet containing 100% SO as the dietary lipid could reduce growth performance. Thus, we recommend that 40-80% SO can be used as dietary lipid to replace FO for juvenile Chinese sucker.

  7. Status and trends of adult Lost River (Deltistes luxatus) and shortnose (Chasmistes brevirostris) sucker populations in Upper Klamath Lake, Oregon, 2015

    Science.gov (United States)

    Hewitt, David A.; Janney, Eric C.; Hayes, Brian S.; Harris, Alta C.

    2017-07-21

    Executive Summary. Data from a long-term capture-recapture program were used to assess the status and dynamics of populations of two long-lived, federally endangered catostomids in Upper Klamath Lake, Oregon. Lost River suckers (LRS; Deltistes luxatus) and shortnose suckers (SNS; Chasmistes brevirostris) have been captured and tagged with passive integrated transponder (PIT) tags during their spawning migrations in each year since 1995. In addition, beginning in 2005, individuals that had been previously PIT-tagged were re-encountered on remote underwater antennas deployed throughout sucker spawning areas. Captures and remote encounters during the spawning season in spring 2015 were incorporated into capture-recapture analyses of population dynamics. Cormack-Jolly-Seber (CJS) open population capture-recapture models were used to estimate annual survival probabilities, and a reverse-time analog of the CJS model was used to estimate recruitment of new individuals into the spawning populations. In addition, data on the size composition of captured fish were examined to provide corroborating evidence of recruitment. Separate analyses were done for each species and also for each subpopulation of LRS. Shortnose suckers and one subpopulation of LRS migrate into tributary rivers to spawn, whereas the other LRS subpopulation spawns at groundwater upwelling areas along the eastern shoreline of the lake. Characteristics of the spawning migrations in 2015, such as the effects of temperature on the timing of the migrations, were similar to past years. Capture-recapture analyses for the LRS subpopulation that spawns at the shoreline areas included encounter histories for 13,617 individuals, and analyses for the subpopulation that spawns in the rivers included 39,321 encounter histories. With a few exceptions, the survival of males and females in both subpopulations was high (greater than or equal to 0.86) between 1999 and 2013. 
Survival was notably lower for males from the rivers
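    Under a Cormack-Jolly-Seber (CJS) model like the one used above, each PIT-tag encounter history has a probability determined by apparent survival (phi) and detection probability (p), and fitting maximizes the product of these probabilities over all tagged fish. A sketch of the encounter-history probability for constant phi and p; the actual analysis lets these parameters vary by year, sex, and subpopulation, so this is only the core bookkeeping:

```python
def cjs_history_probability(history, phi, p):
    """Probability of a capture history (string of 0/1; the first entry
    is the initial capture) under a CJS model with constant apparent
    survival phi and detection probability p, conditional on first
    capture.  The animal is known alive through its last detection;
    afterwards it may have died or simply gone undetected."""
    last = history.rfind("1")
    prob = 1.0
    # known-alive period: survive each interval, detected or not
    for occ in range(1, last + 1):
        prob *= phi * (p if history[occ] == "1" else (1 - p))

    # after the last detection: sum over death/undetected fates
    def chi(occ):
        if occ == len(history) - 1:
            return 1.0
        return (1 - phi) + phi * (1 - p) * chi(occ + 1)

    return prob * chi(last)

# e.g. tagged in year 1, missed in year 2, re-encountered in year 3
print(f"P(101) = {cjs_history_probability('101', phi=0.9, p=0.6):.4f}")
```

    The reverse-time analog mentioned in the abstract applies the same machinery to histories read backwards, which turns the survival parameter into a seniority parameter and yields recruitment estimates.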

  8. Large-scale data analytics

    CERN Document Server

    Gkoulalas-Divanis, Aris

    2014-01-01

    Provides cutting-edge research in large-scale data analytics from diverse scientific areas Surveys varied subject areas and reports on individual results of research in the field Shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field

  9. Status and trends of adult Lost River (Deltistes luxatus) and shortnose (Chasmistes brevirostris) sucker populations in Upper Klamath Lake, Oregon, 2017

    Science.gov (United States)

    Hewitt, David A.; Janney, Eric C.; Hayes, Brian S.; Harris, Alta C.

    2018-04-24

    Executive Summary. Data from a long-term capture-recapture program were used to assess the status and dynamics of populations of two long-lived, federally endangered catostomids in Upper Klamath Lake, Oregon. Lost River suckers (LRS; Deltistes luxatus) and shortnose suckers (SNS; Chasmistes brevirostris) have been captured and tagged with passive integrated transponder (PIT) tags during their spawning migrations in each year since 1995. In addition, beginning in 2005, individuals that had been previously PIT-tagged were re-encountered on remote underwater antennas deployed throughout sucker spawning areas. Captures and remote encounters during the spawning season in spring 2016 were incorporated into capture-recapture analyses of population dynamics. Cormack-Jolly-Seber (CJS) open population capture-recapture models were used to estimate annual survival probabilities, and a reverse-time analog of the CJS model was used to estimate recruitment of new individuals into the spawning populations. In addition, data on the size composition of captured fish were examined to provide corroborating evidence of recruitment. Model estimates of survival and recruitment were used to derive estimates of changes in population size over time and to determine the status of the populations through 2015. Separate analyses were done for each species and also for each subpopulation of LRS. Shortnose suckers and one subpopulation of LRS migrate into tributary rivers to spawn, whereas the other LRS subpopulation spawns at groundwater upwelling areas along the eastern shoreline of the lake. Capture-recapture analyses indicated that with a few exceptions, the survival of males and females in both Lost River sucker subpopulations was high (greater than 0.88) from 1999 to 2015. Survival was notably lower for males from the river in 2000, 2006, and 2012, and for the shoreline areas in 2002. From 2001 to 2015, the abundance of males in the lakeshore spawning subpopulation decreased by at least 64

  10. Demographics and run timing of adult Lost River (Deltistes luxatus) and short nose (Chasmistes brevirostris) suckers in Upper Klamath Lake, Oregon, 2012

    Science.gov (United States)

    Hewitt, David A.; Janney, Eric C.; Hayes, Brian S.; Harris, Alta C.

    2014-01-01

    Data from a long-term capture-recapture program were used to assess the status and dynamics of populations of two long-lived, federally endangered catostomids in Upper Klamath Lake, Oregon. Lost River suckers (Deltistes luxatus) and shortnose suckers (Chasmistes brevirostris) have been captured and tagged with passive integrated transponder (PIT) tags during their spawning migrations in each year since 1995. In addition, beginning in 2005, individuals that had been previously PIT-tagged were re-encountered on remote underwater antennas deployed throughout sucker spawning areas. Captures and remote encounters during spring 2012 were used to describe the spawning migrations in that year and also were incorporated into capture-recapture analyses of population dynamics. Cormack-Jolly-Seber (CJS) open population capture-recapture models were used to estimate annual survival probabilities, and a reverse-time analog of the CJS model was used to estimate recruitment of new individuals into the spawning populations. In addition, data on the size composition of captured fish were examined to provide corroborating evidence of recruitment. Model estimates of survival and recruitment were used to derive estimates of changes in population size over time and to determine the status of the populations in 2011. Separate analyses were conducted for each species and also for each subpopulation of Lost River suckers (LRS). Shortnose suckers (SNS) and one subpopulation of LRS migrate into tributary rivers to spawn, whereas the other LRS subpopulation spawns at groundwater upwelling areas along the eastern shoreline of the lake. In 2012, we captured, tagged, and released 749 LRS at four lakeshore spawning areas and recaptured an additional 969 individuals that had been tagged in previous years. Across all four areas, the remote antennas detected 6,578 individual LRS during the spawning season. Spawning activity peaked in April and most individuals were encountered at Cinder Flats and

  11. Suckers for Science.

    Science.gov (United States)

    Haugen, Heidi Helene

    2001-01-01

    Introduces an inquiry-based program on leeches that features five components: (1) engagement; (2) exploration; (3) explanation; (4) evaluation; and (5) extension/elaboration. Investigates the anatomy and environmental conditions of leeches. (YDS)

  12. Effect of weaning stress, housing system and probiotics supplementation on cortisol, thyroid activity and productive performance of sucker camel calves

    International Nuclear Information System (INIS)

    Abdel-Fattah, M.S.; Hashem, A.L.S.; Azamel, A.A.; Farghaly, H.A.M.

    2011-01-01

    The weaning process is associated with potential psychological, nutritional, and physiological stressors on both the dam and her young, and these stressors often fall hardest on the young. Ten sucker camel calves were weaned using a calf-dam-on, suckling-off weaning system (calves were kept with their dams at all times during the weaning process but prevented from suckling) under two housing systems: 6 calves in a group housing system (G) and 4 calves in an individual housing system (I). Half of the calves in each housing system were supplemented with probiotics (treated, P), while the other half was not (control, C). This study was carried out at Maryout Research Station of the Desert Research Center, 35 km southwest of Alexandria, Egypt. Calves were weaned at 280 days of age with an initial live body weight (LBW) of 236.76±0.224 kg. The study lasted 35 days, divided into five weeks: one pre-weaning week followed by a four-week post-weaning period (calves under treatment). Calves and their dams were used to estimate the effects of weaning stress, housing system (group, G vs. individual, I), and probiotics supplementation on the productive performance and cortisol (COR) and thyroid hormone (T3 and T4) concentrations of camel calves (Camelus dromedarius). No measurements were made on the dams. The results indicated that, regardless of housing system and probiotics supplementation, weaning stress reduced LBW by 0.89% at d 7 post-weaning. Concerning the housing system effect, group-housed calves and individually housed calves lost 0.36 and 1.43% of their body weights, respectively, on d 7 post-weaning, and they recovered their weaning weight (d 0) by d 14 of the post-weaning period (2.45 and 0.57%). 
    Neither group housing nor probiotics supplementation prevented the weight loss caused by weaning stress during the first

  13. Large-scale grid management

    International Nuclear Information System (INIS)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-01-01

    The network companies in the Norwegian electricity industry now have to establish a large-scale network management, a concept essentially characterized by (1) broader focus (Broad Band, Multi Utility,...) and (2) bigger units with large networks and more customers. Research done by SINTEF Energy Research shows so far that the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and out sourcing. The article is part of a planned series

  14. Concentrations of cadmium, Cobalt, Lead, Nickel, and Zinc in Blood and Fillets of Northern Hog Sucker (Hypentelium nigricans) from streams contaminated by lead-Zinc mining: Implications for monitoring

    Science.gov (United States)

    Schmitt, C.J.; Brumbaugh, W.G.; May, T.W.

    2009-01-01

    Lead (Pb) and other metals can accumulate in northern hog sucker (Hypentelium nigricans) and other suckers (Catostomidae), which are harvested in large numbers from Ozark streams by recreational fishers. Suckers are also important in the diets of piscivorous wildlife and fishes. Suckers from streams contaminated by historic Pb-zinc (Zn) mining in southeastern Missouri are presently identified in a consumption advisory because of Pb concentrations. We evaluated blood sampling as a potentially nonlethal alternative to fillet sampling for Pb and other metals in northern hog sucker. Scaled, skin-on, bone-in "fillet" and blood samples were obtained from northern hog suckers (n = 75) collected at nine sites representing a wide range of conditions relative to Pb-Zn mining in southeastern Missouri. All samples were analyzed for cadmium (Cd), cobalt (Co), Pb, nickel (Ni), and Zn. Fillets were also analyzed for calcium as an indicator of the amount of bone, skin, and mucus included in the samples. Pb, Cd, Co, and Ni concentrations were typically higher in blood than in fillets, but Zn concentrations were similar in both sample types. Concentrations of all metals except Zn were typically higher at sites located downstream from active and historic Pb-Zn mines and related facilities than at nonmining sites. Blood concentrations of Pb, Cd, and Co were highly correlated with corresponding fillet concentrations; log-log linear regressions between concentrations in the two sample types explained 94% of the variation for Pb, 73-83% of the variation for Co, and 61% of the variation for Cd. In contrast, the relations for Ni and Zn explained comparatively little of the variation. Fillet Pb and calcium concentrations were correlated (r = 0.83), but only in the 12 fish from the most contaminated site; concentrations were not significantly correlated across all sites. Conversely, fillet Cd and calcium were correlated across the range of sites (r = 0.78), and the inclusion of calcium in the fillet-to-blood relation explained an
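    The blood-to-fillet relations above are log-log linear regressions, i.e., least-squares fits of log10(fillet concentration) on log10(blood concentration). A sketch of that fit with hypothetical paired Pb values, not the study's data:

```python
import math

def loglog_fit(blood, fillet):
    """Least-squares fit of log10(fillet) = a + b*log10(blood), the form
    of the blood-to-fillet relations described above (synthetic data)."""
    x = [math.log10(v) for v in blood]
    y = [math.log10(v) for v in fillet]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# hypothetical paired Pb concentrations (blood, fillet), ug/g
blood = [0.05, 0.1, 0.4, 1.0, 2.5]
fillet = [0.01, 0.02, 0.07, 0.2, 0.5]
a, b = loglog_fit(blood, fillet)
print(f"log10(fillet) = {a:.2f} + {b:.2f} * log10(blood)")
```

    Once such a relation is fitted, a nonlethal blood sample can be converted to a predicted fillet concentration for comparison against consumption-advisory thresholds, which is the monitoring use case the abstract motivates.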

  15. Effect of weaning stress, housing system and probiotics supplementation on cortisol, thyroid activity and productive performance of sucker camel calves

    Energy Technology Data Exchange (ETDEWEB)

    Abdel-Fattah, M. S.; Hashem, A. L.S.; Azamel, A. A. [Department of Animal Physiology, Animal and Poultry Production Division, Desert Research Center, Cairo (Egypt); Farghaly, H. A.M., [Biological Applications Department, Radioisotopes Applications Division, Nuclear Research Centre, Atomic Energy Authority, P.O. 13759, Inshas (Egypt)

    2011-07-01

    The weaning process is associated with potential psychological, nutritional, and physiological stressors on both the dam and her young, and these stressors often fall hardest on the young. Ten sucker camel calves were weaned using a calf-dam-on, suckling-off weaning system (calves were kept with their dams at all times during the weaning process but prevented from suckling) under two housing systems: 6 calves in a group housing system (G) and 4 calves in an individual housing system (I). Half of the calves in each housing system were supplemented with probiotics (treated, P), while the other half was not (control, C). This study was carried out at Maryout Research Station of the Desert Research Center, 35 km southwest of Alexandria, Egypt. Calves were weaned at 280 days of age with an initial live body weight (LBW) of 236.76±0.224 kg. The study lasted 35 days, divided into five weeks: one pre-weaning week followed by a four-week post-weaning period (calves under treatment). Calves and their dams were used to estimate the effects of weaning stress, housing system (group, G vs. individual, I), and probiotics supplementation on the productive performance and cortisol (COR) and thyroid hormone (T3 and T4) concentrations of camel calves (Camelus dromedarius). No measurements were made on the dams. The results indicated that, regardless of housing system and probiotics supplementation, weaning stress reduced LBW by 0.89% at d7 post-weaning. Concerning the housing system effect, group-housed calves and individually housed calves lost 0.36 and 1.43% of their body weights, respectively, on d7 post-weaning, and they recovered their weaning weight (d0) by d14 of the post-weaning period (2.45 and 0.57%). Neither group housing nor probiotics supplementation prevented the weight loss caused by

  16. Using Digital 3D Scanning to Create “Artifictions” of the Passenger Pigeon and Harelip Sucker, Two Extinct Species in Eastern North America: The Future Examines the Past

    Directory of Open Access Journals (Sweden)

    Bruce L. Manzano

    2015-12-01

    The Virtual Curation Laboratory at Virginia Commonwealth University created 3D representations of digital morphological models, termed "artifictions," of several bone elements from two extinct animals, the passenger pigeon (Ectopistes migratorius Linnaeus, Columbidae) and the harelip sucker (Moxostoma lacerum Jordan and Brayton, Catostomidae). Procuring recent comparative reference skeletons of these species is extremely difficult. The creation of artifictions, 3D printed replicas of skeletal remains, aims to help researchers become familiar with the bones of the harelip sucker and passenger pigeon to facilitate morphological identification of remains of these species within archaeological assemblages. Here, we discuss the two species, the techniques used to create digital topological models of individual skeletal elements, and the obstacles encountered regarding 3D printed artifictions in zooarchaeology.

  17. Study for prevention of sucker rods failures though NiCr coating; Estudo para prevencao de falhas de hastes de bombeio de petroleo atraves de aplicacao de revestimento NiCr

    Energy Technology Data Exchange (ETDEWEB)

    Bezerra, Brunno S.L. [PETROBRAS, Rio de Janeiro, RJ (Brazil); Araujo, Paulo M.M. [Universidade Federal de Sergipe (UFS), Aracaju, SE (Brazil); Figueiredo, Renan T.; Cavalcanti, Eliane B. [Universidade Tiradentes, Aracaju, SE (Brazil)

    2008-07-01

    The use of common materials such as carbon steel in sucker rods, motivated by their low cost, in mature oil wells located in the states of Sergipe, Alagoas, Bahia, and Rio Grande do Norte, where the rods are subjected to combined tractive-compressive-abrasive loads in an aggressive environment (oil production in the presence of water, carbon dioxide, hydrogen sulfide, salinity, etc.), leads to drastic degradation of the material and even rupture. Substituting common materials with more failure-resistant ones is therefore limited by high cost. A much cheaper alternative is to modify the surface of the common materials used in subsurface equipment by applying a protective coating, in order to ensure the system's performance, durability, and economic viability. In the present work, the use of thermally sprayed NiCr coatings on sucker rods was studied using three thermal spray processes: flame spray, arc spray, and HVOF (high-velocity oxy-fuel). (author)

  18. Feeding Activity, Rate of Consumption, Daily Ration and Prey Selection of Major Predators in John Day Reservoir, 1985: Annual Report.

    Energy Technology Data Exchange (ETDEWEB)

    Palmer, Douglas E.; United States. Bonneville Power Administration; U.S. Fish and Wildlife Service; National Fishery Research Center (U.S.)

    1986-10-01

    This report summarizes activities in 1985 to determine the extent of predation on juvenile salmonids in John Day Reservoir. To estimate consumption of juvenile salmonids, we used the composition of the natural diet of predators together with rates of gastric evacuation determined in the laboratory. Salmonids were the single most important food item for northern squawfish (Ptychocheilus oregonensis) at McNary tailrace during all sampling periods and at John Day forebay during July. Salmonids accounted for 11.6% of the diet of walleye (Stizostedion vitreum vitreum) in 1985, about twice that found in previous years. Salmonids contributed little to the smallmouth bass (Micropterus dolomieui) diet but comprised about 25% of the diet of channel catfish (Ictalurus punctatus). Composition of prey taxa in beach seine catches in 1985 was similar to 1983 and 1984, with chinook salmon (Oncorhynchus tschawytscha), northern squawfish, largescale sucker (Catostomus macrocheilus), and sand roller (Percopsis transmontana) dominating the catch at main channel stations and crappies (Pomoxis spp.) and largescale sucker dominating at backwater stations. Preliminary results of beach seine efficiency studies suggest that seine efficiency varied significantly among prey species and between substrate types in 1985. Results of digestion rate experiments indicate that gastric evacuation in northern squawfish can be predicted from water temperature, prey weight, predator weight, and time. 19 refs., 19 figs., 13 tabs.

  19. The role of water on the structure and mechanical properties of a thermoplastic natural block co-polymer from squid sucker ring teeth.

    Science.gov (United States)

    Rieu, Clément; Bertinetti, Luca; Schuetz, Roman; Salinas-Zavala, Cesar Ca; Weaver, James C; Fratzl, Peter; Miserez, Ali; Masic, Admir

    2016-09-02

    Hard biological polymers exhibiting truly thermoplastic behavior that can maintain their structural properties after processing are extremely rare and highly desirable for use in advanced technological applications such as 3D printing, biodegradable plastics, and robust composites. One exception is the thermoplastic proteins that comprise the sucker ring teeth (SRT) of the Humboldt jumbo squid (Dosidicus gigas). In this work, we explore the mechanical properties of reconstituted SRT proteins and demonstrate that the material can be re-shaped by simple processing in water at relatively low temperature (below 100 °C). The post-processed material maintains a high modulus in the GPa range, in both the dry and the wet states. When transitioning from low to high humidity, the material properties change from brittle to ductile with an increase in plastic deformation, where water acts as a plasticizer. Using synchrotron X-ray scattering tools, we found that water mostly influences the nanoscale structure, whereas at the molecular level the protein structure remains largely unaffected. Furthermore, through simultaneous in situ X-ray scattering and mechanical tests, we show that the supramolecular network of the reconstituted SRT material exhibits progressive alignment along the strain direction, attributed to chain alignment of the amorphous domains of SRT proteins. The high modulus in both dry and wet states, combined with their efficient thermal processing characteristics, makes SRT proteins promising substitutes for applications traditionally reserved for petroleum-based thermoplastics.

  20. Large-scale solar heat

    Energy Technology Data Exchange (ETDEWEB)

    Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Dept. of Engineering Physics and Mathematics

    1998-12-31

    In this project a large domestic solar heating system was built and a solar district heating system was modelled and simulated. Objectives were to improve the performance and reduce costs of a large-scale solar heating system. As a result of the project the benefit/cost ratio can be increased by 40 % through dimensioning and optimising the system at the designing stage. (orig.)

  1. Fish resource data from the Snare River, Northwest Territories

    International Nuclear Information System (INIS)

    Jessop, E.F.; Chang-Kue, K.T.J.; MacDonald, G.

    1994-01-01

    An extensive fish sampling and tagging program was conducted on the Snare River, Northwest Territories, in order to collect baseline data on the fish populations in sections of the river altered by hydroelectric projects. Fish populations were sampled from May to July 1977 in five sections of the river that were influenced by development of hydropower at three dams currently on line; 530 tagged fish were also released. The biweekly catch composition in experimental gill nets for each study area and the catch per gill net mesh size are presented for walleye (Stizostedion vitreum), lake trout (Salvelinus namaycush), lake whitefish (Coregonus clupeaformis), lake cisco (Coregonus artedi), northern pike (Esox lucius), white sucker (Catostomus commersoni), and longnose sucker (Catostomus catostomus). Age-specific data on length, weight, age, sex, and maturity are also included. 7 refs., 12 figs., 42 tabs

  2. Preliminary Study of the Effect of the Proposed Long Lake Valley Project Operation on the Transport of Larval Suckers in Upper Klamath Lake, Oregon

    Science.gov (United States)

    Wood, Tamara M.

    2009-01-01

    A hydrodynamic model of Upper Klamath and Agency Lakes, Oregon, was used to explore the effects of the operation of proposed offstream storage at Long Lake Valley on transport of larval suckers through the Upper Klamath and Agency Lakes system during May and June, when larval fish leave spawning sites in the Williamson River and springs along the eastern shoreline and become entrained in lake currents. A range in hydrologic conditions was considered, including historically high and low outflows and inflows, lake elevations, and the operation of pumps between Upper Klamath Lake and storage in Long Lake Valley. Two wind-forcing scenarios were considered: one dominated by moderate prevailing winds and another dominated by a strong reversal of winds from the prevailing direction. On the basis of 24 model simulations that used all combinations of hydrology and wind forcing, as well as With Project and No Action scenarios, it was determined that the biggest effect of project operations on larval transport was the result of alterations in project management of the elevation in Upper Klamath Lake and the outflow at the Link River and A Canal, rather than the result of pumping operations. This was because, during the spring time period of interest, the amount of water pumped between Upper Klamath Lake and Long Lake Valley was generally small. The dominant effect was that an increase in lake elevation would result in more larvae in the Williamson River delta and in Agency Lake, an effect that was enhanced under conditions of wind reversal. A decrease in lake elevation accompanied by an increase in the outflow at the Link River had the opposite effect on larval concentration and residence time.

  3. Japanese large-scale interferometers

    CERN Document Server

    Kuroda, K; Miyoki, S; Ishizuka, H; Taylor, C T; Yamamoto, K; Miyakawa, O; Fujimoto, M K; Kawamura, S; Takahashi, R; Yamazaki, T; Arai, K; Tatsumi, D; Ueda, A; Fukushima, M; Sato, S; Shintomi, T; Yamamoto, A; Suzuki, T; Saitô, Y; Haruyama, T; Sato, N; Higashi, Y; Uchiyama, T; Tomaru, T; Tsubono, K; Ando, M; Takamori, A; Numata, K; Ueda, K I; Yoneda, H; Nakagawa, K; Musha, M; Mio, N; Moriwaki, S; Somiya, K; Araya, A; Kanda, N; Telada, S; Sasaki, M; Tagoshi, H; Nakamura, T; Tanaka, T; Ohara, K

    2002-01-01

    The objective of the TAMA 300 interferometer was to develop advanced technologies for kilometre scale interferometers and to observe gravitational wave events in nearby galaxies. It was designed as a power-recycled Fabry-Perot-Michelson interferometer and was intended as a step towards a final interferometer in Japan. The present successful status of TAMA is presented. TAMA forms a basis for LCGT (large-scale cryogenic gravitational wave telescope), a 3 km scale cryogenic interferometer to be built in the Kamioka mine in Japan, implementing cryogenic mirror techniques. The plan of LCGT is schematically described along with its associated R and D.

  4. Ethics of large-scale change

    OpenAIRE

    Arler, Finn

    2006-01-01

      The subject of this paper is long-term large-scale changes in human society. Some very significant examples of large-scale change are presented: human population growth, human appropriation of land and primary production, the human use of fossil fuels, and climate change. The question is posed, which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, the neoclassical economists' approach, and finally the so-called Concentric Circle Theories approach...

  5. Large-scale river regulation

    International Nuclear Information System (INIS)

    Petts, G.

    1994-01-01

    Recent concern over human impacts on the environment has tended to focus on climatic change, desertification, destruction of tropical rain forests, and pollution. Yet large-scale water projects such as dams, reservoirs, and inter-basin transfers are among the most dramatic and extensive ways in which our environment has been, and continues to be, transformed by human action. Water running to the sea is perceived as a lost resource, floods are viewed as major hazards, and wetlands are seen as wastelands. River regulation, involving the redistribution of water in time and space, is a key concept in socio-economic development. To achieve water and food security, to develop drylands, and to prevent desertification and drought are primary aims for many countries. A second key concept is ecological sustainability. Yet the ecology of rivers and their floodplains is dependent on the natural hydrological regime, and its related biochemical and geomorphological dynamics. (Author)

  6. Automation of stroke-change and balancing procedures in sucker-rod pumping units; Automacao dos procedimentos de mudanca de curso e balanceamento em unidades de bombeio mecanico

    Energy Technology Data Exchange (ETDEWEB)

    Quintaes, Filipe de O.; Souza, Leonardo F.; Salazar, Andres O.; Maitelli, Andre L.; Fontes, Francisco de Assis O. [Universidade Federal do Rio Grande do Norte (UFRN), Natal, RN (Brazil). Dept. de Engenharia Eletrica; Karbage, Elias; Costa, Rutacio de O. [PETROBRAS, Rio de Janeiro, RJ (Brazil)

    2008-07-01

    The main advantages of the sucker-rod pumping method of artificial lift are its operational simplicity, the fact that it can be used under normal conditions until the end of a well's productive life, the ability to modify pump capacity as the behavior of the well changes, and a generally lower production cost over the well's productive life. Even so, these units periodically need maintenance procedures. Two important procedures in sucker-rod pumping are the balancing adjustment of the unit and the stroke change; however, these procedures present great operational problems. This work presents an automation project for carrying out the balancing and stroke-change procedures. Among the advantages of this new system are: stopping the unit at any position without damage to the gear reducer; reduced energy consumption in direct transmission; the possibility of integration with a supervisory system; reduced operation and maintenance costs; and easier maintenance. With this automation, production stops will be reduced, with the possibility of eliminating technician intervention. In this way, the risks and the operation and maintenance costs will be reduced. (author)

  7. Automating large-scale reactor systems

    International Nuclear Information System (INIS)

    Kisner, R.A.

    1985-01-01

    This paper conveys a philosophy for developing automated large-scale control systems that behave in an integrated, intelligent, flexible manner. Methods for operating large-scale systems under varying degrees of equipment degradation are discussed, and a design approach that separates the effort into phases is suggested. 5 refs., 1 fig

  8. Decentralized Large-Scale Power Balancing

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad

    2013-01-01

    The problem is formulated as a centralized large-scale optimization problem but is then decomposed into smaller subproblems that are solved locally by each unit connected to an aggregator. For large-scale systems the method is faster than solving the full problem and can be distributed to include an arbitrary...
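The decomposition described above, a central problem split into local subproblems coordinated through an aggregator, can be sketched as a simple dual-decomposition (price-coordination) loop. This is a toy illustration of the general idea, not the paper's controller; all names and numbers are made up.

```python
def balance(costs, demand, bounds, iters=200, step=0.05):
    """Dual-decomposition sketch: the aggregator broadcasts a price,
    each unit independently minimises its own quadratic cost minus
    revenue, and the price is nudged until total production meets
    demand.  Each unit's problem is solved locally, so the loop
    parallelises naturally across units."""
    price = 0.0
    for _ in range(iters):
        # Local problem per unit: min 0.5*a*p**2 - price*p  ->  p = price/a,
        # clipped to the unit's capacity bounds.
        p = [min(max(price / a, lo), hi) for a, (lo, hi) in zip(costs, bounds)]
        price += step * (demand - sum(p))   # raise price while supply is short
    return p

prod = balance(costs=[1.0, 2.0], demand=3.0, bounds=[(0, 5), (0, 5)])
assert abs(sum(prod) - 3.0) < 1e-2
assert prod[0] > prod[1]   # the cheaper unit carries more of the load
```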

  9. Reviving large-scale projects

    International Nuclear Information System (INIS)

    Desiront, A.

    2003-01-01

    For the past decade, most large-scale hydro development projects in northern Quebec have been put on hold due to land disputes with First Nations. Hydroelectric projects have recently been revived following an agreement signed with Aboriginal communities in the province who recognized the need to find new sources of revenue for future generations. Many Cree are working on the project to harness the waters of the Eastmain River located in the middle of their territory. The work involves building an 890 foot long dam, 30 dikes enclosing a 603 square-km reservoir, a spillway, and a power house with 3 generating units with a total capacity of 480 MW of power for start-up in 2007. The project will require the use of 2,400 workers in total. The Cree Construction and Development Company is working on relations between Quebec's 14,000 Crees and the James Bay Energy Corporation, the subsidiary of Hydro-Quebec which is developing the project. Approximately 10 per cent of the $735-million project has been designated for the environmental component. Inspectors ensure that the project complies fully with environmental protection guidelines. Total development costs for Eastmain-1 are in the order of $2 billion of which $735 million will cover work on site and the remainder will cover generating units, transportation and financial charges. Under the treaty known as the Peace of the Braves, signed in February 2002, the Quebec government and Hydro-Quebec will pay the Cree $70 million annually for 50 years for the right to exploit hydro, mining and forest resources within their territory. The project comes at a time when electricity export volumes to the New England states are down due to growth in Quebec's domestic demand. Hydropower is a renewable and non-polluting source of energy that is one of the most acceptable forms of energy where the Kyoto Protocol is concerned. 
It was emphasized that large-scale hydro-electric projects are needed to provide sufficient energy to meet both

  10. Large-scale pool fires

    Directory of Open Access Journals (Sweden)

    Steinhaus Thomas

    2007-01-01

    A review of research into the burning behavior of large pool fires and fuel spill fires is presented. The features which distinguish such fires from smaller pool fires are mainly associated with the fire dynamics at low source Froude numbers and the radiative interaction with the fire source. In hydrocarbon fires, higher soot levels at increased diameters result in radiation blockage effects around the perimeter of large fire plumes; this yields lower emissive powers and a drastic reduction in the radiative loss fraction. Whilst there are simplifying factors with these phenomena, arising from the fact that soot yield can saturate, there are other complications deriving from the intermittency of the behavior, with luminous regions of efficient combustion appearing randomly on the outer surface of the fire according to the turbulent fluctuations in the fire plume. Knowledge of the fluid flow instabilities, which lead to the formation of large eddies, is also key to understanding the behavior of large-scale fires. Here modeling tools can be effectively exploited in order to investigate the fluid flow phenomena, including RANS- and LES-based computational fluid dynamics codes. The latter are well suited to representation of the turbulent motions, but a number of challenges remain with their practical application. Massively parallel computational resources are likely to be necessary in order to adequately address the complex coupled phenomena to the level of detail that is necessary.

  11. Large-scale computing with Quantum Espresso

    International Nuclear Information System (INIS)

    Giannozzi, P.; Cavazzoni, C.

    2009-01-01

    This paper gives a short introduction to Quantum Espresso: a distribution of software for atomistic simulations in condensed-matter physics, chemical physics, materials science, and to its usage in large-scale parallel computing.

  12. Large-scale regions of antimatter

    International Nuclear Information System (INIS)

    Grobov, A. V.; Rubin, S. G.

    2015-01-01

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  13. Large-scale regions of antimatter

    Energy Technology Data Exchange (ETDEWEB)

    Grobov, A. V., E-mail: alexey.grobov@gmail.com; Rubin, S. G., E-mail: sgrubin@mephi.ru [National Research Nuclear University MEPhI (Russian Federation)

    2015-07-15

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  14. Large-scale grid management; Storskala Nettforvaltning

    Energy Technology Data Exchange (ETDEWEB)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-07-01

    The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) a broader focus (broadband, multi-utility, ...) and (2) bigger units with large networks and more customers. Research by SINTEF Energy Research so far shows that the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series.

  15. Political consultation and large-scale research

    International Nuclear Information System (INIS)

    Bechmann, G.; Folkers, H.

    1977-01-01

    Large-scale research and policy consulting occupy an intermediary position between sociological sub-systems. While large-scale research coordinates science, policy, and production, policy consulting coordinates the scientific and political spheres. In this position, large-scale research and policy consulting lack the institutional guarantees and rational background that are characteristic of their sociological environment. Large-scale research can neither deal with the production of innovative goods under consideration of profitability, nor can it hope for full recognition by the basis-oriented scientific community. Policy consulting neither holds the political system's competence to make decisions, nor can it be judged successfully by the critical standards of the established social sciences, at least as far as the present situation is concerned. This intermediary position of large-scale research and policy consulting supports, on three points, the thesis that this is a new form of the institutionalization of science: (1) external control, (2) the organizational form, and (3) the theoretical conception of large-scale research and policy consulting. (orig.) [de]

  16. Large-Scale Analysis of Art Proportions

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2014-01-01

    While literature often tries to impute mathematical constants into art, this large-scale study (11 databases of paintings and photos, around 200,000 items) shows a different truth. The analysis, consisting of the width/height proportions, shows a value that is rarely if ever one (square) and with majo...
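The core measurement in this study, the width/height proportion of each item, is easy to sketch as a binned distribution. A hypothetical helper (the study's actual databases and binning scheme are not reproduced here):

```python
from collections import Counter

def proportion_histogram(dims):
    """Bin the width/height ratio of each artwork to one decimal
    place and count how often each bin occurs.  `dims` is a list of
    (width, height) pairs; the binning is an illustrative choice."""
    counts = Counter(round(w / h, 1) for w, h in dims)
    return dict(sorted(counts.items()))

hist = proportion_histogram([(100, 80), (120, 100), (50, 50), (90, 120)])
# The square ratio 1.0 is just one bin among several:
assert 1.0 in hist and hist[1.0] == 1
```

Run over hundreds of thousands of items, such a histogram would show directly how rare the square proportion is.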

  17. Large-scale Motion of Solar Filaments

    Indian Academy of Sciences (India)

    tribpo

    Large-scale Motion of Solar Filaments. Pavel Ambrož, Astronomical Institute of the Acad. Sci. of the Czech Republic, CZ-25165 Ondřejov, The Czech Republic. e-mail: pambroz@asu.cas.cz. Alfred Schroll, Kanzelhöhe Solar Observatory of the University of Graz, A-9521 Treffen, Austria. e-mail: schroll@solobskh.ac.at.

  18. Sensitivity analysis for large-scale problems

    Science.gov (United States)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.
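The baseline against which efficient sensitivity techniques are measured can be illustrated on the smallest possible structural model, a single spring, where the analytic design derivative is available in closed form and a finite-difference estimate can be checked against it. A minimal sketch, not the paper's procedure:

```python
def displacement(k, f):
    """Static displacement of a single spring with stiffness k
    under load f, from the equilibrium equation k*u = f."""
    return f / k

def sensitivity_analytic(k, f):
    """Analytic design sensitivity du/dk = -f / k**2, obtained by
    differentiating k*u = f with respect to the design variable k."""
    return -f / k**2

def sensitivity_fd(k, f, h=1e-6):
    """Forward finite difference: the brute-force estimate that
    efficient sensitivity techniques aim to avoid re-computing
    once per design variable on large problems."""
    return (displacement(k + h, f) - displacement(k, f)) / h

k, f = 200.0, 50.0
assert abs(sensitivity_analytic(k, f) - sensitivity_fd(k, f)) < 1e-6
```

For framed structures the same idea applies with a stiffness matrix in place of the scalar `k`, which is where reanalysis techniques pay off.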

  19. Dissecting the large-scale galactic conformity

    Science.gov (United States)

    Seo, Seongu

    2018-01-01

    Galactic conformity is an observed phenomenon in which galaxies located in the same region have similar properties such as star formation rate, color, gas fraction, and so on. The conformity was first observed among galaxies within the same halos ("one-halo conformity"). The one-halo conformity can be readily explained by mutual interactions among galaxies within a halo. Recent observations, however, further witnessed a puzzling connection among galaxies with no direct interaction. In particular, galaxies located within a sphere of ~5 Mpc radius tend to show similarities, even though the galaxies do not share common halos with each other ("two-halo conformity" or "large-scale conformity"). Using a cosmological hydrodynamic simulation, Illustris, we investigate the physical origin of the two-halo conformity and put forward two scenarios. First, back-splash galaxies are likely responsible for the large-scale conformity. They have evolved into red galaxies due to ram-pressure stripping in a given galaxy cluster and happen to reside now within a ~5 Mpc sphere. Second, galaxies in the strong tidal field induced by large-scale structure also seem to give rise to the large-scale conformity. The strong tides suppress star formation in the galaxies. We discuss the importance of the large-scale conformity in the context of galaxy evolution.

  20. Ethics of large-scale change

    DEFF Research Database (Denmark)

    Arler, Finn

    2006-01-01

    The question is posed, which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, the neoclassical economists' approach, and finally the so-called Concentric Circle Theories approach...

  1. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications

  2. Large-scale perspective as a challenge

    NARCIS (Netherlands)

    Plomp, M.G.A.

    2012-01-01

    1. Scale forms a challenge for chain researchers: when exactly is something ‘large-scale’? What are the underlying factors (e.g. number of parties, data, objects in the chain, complexity) that determine this? It appears to be a continuum between small- and large-scale, where positioning on that

  3. Computing in Large-Scale Dynamic Systems

    NARCIS (Netherlands)

    Pruteanu, A.S.

    2013-01-01

    Software applications developed for large-scale systems have always been difficult to develop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topology discovery and data

  4. Large-Scale Outflows in Seyfert Galaxies

    Science.gov (United States)

    Colbert, E. J. M.; Baum, S. A.

    1995-12-01

    Highly collimated outflows extend out to Mpc scales in many radio-loud active galaxies. In Seyfert galaxies, which are radio-quiet, the outflows extend out to kpc scales and do not appear to be as highly collimated. In order to study the nature of large-scale (≳1 kpc) outflows in Seyferts, we have conducted optical, radio and X-ray surveys of a distance-limited sample of 22 edge-on Seyfert galaxies. Results of the optical emission-line imaging and spectroscopic survey imply that large-scale outflows are present in ≳1/4 of all Seyferts. The radio (VLA) and X-ray (ROSAT) surveys show that large-scale radio and X-ray emission is present at about the same frequency. Kinetic luminosities of the outflows in Seyferts are comparable to those in starburst-driven superwinds. Large-scale radio sources in Seyferts appear diffuse, but do not resemble the radio halos found in some edge-on starburst galaxies (e.g. M82). We discuss the feasibility of the outflows being powered by the active nucleus (e.g. a jet) or by a circumnuclear starburst.

  5. Large-scale structure of the Universe

    International Nuclear Information System (INIS)

    Doroshkevich, A.G.

    1978-01-01

    The problems discussed at the ''Large-scale Structure of the Universe'' symposium are considered at a popular level. Described are the cell structure of galaxy distribution in the Universe and the principles of mathematical modelling of galaxy distribution. Images of cell structures obtained after computer processing are given. Three hypotheses (vortical, entropic, and adiabatic), suggesting various processes of galaxy and galaxy cluster origin, are discussed; a considerable advantage of the adiabatic hypothesis is recognized. The relict radiation is considered as a method of directly studying the processes taking place in the Universe. The large-scale peculiarities and small-scale fluctuations of the relict radiation temperature enable one to estimate the perturbation properties at the pre-galaxy stage. The discussion of problems pertaining to the study of the hot gas contained in galaxy clusters, and of the interactions within galaxy clusters and with the inter-galaxy medium, is recognized to be a notable contribution to the development of theoretical and observational cosmology.

  6. Methods for Large-Scale Nonlinear Optimization.

    Science.gov (United States)

    1980-05-01

    STANFORD, CALIFORNIA 94305. METHODS FOR LARGE-SCALE NONLINEAR OPTIMIZATION by Philip E. Gill, Walter Murray, Michael A. Saunders, and Margaret H. Wright. ... A typical iteration can be partitioned so that ... where B is an m × m basis matrix. This partition effectively divides the variables into three classes. ... attention is given to the standard of the coding or the documentation. A much better way of obtaining mathematical software is from a software library.

  7. Large-scale Complex IT Systems

    OpenAIRE

    Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2011-01-01

    This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that identifies the major challen...

  8. Large-scale complex IT systems

    OpenAIRE

    Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2012-01-01

    12 pages, 2 figures This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that ident...

  9. Large-Scale Transit Signal Priority Implementation

    OpenAIRE

    Lee, Kevin S.; Lozner, Bailey

    2018-01-01

    In 2016, the District Department of Transportation (DDOT) deployed Transit Signal Priority (TSP) at 195 intersections in highly urbanized areas of Washington, DC. In collaboration with a broader regional implementation, and in partnership with the Washington Metropolitan Area Transit Authority (WMATA), DDOT set out to apply a systems engineering–driven process to identify, design, test, and accept a large-scale TSP system. This presentation will highlight project successes and lessons learned.

  10. Probes of large-scale structure in the Universe

    International Nuclear Information System (INIS)

    Suto, Yasushi; Gorski, K.; Juszkiewicz, R.; Silk, J.

    1988-01-01

    Recent progress in observational techniques has made it possible to confront quantitatively various models for the large-scale structure of the Universe with detailed observational data. We develop a general formalism to show that the gravitational instability theory for the origin of large-scale structure is now capable of critically confronting observational results on cosmic microwave background radiation angular anisotropies, large-scale bulk motions and large-scale clumpiness in the galaxy counts. (author)

  11. RESTRUCTURING OF THE LARGE-SCALE SPRINKLERS

    Directory of Open Access Journals (Sweden)

    Paweł Kozaczyk

    2016-09-01

    One of the best ways for agriculture to become independent of precipitation shortages is irrigation. In the 1970s and 1980s a number of large-scale sprinkler systems were built in Wielkopolska. At the end of the 1970s, 67 sprinkler systems with a total area of 6,400 ha were installed in the Poznan province; the average size of a system reached 95 ha. In 1989 there were 98 systems, covering more than 10,130 ha. The study was conducted on 7 large sprinkler systems with areas ranging from 230 to 520 hectares in 1986-1998. After the introduction of the market economy in the early 1990s and the ownership changes in agriculture, large-scale sprinklers underwent significant or total devastation. Land of the State Farms of the State Agricultural Property Agency was leased or sold, and the new owners used the existing sprinklers to a very small extent. This involved a change in crop structure and demand structure and an increase in operating costs; there was also a threefold increase in electricity prices. In practice, the operation of large-scale irrigation encountered all kinds of barriers, limitations of system solutions, supply difficulties, and high levels of equipment failure, none of which encouraged rational use of the available sprinklers. A survey of the local area was carried out to show the current status of the remaining irrigation infrastructure. The scheme adopted for the restructuring of Polish agriculture was not the best solution, causing massive destruction of the assets previously invested in the sprinkler systems.

  12. Optical interconnect for large-scale systems

    Science.gov (United States)

    Dress, William

    2013-02-01

    This paper presents a switchless, optical interconnect module that serves as a node in a network of identical distribution modules for large-scale systems. Thousands to millions of hosts or endpoints may be interconnected by a network of such modules, avoiding the need for multi-level switches. Several common network topologies are reviewed and their scaling properties assessed. The concept of message-flow routing is discussed in conjunction with the unique properties enabled by the optical distribution module where it is shown how top-down software control (global routing tables, spanning-tree algorithms) may be avoided.

  13. Adaptive visualization for large-scale graph

    International Nuclear Information System (INIS)

    Nakamura, Hiroko; Shinano, Yuji; Ohzahata, Satoshi

    2010-01-01

    We propose an adaptive visualization technique for representing a large-scale hierarchical dataset within a limited display space. A hierarchical dataset has nodes and links showing the parent-child relationships between the nodes. These nodes and links are described using graphics primitives. When the number of these primitives is large, it is difficult to recognize the structure of the hierarchical data because many primitives overlap within a limited region. To overcome this difficulty, we propose an adaptive visualization technique for hierarchical datasets. The proposed technique selects an appropriate graph style according to the nodal density in each area. (author)
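The selection step described above, choosing a graph style from the nodal density of each area, can be sketched as a simple threshold rule. The styles, threshold, and function name below are hypothetical illustrations, not the authors' actual scheme:

```python
def choose_style(num_nodes, area_px, dense_threshold=0.002):
    """Pick a rendering style from nodal density (nodes per pixel).
    A toy version of adaptive style selection: sparse areas get a
    classic node-link drawing, dense areas a space-filling layout
    that avoids overlapping link primitives."""
    density = num_nodes / area_px
    if density > dense_threshold:
        return "treemap"        # space-filling, no explicit links drawn
    return "node-link"          # boxes-and-arrows drawing

assert choose_style(50, 800 * 600) == "node-link"
assert choose_style(5000, 800 * 600) == "treemap"
```

Applying such a rule per screen region lets one view mix styles, which is the adaptive aspect of the technique.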

  14. Neutrinos and large-scale structure

    International Nuclear Information System (INIS)

    Eisenstein, Daniel J.

    2015-01-01

    I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments to measure the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we have now securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos.

  15. Neutrinos and large-scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Eisenstein, Daniel J. [Daniel J. Eisenstein, Harvard-Smithsonian Center for Astrophysics, 60 Garden St., MS #20, Cambridge, MA 02138 (United States)

    2015-07-15

    I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments to measure the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we now have securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos.

  16. Stabilization Algorithms for Large-Scale Problems

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg

    2006-01-01

    The focus of the project is on stabilization of large-scale inverse problems where structured models and iterative algorithms are necessary for computing approximate solutions. For this purpose, we study various iterative Krylov methods and their abilities to produce regularized solutions. Some......-curve. This heuristic is implemented as a part of a larger algorithm which is developed in collaboration with G. Rodriguez and P. C. Hansen. Last, but not least, a large part of the project has, in different ways, revolved around the object-oriented Matlab toolbox MOORe Tools developed by PhD Michael Jacobsen. New...

  17. Status: Large-scale subatmospheric cryogenic systems

    International Nuclear Information System (INIS)

    Peterson, T.

    1989-01-01

    In the late 1960's and early 1970's an interest in testing and operating RF cavities at 1.8K motivated the development and construction of four large (300 Watt) 1.8K refrigeration systems. In the past decade, development of successful superconducting RF cavities and interest in obtaining higher magnetic fields with the improved Niobium-Titanium superconductors has once again created interest in large-scale 1.8K refrigeration systems. The L'Air Liquide plant for Tore Supra is a recently commissioned 300 Watt 1.8K system which incorporates new technology, cold compressors, to obtain the low vapor pressure for low temperature cooling. CEBAF proposes to use cold compressors to obtain 5 kW at 2.0K. Magnetic refrigerators of 10 Watt capacity or higher at 1.8K are now being developed. The state of the art of large-scale refrigeration in the range under 4K will be reviewed. 28 refs., 4 figs., 7 tabs

  18. Automatic management software for large-scale cluster system

    International Nuclear Information System (INIS)

    Weng Yunjian; Chinese Academy of Sciences, Beijing; Sun Gongxing

    2007-01-01

    At present, large-scale cluster systems are difficult to manage: the administrator's workload is heavy, and much time is spent on the management and maintenance of the system. The nodes in a large-scale cluster system easily fall into disorder; with thousands of nodes housed in large rooms, administrators can easily confuse one machine with another. How can accurate management be carried out effectively on a large-scale cluster system? The article introduces ELFms in the large-scale cluster system, and furthermore proposes how to realize automatic management of the large-scale cluster system. (authors)

  19. Large-scale digitizer system, analog converters

    International Nuclear Information System (INIS)

    Althaus, R.F.; Lee, K.L.; Kirsten, F.A.; Wagner, L.J.

    1976-10-01

    Analog to digital converter circuits that are based on the sharing of common resources, including those which are critical to the linearity and stability of the individual channels, are described. Simplicity of circuit composition is valued over other more costly approaches. These are intended to be applied in a large-scale processing and digitizing system for use with high-energy physics detectors such as drift-chambers or phototube-scintillator arrays. Signal distribution techniques are of paramount importance in maintaining adequate signal-to-noise ratio. Noise in both amplitude and time-jitter senses is held sufficiently low so that conversions with 10-bit charge resolution and 12-bit time resolution are achieved

  20. Emerging large-scale solar heating applications

    International Nuclear Information System (INIS)

    Wong, W.P.; McClung, J.L.

    2009-01-01

    Currently the market for solar heating applications in Canada is dominated by outdoor swimming pool heating, make-up air pre-heating and domestic water heating in homes, commercial and institutional buildings. All of these involve relatively small systems, except for a few air pre-heating systems on very large buildings. Together these applications make up well over 90% of the solar thermal collectors installed in Canada during 2007. These three applications, along with the recent re-emergence of large-scale concentrated solar thermal for generating electricity, also dominate the world markets. This paper examines some emerging markets for large scale solar heating applications, with a focus on the Canadian climate and market. (author)

  1. Accelerating sustainability in large-scale facilities

    CERN Multimedia

    Marina Giampietro

    2011-01-01

    Scientific research centres and large-scale facilities are intrinsically energy intensive, but how can big science improve its energy management and eventually contribute to the environmental cause with new cleantech? CERN’s commitment to providing tangible answers to these questions was sealed in the first workshop on energy management for large scale scientific infrastructures held in Lund, Sweden, on the 13-14 October.   Participants at the energy management for large scale scientific infrastructures workshop. The workshop, co-organised with the European Spallation Source (ESS) and the European Association of National Research Facilities (ERF), tackled a recognised need for addressing energy issues in relation to science and technology policies. It brought together more than 150 representatives of Research Infrastructures (RIs) and energy experts from Europe and North America. “Without compromising our scientific projects, we can ...

  2. Emerging large-scale solar heating applications

    Energy Technology Data Exchange (ETDEWEB)

    Wong, W.P.; McClung, J.L. [Science Applications International Corporation (SAIC Canada), Ottawa, Ontario (Canada)

    2009-07-01

    Currently the market for solar heating applications in Canada is dominated by outdoor swimming pool heating, make-up air pre-heating and domestic water heating in homes, commercial and institutional buildings. All of these involve relatively small systems, except for a few air pre-heating systems on very large buildings. Together these applications make up well over 90% of the solar thermal collectors installed in Canada during 2007. These three applications, along with the recent re-emergence of large-scale concentrated solar thermal for generating electricity, also dominate the world markets. This paper examines some emerging markets for large scale solar heating applications, with a focus on the Canadian climate and market. (author)

  3. Large-Scale Astrophysical Visualization on Smartphones

    Science.gov (United States)

    Becciani, U.; Massimino, P.; Costa, A.; Gheller, C.; Grillo, A.; Krokos, M.; Petta, C.

    2011-07-01

    Nowadays digital sky surveys and long-duration, high-resolution numerical simulations using high performance computing and grid systems produce multidimensional astrophysical datasets in the order of several Petabytes. Sharing visualizations of such datasets within communities and collaborating research groups is of paramount importance for disseminating results and advancing astrophysical research. Moreover educational and public outreach programs can benefit greatly from novel ways of presenting these datasets by promoting understanding of complex astrophysical processes, e.g., formation of stars and galaxies. We have previously developed VisIVO Server, a grid-enabled platform for high-performance large-scale astrophysical visualization. This article reviews the latest developments on VisIVO Web, a custom designed web portal wrapped around VisIVO Server, then introduces VisIVO Smartphone, a gateway connecting VisIVO Web and data repositories for mobile astrophysical visualization. We discuss current work and summarize future developments.

  4. Large-scale sequential quadratic programming algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.

  5. Large-scale Agricultural Land Acquisitions in West Africa | IDRC ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    This project will examine large-scale agricultural land acquisitions in nine West African countries -Burkina Faso, Guinea-Bissau, Guinea, Benin, Mali, Togo, Senegal, Niger, and Côte d'Ivoire. ... They will use the results to increase public awareness and knowledge about the consequences of large-scale land acquisitions.

  6. Algorithm 896: LSA: Algorithms for Large-Scale Optimization

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2009-01-01

    Roč. 36, č. 3 (2009), 16-1-16-29 ISSN 0098-3500 R&D Projects: GA AV ČR IAA1030405; GA ČR GP201/06/P397 Institutional research plan: CEZ:AV0Z10300504 Keywords: algorithms * design * large-scale optimization * large-scale nonsmooth optimization * large-scale nonlinear least squares * large-scale nonlinear minimax * large-scale systems of nonlinear equations * sparse problems * partially separable problems * limited-memory methods * discrete Newton methods * quasi-Newton methods * primal interior-point methods Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.904, year: 2009

  7. Dipolar modulation of Large-Scale Structure

    Science.gov (United States)

    Yoon, Mijin

    For the last two decades, we have seen a rapid development of modern cosmology based on various observations such as the cosmic microwave background (CMB), type Ia supernovae, and baryonic acoustic oscillations (BAO). This observational evidence has led us to a great deal of consensus on the so-called LambdaCDM cosmological model and tight constraints on the cosmological parameters comprising the model. On the other hand, the advancement in cosmology relies on the cosmological principle: the universe is isotropic and homogeneous on large scales. Testing these fundamental assumptions is crucial and will soon become possible given the planned observations ahead. Dipolar modulation is the largest angular anisotropy of the sky, which is quantified by its direction and amplitude. We measure a huge dipolar modulation in the CMB, which mainly originated from our solar system's motion relative to the CMB rest frame. However, we have not yet acquired consistent measurements of dipolar modulations in large-scale structure (LSS), as they require large sky coverage and a number of well-identified objects. In this thesis, we explore measurement of dipolar modulation in number counts of LSS objects as a test of statistical isotropy. This thesis is based on two papers that were published in peer-reviewed journals. In Chapter 2 [Yoon et al., 2014], we measured a dipolar modulation in number counts of WISE matched with 2MASS sources. In Chapter 3 [Yoon & Huterer, 2015], we investigated requirements for detection of the kinematic dipole in future surveys.
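
As a toy illustration of the measurement described above (not the estimator used in the cited papers), a count-weighted mean direction over near-uniform sky pixels recovers an injected dipole in the noiseless limit, since for N_i = N̄(1 + A d·n̂_i) the weighted mean direction converges to (A/3) d:

```python
# Recover a dipolar modulation in number counts on near-uniform sky pixels.
import math

def fibonacci_sphere(n):
    """Near-uniform unit vectors on the sphere (Fibonacci lattice)."""
    ga = math.pi * (3.0 - math.sqrt(5.0))  # golden angle
    pts = []
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n
        r = math.sqrt(1.0 - z * z)
        pts.append((r * math.cos(ga * i), r * math.sin(ga * i), z))
    return pts

def estimate_dipole(counts, dirs):
    """Estimate the dipole vector A*d_hat from per-pixel number counts."""
    total = sum(counts)
    # The count-weighted mean direction equals (A/3) d for uniform pixels,
    # hence the factor of 3.
    return tuple(3.0 * sum(c * n[k] for c, n in zip(counts, dirs)) / total
                 for k in range(3))

# Inject a 10% dipole along z into noiseless mock counts and recover it.
dirs = fibonacci_sphere(4000)
counts = [1.0 + 0.1 * n[2] for n in dirs]
dipole = estimate_dipole(counts, dirs)
```

Real surveys must additionally deal with partial sky coverage, shot noise, and selection effects, which is what makes the LSS measurement hard.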

  8. Large-scale tides in general relativity

    Energy Technology Data Exchange (ETDEWEB)

    Ip, Hiu Yan; Schmidt, Fabian, E-mail: iphys@mpa-garching.mpg.de, E-mail: fabians@mpa-garching.mpg.de [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany)

    2017-02-01

    Density perturbations in cosmology, i.e. spherically symmetric adiabatic perturbations of a Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime, are locally exactly equivalent to a different FLRW solution, as long as their wavelength is much larger than the sound horizon of all fluid components. This fact is known as the 'separate universe' paradigm. However, no such relation is known for anisotropic adiabatic perturbations, which correspond to an FLRW spacetime with large-scale tidal fields. Here, we provide a closed, fully relativistic set of evolutionary equations for the nonlinear evolution of such modes, based on the conformal Fermi (CFC) frame. We show explicitly that the tidal effects are encoded by the Weyl tensor, and are hence entirely different from an anisotropic Bianchi I spacetime, where the anisotropy is sourced by the Ricci tensor. In order to close the system, certain higher derivative terms have to be dropped. We show that this approximation is equivalent to the local tidal approximation of Hui and Bertschinger [1]. We also show that this very simple set of equations matches the exact evolution of the density field at second order, but fails at third and higher order. This provides a useful, easy-to-use framework for computing the fully relativistic growth of structure at second order.

  9. Economically viable large-scale hydrogen liquefaction

    Science.gov (United States)

    Cardella, U.; Decker, L.; Klein, H.

    2017-02-01

    The liquid hydrogen demand, particularly driven by clean energy applications, will rise in the near future. As industrial large scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase by a multiple of today’s typical sizes. The main goal is to reduce the total cost of ownership for these plants by increasing energy efficiency with innovative and simple process designs, optimized in capital expenditure. New concepts must ensure a manageable plant complexity and flexible operability. In the phase of process development and selection, a dimensioning of key equipment for large scale liquefiers, such as turbines and compressors as well as heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview on the approach, challenges and preliminary results in the development of efficient as well as economically viable concepts for large-scale hydrogen liquefaction.

  10. Large-scale Intelligent Transportation Systems simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ewing, T.; Canfield, T.; Hannebutte, U.; Levine, D.; Tentner, A.

    1995-06-01

    A prototype computer system has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS) capable of running on massively parallel computers and distributed (networked) computer systems. The prototype includes the modelling of instrumented "smart" vehicles with in-vehicle navigation units capable of optimal route planning and Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (display position and attributes of instrumented vehicles), and can provide 2-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on ANL's IBM SP-X parallel computer system for large scale problems. A novel feature of our design is that vehicles will be represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like real vehicles. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.
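
A minimal sketch of that design idea: each vehicle agent plans its own route over TMC-published link times and re-plans when an advisory changes them. The network, node names, and travel times below are hypothetical:

```python
# Independent route selection per vehicle: Dijkstra over current link times.
import heapq

def shortest_route(links, src, dst):
    """Dijkstra over a dict {node: [(neighbor, travel_time), ...]}."""
    dist, prev, seen = {src: 0.0}, {}, set()
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, t in links.get(u, []):
            nd = d + t
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst           # walk predecessors back to the source
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Hypothetical network; a TMC advisory raises the time on link A->B, and a
# vehicle agent independently re-plans, as the autonomous processes would.
links = {"A": [("B", 2.0), ("C", 3.0)], "B": [("D", 2.0)], "C": [("D", 2.0)]}
before = shortest_route(links, "A", "D")   # via B (cost 4 vs 5)
links["A"][0] = ("B", 9.0)                 # congestion advisory from the TMC
after = shortest_route(links, "A", "D")    # re-routes via C (cost 5 vs 11)
```

In the MPP setting each such agent would be its own process reacting to streamed link-time updates rather than a function call.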

  11. GPU-based large-scale visualization

    KAUST Repository

    Hadwiger, Markus

    2013-11-19

    Recent advances in image and volume acquisition as well as computational advances in simulation have led to an explosion of the amount of data that must be visualized and analyzed. Modern techniques combine the parallel processing power of GPUs with out-of-core methods and data streaming to enable the interactive visualization of giga- and terabytes of image and volume data. A major enabler for interactivity is making both the computational and the visualization effort proportional to the amount of data that is actually visible on screen, decoupling it from the full data size. This leads to powerful display-aware multi-resolution techniques that enable the visualization of data of almost arbitrary size. The course consists of two major parts: An introductory part that progresses from fundamentals to modern techniques, and a more advanced part that discusses details of ray-guided volume rendering, novel data structures for display-aware visualization and processing, and the remote visualization of large online data collections. You will learn how to develop efficient GPU data structures and large-scale visualizations, implement out-of-core strategies and concepts such as virtual texturing that have only been employed recently, as well as how to use modern multi-resolution representations. These approaches reduce the GPU memory requirements of extremely large data to a working set size that fits into current GPUs. You will learn how to perform ray-casting of volume data of almost arbitrary size and how to render and process gigapixel images using scalable, display-aware techniques. We will describe custom virtual texturing architectures as well as recent hardware developments in this area. We will also describe client/server systems for distributed visualization, on-demand data processing and streaming, and remote visualization. We will describe implementations using OpenGL as well as CUDA, exploiting parallelism on GPUs combined with additional asynchronous
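
The display-aware principle above (work proportional to what is visible on screen, not to data size) reduces, in its simplest form, to choosing a mip level from the ratio of visible texels to screen pixels. This sketch assumes a power-of-two multi-resolution pyramid; the parameter values in the usage note are hypothetical:

```python
# Display-aware level-of-detail selection for a power-of-two mip pyramid.
import math

def lod_level(full_res, visible_fraction, screen_px, levels):
    """Pick the coarsest mip level that still supplies >= 1 texel per
    screen pixel for the visible portion of the data, so the working set
    scales with screen coverage rather than with the full data size."""
    visible_texels = full_res * visible_fraction
    if visible_texels <= screen_px:
        return 0  # finest level already fits the screen budget
    level = math.ceil(math.log2(visible_texels / screen_px))
    return min(level, levels - 1)  # clamp to the coarsest available level
```

For a 1M-texel-wide dataset viewed fully on a 2048-pixel-wide screen this picks a coarse level, while zooming in on 0.1% of it drops back to the finest level; the same calculation per tile drives virtual texturing residency.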

  12. A Novel Architecture of Large-scale Communication in IOT

    Science.gov (United States)

    Ma, Wubin; Deng, Su; Huang, Hongbin

    2018-03-01

    In recent years, many scholars have done a great deal of research on the development of the Internet of Things and networked physical systems. However, few have described in detail the large-scale communications architecture of the IOT. In fact, the non-uniform technology between IPv6 and access points has led to a lack of broad principles for large-scale communications architectures. Therefore, this paper presents the Uni-IPv6 Access and Information Exchange Method (UAIEM), a new architecture and algorithm that addresses large-scale communications in the IOT.

  13. Large-Scale Spacecraft Fire Safety Tests

    Science.gov (United States)

    Urban, David; Ruff, Gary A.; Ferkul, Paul V.; Olson, Sandra; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Rouvreau, Sebastien; Minster, Olivier; hide

    2014-01-01

    An international collaborative program is underway to address open issues in spacecraft fire safety. Because of limited access to long-term low-gravity conditions and the small volume generally allotted for these experiments, there have been relatively few experiments that directly study spacecraft fire safety under low-gravity conditions. Furthermore, none of these experiments have studied sample sizes and environment conditions typical of those expected in a spacecraft fire. The major constraint has been the size of the sample, with prior experiments limited to samples of the order of 10 cm in length and width or smaller. This lack of experimental data forces spacecraft designers to base their designs and safety precautions on 1-g understanding of flame spread, fire detection, and suppression. However, low-gravity combustion research has demonstrated substantial differences in flame behavior in low gravity. This, combined with the differences caused by the confined spacecraft environment, necessitates practical-scale spacecraft fire safety research to mitigate risks for future space missions. To address this issue, a large-scale spacecraft fire experiment is under development by NASA and an international team of investigators. This poster presents the objectives, status, and concept of this collaborative international project (Saffire). The project plan is to conduct fire safety experiments on three sequential flights of an unmanned ISS re-supply spacecraft (the Orbital Cygnus vehicle) after they have completed their delivery of cargo to the ISS and have begun their return journeys to Earth. On two flights (Saffire-1 and Saffire-3), the experiment will consist of a flame spread test involving a meter-scale sample ignited in the pressurized volume of the spacecraft and allowed to burn to completion while measurements are made. On one of the flights (Saffire-2), 9 smaller (5 x 30 cm) samples will be tested to evaluate NASA's material flammability screening tests

  14. Large-scale fuel cycle centres

    International Nuclear Information System (INIS)

    Smiley, S.H.; Black, K.M.

    1977-01-01

    The US Nuclear Regulatory Commission (NRC) has considered the nuclear energy centre concept for fuel cycle plants in the Nuclear Energy Centre Site Survey 1975 (NECSS-75) Rep. No. NUREG-0001, an important study mandated by the US Congress in the Energy Reorganization Act of 1974 which created the NRC. For this study, the NRC defined fuel cycle centres as consisting of fuel reprocessing and mixed-oxide fuel fabrication plants, and optional high-level waste and transuranic waste management facilities. A range of fuel cycle centre sizes corresponded to the fuel throughput of power plants with a total capacity of 50,000-300,000MW(e). The types of fuel cycle facilities located at the fuel cycle centre permit the assessment of the role of fuel cycle centres in enhancing the safeguard of strategic special nuclear materials - plutonium and mixed oxides. Siting fuel cycle centres presents a smaller problem than siting reactors. A single reprocessing plant of the scale projected for use in the USA (1500-2000t/a) can reprocess fuel from reactors producing 50,000-65,000MW(e). Only two or three fuel cycle centres of the upper limit size considered in the NECSS-75 would be required in the USA by the year 2000. The NECSS-75 fuel cycle centre evaluation showed that large-scale fuel cycle centres present no real technical siting difficulties from a radiological effluent and safety standpoint. Some construction economies may be achievable with fuel cycle centres, which offer opportunities to improve waste-management systems. Combined centres consisting of reactors and fuel reprocessing and mixed-oxide fuel fabrication plants were also studied in the NECSS. Such centres can eliminate shipment not only of Pu but also mixed-oxide fuel. Increased fuel cycle costs result from implementation of combined centres unless the fuel reprocessing plants are commercial-sized. Development of Pu-burning reactors could reduce any economic penalties of combined centres. 
The need for effective fissile

  15. Large-scale land transformations in Indonesia: The role of ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    ... enable timely responses to the impacts of large-scale land transformations in Central Kalimantan ... In partnership with UNESCO's Organization for Women in Science for the ... New funding opportunity for gender equality and climate change.

  16. Bottom-Up Accountability Initiatives and Large-Scale Land ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Corey Piccioni

    fuel/energy, climate, and finance has occurred and one of the most ... this wave of large-scale land acquisitions. In fact, esti- ... Environmental Rights Action/Friends of the Earth, Nigeria ... map the differentiated impacts (gender, ethnicity, ...

  17. Large-scale linear programs in planning and prediction.

    Science.gov (United States)

    2017-06-01

    Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...

  18. Bottom-Up Accountability Initiatives and Large-Scale Land ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    ... Security can help increase accountability for large-scale land acquisitions in ... to build decent economic livelihoods and participate meaningfully in decisions ... its 2017 call for proposals to establish Cyber Policy Centres in the Global South.

  19. Amplification of large-scale magnetic field in nonhelical magnetohydrodynamics

    KAUST Repository

    Kumar, Rohit

    2017-08-11

    It is typically assumed that the kinetic and magnetic helicities play a crucial role in the growth of large-scale dynamo. In this paper, we demonstrate that helicity is not essential for the amplification of large-scale magnetic field. For this purpose, we perform nonhelical magnetohydrodynamic (MHD) simulation, and show that the large-scale magnetic field can grow in nonhelical MHD when random external forcing is employed at scale 1/10 the box size. The energy fluxes and shell-to-shell transfer rates computed using the numerical data show that the large-scale magnetic energy grows due to the energy transfers from the velocity field at the forcing scales.

  20. Large-Scale 3D Printing: The Way Forward

    Science.gov (United States)

    Jassmi, Hamad Al; Najjar, Fady Al; Ismail Mourad, Abdel-Hamid

    2018-03-01

    Research on small-scale 3D printing has rapidly evolved, where numerous industrial products have been tested and successfully applied. Nonetheless, research on large-scale 3D printing, directed to large-scale applications such as construction and automotive manufacturing, still demands a great deal of effort. Large-scale 3D printing is considered an interdisciplinary topic and requires establishing a blended knowledge base from numerous research fields including structural engineering, materials science, mechatronics, software engineering, artificial intelligence and architectural engineering. This review article summarizes key topics of relevance to new research trends on large-scale 3D printing, particularly pertaining to (1) technological solutions of additive construction (i.e. the 3D printers themselves), (2) materials science challenges, and (3) new design opportunities.

  1. Bayesian hierarchical model for large-scale covariance matrix estimation.

    Science.gov (United States)

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
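
The hierarchical model itself is beyond a snippet, but the kind of regularization it provides can be illustrated with the simplest alternative: linear shrinkage of the sample covariance toward a scaled identity target. This is a stand-in for the Bayesian prior, not the authors' method; the shrinkage weight here is arbitrary rather than model-selected:

```python
# Linear shrinkage of a sample covariance toward (tr(S)/p) * I, the
# simplest cure for the overfitting/high-variance problem of large-scale
# covariance estimation: S_hat = (1 - lam) * S + lam * (tr(S)/p) * I.

def shrink_covariance(S, lam):
    """Shrink a p x p sample covariance S (list of lists) toward a scaled
    identity with weight lam in [0, 1]."""
    p = len(S)
    mu = sum(S[i][i] for i in range(p)) / p  # average sample variance
    return [[(1 - lam) * S[i][j] + (lam * mu if i == j else 0.0)
             for j in range(p)] for i in range(p)]
```

Off-diagonal entries shrink toward zero while the average variance is preserved, so a nearly singular sample covariance becomes well conditioned; hierarchical Bayesian estimators achieve a similar effect with data-driven, structured shrinkage.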

  2. Benefits of transactive memory systems in large-scale development

    OpenAIRE

    Aivars, Sablis

    2016-01-01

    Context. Large-scale software development projects are those consisting of a large number of teams, maybe even spread across multiple locations, and working on large and complex software tasks. That means that neither a team member individually nor an entire team holds all the knowledge about the software being developed and teams have to communicate and coordinate their knowledge. Therefore, teams and team members in large-scale software development projects must acquire and manage expertise...

  3. Capabilities of the Large-Scale Sediment Transport Facility

    Science.gov (United States)

    2016-04-01

    This technical note (ERDC/CHL CHETN-I-88, April 2016; approved for public release, distribution unlimited) describes the Large-Scale Sediment Transport Facility (LSTF) and recent upgrades to the measurement systems, including pump flow meters, sediment trap weigh tanks, and beach profiling lidar. The purpose of these upgrades was to increase... A detailed discussion of the original LSTF features and capabilities can be...

  4. Trends in large-scale testing of reactor structures

    International Nuclear Information System (INIS)

    Blejwas, T.E.

    2003-01-01

    Large-scale tests of reactor structures have been conducted at Sandia National Laboratories since the late 1970s. This paper describes a number of different large-scale impact tests, pressurization tests of models of containment structures, and thermal-pressure tests of models of reactor pressure vessels. The advantages of large-scale testing are evident, but cost, in particular, limits its use. As computer models have grown in size, e.g. in the number of degrees of freedom, the advent of computer graphics has made possible very realistic representations of results - results that may not accurately represent reality. A necessary condition for avoiding this pitfall is the validation of the analytical methods and underlying physical representations. Ironically, the immensely larger computer models sometimes increase the need for large-scale testing, because the modeling is applied to increasingly complex structural systems and/or more complex physical phenomena. Unfortunately, the cost of large-scale tests is a disadvantage that will likely severely limit similar testing in the future. International collaborations may provide the best mechanism for funding future programs with large-scale tests. (author)

  5. Large-scale assembly of colloidal particles

    Science.gov (United States)

    Yang, Hongta

    This study reports a simple, roll-to-roll compatible coating technology for producing three-dimensional highly ordered colloidal crystal-polymer composites, colloidal crystals, and macroporous polymer membranes. A vertically beveled doctor blade is utilized to shear-align silica microsphere-monomer suspensions to form large-area composites in a single step. The polymer matrix and the silica microspheres can be selectively removed to create colloidal crystals and self-standing macroporous polymer membranes. The thickness of the shear-aligned crystal is correlated with the viscosity of the colloidal suspension and the coating speed, and the correlations can be qualitatively explained by adapting the mechanisms developed for conventional doctor blade coating. Five important research topics related to the application of large-scale three-dimensional highly ordered macroporous films by doctor blade coating are covered in this study. The first topic describes the invention of large-area and low-cost color reflective displays, inspired by heat pipe technology. The self-standing macroporous polymer films exhibit brilliant colors which originate from the Bragg diffraction of visible light from the three-dimensional highly ordered air cavities. The colors can easily be changed by tuning the size of the air cavities to cover the whole visible spectrum. When the air cavities are filled with a solvent which has the same refractive index as that of the polymer, the macroporous polymer films become completely transparent due to the index matching. When the solvent trapped in the cavities is evaporated by in-situ heating, the sample color changes back to the brilliant color. This process is highly reversible and reproducible for thousands of cycles. The second topic reports the achievement of rapid and reversible vapor detection by using 3-D macroporous photonic crystals. Capillary condensation of a condensable vapor in the interconnected macropores leads to the...

  6. Growth and chlorophyll content of banana suckers as a function of pseudostem suppression and doses of nitrogen and boron

    Directory of Open Access Journals (Sweden)

    Walter Esfrain Pereira

    2010-12-01

    Full Text Available The objective of this experiment was to evaluate the influence of pseudostem suppression, with elimination of the apical bud of the rhizome, and of doses of nitrogen and boron on the production and growth of banana suckers. The experiment was carried out at the Centro de Formação de Tecnólogos, UFPB, State of Paraíba, in a randomized block design with four blocks and nine mother plants per experimental unit, two of them useful plants. The evaluated factors were doses of N (0 to 240 g/plant) and of B (0 to 2.2 g/plant), combined according to the Box central composite experimental matrix into nine combinations, arranged factorially with and without suppression of the pseudostem of the mother plant. The data were submitted to analysis of variance and regression. Suppression of the pseudostem of the main plant, with elimination of the apical bud of the rhizome, increased the number and growth of suckers, except for rhizome diameter, which decreased. Leaf contents of total chlorophyll and of B also decreased in the suckers of pruned plants. For the production of 'Pacovan' banana suckers, suppression of the pseudostem of the main plant, with elimination of the apical bud of the rhizome, and application of N are recommended.

  7. Phylogenetic distribution of large-scale genome patchiness

    Directory of Open Access Journals (Sweden)

    Hackenberg Michael

    2008-04-01

    Full Text Available Abstract. Background: The phylogenetic distribution of large-scale genome structure (i.e. mosaic compositional patchiness) has been explored mainly by analytical ultracentrifugation of bulk DNA. However, with the availability of large, good-quality chromosome sequences, and the recently developed computational methods to directly analyze patchiness on the genome sequence, an evolutionary comparative analysis can be carried out at the sequence level. Results: The local variations in the scaling exponent of the Detrended Fluctuation Analysis are used here to analyze large-scale genome structure and directly uncover the characteristic scales present in genome sequences. Furthermore, through shuffling experiments of selected genome regions, computationally identified, isochore-like regions were identified as the biological source for the uncovered large-scale genome structure. The phylogenetic distribution of short- and large-scale patchiness was determined in the best-sequenced genome assemblies from eleven eukaryotic genomes: mammals (Homo sapiens, Pan troglodytes, Mus musculus, Rattus norvegicus, and Canis familiaris), birds (Gallus gallus), fishes (Danio rerio), invertebrates (Drosophila melanogaster and Caenorhabditis elegans), plants (Arabidopsis thaliana) and yeasts (Saccharomyces cerevisiae). We found large-scale patchiness of genome structure, associated with in silico determined, isochore-like regions, throughout this wide phylogenetic range. Conclusion: Large-scale genome structure is detected by directly analyzing DNA sequences in a wide range of eukaryotic chromosome sequences, from human to yeast. In all these genomes, large-scale patchiness can be associated with the isochore-like regions, as directly detected in silico at the sequence level.
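The record's central tool, a scaling exponent from Detrended Fluctuation Analysis, can be sketched in miniature. The following pure-Python illustration runs DFA on synthetic noise; the window sizes, series length, and per-window least-squares detrending are assumptions of this sketch, not details taken from the paper:

```python
# Minimal Detrended Fluctuation Analysis (DFA) sketch on synthetic data.
import random
import math

def dfa_alpha(series, window_sizes):
    """Return the DFA scaling exponent (slope of log F(n) vs log n)."""
    mean = sum(series) / len(series)
    # Integrated profile of the mean-removed series.
    profile, total = [], 0.0
    for x in series:
        total += x - mean
        profile.append(total)
    log_n, log_f = [], []
    for n in window_sizes:
        sq_sum, count = 0.0, 0
        for start in range(0, len(profile) - n + 1, n):
            seg = profile[start:start + n]
            # Least-squares linear detrend inside the window.
            t = list(range(n))
            tm, sm = sum(t) / n, sum(seg) / n
            denom = sum((ti - tm) ** 2 for ti in t)
            slope = sum((ti - tm) * (si - sm) for ti, si in zip(t, seg)) / denom
            inter = sm - slope * tm
            sq_sum += sum((si - (slope * ti + inter)) ** 2
                          for ti, si in zip(t, seg))
            count += n
        log_n.append(math.log(n))
        log_f.append(math.log(math.sqrt(sq_sum / count)))
    # Slope of log F(n) against log n via least squares.
    nm, fm = sum(log_n) / len(log_n), sum(log_f) / len(log_f)
    return (sum((a - nm) * (b - fm) for a, b in zip(log_n, log_f))
            / sum((a - nm) ** 2 for a in log_n))

random.seed(42)
noise = [random.gauss(0, 1) for _ in range(4096)]
alpha = dfa_alpha(noise, [8, 16, 32, 64, 128])
# Uncorrelated noise should give alpha near 0.5; long-range-correlated
# (patchy) sequences give larger exponents.
```

In the genome setting, the series would be a numeric encoding of the DNA sequence (e.g. GC content along the chromosome) rather than Gaussian noise.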

  8. Evaluation of the cycle and production of sucker plants as a function of mother-plant management at different times of the year in 'Prata-Anã' banana

    Directory of Open Access Journals (Sweden)

    José Egídio Flori

    2008-06-01

    Full Text Available The objective of this study was to evaluate the effect of mother-plant management and of the time of selection of suckers of 'Prata-Anã' banana (Musa spp., genomic group AAB) on sucker production and development period. A five-year-old commercial plantation, planted at a spacing of 3.5 m x 2.0 m, was used; an experimental area of 2.1 hectares was demarcated in February 2002. A completely randomized split-plot design with three replications was used. The plot treatments were: management 1 (M1), family conducted without the mother plant, which was removed right after flowering; and management 2 (M2), family conducted with the mother plant (conventional management). The subplots corresponded to twelve times of sucker selection, with suckers selected at the large-horn stage, from February 2002 to January 2003. The evaluated characteristics were (a) the sucker development period (days between the date of sucker selection and its harvest) and (b) sucker bunch mass. The results showed that mother-plant management did not influence sucker bunch mass; mother-plant management altered the sucker development period, which was shorter in suckers conducted without the mother plant (M1); and the time of selection influenced both bunch mass and the sucker development period.

  9. Analysis using large-scale ringing data

    Directory of Open Access Journals (Sweden)

    Baillie, S. R.

    2004-06-01

    survival and recruitment estimates from the French CES scheme to assess the relative contributions of survival and recruitment to overall population changes. He develops a novel approach to modelling survival rates from such multi-site data by using within-year recaptures to provide a covariate of between-year recapture rates. This provided parsimonious models of variation in recapture probabilities between sites and years. The approach provides promising results for the four species investigated and can potentially be extended to similar data from other CES/MAPS schemes. The final paper, by Blandine Doligez, David Thomson and Arie van Noordwijk (Doligez et al., 2004), illustrates how large-scale studies of population dynamics can be important for evaluating the effects of conservation measures. Their study is concerned with the reintroduction of White Stork populations to the Netherlands, where a re-introduction programme started in 1969 had resulted in a breeding population of 396 pairs by 2000. They demonstrate the need to consider a wide range of models in order to account for potential age, time, cohort and "trap-happiness" effects. As the data are based on resightings, such trap-happiness must reflect some form of heterogeneity in resighting probabilities. Perhaps surprisingly, the provision of supplementary food did not influence survival, but it may have had an indirect effect via the alteration of migratory behaviour. Spatially explicit modelling of data gathered at many sites inevitably results in starting models with very large numbers of parameters. The problem is often complicated further by having relatively sparse data at each site, even where the total amount of data gathered is very large. Both Julliard (2004) and Doligez et al. (2004) give explicit examples of problems caused by needing to handle very large numbers of parameters and show how they overcame them for their particular data sets. Such problems involve both the choice of appropriate...

  10. PKI security in large-scale healthcare networks.

    Science.gov (United States)

    Mantas, Georgios; Lymberopoulos, Dimitrios; Komninos, Nikos

    2012-06-01

    During the past few years many PKIs (Public Key Infrastructures) have been proposed for healthcare networks in order to ensure secure communication services and exchange of data among healthcare professionals. However, these healthcare PKIs face a plethora of challenges, especially when deployed over large-scale healthcare networks. In this paper, we propose a PKI to ensure security in a large-scale Internet-based healthcare network connecting a wide spectrum of healthcare units geographically distributed within a wide region. Furthermore, the proposed PKI addresses the trust issues that arise in a large-scale healthcare network comprising multiple PKI domains.
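The multi-domain trust issue the record mentions can be illustrated with a toy chain walk: validation follows issuer links until it reaches a trust anchor shared across the network. All names and structures below are hypothetical simplifications; real PKIs validate X.509 signatures, revocation status, and path constraints:

```python
# Toy illustration of certificate-chain trust across PKI domains.
# cert name -> issuer name (a self-issued name marks a root CA).
CERTS = {
    "hospital-A-server": "regional-CA-A",
    "regional-CA-A": "national-root",
    "clinic-B-server": "regional-CA-B",
    "regional-CA-B": "regional-CA-B",   # self-signed root of domain B
    "national-root": "national-root",   # self-signed shared root
}

TRUST_ANCHORS = {"national-root"}  # anchors shared across the network

def is_trusted(cert, certs=CERTS, anchors=TRUST_ANCHORS, max_depth=10):
    """Walk issuer links until a trusted anchor is reached (or give up)."""
    for _ in range(max_depth):
        if cert in anchors:
            return True
        issuer = certs.get(cert)
        if issuer is None or issuer == cert:  # unknown cert or untrusted root
            return cert in anchors
        cert = issuer
    return False

assert is_trusted("hospital-A-server")    # chains to the shared root
assert not is_trusted("clinic-B-server")  # domain B root is not an anchor
```

Cross-domain trust then amounts to deciding which roots belong in the shared anchor set (or bridging them via cross-certification).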

  11. Large-Scale Agriculture and Outgrower Schemes in Ethiopia

    DEFF Research Database (Denmark)

    Wendimu, Mengistu Assefa

    The impact of large-scale agriculture and outgrower schemes on productivity, household welfare and wages in developing countries is highly contentious. Chapter 1 of this thesis provides an introduction to the study, while also reviewing the key debate in the contemporary land 'grabbing' and historical large-scale agriculture literature. … sugarcane outgrower scheme on household income and asset stocks. Chapter 5 examines the wages and working conditions in 'formal' large-scale and 'informal' small-scale irrigated agriculture. The results in Chapter 2 show that moisture stress, the use of untested planting materials, and conflict over land … commands a higher wage than 'formal' large-scale agriculture, while rather different wage determination mechanisms exist in the two sectors. Human capital characteristics (education and experience) partly explain the differences in wages within the formal sector, but play no significant role …

  12. Seismic safety in conducting large-scale blasts

    Science.gov (United States)

    Mashukov, I. V.; Chaplygin, V. V.; Domanov, V. P.; Semin, A. A.; Klimkin, M. A.

    2017-09-01

    In mining enterprises, a drilling and blasting method is used to prepare hard rock for excavation. As mining operations approach settlements, the negative effects of large-scale blasts increase. To assess the level of seismic impact of large-scale blasts, the scientific staff of Siberian State Industrial University carried out assessments for coal mines and iron-ore enterprises. The magnitude of surface seismic vibrations caused by mass explosions was determined using seismic receivers and an analog-digital converter recording to a laptop. The registration results of surface seismic vibrations during more than 280 large-scale blasts at 17 mining enterprises in 22 settlements are presented, and the maximum velocity values of the Earth's surface vibrations are determined. The safety of the seismic effect was evaluated against the permissible vibration velocity; for cases exceeding permissible values, recommendations were developed to reduce the level of seismic impact.
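The evaluation step, comparing the recorded peak of surface vibration velocity against a permissible value, can be sketched as follows. The sample values and the 20 mm/s limit are illustrative assumptions, not figures from the study; actual permissible velocities depend on national standards and on the structures at risk:

```python
# Sketch of a peak-particle-velocity (PPV) safety check for a blast record.
import math

def peak_particle_velocity(vx, vy, vz):
    """Peak of the vector-sum ground velocity over a 3-component record."""
    return max(math.sqrt(x * x + y * y + z * z)
               for x, y, z in zip(vx, vy, vz))

# Illustrative 3-component velocity samples in mm/s.
vx = [0.5, 3.1, -2.4, 1.0]
vy = [0.2, -1.5, 4.0, 0.3]
vz = [0.1, 2.2, -1.1, 0.4]

ppv = peak_particle_velocity(vx, vy, vz)
LIMIT_MM_S = 20.0          # assumed permissible vibration velocity
safe = ppv <= LIMIT_MM_S   # True when the blast stays within the limit
```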

  13. Large-Scale Structure and Hyperuniformity of Amorphous Ices

    Science.gov (United States)

    Martelli, Fausto; Torquato, Salvatore; Giovambattista, Nicolas; Car, Roberto

    2017-09-01

    We investigate the large-scale structure of amorphous ices and transitions between their different forms by quantifying their large-scale density fluctuations. Specifically, we simulate the isothermal compression of low-density amorphous ice (LDA) and hexagonal ice to produce high-density amorphous ice (HDA). Both HDA and LDA are nearly hyperuniform; i.e., they are characterized by an anomalous suppression of large-scale density fluctuations. By contrast, in correspondence with the nonequilibrium phase transitions to HDA, the presence of structural heterogeneities strongly suppresses the hyperuniformity and the system becomes hyposurficial (devoid of "surface-area fluctuations"). Our investigation challenges the largely accepted "frozen-liquid" picture, which views glasses as structurally arrested liquids. Beyond implications for water, our findings enrich our understanding of pressure-induced structural transformations in glasses.
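The diagnostic behind "anomalous suppression of large-scale density fluctuations" can be illustrated with a 1D toy: count points in randomly placed windows and compare the count variance of a random (Poisson-like) pattern with that of an ordered, hyperuniform one. Everything below is an illustrative sketch, not the authors' analysis:

```python
# 1D toy comparison of number-count variance: random vs hyperuniform points.
import random

def count_variance(points, window, domain, trials=2000, seed=1):
    """Variance of the number of points in randomly placed windows."""
    rng = random.Random(seed)
    counts = []
    for _ in range(trials):
        left = rng.uniform(0, domain - window)
        counts.append(sum(1 for p in points if left <= p < left + window))
    m = sum(counts) / trials
    return sum((c - m) ** 2 for c in counts) / trials

random.seed(0)
domain = 1000.0
poisson = [random.uniform(0, domain) for _ in range(1000)]  # fully random
lattice = [i + 0.5 for i in range(1000)]  # perfectly ordered, hyperuniform

var_poisson = count_variance(poisson, 50.0, domain)
var_lattice = count_variance(lattice, 50.0, domain)
# For the random pattern the variance scales with the window size; for the
# ordered pattern it is strongly suppressed -- the hallmark of hyperuniformity.
```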

  14. Large-scale structure observables in general relativity

    International Nuclear Information System (INIS)

    Jeong, Donghui; Schmidt, Fabian

    2015-01-01

    We review recent studies that rigorously define several key observables of the large-scale structure of the Universe in a general relativistic context. Specifically, we consider (i) redshift perturbation of cosmic clock events; (ii) distortion of cosmic rulers, including weak lensing shear and magnification; and (iii) observed number density of tracers of the large-scale structure. We provide covariant and gauge-invariant expressions of these observables. Our expressions are given for a linearly perturbed flat Friedmann–Robertson–Walker metric including scalar, vector, and tensor metric perturbations. While we restrict ourselves to linear order in perturbation theory, the approach can be straightforwardly generalized to higher order. (paper)

  15. Fatigue Analysis of Large-scale Wind turbine

    Directory of Open Access Journals (Sweden)

    Zhu Yongli

    2017-01-01

    Full Text Available This paper investigates fatigue damage of the top flange of a large-scale wind turbine generator. It establishes a finite element model of the top flange connection system with the finite element analysis software MSC.Marc/Mentat and analyzes its fatigue strain; it simulates the flange fatigue working condition with Bladed software; it acquires the flange fatigue load spectrum with the rain-flow counting method; and, finally, it performs fatigue analysis of the top flange with the fatigue analysis software MSC.Fatigue and the Palmgren-Miner linear cumulative damage theory. The result provides new thinking for flange fatigue analysis of large-scale wind turbine generators and possesses practical engineering value.
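The final step, the Palmgren-Miner linear cumulative damage sum over a rain-flow cycle spectrum, can be sketched as follows. The Basquin S-N constants and the cycle spectrum are illustrative assumptions; real values come from the applicable design code and the actual rain-flow count:

```python
# Sketch of a Palmgren-Miner linear cumulative damage calculation.

def cycles_to_failure(stress_range, C=2e12, m=3.0):
    """Basquin-type S-N curve: N = C * S**(-m). C, m are illustrative."""
    return C * stress_range ** (-m)

def miner_damage(spectrum):
    """spectrum: list of (stress_range_MPa, cycle_count) from rain-flow counting."""
    return sum(n / cycles_to_failure(s) for s, n in spectrum)

# Hypothetical rain-flow result: (stress range in MPa, counted cycles).
spectrum = [(80.0, 1.0e5), (120.0, 2.0e4), (200.0, 1.0e3)]
damage = miner_damage(spectrum)
# Fatigue failure is predicted when the damage sum reaches 1.0.
```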

  16. Generation of large-scale vortices in compressible helical turbulence

    International Nuclear Information System (INIS)

    Chkhetiani, O.G.; Gvaramadze, V.V.

    1989-01-01

    We consider the generation of large-scale vortices in a compressible, self-gravitating turbulent medium. A closed equation describing the evolution of large-scale vortices in helical turbulence with finite correlation time is obtained. This equation has a form similar to the hydromagnetic dynamo equation, which allows us to call the vortex generation effect the vortex dynamo. It is possible that essentially the same mechanism is responsible both for the amplification and maintenance of density waves and magnetic fields in the gaseous disks of spiral galaxies. (author). 29 refs

  17. Origin of large-scale cell structure in the universe

    International Nuclear Information System (INIS)

    Zel'dovich, Y.B.

    1982-01-01

    A qualitative explanation is offered for the characteristic global structure of the universe, wherein ''black'' regions devoid of galaxies are surrounded on all sides by closed, comparatively thin, ''bright'' layers populated by galaxies. The interpretation rests on some very general arguments regarding the growth of large-scale perturbations in a cold gas

  18. Large-Scale Systems Control Design via LMI Optimization

    Czech Academy of Sciences Publication Activity Database

    Rehák, Branislav

    2015-01-01

    Vol. 44, No. 3 (2015), pp. 247-253. ISSN 1392-124X. Institutional support: RVO:67985556. Keywords: combinatorial linear matrix inequalities; large-scale system; decentralized control. Subject RIV: BC - Control Systems Theory. Impact factor: 0.633, year: 2015

  19. Worldwide large-scale fluctuations of sardine and anchovy ...

    African Journals Online (AJOL)

    Worldwide large-scale fluctuations of sardine and anchovy populations. African Journal of Marine Science. http://dx.doi.org/10.2989/AJMS.2008.30.1.13.463.

  1. Large-scale coastal impact induced by a catastrophic storm

    DEFF Research Database (Denmark)

    Fruergaard, Mikkel; Andersen, Thorbjørn Joest; Johannessen, Peter N

    breaching. Our results demonstrate that violent, millennial-scale storms can trigger significant large-scale and long-term changes on barrier coasts, and that coastal changes assumed to take place over centuries or even millennia may occur in association with a single extreme storm event....

  2. Planck intermediate results XLII. Large-scale Galactic magnetic fields

    DEFF Research Database (Denmark)

    Adam, R.; Ade, P. A. R.; Alves, M. I. R.

    2016-01-01

    Recent models for the large-scale Galactic magnetic fields in the literature have been largely constrained by synchrotron emission and Faraday rotation measures. We use three different but representative models to compare their predicted polarized synchrotron and dust emission with that measured ...

  3. Large-scale matrix-handling subroutines 'ATLAS'

    International Nuclear Information System (INIS)

    Tsunematsu, Toshihide; Takeda, Tatsuoki; Fujita, Keiichi; Matsuura, Toshihiko; Tahara, Nobuo

    1978-03-01

    Subroutine package ''ATLAS'' has been developed for handling large-scale matrices. The package is composed of four kinds of subroutines: basic arithmetic routines, routines for solving linear simultaneous equations, routines for solving general eigenvalue problems, and utility routines. The subroutines are useful in large-scale plasma-fluid simulations. (auth.)
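As an example of the "linear simultaneous equations" category such a package bundles, here is a minimal Gaussian-elimination solver. This is an illustrative sketch, not ATLAS code; production libraries add refined pivoting strategies, banded/sparse storage, and iterative refinement:

```python
# Minimal dense linear solver: Gaussian elimination with partial pivoting.

def solve(A, b):
    """Solve A x = b for a small dense system."""
    n = len(A)
    # Work on an augmented copy so the inputs stay untouched.
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: bring the largest remaining entry to the diagonal.
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    # Back-substitution.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# 2x + y = 5, x + 3y = 10  ->  x = 1, y = 3
x = solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0])
```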

  4. Breakdown of large-scale circulation in turbulent rotating convection

    NARCIS (Netherlands)

    Kunnen, R.P.J.; Clercx, H.J.H.; Geurts, Bernardus J.

    2008-01-01

    Turbulent rotating convection in a cylinder is investigated both numerically and experimentally at Rayleigh number Ra = 10^9 and Prandtl number σ = 6.4. In this Letter we discuss two topics: the breakdown under rotation of the domain-filling large-scale circulation (LSC) typical for...

  5. Penalized Estimation in Large-Scale Generalized Linear Array Models

    DEFF Research Database (Denmark)

    Lund, Adam; Vincent, Martin; Hansen, Niels Richard

    2017-01-01

    Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension...
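The "design matrix free" idea referenced above rests on a standard Kronecker identity: for a 2D array model, (A ⊗ B) vec(X) = vec(B X Aᵀ), so the tensor-product design matrix never needs to be materialized. A small pure-Python sketch (column-major vec; the matrices are illustrative, and the explicit Kronecker product is built only to check the identity):

```python
# Applying a tensor-product design matrix without forming it.

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def transpose(P):
    return [list(r) for r in zip(*P)]

def vec(M):  # column-major stacking
    return [M[i][j] for j in range(len(M[0])) for i in range(len(M))]

def kron(A, B):  # explicit Kronecker product, for comparison only
    return [[A[i // len(B)][j // len(B[0])] * B[i % len(B)][j % len(B[0])]
             for j in range(len(A[0]) * len(B[0]))]
            for i in range(len(A) * len(B))]

def kron_apply(A, B, X):
    """(A ⊗ B) vec(X), computed as vec(B X A^T) without forming A ⊗ B."""
    return vec(matmul(matmul(B, X), transpose(A)))

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[0.0, 1.0], [1.0, 1.0]]
X = [[1.0, 0.0], [2.0, 1.0]]   # coefficients stored as an array
fast = kron_apply(A, B, X)
slow = matmul(kron(A, B), [[v] for v in vec(X)])  # same result, O(n^2) memory
```

For d-dimensional arrays the same trick generalizes to a sequence of one-mode products, which is what makes fitting large GLAMs feasible.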

  6. A Chain Perspective on Large-scale Number Systems

    NARCIS (Netherlands)

    Grijpink, J.H.A.M.

    2012-01-01

    As large-scale number systems gain significance in social and economic life (electronic communication, remote electronic authentication), the correct functioning and the integrity of public number systems take on crucial importance. They are needed to uniquely indicate people, objects or phenomena

  7. Image-based Exploration of Large-Scale Pathline Fields

    KAUST Repository

    Nagoor, Omniah H.

    2014-01-01

    structure in which each pixel contains a list of pathline segments. With this view-dependent method it is possible to filter, color-code and explore large-scale flow data in real time. In addition, optimization techniques such as early-ray termination...

  8. The Large-Scale Structure of Scientific Method

    Science.gov (United States)

    Kosso, Peter

    2009-01-01

    The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of…

  9. Large-Scale Machine Learning for Classification and Search

    Science.gov (United States)

    Liu, Wei

    2012-01-01

    With the rapid development of the Internet, nowadays tremendous amounts of data including images and videos, up to millions or billions, can be collected for training machine learning models. Inspired by this trend, this thesis is dedicated to developing large-scale machine learning techniques for the purpose of making classification and nearest…

  10. Chirping for large-scale maritime archaeological survey

    DEFF Research Database (Denmark)

    Grøn, Ole; Boldreel, Lars Ole

    2014-01-01

    Archaeological wrecks exposed on the sea floor are mapped using side-scan and multibeam techniques, whereas the detection of submerged archaeological sites, such as Stone Age settlements, and wrecks, partially or wholly embedded in sea-floor sediments, requires the application of high-resolution ...... the present state of this technology, it appears well suited to large-scale maritime archaeological mapping....

  11. Ecosystem resilience despite large-scale altered hydroclimatic conditions

    Science.gov (United States)

    G. E. Ponce Campos; M. S. Moran; A. Huete; Y. Zhang; C. Bresloff; T.E. Huxman; D. Eamus; D. D. Bosch; A. R. Buda; S. A. Gunter; T. Heartsill Scalley; S. G. Kitchen; M. P. McClaran; W. H. McNab; D. S. Montoya; J. A. Morgan; D. P. C. Peters; E. J. Sadler; M. S. Seyfried; P. J. Starks

    2013-01-01

    Climate change is predicted to increase both drought frequency and duration, and when coupled with substantial warming, will establish a new hydroclimatological model for many regions. Large-scale, warm droughts have recently occurred in North America, Africa, Europe, Amazonia and Australia, resulting in major effects on terrestrial ecosystems, carbon balance and food...

  12. Large-scale Homogenization of Bulk Materials in Mammoth Silos

    NARCIS (Netherlands)

    Schott, D.L.

    2004-01-01

    This doctoral thesis concerns the large-scale homogenization of bulk materials in mammoth silos. The objective of this research was to determine the best stacking and reclaiming method for homogenization in mammoth silos. For this purpose a simulation program was developed to estimate the

  13. Fractals and the Large-Scale Structure in the Universe

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education; Volume 7, Issue 4. Fractals and the Large-Scale Structure in the Universe - Is the Cosmological Principle Valid? A K Mittal, T R Seshadri. General Article, Volume 7, Issue 4, April 2002, pp. 39-47 ...

  14. LARGE-SCALE COMMERCIAL INVESTMENTS IN LAND: SEEKING ...

    African Journals Online (AJOL)

    extent of large-scale investment in land or to assess its impact on the people in recipient countries. … favorable lease terms, apparently based on a belief that this is necessary to … Harm to the rights of local occupiers of land can result from a dearth … applies to a self-identified group based on the group's traditions.

  15. Reconsidering Replication: New Perspectives on Large-Scale School Improvement

    Science.gov (United States)

    Peurach, Donald J.; Glazer, Joshua L.

    2012-01-01

    The purpose of this analysis is to reconsider organizational replication as a strategy for large-scale school improvement: a strategy that features a "hub" organization collaborating with "outlet" schools to enact school-wide designs for improvement. To do so, we synthesize a leading line of research on commercial replication to construct a…

  16. First Mile Challenges for Large-Scale IoT

    KAUST Repository

    Bader, Ahmed; Elsawy, Hesham; Gharbieh, Mohammad; Alouini, Mohamed-Slim; Adinoyi, Abdulkareem; Alshaalan, Furaih

    2017-01-01

    The Internet of Things is large-scale by nature. This is not only manifested by the large number of connected devices, but also by the sheer scale of spatial traffic intensity that must be accommodated, primarily in the uplink direction. To that end

  17. Technologies and challenges in large-scale phosphoproteomics

    DEFF Research Database (Denmark)

    Engholm-Keller, Kasper; Larsen, Martin Røssel

    2013-01-01

    become the main technique for discovery and characterization of phosphoproteins in a non-hypothesis-driven fashion. In this review, we describe methods for state-of-the-art MS-based analysis of protein phosphorylation as well as the strategies employed in large-scale phosphoproteomic experiments, with focus on the various challenges and limitations this field currently faces.

  18. Dual Decomposition for Large-Scale Power Balancing

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Jørgensen, John Bagterp; Vandenberghe, Lieven

    2013-01-01

    Dual decomposition is applied to power balancing of flexible thermal storage units. The centralized large-scale problem is decomposed into smaller subproblems and solved locally by each unit in the Smart Grid. Convergence is achieved by coordinating the units' consumption through a negotiation...
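The decomposition scheme can be sketched with quadratic unit costs: each unit minimizes its own cost plus a price term, and a coordinator adjusts the price (the dual variable) by a subgradient step until total consumption meets demand. The cost coefficients, demand, and step size below are illustrative assumptions, not values from the paper:

```python
# Minimal dual-decomposition sketch for balancing flexible units.

def solve_unit(a, price):
    """Unit subproblem: min_p a*p^2 - price*p  ->  p = price / (2a)."""
    return price / (2.0 * a)

def balance(cost_coeffs, demand, step=0.1, iters=2000):
    """Coordinate units via a price until sum of powers meets demand."""
    price = 0.0
    for _ in range(iters):
        powers = [solve_unit(a, price) for a in cost_coeffs]
        # Subgradient step on the coupling constraint sum(p) = demand.
        price += step * (demand - sum(powers))
    return powers, price

cost_coeffs = [1.0, 2.0, 4.0]     # per-unit quadratic cost coefficients
powers, price = balance(cost_coeffs, demand=7.0)
imbalance = abs(sum(powers) - 7.0)
# Cheaper units end up carrying proportionally more of the load.
```

Only the scalar price travels between coordinator and units, which is what makes the scheme attractive at Smart Grid scale.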

  19. Evaluation of Large-scale Public Sector Reforms

    DEFF Research Database (Denmark)

    Breidahl, Karen Nielsen; Gjelstrup, Gunnar; Hansen, Hanne Foss

    2017-01-01

    and more delimited policy areas take place. In our analysis we apply four governance perspectives (rational-instrumental, rational-interest based, institutional-cultural and a chaos perspective) in a comparative analysis of the evaluations of two large-scale public sector reforms in Denmark and Norway. We...

  20. Large-scale silviculture experiments of western Oregon and Washington.

    Science.gov (United States)

    Nathan J. Poage; Paul D. Anderson

    2007-01-01

    We review 12 large-scale silviculture experiments (LSSEs) in western Washington and Oregon with which the Pacific Northwest Research Station of the USDA Forest Service is substantially involved. We compiled and arrayed information about the LSSEs as a series of matrices in a relational database, which is included on the compact disc published with this report and...

  1. Participatory Design and the Challenges of Large-Scale Systems

    DEFF Research Database (Denmark)

    Simonsen, Jesper; Hertzum, Morten

    2008-01-01

    With its 10th biannual anniversary conference, Participatory Design (PD) is leaving its teens and must now be considered ready to join the adult world. In this article we encourage the PD community to think big: PD should engage in large-scale information-systems development and opt for a PD...

  2. A bacterial disease of yellow perch (Perca flavescens)

    Science.gov (United States)

    Ross, A.J.; Nordstrom, P.R.; Bailey, J.E.; Heaton, J.H.

    1960-01-01

    On May 26, 1959, two of the authors investigated a fish kill at Dailey Lake, Park County, Montana. They observed about a half-dozen live, weakly swimming yellow perch (Perca flavescens), in addition to thousands of dead perch along the shoreline. It was learned from local residents that mortalities had begun to appear some 2 weeks earlier. At that time the authorities had diagnosed the condition as a winterkill, since ice had only recently disappeared from the lake. Although a number of other species inhabit Dailey Lake, including rainbow trout (Salmo gairdneri), brown trout (S. trutta), kokanee (Oncorhynchus nerka), black crappie (Pomoxis nigromaculatus), largemouth bass (Micropterus salmoides), longnose suckers (Catostomus catostomus), and rainbow x cutthroat hybrids, only one other species was represented in the kill. This consisted of one black crappie.

  3. Participatory Design of Large-Scale Information Systems

    DEFF Research Database (Denmark)

    Simonsen, Jesper; Hertzum, Morten

    2008-01-01

    In this article we discuss how to engage in large-scale information systems development by applying a participatory design (PD) approach that acknowledges the unique situated work practices conducted by the domain experts of modern organizations. We reconstruct the iterative prototyping approach into a PD process model that (1) emphasizes PD experiments as transcending traditional prototyping by evaluating fully integrated systems exposed to real work practices; (2) incorporates improvisational change management including anticipated, emergent, and opportunity-based change; and (3) extends initial design and development into a sustained and ongoing stepwise implementation that constitutes an overall technology-driven organizational change. The process model is presented through a large-scale PD experiment in the Danish healthcare sector. We reflect on our experiences from this experiment...

  4. GAIA: A WINDOW TO LARGE-SCALE MOTIONS

    Energy Technology Data Exchange (ETDEWEB)

    Nusser, Adi [Physics Department and the Asher Space Science Institute-Technion, Haifa 32000 (Israel); Branchini, Enzo [Department of Physics, Universita Roma Tre, Via della Vasca Navale 84, 00146 Rome (Italy); Davis, Marc, E-mail: adi@physics.technion.ac.il, E-mail: branchin@fis.uniroma3.it, E-mail: mdavis@berkeley.edu [Departments of Astronomy and Physics, University of California, Berkeley, CA 94720 (United States)

    2012-08-10

    Using redshifts as a proxy for galaxy distances, estimates of the two-dimensional (2D) transverse peculiar velocities of distant galaxies could be obtained from future measurements of proper motions. We provide the mathematical framework for analyzing 2D transverse motions and show that they offer several advantages over traditional probes of large-scale motions. They are completely independent of any intrinsic relations between galaxy properties; hence, they are essentially free of selection biases. They are free from homogeneous and inhomogeneous Malmquist biases that typically plague distance indicator catalogs. They provide additional information to traditional probes that yield line-of-sight peculiar velocities only. Further, because of their 2D nature, fundamental questions regarding vorticity of large-scale flows can be addressed. Gaia, for example, is expected to provide proper motions of at least bright galaxies with high central surface brightness, making proper motions a likely contender for traditional probes based on current and future distance indicator measurements.

  5. Large-scale influences in near-wall turbulence.

    Science.gov (United States)

    Hutchins, Nicholas; Marusic, Ivan

    2007-03-15

    Hot-wire data acquired in a high Reynolds number facility are used to illustrate the need for adequate scale separation when considering the coherent structure in wall-bounded turbulence. It is found that a large-scale motion in the log region becomes increasingly comparable in energy to the near-wall cycle as the Reynolds number increases. Through decomposition of fluctuating velocity signals, it is shown that this large-scale motion has a distinct modulating influence on the small-scale energy (akin to amplitude modulation). Reassessment of DNS data, in light of these results, shows similar trends, with the rate and intensity of production due to the near-wall cycle subject to a modulating influence from the largest-scale motions.
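The modulation analysis described above can be illustrated with a toy computation: split a signal into large and small scales with a spectral filter, take the Hilbert envelope of the small scales, and correlate the low-passed envelope with the large-scale component. This is only a sketch of the general idea applied to a synthetic amplitude-modulated signal, not the authors' processing chain; the cutoff fraction and signal parameters are arbitrary choices.

```python
import numpy as np

def amplitude_modulation_coefficient(u, cutoff_frac=0.02):
    """Correlate the large-scale signal with the envelope of the small scales.

    u           : 1-D velocity-fluctuation signal
    cutoff_frac : fraction of the spectrum treated as 'large scale'
    """
    n = len(u)
    U = np.fft.rfft(u)
    cutoff = max(1, int(cutoff_frac * len(U)))

    # Spectral low-pass / high-pass split into large and small scales.
    U_large = U.copy()
    U_large[cutoff:] = 0.0
    u_large = np.fft.irfft(U_large, n)
    u_small = u - u_large

    # Envelope of the small scales via an FFT-based Hilbert transform.
    S = np.fft.fft(u_small)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    envelope = np.abs(np.fft.ifft(S * h))

    # Low-pass the envelope before correlating, as in modulation studies.
    E = np.fft.rfft(envelope - envelope.mean())
    E[cutoff:] = 0.0
    env_lp = np.fft.irfft(E, n)

    return np.corrcoef(u_large, env_lp)[0, 1]

# Synthetic amplitude-modulated signal: a slow wave modulating fast noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8192, endpoint=False)
slow = np.sin(2 * np.pi * 4 * t)
fast = (1.0 + 0.5 * slow) * rng.standard_normal(t.size)
signal = slow + fast
print(amplitude_modulation_coefficient(signal))  # positive for this signal
```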

  6. First Mile Challenges for Large-Scale IoT

    KAUST Repository

    Bader, Ahmed

    2017-03-16

    The Internet of Things is large-scale by nature. This is not only manifested by the large number of connected devices, but also by the sheer scale of spatial traffic intensity that must be accommodated, primarily in the uplink direction. To that end, cellular networks are indeed a strong first mile candidate to accommodate the data tsunami to be generated by the IoT. However, IoT devices are required in the cellular paradigm to undergo random access procedures as a precursor to resource allocation. Such procedures impose a major bottleneck that hinders cellular networks' ability to support large-scale IoT. In this article, we shed light on the random access dilemma and present a case study based on experimental data as well as system-level simulations. Accordingly, a case is built for the latent need to revisit random access procedures. A call for action is motivated by listing a few potential remedies and recommendations.
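The random-access bottleneck described above can be made concrete with a toy contention simulation: each device independently picks one of a fixed pool of preambles, and a device succeeds only if its preamble is unique in that opportunity. The pool size of 54 below is a commonly cited figure for LTE contention-based preambles, used here purely as an illustrative assumption; this is a sketch of the dilemma, not the article's system-level simulator.

```python
import random

def random_access_success_rate(n_devices, n_preambles, trials=1000, seed=1):
    """Fraction of devices that succeed in one random-access opportunity.

    A device succeeds only if no other device picked the same preamble,
    mimicking the contention step of the cellular random-access procedure.
    """
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        picks = [rng.randrange(n_preambles) for _ in range(n_devices)]
        counts = {}
        for p in picks:
            counts[p] = counts.get(p, 0) + 1
        successes += sum(1 for p in picks if counts[p] == 1)
    return successes / (trials * n_devices)

# Collisions grow quickly as device density approaches the preamble budget,
# which is the scaling problem massive IoT deployments run into.
for n in (4, 16, 64, 256):
    print(n, round(random_access_success_rate(n, 54), 3))
```

The per-device success probability is roughly (1 - 1/K)^(N-1) for N devices and K preambles, which collapses once N is comparable to or larger than K.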

  7. The Large-scale Effect of Environment on Galactic Conformity

    Science.gov (United States)

    Sun, Shuangpeng; Guo, Qi; Wang, Lan; Wang, Jie; Gao, Liang; Lacey, Cedric G.; Pan, Jun

    2018-04-01

    We use a volume-limited galaxy sample from the SDSS Data Release 7 to explore the dependence of galactic conformity on the large-scale environment, measured on ∼4 Mpc scales. We find that the star formation activity of neighbour galaxies depends more strongly on the environment than on the activity of their primary galaxies. In under-dense regions most neighbour galaxies tend to be active, while in over-dense regions neighbour galaxies are mostly passive, regardless of the activity of their primary galaxies. At a given stellar mass, passive primary galaxies reside in higher density regions than active primary galaxies, leading to the apparently strong conformity signal. The dependence of the activity of neighbour galaxies on environment can be explained by the corresponding dependence of the fraction of satellite galaxies. Similar results are found for galaxies in a semi-analytical model, suggesting that no new physics is required to explain the observed large-scale conformity.

  8. Traffic assignment models in large-scale applications

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær

    ...the potential of the method proposed and the possibility to use individual-based GPS units for travel surveys in real-life large-scale multi-modal networks. Congestion is known to highly influence the way we act in the transportation network (and organise our lives) because of longer travel times, but the reliability of the travel time also has a large impact on our travel choices. Consequently, in order to improve the realism of transport models, correct understanding and representation of two values that are related to the value of time (VoT) are essential: (i) the value of congestion (VoC), as the Vo... ...of observations of actual behaviour to obtain estimates of the (monetary) value of different travel time components, thereby increasing the behavioural realism of large-scale models. The generation of choice sets is a vital component in route choice models. This is, however, not a straightforward task in real...

  9. Real-time simulation of large-scale floods

    Science.gov (United States)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    According to the complex real-time water situation, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional, shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance the numerical stability. An adaptive method is proposed to improve the running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
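As an illustration of the finite-volume machinery the abstract refers to, the sketch below steps the one-dimensional shallow-water equations with a simple Lax-Friedrichs flux on a periodic domain. This is a deliberately minimal stand-in for the paper's unstructured two-dimensional Godunov-type scheme with wet/dry handling, and it checks the mass conservation that any conservative scheme must satisfy; grid size, time step and initial condition are illustrative choices.

```python
import numpy as np

G = 9.81  # gravitational acceleration

def swe_step(h, hu, dx, dt):
    """One Lax-Friedrichs finite-volume step for the 1-D shallow-water
    equations with periodic boundaries. State: depth h, momentum hu."""
    U = np.stack([h, hu])                          # conserved variables
    u = hu / h
    F = np.stack([hu, hu * u + 0.5 * G * h * h])   # physical flux

    Up, Um = np.roll(U, -1, axis=1), np.roll(U, 1, axis=1)
    Fp, Fm = np.roll(F, -1, axis=1), np.roll(F, 1, axis=1)

    # Lax-Friedrichs numerical fluxes at the right and left cell faces.
    a = dx / dt
    Fr = 0.5 * (F + Fp) - 0.5 * a * (Up - U)
    Fl = 0.5 * (Fm + F) - 0.5 * a * (U - Um)

    Un = U - dt / dx * (Fr - Fl)
    return Un[0], Un[1]

# Dam-break-like initial condition on a periodic domain.
nx, dx = 200, 0.05
h = np.where(np.arange(nx) < nx // 2, 2.0, 1.0)
hu = np.zeros(nx)
mass0 = h.sum() * dx

dt = 0.2 * dx / np.sqrt(G * 2.0)   # respect the CFL condition
for _ in range(200):
    h, hu = swe_step(h, hu, dx, dt)

print(abs(h.sum() * dx - mass0))   # mass is conserved to round-off
```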

  10. Large-scale networks in engineering and life sciences

    CERN Document Server

    Findeisen, Rolf; Flockerzi, Dietrich; Reichl, Udo; Sundmacher, Kai

    2014-01-01

    This edited volume provides insights into and tools for the modeling, analysis, optimization, and control of large-scale networks in the life sciences and in engineering. Large-scale systems are often the result of networked interactions between a large number of subsystems, and their analysis and control are becoming increasingly important. The chapters of this book present the basic concepts and theoretical foundations of network theory and discuss its applications in different scientific areas such as biochemical reactions, chemical production processes, systems biology, electrical circuits, and mobile agents. The aim is to identify common concepts, to understand the underlying mathematical ideas, and to inspire discussions across the borders of the various disciplines.  The book originates from the interdisciplinary summer school “Large Scale Networks in Engineering and Life Sciences” hosted by the International Max Planck Research School Magdeburg, September 26-30, 2011, and will therefore be of int...

  11. Some Statistics for Measuring Large-Scale Structure

    OpenAIRE

    Brandenberger, Robert H.; Kaplan, David M.; Ramsey, Stephen A.

    1993-01-01

    Good statistics for measuring large-scale structure in the Universe must be able to distinguish between different models of structure formation. In this paper, two and three dimensional ``counts in cell" statistics and a new ``discrete genus statistic" are applied to toy versions of several popular theories of structure formation: random phase cold dark matter model, cosmic string models, and global texture scenario. All three statistics appear quite promising in terms of differentiating betw...
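The counts-in-cells statistic mentioned above is simple to compute: partition the survey volume into equal cells, histogram the object positions, and compare the variance of the counts with the Poisson expectation (variance equal to the mean). The sketch below does this in two dimensions on synthetic point sets; the box size and cluster parameters are arbitrary illustrative choices, not the toy models of the paper.

```python
import numpy as np

def counts_in_cells(points, box_size, n_cells):
    """Histogram 2-D point positions into a grid of square cells and
    return the cell counts (the basic counts-in-cells statistic)."""
    edges = np.linspace(0.0, box_size, n_cells + 1)
    counts, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                  bins=[edges, edges])
    return counts.ravel()

rng = np.random.default_rng(42)

# A Poisson (unclustered) point process versus a toy clustered one.
poisson_pts = rng.uniform(0.0, 100.0, size=(5000, 2))
centers = rng.uniform(0.0, 100.0, size=(50, 2))
clustered_pts = (centers[rng.integers(0, 50, 5000)]
                 + rng.normal(0.0, 1.0, size=(5000, 2))) % 100.0

for name, pts in [("poisson", poisson_pts), ("clustered", clustered_pts)]:
    n = counts_in_cells(pts, 100.0, 10)
    # For a Poisson process the variance equals the mean; clustering
    # inflates the variance of the counts above the mean.
    print(name, n.mean(), n.var())
```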

  12. Foundations of Large-Scale Multimedia Information Management and Retrieval

    CERN Document Server

    Chang, Edward Y

    2011-01-01

    "Foundations of Large-Scale Multimedia Information Management and Retrieval - Mathematics of Perception" covers knowledge representation and semantic analysis of multimedia data and scalability in signal extraction, data mining, and indexing. The book is divided into two parts: Part I - Knowledge Representation and Semantic Analysis focuses on the key components of mathematics of perception as it applies to data management and retrieval. These include feature selection/reduction, knowledge representation, semantic analysis, distance function formulation for measuring similarity, and

  13. PKI security in large-scale healthcare networks

    OpenAIRE

    Mantas, G.; Lymberopoulos, D.; Komninos, N.

    2012-01-01

    During the past few years a lot of PKI (Public Key Infrastructures) infrastructures have been proposed for healthcare networks in order to ensure secure communication services and exchange of data among healthcare professionals. However, there is a plethora of challenges in these healthcare PKI infrastructures. Especially, there are a lot of challenges for PKI infrastructures deployed over large-scale healthcare networks. In this paper, we propose a PKI infrastructure to ensure security in a ...

  14. Large-scale motions in the universe: a review

    International Nuclear Information System (INIS)

    Burstein, D.

    1990-01-01

    The expansion of the universe can be retarded in localised regions within the universe both by the presence of gravity and by non-gravitational motions generated in the post-recombination universe. The motions of galaxies thus generated are called 'peculiar motions', and the amplitudes, size scales and coherence of these peculiar motions are among the most direct records of the structure of the universe. As such, measurements of these properties of the present-day universe provide some of the severest tests of cosmological theories. This is a review of the current evidence for large-scale motions of galaxies out to a distance of ∼5000 km s⁻¹ (in an expanding universe, distance is proportional to radial velocity). 'Large-scale' in this context refers to motions that are correlated over size scales larger than the typical sizes of groups of galaxies, up to and including the size of the volume surveyed. To orient the reader into this relatively new field of study, a short modern history is given together with an explanation of the terminology. Careful consideration is given to the data used to measure the distances, and hence the peculiar motions, of galaxies. The evidence for large-scale motions is presented in a graphical fashion, using only the most reliable data for galaxies spanning a wide range in optical properties and over the complete range of galactic environments. The kinds of systematic errors that can affect this analysis are discussed, and the reliability of these motions is assessed. The predictions of two models of large-scale motion are compared to the observations, and special emphasis is placed on those motions in which our own Galaxy directly partakes. (author)

  15. A Classification Framework for Large-Scale Face Recognition Systems

    OpenAIRE

    Zhou, Ziheng; Deravi, Farzin

    2009-01-01

    This paper presents a generic classification framework for large-scale face recognition systems. Within the framework, a data sampling strategy is proposed to tackle the data imbalance when image pairs are sampled from thousands of face images for preparing a training dataset. A modified kernel Fisher discriminant classifier is proposed to make it computationally feasible to train the kernel-based classification method using tens of thousands of training samples. The framework is tested in an...

  16. Design study on sodium cooled large-scale reactor

    International Nuclear Information System (INIS)

    Murakami, Tsutomu; Hishida, Masahiko; Kisohara, Naoyuki

    2004-07-01

    In Phase 1 of the 'Feasibility Studies on Commercialized Fast Reactor Cycle Systems (F/S)', an advanced loop type reactor was selected as a promising concept for a sodium-cooled large-scale reactor with the possibility of fulfilling the design requirements of the F/S. In Phase 2, design improvements for further cost reduction and establishment of the plant concept have been performed. This report summarizes the results of the design study on the sodium-cooled large-scale reactor performed in JFY2003, the third year of Phase 2. In the JFY2003 design study, critical subjects related to safety, structural integrity and thermal hydraulics which were identified in the last fiscal year were examined and the plant concept was modified. Furthermore, fundamental specifications of the main systems and components were set and the economics were evaluated. In addition, as the interim evaluation of the candidate concepts of the FBR fuel cycle is to be conducted, cost effectiveness and achievability of the development goal were evaluated and data for the three large-scale reactor candidate concepts were prepared. As a result of this study, a plant concept for the sodium-cooled large-scale reactor has been constructed which has a prospect of satisfying the economic goal (construction cost: less than 200,000 yen/kWe, etc.) and of resolving the critical subjects. From now on, reflecting the results of elemental experiments, the preliminary conceptual design of this plant will proceed toward the selection for narrowing down candidate concepts at the end of Phase 2. (author)

  17. Design study on sodium-cooled large-scale reactor

    International Nuclear Information System (INIS)

    Shimakawa, Yoshio; Nibe, Nobuaki; Hori, Toru

    2002-05-01

    In Phase 1 of the 'Feasibility Study on Commercialized Fast Reactor Cycle Systems (F/S)', an advanced loop type reactor was selected as a promising concept for a sodium-cooled large-scale reactor with the possibility of fulfilling the design requirements of the F/S. In Phase 2 of the F/S, it is planned to proceed with a preliminary conceptual design of a sodium-cooled large-scale reactor based on the design of the advanced loop type reactor. Through the design study, it is intended to construct a plant concept that can demonstrate its attractiveness and competitiveness as a commercialized reactor. This report summarizes the results of the design study on the sodium-cooled large-scale reactor performed in JFY2001, the first year of Phase 2. In the JFY2001 design study, a plant concept was constructed based on the design of the advanced loop type reactor, and fundamental specifications of the main systems and components were set. Furthermore, critical subjects related to safety, structural integrity, thermal hydraulics, operability, maintainability and economy were examined and evaluated. As a result of this study, a plant concept for the sodium-cooled large-scale reactor has been constructed which has a prospect of satisfying the economic goal (construction cost: less than 200,000 yen/kWe, etc.) and of resolving the critical subjects. From now on, reflecting the results of elemental experiments, the preliminary conceptual design of this plant will proceed toward the selection for narrowing down candidate concepts at the end of Phase 2. (author)

  18. Large-Scale Optimization for Bayesian Inference in Complex Systems

    Energy Technology Data Exchange (ETDEWEB)

    Willcox, Karen [MIT; Marzouk, Youssef [MIT

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to

  19. NASA: Assessments of Selected Large-Scale Projects

    Science.gov (United States)

    2011-03-01

    [Report documentation page residue; recoverable fragments: report date March 2011; acronym list including MEP (Mars Exploration Program), MIB (Mishap Investigation Board), MMRTG (Multi Mission Radioisotope Thermoelectric Generator), MMS (Magnetospheric...); the report covers NASA projects ranging from probes designed to explore the Martian surface, to satellites equipped with advanced sensors to study the Earth, to telescopes intended to explore the...]

  20. Primordial large-scale electromagnetic fields from gravitoelectromagnetic inflation

    Energy Technology Data Exchange (ETDEWEB)

    Membiela, Federico Agustin [Departamento de Fisica, Facultad de Ciencias Exactas y Naturales, Universidad Nacional de Mar del Plata, Funes 3350, (7600) Mar del Plata (Argentina); Consejo Nacional de Investigaciones Cientificas y Tecnicas (CONICET) (Argentina)], E-mail: membiela@mdp.edu.ar; Bellini, Mauricio [Departamento de Fisica, Facultad de Ciencias Exactas y Naturales, Universidad Nacional de Mar del Plata, Funes 3350, (7600) Mar del Plata (Argentina); Consejo Nacional de Investigaciones Cientificas y Tecnicas (CONICET) (Argentina)], E-mail: mbellini@mdp.edu.ar

    2009-04-20

    We investigate the origin and evolution of primordial electric and magnetic fields in the early universe, when the expansion is governed by a cosmological constant Λ0. Using the gravitoelectromagnetic inflationary formalism with A0 = 0, we obtain the power spectra of large-scale magnetic fields and the inflaton field fluctuations during inflation. A very important fact is that our formalism is naturally non-conformally invariant.

  1. Primordial large-scale electromagnetic fields from gravitoelectromagnetic inflation

    Science.gov (United States)

    Membiela, Federico Agustín; Bellini, Mauricio

    2009-04-01

    We investigate the origin and evolution of primordial electric and magnetic fields in the early universe, when the expansion is governed by a cosmological constant Λ0. Using the gravitoelectromagnetic inflationary formalism with A0 = 0, we obtain the power spectra of large-scale magnetic fields and the inflaton field fluctuations during inflation. A very important fact is that our formalism is naturally non-conformally invariant.

  2. Primordial large-scale electromagnetic fields from gravitoelectromagnetic inflation

    International Nuclear Information System (INIS)

    Membiela, Federico Agustin; Bellini, Mauricio

    2009-01-01

    We investigate the origin and evolution of primordial electric and magnetic fields in the early universe, when the expansion is governed by a cosmological constant Λ0. Using the gravitoelectromagnetic inflationary formalism with A0 = 0, we obtain the power spectra of large-scale magnetic fields and the inflaton field fluctuations during inflation. A very important fact is that our formalism is naturally non-conformally invariant.

  3. Rotation invariant fast features for large-scale recognition

    Science.gov (United States)

    Takacs, Gabriel; Chandrasekhar, Vijay; Tsai, Sam; Chen, David; Grzeszczuk, Radek; Girod, Bernd

    2012-10-01

    We present an end-to-end feature description pipeline which uses a novel interest point detector and Rotation-Invariant Fast Feature (RIFF) descriptors. The proposed RIFF algorithm is 15× faster than SURF [1] while producing large-scale retrieval results that are comparable to SIFT [2]. Such high-speed features benefit a range of applications from Mobile Augmented Reality (MAR) to web-scale image retrieval and analysis.

  4. Concurrent Programming Using Actors: Exploiting Large-Scale Parallelism,

    Science.gov (United States)

    1985-10-07

    [Report documentation page residue; recoverable fragments: MIT Artificial Intelligence Laboratory, 545 Technology Square, Cambridge; 'Concurrent Programming Using Actors: Exploiting Large-Scale Parallelism', G. Agha et al.]

  5. On a Game of Large-Scale Projects Competition

    Science.gov (United States)

    Nikonov, Oleg I.; Medvedeva, Marina A.

    2009-09-01

    The paper is devoted to game-theoretical control problems motivated by economic decision-making situations arising in the realization of large-scale projects, such as designing and putting into operation new gas or oil pipelines. A non-cooperative two-player game is considered with payoff functions of a special type, for which standard existence theorems and algorithms for searching Nash equilibrium solutions are not applicable. The paper is based on and develops the results obtained in [1]-[5].

  6. Large-scale nuclear energy from the thorium cycle

    International Nuclear Information System (INIS)

    Lewis, W.B.; Duret, M.F.; Craig, D.S.; Veeder, J.I.; Bain, A.S.

    1973-02-01

    The thorium fuel cycle in CANDU (Canada Deuterium Uranium) reactors challenges breeders and fusion as the simplest means of meeting the world's large-scale demands for energy for centuries. Thorium oxide fuel allows high power density with excellent neutron economy. The combination of thorium fuel with an organic coolant promises easy maintenance and high availability of the whole plant. The total fuelling cost, including charges on the inventory, is estimated to be attractively low. (author) [fr]

  7. Fast, large-scale hologram calculation in wavelet domain

    Science.gov (United States)

    Shimobaba, Tomoyoshi; Matsushima, Kyoji; Takahashi, Takayuki; Nagahama, Yuki; Hasegawa, Satoki; Sano, Marie; Hirayama, Ryuji; Kakue, Takashi; Ito, Tomoyoshi

    2018-04-01

    We propose a large-scale hologram calculation using WAvelet ShrinkAge-Based superpositIon (WASABI), a wavelet transform-based algorithm. An image-type hologram calculated using the WASABI method is printed on a glass substrate with a resolution of 65,536 × 65,536 pixels and a pixel pitch of 1 μm. The hologram calculation time amounts to approximately 354 s on a commercial CPU, which is approximately 30 times faster than conventional methods.

  8. Evolutionary leap in large-scale flood risk assessment needed

    OpenAIRE

    Vorogushyn, Sergiy; Bates, Paul D.; de Bruijn, Karin; Castellarin, Attilio; Kreibich, Heidi; Priest, Sally J.; Schröter, Kai; Bagli, Stefano; Blöschl, Günter; Domeneghetti, Alessio; Gouldby, Ben; Klijn, Frans; Lammersen, Rita; Neal, Jeffrey C.; Ridder, Nina

    2018-01-01

    Current approaches for assessing large-scale flood risks contravene the fundamental principles of the flood risk system functioning because they largely ignore basic interactions and feedbacks between atmosphere, catchments, river-floodplain systems and socio-economic processes. As a consequence, risk analyses are uncertain and might be biased. However, reliable risk estimates are required for prioritizing national investments in flood risk mitigation or for appraisal and management of insura...

  9. Large-scale Health Information Database and Privacy Protection*1

    OpenAIRE

    YAMAMOTO, Ryuichi

    2016-01-01

    Japan was once progressive in the digitalization of healthcare fields but unfortunately has fallen behind in terms of the secondary use of data for public interest. There has recently been a trend to establish large-scale health databases in the nation, and a conflict between data use for public interest and privacy protection has surfaced as this trend has progressed. Databases for health insurance claims or for specific health checkups and guidance services were created according to the law...

  10. Multiresolution comparison of precipitation datasets for large-scale models

    Science.gov (United States)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparisons between gridded precipitation products along with ground observations provide another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin plate spline smoothing algorithms (ANUSPLIN), and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing spatial coherence. In addition to the products comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.
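Typical verification criteria of the kind used in such comparisons (bias, RMSE, and correlation against gauge observations, evaluated at several temporal aggregations) can be sketched as follows. The synthetic gauge record and the product's bias and noise levels below are illustrative assumptions, not data from the study.

```python
import numpy as np

def verification_stats(product, gauges):
    """Bias, RMSE and Pearson correlation of a gridded-product series
    against co-located gauge observations."""
    product, gauges = np.asarray(product, float), np.asarray(gauges, float)
    err = product - gauges
    return {
        "bias": err.mean(),
        "rmse": np.sqrt((err ** 2).mean()),
        "corr": np.corrcoef(product, gauges)[0, 1],
    }

def aggregate(series, window):
    """Aggregate a daily series into coarser totals (e.g. weekly, monthly),
    mimicking a multiresolution comparison."""
    series = np.asarray(series, float)
    n = series.size // window * window
    return series[:n].reshape(-1, window).sum(axis=1)

rng = np.random.default_rng(7)
gauge = rng.gamma(0.8, 4.0, size=365)            # synthetic daily gauge record
product = 0.9 * gauge + rng.normal(0, 2.0, 365)  # biased, noisy product

for window in (1, 7, 30):
    stats = verification_stats(aggregate(product, window),
                               aggregate(gauge, window))
    print(window, {k: round(v, 2) for k, v in stats.items()})
```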

  11. Human visual system automatically represents large-scale sequential regularities.

    Science.gov (United States)

    Kimura, Motohiro; Widmann, Andreas; Schröger, Erich

    2010-03-04

    Our brain recordings reveal that large-scale sequential regularities defined across non-adjacent stimuli can be automatically represented in visual sensory memory. To show that, we adopted an auditory paradigm developed by Sussman, E., Ritter, W., and Vaughan, H. G. Jr. (1998). Predictability of stimulus deviance and the mismatch negativity. NeuroReport, 9, 4167-4170, Sussman, E., and Gumenyuk, V. (2005). Organization of sequential sounds in auditory memory. NeuroReport, 16, 1519-1523 to the visual domain by presenting task-irrelevant infrequent luminance-deviant stimuli (D, 20%) inserted among task-irrelevant frequent stimuli being of standard luminance (S, 80%) in randomized (randomized condition, SSSDSSSSSDSSSSD...) and fixed manners (fixed condition, SSSSDSSSSDSSSSD...). Comparing the visual mismatch negativity (visual MMN), an event-related brain potential (ERP) index of memory-mismatch processes in human visual sensory system, revealed that visual MMN elicited by deviant stimuli was reduced in the fixed compared to the randomized condition. Thus, the large-scale sequential regularity being present in the fixed condition (SSSSD) must have been represented in visual sensory memory. Interestingly, this effect did not occur in conditions with stimulus-onset asynchronies (SOAs) of 480 and 800 ms but was confined to the 160-ms SOA condition supporting the hypothesis that large-scale regularity extraction was based on perceptual grouping of the five successive stimuli defining the regularity. 2010 Elsevier B.V. All rights reserved.
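The two stimulus sequences are straightforward to reproduce: a fixed SSSSD cycle carrying the five-stimulus regularity, and a randomized control with the same 20% deviant probability but no sequential structure. A minimal sketch (trial counts and seed are arbitrary):

```python
import random

def fixed_sequence(n_trials, cycle="SSSSD"):
    """Deviant at every fifth position: the large-scale regularity SSSSD."""
    return [cycle[i % len(cycle)] for i in range(n_trials)]

def randomized_sequence(n_trials, p_deviant=0.2, seed=0):
    """Same 20% deviant probability, but with no sequential regularity."""
    rng = random.Random(seed)
    return ["D" if rng.random() < p_deviant else "S" for _ in range(n_trials)]

fixed = fixed_sequence(1000)
randomized = randomized_sequence(1000)

# Both sequences have (close to) the same deviant rate; only the fixed
# one carries the predictable five-stimulus structure.
print(fixed[:10], fixed.count("D") / len(fixed))
print(randomized[:10], randomized.count("D") / len(randomized))
```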

  12. Geospatial Optimization of Siting Large-Scale Solar Projects

    Energy Technology Data Exchange (ETDEWEB)

    Macknick, Jordan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Quinby, Ted [National Renewable Energy Lab. (NREL), Golden, CO (United States); Caulfield, Emmet [Stanford Univ., CA (United States); Gerritsen, Margot [Stanford Univ., CA (United States); Diffendorfer, Jay [U.S. Geological Survey, Boulder, CO (United States); Haines, Seth [U.S. Geological Survey, Boulder, CO (United States)

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.
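The core of such a siting tool is a weighted multi-criteria overlay: each user-defined criterion is a normalized raster layer, and each cell's score is the weighted average of the layers. The sketch below is a generic illustration of that idea, not the tool's actual implementation; the layer names, weights, and grid are invented.

```python
import numpy as np

def site_scores(criteria, weights):
    """Weighted-sum multi-criteria score over a raster grid.

    criteria : dict name -> 2-D array, each normalized so that 1 is best
    weights  : dict name -> non-negative importance weight
    """
    total = sum(weights.values())
    score = np.zeros(next(iter(criteria.values())).shape)
    for name, layer in criteria.items():
        score += (weights[name] / total) * layer
    return score

# Hypothetical 4x4 normalized layers: solar resource, grid proximity,
# and an environmental-sensitivity layer (1 = least sensitive).
rng = np.random.default_rng(3)
criteria = {
    "solar": rng.uniform(0.5, 1.0, (4, 4)),
    "grid": rng.uniform(0.0, 1.0, (4, 4)),
    "environment": rng.uniform(0.0, 1.0, (4, 4)),
}
weights = {"solar": 3.0, "grid": 1.0, "environment": 2.0}

score = site_scores(criteria, weights)
best = np.unravel_index(np.argmax(score), score.shape)
print("best cell:", best, "score:", round(float(score[best]), 3))
```

Because the weights are user-supplied, stakeholders with different priorities simply re-run the overlay with their own weight vector, which is the interactivity the report emphasizes.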

  13. Parallel clustering algorithm for large-scale biological data sets.

    Science.gov (United States)

    Wang, Minchao; Zhang, Wu; Ding, Wang; Dai, Dongbo; Zhang, Huiran; Xie, Hao; Chen, Luonan; Guo, Yike; Xie, Jiang

    2014-01-01

    The recent explosion of biological data brings a great challenge for traditional clustering algorithms. With the increasing scale of data sets, much larger memory and longer runtime are required for cluster identification problems. The affinity propagation algorithm outperforms many other classical clustering algorithms and is widely applied in biological research. However, its time and space complexity become a great bottleneck when handling large-scale data sets. Moreover, the similarity matrix, whose construction takes a long runtime, is required before running the affinity propagation algorithm, since the algorithm clusters data sets based on the similarities between data pairs. Two types of parallel architectures are proposed in this paper to accelerate the similarity matrix construction procedure and the affinity propagation algorithm. The memory-shared architecture is used to construct the similarity matrix, and the distributed system is taken for the affinity propagation algorithm, because of its large memory size and great computing capacity. An appropriate way of data partitioning and reduction is designed in our method, in order to minimize the global communication cost among processes. A speedup of 100 is gained with 128 cores. The runtime is reduced from several hours to a few seconds, which indicates that the parallel algorithm is capable of handling large-scale data sets effectively. The parallel affinity propagation also achieves a good performance when clustering large-scale gene data (microarray) and detecting families in large protein superfamilies.
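The message-passing updates at the heart of affinity propagation (responsibilities and availabilities computed from a similarity matrix) are what a parallel implementation distributes across processes. The sketch below is a compact serial NumPy version on negative squared-Euclidean similarities, not the paper's parallel code; the damping factor, iteration count, and median preference are conventional defaults assumed for illustration.

```python
import numpy as np

def affinity_propagation(X, damping=0.5, iters=200):
    """Serial affinity propagation on negative squared-Euclidean
    similarities; the two message updates below are what a parallel
    version would partition across processes."""
    n = len(X)
    S = -((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    S[np.arange(n), np.arange(n)] = np.median(S)  # exemplar preference

    R = np.zeros((n, n))
    A = np.zeros((n, n))
    for _ in range(iters):
        # Responsibilities: evidence that k should be the exemplar for i.
        AS = A + S
        idx = AS.argmax(1)
        first = AS[np.arange(n), idx]
        AS[np.arange(n), idx] = -np.inf
        second = AS.max(1)
        Rn = S - first[:, None]
        Rn[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * Rn

        # Availabilities: accumulated support for k as an exemplar.
        Rp = np.maximum(R, 0)
        Rp[np.arange(n), np.arange(n)] = R.diagonal()
        An = Rp.sum(0)[None, :] - Rp
        dA = An.diagonal().copy()
        An = np.minimum(An, 0)
        An[np.arange(n), np.arange(n)] = dA
        A = damping * A + (1 - damping) * An

    return (A + R).argmax(1)  # exemplar index chosen by each point

# Two well-separated toy clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (6, 2)), rng.normal(5, 0.1, (6, 2))])
labels = affinity_propagation(X)
print(labels)
```

Note that building the dense similarity matrix is itself O(n²) in time and memory, which is exactly why the paper assigns that step to a shared-memory architecture.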

  14. Accelerating large-scale phase-field simulations with GPU

    Directory of Open Access Journals (Sweden)

    Xiaoming Shi

    2017-10-01

    Full Text Available A new package for accelerating large-scale phase-field simulations was developed using GPUs, based on the semi-implicit Fourier method. The package can solve a variety of equilibrium equations with different inhomogeneities, including long-range elastic, magnetostatic, and electrostatic interactions. Using a dedicated algorithm in the Compute Unified Device Architecture (CUDA), the Fourier spectral iterative perturbation method was integrated into the GPU package. The Allen-Cahn equation, the Cahn-Hilliard equation, and a phase-field model with long-range interaction were each solved with the GPU implementation to test the performance of the package. A comparison of the results between the solver executed on a single CPU and the one on GPU showed that the GPU version runs up to 50 times faster. The present study therefore contributes to the acceleration of large-scale phase-field simulations and provides guidance for experiments to design large-scale functional devices.
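    The semi-implicit Fourier scheme the package is built on treats the stiff Laplacian implicitly in spectral space and the nonlinear term explicitly. A minimal CPU/NumPy sketch for one time step of the Allen-Cahn equation dphi/dt = -M(phi^3 - phi - kappa*laplacian(phi)) on a square periodic grid (function name, unit grid spacing, and default coefficients are illustrative assumptions, not the package's API):

```python
import numpy as np

def allen_cahn_step(phi, dt=0.1, M=1.0, kappa=1.0):
    """Advance a 2-D periodic Allen-Cahn field by one semi-implicit step."""
    n = phi.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n)        # angular wavenumbers, unit spacing
    k2 = k[:, None] ** 2 + k[None, :] ** 2     # |k|^2 on the 2-D grid
    g_hat = np.fft.fft2(phi ** 3 - phi)        # nonlinear term, explicit
    # (1 + dt*M*kappa*k^2) phi_hat_new = phi_hat - dt*M*g_hat
    phi_hat = (np.fft.fft2(phi) - dt * M * g_hat) / (1.0 + dt * M * kappa * k2)
    return np.real(np.fft.ifft2(phi_hat))
```

Iterating this step on a small random field drives it toward the +/-1 equilibria with diffuse interfaces; on a GPU the FFTs and the element-wise divide are precisely the operations that parallelize well, which is the source of the reported speedup.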

  15. VESPA: Very large-scale Evolutionary and Selective Pressure Analyses

    Directory of Open Access Journals (Sweden)

    Andrew E. Webb

    2017-06-01

    Full Text Available Background Large-scale molecular evolutionary analyses of protein coding sequences require a number of preparatory inter-related steps, from finding gene families to generating alignments and phylogenetic trees and assessing selective pressure variation. Each phase of these analyses can present significant challenges, particularly when working with entire proteomes (all protein coding sequences in a genome) from a large number of species. Methods We present VESPA, software capable of automating a selective pressure analysis using codeML in addition to the preparatory analyses and summary statistics. VESPA is written in Python and Perl and is designed to run within a UNIX environment. Results We have benchmarked VESPA and our results show that the method is consistent, performs well on both large-scale and smaller-scale datasets, and produces results in line with previously published datasets. Discussion Large-scale gene family identification, sequence alignment, and phylogeny reconstruction are all important aspects of large-scale molecular evolutionary analyses. VESPA provides flexible software for simplifying these processes along with downstream selective pressure variation analyses. The software automatically interprets results from codeML and produces simplified summary files to assist the user in better understanding the results. VESPA may be found at the following website: http://www.mol-evol.org/VESPA.

  16. Problems of large-scale vertically-integrated aquaculture

    Energy Technology Data Exchange (ETDEWEB)

    Webber, H H; Riordan, P F

    1976-01-01

    The problems of vertically-integrated aquaculture are outlined; they concern: species limitations (in the market, biological and technological); site selection, feed, manpower needs, and legal, institutional and financial requirements. The gaps in understanding of, and the constraints limiting, large-scale aquaculture are listed. Future action is recommended with respect to: types and diversity of species to be cultivated, marketing, biotechnology (seed supply, disease control, water quality and concerted effort), siting, feed, manpower, legal and institutional aids (granting of water rights, grants, tax breaks, duty-free imports, etc.), and adequate financing. The lack of hard data based on experience suggests that large-scale vertically-integrated aquaculture is a high-risk enterprise, and with the high capital investment required, banks and funding institutions are wary of supporting it. Investment in pilot projects is suggested to demonstrate that large-scale aquaculture can be a fully functional and successful business. Construction and operation of such pilot farms is judged to be in the interests of both the public and private sectors.

  17. Large-scale fracture mechanics testing -- requirements and possibilities

    International Nuclear Information System (INIS)

    Brumovsky, M.

    1993-01-01

    Application of fracture mechanics to very important and/or complicated structures, such as reactor pressure vessels, also raises questions about the reliability and precision of such calculations. These problems become more pronounced under elastic-plastic loading conditions and/or in parts with non-homogeneous materials (base metal and austenitic cladding, property gradients through the material thickness) or with non-homogeneous stress fields (nozzles, bolt threads, residual stresses, etc.). For such special cases some verification by large-scale testing is necessary and valuable. This paper discusses problems connected with planning such experiments with respect to their limitations and to the requirements for reliably transferring the results to an actual vessel. At the same time, the possibilities of small-scale model experiments are analyzed, mostly in connection with transferring results between standard, small-scale and large-scale experiments. Experience from 30 years of large-scale testing at SKODA is used as an example to support this analysis. 1 fig

  18. Photorealistic large-scale urban city model reconstruction.

    Science.gov (United States)

    Poullis, Charalambos; You, Suya

    2009-01-01

    The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games, and films. A realistic representation of large-scale environments is therefore imperative for the success of such applications, since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, the task of creating such large-scale virtual environments remains time-consuming, largely manual work. In this work, we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel, extendible, parameterized geometric primitive is presented for the automatic identification and reconstruction of building structures. In addition, buildings with complex roofs containing complex linear and nonlinear surfaces are reconstructed interactively using a linear polygonal and a nonlinear primitive, respectively. Second, we present a rendering pipeline for the composition of photorealistic textures which, unlike existing techniques, can recover missing or occluded texture information by integrating multiple observations captured from different optical sensors (ground, aerial, and satellite).

  19. Image-based Exploration of Large-Scale Pathline Fields

    KAUST Repository

    Nagoor, Omniah H.

    2014-05-27

    While real-time applications are nowadays routinely used in visualizing large numerical simulations and volumes, handling these large-scale datasets requires high-end graphics clusters or supercomputers to process and visualize them. However, not all users have access to powerful clusters. Therefore, it is challenging to come up with a visualization approach that provides insight to large-scale datasets on a single computer. Explorable images (EI) is one of the methods that allows users to handle large data on a single workstation. Although it is a view-dependent method, it combines both exploration and modification of visual aspects without re-accessing the original huge data. In this thesis, we propose a novel image-based method that applies the concept of EI in visualizing large flow-field pathlines data. The goal of our work is to provide an optimized image-based method, which scales well with the dataset size. Our approach is based on constructing a per-pixel linked list data structure in which each pixel contains a list of pathlines segments. With this view-dependent method it is possible to filter, color-code and explore large-scale flow data in real-time. In addition, optimization techniques such as early-ray termination and deferred shading are applied, which further improves the performance and scalability of our approach.
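    The per-pixel list idea can be sketched on the CPU: after projecting each pathline segment to screen space, every covered pixel stores its list of fragments, so filtering and color-coding later run per pixel without touching the raw flow data. The function names, tuple layout, and the simple point-sampling rasterizer below are illustrative assumptions, not the thesis' GPU implementation.

```python
import math
from collections import defaultdict

def build_pixel_lists(segments, width, height):
    """Bucket projected pathline segments into per-pixel fragment lists.

    segments: iterable of (line_id, (x0, y0), (x1, y1), scalar), already
    projected to screen coordinates in [0, width) x [0, height).
    """
    pixels = defaultdict(list)  # (px, py) -> list of (line_id, scalar)
    for line_id, (x0, y0), (x1, y1), scalar in segments:
        steps = max(1, int(math.hypot(x1 - x0, y1 - y0)))
        for s in range(steps + 1):          # roughly one sample per pixel
            t = s / steps
            px = int(x0 + t * (x1 - x0))
            py = int(y0 + t * (y1 - y0))
            if 0 <= px < width and 0 <= py < height:
                frag = (line_id, scalar)
                if frag not in pixels[(px, py)]:
                    pixels[(px, py)].append(frag)
    return pixels

def filter_pixels(pixels, keep_ids):
    """Re-filter the rendered view without re-reading the flow data."""
    return {p: [f for f in frags if f[0] in keep_ids]
            for p, frags in pixels.items()}
```

On the GPU the dictionary becomes a linked list per pixel built with atomic counters, but the view-dependent payoff is the same: once the lists exist, filtering and recoloring are image-space passes whose cost depends on the screen, not on the dataset size.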

  20. [A large-scale accident in Alpine terrain].

    Science.gov (United States)

    Wildner, M; Paal, P

    2015-02-01

    Due to the geographical conditions, large-scale accidents amounting to mass casualty incidents (MCI) in Alpine terrain regularly present rescue teams with huge challenges. Using an example incident, specific conditions and typical problems associated with such a situation are presented. The first rescue team members to arrive have the elementary tasks of qualified triage and communication to the control room, which is required to dispatch the necessary additional support. Only with a clear "concept", to which all have to adhere, can the subsequent chaos phase be limited. In this respect, the time factor, compounded by adverse weather conditions or darkness, creates enormous pressure. Additional hazards are frostbite and hypothermia. If priorities can be established in terms of urgency, treatment and procedure algorithms have proven successful. For evacuation of casualties, helicopter transport should be sought. Due to the low density of hospitals in Alpine regions, it is often necessary to distribute the patients over a wide area. Rescue operations in Alpine terrain have to be performed according to the particular conditions and require rescue teams to have specific knowledge and expertise. The possibility of a large-scale accident should be considered when planning events. With respect to optimization of rescue measures, regular training and exercises are rational, as is the analysis of previous large-scale Alpine accidents.

  1. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    NARCIS (Netherlands)

    Loon, van A.F.; Huijgevoort, van M.H.J.; Lanen, van H.A.J.

    2012-01-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, unclear whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is how well large-scale models simulate the propagation from meteorological to hydrological drought

  2. Reproductive health indicators of fishes from Pennsylvania watersheds: association with chemicals of emerging concern.

    Science.gov (United States)

    Blazer, V S; Iwanowicz, D D; Walsh, H L; Sperry, A J; Iwanowicz, L R; Alvarez, D A; Brightbill, R A; Smith, G; Foreman, W T; Manning, R

    2014-10-01

    Fishes were collected at 16 sites within the three major river drainages (Delaware, Susquehanna, and Ohio) of Pennsylvania. Three species were evaluated for biomarkers of estrogenic/antiandrogenic exposure, including plasma vitellogenin and testicular oocytes in male fishes. Smallmouth bass Micropterus dolomieu, white sucker Catostomus commersonii, and redhorse sucker Moxostoma species were collected in the summer, a period of low flow and low reproductive activity. Smallmouth bass were the only species in which testicular oocytes were observed; however, measurable concentrations of plasma vitellogenin were found in male bass and white sucker. The percentage of male bass with testicular oocytes ranged from 10 to 100%, with the highest prevalence and severity in bass collected in the Susquehanna drainage. The percentage of males with plasma vitellogenin ranged from 0 to 100% in both bass and sucker. Biological findings were compared with chemical analyses of discrete water samples collected at the time of fish collections. Estrone concentrations correlated with testicular oocytes prevalence and severity and with the percentage of male bass with vitellogenin. No correlations were noted with the percentage of male sucker with vitellogenin and water chemical concentrations. The prevalence and severity of testicular oocytes in bass also correlated with the percent of agricultural land use in the watershed above a site. Two sites within the Susquehanna drainage and one in the Delaware were immediately downstream of wastewater treatment plants to compare results with upstream fish. The percentage of male bass with testicular oocytes was not consistently higher downstream; however, severity did tend to increase downstream.

  3. Robust large-scale parallel nonlinear solvers for simulations.

    Energy Technology Data Exchange (ETDEWEB)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA)

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any
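    Broyden's replacement of the Jacobian by a secant approximation is easy to sketch at small scale. The dense NumPy version below (function name and defaults are illustrative; the report's limited-memory variant stores the rank-one update vectors instead of the full matrix) maintains an approximate inverse Jacobian H, so each iteration needs only one function evaluation and no Jacobian:

```python
import numpy as np

def broyden_solve(F, x0, tol=1e-10, max_iter=100):
    """Solve F(x) = 0 with Broyden's 'good' method (dense sketch)."""
    x = np.asarray(x0, dtype=float)
    f = F(x)
    H = np.eye(x.size)              # inverse-Jacobian approximation
    for _ in range(max_iter):
        if np.linalg.norm(f) < tol:
            break
        s = -H @ f                  # quasi-Newton step
        x_new = x + s
        f_new = F(x_new)
        y = f_new - f
        Hy = H @ y
        # Sherman-Morrison rank-one update enforcing the secant
        # condition H_new @ y == s
        H += np.outer(s - Hy, s @ H) / (s @ Hy)
        x, f = x_new, f_new
    return x
```

Because the update is rank-one, a limited-memory implementation can represent H implicitly by the stored (s, y) pairs, which is what makes the approach viable for the large-scale fluid-flow and circuit problems discussed in the report.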

  4. Foundational perspectives on causality in large-scale brain networks

    Science.gov (United States)

    Mannino, Michael; Bressler, Steven L.

    2015-12-01

    A profusion of recent work in cognitive neuroscience has been concerned with the endeavor to uncover causal influences in large-scale brain networks. However, despite the fact that many papers give a nod to the important theoretical challenges posed by the concept of causality, this explosion of research has generally not been accompanied by a rigorous conceptual analysis of the nature of causality in the brain. This review provides both a descriptive and prescriptive account of the nature of causality as found within and between large-scale brain networks. In short, it seeks to clarify the concept of causality in large-scale brain networks both philosophically and scientifically. This is accomplished by briefly reviewing the rich philosophical history of work on causality, especially focusing on contributions by David Hume, Immanuel Kant, Bertrand Russell, and Christopher Hitchcock. We go on to discuss the impact that various interpretations of modern physics have had on our understanding of causality. Throughout all this, a central focus is the distinction between theories of deterministic causality (DC), whereby causes uniquely determine their effects, and probabilistic causality (PC), whereby causes change the probability of occurrence of their effects. We argue that, given the topological complexity of its large-scale connectivity, the brain should be considered as a complex system and its causal influences treated as probabilistic in nature. We conclude that PC is well suited for explaining causality in the brain for three reasons: (1) brain causality is often mutual; (2) connectional convergence dictates that only rarely is the activity of one neuronal population uniquely determined by another one; and (3) the causal influences exerted between neuronal populations may not have observable effects. A number of different techniques are currently available to characterize causal influence in the brain. Typically, these techniques quantify the statistical

  5. Emerging and Legacy Contaminants in The Foodweb in The Lower Columbia River: USGS ConHab Project

    Science.gov (United States)

    Nilsen, E. B.; Alvarez, D.; Counihan, T.; Elias, E.; Gelfenbaum, G. R.; Hardiman, J.; Jenkins, J.; Mesa, M.; Morace, J.; Patino, R.; Torres, L.; Waite, I.; Zaugg, S.

    2012-12-01

    An interdisciplinary study, USGS Columbia River Contaminants and Habitat Characterization (ConHab) project, investigates transport pathways, chemical fate, and effects of polybrominated diphenyl ethers (PBDEs) and other endocrine disrupting chemicals (EDCs) in aquatic media and the foodweb in the lower Columbia River, Oregon and Washington. Polar organic chemical integrative samplers (POCIS) and semipermeable membrane devices (SPMDs) were co-deployed at each of 10 sites in 2008 to provide a measure of the dissolved concentrations of select PBDEs, chlorinated pesticides, and other EDCs. PBDE-47 was the most prevalent of the PBDEs detected. Numerous organochlorine pesticides, both banned and current-use, including hexachlorobenzene, pentachloroanisole, dichlorodiphenyltrichloroethane (DDT) and its degradates, chlorpyrifos, endosulfan, and the endosulfan degradation products, were measured at each site. EDCs commonly detected included a series of polycyclic aromatic hydrocarbons (PAHs), fragrances (galaxolide), pesticides (chlorpyrifos and atrazine), plasticizers (phthalates), and flame retardants (phosphates). The downstream sites tended to have the highest concentrations of contaminants in the lower Columbia River. In 2009 and 2010 passive samplers were deployed and resident largescale suckers (Catostomus macrocheilus) and surface bed sediments were collected at three of the original sites representing a gradient of exposure based on 2008 results. Brain, fillet, liver, stomach, and gonad tissues were analyzed. Chemical concentrations were highest in livers, followed by brain, stomach, gonad, and, lastly, fillet. Concentrations of halogenated compounds in tissue samples ranged from PBDE-100 > PBDE-154 > PBDE-153. Concentrations in tissues and in sediments increased moving downstream from Skamania, WA to Columbia City, OR to Longview, WA. Preliminary biomarker results indicate that fish at the downstream sites experience greater stress relative to the upstream site

  6. Large-scale compositional heterogeneity in the Earth's mantle

    Science.gov (United States)

    Ballmer, M.

    2017-12-01

    Seismic imaging of subducted Farallon and Tethys lithosphere in the lower mantle has been taken as evidence for whole-mantle convection, and efficient mantle mixing. However, cosmochemical constraints point to a lower-mantle composition that has a lower Mg/Si compared to upper-mantle pyrolite. Moreover, geochemical signatures of magmatic rocks indicate the long-term persistence of primordial reservoirs somewhere in the mantle. In this presentation, I establish geodynamic mechanisms for sustaining large-scale (primordial) heterogeneity in the Earth's mantle using numerical models. Mantle flow is controlled by rock density and viscosity. Variations in intrinsic rock density, such as due to heterogeneity in basalt or iron content, can induce layering or partial layering in the mantle. Layering can be sustained in the presence of persistent whole mantle convection due to active "unmixing" of heterogeneity in low-viscosity domains, e.g. in the transition zone or near the core-mantle boundary [1]. On the other hand, lateral variations in intrinsic rock viscosity, such as due to heterogeneity in Mg/Si, can strongly affect the mixing timescales of the mantle. In the extreme case, intrinsically strong rocks may remain unmixed through the age of the Earth, and persist as large-scale domains in the mid-mantle due to focusing of deformation along weak conveyor belts [2]. That large-scale lateral heterogeneity and/or layering can persist in the presence of whole-mantle convection can explain the stagnation of some slabs, as well as the deflection of some plumes, in the mid-mantle. These findings indeed motivate new seismic studies for rigorous testing of model predictions. [1] Ballmer, M. D., N. C. Schmerr, T. Nakagawa, and J. Ritsema (2015), Science Advances, doi:10.1126/sciadv.1500815. [2] Ballmer, M. D., C. Houser, J. W. Hernlund, R. Wentzcovitch, and K. Hirose (2017), Nature Geoscience, doi:10.1038/ngeo2898.

  7. Large-Scale Traveling Weather Systems in Mars’ Southern Extratropics

    Science.gov (United States)

    Hollingsworth, Jeffery L.; Kahre, Melinda A.

    2017-10-01

    Between late fall and early spring, Mars’ middle- and high-latitude atmosphere supports strong mean equator-to-pole temperature contrasts and an accompanying mean westerly polar vortex. Observations from both the MGS Thermal Emission Spectrometer (TES) and the MRO Mars Climate Sounder (MCS) indicate that a mean baroclinicity-barotropicity supports intense, large-scale eastward traveling weather systems (i.e., transient synoptic-period waves). Such extratropical weather disturbances are critical components of the global circulation as they serve as agents in the transport of heat and momentum, and generalized scalar/tracer quantities (e.g., atmospheric dust, water-vapor and ice clouds). The character of such traveling extratropical synoptic disturbances in Mars' southern hemisphere during late winter through early spring is investigated using a moderately high-resolution Mars global climate model (Mars GCM). This Mars GCM imposes interactively-lifted and radiatively-active dust based on a threshold value of the surface stress. The model exhibits a reasonable "dust cycle" (i.e., globally averaged, a dustier atmosphere during southern spring and summer occurs). Compared to the northern-hemisphere counterparts, the southern synoptic-period weather disturbances and accompanying frontal waves have smaller meridional and zonal scales, and are far less intense. Influences of the zonally asymmetric (i.e., east-west varying) topography on southern large-scale weather are investigated, in addition to large-scale up-slope/down-slope flows and the diurnal cycle. A southern storm zone develops in late winter and early spring in the western hemisphere via orographic influences from the Tharsis highlands, and the Argyre and Hellas impact basins. Geographically localized transient-wave activity diagnostics are constructed that illuminate dynamical differences amongst the simulations and these are presented.

  8. On the Phenomenology of an Accelerated Large-Scale Universe

    Directory of Open Access Journals (Sweden)

    Martiros Khurshudyan

    2016-10-01

    Full Text Available In this review paper, several new results towards explaining the accelerated expansion of the large-scale universe are discussed. On the other hand, inflation is the early-time accelerated era, so the universe is symmetric in the sense of accelerated expansion. The accelerated expansion is one of the long-standing problems in modern cosmology, and in physics in general. There are several well defined approaches to solving this problem. One of them is the assumption that dark energy exists in the recent universe. It is believed that dark energy is responsible for antigravity, while dark matter has a gravitational nature and is responsible, in general, for structure formation. A different approach is an appropriate modification of general relativity, including, for instance, f(R) and f(T) theories of gravity. On the other hand, attempts to build theories of quantum gravity and assumptions about the existence of extra dimensions, or the possible variability of the gravitational constant and the speed of light (among others), provide interesting modifications of general relativity applicable to problems of modern cosmology, too. In particular, two groups of cosmological models are discussed here. In the first group the problem of the accelerated expansion of the large-scale universe is addressed with a new idea, named varying ghost dark energy. The second group contains cosmological models addressing the same problem through either new parameterizations of the equation-of-state parameter of dark energy (like varying polytropic gas), or nonlinear interactions between dark energy and dark matter. Moreover, for cosmological models involving varying ghost dark energy, massless particle creation in an appropriate radiation-dominated universe (when the background dynamics is due to general relativity) is demonstrated as well.
Exploring the nature of the accelerated expansion of the large-scale universe involving generalized

  10. Nonlinear evolution of large-scale structure in the universe

    International Nuclear Information System (INIS)

    Frenk, C.S.; White, S.D.M.; Davis, M.

    1983-01-01

    Using N-body simulations we study the nonlinear development of primordial density perturbations in an Einstein--de Sitter universe. We compare the evolution of an initial distribution without small-scale density fluctuations to evolution from a random Poisson distribution. These initial conditions mimic the assumptions of the adiabatic and isothermal theories of galaxy formation. The large-scale structures which form in the two cases are markedly dissimilar. In particular, the correlation function xi(r) and the visual appearance of our adiabatic (or ''pancake'') models better match the observed distribution of galaxies. This distribution is characterized by large-scale filamentary structure. Because the pancake models do not evolve in a self-similar fashion, the slope of xi(r) steepens with time; as a result there is a unique epoch at which these models fit the galaxy observations. We find the ratio of cutoff length to correlation length at this time to be λ_min/r_0 = 5.1; its expected value in a neutrino-dominated universe is 4(Ωh)^(-1) (H_0 = 100h km s^(-1) Mpc^(-1)). At early epochs these models predict a negligible amplitude for xi(r) and could explain the lack of measurable clustering in the Lyα absorption lines of high-redshift quasars. However, large-scale structure in our models collapses after z = 2. If this collapse precedes galaxy formation, as in the usual pancake theory, galaxies formed uncomfortably recently. The extent of this problem may depend on the cosmological model used; the present series of experiments should be extended in the future to include models with Ω < 1
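    The correlation function xi(r) compared against observations above can be estimated from a snapshot by pair counting. A brute-force sketch with the simple Peebles-Hauser estimator DD/RR - 1 follows (function name, parameters, and the lack of periodic wrapping are simplifying assumptions for illustration; production codes use tree-based counts and better estimators):

```python
import numpy as np

def xi_of_r(data, box, bins, n_random=None, seed=0):
    """Estimate xi(r) for an (N, 3) array of positions in a cube of side box."""
    rng = np.random.default_rng(seed)
    n_random = n_random or len(data)
    rand = rng.uniform(0, box, size=(n_random, 3))

    def pair_counts(pts):
        # all unique pair separations (brute force, fine for small N)
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        d = d[np.triu_indices(len(pts), k=1)]
        return np.histogram(d, bins=bins)[0]

    # normalize by the number of pairs so DD and RR are comparable
    dd = pair_counts(data) / (len(data) * (len(data) - 1) / 2)
    rr = pair_counts(rand) / (n_random * (n_random - 1) / 2)
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(rr > 0, dd / rr - 1.0, 0.0)
```

For an unclustered (Poisson) distribution the estimator scatters around zero, while clustered positions give xi > 0 at small separations; a steepening of the measured slope with time is the self-similarity diagnostic the abstract describes.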

  11. Generation Expansion Planning Considering Integrating Large-scale Wind Generation

    DEFF Research Database (Denmark)

    Zhang, Chunyu; Ding, Yi; Østergaard, Jacob

    2013-01-01

    necessitated the inclusion of more innovative and sophisticated approaches in power system investment planning. A bi-level generation expansion planning approach considering large-scale wind generation was proposed in this paper. The first phase is the investment decision, while the second phase is the production...... optimization decision. A multi-objective PSO (MOPSO) algorithm was introduced to solve this optimization problem, which can accelerate convergence and guarantee the diversity of the Pareto-optimal front set as well. The feasibility and effectiveness of the proposed bi-level planning approach and the MOPSO...

  12. Safeguarding aspects of large-scale commercial reprocessing plants

    International Nuclear Information System (INIS)

    1979-03-01

    The paper points out that several solutions to the problems of safeguarding large-scale plants have been put forward: (1) Increased measurement accuracy. This does not remove the problem of timely detection. (2) Continuous in-process measurement. As yet unproven and likely to be costly. (3) More extensive use of containment and surveillance. The latter appears to be feasible but requires the incorporation of safeguards into plant design and sufficient redundancy to protect the operator's interests. The advantages of shifting the emphasis of safeguards philosophy from quantitative goals to the analysis of diversion strategies should be considered

  13. Large-Scale Graph Processing Using Apache Giraph

    KAUST Repository

    Sakr, Sherif

    2017-01-07

    This book takes its reader on a journey through Apache Giraph, a popular distributed graph processing platform designed to bring the power of big data processing to graph data. Designed as a step-by-step self-study guide for everyone interested in large-scale graph processing, it describes the fundamental abstractions of the system, its programming models and various techniques for using the system to process graph data at scale, including the implementation of several popular and advanced graph analytics algorithms.

  14. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard

    2012-01-01

    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulations applications. The intention is to identify new research directions in this field and

  15. Large-Scale Analysis of Network Bistability for Human Cancers

    Science.gov (United States)

    Shiraishi, Tetsuya; Matsuyama, Shinako; Kitano, Hiroaki

    2010-01-01

    Protein–protein interaction and gene regulatory networks are likely to be locked in a state corresponding to a disease by the behavior of one or more bistable circuits exhibiting switch-like behavior. Sets of genes could be over-expressed or repressed when anomalies due to disease appear, and the circuits responsible for this over- or under-expression might persist for as long as the disease state continues. This paper shows how a large-scale analysis of network bistability for various human cancers can identify genes that can potentially serve as drug targets or diagnosis biomarkers. PMID:20628618
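
    The switch-like circuits the authors describe can be seen in the classic two-gene toggle switch, in which two genes mutually repress each other. The sketch below is a textbook illustration, not the paper's method; the equations and parameters are standard choices, and the simple Euler integrator is mine:

    ```python
    def simulate_toggle(x, y, alpha=4.0, n=2, dt=0.01, steps=20000):
        """Euler-integrate a two-gene mutual-repression circuit to steady state.

        dx/dt = alpha / (1 + y^n) - x
        dy/dt = alpha / (1 + x^n) - y
        """
        for _ in range(steps):
            dx = alpha / (1 + y ** n) - x
            dy = alpha / (1 + x ** n) - y
            x, y = x + dt * dx, y + dt * dy
        return x, y

    # The same circuit settles into opposite stable states depending on which
    # gene starts ahead -- the bistability that can lock in a disease state.
    state_a = simulate_toggle(2.0, 0.1)   # gene X ends high, gene Y low
    state_b = simulate_toggle(0.1, 2.0)   # gene Y ends high, gene X low
    ```

    Either steady state persists until a large enough perturbation flips the switch, which is why identifying the genes in such circuits suggests drug targets.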

  16. Development of Large-Scale Spacecraft Fire Safety Experiments

    DEFF Research Database (Denmark)

    Ruff, Gary A.; Urban, David L.; Fernandez-Pello, A. Carlos

    2013-01-01

    exploration missions outside of low-earth orbit and accordingly, more complex in terms of operations, logistics, and safety. This will increase the challenge of ensuring a fire-safe environment for the crew throughout the mission. Based on our fundamental uncertainty of the behavior of fires in low...... of the spacecraft fire safety risk. The activity of this project is supported by an international topical team of fire experts from other space agencies who conduct research that is integrated into the overall experiment design. The large-scale space flight experiment will be conducted in an Orbital Sciences...

  17. Enabling Large-Scale Biomedical Analysis in the Cloud

    Directory of Open Access Journals (Sweden)

    Ying-Chih Lin

    2013-01-01

    Recent progress in high-throughput instrumentation has led to astonishing growth in both the volume and the complexity of biomedical data collected from various sources. Such planet-scale data poses serious challenges for storage and computing technologies. Cloud computing is a promising alternative because it jointly provides storage and high-performance computing for large-scale data. This work briefly introduces data-intensive computing systems and summarizes existing cloud-based resources in bioinformatics. These developments and applications should help biomedical research make the vast amount of diverse data meaningful and usable.

  18. Current status of large-scale cryogenic gravitational wave telescope

    International Nuclear Information System (INIS)

    Kuroda, K; Ohashi, M; Miyoki, S; Uchiyama, T; Ishitsuka, H; Yamamoto, K; Kasahara, K; Fujimoto, M-K; Kawamura, S; Takahashi, R; Yamazaki, T; Arai, K; Tatsumi, D; Ueda, A; Fukushima, M; Sato, S; Nagano, S; Tsunesada, Y; Zhu, Zong-Hong; Shintomi, T; Yamamoto, A; Suzuki, T; Saito, Y; Haruyama, T; Sato, N; Higashi, Y; Tomaru, T; Tsubono, K; Ando, M; Takamori, A; Numata, K; Aso, Y; Ueda, K-I; Yoneda, H; Nakagawa, K; Musha, M; Mio, N; Moriwaki, S; Somiya, K; Araya, A; Kanda, N; Telada, S; Tagoshi, H; Nakamura, T; Sasaki, M; Tanaka, T; Oohara, K; Takahashi, H; Miyakawa, O; Tobar, M E

    2003-01-01

    The large-scale cryogenic gravitational wave telescope (LCGT) project is the proposed advancement of TAMA, which will be able to detect the coalescences of binary neutron stars occurring in our galaxy. LCGT is intended to detect coalescence events within about 240 Mpc, at a rate expected to be from 0.1 to several events per year. LCGT has Fabry-Perot cavities with a 3 km baseline, and its mirrors are cooled to a cryogenic temperature of 20 K. It is planned to be built underground in the Kamioka mine. This paper overviews the revised design and the current status of the R and D

  19. Properties of large-scale methane/hydrogen jet fires

    Energy Technology Data Exchange (ETDEWEB)

    Studer, E. [CEA Saclay, DEN, LTMF Heat Transfer and Fluid Mech Lab, 91 - Gif-sur-Yvette (France); Jamois, D.; Leroy, G.; Hebrard, J. [INERIS, F-60150 Verneuil En Halatte (France); Jallais, S. [Air Liquide, F-78350 Jouy En Josas (France); Blanchetiere, V. [GDF SUEZ, 93 - La Plaine St Denis (France)

    2009-12-15

    A future economy based on reduced use of carbon-based fuels for power generation and transportation may consider hydrogen as a possible energy carrier. Extensive and widespread use of hydrogen might require a pipeline network. The alternatives are to use the existing natural gas network or to design a dedicated network. Whatever the solution, mixing hydrogen with natural gas will substantially modify the consequences of accidents. The French National Research Agency (ANR) funded project HYDROMEL focuses on these critical questions. Within this project, large-scale jet fires have been studied experimentally and numerically. The main characteristics of these flames, including visible length, radiation fluxes and blowout, have been assessed. (authors)

  20. Large-Scale Graph Processing Using Apache Giraph

    KAUST Repository

    Sakr, Sherif; Orakzai, Faisal Moeen; Abdelaziz, Ibrahim; Khayyat, Zuhair

    2017-01-01

    This book takes its reader on a journey through Apache Giraph, a popular distributed graph processing platform designed to bring the power of big data processing to graph data. Designed as a step-by-step self-study guide for everyone interested in large-scale graph processing, it describes the fundamental abstractions of the system, its programming models and various techniques for using the system to process graph data at scale, including the implementation of several popular and advanced graph analytics algorithms.

  1. Multidimensional quantum entanglement with large-scale integrated optics

    DEFF Research Database (Denmark)

    Wang, Jianwei; Paesani, Stefano; Ding, Yunhong

    2018-01-01

    -dimensional entanglement. A programmable bipartite entangled system is realized with dimension up to 15 × 15 on a large-scale silicon-photonics quantum circuit. The device integrates more than 550 photonic components on a single chip, including 16 identical photon-pair sources. We verify the high precision, generality......The ability to control multidimensional quantum systems is key for the investigation of fundamental science and for the development of advanced quantum technologies. We demonstrate a multidimensional integrated quantum photonic platform able to generate, control and analyze high...

  2. Test on large-scale seismic isolation elements, 2

    International Nuclear Information System (INIS)

    Mazda, T.; Moteki, M.; Ishida, K.; Shiojiri, H.; Fujita, T.

    1991-01-01

    The seismic isolation test program of the Central Research Institute of Electric Power Industry (CRIEPI) to apply seismic isolation to fast breeder reactor (FBR) plants was started in 1987. In this test program, the demonstration test of seismic isolation elements was considered one of the most important research items. Facilities for testing seismic isolation elements were built at the Abiko Research Laboratory of CRIEPI, and various tests of large-scale seismic isolation elements have been conducted to date. Much important test data for developing design technical guidelines has been obtained. (author)

  3. How Large-Scale Research Facilities Connect to Global Research

    DEFF Research Database (Denmark)

    Lauto, Giancarlo; Valentin, Finn

    2013-01-01

    Policies for large-scale research facilities (LSRFs) often highlight their spillovers to industrial innovation and their contribution to the external connectivity of the regional innovation system hosting them. Arguably, the particular institutional features of LSRFs are conducive for collaborative...... research. However, based on data on publications produced in 2006–2009 at the Neutron Science Directorate of Oak Ridge National Laboratory in Tennessee (United States), we find that internationalization of its collaborative research is restrained by coordination costs similar to those characterizing other...

  4. Novel algorithm of large-scale simultaneous linear equations

    International Nuclear Information System (INIS)

    Fujiwara, T; Hoshi, T; Yamamoto, S; Sogabe, T; Zhang, S-L

    2010-01-01

    We review our recently developed methods for solving large-scale simultaneous linear equations and their applications to electronic structure calculations in both one-electron and many-electron theory. The method is the shifted COCG (conjugate orthogonal conjugate gradient) method based on the Krylov subspace; the most important issues for applications are the shift equation and the seed-switching method, which greatly reduce the computational cost. Applications to nano-scale Si crystals and the double-orbital extended Hubbard model are presented.
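
    The shifted COCG method targets complex symmetric shifted systems (A + σI)x = b; its foundation is the ordinary conjugate gradient iteration over a Krylov subspace. A minimal sketch of plain CG for a real symmetric positive-definite system (the 2x2 matrix is invented for illustration; this is the generic textbook algorithm, not the paper's shifted variant):

    ```python
    def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
        """Solve A x = b for symmetric positive-definite A (dense list-of-lists)."""
        n = len(b)
        x = [0.0] * n
        r = b[:]              # residual r = b - A x, with x = 0 initially
        p = r[:]              # first search direction
        rs_old = sum(ri * ri for ri in r)
        for _ in range(max_iter):
            Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
            alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
            x = [x[i] + alpha * p[i] for i in range(n)]
            r = [r[i] - alpha * Ap[i] for i in range(n)]
            rs_new = sum(ri * ri for ri in r)
            if rs_new < tol ** 2:         # converged: ||r|| < tol
                break
            p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
            rs_old = rs_new
        return x

    x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
    ```

    The shifted variants reuse one Krylov subspace for many shifts σ at once, which is where the cost saving described in the abstract comes from.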

  5. Detecting differential protein expression in large-scale population proteomics

    Energy Technology Data Exchange (ETDEWEB)

    Ryu, Soyoung; Qian, Weijun; Camp, David G.; Smith, Richard D.; Tompkins, Ronald G.; Davis, Ronald W.; Xiao, Wenzhong

    2014-06-17

    Mass spectrometry-based high-throughput quantitative proteomics shows great potential in clinical biomarker studies, identifying and quantifying thousands of proteins in biological samples. However, methods are needed to appropriately handle issues unique to mass spectrometry data in order to detect as many biomarker proteins as possible. One issue is that different mass spectrometry experiments generate quite different total numbers of quantified peptides, which can result in more missing peptide abundances in an experiment with a smaller total number of quantified peptides. Another issue is that the quantification of peptides is sometimes absent, especially for less abundant peptides, and such missing values carry information about the peptide abundance. Here, we propose a Significance Analysis for Large-scale Proteomics Studies (SALPS) that handles missing peptide intensity values caused by the two mechanisms mentioned above. Our model performs robustly on both simulated data and proteomics data from a large clinical study. Because variation in patient sample quality and drift in instrument performance are unavoidable in clinical studies performed over the course of several years, we believe that our approach will be useful for analyzing large-scale clinical proteomics data.

  6. IP over optical multicasting for large-scale video delivery

    Science.gov (United States)

    Jin, Yaohui; Hu, Weisheng; Sun, Weiqiang; Guo, Wei

    2007-11-01

    In IPTV systems, multicasting will play a crucial role in the delivery of high-quality video services and can significantly improve bandwidth efficiency. However, the scalability and signal quality of current IPTV can barely compete with existing broadcast digital TV systems, since it is difficult to implement large-scale multicasting with end-to-end guaranteed quality of service (QoS) in a packet-switched IP network. The Chinese 3TNet project aimed to build a high-performance broadband trial network to support large-scale concurrent streaming media and interactive multimedia services. The innovative idea of 3TNet is that an automatically switched optical network (ASON) with the capability of dynamic point-to-multipoint (P2MP) connections replaces the conventional IP multicasting network in the transport core, while the edge remains an IP multicasting network. In this paper, we introduce the network architecture and discuss the challenges of such IP-over-optical multicasting for video delivery.

  7. ANTITRUST ISSUES IN THE LARGE-SCALE FOOD DISTRIBUTION SECTOR

    Directory of Open Access Journals (Sweden)

    Enrico Adriano Raffaelli

    2014-12-01

    In light of the slow modernization of the Italian large-scale food distribution sector, its fragmentation at the national level, the significant role of cooperatives at the local level, and the alliances between food retail chains, the ICA has in recent years developed a strong interest in this sector. After analyzing the peculiarities of the Italian large-scale food distribution sector, this article presents the recent approach taken by the ICA toward the main antitrust issues in the sector. In the analysis of these issues, chiefly the contractual relations between large-scale (GDO) retailers and their suppliers, the introduction of Article 62 of Law no. 27 of 24 March 2012 is crucial: by facilitating and encouraging complaints by the interested parties, it should allow normal competitive dynamics to develop within the food distribution sector, where companies should be free to enter the market using the tools at their disposal, without undue restrictions.

  8. Exploiting Data Sparsity for Large-Scale Matrix Computations

    KAUST Repository

    Akbudak, Kadir; Ltaief, Hatem; Mikhalev, Aleksandr; Charara, Ali; Keyes, David E.

    2018-01-01

    Exploiting data sparsity in dense matrices is an algorithmic bridge between architectures that are increasingly memory-austere on a per-core basis and extreme-scale applications. The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library tackles this challenging problem by achieving significant reductions in time to solution and memory footprint, while preserving a specified accuracy requirement of the application. HiCMA provides a high-performance implementation on distributed-memory systems of one of the most widely used matrix factorizations in large-scale scientific applications, i.e., the Cholesky factorization. It employs the tile low-rank data format to compress the dense data-sparse off-diagonal tiles of the matrix. It then decomposes the matrix computations into interdependent tasks and relies on the dynamic runtime system StarPU for asynchronous out-of-order scheduling, while allowing high user productivity. Performance and memory-footprint comparisons on matrix dimensions up to eleven million show a performance gain and memory saving of more than an order of magnitude for both metrics on thousands of cores, against state-of-the-art open-source and vendor-optimized numerical libraries. This represents an important milestone in enabling large-scale matrix computations toward solving big data problems in geospatial statistics for climate/weather forecasting applications.
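
    The tile low-rank idea can be illustrated on a single off-diagonal tile with a truncated SVD. The sketch below is a simplified NumPy illustration, not HiCMA's actual compression path: the kernel and point sets are invented, and a tile coupling two well-separated point clusters is numerically low-rank, so a few singular vectors reproduce it to high accuracy:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # A smooth kernel evaluated between two well-separated point clusters gives
    # a numerically low-rank off-diagonal tile -- the structure HiCMA exploits.
    s = rng.uniform(0.0, 1.0, 64)   # source points
    t = rng.uniform(5.0, 6.0, 64)   # target points, far from the sources
    tile = 1.0 / np.abs(s[:, None] - t[None, :])

    def compress(tile, tol=1e-8):
        """Truncate the SVD at the smallest rank whose relative singular-value
        tail is below tol, returning factors U_k, V_k with tile ~= U_k @ V_k."""
        U, sigma, Vt = np.linalg.svd(tile, full_matrices=False)
        k = int(np.sum(sigma / sigma[0] > tol))
        return U[:, :k] * sigma[:k], Vt[:k, :]

    Uk, Vk = compress(tile)
    rank = Uk.shape[1]                               # much smaller than 64
    rel_err = np.linalg.norm(tile - Uk @ Vk) / np.linalg.norm(tile)
    ```

    Storing the two thin factors instead of the full tile is what produces the order-of-magnitude memory savings reported in the abstract.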

  9. Dose monitoring in large-scale flowing aqueous media

    International Nuclear Information System (INIS)

    Kuruca, C.N.

    1995-01-01

    The Miami Electron Beam Research Facility (EBRF) has been in operation for six years. The EBRF houses a 1.5 MV, 75 KW DC scanned electron beam. Experiments have been conducted to evaluate the effectiveness of high-energy electron irradiation in the removal of toxic organic chemicals from contaminated water and the disinfection of various wastewater streams. The large-scale plant operates at approximately 450 L/min (120 gal/min). The radiation dose absorbed by the flowing aqueous streams is estimated by measuring the difference in water temperature before and after it passes in front of the beam. Temperature measurements are made using resistance temperature devices (RTDs) and recorded by computer along with other operating parameters. Estimated dose is obtained from the measured temperature differences using the specific heat of water. This presentation will discuss experience with this measurement system, its application to different water presentation devices, sources of error, and the advantages and disadvantages of its use in large-scale process applications
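
    The calorimetric estimate described above is just the definition of absorbed dose (1 Gy = 1 J/kg) combined with the specific heat of water. A minimal sketch, assuming all absorbed beam energy appears as heat; the function name and example temperatures are mine, not the facility's software:

    ```python
    SPECIFIC_HEAT_WATER = 4186.0  # J/(kg*K), approximate for liquid water

    def absorbed_dose_gray(t_in_celsius, t_out_celsius):
        """Estimate absorbed dose (Gy = J/kg) from the temperature rise of water
        passing in front of the beam, assuming all deposited energy becomes heat."""
        delta_t = t_out_celsius - t_in_celsius
        return SPECIFIC_HEAT_WATER * delta_t

    # A 0.5 K rise corresponds to roughly 2.1 kGy.
    dose = absorbed_dose_gray(20.0, 20.5)
    ```

    The sources of error mentioned in the abstract (heat losses, sensor accuracy) all enter through the measured delta-T, which is why RTD precision matters at these small temperature differences.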

  10. Literature Review: Herbal Medicine Treatment after Large-Scale Disasters.

    Science.gov (United States)

    Takayama, Shin; Kaneko, Soichiro; Numata, Takehiro; Kamiya, Tetsuharu; Arita, Ryutaro; Saito, Natsumi; Kikuchi, Akiko; Ohsawa, Minoru; Kohayagawa, Yoshitaka; Ishii, Tadashi

    2017-01-01

    Large-scale natural disasters, such as earthquakes, tsunamis, volcanic eruptions, and typhoons, occur worldwide. After the Great East Japan earthquake and tsunami, our medical support operation's experiences suggested that traditional medicine might be useful for treating the various symptoms of the survivors. However, little information is available regarding herbal medicine treatment in such situations. Considering that further disasters will occur, we performed a literature review and summarized the traditional medicine approaches for treatment after large-scale disasters. We searched PubMed and the Cochrane Library for articles written in English, and Ichushi for those written in Japanese. Articles published before 31 March 2016 were included. The keywords "disaster" and "herbal medicine" were used in our search. Among studies involving herbal medicine after a disaster, we found two randomized controlled trials investigating post-traumatic stress disorder (PTSD), three retrospective investigations of trauma or common diseases, and seven case series or case reports of dizziness, pain, and psychosomatic symptoms. In conclusion, herbal medicine has been used to treat trauma, PTSD, and other symptoms after disasters. However, few articles have been published, likely due to the difficulty of designing high-quality studies in such situations. Further study will be needed to clarify the usefulness of herbal medicine after disasters.

  11. Development of large-scale functional brain networks in children.

    Directory of Open Access Journals (Sweden)

    Kaustubh Supekar

    2009-07-01

    The ontogeny of large-scale functional organization of the human brain is not well understood. Here we use network analysis of intrinsic functional connectivity to characterize the organization of brain networks in 23 children (ages 7-9 y) and 22 young-adults (ages 19-22 y). Comparison of network properties, including path-length, clustering-coefficient, hierarchy, and regional connectivity, revealed that although children and young-adults' brains have similar "small-world" organization at the global level, they differ significantly in hierarchical organization and interregional connectivity. We found that subcortical areas were more strongly connected with primary sensory, association, and paralimbic areas in children, whereas young-adults showed stronger cortico-cortical connectivity between paralimbic, limbic, and association areas. Further, combined analysis of functional connectivity with wiring distance measures derived from white-matter fiber tracking revealed that the development of large-scale brain networks is characterized by weakening of short-range functional connectivity and strengthening of long-range functional connectivity. Importantly, our findings show that the dynamic process of over-connectivity followed by pruning, which rewires connectivity at the neuronal level, also operates at the systems level, helping to reconfigure and rebalance subcortical and paralimbic connectivity in the developing brain. Our study demonstrates the usefulness of network analysis of brain connectivity to elucidate key principles underlying functional brain maturation, paving the way for novel studies of disrupted brain connectivity in neurodevelopmental disorders such as autism.

  12. Development of large-scale functional brain networks in children.

    Science.gov (United States)

    Supekar, Kaustubh; Musen, Mark; Menon, Vinod

    2009-07-01

    The ontogeny of large-scale functional organization of the human brain is not well understood. Here we use network analysis of intrinsic functional connectivity to characterize the organization of brain networks in 23 children (ages 7-9 y) and 22 young-adults (ages 19-22 y). Comparison of network properties, including path-length, clustering-coefficient, hierarchy, and regional connectivity, revealed that although children and young-adults' brains have similar "small-world" organization at the global level, they differ significantly in hierarchical organization and interregional connectivity. We found that subcortical areas were more strongly connected with primary sensory, association, and paralimbic areas in children, whereas young-adults showed stronger cortico-cortical connectivity between paralimbic, limbic, and association areas. Further, combined analysis of functional connectivity with wiring distance measures derived from white-matter fiber tracking revealed that the development of large-scale brain networks is characterized by weakening of short-range functional connectivity and strengthening of long-range functional connectivity. Importantly, our findings show that the dynamic process of over-connectivity followed by pruning, which rewires connectivity at the neuronal level, also operates at the systems level, helping to reconfigure and rebalance subcortical and paralimbic connectivity in the developing brain. Our study demonstrates the usefulness of network analysis of brain connectivity to elucidate key principles underlying functional brain maturation, paving the way for novel studies of disrupted brain connectivity in neurodevelopmental disorders such as autism.
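
    The two "small-world" metrics compared in this study, clustering coefficient and characteristic path length, take only a few lines to compute. A minimal sketch on an unweighted toy graph (invented for illustration; real connectivity studies use weighted graphs over brain regions):

    ```python
    from collections import deque

    def clustering_coefficient(adj, v):
        """Fraction of pairs of v's neighbours that are themselves connected."""
        nbrs = list(adj[v])
        k = len(nbrs)
        if k < 2:
            return 0.0
        links = sum(1 for i in range(k) for j in range(i + 1, k)
                    if nbrs[j] in adj[nbrs[i]])
        return 2.0 * links / (k * (k - 1))

    def average_path_length(adj):
        """Mean shortest-path length over all node pairs (connected graph),
        using breadth-first search from every node."""
        total, pairs = 0, 0
        for src in adj:
            dist = {src: 0}
            queue = deque([src])
            while queue:
                u = queue.popleft()
                for w in adj[u]:
                    if w not in dist:
                        dist[w] = dist[u] + 1
                        queue.append(w)
            total += sum(d for node, d in dist.items() if node != src)
            pairs += len(dist) - 1
        return total / pairs

    # A triangle (0, 1, 2) with a pendant node 3 attached to node 2.
    adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
    ```

    A "small-world" network combines a high average clustering coefficient with a short characteristic path length relative to a random graph of the same size.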

  13. Some ecological guidelines for large-scale biomass plantations

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, W.; Cook, J.H.; Beyea, J. [National Audubon Society, Tavernier, FL (United States)

    1993-12-31

    The National Audubon Society sees biomass as an appropriate and necessary source of energy to help replace fossil fuels in the near future, but is concerned that large-scale biomass plantations could displace significant natural vegetation and wildlife habitat, and reduce national and global biodiversity. We support the development of an industry large enough to provide significant portions of our energy budget, but we see a critical need to ensure that plantations are designed and sited in ways that minimize ecological disruption, or even provide environmental benefits. We have been studying the habitat value of intensively managed short-rotation tree plantations. Our results show that these plantations support large populations of some birds, but not all of the species using the surrounding landscape, and indicate that their value as habitat can be increased greatly by including small areas of mature trees within them. We believe short-rotation plantations can benefit regional biodiversity if they can be deployed as buffers for natural forests, or as corridors connecting forest tracts. To realize these benefits, and to avoid habitat degradation, regional biomass plantation complexes (e.g., the plantations supplying all the fuel for a powerplant) need to be planned, sited, and developed as large-scale units in the context of the regional landscape mosaic.

  14. Extending SME to Handle Large-Scale Cognitive Modeling.

    Science.gov (United States)

    Forbus, Kenneth D; Ferguson, Ronald W; Lovett, Andrew; Gentner, Dedre

    2017-07-01

    Analogy and similarity are central phenomena in human cognition, involved in processes ranging from visual perception to conceptual change. To capture this centrality requires that a model of comparison must be able to integrate with other processes and handle the size and complexity of the representations required by the tasks being modeled. This paper describes extensions to Structure-Mapping Engine (SME) since its inception in 1986 that have increased its scope of operation. We first review the basic SME algorithm, describe psychological evidence for SME as a process model, and summarize its role in simulating similarity-based retrieval and generalization. Then we describe five techniques now incorporated into the SME that have enabled it to tackle large-scale modeling tasks: (a) Greedy merging rapidly constructs one or more best interpretations of a match in polynomial time: O(n^2 log(n)); (b) Incremental operation enables mappings to be extended as new information is retrieved or derived about the base or target, to model situations where information in a task is updated over time; (c) Ubiquitous predicates model the varying degrees to which items may suggest alignment; (d) Structural evaluation of analogical inferences models aspects of plausibility judgments; (e) Match filters enable large-scale task models to communicate constraints to SME to influence the mapping process. We illustrate via examples from published studies how these enable it to capture a broader range of psychological phenomena than before. Copyright © 2016 Cognitive Science Society, Inc.
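
    Greedy merging (technique a) can be sketched in isolation: sort local match hypotheses by score and accept each one unless it conflicts with a match already accepted. The matches and conflict set below are invented for illustration and greatly simplify SME's actual structural-consistency constraints:

    ```python
    def greedy_merge(match_hypotheses, conflicts):
        """Greedily build one interpretation: sort local matches by score
        (the O(n log n) step) and accept each unless it conflicts with an
        already accepted match."""
        accepted = []
        for name, score in sorted(match_hypotheses, key=lambda m: -m[1]):
            if not any((name, other) in conflicts or (other, name) in conflicts
                       for other, _ in accepted):
                accepted.append((name, score))
        return accepted

    # Hypothetical local matches for a solar-system/atom analogy, with one
    # mutually exclusive pair (an entity cannot map to two targets).
    matches = [("sun->nucleus", 0.9), ("planet->electron", 0.8),
               ("sun->electron", 0.4)]
    conflicts = {("sun->nucleus", "sun->electron")}
    interpretation = greedy_merge(matches, conflicts)
    ```

    In SME the conflict test is a structural one (one-to-one mapping and parallel connectivity), and the merge is run over kernel mappings rather than raw correspondences, but the greedy shape is the same.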

  15. Exploiting Data Sparsity for Large-Scale Matrix Computations

    KAUST Repository

    Akbudak, Kadir

    2018-02-24

    Exploiting data sparsity in dense matrices is an algorithmic bridge between architectures that are increasingly memory-austere on a per-core basis and extreme-scale applications. The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library tackles this challenging problem by achieving significant reductions in time to solution and memory footprint, while preserving a specified accuracy requirement of the application. HiCMA provides a high-performance implementation on distributed-memory systems of one of the most widely used matrix factorizations in large-scale scientific applications, i.e., the Cholesky factorization. It employs the tile low-rank data format to compress the dense data-sparse off-diagonal tiles of the matrix. It then decomposes the matrix computations into interdependent tasks and relies on the dynamic runtime system StarPU for asynchronous out-of-order scheduling, while allowing high user productivity. Performance and memory-footprint comparisons on matrix dimensions up to eleven million show a performance gain and memory saving of more than an order of magnitude for both metrics on thousands of cores, against state-of-the-art open-source and vendor-optimized numerical libraries. This represents an important milestone in enabling large-scale matrix computations toward solving big data problems in geospatial statistics for climate/weather forecasting applications.

  16. Forestry practices and aquatic biodiversity: Fish

    Science.gov (United States)

    Gresswell, Robert E.

    2005-01-01

    ). Native non-game fishes have rarely been monitored, but populations of species such as large-scale suckers (Catostomus macrocheilus), squawfish (Ptychocheilus umpquae), and Pacific lamprey (Lampetra tridentata) also are declining in some drainages (Oregon Department of Fish and Wildlife, unpublished data).

  17. Stability and Control of Large-Scale Dynamical Systems A Vector Dissipative Systems Approach

    CERN Document Server

    Haddad, Wassim M

    2011-01-01

    Modern complex large-scale dynamical systems exist in virtually every aspect of science and engineering, and are associated with a wide variety of physical, technological, environmental, and social phenomena, including aerospace, power, communications, and network systems, to name just a few. This book develops a general stability analysis and control design framework for nonlinear large-scale interconnected dynamical systems, and presents the most complete treatment on vector Lyapunov function methods, vector dissipativity theory, and decentralized control architectures. Large-scale dynami

  18. Large-scale additive manufacturing with bioinspired cellulosic materials.

    Science.gov (United States)

    Sanandiya, Naresh D; Vijay, Yadunund; Dimopoulou, Marina; Dritsas, Stylianos; Fernandez, Javier G

    2018-06-05

    Cellulose is the most abundant and broadly distributed organic compound and industrial by-product on Earth. However, despite decades of extensive research, the bottom-up use of cellulose to fabricate 3D objects is still plagued by problems that restrict its practical application: derivatives with vast polluting effects, use in combination with plastics, lack of scalability, and high production cost. Here we demonstrate the general use of cellulose to manufacture large 3D objects. Our approach diverges from the common association of cellulose with green plants and is instead inspired by the wall of the fungus-like oomycetes, which is reproduced by introducing small amounts of chitin between cellulose fibers. The resulting fungal-like adhesive materials (FLAM) are strong, lightweight, and inexpensive, and can be molded or processed using woodworking techniques. We believe this first large-scale additive manufacturing with ubiquitous biological polymers will be the catalyst for the transition to environmentally benign and circular manufacturing models.

  19. Large-scale experience with biological treatment of contaminated soil

    International Nuclear Information System (INIS)

    Schulz-Berendt, V.; Poetzsch, E.

    1995-01-01

    The efficiency of biological methods for the cleanup of soil contaminated with total petroleum hydrocarbons (TPH) and polycyclic aromatic hydrocarbons (PAH) was demonstrated by a large-scale example in which 38,000 tons of TPH- and PAH-polluted soil was treated onsite with the TERRAFERM® degradation system to reach the target values of 300 mg/kg TPH and 5 mg/kg PAH. Measurement of the ecotoxicological potential (Microtox® assay) showed a significant decrease during the remediation. Low concentrations of PAH in the ground were treated by an in situ technology, which was combined with mechanical measures (a slurry wall) to prevent the contamination from dispersing from the site

  20. Structural Quality of Service in Large-Scale Networks

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup

    , telephony and data. To meet the requirements of the different applications, and to handle the increased vulnerability to failures, the ability to design robust networks providing good Quality of Service is crucial. However, most planning of large-scale networks today is ad hoc, leading to highly...... complex networks lacking predictability and global structural properties. The thesis applies the concept of Structural Quality of Service to formulate desirable global properties, and it shows how regular graph structures can be used to obtain such properties.......Digitalization has created the base for co-existence and convergence in communications, leading to an increasing use of multi service networks. This is for example seen in the Fiber To The Home implementations, where a single fiber is used for virtually all means of communication, including TV

  1. Large-scale patterns in Rayleigh-Benard convection

    International Nuclear Information System (INIS)

    Hardenberg, J. von; Parodi, A.; Passoni, G.; Provenzale, A.; Spiegel, E.A.

    2008-01-01

    Rayleigh-Benard convection at large Rayleigh number is characterized by the presence of intense, vertically moving plumes. Both laboratory and numerical experiments reveal that the rising and descending plumes aggregate into separate clusters so as to produce large-scale updrafts and downdrafts. The horizontal scales of the aggregates reported so far have been comparable to the horizontal extent of the containers, but it has not been clear whether that represents a limitation imposed by domain size. In this work, we present numerical simulations of convection at sufficiently large aspect ratio to ascertain whether there is an intrinsic saturation scale for the clustering process when that ratio is large enough. From a series of simulations of Rayleigh-Benard convection with Rayleigh numbers between 10^5 and 10^8 and with aspect ratios up to 12π, we conclude that the clustering process has a finite horizontal saturation scale with at most a weak dependence on Rayleigh number in the range studied

  2. Hydrogen-combustion analyses of large-scale tests

    International Nuclear Information System (INIS)

    Gido, R.G.; Koestel, A.

    1986-01-01

    This report uses results of the large-scale tests with turbulence performed by the Electric Power Research Institute at the Nevada Test Site to evaluate hydrogen burn-analysis procedures based on lumped-parameter codes like COMPARE-H2 and associated burn-parameter models. The test results: (1) confirmed, in a general way, the procedures for application to pulsed burning, (2) increased significantly our understanding of the burn phenomenon by demonstrating that continuous burning can occur, and (3) indicated that steam can terminate continuous burning. Future actions recommended include: (1) modification of the code to perform continuous-burn analyses, which is demonstrated, (2) analyses to determine the type of burning (pulsed or continuous) that will exist in nuclear containments and the stable location if the burning is continuous, and (3) changes to the models for estimating burn parameters

  4. Towards large-scale plasma-assisted synthesis of nanowires

    Science.gov (United States)

    Cvelbar, U.

    2011-05-01

    Large quantities of nanomaterials, e.g. nanowires (NWs), are needed to overcome the high market price of nanomaterials and make nanotechnology widely available for general public use and applications to numerous devices. Therefore, there is an enormous need for new methods or routes for synthesis of those nanostructures. Here plasma technologies for synthesis of NWs, nanotubes, nanoparticles or other nanostructures might play a key role in the near future. This paper presents a three-dimensional problem of large-scale synthesis connected with the time, quantity and quality of nanostructures. Herein, four different plasma methods for NW synthesis are presented in contrast to other methods, e.g. thermal processes, chemical vapour deposition or wet chemical processes. The pros and cons are discussed in detail for the case of two metal oxides: iron oxide and zinc oxide NWs, which are important for many applications.

  5. Large-scale visualization system for grid environment

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2007-01-01

    Center for Computational Science and E-systems of the Japan Atomic Energy Agency (CCSE/JAEA) has been conducting R&D on distributed (grid) computing environments: Seamless Thinking Aid (STA), Information Technology Based Laboratory (ITBL) and Atomic Energy Grid InfraStructure (AEGIS). In these efforts, we have developed visualization technology suited to distributed computing environments. As one of the visualization tools, we have developed the Parallel Support Toolkit (PST), which can execute the visualization process in parallel on a computer. We have now improved PST so that it can execute simultaneously on multiple heterogeneous computers using the Seamless Thinking Aid Message Passing Interface (STAMPI). STAMPI, which we developed in the same R&D efforts, is an MPI library executable in a heterogeneous computing environment. This improvement realizes the visualization of extremely large-scale data and enables more efficient visualization processes in a distributed computing environment. (author)

  6. BigSUR: large-scale structured urban reconstruction

    KAUST Repository

    Kelly, Tom; Femiani, John; Wonka, Peter; Mitra, Niloy J.

    2017-01-01

    The creation of high-quality semantically parsed 3D models for dense metropolitan areas is a fundamental urban modeling problem. Although recent advances in acquisition techniques and processing algorithms have resulted in large-scale imagery or 3D polygonal reconstructions, such data-sources are typically noisy, and incomplete, with no semantic structure. In this paper, we present an automatic data fusion technique that produces high-quality structured models of city blocks. From coarse polygonal meshes, street-level imagery, and GIS footprints, we formulate a binary integer program that globally balances sources of error to produce semantically parsed mass models with associated facade elements. We demonstrate our system on four city regions of varying complexity; our examples typically contain densely built urban blocks spanning hundreds of buildings. In our largest example, we produce a structured model of 37 city blocks spanning a total of 1,011 buildings at a scale and quality previously impossible to achieve automatically.

  7. Measuring large-scale social networks with high resolution.

    Directory of Open Access Journals (Sweden)

    Arkadiusz Stopczynski

    This paper describes the deployment of a large-scale study designed to measure human interactions across a variety of communication channels, with high temporal resolution and spanning multiple years: the Copenhagen Networks Study. Specifically, we collect data on face-to-face interactions, telecommunication, social networks, location, and background information (personality, demographics, health, politics) for a densely connected population of 1000 individuals, using state-of-the-art smartphones as social sensors. Here we provide an overview of the related work and describe the motivation and research agenda driving the study. Additionally, the paper details the data types measured and the technical infrastructure in terms of both backend and phone software, as well as an outline of the deployment procedures. We document the participant privacy procedures and their underlying principles. The paper is concluded with early results from data analysis, illustrating the importance of a multi-channel, high-resolution approach to data collection.

  8. Large-scale innovation and change in UK higher education

    Directory of Open Access Journals (Sweden)

    Stephen Brown

    2013-09-01

    This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large-scale, lasting change, and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ technology to deliver such changes. Key lessons that emerged from these experiences are reviewed, covering themes of pervasiveness, unofficial systems, project creep, opposition, pressure to deliver, personnel changes and technology issues. The paper argues that collaborative approaches to project management offer greater prospects of effective large-scale change in universities than either management-driven top-down or more champion-led bottom-up methods. It also argues that while some diminution of control over project outcomes is inherent in this approach, this is outweighed by the potential benefits of lasting and widespread adoption of agreed changes.

  9. Multidimensional quantum entanglement with large-scale integrated optics.

    Science.gov (United States)

    Wang, Jianwei; Paesani, Stefano; Ding, Yunhong; Santagati, Raffaele; Skrzypczyk, Paul; Salavrakos, Alexia; Tura, Jordi; Augusiak, Remigiusz; Mančinska, Laura; Bacco, Davide; Bonneau, Damien; Silverstone, Joshua W; Gong, Qihuang; Acín, Antonio; Rottwitt, Karsten; Oxenløwe, Leif K; O'Brien, Jeremy L; Laing, Anthony; Thompson, Mark G

    2018-04-20

    The ability to control multidimensional quantum systems is central to the development of advanced quantum technologies. We demonstrate a multidimensional integrated quantum photonic platform able to generate, control, and analyze high-dimensional entanglement. A programmable bipartite entangled system is realized with dimensions up to 15 × 15 on a large-scale silicon photonics quantum circuit. The device integrates more than 550 photonic components on a single chip, including 16 identical photon-pair sources. We verify the high precision, generality, and controllability of our multidimensional technology, and further exploit these abilities to demonstrate previously unexplored quantum applications, such as quantum randomness expansion and self-testing on multidimensional states. Our work provides an experimental platform for the development of multidimensional quantum technologies. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  10. Distributed system for large-scale remote research

    International Nuclear Information System (INIS)

    Ueshima, Yutaka

    2002-01-01

    In advanced photon research, large-scale simulations and high-resolution observations are powerful tools. In numerical and real experiments, real-time visualization and steering is considered a promising method of data analysis. This approach works well for one-off analyses or low-cost experiments and simulations. In research on an unknown problem, however, the output data must be analyzed many times, because a conclusive analysis is rarely reached in a single pass. Consequently, output data should be archived so they can be consulted and analyzed at any time. To support such research, automatic functions are needed for transporting data files from the data generator to data storage, analyzing data, tracking the history of data handling, and so on. The supporting system will be a functionally distributed system. (author)

  11. Large-scale dynamic compaction of natural salt

    International Nuclear Information System (INIS)

    Hansen, F.D.; Ahrens, E.H.

    1996-01-01

    A large-scale dynamic compaction demonstration of natural salt was successfully completed. About 40 m³ of salt were compacted in three 2-m lifts by dropping a 9,000-kg weight from a height of 15 m in a systematic pattern to achieve the desired compaction energy. To enhance compaction, 1 wt% water was added to the relatively dry mine-run salt. The average compacted mass fractional density was 0.90 of natural intact salt, and in situ nitrogen permeabilities averaged 9×10⁻¹⁴ m². This established the viability of dynamic compaction for placing salt shaft seal components. The demonstration also provided compacted salt parameters needed for shaft seal system design and performance assessments of the Waste Isolation Pilot Plant.

  12. Large-scale quantitative analysis of painting arts.

    Science.gov (United States)

    Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong

    2014-12-11

    Scientists have made efforts to understand the beauty of painting art in their own languages. As digital image acquisition of painting arts has made rapid progress, researchers have come to a point where it is possible to perform statistical analysis of a large-scale database of artistic paints to make a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images - the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety of the medieval period. Interestingly, moreover, the increment of roughness exponent as painting techniques such as chiaroscuro and sfumato have advanced is consistent with historical circumstances.
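
Two of the quantitative measures described above can be sketched in a few lines. This is a simplified, hypothetical illustration, not the authors' pipeline: color variety is approximated here by the Shannon entropy of a coarsely quantized RGB histogram, and roughness by the mean absolute brightness step between neighbouring pixels rather than a full roughness-exponent scaling analysis.

```python
import math

def color_variety(pixels, levels=4):
    """Shannon entropy of a coarsely quantized RGB histogram,
    a simplified proxy for a color-variety measure."""
    step = 256 // levels
    hist = {}
    for r, g, b in pixels:
        key = (r // step, g // step, b // step)
        hist[key] = hist.get(key, 0) + 1
    n = len(pixels)
    return -sum(c / n * math.log2(c / n) for c in hist.values())

def roughness(brightness_row):
    """Mean absolute brightness step between neighbouring pixels,
    a crude stand-in for a roughness-exponent analysis."""
    diffs = [abs(b - a) for a, b in zip(brightness_row, brightness_row[1:])]
    return sum(diffs) / len(diffs)

# A flat gray patch has zero color variety; a varied patch does not.
flat_patch = [(128, 128, 128)] * 100
varied_patch = [(i % 256, (2 * i) % 256, (3 * i) % 256) for i in range(100)]
```

On real paintings these functions would run over decoded image pixels; the point is only that both measures reduce to simple per-pixel statistics.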

  13. Coordinated SLNR based Precoding in Large-Scale Heterogeneous Networks

    KAUST Repository

    Boukhedimi, Ikram; Kammoun, Abla; Alouini, Mohamed-Slim

    2017-01-01

    This work focuses on the downlink of large-scale two-tier heterogeneous networks composed of a macro-cell overlaid by micro-cell networks. Our interest is in the design of coordinated beamforming techniques that mitigate inter-cell interference. In particular, we consider the case in which the coordinating base stations (BSs) have imperfect knowledge of the channel state information. Under this setting, we propose a regularized SLNR-based precoding design in which the regularization factor is used to provide better resilience to channel estimation errors. Using tools from random matrix theory, we provide an analytical analysis of the SINR and SLNR performance. These results are then exploited to propose a proper setting of the regularization factor. Simulation results are finally provided to validate our findings and confirm the performance of the proposed precoding scheme.
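
The regularized SLNR idea can be illustrated with a toy real-valued, two-antenna sketch (hypothetical numbers, not the paper's random-matrix analysis): each user's precoder is the leakage covariance inverse applied to that user's channel, with the regularization factor alpha added to the identity.

```python
def inv2(m):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(m, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

def slnr_precoder(channels, k, alpha):
    """Max-SLNR direction for user k:
    (sum_{j != k} h_j h_j^T + alpha * I)^{-1} h_k, normalized to unit power.
    alpha trades leakage suppression for robustness to channel errors."""
    A = [[alpha if r == c else 0.0 for c in range(2)] for r in range(2)]
    for j, h in enumerate(channels):
        if j == k:
            continue
        for r in range(2):
            for c in range(2):
                A[r][c] += h[r] * h[c]
    w = matvec(inv2(A), channels[k])
    norm = sum(x * x for x in w) ** 0.5
    return [x / norm for x in w]
```

For orthogonal channels the precoder reduces to the matched filter; for overlapping channels it rotates away from the other user's channel, leaking less power than a matched filter would.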

  16. Planning under uncertainty solving large-scale stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G. [Stanford Univ., CA (United States). Dept. of Operations Research]|[Technische Univ., Vienna (Austria). Inst. fuer Energiewirtschaft

    1992-12-01

    For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but up to recently seemed to be intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results of large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
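
The Monte Carlo side of this approach can be illustrated with a deliberately tiny sample-average sketch (a hypothetical newsvendor-style two-stage problem, not the paper's decomposition-plus-importance-sampling algorithm): the first-stage decision is chosen to minimize its own cost plus the sampled average of the second-stage recourse cost.

```python
import random

def recourse_cost(x, demand, penalty=4.0):
    """Second-stage cost: unmet demand is penalized after it is observed."""
    return penalty * max(demand - x, 0.0)

def saa_objective(x, scenarios, unit_cost=1.0):
    """First-stage cost plus the Monte Carlo estimate of expected recourse."""
    avg = sum(recourse_cost(x, d) for d in scenarios) / len(scenarios)
    return unit_cost * x + avg

random.seed(0)
scenarios = [random.uniform(50.0, 150.0) for _ in range(2000)]
# Crude first-stage search; real stochastic LPs use decomposition instead.
best_x = min(range(0, 201), key=lambda x: saa_objective(float(x), scenarios))
```

With demand uniform on [50, 150], unit cost 1 and penalty 4, the critical-fractile optimum is x = 125, and the sampled optimum lands within a few units of it.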

  17. Hierarchical optimal control of large-scale nonlinear chemical processes.

    Science.gov (United States)

    Ramezani, Mohammad Hossein; Sadati, Nasser

    2009-01-01

    In this paper, a new approach is presented for optimal control of large-scale chemical processes. In this approach, the chemical process is decomposed into smaller sub-systems at the first level, and a coordinator at the second level, for which a two-level hierarchical control strategy is designed. For this purpose, each sub-system in the first level can be solved separately, by using any conventional optimization algorithm. In the second level, the solutions obtained from the first level are coordinated using a new gradient-type strategy, which is updated by the error of the coordination vector. The proposed algorithm is used to solve the optimal control problem of a complex nonlinear chemical stirred tank reactor (CSTR), where its solution is also compared with the ones obtained using the centralized approach. The simulation results show the efficiency and the capability of the proposed hierarchical approach, in finding the optimal solution, over the centralized method.
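
A minimal sketch of the two-level idea (hypothetical quadratic subsystems, not the paper's CSTR model): each first-level subproblem is solved on its own given the coordinator's variable, and the second-level coordinator applies a gradient-type update driven by the coordination error.

```python
def hierarchical_optimize(step=0.4, iters=200):
    """Minimize (x-3)^2 + (y-5)^2 subject to the coupling x + y = 6
    by dual decomposition with a gradient-type coordinator."""
    lam = 0.0  # coordination variable (price) held by the second level
    x = y = 0.0
    for _ in range(iters):
        # First level: each subsystem solves its own problem in closed form.
        x = 3.0 - lam / 2.0  # argmin_x (x-3)^2 + lam*x
        y = 5.0 - lam / 2.0  # argmin_y (y-5)^2 + lam*y
        # Second level: coordinator moves lam along the coupling error.
        lam += step * (x + y - 6.0)
    return x, y
```

The iteration converges to the constrained optimum x = 2, y = 4; in the paper's setting the closed-form subproblems are replaced by conventional optimizers run separately on each subsystem.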

  18. Experimental Investigation of Large-Scale Bubbly Plumes

    Energy Technology Data Exchange (ETDEWEB)

    Zboray, R.; Simiano, M.; De Cachard, F

    2004-03-01

    Carefully planned and instrumented experiments under well-defined boundary conditions have been carried out on large-scale, isothermal, bubbly plumes. The data obtained is meant to validate newly developed, high-resolution numerical tools for 3D transient, two-phase flow modelling. Several measurement techniques have been utilised to collect data from the experiments: particle image velocimetry, optical probes, electromagnetic probes, and visualisation. Bubble and liquid velocity fields, void-fraction distributions, bubble size and interfacial-area-concentration distributions have all been measured in the plume region, as well as recirculation velocities in the surrounding pool. The results obtained from the different measurement techniques have been compared. In general, the two-phase flow data obtained from the different techniques are found to be consistent, and of high enough quality for validating numerical simulation tools for 3D bubbly flows. (author)

  19. Geophysical mapping of complex glaciogenic large-scale structures

    DEFF Research Database (Denmark)

    Høyer, Anne-Sophie

    2013-01-01

    This thesis presents the main results of a four-year PhD study concerning the use of geophysical data in geological mapping. The study is related to the Geocenter project, “KOMPLEKS”, which focuses on the mapping of complex, large-scale geological structures. The study area is approximately 100 km² … data types and co-interpret them in order to improve our geological understanding. However, in order to perform this successfully, methodological considerations are necessary. For instance, a structure indicated by a reflection in the seismic data is not always apparent in the resistivity data … information) can be collected. The geophysical data are used together with geological analyses from boreholes and pits to interpret the geological history of the hill-island. The geophysical data reveal that the glaciotectonic structures truncate at the surface. The directions of the structures were mapped…

  20. Large-scale transport across narrow gaps in rod bundles

    Energy Technology Data Exchange (ETDEWEB)

    Guellouz, M.S.; Tavoularis, S. [Univ. of Ottawa (Canada)

    1995-09-01

    Flow visualization and hot-wire anemometry were used to investigate the velocity field in a rectangular channel containing a single cylindrical rod, which could be traversed on the centreplane to form gaps of different widths with the plane wall. The presence of large-scale, quasi-periodic structures in the vicinity of the gap has been demonstrated through flow visualization, spectral analysis and space-time correlation measurements. These structures are seen to exist even for relatively large gaps, at least up to W/D=1.350 (W is the sum of the rod diameter, D, and the gap width). The above measurements appear to be compatible with the field of a street of three-dimensional, counter-rotating vortices, whose detailed structure, however, remains to be determined. The convection speed and the streamwise spacing of these vortices have been determined as functions of the gap size.
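
The convection-speed estimate from space-time correlations can be sketched as follows (a hypothetical two-probe setup with synthetic signals, not the authors' data): the lag that maximizes the cross-correlation between an upstream and a downstream probe signal converts probe spacing into a speed.

```python
import math

def convection_speed(upstream, downstream, spacing, dt):
    """Speed = spacing / (lag * dt), where lag maximizes the
    space-time cross-correlation of the two probe signals."""
    n = len(upstream)
    best_lag, best_corr = 1, float("-inf")
    for lag in range(1, n // 2):
        corr = sum(upstream[i] * downstream[i + lag]
                   for i in range(n - lag)) / (n - lag)
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return spacing / (best_lag * dt)

# Synthetic pulse convected 5 samples downstream between the probes.
up = [math.exp(-((i - 30) / 5.0) ** 2) for i in range(100)]
down = [math.exp(-((i - 35) / 5.0) ** 2) for i in range(100)]
speed = convection_speed(up, down, spacing=0.1, dt=0.01)
```

Here the pulse arrives 5 samples later at the downstream probe, so the estimate is 0.1 m / 0.05 s = 2 m/s.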

  1. Including investment risk in large-scale power market models

    DEFF Research Database (Denmark)

    Lemming, Jørgen Kjærgaard; Meibom, P.

    2003-01-01

    Long-term energy market models can be used to examine investments in production technologies; however, with market liberalisation it is crucial that such models include investment risks and investor behaviour. This paper analyses how the effect of investment risk on production technology selection can be included in large-scale partial equilibrium models of the power market. The analyses are divided into a part about risk measures appropriate for power market investors and a more technical part about the combination of a risk-adjustment model and a partial-equilibrium model. To illustrate the analyses quantitatively, a framework based on an iterative interaction between the equilibrium model and a separate risk-adjustment module was constructed. To illustrate the features of the proposed modelling approach, we examined how uncertainty in demand and variable costs affects the optimal choice…

  2. Optimal Wind Energy Integration in Large-Scale Electric Grids

    Science.gov (United States)

    Albaijat, Mohammad H.

    The major concern in electric grid operation is operating in the most economical and reliable fashion to ensure affordability and continuity of electricity supply. This dissertation investigates challenges that affect electric grid reliability and economic operation: 1. congestion of transmission lines, 2. transmission line expansion, 3. large-scale wind energy integration, and 4. optimal placement of Phasor Measurement Units (PMUs) for highest electric grid observability. Performing congestion analysis aids in evaluating the required increase of transmission line capacity in electric grids. However, expansion of transmission line capacity must be evaluated against methods that ensure optimal electric grid operation; it must enable grid operators to provide low-cost electricity while maintaining reliable operation of the electric grid. Because congestion affects the reliability of delivering power and increases its cost, congestion analysis in electric grid networks is an important subject. Consequently, next-generation electric grids require novel methodologies for studying and managing congestion. We suggest a novel method of long-term congestion management in large-scale electric grids. Owing to the complexity and size of transmission line systems and the competitive nature of current grid operation, it is important for electric grid operators to determine how much transmission line capacity to add. Traditional questions requiring answers are "where" to add, "how much transmission line capacity" to add, and "at which voltage level". Because of electric grid deregulation, transmission line expansion is more complicated, as it is now open to investors, whose main interest is to generate revenue, to build new transmission lines. Adding new transmission capacity will help the system to relieve transmission system congestion, create…

  3. Electric vehicles and large-scale integration of wind power

    DEFF Research Database (Denmark)

    Liu, Wen; Hu, Weihao; Lund, Henrik

    2013-01-01

    …with this imbalance and to reduce its high dependence on oil production. For this reason, it is interesting to analyse the extent to which transport electrification can further the renewable energy integration. This paper quantifies this issue in Inner Mongolia, where the share of wind power in the electricity supply was 6.5% in 2009 and which has the plan to develop large-scale wind power. The results show that electric vehicles (EVs) have the ability to balance the electricity demand and supply and to further the wind power integration. In the best case, the energy system with EV can increase wind power integration by 8%. The application of EVs benefits from saving both energy system cost and fuel cost. However, the negative consequences of decreasing energy system efficiency and increasing the CO2 emission should be noted when applying the hydrogen fuel cell vehicle (HFCV). The results also indicate…

  5. Complex Formation Control of Large-Scale Intelligent Autonomous Vehicles

    Directory of Open Access Journals (Sweden)

    Ming Lei

    2012-01-01

    A new formation framework for large-scale intelligent autonomous vehicles is developed, which can realize complex formations while reducing data exchange. Using the proposed hierarchical formation method and the automatic dividing algorithm, vehicles are automatically divided into leaders and followers by exchanging information via a wireless network at the initial time. Then, leaders form the formation's geometric shape using global formation information, and followers track their own virtual leaders to form a line formation using local information. The formation control laws of leaders and followers are designed based on consensus algorithms. Moreover, collision-avoidance problems are considered and solved using artificial potential functions. Finally, a simulation example consisting of 25 vehicles shows the effectiveness of the theory.

  6. Magnetic Properties of Large-Scale Nanostructured Graphene Systems

    DEFF Research Database (Denmark)

    Gregersen, Søren Schou

    The on-going progress in two-dimensional (2D) materials and nanostructure fabrication motivates the study of altered and combined materials. Graphene, the most studied material of the 2D family, displays unique electronic and spintronic properties. Exceptionally high electron mobilities, that surpass those in conventional materials such as silicon, make graphene a very interesting material for high-speed electronics. Simultaneously, long spin-diffusion lengths and spin lifetimes make graphene an eligible spin-transport channel. In this thesis, we explore fundamental features of nanostructured graphene systems using large-scale modeling techniques. Graphene perforations, or antidots, have received substantial interest in the prospect of opening large band gaps in the otherwise gapless graphene. Motivated by recent improvements of fabrication processes, such as forming graphene antidots and layer…

  7. A large-scale computer facility for computational aerodynamics

    International Nuclear Information System (INIS)

    Bailey, F.R.; Balhaus, W.F.

    1985-01-01

    The combination of computer system technology and numerical modeling have advanced to the point that computational aerodynamics has emerged as an essential element in aerospace vehicle design methodology. To provide for further advances in modeling of aerodynamic flow fields, NASA has initiated at the Ames Research Center the Numerical Aerodynamic Simulation (NAS) Program. The objective of the Program is to develop a leading-edge, large-scale computer facility, and make it available to NASA, DoD, other Government agencies, industry and universities as a necessary element in ensuring continuing leadership in computational aerodynamics and related disciplines. The Program will establish an initial operational capability in 1986 and systematically enhance that capability by incorporating evolving improvements in state-of-the-art computer system technologies as required to maintain a leadership role. This paper briefly reviews the present and future requirements for computational aerodynamics and discusses the Numerical Aerodynamic Simulation Program objectives, computational goals, and implementation plans

  8. Exploiting large-scale correlations to detect continuous gravitational waves.

    Science.gov (United States)

    Pletsch, Holger J; Allen, Bruce

    2009-10-30

    Fully coherent searches (over realistic ranges of parameter space and year-long observation times) for unknown sources of continuous gravitational waves are computationally prohibitive. Less expensive hierarchical searches divide the data into shorter segments which are analyzed coherently, then detection statistics from different segments are combined incoherently. The novel method presented here solves the long-standing problem of how best to do the incoherent combination. The optimal solution exploits large-scale parameter-space correlations in the coherent detection statistic. Application to simulated data shows dramatic sensitivity improvements compared with previously available (ad hoc) methods, increasing the spatial volume probed by more than 2 orders of magnitude at lower computational cost.
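
The coherent-then-incoherent idea can be illustrated with a toy "stack and sum" sketch (hypothetical sinusoid recovery, far simpler than the parameter-space-correlation method of the paper): each segment is analyzed coherently via a DFT, and the per-segment powers are then summed incoherently across segments.

```python
import math

def dft_power(segment, k):
    """Coherent statistic for one segment: power in DFT bin k."""
    n = len(segment)
    re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(segment))
    im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(segment))
    return re * re + im * im

def semicoherent_statistic(data, seg_len, k):
    """Incoherent combination: sum per-segment powers, so phase
    coherence is only required within each segment, not across them."""
    segments = [data[i:i + seg_len] for i in range(0, len(data), seg_len)]
    return sum(dft_power(seg, k) for seg in segments)

# Recover a monochromatic signal sitting at bin 12 of each 64-sample segment.
data = [math.sin(2 * math.pi * 12 * t / 64) for t in range(256)]
best_k = max(range(1, 32), key=lambda k: semicoherent_statistic(data, 64, k))
```

The real search additionally has to map candidate signal parameters consistently across segments, which is exactly where the paper's large-scale correlations enter.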

  9. Large-scale Ising-machines composed of magnetic neurons

    Science.gov (United States)

    Mizushima, Koichi; Goto, Hayato; Sato, Rie

    2017-10-01

    We propose Ising-machines composed of magnetic neurons, that is, magnetic bits in a recording track. In large-scale machines, the sizes of both neurons and synapses need to be reduced, and neat and smart connections among neurons are also required to achieve all-to-all connectivity among them. These requirements can be fulfilled by adopting magnetic recording technologies such as race-track memories and skyrmion tracks because the area of a magnetic bit is almost two orders of magnitude smaller than that of static random access memory, which has normally been used as a semiconductor neuron, and the smart connections among neurons are realized by using the read and write methods of these technologies.

  10. Combined process automation for large-scale EEG analysis.

    Science.gov (United States)

    Sfondouris, John L; Quebedeaux, Tabitha M; Holdgraf, Chris; Musto, Alberto E

    2012-01-01

    Epileptogenesis is a dynamic process producing increased seizure susceptibility. Electroencephalography (EEG) data provides information critical in understanding the evolution of epileptiform changes throughout epileptic foci. We designed an algorithm to facilitate efficient large-scale EEG analysis via linked automation of multiple data processing steps. Using EEG recordings obtained from electrical stimulation studies, the following steps of EEG analysis were automated: (1) alignment and isolation of pre- and post-stimulation intervals, (2) generation of user-defined band frequency waveforms, (3) spike-sorting, (4) quantification of spike and burst data and (5) power spectral density analysis. This algorithm allows for quicker, more efficient EEG analysis. Copyright © 2011 Elsevier Ltd. All rights reserved.
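
The five automated steps enumerated above can be viewed as a linked pipeline in which each stage feeds the next. The sketch below is purely illustrative (all function names, the sampling rate, and the threshold rule are assumptions, not taken from the paper), using crude FFT-based stand-ins for the filtering, spike-sorting, and PSD stages:

```python
import numpy as np

# Hypothetical sketch of linked EEG-analysis automation; names and
# parameters are illustrative, not from the cited algorithm.

def isolate_intervals(eeg, stim_idx, pre, post):
    """Step 1: split a recording into pre- and post-stimulation segments."""
    return eeg[stim_idx - pre:stim_idx], eeg[stim_idx:stim_idx + post]

def band_filter(signal, fs, low, high):
    """Step 2: crude band-pass via FFT masking (stand-in for a real filter)."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    spectrum = np.fft.rfft(signal)
    spectrum[(freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spectrum, n=signal.size)

def detect_spikes(signal, threshold):
    """Steps 3-4: index samples crossing an amplitude threshold."""
    return np.flatnonzero(np.abs(signal) > threshold)

def power_spectral_density(signal, fs):
    """Step 5: periodogram estimate of the PSD."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2 / (fs * signal.size)
    return np.fft.rfftfreq(signal.size, d=1.0 / fs), spectrum

# Linked automation: each step feeds the next without manual intervention.
fs = 1000.0
rng = np.random.default_rng(0)
eeg = rng.standard_normal(10_000)
pre, post = isolate_intervals(eeg, stim_idx=5000, pre=2000, post=2000)
theta = band_filter(post, fs, low=4.0, high=8.0)
spikes = detect_spikes(theta, threshold=3.0 * theta.std())
freqs, psd = power_spectral_density(theta, fs)
```

Chaining the stages this way is what removes the per-recording manual effort the abstract refers to: one call sequence processes an arbitrary number of stimulation intervals.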

  11. Matrix Sampling of Items in Large-Scale Assessments

    Directory of Open Access Journals (Sweden)

    Ruth A. Childs

    2003-07-01

    Full Text Available Matrix sampling of items - that is, division of a set of items into different versions of a test form - is used by several large-scale testing programs. Like other test designs, matrixed designs have both advantages and disadvantages. For example, testing time per student is less than if each student received all the items, but the comparability of student scores may decrease. Also, curriculum coverage is maintained, but reporting of scores becomes more complex. In this paper, matrixed designs are compared with more traditional designs in nine categories of costs: development costs, materials costs, administration costs, educational costs, scoring costs, reliability costs, comparability costs, validity costs, and reporting costs. In choosing among test designs, a testing program should examine the costs in light of its mandate(s), the content of the tests, and the financial resources available, among other considerations.
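
As a toy illustration of the matrix-sampling idea (dividing an item pool into test forms), the following sketch deals items round-robin across forms after reserving a shared anchor set to aid score comparability. Every name and parameter here is invented for illustration; real testing programs balance content coverage and item statistics, not just counts:

```python
import random

def matrix_sample(items, n_forms, common=0, seed=0):
    """Divide an item pool into test forms.

    The first `common` items are anchors shared by every form; the
    remaining items are shuffled and dealt round-robin so each appears
    on exactly one form.
    """
    rng = random.Random(seed)
    anchors, pool = items[:common], items[common:]
    rng.shuffle(pool)
    forms = [list(anchors) for _ in range(n_forms)]
    for i, item in enumerate(pool):
        forms[i % n_forms].append(item)
    return forms

# 60 items, 3 forms, 6 shared anchors: each form gets 6 + 18 = 24 items,
# so testing time per student is 24 items rather than 60.
forms = matrix_sample(list(range(60)), n_forms=3, common=6)
```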

  12. Nuclear-pumped lasers for large-scale applications

    International Nuclear Information System (INIS)

    Anderson, R.E.; Leonard, E.M.; Shea, R.F.; Berggren, R.R.

    1989-05-01

    Efficient initiation of large-volume chemical lasers may be achieved by neutron induced reactions which produce charged particles in the final state. When a burst mode nuclear reactor is used as the neutron source, both a sufficiently intense neutron flux and a sufficiently short initiation pulse may be possible. Proof-of-principle experiments are planned to demonstrate lasing in a direct nuclear-pumped large-volume system; to study the effects of various neutron absorbing materials on laser performance; to study the effects of long initiation pulse lengths; to demonstrate the performance of large-scale optics and the beam quality that may be obtained; and to assess the performance of alternative designs of burst systems that increase the neutron output and burst repetition rate. 21 refs., 8 figs., 5 tabs

  13. Large-scale biophysical evaluation of protein PEGylation effects

    DEFF Research Database (Denmark)

    Vernet, Erik; Popa, Gina; Pozdnyakova, Irina

    2016-01-01

    PEGylation is the most widely used method to chemically modify protein biopharmaceuticals, but surprisingly limited public data is available on the biophysical effects of protein PEGylation. Here we report the first large-scale study, with site-specific mono-PEGylation of 15 different proteins...... of PEGylation on the thermal stability of a protein based on data generated by circular dichroism (CD), differential scanning calorimetry (DSC), or differential scanning fluorimetry (DSF). In addition, DSF was validated as a fast and inexpensive screening method for thermal unfolding studies of PEGylated...... proteins. Multivariate data analysis revealed clear trends in biophysical properties upon PEGylation for a subset of proteins, although no universal trends were found. Taken together, these findings are important in the consideration of biophysical methods and evaluation of second...

  14. Optimization of large-scale fabrication of dielectric elastomer transducers

    DEFF Research Database (Denmark)

    Hassouneh, Suzan Sager

    Dielectric elastomers (DEs) have gained substantial ground in many different applications, such as wave energy harvesting, valves and loudspeakers. For DE technology to be commercially viable, it is necessary that any large-scale production operation is nondestructive, efficient and cheap. Danfoss......-strength laminates to perform as monolithic elements. For the front-to-back and front-to-front configurations, conductive elastomers were utilised. One approach involved adding the cheap and conductive filler, exfoliated graphite (EG) to a PDMS matrix to increase dielectric permittivity. The results showed that even...... as conductive adhesives were rejected. Dielectric properties below the percolation threshold were subsequently investigated, in order to conclude the study. In order to avoid destroying the network structure, carbon nanotubes (CNTs) were used as fillers during the preparation of the conductive elastomers...

  15. USE OF RFID AT LARGE-SCALE EVENTS

    Directory of Open Access Journals (Sweden)

    Yuusuke KAWAKITA

    2005-01-01

    Full Text Available Radio Frequency Identification (RFID devices and related technologies have received a great deal of attention for their ability to perform non-contact object identification. Systems incorporating RFID have been evaluated from a variety of perspectives. The authors constructed a networked RFID system to support event management at NetWorld+Interop 2004 Tokyo, an event that received 150,000 visitors. The system used multiple RFID readers installed at the venue and RFID tags carried by each visitor to provide a platform for running various management and visitor support applications. This paper presents the results of this field trial of RFID readability rates. It further addresses the applicability of RFID systems to visitor management, a problematic aspect of large-scale events.

  16. Modeling of large-scale oxy-fuel combustion processes

    DEFF Research Database (Denmark)

    Yin, Chungen

    2012-01-01

    Numerous studies have been conducted on implementing oxy-fuel combustion with flue gas recycle in conventional utility boilers as an effective route to carbon capture and storage. However, combustion under oxy-fuel conditions is significantly different from conventional air-fuel firing......, among which radiative heat transfer under oxy-fuel conditions is one of the fundamental issues. This paper demonstrates the nongray-gas effects in modeling of large-scale oxy-fuel combustion processes. Oxy-fuel combustion of natural gas in a 609MW utility boiler is numerically studied, in which...... calculation of the oxy-fuel WSGGM remarkably over-predicts the radiative heat transfer to the furnace walls and under-predicts the gas temperature at the furnace exit plane, which also result in a higher incomplete combustion in the gray calculation. Moreover, the gray and non-gray calculations of the same...

  17. Risk Management Challenges in Large-scale Energy PSS

    DEFF Research Database (Denmark)

    Tegeltija, Miroslava; Oehmen, Josef; Kozin, Igor

    2017-01-01

    Probabilistic risk management approaches have a long tradition in engineering. A large variety of tools and techniques based on the probabilistic view of risk is available and applied in PSS practice. However, uncertainties that arise due to lack of knowledge and information are still missing...... adequate representations. We focus on a large-scale energy company in Denmark as one case of current product/servicesystems risk management best practices. We analyze their risk management process and investigate the tools they use in order to support decision making processes within the company. First, we...... identify the following challenges in the current risk management practices that are in line with literature: (1) current methods are not appropriate for the situations dominated by weak knowledge and information; (2) quality of traditional models in such situations is open to debate; (3) quality of input...

  18. Cosmological streaming velocities and large-scale density maxima

    International Nuclear Information System (INIS)

    Peacock, J.A.; Lumsden, S.L.; Heavens, A.F.

    1987-01-01

    The statistical testing of models for galaxy formation against the observed peculiar velocities on 10-100 Mpc scales is considered. If it is assumed that observers are likely to be sited near maxima in the primordial field of density perturbations, then the observed filtered velocity field will be biased to low values by comparison with a point selected at random. This helps to explain how the peculiar velocities (relative to the microwave background) of the local supercluster and the Rubin-Ford shell can be so similar in magnitude. Using this assumption to predict peculiar velocities on two scales, we test models with large-scale damping (i.e. adiabatic perturbations). Allowed models have a damping length close to the Rubin-Ford scale and are mildly non-linear. Both purely baryonic universes and universes dominated by massive neutrinos can account for the observed velocities, provided 0.1 ≤ Ω ≤ 1. (author)

  19. Engineering large-scale agent-based systems with consensus

    Science.gov (United States)

    Bokma, A.; Slade, A.; Kerridge, S.; Johnson, K.

    1994-01-01

    The paper presents the consensus method for the development of large-scale agent-based systems. Systems can be developed as networks of knowledge based agents (KBA) which engage in a collaborative problem solving effort. The method provides a comprehensive and integrated approach to the development of this type of system. This includes a systematic analysis of user requirements as well as a structured approach to generating a system design which exhibits the desired functionality. There is a direct correspondence between system requirements and design components. The benefits of this approach are that requirements are traceable into design components and code thus facilitating verification. The use of the consensus method with two major test applications showed it to be successful and also provided valuable insight into problems typically associated with the development of large systems.

  20. Future hydrogen markets for large-scale hydrogen production systems

    International Nuclear Information System (INIS)

    Forsberg, Charles W.

    2007-01-01

    The cost of delivered hydrogen includes production, storage, and distribution. For equal production costs, large users (>10⁶ m³/day) will favor high-volume centralized hydrogen production technologies to avoid collection costs for hydrogen from widely distributed sources. Potential hydrogen markets were examined to identify and characterize those markets that will favor large-scale hydrogen production technologies. The two high-volume centralized hydrogen production technologies are nuclear energy and fossil energy with carbon dioxide sequestration. The potential markets for these technologies are: (1) production of liquid fuels (gasoline, diesel and jet) including liquid fuels with no net greenhouse gas emissions and (2) peak electricity production. The development of high-volume centralized hydrogen production technologies requires an understanding of the markets to (1) define hydrogen production requirements (purity, pressure, volumes, need for co-product oxygen, etc.); (2) define and develop technologies to use the hydrogen, and (3) create the industrial partnerships to commercialize such technologies. (author)

  1. Research on the Construction Management and Sustainable Development of Large-Scale Scientific Facilities in China

    Science.gov (United States)

    Guiquan, Xi; Lin, Cong; Xuehui, Jin

    2018-05-01

    As an important platform for scientific and technological development, large-scale scientific facilities are the cornerstone of technological innovation and a guarantee for economic and social development. Researching management of large-scale scientific facilities can play a key role in scientific research, sociology and key national strategy. This paper reviews the characteristics of large-scale scientific facilities, and summarizes the development status of China's large-scale scientific facilities. Finally, the construction, management, operation and evaluation of large-scale scientific facilities are analyzed from the perspective of sustainable development.

  2. SCALE INTERACTION IN A MIXING LAYER. THE ROLE OF THE LARGE-SCALE GRADIENTS

    KAUST Repository

    Fiscaletti, Daniele

    2015-08-23

    The interaction between scales is investigated in a turbulent mixing layer. The large-scale amplitude modulation of the small scales already observed in other works depends on the crosswise location. Large-scale positive fluctuations correlate with a stronger activity of the small scales on the low-speed side of the mixing layer, and a reduced activity on the high-speed side. However, from physical considerations we would expect the scales to interact in a qualitatively similar way within the flow and across different turbulent flows. Therefore, instead of the large-scale fluctuations, the modulation of the small scales by the large-scale gradients has additionally been investigated.

  3. An occupancy-based quantification of the highly imperiled status of desert fishes of the southwestern United States.

    Science.gov (United States)

    Budy, Phaedra; Conner, Mary M; Salant, Nira L; Macfarlane, William W

    2015-08-01

    Desert fishes are some of the most imperiled vertebrates worldwide due to their low economic worth and because they compete with humans for water. An ecological complex of fishes, 2 suckers (Catostomus latipinnis, Catostomus discobolus) and a chub (Gila robusta) (collectively managed as the so-called three species), are endemic to the U.S. Colorado River Basin, are affected by multiple stressors, and have allegedly declined dramatically. We built a series of occupancy models to determine relationships between trends in occupancy, local extinction, and local colonization rates, identify potential limiting factors, and evaluate the suitability of managing the 3 species collectively. For a historical period (1889-2011), top performing models (AICc) included a positive time trend in local extinction probability and a negative trend in local colonization probability. As flood frequency decreased post-development, local extinction probability increased. By the end of the time series, 47% (95% CI 34-61) and 15% (95% CI 6-33) of sites remained occupied by the suckers and the chub, respectively, and models with the 2 species of sucker as one group and the chub as the other performed best. For a contemporary period (2001-2011), top performing (based on AICc) models included peak annual discharge. As peak discharge increased, local extinction probability decreased and local colonization probability increased. For the contemporary period, results of models that split all 3 species into separate groups were similar to results of models that combined the 2 suckers but not the chub. Collectively, these results confirmed that declines in these fishes were strongly associated with water development and that relative to their historic distribution all 3 species have declined dramatically. Further, the chub was distinct in that it declined the most dramatically and therefore may need to be managed separately. Our modeling approach may be useful in other situations in which targeted
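
The model ranking by AICc used in the study above can be illustrated with a short sketch. The candidate model names, log-likelihoods, parameter counts, and site count below are invented for illustration; only the AICc formula itself is standard:

```python
def aicc(log_likelihood, k, n):
    """Small-sample corrected Akaike information criterion:
    AICc = AIC + 2k(k+1)/(n-k-1), with AIC = 2k - 2*lnL."""
    aic = 2 * k - 2 * log_likelihood
    return aic + (2 * k * (k + 1)) / (n - k - 1)

# Hypothetical candidate occupancy models: (name, maximized
# log-likelihood, number of parameters). Values are illustrative only.
n_sites = 120
candidates = [
    ("psi(.) p(.)",        -310.2, 2),
    ("psi(trend) p(.)",    -298.7, 3),
    ("psi(flood) p(year)", -291.4, 5),
]

# Rank models by AICc; delta-AICc measures support relative to the best.
ranked = sorted(candidates, key=lambda m: aicc(m[1], m[2], n_sites))
best = ranked[0]
delta = [aicc(ll, k, n_sites) - aicc(best[1], best[2], n_sites)
         for _, ll, k in ranked]
```

Here the richer model wins despite its extra parameters because its likelihood gain outweighs the AICc complexity penalty, which is the trade-off AICc formalizes.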

  4. Hydrogen combustion modelling in large-scale geometries

    International Nuclear Information System (INIS)

    Studer, E.; Beccantini, A.; Kudriakov, S.; Velikorodny, A.

    2014-01-01

    Hydrogen risk mitigation based on catalytic recombiners cannot exclude the formation of flammable clouds during the course of a severe accident in a Nuclear Power Plant. Consequences of combustion processes have to be assessed based on existing knowledge and the state of the art in CFD combustion modelling. The Fukushima accidents have also revealed the need for taking into account the hydrogen explosion phenomena in risk management. Thus combustion modelling in a large-scale geometry is one of the remaining severe accident safety issues. At present, no combustion model exists that can accurately describe a combustion process inside a geometrical configuration typical of the Nuclear Power Plant (NPP) environment. Therefore, model development has to focus on adapting existing approaches or creating new ones capable of reliably predicting the possibility of flame acceleration in geometries of that type. A set of experiments performed previously in the RUT facility and the Heiss Dampf Reactor (HDR) facility is used as a validation database for development of a three-dimensional gas dynamic model for the simulation of hydrogen-air-steam combustion in large-scale geometries. The combustion regimes include slow deflagration, fast deflagration, and detonation. Modelling is based on the Reactive Discrete Equation Method (RDEM) where the flame is represented as an interface separating reactants and combustion products. The transport of the progress variable is governed by different flame surface wrinkling factors. The results of numerical simulation are presented together with the comparisons, critical discussions and conclusions. (authors)

  5. Large-Scale Structure Behind The Milky Way with ALFAZOA

    Science.gov (United States)

    Sanchez Barrantes, Monica; Henning, Patricia A.; Momjian, Emmanuel; McIntyre, Travis; Minchin, Robert F.

    2018-06-01

    The region of the sky behind the Milky Way (the Zone of Avoidance; ZOA) is not well studied due to high obscuration from gas and dust in our galaxy as well as stellar confusion, which results in a low detection rate of galaxies in this region. Because of this, little is known about the distribution of galaxies in the ZOA, and other all-sky redshift surveys have incomplete maps (e.g. the 2MASS Redshift Survey in NIR has a gap of 5-8 deg around the Galactic plane). There is still controversy about the dipole anisotropy calculated from the comparison between the CMB and galaxy and redshift surveys, in part due to the incomplete sky mapping and redshift depth of these surveys. Fortunately, there is no ZOA at radio wavelengths because such wavelengths can pass unimpeded through dust and are not affected by stellar confusion. Therefore, we can detect and make a map of the distribution of obscured galaxies that contain the 21cm neutral hydrogen emission line, and trace the large-scale structure across the Galactic plane. The Arecibo L-Band Feed Array Zone of Avoidance (ALFAZOA) survey is a blind HI survey for galaxies behind the Milky Way that covers more than 1000 square degrees of the sky, conducted in two phases: shallow (completed) and deep (ongoing). We show the results of the finished shallow phase of the survey, which mapped a region between galactic longitudes l = 30-75 deg at latitudes |b| < 10 deg, and detected 418 galaxies to about 12,000 km/s, including galaxy properties and mapped large-scale structure. We do the same for new results from the deep phase, which is ongoing and covers 30 < l < 75 deg with |b| < 2 deg for the inner galaxy and 175 < l < 207 deg with -2 < b < 1 deg for the outer galaxy.

  6. In situ vitrification large-scale operational acceptance test analysis

    International Nuclear Information System (INIS)

    Buelt, J.L.; Carter, J.G.

    1986-05-01

    A thermal treatment process is currently under study to provide possible enhancement of in-place stabilization of transuranic and chemically contaminated soil sites. The process is known as in situ vitrification (ISV). In situ vitrification is a remedial action process that destroys solid and liquid organic contaminants and incorporates radionuclides into a glass-like material that renders contaminants substantially less mobile and less likely to impact the environment. A large-scale operational acceptance test (LSOAT) was recently completed in which more than 180 t of vitrified soil were produced in each of three adjacent settings. The LSOAT demonstrated that the process conforms to the functional design criteria necessary for the large-scale radioactive test (LSRT) to be conducted following verification of the performance capabilities of the process. The energy requirements and vitrified block size, shape, and mass are sufficiently equivalent to those predicted by the ISV mathematical model to confirm its usefulness as a predictive tool. The LSOAT demonstrated an electrode replacement technique, which can be used if an electrode fails, and techniques have been identified to minimize air oxidation, thereby extending electrode life. A statistical analysis was employed during the LSOAT to identify graphite collars and an insulative surface as successful cold cap subsidence techniques. The LSOAT also showed that even under worst-case conditions, the off-gas system exceeds the flow requirements necessary to maintain a negative pressure on the hood covering the area being vitrified. The retention of simulated radionuclides and chemicals in the soil and off-gas system exceeds requirements so that projected emissions are one to two orders of magnitude below the maximum permissible concentrations of contaminants at the stack.

  7. Disinformative data in large-scale hydrological modelling

    Directory of Open Access Journals (Sweden)

    A. Kauffeldt

    2013-07-01

    Full Text Available Large-scale hydrological modelling has become an important tool for the study of global and regional water resources, climate impacts, and water-resources management. However, modelling efforts over large spatial domains are fraught with problems of data scarcity, uncertainties and inconsistencies between model forcing and evaluation data. Model-independent methods to screen and analyse data for such problems are needed. This study aimed at identifying data inconsistencies in global datasets using a pre-modelling analysis, inconsistencies that can be disinformative for subsequent modelling. The consistency between (i) basin areas for different hydrographic datasets, and (ii) climate data (precipitation and potential evaporation) and discharge data, was examined in terms of how well basin areas were represented in the flow networks and the possibility of water-balance closure. It was found that (i) most basins could be well represented in both gridded basin delineations and polygon-based ones, but some basins exhibited large area discrepancies between flow-network datasets and archived basin areas, (ii) basins exhibiting too-high runoff coefficients were abundant in areas where precipitation data were likely affected by snow undercatch, and (iii) the occurrence of basins exhibiting losses exceeding the potential-evaporation limit was strongly dependent on the potential-evaporation data, both in terms of numbers and geographical distribution. Some inconsistencies may be resolved by considering sub-grid variability in climate data, surface-dependent potential-evaporation estimates, etc., but further studies are needed to determine the reasons for the inconsistencies found. Our results emphasise the need for pre-modelling data analysis to identify dataset inconsistencies as an important first step in any large-scale study. Applying data-screening methods before modelling should also increase our chances to draw robust conclusions from subsequent
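
The two water-balance checks described in the abstract above can be sketched in a few lines. The basin names and numbers below are invented for illustration; the checks themselves (a runoff coefficient above 1, and apparent losses exceeding potential evaporation) are the ones the study applies:

```python
# Hypothetical pre-modelling screen. P = precipitation, Q = discharge,
# PET = potential evaporation, all in mm per year over the basin.
basins = {
    "A": {"P": 800.0, "Q": 300.0, "PET": 900.0},   # consistent
    "B": {"P": 400.0, "Q": 450.0, "PET": 700.0},   # runoff coefficient > 1
    "C": {"P": 1000.0, "Q": 100.0, "PET": 600.0},  # losses exceed PET
}

def screen(basin):
    """Flag water-balance inconsistencies that could disinform a model."""
    flags = []
    runoff_coefficient = basin["Q"] / basin["P"]
    if runoff_coefficient > 1.0:
        flags.append("runoff exceeds precipitation (possible snow undercatch)")
    if basin["P"] - basin["Q"] > basin["PET"]:
        flags.append("apparent losses exceed potential evaporation")
    return flags

report = {name: screen(b) for name, b in basins.items()}
```

Basins flagged by such a screen would be excluded or investigated before calibration, rather than allowed to distort model parameters.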

  8. Produção de mudas do tipo rebentão, utilizando coroas de três cultivares de abacaxi inoculadas com fungos micorrízicos Production of seedlings type suckers, using crowns of three cultivars of pineapple inoculated with mycorrhizal fungi

    Directory of Open Access Journals (Sweden)

    Paulo Cesar dos Santos

    2011-09-01

    ...an alternative for the production of pineapple seedlings, being most efficient in the cultivar 'Smooth Cayenne', which produced 26 seedlings on crowns cultivated up to 420 days after planting. One of the main obstacles to the development of pineapple cultivation in Brazil has been the lack of seedlings, in both quantity and quality, for propagation. Among the alternatives is the production of seedlings from the sprouting of fruit crowns, which are normally discarded by the consumer. Furthermore, the use of arbuscular mycorrhizal fungi (AMF) can be an alternative for improving seedling production, since these fungi can shorten the production time of seedlings of various fruit crops. In that sense, the objective of this work was to evaluate the production of seedlings of the sucker type, using the method of destruction of the apical meristem of the crown of pineapple plants inoculated with AMF. A randomized complete block design was used, in a 3x3 factorial, with three cultivars of pineapple ('Smooth Cayenne', 'Pérola' and 'Jupí') and three microbiological treatments (no inoculation, Glomus etunicatum, and a mixture of Glomus clarum and Gigaspora margarita), with four replications. The first emissions were recorded at 30, 60 and 90 days after planting for the cultivars 'Smooth Cayenne', 'Pérola' and 'Jupí', respectively. The 'Smooth Cayenne' pineapple produced 80 and 69% more seedlings than the cultivars 'Pérola' and 'Jupí', respectively. Among the microbiological treatments there was a significant difference in the suckers emitted, with the mixed treatment showing the greatest emission when compared with the G. etunicatum treatment. In nutritional assessments of the crowns, inoculation with the mixed treatment promoted increases of 85 and 66% for P, 22 and 13% for N, and 6 and 19% for K, compared to the G. etunicatum and control treatments, respectively. It is concluded that the production of suckers from crowns whose apical bud was removed is an alternative for the production of pineapple

  9. On soft limits of large-scale structure correlation functions

    International Nuclear Information System (INIS)

    Sagunski, Laura

    2016-08-01

    Large-scale structure surveys have the potential to become the leading probe for precision cosmology in the next decade. To extract valuable information on the cosmological evolution of the Universe from the observational data, it is of major importance to derive accurate theoretical predictions for the statistical large-scale structure observables, such as the power spectrum and the bispectrum of (dark) matter density perturbations. Hence, one of the greatest challenges of modern cosmology is to theoretically understand the non-linear dynamics of large-scale structure formation in the Universe from first principles. While analytic approaches to describe the large-scale structure formation are usually based on the framework of non-relativistic cosmological perturbation theory, we pursue another road in this thesis and develop methods to derive generic, non-perturbative statements about large-scale structure correlation functions. We study unequal- and equal-time correlation functions of density and velocity perturbations in the limit where one of their wavenumbers becomes small, that is, in the soft limit. In the soft limit, it is possible to link (N+1)-point and N-point correlation functions to non-perturbative 'consistency conditions'. These provide in turn a powerful tool to test fundamental aspects of the underlying theory at hand. In this work, we first rederive the (resummed) consistency conditions at unequal times by using the so-called eikonal approximation. The main appeal of the unequal-time consistency conditions is that they are solely based on symmetry arguments and thus are universal. Proceeding from this, we direct our attention to consistency conditions at equal times, which, on the other hand, depend on the interplay between soft and hard modes. We explore the existence and validity of equal-time consistency conditions within and beyond perturbation theory. For this purpose, we investigate the predictions for the soft limit of the

  10. On soft limits of large-scale structure correlation functions

    Energy Technology Data Exchange (ETDEWEB)

    Sagunski, Laura

    2016-08-15

    Large-scale structure surveys have the potential to become the leading probe for precision cosmology in the next decade. To extract valuable information on the cosmological evolution of the Universe from the observational data, it is of major importance to derive accurate theoretical predictions for the statistical large-scale structure observables, such as the power spectrum and the bispectrum of (dark) matter density perturbations. Hence, one of the greatest challenges of modern cosmology is to theoretically understand the non-linear dynamics of large-scale structure formation in the Universe from first principles. While analytic approaches to describe the large-scale structure formation are usually based on the framework of non-relativistic cosmological perturbation theory, we pursue another road in this thesis and develop methods to derive generic, non-perturbative statements about large-scale structure correlation functions. We study unequal- and equal-time correlation functions of density and velocity perturbations in the limit where one of their wavenumbers becomes small, that is, in the soft limit. In the soft limit, it is possible to link (N+1)-point and N-point correlation functions to non-perturbative 'consistency conditions'. These provide in turn a powerful tool to test fundamental aspects of the underlying theory at hand. In this work, we first rederive the (resummed) consistency conditions at unequal times by using the so-called eikonal approximation. The main appeal of the unequal-time consistency conditions is that they are solely based on symmetry arguments and thus are universal. Proceeding from this, we direct our attention to consistency conditions at equal times, which, on the other hand, depend on the interplay between soft and hard modes. We explore the existence and validity of equal-time consistency conditions within and beyond perturbation theory. For this purpose, we investigate the predictions for the soft limit of the

  11. A large-scale peer teaching programme - acceptance and benefit.

    Science.gov (United States)

    Schuetz, Elisabeth; Obirei, Barbara; Salat, Daniela; Scholz, Julia; Hann, Dagmar; Dethleffsen, Kathrin

    2017-08-01

    Involving students in the delivery of university teaching through peer-assisted learning formats is common practice. Publications on this topic exclusively focus on strictly defined situations within the curriculum and selected target groups. This study, in contrast, presents and evaluates a large-scale structured and quality-assured peer teaching programme, which offers diverse and targeted courses throughout the preclinical part of the medical curriculum. The large-scale peer teaching programme consists of subject-specific and interdisciplinary tutorials that address all scientific, physiological and anatomic subjects of the preclinical curriculum as well as tutorials with contents exceeding the formal curriculum. In the study year 2013/14 a total of 1,420 lessons were offered as part of the programme. Paper-based evaluations were conducted over the full range of courses. Acceptance and benefit of this peer teaching programme were evaluated in a retrospective study covering the period 2012 to 2014. Usage of tutorials by students who commenced their studies in 2012/13 (n=959) was analysed from 2012 till 2014. Based on the results of 13 first assessments in the preclinical subjects anatomy, biochemistry and physiology, the students were assigned to one of five groups. These groups were compared according to participation in the tutorials. To investigate the benefit of tutorials of the peer teaching programme, the results of biochemistry re-assessments of participants and non-participants of tutorials in the years 2012 till 2014 (n=188, 172 and 204, respectively) were compared using Kolmogorov-Smirnov and Chi-square tests as well as the effect size Cohen's d. Almost 70% of the students attended the voluntary additional programme during their preclinical studies. The students participating in the tutorials had achieved different levels of proficiency in first assessments. The acceptance of different kinds of tutorials appears to correlate with their

  12. The (in)effectiveness of Global Land Policies on Large-Scale Land Acquisition

    NARCIS (Netherlands)

    Verhoog, S.M.

    2014-01-01

    Due to current crises, large-scale land acquisition (LSLA) is becoming a topic of growing concern. Public data from the ‘Land Matrix Global Observatory’ project (Land Matrix 2014a) demonstrates that since 2000, 1,664 large-scale land transactions in low- and middle-income countries were reported,

  13. Macroecological factors explain large-scale spatial population patterns of ancient agriculturalists

    NARCIS (Netherlands)

    Xu, C.; Chen, B.; Abades, S.; Reino, L.; Teng, S.; Ljungqvist, F.C.; Huang, Z.Y.X.; Liu, X.

    2015-01-01

    Aim: It has been well demonstrated that the large-scale distribution patterns of numerous species are driven by similar macroecological factors. However, understanding of this topic remains limited when applied to our own species. Here we take a large-scale look at ancient agriculturalist

  14. Toward Instructional Leadership: Principals' Perceptions of Large-Scale Assessment in Schools

    Science.gov (United States)

    Prytula, Michelle; Noonan, Brian; Hellsten, Laurie

    2013-01-01

    This paper describes a study of the perceptions that Saskatchewan school principals have regarding large-scale assessment reform and their perceptions of how assessment reform has affected their roles as principals. The findings revealed that large-scale assessments, especially provincial assessments, have affected the principal in Saskatchewan…

  15. Managing Risk and Uncertainty in Large-Scale University Research Projects

    Science.gov (United States)

    Moore, Sharlissa; Shangraw, R. F., Jr.

    2011-01-01

    Both publicly and privately funded research projects managed by universities are growing in size and scope. Complex, large-scale projects (over $50 million) pose new management challenges and risks for universities. This paper explores the relationship between project success and a variety of factors in large-scale university projects. First, we…

  16. Agri-Environmental Resource Management by Large-Scale Collective Action: Determining KEY Success Factors

    Science.gov (United States)

    Uetake, Tetsuya

    2015-01-01

    Purpose: Large-scale collective action is necessary when managing agricultural natural resources such as biodiversity and water quality. This paper determines the key factors to the success of such action. Design/Methodology/Approach: This paper analyses four large-scale collective actions used to manage agri-environmental resources in Canada and…

  17. SCALE INTERACTION IN A MIXING LAYER. THE ROLE OF THE LARGE-SCALE GRADIENTS

    KAUST Repository

    Fiscaletti, Daniele; Attili, Antonio; Bisetti, Fabrizio; Elsinga, Gerrit E.

    2015-01-01

    from physical considerations we would expect the scales to interact in a qualitatively similar way within the flow and across different turbulent flows. Therefore, instead of the large-scale fluctuations, the large-scale gradients modulation of the small scales has been additionally investigated.

  18. Large-Scale Assessment, Rationality, and Scientific Management: The Case of No Child Left Behind

    Science.gov (United States)

    Roach, Andrew T.; Frank, Jennifer

    2007-01-01

    This article examines the ways in which NCLB and the movement towards large-scale assessment systems are based on Weber's concept of formal rationality and tradition of scientific management. Building on these ideas, the authors use Ritzer's McDonaldization thesis to examine some of the core features of large-scale assessment and accountability…

  19. A new system of labour management in African large-scale agriculture?

    DEFF Research Database (Denmark)

    Gibbon, Peter; Riisgaard, Lone

    2014-01-01

    This paper applies a convention theory (CT) approach to the analysis of labour management systems in African large-scale farming. The reconstruction of previous analyses of high-value crop production on large-scale farms in Africa in terms of CT suggests that, since 1980–95, labour management has...

  20. Managing the risks of a large-scale infrastructure project : The case of Spoorzone Delft

    NARCIS (Netherlands)

    Priemus, H.

    2012-01-01

    Risk management in large-scale infrastructure projects is attracting the attention of academics and practitioners alike. After a brief summary of the theoretical background, this paper describes how the risk analysis and risk management shaped up in a current large-scale infrastructure project in

  1. Decentralised stabilising controllers for a class of large-scale linear ...

    Indian Academy of Sciences (India)

    subsystems resulting from a new aggregation-decomposition technique. The method has been illustrated through a numerical example of a large-scale linear system consisting of three subsystems each of the fourth order. Keywords. Decentralised stabilisation; large-scale linear systems; optimal feedback control; algebraic ...

  2. Large-scale Flow and Transport of Magnetic Flux in the Solar ...

    Indian Academy of Sciences (India)

    tribpo

    Abstract. Horizontal large-scale velocity field describes horizontal displacement of the photospheric magnetic flux in zonal and meridian directions. The flow systems of solar plasma, constructed according to the velocity field, create the large-scale cellular-like patterns with up-flow in the center and the down-flow on the ...

  3. Reconstructing Information in Large-Scale Structure via Logarithmic Mapping

    Science.gov (United States)

    Szapudi, Istvan

    We propose to develop a new method to extract information from large-scale structure data combining two-point statistics and non-linear transformations; before, this information was available only with substantially more complex higher-order statistical methods. Initially, most of the cosmological information in large-scale structure lies in two-point statistics. With non-linear evolution, some of that useful information leaks into higher-order statistics. The PI and group have shown in a series of theoretical investigations how that leakage occurs, and explained the Fisher information plateau at smaller scales. This plateau means that even as more modes are added to the measurement of the power spectrum, the total cumulative information (loosely speaking, the inverse error bar) is not increasing. Recently we have shown in Neyrinck et al. (2009, 2010) that a logarithmic (and a related Gaussianization or Box-Cox) transformation on the non-linear Dark Matter or galaxy field reconstructs a surprisingly large fraction of this missing Fisher information of the initial conditions. This was predicted by the earlier wave-mechanical formulation of gravitational dynamics by Szapudi & Kaiser (2003). The present proposal is focused on working out the theoretical underpinning of the method to a point that it can be used in practice to analyze data. In particular, one needs to deal with the usual real-life issues of galaxy surveys, such as complex geometry, discrete sampling (Poisson or sub-Poisson noise), bias (linear or non-linear, deterministic or stochastic), redshift distortions, projection effects for 2D samples, and the effects of photometric redshift errors. We will develop methods for weak lensing and Sunyaev-Zeldovich power spectra as well, the latter specifically targeting Planck. In addition, we plan to investigate the question of residual higher-order information after the non-linear mapping, and possible applications for cosmology. Our aim will be to work out
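
    The logarithmic mapping at the heart of this record, A = ln(1 + δ), can be illustrated on a toy lognormal overdensity field (a common stand-in for gravitationally evolved density); this is a minimal sketch of the Neyrinck et al. transformation, not the proposal's analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy non-linear overdensity field: a lognormal field mimics the
# skewed, evolved dark matter density (delta >= -1 everywhere).
gauss = rng.normal(0.0, 1.0, size=(64, 64, 64))
delta = np.exp(gauss - gauss.var() / 2) - 1.0   # mean close to 0

# Logarithmic mapping: A = ln(1 + delta) Gaussianizes the field,
# moving information back into its two-point statistics.
A = np.log1p(delta)

# Sample skewness drops sharply after the mapping.
def skew(x):
    x = x - x.mean()
    return (x**3).mean() / (x**2).mean()**1.5

print(skew(delta), skew(A))
```

    For a lognormal field the mapping is exact: A recovers the underlying Gaussian field, which is why the transform restores so much Fisher information in the weakly non-linear regime.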

  4. Alignment between galaxies and large-scale structure

    International Nuclear Information System (INIS)

    Faltenbacher, A.; Li Cheng; White, Simon D. M.; Jing, Yi-Peng; Mao Shude; Wang Jie

    2009-01-01

    Based on the Sloan Digital Sky Survey DR6 (SDSS) and the Millennium Simulation (MS), we investigate the alignment between galaxies and large-scale structure. For this purpose, we develop two new statistical tools, namely the alignment correlation function and the cos(2θ)-statistic. The former is a two-dimensional extension of the traditional two-point correlation function and the latter is related to the ellipticity correlation function used for cosmic shear measurements. Both are based on the cross correlation between a sample of galaxies with orientations and a reference sample which represents the large-scale structure. We apply the new statistics to the SDSS galaxy catalog. The alignment correlation function reveals an overabundance of reference galaxies along the major axes of red, luminous (L ∼ L*) galaxies out to projected separations of 60 h⁻¹ Mpc. The signal increases with central galaxy luminosity. No alignment signal is detected for blue galaxies. The cos(2θ)-statistic yields very similar results. Starting from a MS semi-analytic galaxy catalog, we assign an orientation to each red, luminous and central galaxy, based on that of the central region of the host halo (with size similar to that of the stellar galaxy). As an alternative, we use the orientation of the host halo itself. We find a mean projected misalignment between a halo and its central region of ∼25 deg. The misalignment decreases slightly with increasing luminosity of the central galaxy. Using the orientations and luminosities of the semi-analytic galaxies, we repeat our alignment analysis on mock surveys of the MS. Agreement with the SDSS results is good if the central orientations are used. Predictions using the halo orientations as proxies for central galaxy orientations overestimate the observed alignment by more than a factor of 2. Finally, the large volume of the MS allows us to generate a two-dimensional map of the alignment correlation function, which shows the reference

  5. Large-Scale Spray Releases: Additional Aerosol Test Results

    Energy Technology Data Exchange (ETDEWEB)

    Daniel, Richard C.; Gauglitz, Phillip A.; Burns, Carolyn A.; Fountain, Matthew S.; Shimskey, Rick W.; Billing, Justin M.; Bontha, Jagannadha R.; Kurath, Dean E.; Jenks, Jeromy WJ; MacFarlan, Paul J.; Mahoney, Lenna A.

    2013-08-01

    One of the events postulated in the hazard analysis for the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak event involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids that behave as a Newtonian fluid. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and in processing facilities across the DOE complex. To expand the data set upon which the WTP accident and safety analyses were based, an aerosol spray leak testing program was conducted by Pacific Northwest National Laboratory (PNNL). PNNL’s test program addressed two key technical areas to improve the WTP methodology (Larson and Allen 2010). The first technical area was to quantify the role of slurry particles in small breaches where slurry particles may plug the hole and prevent high-pressure sprays. The results from an effort to address this first technical area can be found in Mahoney et al. (2012a). The second technical area was to determine aerosol droplet size distribution and total droplet volume from prototypic breaches and fluids, including sprays from larger breaches and sprays of slurries for which literature data are mostly absent. To address the second technical area, the testing program collected aerosol generation data at two scales, commonly referred to as small-scale and large-scale testing. The small-scale testing and resultant data are described in Mahoney et al. (2012b), and the large-scale testing and resultant data are presented in Schonewill et al. (2012). In tests at both scales, simulants were used

  6. Large-scale hydrogen production using nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Ryland, D.; Stolberg, L.; Kettner, A.; Gnanapragasam, N.; Suppiah, S. [Atomic Energy of Canada Limited, Chalk River, ON (Canada)

    2014-07-01

    For many years, Atomic Energy of Canada Limited (AECL) has been studying the feasibility of using nuclear reactors, such as the Supercritical Water-cooled Reactor, as an energy source for large scale hydrogen production processes such as High Temperature Steam Electrolysis and the Copper-Chlorine thermochemical cycle. Recent progress includes the augmentation of AECL's experimental capabilities by the construction of experimental systems to test high temperature steam electrolysis button cells at ambient pressure and temperatures up to 850 °C and CuCl/HCl electrolysis cells at pressures up to 7 bar and temperatures up to 100 °C. In parallel, detailed models of solid oxide electrolysis cells and the CuCl/HCl electrolysis cell are being refined and validated using experimental data. Process models are also under development to assess options for economic integration of these hydrogen production processes with nuclear reactors. Options for large-scale energy storage, including hydrogen storage, are also under study. (author)

  7. Parallel Framework for Dimensionality Reduction of Large-Scale Datasets

    Directory of Open Access Journals (Sweden)

    Sai Kiranmayee Samudrala

    2015-01-01

    Dimensionality reduction refers to a set of mathematical techniques used to reduce complexity of the original high-dimensional data, while preserving its selected properties. Improvements in simulation strategies and experimental data collection methods are resulting in a deluge of heterogeneous and high-dimensional data, which often makes dimensionality reduction the only viable way to gain qualitative and quantitative understanding of the data. However, existing dimensionality reduction software often does not scale to datasets arising in real-life applications, which may consist of thousands of points with millions of dimensions. In this paper, we propose a parallel framework for dimensionality reduction of large-scale data. We identify key components underlying the spectral dimensionality reduction techniques, and propose their efficient parallel implementation. We show that the resulting framework can be used to process datasets consisting of millions of points when executed on a 16,000-core cluster, which is beyond the reach of currently available methods. To further demonstrate applicability of our framework we perform dimensionality reduction of 75,000 images representing morphology evolution during manufacturing of organic solar cells in order to identify how processing parameters affect morphology evolution.
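
    The spectral kernels such frameworks parallelize can be illustrated serially on toy data; the sketch below runs classical MDS (double-centering of the squared-distance matrix followed by an eigendecomposition) on hypothetical 50-dimensional points with 2-D intrinsic structure. It is an illustration of the class of methods, not the paper's parallel implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy high-dimensional data: 200 points lying near a 2-D plane
# embedded in 50 dimensions, plus small isotropic noise.
basis = rng.normal(size=(2, 50))
coords = rng.normal(size=(200, 2))
X = coords @ basis + 0.01 * rng.normal(size=(200, 50))

# Classical MDS: double-center the squared-distance matrix and
# take the top eigenpairs of the resulting Gram matrix.
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
J = np.eye(200) - np.ones((200, 200)) / 200
B = -0.5 * J @ D2 @ J
vals, vecs = np.linalg.eigh(B)          # eigenvalues ascending
Y = vecs[:, -2:] * np.sqrt(vals[-2:])   # 2-D embedding

print(Y.shape)
```

    The pairwise-distance computation and the eigendecomposition are exactly the components that dominate runtime at scale, which is why a parallel framework targets them.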

  8. How large-scale subsidence affects stratocumulus transitions

    Directory of Open Access Journals (Sweden)

    J. J. van der Dussen

    2016-01-01

    Some climate modeling results suggest that the Hadley circulation might weaken in a future climate, causing a subsequent reduction in the large-scale subsidence velocity in the subtropics. In this study we analyze the cloud liquid water path (LWP) budget from large-eddy simulation (LES) results of three idealized stratocumulus transition cases, each with a different subsidence rate. As shown in previous studies, a reduced subsidence is found to lead to a deeper stratocumulus-topped boundary layer, an enhanced cloud-top entrainment rate and a delay in the transition of stratocumulus clouds into shallow cumulus clouds during their equatorward advection by the prevailing trade winds. The effect of a reduction of the subsidence rate can be summarized as follows. The initial deepening of the stratocumulus layer is partly counteracted by an enhanced absorption of solar radiation. After some hours the deepening of the boundary layer is accelerated by an enhancement of the entrainment rate. Because this is accompanied by a change in the cloud-base turbulent fluxes of moisture and heat, the net change in the LWP due to changes in the turbulent flux profiles is negligibly small.

  9. A large-scale chromosome-specific SNP discovery guideline.

    Science.gov (United States)

    Akpinar, Bala Ani; Lucas, Stuart; Budak, Hikmet

    2017-01-01

    Single-nucleotide polymorphisms (SNPs) are the most prevalent type of variation in genomes that are increasingly being used as molecular markers in diversity analyses, mapping and cloning of genes, and germplasm characterization. However, only a few studies reported large-scale SNP discovery in Aegilops tauschii, restricting their potential use as markers for the low-polymorphic D genome. Here, we report 68,592 SNPs found on the gene-related sequences of the 5D chromosome of Ae. tauschii genotype MvGB589 using genomic and transcriptomic sequences from seven Ae. tauschii accessions, including AL8/78, the only genotype for which a draft genome sequence is available at present. We also suggest a workflow to compare SNP positions in homologous regions on the 5D chromosome of Triticum aestivum, bread wheat, to mark single nucleotide variations between these closely related species. Overall, the identified SNPs define a density of 4.49 SNPs per kilobase, among the highest reported for the genic regions of Ae. tauschii so far. To our knowledge, this study also presents the first chromosome-specific SNP catalog in Ae. tauschii that should facilitate the association of these SNPs with morphological traits on chromosome 5D to be ultimately targeted for wheat improvement.
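
    The reported density implies the total length of genic sequence surveyed; a quick arithmetic check (the ≈15.3 Mb figure is derived from the stated numbers, not stated in the record itself):

```python
snps = 68_592
density_per_kb = 4.49            # SNPs per kilobase, as reported
surveyed_kb = snps / density_per_kb
print(round(surveyed_kb / 1000, 1), "Mb")   # ≈ 15.3 Mb of genic sequence
```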

  10. Large-Scale Multiantenna Multisine Wireless Power Transfer

    Science.gov (United States)

    Huang, Yang; Clerckx, Bruno

    2017-11-01

    Wireless Power Transfer (WPT) is expected to be a technology reshaping the landscape of low-power applications such as the Internet of Things, Radio Frequency Identification (RFID) networks, etc. Although there has been some progress towards multi-antenna multi-sine WPT design, the large-scale design of WPT, reminiscent of massive MIMO in communications, remains an open challenge. In this paper, we derive efficient multiuser algorithms based on a generalizable optimization framework, in order to design transmit sinewaves that maximize the weighted-sum/minimum rectenna output DC voltage. The study highlights the significant effect of the nonlinearity introduced by the rectification process on the design of waveforms in multiuser systems. Interestingly, in the single-user case, the optimal spatial domain beamforming, obtained prior to the frequency domain power allocation optimization, turns out to be Maximum Ratio Transmission (MRT). In contrast, in the general weighted-sum criterion maximization problem, the spatial domain beamforming optimization and the frequency domain power allocation optimization are coupled. Assuming channel hardening, low-complexity algorithms are proposed based on asymptotic analysis, to maximize the two criteria. The structure of the asymptotically optimal spatial domain precoder can be found prior to the optimization. The performance of the proposed algorithms is evaluated. Numerical results confirm the inefficiency of the linear model-based design for the single- and multi-user scenarios. It is also shown that, as nonlinear model-based designs, the proposed algorithms can benefit from an increasing number of sinewaves.
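
    The single-user result above, that the optimal spatial beamformer is Maximum Ratio Transmission, can be checked numerically: MRT aligns the unit-power beamformer with the conjugate channel, and by Cauchy-Schwarz no other unit-norm vector delivers more received power. The channel below is a randomly drawn toy example, not one from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
M = 64                                   # transmit antennas (large array)
h = (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2)

# Maximum Ratio Transmission: beamformer proportional to the
# conjugate channel, normalized to unit transmit power.
w_mrt = h.conj() / np.linalg.norm(h)
p_mrt = abs(h @ w_mrt) ** 2              # equals ||h||^2

# Any other unit-norm beamformer delivers less power.
w_rand = rng.normal(size=M) + 1j * rng.normal(size=M)
w_rand /= np.linalg.norm(w_rand)
p_rand = abs(h @ w_rand) ** 2

print(p_mrt, p_rand)
```

    In the multiuser weighted-sum problem this decoupling breaks down, which is why the paper resorts to asymptotic (channel-hardening) analysis for low-complexity designs.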

  11. HTS cables open the window for large-scale renewables

    International Nuclear Information System (INIS)

    Geschiere, A; Willen, D; Piga, E; Barendregt, P

    2008-01-01

    In a realistic approach to future energy consumption, the effects of sustainable power sources and the effects of growing welfare with increased use of electricity need to be considered. These factors lead to an increased transfer of electric energy over the networks. A dominant part of the energy need will come from expanded large-scale renewable sources. To use them efficiently across Europe, large energy transits between different countries are required. Bottlenecks in the existing infrastructure will be avoided by strengthening the network. For environmental reasons, more infrastructure will be built underground. Nuon is studying HTS technology as a component to solve these challenges. This technology offers a tremendously large power transport capacity as well as the possibility to reduce short-circuit currents, making integration of renewables easier. Furthermore, power transport will be possible at lower voltage levels, giving the opportunity to upgrade the existing network while re-using it. This will result in large cost savings while meeting future energy challenges. In a 6 km backbone structure in Amsterdam, Nuon wants to install a 50 kV HTS Triax cable for a significant increase of the transport capacity, while developing its capabilities. Nevertheless, several barriers have to be overcome.

  12. Financing a large-scale picture archival and communication system.

    Science.gov (United States)

    Goldszal, Alberto F; Bleshman, Michael H; Bryan, R Nick

    2004-01-01

    An attempt to finance a large-scale multi-hospital picture archival and communication system (PACS) solely from cost savings on current film operations is reported. A modified Request for Proposal described the technical requirements, PACS architecture, and performance targets. The Request for Proposal was complemented by a set of desired financial goals, the main one being the ability to use film savings to pay for the implementation and operation of the PACS. Financing of the enterprise-wide PACS was completed through an operating lease agreement including all PACS equipment, implementation, service, and support for an 8-year term, much like complete outsourcing. Equipment refreshes, both hardware and software, are included. Our agreement also linked the management of the digital imaging operation (PACS) and the traditional film printing, shifting the operational risks of continued printing and costs related to implementation delays to the PACS vendor. An additional optimization step eliminated the negative film budget variances at the beginning of the project, when PACS costs tend to be higher than film and film-related expenses. An enterprise-wide PACS has been adopted to achieve clinical workflow improvements and cost savings. PACS financing was based solely on film savings, which covered the entire digital solution (PACS) and any residual film printing. These goals were achieved while eliminating any over-budget scenarios, providing a non-negative cash flow in each year of the 8-year term.

  13. Large-scale assembly bias of dark matter halos

    Energy Technology Data Exchange (ETDEWEB)

    Lazeyras, Titouan; Musso, Marcello; Schmidt, Fabian, E-mail: titouan@mpa-garching.mpg.de, E-mail: mmusso@sas.upenn.edu, E-mail: fabians@mpa-garching.mpg.de [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85748 Garching (Germany)

    2017-03-01

    We present precise measurements of the assembly bias of dark matter halos, i.e. the dependence of halo bias on properties other than the mass, using curved 'separate universe' N-body simulations which effectively incorporate an infinite-wavelength matter overdensity into the background density. This method measures the LIMD (local-in-matter-density) bias parameters b_n in the large-scale limit. We focus on the dependence of the first two Eulerian biases b_1^E and b_2^E on four halo properties: the concentration, spin, mass accretion rate, and ellipticity. We quantitatively compare our results with previous works in which assembly bias was measured on fairly small scales. Despite this difference, our findings are in good agreement with previous results. We also look at the joint dependence of bias on two halo properties in addition to the mass. Finally, using the excursion set peaks model, we attempt to shed new insight on how assembly bias arises in this analytical model.
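
    The LIMD bias parameters measured by the separate-universe technique are defined through the response of the halo abundance to a long-wavelength matter overdensity; schematically (standard definition, not quoted from the record):

```latex
% Halo number density as a function of an infinite-wavelength
% matter overdensity \delta_L (the separate-universe background):
n_h(\delta_L) = \bar{n}_h \left( 1 + \sum_{n \ge 1} \frac{b_n}{n!}\, \delta_L^n \right),
\qquad
b_n = \frac{1}{\bar{n}_h}\,
      \left. \frac{\partial^n n_h}{\partial \delta_L^n} \right|_{\delta_L = 0}.
```

    Running otherwise identical simulations with different background values of δ_L and finite-differencing the halo abundances then yields b_1, b_2, ... directly in the large-scale limit.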

  14. Large-scale solvothermal synthesis of fluorescent carbon nanoparticles

    International Nuclear Information System (INIS)

    Ku, Kahoe; Park, Jinwoo; Kim, Nayon; Kim, Woong; Lee, Seung-Wook; Chung, Haegeun; Han, Chi-Hwan

    2014-01-01

    The large-scale production of high-quality carbon nanomaterials is highly desirable for a variety of applications. We demonstrate a novel synthetic route to the production of fluorescent carbon nanoparticles (CNPs) in large quantities via a single-step reaction. The simple heating of a mixture of benzaldehyde, ethanol and graphite oxide (GO) with residual sulfuric acid in an autoclave produced 7 g of CNPs with a quantum yield of 20%. The CNPs can be dispersed in various organic solvents; hence, they are easily incorporated into polymer composites in forms such as nanofibers and thin films. Additionally, we observed that the GO present during the CNP synthesis was reduced. The reduced GO (RGO) was sufficiently conductive (σ ≈ 282 S m⁻¹) that it could be used as an electrode material in a supercapacitor; in addition, it can provide excellent capacitive behavior and high-rate capability. This work will contribute greatly to the development of efficient synthetic routes to diverse carbon nanomaterials, including CNPs and RGO, that are suitable for a wide range of applications. (paper)

  15. Large-scale functional purification of recombinant HIV-1 capsid.

    Directory of Open Access Journals (Sweden)

    Magdeleine Hung

    During human immunodeficiency virus type-1 (HIV-1) virion maturation, capsid proteins undergo a major rearrangement to form a conical core that protects the viral nucleoprotein complexes. Mutations in the capsid sequence that alter the stability of the capsid core are deleterious to viral infectivity and replication. Recently, capsid assembly has become an attractive target for the development of a new generation of anti-retroviral agents. Drug screening efforts and subsequent structural and mechanistic studies require gram quantities of active, homogeneous and pure protein. Conventional means of laboratory purification of Escherichia coli expressed recombinant capsid protein rely on column chromatography steps that are not amenable to large-scale production. Here we present a function-based purification of wild-type and quadruple mutant capsid proteins, which relies on the inherent propensity of capsid protein to polymerize and depolymerize. This method does not require the packing of sizable chromatography columns and can generate double-digit gram quantities of functionally and biochemically well-behaved proteins with greater than 98% purity. We have used the purified capsid protein to characterize two known assembly inhibitors in our in-house developed polymerization assay and to measure their binding affinities. Our capsid purification procedure provides a robust method for purifying large quantities of a key protein in the HIV-1 life cycle, facilitating identification of the next generation of anti-HIV agents.

  16. Research on large-scale wind farm modeling

    Science.gov (United States)

    Ma, Longfei; Zhang, Baoqun; Gong, Cheng; Jiao, Ran; Shi, Rui; Chi, Zhongjun; Ding, Yifeng

    2017-01-01

    Due to the intermittent and fluctuating nature of wind energy, a large-scale wind farm connected to the grid affects the power system differently from traditional power plants. It is therefore necessary to establish an effective wind farm model to simulate and analyze the influence wind farms have on the grid, as well as the transient characteristics of the wind turbines when the grid is at fault. This in turn requires an effective wind turbine generator (WTG) model. As the doubly-fed VSCF wind turbine is currently the mainstream design, this article first reviews the research progress on doubly-fed VSCF wind turbines and then describes the detailed building process of the model. It then surveys common wind farm modeling methods and points out the problems encountered. Because WAMS is widely used in the power system, online parameter identification of the wind farm model based on the output characteristics of the wind farm becomes possible; the article focuses on interpreting this new idea of identification-based modeling of large wind farms, which can be realized by two concrete methods.

  17. LARGE-SCALE INDICATIVE MAPPING OF SOIL RUNOFF

    Directory of Open Access Journals (Sweden)

    E. Panidi

    2017-11-01

    In our study we estimate relationships between quantitative parameters of relief, the soil runoff regime, and the spatial distribution of radioactive pollutants in the soil. The study is conducted on a test arable area located in the basin of the upper Oka River (Orel region, Russia). Previously we collected a rich set of soil samples, which make it possible to investigate the redistribution of Chernobyl-origin cesium-137 in soil material and, as a consequence, the soil runoff magnitude at the sampling points. Here we describe and discuss the technique applied to large-scale mapping of the soil runoff. The technique is based upon measurement of cesium-137 radioactivity in different relief structures. The key stages are: allocation of the soil sampling points (we used very high resolution space imagery as supporting data); soil sample collection and analysis; calibration of the mathematical model (using the estimated background value of cesium-137 radioactivity); and automated compilation of the predictive map of the studied territory (a digital elevation model is used for this purpose, and cesium-137 radioactivity can be predicted using quantitative parameters of the relief). The maps can be used as supporting data for precision agriculture and for recultivation or melioration purposes.
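
    The calibration stage described above amounts to fitting a predictive model of cesium-137 activity from relief parameters; a minimal least-squares sketch, where the slope/elevation features, coefficients and activity values are all hypothetical illustrations rather than the study's data:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical sampling points: slope (degrees) and relative
# elevation (m) as relief predictors, cesium-137 activity (Bq/kg).
slope = rng.uniform(0, 10, 40)
rel_elev = rng.uniform(-5, 5, 40)
background = 120.0                       # assumed background activity
activity = background - 6.0 * slope + 2.5 * rel_elev + rng.normal(0, 3, 40)

# Calibrate a linear model: activity ~ slope + relative elevation.
# Lower activity than background indicates net soil loss (runoff).
X = np.column_stack([np.ones_like(slope), slope, rel_elev])
coef, *_ = np.linalg.lstsq(X, activity, rcond=None)

# Predict activity across relief parameters taken from a DEM grid;
# here just two illustrative points: flat vs. steep terrain.
grid = np.column_stack([np.ones(2), [0.0, 10.0], [0.0, 0.0]])
pred = grid @ coef
print(coef.round(1), pred.round(1))
```

    In the study itself the predictors would come from the digital elevation model and the model would be anchored to the estimated background cesium-137 value.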

  18. Reorganizing Complex Network to Improve Large-Scale Multiagent Teamwork

    Directory of Open Access Journals (Sweden)

    Yang Xu

    2014-01-01

    Large-scale multiagent teamwork has become popular in various domains. As in human society's infrastructure, agents coordinate with only some of the others, in a peer-to-peer complex network structure. Their organization has been proven to be a key factor influencing their performance. We have identified three key factors that affect team performance. First, complex network effects may promote team performance. Second, coordination interactions are always routed from their sources towards capable agents. Although they can be transferred across the network via different paths, their sources and sinks depend on the intrinsic nature of the team, which is irrelevant to the network connections. In addition, agents involved in the same plan often form a subteam and communicate with each other more frequently. Therefore, if the interactions between agents can be statistically recorded, we can set up an integrated network adjustment algorithm combining the three key factors. Based on our abstracted teamwork simulations and the coordination statistics, we implemented the adaptive reorganization algorithm. The experimental results support our design: the reorganized network is more capable of coordinating heterogeneous agents.

  19. Factors influencing the decommissioning of large-scale nuclear plants

    International Nuclear Information System (INIS)

    Large, J.H.

    1988-01-01

    The decision-making process involving the decommissioning of the UK graphite moderated, gas-cooled nuclear power stations is complex. There are timing, engineering, waste disposal, cost and lost generation capacity factors to consider and the overall decision of when and how to proceed with decommissioning may include political and public tolerance dimensions. For the final stage of decommissioning the nuclear industry could either completely dismantle the reactor island leaving a green-field site or, alternatively, the reactor island could be maintained indefinitely with additional super- and substructure containment. At this time the first of these options, or deferred decommissioning, prevails and with this the nuclear industry has expressed considerable confidence that the technology required will become available with passing time, that acceptable radioactive waste disposal methods and facilities will be available and that the eventual costs of decommissioning will not escalate without restraint. If the deferred decommissioning strategy is wrong and it is not possible to completely dismantle the reactor islands a century into the future, then it may be too late to effect sufficient longer term containment to maintain the reactor hulks in a reliable condition. With respect to the final decommissioning of large-scale nuclear plant, it is concluded that the nuclear industry does not know quite how to do it, when it will be attempted and when it will be completed, and they do not know how much it will eventually cost. (author)

  20. Locating inefficient links in a large-scale transportation network

    Science.gov (United States)

    Sun, Li; Liu, Like; Xu, Zhongzhi; Jie, Yang; Wei, Dong; Wang, Pu

    2015-02-01

    Based on data from a geographical information system (GIS) and daily commuting origin-destination (OD) matrices, we estimated the distribution of traffic flow in the San Francisco road network and studied Braess's paradox in a large-scale transportation network with realistic travel demand. We measured the variation of total travel time ΔT when a road segment is closed, and found that |ΔT| follows a power-law distribution whether ΔT > 0 or ΔT < 0. This implies that most roads have a negligible effect on the efficiency of the road network, while the failure of a few crucial links would result in severe travel delays, and closure of a few inefficient links would counter-intuitively reduce travel costs considerably. Generating three theoretical networks, we discovered that the heterogeneously distributed travel demand may be the origin of the observed power-law distributions of |ΔT|. Finally, a genetic algorithm was used to pinpoint inefficient link clusters in the road network. We found that closing specific road clusters would further improve the transportation efficiency.
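    The ΔT measurement above can be mimicked on a toy network. The study estimates equilibrium traffic flow from OD demand; the sketch below, with an invented three-node network, substitutes plain shortest-path times, so it can exhibit a crucial link (positive ΔT) and a negligible one (ΔT = 0) but not the negative-ΔT Braess case, which requires congestion effects.

```python
import heapq

def shortest_time(graph, src, dst):
    """Dijkstra over {node: [(neighbor, minutes), ...]}; returns travel time."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def delta_T(graph, od_pairs, closed):
    """Change in total travel time when one directed link is closed."""
    pruned = {u: [(v, w) for v, w in nbrs if (u, v) != closed]
              for u, nbrs in graph.items()}
    base = sum(shortest_time(graph, s, t) for s, t in od_pairs)
    return sum(shortest_time(pruned, s, t) for s, t in od_pairs) - base

# Invented toy network: a 5-minute direct road and an 8-minute detour via C.
roads = {"A": [("B", 5), ("C", 4)], "C": [("B", 4)], "B": []}
demand = [("A", "B")]
print(delta_T(roads, demand, closed=("A", "B")))  # → 3.0 (crucial link)
print(delta_T(roads, demand, closed=("A", "C")))  # → 0.0 (negligible link)
```

    Ranking every link by |ΔT| in this way is the brute-force analogue of the paper's genetic-algorithm search for inefficient link clusters.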

  1. Staghorn: An Automated Large-Scale Distributed System Analysis Platform

    Energy Technology Data Exchange (ETDEWEB)

    Gabert, Kasimir [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Burns, Ian [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Elliott, Steven [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kallaher, Jenna [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vail, Adam [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-09-01

    Conducting experiments on large-scale distributed computing systems is becoming significantly easier with the assistance of emulation. Researchers can now create a model of a distributed computing environment and then generate a virtual, laboratory copy of the entire system composed of potentially thousands of virtual machines, switches, and software. The use of real software, running at clock rate in full virtual machines, allows experiments to produce meaningful results without necessitating a full understanding of all model components. However, the ability to inspect and modify elements within these models is bound by the limitation that such modifications must compete with the model, either running in or alongside it. This inhibits entire classes of analyses from being conducted upon these models. We developed a mechanism to snapshot an entire emulation-based model as it is running. This allows us to "freeze time" and subsequently fork execution, replay execution, modify arbitrary parts of the model, or deeply explore the model. This snapshot includes capturing packets in transit and other input/output state along with the running virtual machines. We were able to build this system in Linux using Open vSwitch and Kernel Virtual Machines on top of Sandia's emulation platform Firewheel. This primitive opens the door to numerous subsequent analyses on models, including state space exploration, debugging distributed systems, performance optimizations, improved training environments, and improved experiment repeatability.

  2. Relativistic jets without large-scale magnetic fields

    Science.gov (United States)

    Parfrey, K.; Giannios, D.; Beloborodov, A.

    2014-07-01

    The canonical model of relativistic jets from black holes requires a large-scale ordered magnetic field to provide a significant magnetic flux through the ergosphere; in the Blandford-Znajek process, the jet power scales with the square of the magnetic flux. In many jet systems the presence of the required flux in the environment of the central engine is questionable. I will describe an alternative scenario, in which jets are produced by the continuous sequential accretion of small magnetic loops. The magnetic energy stored in these coronal flux systems is amplified by the differential rotation of the accretion disc and by the rotating spacetime of the black hole, leading to runaway field line inflation, magnetic reconnection in thin current layers, and the ejection of discrete bubbles of Poynting-flux-dominated plasma. For illustration I will show the results of general-relativistic force-free electrodynamic simulations of rotating black hole coronae, performed using a new resistivity model. The dissipation of magnetic energy by coronal reconnection events, as demonstrated in these simulations, is a potential source of the observed high-energy emission from accreting compact objects.

  3. A new large-scale plasma source with plasma cathode

    International Nuclear Information System (INIS)

    Yamauchi, K.; Hirokawa, K.; Suzuki, H.; Satake, T.

    1996-01-01

    A new large-scale plasma source (200 mm diameter) with a plasma cathode has been investigated. The plasma has good spatial uniformity, operates at low electron temperature, and is highly ionized under a relatively low gas pressure of about 10^-4 Torr. The plasma source consists of a plasma chamber and a plasma cathode generator. The plasma chamber has an anode which is 200 mm in diameter and 150 mm in length, is made of 304 stainless steel, and acts as a plasma expansion cup. A filament-cathode-like plasma, the ''plasma cathode'', is placed on the central axis of this source. To improve the plasma spatial uniformity in the plasma chamber, a disk-shaped, floating electrode is placed between the plasma chamber and the plasma cathode. The 200 mm diameter plasma is measured by using Langmuir probes. As a result, the discharge voltage is relatively low (30-120 V), the plasma space potential is almost equal to the discharge voltage and can be easily controlled, the electron temperature is several electron volts, the plasma density is about 10^10 cm^-3, and the plasma density varies by about 10% over a 100 mm diameter. (Author)

  4. FFTLasso: Large-Scale LASSO in the Fourier Domain

    KAUST Repository

    Bibi, Adel Aamer

    2017-11-09

    In this paper, we revisit the LASSO sparse representation problem, which has been studied and used in a variety of different areas, ranging from signal processing and information theory to computer vision and machine learning. In the vision community, it has found its way into many important applications, including face recognition, tracking, super resolution, and image denoising, to name a few. Despite advances in efficient sparse algorithms, solving large-scale LASSO problems remains a challenge. To circumvent this difficulty, people tend to downsample and subsample the problem (e.g. via dimensionality reduction) to maintain a manageably sized LASSO, which usually comes at the cost of losing solution accuracy. This paper proposes a novel circulant reformulation of the LASSO that lifts the problem to a higher dimension, where ADMM can be efficiently applied to its dual form. Because of this lifting, all optimization variables are updated using only basic element-wise operations, the most computationally expensive of which is a 1D FFT. In this way, there is no need for a linear system solver or matrix-vector multiplication. Since all operations in our FFTLasso method are element-wise, the subproblems are completely independent and can be trivially parallelized (e.g. on a GPU). The attractive computational properties of FFTLasso are verified by extensive experiments on synthetic and real data and on the face recognition task. They demonstrate that FFTLasso scales much more effectively than a state-of-the-art solver.
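    FFTLasso's circulant lifting and dual ADMM are more than a few lines, but the problem it solves, min_x ½‖Ax − b‖² + λ‖x‖₁, can be illustrated with the classic ISTA proximal-gradient iteration. This is not the paper's method; the sizes, seed, and sparse signal below are invented for the demo.

```python
import numpy as np

def ista_lasso(A, b, lam, n_iter=2000):
    """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L        # gradient step on the quadratic term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))           # invented overcomplete dictionary
x_true = np.zeros(100)
x_true[[3, 50, 97]] = [1.5, -2.0, 1.0]       # 3-sparse ground truth
b = A @ x_true
x_hat = ista_lasso(A, b, lam=0.1)
print(np.flatnonzero(np.abs(x_hat) > 0.5))   # indices of the recovered support
```

    ISTA needs only matrix-vector products per iteration; FFTLasso's contribution is replacing even those with element-wise updates and 1D FFTs in the lifted dual.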

  5. Addressing Criticisms of Large-Scale Marine Protected Areas

    Science.gov (United States)

    Ban, Natalie C; Fernandez, Miriam; Friedlander, Alan M; García-Borboroglu, Pablo; Golbuu, Yimnang; Guidetti, Paolo; Harris, Jean M; Hawkins, Julie P; Langlois, Tim; McCauley, Douglas J; Pikitch, Ellen K; Richmond, Robert H; Roberts, Callum M

    2018-01-01

    Abstract Designated large-scale marine protected areas (LSMPAs, 100,000 or more square kilometers) constitute over two-thirds of the approximately 6.6% of the ocean and approximately 14.5% of the exclusive economic zones within marine protected areas. Although LSMPAs have received support among scientists and conservation bodies for wilderness protection, regional ecological connectivity, and improving resilience to climate change, there are also concerns. We identified 10 common criticisms of LSMPAs along three themes: (1) placement, governance, and management; (2) political expediency; and (3) social–ecological value and cost. Through critical evaluation of scientific evidence, we discuss the value, achievements, challenges, and potential of LSMPAs in these arenas. We conclude that although some criticisms are valid and need addressing, none pertain exclusively to LSMPAs, and many involve challenges ubiquitous in management. We argue that LSMPAs are an important component of a diversified management portfolio that tempers potential losses, hedges against uncertainty, and enhances the probability of achieving sustainably managed oceans. PMID:29731514

  6. Episodic memory in aspects of large-scale brain networks

    Science.gov (United States)

    Jeong, Woorim; Chung, Chun Kee; Kim, June Sic

    2015-01-01

    Understanding human episodic memory in aspects of large-scale brain networks has become one of the central themes in neuroscience over the last decade. Traditionally, episodic memory was regarded as relying mostly on medial temporal lobe (MTL) structures. However, recent studies have suggested the involvement of a more widely distributed cortical network and the importance of its interactive roles in the memory process. Both direct and indirect neuro-modulations of the memory network have been tried in experimental treatments of memory disorders. In this review, we focus on the functional organization of the MTL and other neocortical areas in episodic memory. Task-related neuroimaging studies together with lesion studies suggested that specific sub-regions of the MTL are responsible for specific components of memory. However, recent studies have emphasized that connectivity within MTL structures and even their network dynamics with other cortical areas are essential in the memory process. Resting-state functional network studies have also revealed that memory function is subserved not only by the MTL system but also by a distributed network, particularly the default-mode network (DMN). Furthermore, researchers have begun to investigate memory networks throughout the entire brain, not restricted to a specific resting-state network (RSN). Altered patterns of functional connectivity (FC) among distributed brain regions were observed in patients with memory impairments. Recently, studies have shown that brain stimulation may impact memory through modulating functional networks, carrying future implications of a novel interventional therapy for memory impairment. PMID:26321939

  7. Protein homology model refinement by large-scale energy optimization.

    Science.gov (United States)

    Park, Hahnbeom; Ovchinnikov, Sergey; Kim, David E; DiMaio, Frank; Baker, David

    2018-03-20

    Proteins fold to their lowest free-energy structures, and hence the most straightforward way to increase the accuracy of a partially incorrect protein structure model is to search for the lowest-energy nearby structure. This direct approach has met with little success for two reasons: first, energy function inaccuracies can lead to false energy minima, resulting in model degradation rather than improvement; and second, even with an accurate energy function, the search problem is formidable because the energy only drops considerably in the immediate vicinity of the global minimum, and there are a very large number of degrees of freedom. Here we describe a large-scale energy optimization-based refinement method that incorporates advances in both search and energy function accuracy that can substantially improve the accuracy of low-resolution homology models. The method refined low-resolution homology models into correct folds for 50 of 84 diverse protein families and generated improved models in recent blind structure prediction experiments. Analyses of the basis for these improvements reveal contributions from both the improvements in conformational sampling techniques and the energy function.

  8. Contextual Compression of Large-Scale Wind Turbine Array Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Potter, Kristin C [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Clyne, John [National Center for Atmospheric Research (NCAR)

    2017-12-04

    Data sizes are becoming a critical issue, particularly for HPC applications. We have developed a user-driven lossy wavelet-based storage model to facilitate the analysis and visualization of large-scale wind turbine array simulations. The model stores data as heterogeneous blocks of wavelet coefficients, providing high-fidelity access to user-defined data regions believed to be the most salient, while providing lower-fidelity access to less salient regions on a block-by-block basis. In practice, by retaining the wavelet coefficients as a function of feature saliency, we have seen data reductions in excess of 94 percent, while retaining lossless information in the turbine-wake regions most critical to analysis and providing enough (low-fidelity) contextual information in the upper atmosphere to track incoming coherent turbulent structures. Our contextual wavelet compression approach has allowed us to deliver interactive visual analysis while giving the user control over where data loss, and thus reduction in accuracy, occurs in the analysis. We argue that this reduced but contextualized representation is a valid approach that encourages contextual data management.
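    As a toy analogue of the block-based saliency idea (not NREL's actual storage format), the sketch below applies one level of a Haar wavelet transform to an invented 1-D signal, keeps detail coefficients only in a designated "salient" region, and reconstructs: lossless there, approximate in the context region.

```python
import numpy as np

def haar_forward(x):
    """One level of the orthonormal Haar wavelet transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation (coarse) coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def haar_inverse(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

rng = np.random.default_rng(1)
signal = np.concatenate([np.sin(np.linspace(0, 6, 64)),  # smooth "context" flow
                         rng.normal(0.0, 1.0, 64)])      # turbulent "wake" region
a, d = haar_forward(signal)
salient = np.zeros(d.size, dtype=bool)
salient[32:] = True                       # detail pairs covering samples 64..127
recon = haar_inverse(a, np.where(salient, d, 0.0))
print(np.allclose(recon[64:], signal[64:]))      # → True: wake kept losslessly
print(np.abs(recon[:64] - signal[:64]).max())    # small but nonzero: context is lossy
```

    Dropping half the detail coefficients here saves 25% of the coefficients; the simulation data achieve far larger reductions because most blocks are smooth and many decomposition levels are used.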

  9. Open TG-GATEs: a large-scale toxicogenomics database

    Science.gov (United States)

    Igarashi, Yoshinobu; Nakatsu, Noriyuki; Yamashita, Tomoya; Ono, Atsushi; Ohno, Yasuo; Urushidani, Tetsuro; Yamada, Hiroshi

    2015-01-01

    Toxicogenomics focuses on assessing the safety of compounds using gene expression profiles. Gene expression signatures from large toxicogenomics databases are expected to perform better than small databases in identifying biomarkers for the prediction and evaluation of drug safety based on a compound's toxicological mechanisms in animal target organs. Over the past 10 years, the Japanese Toxicogenomics Project consortium (TGP) has been developing a large-scale toxicogenomics database consisting of data from 170 compounds (mostly drugs) with the aim of improving and enhancing drug safety assessment. Most of the data generated by the project (e.g. gene expression, pathology, lot number) are freely available to the public via Open TG-GATEs (Toxicogenomics Project-Genomics Assisted Toxicity Evaluation System). Here, we provide a comprehensive overview of the database, including both gene expression data and metadata, with a description of experimental conditions and procedures used to generate the database. Open TG-GATEs is available from http://toxico.nibio.go.jp/english/index.html. PMID:25313160

  10. Large-Scale Seismic Test Program at Hualien, Taiwan

    International Nuclear Information System (INIS)

    Tang, H.T.; Graves, H.L.; Yeh, Y.S.

    1991-01-01

    The Large-Scale Seismic Test (LSST) Program at Hualien, Taiwan, is a follow-on to the soil-structure interaction (SSI) experiments at Lotung, Taiwan. The planned SSI studies will be performed at a stiff soil site in Hualien, Taiwan, which historically has had slightly more destructive earthquakes than Lotung. The objectives of the LSST project are as follows: to obtain earthquake-induced SSI data at a stiff soil site having soil conditions similar to those of a prototypical nuclear power plant; to confirm the findings and methodologies validated against the Lotung soft soil SSI data for prototypical plant condition applications; to further validate the technical basis of realistic SSI analysis approaches; and to further support the resolution of the USI A-40 Seismic Design Criteria issue. These objectives will be accomplished through an integrated and carefully planned experimental program consisting of soil characterization, test model design and field construction, instrumentation layout and deployment, in-situ geophysical information collection, forced vibration tests, and synthesis of results and findings. The LSST is a joint effort among many interested parties. EPRI and Taipower are the organizers of the program and have the lead in planning and managing the program

  11. Optimization of large-scale industrial systems : an emerging method

    Energy Technology Data Exchange (ETDEWEB)

    Hammache, A.; Aube, F.; Benali, M.; Cantave, R. [Natural Resources Canada, Varennes, PQ (Canada). CANMET Energy Technology Centre

    2006-07-01

    This paper reviewed optimization methods of large-scale industrial production systems and presented a novel systematic multi-objective and multi-scale optimization methodology. The methodology was based on a combined local optimality search with global optimality determination, and advanced system decomposition and constraint handling. The proposed method focused on the simultaneous optimization of the energy, economy and ecology aspects of industrial systems (E^3-ISO). The aim of the methodology was to provide guidelines for decision-making strategies. The approach was based on evolutionary algorithms (EA) with specifications including hybridization of global optimality determination with a local optimality search; a self-adaptive algorithm to account for the dynamic changes of operating parameters and design variables occurring during the optimization process; interactive optimization; advanced constraint handling and decomposition strategy; and object-oriented programming and parallelization techniques. Flowcharts of the working principles of the basic EA were presented. It was concluded that the EA uses a novel decomposition and constraint handling technique to enhance the Pareto solution search procedure for multi-objective problems. 6 refs., 9 figs.
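    The Pareto solution search mentioned above rests on a dominance test between candidate solutions. A minimal sketch for minimization problems, with invented (energy use, cost, emissions) triples standing in for evaluated operating points:

```python
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in at least one
    (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points (O(n^2) filter, fine for small sets)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (energy use, cost, emissions) scores for candidate operating points.
candidates = [(3, 9, 2), (4, 7, 3), (5, 8, 1), (6, 6, 4), (4, 7, 5)]
print(pareto_front(candidates))   # → [(3, 9, 2), (4, 7, 3), (5, 8, 1), (6, 6, 4)]
```

    The last candidate is dropped because (4, 7, 3) matches it on the first two objectives and beats it on emissions; an EA maintains and refines exactly such a non-dominated set across generations.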

  12. Large-scale exploration and analysis of drug combinations.

    Science.gov (United States)

    Li, Peng; Huang, Chao; Fu, Yingxue; Wang, Jinan; Wu, Ziyin; Ru, Jinlong; Zheng, Chunli; Guo, Zihu; Chen, Xuetong; Zhou, Wei; Zhang, Wenjuan; Li, Yan; Chen, Jianxin; Lu, Aiping; Wang, Yonghua

    2015-06-15

    Drug combinations are a promising strategy for combating complex diseases by improving efficacy and reducing the corresponding side effects. Currently, a widely studied problem in pharmacology is to predict effective drug combinations, either through empirical screening in the clinic or purely experimental trials. However, large-scale prediction of drug combinations by a systems method is rarely considered. We report a systems pharmacology framework to predict drug combinations (PreDCs) based on a computational model, termed the probability ensemble approach (PEA), for analysis of both the efficacy and the adverse effects of drug combinations. First, a Bayesian network integrated with a similarity algorithm is developed to model the combinations from drug molecular and pharmacological phenotypes, and the predictions are then assessed with both clinical efficacy and adverse effects. We illustrate that PEA can predict the combination efficacy of drugs spanning different therapeutic classes with high specificity and sensitivity (AUC = 0.90), which was further validated by independent data or new experimental assays. PEA also evaluates the adverse effects (AUC = 0.95) quantitatively and detects the therapeutic indications for drug combinations. Finally, the PreDC database includes 1571 known and 3269 predicted optimal combinations as well as their potential side effects and therapeutic indications. The PreDC database is available at http://sm.nwsuaf.edu.cn/lsp/predc.php. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  13. Neighborhood Discriminant Hashing for Large-Scale Image Retrieval.

    Science.gov (United States)

    Tang, Jinhui; Li, Zechao; Wang, Meng; Zhao, Ruizhen

    2015-09-01

    With the proliferation of large-scale community-contributed images, hashing-based approximate nearest neighbor search in huge databases has aroused considerable interest from the fields of computer vision and multimedia in recent years because of its computational and memory efficiency. In this paper, we propose a novel hashing method named neighborhood discriminant hashing (NDH) to implement approximate similarity search. Different from previous work, we propose to learn a discriminant hashing function by exploiting local discriminative information, i.e., the labels that a sample can inherit from the neighbor samples it selects. The hashing function is expected to be orthogonal to avoid redundancy in the learned hashing bits as much as possible, while an information-theoretic regularization is jointly exploited using the maximum entropy principle. As a consequence, the learned hashing function is compact and nonredundant among bits, while each bit is highly informative. Extensive experiments are carried out on four publicly available data sets, and the comparison results demonstrate that the proposed NDH method outperforms state-of-the-art hashing techniques.
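    NDH learns its projection matrix from neighborhood labels; as a generic illustration of the pipeline it plugs into (binary codes plus Hamming ranking), here is an unsupervised sign-of-random-projection baseline with invented data. Everything below, including the random `W`, is a stand-in for the learned hashing function.

```python
import numpy as np

def hash_codes(X, W):
    """Binary codes: sign of projections onto the hashing hyperplanes."""
    return (X @ W > 0).astype(np.uint8)

rng = np.random.default_rng(0)
n, dim, bits = 1000, 32, 16
W = rng.standard_normal((dim, bits))   # NDH *learns* this from neighbor labels;
                                       # a random W is the unsupervised baseline
db = rng.standard_normal((n, dim))     # invented database vectors
codes = hash_codes(db, W)              # n x bits, packed into machine words in practice

query = db[42]                         # search for an item we know is present
qc = hash_codes(query[None, :], W)[0]
hamming = (codes ^ qc).sum(axis=1)     # XOR then popcount per database item
print(int(hamming[42]))                # → 0: identical vectors collide exactly
```

    Ranking the database by `hamming` and rechecking the top few candidates with exact distances is the usual approximate-search recipe; learned codes like NDH's aim to make that short candidate list contain the true neighbors.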

  14. Assessing large-scale wildlife responses to human infrastructure development.

    Science.gov (United States)

    Torres, Aurora; Jaeger, Jochen A G; Alonso, Juan Carlos

    2016-07-26

    Habitat loss and deterioration represent the main threats to wildlife species, and are closely linked to the expansion of roads and human settlements. Unfortunately, large-scale effects of these structures remain generally overlooked. Here, we analyzed the European transportation infrastructure network and found that 50% of the continent is within 1.5 km of transportation infrastructure. We present a method for assessing the impacts from infrastructure on wildlife, based on functional response curves describing density reductions in birds and mammals (e.g., road-effect zones), and apply it to Spain as a case study. The imprint of infrastructure extends over most of the country (55.5% in the case of birds and 97.9% for mammals), with moderate declines predicted for birds (22.6% of individuals) and severe declines predicted for mammals (46.6%). Despite certain limitations, we suggest the approach proposed is widely applicable to the evaluation of effects of planned infrastructure developments under multiple scenarios, and propose an internationally coordinated strategy to update and improve it in the future.

  15. Large-scale parallel genome assembler over cloud computing environment.

    Science.gov (United States)

    Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong

    2017-06-01

    The size of high throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay as you go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model and understanding the hardware environment that these applications require for good performance, both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability with competitive assembly quality compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure over traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of traditional HPC cluster.
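    GiGA distributes a de Bruijn graph over Giraph; the serial construction of such a graph from reads is simple enough to sketch. The reads and k below are invented toy data, not GiGA's code.

```python
from collections import defaultdict

def de_bruijn(reads, k):
    """Nodes are (k-1)-mers; one directed edge per k-mer occurrence in the reads."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])   # prefix -> suffix edge
    return graph

# Invented overlapping reads from the sequence ATGGCGTGCAAT.
reads = ["ATGGCGT", "GGCGTGC", "GTGCAAT"]
g = de_bruijn(reads, k=4)
print(g["ATG"])        # → ['TGG']
print(len(g["GGC"]))   # → 2: the 4-mer GGCG occurs in two reads
```

    Assembly then amounts to walking paths through this graph (Eulerian-path style); the distributed assembler's job is to do that walk when the graph spans billions of edges across many machines.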

  16. Discriminant WSRC for Large-Scale Plant Species Recognition

    Directory of Open Access Journals (Sweden)

    Shanwen Zhang

    2017-01-01

    Full Text Available In sparse representation based classification (SRC) and weighted SRC (WSRC), it is time-consuming to solve the global sparse representation problem. A discriminant WSRC (DWSRC) is proposed for large-scale plant species recognition, comprising two stages. First, several subdictionaries are constructed by dividing the dataset into several similar classes, and a subdictionary is chosen by the maximum similarity between the test sample and the typical sample of each similar class. Second, the weighted sparse representation of the test image is calculated with respect to the chosen subdictionary, and the leaf category is then assigned through the minimum reconstruction error. Different from traditional SRC and its improved approaches, we sparsely represent the test sample on a subdictionary whose base elements are the training samples of the selected similar class, instead of using a generic overcomplete dictionary over the entire training set. Thus, the complexity of solving the sparse representation problem is reduced. Moreover, DWSRC adapts to newly added leaf species without rebuilding the dictionary. Experimental results on the ICL plant leaf database show that the method has low computational complexity and a high recognition rate and can be clearly interpreted.
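    The "minimum reconstruction error" step can be caricatured without any sparse solver: classify by least-squares reconstruction residual against each class's sub-dictionary. This replaces the weighted ℓ1 coding DWSRC actually performs with plain least squares, and the "species" data below are invented.

```python
import numpy as np

def class_residual(D, y):
    """Least-squares reconstruction error of y on the dictionary columns D."""
    coef, *_ = np.linalg.lstsq(D, y, rcond=None)
    return float(np.linalg.norm(D @ coef - y))

rng = np.random.default_rng(0)
# Invented "species": each class's leaf features lie near its own 2-D subspace.
basis = {c: rng.standard_normal((20, 2)) for c in ("oak", "maple")}
train = {c: B @ rng.standard_normal((2, 8)) for c, B in basis.items()}  # 8 leaves/class

test_leaf = basis["maple"] @ np.array([1.0, -0.5])   # drawn from maple's subspace
pred = min(train, key=lambda c: class_residual(train[c], test_leaf))
print(pred)   # → maple: its sub-dictionary reconstructs the sample best
```

    DWSRC's speedup comes from picking one promising sub-dictionary first, so the (weighted, sparse) coding is solved over a few dozen atoms instead of the full training set.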

  17. Applicability of vector processing to large-scale nuclear codes

    International Nuclear Information System (INIS)

    Ishiguro, Misako; Harada, Hiroo; Matsuura, Toshihiko; Okuda, Motoi; Ohta, Fumio; Umeya, Makoto.

    1982-03-01

    To meet the growing computational requirements in JAERI, introduction of a high-speed computer with a vector processing capability (a vector processor) is desirable in the near future. To make effective use of a vector processor, appropriate optimization of nuclear codes for a pipelined-vector architecture is vital, which will pose new problems concerning code development and maintenance. In this report, vector processing efficiency is assessed for large-scale nuclear codes by examining the following items: 1) The present feature of computational load in JAERI is analyzed by compiling computer utilization statistics. 2) Vector processing efficiency is estimated for the ten most heavily used nuclear codes by analyzing their dynamic behavior on a scalar machine. 3) Vector processing efficiency is measured for another five nuclear codes using the current vector processors, FACOM 230-75 APU and CRAY-1. 4) The effectiveness of applying a high-speed vector processor to nuclear codes is evaluated by taking account of the characteristics of JAERI jobs. Problems of vector processors are also discussed from the viewpoints of code performance and ease of use. (author)

  18. Comprehensive large-scale assessment of intrinsic protein disorder.

    Science.gov (United States)

    Walsh, Ian; Giollo, Manuel; Di Domenico, Tomás; Ferrari, Carlo; Zimmermann, Olav; Tosatto, Silvio C E

    2015-01-15

    Intrinsically disordered regions are key for the function of numerous proteins. Due to the difficulties in experimental disorder characterization, many computational predictors have been developed with various disorder flavors. Their performance is generally measured on small sets mainly from experimentally solved structures, e.g. Protein Data Bank (PDB) chains. MobiDB has only recently started to collect disorder annotations from multiple experimental structures. MobiDB annotates disorder for UniProt sequences, allowing us to conduct the first large-scale assessment of fast disorder predictors on 25 833 different sequences with X-ray crystallographic structures. In addition to a comprehensive ranking of predictors, this analysis produced the following interesting observations. (i) The predictors cluster according to their disorder definition, with a consensus giving more confidence. (ii) Previous assessments appear over-reliant on data annotated at the PDB chain level and performance is lower on entire UniProt sequences. (iii) Long disordered regions are harder to predict. (iv) Depending on the structural and functional types of the proteins, differences in prediction performance of up to 10% are observed. The datasets are available from Web site at URL: http://mobidb.bio.unipd.it/lsd. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  19. Biotechnological lignite conversion - a large-scale concept

    Energy Technology Data Exchange (ETDEWEB)

    Reich-Walber, M.; Meyrahn, H.; Felgener, G.W. [Rheinbraun AG, Koeln (Germany). Fuel Technology and Lab. Dept.

    1997-12-31

    Concerning the research on biotechnological lignite upgrading, Rheinbraun's overall objective is the large-scale production of liquid and gaseous products for the energy and chemical/refinery sectors. The presentation outlines Rheinbraun's technical concept for electricity production on the basis of biotechnologically solubilized lignite. A first rough cost estimate, based on the assumptions described in detail in the paper and compared with the latest power plant generation, shows the general cost efficiency of this technology despite the additional costs of coal solubilization. The main reasons are low-cost process techniques for coal conversion on the one hand and cost reductions mainly in power plant technology (more efficient combustion processes and simplified gas clean-up) but also in coal transport (easy fuel handling) on the other hand. Moreover, it is hoped that an extended range of products will make it possible to widen the fields of lignite application. The presentation also points out that there is still a huge gap between this scenario and reality owing to limited microbiological knowledge. To close this gap, Rheinbraun started a research project supported by the North-Rhine Westphalian government in 1995. Several leading biotechnological companies and institutes in Germany and the United States are involved in the project. The latest results of the current project will be presented in the paper. This includes fundamental research activities in the field of microbial coal conversion as well as investigations into bioreactor design and product treatment (dewatering, deashing and desulphurization). (orig.)

  1. Fast Component Pursuit for Large-Scale Inverse Covariance Estimation.

    Science.gov (United States)

    Han, Lei; Zhang, Yu; Zhang, Tong

    2016-08-01

    The maximum likelihood estimation (MLE) for the Gaussian graphical model, which is also known as the inverse covariance estimation problem, has gained increasing interest recently. Most existing works assume that inverse covariance estimators contain sparse structure and then construct models with ℓ1 regularization. In this paper, different from existing works, we study the inverse covariance estimation problem from another perspective by efficiently modeling the low-rank structure in the inverse covariance, which is assumed to be a combination of a low-rank part and a diagonal matrix. One motivation for this assumption is that the low-rank structure is common in many applications, including climate and financial analysis; another is that such an assumption reduces the computational complexity of computing the inverse. Specifically, we propose an efficient COmponent Pursuit (COP) method to obtain the low-rank part, where each component can be sparse. For optimization, the COP method greedily learns a rank-one component in each iteration by maximizing the log-likelihood. Moreover, the COP algorithm enjoys several appealing properties, including the existence of an efficient solution in each iteration and a theoretical guarantee on the convergence of this greedy approach. Experiments on large-scale synthetic and real-world datasets with thousands to millions of variables show that the COP method is faster than state-of-the-art techniques for the inverse covariance estimation problem while achieving comparable log-likelihood on test data.
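
The computational payoff of the low-rank-plus-diagonal assumption comes from the Woodbury identity: inverting diag(d) + U Uᵀ costs O(pk²) rather than O(p³). The numpy sketch below illustrates that identity on invented dimensions; it is not the COP algorithm itself:

```python
import numpy as np

def lowrank_plus_diag_inverse(d, U):
    # Woodbury identity:
    # (D + U U^T)^{-1} = D^{-1} - D^{-1} U (I + U^T D^{-1} U)^{-1} U^T D^{-1}
    # Only a k x k system is inverted, where k = rank of the low-rank part.
    Dinv_U = U / d[:, None]                            # D^{-1} U, shape (p, k)
    k = U.shape[1]
    core = np.linalg.inv(np.eye(k) + U.T @ Dinv_U)     # (I + U^T D^{-1} U)^{-1}
    return np.diag(1.0 / d) - Dinv_U @ core @ Dinv_U.T

rng = np.random.default_rng(0)
p, k = 6, 2
d = rng.uniform(1.0, 2.0, size=p)     # positive diagonal part
U = rng.normal(size=(p, k))           # low-rank factor
A = np.diag(d) + U @ U.T
Ainv = lowrank_plus_diag_inverse(d, U)   # matches np.linalg.inv(A)
```

For large p, `np.diag(1.0 / d)` would itself be kept implicit; it is materialized here only so the result can be compared against the dense inverse.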

  2. Episodic memory in aspects of large-scale brain networks

    Directory of Open Access Journals (Sweden)

    Woorim eJeong

    2015-08-01

    Full Text Available Understanding human episodic memory in aspects of large-scale brain networks has become one of the central themes in neuroscience over the last decade. Traditionally, episodic memory was regarded as mostly relying on medial temporal lobe (MTL) structures. However, recent studies have suggested the involvement of a more widely distributed cortical network and the importance of its interactive role in the memory process. Both direct and indirect neuro-modulations of the memory network have been tried in experimental treatments of memory disorders. In this review, we focus on the functional organization of the MTL and other neocortical areas in episodic memory. Task-related neuroimaging studies together with lesion studies suggested that specific sub-regions of the MTL are responsible for specific components of memory. However, recent studies have emphasized that connectivity within MTL structures and even their network dynamics with other cortical areas are essential in the memory process. Resting-state functional network studies also have revealed that memory function is subserved not only by the MTL system but also by a distributed network, particularly the default-mode network. Furthermore, researchers have begun to investigate memory networks throughout the entire brain, not restricted to the specific resting-state network. Altered patterns of functional connectivity among distributed brain regions were observed in patients with memory impairments. Recently, studies have shown that brain stimulation may impact memory through modulating functional networks, carrying future implications of a novel interventional therapy for memory impairment.

  3. Large-Scale Seismic Test Program at Hualien, Taiwan

    International Nuclear Information System (INIS)

    Tang, H.T.; Graves, H.L.; Chen, P.C.

    1992-01-01

    The Large-Scale Seismic Test (LSST) Program at Hualien, Taiwan, is a follow-on to the soil-structure interaction (SSI) experiments at Lotung, Taiwan. The planned SSI studies will be performed at a stiff soil site in Hualien, Taiwan, that historically has experienced slightly more destructive earthquakes than Lotung. The LSST is a joint effort among many interested parties. Electric Power Research Institute (EPRI) and Taipower are the organizers of the program and have the lead in planning and managing the program. Other organizations participating in the LSST program are the US Nuclear Regulatory Commission, the Central Research Institute of Electric Power Industry, the Tokyo Electric Power Company, the Commissariat A L'Energie Atomique, Electricite de France and Framatome. The LSST was initiated in January 1990, and is envisioned to be five years in duration. Based on the assumption of stiff soil, confirmed by soil boring and geophysical results, the test model was designed to provide data needed for SSI studies covering: free-field input, nonlinear soil response, non-rigid body SSI, torsional response, kinematic interaction, spatial incoherency and other effects. Taipower had the lead in design of the test model and received significant input from other LSST members. Questions raised by LSST members concerned embedment effects, model stiffness, base shear, and openings for equipment. This paper describes progress in site preparation, design and construction of the model, and development of an instrumentation plan.

  4. Implicit solvers for large-scale nonlinear problems

    International Nuclear Information System (INIS)

    Keyes, David E; Reynolds, Daniel R; Woodward, Carol S

    2006-01-01

    Computational scientists are grappling with increasingly complex, multi-rate applications that couple such physical phenomena as fluid dynamics, electromagnetics, radiation transport, chemical and nuclear reactions, and wave and material propagation in inhomogeneous media. Parallel computers with large storage capacities are paving the way for high-resolution simulations of coupled problems; however, hardware improvements alone will not prove enough to enable simulations based on brute-force algorithmic approaches. To accurately capture nonlinear couplings between dynamically relevant phenomena, often while stepping over rapid adjustments to quasi-equilibria, simulation scientists are increasingly turning to implicit formulations that require a discrete nonlinear system to be solved for each time step or steady state solution. Recent advances in iterative methods have made fully implicit formulations a viable option for solution of these large-scale problems. In this paper, we overview one of the most effective iterative methods, Newton-Krylov, for nonlinear systems and point to software packages with its implementation. We illustrate the method with an example from magnetically confined plasma fusion and briefly survey other areas in which implicit methods have bestowed important advantages, such as allowing high-order temporal integration and providing a pathway to sensitivity analyses and optimization. Lastly, we overview algorithm extensions under development motivated by current SciDAC applications
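
As a concrete illustration of the Newton-Krylov approach this abstract surveys, SciPy ships a Jacobian-free implementation: each Newton step solves the linear system with a Krylov method (GMRES-type), using only finite-difference directional derivatives of the residual. The boundary-value problem and grid below are invented for illustration:

```python
import numpy as np
from scipy.optimize import newton_krylov

# Residual of a toy 1-D nonlinear reaction-diffusion steady state:
# u'' - u^3 + 1 = 0 on a uniform grid, with u = 0 at both ends.
n = 50
h = 1.0 / (n + 1)

def residual(u):
    upad = np.concatenate(([0.0], u, [0.0]))                # Dirichlet boundaries
    lap = (upad[:-2] - 2.0 * upad[1:-1] + upad[2:]) / h**2  # discrete u''
    return lap - u**3 + 1.0

# Jacobian-free Newton-Krylov: no Jacobian matrix is ever formed;
# Krylov iterations only need products J*v, approximated by
# finite differences of the residual.
u = newton_krylov(residual, np.zeros(n), f_tol=1e-9)
```

The same pattern scales to the coupled multiphysics systems the paper discusses, where forming and storing an explicit Jacobian would be prohibitive.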

  5. Impact of large-scale tides on cosmological distortions via redshift-space power spectrum

    Science.gov (United States)

    Akitsu, Kazuyuki; Takada, Masahiro

    2018-03-01

    Although large-scale perturbations beyond a finite-volume survey region are not direct observables, these affect measurements of clustering statistics of small-scale (subsurvey) perturbations in large-scale structure, compared with the ensemble average, via the mode-coupling effect. In this paper we show that a large-scale tide induced by scalar perturbations causes apparent anisotropic distortions in the redshift-space power spectrum of galaxies in a way depending on an alignment between the tide, the wave vector of small-scale modes and the line-of-sight direction. Using the perturbation theory of structure formation, we derive a response function of the redshift-space power spectrum to the large-scale tide. We then investigate the impact of the large-scale tide on estimation of cosmological distances and the redshift-space distortion parameter via the measured redshift-space power spectrum for a hypothetical large-volume survey, based on the Fisher matrix formalism. To do this, we treat the large-scale tide as a signal, rather than an additional source of statistical error, and show that the resulting degradation in the parameter constraints can be recovered if we employ a prior on the rms tide amplitude expected for the standard cold dark matter (CDM) model. We also discuss whether the large-scale tide can be constrained at an accuracy better than the CDM prediction, if the effects up to a larger wave number in the nonlinear regime can be included.
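
For Gaussian errors, the Fisher-matrix formalism invoked above reduces to F_ij = Σ_k (∂P/∂θ_i)(∂P/∂θ_j)/σ_k². The numpy sketch below runs that sum for an assumed two-parameter toy power-spectrum model P(k) = A kⁿ; the model, k-range, and error bars are illustrative, not the paper's survey setup:

```python
import numpy as np

# Toy Fisher forecast for theta = (A, n) in the model P(k) = A * k**n.
k = np.linspace(0.01, 0.2, 40)        # illustrative k-bins
A, n = 1.0, -1.5                      # fiducial parameter values
P = A * k**n
sigma = 0.05 * P                      # assumed 5% Gaussian error per bin

# Partial derivatives of the model with respect to each parameter.
dP_dA = k**n
dP_dn = A * k**n * np.log(k)
grads = np.stack([dP_dA, dP_dn])      # shape (2, n_bins)

# F_ij = sum_k (dP/dtheta_i)(dP/dtheta_j) / sigma_k^2
F = grads @ np.diag(1.0 / sigma**2) @ grads.T
cov = np.linalg.inv(F)                # forecast parameter covariance
```

Adding a prior, as the paper does for the rms tide amplitude, amounts to adding 1/σ_prior² to the corresponding diagonal entry of F before inverting.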

  6. Large-scale CO2 storage — Is it feasible?

    Directory of Open Access Journals (Sweden)

    Johansen H.

    2013-06-01

    Full Text Available CCS is generally estimated to have to account for about 20% of the reduction of CO2 emissions to the atmosphere. This paper focuses on the technical aspects of CO2 storage, even if the CCS challenge is equally dependent upon finding viable international solutions to a wide range of economic, political and cultural issues. It has already been demonstrated that it is technically possible to store adequate amounts of CO2 in the subsurface (Sleipner, InSalah, Snøhvit). The large-scale storage challenge (several Gigatons of CO2 per year) is more an issue of minimizing cost without compromising safety, and of making international regulations. The storage challenge may be split into 4 main parts: 1) finding reservoirs with adequate storage capacity, 2) making sure that the sealing capacity above the reservoir is sufficient, 3) building the infrastructure for transport, drilling and injection, and 4) setting up and performing the necessary monitoring activities. More than 150 years of worldwide experience from the production of oil and gas is an important source of competence for CO2 storage. The storage challenge is however different in three important aspects: 1) the storage activity results in pressure increase in the subsurface, 2) there is no production of fluids that give important feedback on reservoir performance, and 3) the monitoring requirement will have to extend for a much longer time into the future than what is needed during oil and gas production. An important property of CO2 is that its behaviour in the subsurface is significantly different from that of oil and gas. CO2 in contact with water is reactive and corrosive, and may impose great damage on both man-made and natural materials, if proper precautions are not executed. On the other hand, the long-term effect of most of these reactions is that a large amount of CO2 will become immobilized and permanently stored as solid carbonate minerals. The reduced opportunity for direct monitoring of fluid samples

  7. Large-Scale Spray Releases: Initial Aerosol Test Results

    Energy Technology Data Exchange (ETDEWEB)

    Schonewill, Philip P.; Gauglitz, Phillip A.; Bontha, Jagannadha R.; Daniel, Richard C.; Kurath, Dean E.; Adkins, Harold E.; Billing, Justin M.; Burns, Carolyn A.; Davis, James M.; Enderlin, Carl W.; Fischer, Christopher M.; Jenks, Jeromy WJ; Lukins, Craig D.; MacFarlan, Paul J.; Shutthanandan, Janani I.; Smith, Dennese M.

    2012-12-01

    One of the events postulated in the hazard analysis at the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids with Newtonian fluid behavior. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and across processing facilities in the DOE complex. Two key technical areas were identified where testing results were needed to improve the technical basis by reducing the uncertainty due to extrapolating existing literature results. The first technical need was to quantify the role of slurry particles in small breaches where the slurry particles may plug and result in substantially reduced, or even negligible, respirable fraction formed by high-pressure sprays. The second technical need was to determine the aerosol droplet size distribution and volume from prototypic breaches and fluids, specifically including sprays from larger breaches with slurries where data from the literature are scarce. To address these technical areas, small- and large-scale test stands were constructed and operated with simulants to determine aerosol release fractions and generation rates from a range of breach sizes and geometries. The properties of the simulants represented the range of properties expected in the WTP process streams and included water, sodium salt solutions, slurries containing boehmite or gibbsite, and a hazardous chemical simulant. The effect of anti-foam agents was assessed with most of the simulants. Orifices included round holes and

  8. Large-scale CO2 storage — Is it feasible?

    Science.gov (United States)

    Johansen, H.

    2013-06-01

    CCS is generally estimated to have to account for about 20% of the reduction of CO2 emissions to the atmosphere. This paper focuses on the technical aspects of CO2 storage, even if the CCS challenge is equally dependent upon finding viable international solutions to a wide range of economic, political and cultural issues. It has already been demonstrated that it is technically possible to store adequate amounts of CO2 in the subsurface (Sleipner, InSalah, Snøhvit). The large-scale storage challenge (several Gigatons of CO2 per year) is more an issue of minimizing cost without compromising safety, and of making international regulations. The storage challenge may be split into 4 main parts: 1) finding reservoirs with adequate storage capacity, 2) making sure that the sealing capacity above the reservoir is sufficient, 3) building the infrastructure for transport, drilling and injection, and 4) setting up and performing the necessary monitoring activities. More than 150 years of worldwide experience from the production of oil and gas is an important source of competence for CO2 storage. The storage challenge is however different in three important aspects: 1) the storage activity results in pressure increase in the subsurface, 2) there is no production of fluids that give important feedback on reservoir performance, and 3) the monitoring requirement will have to extend for a much longer time into the future than what is needed during oil and gas production. An important property of CO2 is that its behaviour in the subsurface is significantly different from that of oil and gas. CO2 in contact with water is reactive and corrosive, and may impose great damage on both man-made and natural materials, if proper precautions are not executed. On the other hand, the long-term effect of most of these reactions is that a large amount of CO2 will become immobilized and permanently stored as solid carbonate minerals. The reduced opportunity for direct monitoring of fluid samples close to the

  9. A new large-scale manufacturing platform for complex biopharmaceuticals.

    Science.gov (United States)

    Vogel, Jens H; Nguyen, Huong; Giovannini, Roberto; Ignowski, Jolene; Garger, Steve; Salgotra, Anil; Tom, Jennifer

    2012-12-01

    Complex biopharmaceuticals, such as recombinant blood coagulation factors, are addressing critical medical needs and represent a growing multibillion-dollar market. For commercial manufacturing of such, sometimes inherently unstable, molecules it is important to minimize product residence time in non-ideal milieu in order to obtain acceptable yields and consistently high product quality. Continuous perfusion cell culture allows minimization of residence time in the bioreactor, but also brings unique challenges in product recovery, which requires innovative solutions. In order to maximize yield, process efficiency, facility and equipment utilization, we have developed, scaled-up and successfully implemented a new integrated manufacturing platform in commercial scale. This platform consists of a (semi-)continuous cell separation process based on a disposable flow path and integrated with the upstream perfusion operation, followed by membrane chromatography on large-scale adsorber capsules in rapid cycling mode. Implementation of the platform at commercial scale for a new product candidate led to a yield improvement of 40% compared to the conventional process technology, while product quality has been shown to be more consistently high. Over 1,000,000 L of cell culture harvest have been processed with 100% success rate to date, demonstrating the robustness of the new platform process in GMP manufacturing. While membrane chromatography is well established for polishing in flow-through mode, this is its first commercial-scale application for bind/elute chromatography in the biopharmaceutical industry and demonstrates its potential in particular for manufacturing of potent, low-dose biopharmaceuticals. Copyright © 2012 Wiley Periodicals, Inc.

  10. Challenges in Managing Trustworthy Large-scale Digital Science

    Science.gov (United States)

    Evans, B. J. K.

    2017-12-01

    The increased use of large-scale international digital science has opened a number of challenges for managing, handling, using and preserving scientific information. The large volumes of information are driven by three main categories - model outputs, including coupled models and ensembles; data products that have been processed to a level of usability; and increasingly heuristically driven data analysis. These data products are increasingly the ones that are usable by broad communities, far in excess of the raw instrument outputs. The data, software and workflows are then shared and replicated to allow broad use at an international scale, which places further demands on infrastructure to support reliable management of the information across distributed resources. Users necessarily rely on these underlying "black boxes" in order to be productive and produce new scientific outcomes. The software for these systems depends on computational infrastructure, interconnected software systems, and information capture systems. This ranges from the fundamentals of the reliability of the compute hardware, through system software stacks and libraries, to the model software. Due to these complexities and the capacity of the infrastructure, there is an increased emphasis on transparency of approach and robustness of methods over full reproducibility. Furthermore, with large-volume data management, it is increasingly difficult to store the historical versions of all model and derived data. Instead, the emphasis is on the ability to access the updated products and on confidence that previous outcomes remain relevant and can be updated with the new information. We will discuss these challenges and some of the approaches underway that are being used to address these issues.

  11. Large-scale filaments associated with Milky Way spiral arms

    Science.gov (United States)

    Wang, Ke; Testi, Leonardo; Ginsburg, Adam; Walmsley, C. Malcolm; Molinari, Sergio; Schisano, Eugenio

    2015-07-01

    The ubiquity of filamentary structure at various scales throughout the Galaxy has triggered a renewed interest in their formation, evolution, and role in star formation. The largest filaments can reach up to Galactic scale as part of the spiral arm structure. However, such large-scale filaments are hard to identify systematically due to limitations in identification methodology (i.e. as extinction features). We present a new approach to directly search for the largest, coldest, and densest filaments in the Galaxy, making use of sensitive Herschel Hi-GAL (Herschel Infrared Galactic Plane Survey) data complemented by spectral line cubes. We present a sample of the nine most prominent Herschel filaments, including six identified from a pilot search field plus three from outside the field. These filaments measure 37-99 pc long and 0.6-3.0 pc wide with masses (0.5-8.3) × 10⁴ M⊙, and beam-averaged (28 arcsec, or 0.4-0.7 pc) peak H2 column densities of (1.7-9.3) × 10²² cm⁻². The bulk of the filaments are relatively cold (17-21 K), while some local clumps have a dust temperature up to 25-47 K. All the filaments are located within ≲60 pc from the Galactic mid-plane. Comparing the filaments to a recent spiral arm model incorporating the latest parallax measurements, we find that 7/9 of them reside within arms, but most are close to arm edges. These filaments are comparable in length to the Galactic scaleheight and therefore are not simply part of a grander turbulent cascade.
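
The quoted sizes, column densities, and masses can be cross-checked with back-of-the-envelope arithmetic, M ≈ μ m_H N(H2) × (length × width); the inputs below are picked from the middle of the quoted ranges, so this is a consistency sketch rather than a reproduction of the paper's measurements:

```python
# Order-of-magnitude check of a filament mass from its column density and size.
pc_cm = 3.086e18            # parsec in cm
m_H = 1.674e-24             # hydrogen-atom mass, g
mu_H2 = 2.8                 # mean molecular weight per H2 molecule, including He
M_sun = 1.989e33            # solar mass, g

length = 50 * pc_cm         # ~50 pc long (quoted range 37-99 pc)
width = 1.0 * pc_cm         # ~1 pc wide (quoted range 0.6-3.0 pc)
N_H2 = 5e22                 # cm^-2, inside the quoted (1.7-9.3) x 10^22 range

mass_g = N_H2 * mu_H2 * m_H * (length * width)
mass_Msun = mass_g / M_sun  # lands at a few x 10^4 M_sun
```

The result (~6 × 10⁴ M⊙) sits inside the quoted (0.5-8.3) × 10⁴ M⊙ range, confirming that the tabulated quantities are mutually consistent.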

  12. Spatiotemporal dynamics of large-scale brain activity

    Science.gov (United States)

    Neuman, Jeremy

    Understanding the dynamics of large-scale brain activity is a tough challenge. One reason for this is the presence of an incredible amount of complexity arising from having roughly 100 billion neurons connected via 100 trillion synapses. Because of the extremely high number of degrees of freedom in the nervous system, the question of how the brain manages to properly function and remain stable, yet also be adaptable, must be posed. Neuroscientists have identified many ways the nervous system makes this possible, of which synaptic plasticity is possibly the most notable one. On the other hand, it is vital to understand how the nervous system also loses stability, resulting in neuropathological diseases such as epilepsy, a disease which affects 1% of the population. In the following work, we seek to answer some of these questions from two different perspectives. The first uses mean-field theory applied to neuronal populations, where the variables of interest are the percentages of active excitatory and inhibitory neurons in a network, to consider how the nervous system responds to external stimuli, self-organizes and generates epileptiform activity. The second method uses statistical field theory, in the framework of single neurons on a lattice, to study the concept of criticality, an idea borrowed from physics which posits that in some regime the brain operates in a collectively stable or marginally stable manner. This will be examined in two different neuronal networks with self-organized criticality serving as the overarching theme for the union of both perspectives. One of the biggest problems in neuroscience is the question of to what extent certain details are significant to the functioning of the brain. These details give rise to various spatiotemporal properties that at the smallest of scales explain the interaction of single neurons and synapses and at the largest of scales describe, for example, behaviors and sensations. In what follows, we will shed some

  13. Large-scale HTS bulks for magnetic application

    International Nuclear Information System (INIS)

    Werfel, Frank N.; Floegel-Delor, Uta; Riedel, Thomas; Goebel, Bernd; Rothfeld, Rolf; Schirrmeister, Peter; Wippich, Dieter

    2013-01-01

    Highlights: ► ATZ Company has constructed about 130 HTS magnet systems. ► Multi-seeded YBCO bulks point the way for large-scale application. ► Levitation platforms demonstrate “superconductivity” to a great public audience (100 years anniversary). ► HTS magnetic bearings show forces up to 1 t. ► Modular HTS maglev vacuum cryostats are tested for train demonstrators in Brazil, China and Germany. -- Abstract: ATZ Company has constructed about 130 HTS magnet systems using high-Tc bulk magnets. A key feature in scaling up is the fabrication of melt-textured, multi-seeded large YBCO bulks with three to eight seeds. Beyond levitation, magnetization, trapped field and hysteresis, we review system engineering parameters of HTS magnetic linear and rotational bearings such as compactness, cryogenics, power density, efficiency and robust construction. We examine mobile compact YBCO bulk magnet platforms cooled with LN2 and a Stirling cryo-cooler for demonstrator use. Compact cryostats for Maglev train operation contain 24 pieces of 3-seed bulks and can levitate 2500–3000 N at 10 mm above a permanent magnet (PM) track. The effective magnetic distance of the thermally insulated bulks is only 2 mm; the stored 2.5 l of LN2 allows more than 24 h of operation without refilling. 34 HTS Maglev vacuum cryostats have been manufactured, tested and operated in Germany, China and Brazil. The magnetic levitation load-to-weight ratio is more than 15, and by group-assembling the HTS cryostats under vehicles, total levitated loads of up to 5 t above a magnetic track have been achieved.

  14. Seasonal dependence of large-scale Birkeland currents

    International Nuclear Information System (INIS)

    Fujii, R.; Iijima, T.; Potemra, T.A.; Sugiura, M.

    1981-01-01

    The seasonal dependence of large-scale Birkeland currents has been determined from the analysis of vector magnetic field data acquired by the TRIAD satellite in the northern hemisphere. Statistical characteristics of single sheet (i.e., net currents) and double sheet Birkeland currents were determined from 555 TRIAD passes during the summer, and 408 passes during the winter (more complicated multiple-sheet current systems were not included in this study). The average Kp value for the summer events is 1.9 and for the winter events is 2.0. The principal results include the following: (1) The single sheet Birkeland currents are statistically observed more often than the double sheet currents in the dayside of the auroral zone during any season. The single sheet currents are also observed more often in the summer than in the winter (as much as 2 to 3 times as often depending upon the MLT sector). (2) The intensities of the single and double sheet Birkeland currents on the dayside, from approximately 1000 MLT to 1800 MLT, are larger during the summer (in comparison to winter) by a factor of about 2. (3) The intensities of the double sheet Birkeland currents in the nightside (the dominant system in this local time) do not show a significant difference from summer to winter. (4) The single and double sheet currents in the dayside (between 0600 and 1800 MLT) appear at higher latitudes (by about 1° to 3°) during the summer in comparison to the winter. These characteristics suggest that the Birkeland current intensities are controlled by the ionospheric conductivity in the polar region. The greater occurrence of single sheet Birkeland currents during the summertime supports the suggestion that these currents close via the polar cap when the conductivity there is sufficiently high to permit it.

  15. Large-scale multielectrode recording and stimulation of neural activity

    International Nuclear Information System (INIS)

    Sher, A.; Chichilnisky, E.J.; Dabrowski, W.; Grillo, A.A.; Grivich, M.; Gunning, D.; Hottowy, P.; Kachiguine, S.; Litke, A.M.; Mathieson, K.; Petrusca, D.

    2007-01-01

    Large circuits of neurons are employed by the brain to encode and process information. How this encoding and processing is carried out is one of the central questions in neuroscience. Since individual neurons communicate with each other through electrical signals (action potentials), the recording of neural activity with arrays of extracellular electrodes is uniquely suited for the investigation of this question. Such recordings provide the combination of the best spatial (individual neurons) and temporal (individual action-potentials) resolutions compared to other large-scale imaging methods. Electrical stimulation of neural activity in turn has two very important applications: it enhances our understanding of neural circuits by allowing active interactions with them, and it is a basis for a large variety of neural prosthetic devices. Until recently, the state-of-the-art in neural activity recording systems consisted of several dozen electrodes with inter-electrode spacing ranging from tens to hundreds of microns. Using silicon microstrip detector expertise acquired in the field of high-energy physics, we created a unique neural activity readout and stimulation framework that consists of high-density electrode arrays, multi-channel custom-designed integrated circuits, a data acquisition system, and data-processing software. Using this framework we developed a number of neural readout and stimulation systems: (1) a 512-electrode system for recording the simultaneous activity of as many as hundreds of neurons, (2) a 61-electrode system for electrical stimulation and readout of neural activity in retinas and brain-tissue slices, and (3) a system with telemetry capabilities for recording neural activity in the intact brain of awake, naturally behaving animals. We will report on these systems, their various applications to the field of neurobiology, and novel scientific results obtained with some of them. We will also outline future directions
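
A standard first step in processing such extracellular multielectrode recordings is threshold-crossing spike detection against a robust noise estimate. The sketch below is generic textbook practice on synthetic data (invented sampling rate, amplitudes, and threshold), not the authors' 512-electrode pipeline:

```python
import numpy as np

rng = np.random.default_rng(7)
fs = 20000                                  # assumed sampling rate, Hz
trace = rng.normal(0.0, 1.0, size=fs)       # 1 s of unit-variance noise
spike_times = [3000, 9000, 15000]
for t in spike_times:
    trace[t:t + 5] -= 10.0                  # inject negative-going spikes

# Robust noise estimate from the median absolute deviation; spikes are
# rare enough not to bias it.
sigma = np.median(np.abs(trace)) / 0.6745

# Detect samples crossing below -5 sigma, then keep only the first
# sample of each contiguous run as the spike time.
crossings = np.where(trace < -5.0 * sigma)[0]
detected = crossings[np.insert(np.diff(crossings) > 1, 0, True)]
```

Spike sorting (assigning detected events to individual neurons by waveform) would follow this step; detection alone already recovers the injected event times here.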

  16. Large-scale projects between regional planning and environmental protection

    International Nuclear Information System (INIS)

    Schmidt, G.

    1984-01-01

    The first part of the work discusses the current law of land-use planning, municipal and technical construction planning, and licensing under the atomic energy law and the federal law on immission protection. In the second part, some theses suggesting modifications are submitted. In the sector of land-use planning, substantial contributions to the protection of the environment can only be expected from programs and plans (aims). For the environmental conflicts likely to arise from large-scale projects (nuclear power plant, fossil-fuel power plant), this holds true above all for site selection plans. They bear on environmental protection in that they presuppose thorough examination of facts, help to recognize possible conflicts at an early date, and provide a frame for solving those problems. Municipal construction planning is guided by the following principles: environmental protection is an equivalent planning target; environmental data and facts and their methodical processing play a fundamental part, as they constitute the basis of evaluation. Under the rules and regulations of the federal law on immission protection, section 5, number 2 - prevention of nuisances - operators are obliged to take preventive care against risks. That section is not concerned with planning or distribution. Neither does the licensing of nuclear plants have planning character. So far as the legal preconditions of licensing are fulfilled, the scope for rejecting an application under section 7, subsection 2 of the atomic energy law in view of site selection and the need for a plant hardly carries any practical weight. (orig./HP) [de

  17. Large-scale HTS bulks for magnetic application

    Energy Technology Data Exchange (ETDEWEB)

    Werfel, Frank N., E-mail: werfel@t-online.de [Adelwitz Technologiezentrum GmbH (ATZ), Rittergut Adelwitz 16, 04886 Arzberg-Adelwitz (Germany); Floegel-Delor, Uta; Riedel, Thomas; Goebel, Bernd; Rothfeld, Rolf; Schirrmeister, Peter; Wippich, Dieter [Adelwitz Technologiezentrum GmbH (ATZ), Rittergut Adelwitz 16, 04886 Arzberg-Adelwitz (Germany)

    2013-01-15

    Highlights: ► ATZ Company has constructed about 130 HTS magnet systems. ► Multi-seeded YBCO bulks point the way for large-scale application. ► Levitation platforms demonstrate “superconductivity” to a great public audience (100 years anniversary). ► HTS magnetic bearings show forces up to 1 t. ► Modular HTS maglev vacuum cryostats are tested for train demonstrators in Brazil, China and Germany. -- Abstract: ATZ Company has constructed about 130 HTS magnet systems using high-Tc bulk magnets. A key feature in scaling up is the fabrication of melt-textured, multi-seeded large YBCO bulks with three to eight seeds. Beyond levitation, magnetization, trapped field and hysteresis, we review system engineering parameters of HTS magnetic linear and rotational bearings such as compactness, cryogenics, power density, efficiency and robust construction. We examine mobile compact YBCO bulk magnet platforms cooled with LN2 and a Stirling cryo-cooler for demonstrator use. Compact cryostats for Maglev train operation contain 24 pieces of 3-seed bulks and can levitate 2500–3000 N at 10 mm above a permanent magnet (PM) track. The effective magnetic distance of the thermally insulated bulks is only 2 mm; the stored 2.5 l of LN2 allows more than 24 h of operation without refilling. 34 HTS Maglev vacuum cryostats have been manufactured, tested and operated in Germany, China and Brazil. The magnetic levitation load-to-weight ratio is more than 15, and by group-assembling the HTS cryostats under vehicles, total levitated loads of up to 5 t above a magnetic track have been achieved.

  18. Coupled binary embedding for large-scale image retrieval.

    Science.gov (United States)

    Zheng, Liang; Wang, Shengjin; Tian, Qi

    2014-08-01

    Visual matching is a crucial step in image retrieval based on the bag-of-words (BoW) model. In the baseline method, two keypoints are considered a matching pair if their SIFT descriptors are quantized to the same visual word. However, the SIFT visual word has two limitations. First, it loses most of its discriminative power during quantization. Second, SIFT only describes the local texture feature. Both drawbacks impair the discriminative power of the BoW model and lead to false positive matches. To tackle this problem, this paper proposes to embed multiple binary features at the indexing level. To model the correlation between features, a multi-IDF scheme is introduced, through which different binary features are coupled into the inverted file. We show that matching verification methods based on binary features, such as Hamming embedding, can be effectively incorporated into our framework. As an extension, we explore the fusion of a binary color feature into image retrieval. The joint integration of the SIFT visual word and binary features greatly enhances the precision of visual matching, reducing the impact of false positive matches. Our method is evaluated through extensive experiments on four benchmark datasets (Ukbench, Holidays, DupImage, and MIR Flickr 1M). We show that our method significantly improves the baseline approach. In addition, large-scale experiments indicate that the proposed method requires acceptable memory usage and query time compared with other approaches. Further, when a global color feature is integrated, our method yields performance competitive with the state of the art.
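    The verification idea described above, accept a candidate pair only if the keypoints share a visual word and their binary signatures lie within a Hamming threshold, can be sketched as follows. This is a toy illustration with invented 8-bit signatures and a hypothetical threshold, not the paper's actual index layout (a real inverted file would avoid the brute-force scan):

    ```python
    # Minimal sketch of binary-feature match verification (Hamming-embedding style).
    # Signatures, visual-word ids, and the threshold below are made up for illustration.

    def hamming(a: int, b: int) -> int:
        """Hamming distance between two binary signatures stored as ints."""
        return bin(a ^ b).count("1")

    def verified_matches(query, index, threshold):
        """Keep candidate pairs that share a visual word AND have close signatures.

        query and index map keypoint id -> (visual_word, signature).
        Brute force here; an inverted file would only scan same-word entries.
        """
        matches = []
        for qid, (qword, qsig) in query.items():
            for did, (dword, dsig) in index.items():
                if qword == dword and hamming(qsig, dsig) <= threshold:
                    matches.append((qid, did))
        return matches

    # Toy data: same word with a close signature, same word with a distant one,
    # and a different visual word.
    query = {0: (7, 0b10110010)}
    index = {1: (7, 0b10110011),   # distance 1  -> verified
             2: (7, 0b01001101),   # distance 8  -> rejected at threshold 4
             3: (5, 0b10110010)}   # different visual word -> rejected
    matches = verified_matches(query, index, threshold=4)
    ```

    Quantization alone would accept keypoints 1 and 2; the signature check discards the false positive.
    
    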

  19. Large-scale urban point cloud labeling and reconstruction

    Science.gov (United States)

    Zhang, Liqiang; Li, Zhuqiang; Li, Anjian; Liu, Fangyu

    2018-04-01

    The large number of object categories and the many overlapping or closely neighboring objects in large-scale urban scenes pose great challenges for point cloud classification. In this paper, a novel framework is proposed for the classification and reconstruction of airborne laser scanning point cloud data. To label point clouds, we present a rectified linear units neural network, named ReLu-NN, in which rectified linear units (ReLu) replace the traditional sigmoid as the activation function in order to speed up convergence. Since the features of the point cloud are sparse, we reduce the number of active neurons through dropout to avoid over-fitting during training. The set of feature descriptors for each 3D point is encoded through self-taught learning, forming a discriminative feature representation that is taken as the input of the ReLu-NN. The segmented building points are consolidated through an edge-aware point set resampling algorithm, and then reconstructed into 3D lightweight models using the 2.5D contouring method (Zhou and Neumann, 2010). Compared with deep learning approaches, the ReLu-NN can easily classify unorganized point clouds without rasterizing the data, and it does not need a large number of training samples. Most of the parameters in the network are learned, so the cost of intensive parameter tuning is significantly reduced. Experimental results on various datasets demonstrate that the proposed framework achieves better performance than other related algorithms in terms of classification accuracy and reconstruction quality.
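    The two ingredients the record highlights, ReLU activations for faster convergence and dropout against over-fitting, can be shown in a minimal forward pass. This plain-Python sketch uses invented layer sizes and weights; it is not the ReLu-NN architecture itself:

    ```python
    import random

    def relu(v):
        """Rectified linear unit: max(0, x) elementwise."""
        return [x if x > 0.0 else 0.0 for x in v]

    def dropout(v, rate, rng):
        """Inverted dropout: zero each unit with probability `rate`,
        rescale survivors by 1/(1-rate) so the expected activation is unchanged."""
        keep = 1.0 - rate
        return [x / keep if rng.random() < keep else 0.0 for x in v]

    def dense(x, w, b):
        """Fully connected layer: one output per weight row."""
        return [sum(wi * xi for wi, xi in zip(row, x)) + bi
                for row, bi in zip(w, b)]

    # One hidden layer: dense -> ReLU, then dropout at training time only.
    rng = random.Random(0)
    x = [1.0, -2.0, 0.5]                         # a 3-feature descriptor (invented)
    w = [[0.2, -0.1, 0.4], [-0.3, 0.5, 0.1]]     # 2 hidden units
    b = [0.0, -1.0]
    h = relu(dense(x, w, b))                     # ≈ [0.6, 0.0]
    h_train = dropout(h, rate=0.5, rng=rng)
    ```

    Unlike a sigmoid, ReLU does not saturate for positive inputs, which is the convergence-speed argument the paper relies on.
    
    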

  20. Biopolitics problems of large-scale hydraulic engineering construction

    International Nuclear Information System (INIS)

    Romanenko, V.D.

    1997-01-01

    The 20th century, which will enter history as a century of large-scale hydraulic engineering construction, is coming to a close. On the European continent alone, 517 large reservoirs (detaining more than 1000 million km³ of water) were constructed between 1901 and 1985. In the Danube basin, a large number of reservoirs for power stations, navigation sluices and other hydraulic engineering structures have been constructed. Among them, more than 40 especially large objects are located along the main bed of the river. A number of hydro-complexes, such as Dnieper–Danube, Gabcikovo, Danube–Oder–Labe (project), Danube–Tissa, Danube–Adriatic Sea (project), Danube–Aegean Sea and Danube–Black Sea, have entered operation or are at the design stage. Hydraulic engineering construction was especially intensive in Ukraine. On its territory, several large reservoirs were constructed on the Dnieper and Yuzhny Bug, which have strongly changed the hydrological regime of the rivers. Summarizing the results of river-system regulation in Ukraine, it can be noted that more than 27 thousand ponds (3 km³ per year), 1098 reservoirs with a total volume of 55 km³, and 11 large channels with a total length of more than 2000 km and a capacity of 1000 m³/s have been created in Ukraine. Hydraulic engineering construction has played an important role in the development of industry and agriculture, the water supply of cities and settlements, environmental effects, and the maintenance of safe navigation on the Danube, Dnieper and other rivers. The last part of the paper discusses the environmental changes in the Aral Sea region of Middle Asia following the construction of the Karakum Channel.

  1. Iodine oxides in large-scale THAI tests

    International Nuclear Information System (INIS)

    Funke, F.; Langrock, G.; Kanzleiter, T.; Poss, G.; Fischer, K.; Kühnel, A.; Weber, G.; Allelein, H.-J.

    2012-01-01

    Highlights: ► Iodine oxide particles were produced from gaseous iodine and ozone. ► Ozone replaced the effect of ionizing radiation in the large-scale THAI facility. ► The mean diameter of the iodine oxide particles was about 0.35 μm. ► Particle formation was faster than the chemical reaction between iodine and ozone. ► Deposition of iodine oxide particles was slow in the absence of other aerosols. - Abstract: The conversion of gaseous molecular iodine into iodine oxide aerosols has significant relevance for the understanding of fission product iodine volatility in an LWR containment during severe accidents. In containment, the high radiation field caused by fission products released from the reactor core induces radiolytic oxidation into iodine oxides. To study the characteristics and behaviour of iodine oxides at large scale, two THAI tests, Iod-13 and Iod-14, were performed, simulating the radiolytic oxidation of molecular iodine by the reaction of iodine with ozone, with the ozone injected from an ozone generator. The observed iodine oxides form submicron particles with mean volume-related diameters of about 0.35 μm and show low deposition rates in the THAI tests performed in the absence of other nuclear aerosols. Formation of iodine aerosols from the gaseous precursors iodine and ozone is fast compared to their chemical interaction. The current approach in empirical iodine containment behaviour models for severe accidents, comprising the radiolytic production of I₂-oxidizing agents followed by the I₂ oxidation itself, is confirmed by these THAI tests.

  2. Combustion of biodiesel in a large-scale laboratory furnace

    International Nuclear Information System (INIS)

    Pereira, Caio; Wang, Gongliang; Costa, Mário

    2014-01-01

    Combustion tests in a large-scale laboratory furnace were carried out to assess the feasibility of using biodiesel as a fuel in industrial furnaces. For comparison purposes, petroleum-based diesel was also used as a fuel. Initially, the performance of the commercial air-assisted atomizer used in the combustion tests was scrutinized under non-reacting conditions. Subsequently, flue gas data, including PM (particulate matter), were obtained for various flame conditions to quantify the effects of the atomization quality and excess air on combustion performance. The combustion data were complemented with in-flame temperature measurements for two representative furnace operating conditions. The results reveal that (i) CO emissions from biodiesel and diesel combustion are rather similar and not affected by the atomization quality; (ii) NOₓ emissions increase slightly as spray quality improves for both liquid fuels, but NOₓ emissions from biodiesel combustion are always lower than those from diesel combustion; (iii) CO emissions decrease rapidly for both liquid fuels as the excess air level increases up to an O₂ concentration in the flue gas of 2%, beyond which they remain unchanged; (iv) NOₓ emissions increase with an increase in the excess air level for both liquid fuels; (v) the quality of the atomization has a significant impact on PM emissions, with diesel combustion yielding significantly higher PM emissions than biodiesel combustion; and (vi) diesel combustion produces PM with elements such as Cr, Na, Ni and Pb, while biodiesel combustion produces PM with elements such as Ca, Mg and Fe. - Highlights: • CO emissions from the biodiesel and diesel tested are similar. • NOₓ emissions from the biodiesel tested are lower than those from the diesel tested. • The diesel tested yields significantly higher PM (particulate matter) emissions than the biodiesel tested. • The diesel tested produces PM with Cr, Na, Ni and Pb, while the biodiesel tested produces PM with Ca, Mg and Fe.

  3. Inflationary tensor fossils in large-scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Dimastrogiovanni, Emanuela [School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455 (United States); Fasiello, Matteo [Department of Physics, Case Western Reserve University, Cleveland, OH 44106 (United States); Jeong, Donghui [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States); Kamionkowski, Marc, E-mail: ema@physics.umn.edu, E-mail: mrf65@case.edu, E-mail: duj13@psu.edu, E-mail: kamion@jhu.edu [Department of Physics and Astronomy, 3400 N. Charles St., Johns Hopkins University, Baltimore, MD 21218 (United States)

    2014-12-01

    Inflation models make specific predictions for a tensor-scalar-scalar three-point correlation, or bispectrum, between one gravitational-wave (tensor) mode and two density-perturbation (scalar) modes. This tensor-scalar-scalar correlation leads to a local power quadrupole, an apparent departure from statistical isotropy in our Universe, as well as characteristic four-point correlations in the current mass distribution in the Universe. So far, the predictions for these observables have been worked out only for single-clock models in which certain consistency conditions between the tensor-scalar-scalar correlation and tensor and scalar power spectra are satisfied. Here we review the requirements on inflation models for these consistency conditions to be satisfied. We then consider several examples of inflation models, such as non-attractor and solid-inflation models, in which these conditions are put to the test. In solid inflation the simplest consistency conditions are already violated whilst in the non-attractor model we find that, contrary to the standard scenario, the tensor-scalar-scalar correlator probes directly relevant model-dependent information. We work out the predictions for observables in these models. For non-attractor inflation we find an apparent local quadrupolar departure from statistical isotropy in large-scale structure but that this power quadrupole decreases very rapidly at smaller scales. The consistency of the CMB quadrupole with statistical isotropy then constrains the distance scale that corresponds to the transition from the non-attractor to attractor phase of inflation to be larger than the currently observable horizon. Solid inflation predicts clustering fossils signatures in the current galaxy distribution that may be large enough to be detectable with forthcoming, and possibly even current, galaxy surveys.

  4. Large-scale Health Information Database and Privacy Protection.

    Science.gov (United States)

    Yamamoto, Ryuichi

    2016-09-01

    Japan was once progressive in the digitalization of healthcare fields but unfortunately has fallen behind in terms of the secondary use of data for public interest. There has recently been a trend to establish large-scale health databases in the nation, and a conflict between data use for public interest and privacy protection has surfaced as this trend has progressed. Databases for health insurance claims or for specific health checkups and guidance services were created according to the law that aims to ensure healthcare for the elderly; however, there is no mention in the act about using these databases for public interest in general. Thus, an initiative for such use must proceed carefully and attentively. The PMDA projects that collect a large amount of medical record information from large hospitals and the health database development project that the Ministry of Health, Labour and Welfare (MHLW) is working on will soon begin to operate according to a general consensus; however, the validity of this consensus can be questioned if issues of anonymity arise. The likelihood that researchers conducting a study for public interest would intentionally invade the privacy of their subjects is slim. However, patients could develop a sense of distrust about their data being used since legal requirements are ambiguous. Nevertheless, without using patients' medical records for public interest, progress in medicine will grind to a halt. Proper legislation that is clear for both researchers and patients will therefore be highly desirable. A revision of the Act on the Protection of Personal Information is currently in progress. In reality, however, privacy is not something that laws alone can protect; it will also require guidelines and self-discipline. We now live in an information capitalization age. I will introduce the trends in legal reform regarding healthcare information and discuss some basics to help people properly face the issue of health big data and privacy

  6. Pro website development and operations streamlining DevOps for large-scale websites

    CERN Document Server

    Sacks, Matthew

    2012-01-01

    Pro Website Development and Operations gives you the experience you need to create and operate a large-scale production website. Large-scale websites have their own unique set of design problems, problems that can get worse when agile methodologies are adopted for rapid results. Managing large-scale websites, deploying applications, and ensuring they are performing well often requires a full-scale team involving the development and operations sides of the company: two departments that don't always see eye to eye. When departments struggle with each other, it adds unnecessary complexity.

  7. Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing

    OpenAIRE

    Qiang Liu; Yi Qin; Guodong Li

    2018-01-01

    Computing speed is a significant issue of large-scale flood simulations for real-time response to disaster prevention and mitigation. Even today, most of the large-scale flood simulations are generally run on supercomputers due to the massive amounts of data and computations necessary. In this work, a two-dimensional shallow water model based on an unstructured Godunov-type finite volume scheme was proposed for flood simulation. To realize a fast simulation of large-scale floods on a personal...
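    The reason finite-volume flood models parallelize well on GPUs is that each cell's update depends only on fluxes at its own faces, so one thread can own one cell. The following is a deliberately simplified stand-in for the paper's 2D shallow-water Godunov scheme: a 1D first-order upwind finite-volume step for linear advection, showing the per-cell update structure (function names and the toy grid are invented for illustration):

    ```python
    def upwind_step(u, c, dx, dt):
        """One explicit first-order upwind finite-volume step for
        du/dt + c du/dx = 0 with c > 0 and periodic boundaries.
        Stability requires the CFL condition c*dt/dx <= 1.
        Each cell update is independent, so on a GPU one thread handles one cell."""
        n = len(u)
        flux = [c * u[i] for i in range(n)]   # upwind flux at each cell's right face
        return [u[i] - dt / dx * (flux[i] - flux[i - 1]) for i in range(n)]

    # Advect a square pulse exactly one cell to the right (CFL = c*dt/dx = 1).
    u0 = [0.0, 1.0, 1.0, 0.0, 0.0, 0.0]
    u1 = upwind_step(u0, c=1.0, dx=1.0, dt=1.0)
    ```

    The flux-difference form also makes the scheme conservative: the total of `u` over the grid is unchanged by the step.
    
    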

  8. Results of Large-Scale Spacecraft Flammability Tests

    Science.gov (United States)

    Ferkul, Paul; Olson, Sandra; Urban, David L.; Ruff, Gary A.; Easton, John; T'ien, James S.; Liao, Ta-Ting T.; Fernandez-Pello, A. Carlos; Torero, Jose L.; Eigenbrand, Christian

    2017-01-01

    For the first time, a large-scale fire was intentionally set inside a spacecraft while in orbit. Testing in low gravity aboard spacecraft had been limited to samples of modest size: for thin fuels the longest samples burned were around 15 cm in length, and thick fuel samples have been even smaller. This is despite the fact that fire is a catastrophic hazard for spaceflight, and the spread and growth of a fire, combined with its interactions with the vehicle, cannot be expected to scale linearly. While every type of occupied structure on Earth has been the subject of full-scale fire testing, this had never been attempted in space owing to the complexity, cost, risk and absence of a safe location. Thus, there is a gap in knowledge of fire behavior in spacecraft. The recent utilization of large, unmanned resupply craft has provided the needed capability: a habitable but unoccupied spacecraft in low Earth orbit. One such vehicle was used to study the flame spread over a 94 x 40.6 cm thin charring solid (fiberglass-cotton fabric). The sample was an order of magnitude larger than anything studied to date in microgravity and was of sufficient scale that it consumed 1.5 of the available oxygen. The experiment, called Saffire, consisted of two tests: forward or concurrent flame spread (with the direction of flow) and opposed flame spread (against the direction of flow). The average forced air speed was 20 cm/s. For the concurrent flame spread test, the flame size remained constrained after the ignition transient, which is not the case in 1-g. These results were qualitatively different from those on Earth, where an upward-spreading flame on a sample of this size accelerates and grows. In addition, a curious effect of the chamber size is noted. Compared to previous microgravity work in smaller tunnels, the flame in the larger tunnel spread more slowly, even for a wider sample. This is attributed to the effect of flow acceleration in the smaller tunnels as a result of hot

  9. Large-scale volcanism associated with coronae on Venus

    Science.gov (United States)

    Roberts, K. Magee; Head, James W.

    1993-01-01

    The formation and evolution of coronae on Venus are thought to be the result of mantle upwellings against the crust and lithosphere and subsequent gravitational relaxation. A variety of other features on Venus have been linked to processes associated with mantle upwelling, including shield volcanoes on large regional rises such as Beta, Atla and Western Eistla Regiones and extensive flow fields such as Mylitta and Kaiwan Fluctus near the Lada Terra/Lavinia Planitia boundary. Of these features, coronae appear to possess the smallest amounts of associated volcanism, although volcanism associated with coronae has only been qualitatively examined. An initial survey of coronae based on recent Magellan data indicated that only 9 percent of all coronae are associated with substantial amounts of volcanism, including interior calderas or edifices greater than 50 km in diameter and extensive, exterior radial flow fields. Sixty-eight percent of all coronae were found to have lesser amounts of volcanism, including interior flooding and associated volcanic domes and small shields; the remaining coronae were considered deficient in associated volcanism. It is possible that coronae are related to mantle plumes or diapirs that are lower in volume or in partial melt than those associated with the large shields or flow fields. Regional tectonics or variations in local crustal and thermal structure may also be significant in determining the amount of volcanism produced from an upwelling. It is also possible that flow fields associated with some coronae are sheet-like in nature and may not be readily identified. If coronae are associated with volcanic flow fields, then they may be a significant contributor to plains formation on Venus, as they number over 300 and are widely distributed across the planet. 
As a continuation of our analysis of large-scale volcanism on Venus, we have reexamined the known population of coronae and assessed quantitatively the scale of volcanism associated

  10. FEASIBILITY OF LARGE-SCALE OCEAN CO2 SEQUESTRATION

    Energy Technology Data Exchange (ETDEWEB)

    Dr. Peter Brewer; Dr. James Barry

    2002-09-30

    We have continued to carry out creative small-scale experiments in the deep ocean to investigate the science underlying questions of possible future large-scale deep-ocean CO₂ sequestration as a means of ameliorating greenhouse gas growth rates in the atmosphere. This project is closely linked to additional research funded by the DoE Office of Science, and to support from the Monterey Bay Aquarium Research Institute. The listing of project achievements here over the past year reflects these combined resources. Within the last project year we have: (1) Published a significant workshop report (58 pages) entitled ''Direct Ocean Sequestration Expert's Workshop'', based upon a meeting held at MBARI in 2001. The report is available both in hard copy and on the NETL web site. (2) Carried out three major deep-ocean (3600 m) cruises to examine the physical chemistry, and biological consequences, of several-liter quantities released on the ocean floor. (3) Carried out two successful short cruises in collaboration with Dr. Izuo Aya and colleagues (NMRI, Osaka, Japan) to examine the fate of cold (−55 °C) CO₂ released at relatively shallow ocean depth. (4) Carried out two short cruises in collaboration with Dr. Costas Tsouris, ORNL, to field test an injection nozzle designed to transform liquid CO₂ into a hydrate slurry at ≈1000 m depth. (5) In collaboration with Prof. Jill Pasteris (Washington University) we have successfully accomplished the first field test of a deep-ocean laser Raman spectrometer for probing in situ the physical chemistry of the CO₂ system. (6) Submitted the first major paper on biological impacts as determined from our field studies. (7) Submitted a paper on our measurements of the fate of a rising stream of liquid CO₂ droplets to Environmental Science & Technology. (8) Have had accepted for publication in Eos the first brief account of the laser Raman spectrometer success. (9) Have had two

  11. GPU-Accelerated Sparse Matrix Solvers for Large-Scale Simulations, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Many large-scale numerical simulations can be broken down into common mathematical routines. While the applications may differ, the need to perform functions such as...
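    One of the common mathematical routines such simulations share is the sparse matrix-vector product, the kernel at the heart of iterative solvers. A minimal compressed-sparse-row (CSR) sketch follows (illustrative only, not the proposal's implementation; a GPU version would assign roughly one thread per row of the same loop):

    ```python
    def csr_matvec(data, indices, indptr, x):
        """y = A @ x for A stored in compressed sparse row (CSR) form:
        data    - nonzero values, row by row
        indices - column index of each nonzero
        indptr  - indptr[r]:indptr[r+1] slices out row r's nonzeros
        The outer loop over rows is what GPU solvers parallelize."""
        y = []
        for row in range(len(indptr) - 1):
            s = 0.0
            for k in range(indptr[row], indptr[row + 1]):
                s += data[k] * x[indices[k]]
            y.append(s)
        return y

    # 3x3 example matrix [[2, 0, 1], [0, 3, 0], [4, 0, 5]] applied to [1, 1, 1].
    data    = [2.0, 1.0, 3.0, 4.0, 5.0]
    indices = [0, 2, 1, 0, 2]
    indptr  = [0, 2, 3, 5]
    y = csr_matvec(data, indices, indptr, [1.0, 1.0, 1.0])
    ```
    
    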

  12. Unraveling The Connectome: Visualizing and Abstracting Large-Scale Connectomics Data

    KAUST Repository

    Al-Awami, Ali K.

    2017-01-01

    -user system seamlessly integrates a diverse set of tools. Our system provides support for the management, provenance, accountability, and auditing of large-scale segmentations. Finally, we present a novel architecture to render very large volumes interactively

  13. Restoring Eelgrass (Zostera marina) from Seed: A Comparison of Planting Methods for Large-Scale Projects

    National Research Council Canada - National Science Library

    Orth, Robert; Marion, Scott; Granger, Steven; Traber, Michael

    2008-01-01

    Eelgrass (Zostera marina) seeds are being used in a variety of both small- and large-scale restoration activities and have been successfully used to initiate recovery of eelgrass in the Virginia seaside coastal lagoons...

  14. Monitoring and Information Fusion for Search and Rescue Operations in Large-Scale Disasters

    National Research Council Canada - National Science Library

    Nardi, Daniele

    2002-01-01

    ... for information fusion with application to search-and-rescue and large scale disaster relief. The objective is to develop and to deploy tools to support the monitoring activities in an intervention caused by a large-scale disaster...

  15. Systems and methods for large-scale nanotemplate and nanowire fabrication

    KAUST Repository

    Vidal, Enrique Vilanova; Alfadhel, Ahmed; Ivanov, Iurii; Kosel, Jürgen

    2016-01-01

    Systems and methods for large-scale nanotemplate and nanowire fabrication are provided. The system can include a sample holder and one or more chemical containers fluidly connected to the sample holder. The sample holder can be configured to contain

  16. Co-Cure-Ply Resins for High Performance, Large-Scale Structures

    Data.gov (United States)

    National Aeronautics and Space Administration — Large-scale composite structures are commonly joined by secondary bonding of molded-and-cured thermoset components. This approach may result in unpredictable joint...

  17. Double inflation: A possible resolution of the large-scale structure problem

    International Nuclear Information System (INIS)

    Turner, M.S.; Villumsen, J.V.; Vittorio, N.; Silk, J.; Juszkiewicz, R.

    1986-11-01

    A model is presented for the large-scale structure of the universe in which two successive inflationary phases produced large density fluctuations on small scales and small fluctuations on large scales. This bimodal density fluctuation spectrum in an Ω = 1 universe dominated by hot dark matter leads to large-scale structure of the galaxy distribution that is consistent with recent observational results. In particular, large, nearly empty voids and significant large-scale peculiar velocity fields are produced over scales of ∼100 Mpc, while the small-scale structure over ≤ 10 Mpc resembles that in a low-density universe, as observed. Detailed analytical calculations and numerical simulations are given of the spatial and velocity correlations. 38 refs., 6 figs

  18. Direction of information flow in large-scale resting-state networks is frequency-dependent

    NARCIS (Netherlands)

    Hillebrand, Arjan; Tewarie, Prejaas; Van Dellen, Edwin; Yu, Meichen; Carbo, Ellen W S; Douw, Linda; Gouw, Alida A.; Van Straaten, Elisabeth C W; Stam, Cornelis J.

    2016-01-01

    Normal brain function requires interactions between spatially separated, and functionally specialized, macroscopic regions, yet the directionality of these interactions in large-scale functional networks is unknown. Magnetoencephalography was used to determine the directionality of these

  19. The role of large-scale, extratropical dynamics in climate change

    International Nuclear Information System (INIS)

    Shepherd, T.G.

    1994-02-01

    The climate modeling community has focused recently on improving our understanding of certain processes, such as cloud feedbacks and ocean circulation, that are deemed critical to climate-change prediction. Although attention to such processes is warranted, emphasis on these areas has diminished a general appreciation of the role played by the large-scale dynamics of the extratropical atmosphere. Lack of interest in extratropical dynamics may reflect the assumption that these dynamical processes are a non-problem as far as climate modeling is concerned, since general circulation models (GCMs) calculate motions on this scale from first principles. Nevertheless, serious shortcomings in our ability to understand and simulate large-scale dynamics exist. Partly due to a paucity of standard GCM diagnostic calculations of large-scale motions and their transports of heat, momentum, potential vorticity, and moisture, a comprehensive understanding of the role of large-scale dynamics in GCM climate simulations has not been developed. Uncertainties remain in our understanding and simulation of large-scale extratropical dynamics and their interaction with other climatic processes, such as cloud feedbacks, large-scale ocean circulation, moist convection, air-sea interaction and land-surface processes. To address some of these issues, the 17th Stanstead Seminar was convened at Bishop's University in Lennoxville, Quebec. The purpose of the Seminar was to promote discussion of the role of large-scale extratropical dynamics in global climate change. Abstracts of the talks are included in this volume. On the basis of these talks, several key issues emerged concerning large-scale extratropical dynamics and their climatic role. Individual records are indexed separately for the database

  20. Large-Scale Cooperative Task Distribution on Peer-to-Peer Networks

    Science.gov (United States)

    2012-01-01

    The disadvantages of ML-Chord are its fixed size (two layers) and limited scalability for large-scale systems. RC-Chord extends ML-Chord ... configurable before runtime. This can be improved by incorporating a distributed learning algorithm to tune the number and range of the DLoE tracking
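    ML-Chord and RC-Chord build on Chord's consistent-hashing ring, in which every key is owned by the first node at or after it clockwise on a hashed identifier circle. A minimal sketch of that lookup (the 8-bit ring size and the node/key names are invented for illustration; Chord proper uses 160-bit identifiers and finger tables rather than a sorted scan):

    ```python
    import hashlib

    M = 8               # identifier bits (tiny ring for illustration)
    RING = 2 ** M

    def ring_id(name: str) -> int:
        """Hash a node or key name onto the identifier ring."""
        return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % RING

    def successor(key, nodes):
        """First node clockwise from `key`: the node that owns the key."""
        for n in sorted(nodes):
            if n >= key:
                return n
        return min(nodes)   # wrap around past the top of the ring

    # Four hypothetical peers; a task is assigned to the successor of its key.
    nodes = sorted(ring_id(f"node{i}") for i in range(4))
    owner = successor(ring_id("task-42"), nodes)
    ```

    Because only the keys between a node and its predecessor move when a node joins or leaves, task assignment stays mostly stable as the peer-to-peer network churns.
    
    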

  1. Temporal flexibility and careers: The role of large-scale organizations for physicians

    OpenAIRE

    Forrest Briscoe

    2006-01-01

    This study investigates how employment in large-scale organizations affects the work lives of practicing physicians. Well-established theory associates larger organizations with bureaucratic constraint, loss of workplace control, and dissatisfaction, but this author finds that large scale is also associated with greater schedule and career flexibility. Ironically, the bureaucratic p...

  2. The role of large-scale, extratropical dynamics in climate change

    Energy Technology Data Exchange (ETDEWEB)

    Shepherd, T.G. [ed.

    1994-02-01

    The climate modeling community has focused recently on improving our understanding of certain processes, such as cloud feedbacks and ocean circulation, that are deemed critical to climate-change prediction. Although attention to such processes is warranted, emphasis on these areas has diminished a general appreciation of the role played by the large-scale dynamics of the extratropical atmosphere. Lack of interest in extratropical dynamics may reflect the assumption that these dynamical processes are a non-problem as far as climate modeling is concerned, since general circulation models (GCMs) calculate motions on this scale from first principles. Nevertheless, serious shortcomings in our ability to understand and simulate large-scale dynamics exist. Partly due to a paucity of standard GCM diagnostic calculations of large-scale motions and their transports of heat, momentum, potential vorticity, and moisture, a comprehensive understanding of the role of large-scale dynamics in GCM climate simulations has not been developed. Uncertainties remain in our understanding and simulation of large-scale extratropical dynamics and their interaction with other climatic processes, such as cloud feedbacks, large-scale ocean circulation, moist convection, air-sea interaction and land-surface processes. To address some of these issues, the 17th Stanstead Seminar was convened at Bishop's University in Lennoxville, Quebec. The purpose of the Seminar was to promote discussion of the role of large-scale extratropical dynamics in global climate change. Abstracts of the talks are included in this volume. On the basis of these talks, several key issues emerged concerning large-scale extratropical dynamics and their climatic role. Individual records are indexed separately for the database.

  3. Security and VO management capabilities in a large-scale Grid operating system

    OpenAIRE

    Aziz, Benjamin; Sporea, Ioana

    2014-01-01

    This paper presents a number of security and VO management capabilities in a large-scale distributed Grid operating system. The capabilities formed the basis of the design and implementation of a number of security and VO management services in the system. The main aim of the paper is to provide some idea of the various functionality cases that need to be considered when designing similar large-scale systems in the future.

  4. Mining Together: Large-Scale Mining Meets Artisanal Mining, A Guide for Action

    OpenAIRE

    World Bank

    2009-01-01

    The present guide, Mining Together: When Large-Scale Mining Meets Artisanal Mining, is an important step toward better understanding the conflict dynamics and underlying issues between large-scale and small-scale mining. This guide for action not only points to some of the challenges that both parties need to deal with in order to build a more constructive relationship, but most importantly it sh...

  5. The Modified HZ Conjugate Gradient Algorithm for Large-Scale Nonsmooth Optimization.

    Science.gov (United States)

    Yuan, Gonglin; Sheng, Zhou; Liu, Wenjie

    2016-01-01

    In this paper, the Hager and Zhang (HZ) conjugate gradient (CG) method and the modified HZ (MHZ) CG method are presented for large-scale nonsmooth convex minimization. Under some mild conditions, convergence results for the proposed methods are established. Numerical results show that the presented methods are more efficient for large-scale nonsmooth problems; several problems with dimensions up to 100,000 variables are tested.
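The HZ search direction named in this record can be sketched as follows. This is a minimal illustration on a smooth convex quadratic with a simple Armijo backtracking search; the paper targets nonsmooth problems and practical HZ implementations use Wolfe-type line searches, so the test problem, step parameters, and tolerances here are all illustrative assumptions, not the authors' setup.

```python
import numpy as np

def armijo_step(f, x, g, d, t=1.0, c=1e-4, shrink=0.5):
    """Backtracking line search satisfying the Armijo sufficient-decrease condition."""
    fx = f(x)
    while f(x + t * d) > fx + c * t * (g @ d) and t > 1e-12:
        t *= shrink
    return t

def hz_cg(f, grad, x0, iters=500, tol=1e-10):
    """Hager-Zhang (HZ) conjugate gradient sketch.

    The beta formula is the published HZ update; the Armijo search is a
    simplification of the Wolfe search used in practice.
    """
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        t = armijo_step(f, x, g, d)
        x_new = x + t * d
        g_new = grad(x_new)
        y = g_new - g
        dy = d @ y
        if abs(dy) < 1e-12:
            d = -g_new                                   # restart on breakdown
        else:
            # beta_HZ = (y - 2 d ||y||^2 / (d.y))^T g_new / (d.y)
            beta = (y - 2.0 * d * (y @ y) / dy) @ g_new / dy
            d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Smooth convex test problem: f(x) = 0.5 x^T A x - b^T x, minimizer A^{-1} b
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
x_star = hz_cg(f, lambda x: A @ x - b, np.zeros(2))      # expected near [0.2, 0.4]
```

The restart branch guards the division by d·y; on nonsmooth problems the MHZ variant modifies the update further, which is not reproduced here.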

  6. The Modified HZ Conjugate Gradient Algorithm for Large-Scale Nonsmooth Optimization.

    Directory of Open Access Journals (Sweden)

    Gonglin Yuan

    Full Text Available In this paper, the Hager and Zhang (HZ) conjugate gradient (CG) method and the modified HZ (MHZ) CG method are presented for large-scale nonsmooth convex minimization. Under some mild conditions, convergence results for the proposed methods are established. Numerical results show that the presented methods are more efficient for large-scale nonsmooth problems; several problems with dimensions up to 100,000 variables are tested.

  7. Understanding the faint red galaxy population using large-scale clustering measurements from SDSS DR7

    OpenAIRE

    Ross, Ashley; Tojeiro, Rita; Percival, Will

    2011-01-01

    We use data from the SDSS to investigate the evolution of the large-scale galaxy bias as a function of luminosity for red galaxies. We carefully consider correlation functions of galaxies selected from both photometric and spectroscopic data, and cross-correlations between them, to obtain multiple measurements of the large-scale bias. We find, for our most robust analyses, a strong increase in bias with luminosity for the most luminous galaxies, an intermediate regime where bias does not evol...

  8. Large-Scale Sequencing: The Future of Genomic Sciences Colloquium

    Energy Technology Data Exchange (ETDEWEB)

    Margaret Riley; Merry Buckley

    2009-01-01

    Genetic sequencing and the various molecular techniques it has enabled have revolutionized the field of microbiology. Examining and comparing the genetic sequences borne by microbes - including bacteria, archaea, viruses, and microbial eukaryotes - provides researchers insights into the processes microbes carry out, their pathogenic traits, and new ways to use microorganisms in medicine and manufacturing. Until recently, sequencing entire microbial genomes has been laborious and expensive, and the decision to sequence the genome of an organism was made on a case-by-case basis by individual researchers and funding agencies. Now, thanks to new technologies, the cost and effort of sequencing is within reach for even the smallest facilities, and the ability to sequence the genomes of a significant fraction of microbial life may be possible. The availability of numerous microbial genomes will enable unprecedented insights into microbial evolution, function, and physiology. However, the current ad hoc approach to gathering sequence data has resulted in an unbalanced and highly biased sampling of microbial diversity. A well-coordinated, large-scale effort to target the breadth and depth of microbial diversity would result in the greatest impact. The American Academy of Microbiology convened a colloquium to discuss the scientific benefits of engaging in a large-scale, taxonomically-based sequencing project. A group of individuals with expertise in microbiology, genomics, informatics, ecology, and evolution deliberated on the issues inherent in such an effort and generated a set of specific recommendations for how best to proceed. The vast majority of microbes are presently uncultured and, thus, pose significant challenges to such a taxonomically-based approach to sampling genome diversity. However, we have yet to even scratch the surface of the genomic diversity among cultured microbes. 
A coordinated sequencing effort of cultured organisms is an appropriate place to begin

  9. Detection of large-scale concentric gravity waves from a Chinese airglow imager network

    Science.gov (United States)

    Lai, Chang; Yue, Jia; Xu, Jiyao; Yuan, Wei; Li, Qinzeng; Liu, Xiao

    2018-06-01

    Concentric gravity waves (CGWs) contain a broad spectrum of horizontal wavelengths and periods due to their instantaneous localized sources (e.g., deep convection, volcanic eruptions, or earthquakes). However, it is difficult to observe large-scale gravity waves of >100 km wavelength from the ground because of the limited field of view of a single camera and local bad weather. Previously, complete large-scale CGW imagery could only be captured by satellite observations. In the present study, we developed a novel method that assembles separate images and applies low-pass filtering to obtain temporal and spatial information about complete large-scale CGWs from a network of all-sky airglow imagers. Coordinated observations from five all-sky airglow imagers in Northern China were assembled and processed to study large-scale CGWs over a wide area (1800 km × 1400 km), focusing on the same two CGW events as Xu et al. (2015). Our algorithms yielded images of large-scale CGWs by filtering out the small-scale CGWs. The wavelengths, wave speeds, and periods of CGWs were measured from a sequence of consecutive assembled images. Overall, the assembling and low-pass filtering algorithms can expand the airglow imager network to its full capacity regarding the detection of large-scale gravity waves.
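The low-pass filtering step can be sketched with an FFT wavelength cutoff on an assembled 2-D field. The grid spacing, domain size, and synthetic wave parameters below are hypothetical and not taken from the study; the sketch only shows how a cutoff of 100 km separates a large-scale wave from small-scale ripples.

```python
import numpy as np

def lowpass_2d(field, dx_km, cutoff_km):
    """Keep only spatial wavelengths longer than cutoff_km via a 2-D FFT mask."""
    ny, nx = field.shape
    kx = np.fft.fftfreq(nx, d=dx_km)        # spatial frequency in cycles/km
    ky = np.fft.fftfreq(ny, d=dx_km)
    KX, KY = np.meshgrid(kx, ky)
    k = np.hypot(KX, KY)
    mask = k <= 1.0 / cutoff_km             # pass wavelengths > cutoff_km
    return np.real(np.fft.ifft2(np.fft.fft2(field) * mask))

# Synthetic airglow field on a hypothetical 10 km grid:
# a 300 km wave (large-scale, kept) plus 40 km ripples (small-scale, removed)
x = np.arange(0, 1800, 10.0)
y = np.arange(0, 1400, 10.0)
X, Y = np.meshgrid(x, y)
field = np.sin(2 * np.pi * X / 300) + 0.5 * np.sin(2 * np.pi * X / 40)
smooth = lowpass_2d(field, dx_km=10.0, cutoff_km=100.0)
```

Both synthetic wavelengths divide the domain evenly, so the filter recovers the 300 km component essentially exactly; on real mosaicked imagery, windowing and edge effects would also need handling.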

  10. Large-scale weakly supervised object localization via latent category learning.

    Science.gov (United States)

    Chong Wang; Kaiqi Huang; Weiqiang Ren; Junge Zhang; Maybank, Steve

    2015-04-01

    Localizing objects in cluttered backgrounds is challenging under large-scale weakly supervised conditions. Due to the cluttered image condition, objects usually have large ambiguity with backgrounds. Besides, there is also a lack of effective algorithms for large-scale weakly supervised localization in cluttered backgrounds. However, backgrounds contain useful latent information, e.g., the sky in the aeroplane class. If this latent information can be learned, object-background ambiguity can be largely reduced and background can be suppressed effectively. In this paper, we propose latent category learning (LCL) in large-scale cluttered conditions. LCL is an unsupervised learning method which requires only image-level class labels. First, we use latent semantic analysis with a semantic object representation to learn the latent categories, which represent objects, object parts, or backgrounds. Second, to determine which category contains the target object, we propose a category selection strategy that evaluates each category's discrimination. Finally, we propose online LCL for use in large-scale conditions. Evaluation on the challenging PASCAL Visual Object Classes (VOC) 2007 and the large-scale ImageNet Large Scale Visual Recognition Challenge 2013 detection data sets shows that the method can improve the annotation precision by 10% over previous methods. More importantly, we achieve a detection precision which outperforms previous results by a large margin and is competitive with the supervised deformable part model 5.0 baseline on both data sets.
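The latent semantic analysis step underlying latent category learning can be illustrated with a truncated SVD on toy data. This is a generic LSA sketch with hypothetical "region descriptors" (not the authors' features or pipeline): rows describing object-like regions and background-like regions fall apart along the leading latent component.

```python
import numpy as np

def latent_components(X, k):
    """Truncated SVD of mean-centered data: a minimal stand-in for the LSA
    step that discovers latent categories (objects, parts, backgrounds)."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]            # (embeddings, component axes)

# Toy descriptors: 5 "object-like" rows and 5 "background-like" rows
X = np.array([[2.0, 0.0, 1.0]] * 5 + [[0.0, 2.0, 1.0]] * 5)
emb, comps = latent_components(X, k=1)      # 1-D latent embedding per region
```

In the toy setup the single latent axis cleanly separates the two groups; the paper's category selection strategy would then score each discovered category's discrimination, which is not reproduced here.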

  11. Low-Complexity Transmit Antenna Selection and Beamforming for Large-Scale MIMO Communications

    Directory of Open Access Journals (Sweden)

    Kun Qian

    2014-01-01

    Full Text Available Transmit antenna selection plays an important role in large-scale multiple-input multiple-output (MIMO) communications, but optimal large-scale MIMO antenna selection is a technical challenge. Exhaustive search is often employed in antenna selection, but it cannot be efficiently implemented in large-scale MIMO communication systems due to its prohibitively high computation complexity. This paper proposes a low-complexity interactive multiple-parameter optimization method for joint transmit antenna selection and beamforming in large-scale MIMO communication systems. The objective is to jointly maximize the channel outage capacity and signal-to-noise ratio (SNR) performance and minimize the mean square error in transmit antenna selection and minimum variance distortionless response (MVDR) beamforming without exhaustive search. The effectiveness of all the proposed methods is verified by extensive simulation results. It is shown that the required antenna selection processing time of the proposed method does not increase with the number of selected antennas, whereas the computation complexity of the conventional exhaustive search method increases significantly when large-scale antenna arrays are employed in the system. This is particularly useful for antenna selection in large-scale MIMO communication systems.
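A common low-complexity alternative to exhaustive antenna search is greedy capacity-based selection: add one antenna at a time, keeping the one that raises capacity most. The sketch below illustrates that generic idea, not the paper's interactive multiple-parameter method; the channel, SNR, and array sizes are arbitrary assumptions.

```python
import numpy as np

def capacity(H_cols, snr):
    """MIMO capacity log2 det(I + (snr/Nt) H H^H) with equal power per antenna."""
    n_rx, n_t = H_cols.shape
    M = np.eye(n_rx) + (snr / n_t) * H_cols @ H_cols.conj().T
    return float(np.log2(np.linalg.det(M).real))

def greedy_antenna_selection(H, n_select, snr=10.0):
    """Greedy transmit-antenna selection: O(n_select * n_tx) capacity
    evaluations instead of the exhaustive C(n_tx, n_select) subsets."""
    selected, remaining = [], list(range(H.shape[1]))
    cap = 0.0
    for _ in range(n_select):
        gains = {j: capacity(H[:, selected + [j]], snr) for j in remaining}
        best = max(gains, key=gains.get)
        cap = gains[best]
        selected.append(best)
        remaining.remove(best)
    return selected, cap

# Arbitrary Rayleigh-fading channel: 4 receive antennas, 16 transmit candidates
rng = np.random.default_rng(0)
H = (rng.standard_normal((4, 16)) + 1j * rng.standard_normal((4, 16))) / np.sqrt(2)
sel, cap = greedy_antenna_selection(H, n_select=4)
```

Greedy selection is not guaranteed optimal, but it avoids the combinatorial blow-up of exhaustive search that the abstract highlights.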

  12. Generation of large-scale vorticity in rotating stratified turbulence with inhomogeneous helicity: mean-field theory

    Science.gov (United States)

    Kleeorin, N.

    2018-06-01

    We discuss a mean-field theory of the generation of large-scale vorticity in a rotating density stratified developed turbulence with inhomogeneous kinetic helicity. We show that the large-scale non-uniform flow is produced due to either a combined action of a density stratified rotating turbulence and uniform kinetic helicity or a combined effect of a rotating incompressible turbulence and inhomogeneous kinetic helicity. These effects result in the formation of a large-scale shear, and in turn its interaction with the small-scale turbulence causes an excitation of the large-scale instability (known as a vorticity dynamo) due to a combined effect of the large-scale shear and Reynolds stress-induced generation of the mean vorticity. The latter is due to the effect of large-scale shear on the Reynolds stress. A fast rotation suppresses this large-scale instability.

  13. Economic and agricultural transformation through large-scale farming : impacts of large-scale farming on local economic development, household food security and the environment in Ethiopia

    NARCIS (Netherlands)

    Bekele, M.S.

    2016-01-01

    This study examined impacts of large-scale farming in Ethiopia on local economic development, household food security, incomes, employment, and the environment. The study adopted a mixed research approach in which both qualitative and quantitative data were generated from secondary and primary

  14. Large-scale ground motion simulation using GPGPU

    Science.gov (United States)

    Aoi, S.; Maeda, T.; Nishizawa, N.; Aoki, T.

    2012-12-01

    Huge computation resources are required to perform large-scale ground motion simulations using the 3-D finite difference method (FDM) for realistic and complex models with high accuracy. Furthermore, thousands of various simulations are necessary to evaluate the variability of the assessment caused by uncertainty in the assumptions of the source models for future earthquakes. To overcome the problem of restricted computational resources, we introduced the use of GPGPU (general-purpose computing on graphics processing units), the technique of using a GPU as an accelerator for computation that has traditionally been conducted by the CPU. We employed the CPU version of GMS (Ground motion Simulator; Aoi et al., 2004) as the original code and implemented the function for GPU calculation using CUDA (Compute Unified Device Architecture). GMS is a total system for seismic wave propagation simulation based on a 3-D FDM scheme using discontinuous grids (Aoi & Fujiwara, 1999), which includes the solver as well as preprocessor tools (parameter generation tool) and postprocessor tools (filter tool, visualization tool, and so on). The computational model is decomposed in two horizontal directions and each decomposed model is allocated to a different GPU. We evaluated the performance of our newly developed GPU version of GMS on TSUBAME2.0, one of Japan's fastest supercomputers, operated by the Tokyo Institute of Technology. First, we performed a strong scaling test using a model with about 22 million grids and achieved speed-ups of 3.2 and 7.3 times using 4 and 16 GPUs, respectively. Next, we examined a weak scaling test in which the model sizes (number of grids) are increased in proportion to the degree of parallelism (number of GPUs).
The results showed almost perfect linearity up to the simulation with 22 billion grids using 1024 GPUs, where the calculation speed reached 79.7 TFlops, about 34 times faster than the CPU calculation using the same number
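The reported strong-scaling figures translate into parallel efficiency, and, under Amdahl's law, into an implied serial fraction, with a few lines of arithmetic. The Amdahl fit is an interpretation of the abstract's numbers, not a quantity the authors report.

```python
def parallel_efficiency(speedup, n):
    """Strong-scaling efficiency: achieved speedup divided by ideal speedup n."""
    return speedup / n

def amdahl_serial_fraction(speedup, n):
    """Serial fraction f implied by Amdahl's law S = 1 / (f + (1 - f)/n),
    solved for f: f = (n/S - 1) / (n - 1)."""
    return (n / speedup - 1.0) / (n - 1.0)

# Reported strong-scaling results for the ~22-million-grid model
eff_4 = parallel_efficiency(3.2, 4)      # 0.80 efficiency on 4 GPUs
eff_16 = parallel_efficiency(7.3, 16)    # 0.45625 efficiency on 16 GPUs
f_4 = amdahl_serial_fraction(3.2, 4)     # implied serial fraction ~0.083
```

The drop in efficiency from 4 to 16 GPUs is consistent with a non-parallelizable share of the work (communication, halo exchange between decomposed domains) growing in relative cost, which is why the weak-scaling test, where work per GPU stays fixed, shows near-perfect linearity.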

  15. Developing A Large-Scale, Collaborative, Productive Geoscience Education Network

    Science.gov (United States)

    Manduca, C. A.; Bralower, T. J.; Egger, A. E.; Fox, S.; Ledley, T. S.; Macdonald, H.; Mcconnell, D. A.; Mogk, D. W.; Tewksbury, B. J.

    2012-12-01

    Over the past 15 years, the geoscience education community has grown substantially and developed broad and deep capacity for collaboration and dissemination of ideas. While this community is best viewed as emergent from complex interactions among changing educational needs and opportunities, we highlight the role of several large projects in the development of a network within this community. In the 1990s, three NSF projects came together to build a robust web infrastructure to support the production and dissemination of on-line resources: On The Cutting Edge (OTCE), Earth Exploration Toolbook, and Starting Point: Teaching Introductory Geoscience. Along with the contemporaneous Digital Library for Earth System Education, these projects engaged geoscience educators nationwide in exploring professional development experiences that produced lasting on-line resources, collaborative authoring of resources, and models for web-based support for geoscience teaching. As a result, a culture developed in the 2000s in which geoscience educators anticipated that resources for geoscience teaching would be shared broadly and that collaborative authoring would be productive and engaging. By this time, a diverse set of examples demonstrated the power of the web infrastructure in supporting collaboration, dissemination and professional development. Building on this foundation, more recent work has expanded both the size of the network and the scope of its work. Many large research projects initiated collaborations to disseminate resources supporting educational use of their data. Research results from the rapidly expanding geoscience education research community were integrated into the Pedagogies in Action website and OTCE. Projects engaged faculty across the nation in large-scale data collection and educational research. The Climate Literacy and Energy Awareness Network and OTCE engaged community members in reviewing the expanding body of on-line resources. Building Strong

  16. Large-scale control of mosquito vectors of disease

    International Nuclear Information System (INIS)

    Curtis, C.F.; Andreasen, M.H.

    2000-01-01

    By far the most important vector-borne disease is malaria, transmitted by Anopheles mosquitoes and causing an estimated 300-500 million clinical cases per year and 1.4-2.6 million deaths, mostly in tropical Africa (WHO 1995). The second most important mosquito-borne disease is lymphatic filariasis, but there are now such effective, convenient and cheap drugs for its treatment that vector control will now have at most a supplementary role (Maxwell et al. 1999a). The only other mosquito-borne disease likely to justify large-scale vector control is dengue, which is carried in urban areas of Southeast Asia and Latin America by Aedes aegypti L., which was also the urban vector of yellow fever in Latin America. This mosquito was eradicated from most countries of Latin America between the 1930s and 60s but, unfortunately, in recent years it has been allowed to re-infest and cause serious dengue epidemics, except in Cuba where it has been held close to eradication (Reiter and Gubler 1997). In the 1930s and 40s, invasions by An. gambiae Giles s.l., the main tropical African malaria vector, were eradicated from Brazil (Soper and Wilson 1943) and Egypt (Shousha 1947). It is surprising that greatly increased air traffic has not led to more such invasions of apparently climatically suitable areas, e.g., of Polynesia, which has no anophelines and therefore no malaria. The above-mentioned temporary or permanent eradications were achieved before the advent of DDT, using larvicidal methods (of a kind which would now be considered environmentally unacceptable) carried out by rigorously disciplined teams. MALARIA: Between the end of the Second World War and the 1960s, the availability of DDT for spraying of houses allowed eradication of malaria from the Soviet Union, southern Europe, the USA, northern Venezuela and Guyana, Taiwan and the Caribbean Islands, apart from Hispaniola. Its range and intensity were also greatly reduced in China, India and South Africa and, at least temporarily, in

  17. Large-scale field testing on flexible shallow landslide barriers

    Science.gov (United States)

    Bugnion, Louis; Volkwein, Axel; Wendeler, Corinna; Roth, Andrea

    2010-05-01

    Open shallow landslides occur regularly in a wide range of natural terrains. Generally, they are difficult to predict and result in damage to property and disruption of transportation systems. In order to improve knowledge about the physical process itself and to develop new protection measures, large-scale field experiments were conducted in Veltheim, Switzerland. Material was released down a 30° inclined test slope into a flexible barrier. The flow as well as the impact into the barrier was monitored using various measurement techniques. Laser devices recording flow heights, a special force plate measuring normal and shear basal forces, as well as load cells for impact pressures were installed along the test slope. In addition, load cells were built into the support and retaining cables of the barrier to provide data for detailed back-calculation of the load distribution during impact. For the last test series, an additional guiding wall in the flow direction was installed on both sides of the barrier to achieve higher impact pressures in the middle of the barrier. With these guiding walls the flow is not able to spread out before hitting the barrier. A specially constructed release mechanism simulating the sudden failure of the slope was designed such that about 50 m3 of mixed earth and gravel saturated with water can be released in an instant. Analysis of cable forces combined with impact pressures and velocity measurements during a test series now allows us to develop a load model for the barrier design. First numerical simulations with the software tool FARO, originally developed for rockfall barriers and later calibrated for debris-flow impacts, have already led to structural improvements in barrier design. Decisive for the barrier design is the first dynamic impact pressure, which depends on the flow velocity, followed by the hydrostatic pressure of the complete retained material behind the barrier.
Therefore volume estimation of open shallow landslides by assessing
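The two design loads named in this record, a dynamic impact pressure driven by flow velocity and the hydrostatic pressure of the retained material, can be sketched with textbook formulas. The density, velocity, height, and drag-style coefficient below are illustrative assumptions, not measurements from the Veltheim tests.

```python
def impact_pressure(rho, v, c_d=1.0):
    """Dynamic (stagnation-type) impact pressure p = c_d * rho * v^2 in Pa.
    The empirical coefficient c_d is an assumption; debris-flow design
    guidelines tabulate values depending on flow type."""
    return c_d * rho * v * v

def hydrostatic_pressure(rho, g, h):
    """Hydrostatic pressure p = rho * g * h in Pa at depth h of retained material."""
    return rho * g * h

# Illustrative values: saturated earth/gravel mixture at 1800 kg/m^3,
# 6 m/s flow front, 1.5 m of retained material behind the barrier
p_dyn = impact_pressure(rho=1800.0, v=6.0)          # dynamic load, Pa
p_stat = hydrostatic_pressure(1800.0, 9.81, 1.5)    # static load, Pa
```

In the test series described, the dynamic term governs the first impact while the hydrostatic term governs the sustained load once the flow has been retained, matching the ordering stated in the abstract.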

  18. Features of the method of large-scale paleolandscape reconstructions

    Science.gov (United States)

    Nizovtsev, Vyacheslav; Erman, Natalia; Graves, Irina

    2017-04-01

    The method of paleolandscape reconstruction was tested in a key area of the Central Dubna basin, located at the junction of the Taldom and Sergiev Posad districts of the Moscow region. A series of maps was created showing paleoreconstructions of the original (indigenous) living environment of the initial settlers during the main periods of the Holocene and the features of human interaction with landscapes at the early stages of economic development of the territory (in the early and middle Holocene). The sequence of these works is as follows. 1. Comprehensive analysis of topographic maps of different scales, aerial and satellite images, stock materials of geological and hydrological surveys and prospecting of peat deposits, archaeological evidence on ancient settlements, palynological and osteological analyses, and complex landscape and archaeological studies. 2. Mapping of the factual material and analysis of the spatial distribution of archaeological sites. 3. Large-scale field landscape mapping (sample areas) and compilation of maps of the modern landscape structure; on this basis, the edaphic properties of the main types of natural boundaries were analyzed and their resource base was determined. 4. Reconstruction of the lake-river system during the main periods of the Holocene; the boundaries of restored paleolakes were determined from the thickness and territorial confinement of decay ooze. 5. Paleolandscape reconstructions for the main periods of the Holocene on the basis of the landscape-edaphic method; in reconstructing the original, indigenous flora we relied on palynological studies conducted in the study area or in similar landscape conditions. 6. The result was a retrospective analysis and periodization of the settlement process, economic development, and the formation of the first anthropogenically transformed landscape complexes.
The reconstruction of the dynamics of the

  19. Effect of Media Culture on Growth and Sucker Pandanus Plant

    Directory of Open Access Journals (Sweden)

    ali salehi sardoei

    2017-02-01

    Full Text Available Introduction: One factor of great importance in the cultivation of flowers and ornamental plants is the growth medium. Planting in containers has grown into an important component of nursery technology. Compared with field conditions, the limited volume of growth medium available to each plant can greatly reduce growth, which is strongly influenced by the physical and chemical properties of the medium used. Good management of the potting bed therefore yields plants of good quality. A good growth medium should have suitable physical and biological properties and be relatively inexpensive, stable, and easy enough to work with. Burgers showed that composted green waste can be used as a substrate for soilless cultivation and improves the water-holding capacity of soil. A range of materials, including hardwood and softwood bark, leaves, soil waste, sewage sludge, and coconut coir (cocopeat), has been used as seed beds. Owing to economic considerations and its high moisture storage, palm peat is a primary material that can be prepared as a good growth medium. Peat moss is not suitable for all plants because of its high cost and poor characteristics such as low pH and low water-holding capacity. This study investigated the possibility of replacing peat moss with palm waste and its effect on growth characteristics. Materials and Methods: The experiment was a completely randomized design with eight treatments and four replications. Compressed blocks of commercial cocopeat were used to reduce transportation costs. Before use, water was added to expand the material and make it completely uniform. In treatments containing sand + perlite, the components were mixed at a 1:1 volume ratio.
First, woody cuttings of pandanus were rooted in a sand bed in the greenhouse; the rooted cuttings were then transferred to pots 17 cm in diameter. The pots were filled with the examined materials and kept in a greenhouse at 20-25°C in winter and 30-35°C in summer. Growth indicators, including stem diameter, stem length, lateral shoot number, leaf area, and chlorophyll index, were measured. Data were analyzed using SPSS 16. Comparisons were made using one-way analysis of variance (ANOVA) and Duncan's multiple range tests. Differences were considered significant at P < 0.05. Results and Discussion: Vegetative characteristics of the pandanus plants differed significantly among treatments. Results showed that the cocochips medium and the 50% palm peat + 25% sand + 25% perlite medium had the highest leaf areas, 413.97 and 378.69 cm2 respectively, with no significant difference between them. Means followed by the same letter are not significantly different at P

  20. Cytology of DNA Replication Reveals Dynamic Plasticity of Large-Scale Chromatin Fibers.

    Science.gov (United States)

    Deng, Xiang; Zhironkina, Oxana A; Cherepanynets, Varvara D; Strelkova, Olga S; Kireev, Igor I; Belmont, Andrew S

    2016-09-26

    In higher eukaryotic interphase nuclei, the 100- to >1,000-fold linear compaction of chromatin is difficult to reconcile with its function as a template for transcription, replication, and repair. It is challenging to imagine how DNA and RNA polymerases with their associated molecular machinery would move along the DNA template without transient decondensation of observed large-scale chromatin "chromonema" fibers [1]. Transcription or "replication factory" models [2], in which polymerases remain fixed while DNA is reeled through, are similarly difficult to conceptualize without transient decondensation of these chromonema fibers. Here, we show how a dynamic plasticity of chromatin folding within large-scale chromatin fibers allows DNA replication to take place without significant changes in the global large-scale chromatin compaction or shape of these large-scale chromatin fibers. Time-lapse imaging of lac-operator-tagged chromosome regions shows no major change in the overall compaction of these chromosome regions during their DNA replication. Improved pulse-chase labeling of endogenous interphase chromosomes yields a model in which the global compaction and shape of large-Mbp chromatin domains remains largely invariant during DNA replication, with DNA within these domains undergoing significant movements and redistribution as they move into and then out of adjacent replication foci. In contrast to hierarchical folding models, this dynamic plasticity of large-scale chromatin organization explains how localized changes in DNA topology allow DNA replication to take place without an accompanying global unfolding of large-scale chromatin fibers while suggesting a possible mechanism for maintaining epigenetic programming of large-scale chromatin domains throughout DNA replication. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. State of the Art in Large-Scale Soil Moisture Monitoring

    Science.gov (United States)

    Ochsner, Tyson E.; Cosh, Michael Harold; Cuenca, Richard H.; Dorigo, Wouter; Draper, Clara S.; Hagimoto, Yutaka; Kerr, Yan H.; Larson, Kristine M.; Njoku, Eni Gerald; Small, Eric E.; et al.

    2013-01-01

    Soil moisture is an essential climate variable influencing land-atmosphere interactions, an essential hydrologic variable impacting rainfall-runoff processes, an essential ecological variable regulating net ecosystem exchange, and an essential agricultural variable constraining food security. Large-scale soil moisture monitoring has advanced in recent years, creating opportunities to transform scientific understanding of soil moisture and related processes. These advances are being driven by researchers from a broad range of disciplines, but this complicates collaboration and communication. For some applications, the science required to utilize large-scale soil moisture data is poorly developed. In this review, we describe the state of the art in large-scale soil moisture monitoring and identify some critical needs for research to optimize the use of increasingly available soil moisture data. We review representative examples of 1) emerging in situ and proximal sensing techniques, 2) dedicated soil moisture remote sensing missions, 3) soil moisture monitoring networks, and 4) applications of large-scale soil moisture measurements. Significant near-term progress seems possible in the use of large-scale soil moisture data for drought monitoring. Assimilation of soil moisture data for meteorological or hydrologic forecasting also shows promise, but significant challenges related to model structures and model errors remain. Little progress has been made yet in the use of large-scale soil moisture observations within the context of ecological or agricultural modeling. Opportunities abound to advance the science and practice of large-scale soil moisture monitoring for the sake of improved Earth system monitoring, modeling, and forecasting.

  2. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    Directory of Open Access Journals (Sweden)

    A. F. Van Loon

    2012-11-01

    Full Text Available Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is: how well do large-scale models simulate the propagation from meteorological to hydrological drought? To answer this question, we evaluated the simulation of drought propagation in an ensemble mean of ten large-scale models, both land-surface models and global hydrological models, that participated in the model intercomparison project of WATCH (WaterMIP). For a selection of case study areas, we studied drought characteristics (number of droughts, duration, severity), drought propagation features (pooling, attenuation, lag, lengthening), and hydrological drought typology (classical rainfall deficit drought, rain-to-snow-season drought, wet-to-dry-season drought, cold snow season drought, warm snow season drought, composite drought).

    Drought characteristics simulated by large-scale models clearly reflected drought propagation; i.e. drought events became fewer and longer when moving through the hydrological cycle. However, more differentiation was expected between fast and slowly responding systems, with slowly responding systems having fewer and longer droughts in runoff than fast responding systems. This was not found using large-scale models. Drought propagation features were poorly reproduced by the large-scale models, because runoff reacted immediately to precipitation, in all case study areas. This fast reaction to precipitation, even in cold climates in winter and in semi-arid climates in summer, also greatly influenced the hydrological drought typology as identified by the large-scale models. In general, the large-scale models had the correct representation of drought types, but the percentages of occurrence had some important mismatches, e.g. an overestimation of classical rainfall deficit droughts, and an
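The drought characteristics and pooling feature evaluated above follow from threshold-level drought identification. Below is a generic sketch of that method, computing event count, duration, and severity from a series, with optional pooling of events separated by short wet spells; the toy series and threshold are arbitrary, not WATCH model output.

```python
import numpy as np

def drought_events(series, threshold, pool_gap=0):
    """Threshold-level drought identification.

    Contiguous runs below `threshold` are drought events; events separated
    by at most `pool_gap` non-drought steps are pooled into one. Returns a
    list of (start_index, duration, severity) tuples, where severity is the
    cumulative deficit below the threshold.
    """
    below = series < threshold
    events, i, n = [], 0, len(series)
    while i < n:
        if below[i]:
            j = i
            while j < n and below[j]:
                j += 1
            events.append([i, j])                     # half-open run [i, j)
            i = j
        else:
            i += 1
    pooled = []                                       # merge nearby events
    for ev in events:
        if pooled and ev[0] - pooled[-1][1] <= pool_gap:
            pooled[-1][1] = ev[1]
        else:
            pooled.append(ev)
    out = []
    for s, e in pooled:
        deficit = np.clip(threshold - series[s:e], 0, None).sum()
        out.append((s, e - s, float(deficit)))
    return out

# Toy runoff-like series: two dry spells separated by one wet step
series = np.array([5, 5, 1, 1, 5, 1, 5, 5], dtype=float)
events = drought_events(series, threshold=2.0)              # two events
pooled = drought_events(series, threshold=2.0, pool_gap=1)  # pooled into one
```

Pooling makes events fewer and longer, which is exactly the propagation signature (from meteorological to hydrological drought) that the evaluation checks in the large-scale model output.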

  3. Decoupling local mechanics from large-scale structure in modular metamaterials

    Science.gov (United States)

    Yang, Nan; Silverberg, Jesse L.

    2017-04-01

    A defining feature of mechanical metamaterials is that their properties are determined by the organization of internal structure instead of the raw fabrication materials. This shift of attention to engineering internal degrees of freedom has coaxed relatively simple materials into exhibiting a wide range of remarkable mechanical properties. For practical applications to be realized, however, this nascent understanding of metamaterial design must be translated into a capacity for engineering large-scale structures with prescribed mechanical functionality. Thus, the challenge is to systematically map desired functionality of large-scale structures backward into a design scheme while using finite parameter domains. Such “inverse design” is often complicated by the deep coupling between large-scale structure and local mechanical function, which limits the available design space. Here, we introduce a design strategy for constructing 1D, 2D, and 3D mechanical metamaterials inspired by modular origami and kirigami. Our approach is to assemble a number of modules into a voxelized large-scale structure, where the module’s design has a greater number of mechanical design parameters than the number of constraints imposed by bulk assembly. This inequality allows each voxel in the bulk structure to be uniquely assigned mechanical properties independent from its ability to connect and deform with its neighbors. In studying specific examples of large-scale metamaterial structures we show that a decoupling of global structure from local mechanical function allows for a variety of mechanically and topologically complex designs.

  4. Response of deep and shallow tropical maritime cumuli to large-scale processes

    Science.gov (United States)

    Yanai, M.; Chu, J.-H.; Stark, T. E.; Nitta, T.

    1976-01-01

    The bulk diagnostic method of Yanai et al. (1973) and a simplified version of the spectral diagnostic method of Nitta (1975) are used for a more quantitative evaluation of the response of various types of cumuliform clouds to large-scale processes, using the same data set in the Marshall Islands area for a 100-day period in 1956. The dependence of the cloud mass flux distribution on radiative cooling, large-scale vertical motion, and evaporation from the sea is examined. It is shown that typical radiative cooling rates in the tropics tend to produce a bimodal distribution of mass spectrum exhibiting deep and shallow clouds. The bimodal distribution is further enhanced when the large-scale vertical motion is upward, and a nearly unimodal distribution of shallow clouds prevails when the relative cooling is compensated by the heating due to the large-scale subsidence. Both deep and shallow clouds are modulated by large-scale disturbances. The primary role of surface evaporation is to maintain the moisture flux at the cloud base.

  5. Large-scale Labeled Datasets to Fuel Earth Science Deep Learning Applications

    Science.gov (United States)

    Maskey, M.; Ramachandran, R.; Miller, J.

    2017-12-01

    Deep learning has revolutionized computer vision and natural language processing with various algorithms scaled using high-performance computing. However, generic large-scale labeled datasets such as the ImageNet are the fuel that drives the impressive accuracy of deep learning results. Large-scale labeled datasets already exist in domains such as medical science, but creating them in the Earth science domain is a challenge. While there are ways to apply deep learning using limited labeled datasets, there is a need in the Earth sciences for creating large-scale labeled datasets for benchmarking and scaling deep learning applications. At the NASA Marshall Space Flight Center, we are using deep learning for a variety of Earth science applications where we have encountered the need for large-scale labeled datasets. We will discuss our approaches for creating such datasets and why these datasets are just as valuable as deep learning algorithms. We will also describe successful usage of these large-scale labeled datasets with our deep learning based applications.

  6. Large-scale retrieval for medical image analytics: A comprehensive review.

    Science.gov (United States)

    Li, Zhongyu; Zhang, Xiaofan; Müller, Henning; Zhang, Shaoting

    2018-01-01

Over the past decades, medical image analytics was greatly facilitated by the explosion of digital imaging techniques, where huge amounts of medical images were produced with ever-increasing quality and diversity. However, conventional methods for analyzing medical images have achieved limited success, as they are not capable of tackling the huge amount of image data. In this paper, we review state-of-the-art approaches for large-scale medical image analysis, which are mainly based on recent advances in computer vision, machine learning and information retrieval. Specifically, we first present the general pipeline of large-scale retrieval and summarize the challenges and opportunities of medical image analytics at a large scale. Then, we provide a comprehensive review of algorithms and techniques relevant to major processes in the pipeline, including feature representation, feature indexing, searching, etc. On the basis of existing work, we introduce the evaluation protocols and multiple applications of large-scale medical image retrieval, with a variety of exploratory and diagnostic scenarios. Finally, we discuss future directions of large-scale retrieval, which can further improve the performance of medical image analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
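The pipeline stages named above (feature representation, feature indexing, searching) can be illustrated with a toy retrieval sketch; the random descriptors and brute-force cosine index below are stand-ins for learned features and a real approximate nearest-neighbour index:

```python
import numpy as np

# Feature representation: one descriptor vector per database image
# (random stand-ins here; real systems use learned CNN features etc.)
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 128))

# Feature indexing: L2-normalize so a dot product is cosine similarity
index = features / np.linalg.norm(features, axis=1, keepdims=True)

def search(query, k=5):
    """Searching: return indices of the k most similar database images."""
    q = query / np.linalg.norm(query)
    sims = index @ q
    return np.argsort(-sims)[:k]

# A noisy copy of image 42 should retrieve image 42 first
query = features[42] + 0.1 * rng.normal(size=128)
top = search(query)
```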

  7. Soft X-ray Emission from Large-Scale Galactic Outflows in Seyfert Galaxies

    Science.gov (United States)

    Colbert, E. J. M.; Baum, S.; O'Dea, C.; Veilleux, S.

    1998-01-01

Kiloparsec-scale soft X-ray nebulae extend along the galaxy minor axes in several Seyfert galaxies, including NGC 2992, NGC 4388 and NGC 5506. In these three galaxies, the extended X-ray emission observed in ROSAT HRI images has 0.2-2.4 keV X-ray luminosities of 0.4-3.5 x 10^40 erg s^-1. The X-ray nebulae are roughly co-spatial with the large-scale radio emission, suggesting that both are produced by large-scale galactic outflows. Assuming pressure balance between the radio and X-ray plasmas, the X-ray filling factor is >~ 10^4 times as large as the radio plasma filling factor, suggesting that large-scale outflows in Seyfert galaxies are predominantly winds of thermal X-ray emitting gas. We favor an interpretation in which large-scale outflows originate as AGN-driven jets that entrain and heat gas on kpc scales as they make their way out of the galaxy. AGN- and starburst-driven winds are also possible explanations if the winds are oriented along the rotation axis of the galaxy disk. Since large-scale outflows are present in at least 50 percent of Seyfert galaxies, the soft X-ray emission from the outflowing gas may, in many cases, explain the "soft excess" X-ray feature observed below 2 keV in X-ray spectra of many Seyfert 2 galaxies.

  8. An efficient and novel computation method for simulating diffraction patterns from large-scale coded apertures on large-scale focal plane arrays

    Science.gov (United States)

    Shrekenhamer, Abraham; Gottesman, Stephen R.

    2012-10-01

A novel and memory efficient method for computing diffraction patterns produced on large-scale focal planes by large-scale Coded Apertures at wavelengths where diffraction effects are significant has been developed and tested. The scheme, readily implementable on portable computers, overcomes the memory limitations of present state-of-the-art simulation codes such as Zemax. The method consists of first calculating a set of reference complex field (amplitude and phase) patterns on the focal plane produced by a single (reference) central hole, extending to twice the focal plane array size, with one such pattern for each Line-of-Sight (LOS) direction and wavelength in the scene, and with the pattern amplitude corresponding to the square-root of the spectral irradiance from each such LOS direction in the scene at selected wavelengths. Next, the set of reference patterns is transformed to generate pattern sets for other holes. The transformation consists of a translational pattern shift corresponding to each hole's position offset and an electrical phase shift corresponding to each hole's position offset and incoming radiance's direction and wavelength. The set of complex patterns for each direction and wavelength is then summed coherently and squared for each detector to yield a set of power patterns unique for each direction and wavelength. Finally, the set of power patterns is summed to produce the full waveband diffraction pattern from the scene. With this tool researchers can now efficiently simulate diffraction patterns produced from scenes by large-scale Coded Apertures onto large-scale focal plane arrays to support the development and optimization of coded aperture masks and image reconstruction algorithms.
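The shift-and-phase scheme described above can be sketched as follows. The geometry, units, hole layout, and the spherical-wave stand-in for the reference field are all hypothetical, not the authors' code; only the structure (one doubled reference pattern reused per hole via a translational shift plus an electrical phase, summed coherently per LOS direction, then squared and accumulated) follows the abstract:

```python
import numpy as np

N = 64                      # focal plane array size (pixels)
wavelength = 10e-6          # m
pitch = 50e-6               # detector pitch (m)
focal = 0.1                 # focal distance (m)
holes = [(0, 0), (8, 0), (0, -8)]        # hole offsets in pixels
directions = [(0.0, 0.0), (0.01, 0.0)]   # LOS direction cosines
radiance = [1.0, 0.5]                    # spectral irradiance per LOS

# Reference complex field from the central hole on a doubled grid;
# a simple quadratic-phase (spherical-wave) stand-in for real Fresnel
# diffraction from the reference aperture.
y, x = np.mgrid[-N:N, -N:N] * pitch
ref = np.exp(1j * np.pi * (x**2 + y**2) / (wavelength * focal))

k = 2 * np.pi / wavelength
power = np.zeros((N, N))
for (ax, by), irr in zip(directions, radiance):
    field = np.zeros((N, N), dtype=complex)
    for hx, hy in holes:
        # translational shift for the hole's position offset
        shifted = np.roll(np.roll(ref, hy, axis=0), hx, axis=1)
        # crop the doubled reference grid to the focal plane array
        crop = shifted[N // 2 : N // 2 + N, N // 2 : N // 2 + N]
        # electrical phase for hole offset and incoming direction
        phase = np.exp(1j * k * (hx * pitch * ax + hy * pitch * by))
        field += np.sqrt(irr) * crop * phase
    power += np.abs(field) ** 2   # coherent sum, squared per detector

# 'power' is the summed diffraction pattern; memory cost is one doubled
# reference grid instead of a stored pattern per hole.
```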

  9. What Will the Neighbors Think? Building Large-Scale Science Projects Around the World

    International Nuclear Information System (INIS)

    Jones, Craig; Mrotzek, Christian; Toge, Nobu; Sarno, Doug

    2007-01-01

    Public participation is an essential ingredient for turning the International Linear Collider into a reality. Wherever the proposed particle accelerator is sited in the world, its neighbors -- in any country -- will have something to say about hosting a 35-kilometer-long collider in their backyards. When it comes to building large-scale physics projects, almost every laboratory has a story to tell. Three case studies from Japan, Germany and the US will be presented to examine how community relations are handled in different parts of the world. How do particle physics laboratories interact with their local communities? How do neighbors react to building large-scale projects in each region? How can the lessons learned from past experiences help in building the next big project? These and other questions will be discussed to engage the audience in an active dialogue about how a large-scale project like the ILC can be a good neighbor.

  10. Survey of large-scale solar water heaters installed in Taiwan, China

    Energy Technology Data Exchange (ETDEWEB)

    Chang Keh-Chin; Lee Tsong-Sheng; Chung Kung-Ming [Cheng Kung Univ., Tainan (China); Lien Ya-Feng; Lee Chine-An [Cheng Kung Univ. Research and Development Foundation, Tainan (China)

    2008-07-01

Almost all the solar collectors installed in Taiwan, China were used for production of hot water for homeowners (residential systems), in which the area of solar collectors is less than 10 square meters. From 2001 to 2006, there were only 39 large-scale systems (defined as the area of solar collectors being over 100 m{sup 2}) installed. Their utilization purposes are for rooming houses (dormitories), swimming pools, restaurants, and manufacturing processes. A comprehensive survey of those large-scale solar water heaters was conducted in 2006. The objectives of the survey were to assess the systems' performance and to obtain feedback from the individual users. It is found that lack of experience in system design and maintenance is the key obstacle to reliable operation of a system. For further promotion of large-scale solar water heaters in Taiwan, a more comprehensive program on system design for manufacturing processes should be conducted. (orig.)

  11. Large-scale climatic anomalies affect marine predator foraging behaviour and demography

    Science.gov (United States)

    Bost, Charles A.; Cotté, Cedric; Terray, Pascal; Barbraud, Christophe; Bon, Cécile; Delord, Karine; Gimenez, Olivier; Handrich, Yves; Naito, Yasuhiko; Guinet, Christophe; Weimerskirch, Henri

    2015-10-01

    Determining the links between the behavioural and population responses of wild species to environmental variations is critical for understanding the impact of climate variability on ecosystems. Using long-term data sets, we show how large-scale climatic anomalies in the Southern Hemisphere affect the foraging behaviour and population dynamics of a key marine predator, the king penguin. When large-scale subtropical dipole events occur simultaneously in both subtropical Southern Indian and Atlantic Oceans, they generate tropical anomalies that shift the foraging zone southward. Consequently the distances that penguins foraged from the colony and their feeding depths increased and the population size decreased. This represents an example of a robust and fast impact of large-scale climatic anomalies affecting a marine predator through changes in its at-sea behaviour and demography, despite lack of information on prey availability. Our results highlight a possible behavioural mechanism through which climate variability may affect population processes.

  12. A method of orbital analysis for large-scale first-principles simulations

    International Nuclear Information System (INIS)

    Ohwaki, Tsukuru; Otani, Minoru; Ozaki, Taisuke

    2014-01-01

An efficient method of calculating the natural bond orbitals (NBOs) based on a truncation of the entire density matrix of a whole system is presented for large-scale density functional theory calculations. The method recovers an orbital picture for O(N) electronic structure methods which directly evaluate the density matrix without using Kohn-Sham orbitals, thus enabling quantitative analysis of chemical reactions in large-scale systems in the language of localized Lewis-type chemical bonds. With the density matrix calculated by either an exact diagonalization or O(N) method, the computational cost is O(1) for the calculation of NBOs associated with a local region where a chemical reaction takes place. As an illustration of the method, we demonstrate how an electronic structure in a local region of interest can be analyzed by NBOs in a large-scale first-principles molecular dynamics simulation for a liquid electrolyte bulk model (propylene carbonate + LiBF4)

  13. Presenting an Approach for Conducting Knowledge Architecture within Large-Scale Organizations.

    Science.gov (United States)

    Varaee, Touraj; Habibi, Jafar; Mohaghar, Ali

    2015-01-01

    Knowledge architecture (KA) establishes the basic groundwork for the successful implementation of a short-term or long-term knowledge management (KM) program. An example of KA is the design of a prototype before a new vehicle is manufactured. Due to a transformation to large-scale organizations, the traditional architecture of organizations is undergoing fundamental changes. This paper explores the main strengths and weaknesses in the field of KA within large-scale organizations and provides a suitable methodology and supervising framework to overcome specific limitations. This objective was achieved by applying and updating the concepts from the Zachman information architectural framework and the information architectural methodology of enterprise architecture planning (EAP). The proposed solution may be beneficial for architects in knowledge-related areas to successfully accomplish KM within large-scale organizations. The research method is descriptive; its validity is confirmed by performing a case study and polling the opinions of KA experts.

  15. Advances in Large-Scale Solar Heating and Long Term Storage in Denmark

    DEFF Research Database (Denmark)

    Heller, Alfred

    2000-01-01

According to information from the European Large-Scale Solar Heating Network (see http://www.hvac.chalmers.se/cshp/), the area of installed solar collectors for large-scale application in Europe is approximately 8 million m2, corresponding to about 4000 MW thermal power. The 11 plants...... the last 10 years and the corresponding cost per collector area for the final installed plant is kept constant, even though the solar production is increased. Unfortunately, large-scale seasonal storage was not able to keep up with the advances in solar technology, at least for pit water and gravel storage...... of the total 51 plants are equipped with long-term storage. In Denmark, 7 plants are installed, comprising approx. 18,000 m2 of collector area, with new plants planned. The development of these plants and the involved technologies will be presented in this paper, with a focus on the improvements for Danish...

  16. Investigation on the integral output power model of a large-scale wind farm

    Institute of Scientific and Technical Information of China (English)

    BAO Nengsheng; MA Xiuqian; NI Weidou

    2007-01-01

The integral output power model of a large-scale wind farm is needed when estimating the wind farm's output over a period of time in the future. The actual wind speed power model and calculation method of a wind farm made up of many wind turbine units are discussed. After analyzing the incoming wind flow characteristics and their energy distributions, and after considering the mutual effects among the wind turbine units under certain assumptions, the incoming wind flow model of multi-units is built. The calculation algorithms and steps of the integral output power model of a large-scale wind farm are provided. Finally, an actual power output of the wind farm is calculated and analyzed using measured wind speed data. The characteristics of a large-scale wind farm are also discussed.
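As an illustration of aggregating unit outputs into an integral farm output, here is a minimal sketch. The power-curve parameters and the crude per-row wake deficit are invented stand-ins for the incoming-flow model and multi-unit effects discussed above, not the paper's model:

```python
import numpy as np

def turbine_power(v, cut_in=3.0, rated_v=12.0, cut_out=25.0, rated_p=2.0):
    """Simplified turbine power curve (MW): cubic rise between cut-in
    and rated speed, flat at rated power up to cut-out, else zero."""
    v = np.asarray(v, dtype=float)
    return np.where(
        (v >= cut_in) & (v < rated_v),
        rated_p * (v**3 - cut_in**3) / (rated_v**3 - cut_in**3),
        np.where((v >= rated_v) & (v <= cut_out), rated_p, 0.0),
    )

def farm_output(free_wind, n_rows, wake_deficit=0.08):
    """Integral farm output: each downstream row sees a reduced wind
    speed, a crude stand-in for multi-unit wake interaction."""
    total, v = 0.0, float(free_wind)
    for _ in range(n_rows):
        total += float(turbine_power(v))
        v *= 1.0 - wake_deficit   # simple per-row speed deficit
    return total

# 10 m/s free wind over 5 rows; wakes keep this below 5x one turbine
total_mw = farm_output(10.0, 5)
```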

  17. Can administrative referenda be an instrument of control over large-scale technical installations?

    International Nuclear Information System (INIS)

    Rossnagel, A.

    1986-01-01

An administrative referendum offers the possibility of direct participation of the citizens in decisions concerning large-scale technical installations. The article investigates the legal status of such a referendum on the basis of constitutional and democratic principles. The conclusion drawn is that any attempt to realize more direct democracy in a concrete field of jurisdiction of the state will meet with considerable difficulties. On the other hand, the author clearly states that more direct democratic control over the establishment of large-scale technology is sensible in terms of politics and democratic principles, and possible within the constitutional system. Developments towards more direct democracy would mean an enhancement of representative democracy and would be adequate vis-à-vis the problems posed by large-scale technology. (HSCH) [de

  18. Using radar altimetry to update a large-scale hydrological model of the Brahmaputra river basin

    DEFF Research Database (Denmark)

    Finsen, F.; Milzow, Christian; Smith, R.

    2014-01-01

    Measurements of river and lake water levels from space-borne radar altimeters (past missions include ERS, Envisat, Jason, Topex) are useful for calibration and validation of large-scale hydrological models in poorly gauged river basins. Altimetry data availability over the downstream reaches...... of the Brahmaputra is excellent (17 high-quality virtual stations from ERS-2, 6 from Topex and 10 from Envisat are available for the Brahmaputra). In this study, altimetry data are used to update a large-scale Budyko-type hydrological model of the Brahmaputra river basin in real time. Altimetry measurements...... improved model performance considerably. The Nash-Sutcliffe model efficiency increased from 0.77 to 0.83. Real-time river basin modelling using radar altimetry has the potential to improve the predictive capability of large-scale hydrological models elsewhere on the planet....
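The Nash-Sutcliffe model efficiency quoted above is straightforward to compute from observed and simulated discharge; a minimal sketch with invented discharge values (not the Brahmaputra data):

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / sum of squared deviations
    of the observations from their mean. 1 is a perfect fit; 0 means
    the model is no better than predicting the observed mean."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([100.0, 150.0, 300.0, 250.0, 120.0])      # observed flow
prior = np.array([90.0, 120.0, 240.0, 210.0, 100.0])     # before update
updated = np.array([95.0, 140.0, 290.0, 245.0, 115.0])   # after update
assert nash_sutcliffe(obs, updated) > nash_sutcliffe(obs, prior)
```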

  19. How the Internet Will Help Large-Scale Assessment Reinvent Itself

    Directory of Open Access Journals (Sweden)

    Randy Elliot Bennett

    2001-02-01

Large-scale assessment in the United States is undergoing enormous pressure to change. That pressure stems from many causes. Depending upon the type of test, the issues precipitating change include an outmoded cognitive-scientific basis for test design; a mismatch with curriculum; the differential performance of population groups; a lack of information to help individuals improve; and inefficiency. These issues provide a strong motivation to reconceptualize both the substance and the business of large-scale assessment. At the same time, advances in technology, measurement, and cognitive science are providing the means to make that reconceptualization a reality. The thesis of this paper is that the largest facilitating factor will be technological, in particular the Internet. In the same way that it is already helping to revolutionize commerce, education, and even social interaction, the Internet will help revolutionize the business and substance of large-scale assessment.

  20. Copy of Using Emulation and Simulation to Understand the Large-Scale Behavior of the Internet.

    Energy Technology Data Exchange (ETDEWEB)

    Adalsteinsson, Helgi; Armstrong, Robert C.; Chiang, Ken; Gentile, Ann C.; Lloyd, Levi; Minnich, Ronald G.; Vanderveen, Keith; Van Randwyk, Jamie A; Rudish, Don W.

    2008-10-01

We report on the work done in the late-start LDRD "Using Emulation and Simulation to Understand the Large-Scale Behavior of the Internet". We describe the creation of a research platform that emulates many thousands of machines to be used for the study of large-scale internet behavior. We describe a proof-of-concept simple attack we performed in this environment. We describe the successful capture of a Storm bot and, from the study of the bot and further literature search, establish large-scale aspects we seek to understand via emulation of Storm on our research platform in possible follow-on work. Finally, we discuss possible future work.

  1. Output Control Technologies for a Large-scale PV System Considering Impacts on a Power Grid

    Science.gov (United States)

    Kuwayama, Akira

The mega-solar demonstration project named "Verification of Grid Stabilization with Large-scale PV Power Generation systems" was completed in March 2011 at Wakkanai, the northernmost city of Japan. The major objectives of this project were to evaluate adverse impacts of large-scale PV power generation systems connected to the power grid and to develop output control technologies with an integrated battery storage system. This paper describes the outline and results of this project. These results show the effectiveness of the battery storage system and also of the proposed output control methods for a large-scale PV system in ensuring stable operation of power grids. NEDO, the New Energy and Industrial Technology Development Organization of Japan, conducted this project, and HEPCO, Hokkaido Electric Power Co., Inc., managed the overall project.
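One common output control technique for grid-connected PV with battery storage is ramp-rate limiting. The sketch below is a generic illustration with hypothetical numbers, not the Wakkanai control logic; note that when the battery saturates (full or empty), the ramp limit can still be exceeded:

```python
import numpy as np

def smooth_pv(pv, max_ramp, capacity, efficiency=0.95):
    """Ramp-rate limiting with a battery: the grid feed-in should change
    by at most `max_ramp` per step; the battery absorbs or supplies the
    difference while its state of charge stays within [0, capacity]."""
    grid = [float(pv[0])]
    soc = capacity / 2.0                   # start half charged
    for p in pv[1:]:
        target = float(np.clip(p, grid[-1] - max_ramp, grid[-1] + max_ramp))
        delta = p - target                 # >0: charge, <0: discharge
        if delta > 0:
            charge = min(delta * efficiency, capacity - soc)
            soc += charge
            target = p - charge / efficiency
        else:
            supply = min(-delta, soc * efficiency)
            soc -= supply / efficiency
            target = p + supply
        grid.append(target)
    return np.array(grid), soc

pv = np.array([5.0, 5.2, 1.0, 0.8, 4.5, 5.0])   # MW, cloud-passage dip
grid, soc = smooth_pv(pv, max_ramp=1.0, capacity=3.0)
ramps = np.abs(np.diff(grid))   # smaller than the raw PV ramps
```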

  2. Algorithm 873: LSTRS: MATLAB Software for Large-Scale Trust-Region Subproblems and Regularization

    DEFF Research Database (Denmark)

    Rojas Larrazabal, Marielba de la Caridad; Santos, Sandra A.; Sorensen, Danny C.

    2008-01-01

A MATLAB 6.0 implementation of the LSTRS method is presented. LSTRS was described in Rojas, M., Santos, S.A., and Sorensen, D.C., A new matrix-free method for the large-scale trust-region subproblem, SIAM J. Optim., 11(3):611-646, 2000. LSTRS is designed for large-scale quadratic problems with one...... at each step. LSTRS relies on matrix-vector products only and has low and fixed storage requirements, features that make it suitable for large-scale computations. In the MATLAB implementation, the Hessian matrix of the quadratic objective function can be specified either explicitly, or in the form...... of a matrix-vector multiplication routine. Therefore, the implementation preserves the matrix-free nature of the method. A description of the LSTRS method and of the MATLAB software, version 1.2, is presented. Comparisons with other techniques and applications of the method are also included. A guide
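To illustrate what LSTRS computes, here is a small dense solver for the trust-region subproblem min ½xᵀHx + gᵀx subject to ‖x‖ ≤ Δ. This is not LSTRS: it forms a full eigendecomposition rather than working matrix-free, uses plain bisection on the Lagrange multiplier, and does not handle the so-called hard case:

```python
import numpy as np

def trust_region_subproblem(H, g, delta):
    """Solve min 0.5 x'Hx + g'x s.t. ||x|| <= delta (dense, small-scale).

    Uses the optimality condition (H + lam*I) x = -g with lam >= 0,
    lam >= -lambda_min(H), and ||x|| = delta on the boundary.
    The hard case (g orthogonal to the lowest eigenvector) is ignored.
    """
    w, V = np.linalg.eigh(H)          # w ascending
    gt = V.T @ g

    def x_norm(lam):
        return np.linalg.norm(gt / (w + lam))

    # interior solution if H is positive definite and the step fits
    if w[0] > 0 and x_norm(0.0) <= delta:
        return V @ (-gt / w)

    # otherwise bisect on lam > max(0, -lambda_min) until ||x|| = delta
    lo = max(0.0, -w[0]) + 1e-14
    hi = lo + 1.0
    while x_norm(hi) > delta:
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if x_norm(mid) > delta:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return V @ (-gt / (w + lam))
```

A boundary solution has norm exactly Δ; an indefinite H always forces the boundary.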

  3. Analysis for preliminary evaluation of discrete fracture flow and large-scale permeability in sedimentary rocks

    International Nuclear Information System (INIS)

    Kanehiro, B.Y.; Lai, C.H.; Stow, S.H.

    1987-05-01

Conceptual models for sedimentary rock settings that could be used in future evaluation and suitability studies are being examined through the DOE Repository Technology Program. One area of concern for the hydrologic aspects of these models is discrete fracture flow analysis as related to the estimation of the size of the representative elementary volume, evaluation of the appropriateness of continuum assumptions, and estimation of the large-scale permeabilities of sedimentary rocks. A basis for preliminary analysis of flow in fracture systems of the types that might be expected to occur in low permeability sedimentary rocks is presented. The approach used involves numerical modeling of discrete fracture flow for the configuration of a large-scale hydrologic field test directed at estimation of the size of the representative elementary volume and large-scale permeability. Analysis of fracture data on the basis of this configuration is expected to provide a preliminary indication of the scale at which continuum assumptions can be made.

  4. Global and exponential attractors of the three dimensional viscous primitive equations of large-scale moist atmosphere

    OpenAIRE

    You, Bo; Li, Fang

    2016-01-01

    This paper is concerned with the long-time behavior of solutions for the three dimensional viscous primitive equations of large-scale moist atmosphere. We prove the existence of a global attractor for the three dimensional viscous primitive equations of large-scale moist atmosphere by asymptotic a priori estimate and construct an exponential attractor by using the smoothing property of the semigroup generated by the three dimensional viscous primitive equations of large-scale moist atmosphere...

  5. Scale interactions in a mixing layer – the role of the large-scale gradients

    KAUST Repository

    Fiscaletti, D.

    2016-02-15

© 2016 Cambridge University Press. The interaction between the large and the small scales of turbulence is investigated in a mixing layer, at a Reynolds number based on the Taylor microscale of , via direct numerical simulations. The analysis is performed in physical space, and the local vorticity root-mean-square (r.m.s.) is taken as a measure of the small-scale activity. It is found that positive large-scale velocity fluctuations correspond to large vorticity r.m.s. on the low-speed side of the mixing layer, whereas they correspond to low vorticity r.m.s. on the high-speed side. The relationship between large and small scales thus depends on position if the vorticity r.m.s. is correlated with the large-scale velocity fluctuations. On the contrary, the correlation coefficient is nearly constant throughout the mixing layer and close to unity if the vorticity r.m.s. is correlated with the large-scale velocity gradients. Therefore, the small-scale activity appears closely related to large-scale gradients, while the correlation between the small-scale activity and the large-scale velocity fluctuations is shown to reflect a property of the large scales. Furthermore, the vorticity from unfiltered (small scales) and from low pass filtered (large scales) velocity fields tend to be aligned when examined within vortical tubes. These results provide evidence for the so-called 'scale invariance' (Meneveau & Katz, Annu. Rev. Fluid Mech., vol. 32, 2000, pp. 1-32), and suggest that some of the large-scale characteristics are not lost at the small scales, at least at the Reynolds number achieved in the present simulation.

  6. Eight attention points when evaluating large-scale public sector reforms

    DEFF Research Database (Denmark)

    Hansen, Morten Balle; Breidahl, Karen Nielsen; Furubo, Jan-Eric

    2017-01-01

    This chapter analyses the challenges related to evaluations of large-scale public sector reforms. It is based on a meta-evaluation of the evaluation of the reform of the Norwegian Labour Market and Welfare Administration (the NAV-reform) in Norway, which entailed both a significant reorganization...... sector reforms. Based on the analysis, eight crucial points of attention when evaluating large-scale public sector reforms are elaborated. We discuss their reasons and argue that other countries will face the same challenges and thus can learn from the experiences of Norway....

  7. Prospects for investment in large-scale, grid-connected solar power in Africa

    DEFF Research Database (Denmark)

    Hansen, Ulrich Elmer; Nygaard, Ivan; Pedersen, Mathilde Brix

since the 1990s have changed the competitiveness of solar PV in all markets, ranging from individual households via institutions to mini-grids and grid-connected installations. In volume and investment, the market for large-scale grid-connected solar power plants is by far the most important......-scale investments in grid-connected solar power plants and local assembly facilities for PV panels have exceeded even optimistic scenarios. Finally, therefore, there seem to be bright prospects for investment in large-scale grid-connected solar power in Africa....

  8. Large-scale seismic test for soil-structure interaction research in Hualien, Taiwan

    International Nuclear Information System (INIS)

    Ueshima, T.; Kokusho, T.; Okamoto, T.

    1995-01-01

It is important to evaluate dynamic soil-structure interaction more accurately in the aseismic design of important facilities such as nuclear power plants. A large-scale model structure, about one-quarter the scale of a commercial nuclear power plant, was constructed on gravelly layers in seismically active Hualien, Taiwan. This international joint project is called 'the Hualien LSST Project', where 'LSST' is short for Large-Scale Seismic Test. This paper describes the research tasks and responsibilities, the progress of the construction work and research tasks along the timeline, and the main results obtained so far in this Project. (J.P.N.)

  9. Computing the universe: how large-scale simulations illuminate galaxies and dark energy

    Science.gov (United States)

    O'Shea, Brian

    2015-04-01

High-performance and large-scale computing is absolutely essential to understanding astronomical objects such as stars, galaxies, and the cosmic web. This is because these are structures that operate on physical, temporal, and energy scales that cannot be reasonably approximated in the laboratory, and whose complexity and nonlinearity often defy analytic modeling. In this talk, I show how the growth of computing platforms over time has facilitated our understanding of astrophysical and cosmological phenomena, focusing primarily on galaxies and large-scale structure in the Universe.

  10. Distributed and hierarchical control techniques for large-scale power plant systems

    International Nuclear Information System (INIS)

    Raju, G.V.S.; Kisner, R.A.

    1985-08-01

    In large-scale systems, integrated and coordinated control functions are required to maximize plant availability, to allow maneuverability through various power levels, and to meet externally imposed regulatory limitations. Nuclear power plants are large-scale systems. Prime subsystems are those that contribute directly to the behavior of the plant's ultimate output. The prime subsystems in a nuclear power plant include reactor, primary and intermediate heat transport, steam generator, turbine generator, and feedwater system. This paper describes and discusses the continuous-variable control system developed to supervise prime plant subsystems for optimal control and coordination.

  11. Facile Large-scale synthesis of stable CuO nanoparticles

    Science.gov (United States)

    Nazari, P.; Abdollahi-Nejand, B.; Eskandari, M.; Kohnehpoushi, S.

    2018-04-01

    In this work, a novel approach to synthesizing CuO nanoparticles was introduced. Sequential corrosion and detaching was proposed for the growth and dispersion of CuO nanoparticles at the optimum pH value of eight. The produced CuO nanoparticles were six nm (±2 nm) in diameter and spherical in shape, with high crystallinity and uniformity in size. In this method, large-scale production of CuO nanoparticles (120 grams in an experimental batch) from Cu micro-particles was achieved, which may meet the market criteria for large-scale production of CuO nanoparticles.

  12. Analysis of environmental impact assessment for large-scale X-ray medical equipments

    International Nuclear Information System (INIS)

    Fu Jin; Pei Chengkai

    2011-01-01

    Based on an Environmental Impact Assessment (EIA) project, this paper elaborates the basic analysis essentials of EIA for the sales project of large-scale X-ray medical equipment, and provides the analysis procedure of environmental impact and the dose estimation method under normal and accident conditions. The key points of EIA for the sales project of large-scale X-ray medical equipment include the determination of pollution factors and management limit values according to the project's actual situation, and the use of various assessment and prediction methods such as analogy, actual measurement and calculation to analyze, monitor, calculate and predict the pollution under normal and accident conditions. (authors)

  13. Primordial Non-Gaussianity in the Large-Scale Structure of the Universe

    Directory of Open Access Journals (Sweden)

    Vincent Desjacques

    2010-01-01

    generated the cosmological fluctuations observed today. Any detection of significant non-Gaussianity would thus have profound implications for our understanding of cosmic structure formation. The large-scale mass distribution in the Universe is a sensitive probe of the nature of initial conditions. Recent theoretical progress together with rapid developments in observational techniques will enable us to critically confront predictions of inflationary scenarios and set constraints as competitive as those from the Cosmic Microwave Background. In this paper, we review past and current efforts in the search for primordial non-Gaussianity in the large-scale structure of the Universe.

  14. Design of a Large-scale Three-dimensional Flexible Arrayed Tactile Sensor

    Directory of Open Access Journals (Sweden)

    Junxiang Ding

    2011-01-01

    Full Text Available This paper proposes a new type of large-scale three-dimensional flexible arrayed tactile sensor based on conductive rubber. It can detect three-dimensional force information on the continuous surface of the sensor, realizing a true skin-type tactile sensor. The widely used liquid rubber injection molding (LIMS) method is used for "overall injection molding" sample preparation. The structural details of the staggered nodes and a new decoupling algorithm for force analysis are given. Simulation results show that a sensor based on this structure can achieve flexible measurement for large-scale 3-D tactile sensor arrays.

  15. 2MASS Constraints on the Local Large-Scale Structure: A Challenge to LCDM?

    OpenAIRE

    Frith, W. J.; Shanks, T.; Outram, P. J.

    2004-01-01

    We investigate the large-scale structure of the local galaxy distribution using the recently completed 2 Micron All Sky Survey (2MASS). First, we determine the K-band number counts over the 4000 sq.deg. APM survey area where evidence for a large-scale `local hole' has previously been detected and compare them to a homogeneous prediction. Considering a LCDM form for the 2-point angular correlation function, the observed deficiency represents a 5 sigma fluctuation in the galaxy distribution. We...

  16. Large-scale melting and impact mixing on early-formed asteroids

    DEFF Research Database (Denmark)

    Greenwood, Richard; Barrat, J.-A.; Scott, Edward Robert Dalton

    Large-scale melting of asteroids and planetesimals is now known to have taken place extremely early in solar system history [1]. The first-generation bodies produced by this process would have been subject to rapid collisional reprocessing, leading in most cases to fragmentation and/or accretion...... the relationship between the different groups of achondrites [3, 4]. Here we present new oxygen isotope evidence concerning the role of large-scale melting and subsequent impact mixing in the evolution of three important achondrite groups: the main-group pallasites, mesosiderites and HEDs....

  17. Development of Best Practices for Large-scale Data Management Infrastructure

    NARCIS (Netherlands)

    S. Stadtmüller; H.F. Mühleisen (Hannes); C. Bizer; M.L. Kersten (Martin); J.A. de Rijke (Arjen); F.E. Groffen (Fabian); Y. Zhang (Ying); G. Ladwig; A. Harth; M Trampus

    2012-01-01

    The amount of available data for processing is constantly increasing and becomes more diverse. We collect our experiences on deploying large-scale data management tools on local-area clusters or cloud infrastructures and provide guidance to use these computing and storage

  18. A note on solving large-scale zero-one programming problems

    NARCIS (Netherlands)

    Adema, Jos J.

    1988-01-01

    A heuristic for solving large-scale zero-one programming problems is provided. The heuristic is based on the modifications made by H. Crowder et al. (1983) to the standard branch-and-bound strategy. First, the initialization is modified. The modification is only useful if the objective function

  19. Understanding water delivery performance in a large-scale irrigation system in Peru

    NARCIS (Netherlands)

    Vos, J.M.C.

    2005-01-01

    During a two-year field study the performance of the water delivery was evaluated in a large-scale irrigation system on the north coast of Peru. Flow measurements were carried out along the main canals, along two secondary canals, and in two tertiary blocks in the Chancay-Lambayeque irrigation

  20. A fast method for large-scale isolation of phages from hospital ...

    African Journals Online (AJOL)

    This plaque-forming method could be adopted to isolate E. coli phage easily, rapidly and in large quantities. Among the 18 isolated E. coli phages, 10 of them had a broad host range in E. coli and warrant further study. Key words: Escherichia coli phages, large-scale isolation, drug resistance, biological properties.

  1. Large-scale agent-based social simulation : A study on epidemic prediction and control

    NARCIS (Netherlands)

    Zhang, M.

    2016-01-01

    Large-scale agent-based social simulation is gradually proving to be a versatile methodological approach for studying human societies, which could make contributions from policy making in social science, to distributed artificial intelligence and agent technology in computer science, and to theory

  2. Probabilistic discrimination between large-scale environments of intensifying and decaying African Easterly Waves

    Energy Technology Data Exchange (ETDEWEB)

    Agudelo, Paula A. [Area Hidrometria e Instrumentacion Carrera, Empresas Publicas de Medellin, Medellin (Colombia); Hoyos, Carlos D.; Curry, Judith A.; Webster, Peter J. [School of Earth and Atmospheric Sciences, Georgia Institute of Technology, Atlanta, GA (United States)

    2011-04-15

    About 50-60% of Atlantic tropical cyclones (TCs) including nearly 85% of intense hurricanes have their origins as African Easterly Waves (AEWs). However, predicting the likelihood of AEW intensification remains a difficult task. We have developed a Bayesian diagnostic methodology to understand genesis of North Atlantic TCs spawned by AEWs through the examination of the characteristics of the AEW itself together with the large-scale environment, resulting in a probabilistic discrimination between large-scale environments associated with intensifying and decaying AEWs. The methodology is based on a new objective and automatic AEW tracking scheme used for the period 1980 to 2001 based on spatio-temporally Fourier-filtered relative vorticity and meridional winds at different levels and outgoing long wave radiation. Using the AEW and Hurricane Best Track Files (HURDAT) data sets, probability density functions of environmental variables that discriminate between AEWs that decay, become TCs or become major hurricanes are determined. Results indicate that the initial amplitude of the AEWs is a major determinant for TC genesis, and that TC genesis risk increases when the wave enters an environment characterized by pre-existing large-scale convergence and moist convection. For the prediction of genesis, the most useful variables are column integrated heating, vertical velocity and specific humidity, and a combined form of divergence and vertical velocity and SST. It is also found that the state of the large-scale environment modulates the annual cycle and interannual variability of the AEW intensification efficiency. (orig.)
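    The probabilistic discrimination described above amounts to a Bayes classifier built from class-conditional PDFs of environmental variables. The following is a minimal sketch under illustrative assumptions (synthetic one-variable data and Gaussian kernel density estimates; the study's actual variables, data, and priors are not reproduced here):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Illustrative training data: one environmental variable (e.g. column
# integrated heating) for waves that intensified vs. decayed.
intensifying = rng.normal(loc=2.0, scale=0.8, size=200)
decaying = rng.normal(loc=0.5, scale=0.8, size=300)

# Class-conditional PDFs estimated with Gaussian kernels.
pdf_int = gaussian_kde(intensifying)
pdf_dec = gaussian_kde(decaying)

# Class priors taken from the sample sizes.
p_int = len(intensifying) / (len(intensifying) + len(decaying))
p_dec = 1.0 - p_int

def prob_intensify(x):
    """Posterior probability that a wave in environment x intensifies."""
    num = pdf_int(x) * p_int
    den = num + pdf_dec(x) * p_dec
    return float(num / den)

print(prob_intensify(2.5))  # near the intensifying mode: high posterior
print(prob_intensify(0.0))  # near the decaying mode: low posterior
```

    In the multivariate case, the same construction is repeated per variable (or with a joint density estimate), which is what makes the method a probabilistic discriminator rather than a hard threshold.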

  3. The Large-Scale Biosphere-Atmosphere Experiment in Amazonia: Analyzing Regional Land Use Change Effects.

    Science.gov (United States)

    Michael Keller; Maria Assunção Silva-Dias; Daniel C. Nepstad; Meinrat O. Andreae

    2004-01-01

    The Large-Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) is a multi-disciplinary, multinational scientific project led by Brazil. LBA researchers seek to understand Amazonia in its global context especially with regard to regional and global climate. Current development activities in Amazonia including deforestation, logging, cattle ranching, and agriculture...

  4. ECOLOGICAL RESEARCH IN THE LARGE-SCALE BIOSPHERE–ATMOSPHERE EXPERIMENT IN AMAZONIA: EARLY RESULTS.

    Science.gov (United States)

    M. Keller; A. Alencar; G. P. Asner; B. Braswell; M. Bustamente; E. Davidson; T. Feldpausch; E. Fernandes; M. Goulden; P. Kabat; B. Kruijt; F. Luizao; S. Miller; D. Markewitz; A. D. Nobre; C. A. Nobre; N. Priante Filho; H. Rocha; P. Silva Dias; C. von Randow; G. L. Vourlitis

    2004-01-01

    The Large-scale Biosphere–Atmosphere Experiment in Amazonia (LBA) is a multinational, interdisciplinary research program led by Brazil. Ecological studies in LBA focus on how tropical forest conversion, regrowth, and selective logging influence carbon storage, nutrient dynamics, trace gas fluxes, and the prospect for sustainable land use in the Amazon region. Early...

  5. Accelerating Relevance Vector Machine for Large-Scale Data on Spark

    Directory of Open Access Journals (Sweden)

    Liu Fang

    2017-01-01

    Full Text Available Relevance vector machine (RVM) is a machine learning algorithm based on a sparse Bayesian framework, which performs well when running classification and regression tasks on small-scale datasets. However, RVM also has certain drawbacks which restrict its practical applications, such as (1) a slow training process and (2) poor performance when training on large-scale datasets. In order to solve these problems, we first propose Discrete AdaBoost RVM (DAB-RVM), which incorporates ensemble learning into RVM. This method performs well with large-scale low-dimensional datasets. However, as the number of features increases, the training time of DAB-RVM increases as well. To avoid this phenomenon, we utilize the sufficient training samples of large-scale datasets and propose all features boosting RVM (AFB-RVM), which modifies the way of obtaining weak classifiers. In our experiments we study the differences between various boosting techniques with RVM, demonstrating the performance of the proposed approaches on Spark. As a result of this paper, two proposed approaches on Spark for different types of large-scale datasets are available.
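    Discrete AdaBoost, the scheme DAB-RVM builds on, reweights training samples after each round and combines weak classifiers by a weighted vote. The sketch below uses decision stumps in place of RVM weak learners and synthetic 1-D data purely for illustration; the paper's Spark-based implementation is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D binary classification data, labels in {-1, +1}.
X = np.concatenate([rng.normal(-1, 0.7, 150), rng.normal(1, 0.7, 150)])
y = np.concatenate([-np.ones(150), np.ones(150)])

def fit_stump(X, y, w):
    """Weighted decision stump: threshold and polarity minimizing weighted error."""
    best = (np.inf, 0.0, 1)
    for thr in np.unique(X):
        for pol in (1, -1):
            pred = np.where(X >= thr, pol, -pol)
            err = w[pred != y].sum()
            if err < best[0]:
                best = (err, thr, pol)
    return best

def adaboost(X, y, rounds=10):
    """Discrete AdaBoost: reweight samples, combine stumps by weighted vote."""
    w = np.full(len(X), 1 / len(X))
    ensemble = []
    for _ in range(rounds):
        err, thr, pol = fit_stump(X, y, w)
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
        pred = np.where(X >= thr, pol, -pol)
        w *= np.exp(-alpha * y * pred)   # upweight misclassified samples
        w /= w.sum()
        ensemble.append((alpha, thr, pol))
    return ensemble

def predict(ensemble, X):
    score = sum(a * np.where(X >= t, p, -p) for a, t, p in ensemble)
    return np.sign(score)

model = adaboost(X, y)
acc = (predict(model, X) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

    DAB-RVM replaces the stump with an RVM trained on the reweighted (or resampled) data, and AFB-RVM changes how the weak classifiers are obtained by exploiting the abundant samples of large-scale datasets.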

  6. Verbalizing, Visualizing, and Navigating: The Effect of Strategies on Encoding a Large-Scale Virtual Environment

    Science.gov (United States)

    Kraemer, David J. M.; Schinazi, Victor R.; Cawkwell, Philip B.; Tekriwal, Anand; Epstein, Russell A.; Thompson-Schill, Sharon L.

    2017-01-01

    Using novel virtual cities, we investigated the influence of verbal and visual strategies on the encoding of navigation-relevant information in a large-scale virtual environment. In 2 experiments, participants watched videos of routes through 4 virtual cities and were subsequently tested on their memory for observed landmarks and their ability to…

  7. Most experiments done so far with limited plants. Large-scale testing ...

    Indian Academy of Sciences (India)

    Most experiments done so far with limited plants. Large-scale testing needs to be done with objectives such as: Apart from primary transformants, their progenies must be tested. Experiments on segregation, production of homozygous lines, analysis of expression levels in ...

  8. REQUIREMENTS FOR SYSTEMS DEVELOPMENT LIFE CYCLE MODELS FOR LARGE-SCALE DEFENSE SYSTEMS

    Directory of Open Access Journals (Sweden)

    Kadir Alpaslan DEMIR

    2015-10-01

    Full Text Available Large-scale defense system projects are strategic for maintaining and increasing the national defense capability. Therefore, governments spend billions of dollars on the acquisition and development of large-scale defense systems. The scale of defense systems is always increasing and the costs to build them are skyrocketing. Today, defense systems are software intensive and they are either a system of systems or a part of one. Historically, the project performances observed in the development of these systems have been significantly poor when compared to other types of projects. It is obvious that the currently used systems development life cycle models are insufficient to address today's challenges of building these systems. Using a systems development life cycle model that is specifically designed for large-scale defense system developments and is effective in dealing with today's and near-future challenges will help to improve project performances. The first step in developing a large-scale defense systems development life cycle model is the identification of requirements for such a model. This paper contributes to the body of literature in the field by providing a set of requirements for systems development life cycle models for large-scale defense systems. Furthermore, a research agenda is proposed.

  9. Predicting the effect of fire on large-scale vegetation patterns in North America.

    Science.gov (United States)

    Donald McKenzie; David L. Peterson; Ernesto. Alvarado

    1996-01-01

    Changes in fire regimes are expected across North America in response to anticipated global climatic changes. Potential changes in large-scale vegetation patterns are predicted as a result of altered fire frequencies. A new vegetation classification was developed by condensing Kuchler potential natural vegetation types into aggregated types that are relatively...

  10. Large-scale thinning, ponderosa pine, and mountain pine beetle in the Black Hills, USA

    Science.gov (United States)

    Jose F. Negron; Kurt K. Allen; Angie Ambourn; Blaine Cook; Kenneth Marchand

    2017-01-01

    Mountain pine beetle (Dendroctonus ponderosae Hopkins) (MPB), can cause extensive ponderosa pine (Pinus ponderosa Dougl. ex Laws.) mortality in the Black Hills of South Dakota and Wyoming, USA. Lower tree densities have been associated with reduced MPB-caused tree mortality, but few studies have reported on large-scale thinning and most data come from small plots that...

  11. A Heuristic Approach to Author Name Disambiguation in Bibliometrics Databases for Large-scale Research Assessments

    NARCIS (Netherlands)

    D'Angelo, C.A.; Giuffrida, C.; Abramo, G.

    2011-01-01

    National exercises for the evaluation of research activity by universities are becoming regular practice in ever more countries. These exercises have mainly been conducted through the application of peer-review methods. Bibliometrics has not been able to offer a valid large-scale alternative because

  12. Ecological research in the large-scale biosphere-atmosphere experiment in Amazonia: early results

    NARCIS (Netherlands)

    Keller, M.; Alencar, A.; Asner, G.P.; Braswell, B.; Bustamante, M.; Davidson, E.; Feldpausch, T.; Fernandes, E.; Goulden, M.; Kabat, P.; Kruijt, B.; Luizão, F.; Miller, S.; Markewitz, D.; Nobre, A.D.; Nobre, C.A.; Priante Filho, N.; Rocha, da H.; Silva Dias, P.; Randow, von C.; Vourlitis, G.L.

    2004-01-01

    The Large-scale Biosphere-Atmosphere Experiment in Amazonia (LBA) is a multinational, interdisciplinary research program led by Brazil. Ecological studies in LBA focus on how tropical forest conversion, regrowth, and selective logging influence carbon storage, nutrient dynamics, trace gas fluxes,

  13. A large-scale perspective on stress-induced alterations in resting-state networks

    Science.gov (United States)

    Maron-Katz, Adi; Vaisvaser, Sharon; Lin, Tamar; Hendler, Talma; Shamir, Ron

    2016-02-01

    Stress is known to induce large-scale neural modulations. However, its neural effect once the stressor is removed, and how it relates to subjective experience, are not fully understood. Here we used a statistically sound data-driven approach to investigate alterations in large-scale resting-state functional connectivity (rsFC) induced by acute social stress. We compared rsfMRI profiles of 57 healthy male subjects before and after stress induction. Using a parcellation-based univariate statistical analysis, we identified a large-scale rsFC change involving 490 parcel-pairs. Aiming to characterize this change, we employed statistical enrichment analysis, identifying anatomic structures that were significantly interconnected by these pairs. This analysis revealed strengthening of thalamo-cortical connectivity and weakening of cross-hemispheral parieto-temporal connectivity. These alterations were further found to be associated with changes in subjective stress reports. Integrating report-based information on stress sustainment 20 minutes post induction revealed a single significant rsFC change between the right amygdala and the precuneus, which inversely correlated with the level of subjective recovery. Our study demonstrates the value of enrichment analysis for exploring large-scale network reorganization patterns, and provides new insight into stress-induced neural modulations and their relation to subjective experience.
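    The enrichment analysis described, testing whether the altered parcel-pairs over-represent connections between a given pair of anatomic structures, is commonly formalized as a hypergeometric test. A sketch with illustrative counts (not taken from the study):

```python
from scipy.stats import hypergeom

# Illustrative counts: out of N possible parcel-pairs, K connect a given
# pair of anatomic structures; n pairs showed a stress-related
# connectivity change, k of which fall within that structure pair.
N, K, n, k = 5000, 120, 490, 35

# P(X >= k) if the n altered pairs were drawn at random from all N pairs.
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p-value: {p_value:.2e}")
```

    A small p-value indicates that the structure pair is interconnected by altered parcel-pairs far more often than chance would predict, which is the sense in which the abstract calls the interconnection "significant".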

  14. Output regulation of large-scale hydraulic networks with minimal steady state power consumption

    NARCIS (Netherlands)

    Jensen, Tom Nørgaard; Wisniewski, Rafał; De Persis, Claudio; Kallesøe, Carsten Skovmose

    2014-01-01

    An industrial case study involving a large-scale hydraulic network is examined. The hydraulic network underlies a district heating system with an arbitrary number of end-users. The problem of output regulation is addressed along with an optimization criterion for the control. The fact that the

  15. EFFECTS OF LARGE-SCALE POULTRY FARMS ON AQUATIC MICROBIAL COMMUNITIES: A MOLECULAR INVESTIGATION.

    Science.gov (United States)

    The effects of large-scale poultry production operations on water quality and human health are largely unknown. Poultry litter is frequently applied as fertilizer to agricultural lands adjacent to large poultry farms. Run-off from the land introduces a variety of stressors into t...

  16. Received signal strength in large-scale wireless relay sensor network: a stochastic ray approach

    NARCIS (Netherlands)

    Hu, L.; Chen, Y.; Scanlon, W.G.

    2011-01-01

    The authors consider a point percolation lattice representation of a large-scale wireless relay sensor network (WRSN) deployed in a cluttered environment. Each relay sensor corresponds to a grid point in the random lattice and the signal sent by the source is modelled as an ensemble of photons that

  17. Idealised modelling of storm surges in large-scale coastal basins

    NARCIS (Netherlands)

    Chen, Wenlong

    2015-01-01

    Coastal areas around the world are frequently attacked by various types of storms, threatening human life and property. This study aims to understand storm surge processes in large-scale coastal basins, particularly focusing on the influences of geometry, topography and storm characteristics on the

  18. Explore the Usefulness of Person-Fit Analysis on Large-Scale Assessment

    Science.gov (United States)

    Cui, Ying; Mousavi, Amin

    2015-01-01

    The current study applied the person-fit statistic, l[subscript z], to data from a Canadian provincial achievement test to explore the usefulness of conducting person-fit analysis on large-scale assessments. Item parameter estimates were compared before and after the misfitting student responses, as identified by l[subscript z], were removed. The…
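    The l_z statistic used here is the standardized log-likelihood of a response pattern under a fitted IRT model; large negative values flag aberrant (misfitting) examinees. A minimal sketch with illustrative item probabilities (the achievement test's actual item parameters are not given in the abstract):

```python
import numpy as np

def lz_statistic(responses, p):
    """Standardized log-likelihood person-fit statistic l_z.

    responses: 0/1 item scores for one examinee; p: model-implied
    probabilities of a correct response (e.g. from a fitted IRT model).
    """
    responses = np.asarray(responses, float)
    p = np.asarray(p, float)
    logit = np.log(p / (1 - p))
    l0 = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    mean = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    var = np.sum(p * (1 - p) * logit**2)
    return (l0 - mean) / np.sqrt(var)

# A response pattern consistent with the model (only hard items missed)...
p = np.array([0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2])
consistent = [1, 1, 1, 1, 0, 0, 0]
# ...versus an aberrant one (easy items missed, hard items correct).
aberrant = [0, 0, 0, 0, 1, 1, 1]
print(lz_statistic(consistent, p), lz_statistic(aberrant, p))
```

    Removing examinees whose l_z falls below a cutoff and re-estimating item parameters is the before/after comparison the study performs.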

  19. Boarding School, Academic Motivation and Engagement, and Psychological Well-Being: A Large-Scale Investigation

    Science.gov (United States)

    Martin, Andrew J.; Papworth, Brad; Ginns, Paul; Liem, Gregory Arief D.

    2014-01-01

    Boarding school has been a feature of education systems for centuries. Minimal large-scale quantitative data have been collected to examine its association with important educational and other outcomes. The present study represents one of the largest studies into boarding school conducted to date. It investigates boarding school and students'…

  20. Evaluation of Large-Scale Public-Sector Reforms: A Comparative Analysis

    Science.gov (United States)

    Breidahl, Karen N.; Gjelstrup, Gunnar; Hansen, Hanne Foss; Hansen, Morten Balle

    2017-01-01

    Research on the evaluation of large-scale public-sector reforms is rare. This article sets out to fill that gap in the evaluation literature and argues that it is of vital importance since the impact of such reforms is considerable and they change the context in which evaluations of other and more delimited policy areas take place. In our…

  1. Update of the Large-scale Concentration Maps for the Netherlands (GCN)

    International Nuclear Information System (INIS)

    Van den Elshout, S.; Molenaar, R.

    2011-01-01

    Every year the RIVM and PBL publish the so-called Large-scale concentration maps of the Netherlands (GCN maps). These maps offer an approximation of the background concentrations of several air-polluting substances. Sometimes these maps need to be updated to realize a better approximation of the background concentrations.

  2. Using Practitioner Inquiry within and against Large-Scale Educational Reform

    Science.gov (United States)

    Hines, Mary Beth; Conner-Zachocki, Jennifer

    2015-01-01

    This research study examines the impact of teacher research on participants in a large-scale educational reform initiative in the United States, No Child Left Behind, and its strand for reading teachers, Reading First. Reading First supported professional development for teachers in order to increase student scores on standardized tests. The…

  3. Design of Availability-Dependent Distributed Services in Large-Scale Uncooperative Settings

    Science.gov (United States)

    Morales, Ramses Victor

    2009-01-01

    Thesis Statement: "Availability-dependent global predicates can be efficiently and scalably realized for a class of distributed services, in spite of specific selfish and colluding behaviors, using local and decentralized protocols". Several types of large-scale distributed systems spanning the Internet have to deal with availability variations…

  4. Balancing Tensions in Educational Policy Reforms: Large-Scale Implementation of Assessment for Learning in Norway

    Science.gov (United States)

    Hopfenbeck, Therese N.; Flórez Petour, María Teresa; Tolo, Astrid

    2015-01-01

    This study investigates how different stakeholders in Norway experienced a government-initiated, large-scale policy implementation programme on "Assessment for Learning" ("AfL"). Data were collected through 58 interviews with stakeholders in charge of the policy; Ministers of Education and members of the Directorate of…

  5. Energy transfers in large-scale and small-scale dynamos

    Science.gov (United States)

    Samtaney, Ravi; Kumar, Rohit; Verma, Mahendra

    2015-11-01

    We present the energy transfers, mainly energy fluxes and shell-to-shell energy transfers, in small-scale dynamo (SSD) and large-scale dynamo (LSD) using numerical simulations of MHD turbulence for Pm = 20 (SSD) and for Pm = 0.2 (LSD) on a 1024³ grid. For SSD, we demonstrate that the magnetic energy growth is caused by nonlocal energy transfers from the large-scale or forcing-scale velocity field to the small-scale magnetic field. The peak of these energy transfers moves towards lower wavenumbers as the dynamo evolves, which is the reason for the growth of the magnetic fields at the large scales. The energy transfers U2U (velocity to velocity) and B2B (magnetic to magnetic) are forward and local. For LSD, we show that the magnetic energy growth takes place via energy transfers from the large-scale velocity field to the large-scale magnetic field. We observe forward U2U and B2B energy flux, similar to SSD.

  6. The Emergence of Large-Scale Computer Assisted Summative Examination Facilities in Higher Education

    NARCIS (Netherlands)

    Draaijer, S.; Warburton, W. I.

    2014-01-01

    A case study is presented of VU University Amsterdam where a dedicated large-scale CAA examination facility was established. In the facility, 385 students can take an exam concurrently. The case study describes the change factors and processes leading up to the decision by the institution to

  7. Model Predictive Control for Flexible Power Consumption of Large-Scale Refrigeration Systems

    DEFF Research Database (Denmark)

    Shafiei, Seyed Ehsan; Stoustrup, Jakob; Rasmussen, Henrik

    2014-01-01

    A model predictive control (MPC) scheme is introduced to directly control the electrical power consumption of large-scale refrigeration systems. Deviation from the baseline consumption corresponds to the storing and delivering of thermal energy. By virtue of such correspondence...

  8. Large-scale budget applications of mathematical programming in the Forest Service

    Science.gov (United States)

    Malcolm Kirby

    1978-01-01

    Mathematical programming applications in the Forest Service, U.S. Department of Agriculture, are growing. They are being used for widely varying problems: budgeting, land use planning, timber transport, road maintenance and timber harvest planning. Large-scale applications are being made in budgeting. The model that is described can be used by developing economies....

  9. The Ecological Impacts of Large-Scale Agrofuel Monoculture Production Systems in the Americas

    Science.gov (United States)

    Altieri, Miguel A.

    2009-01-01

    This article examines the expansion of agrofuels in the Americas and the ecological impacts associated with the technologies used in the production of large-scale monocultures of corn and soybeans. In addition to deforestation and displacement of lands devoted to food crops due to expansion of agrofuels, the massive use of transgenic crops and…

  10. Evaluating stream trout habitat on large-scale aerial color photographs

    Science.gov (United States)

    Wallace J. Greentree; Robert C. Aldrich

    1976-01-01

    Large-scale aerial color photographs were used to evaluate trout habitat by studying stream and streambank conditions. Ninety-two percent of these conditions could be identified correctly on the color photographs. Color photographs taken 1 year apart showed that rehabilitation efforts resulted in stream vegetation changes. Water depth was correlated with film density:...

  11. Investigating and Stimulating Primary Teachers' Attitudes Towards Science: Summary of a Large-Scale Research Project

    Science.gov (United States)

    Walma van der Molen, Juliette; van Aalderen-Smeets, Sandra

    2013-01-01

    Attention to the attitudes of primary teachers towards science is of fundamental importance to research on primary science education. The current article describes a large-scale research project that aims to overcome three main shortcomings in attitude research, i.e. lack of a strong theoretical concept of attitude, methodological flaws in…

  12. Achieving online consent to participation in large-scale gene-environment studies: a tangible destination

    NARCIS (Netherlands)

    Wood, F.; Kowalczuk, J.; Elwyn, G.; Mitchell, C.; Gallacher, J.

    2011-01-01

    BACKGROUND: Population based genetics studies are dependent on large numbers of individuals in the pursuit of small effect sizes. Recruiting and consenting a large number of participants is both costly and time consuming. We explored whether an online consent process for large-scale genetics studies

  13. Effects of mud supply on large-scale estuary morphology and development over centuries to millennia

    NARCIS (Netherlands)

    Braat, L.; van Kessel, Thijs; Leuven, J.R.F.W.; Kleinhans, M.G.

    2017-01-01

    Alluvial river estuaries consist largely of sand but are typically flanked by mudflats and salt marshes. The analogy with meandering rivers that are kept narrower than braided rivers by cohesive floodplain formation raises the question of how large-scale estuarine morphology and the late Holocene

  14. PathlinesExplorer — Image-based exploration of large-scale pathline fields

    KAUST Repository

    Nagoor, Omniah H.; Hadwiger, Markus; Srinivasan, Madhusudhanan

    2015-01-01

    -accessing the original huge data. Our approach is based on constructing a per-pixel linked list data structure in which each pixel contains a list of pathline segments. With this view-dependent method, it is possible to filter, color-code, and explore large-scale flow

  15. Selective vulnerability related to aging in large-scale resting brain networks.

    Science.gov (United States)

    Zhang, Hong-Ying; Chen, Wen-Xin; Jiao, Yun; Xu, Yao; Zhang, Xiang-Rong; Wu, Jing-Tao

    2014-01-01

    Normal aging is associated with cognitive decline. Evidence indicates that large-scale brain networks are affected by aging; however, it has not been established whether aging has equivalent effects on specific large-scale networks. In the present study, 40 healthy subjects including 22 older (aged 60-80 years) and 18 younger (aged 22-33 years) adults underwent resting-state functional MRI scanning. Four canonical resting-state networks, including the default mode network (DMN), executive control network (ECN), dorsal attention network (DAN) and salience network, were extracted, and the functional connectivities in these canonical networks were compared between the younger and older groups. We found distinct, disruptive alterations present in the large-scale aging-related resting brain networks: the ECN was affected the most, followed by the DAN. However, the DMN and salience networks showed limited functional connectivity disruption. The visual network served as a control and was similarly preserved in both groups. Our findings suggest that the aged brain is characterized by selective vulnerability in large-scale brain networks. These results could help improve our understanding of the mechanism of degeneration in the aging brain. Additional work is warranted to determine whether selective alterations in the intrinsic networks are related to impairments in behavioral performance.
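    Group comparisons of functional connectivity like the one described are often performed by Fisher z-transforming the correlation values and applying a two-sample test per network or connection. A sketch with synthetic values (the subject counts match the abstract, but the connectivity values and the specific test are illustrative assumptions, not the study's pipeline):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Illustrative connectivity (correlation) values for one network edge,
# for 18 younger and 22 older subjects; the numbers are synthetic.
young = rng.normal(0.55, 0.12, 18)
old = rng.normal(0.35, 0.12, 22)

# The Fisher r-to-z transform stabilizes the variance of correlations
# before the two-sample comparison.
t, p = stats.ttest_ind(np.arctanh(young), np.arctanh(old))
print(f"t = {t:.2f}, p = {p:.4f}")
```

    Repeating such a test across networks (with multiple-comparison correction) is what would reveal that some networks, such as the ECN, are disrupted while others, such as the DMN, are relatively preserved.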

  16. Large-scale computer networks and the future of legal knowledge-based systems

    NARCIS (Netherlands)

    Leenes, R.E.; Svensson, Jorgen S.; Hage, J.C.; Bench-Capon, T.J.M.; Cohen, M.J.; van den Herik, H.J.

    1995-01-01

    In this paper we investigate the relation between legal knowledge-based systems and large-scale computer networks such as the Internet. On the one hand, researchers of legal knowledge-based systems have claimed huge possibilities, but despite the efforts over the last twenty years, the number of

  17. The ENIGMA Consortium: large-scale collaborative analyses of neuroimaging and genetic data

    NARCIS (Netherlands)

    Thompson, Paul M.; Stein, Jason L.; Medland, Sarah E.; Hibar, Derrek P.; Vasquez, Alejandro Arias; Renteria, Miguel E.; Toro, Roberto; Jahanshad, Neda; Schumann, Gunter; Franke, Barbara; Wright, Margaret J.; Martin, Nicholas G.; Agartz, Ingrid; Alda, Martin; Alhusaini, Saud; Almasy, Laura; Almeida, Jorge; Alpert, Kathryn; Andreasen, Nancy C.; Andreassen, Ole A.; Apostolova, Liana G.; Appel, Katja; Armstrong, Nicola J.; Aribisala, Benjamin; Bastin, Mark E.; Bauer, Michael; Bearden, Carrie E.; Bergmann, Orjan; Binder, Elisabeth B.; Blangero, John; Bockholt, Henry J.; Boen, Erlend; Bois, Catherine; Boomsma, Dorret I.; Booth, Tom; Bowman, Ian J.; Bralten, Janita; Brouwer, Rachel M.; Brunner, Han G.; Brohawn, David G.; Buckner, Randy L.; Buitelaar, Jan; Bulayeva, Kazima; Bustillo, Juan R.; Calhoun, Vince D.; Hartman, Catharina A.; Hoekstra, Pieter J.; Penninx, Brenda W.; Schmaal, Lianne; van Tol, Marie-Jose

    The Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA) Consortium is a collaborative network of researchers working together on a range of large-scale studies that integrate data from 70 institutions worldwide. Organized into Working Groups that tackle questions in neuroscience,

  18. Some effects of integrated production planning in large-scale kitchens

    DEFF Research Database (Denmark)

    Engelund, Eva Høy; Friis, Alan; Jacobsen, Peter

    2005-01-01

    Integrated production planning in large-scale kitchens proves advantageous for increasing the overall quality of the food produced and the flexibility in terms of a diverse food supply. The aim is to increase the flexibility and the variability in the production as well as the focus on freshness ...

  19. Data warehousing technologies for large-scale and right-time data

    DEFF Research Database (Denmark)

    Xiufeng, Liu

    heterogeneous sources into a central data warehouse (DW) by Extract-Transform-Load (ETL) at regular time intervals, e.g., monthly, weekly, or daily. But now, it becomes challenging for large-scale data, and hard to meet the near real-time/right-time business decisions. This thesis considers some...
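The Extract-Transform-Load process the abstract refers to can be sketched minimally as below. All source names, fields, and the warehouse schema are illustrative assumptions, not from the thesis.

```python
# Minimal ETL sketch: pull rows from heterogeneous sources, normalize
# them to one schema, and load them into a central warehouse table.

import sqlite3


def extract():
    # Two "heterogeneous" sources with differing field conventions.
    crm = [{"name": "Ada", "spend": "120.50"}]
    shop = [{"customer": "Bob", "total_eur": 80.0}]
    return crm, shop


def transform(crm, shop):
    # Normalize both feeds to one schema: (customer, amount).
    rows = [(r["name"], float(r["spend"])) for r in crm]
    rows += [(r["customer"], float(r["total_eur"])) for r in shop]
    return rows


def load(rows, conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS fact_sales (customer TEXT, amount REAL)"
    )
    conn.executemany("INSERT INTO fact_sales VALUES (?, ?)", rows)


conn = sqlite3.connect(":memory:")
load(transform(*extract()), conn)
total = conn.execute("SELECT SUM(amount) FROM fact_sales").fetchone()[0]
```

In the batch setting the abstract describes, a job like this runs at a fixed interval (daily, weekly, monthly); the near real-time challenge is precisely that this interval is too coarse for right-time decisions.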

  20. The ENIGMA Consortium: large-scale collaborative analyses of neuroimaging and genetic data

    NARCIS (Netherlands)

    Thompson, Paul M.; Stein, Jason L.; Medland, Sarah E.; Hibar, Derrek P.; Vasquez, Alejandro Arias; Renteria, Miguel E.; Toro, Roberto; Jahanshad, Neda; Schumann, Gunter; Franke, Barbara; Wright, Margaret J.; Martin, Nicholas G.; Agartz, Ingrid; Alda, Martin; Alhusaini, Saud; Almasy, Laura; Almeida, Jorge; Alpert, Kathryn; Andreasen, Nancy C.; Andreassen, Ole A.; Apostolova, Liana G.; Appel, Katja; Armstrong, Nicola J.; Aribisala, Benjamin; Bastin, Mark E.; Bauer, Michael; Bearden, Carrie E.; Bergmann, Orjan; Binder, Elisabeth B.; Blangero, John; Bockholt, Henry J.; Bøen, Erlend; Bois, Catherine; Boomsma, Dorret I.; Booth, Tom; Bowman, Ian J.; Bralten, Janita; Brouwer, Rachel M.; Brunner, Han G.; Brohawn, David G.; Buckner, Randy L.; Buitelaar, Jan; Bulayeva, Kazima; Bustillo, Juan R.; Calhoun, Vince D.; Cannon, Dara M.; Cantor, Rita M.; Carless, Melanie A.; Caseras, Xavier; Cavalleri, Gianpiero L.; Chakravarty, M. Mallar; Chang, Kiki D.; Ching, Christopher R. K.; Christoforou, Andrea; Cichon, Sven; Clark, Vincent P.; Conrod, Patricia; Coppola, Giovanni; Crespo-Facorro, Benedicto; Curran, Joanne E.; Czisch, Michael; Deary, Ian J.; de Geus, Eco J. C.; den Braber, Anouk; Delvecchio, Giuseppe; Depondt, Chantal; de Haan, Lieuwe; de Zubicaray, Greig I.; Dima, Danai; Dimitrova, Rali; Djurovic, Srdjan; Dong, Hongwei; Donohoe, Gary; Duggirala, Ravindranath; Dyer, Thomas D.; Ehrlich, Stefan; Ekman, Carl Johan; Elvsåshagen, Torbjørn; Emsell, Louise; Erk, Susanne; Espeseth, Thomas; Fagerness, Jesen; Fears, Scott; Fedko, Iryna; Fernández, Guillén; Fisher, Simon E.; Foroud, Tatiana; Fox, Peter T.; Francks, Clyde; Frangou, Sophia; Frey, Eva Maria; Frodl, Thomas; Frouin, Vincent; Garavan, Hugh; Giddaluru, Sudheer; Glahn, David C.; Godlewska, Beata; Goldstein, Rita Z.; Gollub, Randy L.; Grabe, Hans J.; Grimm, Oliver; Gruber, Oliver; Guadalupe, Tulio; Gur, Raquel E.; Gur, Ruben C.; Göring, Harald H. 
H.; Hagenaars, Saskia; Hajek, Tomas; Hall, Geoffrey B.; Hall, Jeremy; Hardy, John; Hartman, Catharina A.; Hass, Johanna; Hatton, Sean N.; Haukvik, Unn K.; Hegenscheid, Katrin; Heinz, Andreas; Hickie, Ian B.; Ho, Beng-Choon; Hoehn, David; Hoekstra, Pieter J.; Hollinshead, Marisa; Holmes, Avram J.; Homuth, Georg; Hoogman, Martine; Hong, L. Elliot; Hosten, Norbert; Hottenga, Jouke-Jan; Hulshoff Pol, Hilleke E.; Hwang, Kristy S.; Jack, Clifford R.; Jenkinson, Mark; Johnston, Caroline; Jönsson, Erik G.; Kahn, René S.; Kasperaviciute, Dalia; Kelly, Sinead; Kim, Sungeun; Kochunov, Peter; Koenders, Laura; Krämer, Bernd; Kwok, John B. J.; Lagopoulos, Jim; Laje, Gonzalo; Landen, Mikael; Landman, Bennett A.; Lauriello, John; Lawrie, Stephen M.; Lee, Phil H.; Le Hellard, Stephanie; Lemaître, Herve; Leonardo, Cassandra D.; Li, Chiang-Shan; Liberg, Benny; Liewald, David C.; Liu, Xinmin; Lopez, Lorna M.; Loth, Eva; Lourdusamy, Anbarasu; Luciano, Michelle; Macciardi, Fabio; Machielsen, Marise W. J.; Macqueen, Glenda M.; Malt, Ulrik F.; Mandl, René; Manoach, Dara S.; Martinot, Jean-Luc; Matarin, Mar; Mather, Karen A.; Mattheisen, Manuel; Mattingsdal, Morten; Meyer-Lindenberg, Andreas; McDonald, Colm; McIntosh, Andrew M.; McMahon, Francis J.; McMahon, Katie L.; Meisenzahl, Eva; Melle, Ingrid; Milaneschi, Yuri; Mohnke, Sebastian; Montgomery, Grant W.; Morris, Derek W.; Moses, Eric K.; Mueller, Bryon A.; Muñoz Maniega, Susana; Mühleisen, Thomas W.; Müller-Myhsok, Bertram; Mwangi, Benson; Nauck, Matthias; Nho, Kwangsik; Nichols, Thomas E.; Nilsson, Lars-Göran; Nugent, Allison C.; Nyberg, Lars; Olvera, Rene L.; Oosterlaan, Jaap; Ophoff, Roel A.; Pandolfo, Massimo; Papalampropoulou-Tsiridou, Melina; Papmeyer, Martina; Paus, Tomas; Pausova, Zdenka; Pearlson, Godfrey D.; Penninx, Brenda W.; Peterson, Charles P.; Pfennig, Andrea; Phillips, Mary; Pike, G. 
Bruce; Poline, Jean-Baptiste; Potkin, Steven G.; Pütz, Benno; Ramasamy, Adaikalavan; Rasmussen, Jerod; Rietschel, Marcella; Rijpkema, Mark; Risacher, Shannon L.; Roffman, Joshua L.; Roiz-Santiañez, Roberto; Romanczuk-Seiferth, Nina; Rose, Emma J.; Royle, Natalie A.; Rujescu, Dan; Ryten, Mina; Sachdev, Perminder S.; Salami, Alireza; Satterthwaite, Theodore D.; Savitz, Jonathan; Saykin, Andrew J.; Scanlon, Cathy; Schmaal, Lianne; Schnack, Hugo G.; Schork, Andrew J.; Schulz, S. Charles; Schür, Remmelt; Seidman, Larry; Shen, Li; Shoemaker, Jody M.; Simmons, Andrew; Sisodiya, Sanjay M.; Smith, Colin; Smoller, Jordan W.; Soares, Jair C.; Sponheim, Scott R.; Sprooten, Emma; Starr, John M.; Steen, Vidar M.; Strakowski, Stephen; Strike, Lachlan; Sussmann, Jessika; Sämann, Philipp G.; Teumer, Alexander; Toga, Arthur W.; Tordesillas-Gutierrez, Diana; Trabzuni, Daniah; Trost, Sarah; Turner, Jessica; van den Heuvel, Martijn; van der Wee, Nic J.; van Eijk, Kristel; van Erp, Theo G. M.; van Haren, Neeltje E. M.; van 't Ent, Dennis; van Tol, Marie-Jose; Valdés Hernández, Maria C.; Veltman, Dick J.; Versace, Amelia; Völzke, Henry; Walker, Robert; Walter, Henrik; Wang, Lei; Wardlaw, Joanna M.; Weale, Michael E.; Weiner, Michael W.; Wen, Wei; Westlye, Lars T.; Whalley, Heather C.; Whelan, Christopher D.; White, Tonya; Winkler, Anderson M.; Wittfeld, Katharina; Woldehawariat, Girma; Wolf, Christiane; Zilles, David; Zwiers, Marcel P.; Thalamuthu, Anbupalam; Schofield, Peter R.; Freimer, Nelson B.; Lawrence, Natalia S.; Drevets, Wayne

    2014-01-01

    The Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA) Consortium is a collaborative network of researchers working together on a range of large-scale studies that integrate data from 70 institutions worldwide. Organized into Working Groups that tackle questions in neuroscience,