WorldWideScience

Sample records for large part relies

  1. 76 FR 69545 - Conditions and Requirements for Relying on Component Part Testing or Certification, or Another...

    Science.gov (United States)

    2011-11-08

    ... revised, on our own initiative, the final rule to incorporate the concept that a finished product certifier may rely upon finished... Federal Register, Vol. 76, No. 216, Tuesday, November 8, 2011, Part IV, Consumer Product Safety Commission, 16 CFR Parts...

  2. The McDonaldization of Religion (Reliģijas makdonaldizācija)

    OpenAIRE

    Siliņš, Toms

    2012-01-01

    The bachelor's thesis examines manifestations of the McDonaldization process in the activities of Latvian religious organizations. The aim of the study is to identify the manifestations of McDonaldization in Latvian religious organizations, and the research questions posed are: what evidence is there of the influence of the McDonaldization process on religious organizations in Latvia, and how should the influence of the McDonaldization process on religious organizations in Latvia be assessed. Within the study, two expert interviews were conducted, the analysis of which...

  3. Alternative Fuels Data Center: Colorado Airport Relies on Natural Gas Fueling Stations

    Science.gov (United States)


  4. Explosive force of primacord grid forms large sheet metal parts

    Science.gov (United States)

    1966-01-01

    Primacord which is woven through fish netting in a grid pattern is used for explosive forming of large sheet metal parts. The explosive force generated by the primacord detonation is uniformly distributed over the entire surface of the sheet metal workpiece.

  5. Alternative Fuels Data Center: St. Louis Airport Relies on Biodiesel and Natural Gas Vehicles

    Science.gov (United States)


  6. Relying on the Information

    Directory of Open Access Journals (Sweden)

    TK YAYIN KURULU

    2013-11-01

    The editorial discusses internet filtering in the light of the optional internet packages for users, namely Family, Standard, Kids and Domestic, to be offered by the Information Technologies Board under the "Secure Internet Law" that took effect on August 22, 2011. Also sharing with the reader its hesitations about the objectivity of the application, the editorial emphasizes an education system focused on raising citizens who are not afraid of relying on information and who can decide for themselves whether information is harmful or not. References are made to the results of similar applications abroad, and the press statement made by Turkish NGOs on the issue is included.

  7. Trust dynamics in a large system implementation

    DEFF Research Database (Denmark)

    Schlichter, Bjarne Rerup; Rose, Jeremy

    2013-01-01

    A large information systems implementation (such as Enterprise Resource Planning systems) relies on the trust of its stakeholders to succeed. Such projects impact diverse groups of stakeholders, each with their legitimate interests and expectations. Levels of stakeholder trust can be expected... outcomes, but largely ignored the dynamics of trust relations. Giddens, as part of his study of modernity, theorises trust dynamics in relation to abstract social systems, though without focusing on information systems. We use Giddens’ concepts to investigate evolving trust relationships in a longitudinal case analysis of a large Integrated Hospital System implementation for the Faroe Islands. Trust relationships suffered a serious breakdown, but the project was able to recover and meet its goals. We develop six theoretical propositions theorising the relationship between trust and project outcomes...

  8. Use of citric acid for large parts decontamination

    International Nuclear Information System (INIS)

    Holland, M.E.

    1979-01-01

    Laboratory and field studies have been performed to identify and evaluate chemical decontamination agents to replace ammonium carbonate, an environmentally unacceptable compound, in the decontamination facility for large process equipment at the Portsmouth Gaseous Diffusion Plant. Preliminary screening of over 40 possible decontamination agents on the basis of efficiency, availability, toxicity, cost, corrosiveness, and practicality indicated sodium carbonate and citric acid to be the most promising. Extensive laboratory studies were performed with these two reagents. Corrosion rates, decontamination factors, uranium recovery efficiencies, technetium (99Tc)/ion exchange removal effects, and possible environmental impacts were determined or investigated. Favorable results were found in all areas. Detailed monitoring and analysis during two-week trial periods in which sodium carbonate and citric acid were used in the large parts decontamination facility resulted in similar evaluation and conclusions. Because it has cleaning properties not possessed by sodium carbonate, and because it eliminated several operational problems by incorporating two acidic decontamination reagents (citric and nitric acids) instead of one basic reagent (sodium or ammonium carbonate) and one acidic reagent (nitric acid), citric acid was selected for one-year field testing. On the basis of its excellent performance in the field tests, citric acid is recommended as a permanent replacement for ammonium carbonate in the decontamination facility for large process equipment

  9. 17 CFR Appendix B to Part 420 - Sample Large Position Report

    Science.gov (United States)

    2010-04-01

    ..., and as collateral for financial derivatives and other securities transactions $ Total Memorandum 1... Appendix B to Part 420 - Sample Large Position Report, Commodity and Securities Exchanges, Department of the Treasury, Regulations Under...

  10. Recreational water illness (Maladies reliées aux loisirs aquatiques)

    Science.gov (United States)

    Sanborn, Margaret; Takaro, Tim

    2013-01-01

    Abstract. Objective: To review the risk factors, management, and prevention of water-related illness in family practice. Sources of information: Original and review articles published between January 1998 and February 2012 were identified using PubMed and the English search terms water-related illness, recreational water illness, and swimmer illness. Main message: There is a 3% to 8% risk of gastrointestinal illness (GI illness) after swimming. Groups at high risk of GI illness are children younger than 5 years, especially if they have not been vaccinated against rotavirus, the elderly, and immunodeficient patients. Children are at greater risk because they swallow more water when swimming, stay in the water longer, and play in shallow water and sand, which are more contaminated. Participants in sports with heavy water contact, such as triathlon and kitesurfing, are also at high risk, and even those engaged in activities with partial water contact, such as recreational boating and fishing, have a 40% to 50% greater risk of GI illness than those who do not take part in water sports. Stool culture is indicated when water-related illness is suspected, and the clinical dehydration scale is useful for assessing treatment needs in affected children. Conclusion: Water-related illness is the leading cause of GI illness during the swimming season. Recognizing that swimming is an important source of illness can help prevent recurrent and secondary cases. Rotavirus vaccination is strongly recommended for children who swim often.

  11. Juxtaposed Color Halftoning Relying on Discrete Lines

    OpenAIRE

    Babaei, Vahid; Hersch, Roger

    2013-01-01

    Most halftoning techniques allow screen dots to overlap. They rely on the assumption that the inks are transparent, i.e. the inks do not scatter a significant portion of the light back to the air. However, many special effect inks such as metallic inks, iridescent inks or pigmented inks are not transparent. In order to create halftone images, halftone dots formed by such inks should be juxtaposed, i.e. printed side by side. We propose an efficient juxtaposed color halftoning technique for pla...

  12. In vitro genotoxicity and cytotoxicity assessment of Rely X luting cement on human lymphocyte cells before and after irradiation

    International Nuclear Information System (INIS)

    Shetty, Shilpa S.; Hegde, Mithra N.; Shabin; Hegde, Nidarsh D.; Suchetha Kumari; Sanjeev, Ganesh

    2013-01-01

    In dentistry, a luting agent is a viscous material placed between tooth structure and a prosthesis that, by polymerization, firmly attaches the prosthesis to the tooth structure. Luting agents contact a large area of dentin when used for crown cementation. There is little information on biocompatibility tests, especially on the effect of electron beam irradiation on the cytotoxicity of luting resin cements. The aim was to determine the in vitro cytotoxicity and genotoxicity of Rely X luting cement on human lymphocyte cells before and after irradiation. Rely X luting cement was obtained commercially. Samples were prepared to the ISO standard size of 25×2×2 mm using a polytetrafluoroethylene (Teflon) mould and divided into two groups: non-irradiated and irradiated. The samples in the irradiated category were exposed to 200 Gy of electron beam irradiation at the Microtron Centre, Mangalore University, Mangalore, India. For hemolysis testing, the samples were immersed in phosphate-buffered saline and incubated at 37°C for 24 hrs, 7 days and 14 days. 200 μl of the 24 hr material extract was mixed with human peripheral blood lymphocytes and tested by the single-cell DNA comet assay. Hemolytic activity of non-irradiated Rely X luting cement after 24 hrs, 7 days and 14 days was 54.78±1.48, 69.91±2.41 and 43.21±0.92 respectively, whereas hemolytic activity of irradiated Rely X luting cement after 24 hrs, 7 days and 14 days was 91.8±8.29, 56.95±19.7 and 41.34±12.30. The irradiation of Rely X luting cement with a 200 Gy dose of electron beam irradiation caused an increase in the frequency of DNA damage when compared to that of the non-irradiated group. Based on the experimental conditions, it is concluded that incomplete polymerization of the dental luting cements resulted in the elution of the resin components, which are responsible for the cytotoxicity and genotoxicity of Rely X luting cement on human lymphocyte cells. (author)

  13. Increasing Uncertainty: The Dangers of Relying on Conventional Forces for Nuclear Deterrence

    Science.gov (United States)

    2016-03-14

    To put... relationships and should serve as the cornerstone of US nuclear deterrence policy. Although Russia and China are not identified as adversaries of... exactly what has happened over the past year. The US decision to meet the needs of deterrence by relying less on nuclear weapons and instead developing... (Jennifer Bradley, Air & Space Power Journal)

  14. 47 CFR 15.717 - TVBDs that rely on spectrum sensing.

    Science.gov (United States)

    2010-10-01

    ... Television Band Devices § 15.717 TVBDs that rely on spectrum sensing. (a) Parties may submit applications for... that are identical in electrical characteristics and antenna systems may be certified under the...

  15. An atom trap relying on optical pumping

    International Nuclear Information System (INIS)

    Bouyer, P.; Lemonde, P.; Ben Dahan, M.; Michaud, A.; Salomon, C.; Dalibard, J.

    1994-01-01

    We have investigated a new radiation pressure trap which relies on optical pumping and does not require any magnetic field. It employs six circularly polarized divergent beams and works on the red side of a Jg → Je = Jg + 1 atomic transition with Jg ≥ 1/2. We have demonstrated this trap with cesium atoms from a vapour cell using the 852 nm Jg = 4 → Je = 5 resonance transition. The trap contained up to 3×10^7 atoms in a cloud with a 1/√e radius of 330 μm. (orig.)

  16. Intuitive Face Judgments Rely on Holistic Eye Movement Pattern.

    Science.gov (United States)

    Mega, Laura F; Volz, Kirsten G

    2017-01-01

    Non-verbal signals such as facial expressions are of paramount importance for social encounters. Their perception predominantly occurs without conscious awareness and is effortlessly integrated into social interactions. In other words, face perception is intuitive. Contrary to classical intuition tasks, this work investigates intuitive processes in the realm of every-day type social judgments. Two differently instructed groups of participants judged the authenticity of emotional facial expressions, while their eye movements were recorded: an 'intuitive group,' instructed to rely on their "gut feeling" for the authenticity judgments, and a 'deliberative group,' instructed to make their judgments after careful analysis of the face. Pixel-wise statistical maps of the resulting eye movements revealed a differential viewing pattern, wherein the intuitive judgments relied on fewer, longer and more centrally located fixations. These markers have been associated with a global/holistic viewing strategy. The holistic pattern of intuitive face judgments is in line with evidence showing that intuition is related to processing the "gestalt" of an object, rather than focusing on details. Our work thereby provides further evidence that intuitive processes are characterized by holistic perception, in an understudied and real world domain of intuition research.

  17. Concepts and Plans towards fast large scale Monte Carlo production for the ATLAS Experiment

    Science.gov (United States)

    Ritsch, E.; Atlas Collaboration

    2014-06-01

    The huge success of the physics program of the ATLAS experiment at the Large Hadron Collider (LHC) during Run 1 relies upon a great number of simulated Monte Carlo events. This Monte Carlo production takes the biggest part of the computing resources being in use by ATLAS as of now. In this document we describe the plans to overcome the computing resource limitations for large scale Monte Carlo production in the ATLAS Experiment for Run 2, and beyond. A number of fast detector simulation, digitization and reconstruction techniques are being discussed, based upon a new flexible detector simulation framework. To optimally benefit from these developments, a redesigned ATLAS MC production chain is presented at the end of this document.

  18. Juxtaposed color halftoning relying on discrete lines.

    Science.gov (United States)

    Babaei, Vahid; Hersch, Roger D

    2013-02-01

    Most halftoning techniques allow screen dots to overlap. They rely on the assumption that the inks are transparent, i.e., the inks do not scatter a significant portion of the light back to the air. However, many special effect inks, such as metallic inks, iridescent inks, or pigmented inks, are not transparent. In order to create halftone images, halftone dots formed by such inks should be juxtaposed, i.e., printed side by side. We propose an efficient juxtaposed color halftoning technique for placing any desired number of colorant layers side by side without overlapping. The method uses a monochrome library of screen elements made of discrete lines with rational thicknesses. Discrete line juxtaposed color halftoning is performed efficiently by multiple accesses to the screen element library.
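
    The discrete-line idea above is algorithmic enough to sketch. The toy Python below is not the authors' algorithm (their method draws screen elements from a precomputed library of discrete lines with rational thicknesses); it only illustrates the core constraint that colorant layers are juxtaposed, i.e. partition a screen tile side by side with no overlap. The diagonal band coordinate and tile size are illustrative assumptions.

```python
import numpy as np

def juxtaposed_tile(coverages, size=8):
    """Fill a size-by-size screen tile with non-overlapping colorant runs.

    coverages: area fraction requested for each colorant (sum <= 1).
    Returns an integer tile: 0 = bare substrate, k = colorant k (1-based).
    Toy stand-in for discrete-line screen elements: each colorant occupies
    its own diagonal band, printed side by side, never overlapping.
    """
    tile = np.zeros((size, size), dtype=int)
    bounds = np.cumsum(coverages)  # partition [0, 1) into one slot per ink
    for y in range(size):
        for x in range(size):
            t = ((x + 2 * y) % size) / size  # crude discrete-line rank
            k = int(np.searchsorted(bounds, t, side="right"))
            if k < len(coverages):
                tile[y, x] = k + 1  # t fell inside colorant k's interval
    return tile

# Three inks at 25% coverage each; the remaining 25% stays unprinted.
print(juxtaposed_tile([0.25, 0.25, 0.25]))
```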

  19. Unusually large tsunamis frequent a currently creeping part of the Aleutian megathrust

    Science.gov (United States)

    Witter, Robert C.; Carver, G.A.; Briggs, Richard; Gelfenbaum, Guy R.; Koehler, R.D.; La Selle, SeanPaul M.; Bender, Adrian M.; Engelhart, S.E.; Hemphill-Haley, E.; Hill, Troy D.

    2016-01-01

    Current models used to assess earthquake and tsunami hazards are inadequate where creep dominates a subduction megathrust. Here we report geological evidence for large tsunamis, occurring on average every 300–340 years, near the source areas of the 1946 and 1957 Aleutian tsunamis. These areas bookend a postulated seismic gap over 200 km long where modern geodetic measurements indicate that the megathrust is currently creeping. At Sedanka Island, evidence for large tsunamis includes six sand sheets that blanket a lowland facing the Pacific Ocean, rise to 15 m above mean sea level, contain marine diatoms, cap terraces, adjoin evidence for scour, and date from the past 1700 years. The youngest sheet, and modern drift logs found as far as 800 m inland and >18 m elevation, likely record the 1957 tsunami. Modern creep on the megathrust coexists with previously unrecognized tsunami sources along this part of the Aleutian Subduction Zone.

  20. Steam turbines of large output especially for nuclear power stations. Part 1

    International Nuclear Information System (INIS)

    Drahny, J.; Stasny, M.

    1986-01-01

    At the international conference, 53 papers were presented in 3 sessions dealing with the design of large-output steam turbines, with problems of flow in steam turbines, and with the reliability and service life of steam turbines. Part 1 of the conference proceedings contains two introductory papers, one reviewing the 100-year history of steam turbines (not included in INIS), the other giving an overview of the development of steam turbines in the eighties, and the 13 papers heard in the session on steam turbine design, all of which were entered into INIS. (A.K.)

  1. Intuitive Face Judgments Rely on Holistic Eye Movement Pattern

    Directory of Open Access Journals (Sweden)

    Laura F. Mega

    2017-06-01

    Non-verbal signals such as facial expressions are of paramount importance for social encounters. Their perception predominantly occurs without conscious awareness and is effortlessly integrated into social interactions. In other words, face perception is intuitive. Contrary to classical intuition tasks, this work investigates intuitive processes in the realm of every-day type social judgments. Two differently instructed groups of participants judged the authenticity of emotional facial expressions, while their eye movements were recorded: an ‘intuitive group,’ instructed to rely on their “gut feeling” for the authenticity judgments, and a ‘deliberative group,’ instructed to make their judgments after careful analysis of the face. Pixel-wise statistical maps of the resulting eye movements revealed a differential viewing pattern, wherein the intuitive judgments relied on fewer, longer and more centrally located fixations. These markers have been associated with a global/holistic viewing strategy. The holistic pattern of intuitive face judgments is in line with evidence showing that intuition is related to processing the “gestalt” of an object, rather than focusing on details. Our work thereby provides further evidence that intuitive processes are characterized by holistic perception, in an understudied and real world domain of intuition research.

  2. Concepts and Plans towards fast large scale Monte Carlo production for the ATLAS Experiment

    CERN Document Server

    Chapman, J; Duehrssen, M; Elsing, M; Froidevaux, D; Harrington, R; Jansky, R; Langenberg, R; Mandrysch, R; Marshall, Z; Ritsch, E; Salzburger, A

    2014-01-01

    The huge success of the physics program of the ATLAS experiment at the Large Hadron Collider (LHC) during Run I relies upon a great number of simulated Monte Carlo events. This Monte Carlo production takes the biggest part of the computing resources being in use by ATLAS as of now. In this document we describe the plans to overcome the computing resource limitations for large scale Monte Carlo production in the ATLAS Experiment for Run II, and beyond. A number of fast detector simulation, digitization and reconstruction techniques are being discussed, based upon a new flexible detector simulation framework. To optimally benefit from these developments, a redesigned ATLAS MC production chain is presented at the end of this document.

  3. Transport Coefficients from Large Deviation Functions

    OpenAIRE

    Gao, Chloe Ya; Limmer, David T.

    2017-01-01

    We describe a method for computing transport coefficients from the direct evaluation of large deviation functions. This method is general, relying on only equilibrium fluctuations, and is statistically efficient, employing trajectory based importance sampling. Equilibrium fluctuations of molecular currents are characterized by their large deviation functions, which are scaled cumulant generating functions analogous to the free energies. A diffusion Monte Carlo algorithm is used to evaluate th...

  4. The Heritage of Confucianism, Daoism and Buddhism in Chinese Folk Religion (Konfuciānisma, daoisma un budisma mantojums Ķīnas tautas reliģijā)

    OpenAIRE

    Dudaļeva, Jūlija

    2014-01-01

    The title of this bachelor's thesis is "The Heritage of Confucianism, Daoism and Buddhism in Chinese Folk Religion". The name "Chinese folk religion" was coined by researchers and does not correspond to any traditional Chinese term. Folk religion is a loosely structured combination of beliefs, practices, deities, myths and values, incorporating elements of ancestor worship, the cult of the dead, nature worship, animism, local traditions, Daoism, Confucianism and Buddhism. The aim of the thesis is to study the Confucian...

  5. Simulation of particle filtration processes in deformable media. Part 2: Large particle modelling

    Directory of Open Access Journals (Sweden)

    Gernot Boiger

    2008-06-01

    In filtration processes it is necessary to consider both the interaction of the fluid with the solid parts and the effect of particles carried in the fluid and accumulated on the solid. While Part 1 of this paper deals with the modelling of fluid-structure interaction effects, the accumulation of dirt particles is addressed in this paper. A closer look is taken at the implementation of a spherical, Lagrangian particle model suitable for small and large particles. As dirt accumulates in the fluid stream, it interacts with the surrounding filter fibre structure and over time causes modifications of the filter characteristics. The calculation of particle force interaction effects is necessary for an adequate simulation of this situation. A detailed discrete-phase Lagrange model was developed to take into account the two-way coupling of the fluid and accumulated particles. The simulation of large particles and the fluid-structure interaction is realised in a single finite-volume flow solver on the basis of the open-source software OpenFOAM.

  6. The interannual precipitation variability in the southern part of Iran as linked to large-scale climate modes

    Energy Technology Data Exchange (ETDEWEB)

    Pourasghar, Farnaz; Jahanbakhsh, Saeed; Sari Sarraf, Behrooz [The University of Tabriz, Department of Physical Geography, Faculty of Humanities and Social Science, Tabriz (Iran, Islamic Republic of); Tozuka, Tomoki [The University of Tokyo, Department of Earth and Planetary Science, Graduate School of Science, Tokyo (Japan); Ghaemi, Hooshang [Iran Meteorological Organization, Tehran (Iran, Islamic Republic of); Yamagata, Toshio [The University of Tokyo, Department of Earth and Planetary Science, Graduate School of Science, Tokyo (Japan); Application Laboratory/JAMSTEC, Yokohama, Kanagawa (Japan)

    2012-11-15

    The interannual variation of precipitation in the southern part of Iran and its link with large-scale climate modes are examined using monthly data from 183 meteorological stations during 1974-2005. The majority of precipitation occurs during the rainy season from October to May. The interannual variation in fall and early winter, during the first part of the rainy season, apparently shows a significant positive correlation with both the Indian Ocean Dipole (IOD) and the El Niño-Southern Oscillation (ENSO). However, a partial correlation analysis, used to extract the respective influences of the IOD and ENSO, shows a significant positive correlation only with the IOD and not with ENSO. The southeasterly moisture flux anomaly over the Arabian Sea turns anti-cyclonically and transports more moisture to the southern part of Iran from the Arabian Sea, the Red Sea, and the Persian Gulf during the positive IOD. On the other hand, the moisture flux has a northerly anomaly over Iran during the negative IOD, which results in reduced moisture supply from the south. During the latter part of the rainy season, in late winter and spring, the interannual variation of precipitation is more strongly influenced by modes of variability over the Mediterranean Sea. The induced large-scale atmospheric circulation anomaly controls moisture supply from the Red Sea and the Persian Gulf. (orig.)
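
    The pivotal step in the analysis above is the partial correlation that separates the IOD influence from ENSO. A minimal sketch of that computation, on synthetic data rather than the study's station records:

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation of x and y after linearly removing the control series z.

    Standard residual-based partial correlation, shown only to illustrate
    the IOD-vs-ENSO separation; all data below are synthetic.
    """
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
iod = rng.standard_normal(32)               # 32 hypothetical rainy seasons
enso = 0.6 * iod + rng.standard_normal(32)  # ENSO co-varies with the IOD
rain = 0.8 * iod + rng.standard_normal(32)  # rainfall driven by IOD only
print(np.corrcoef(rain, enso)[0, 1])        # simple correlation: positive
print(partial_corr(rain, enso, iod))        # near zero once IOD is removed
```

    The same pattern appears in the paper: a simple correlation with both indices, but a partial correlation only with the IOD.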

  7. 12 CFR 221.117 - When bank in “good faith” has not relied on stock as collateral.

    Science.gov (United States)

    2010-01-01

    ... bank in “good faith” has not relied on stock as collateral. (a) The Board has received questions... “indirectly secured” by stock as indicated by the phrase, “if the lender, in good faith, has not relied upon...

  8. Development of Spray on Bag for manufacturing of large composites parts: Diffusivity analysis

    Science.gov (United States)

    Dempah, Maxime Joseph

    Bagging materials are utilized in many composites manufacturing processes. The selection is mainly driven by cost, temperature requirements, chemical compatibility and tear properties of the bag. The air barrier properties of the bag are assumed to be adequate or in many cases are not considered at all. However, the gas barrier property of a bag is the most critical parameter, as it can negatively affect the quality of the final laminate. The barrier property is a function of the bag material, uniformity, thickness and temperature. Improved barrier properties are needed for large parts, high pressure consolidated components and structures where air stays entrapped on the part surface. The air resistance property of the film is defined as permeability and is investigated in this thesis. A model was developed to evaluate the gas transport through the film and an experimental cell was implemented to characterize various commercial films. Understanding and characterizing the transport phenomena through the film allows optimization of the bagging material for various manufacturing processes. Spray-on-Bag is a scalable alternative bagging method compared to standard films. The approach allows in-situ fabrication of the bag on large and complex geometry structures where optimization of the bag properties can be varied on a local level. An experimental setup was developed and implemented using a six axis robot and an automated spraying system. Experiments were performed on a flat surface and specimens were characterized and compared to conventional films. Air barrier properties were within range of standard film approaches showing the potential to fabricate net shape bagging structures in an automated process.
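
    Since the analysis hinges on film permeability, a worked example of the steady-state permeation relation Q = P·A·Δp/L may be useful. The permeability value, bag area, and film thickness below are hypothetical, not measurements from the thesis:

```python
def permeation_rate(permeability, area_m2, dp_pa, thickness_m):
    """Steady-state gas flow through a film: Q = P * A * dp / L.

    permeability: mol/(m*s*Pa), a property of the bag material.
    All numbers in the demo below are assumed, for illustration only.
    """
    return permeability * area_m2 * dp_pa / thickness_m

# Air crossing a 50-micron film over 2 m^2 of bag at ~1 atm differential.
q = permeation_rate(1e-15, 2.0, 101325.0, 50e-6)
print(f"{q:.2e} mol/s")  # lower is better: less air reaching the laminate
```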

  9. Anemonefishes rely on visual and chemical cues to correctly identify conspecifics

    Science.gov (United States)

    Johnston, Nicole K.; Dixson, Danielle L.

    2017-09-01

    Organisms rely on sensory cues to interpret their environment and make important life-history decisions. Accurate recognition is of particular importance in diverse reef environments. Most evidence on the use of sensory cues focuses on those used in predator avoidance or habitat recognition, with little information on their role in conspecific recognition. Yet conspecific recognition is essential for life-history decisions including settlement, mate choice, and dominance interactions. Using a sensory manipulated tank and a two-chamber choice flume, anemonefish conspecific response was measured in the presence and absence of chemical and/or visual cues. Experiments were then repeated in the presence or absence of two heterospecific species to evaluate whether a heterospecific fish altered the conspecific response. Anemonefishes responded to both the visual and chemical cues of conspecifics, but relied on the combination of the two cues to recognize conspecifics inside the sensory manipulated tank. These results contrast previous studies focusing on predator detection where anemonefishes were found to compensate for the loss of one sensory cue (chemical) by utilizing a second cue (visual). This lack of sensory compensation may impact the ability of anemonefishes to acclimate to changing reef environments in the future.

  10. The Long-Term Multicenter Observational Study of Dabigatran Treatment in Patients With Atrial Fibrillation (RELY-ABLE) Study

    DEFF Research Database (Denmark)

    Connolly, S. J.; Wallentin, L.; Ezekowitz, M. D.

    2013-01-01

    ... There is a need for longer-term follow-up of patients on dabigatran and for further data comparing the 2 dabigatran doses. Methods and Results: Patients randomly assigned to dabigatran in RE-LY were eligible for the Long-term Multicenter Extension of Dabigatran Treatment in Patients with Atrial Fibrillation (RELY-ABLE)...

  11. Effects of Part-Time Faculty Employment on Community College Graduation Rates

    Science.gov (United States)

    Jacoby, Daniel

    2006-01-01

    Regression analysis indicates that graduation rates for public community colleges in the United States are adversely affected when institutions rely heavily upon part-time faculty instruction. Negative effects may be partially offset if the use of part-time faculty increases the net faculty resource available per student. However, the evidence…
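
    As a sketch of the kind of specification described, with invented numbers chosen only to mirror the reported signs (a negative part-time share effect, partly offset by faculty resource per student):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
pt_share = rng.uniform(0.2, 0.8, n)   # hypothetical part-time faculty share
resource = rng.uniform(0.5, 1.5, n)   # hypothetical faculty FTE per student
grad = 40 - 15 * pt_share + 8 * resource + rng.normal(0, 3, n)

# Ordinary least squares: graduation rate on part-time share and resource.
X = np.column_stack([np.ones(n), pt_share, resource])
beta, *_ = np.linalg.lstsq(X, grad, rcond=None)
print(beta)  # recovers a negative pt_share and a positive resource effect
```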

  12. A first mobile service in Egypt linking smallholder farmers to buyers (Un premier service mobile en Égypte qui relie les petits exploitants aux acheteurs)

    International Development Research Centre (IDRC) Digital Library (Canada)

    Smallholder farmers dominate Egyptian agriculture, but their lack of marketing knowledge and technical skills, together with poor...

  13. Immobilization of α-Amylase from Anoxybacillus sp. SK3-4 on ReliZyme and Immobead Supports

    Directory of Open Access Journals (Sweden)

    Ummirul Mukminin Kahar

    2016-09-01

    α-Amylase from Anoxybacillus sp. SK3-4 (ASKA) is a thermostable enzyme that produces a high level of maltose from starches. A truncated ASKA (TASKA) variant with improved expression and purification efficiency was characterized in an earlier study. In this work, TASKA was purified and immobilized through covalent attachment on three epoxide-activated supports (ReliZyme EP403/M, Immobead IB-150P, and Immobead IB-150A) and an amino-epoxide-activated support (ReliZyme HFA403/M). Several parameters affecting immobilization were analyzed, including the pH, temperature, and quantity (mg) of enzyme added per gram of support. The influence of the carrier surface properties, pore sizes, and lengths of spacer arms (functional groups) on biocatalyst performance was studied. Free and immobilized TASKAs were stable at pH 6.0–9.0 and active at pH 8.0. The enzyme showed optimal activity and considerable stability at 60 °C. Immobilized TASKA retained 50% of its initial activity after 5–12 cycles of reuse. Upon degradation of starches and amylose, only TASKA immobilized on ReliZyme HFA403/M has hydrolytic ability comparable with the free enzyme. To the best of our knowledge, this is the first report of an immobilization study of an α-amylase from Anoxybacillus spp. and the first report of α-amylase immobilization using ReliZyme and Immobeads as supports.

  14. Defining the buffering process by a triprotic acid without relying on Stewart-electroneutrality considerations.

    Science.gov (United States)

    Nguyen, Minhtri K; Kao, Liyo; Kurtz, Ira

    2011-08-17

    Upon the addition of protons to an aqueous solution, a component of the H+ load will be bound i.e. buffered. In an aqueous solution containing a triprotic acid, H+ can be bound to three different states of the acid as well as to OH- ions that are derived from the auto-ionization of H2O. In quantifying the buffering process of a triprotic acid, one must define the partitioning of H+ among the three states of the acid and also the OH- ions in solution in order to predict the equilibrium pH value. However, previous quantitative approaches that model triprotic acid titration behaviour and used to predict the equilibrium pH rely on the mathematical convenience of electroneutrality/charge balance considerations. This fact has caused confusion in the literature, and has led to the assumption that charge balance/electroneutrality is a causal factor in modulating proton buffering (Stewart formulation). However, as we have previously shown, although charge balance can be used mathematically as a convenient tool in deriving various formulae, electroneutrality per se is not a fundamental physicochemical parameter that is mechanistically involved in the underlying buffering and proton transfer reactions. The lack of distinction between a mathematical tool, and a fundamental physicochemical parameter is in part a reason for the current debate regarding the Stewart formulation of acid-base analysis. We therefore posed the following question: Is it possible to generate an equation that defines and predicts the buffering of a triprotic acid that is based only on H+ partitioning without incorporating electroneutrality in the derivation? Towards this goal, we derived our new equation utilizing: 1) partitioning of H+ buffering; 2) conservation of mass; and 3) acid-base equilibria. In validating this model, we compared the predicted equilibrium pH with the measured pH of an aqueous solution consisting of Na2HPO4 to which HCl was added. The measured pH values were in excellent agreement
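
    The derivation can be reproduced numerically. The sketch below uses assumed phosphoric-acid constants and, in the spirit of the paper, balances the added H+ load against the three acid states and OH- (mass conservation plus acid-base equilibria only, with no explicit electroneutrality equation), then solves for pH in the paper's Na2HPO4-plus-HCl validation case:

```python
from math import log10
from scipy.optimize import brentq

# Stepwise dissociation constants of phosphoric acid and the water ion
# product (approximate 25 degC values; assumed for illustration).
K1, K2, K3, KW = 7.1e-3, 6.3e-8, 4.5e-13, 1.0e-14

def equilibrium_ph(c_p, c_hcl):
    """pH of a Na2HPO4 solution (c_p mol/L) titrated with HCl (c_hcl mol/L).

    Proton balance with HPO4(2-) and H2O as reference species: the H+ load
    added by HCl must equal the protons bound by the acid states plus free
    H+ minus OH-. Only mass conservation and equilibria appear here.
    """
    def proton_excess(h):
        d = h**3 + K1*h**2 + K1*K2*h + K1*K2*K3   # speciation denominator
        h3po4 = c_p * h**3 / d
        h2po4 = c_p * K1 * h**2 / d
        po4 = c_p * K1 * K2 * K3 / d
        return 2*h3po4 + h2po4 - po4 + h - KW/h - c_hcl

    return -log10(brentq(proton_excess, 1e-14, 1.0))

print(equilibrium_ph(0.05, 0.00))  # ~9.7 for plain 50 mM Na2HPO4
print(equilibrium_ph(0.05, 0.02))  # ~7.4: buffered near pK2 after HCl
```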

  15. Defining the buffering process by a triprotic acid without relying on stewart-electroneutrality considerations

    Directory of Open Access Journals (Sweden)

    Kao Liyo

    2011-08-01

    Upon the addition of protons to an aqueous solution, a component of the H+ load will be bound i.e. buffered. In an aqueous solution containing a triprotic acid, H+ can be bound to three different states of the acid as well as to OH- ions that are derived from the auto-ionization of H2O. In quantifying the buffering process of a triprotic acid, one must define the partitioning of H+ among the three states of the acid and also the OH- ions in solution in order to predict the equilibrium pH value. However, previous quantitative approaches that model triprotic acid titration behaviour and used to predict the equilibrium pH rely on the mathematical convenience of electroneutrality/charge balance considerations. This fact has caused confusion in the literature, and has led to the assumption that charge balance/electroneutrality is a causal factor in modulating proton buffering (Stewart formulation). However, as we have previously shown, although charge balance can be used mathematically as a convenient tool in deriving various formulae, electroneutrality per se is not a fundamental physicochemical parameter that is mechanistically involved in the underlying buffering and proton transfer reactions. The lack of distinction between a mathematical tool, and a fundamental physicochemical parameter is in part a reason for the current debate regarding the Stewart formulation of acid-base analysis. We therefore posed the following question: Is it possible to generate an equation that defines and predicts the buffering of a triprotic acid that is based only on H+ partitioning without incorporating electroneutrality in the derivation? Towards this goal, we derived our new equation utilizing: 1) partitioning of H+ buffering; 2) conservation of mass; and 3) acid-base equilibria. In validating this model, we compared the predicted equilibrium pH with the measured pH of an aqueous solution consisting of Na2HPO4 to which HCl was added. The measured pH values...

  16. Defining the buffering process by a triprotic acid without relying on stewart-electroneutrality considerations

    Science.gov (United States)

    2011-01-01

    Upon the addition of protons to an aqueous solution, a component of the H+ load will be bound i.e. buffered. In an aqueous solution containing a triprotic acid, H+ can be bound to three different states of the acid as well as to OH- ions that are derived from the auto-ionization of H2O. In quantifying the buffering process of a triprotic acid, one must define the partitioning of H+ among the three states of the acid and also the OH- ions in solution in order to predict the equilibrium pH value. However, previous quantitative approaches that model triprotic acid titration behaviour and used to predict the equilibrium pH rely on the mathematical convenience of electroneutrality/charge balance considerations. This fact has caused confusion in the literature, and has led to the assumption that charge balance/electroneutrality is a causal factor in modulating proton buffering (Stewart formulation). However, as we have previously shown, although charge balance can be used mathematically as a convenient tool in deriving various formulae, electroneutrality per se is not a fundamental physicochemical parameter that is mechanistically involved in the underlying buffering and proton transfer reactions. The lack of distinction between a mathematical tool, and a fundamental physicochemical parameter is in part a reason for the current debate regarding the Stewart formulation of acid-base analysis. We therefore posed the following question: Is it possible to generate an equation that defines and predicts the buffering of a triprotic acid that is based only on H+ partitioning without incorporating electroneutrality in the derivation? Towards this goal, we derived our new equation utilizing: 1) partitioning of H+ buffering; 2) conservation of mass; and 3) acid-base equilibria. In validating this model, we compared the predicted equilibrium pH with the measured pH of an aqueous solution consisting of Na2HPO4 to which HCl was added. The measured pH values were in excellent agreement

  17. 19 CFR Appendix C to Part 113 - Bond for Deferral of Duty on Large Yachts Imported for Sale at United States Boat Shows

    Science.gov (United States)

    2010-04-01

    ... Appendix C to Part 113—Bond for Deferral of Duty on Large Yachts Imported for Sale at United States Boat Shows. ____, as...

  18. Transport Coefficients from Large Deviation Functions

    Directory of Open Access Journals (Sweden)

    Chloe Ya Gao

    2017-10-01

    We describe a method for computing transport coefficients from the direct evaluation of large deviation functions. This method is general, relying on only equilibrium fluctuations, and is statistically efficient, employing trajectory based importance sampling. Equilibrium fluctuations of molecular currents are characterized by their large deviation functions, which are scaled cumulant generating functions analogous to the free energies. A diffusion Monte Carlo algorithm is used to evaluate the large deviation functions, from which arbitrary transport coefficients are derivable. We find significant statistical improvement over traditional Green–Kubo based calculations. The systematic and statistical errors of this method are analyzed in the context of specific transport coefficient calculations, including the shear viscosity, interfacial friction coefficient, and thermal conductivity.
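
    For contrast with the large-deviation estimator, here is the traditional Green–Kubo route the authors benchmark against, on a synthetic Langevin velocity whose diffusion coefficient is known exactly. The time step, friction, and trajectory length are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Green-Kubo: a transport coefficient equals the time integral of an
# equilibrium current autocorrelation. Demo: self-diffusion
# D = integral of <v(0)v(t)> dt for an Ornstein-Uhlenbeck velocity,
# where the exact answer is D = kT/(m*gamma).
dt, steps, gamma, kT_over_m = 1e-3, 100_000, 5.0, 1.0
v = np.zeros(steps)
for i in range(1, steps):  # Euler-Maruyama Langevin update
    v[i] = (v[i-1] - gamma * v[i-1] * dt
            + np.sqrt(2.0 * gamma * kT_over_m * dt) * rng.standard_normal())

lags = 2000  # ~10 correlation times, since 1/gamma = 0.2 time units
acf = np.array([np.mean(v[: steps - k] * v[k:]) for k in range(lags)])
print("Green-Kubo D:", np.trapz(acf, dx=dt), "| analytic:", kT_over_m / gamma)
```

    The noisy, slowly converging tail of this autocorrelation integral is precisely the statistical inefficiency the large-deviation approach is designed to avoid.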

  19. Transport Coefficients from Large Deviation Functions

    Science.gov (United States)

    Gao, Chloe; Limmer, David

    2017-10-01

    We describe a method for computing transport coefficients from the direct evaluation of large deviation functions. This method is general, relying on only equilibrium fluctuations, and is statistically efficient, employing trajectory based importance sampling. Equilibrium fluctuations of molecular currents are characterized by their large deviation functions, which are scaled cumulant generating functions analogous to the free energies. A diffusion Monte Carlo algorithm is used to evaluate the large deviation functions, from which arbitrary transport coefficients are derivable. We find significant statistical improvement over traditional Green-Kubo based calculations. The systematic and statistical errors of this method are analyzed in the context of specific transport coefficient calculations, including the shear viscosity, interfacial friction coefficient, and thermal conductivity.

  20. The Very Large Array Data Processing Pipeline

    Science.gov (United States)

    Kent, Brian R.; Masters, Joseph S.; Chandler, Claire J.; Davis, Lindsey E.; Kern, Jeffrey S.; Ott, Juergen; Schinzel, Frank K.; Medlin, Drew; Muders, Dirk; Williams, Stewart; Geers, Vincent C.; Momjian, Emmanuel; Butler, Bryan J.; Nakazato, Takeshi; Sugimoto, Kanako

    2018-01-01

    We present the VLA Pipeline, software that is part of the larger pipeline processing framework used for the Karl G. Jansky Very Large Array (VLA), and Atacama Large Millimeter/sub-millimeter Array (ALMA) for both interferometric and single dish observations. Through a collection of base code jointly used by the VLA and ALMA, the pipeline builds a hierarchy of classes to execute individual atomic pipeline tasks within the Common Astronomy Software Applications (CASA) package. Each pipeline task contains heuristics designed by the team to actively decide the best processing path and execution parameters for calibration and imaging. The pipeline code is developed and written in Python and uses a "context" structure for tracking the heuristic decisions and processing results. The pipeline "weblog" acts as the user interface in verifying the quality assurance of each calibration and imaging stage. The majority of VLA scheduling blocks above 1 GHz are now processed with the standard continuum recipe of the pipeline and offer a calibrated measurement set as a basic data product to observatory users. In addition, the pipeline is used for processing data from the VLA Sky Survey (VLASS), a seven year community-driven endeavor started in September 2017 to survey the entire sky down to a declination of -40 degrees at S-band (2-4 GHz). This 5500 hour next-generation large radio survey will explore the time and spectral domains, relying on pipeline processing to generate calibrated measurement sets, polarimetry, and imaging data products that are available to the astronomical community with no proprietary period. Here we present an overview of the pipeline design philosophy, heuristics, and calibration and imaging results produced by the pipeline. Future development will include the testing of spectral line recipes, low signal-to-noise heuristics, and serving as a testing platform for science ready data products. The pipeline is developed as part of the CASA software package by an...

  1. Instantons and Large N

    Science.gov (United States)

    Mariño, Marcos

    2015-09-01

    Preface; Part I. Instantons: 1. Instantons in quantum mechanics; 2. Unstable vacua in quantum field theory; 3. Large order behavior and Borel summability; 4. Non-perturbative aspects of Yang-Mills theories; 5. Instantons and fermions; Part II. Large N: 6. Sigma models at large N; 7. The 1/N expansion in QCD; 8. Matrix models and matrix quantum mechanics at large N; 9. Large N QCD in two dimensions; 10. Instantons at large N; Appendix A. Harmonic analysis on S3; Appendix B. Heat kernel and zeta functions; Appendix C. Effective action for large N sigma models; References; Author index; Subject index.

  2. Robust Sensing of Approaching Vehicles Relying on Acoustic Cues

    Directory of Open Access Journals (Sweden)

    Mitsunori Mizumachi

    2014-05-01

    The latest developments in automobile design have allowed vehicles to be equipped with various sensing devices. Multiple sensors such as cameras and radar systems can be used simultaneously in active safety systems in order to overcome the blind spots of individual sensors. This paper proposes a novel sensing technique for catching and tracking an approaching vehicle relying on an acoustic cue. First, it is necessary to extract a robust spatial feature from noisy acoustical observations. In this paper, the spatio-temporal gradient method is employed for the feature extraction. Then, the spatial feature is filtered through sequential state estimation. A particle filter is employed to cope with a highly non-linear problem. The feasibility of the proposed method has been confirmed with real acoustical observations, which were obtained by microphones mounted outside a cruising vehicle.
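
    The sequential state estimation step is a bootstrap particle filter; a minimal sketch follows. The bearing-only state, motion model, and noise levels are invented stand-ins for the paper's spatio-temporal gradient feature:

```python
import numpy as np

rng = np.random.default_rng(1)

# Track the bearing (deg) of an approaching vehicle from a noisy acoustic
# feature. All models and numbers here are illustrative assumptions.
T, N = 50, 500
true_bearing = np.linspace(-60.0, 40.0, T)    # vehicle sweeping past
obs = true_bearing + rng.normal(0.0, 8.0, T)  # noisy spatial feature

particles = rng.uniform(-90.0, 90.0, N)       # flat prior over bearings
estimates = []
for z in obs:
    particles += rng.normal(2.0, 1.5, N)      # assumed drift + process noise
    w = np.exp(-0.5 * ((z - particles) / 8.0) ** 2)  # Gaussian likelihood
    w /= w.sum()
    particles = particles[rng.choice(N, size=N, p=w)]  # multinomial resample
    estimates.append(particles.mean())

err = np.abs(np.array(estimates) - true_bearing).mean()
print(f"mean tracking error: {err:.1f} deg")
```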

  3. The Welfare Effects of Involuntary Part-time Work

    DEFF Research Database (Denmark)

    Borowczyk-Martins, Daniel; Lalé, Etienne

    2018-01-01

    Employed individuals in the USA are increasingly more likely to move to involuntary part-time work than to unemployment. Spells of involuntary part-time work are different from unemployment spells: a full-time worker who takes on a part-time job suffers an earnings loss while remaining employed, and is unlikely to receive income compensation from publicly provided insurance programmes. We analyse these differences through the lens of an incomplete-market, job-search model featuring unemployment risk alongside an additional risk of involuntary part-time employment. A calibration of the model consistent with US institutions and labour market dynamics shows that involuntary part-time work generates lower welfare losses relative to unemployment. This finding relies critically on the much higher probability of returning to full-time employment from part-time work. We interpret it as a premium in access to full-time work.

  4. Sustainability of small reservoirs and large scale water availability under current conditions and climate change

    NARCIS (Netherlands)

    Krol, Martinus S.; de Vries, Marjella J.; van Oel, Pieter R.; Carlos de Araújo, José

    2011-01-01

    Semi-arid river basins often rely on reservoirs for water supply. Small reservoirs may affect large-scale water availability both by enhancing availability in a distributed sense and by subtracting water from large downstream user communities, e.g. those served by large reservoirs. Both of these impacts...

  5. Evidence for large-scale uniformity of physical laws

    International Nuclear Information System (INIS)

    Tubbs, A.D.; Wolfe, A.M.

    1980-01-01

    The coincidence of redshifts deduced from 21 cm and resonance transitions in absorbing gas detected in front of four quasi-stellar objects results in stringent limits on the variation of the product of three physical constants, both in space and in time. We find that α^2 g_p (m/M) is spatially uniform, to a few parts in 10^4, throughout the observable universe. This uniformity holds subsequent to an epoch corresponding to less than 5% of the current age of the universe t_0. Moreover, time variations in α^2 g_p (m/M) are excluded to the same accuracy subsequent to an epoch corresponding to ≳ 0.20 t_0. These limits are largely model independent, relying only upon the cosmological interpretation of redshifts and the isotropy of the 3 K background radiation. That a quantity as complex as g_p, which depends on all the details of strong interaction physics, is uniform throughout most of spacetime, even in causally disjoint regions, suggests that all physical laws are globally invariant.
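
    The arithmetic behind such a limit is compact enough to show. The 21 cm to optical frequency ratio scales with α^2 g_p (m/M), so a fractional offset between the two redshifts of the same absorbing cloud bounds the fractional variation of that product. The redshift values below are hypothetical placeholders, not the quasar data:

```python
def fractional_shift(z_optical, z_21cm):
    """Bound on Delta(a^2 * gp * m/M) / (a^2 * gp * m/M) from matched redshifts.

    Follows from nu_21cm / nu_optical being proportional to the product:
    a redshift mismatch dz maps to a fractional shift dz / (1 + z).
    """
    return (z_21cm - z_optical) / (1.0 + z_optical)

# A hypothetical agreement at the 1e-4 level in redshift:
print(f"{fractional_shift(0.5240, 0.5241):.1e}")  # ~7e-5, i.e. parts in 1e4
```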

  6. LOW PRESSURE CARBURIZING IN A LARGE-CHAMBER DEVICE FOR HIGH-PERFORMANCE AND PRECISION THERMAL TREATMENT OF PARTS OF MECHANICAL GEAR

    Directory of Open Access Journals (Sweden)

    Emilia Wołowiec-Korecka

    2017-03-01

    This paper presents the findings of research on a short-pulse low-pressure carburizing technology developed for a new large-chamber furnace for high-performance and precision thermal treatment of parts of mechanical gear. Sections of the article discuss the novel construction of the device, in which parts being carburized flow in a stream, as well as the low-pressure carburizing experiment. The method has been found to yield uniform, even and repeatable carburized layers on typical gears used in the automotive industry.

  7. Library Outreach to Part-Time and Distance Education Instructors

    Science.gov (United States)

    Shelton, Kay

    2009-01-01

    As community colleges rely on part-time faculty and offer more online courses, faculty teaching in those capacities may not be as connected to the college as their full-time, on-campus counterparts. They may know very little about the library; in turn their students may not learn what the library has to offer. This article provides suggestions for…

  8. Cortical control of object-specific grasp relies on adjustments of both activity and effective connectivity

    DEFF Research Database (Denmark)

    Tia, Banty; Takemi, Mitsuaki; Kosugi, Akito

    2017-01-01

    The cortical mechanisms of grasping have been extensively studied in macaques and humans. Here, we investigated whether common marmosets could rely on similar mechanisms despite striking differences in manual dexterity. Two common marmosets were trained to grasp-and-pull three objects eliciting d...

  9. Scramjet test flow reconstruction for a large-scale expansion tube, Part 2: axisymmetric CFD analysis

    Science.gov (United States)

    Gildfind, D. E.; Jacobs, P. A.; Morgan, R. G.; Chan, W. Y. K.; Gollan, R. J.

    2017-11-01

    This paper presents the second part of a study aiming to accurately characterise a Mach 10 scramjet test flow generated using a large free-piston-driven expansion tube. Part 1 described the experimental set-up, the quasi-one-dimensional simulation of the full facility, and the hybrid analysis technique used to compute the nozzle exit test flow properties. The second stage of the hybrid analysis applies the computed 1-D shock tube flow history as an inflow to a high-fidelity two-dimensional-axisymmetric analysis of the acceleration tube. The acceleration tube exit flow history is then applied as an inflow to a further refined axisymmetric nozzle model, providing the final nozzle exit test flow properties and thereby completing the analysis. This paper presents the results of the axisymmetric analyses. These simulations are shown to closely reproduce experimentally measured shock speeds and acceleration tube static pressure histories, as well as nozzle centreline static and impact pressure histories. The hybrid scheme less successfully predicts the diameter of the core test flow; however, this property is readily measured through experimental pitot surveys. In combination, the full test flow history can be accurately determined.

  10. Diffeomorphic Iterative Centroid Methods for Template Estimation on Large Datasets

    OpenAIRE

    Cury , Claire; Glaunès , Joan Alexis; Colliot , Olivier

    2014-01-01

    A common approach for the analysis of anatomical variability relies on the estimation of a template representative of the population. The Large Deformation Diffeomorphic Metric Mapping is an attractive framework for that purpose. However, template estimation using LDDMM is computationally expensive, which is a limitation for the study of large datasets. This paper presents an iterative method which quickly provides a centroid of the population in the shape space. This centr...
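
    The recursion at the heart of the iterative centroid fits in a few lines. In this sketch the LDDMM geodesic is replaced by a straight line, a deliberately crude stand-in that keeps the one-pass, linear-cost structure visible; in that Euclidean setting the recursion reproduces the ordinary mean exactly:

```python
import numpy as np

def iterative_centroid(shapes):
    """Iterative centroid: C1 = X1, then C_{k+1} moves 1/(k+1) toward X_{k+1}.

    The published method walks along LDDMM geodesics between shapes; the
    straight-line interpolation here is a simplifying assumption. One pass
    over the data, so the cost grows linearly with the dataset size.
    """
    c = np.array(shapes[0], dtype=float)  # copy so the input stays untouched
    for k, x in enumerate(shapes[1:], start=1):
        c = c + (np.asarray(x, dtype=float) - c) / (k + 1)
    return c

toy = np.random.default_rng(2).normal(size=(100, 64, 2))  # 100 toy "shapes"
print(np.allclose(iterative_centroid(toy), toy.mean(axis=0)))  # True
```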

  11. Sensitivity of the scale partition for variational multiscale large-eddy simulation of channel flow

    NARCIS (Netherlands)

    Holmen, J.; Hughes, T.J.R.; Oberai, A.A.; Wells, G.N.

    2004-01-01

    The variational multiscale method has been shown to perform well for large-eddy simulation (LES) of turbulent flows. The method relies upon a partition of the resolved velocity field into large- and small-scale components. The subgrid model then acts only on the small scales of motion, unlike

  12. The Relationship between Religion, Life Satisfaction and Self-Esteem (Reliģijas, apmierinātības ar dzīvi un pašvērtējuma saistība)

    OpenAIRE

    Bērziņa, Annija

    2015-01-01

    The aim of this thesis is to clarify the relationship between religion, life satisfaction and self-esteem. Life satisfaction encompasses happiness, subjective well-being and positive emotions; within this thesis, subjective well-being receives the most attention. Religion is belief in something mystical, which includes the observance of religious norms, rituals, faith and belonging to a religious community. There is a positive correlation between subjective well-being and religion. Self-esteem is a person's subjective opinion of...

  13. 12 CFR Appendix C to Part 230 - Effect on State Laws

    Science.gov (United States)

    2010-01-01

    ... depository institution may not make disclosures using the inconsistent term or take actions relying on the... Inconsistent Requirements: State law requirements that are inconsistent with the requirements of the act and this part are preempted to the extent of the inconsistency. A state law is inconsistent if it requires...

  14. High sugar-induced insulin resistance in Drosophila relies on the lipocalin Neural Lazarillo.

    Directory of Open Access Journals (Sweden)

    Matthieu Y Pasco

    In multicellular organisms, insulin/IGF signaling (IIS) plays a central role in matching energy needs with uptake and storage, participating in functions as diverse as metabolic homeostasis, growth, reproduction and ageing. In mammals, this pleiotropy of action relies in part on a dichotomy of action of insulin, IGF-I and their respective membrane-bound receptors. In organisms with simpler IIS, this functional separation is questionable. In Drosophila, IIS consists of several insulin-like peptides called Dilps, activating a unique membrane receptor and its downstream signaling cascade. During larval development, IIS is involved in metabolic homeostasis and growth. We have used feeding conditions (high sugar diet, HSD) that induce an important change in metabolic homeostasis to monitor possible effects on growth. Unexpectedly, we observed that HSD-fed animals exhibited severe growth inhibition as a consequence of peripheral Dilp resistance. Dilp-resistant animals present several metabolic disorders similar to those observed in type II diabetes (T2D) patients. By exploring the molecular mechanisms involved in Drosophila Dilp resistance, we found a major role for the lipocalin Neural Lazarillo (NLaz), a target of JNK signaling. NLaz expression is strongly increased upon HSD and animals heterozygous for an NLaz null mutation are fully protected from HSD-induced Dilp resistance. NLaz is a secreted protein homologous to the Retinol-Binding Protein 4 involved in the onset of T2D in humans and mice. These results indicate that insulin resistance shares common molecular mechanisms in flies and humans and that Drosophila could emerge as a powerful genetic system to study some aspects of this complex syndrome.

  15. Cytotoxicity Comparison of Harvard Zinc Phosphate Cement Versus Panavia F2 and Rely X Plus Resin Cements on Rat L929-fibroblasts.

    Science.gov (United States)

    Mahasti, Sahabi; Sattari, Mandana; Romoozi, Elham; Akbar-Zadeh Baghban, Alireza

    2011-01-01

    Resin cements, regardless of their biocompatibility, have been widely used in restorative dentistry during recent years. These cements contain hydroxyethyl methacrylate (HEMA) molecules, which are claimed to penetrate into dentinal tubules and may affect the dental pulp. Since tooth preparation for metal-ceramic restorations involves a large surface of the tooth, the cytotoxicity of these cements is of particular importance in fixed prosthodontic treatments. The purpose of this study was to compare the cytotoxicity of two resin cements (Panavia F2 and Rely X Plus) versus a zinc phosphate cement (Harvard) using rat L929 fibroblasts in vitro. In this experimental study, ninety hollow glass cylinders (internal diameter 5 mm, height 2 mm) were made and divided into three groups. Each group was filled with one of the three experimental cements: Harvard zinc phosphate cement, Panavia F2 resin cement or Rely X Plus resin cement. L929 fibroblasts were passaged and subsequently cultured in 6-well plates of 5×10^5 cells each. The culture medium was RPMI 1640. All samples were incubated in CO2. Using enzyme-linked immunosorbent assay (ELISA) and the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay, the cytotoxicity of the cements was investigated at 1 hour, 24 hours and one week post exposure. Statistical analyses were performed via two-way ANOVA and Tukey honestly significant difference (HSD) tests. This study revealed significant differences between the three cements at the different time intervals. Harvard cement displayed the greatest cytotoxicity at all three intervals. After 1 hour Panavia F2 showed the next greatest cytotoxicity, but after the 24-hour and one-week intervals Rely X Plus showed the next greatest cytotoxicity. The results further showed that cytotoxicity decreased significantly in the Panavia F2 group with time, whereas the Harvard cement group showed no noticeable change in cytotoxicity with time. Although this study has limitations, it provides...

  16. Comparing the life cycle costs of using harvest residue as feedstock for small- and large-scale bioenergy systems (part II)

    International Nuclear Information System (INIS)

    Cleary, Julian; Wolf, Derek P.; Caspersen, John P.

    2015-01-01

    In part II of our two-part study, we estimate the nominal electricity generation and GHG (greenhouse gas) mitigation costs of using harvest residue from a hardwood forest in Ontario, Canada to fuel (1) a small-scale (250 kWe) combined heat and power wood chip gasification unit and (2) a large-scale (211 MWe) coal-fired generating station retrofitted to combust wood pellets. Under favorable operational and regulatory conditions, generation costs are similar: 14.1 and 14.9 cents per kWh (c/kWh) for the small- and large-scale facilities, respectively. However, GHG mitigation costs are considerably higher for the large-scale system: $159/tonne of CO2 eq., compared to $111 for the small-scale counterpart. Generation costs increase substantially under existing conditions, reaching: (1) 25.5 c/kWh for the small-scale system, due to a regulation mandating the continual presence of an operating engineer; and (2) 22.5 c/kWh for the large-scale system due to insufficient biomass supply, which reduces plant capacity factor from 34% to 8%. Limited inflation adjustment (50%) of feed-in tariff rates boosts these costs by 7% to 11%. Results indicate that policy generalizations based on scale require careful consideration of the range of operational/regulatory conditions in the jurisdiction of interest. Further, if GHG mitigation is prioritized, small-scale systems may be more cost-effective. - Highlights: • Generation costs for two forest bioenergy systems of different scales are estimated. • Nominal electricity costs are 14.1–28.3 cents/kWh for the small-scale plant. • Nominal electricity costs are 14.9–24.2 cents/kWh for the large-scale plant. • GHG mitigation costs from displacing coal and LPG are $111-$281/tonne of CO2 eq. • High sensitivity to capacity factor (large-scale) and labor requirements (small-scale)
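    A side note on the highlighted sensitivity to capacity factor: because fixed annual costs are spread over the kilowatt-hours actually delivered, a falling capacity factor raises the nominal generation cost even when variable costs are unchanged. A minimal sketch, with purely hypothetical cost figures rather than the paper's inputs:

```python
def generation_cost(fixed_annual_usd, variable_c_per_kwh, capacity_kw, cf):
    """Nominal generation cost in cents/kWh: annual fixed costs spread
    over annual output, plus a variable (fuel/feedstock) term.
    All figures below are hypothetical, for illustration only."""
    annual_kwh = capacity_kw * cf * 8760.0  # hours per year
    return 100.0 * fixed_annual_usd / annual_kwh + variable_c_per_kwh

# Dropping the capacity factor from 34% to 8% spreads the same fixed
# costs over roughly a quarter of the output:
for cf in (0.34, 0.08):
    print(f"cf={cf:.2f}: {generation_cost(2.0e7, 10.0, 211_000, cf):.1f} c/kWh")
```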

  17. In or Out: The Cultural Integration of Part-Time Faculty at Two New England Community Colleges

    Science.gov (United States)

    Shanahan, Ellen C.

    2013-01-01

    Public community colleges rely increasingly on high percentages of adjunct or part-time faculty. While these faculty members teach many course sections, they are often disconnected from the institutional culture and mission. This comparative case study examined two New England community colleges, one with 100% part-time faculty and one with…

  18. Efficient secretion of small proteins in mammalian cells relies on Sec62-dependent posttranslational translocation

    Science.gov (United States)

    Lakkaraju, Asvin K. K.; Thankappan, Ratheeshkumar; Mary, Camille; Garrison, Jennifer L.; Taunton, Jack; Strub, Katharina

    2012-01-01

    Mammalian cells secrete a large number of small proteins, but their mode of translocation into the endoplasmic reticulum is not fully understood. Cotranslational translocation was expected to be inefficient due to the small time window for signal sequence recognition by the signal recognition particle (SRP). Impairing the SRP pathway and reducing cellular levels of the translocon component Sec62 by RNA interference, we found an alternate, Sec62-dependent translocation path in mammalian cells required for the efficient translocation of small proteins with N-terminal signal sequences. The Sec62-dependent translocation occurs posttranslationally via the Sec61 translocon and requires ATP. We classified preproteins into three groups: 1) those that comprise ≤100 amino acids are strongly dependent on Sec62 for efficient translocation; 2) those in the size range of 120–160 amino acids use the SRP pathway, albeit inefficiently, and therefore rely on Sec62 for efficient translocation; and 3) those larger than 160 amino acids depend on the SRP pathway to preserve a transient translocation competence independent of Sec62. Thus, unlike in yeast, the Sec62-dependent translocation pathway in mammalian cells serves mainly as a fail-safe mechanism to ensure efficient secretion of small proteins and provides cells with an opportunity to regulate secretion of small proteins independent of the SRP pathway. PMID:22648169

  19. Two-Stage Part-Based Pedestrian Detection

    DEFF Research Database (Denmark)

    Møgelmose, Andreas; Prioletti, Antonio; Trivedi, Mohan M.

    2012-01-01

    Detecting pedestrians is still a challenging task for automotive vision systems due to the extreme variability of targets, lighting conditions, occlusions, and high-speed vehicle motion. A lot of research has been focused on this problem in the last 10 years, and detectors based on classifiers have...... gained a special place among the different approaches presented. This work presents a state-of-the-art pedestrian detection system based on a two-stage classifier. Candidates are extracted with a Haar cascade classifier trained with the DaimlerDB dataset and then validated through part-based HOG...... of several metrics, such as detection rate, false positives per hour, and frame rate. The novelty of this system lies in the combination of the part-based HOG approach, tracking based on specific optimized features, and porting to a real prototype.

  20. On the architecture for the X part of a very large FX correlator using two-accumulator CMACs

    Science.gov (United States)

    Lapshev, Stepan; Rezaul Hasan, S. M.

    2016-02-01

    This paper presents an improved input-buffer architecture for the X part of a very large FX correlator that optimizes memory use to both increase performance and reduce the overall power consumption. The architecture uses an array of two-accumulator CMACs that are reused for different pairs of correlated signals. Using two accumulators in every CMAC allows the processing array to alternately correlate two sets of signal pairs selected in such a way that they share some or all of the processed data samples. This leads to increased processing bandwidth and a significant reduction of the memory read rate due to not having to update some or all of the processing buffers in every second processing cycle. The overall memory access rate is at most 75% of that of the single-accumulator CMAC array. This architecture is intended for correlators of very large multi-element radio telescopes such as the Square Kilometre Array (SKA), and is suitable for an ASIC implementation.
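    To make the sharing idea concrete, the sketch below models a single two-accumulator CMAC cell in Python: two pair sets, (a, b) and (a, c), are correlated in alternating cycles, so the shared samples of a are fetched once and reused. This is an illustrative toy, not the ASIC architecture itself:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = (rng.normal(size=64) + 1j * rng.normal(size=64) for _ in range(3))

acc = [0j, 0j]  # the two accumulators of one CMAC cell
for n in range(64):
    # Pair set 0 = (a, b), pair set 1 = (a, c): sample a[n] is fetched
    # once and reused, so the a-buffer need not be re-read (or updated)
    # in every second processing cycle.
    acc[0] += a[n] * np.conj(b[n])
    acc[1] += a[n] * np.conj(c[n])

assert np.isclose(acc[0], np.vdot(b, a))  # vdot conjugates its first argument
```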

  1. Holes generation in glass using large spot femtosecond laser pulses

    Science.gov (United States)

    Berg, Yuval; Kotler, Zvi; Shacham-Diamand, Yosi

    2018-03-01

    We demonstrate high-throughput generation of symmetrical holes in fused silica glass using large-spot-size femtosecond IR laser irradiation, which modifies the glass properties and yields an enhanced chemical etching rate. The process relies on a balanced interplay between the nonlinear Kerr effect and multiphoton absorption in the glass, which translates into symmetrical glass modification and an increased etching rate. The use of a large laser spot size makes it possible to process thick glasses at high speed over a large area. We have demonstrated such fabricated holes with an aspect ratio of 1:10 in 1 mm thick glass samples.

  2. Human children rely more on social information than chimpanzees do.

    Science.gov (United States)

    van Leeuwen, Edwin J C; Call, Josep; Haun, Daniel B M

    2014-11-01

    Human societies are characterized by more cultural diversity than chimpanzee communities. However, it is currently unclear what mechanism might be driving this difference. Because reliance on social information is a pivotal characteristic of culture, we investigated individual and social information reliance in children and chimpanzees. We repeatedly presented subjects with a reward-retrieval task on which they had collected conflicting individual and social information of equal accuracy in counterbalanced order. While both species relied mostly on their individual information, children but not chimpanzees searched for the reward at the socially demonstrated location more than at a random location. Moreover, only children used social information adaptively when individual knowledge on the location of the reward had not yet been obtained. Social information usage determines information transmission and in conjunction with mechanisms that create cultural variants, such as innovation, it facilitates diversity. Our results may help explain why humans are more culturally diversified than chimpanzees. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  3. Logical Characterisation of Concept Transformations from Human into Machine relying on Predicate Logic

    DEFF Research Database (Denmark)

    Badie, Farshad

    2016-01-01

    Providing more human-like concept learning in machines has always been one of the most significant goals of machine learning paradigms and of human-machine interaction techniques. This article attempts to provide a logical specification of conceptual mappings from humans’ minds into machines’ knowledge bases. I will focus on the representation of the mappings (transformations) relying on First-Order Predicate Logic. Additionally, the structure of concepts in the common ground between humans and machines will be analysed. It seems quite necessary to pay attention to the philosophy...

  4. Fruit production in three masting tree species does not rely on stored carbon reserves.

    Science.gov (United States)

    Hoch, Günter; Siegwolf, Rolf T W; Keel, Sonja G; Körner, Christian; Han, Qingmin

    2013-03-01

    Fruiting is typically considered to place a massive burden on the seasonal carbon budget of trees. The cost of reproduction has therefore been suggested as a proximate factor explaining observed mast-fruiting patterns. Here, we used large-scale, continuous 13C labeling of mature, deciduous trees in a temperate Swiss forest to investigate to what extent fruit formation in three species with masting reproduction behavior (Carpinus betulus, Fagus sylvatica, Quercus petraea) relies on the import of stored carbon reserves. Using a free-air CO2 enrichment system, we exposed trees to 13C-depleted CO2 for 8 consecutive years. By the end of this experiment, carbon reserve pools had significantly lower δ13C values compared to control trees. δ13C analysis of new biomass during the first season after termination of the CO2 enrichment allowed us to distinguish the sources of built-in carbon (old carbon reserves vs. current assimilates). Flowers and expanding leaves carried a significant 13C label from old carbon stores. In contrast, fruits and vegetative infructescence tissues were produced exclusively from current, unlabeled photoassimilates in all three species, including F. sylvatica, which had a strong masting season. Analyses of δ13C in purified starch from the xylem of fruit-bearing shoots revealed a complete turnover of starch during the season, likely due to its usage for bud break. This study is the first to directly demonstrate that fruiting is independent of old carbon reserves in masting trees, with significant implications for mechanistic models that explain mast seeding.

  5. Siemens: Smart Technologies for Large Control Systems

    CERN Multimedia

    CERN. Geneva; BAKANY, Elisabeth

    2015-01-01

    The CERN Large Hadron Collider (LHC) is known to be one of the most complex scientific machines ever built by mankind. Its correct functioning relies on the integration of a multitude of interdependent industrial control systems, which provide different and essential services to run and protect the accelerators and experiments. These systems have to deal with several million data points (e.g., sensors, actuators, configuration parameters) which need to be acquired, processed, archived and analysed. For more than 20 years, CERN and Siemens have developed a strong collaboration to deal with the challenges of these large systems. The presentation will cover the current work on the SCADA (Supervisory Control and Data Acquisition) systems and Data Analytics Frameworks.

  6. Large electrostatic accelerators

    International Nuclear Information System (INIS)

    Jones, C.M.

    1984-01-01

    The paper is divided into four parts: a discussion of the motivation for the construction of large electrostatic accelerators, a description and discussion of several large electrostatic accelerators which have been recently completed or are under construction, a description of several recent innovations which may be expected to improve the performance of large electrostatic accelerators in the future, and a description of an innovative new large electrostatic accelerator whose construction is scheduled to begin next year

  7. 77 FR 70117 - Purchase of Certain Debt Securities by Business and Industrial Development Companies Relying on...

    Science.gov (United States)

    2012-11-23

    ... 3235-AL02 Purchase of Certain Debt Securities by Business and Industrial Development Companies Relying... securities; (B) is engaged or proposes to engage in the business of issuing face-amount certificates of the... business of issuing redeemable securities, the operations of which are subject to regulation by the State...

  8. Episodic Memory Retrieval Functionally Relies on Very Rapid Reactivation of Sensory Information.

    Science.gov (United States)

    Waldhauser, Gerd T; Braun, Verena; Hanslmayr, Simon

    2016-01-06

    Episodic memory retrieval is assumed to rely on the rapid reactivation of sensory information that was present during encoding, a process termed "ecphory." We investigated the functional relevance of this scarcely understood process in two experiments in human participants. We presented stimuli to the left or right of fixation at encoding, followed by an episodic memory test with centrally presented retrieval cues. This allowed us to track the reactivation of lateralized sensory memory traces during retrieval. Successful episodic retrieval led to a very early (∼100-200 ms) reactivation of lateralized alpha/beta (10-25 Hz) electroencephalographic (EEG) power decreases in the visual cortex contralateral to the visual field at encoding. Applying rhythmic transcranial magnetic stimulation to interfere with early retrieval processing in the visual cortex led to decreased episodic memory performance specifically for items encoded in the visual field contralateral to the site of stimulation. These results demonstrate, for the first time, that episodic memory functionally relies on very rapid reactivation of sensory information. Remembering personal experiences requires a "mental time travel" to revisit sensory information perceived in the past. This process is typically described as a controlled, relatively slow process. However, by using electroencephalography to measure neural activity with high time resolution, we show that such episodic retrieval entails a very rapid reactivation of sensory brain areas. Using transcranial magnetic stimulation to alter brain function during retrieval revealed that this early sensory reactivation is causally relevant for conscious remembering. These results give the first neural evidence for a functional, preconscious component of episodic remembering. This provides new insight into the nature of human memory and may help in the understanding of psychiatric conditions that involve the automatic intrusion of unwanted memories.

  9. Proteinortho: detection of (co-)orthologs in large-scale analysis.

    Science.gov (United States)

    Lechner, Marcus; Findeiss, Sven; Steiner, Lydia; Marz, Manja; Stadler, Peter F; Prohaska, Sonja J

    2011-04-28

    Orthology analysis is an important part of data analysis in many areas of bioinformatics such as comparative genomics and molecular phylogenetics. The ever-increasing flood of sequence data, and hence the rapidly increasing number of genomes that can be compared simultaneously, calls for efficient software tools as brute-force approaches with quadratic memory requirements become infeasible in practice. The rapid pace at which new data become available, furthermore, makes it desirable to compute genome-wide orthology relations for a given dataset rather than relying on relations listed in databases. The program Proteinortho described here is a stand-alone tool that is geared towards large datasets and makes use of distributed computing techniques when run on multi-core hardware. It implements an extended version of the reciprocal best alignment heuristic. We apply Proteinortho to compute orthologous proteins in the complete set of all 717 eubacterial genomes available at NCBI at the beginning of 2009. We identified thirty proteins present in 99% of all bacterial proteomes. Proteinortho significantly reduces the required amount of memory for orthology analysis compared to existing tools, allowing such computations to be performed on off-the-shelf hardware.
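    The reciprocal best alignment heuristic that Proteinortho extends reduces, in its basic form, to keeping protein pairs that are each other's best hit in both alignment directions. A minimal generic sketch (toy data structures, not Proteinortho's code or its extended heuristic):

```python
def reciprocal_best_hits(hits_ab, hits_ba):
    """hits_xy maps each query protein of genome X to a list of
    (subject, score) alignment hits against genome Y. Returns the
    pairs that are mutually each other's best hit."""
    best_ab = {q: max(h, key=lambda s: s[1])[0] for q, h in hits_ab.items()}
    best_ba = {q: max(h, key=lambda s: s[1])[0] for q, h in hits_ba.items()}
    return {(a, b) for a, b in best_ab.items() if best_ba.get(b) == a}

hits_ab = {"geneA1": [("geneB1", 250.0), ("geneB2", 90.0)]}
hits_ba = {"geneB1": [("geneA1", 245.0)], "geneB2": [("geneA1", 80.0)]}
print(reciprocal_best_hits(hits_ab, hits_ba))  # {('geneA1', 'geneB1')}
```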

  10. Proteinortho: Detection of (Co-orthologs in large-scale analysis

    Directory of Open Access Journals (Sweden)

    Steiner Lydia

    2011-04-01

    Full Text Available Background: Orthology analysis is an important part of data analysis in many areas of bioinformatics such as comparative genomics and molecular phylogenetics. The ever-increasing flood of sequence data, and hence the rapidly increasing number of genomes that can be compared simultaneously, calls for efficient software tools, as brute-force approaches with quadratic memory requirements become infeasible in practice. The rapid pace at which new data become available, furthermore, makes it desirable to compute genome-wide orthology relations for a given dataset rather than relying on relations listed in databases. Results: The program Proteinortho described here is a stand-alone tool that is geared towards large datasets and makes use of distributed computing techniques when run on multi-core hardware. It implements an extended version of the reciprocal best alignment heuristic. We apply Proteinortho to compute orthologous proteins in the complete set of all 717 eubacterial genomes available at NCBI at the beginning of 2009. We identified thirty proteins present in 99% of all bacterial proteomes. Conclusions: Proteinortho significantly reduces the required amount of memory for orthology analysis compared to existing tools, allowing such computations to be performed on off-the-shelf hardware.

  11. AUTOMATIC ORIENTATION OF LARGE BLOCKS OF OBLIQUE IMAGES

    Directory of Open Access Journals (Sweden)

    E. Rupnik

    2013-05-01

    Full Text Available Nowadays, multi-camera platforms combining nadir and oblique cameras are experiencing a revival. Due to their advantages, such as ease of interpretation, completeness through mitigation of occluded areas, and system accessibility, they have found their place in numerous civil applications. However, automatic post-processing of such imagery still remains a topic of research. The configuration of the cameras poses a challenge to the traditional photogrammetric pipeline used in commercial software, and manual measurements are inevitable. For large image blocks this is certainly an impediment. In the theoretical part of the work we review three common least squares adjustment methods and recap possible ways to orient a multi-camera system. In the practical part we present an approach that successfully oriented a block of 550 images acquired with an imaging system composed of 5 cameras (Canon EOS 1D Mark III) with different focal lengths. The oblique cameras are rotated in the four looking directions (forward, backward, left and right) by 45° with respect to the nadir camera. The workflow relies only upon open-source software: a tool developed to analyse image connectivity and Apero to orient the image block. The benefits of the connectivity tool are twofold: in terms of computational time and of the success of bundle block adjustment. It exploits the georeferenced information provided by the Applanix system to constrain feature point extraction to relevant images only, and guides the concatenation of images during the relative orientation. Ultimately an absolute transformation is performed, resulting in mean re-projection residuals equal to 0.6 pixels.

  12. Clinical picture and treatment of complications of lower part of large intestine resulting from radiotherapy for intra-pelvic cancer

    International Nuclear Information System (INIS)

    Ikeda, Yoshihito; Sunagawa, Keishin; Matsumura, Shigejiro; Watanabe, Kenji; Masaoka, Yoshio

    1976-01-01

    The authors describe the clinical pictures and treatments of 40 patients with complications of the lower part of the large intestine resulting from radiotherapy for cancer of the uterus, ovary or penis. As radiotherapy, 60Co telecobalt (6,000-16,000 R) and 60Co needle (1,000-8,568 mch) intracavitary irradiation were used alone or in combination. Findings in the complications of the lower part of the large intestine were classified into Grade I (13 cases), II (14), III (14), and IV (4) according to Sherman. The prodromal symptoms of the complications appeared 2-6 months after irradiation in more than half of the patients, and within a year in most. Most of the patients complained of melena, anemia, proctagra, tenesmus and diarrhea. In the Grade III cases, symptoms of ileus such as constipation, abdominal distention, and abdominal pain appeared. Internal treatment was given principally, and a preternatural anus (colostomy) was made when frequent blood transfusion was required. Fourteen of the cases in Grades I and II recovered within 1-3 years. The cases which received proctostomy, including those with bleeding, stricture and fistulation, had a favorable prognosis. This result suggested that radiotherapy for intra-pelvic cancer should be controlled to prevent development of complications in the rectum beyond Grade I. (Serizawa, K.)

  13. Regulated Medicare Advantage And Marketplace Individual Health Insurance Markets Rely On Insurer Competition.

    Science.gov (United States)

    Frank, Richard G; McGuire, Thomas G

    2017-09-01

    Two important individual health insurance markets (Medicare Advantage and the Marketplaces) are tightly regulated but rely on competition among insurers to supply and price health insurance products. Many local health insurance markets have little competition, which increases prices to consumers. Furthermore, both markets are highly subsidized in ways that can exacerbate the impact of market power (that is, the ability to set price above cost) on health insurance prices. Policy makers need to foster robust competition in both sectors and avoid designing subsidies that make the market-power problem worse. Project HOPE—The People-to-People Health Foundation, Inc.

  14. Mapping the electrical properties of large-area graphene

    DEFF Research Database (Denmark)

    Bøggild, Peter; Mackenzie, David; Whelan, Patrick Rebsdorf

    2017-01-01

    The significant progress in fabricating large-area graphene films for transparent electrodes, barriers, electronics, telecommunication and other applications has not yet been accompanied by efficient methods for characterizing the electrical properties of large-area graphene. While......, and a high measurement effort per device. In this topical review, we provide a comprehensive overview of the issues that need to be addressed by any large-area characterisation method for electrical key performance indicators, with emphasis on electrical uniformity and on how this can be used to provide...... a more accurate analysis of the graphene film. We review and compare three different but complementary approaches that rely on fixed contacts (dry laser lithography), movable contacts (micro four-point probes) or no contact (terahertz time-domain spectroscopy) between the probe and the graphene...

  15. 78 FR 29392 - Embedded Digital Devices in Safety-Related Systems, Systems Important to Safety, and Items Relied...

    Science.gov (United States)

    2013-05-20

    ... NUCLEAR REGULATORY COMMISSION [NRC-2013-0098] Embedded Digital Devices in Safety-Related Systems, Systems Important to Safety, and Items Relied on for Safety AGENCY: Nuclear Regulatory Commission. ACTION... (NRC) is issuing for public comment Draft Regulatory Issue Summary (RIS) 2013-XX, ``Embedded Digital...

  16. Large scale sodium interactions. Part 1. Test facility design

    International Nuclear Information System (INIS)

    King, D.L.; Smaardyk, J.E.; Sallach, R.A.

    1977-01-01

    During the design of the test facility for large-scale sodium interaction testing, an attempt was made to keep the system as simple and yet as versatile as possible; therefore, a once-through design was employed as opposed to any type of conventional sodium ''loop.'' The initial series of tests conducted at the facility calls for rapidly dropping from 20 kg to 225 kg of sodium at temperatures from 825 K to 1125 K into concrete crucibles. The basic system layout is described. A commercial drum heater is used to melt the sodium, which is in 55 gallon drums, and then a slight argon pressurization is used to force the liquid sodium through a metallic filter and into a dump tank. The sodium dump tank is then heated to the desired temperature. A diaphragm is mechanically ruptured and the sodium is dumped into a crucible that is housed inside a large steel test chamber.

  17. Tokamak building-design considerations for a large tokamak device

    International Nuclear Information System (INIS)

    Barrett, R.J.; Thomson, S.L.

    1981-01-01

    Design and construction of a satisfactory tokamak building to support FED appears feasible. Further, a pressure-vessel building does not appear necessary to meet the plant safety requirements. Some of the building functions will require safety-class systems to assure reliable and safe operation. A rectangular tokamak building has been selected for the FED preconceptual design; it will be part of the confinement system, relying on ventilation and other design features to reduce the consequences and probability of a radioactivity release.

  18. Large deviations

    CERN Document Server

    Varadhan, S R S

    2016-01-01

    The theory of large deviations deals with rates at which probabilities of certain events decay as a natural parameter in the problem varies. This book, which is based on a graduate course on large deviations at the Courant Institute, focuses on three concrete sets of examples: (i) diffusions with small noise and the exit problem, (ii) large time behavior of Markov processes and their connection to the Feynman-Kac formula and the related large deviation behavior of the number of distinct sites visited by a random walk, and (iii) interacting particle systems, their scaling limits, and large deviations from their expected limits. For the most part the examples are worked out in detail, and in the process the subject of large deviations is developed. The book will give the reader a flavor of how large deviation theory can help in problems that are not posed directly in terms of large deviations. The reader is assumed to have some familiarity with probability, Markov processes, and interacting particle systems.

  19. Usage of advanced thick airfoils for the outer part of very large offshore turbines

    International Nuclear Information System (INIS)

    Grasso, F; Ceyhan, O

    2014-01-01

    Nowadays, one of the big challenges in wind energy is connected to the development of very large wind turbines with 100 m blades and 8-10 MW power production. The European project INNWIND.EU plays an important role in this challenge because it is focused on exploring and exploiting technical innovations to make these machines not only feasible but also cost effective. In this context, the present work investigates the benefits of adopting thick airfoils also at the outer part of the blade. In fact, if these airfoils are comparable to the existing thinner ones in terms of aerodynamics, the extra thickness would lead to a saving in weight. Lightweight blades would visibly contribute to reducing the cost of energy of the turbines and making them cost effective. The reference turbine defined in the INNWIND.EU project has been adjusted to use the new airfoils. The results show that the rotor performance is not sacrificed when the 24% airfoils are replaced by the ECN 30% thick airfoils, while 24% extra thickness can be obtained.

  20. Influence of fluid contamination on the bond strength of root canal sealers (Adseal and RelyX Unicem)

    OpenAIRE

    Höfer, Margarita

    2010-01-01

    The aim of this work was to assess the bond strength of the epoxy-resin-based root canal sealer Adseal and of the self-adhesive, methacrylate-based luting cement RelyX Unicem to root canal dentin as a function of the irrigation fluid remaining in the root canal. Contamination of the root canal was performed using three different irrigation solutions. For this purpose, shear bond strength measurements as well as light-microscopic examinations of the sealer sprea...

  1. Large spin systematics in CFT

    Energy Technology Data Exchange (ETDEWEB)

    Alday, Luis F.; Bissi, Agnese; Łukowski, Tomasz [Mathematical Institute, University of Oxford,Andrew Wiles Building, Radcliffe Observatory Quarter,Woodstock Road, Oxford, OX2 6GG (United Kingdom)

    2015-11-16

    Using conformal field theory (CFT) arguments we derive an infinite number of constraints on the large spin expansion of the anomalous dimensions and structure constants of higher spin operators. These arguments rely only on analyticity, unitarity, crossing symmetry and the structure of the conformal partial wave expansion. We obtain results both for perturbative CFT, to all orders in the perturbation parameter, and non-perturbatively. For the case of conformal gauge theories this provides a proof of the reciprocity principle to all orders in perturbation theory and provides a new “reciprocity” principle for structure constants. We argue that these results extend also to non-conformal theories.

  2. Large spin systematics in CFT

    International Nuclear Information System (INIS)

    Alday, Luis F.; Bissi, Agnese; Łukowski, Tomasz

    2015-01-01

    Using conformal field theory (CFT) arguments we derive an infinite number of constraints on the large spin expansion of the anomalous dimensions and structure constants of higher spin operators. These arguments rely only on analyticity, unitarity, crossing symmetry and the structure of the conformal partial wave expansion. We obtain results both for perturbative CFT, to all orders in the perturbation parameter, and non-perturbatively. For the case of conformal gauge theories this provides a proof of the reciprocity principle to all orders in perturbation theory and provides a new “reciprocity” principle for structure constants. We argue that these results extend also to non-conformal theories.
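    For orientation, the large spin expansion referred to in both versions of this record is conventionally written, for a twist-two operator of spin S, in the schematic form below. This is textbook background rather than an equation taken from the paper; f(g) denotes the cusp anomalous dimension.

```latex
% Schematic large-spin behavior of a twist-two anomalous dimension;
% the coefficient of the logarithm is the cusp anomalous dimension f(g).
\gamma(S) = f(g)\,\ln S + B(g) + \mathcal{O}\!\left(\frac{\ln S}{S}\right)
```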

  3. A comparison of three design tree based search algorithms for the detection of engineering parts constructed with CATIA V5 in large databases

    Directory of Open Access Journals (Sweden)

    Robin Roj

    2014-07-01

    Full Text Available This paper presents three different search engines for the detection of CAD parts in large databases. The analysis of the contained information is performed by exporting the data stored in the structure trees of the CAD models. A preparation program generates one XML file for every model which, in addition to the data of the structure tree, also contains certain physical properties of each part. The first search engine specializes in the discovery of standard parts, like screws or washers. The second program uses certain user input as search parameters, and therefore has the ability to perform personalized queries. The third one compares a given reference part with all parts in the database, and locates files that are identical or similar to the reference part. All approaches run automatically, and have the analysis of the structure tree in common. Files constructed with CATIA V5 and search engines written in Python have been used for the implementation. The paper also includes a short comparison of the advantages and disadvantages of each program, as well as a performance test.
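    As a sketch of the kind of structure-tree comparison the paper describes, the snippet below flattens an XML export into a multiset of (tag, name) pairs and scores overlap with a multiset-Jaccard measure. The XML layout and the similarity measure are assumptions for illustration; the paper does not specify its exact format or metric:

```python
import xml.etree.ElementTree as ET

def tree_signature(xml_text):
    """Flatten a structure-tree export into a multiset of (tag, name) pairs."""
    sig = {}
    for node in ET.fromstring(xml_text).iter():
        key = (node.tag, node.get("name", ""))
        sig[key] = sig.get(key, 0) + 1
    return sig

def similarity(a, b):
    """Multiset-Jaccard overlap of two signatures: 1.0 means identical."""
    keys = set(a) | set(b)
    shared = sum(min(a.get(k, 0), b.get(k, 0)) for k in keys)
    total = sum(max(a.get(k, 0), b.get(k, 0)) for k in keys)
    return shared / total if total else 1.0

ref = tree_signature('<part><pad name="body"/><hole name="h1"/></part>')
cand = tree_signature('<part><pad name="body"/><hole name="h2"/></part>')
print(similarity(ref, cand))  # 0.5: shared pad, differing holes
```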

  4. Mapping the emotional face. How individual face parts contribute to successful emotion recognition.

    Directory of Open Access Journals (Sweden)

    Martin Wegrzyn

    Full Text Available Which facial features allow human observers to successfully recognize expressions of emotion? While the eyes and mouth have been frequently shown to be of high importance, research on facial action units has made more precise predictions about the areas involved in displaying each emotion. The present research investigated, on a fine-grained level, which physical features are most relied on when decoding facial expressions. In the experiment, individual faces expressing the basic emotions according to Ekman were hidden behind a mask of 48 tiles, which was sequentially uncovered. Participants were instructed to stop the sequence as soon as they recognized the facial expression and assign it the correct label. For each part of the face, its contribution to successful recognition was computed, allowing visualization of the importance of different face areas for each expression. Overall, observers were mostly relying on the eye and mouth regions when successfully recognizing an emotion. Furthermore, the difference in the importance of eyes and mouth allowed the expressions to be grouped in a continuous space, ranging from sadness and fear (reliance on the eyes) to disgust and happiness (mouth). The face parts with the highest diagnostic value for expression identification were typically located in areas corresponding to action units from the facial action coding system. A similarity analysis of the usefulness of different face parts for expression recognition demonstrated that faces cluster according to the emotion they express, rather than by low-level physical features. Also, expressions relying more on the eyes or mouth region were in close proximity in the constructed similarity space. These analyses help to better understand how human observers process expressions of emotion, by delineating the mapping from facial features to psychological representation.

  5. Mapping the emotional face. How individual face parts contribute to successful emotion recognition

    Science.gov (United States)

    Wegrzyn, Martin; Vogt, Maria; Kireclioglu, Berna; Schneider, Julia; Kissler, Johanna

    2017-01-01

    Which facial features allow human observers to successfully recognize expressions of emotion? While the eyes and mouth have been frequently shown to be of high importance, research on facial action units has made more precise predictions about the areas involved in displaying each emotion. The present research investigated, on a fine-grained level, which physical features are most relied on when decoding facial expressions. In the experiment, individual faces expressing the basic emotions according to Ekman were hidden behind a mask of 48 tiles, which was sequentially uncovered. Participants were instructed to stop the sequence as soon as they recognized the facial expression and assign it the correct label. For each part of the face, its contribution to successful recognition was computed, allowing visualization of the importance of different face areas for each expression. Overall, observers were mostly relying on the eye and mouth regions when successfully recognizing an emotion. Furthermore, the difference in the importance of eyes and mouth allowed the expressions to be grouped in a continuous space, ranging from sadness and fear (reliance on the eyes) to disgust and happiness (mouth). The face parts with the highest diagnostic value for expression identification were typically located in areas corresponding to action units from the facial action coding system. A similarity analysis of the usefulness of different face parts for expression recognition demonstrated that faces cluster according to the emotion they express, rather than by low-level physical features. Also, expressions relying more on the eyes or mouth region were in close proximity in the constructed similarity space. These analyses help to better understand how human observers process expressions of emotion, by delineating the mapping from facial features to psychological representation. PMID:28493921
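    One plausible formalization of the per-tile contribution analysis described in both versions of this record is a simple contrast between how often each tile was uncovered on correct versus incorrect trials. The sketch below uses simulated data and is an illustration, not the paper's actual statistic:

```python
import numpy as np

rng = np.random.default_rng(1)
# 200 simulated trials: which of the 48 mask tiles were uncovered at
# response time, and whether the expression was labeled correctly.
visible = rng.random((200, 48)) < 0.5
correct = rng.random(200) < 0.6

def tile_contributions(visible, correct):
    """Per-tile diagnostic value: how much more often a tile was
    uncovered on correct than on incorrect trials."""
    v = np.asarray(visible, dtype=float)
    c = np.asarray(correct, dtype=bool)
    return v[c].mean(axis=0) - v[~c].mean(axis=0)

# Reshape into a face grid for visualization (6x8 is one way to arrange
# 48 tiles; the grid geometry here is an assumption).
contrib_map = tile_contributions(visible, correct).reshape(6, 8)
```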

  6. Multidetector row computed tomography in bowel obstruction. Part 2. Large bowel obstruction

    Energy Technology Data Exchange (ETDEWEB)

    Sinha, R. [Department of Radiology, Glenfield Hospital, Leicester (United Kingdom)]. E-mail: rakesh.sinha@uhl-tr.nhs.uk; Verma, R. [Department of Radiology, Glenfield Hospital, Leicester (United Kingdom)

    2005-10-01

    Large bowel obstruction may present as an emergency with high-grade colonic obstruction and can result in perforation. Perforated large bowel obstruction causes faecal peritonitis, which carries high morbidity and mortality. Multidetector row computed tomography (MDCT) has the potential to provide an accurate diagnosis of large bowel obstruction. The rapid acquisition of images within one breath-hold reduces the misregistration artefacts that can occur in critically ill or uncooperative patients. The following is a review of the various causes of large bowel obstruction, with emphasis on important pathogenic factors, CT appearances and the use of multiplanar reformatted images in the diagnostic workup.

  7. Can Camera Traps Monitor Komodo Dragons a Large Ectothermic Predator?

    OpenAIRE

    Ariefiandy, Achmad; Purwandana, Deni; Seno, Aganto; Ciofi, Claudio; Jessop, Tim S.

    2013-01-01

    Camera trapping has greatly enhanced population monitoring of often cryptic and low-abundance apex carnivores. The effectiveness of passive infrared camera trapping, and ultimately population monitoring, relies on temperature-mediated differences between the animal and its ambient environment to ensure good camera detection. In ectothermic predators such as large varanid lizards, this criterion is presumed less certain. Here we evaluated the effectiveness of camera trapping to potentially monitor...

  8. Quality of record linkage in a highly automated cancer registry that relies on encrypted identity data

    Directory of Open Access Journals (Sweden)

    Schmidtmann, Irene

    2016-06-01

    Full Text Available Objectives: In the absence of unique ID numbers, cancer and other registries in Germany and elsewhere rely on identity data to link records pertaining to the same patient. These data are often encrypted to ensure privacy. Some record linkage errors unavoidably occur. These errors were quantified for the cancer registry of North Rhine-Westphalia, which uses encrypted identity data. Methods: A sample of records was drawn from the registry, and record linkage information was included. In parallel, plain text data for these records were retrieved to generate a gold standard. Record linkage error frequencies in the cancer registry were determined by comparing the results of the routine linkage with the gold standard. Error rates were projected to larger registries. Results: In the sample studied, the homonym error rate was 0.015%; the synonym error rate was 0.2%. The F-measure was 0.9921. Projection to larger databases indicated that for a realistic development the homonym error rate will be around 1%, the synonym error rate around 2%. Conclusion: The observed error rates are low. This shows that effective methods to standardize and improve the quality of the input data have been implemented. This is crucial to keep error rates low as the registry’s database grows. The planned inclusion of unique health insurance numbers is likely to further improve record linkage quality. Cancer registration entirely based on electronic notification of records can process large amounts of data with high quality of record linkage.
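    The reported F-measure follows from the two error types in the usual way: synonym errors are missed true matches (false negatives) and homonym errors are wrongly merged records (false positives). A minimal sketch of the computation against a gold standard, with toy pair sets:

```python
def f_measure(true_pairs, found_pairs):
    """F-measure of a record-linkage run against a gold standard of
    true record pairs."""
    tp = len(true_pairs & found_pairs)
    fp = len(found_pairs - true_pairs)  # homonym errors: wrong merges
    fn = len(true_pairs - found_pairs)  # synonym errors: missed matches
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

gold = {(1, 2), (3, 4), (5, 6)}
found = {(1, 2), (3, 4), (7, 8)}
print(round(f_measure(gold, found), 4))  # 0.6667 on this toy example
```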

  9. Modelling of natural convection flows with large temperature differences: a benchmark problem for low Mach number solvers. Part. 1 reference solutions

    International Nuclear Information System (INIS)

    Le Quere, P.; Weisman, C.; Paillere, H.; Vierendeels, J.; Dick, E.; Becker, R.; Braack, M.; Locke, J.

    2005-01-01

    Heat transfer by natural convection and conduction in enclosures occurs in numerous practical situations, including the cooling of nuclear reactors. For large temperature differences, the flow becomes compressible, with a strong coupling between the continuity, momentum and energy equations through the equation of state, and its properties (viscosity, heat conductivity) also vary with temperature, making the Boussinesq flow approximation inappropriate and inaccurate. There are very few reference solutions in the literature on non-Boussinesq natural convection flows. We propose here a test case problem which extends the well-known De Vahl Davis differentially heated square cavity problem to the case of large temperature differences for which the Boussinesq approximation is no longer valid. The paper is split into two parts: in this first part, we propose as yet unpublished reference solutions for cases characterized by a non-dimensional temperature difference of 0.6, Ra = 10^6 (constant property and variable property cases) and Ra = 10^7 (variable property case). These reference solutions were produced after a first international workshop organized by CEA and LIMSI in January 2000, in which the above authors volunteered to produce accurate numerical solutions from which the present reference solutions could be established. (authors)

  10. Sustainability of small reservoirs and large scale water availability under current conditions and climate change

    OpenAIRE

    Krol, Martinus S.; de Vries, Marjella J.; van Oel, P.R.; Carlos de Araújo, José

    2011-01-01

    Semi-arid river basins often rely on reservoirs for water supply. Small reservoirs may impact large-scale water availability both by enhancing availability in a distributed sense and by subtracting water from large downstream user communities, e.g. those served by large reservoirs. Both of these impacts of small reservoirs are subject to climate change. Using a case study on North-East Brazil, this paper shows that climate change impacts on water availability may be severe, and impacts on distrib...

  11. submitter Influence of 3D Effects on Field Quality in the Straight Part of Accelerator Magnets for the High Luminosity Large Hadron Collider

    CERN Document Server

    Nilsson, Emelie; Todesco, Ezio; Enomoto, Shun; Farinon, Stefania; Fabbricatore, Pasquale; Nakamoto, Tatsushi; Sugano, Michinaka; Savary, Frederic

    2017-01-01

    A dedicated D1 beam separation dipole is currently being developed at KEK for the Large Hadron Collider Luminosity upgrade (HL-LHC). Four 150 mm aperture, 5.6 T magnetic field and 6.7 m long Nb-Ti magnets will replace resistive D1 dipoles. The development includes fabrication and testing of 2.2 m model magnets. The dipole has a single layer coil and thin spacers between coil and iron, giving a non-negligible impact of saturation on field quality at nominal field. The magnetic design of the straight section coil cross section is based on 2D optimization and a separate optimization concerns the coil ends. However, magnetic measurements of the short model showed a large difference (tens of units) between the sextupole harmonic in the straight part and the 2D calculation. This difference is correctly modelled only by a 3D analysis: 3D calculations show that the magnetic field quality in the straight part is influenced by the coil ends, even for the 6.7 m long magnets. The effect is even more remarkable in the sho...

  12. submitter Influence of 3D Effects on Field Quality in the Straight Part of Accelerator Magnets for the High Luminosity Large Hadron Collider

    CERN Document Server

    Nilsson, Emelie; Todesco, Ezio; Enomoto, Shun; Farinon, Stefania; Fabbricatore, Pasquale; Nakamoto, Tatsushi; Sugano, Michinaka; Savary, Frederic

    2018-01-01

    A dedicated D1 beam separation dipole is currently being developed at KEK for the Large Hadron Collider Luminosity upgrade (HL-LHC). Four 150 mm aperture, 5.6 T magnetic field and 6.7 m long Nb-Ti magnets will replace resistive D1 dipoles. The development includes fabrication and testing of 2.2 m model magnets. The dipole has a single layer coil and thin spacers between coil and iron, giving a non-negligible impact of saturation on field quality at nominal field. The magnetic design of the straight section coil cross section is based on 2D optimization and a separate optimization concerns the coil ends. However, magnetic measurements of the short model showed a large difference (tens of units) between the sextupole harmonic in the straight part and the 2D calculation. This difference is correctly modelled only by a 3D analysis: 3D calculations show that the magnetic field quality in the straight part is influenced by the coil ends, even for the 6.7 m long magnets. The effect is even more remarkable in the sho...

  13. Phonological memory in sign language relies on the visuomotor neural system outside the left hemisphere language network.

    Science.gov (United States)

    Kanazawa, Yuji; Nakamura, Kimihiro; Ishii, Toru; Aso, Toshihiko; Yamazaki, Hiroshi; Omori, Koichi

    2017-01-01

    Sign language is an essential medium for everyday social interaction for deaf people and plays a critical role in verbal learning. In particular, language development in those people should heavily rely on the verbal short-term memory (STM) via sign language. Most previous studies compared neural activations during signed language processing in deaf signers and those during spoken language processing in hearing speakers. For sign language users, it thus remains unclear how visuospatial inputs are converted into the verbal STM operating in the left-hemisphere language network. Using functional magnetic resonance imaging, the present study investigated neural activation while bilinguals of spoken and signed language were engaged in a sequence memory span task. On each trial, participants viewed a nonsense syllable sequence presented either as written letters or as fingerspelling (4-7 syllables in length) and then held the syllable sequence for 12 s. Behavioral analysis revealed that participants relied on phonological memory while holding verbal information regardless of the type of input modality. At the neural level, this maintenance stage broadly activated the left-hemisphere language network, including the inferior frontal gyrus, supplementary motor area, superior temporal gyrus and inferior parietal lobule, for both letter and fingerspelling conditions. Interestingly, while most participants reported that they relied on phonological memory during maintenance, direct comparisons between letters and fingers revealed strikingly different patterns of neural activation during the same period. Namely, the effortful maintenance of fingerspelling inputs relative to letter inputs activated the left superior parietal lobule and dorsal premotor area, i.e., brain regions known to play a role in visuomotor analysis of hand/arm movements. These findings suggest that the dorsal visuomotor neural system subserves verbal learning via sign language by relaying gestural inputs to

  14. Caution required when relying on a colleague's advice; a comparison between professional advice and evidence from the literature

    NARCIS (Netherlands)

    Schaafsma, Frederieke; Verbeek, Jos; Hulshof, Carel; van Dijk, Frank

    2005-01-01

    Background: Occupational physicians rely especially on advice from colleagues when answering their information demands. On the other hand, evidence-based medicine (EBM) promotes the use of up-to-date research literature instead of experts. To find out if there was a difference between expert-based

  15. Large picture archiving and communication systems of the world--Part 2.

    Science.gov (United States)

    Bauman, R A; Gell, G; Dwyer, S J

    1996-11-01

    A survey of 82 institutions worldwide was done in 1995 to identify large picture archiving and communication systems (PACS) in clinical operation. A continuing strong trend toward the creation and operation of large PACS was identified. In the 15 months since the first such survey, the number of clinical large PACS went from 13 to 23, almost a doubling in that short interval. New systems were added in Asia, Europe, and North America. A strong move to primary interpretation from soft copy was identified, and filmless radiology has become a reality. Workstations for interpretation reside mainly within radiology, but one-third of reporting PACS have more than 20 workstations outside of radiology. Fiber distributed data interface networks were the most numerous, but a variety of networks was reported to be in use. Replies on various display times showed surprisingly good, albeit diverse, speeds. The planned archive length of many systems was 60 months, with usually more than 1 year of data on-line. The main large-archive and off-line storage media for these systems were optical disks and magneto-optical disks. Compression was not used before interpretation in most cases, but many systems used 2.5:1 compression for on-line, interpreted cases and 10:1 compression for longer-term archiving. A move to Digital Imaging and Communications in Medicine (DICOM) interface usage was identified.

  16. Analysis of Critical Parts and Materials

    Science.gov (United States)

    1980-12-01

    [Garbled OCR table excerpt; the recoverable fragments list candidate actions, each apparently cited by about 1% of respondents: large orders; manual ordering of some critical parts; order spares with original order; incentives; better capital investment] ...demand; 23 Large orders; 24 Long lead procurement funding (including raw materials, facility funding); 25 Manpower analysis and training; 26 Manual ordering of some critical parts; 27 More active role in schedule negotiation; 28 Multiple source procurements; 29 Multi-year program funding; 30 Order ...

  17. A review on the use of gas and steam turbine combined cycles as prime movers for large ships. Part I: Background and design

    International Nuclear Information System (INIS)

    Haglind, Fredrik

    2008-01-01

    The aim of the present paper is to review the prospects of using combined cycles as prime movers for large ships, such as container ships, tankers and bulk carriers. The paper is divided into three parts, of which this paper constitutes Part I. Here, the environmental and human health concerns of international shipping are outlined. The regulatory framework relevant for shipping and the design of combined cycles are discussed. In Part II, previous work and experience are reviewed, and an overview of the implications of introducing combined cycles as prime movers is included. In Part III, marine fuels are discussed and the pollutant emissions of gas turbines are compared with those of two-stroke, slow-speed diesel engines. Environmental effects of shipping include contributions to the formation of ground-level ozone, acidification, eutrophication and climate impact. Tightening environmental regulations limit the fuel sulphur content and pollutant emissions. For moderate live steam pressures, a vertical HRSG of drum type mounted directly over the gas turbine is suggested to be a viable configuration that minimizes ground floor and space requirements.

  18. Quasi-potential and Two-Scale Large Deviation Theory for Gillespie Dynamics

    KAUST Repository

    Li, Tiejun

    2016-01-07

    The construction of energy landscapes for bio-dynamics has been attracting more and more attention in recent years. In this talk, I will introduce the strategy of constructing the landscape from its connection to rare events, which relies on the large deviation theory for Gillespie-type jump dynamics. In the application to a typical genetic switching model, a two-scale large deviation theory is developed to take into account the fast switching of DNA states. The comparison with other proposals is also discussed. We demonstrate that different diffusive limits arise when considering different regimes for the genetic translation and switching processes.
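    For readers unfamiliar with the underlying jump dynamics: a Gillespie (stochastic simulation algorithm) trajectory draws exponential waiting times from the total propensity and picks the next reaction in proportion to its rate. The toy birth-death sketch below illustrates the class of dynamics the large deviation theory applies to; it is not the genetic switching model of the talk:

```python
import numpy as np

def gillespie_birth_death(k_on, k_off, t_end, seed=0):
    """Exact SSA trajectory of a birth-death process: birth at constant
    rate k_on, death at rate k_off * n."""
    rng = np.random.default_rng(seed)
    t, n, path = 0.0, 0, [(0.0, 0)]
    while t < t_end:
        rates = np.array([k_on, k_off * n])  # propensities: birth, death
        total = rates.sum()
        t += rng.exponential(1.0 / total)    # waiting time to next event
        n += 1 if rng.random() < rates[0] / total else -1
        path.append((t, n))
    return path

print(gillespie_birth_death(5.0, 0.5, 2.0)[-1])  # final (time, copy number)
```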

  19. Does nuclear power have a part to play?

    International Nuclear Information System (INIS)

    Hampson, D.C.

    1992-01-01

    Uranium has three significant uses: the generation of electricity, the production of heat for industrial purposes and space heating, and the cogeneration of both heat and electric power. Electricity is the most widely used and most rapidly growing form of secondary energy. All conservation scenarios, including that of the World Commission on Environment and Development (the Brundtland Report), rely on its expanded use. This paper considers the current role of nuclear energy in meeting world electricity needs and the part played by Australian uranium. It reviews the work being done on the development of small and medium sized power reactors, the strengthening and expansion of the Australian electricity grid, and the possibility that the combination of the two, together with environmental concerns, may provide the opportunity for nuclear power to play a part in our future energy mix. 5 refs., 1 tab., 5 figs

  20. Phased array inspection of large size forged steel parts

    Science.gov (United States)

    Dupont-Marillia, Frederic; Jahazi, Mohammad; Belanger, Pierre

    2018-04-01

    High strength forged steel requires uncompromising quality to warrant advanced performance in numerous critical applications. Ultrasonic inspection is commonly used in nondestructive testing to detect cracks and other defects. In steel blocks of relatively small dimensions (at least two directions not exceeding a few centimetres), phased array inspection is a trusted method to generate images of the inside of the blocks and thereby identify and size defects. However, the casting of large forged ingots introduces variations in mechanical parameters such as grain size, Young's modulus, Poisson's ratio, and chemical composition. These heterogeneities affect wave propagation and, consequently, the reliability of ultrasonic inspection and the imaging capabilities for these blocks. In this context, a custom phased array transducer designed for a 40-ton bainitic forged ingot was investigated. Following a previous study that provided local mechanical parameters for a similar block, two-dimensional simulations were made to compute the optimal transducer parameters, including the pitch, width and number of elements. It appeared that, depending on the number of elements, backwall reconstruction can generate high-amplitude artefacts. Indeed, the large dimensions of the simulated block introduce numerous constructive interferences from backwall reflections, which may lead to significant artefacts. To increase image quality, the reconstruction algorithm was adapted, and promising results were observed and compared with the scattering cone filter method available in the CIVA software.
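    A basic ingredient of any such phased array simulation is the focal delay law: the element farthest from the focal point fires first so that all wavefronts arrive in phase. A minimal sketch for a linear array, with illustrative geometry and a typical longitudinal sound speed in steel rather than the probe of the paper:

```python
import numpy as np

def focal_delays(n_elements, pitch, focus_x, focus_z, c=5900.0):
    """Element firing delays (s) to focus a linear array at
    (focus_x, focus_z); c is the sound speed in m/s."""
    x = (np.arange(n_elements) - (n_elements - 1) / 2) * pitch  # element positions
    tof = np.hypot(x - focus_x, focus_z) / c  # time of flight to the focus
    return tof.max() - tof                    # farthest element fires first

print(focal_delays(16, 1e-3, 0.0, 0.05)[:4])  # delays of the outermost elements
```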

  1. Large-scale functional purification of recombinant HIV-1 capsid.

    Directory of Open Access Journals (Sweden)

    Magdeleine Hung

    Full Text Available During human immunodeficiency virus type-1 (HIV-1) virion maturation, capsid proteins undergo a major rearrangement to form a conical core that protects the viral nucleoprotein complexes. Mutations in the capsid sequence that alter the stability of the capsid core are deleterious to viral infectivity and replication. Recently, capsid assembly has become an attractive target for the development of a new generation of anti-retroviral agents. Drug screening efforts and subsequent structural and mechanistic studies require gram quantities of active, homogeneous and pure protein. Conventional means of laboratory purification of Escherichia coli expressed recombinant capsid protein rely on column chromatography steps that are not amenable to large-scale production. Here we present a function-based purification of wild-type and quadruple mutant capsid proteins, which relies on the inherent propensity of capsid protein to polymerize and depolymerize. This method does not require the packing of sizable chromatography columns and can generate double-digit gram quantities of functionally and biochemically well-behaved proteins with greater than 98% purity. We have used the purified capsid protein to characterize two known assembly inhibitors in our in-house developed polymerization assay and to measure their binding affinities. Our capsid purification procedure provides a robust method for purifying large quantities of a key protein in the HIV-1 life cycle, facilitating identification of the next generation of anti-HIV agents.

  2. Mizan: Optimizing Graph Mining in Large Parallel Systems

    KAUST Repository

    Kalnis, Panos

    2012-03-01

    Extracting information from graphs, from finding shortest paths to complex graph mining, is essential for many applications. Due to the sheer size of modern graphs (e.g., social networks), processing must be done on large parallel computing infrastructures (e.g., the cloud). Earlier approaches relied on the MapReduce framework, which was proved inadequate for graph algorithms. More recently, the message passing model (e.g., Pregel) has emerged. Although the Pregel model has many advantages, it is agnostic to the graph properties and the architecture of the underlying computing infrastructure, leading to suboptimal performance. In this paper, we propose Mizan, a layer between the users' code and the computing infrastructure. Mizan considers the structure of the input graph and the architecture of the infrastructure in order to: (i) decide whether it is beneficial to generate a near-optimal partitioning of the graph in a preprocessing step, and (ii) choose between typical point-to-point message passing and a novel approach that puts computing nodes in a virtual overlay ring. We deployed Mizan on a small local Linux cluster, on the cloud (256 virtual machines in Amazon EC2), and on an IBM Blue Gene/P supercomputer (1024 CPUs). We show that Mizan executes common algorithms on very large graphs 1-2 orders of magnitude faster than MapReduce-based implementations and up to one order of magnitude faster than implementations relying on Pregel-like hash-based graph partitioning.
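
    As a rough illustration of the vertex-centric message-passing model that Pregel-like systems (and hence Mizan) build on, the sketch below runs single-source shortest paths in synchronous supersteps on a toy graph. It shows the computation style only; it is not Mizan's actual API.

        # Pregel-style vertex-centric sketch: single-source shortest paths
        # via synchronous supersteps on a toy weighted graph.
        INF = float("inf")

        graph = {                       # adjacency list with edge weights
            "a": [("b", 1), ("c", 4)],
            "b": [("c", 1)],
            "c": [],
        }
        dist = {v: INF for v in graph}
        messages = {"a": [0]}           # the source receives distance 0

        while messages:                 # one superstep per iteration
            next_messages = {}
            for v, incoming in messages.items():
                best = min(incoming)
                if best < dist[v]:              # vertex updates its state...
                    dist[v] = best
                    for u, w in graph[v]:       # ...and messages its neighbours
                        next_messages.setdefault(u, []).append(best + w)
            messages = next_messages

        print(dist)                     # {'a': 0, 'b': 1, 'c': 2}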

  3. Investing in a Large Stretch Press

    Science.gov (United States)

    Choate, M.; Nealson, W.; Jay, G.; Buss, W.

    1986-01-01

    Press for forming large aluminum parts from plates provides substantial economies. Study assessed advantages and disadvantages of investing in large stretch-forming press, and also developed procurement specification for press.

  4. Object Categorization in Finer Levels Relies More on Higher Spatial Frequencies and Takes Longer.

    Science.gov (United States)

    Ashtiani, Matin N; Kheradpisheh, Saeed R; Masquelier, Timothée; Ganjtabesh, Mohammad

    2017-01-01

    The human visual system contains a hierarchical sequence of modules that take part in visual perception at different levels of abstraction, i.e., the superordinate, basic, and subordinate levels. One important question is to identify the "entry" level at which the visual representation is commenced in the process of object recognition. For a long time, it was believed that the basic level had a temporal advantage over the two others. This claim has been challenged recently. Here we used a series of psychophysics experiments, based on a rapid presentation paradigm, as well as two computational models, with bandpass-filtered images of five object classes to study the processing order of the categorization levels. In these experiments, we investigated the type of visual information required for categorizing objects at each level by varying the spatial frequency bands of the input image. The results of our psychophysics experiments and computational models are consistent. They indicate that different spatial frequency information has different effects on object categorization at each level. In the absence of high frequency information, subordinate and basic level categorization are performed less accurately, while superordinate level categorization is performed well. This means that low frequency information is sufficient for the superordinate level, but not for the basic and subordinate levels. These finer levels rely more on high frequency information, which appears to take longer to be processed, leading to longer reaction times. Finally, to avoid the ceiling effect, we evaluated the robustness of the results by adding different amounts of noise to the input images and repeating the experiments. As expected, the categorization accuracy decreased and the reaction time increased significantly, but the trends were the same. This shows that our results are not due to a ceiling effect. The compatibility between our psychophysical and computational results suggests that the temporal
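
    A minimal sketch of the kind of spatial-frequency manipulation described: an FFT ring mask that keeps only one band of an image. The cutoff frequencies and the placeholder image are assumptions for illustration, not the stimulus parameters of the study.

        # Isolate a spatial-frequency band of an image with an FFT mask,
        # as in low-/high-pass manipulations of categorization stimuli.
        import numpy as np

        def bandpass(img, low, high):
            """Keep radial frequencies in [low, high) cycles per image."""
            F = np.fft.fftshift(np.fft.fft2(img))
            h, w = img.shape
            yy, xx = np.mgrid[0:h, 0:w]
            r = np.hypot(yy - h / 2, xx - w / 2)   # radial frequency
            mask = (r >= low) & (r < high)
            return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

        img = np.random.rand(128, 128)             # placeholder image
        low_sf = bandpass(img, 0, 8)               # coarse structure only
        high_sf = bandpass(img, 32, 64)            # fine detail only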

  5. Large-scale grid management

    International Nuclear Information System (INIS)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-01-01

    The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) a broader focus (Broad Band, Multi Utility,...) and (2) bigger units with large networks and more customers. Research done by SINTEF Energy Research shows so far that the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series.

  6. Anemia predicts thromboembolic events, bleeding complications and mortality in patients with atrial fibrillation : insights from the RE-LY trial

    NARCIS (Netherlands)

    Westenbrink, B. D.; Alings, M.; Connolly, S. J.; Eikelboom, J.; Ezekowitz, M. D.; Oldgren, J.; Yang, S.; Pongue, J.; Yusuf, S.; Wallentin, L.; van Gilst, W. H.

    Background: Anemia may predispose to thromboembolic events or bleeding in anticoagulated patients with atrial fibrillation (AF). Objectives: To investigate whether anemia is associated with thromboembolic events and bleeding in patients with AF. Patients and methods: We retrospectively analyzed the RE-LY

  7. Large-scale cryopumping for controlled fusion

    International Nuclear Information System (INIS)

    Pittenger, L.C.

    1977-01-01

    Vacuum pumping by freezing out or otherwise immobilizing the pumped gas is an old concept. In several plasma physics experiments for controlled fusion research, cryopumping has been used to provide clean, ultrahigh vacua. Present day fusion research devices, which rely almost universally upon neutral beams for heating, are high gas throughput systems, the pumping of which is best accomplished by cryopumping in the high mass-flow, moderate-to-high vacuum regime. Cryopumping systems have been developed for neutral beam injection systems on several fusion experiments (HVTS, TFTR) and are being developed for the overall pumping of a large, high-throughput mirror containment experiment (MFTF). In operation, these large cryopumps will require periodic defrosting, some schemes for which are discussed, along with other operational considerations. The development of cryopumps for fusion reactors is begun with the TFTR and MFTF systems. Likely paths for necessary further development for power-producing reactors are also discussed

  8. Large-scale cryopumping for controlled fusion

    Energy Technology Data Exchange (ETDEWEB)

    Pittenger, L.C.

    1977-07-25

    Vacuum pumping by freezing out or otherwise immobilizing the pumped gas is an old concept. In several plasma physics experiments for controlled fusion research, cryopumping has been used to provide clean, ultrahigh vacua. Present day fusion research devices, which rely almost universally upon neutral beams for heating, are high gas throughput systems, the pumping of which is best accomplished by cryopumping in the high mass-flow, moderate-to-high vacuum regime. Cryopumping systems have been developed for neutral beam injection systems on several fusion experiments (HVTS, TFTR) and are being developed for the overall pumping of a large, high-throughput mirror containment experiment (MFTF). In operation, these large cryopumps will require periodic defrosting, some schemes for which are discussed, along with other operational considerations. The development of cryopumps for fusion reactors is begun with the TFTR and MFTF systems. Likely paths for necessary further development for power-producing reactors are also discussed.

  9. Universal character and large N factorization in topological gauge/string theory

    International Nuclear Information System (INIS)

    Kanno, Hiroaki

    2006-01-01

    We establish a formula for the large N factorization of the modular S-matrix for the coupled representations in U(N) Chern-Simons theory. The formula was proposed by Aganagic, Neitzke and Vafa, based on computations involving the conifold transition. We present a more rigorous proof that relies on the universal character for rational representations and on an expression of the modular S-matrix in terms of the specialization of characters

  10. Pneumococcal Competence Coordination Relies on a Cell-Contact Sensing Mechanism.

    Directory of Open Access Journals (Sweden)

    Marc Prudhomme

    2016-06-01

    Full Text Available Bacteria have evolved various inducible genetic programs to face many types of stress that challenge their growth and survival. Competence is one such program. It enables genetic transformation, a major horizontal gene transfer process. Competence development in liquid cultures of Streptococcus pneumoniae is synchronized within the whole cell population. This collective behavior is known to depend on an exported signaling Competence Stimulating Peptide (CSP), whose action generates a positive feedback loop. However, it is unclear how this CSP-dependent population switch is coordinated. By monitoring spontaneous competence development in real time during growth of four distinct pneumococcal lineages, we have found that the competence shift in the population relies on a self-activated cell fraction that arises via a growth time-dependent mechanism. We demonstrate that CSP remains bound to cells during this event, and conclude that the rate of competence development corresponds to the propagation of competence by contact between activated and quiescent cells. We validated this two-step cell-contact sensing mechanism by measuring competence development during co-cultivation of strains with altered capacity to produce or respond to CSP. Finally, we found that the membrane protein ComD retains the CSP, limiting its free diffusion in the medium. We propose that competence initiator cells originate stochastically in response to stress, to form a distinct subpopulation that then transmits the CSP by cell-cell contact.

  11. [Scope of the latest RE-LY substudies: clinical implications].

    Science.gov (United States)

    Ruiz-Giménez Arrieta, N

    2012-03-01

    The approval of the use of dabigatran for stroke prevention in patients with nonvalvular atrial fibrillation (NVAF) is based on the results of the RE-LY (Randomized Evaluation of Long-Term Anticoagulation Therapy) trial, one of the largest studies to date in this entity. In this trial, dabigatran showed similar safety and efficacy to warfarin in primary and secondary prevention of stroke in patients with AF. At a dose of 150 mg twice daily, dabigatran was superior to warfarin in the prevention of stroke or systemic embolism, and the 110 mg dose twice daily showed similar efficacy and greater safety, given the lower incidence of hemorrhage. These results were consistently found in the various subanalyses, with some slight differences of interest for clinical practice. The ideal candidates for dabigatran are patients with NVAF suitable for cardioversion who require short periods of anticoagulation, patients in remote geographical areas with difficulty in achieving good anticoagulation control, patients with poor control under anti-vitamin K treatment due to INR fluctuations, and patients with a low risk of hemorrhage and a CHADS score ≥ 3 and/or with prior stroke, whenever there are no contraindications. The choice of dabigatran dose should be evaluated according to the patient's individual characteristics (caution must be exercised when prescribing this drug in the elderly and in renal insufficiency) and embolic and/or hemorrhagic risk. Studies of the long-term safety of this drug, pharmacoeconomic analyses in Spain and post-commercialization pharmacovigilance data are required before the definitive uses of this drug can be established. Copyright © 2012 Elsevier España, S.L. All rights reserved.

  12. Segmentation of the hippocampus by transferring algorithmic knowledge for large cohort processing.

    Science.gov (United States)

    Thyreau, Benjamin; Sato, Kazunori; Fukuda, Hiroshi; Taki, Yasuyuki

    2018-01-01

    The hippocampus is a particularly interesting target for neuroscience research studies due to its essential role within the human brain. In large human cohort studies, bilateral hippocampal structures are frequently identified and measured to gain insight into human behaviour or genomic variability in neuropsychiatric disorders of interest. Automatic segmentation is performed using various algorithms, with FreeSurfer being a popular option. In this manuscript, we present a method to segment the bilateral hippocampus using a deep-learned appearance model. Deep convolutional neural networks (ConvNets) have shown great success in recent years, due to their ability to learn meaningful features from a mass of training data. Our method relies on the following key novelties: (i) we use a wide and variable training set coming from multiple cohorts, (ii) our training labels come in part from the output of the FreeSurfer algorithm, and (iii) we include synthetic data and use a powerful data augmentation scheme. Our method proves to be robust and has fast inference. Deep neural-network methods can easily encode, and even improve, existing anatomical knowledge, even when this knowledge exists in algorithmic form. Copyright © 2017 Elsevier B.V. All rights reserved.
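
    The paper's exact augmentation scheme is not reproduced here; the sketch below shows the generic shape of such a scheme for segmentation training pairs, with assumed flip, intensity-scaling and noise parameters chosen purely for illustration.

        # Simple augmentation sketch for (image, label) segmentation pairs.
        # Geometric transforms are applied to both image and label mask;
        # intensity transforms are applied to the image only.
        import numpy as np

        def augment(image, label, rng):
            if rng.random() < 0.5:                 # random left-right flip
                image = image[:, ::-1]
                label = label[:, ::-1]
            image = image * rng.uniform(0.9, 1.1)  # intensity scaling
            image = image + rng.normal(0, 0.01, image.shape)  # additive noise
            return image, label

        rng = np.random.default_rng(0)
        img = np.zeros((64, 64))
        lab = np.zeros((64, 64), dtype=int)
        img_a, lab_a = augment(img, lab, rng)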

  13. Public relations in the practice of new religious movements. A case analysis of the church "Jaunā paaudze"

    OpenAIRE

    Maļuha, Sandra

    2008-01-01

    The title of this bachelor's thesis is "Public Relations in the Practice of New Religious Movements: A Case Analysis of the Church 'Jaunā paaudze'". The research problem examined in the thesis is a new and so far little-studied phenomenon: the realization of public relations goals in the activities of new religious movements. The chosen object of study is the church "Jaunā paaudze", which, according to data provided by the Board of Religious Affairs of the Ministry of Justice, is currently the largest new religious movement in Latvia. The theoretical basis...

  14. Engineering survey planning for the alignment of a particle accelerator: part II. Design of a reference network and measurement strategy

    Science.gov (United States)

    Junqueira Leão, Rodrigo; Raffaelo Baldo, Crhistian; Collucci da Costa Reis, Maria Luisa; Alves Trabanco, Jorge Luiz

    2018-03-01

    The building blocks of particle accelerators are magnets responsible for keeping beams of charged particles on a desired trajectory. Magnets are commonly grouped in support structures named girders, which are mounted on vertical and horizontal stages. The performance of this type of machine is highly dependent on the relative alignment between its main components. The length of particle accelerators ranges from small machines to large-scale national or international facilities, with typical lengths of hundreds of meters to a few kilometers. This relatively large volume, together with micrometric positioning tolerances, makes the alignment activity a classical large-scale dimensional metrology problem. The alignment concept relies on networks of fixed monuments installed on the building structure, to which all accelerator components are referred. In this work, the Sirius accelerator is taken as a case study, and an alignment network is optimized via computational methods in terms of geometry, densification, and surveying procedure. Laser trackers are employed to guide the installation and measure the girders’ positions, using the optimized network as a reference and applying the metric developed in part I of this paper. Simulations demonstrate the feasibility of aligning the 220 girders of the Sirius synchrotron to better than 0.080 mm, at a coverage probability of 95%.

  15. The part-time wage penalty in European countries: how large is it for men?

    OpenAIRE

    O'Dorchai, Sile Padraigin; Plasman, Robert; Rycx, François

    2007-01-01

    Economic theory advances a number of reasons for the existence of a wage gap between part-time and full-time workers. Empirical work has concentrated on the wage effects of part-time work for women. For men, much less empirical evidence exists, mainly because of lacking data. In this paper, we take advantage of access to unique harmonised matched employer-employee data (i.e. the 1995 European Structure of Earnings Survey) to investigate the magnitude and sources of the part-time wage penalty ...

  16. Anza palaeoichnological site. Late Cretaceous. Morocco. Part II. Problems of large dinosaur trackways and the first African Macropodosaurus trackway

    Science.gov (United States)

    Masrour, Moussa; Lkebir, Noura; Pérez-Lorente, Félix

    2017-10-01

    The Anza site shows large ichnological surfaces indicating the coexistence in the same area of different vertebrate footprints (dinosaur and pterosaur) and of different types (tridactyl and tetradactyl, semiplantigrade and rounded without digit marks) and the footprint variability of long trackways. This area may become a world reference in ichnology because it contains the second undebatable African site with Cretaceous pterosaur footprints - described in part I - and the first African site with Macropodosaurus footprints. In this work, problems related to long trackways are also analyzed, such as their sinuosity, the order-disorder of the variability (long-short) of the pace length and the difficulty of morphological classification of the theropod footprints due to their morphological variability.

  17. 16 CFR 1115.5 - Reporting of failures to comply with a voluntary consumer product safety standard relied upon by...

    Science.gov (United States)

    2010-01-01

    ... voluntary consumer product safety standard relied upon by the Commission under section 9 of the CPSA. 1115.5 Section 1115.5 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION CONSUMER PRODUCT SAFETY ACT REGULATIONS SUBSTANTIAL PRODUCT HAZARD REPORTS General Interpretation § 1115.5 Reporting of failures to comply...

  18. Primary cutaneous anaplastic large-cell lymphoma.

    Science.gov (United States)

    Perry, Edward; Karajgikar, Jay; Tabbara, Imad A

    2013-10-01

    Since the recognition of the anaplastic large-cell lymphomas in the 1980s, much has been learned about the diagnosis, clinical presentation, and treatment of these malignant conditions. The systemic and primary cutaneous types of anaplastic large cell lymphomas have been differentiated on clinical and immunophenotypical findings, but further research is required to elucidate their exact etiologies and pathogeneses. Primary cutaneous anaplastic large-cell lymphoma has a 95% disease-specific 5-year survival, owing partly to the relatively benign course of the disease and partly to the variety of effective treatments that are available. As with many other oncological diseases, new drugs are continually being tested and developed, with immunotherapy and biological response modifiers showing promise.

  19. Non-Destructive Thermography Analysis of Impact Damage on Large-Scale CFRP Automotive Parts

    Science.gov (United States)

    Maier, Alexander; Schmidt, Roland; Oswald-Tranta, Beate; Schledjewski, Ralf

    2014-01-01

    Laminated composites are increasingly used in aeronautics and the wind energy industry, as well as in the automotive industry. In these applications, the construction and processing need to fulfill the highest requirements regarding weight and mechanical properties. Environmental issues, like fuel consumption and CO2-footprint, set new challenges in producing lightweight parts that meet the highly monitored standards for these branches. In the automotive industry, one main aspect of construction is the impact behavior of structural parts. To verify the quality of parts made from composite materials with little effort, cost and time, non-destructive test methods are increasingly used. A highly recommended non-destructive testing method is thermography analysis. In this work, a prototype for a car’s base plate was produced by using vacuum infusion. For research work, testing specimens were produced with the same multi-layer build up as the prototypes. These specimens were charged with defined loads in impact tests to simulate the effect of stone chips. Afterwards, the impacted specimens were investigated with thermography analysis. The research results in that work will help to understand the possible fields of application and the usage of thermography analysis as the first quick and economic failure detection method for automotive parts. PMID:28788464

  20. Non-Destructive Thermography Analysis of Impact Damage on Large-Scale CFRP Automotive Parts

    Directory of Open Access Journals (Sweden)

    Alexander Maier

    2014-01-01

    Full Text Available Laminated composites are increasingly used in aeronautics and the wind energy industry, as well as in the automotive industry. In these applications, the construction and processing need to fulfill the highest requirements regarding weight and mechanical properties. Environmental issues, like fuel consumption and CO2-footprint, set new challenges in producing lightweight parts that meet the highly monitored standards for these branches. In the automotive industry, one main aspect of construction is the impact behavior of structural parts. To verify the quality of parts made from composite materials with little effort, cost and time, non-destructive test methods are increasingly used. A highly recommended non-destructive testing method is thermography analysis. In this work, a prototype for a car’s base plate was produced by using vacuum infusion. For research work, testing specimens were produced with the same multi-layer build up as the prototypes. These specimens were charged with defined loads in impact tests to simulate the effect of stone chips. Afterwards, the impacted specimens were investigated with thermography analysis. The research results in that work will help to understand the possible fields of application and the usage of thermography analysis as the first quick and economic failure detection method for automotive parts.

  1. Non-Destructive Thermography Analysis of Impact Damage on Large-Scale CFRP Automotive Parts.

    Science.gov (United States)

    Maier, Alexander; Schmidt, Roland; Oswald-Tranta, Beate; Schledjewski, Ralf

    2014-01-14

    Laminated composites are increasingly used in aeronautics and the wind energy industry, as well as in the automotive industry. In these applications, the construction and processing need to fulfill the highest requirements regarding weight and mechanical properties. Environmental issues, like fuel consumption and CO₂-footprint, set new challenges in producing lightweight parts that meet the highly monitored standards for these branches. In the automotive industry, one main aspect of construction is the impact behavior of structural parts. To verify the quality of parts made from composite materials with little effort, cost and time, non-destructive test methods are increasingly used. A highly recommended non-destructive testing method is thermography analysis. In this work, a prototype for a car's base plate was produced by using vacuum infusion. For research work, testing specimens were produced with the same multi-layer build up as the prototypes. These specimens were charged with defined loads in impact tests to simulate the effect of stone chips. Afterwards, the impacted specimens were investigated with thermography analysis. The research results in that work will help to understand the possible fields of application and the usage of thermography analysis as the first quick and economic failure detection method for automotive parts.

  2. Large Pelagics Biological Survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Large Pelagics Biological Survey (LPBS) collects additional length and weight information and body parts such as otoliths, caudal vertebrae, dorsal spines, and...

  3. da Vinci decoded: does da Vinci stereopsis rely on disparity?

    Science.gov (United States)

    Tsirlin, Inna; Wilcox, Laurie M; Allison, Robert S

    2012-11-01

    In conventional stereopsis, the depth between two objects is computed based on the retinal disparity in the position of matching points in the two eyes. When an object is occluded by another object in the scene, so that it is visible only in one eye, its retinal disparity cannot be computed. Nakayama and Shimojo (1990) found that a percept of quantitative depth between the two objects could still be established for such stimuli and proposed that this percept is based on the constraints imposed by occlusion geometry. They named this and other occlusion-based depth phenomena "da Vinci stereopsis." Subsequent research found quantitative depth based on occlusion geometry in several other classes of stimuli grouped under the term da Vinci stereopsis. However, Nakayama and Shimojo's findings were later brought into question by Gillam, Cook, and Blackburn (2003), who suggested that quantitative depth in their stimuli was perceived based on conventional disparity. In order to understand whether da Vinci stereopsis relies on one type of mechanism or whether its function is stimulus dependent, we examine the nature and source of depth in the class of stimuli used by Nakayama and Shimojo (1990). We use three different psychophysical and computational methods to show that the most likely source for depth in these stimuli is occlusion geometry. Based on these experiments and previous data we discuss the potential mechanisms responsible for processing depth from monocular features in da Vinci stereopsis.
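
    For contrast with the occlusion-based account, the conventional disparity computation the abstract refers to reduces to the pinhole relation depth = f·b/d, which is undefined for half-occluded points (visible to one eye only, so no disparity d exists). A minimal sketch with assumed, roughly eye-like values:

        # Conventional disparity-to-depth: the computation that fails for
        # half-occluded points. All values are illustrative assumptions.
        f = 0.017       # focal length, m (roughly eye-like)
        b = 0.065       # interocular baseline, m
        d = 1.0e-4      # retinal disparity of a matched point, m

        depth = f * b / d
        print(f"depth = {depth:.2f} m")   # ~11 m for these numbers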

  4. Self-Calibrated In-Process Photogrammetry for Large Raw Part Measurement and Alignment before Machining.

    Science.gov (United States)

    Mendikute, Alberto; Yagüe-Fabra, José A; Zatarain, Mikel; Bertelsen, Álvaro; Leizea, Ibai

    2017-09-09

    Photogrammetry methods are being used more and more as a 3D technique for large scale metrology applications in industry. Optical targets are placed on an object and images are taken around it, where measuring traceability is provided by precise off-process pre-calibrated digital cameras and scale bars. According to the 2D target image coordinates, target 3D coordinates and camera views are jointly computed. One of the applications of photogrammetry is the measurement of raw part surfaces prior to its machining. For this application, post-process bundle adjustment has usually been adopted for computing the 3D scene. With that approach, a high computation time is observed, leading in practice to time consuming and user dependent iterative review and re-processing procedures until an adequate set of images is taken, limiting its potential for fast, easy-to-use, and precise measurements. In this paper, a new efficient procedure is presented for solving the bundle adjustment problem in portable photogrammetry. In-process bundle computing capability is demonstrated on a consumer grade desktop PC, enabling quasi real time 2D image and 3D scene computing. Additionally, a method for the self-calibration of camera and lens distortion has been integrated into the in-process approach due to its potential for highest precision when using low cost non-specialized digital cameras. Measurement traceability is set only by scale bars available in the measuring scene, avoiding the uncertainty contribution of off-process camera calibration procedures or the use of special purpose calibration artifacts. The developed self-calibrated in-process photogrammetry has been evaluated both in a pilot case scenario and in industrial scenarios for raw part measurement, showing a total in-process computing time typically below 1 s per image up to a maximum of 2 s during the last stages of the computed industrial scenes, along with a relative precision of 1/10,000 (e.g. 0.1 mm error in 1 m) with
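
    At the heart of bundle adjustment is the minimization of reprojection error. The sketch below shows that residual for a toy translation-only pinhole camera observing known reference targets, solved with a generic least-squares routine; it is a simplified stand-in under stated assumptions, not the paper's in-process algorithm or its self-calibration of lens distortion.

        # Pose refinement by minimizing reprojection error, the same
        # residual bundle adjustment minimizes jointly over cameras and
        # points. Toy pinhole camera, translation-only pose, no distortion.
        import numpy as np
        from scipy.optimize import least_squares

        f = 1000.0                                    # focal length, pixels (assumed)
        targets = np.array([[0., 0., 5.], [1., 0., 6.],
                            [0., 1., 7.], [1., 1., 5.]])   # known 3D targets

        def project(pts, tvec):
            p = pts + tvec                            # apply camera translation
            return f * p[:, :2] / p[:, 2:3]           # pinhole projection

        true_t = np.array([0.1, -0.2, 0.3])
        observed = project(targets, true_t)           # simulated image measurements

        def residual(tvec):
            return (project(targets, tvec) - observed).ravel()

        sol = least_squares(residual, np.zeros(3))
        print(sol.x)                                  # recovers true_t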

  5. A methodology for fault diagnosis in large chemical processes and an application to a multistage flash desalination process: Part II

    International Nuclear Information System (INIS)

    Tarifa, Enrique E.; Scenna, Nicolas J.

    1998-01-01

    In Part I, an efficient method for identifying faults in large processes was presented. The whole plant is divided into sectors by using structural, functional, or causal decomposition. A signed directed graph (SDG) is the model used for each sector. The SDG represents interactions among process variables. This qualitative model is used to carry out qualitative simulation for all possible faults. The output of this step is information about the process behaviour. This information is used to build rules. When a symptom is detected in one sector, its rules are evaluated using on-line data and fuzzy logic to yield the diagnosis. In this paper the proposed methodology is applied to a multistage flash (MSF) desalination process. This process is composed of sequential flash chambers. It was designed for a pilot plant that produces drinkable water for a community in Argentina; that is, it is a real case. Due to the large number of variables, recycles, phase changes, etc., this process is a good challenge for the proposed diagnosis method.
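
    As an illustration of the rule-evaluation step, the sketch below scores two hypothetical faults from fuzzy memberships of qualitative deviations, using min as the AND operator and max to pick the diagnosis. The rules and membership shapes are invented for illustration, not taken from the MSF rule base.

        # Fuzzy rule evaluation sketch for fault diagnosis: qualitative
        # deviations (low/high) feed SDG-derived rules; min = AND,
        # max = pick the best-supported fault hypothesis.
        def tri(x, a, b, c):
            """Triangular membership function peaking at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def high(x): return tri(x, 0.0, 1.0, 2.0)    # deviation above normal
        def low(x):  return tri(x, -2.0, -1.0, 0.0)  # deviation below normal

        # Normalized deviations of two measured variables (illustrative)
        dev_temperature, dev_level = 0.8, -0.6

        faults = {                                   # hypothetical rule base
            "heater_fouling": min(high(dev_temperature), low(dev_level)),
            "valve_stuck": low(dev_level),
        }
        diagnosis = max(faults, key=faults.get)
        print(diagnosis, faults[diagnosis])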

  6. Unparalleled sample treatment throughput for proteomics workflows relying on ultrasonic energy.

    Science.gov (United States)

    Jorge, Susana; Araújo, J E; Pimentel-Santos, F M; Branco, Jaime C; Santos, Hugo M; Lodeiro, Carlos; Capelo, J L

    2018-02-01

    We report on the new microplate horn ultrasonic device as a powerful tool to speed proteomics workflows with unparalleled throughput: 96 complex proteomes were digested at the same time in 4 min. Variables such as ultrasonication time, ultrasonication amplitude, and protein-to-enzyme ratio were optimized. The "classic" method relying on overnight protein digestion (12 h) and the sonoreactor-based method were also employed for comparative purposes. We found the protein digestion efficiency homogeneously distributed over the entire microplate horn surface using the following conditions: 4 min sonication time and 25% amplitude. Using this approach, patients with lymphoma and myeloma were classified using principal component analysis and a 2D gel-mass spectrometry based approach. Furthermore, we demonstrate the excellent performance by using MALDI-mass spectrometry based profiling as a fast way to classify patients with rheumatoid arthritis, systemic lupus erythematosus, and ankylosing spondylitis. Finally, the speed and simplicity of this method were demonstrated by clustering 90 individuals: patients with knee osteoarthritis disease (30), patients with a prosthesis (30, control group) and healthy individuals (30) with no history of joint disease. Overall, the new approach allows a disease to be profiled in just one week while meeting the minimalism rules outlined by Halls. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Very large thermal rectification in bulk composites consisting partly of icosahedral quasicrystals

    International Nuclear Information System (INIS)

    Takeuchi, Tsunehiro

    2014-01-01

    Bulk thermal rectifiers usable at high temperatures above 300 K were developed by making full use of the unusual electron thermal conductivity of icosahedral quasicrystals. The unusual electron thermal conductivity is caused by a synergy effect of quasiperiodicity and a narrow pseudogap at the Fermi level. The rectification ratio, defined as TRR = |J_large|/|J_small|, reached very large values exceeding 2.0. This significant thermal rectification could lead to new practical applications in heat management. (paper)
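
    Rectification of this kind can be pictured with a two-segment bar whose conductivities depend oppositely on temperature: the integrated flux then differs between the two bias directions. The sketch below computes the steady flux both ways and forms the TRR; the k(T) laws are invented illustrations, not the measured quasicrystal data.

        # Two-segment thermal rectifier sketch with temperature-dependent
        # conductivities. TRR = |J_large| / |J_small|. Illustrative k(T).
        from scipy.optimize import brentq

        L = 1.0                                 # segment length (arbitrary units)
        kA = lambda T: 1.0 + 0.004 * T          # conductivity rising with T
        kB = lambda T: 8.0 - 0.004 * T          # conductivity falling with T

        def seg_flux(k, T1, T2):
            """Steady 1D flux through one segment: (1/L) * integral of k dT."""
            n = 200
            dT = (T1 - T2) / n
            return sum(k(T2 + (i + 0.5) * dT) * dT for i in range(n)) / L

        def composite_flux(k_hot, k_cold, Th, Tc):
            # interface temperature Tm enforces flux continuity
            Tm = brentq(lambda T: seg_flux(k_hot, Th, T) - seg_flux(k_cold, T, Tc),
                        Tc, Th)
            return seg_flux(k_hot, Th, Tm)

        J_fwd = composite_flux(kA, kB, 600.0, 300.0)   # heat enters segment A
        J_rev = composite_flux(kB, kA, 600.0, 300.0)   # bar flipped
        print(round(max(J_fwd, J_rev) / min(J_fwd, J_rev), 3))   # the TRR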

  8. 76 FR 10671 - Assessments, Large Bank Pricing

    Science.gov (United States)

    2011-02-25

    ... 327 Assessments, Large Bank Pricing; Final Rule. Federal Register / Vol. 76, No. 38 / Friday... Part 327 RIN 3064-AD66 Assessments, Large Bank Pricing AGENCY: Federal Deposit Insurance Corporation..., (202) 898-6796; Lisa Ryu, Chief, Large Bank Pricing Section, Division of Insurance and Research, (202...

  9. [Zeme, vara un reliģija viduslaikos un jaunajas laikos Baltijas jūras reģionā] / Marija Golubeva

    Index Scriptorium Estoniae

    Golubeva, Marija, 1973-

    2011-01-01

    Review of: Zeme, vara un reliģija viduslaikos un jaunajas laikos Baltijas jūras reģionā / ed. by Andris Šnē. Acta Universitatis Latviensis; 725. (Riga: Verlag Latvijas Universitātes Akadēmiskais apgāds, 2009)

  10. Large forging manufacturing process

    Science.gov (United States)

    Thamboo, Samuel V.; Yang, Ling

    2002-01-01

    A process for forging large components of Alloy 718 material so that the components do not exhibit abnormal grain growth includes the steps of: a) providing a billet with an average grain size between ASTM 0 and ASTM 3; b) heating the billet to a temperature of between 1750 °F and 1800 °F; c) upsetting the billet to obtain a component part with a minimum strain of 0.125 in at least selected areas of the part; d) reheating the component part to a temperature between 1750 °F and 1800 °F; e) upsetting the component part to a final configuration such that said selected areas receive no strains between 0.01 and 0.125; f) solution treating the component part at a temperature of between 1725 °F and 1750 °F; and g) aging the component part over predetermined times at different temperatures. A modified process achieves abnormal grain growth in selected areas of a component where desirable.

  11. Stabilization Algorithms for Large-Scale Problems

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg

    2006-01-01

    The focus of the project is on stabilization of large-scale inverse problems where structured models and iterative algorithms are necessary for computing approximate solutions. For this purpose, we study various iterative Krylov methods and their abilities to produce regularized solutions. Part of the work concerns a heuristic based on the L-curve. This heuristic is implemented as a part of a larger algorithm which is developed in collaboration with G. Rodriguez and P. C. Hansen. Last, but not least, a large part of the project has, in different ways, revolved around the object-oriented Matlab toolbox MOORe Tools developed by PhD Michael Jacobsen. New
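
    For reference, the sketch below computes Tikhonov-regularized solutions of a toy ill-conditioned problem over a sweep of regularization parameters and records the (residual norm, solution norm) pairs that trace the L-curve, whose corner is the classic parameter-choice heuristic. The toy problem is an assumption for illustration, not the thesis code.

        # Tikhonov regularization and the L-curve on a toy problem:
        # x_lam = argmin ||A x - b||^2 + lam^2 ||x||^2
        import numpy as np

        rng = np.random.default_rng(1)
        A = np.vander(np.linspace(0, 1, 30), 10)      # ill-conditioned matrix
        x_true = rng.standard_normal(10)
        b = A @ x_true + 1e-3 * rng.standard_normal(30)

        curve = []
        for lam in np.logspace(-8, 0, 17):
            x = np.linalg.solve(A.T @ A + lam**2 * np.eye(10), A.T @ b)
            curve.append((np.linalg.norm(A @ x - b), np.linalg.norm(x)))

        for r, s in curve[::4]:                       # points along the L-curve
            print(f"residual {r:.2e}   solution norm {s:.2e}")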

  12. Large Eddy Simulations of Complex Flows in IC-Engine's Exhaust Manifold and Turbine

    OpenAIRE

    Fjällman, Johan

    2014-01-01

    The thesis deals with the flow in pipe bends and radial turbine geometries that are commonly found in an Internal Combustion Engine (ICE). The development phase of internal combustion engines relies more and more on simulations as an important complement to experiments, partly because of the reduction in development cost and the shortening of development time. This is one of the reasons for the need for more accurate and predictive simulations. By using more complex computational ...

  13. Cochlear implant users rely on tempo rather than on pitch information during perception of musical emotion.

    Science.gov (United States)

    Caldwell, Meredith; Rankin, Summer K; Jiradejvong, Patpong; Carver, Courtney; Limb, Charles J

    2015-09-01

    The purpose of this study was to investigate the extent to which cochlear implant (CI) users rely on tempo and mode in perception of musical emotion when compared with normal hearing (NH) individuals. A test battery of novel four-bar melodies was created and adapted to four permutations with alterations of tonality (major vs. minor) and tempo (presto vs. largo), resulting in non-ambiguous (major key/fast tempo and minor key/slow tempo) and ambiguous (major key/slow tempo, and minor key/fast tempo) musical stimuli. Both CI and NH participants listened to each clip and provided emotional ratings on a Likert scale of +5 (happy) to -5 (sad). A three-way ANOVA demonstrated an overall effect for tempo in both groups, and an overall effect for mode in the NH group. The CI group rated stimuli of the same tempo similarly, regardless of changes in mode, whereas the NH group did not. A subgroup analysis indicated the same effects in both musician and non-musician CI users and NH listeners. The results suggest that the CI group relied more heavily on tempo than mode in making musical emotion decisions. The subgroup analysis further suggests that level of musical training did not significantly impact this finding. CI users weigh temporal cues more heavily than pitch cues in inferring musical emotion. These findings highlight the significant disadvantage of CI users in comparison with NH listeners for music perception, particularly during recognition of musical emotion, a critically important feature of music.

  14. Forging technology for large nuclear pressure vessel parts

    International Nuclear Information System (INIS)

    Kakimoto, Hideki; Ikegami, Tomonori

    2014-01-01

    The increasing output of nuclear power generation calls for larger vessels for next-generation nuclear power plants. A vessel with an increased diameter requires an increased load for its forging, which can make it difficult to use a conventional solid die. In order to reduce the forging load, a rotary incremental forging method has been applied to hot forging. This method presses and rotates the material in an incremental manner until a target shape is obtained. This study aimed at improving the accuracy of numerical simulation of rotary incremental forging in order to reduce the load when forging large vessels. This has enabled the temperature of the material and the flow stress to be precisely predicted; an example is reported in the paper. Specifically, the heat transfer coefficient used in the numerical simulation was determined experimentally from a small-scale hot-forging test. The reduction of the flow stress associated with incremental forging was deduced from a compression test, and the value was applied to the numerical simulation. A preform was designed on the basis of the above simulation to perform a full-scale (1/1) experiment. A precision of better than 5% has been confirmed for the shape prediction. (author)

  15. The Software Reliability of Large Scale Integration Circuit and Very Large Scale Integration Circuit

    OpenAIRE

    Artem Ganiyev; Jan Vitasek

    2010-01-01

    This article describes a method for evaluating the fault-free operation of large scale integration circuits (LSI) and very large scale integration circuits (VLSI). The article presents a comparative analysis of the factors that determine the fault-free operation of integrated circuits, an analysis of existing methods, and a model for evaluating the fault-free operation of LSI and VLSI. The main part describes a proposed algorithm and program for analyzing the fault rate in LSI and VLSI circuits.
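
    A common baseline for such fault-rate evaluation is the constant-failure-rate (exponential) model with the circuit treated as a series system of blocks; a minimal sketch with invented failure rates, not values derived from the article:

        # Series-system reliability with constant failure rates:
        # R(t) = exp(-sum(lambda_i) * t). Lambda values are illustrative.
        import math

        lam = {"logic": 2e-7, "memory": 5e-7, "io": 1e-7}   # failures per hour
        lam_total = sum(lam.values())

        def reliability(t_hours):
            return math.exp(-lam_total * t_hours)

        mttf = 1.0 / lam_total                              # mean time to failure
        print(f"R(10 years) = {reliability(10 * 8760):.4f}, MTTF = {mttf:.3e} h")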

  16. Large Break LOCA Accident Management Strategies for Accidents With Large Containment Leaks

    International Nuclear Information System (INIS)

    Sdouz, Gert

    2006-01-01

    The goal of this work is the investigation of the influence of different accident management strategies on the thermal-hydraulics in the containment during a Large Break Loss of Coolant Accident with a large containment leak present from the beginning of the accident. The increasing relevance of terrorism suggests a closer look at this kind of severe accident. Normally, the course of severe accidents and their associated phenomena is investigated under the assumption of an intact containment from the beginning of the accident. An intact containment has the ability to retain a large part of the radioactive inventory; in these cases there is only a release via a very small leakage, due to the imperfect tightness of the containment, up to cavity bottom melt-through. This paper represents the last part of a comprehensive study on the influence of accident management strategies on the source term of VVER-1000 reactors. Basically, two different accident sequences were investigated: the 'Station Blackout' sequence and the 'Large Break LOCA'. In a first step, the source term calculations were performed assuming an intact containment from the beginning of the accident and no accident management action. In a further step, the influence of different accident management strategies was studied. The last part of the project was a repetition of the calculations under the assumption of a damaged containment from the beginning of the accident. This paper concentrates on this last step in the case of a Large Break LOCA. To allow comparison with calculations performed years ago, the calculations were performed using the Source Term Code Package (STCP); hydrogen explosions are not considered. In this study four different scenarios have been investigated, the main parameter being the switch-on time of the spray systems. One of the results is the influence of different accident management strategies on the source term. In the comparison with the sequence with intact containment it was

  17. Biostatistics primer: part I.

    Science.gov (United States)

    Overholser, Brian R; Sowinski, Kevin M

    2007-12-01

    Biostatistics is the application of statistics to biologic data. The field of statistics can be broken down into 2 fundamental parts: descriptive and inferential. Descriptive statistics are commonly used to categorize, display, and summarize data. Inferential statistics can be used to make predictions based on a sample obtained from a population or some large body of information. It is these inferences that are used to test specific research hypotheses. This 2-part review will outline important features of descriptive and inferential statistics as they apply to commonly conducted research studies in the biomedical literature. Part 1 in this issue will discuss fundamental topics of statistics and data analysis. Additionally, some of the most commonly used statistical tests found in the biomedical literature will be reviewed in Part 2 in the February 2008 issue.
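
    The descriptive/inferential split can be made concrete in a few lines: first summarize two samples, then test a hypothesis about the populations they came from. The group means, spreads and sizes below are invented for illustration.

        # Descriptive statistics summarize a sample; an inferential test
        # draws a conclusion about the underlying populations.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        control = rng.normal(120, 15, 40)     # e.g. blood pressure, group A
        treatment = rng.normal(112, 15, 40)   # group B

        # Descriptive: categorize, display, and summarize the observed data
        print(f"control  mean {control.mean():.1f}  sd {control.std(ddof=1):.1f}")
        print(f"treated  mean {treatment.mean():.1f}  sd {treatment.std(ddof=1):.1f}")

        # Inferential: test a research hypothesis about the populations
        t, p = stats.ttest_ind(control, treatment)
        print(f"t = {t:.2f}, p = {p:.4f}")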

  18. High resolution, large deformation 3D traction force microscopy.

    Directory of Open Access Journals (Sweden)

    Jennet Toyjanova

    Full Text Available Traction Force Microscopy (TFM) is a powerful approach for quantifying cell-material interactions that over the last two decades has contributed significantly to our understanding of cellular mechanosensing and mechanotransduction. In addition, recent advances in three-dimensional (3D) imaging and traction force analysis (3D TFM) have highlighted the significance of the third dimension in influencing various cellular processes. Yet irrespective of dimensionality, almost all TFM approaches have relied on a linear elastic theory framework to calculate cell surface tractions. Here we present a new high resolution 3D TFM algorithm which utilizes a large deformation formulation to quantify cellular displacement fields with unprecedented resolution. The results feature some of the first experimental evidence that cells are indeed capable of exerting large material deformations, which require the formulation of a new theoretical TFM framework to accurately calculate the traction forces. Based on our previous 3D TFM technique, we reformulate our approach to accurately account for large material deformation and quantitatively contrast and compare both linear and large deformation frameworks as a function of the applied cell deformation. Particular attention is paid to estimating the accuracy penalty associated with utilizing a traditional linear elastic approach in the presence of large deformation gradients.
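
    The difference between the linear and large-deformation frameworks comes down to the strain measure. The sketch below evaluates both the infinitesimal strain and the finite-deformation Green-Lagrange strain for one assumed displacement gradient, showing where the two diverge; the numbers are toy values, not the paper's data.

        # Small-strain vs Green-Lagrange strain for one displacement gradient.
        import numpy as np

        gradu = np.array([[0.30, 0.10, 0.00],     # displacement gradient du/dX
                          [0.00, -0.05, 0.00],    # (toy values, large enough
                          [0.05, 0.00, 0.10]])    # for the two to differ)

        eps_lin = 0.5 * (gradu + gradu.T)          # linear (infinitesimal) strain

        F = np.eye(3) + gradu                      # deformation gradient
        E = 0.5 * (F.T @ F - np.eye(3))            # Green-Lagrange strain

        print(np.round(eps_lin, 3))
        print(np.round(E, 3))                      # differs at large deformation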

  19. Large electrostatic accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Jones, C.M.

    1984-01-01

    The increasing importance of energetic heavy ion beams in the study of atomic physics, nuclear physics, and materials science has partially or wholly motivated the construction of a new generation of large electrostatic accelerators designed to operate at terminal potentials of 20 MV or above. In this paper, the author briefly discusses the status of these new accelerators and also discusses several recent technological advances which may be expected to further improve their performance. The paper is divided into four parts: (1) a discussion of the motivation for the construction of large electrostatic accelerators, (2) a description and discussion of several large electrostatic accelerators which have been recently completed or are under construction, (3) a description of several recent innovations which may be expected to improve the performance of large electrostatic accelerators in the future, and (4) a description of an innovative new large electrostatic accelerator whose construction is scheduled to begin next year. Due to time and space constraints, discussion is restricted to consideration of only tandem accelerators.

  20. Large electrostatic accelerators

    International Nuclear Information System (INIS)

    Jones, C.M.

    1984-01-01

    The increasing importance of energetic heavy ion beams in the study of atomic physics, nuclear physics, and materials science has partially or wholly motivated the construction of a new generation of large electrostatic accelerators designed to operate at terminal potentials of 20 MV or above. In this paper, the author briefly discusses the status of these new accelerators and also discusses several recent technological advances which may be expected to further improve their performance. The paper is divided into four parts: (1) a discussion of the motivation for the construction of large electrostatic accelerators, (2) a description and discussion of several large electrostatic accelerators which have been recently completed or are under construction, (3) a description of several recent innovations which may be expected to improve the performance of large electrostatic accelerators in the future, and (4) a description of an innovative new large electrostatic accelerator whose construction is scheduled to begin next year. Due to time and space constraints, discussion is restricted to consideration of only tandem accelerators

  1. Large-scale preparation of plasmid DNA.

    Science.gov (United States)

    Heilig, J S; Elbing, K L; Brent, R

    2001-05-01

    Although the need for large quantities of plasmid DNA has diminished as techniques for manipulating small quantities of DNA have improved, occasionally large amounts of high-quality plasmid DNA are desired. This unit describes the preparation of milligram quantities of highly purified plasmid DNA. The first part of the unit describes three methods for preparing crude lysates enriched in plasmid DNA from bacterial cells grown in liquid culture: alkaline lysis, boiling, and Triton lysis. The second part describes four methods for purifying plasmid DNA in such lysates away from contaminating RNA and protein: CsCl/ethidium bromide density gradient centrifugation, polyethylene glycol (PEG) precipitation, anion-exchange chromatography, and size-exclusion chromatography.

  2. A solvent- and vacuum-free route to large-area perovskite films for efficient solar modules

    Science.gov (United States)

    Chen, Han; Ye, Fei; Tang, Wentao; He, Jinjin; Yin, Maoshu; Wang, Yanbo; Xie, Fengxian; Bi, Enbing; Yang, Xudong; Grätzel, Michael; Han, Liyuan

    2017-10-01

    Recent advances in the use of organic-inorganic hybrid perovskites for optoelectronics have been rapid, with reported power conversion efficiencies of up to 22 per cent for perovskite solar cells. Improvements in stability have also enabled testing over a timescale of thousands of hours. However, large-scale deployment of such cells will also require the ability to produce large-area, uniformly high-quality perovskite films. A key challenge is to overcome the substantial reduction in power conversion efficiency when a small device is scaled up: a reduction from over 20 per cent to about 10 per cent is found when a common aperture area of about 0.1 square centimetres is increased to more than 25 square centimetres. Here we report a new deposition route for methyl ammonium lead halide perovskite films that does not rely on use of a common solvent or vacuum: rather, it relies on the rapid conversion of amine complex precursors to perovskite films, followed by a pressure application step. The deposited perovskite films were free of pin-holes and highly uniform. Importantly, the new deposition approach can be performed in air at low temperatures, facilitating fabrication of large-area perovskite devices. We reached a certified power conversion efficiency of 12.1 per cent with an aperture area of 36.1 square centimetres for a mesoporous TiO2-based perovskite solar module architecture.

  3. A Large Dimensional Analysis of Regularized Discriminant Analysis Classifiers

    KAUST Repository

    Elkhalil, Khalil

    2017-11-01

    This article carries out a large dimensional analysis of standard regularized discriminant analysis classifiers designed on the assumption that data arise from a Gaussian mixture model with different means and covariances. The analysis relies on fundamental results from random matrix theory (RMT) when both the number of features and the cardinality of the training data within each class grow large at the same pace. Under mild assumptions, we show that the asymptotic classification error approaches a deterministic quantity that depends only on the means and covariances associated with each class as well as the problem dimensions. Such a result permits a better understanding of the performance of regularized discriminant analysis in practical large but finite dimensions, and can be used to determine and pre-estimate the optimal regularization parameter that minimizes the misclassification error probability. Despite being theoretically valid only for Gaussian data, our findings are shown to yield a high accuracy in predicting the performance achieved with real data sets drawn from the popular USPS database, thereby making an interesting connection between theory and practice.
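
    The classifier under analysis can be sketched directly: shrink each class covariance toward the pooled covariance, then classify by the Gaussian discriminant score. The sketch below is a minimal NumPy version with an assumed shrinkage parameter; it illustrates the classifier family, not the paper's random-matrix analysis.

        # Regularized discriminant analysis (RDA) sketch: per-class
        # covariances shrunk toward the pooled covariance (alpha assumed).
        import numpy as np

        def fit_rda(X, y, alpha):
            pooled = np.cov(X.T)
            params = {}
            for c in np.unique(y):
                Xc = X[y == c]
                S = alpha * np.cov(Xc.T) + (1 - alpha) * pooled  # shrinkage
                params[c] = (Xc.mean(axis=0), np.linalg.inv(S),
                             np.log(np.linalg.det(S)), np.log(len(Xc) / len(X)))
            return params

        def predict(params, x):
            def score(p):                       # Gaussian discriminant score
                mu, Sinv, logdet, logpi = p
                d = x - mu
                return -0.5 * (d @ Sinv @ d) - 0.5 * logdet + logpi
            return max(params, key=lambda c: score(params[c]))

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(2, 1.5, (50, 2))])
        y = np.array([0] * 50 + [1] * 50)
        model = fit_rda(X, y, alpha=0.5)
        print(predict(model, np.array([1.8, 2.1])))   # -> 1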

  4. Application of k0-based internal monostandard NAA for large sample analysis of clay pottery as part of an intercomparison exercise

    International Nuclear Information System (INIS)

    Acharya, R.; Dasari, K.B.; Pujari, P.K.; Swain, K.K.; Shinde, A.D.; Reddy, A.V.R.

    2014-01-01

    As part of an intercomparison exercise within an IAEA Coordinated Research Project on large sample neutron activation analysis, a large, non-standard-geometry pottery replica (obtained from Peru) was analyzed by k0-based internal monostandard neutron activation analysis (IM-NAA). Two large sub-samples (0.40 and 0.25 kg) were irradiated at the graphite reflector position of the AHWR Critical Facility at BARC, Trombay, Mumbai, India. Small samples (100-200 mg) were also analyzed by IM-NAA for comparison purposes. Radioactive assay was carried out using a 40 % relative efficiency HPGe detector. To examine the homogeneity of the sample, counting was also carried out using an X-Z rotary scanning unit. In situ relative detection efficiency was evaluated using gamma rays of the activation products in the irradiated sample in the energy range of 122-2,754 keV. Elemental concentration ratios with respect to Na of small (100 mg) as well as large (15 and 400 g) samples were used to check the homogeneity of the samples. Concentration ratios of 18 elements, namely K, Sc, Cr, Mn, Fe, Co, Zn, As, Rb, Cs, La, Ce, Sm, Eu, Yb, Lu, Hf and Th, with respect to Na (the internal monostandard) were calculated using IM-NAA. Absolute concentrations were arrived at for both large and small samples using the Na concentration obtained from the relative method of NAA. The percentage combined uncertainties at the ±1s confidence limit on the determined values were in the range of 3-9 %. Two IAEA reference materials, SL-1 and SL-3, were analyzed by IM-NAA to evaluate the accuracy of the method. (author)

  5. Optimization of large-scale heterogeneous system-of-systems models.

    Energy Technology Data Exchange (ETDEWEB)

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Lee, Herbert K. H. (University of California, Santa Cruz, Santa Cruz, CA); Hart, William Eugene; Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Woodruff, David L. (University of California, Davis, Davis, CA)

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  6. Epoxy blanket protects milled part during explosive forming

    Science.gov (United States)

    1966-01-01

    Epoxy blanket protects chemically milled or machined sections of large, complex structural parts during explosive forming. The blanket uniformly covers all exposed surfaces and fills any voids to support and protect the entire part.

  7. Large-scale grid management; Storskala Nettforvaltning

    Energy Technology Data Exchange (ETDEWEB)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-07-01

    The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) a broader focus (Broad Band, Multi Utility,...) and (2) bigger units with large networks and more customers. Research done by SINTEF Energy Research shows so far that the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series.

  8. Policy support for large scale demonstration projects for hydrogen use in transport. Deliverable D 5.1 (Part B)

    International Nuclear Information System (INIS)

    Ros, M.E.; Jeeninga, H.; Godfroij, P.

    2007-06-01

    This research addresses possible policy support mechanisms for hydrogen use in transport, asking which policy support mechanism is potentially most effective at stimulating hydrogen in transport, especially for large-scale demonstrations. This is done in two steps: first, by surveying the possible policy support mechanisms for energy innovations; second, by relating these to the different technology development stages (R and D, early market and mass market stage) and reviewing their effect on different parts of the hydrogen energy chain (production, distribution and end-use). Additionally, a comparison of the current policy support mechanisms used in Europe (at EU level) with those in the United States (national and state level) is made. The analysis shows that in principle various policy support mechanisms can be used to stimulate hydrogen. The choice of policy support mechanism should depend on the need to reduce the investment cost (euros/MW) or the production/use cost (euros/GJ), or to increase the performance (euros/kg CO2 avoided) of a technology during its development. Careful thought has to be put into the design and choice of a policy support mechanism because it can have effects on other parts of the hydrogen energy chain, mostly on how hydrogen is produced. The effectiveness of a policy support mechanism greatly depends on the ability to adapt to the development of the technology and the changing requirements that come with technological progress; over time, different policy support mechanisms have to be applied. For demonstration projects there is currently a tendency to apply R and D subsidies in Europe, while the United States applies a variety of policy support mechanisms. The United States not only provides more generous support for demonstration projects but also has stronger incentives to prepare early market demand (for instance, requiring public procurement and sales obligations). In order to re-establish the level playing field, Europe may

  9. You can't rely on color, yet we all do 2.0 (Manuscript Only)

    Science.gov (United States)

    van Nes, Floris L.

    2014-01-01

    Everybody views and uses color from early childhood onwards. But this magnificent property of all objects around us turns out to be elusive when you try to specify it and communicate it to another person. Also, people often don't know what effects color may have under different conditions. However, color is so important and omnipresent that people can hardly avoid 'relying on it' - so they do, in particular on its predictability. Thus, there is a discrepancy between the seeming self-evidence of color and the difficulty of specifying it accurately for the prevailing circumstances. In order to analyze this situation, and possibly remedy it, a short historical perspective on the utilization and specification of color is given. The 'utilization' includes the emotional effects of color, which are important in, for instance, interior decorating but also play a role in literature and religion. 'Specification' begins with the early efforts by scientists, philosophers and artists to bring some order and understanding to what was observed with and while using color. Color has a number of basic functions: embellishment; attracting attention; coding; and bringing order to text by causing text parts presented in the same color to be judged as belonging together. People whose profession involves color choices for many others, such as designers and manufacturers of products, including electronic visual displays, should have a fairly thorough knowledge of colorimetry and color perception. Unfortunately, they often don't, simply because for 'practitioners' whose work involves many different aspects, applying color being only one of them, the available tools for specifying and applying color turn out to be too difficult to use. Two consequences of insufficient knowledge of the effects color may have are given here. The first of these consequences, on color blindness, relates to 8% of the population, but the second one, on reading colored text, bears on everyone. Practical

  10. Simple Model for Simulating Characteristics of River Flow Velocity in Large Scale

    Directory of Open Access Journals (Sweden)

    Husin Alatas

    2015-01-01

    We propose a simple computer-based phenomenological model to simulate the characteristics of river flow velocity on a large scale. We use a Shuttle Radar Topography Mission-based digital elevation model in grid form to define the terrain of the catchment area. The model relies on the mass-momentum conservation law and a modified equation of motion of a falling body on an inclined plane. We assume that an inelastic collision occurs at every junction of two river branches, to describe the dynamics of the merged flow velocity.
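    The junction rule lends itself to a one-line computation. Below is a minimal sketch of that rule under the stated inelastic-collision assumption; the mass fluxes and velocities are hypothetical.

```python
# Minimal sketch of the junction rule described above: treat the merging of
# two river branches as a perfectly inelastic collision, so the merged flow
# velocity follows from mass-momentum conservation. Values are hypothetical.

def merged_velocity(m1, v1, m2, v2):
    """Velocity after an inelastic merge of two streams with mass fluxes m1, m2."""
    return (m1 * v1 + m2 * v2) / (m1 + m2)

# Example: a fast tributary (2 m/s) joining a slower main stem (0.5 m/s)
print(merged_velocity(40.0, 0.5, 10.0, 2.0))  # -> 0.8 m/s
```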

  11. Overcurrent protection of transformers. Part 2: Traditional and new fusing philosophies for small and large transformers

    Energy Technology Data Exchange (ETDEWEB)

    Cook, C. J.; Niemira, J. K.

    2003-07-01

    New and traditional fusing philosophies for protecting transformers are discussed. This second part of a two-part paper covers selection criteria for a transformer-primary fuse to protect the transformer consistent with industry-accepted through-fault protection curves. Also covered are the principles of coordination as they relate to the proper selection of the primary-side fuse and power fuses, and the principles underlying the protection of load-side conductors and cables. The critical nature of secondary fault protection on small three-phase transformers used on industrial, commercial, and institutional power systems, as well as small-to-medium size three-phase power transformers used in utility substations, is emphasized, in view of the long lead time and expense involved in replacing these transformers. In contrast, no special protection recommendations are made for small-kVA overhead distribution transformers, since they are not considered likely to experience secondary faults, and the rare faults that do occur will not likely be detected and cleared by the primary fuse. Also of importance is the fact that these transformers are inexpensive and readily available. Overall, a large fuse rating, used in combination with a tank-mounted surge arrester, is recommended, because it can provide better transformer protection than the smaller fuse ratings traditionally employed. 4 refs., 2 tabs., 4 figs.

  12. Scramjet test flow reconstruction for a large-scale expansion tube, Part 1: quasi-one-dimensional modelling

    Science.gov (United States)

    Gildfind, D. E.; Jacobs, P. A.; Morgan, R. G.; Chan, W. Y. K.; Gollan, R. J.

    2017-11-01

    Large-scale free-piston driven expansion tubes have uniquely high total pressure capabilities which make them an important resource for development of access-to-space scramjet engine technology. However, many aspects of their operation are complex, and their test flows are fundamentally unsteady and difficult to measure. While computational fluid dynamics methods provide an important tool for quantifying these flows, these calculations become very expensive with increasing facility size and therefore have to be carefully constructed to ensure sufficient accuracy is achieved within feasible computational times. This study examines modelling strategies for a Mach 10 scramjet test condition developed for The University of Queensland's X3 facility. The present paper outlines the challenges associated with test flow reconstruction, describes the experimental set-up for the X3 experiments, and then details the development of an experimentally tuned quasi-one-dimensional CFD model of the full facility. The 1-D model, which accurately captures longitudinal wave processes, is used to calculate the transient flow history in the shock tube. This becomes the inflow to a higher-fidelity 2-D axisymmetric simulation of the downstream facility, detailed in the Part 2 companion paper, leading to a validated, fully defined nozzle exit test flow.

  13. Scramjet test flow reconstruction for a large-scale expansion tube, Part 1: quasi-one-dimensional modelling

    Science.gov (United States)

    Gildfind, D. E.; Jacobs, P. A.; Morgan, R. G.; Chan, W. Y. K.; Gollan, R. J.

    2018-07-01

    Large-scale free-piston driven expansion tubes have uniquely high total pressure capabilities which make them an important resource for development of access-to-space scramjet engine technology. However, many aspects of their operation are complex, and their test flows are fundamentally unsteady and difficult to measure. While computational fluid dynamics methods provide an important tool for quantifying these flows, these calculations become very expensive with increasing facility size and therefore have to be carefully constructed to ensure sufficient accuracy is achieved within feasible computational times. This study examines modelling strategies for a Mach 10 scramjet test condition developed for The University of Queensland's X3 facility. The present paper outlines the challenges associated with test flow reconstruction, describes the experimental set-up for the X3 experiments, and then details the development of an experimentally tuned quasi-one-dimensional CFD model of the full facility. The 1-D model, which accurately captures longitudinal wave processes, is used to calculate the transient flow history in the shock tube. This becomes the inflow to a higher-fidelity 2-D axisymmetric simulation of the downstream facility, detailed in the Part 2 companion paper, leading to a validated, fully defined nozzle exit test flow.

  14. In-bore instrumentation/diagnostics for large-bore EMLs

    International Nuclear Information System (INIS)

    Fernandez, M.J.; Ager, S.A.; Hudson, R.D.

    1991-01-01

    This paper reports on a flying laboratory technique of in-bore diagnostics for large-bore electromagnetic launchers (EMLs). The high-pressure, heat, and magnetic-flux environment of the EML and its containment structures does not allow easy implementation of conventional diagnostic techniques. Researchers have relied on remote sensing methods, such as B probes (isolated from the bore), for data. The accuracy and relevance of such discrete, remote measurements are somewhat questionable. An in-house program has been initiated to determine the feasibility of making measurements of EML parameters on board a projectile. This technique utilizes off-the-shelf components in a configuration that has been proven effective in measuring projectile acceleration in the bore of propellant-driven guns

  15. Characterize kinematic rupture history of large earthquakes with Multiple Haskell sources

    Science.gov (United States)

    Jia, Z.; Zhan, Z.

    2017-12-01

    Earthquakes are often regarded as continuous rupture along a single fault, but the occurrence of complex large events involving multiple faults and dynamic triggering challenges this view. Such rupture complexities cause difficulties in existing finite fault inversion algorithms, because they rely on specific parameterizations and regularizations to obtain physically meaningful solutions. Furthermore, it is difficult to assess the reliability and uncertainty of the obtained rupture models. Here we develop a Multi-Haskell Source (MHS) method to estimate the rupture process of large earthquakes as a series of sub-events of varying location, timing and directivity. Each sub-event is characterized by a Haskell rupture model with uniform dislocation and constant unilateral rupture velocity. This flexible yet simple source parameterization allows us to constrain first-order rupture complexity of large earthquakes robustly. Additionally, the relatively small number of parameters in the inverse problem yields improved uncertainty analysis based on Markov chain Monte Carlo sampling in a Bayesian framework. Synthetic tests and applications of the MHS method to real earthquakes show that our method can capture major features of large earthquake rupture process, and provide information for more detailed rupture history analysis.
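    As a hedged illustration of the Bayesian sampling machinery the abstract refers to, the sketch below runs a random-walk Metropolis sampler over a toy two-parameter sub-event (location and origin time), with a synthetic Gaussian misfit standing in for the waveform-fit likelihood; none of the numbers come from the paper.

```python
# Hedged sketch of Markov chain Monte Carlo sampling in a Bayesian framework:
# random-walk Metropolis over a toy sub-event parameter vector (x km, t s).
import numpy as np

rng = np.random.default_rng(0)
true_params = np.array([12.0, 3.5])          # hypothetical sub-event location/time

def log_posterior(theta):
    # Toy Gaussian misfit standing in for a real waveform-fit likelihood
    return -0.5 * np.sum((theta - true_params) ** 2)

theta = np.zeros(2)
samples = []
for _ in range(20000):
    proposal = theta + rng.normal(scale=0.5, size=2)   # random-walk proposal
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal                               # Metropolis accept step
    samples.append(theta.copy())

print(np.mean(samples[5000:], axis=0))  # posterior mean, near (12.0, 3.5)
```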

  16. Fringe fields modeling for the high luminosity LHC large aperture quadrupoles

    CERN Document Server

    Dalena, B; Payet, J; Chancé, A; Brett, D R; Appleby, R B; De Maria, R; Giovannozzi, M

    2014-01-01

    The HL-LHC Upgrade project relies on large aperture magnets (mainly the inner Triplet and the separation dipole D1). The beam is much more sensitive to non-linear perturbations in this region, such as those induced by the fringe fields of the low-beta quadrupoles. Different tracking models are compared in order to provide a numerical estimate of the impact of fringe fields for the actual design of the inner triplet quadrupoles. The implementation of the fringe fields in SixTrack, to be used for dynamic aperture studies, is also discussed.

  17. Precise large deviations of aggregate claims in a size-dependent renewal risk model with stopping time claim-number process

    Directory of Open Access Journals (Sweden)

    Shuo Zhang

    2017-04-01

    In this paper, we consider a size-dependent renewal risk model with a stopping-time claim-number process. In this model, we do not make any assumption on the dependence structure of claim sizes and inter-arrival times. We study large deviations of the aggregate amount of claims. For the subexponential heavy-tailed case, we obtain a precise large-deviation formula; our method relies substantially on a martingale constructed for the structure of our model.
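    For orientation, classical precise large-deviation asymptotics for heavy-tailed sums take the following form (stated here for non-random sums of i.i.d. claims, a standard result in this literature; the paper above derives an analogue for its size-dependent renewal model):

```latex
% Classical precise large-deviation asymptotics for subexponential claims,
% shown for non-random sums; the uniformity region depends on the tail class.
\[
  \Pr\!\left( S_n - n\mu > x \right) \;\sim\; n\,\overline{F}(x),
  \qquad n \to \infty,
\]
% uniformly for x >= \gamma n (each fixed \gamma > 0), where S_n is the sum
% of n i.i.d. claims with common distribution F, finite mean \mu, and
% heavy (subexponential) tail \overline{F} = 1 - F.
```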

  18. High-frequency remote monitoring of large lakes with MODIS 500 m imagery

    Science.gov (United States)

    McCullough, Ian M.; Loftin, Cynthia S.; Sader, Steven A.

    2012-01-01

    Satellite-based remote monitoring programs of regional lake water quality have largely relied on Landsat Thematic Mapper (TM) owing to its long image archive, moderate spatial resolution (30 m), and wide sensitivity in the visible portion of the electromagnetic spectrum, despite some notable limitations such as temporal resolution (i.e., 16 days), data pre-processing requirements to improve data quality, and aging satellites. Moderate-Resolution Imaging Spectroradiometer (MODIS) sensors on Aqua/Terra platforms compensate for these shortcomings, although at the expense of spatial resolution. We developed and evaluated a remote monitoring protocol for water clarity of large lakes using MODIS 500 m data and compared MODIS utility to Landsat-based methods. MODIS images captured during May–September 2001, 2004 and 2010 were analyzed with linear regression to identify the relationship between lake water clarity and satellite-measured surface reflectance. Correlations were strong (R² = 0.72–0.94) throughout the study period; however, they were most consistent in August, reflecting seasonally unstable lake conditions and inter-annual differences in algal productivity during the other months. The utility of MODIS data in remote water quality estimation lies in intra-annual monitoring of lake water clarity in inaccessible, large lakes, whereas Landsat is more appropriate for inter-annual, regional trend analyses of lakes ≥ 8 ha. Model accuracy is improved when ancillary variables are included to reflect seasonal lake dynamics and weather patterns that influence lake clarity. The identification of landscape-scale drivers of regional water quality is a useful way to supplement satellite-based remote monitoring programs relying on spectral data alone.
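    The regression step can be sketched in a few lines. The following is a hedged illustration, with hypothetical reflectance values and band choice, of fitting ln(Secchi depth) against MODIS surface reflectance by ordinary least squares:

```python
# Hedged sketch of the regression described above: relate field-measured
# water clarity (ln Secchi depth) to MODIS 500 m surface reflectance.
# Band choice and all values are hypothetical placeholders.
import numpy as np

ln_secchi = np.array([1.10, 0.85, 1.45, 0.60, 1.30])        # ln(m) at stations
reflectance = np.array([0.030, 0.042, 0.021, 0.055, 0.025])  # matching pixels

slope, intercept = np.polyfit(reflectance, ln_secchi, 1)     # OLS fit
pred = slope * reflectance + intercept
r2 = 1 - np.sum((ln_secchi - pred) ** 2) / np.sum((ln_secchi - ln_secchi.mean()) ** 2)
print(f"ln(SD) = {slope:.1f} * R + {intercept:.2f},  R^2 = {r2:.2f}")
```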

  19. Part-based Pedestrian Detection and Feature-based Tracking for Driver Assistance

    DEFF Research Database (Denmark)

    Prioletti, Antonio; Møgelmose, Andreas; Grislieri, Paolo

    2013-01-01

    Detecting pedestrians is still a challenging task for automotive vision systems due to the extreme variability of targets, lighting conditions, occlusion, and high-speed vehicle motion. Much research has been focused on this problem in the last ten years and detectors based on classifiers have...... on a prototype vehicle and offers high performance in terms of several metrics, such as detection rate, false positives per hour, and frame rate. The novelty of this system lies in the combination of a HOG part-based approach, tracking based on a specific optimized feature, and porting to a real prototype.
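    As a minimal, hedged illustration of the HOG detection stage, the sketch below uses OpenCV's stock monolithic pedestrian detector rather than the paper's part-based model and tracker; the input file name is hypothetical.

```python
# HOG-based pedestrian detection with OpenCV's default people detector.
# This is a monolithic (not part-based) HOG+SVM model, shown only to
# illustrate the detection stage the abstract builds on.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("frame.png")                     # hypothetical input frame
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
for (x, y, w, h) in boxes:                          # draw detections
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.png", frame)
```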

  20. Working Around the Military. Challenges to Military Spouse Employment and Education

    National Research Council Canada - National Science Library

    Harrell, Margaret

    2004-01-01

    Successful recruiting and retention of the active duty force relies in large part on the extent to which service members and their spouses experience both job satisfaction and contentment with life in the military...

  1. Plug-and-Design: Embracing Mobile Devices as Part of the Design Environment

    OpenAIRE

    MESKENS, Jan; LUYTEN, Kris; CONINX, Karin

    2009-01-01

    Due to the large number of mobile devices that continue to appear on the consumer market, mobile user interface design becomes increasingly important. The major issue with many existing mobile user interface design approaches is the time and effort needed to deploy a user interface design to the target device. In order to address this issue, we propose the plug-and-design tool, which relies on a continuous multi-device mouse pointer to design user interfaces directly on the mobile targe...

  2. A regularized vortex-particle mesh method for large eddy simulation

    DEFF Research Database (Denmark)

    Spietz, Henrik Juul; Walther, Jens Honore; Hejlesen, Mads Mølholm

    We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel higher-order FFT-based solver for the Poisson equation. Arbitrarily high order is achieved through regularization of singular Green's function...... solutions to the Poisson equation, and recently we have derived novel high-order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier-Stokes equations, hence we use the method for Large Eddy...
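    A toy version of the FFT-based Poisson solve at the heart of such a method, restricted to a fully periodic domain and without the paper's regularized Green's functions, might look as follows:

```python
# Minimal FFT Poisson solve on a periodic mesh:  lap(psi) = -omega,
# i.e. psi_hat = omega_hat / |k|^2 in Fourier space. The paper's regularized
# kernels and mixed open/periodic domains are beyond this toy version.
import numpy as np

n, L = 64, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
omega = np.sin(X) * np.sin(Y)                    # sample vorticity field

k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)       # angular wavenumbers
KX, KY = np.meshgrid(k, k, indexing="ij")
k2 = KX**2 + KY**2
k2[0, 0] = 1.0                                   # avoid division by zero

psi_hat = np.fft.fft2(omega) / k2
psi_hat[0, 0] = 0.0                              # fix the mean (gauge) mode
psi = np.real(np.fft.ifft2(psi_hat))

# check: for omega = sin(x)sin(y), the exact solution is psi = omega / 2
print(np.max(np.abs(psi - omega / 2)))           # ~ machine precision
```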

  3. Resource analysis of the Chinese society 1980-2002 based on exergy-Part 4: Fishery and rangeland

    International Nuclear Information System (INIS)

    Chen, B.; Chen, G.Q.

    2007-01-01

    This fourth part is the continuation of the third part, on agricultural products. The major fishery and rangeland products entering Chinese society from 1980 to 2002 are calculated and analyzed in detail in this paper. Aquatic production, mainly relying on freshwater and seawater breeding, is examined, and the enhancement policy for fishery resources, including the closed fishing season system, the construction of artificial fish reefs, ecological fish breeding, etc., is discussed in detail. The degradation of the major rangeland areas, hay yields, and the intake of rangeland resources by livestock are also described, in association with the strategic adjustment and comprehensive program to protect rangeland resources during the study period

  4. Rucio - The next generation large scale distributed system for ATLAS Data Management

    CERN Document Server

    Beermann, T; The ATLAS collaboration; Lassnig, M; Barisits, M; Vigne, R; Serfon, C; Stewart, G A; Goossens, L; Nairz, A; Molfetas, A

    2014-01-01

    Rucio is the next-generation Distributed Data Management (DDM) system benefiting from recent advances in cloud and "Big Data" computing to address the ATLAS experiment's scaling requirements. Rucio is an evolution of the ATLAS DDM system Don Quijote 2 (DQ2), which has demonstrated very large scale data management capabilities with more than 150 petabytes spread worldwide across 130 sites, and access by 1,000 active users. However, DQ2 is reaching its limits in terms of scalability, requiring a large number of support staff to operate and being hard to extend with new technologies. Rucio will deal with these issues by relying on new technologies to ensure system scalability, address new user requirements and employ a new automation framework to reduce operational overheads.

  5. The Low Pitch of High-Frequency Complex Tones Relies on Temporal Fine Structure Information

    DEFF Research Database (Denmark)

    Santurette, Sébastien; Dau, Torsten

    2010-01-01

    High-frequency complex tones containing only unresolved harmonic components with a frequency spacing Δf usually evoke a low pitch equal to Δf. However, for inharmonic components, the low pitch is often found to deviate slightly from Δf. Whether this pitch shift relies exclusively on temporal fine structure (TFS) cues has been a matter of debate. It is also controversial up to which frequency TFS information remains available, and to what extent envelope cues become dominant as frequency increases. Using a pitch-matching paradigm, this study investigated whether the pitch of transposed tones.......5]. All stimuli were presented at 50 dB SPL in broadband pink noise (13.5 dB/Hz at 1 kHz), and 40 matches per condition were obtained. For fenv = fc/11.5, the results favored hypothesis A for all values of fc, indicating that TFS cues are available and used for pitch extraction, up to at least 7 k...

  6. Large-group psychodynamics and massive violence

    Directory of Open Access Journals (Sweden)

    Vamik D. Volkan

    2006-06-01

    Beginning with Freud, psychoanalytic theories concerning large groups have mainly focused on individuals' perceptions of what their large groups psychologically mean to them. This chapter examines some aspects of large-group psychology in its own right and studies the psychodynamics of ethnic, national, religious or ideological groups, membership of which originates in childhood. I will compare the mourning process in individuals with the mourning process in large groups to illustrate why we need to study large-group psychology as a subject in itself. As part of this discussion I will also describe signs and symptoms of large-group regression. When there is a threat against a large group's identity, massive violence may be initiated, and this violence in turn has an obvious impact on public health.

  7. A large deviation principle in Hölder norm for multiple fractional integrals

    OpenAIRE

    Sanz-Solé, Marta; Torrecilla-Tarantino, Iván

    2007-01-01

    For a fractional Brownian motion $B^H$ with Hurst parameter $H\in]1/4,1/2[\,\cup\,]1/2,1[$, multiple indefinite integrals on a simplex are constructed and the regularity of their sample paths is studied. Then, it is proved that the family of probability laws of the processes obtained by replacing $B^H$ by $\epsilon^{1/2} B^H$ satisfies a large deviation principle in Hölder norm. The definition of the multiple integrals relies upon a representation of the fractional Brownian motion in t...

  8. Large area and flexible electronics

    CERN Document Server

    Caironi, Mario

    2015-01-01

    From materials to applications, this ready reference covers the entire value chain from fundamentals via processing right up to devices, presenting different approaches to large-area electronics, thus enabling readers to compare materials, properties and performance. The book is divided into two parts: the first focuses on the materials used for the electronic functionality, covering organic and inorganic semiconductors, including vacuum and solution-processed metal-oxide semiconductors, nanomembranes and nanocrystals, as well as conductors and insulators. The second part reviews the devices and applications

  9. Organization of Circadian Behavior Relies on Glycinergic Transmission.

    Science.gov (United States)

    Frenkel, Lia; Muraro, Nara I; Beltrán González, Andrea N; Marcora, María S; Bernabó, Guillermo; Hermann-Luibl, Christiane; Romero, Juan I; Helfrich-Förster, Charlotte; Castaño, Eduardo M; Marino-Busjle, Cristina; Calvo, Daniel J; Ceriani, M Fernanda

    2017-04-04

    The small ventral lateral neurons (sLNvs) constitute a central circadian pacemaker in the Drosophila brain. They organize daily locomotor activity, partly through the release of the neuropeptide pigment-dispersing factor (PDF), coordinating the action of the remaining clusters required for network synchronization. Despite extensive efforts, the basic principles underlying communication among circadian clusters remain obscure. We identified classical neurotransmitters released by sLNvs through disruption of specific transporters. Adult-specific RNAi-mediated downregulation of the glycine transporter or impairment of glycine synthesis in LNv neurons increased period length by nearly an hour without affecting rhythmicity of locomotor activity. Electrophysiological recordings showed that glycine reduces spiking frequency in circadian neurons. Interestingly, downregulation of glycine receptor subunits in specific sLNv targets impaired rhythmicity, revealing involvement of glycine in information processing within the network. These data identify glycinergic inhibition of specific targets as a cue that contributes to the synchronization of the circadian network.

  10. Dipolar modulation of Large-Scale Structure

    Science.gov (United States)

    Yoon, Mijin

    For the last two decades, we have seen a drastic development of modern cosmology based on various observations such as the cosmic microwave background (CMB), type Ia supernovae, and baryonic acoustic oscillations (BAO). This observational evidence has led us to a great deal of consensus on the cosmological model, the so-called LambdaCDM, and tight constraints on the cosmological parameters that make up the model. On the other hand, the advancement of cosmology relies on the cosmological principle: the universe is isotropic and homogeneous on large scales. Testing these fundamental assumptions is crucial and will soon become possible given the planned observations ahead. Dipolar modulation is the largest angular anisotropy of the sky, which is quantified by its direction and amplitude. We measured a huge dipolar modulation in the CMB, which mainly originated from our solar system's motion relative to the CMB rest frame. However, we have not yet acquired consistent measurements of dipolar modulations in large-scale structure (LSS), as they require large sky coverage and a large number of well-identified objects. In this thesis, we explore the measurement of dipolar modulation in number counts of LSS objects as a test of statistical isotropy. This thesis is based on two papers that were published in peer-reviewed journals. In Chapter 2 [Yoon et al., 2014], we measured a dipolar modulation in number counts of WISE sources matched with 2MASS. In Chapter 3 [Yoon & Huterer, 2015], we investigated requirements for the detection of the kinematic dipole in future surveys.

  11. Substantial improvements in large-scale redocking and screening using the novel HYDE scoring function

    Science.gov (United States)

    Schneider, Nadine; Hindle, Sally; Lange, Gudrun; Klein, Robert; Albrecht, Jürgen; Briem, Hans; Beyer, Kristin; Claußen, Holger; Gastreich, Marcus; Lemmen, Christian; Rarey, Matthias

    2012-06-01

    The HYDE scoring function consistently describes hydrogen bonding, the hydrophobic effect and desolvation. It relies on HYdration and DEsolvation terms which are calibrated using octanol/water partition coefficients of small molecules. We do not use affinity data for calibration, therefore HYDE is generally applicable to all protein targets. HYDE reflects the Gibbs free energy of binding while only considering the essential interactions of protein-ligand complexes. The greatest benefit of HYDE is that it yields a very intuitive atom-based score, which can be mapped onto the ligand and protein atoms. This allows the direct visualization of the score and consequently facilitates analysis of protein-ligand complexes during the lead optimization process. In this study, we validated our new scoring function by applying it in large-scale docking experiments. We could successfully predict the correct binding mode in 93% of complexes in redocking calculations on the Astex diverse set, while our performance in virtual screening experiments using the DUD dataset showed significant enrichment values with a mean AUC of 0.77 across all protein targets with few or no structural defects. As part of these studies, we also carried out a very detailed analysis of the data that revealed interesting pitfalls, which we highlight here and which should be addressed in future benchmark datasets.

  12. The large deviation approach to statistical mechanics

    International Nuclear Information System (INIS)

    Touchette, Hugo

    2009-01-01

    The theory of large deviations is concerned with the exponential decay of probabilities of large fluctuations in random systems. These probabilities are important in many fields of study, including statistics, finance, and engineering, as they often yield valuable information about the large fluctuations of a random system around its most probable state or trajectory. In the context of equilibrium statistical mechanics, the theory of large deviations provides exponential-order estimates of probabilities that refine and generalize Einstein's theory of fluctuations. This review explores this and other connections between large deviation theory and statistical mechanics, in an effort to show that the mathematical language of statistical mechanics is the language of large deviation theory. The first part of the review presents the basics of large deviation theory, and works out many of its classical applications related to sums of random variables and Markov processes. The second part goes through many problems and results of statistical mechanics, and shows how these can be formulated and derived within the context of large deviation theory. The problems and results treated cover a wide range of physical systems, including equilibrium many-particle systems, noise-perturbed dynamics, nonequilibrium systems, as well as multifractals, disordered systems, and chaotic systems. This review also covers many fundamental aspects of statistical mechanics, such as the derivation of variational principles characterizing equilibrium and nonequilibrium states, the breaking of the Legendre transform for nonconcave entropies, and the characterization of nonequilibrium fluctuations through fluctuation relations.
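    The two formulas at the center of this program can be stated compactly: the large deviation principle for an observable S_n (such as a sample mean), and the Legendre-Fenchel route from the scaled cumulant generating function to the rate function (the Gärtner-Ellis theorem):

```latex
% The large deviation principle for an observable S_n, and the rate function
% obtained from the scaled cumulant generating function (SCGF) lambda(k):
\[
  P(S_n \in [s, s + ds]) \approx e^{-n I(s)}\, ds ,
  \qquad
  \lambda(k) = \lim_{n \to \infty} \frac{1}{n} \ln E\!\left[ e^{n k S_n} \right],
\]
\[
  I(s) = \sup_{k} \left\{ k s - \lambda(k) \right\}
  \quad \text{(Legendre-Fenchel transform; Gartner-Ellis applies when } \lambda
  \text{ is differentiable).}
\]
```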

  13. The large deviation approach to statistical mechanics

    Science.gov (United States)

    Touchette, Hugo

    2009-07-01

    The theory of large deviations is concerned with the exponential decay of probabilities of large fluctuations in random systems. These probabilities are important in many fields of study, including statistics, finance, and engineering, as they often yield valuable information about the large fluctuations of a random system around its most probable state or trajectory. In the context of equilibrium statistical mechanics, the theory of large deviations provides exponential-order estimates of probabilities that refine and generalize Einstein’s theory of fluctuations. This review explores this and other connections between large deviation theory and statistical mechanics, in an effort to show that the mathematical language of statistical mechanics is the language of large deviation theory. The first part of the review presents the basics of large deviation theory, and works out many of its classical applications related to sums of random variables and Markov processes. The second part goes through many problems and results of statistical mechanics, and shows how these can be formulated and derived within the context of large deviation theory. The problems and results treated cover a wide range of physical systems, including equilibrium many-particle systems, noise-perturbed dynamics, nonequilibrium systems, as well as multifractals, disordered systems, and chaotic systems. This review also covers many fundamental aspects of statistical mechanics, such as the derivation of variational principles characterizing equilibrium and nonequilibrium states, the breaking of the Legendre transform for nonconcave entropies, and the characterization of nonequilibrium fluctuations through fluctuation relations.

  14. A course in mathematical statistics and large sample theory

    CERN Document Server

    Bhattacharya, Rabi; Patrangenaru, Victor

    2016-01-01

    This graduate-level textbook is primarily aimed at graduate students of statistics, mathematics, science, and engineering who have had an undergraduate course in statistics, an upper-division course in analysis, and some acquaintance with measure-theoretic probability. It provides a rigorous presentation of the core of mathematical statistics. Part I of this book constitutes a one-semester course on basic parametric mathematical statistics. Part II deals with the large sample theory of statistics — parametric and nonparametric — and its contents may be covered in one semester as well. Part III provides brief accounts of a number of topics of current interest for practitioners and other disciplines whose work involves statistical methods. Large sample theory with many worked examples, numerical calculations, and simulations to illustrate theory; appendices provide ready access to a number of standard results, with many proofs; solutions given to a number of selected exercises from Part I; Part II exercises with ...

  15. A methodology for fault diagnosis in large chemical processes and an application to a multistage flash desalination process: Part I

    International Nuclear Information System (INIS)

    Tarifa, Enrique E.; Scenna, Nicolas J.

    1998-01-01

    This work presents a new strategy for fault diagnosis in large chemical processes (E.E. Tarifa, Fault diagnosis in complex chemical plants: plants of large dimensions and batch processes. Ph.D. thesis, Universidad Nacional del Litoral, Santa Fe, 1995). A special decomposition of the plant into sectors is made, after which each sector is studied independently. These steps are carried out in the off-line mode, and they produce vital information for the diagnosis system. This system works in the on-line mode and is based on a two-tier strategy. When a fault occurs, the upper level identifies the faulty sector. Then, the lower level carries out an in-depth study that focuses only on the critical sectors to identify the fault. The loss of information produced by the process partition may cause spurious diagnoses. This problem is overcome at the second level using qualitative simulation and fuzzy logic. In the second part of this work, the new methodology is tested to evaluate its performance in practical cases. A multiple-stage flash desalination system (MSF) is chosen because it is a complex system, with many recycles and variables to be supervised. The steps for the knowledge-base generation and all the blocks included in the diagnosis system are analyzed. Evaluation of the diagnosis performance is carried out using a rigorous dynamic simulator
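    A hedged sketch of the two-tier logic follows, with hypothetical sectors, symptoms, and thresholds; the real system uses qualitative simulation and a full fuzzy rule base rather than this toy scoring.

```python
# Two-tier diagnosis sketch: an upper level flags the faulty sector from
# coarse sector-level residuals; a lower level ranks candidate faults inside
# that sector with triangular fuzzy memberships. All values are hypothetical.

def upper_level(sector_residuals, threshold=1.0):
    """Return the sector whose aggregate residual most exceeds the threshold."""
    worst = max(sector_residuals, key=sector_residuals.get)
    return worst if sector_residuals[worst] > threshold else None

def membership(deviation, low, high):
    """Triangular fuzzy membership of a symptom deviation over [low, high]."""
    if deviation <= low or deviation >= high:
        return 0.0
    mid = (low + high) / 2
    return ((deviation - low) / (mid - low) if deviation < mid
            else (high - deviation) / (high - mid))

sector = upper_level({"brine_heater": 2.3, "recovery_stages": 0.4})
fault_scores = {                       # lower level, only for the flagged sector
    "tube_fouling": membership(2.3, 1.0, 4.0),
    "steam_valve":  membership(2.3, 2.0, 6.0),
}
print(sector, max(fault_scores, key=fault_scores.get))  # brine_heater tube_fouling
```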

  16. Glass: a small part of the Climate Change problem, a large part of the solution

    Directory of Open Access Journals (Sweden)

    Stockdale, J.

    2009-04-01

    The challenging EU targets for reducing greenhouse gas emissions and generating electricity from renewable sources were established as -20% and 20%, respectively, by 2020. As part of the strategy, the EU confirmed in 2007 the need to save around 300 million tonnes of CO2 per year from EU buildings by 2020. Housing itself accounts for some 40% of emissions, mostly associated with heating. Industry will be expected to source and use appropriate materials and process technologies to improve its own energy consumption and at the same time deliver products that make it possible to reach those targets. This article examines the relationship between the emissions from relevant sectors of the glass industry and compares them with the carbon savings that can be achieved with the products the industry makes. Four main areas are discussed: glass fibre insulation, advanced glazing (low-emissivity glass and advanced solar glass), continuous filament glass fibre and special glass applications. It is suggested that, as well as considering the use of free allowances or border carbon adjustment, member states need to take account of the benefit of these products when formulating emission constraint policies; a carbon credit feedback loop should also be explored to encourage cheaper production and installation and avoid carbon leakage.

    The EU has set targets of a 20% reduction in CO2 emissions and of generating 20% of electricity from renewable sources by 2020. As part of this strategy, in 2007 the EU confirmed the need to reduce emissions from buildings by 300 million tonnes per year by that same year, 2020. The housing stock contributes around 40% of emissions, basically related to heating systems. Industry is expected to use appropriate processes to improve its own energy consumption and, at the same time, to develop and produce materials that

  17. Importance of the inverted control in measuring holistic face processing with the composite effect and part-whole effect.

    Directory of Open Access Journals (Sweden)

    Elinor eMcKone

    2013-02-01

    Holistic coding for faces is shown in several illusions that demonstrate integration of the percept across the entire face. The illusions occur upright but, crucially, not inverted. Converting the illusions into experimental tasks that measure their strength – and thus index degree of holistic coding – is often considered straightforward yet in fact relies on a hidden assumption, namely that there is no contribution to the experimental measure from secondary cognitive factors. For the composite effect, a relevant secondary factor is the size of the "spotlight" of visuospatial attention. The composite task assumes this spotlight can be easily restricted to the target half (e.g., top half) of the compound face stimulus. Yet, if this assumption were not true, then a large spotlight, in the absence of holistic perception, could produce a false composite effect, present even for inverted faces and contributing partially to the score for upright faces. We review evidence that various factors can influence spotlight size: race/culture (Asians often prefer a more global distribution of attention than Caucasians); sex (females can be more global); appearance of the join or gap between face halves; and location of the eyes, which typically attract attention. Results from 5 experiments then show inverted faces can sometimes produce large false composite effects, and imply that whether this happens or not depends on complex interactions between causal factors. We also report, for both identity and expression, that only top-half-face targets (containing eyes) produce valid composite measures. A sixth experiment demonstrates an example of a false inverted part-whole effect, where encoding-specificity is the secondary cognitive factor. We conclude the inverted face control should be tested in all composite and part-whole studies, and an effect for upright faces should be interpreted as a pure measure of holistic processing only when the experimental design produces

  18. Importance of the Inverted Control in Measuring Holistic Face Processing with the Composite Effect and Part-Whole Effect

    Science.gov (United States)

    McKone, Elinor; Davies, Anne Aimola; Darke, Hayley; Crookes, Kate; Wickramariyaratne, Tushara; Zappia, Stephanie; Fiorentini, Chiara; Favelle, Simone; Broughton, Mary; Fernando, Dinusha

    2013-01-01

    Holistic coding for faces is shown in several illusions that demonstrate integration of the percept across the entire face. The illusions occur upright but, crucially, not inverted. Converting the illusions into experimental tasks that measure their strength – and thus index degree of holistic coding – is often considered straightforward yet in fact relies on a hidden assumption, namely that there is no contribution to the experimental measure from secondary cognitive factors. For the composite effect, a relevant secondary factor is size of the “spotlight” of visuospatial attention. The composite task assumes this spotlight can be easily restricted to the target half (e.g., top-half) of the compound face stimulus. Yet, if this assumption were not true then a large spotlight, in the absence of holistic perception, could produce a false composite effect, present even for inverted faces and contributing partially to the score for upright faces. We review evidence that various factors can influence spotlight size: race/culture (Asians often prefer a more global distribution of attention than Caucasians); sex (females can be more global); appearance of the join or gap between face halves; and location of the eyes, which typically attract attention. Results from five experiments then show inverted faces can sometimes produce large false composite effects, and imply that whether this happens or not depends on complex interactions between causal factors. We also report, for both identity and expression, that only top-half face targets (containing eyes) produce valid composite measures. A sixth experiment demonstrates an example of a false inverted part-whole effect, where encoding-specificity is the secondary cognitive factor. We conclude the inverted face control should be tested in all composite and part-whole studies, and an effect for upright faces should be interpreted as a pure measure of holistic processing only when the experimental design produces no

  19. Measurement of Atmospheric Neutrino Oscillations with Very Large Volume Neutrino Telescopes

    Directory of Open Access Journals (Sweden)

    J. P. Yáñez

    2015-01-01

    Neutrino oscillations have been probed during the last few decades using multiple neutrino sources and experimental set-ups. In recent years, very large volume neutrino telescopes have started contributing to the field. First ANTARES and then IceCube have relied on large and sparsely instrumented volumes to observe atmospheric neutrinos for combinations of baselines and energies inaccessible to other experiments. Using this advantage, the latest result from IceCube starts approaching the precision of other established technologies and is paving the way for future detectors, such as ORCA and PINGU. These new projects seek to provide better measurements of neutrino oscillation parameters and eventually determine the neutrino mass ordering. The results from running experiments and the potential from proposed projects are discussed in this review, emphasizing the experimental challenges involved in the measurements.

  20. Interaction in the large energetics companies in the Republic of Macedonia (Part 3)

    International Nuclear Information System (INIS)

    Janevski, Risto

    2000-01-01

    After the disintegration of the former Yugoslav power system in 1991, the Republic of Macedonia faced enormous problems in the energy field. It was necessary to explore all options in order to secure enough electric power for the normal functioning of economic capacities. In this context, the direct involvement of five large companies, which represent very significant energy subjects, will largely determine the future energy conditions and circumstances in the country. These are the following companies: P.E. Electric Power Co. of Macedonia; Heat Power Co.; HEK Jugohrom; Fenimak; and the OKTA Crude Oil Refinery. In this paper the energy system of the OKTA Crude Oil Refinery from 1991-1998 is analyzed, as well as its characteristics and plans for future development.

  1. DC Microgrids – Part I

    DEFF Research Database (Denmark)

    Dragicevic, Tomislav; Lu, Xiaonan; Quintero, Juan Carlos Vasquez

    2016-01-01

    which relies only on local measurements, some line of communication between units needs to be made available in order to achieve coordinated control. Depending on the communication method, three basic coordinated control strategies can be distinguished, i.e. decentralized, centralized and distributed control. Decentralized control can be regarded as an extension of local control since it is also based exclusively on local measurements. In contrast, centralized and distributed control strategies rely on digital communication technologies. A number of approaches to using these three coordinated control
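    As a concrete example of control that relies only on local measurements, the sketch below implements V-I droop, a common decentralized strategy in DC microgrids: each converter lowers its voltage reference in proportion to its output current, so parallel units share load without communication. The voltage and droop-resistance values are illustrative only.

```python
# V-I droop control sketch for a DC microgrid converter (decentralized:
# no communication, only the unit's own output current is measured).

V_NOM = 380.0      # nominal DC bus voltage (V), hypothetical
R_DROOP = 0.5      # virtual droop resistance (ohm), hypothetical

def droop_reference(i_out, v_nom=V_NOM, r_droop=R_DROOP):
    """Voltage reference for one converter given its output current."""
    return v_nom - r_droop * i_out

for i in (0.0, 10.0, 20.0):
    print(f"i = {i:5.1f} A  ->  v_ref = {droop_reference(i):6.1f} V")
```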

  2. Grinding Parts For Automatic Welding

    Science.gov (United States)

    Burley, Richard K.; Hoult, William S.

    1989-01-01

    Rollers guide grinding tool along prospective welding path. Skatelike fixture holds rotary grinder or file for machining large-diameter rings or ring segments in preparation for welding. Operator grasps handles to push rolling fixture along part. Rollers maintain precise dimensional relationship so grinding wheel cuts precise depth. Fixture-mounted grinder machines surface to quality sufficient for automatic welding; manual welding with attendant variations and distortion not necessary. Developed to enable automatic welding of parts, manual welding of which resulted in weld bead permeated with microscopic fissures.

  3. Algorithm 873: LSTRS: MATLAB Software for Large-Scale Trust-Region Subproblems and Regularization

    DEFF Research Database (Denmark)

    Rojas Larrazabal, Marielba de la Caridad; Santos, Sandra A.; Sorensen, Danny C.

    2008-01-01

    A MATLAB 6.0 implementation of the LSTRS method is presented. LSTRS was described in Rojas, M., Santos, S.A., and Sorensen, D.C., A new matrix-free method for the large-scale trust-region subproblem, SIAM J. Optim., 11(3):611-646, 2000. LSTRS is designed for large-scale quadratic problems with one...... at each step. LSTRS relies on matrix-vector products only and has low and fixed storage requirements, features that make it suitable for large-scale computations. In the MATLAB implementation, the Hessian matrix of the quadratic objective function can be specified either explicitly, or in the form of a matrix-vector multiplication routine. Therefore, the implementation preserves the matrix-free nature of the method. A description of the LSTRS method and of the MATLAB software, version 1.2, is presented. Comparisons with other techniques and applications of the method are also included. A guide...
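    The matrix-free interface can be illustrated with an analogous building block in Python (LSTRS itself is MATLAB): the Hessian enters only through a matrix-vector product, here wrapped in a SciPy LinearOperator whose smallest eigenpair is then computed; eigenvalue kernels of this kind are what LSTRS iterates on.

```python
# Matrix-free sketch: the Hessian is never formed, only applied.
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

n = 1000

def hessian_matvec(v):
    # Implicit tridiagonal operator (1D Laplacian-like), never stored
    out = 2.0 * v
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out

H = LinearOperator((n, n), matvec=hessian_matvec, dtype=float)
eigval, eigvec = eigsh(H, k=1, which="SA")   # smallest algebraic eigenpair
print(eigval)   # ~ 4 sin^2(pi / (2 (n + 1))) for this operator
```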

  4. Hummingbirds rely on both paracellular and carrier-mediated intestinal glucose absorption to fuel high metabolism

    Science.gov (United States)

    McWhorter, Todd J; Bakken, Bradley Hartman; Karasov, William H; del Rio, Carlos Martínez

    2005-01-01

    Twenty years ago, the highest active glucose transport rate and lowest passive glucose permeability in vertebrates were reported in Rufous and Anna's hummingbirds (Selasphorus rufus, Calypte anna). These first measurements of intestinal nutrient absorption in nectarivores provided an unprecedented physiological foundation for understanding their foraging ecology. They showed that physiological processes are determinants of feeding behaviour. The conclusion that active, mediated transport accounts for essentially all glucose absorption in hummingbirds influenced two decades of subsequent research on the digestive physiology and nutritional ecology of nectarivores. Here, we report new findings demonstrating that the passive permeability of hummingbird intestines to glucose is much higher than previously reported, suggesting that not all sugar uptake is mediated. Even while possessing the highest active glucose transport rates measured in vertebrates, hummingbirds must rely partially on passive non-mediated intestinal nutrient absorption to meet their high mass-specific metabolic demands. PMID:17148346

  5. Laser-induced breakdown spectroscopy and chemometrics for classification of toys relying on toxic elements

    International Nuclear Information System (INIS)

    Godoi, Quienly; Leme, Flavio O.; Trevizan, Lilian C.; Pereira Filho, Edenir R.; Rufini, Iolanda A.; Santos, Dario; Krug, Francisco J.

    2011-01-01

    Quality control of toys to avoid children's exposure to potentially toxic elements is of utmost relevance and is a common requirement in national and/or international norms, for health and safety reasons. Laser-induced breakdown spectroscopy (LIBS) was recently evaluated at the authors' laboratory for direct analysis of plastic toys, and one of the main difficulties for the determination of Cd, Cr and Pb was the variety of mixtures and types of polymers. As most norms rely on migration (lixiviation) protocols, chemometric classification models built from LIBS spectra were tested for screening toys that present a potential risk of Cd, Cr and Pb contamination. The classification models were generated from the emission spectra of 51 polymeric toys using Partial Least Squares - Discriminant Analysis (PLS-DA), Soft Independent Modeling of Class Analogy (SIMCA) and K-Nearest Neighbor (KNN). The classification models and validations were carried out with 40 and 11 test samples, respectively. The best results were obtained when KNN was used, with correct predictions varying from 95% for Cd to 100% for Cr and Pb.
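    A hedged sketch of the KNN step with scikit-learn follows, using tiny synthetic two-channel "spectra" in place of real LIBS emission spectra (which have thousands of intensity channels per sample):

```python
# K-Nearest Neighbor classification sketch; data are synthetic stand-ins
# for LIBS spectra. Label 1 = contaminated above limit, 0 = safe.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X_train = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.7], [0.2, 0.9]])
y_train = np.array([1, 1, 0, 0])

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)

X_test = np.array([[0.85, 0.15], [0.15, 0.8]])
print(knn.predict(X_test))   # -> [1 0]
```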

  6. Analysis Methods for Extracting Knowledge from Large-Scale WiFi Monitoring to Inform Building Facility Planning

    DEFF Research Database (Denmark)

    Ruiz-Ruiz, Antonio; Blunck, Henrik; Prentow, Thor Siiger

    2014-01-01

    The optimization of logistics in large building complexes with many resources, such as hospitals, requires realistic facility management and planning. Current planning practices rely foremost on manual observations or coarse unverified assumptions and therefore do not properly scale or provide realistic data to inform facility planning. In this paper, we propose analysis methods to extract knowledge from large sets of network-collected WiFi traces to better inform facility management and planning in large building complexes. The analysis methods, which build on a rich set of temporal and spatial...... Spatio-temporal visualization tools built on top of these methods enable planners to inspect and explore extracted information to inform facility-planning activities. To evaluate the methods, we present results for a large hospital complex covering more than 10 hectares. The evaluation is based on Wi
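    One of the simplest temporal features such traces support is dwell time per device and access point; the sketch below, with hypothetical column names and data, computes it with pandas:

```python
# Dwell time per (device, access point) from raw WiFi association records.
# Column names and data are hypothetical, not from the paper's dataset.
import pandas as pd

traces = pd.DataFrame({
    "device": ["a", "a", "a", "b", "b"],
    "ap":     ["ward1", "ward1", "lobby", "ward2", "ward2"],
    "ts": pd.to_datetime(["2014-03-01 08:00", "2014-03-01 08:25",
                          "2014-03-01 09:00", "2014-03-01 08:10",
                          "2014-03-01 08:50"]),
})

dwell = (traces.groupby(["device", "ap"])["ts"]
               .agg(lambda s: s.max() - s.min())   # span of sightings per AP
               .rename("dwell_time"))
print(dwell)
```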

  7. Social plasticity relies on different neuroplasticity mechanisms across the brain social decision-making network in zebrafish

    Directory of Open Access Journals (Sweden)

    Magda C Teles

    2016-02-01

    Social living animals need to adjust the expression of their behavior to their status within the group and to changes in social context, and this ability (social plasticity) has an impact on their Darwinian fitness. At the proximate level, social plasticity must rely on neuroplasticity in the brain social decision-making network (SDMN) that underlies the expression of social behavior, such that the same neural circuit may underlie the expression of different behaviors depending on social context. Here we tested this hypothesis in zebrafish by characterizing the gene expression response in the SDMN to changes in social status of a set of genes involved in different types of neural plasticity: bdnf, involved in changes in synaptic strength; npas4, involved in contextual learning and dependent establishment of GABAergic synapses; neuroligins (nlgn1 and nlgn2) as synaptogenesis markers; and genes involved in adult neurogenesis (wnt3 and neurod). Four social phenotypes were experimentally induced: Winners and Losers of a real-opponent interaction; Mirror-fighters, which fight their own image in a mirror and thus do not experience a change in social status despite the expression of aggressive behavior; and non-interacting fish, which were used as a reference group. Our results show that each social phenotype (i.e. Winners, Losers and Mirror-fighters) presents specific patterns of gene expression across the SDMN, and that different neuroplasticity genes are differentially expressed in different nodes of the network (e.g. BDNF in the dorsolateral telencephalon, which is a putative teleost homologue of the mammalian hippocampus). Winners expressed unique patterns of gene co-expression across the SDMN, whereas in Losers and Mirror-fighters the co-expression patterns were similar in the dorsal regions of the telencephalon and in the supracommissural nucleus of the ventral telencephalic area, but different in the remaining regions of the ventral telencephalon. These

  8. Social Plasticity Relies on Different Neuroplasticity Mechanisms across the Brain Social Decision-Making Network in Zebrafish.

    Science.gov (United States)

    Teles, Magda C; Cardoso, Sara D; Oliveira, Rui F

    2016-01-01

    Social living animals need to adjust the expression of their behavior to their status within the group and to changes in social context, and this ability (social plasticity) has an impact on their Darwinian fitness. At the proximate level, social plasticity must rely on neuroplasticity in the brain social decision-making network (SDMN) that underlies the expression of social behavior, such that the same neural circuit may underlie the expression of different behaviors depending on social context. Here we tested this hypothesis in zebrafish by characterizing the gene expression response in the SDMN to changes in social status of a set of genes involved in different types of neural plasticity: bdnf, involved in changes in synaptic strength; npas4, involved in contextual learning and dependent establishment of GABAergic synapses; neuroligins (nlgn1 and nlgn2) as synaptogenesis markers; and genes involved in adult neurogenesis (wnt3 and neurod). Four social phenotypes were experimentally induced: Winners and Losers of a real-opponent interaction; Mirror-fighters, which fight their own image in a mirror and thus do not experience a change in social status despite the expression of aggressive behavior; and non-interacting fish, which were used as a reference group. Our results show that each social phenotype (i.e., Winners, Losers, and Mirror-fighters) presents specific patterns of gene expression across the SDMN, and that different neuroplasticity genes are differentially expressed in different nodes of the network (e.g., BDNF in the dorsolateral telencephalon, which is a putative teleost homolog of the mammalian hippocampus). Winners expressed unique patterns of gene co-expression across the SDMN, whereas in Losers and Mirror-fighters the co-expression patterns were similar in the dorsal regions of the telencephalon and in the supracommissural nucleus of the ventral telencephalic area, but different in the remaining regions of the ventral telencephalon. These results

  9. Social Plasticity Relies on Different Neuroplasticity Mechanisms across the Brain Social Decision-Making Network in Zebrafish

    Science.gov (United States)

    Teles, Magda C.; Cardoso, Sara D.; Oliveira, Rui F.

    2016-01-01

    Social living animals need to adjust the expression of their behavior to their status within the group and to changes in social context, and this ability (social plasticity) has an impact on their Darwinian fitness. At the proximate level, social plasticity must rely on neuroplasticity in the brain social decision-making network (SDMN) that underlies the expression of social behavior, such that the same neural circuit may underlie the expression of different behaviors depending on social context. Here we tested this hypothesis in zebrafish by characterizing the gene expression response in the SDMN to changes in social status of a set of genes involved in different types of neural plasticity: bdnf, involved in changes in synaptic strength; npas4, involved in contextual learning and dependent establishment of GABAergic synapses; neuroligins (nlgn1 and nlgn2) as synaptogenesis markers; and genes involved in adult neurogenesis (wnt3 and neurod). Four social phenotypes were experimentally induced: Winners and Losers of a real-opponent interaction; Mirror-fighters, which fight their own image in a mirror and thus do not experience a change in social status despite the expression of aggressive behavior; and non-interacting fish, which were used as a reference group. Our results show that each social phenotype (i.e., Winners, Losers, and Mirror-fighters) presents specific patterns of gene expression across the SDMN, and that different neuroplasticity genes are differentially expressed in different nodes of the network (e.g., BDNF in the dorsolateral telencephalon, which is a putative teleost homolog of the mammalian hippocampus). Winners expressed unique patterns of gene co-expression across the SDMN, whereas in Losers and Mirror-fighters the co-expression patterns were similar in the dorsal regions of the telencephalon and in the supracommissural nucleus of the ventral telencephalic area, but different in the remaining regions of the ventral telencephalon. These results

  10. VisualRank: applying PageRank to large-scale image search.

    Science.gov (United States)

    Jing, Yushi; Baluja, Shumeet

    2008-11-01

    Because of the relative ease in understanding and processing text, commercial image-search systems often rely on techniques that are largely indistinguishable from text search. Recently, academic studies have demonstrated the effectiveness of employing image-based features to provide alternative or additional signals. However, it remains uncertain whether such techniques will generalize to a large number of popular web queries, and whether the potential improvement to search quality warrants the additional computational cost. In this work, we cast the image-ranking problem into the task of identifying "authority" nodes on an inferred visual similarity graph and propose VisualRank to analyze the visual link structures among images. The images found to be "authorities" are chosen as those that answer the image-queries well. To understand the performance of such an approach in a real system, we conducted a series of large-scale experiments based on the task of retrieving images for 2000 of the most popular product queries. Our experimental results show significant improvement, in terms of user satisfaction and relevancy, in comparison to the most recent Google Image Search results. Maintaining modest computational cost is vital to ensuring that this procedure can be used in practice; we describe the techniques required to make this system practical for large-scale deployment in commercial search engines.
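    The core computation is ordinary PageRank run on the visual similarity graph. A minimal power-iteration sketch over a made-up 3-image similarity matrix (in the real system, similarities come from local image features):

```python
# Damped power-iteration PageRank over a toy visual-similarity matrix.
import numpy as np

S = np.array([[0.0, 0.8, 0.1],        # pairwise image similarities (made up)
              [0.8, 0.0, 0.4],
              [0.1, 0.4, 0.0]])
P = S / S.sum(axis=0, keepdims=True)  # column-stochastic transition matrix

d, n = 0.85, S.shape[0]               # damping factor, number of images
rank = np.full(n, 1.0 / n)
for _ in range(100):
    rank = d * P @ rank + (1 - d) / n
print(rank / rank.sum())              # the most-similar "authority" image wins
```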

  11. Charging in the environment of large spacecraft

    International Nuclear Information System (INIS)

    Lai, S.T.

    1993-01-01

    This paper discusses some potential problems of spacecraft charging as a result of interactions between a large spacecraft, such as the Space Station, and its environment. The induced electric field, due to the v×B effect, may be important for large spacecraft in low Earth orbit. Differential charging, due to different properties of surface materials, may be significant when the spacecraft is partly in sunshine and partly in shadow. A triple-root potential jump condition may occur because of differential charging. Sudden onset of severe differential charging may occur when an electron or ion beam is emitted from the spacecraft; the beam may partially return to "hot spots" on the spacecraft. Wake effects, due to blocking of ambient ion trajectories, may result in an undesirable negative potential region in the vicinity of a large spacecraft. Outgassing and exhaust may form a significant spacecraft-induced environment; ionization may occur. Spacecraft charging and discharging may affect the electronic components on board.

  12. Center-stabilized Yang-Mills Theory:Confinement and Large N Volume Independence

    International Nuclear Information System (INIS)

    Unsal, Mithat; Yaffe, Laurence G.

    2008-01-01

    We examine a double trace deformation of SU(N) Yang-Mills theory which, for large N and large volume, is equivalent to unmodified Yang-Mills theory up to O(1/N²) corrections. In contrast to the unmodified theory, large N volume independence is valid in the deformed theory down to arbitrarily small volumes. The double trace deformation prevents the spontaneous breaking of center symmetry which would otherwise disrupt large N volume independence in small volumes. For small values of N, if the theory is formulated on R³ × S¹ with a sufficiently small compactification size L, then an analytic treatment of the non-perturbative dynamics of the deformed theory is possible. In this regime, we show that the deformed Yang-Mills theory has a mass gap and exhibits linear confinement. Increasing the circumference L or number of colors N decreases the separation of scales on which the analytic treatment relies. However, there are no order parameters which distinguish the small and large radius regimes. Consequently, for small N the deformed theory provides a novel example of a locally four-dimensional pure gauge theory in which one has analytic control over confinement, while for large N it provides a simple fully reduced model for Yang-Mills theory. The construction is easily generalized to QCD and other QCD-like theories.

  13. Center-stabilized Yang-Mills theory: Confinement and large N volume independence

    International Nuclear Information System (INIS)

    Uensal, Mithat; Yaffe, Laurence G.

    2008-01-01

    We examine a double trace deformation of SU(N) Yang-Mills theory which, for large N and large volume, is equivalent to unmodified Yang-Mills theory up to O(1/N²) corrections. In contrast to the unmodified theory, large N volume independence is valid in the deformed theory down to arbitrarily small volumes. The double trace deformation prevents the spontaneous breaking of center symmetry which would otherwise disrupt large N volume independence in small volumes. For small values of N, if the theory is formulated on R³ × S¹ with a sufficiently small compactification size L, then an analytic treatment of the nonperturbative dynamics of the deformed theory is possible. In this regime, we show that the deformed Yang-Mills theory has a mass gap and exhibits linear confinement. Increasing the circumference L or number of colors N decreases the separation of scales on which the analytic treatment relies. However, there are no order parameters which distinguish the small and large radius regimes. Consequently, for small N the deformed theory provides a novel example of a locally four-dimensional pure-gauge theory in which one has analytic control over confinement, while for large N it provides a simple fully reduced model for Yang-Mills theory. The construction is easily generalized to QCD and other QCD-like theories.

  14. Indoor Climate of Large Glazed Spaces

    DEFF Research Database (Denmark)

    Hendriksen, Ole Juhl; Madsen, Christina E.; Heiselberg, Per

    In recent years large glazed spaces have found increased use, both in connection with renovation of buildings and as part of new buildings. One of the objectives is to add an architectural element which combines indoor and outdoor climate. In order to obtain a satisfying indoor climate, it is crucial at the design stage to be able to predict the performance regarding thermal comfort and energy consumption. This paper focuses on the practical implementation of Computational Fluid Dynamics (CFD) and its relation to other simulation tools regarding indoor climate.

  15. Large component deformation studies using videogrammetry

    International Nuclear Information System (INIS)

    Greenwood, J.A.

    1999-01-01

    Fermilab has the responsibility for developing certain components for the Large Hadron Collider (LHC), to be commissioned at CERN in 2005. As part of the development process, a referencing strategy must be created such that the position of internal active components may be known relative to external targeting. One question to be answered is the issue of dimensional stability of a part that will be transported over long distances; another is whether the external framework is coherent. This paper reviews the efforts of the designers of the component and the Lab's Alignment and Metrology Group to understand the behavior of a moderately large part, in this case a pie-shaped CSC chamber of dimensions 2 × 3 × 0.3 m, as it is positioned in various orientations relative to gravity. All measurements were made using a Geodetic Services, Inc. INCA 6.3 camera with an 18 mm Nikon lens (Fig. 1) and were processed using GSI's V-STARS 4.1 software. Photogrammetry, more particularly digital videogrammetry, has shown that it can effectively service projects of this nature. When compared to optical tooling and laser tracker approaches, it is hard to imagine the full complement of difficulties videogrammetry allows one to avoid. Certainly the fact that neither the camera nor the part need be stationary makes photogrammetry an obvious choice. (author)

  16. The Compact Muon Solenoid Experiment at the Large Hadron Collider The Compact Muon Solenoid Experiment at the Large Hadron Collider

    Directory of Open Access Journals (Sweden)

    David Delepine

    2012-02-01

    Full Text Available The Compact Muon Solenoid experiment at the CERN Large Hadron Collider will study proton-proton collisions at unprecedented energies and luminosities. In this article we provide first a brief general introduction to particle physics. We then explain what CERN is. Then we describe the Large Hadron Collider at CERN, the most powerful particle accelerator ever built. Finally we describe the Compact Muon Solenoid experiment, its physics goals, construction details, and current status.

  17. Action speaks louder than words: Empathy mainly modulates emotions from theory of mind-laden parts of a story

    DEFF Research Database (Denmark)

    Wallentin, Mikkel; Simonsen, Arndis; Nielsen, Andreas Højlund

    2013-01-01

    Narratives are thought to evoke emotions through empathy, which in turn is thought to rely on theory of mind (a.k.a. "mentalizing"). In this study we investigated the extent to which these assumptions hold. Young adults rated their experienced emotional intensity while listening to a narrative and subsequently took an empathy test. We show how empathy correlates well with the overall level of experienced intensity. However, no correlation with empathy is found in those parts of the story that received the highest intensity ratings across participants. Reverse correlation analysis reveals that these parts contain physical threat scenarios, while parts where the empathy score is highly correlated with intensity describe social interaction that can only be understood through mentalizing. This suggests that narratives evoke emotions both based on "simple" physical contagion (affective empathy) and on complex…

  18. HFSB-seeding for large-scale tomographic PIV in wind tunnels

    Science.gov (United States)

    Caridi, Giuseppe Carlo Alp; Ragni, Daniele; Sciacchitano, Andrea; Scarano, Fulvio

    2016-12-01

    A new system for large-scale tomographic particle image velocimetry in low-speed wind tunnels is presented. The system relies upon the use of sub-millimetre helium-filled soap bubbles as flow tracers, which scatter light with intensity several orders of magnitude higher than micron-sized droplets. With respect to a single bubble generator, the system increases the rate of bubble emission by means of transient accumulation and rapid release. The governing parameters of the system are identified and discussed, namely the bubble production rate, the accumulation and release times, the size of the bubble injector and its location with respect to the wind tunnel contraction. The relations between the above parameters, the resulting spatial concentration of tracers and the measurement dynamic spatial range are obtained and discussed. Large-scale experiments are carried out in a large low-speed wind tunnel with a 2.85 × 2.85 m² test section, where a vertical-axis wind turbine of 1 m diameter is operated. Time-resolved tomographic PIV measurements are taken over a measurement volume of 40 × 20 × 15 cm³, allowing the quantitative analysis of the tip-vortex structure and dynamical evolution.
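
    As a back-of-the-envelope illustration of how the governing parameters interact, the sketch below estimates the mean tracer concentration delivered to the seeded stream tube from the accumulation and release times. The function and variable names are illustrative; the paper derives the actual relations, including the resulting dynamic spatial range, in detail.

```python
def tracer_concentration(bubble_rate, accumulation_time, release_time,
                         free_stream_velocity, seeded_area):
    """Mean tracer concentration in the seeded stream tube (bubbles/m^3).

    Transient accumulation stores bubble_rate * accumulation_time bubbles,
    which are then released over release_time into a stream tube of cross
    section seeded_area convected at free_stream_velocity.
    """
    burst_rate = bubble_rate * accumulation_time / release_time  # bubbles/s
    return burst_rate / (free_stream_velocity * seeded_area)

# Example: 30,000 bubbles/s accumulated for 60 s and released over 10 s
# into a 0.1 m^2 stream tube convected at 5 m/s.
print(tracer_concentration(3e4, 60.0, 10.0, 5.0, 0.1))  # 3.6e5 bubbles/m^3
```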

  19. 12 CFR 714.5 - What is required if you rely on an estimated residual value greater than 25% of the original cost...

    Science.gov (United States)

    2010-01-01

    12 Banks and Banking 6 2010-01-01 What is required if you rely on an estimated residual value greater than 25% of the original cost of the leased property? 714.5 Section 714.5 Banks and Banking NATIONAL CREDIT UNION ADMINISTRATION REGULATIONS AFFECTING CREDIT UNIONS LEASING § 714.5 What is...

  20. Gauge Coupling Unification with Partly Composite Matter

    International Nuclear Information System (INIS)

    Gherghetta, Tony

    2005-01-01

    It is shown how gauge coupling unification can occur in models with partly composite matter. The particle states which are composite contribute only small logarithms to the running of the gauge couplings, while the elementary states contribute the usual large logarithms. This introduces a new differential running contribution to the gauge couplings from partly composite SU(5) matter multiplets. In particular, for partly supersymmetric models, the incomplete SU(5) elementary matter multiplets restore gauge coupling unification even though the usual elementary gaugino and Higgsino contributions need not be present.

  1. Large eddy simulation study of the kinetic energy entrainment by energetic turbulent flow structures in large wind farms

    Science.gov (United States)

    VerHulst, Claire; Meneveau, Charles

    2014-02-01

    In this study, we address the question of how kinetic energy is entrained into large wind turbine arrays and, in particular, how large-scale flow structures contribute to such entrainment. Previous research has shown this entrainment to be an important limiting factor in the performance of very large arrays where the flow becomes fully developed and there is a balance between the forcing of the atmospheric boundary layer and the resistance of the wind turbines. Given the high Reynolds numbers and domain sizes on the order of kilometers, we rely on wall-modeled large eddy simulation (LES) to simulate turbulent flow within the wind farm. Three-dimensional proper orthogonal decomposition (POD) analysis is then used to identify the most energetic flow structures present in the LES data. We quantify the contribution of each POD mode to the kinetic energy entrainment and its dependence on the layout of the wind turbine array. The primary large-scale structures are found to be streamwise, counter-rotating vortices located above the height of the wind turbines. While the flow is periodic, the geometry is not invariant to all horizontal translations due to the presence of the wind turbines and thus POD modes need not be Fourier modes. Differences of the obtained modes with Fourier modes are documented. Some of the modes are responsible for a large fraction of the kinetic energy flux to the wind turbine region. Surprisingly, more flow structures (POD modes) are needed to capture at least 40% of the turbulent kinetic energy, for which the POD analysis is optimal, than are needed to capture at least 40% of the kinetic energy flux to the turbines. For comparison, we consider the cases of aligned and staggered wind turbine arrays in a neutral atmospheric boundary layer as well as a reference case without wind turbines. While the general characteristics of the flow structures are robust, the net kinetic energy entrainment to the turbines depends on the presence and relative…
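
    In its simplest (snapshot) form, the POD used to identify these structures reduces to a singular value decomposition of mean-subtracted velocity snapshots. A minimal sketch under that assumption follows; the synthetic data stand in for the LES fields, so the mode counts it prints are not those of the wind-farm study.

```python
import numpy as np

def snapshot_pod(U):
    """POD of flow snapshots.

    U : (n_points, n_snapshots) matrix whose columns are snapshots with
        the temporal mean already subtracted.
    Returns spatial modes (columns of Phi) and each mode's energy fraction.
    """
    Phi, s, _ = np.linalg.svd(U, full_matrices=False)
    energy = s**2 / np.sum(s**2)   # squared singular values ~ modal energy
    return Phi, energy

rng = np.random.default_rng(0)
snapshots = rng.standard_normal((5000, 200))       # synthetic stand-in data
snapshots -= snapshots.mean(axis=1, keepdims=True)
modes, energy = snapshot_pod(snapshots)

# Number of leading modes needed to capture at least 40% of the energy:
n40 = int(np.searchsorted(np.cumsum(energy), 0.40)) + 1
print(n40)
```

    The study's comparison between energy capture and flux capture would replace the energy fraction here with the kinetic-energy flux carried by each mode into the turbine region.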

  2. Recurring part arrangements in shape collections

    KAUST Repository

    Zheng, Youyi; Cohen-Or, Daniel; Averkiou, Melinos; Mitra, Niloy J.

    2014-01-01

    Extracting semantically related parts across models remains challenging, especially without supervision. The common approach is to co-analyze a model collection, while assuming the existence of descriptive geometric features that can directly identify related parts. In the presence of large shape variations, common geometric features, however, are no longer sufficiently descriptive. In this paper, we explore an indirect top-down approach, where instead of part geometry, part arrangements extracted from each model are compared. The key observation is that while a direct comparison of part geometry can be ambiguous, part arrangements, being higher level structures, remain consistent, and hence can be used to discover latent commonalities among semantically related shapes. We show that our indirect analysis leads to the detection of recurring arrangements of parts, which are otherwise difficult to discover in a direct unsupervised setting. We evaluate our algorithm on ground truth datasets and report advantages over geometric similarity-based bottom-up co-segmentation algorithms. © 2014 The Author(s) Computer Graphics Forum © 2014 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.

  3. Recurring part arrangements in shape collections

    KAUST Repository

    Zheng, Youyi

    2014-05-01

    Extracting semantically related parts across models remains challenging, especially without supervision. The common approach is to co-analyze a model collection, while assuming the existence of descriptive geometric features that can directly identify related parts. In the presence of large shape variations, common geometric features, however, are no longer sufficiently descriptive. In this paper, we explore an indirect top-down approach, where instead of part geometry, part arrangements extracted from each model are compared. The key observation is that while a direct comparison of part geometry can be ambiguous, part arrangements, being higher level structures, remain consistent, and hence can be used to discover latent commonalities among semantically related shapes. We show that our indirect analysis leads to the detection of recurring arrangements of parts, which are otherwise difficult to discover in a direct unsupervised setting. We evaluate our algorithm on ground truth datasets and report advantages over geometric similarity-based bottom-up co-segmentation algorithms. © 2014 The Author(s) Computer Graphics Forum © 2014 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.

  4. Autoclave cycle optimization for high performance composite parts manufacturing

    OpenAIRE

    Nele, L.; Caggiano, A.; Teti, R.

    2016-01-01

    In aeronautical production, autoclave curing of composite parts must be performed according to a specified diagram of temperature and pressure vs time. Part-tool assembly thermal inertia and shape have a large influence on the heating and cooling rate, and therefore on the dwell time within the target temperature range. When simultaneously curing diverse composite parts, the total autoclave cycle time is driven by the part-tool assembly with the lower heating and cooling rates. With the aim t...

  5. Large-scale network dynamics of beta-band oscillations underlie auditory perceptual decision-making

    Directory of Open Access Journals (Sweden)

    Mohsen Alavash

    2017-06-01

    Full Text Available Perceptual decisions vary in the speed at which we make them. Evidence suggests that translating sensory information into perceptual decisions relies on distributed interacting neural populations, with decision speed hinging on power modulations of the neural oscillations. Yet the dependence of perceptual decisions on the large-scale network organization of coupled neural oscillations has remained elusive. We measured magnetoencephalographic signals in human listeners who judged acoustic stimuli composed of carefully titrated clouds of tone sweeps. These stimuli were used in two task contexts, in which the participants judged the overall pitch or direction of the tone sweeps. We traced the large-scale network dynamics of the source-projected neural oscillations on a trial-by-trial basis using power-envelope correlations and graph-theoretical network discovery. In both tasks, faster decisions were predicted by higher segregation and lower integration of coupled beta-band (∼16–28 Hz) oscillations. We also uncovered the brain network states that promoted faster decisions in either lower-order auditory or higher-order control brain areas. Specifically, decision speed in judging the tone-sweep direction critically relied on the nodal network configurations of anterior temporal, cingulate, and middle frontal cortices. Our findings suggest that global network communication during perceptual decision-making is implemented in the human brain by large-scale couplings between beta-band neural oscillations. The speed at which we make perceptual decisions varies. This translation of sensory information into perceptual decisions hinges on dynamic changes in neural oscillatory activity. However, the large-scale neural-network embodiment supporting perceptual decision-making is unclear. We addressed this question in two auditory perceptual decision-making situations. Using graph-theoretical network discovery, we traced the large-scale network…
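
    The graph-theoretical quantities involved can be made concrete with a small sketch: build a network from power-envelope correlations, then score segregation and integration. Transitivity and global efficiency are used below as common stand-ins for those two properties; the study's exact metrics and thresholding may differ, and the random envelopes are placeholders for source-projected beta-band data.

```python
import numpy as np
from collections import deque

def envelope_network(env, density=0.1):
    """Binary graph from band-limited power envelopes.

    env : (n_nodes, n_times) power envelopes of source signals.
    density : fraction of the strongest correlations kept as edges.
    """
    C = np.corrcoef(env)
    np.fill_diagonal(C, 0.0)
    iu = np.triu_indices_from(C, k=1)
    thr = np.quantile(C[iu], 1.0 - density)   # keep the top `density` edges
    return C >= thr

def global_efficiency(A):
    """Integration: mean inverse shortest-path length (BFS, unweighted)."""
    n = len(A)
    total = 0.0
    for s in range(n):
        dist = np.full(n, -1)
        dist[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in np.flatnonzero(A[u]):
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(1.0 / d for d in dist if d > 0)
    return total / (n * (n - 1))

def transitivity(A):
    """Segregation: ratio of closed to all connected triples."""
    A = A.astype(float)
    closed = np.trace(A @ A @ A)          # equals 6 x number of triangles
    deg = A.sum(axis=1)
    triples = np.sum(deg * (deg - 1))     # equals 2 x connected triples
    return closed / triples if triples else 0.0

rng = np.random.default_rng(1)
env = rng.standard_normal((60, 1000))     # stand-in for beta-band envelopes
A = envelope_network(env, density=0.1)
print(transitivity(A), global_efficiency(A))
```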

  6. GPU-based large-scale visualization

    KAUST Repository

    Hadwiger, Markus

    2013-11-19

    Recent advances in image and volume acquisition as well as computational advances in simulation have led to an explosion of the amount of data that must be visualized and analyzed. Modern techniques combine the parallel processing power of GPUs with out-of-core methods and data streaming to enable the interactive visualization of giga- and terabytes of image and volume data. A major enabler for interactivity is making both the computational and the visualization effort proportional to the amount of data that is actually visible on screen, decoupling it from the full data size. This leads to powerful display-aware multi-resolution techniques that enable the visualization of data of almost arbitrary size. The course consists of two major parts: An introductory part that progresses from fundamentals to modern techniques, and a more advanced part that discusses details of ray-guided volume rendering, novel data structures for display-aware visualization and processing, and the remote visualization of large online data collections. You will learn how to develop efficient GPU data structures and large-scale visualizations, implement out-of-core strategies and concepts such as virtual texturing that have only been employed recently, as well as how to use modern multi-resolution representations. These approaches reduce the GPU memory requirements of extremely large data to a working set size that fits into current GPUs. You will learn how to perform ray-casting of volume data of almost arbitrary size and how to render and process gigapixel images using scalable, display-aware techniques. We will describe custom virtual texturing architectures as well as recent hardware developments in this area. We will also describe client/server systems for distributed visualization, on-demand data processing and streaming, and remote visualization. We will describe implementations using OpenGL as well as CUDA, exploiting parallelism on GPUs combined with additional asynchronous…

  7. Workflow management in large distributed systems

    International Nuclear Information System (INIS)

    Legrand, I; Newman, H; Voicu, R; Dobre, C; Grigoras, C

    2011-01-01

    The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near realtime. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resource to running jobs and automated management of remote services among a large set of grid facilities.

  8. Workflow management in large distributed systems

    Science.gov (United States)

    Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.

    2011-12-01

    The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near realtime. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resource to running jobs and automated management of remote services among a large set of grid facilities.

  9. Dignity and the ownership and use of body parts.

    Science.gov (United States)

    Foster, Charles

    2014-10-01

    Property-based models of the ownership of body parts are common. They are inadequate. They fail to deal satisfactorily with many important problems, and even when they do work, they rely on ideas that have to be derived from deeper, usually unacknowledged principles. This article proposes that the parent principle is always human dignity, and that one will get more satisfactory answers if one interrogates the older, wiser parent instead of the younger, callow offspring. But human dignity has a credibility problem. It is often seen as hopelessly amorphous or incurably theological. These accusations are often just. But a more thorough exegesis exculpates dignity and gives it its proper place at the fountainhead of bioethics. Dignity is objective human thriving. Thriving considerations can and should be applied to dead people as well as live ones. To use dignity properly, the unit of bioethical analysis needs to be the whole transaction rather than (for instance) the doctor-patient relationship. The dignity interests of all the stakeholders are assessed in a sort of utilitarianism. Its use in relation to body part ownership is demonstrated. Article 8(1) of the European Convention on Human Rights endorses and mandates this approach.

  10. The Unique Challenges of Conserving Large Old Trees.

    Science.gov (United States)

    Lindenmayer, David B; Laurance, William F

    2016-06-01

    Large old trees play numerous critical ecological roles. They are susceptible to a plethora of interacting threats, in part because the attributes that confer a competitive advantage in intact ecosystems make them maladapted to rapidly changing, human-modified environments. Conserving large old trees will require surmounting a number of unresolved challenges. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Characterization of large volume HPGe detectors. Part II: Experimental results

    International Nuclear Information System (INIS)

    Bruyneel, Bart; Reiter, Peter; Pascovici, Gheorghe

    2006-01-01

    Measurements on a 12-fold segmented, n-type, large-volume, irregularly shaped HPGe detector were performed in order to determine the parameters of anisotropic mobility for electrons and holes as charge carriers created by γ-ray interactions. To characterize the electron mobility, the complete outer detector surface was scanned in small steps employing photopeak interactions at 60 keV. A precise measurement of the hole drift anisotropy was performed with 356 keV γ-rays. The drift velocity anisotropy and crystal geometry cause considerable rise-time differences in pulse shapes depending on the position of the spatial charge carrier creation. Pulse shapes of direct and transient signals are reproduced by weighting potential calculations with high precision. The measured angular dependence of rise times is caused by the anisotropic mobility, crystal geometry, changing field strength and space charge effects. Preamplified signals were processed employing digital spectroscopy electronics. Response functions, crosstalk contributions and averaging procedures were taken into account, implying novel methods due to the segmentation of the Ge crystal and digital signal processing electronics.

  12. Incorporating social and cultural significance of large old trees in conservation policy.

    Science.gov (United States)

    Blicharska, Malgorzata; Mikusiński, Grzegorz

    2014-12-01

    In addition to providing key ecological functions, large old trees are a part of a social realm and as such provide numerous social-cultural benefits to people. However, their social and cultural values are often neglected when designing conservation policies and management guidelines. We believe that awareness of large old trees as a part of human identity and cultural heritage is essential when addressing the issue of their decline worldwide. Large old trees provide humans with aesthetic, symbolic, religious, and historic values, as well as concrete tangible benefits, such as leaves, branches, or nuts. In many cultures particularly large trees are treated with reverence. Also, contemporary popular culture utilizes the image of trees as sentient beings and builds on the ancient myths that attribute great powers to large trees. Although the social and cultural role of large old trees is usually not taken into account in conservation, accounting for human-related values of these trees is an important part of conservation policy because it may strengthen conservation by highlighting the potential synergies in protecting ecological and social values. © 2014 Society for Conservation Biology.

  13. Rely-Guarantee Protocols

    Science.gov (United States)

    2014-05-01

    School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Faculdade de Ciências e Tecnologia. …additional examples that are not in the ECOOP paper. This work was partially supported by Fundação para a Ciência e Tecnologia (Portuguese Foundation…

  14. A full scale approximation of covariance functions for large spatial data sets

    KAUST Repository

    Sang, Huiyan

    2011-10-10

    Gaussian process models have been widely used in spatial statistics but face tremendous computational challenges for very large data sets. The model fitting and spatial prediction of such models typically require O(n³) operations for a data set of size n. Various approximations of the covariance functions have been introduced to reduce the computational cost. However, most existing approximations cannot simultaneously capture both the large- and the small-scale spatial dependence. A new approximation scheme is developed to provide a high quality approximation to the covariance function at both the large and the small spatial scales. The new approximation is the summation of two parts: a reduced rank covariance and a compactly supported covariance obtained by tapering the covariance of the residual of the reduced rank approximation. Whereas the former part mainly captures the large-scale spatial variation, the latter part captures the small-scale, local variation that is unexplained by the former part. By combining the reduced rank representation and sparse matrix techniques, our approach allows for efficient computation for maximum likelihood estimation, spatial prediction and Bayesian inference. We illustrate the new approach with simulated and real data sets. © 2011 Royal Statistical Society.
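
    The two-part construction can be sketched directly: a reduced-rank term built from a small set of knots captures the large-scale dependence, and a compactly supported taper applied to the residual covariance captures the small-scale part. A one-dimensional illustration follows; the exponential covariance, the Wendland-type taper, and all parameter values are chosen for concreteness rather than taken from the paper.

```python
import numpy as np

def exp_cov(d, range_=1.0, sill=1.0):
    """Exponential covariance, standing in for the 'true' model."""
    return sill * np.exp(-d / range_)

def wendland_taper(d, gamma=0.3):
    """Compactly supported Wendland-type taper; zero beyond range gamma."""
    x = d / gamma
    return np.where(x < 1.0, (1 - x) ** 4 * (4 * x + 1), 0.0)

def full_scale_cov(locs, knots, range_=1.0, gamma=0.3, jitter=1e-10):
    """Full-scale approximation: reduced-rank part plus tapered residual.

    C_fs = C_sk C_kk^{-1} C_ks + (C - C_sk C_kk^{-1} C_ks) * T,
    where * is the elementwise product with the taper matrix T.
    """
    d_ss = np.abs(locs[:, None] - locs[None, :])    # 1-D distances
    d_sk = np.abs(locs[:, None] - knots[None, :])
    d_kk = np.abs(knots[:, None] - knots[None, :])
    C = exp_cov(d_ss, range_)
    C_sk = exp_cov(d_sk, range_)
    C_kk = exp_cov(d_kk, range_) + jitter * np.eye(len(knots))
    low_rank = C_sk @ np.linalg.solve(C_kk, C_sk.T)        # large scales
    residual = (C - low_rank) * wendland_taper(d_ss, gamma)  # small scales
    return low_rank + residual

locs = np.linspace(0.0, 10.0, 500)
knots = np.linspace(0.0, 10.0, 25)
C_fs = full_scale_cov(locs, knots)
C_true = exp_cov(np.abs(locs[:, None] - locs[None, :]))
print(np.abs(C_fs - C_true).max())   # worst-case approximation error
```

    The computational payoff comes from the residual matrix being sparse (the taper zeroes most entries) and the low-rank term involving only knot-sized solves.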

  15. A full scale approximation of covariance functions for large spatial data sets

    KAUST Repository

    Sang, Huiyan; Huang, Jianhua Z.

    2011-01-01

    Gaussian process models have been widely used in spatial statistics but face tremendous computational challenges for very large data sets. The model fitting and spatial prediction of such models typically require O(n³) operations for a data set of size n. Various approximations of the covariance functions have been introduced to reduce the computational cost. However, most existing approximations cannot simultaneously capture both the large- and the small-scale spatial dependence. A new approximation scheme is developed to provide a high quality approximation to the covariance function at both the large and the small spatial scales. The new approximation is the summation of two parts: a reduced rank covariance and a compactly supported covariance obtained by tapering the covariance of the residual of the reduced rank approximation. Whereas the former part mainly captures the large-scale spatial variation, the latter part captures the small-scale, local variation that is unexplained by the former part. By combining the reduced rank representation and sparse matrix techniques, our approach allows for efficient computation for maximum likelihood estimation, spatial prediction and Bayesian inference. We illustrate the new approach with simulated and real data sets. © 2011 Royal Statistical Society.

  16. Biomagnetic Signals of the Large Intestine

    International Nuclear Information System (INIS)

    Cordova, T.; Sosa, M.; Bradshaw, L. A.; Adilton, A.

    2008-01-01

    The large intestine is the part of the gastrointestinal tract with an average length, in adults, of 1.5 m. The gold-standard technique in clinical medicine is colonoscopy. Nevertheless, other techniques are capable of presenting information on physiological processes which take place in this part of the gastrointestinal system. Three recent studies are discussed in this paper in order to make this information more widely available. The authors consider that the biomagnetic technique could be easily implemented in hospitals around the world. Options will be available for research and clinical medicine.

  17. Biological adhesion of the flatworm Macrostomum lignano relies on a duo-gland system and is mediated by a cell type-specific intermediate filament protein

    NARCIS (Netherlands)

    Lengerer, Birgit; Pjeta, Robert; Wunderer, Julia; Rodrigues, Marcelo; Arbore, Roberto; Schaerer, Lukas; Berezikov, Eugene; Hess, Michael W.; Pfaller, Kristian; Egger, Bernhard; Obwegeser, Sabrina; Salvenmoser, Willi; Ladurner, Peter

    2014-01-01

    Background: Free-living flatworms, in both marine and freshwater environments, are able to adhere to and release from a substrate several times within a second. This reversible adhesion relies on adhesive organs comprised of three cell types: an adhesive gland cell, a releasing gland cell, and an…

  18. Mouse V1 population correlates of visual detection rely on heterogeneity within neuronal response patterns

    Science.gov (United States)

    Montijn, Jorrit S; Goltstein, Pieter M; Pennartz, Cyriel MA

    2015-01-01

    Previous studies have demonstrated the importance of the primary sensory cortex for the detection, discrimination, and awareness of visual stimuli, but it is unknown how neuronal populations in this area process detected and undetected stimuli differently. Critical differences may reside in the mean strength of responses to visual stimuli, as reflected in bulk signals detectable in functional magnetic resonance imaging, electro-encephalogram, or magnetoencephalography studies, or may be more subtly composed of differentiated activity of individual sensory neurons. Quantifying single-cell Ca²⁺ responses to visual stimuli recorded with in vivo two-photon imaging, we found that visual detection correlates more strongly with population response heterogeneity rather than overall response strength. Moreover, neuronal populations showed consistencies in activation patterns across temporally spaced trials in association with hit responses, but not during nondetections. Contrary to models relying on temporally stable networks or bulk signaling, these results suggest that detection depends on transient differentiation in neuronal activity within cortical populations. DOI: http://dx.doi.org/10.7554/eLife.10163.001 PMID: 26646184
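
    The contrast between mean response strength and response heterogeneity can be made concrete with a short sketch. The heterogeneity measure below (mean absolute pairwise difference of z-scored responses within a trial) is one common choice and may differ in detail from the paper's definition; the gamma-distributed responses are synthetic placeholders for dF/F data.

```python
import numpy as np

def population_metrics(dff):
    """Per-trial mean strength and heterogeneity of population responses.

    dff : (n_trials, n_neurons) stimulus-evoked dF/F responses.
    Heterogeneity: mean absolute pairwise difference of z-scored
    single-neuron responses within a trial.
    """
    z = (dff - dff.mean(axis=0)) / (dff.std(axis=0) + 1e-12)  # per neuron
    mean_strength = dff.mean(axis=1)
    n = dff.shape[1]
    heterogeneity = np.array(
        [np.abs(zt[:, None] - zt[None, :]).sum() / (n * (n - 1)) for zt in z])
    return mean_strength, heterogeneity

rng = np.random.default_rng(2)
responses = rng.gamma(2.0, 1.0, size=(100, 50))   # synthetic trials x neurons
strength, het = population_metrics(responses)
# The two measures need not track each other; the study found detection
# to correlate with heterogeneity rather than with mean strength.
print(np.corrcoef(strength, het)[0, 1])
```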

  19. Gravitational segregation of liquid slag in large ladle

    Directory of Open Access Journals (Sweden)

    J. Chen

    2012-04-01

    Full Text Available The process of gravitational segregation causes the components of liquid steel slag to differentiate: the upper part of the slag in the ladle contains more CaO, while the lower part contains more SiO2. The MgO content of the upper part of the slag (5.48 %) is higher than that of the lower part (2.50 %), and only the Al2O3 content of the upper and lower parts is close. The difference in chemical composition across the slag ladle shows that gravitational segregation occurs during slow solidification of liquid steel slag, which has some impact on steel slag processing in the large slag ladle.

  20. Distributed HUC-based modeling with SUMMA for ensemble streamflow forecasting over large regional domains.

    Science.gov (United States)

    Saharia, M.; Wood, A.; Clark, M. P.; Bennett, A.; Nijssen, B.; Clark, E.; Newman, A. J.

    2017-12-01

    Most operational streamflow forecasting systems rely on a forecaster-in-the-loop approach in which some parts of the forecast workflow require an experienced human forecaster. But this approach faces challenges surrounding process reproducibility, hindcasting capability, and extension to large domains. The operational hydrologic community is increasingly moving towards 'over-the-loop' (completely automated) large-domain simulations, yet recent developments indicate a widespread lack of community knowledge about the strengths and weaknesses of such systems for forecasting. A realistic representation of land surface hydrologic processes is a critical element for improving forecasts, but often comes at the substantial cost of forecast system agility and efficiency. While popular grid-based models support the distributed representation of land surface processes, intermediate-scale Hydrologic Unit Code (HUC)-based modeling could provide a more efficient and process-aligned spatial discretization, reducing the need for tradeoffs between model complexity and critical forecasting requirements such as ensemble methods and comprehensive model calibration. The National Center for Atmospheric Research is collaborating with the University of Washington, the Bureau of Reclamation and the USACE to implement, assess, and demonstrate real-time, over-the-loop distributed streamflow forecasting for several large western US river basins and regions. In this presentation, we present early results from short- to medium-range hydrologic and streamflow forecasts for the Pacific Northwest (PNW). We employ real-time 1/16th-degree daily ensemble model forcings as well as downscaled Global Ensemble Forecasting System (GEFS) meteorological forecasts. These datasets drive an intermediate-scale configuration of the Structure for Unifying Multiple Modeling Alternatives (SUMMA) model, which represents the PNW using over 11,700 HUCs. The system produces not only streamflow forecasts (using the Mizu…

  1. Large amplitude waves and fields in plasmas

    International Nuclear Information System (INIS)

    Angelis, U. de; Naples Univ.

    1990-02-01

    In this review, based mostly on the results of the recent workshop on "Large Amplitude Waves and Fields in Plasmas" held at ICTP (Trieste, Italy) in May 1989 during the Spring College on Plasma Physics, I will concentrate on underdense, cold, homogeneous plasmas, discussing some of the alternative (to fusion) uses of laser-plasma interaction. In Part I an outline of some basic non-linear processes is given, together with some recent experimental results. The processes are chosen because of their relevance to the applications or because new interesting developments have been reported at the ICTP workshop (or both). In Part II the excitation mechanisms and uses of large amplitude plasma waves are presented: these include phase conjugation in plasmas, plasma-based accelerators (beat-wave, plasma wake-field and laser wake-field), plasma lenses and plasma wigglers for Free Electron Lasers. (author)

  2. Decision analysis of mitigation and remediation of sedimentation within large wetland systems: a case study using Agassiz National Wildlife Refuge

    Science.gov (United States)

    Post van der Burg, Max; Jenni, Karen E.; Nieman, Timothy L.; Eash, Josh D.; Knutsen, Gregory A.

    2014-01-01

    Sedimentation has been identified as an important stressor across a range of wetland systems. The U.S. Fish and Wildlife Service has the responsibility of maintaining wetlands within its National Wildlife Refuge System for use by migratory waterbirds and other wildlife. Many of these wetlands could be negatively affected by accelerated rates of sedimentation, especially those located in agricultural parts of the landscape. In this report we document the results of a decision analysis project designed to help U.S. Fish and Wildlife Service staff at the Agassiz National Wildlife Refuge (herein referred to as the Refuge) determine a strategy for managing and mitigating the negative effects of sediment loading within Refuge wetlands. The Refuge's largest wetland, Agassiz Pool, has accumulated so much sediment that it has become dominated by hybrid cattail (Typha × glauca), and the ability of the staff to control water levels in the Agassiz Pool has been substantially reduced. This project consisted of a workshop with Refuge staff, local and regional stakeholders, and several technical and scientific experts. At the workshop we established Refuge management and stakeholder objectives, a range of possible management strategies, and assessed the consequences of those strategies. After deliberating a range of actions, the staff chose to consider the following three strategies: (1) an inexpensive strategy, which largely focused on using outreach to reduce external sediment inputs to the Refuge; (2) the most expensive option, which built on the first option and relied on additional infrastructure changes to the Refuge to increase management capacity; and (3) a strategy that was less expensive than strategy 2 and relied mostly on existing infrastructure to improve management capacity. Despite the fact that our assessments were qualitative, Refuge staff decided they had enough information to select the third strategy. Following our qualitative assessment, we discussed…

  3. A Large Hadron Electron Collider at CERN

    CERN Document Server

    Abelleira Fernandez, J L; Adzic, P; Akay, A N; Aksakal, H; Albacete, J L; Allanach, B; Alekhin, S; Allport, P; Andreev, V; Appleby, R B; Arikan, E; Armesto, N; Azuelos, G; Bai, M; Barber, D; Bartels, J; Behnke, O; Behr, J; Belyaev, A S; Ben-Zvi, I; Bernard, N; Bertolucci, S; Bettoni, S; Biswal, S; Blumlein, J; Bottcher, H; Bogacz, A; Bracco, C; Bracinik, J; Brandt, G; Braun, H; Brodsky, S; Bruning, O; Bulyak, E; Buniatyan, A; Burkhardt, H; Cakir, I T; Cakir, O; Calaga, R; Caldwell, A; Cetinkaya, V; Chekelian, V; Ciapala, E; Ciftci, R; Ciftci, A K; Cole, B A; Collins, J C; Dadoun, O; Dainton, J; Roeck, A.De; d'Enterria, D; DiNezza, P; Dudarev, A; Eide, A; Enberg, R; Eroglu, E; Eskola, K J; Favart, L; Fitterer, M; Forte, S; Gaddi, A; Gambino, P; Garcia Morales, H; Gehrmann, T; Gladkikh, P; Glasman, C; Glazov, A; Godbole, R; Goddard, B; Greenshaw, T; Guffanti, A; Guzey, V; Gwenlan, C; Han, T; Hao, Y; Haug, F; Herr, W; Herve, A; Holzer, B J; Ishitsuka, M; Jacquet, M; Jeanneret, B; Jensen, E; Jimenez, J M; Jowett, J M; Jung, H; Karadeniz, H; Kayran, D; Kilic, A; Kimura, K; Klees, R; Klein, M; Klein, U; Kluge, T; Kocak, F; Korostelev, M; Kosmicki, A; Kostka, P; Kowalski, H; Kraemer, M; Kramer, G; Kuchler, D; Kuze, M; Lappi, T; Laycock, P; Levichev, E; Levonian, S; Litvinenko, V N; Lombardi, A; Maeda, J; Marquet, C; Mellado, B; Mess, K H; Milanese, A; Milhano, J G; Moch, S; Morozov, I I; Muttoni, Y; Myers, S; Nandi, S; Nergiz, Z; Newman, P R; Omori, T; Osborne, J; Paoloni, E; Papaphilippou, Y; Pascaud, C; Paukkunen, H; Perez, E; Pieloni, T; Pilicer, E; Pire, B; Placakyte, R; Polini, A; Ptitsyn, V; Pupkov, Y; Radescu, V; Raychaudhuri, S; Rinolfi, L; Rizvi, E; Rohini, R; Rojo, J; Russenschuck, S; Sahin, M; Salgado, C A; Sampei, K; Sassot, R; Sauvan, E; Schaefer, M; Schneekloth, U; Schorner-Sadenius, T; Schulte, D; Senol, A; Seryi, A; Sievers, P; Skrinsky, A N; Smith, W; South, D; Spiesberger, H; Stasto, A M; Strikman, M; Sullivan, M; Sultansoy, S; Sun, Y P; Surrow, B; Szymanowski, L; Taels, P; Tapan, I; Tasci, T; Tassi, E; Kate, H.Ten; Terron, J; Thiesen, H; Thompson, L; Thompson, P; Tokushuku, K; Tomas Garcia, R; Tommasini, D; Trbojevic, D; Tsoupas, N; Tuckmantel, J; Turkoz, S; Trinh, T N; Tywoniuk, K; Unel, G; Ullrich, T; Urakawa, J; VanMechelen, P; Variola, A; Veness, R; Vivoli, A; Vobly, P; Wagner, J; Wallny, R; Wallon, S; Watt, G; Weiss, C; Wiedemann, U A; Wienands, U; Willeke, F; Xiao, B W; Yakimenko, V; Zarnecki, A F; Zhang, Z; Zimmermann, F; Zlebcik, R; Zomer, F; CERN. Geneva. LHeC Department

    2012-01-01

    This document provides a brief overview of the recently published report on the design of the Large Hadron Electron Collider (LHeC), which comprises its physics programme, accelerator physics, technology and main detector concepts. The LHeC exploits and develops challenging, though principally existing, accelerator and detector technologies. This summary is complemented by brief illustrations of some of the highlights of the physics programme, which relies on a vastly extended kinematic range, luminosity and unprecedented precision in deep inelastic scattering. Illustrations are provided regarding high precision QCD, new physics (Higgs, SUSY) and electron-ion physics. The LHeC is designed to run synchronously with the LHC in the twenties and to achieve an integrated luminosity of O(100) fb$^{-1}$. It will become the cleanest high resolution microscope of mankind and will substantially extend as well as complement the investigation of the physics of the TeV energy scale, which has been enabled by the LHC.

  4. Management of large losses of substance in primary molars

    OpenAIRE

    Reynal, Florence

    2016-01-01

    This work aims to compare the different coronary restorative materials used for large tooth losses in primary molars. The first part of this thesis is dedicated to a description of the four main materials used in coronary restorations of molars : amalgam, glass-ionomer cements, resin-based composite and pedodontic crown. The second part is a systematic review of the literature. The aim is to compare the long-term survival rates of different restoration techniques, in order to help the practit...

  5. Large-scale weakly supervised object localization via latent category learning.

    Science.gov (United States)

    Chong Wang; Kaiqi Huang; Weiqiang Ren; Junge Zhang; Maybank, Steve

    2015-04-01

    Localizing objects in cluttered backgrounds is challenging under large-scale weakly supervised conditions. Due to the cluttered image condition, objects usually have large ambiguity with backgrounds. Besides, there is also a lack of effective algorithms for large-scale weakly supervised localization in cluttered backgrounds. However, backgrounds contain useful latent information, e.g., the sky in the aeroplane class. If this latent information can be learned, object-background ambiguity can be largely reduced and background can be suppressed effectively. In this paper, we propose latent category learning (LCL) for large-scale cluttered conditions. LCL is an unsupervised learning method which requires only image-level class labels. First, we use latent semantic analysis with a semantic object representation to learn the latent categories, which represent objects, object parts or backgrounds. Second, to determine which category contains the target object, we propose a category selection strategy that evaluates each category's discrimination. Finally, we propose online LCL for use in large-scale conditions. Evaluation on the challenging PASCAL Visual Object Classes (VOC) 2007 and the large-scale ImageNet Large Scale Visual Recognition Challenge 2013 detection data sets shows that the method can improve the annotation precision by 10% over previous methods. More importantly, we achieve a detection precision which outperforms previous results by a large margin and is competitive with the supervised deformable part model 5.0 baseline on both data sets.
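
    The first two steps, learning latent categories via latent semantic analysis over image representations and then selecting the most discriminative one, can be sketched as follows. The TF-IDF weighting and the class-mean selection score are simplifications of what the paper does, and all data in the usage example are synthetic.

```python
import numpy as np

def latent_categories(X, k=10):
    """Latent semantic analysis over a bag-of-visual-words matrix.

    X : (n_images, n_visual_words) count matrix.
    Returns per-image loadings on k latent categories, which may
    correspond to objects, object parts, or backgrounds.
    """
    idf = np.log(X.shape[0] / (1.0 + (X > 0).sum(axis=0)))  # reweighting
    U, s, Vt = np.linalg.svd(X * idf, full_matrices=False)  # the LSA step
    return U[:, :k] * s[:k]          # image loadings per latent category

def select_category(loadings, labels):
    """Pick the latent category that best separates positive images,
    scored by the difference of class-mean loadings (a simple proxy for
    the discrimination criterion in the paper)."""
    pos, neg = loadings[labels == 1], loadings[labels == 0]
    score = np.abs(pos.mean(axis=0) - neg.mean(axis=0))
    return int(np.argmax(score))

rng = np.random.default_rng(3)
X = rng.poisson(1.0, size=(200, 500))         # synthetic visual-word counts
labels = (rng.random(200) > 0.5).astype(int)  # image-level class labels
loadings = latent_categories(X, k=10)
print(select_category(loadings, labels))
```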

  6. Can we rely on the antiretroviral treatment as the only means for human immunodeficiency virus prevention? A Public Health perspective.

    Science.gov (United States)

    Mozalevskis, Antons; Manzanares-Laya, Sandra; García de Olalla, Patricia; Moreno, Antonio; Jacques-Aviñó, Constanza; Caylà, Joan A

    2015-11-01

    The evidence that supports the preventive effect of combination antiretroviral treatment (cART) on HIV sexual transmission suggested the so-called 'treatment as prevention' (TAP) strategy as a promising tool for slowing down HIV transmission. As the messages and attitudes towards condom use in the context of TAP appear to be somewhat confusing, the aim here is to assess whether relying on cART alone to prevent HIV transmission can currently be recommended from the Public Health perspective. A review is made of the literature on the effects of the TAP strategy on HIV transmission and the epidemiology of other sexually transmitted infections (STIs) in the cART era, and of recommendations from Public Health institutions on TAP as of February 2014. The evolution of HIV and other STIs in Barcelona from 2007 to 2012 has also been analysed. Given that the widespread use of cART has coincided with an increasing incidence of HIV and other STIs, mainly amongst men who have sex with men, a combination of diversified prevention methods should always be considered and recommended in counselling. An informed decision on whether to stop using condoms should only be made by partners within stable couples, and after receiving all the up-to-date information regarding TAP. From the public health perspective, primary prevention should be a priority; therefore relying on cART alone is not a sufficient strategy to prevent new HIV infections and other STIs. Copyright © 2014 Elsevier España, S.L.U. y Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.

  7. What lessons from part 2 of the fifth UN's Intergovernmental Panel on Climate Change (IPCC) report?

    International Nuclear Information System (INIS)

    Laville, Bettina

    2014-04-01

    This paper analyses part 2 of the fifth UN Intergovernmental Panel on Climate Change (IPCC) report, which describes, with detailed mapping, the impacts of global warming on species, oceans and the economy. The IPCC proposes a geopolitical model of continuous and smooth adaptation to climate change which would imply a new, cooperation-creating economy and would mitigate the risks of international conflicts. This new economy must rely on strong public organisations, which implies restoring public accounts beforehand.

  8. A map of the cosmic microwave background radiation from the Wilkinson Microwave Anisotropy Probe (WMAP), showing the large-scale fluctuations (the quadrupole and octopole) isolated by an analysis done partly by theorists at CERN.

    CERN Multimedia

    2004-01-01

    A recent analysis, in part by theorists working at CERN, suggests a new view of the cosmic microwave background radiation. It seems the solar system, rather than the universe, causes the radiation's large-scale fluctuations, similar to the bass in a song.

  9. Data science and symbolic AI: Synergies, challenges and opportunities

    KAUST Repository

    Hoehndorf, Robert

    2017-06-02

    Symbolic approaches to artificial intelligence represent things within a domain of knowledge through physical symbols, combine symbols into symbol expressions, and manipulate symbols and symbol expressions through inference processes. While a large part of Data Science relies on statistics and applies statistical approaches to artificial intelligence, there is an increasing potential for successfully applying symbolic approaches as well. Symbolic representations and symbolic inference are close to human cognitive representations and therefore comprehensible and interpretable; they are widely used to represent data and metadata, and their specific semantic content must be taken into account for analysis of such information; and human communication largely relies on symbols, making symbolic representations a crucial part in the analysis of natural language. Here we discuss the role symbolic representations and inference can play in Data Science, highlight the research challenges from the perspective of the data scientist, and argue that symbolic methods should become a crucial component of the data scientists’ toolbox.

  10. Data science and symbolic AI: Synergies, challenges and opportunities

    KAUST Repository

    Hoehndorf, Robert; Queralt-Rosinach, Núria

    2017-01-01

    Symbolic approaches to artificial intelligence represent things within a domain of knowledge through physical symbols, combine symbols into symbol expressions, and manipulate symbols and symbol expressions through inference processes. While a large part of Data Science relies on statistics and applies statistical approaches to artificial intelligence, there is an increasing potential for successfully applying symbolic approaches as well. Symbolic representations and symbolic inference are close to human cognitive representations and therefore comprehensible and interpretable; they are widely used to represent data and metadata, and their specific semantic content must be taken into account for analysis of such information; and human communication largely relies on symbols, making symbolic representations a crucial part in the analysis of natural language. Here we discuss the role symbolic representations and inference can play in Data Science, highlight the research challenges from the perspective of the data scientist, and argue that symbolic methods should become a crucial component of the data scientists’ toolbox.

  11. Low-Temperature and Rapid Growth of Large Single-Crystalline Graphene with Ethane.

    Science.gov (United States)

    Sun, Xiao; Lin, Li; Sun, Luzhao; Zhang, Jincan; Rui, Dingran; Li, Jiayu; Wang, Mingzhan; Tan, Congwei; Kang, Ning; Wei, Di; Xu, H Q; Peng, Hailin; Liu, Zhongfan

    2018-01-01

    Future applications of graphene rely heavily on the production of large-area, high-quality graphene, especially large single-crystalline graphene, due to the reduction of defects caused by grain boundaries. However, current methodologies for growing large single-crystalline graphene suffer from low growth rates, and as a result industrial graphene production is always confronted by high energy consumption, primarily caused by high growth temperatures and long growth times. Herein, a new growth condition, achieved by using ethane as the carbon feedstock, for low-temperature yet rapid growth of large single-crystalline graphene is reported. The ethane condition gives a growth rate about four times faster than methane, achieving about 420 µm min⁻¹ for the growth of sub-centimeter graphene single crystals at a temperature of about 1000 °C. In addition, the temperature threshold to obtain graphene using ethane can be reduced to 750 °C, lower than the general growth temperature threshold (about 1000 °C) with methane on copper foil. Meanwhile, ethane always maintains a higher graphene growth rate than methane at the same growth temperature. This study demonstrates that ethane is indeed a potential carbon source for efficient growth of large single-crystalline graphene, and thus paves the way for graphene in high-end electronic and optoelectronic applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Integration of Detectors Into a Large Experiment: Examples From ATLAS and CMS

    CERN Document Server

    Froidevaux, D

    2011-01-01

    Integration of Detectors Into a Large Experiment: Examples From ATLAS and CMS, part of 'Landolt-Börnstein - Group I Elementary Particles, Nuclei and Atoms: Numerical Data and Functional Relationships in Science and Technology, Volume 21B2: Detectors for Particles and Radiation. Part 2: Systems and Applications'. This document is part of Part 2 'Principles and Methods' of Subvolume B 'Detectors for Particles and Radiation' of Volume 21 'Elementary Particles' of Landolt-Börnstein - Group I 'Elementary Particles, Nuclei and Atoms'. It contains the chapter '5 Integration of Detectors Into a Large Experiment: Examples From ATLAS and CMS', with the following contents: 5.1 Introduction (5.1.1 The context; 5.1.2 The main initial physics goals of ATLAS and CMS at the LHC; 5.1.3 A snapshot of the current status of the ATLAS and CMS experiments); 5.2 Overall detector concept and magnet systems (5.2.1 Overall detector concept; 5.2.2 Magnet systems; 5.2.2.1 Rad…)

  13. Large behavioral variability of motile E. coli revealed in 3D spatial exploration

    Science.gov (United States)

    Figueroa-Morales, N.; Darnige, T.; Martinez, V.; Douarche, C.; Soto, R.; Lindner, A.; Clement, E.

    2017-11-01

    Bacterial motility determines the spatio-temporal structure of microbial communities, controls infection spreading and the microbiota organization in guts or in soils. Quantitative modeling of chemotaxis and statistical descriptions of active bacterial suspensions currently rely on the classical vision of a run-and-tumble strategy exploited by bacteria to explore their environment. Here we report a large behavioral variability of wild-type E. coli, revealed in their three-dimensional trajectories. We found a broad distribution of run times for individual cells, in stark contrast with the accepted vision of a single characteristic time. We relate our results to the slow fluctuations of a signaling protein which triggers the switching of the flagellar motor reversal responsible for tumbles. We demonstrate that such a large distribution of run times introduces measurement biases in most practical situations. These results reconcile a notorious conundrum between observations of run times and motor switching statistics. Our study implies that the statistical modeling of transport properties and of the chemotactic response of bacterial populations needs to be profoundly revised to correctly account for the large variability of motility features.

  14. A Methodology for Estimating Large-Customer Demand Response Market Potential

    Energy Technology Data Exchange (ETDEWEB)

    Goldman, Charles; Hopper, Nicole; Bharvirkar, Ranjit; Neenan, Bernie; Cappers, Peter

    2007-08-01

    Demand response (DR) is increasingly recognized as an essential ingredient to well-functioning electricity markets. DR market potential studies can answer questions about the amount of DR available in a given area and from which market segments. Several recent DR market potential studies have been conducted, most adapting techniques used to estimate energy-efficiency (EE) potential. In this scoping study, we: reviewed and categorized seven recent DR market potential studies; recommended a methodology for estimating DR market potential for large, non-residential utility customers that uses price elasticities to account for behavior and prices; compiled participation rates and elasticity values from six DR options offered to large customers in recent years; and demonstrated our recommended methodology with large-customer market potential scenarios at an illustrative Northeastern utility. We observe that EE and DR have several important differences that argue for an elasticity approach for large-customer DR options that rely on customer-initiated response to prices, rather than the engineering approaches typical of EE potential studies. Base-case estimates suggest that offering DR options to large, non-residential customers results in 1-3% reductions in their class peak demand in response to prices or incentive payments of $500/MWh. Participation rates (i.e., enrollment in voluntary DR programs or acceptance of default hourly pricing) have the greatest influence on DR impacts of all factors studied, yet are the least well understood. Elasticity refinements to reflect the impact of enabling technologies and response at high prices provide more accurate market potential estimates, particularly when arc elasticities (rather than substitution elasticities) are estimated.
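
    As a rough, self-contained illustration of the elasticity approach recommended above, the sketch below turns an event price into an estimated class peak-demand reduction using a constant (arc) elasticity; every number is a hypothetical placeholder rather than a value from the report.

```python
def dr_peak_reduction(class_peak_mw, participation, arc_elasticity,
                      base_price, event_price):
    """Constant-elasticity demand response: Q1/Q0 = (P1/P0) ** elasticity."""
    quantity_ratio = (event_price / base_price) ** arc_elasticity
    per_participant_cut = 1.0 - quantity_ratio        # fractional reduction
    return class_peak_mw * participation * per_participant_cut

# Hypothetical inputs: 1,000 MW class peak, 20% participation,
# arc elasticity -0.05, price rising from $50/MWh to a $500/MWh event price.
mw = dr_peak_reduction(1000.0, 0.20, -0.05, 50.0, 500.0)
print(f"{mw:.1f} MW")  # ~21.7 MW, i.e. roughly 2% of the class peak
```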

  15. Mapping the electrical properties of large-area graphene

    Science.gov (United States)

    Bøggild, Peter; Mackenzie, David M. A.; Whelan, Patrick R.; Petersen, Dirch H.; Due Buron, Jonas; Zurutuza, Amaia; Gallop, John; Hao, Ling; Jepsen, Peter U.

    2017-12-01

    The significant progress in terms of fabricating large-area graphene films for transparent electrodes, barriers, electronics, telecommunication and other applications has not yet been accompanied by efficient methods for characterizing the electrical properties of large-area graphene. While in the early prototyping as well as research and development phases, electrical test devices created by conventional lithography have provided adequate insights, this approach is becoming increasingly problematic due to complications such as irreversible damage to the original graphene film, contamination, and a high measurement effort per device. In this topical review, we provide a comprehensive overview of the issues that need to be addressed by any large-area characterisation method for electrical key performance indicators, with emphasis on electrical uniformity and on how this can be used to provide a more accurate analysis of the graphene film. We review and compare three different, but complementary approaches that rely either on fixed contacts (dry laser lithography), movable contacts (micro four point probes) or non-contact (terahertz time-domain spectroscopy) interaction between the probe and the graphene film, all of which have been optimized for maximal throughput and accuracy, and minimal damage to the graphene film. Of these three, the main emphasis is on THz time-domain spectroscopy, which is non-destructive, highly accurate and allows conductivity, carrier density and carrier mobility to be mapped across arbitrarily large areas at rates that by far exceed any other known method. We also detail how the THz conductivity spectra give insight into the scattering mechanisms and, through that, the microstructure of graphene films subject to different growth and transfer processes. The perspectives for upscaling to realistic production environments are discussed.
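
    The record itself does not spell out the conversion, but THz-TDS conductivity mapping commonly relies on the thin-film (Tinkham) expression relating the substrate-referenced transmission to the sheet conductivity; the sketch below applies it, with the substrate index and the measured transmission as illustrative assumptions.

```python
import numpy as np

Z0 = 376.73  # impedance of free space (ohms)

def sheet_conductivity(t_ratio, n_substrate=1.96):
    """Thin-film (Tinkham) formula: sigma_s = (1 + n_sub)/Z0 * (1/T - 1).
    t_ratio: complex transmission of film-on-substrate referenced to the
    bare substrate; n_substrate: substrate THz refractive index (the value
    here, typical of fused silica, is an assumption)."""
    return (1.0 + n_substrate) / Z0 * (1.0 / t_ratio - 1.0)

t = 0.85 * np.exp(1j * 0.02)      # hypothetical measured transmission
sigma = sheet_conductivity(t)
print(f"sheet conductance ~ {sigma.real * 1e3:.2f} mS")  # ~1.4 mS
```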

  16. Sensitivity analysis for large-scale problems

    Science.gov (United States)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to frame-type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.

  17. Large space antenna concepts for ESGP

    Science.gov (United States)

    Love, Allan W.

    1989-01-01

    It is appropriate to note that 1988 marks the 100th anniversary of the birth of the reflector antenna. It was in 1888 that Heinrich Hertz constructed the first one, a parabolic cylinder made of sheet zinc bent to shape and supported by a wooden frame. Hertz demonstrated the existence of the electromagnetic waves that had been predicted theoretically by James Clerk Maxwell some 22 years earlier. In the 100 years since Hertz's pioneering work the field of electromagnetics has grown explosively; one of its technologies is the remote sensing of planet Earth by means of electromagnetic waves, using both passive and active sensors located on an Earth Science Geostationary Platform (ESGP). For these purposes some exquisitely sensitive instruments were developed, capable of reaching to the fringes of the known universe, and relying on large reflector antennas to collect the minute signals and direct them to appropriate receiving devices. These antennas are electrically large, with diameters of 3000 to 10,000 wavelengths and with gains approaching 80 to 90 dB. Some of the reflector antennas proposed for ESGP are also electrically large. For example, at 220 GHz a 4-meter reflector is nearly 3000 wavelengths in diameter, and is electrically quite comparable with a number of the millimeter-wave radiotelescopes being built around the world. Its surface must meet stringent requirements on rms smoothness and the ability to resist deformation. Here, however, the environmental forces at work are different. There are no varying forces due to wind and gravity, but inertial forces due to mechanical scanning must be reckoned with. With this form of beam scanning, minimizing momentum transfer to the space platform is a problem that demands an answer. Finally, reflector surface distortion due to thermal gradients caused by the solar flux probably represents the most challenging problem to be solved if these Large Space Antennas are to achieve the gain and resolution required of

  18. Large Scale Anomalies of the Cosmic Microwave Background with Planck

    DEFF Research Database (Denmark)

    Frejsel, Anne Mette

    This thesis focuses on the large scale anomalies of the Cosmic Microwave Background (CMB) and their possible origins. The investigations consist of two main parts. The first part is on statistical tests of the CMB, and the consistency of both maps and power spectrum. We find that the Planck data...

  19. The seismic cycles of large Romanian earthquakes: The physical foundation, and the next large earthquake in Vrancea

    International Nuclear Information System (INIS)

    Purcaru, G.

    2002-01-01

    The occurrence patterns of large and great earthquakes at the subduction-zone interface and within the slab are complex in their space-time dynamics, making even long-term forecasts very difficult. For some favourable cases where a predictive (empirical) law was found, successful predictions were possible (e.g. the Aleutians, the Kuriles, etc.). For the large Romanian events (M > 6.7), occurring in the Vrancea seismic slab below 60 km, Purcaru (1974) first found a law for occurrence time and magnitude: the law of 'quasicycles' and 'supercycles', for large and largest events (M > 7.25), respectively. Purcaru's quantitative model of these seismic cycles has three time-bands (periods of large earthquakes) per century, discovered using the (however incomplete) earthquake history (1100-1973) of large Vrancea earthquakes for which M was initially estimated (Purcaru, 1974, 1979). Our long-term prediction model is essentially quasideterministic: it predicts the time and magnitude uniquely, but since it is not strictly deterministic the forecasts are interval-valued. It predicted the next large earthquake for 1980 in the 3rd time-band (1970-1990); the event occurred in 1977 (M 7.1, Mw 7.5), so the prediction was successful in the long-term sense. We discuss the unpredicted events of 1986 and 1990. Since the laws are phenomenological, we give their physical foundation based on the large scale of the rupture zone (RZ) and the subscale of the rupture process (RP). First results show that: (1) the 1940 event (h = 122 km) ruptured the lower part of the oceanic slab entirely along strike and down dip, and similarly for 1977 but in its upper part; (2) the RZs of the 1977 and 1990 events overlap, and the first asperity of the 1977 event was rebroken in 1990. This shows that the size of the events depends strongly on the RZ, asperity size/strength and thus on the failure stress level (FSL), but not on depth; (3) when the FSL of high-strength (HS) larger zones is critical, the largest events (e.g. 1802, 1940) occur, thus explaining the supercycles (the 1940

  20. Preface to the volume Large Rivers

    Science.gov (United States)

    Latrubesse, Edgardo M.; Abad, Jorge D.

    2018-02-01

    The study and knowledge of the geomorphology of large rivers have increased significantly in recent years, and the factors that triggered these advances are multiple. On the one hand, modern technologies became more accessible, and their widespread use allowed the collection of data from large rivers as never before. The generalized use of high-tech data collection with geophysical equipment such as acoustic Doppler current profilers (ADCPs) and multibeam echosounders, plus the availability of geospatial and computational tools for morphodynamic, hydrological and hydrosedimentological modeling, has accelerated scientific production on the geomorphology of large rivers at a global scale. Despite the advances, there is still a lot of work ahead. A good part of the world's large rivers are in the tropics, and many are still unexplored. The tropics also hold crucial fluvial basins that concentrate a good part of the gross domestic product of large countries, such as the Parana River basin in Argentina and Brazil, the Ganges-Brahmaputra in India, the Indus River in Pakistan, and the Mekong River in several countries of Southeast Asia. The environmental importance of tropical rivers is also outstanding. They hold the highest biodiversity of fluvial fauna and alluvial vegetation, and many of them, particularly those in Southeast Asia, are among the most hazardous systems for floods in the entire world. Tropical rivers draining mountain chains such as the Himalaya, the Andes and insular Southeast Asia are also among the most heavily sediment-laden rivers and play a key role both in the storage of sediment at continental scale and in the transfer of sediment from the continents to the ocean at planetary scale (Andermann et al., 2012; Latrubesse and Restrepo, 2014; Milliman and Syvitski, 1992; Milliman and Farnsworth, 2011; Sinha and Friend, 1994).

  1. Exploiting Data Sparsity for Large-Scale Matrix Computations

    KAUST Repository

    Akbudak, Kadir

    2018-02-24

    Exploiting data sparsity in dense matrices is an algorithmic bridge between architectures that are increasingly memory-austere on a per-core basis and extreme-scale applications. The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library tackles this challenging problem by achieving significant reductions in time to solution and memory footprint, while preserving a specified accuracy requirement of the application. HiCMA provides a high-performance implementation on distributed-memory systems of one of the most widely used matrix factorizations in large-scale scientific applications, i.e., the Cholesky factorization. It employs the tile low-rank data format to compress the dense data-sparse off-diagonal tiles of the matrix. It then decomposes the matrix computations into interdependent tasks and relies on the dynamic runtime system StarPU for asynchronous out-of-order scheduling, while allowing high user-productivity. Performance comparisons and memory footprint on matrix dimensions up to eleven million show a performance gain and memory saving of more than an order of magnitude for both metrics on thousands of cores, against state-of-the-art open-source and vendor optimized numerical libraries. This represents an important milestone in enabling large-scale matrix computations toward solving big data problems in geospatial statistics for climate/weather forecasting applications.
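
    HiCMA's production code relies on tuned kernels and StarPU task scheduling; as a minimal stand-in for the tile low-rank idea it implements, the sketch below compresses a data-sparse off-diagonal tile with a truncated SVD (the kernel, tile size and tolerance are hypothetical).

```python
import numpy as np

def compress_tile(tile, tol):
    """Compress a dense tile into low-rank factors U, V with tile ~= U @ V.T,
    keeping only singular values above tol relative to the largest one."""
    U, s, Vt = np.linalg.svd(tile, full_matrices=False)
    rank = max(1, int(np.sum(s > tol * s[0])))
    return U[:, :rank] * s[:rank], Vt[:rank, :].T

# A smooth kernel evaluated on two well-separated clusters gives a
# data-sparse off-diagonal tile, as in hierarchical matrices.
x = np.linspace(0.0, 1.0, 256)
y = np.linspace(9.0, 10.0, 256)
tile = 1.0 / np.abs(x[:, None] - y[None, :])
U, V = compress_tile(tile, 1e-8)
print(U.shape[1],                                   # numerical rank (small)
      np.linalg.norm(tile - U @ V.T) / np.linalg.norm(tile))
```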

  2. Exploiting Data Sparsity for Large-Scale Matrix Computations

    KAUST Repository

    Akbudak, Kadir; Ltaief, Hatem; Mikhalev, Aleksandr; Charara, Ali; Keyes, David E.

    2018-01-01

    Exploiting data sparsity in dense matrices is an algorithmic bridge between architectures that are increasingly memory-austere on a per-core basis and extreme-scale applications. The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library tackles this challenging problem by achieving significant reductions in time to solution and memory footprint, while preserving a specified accuracy requirement of the application. HiCMA provides a high-performance implementation on distributed-memory systems of one of the most widely used matrix factorizations in large-scale scientific applications, i.e., the Cholesky factorization. It employs the tile low-rank data format to compress the dense data-sparse off-diagonal tiles of the matrix. It then decomposes the matrix computations into interdependent tasks and relies on the dynamic runtime system StarPU for asynchronous out-of-order scheduling, while allowing high user-productivity. Performance comparisons and memory footprint on matrix dimensions up to eleven million show a performance gain and memory saving of more than an order of magnitude for both metrics on thousands of cores, against state-of-the-art open-source and vendor optimized numerical libraries. This represents an important milestone in enabling large-scale matrix computations toward solving big data problems in geospatial statistics for climate/weather forecasting applications.

  3. The development of a prototype facility for a large diameter vertical axis wind turbine

    Energy Technology Data Exchange (ETDEWEB)

    1975-01-01

    A proposal is made in this document for the design, construction, assembly and test of a demonstration wind turbine generator system. The specific objective of the program will be to demonstrate that the proposed system satisfies the need for cheap power generation at those remote meteorological stations which currently rely exclusively on fossil fuel that must be transported to the site at great cost. It intends to demonstrate that a large vertical axis wind turbine system is within the current state of the art, is practical and is economically attractive. The program will include a conceptual design phase, a detail design phase, a construction and assembly phase at a selected site and a demonstration phase during which data will be gathered on operation at this large scale. A theory of operation of the proposed design is included. 4 refs., 3 figs.

  4. Urbanisation, poverty and employment: the large metropolis in the third world.

    Science.gov (United States)

    Singh, A

    1992-01-01

    "The main purpose of this paper is to provide an overall review of the chief analytical as well as economic policy issues in relation to Third World cities in the light of the available theoretical and empirical studies on urbanisation, poverty and employment in the developing countries.... Part I...provides basic information on urbanisation in the Third World...[and] outlines the nature and extent of urban poverty in these large cities and considers the impact of the world economic crisis on the urban poor. Part II of the paper discusses the most important structural features of urbanisation in relation to economic development....Finally, Part III briefly examines policy issues in relation to urbanisation and poverty in the Third World's large cities." excerpt

  5. Large orders in strong-field QED

    Energy Technology Data Exchange (ETDEWEB)

    Heinzl, Thomas [School of Mathematics and Statistics, University of Plymouth, Drake Circus, Plymouth PL4 8AA (United Kingdom); Schroeder, Oliver [Science-Computing ag, Hagellocher Weg 73, D-72070 Tuebingen (Germany)

    2006-09-15

    We address the issue of large-order expansions in strong-field QED. Our approach is based on the one-loop effective action encoded in the associated photon polarization tensor. We concentrate on the simple case of crossed fields aiming at possible applications of high-power lasers to measure vacuum birefringence. A simple next-to-leading order derivative expansion reveals that the indices of refraction increase with frequency. This signals normal dispersion in the small-frequency regime where the derivative expansion makes sense. To gain information beyond that regime we determine the factorial growth of the derivative expansion coefficients evaluating the first 82 orders by means of computer algebra. From this we can infer a nonperturbative imaginary part for the indices of refraction indicating absorption (pair production) as soon as energy and intensity become (super)critical. These results compare favourably with an analytic evaluation of the polarization tensor asymptotics. Kramers-Kronig relations finally allow for a nonperturbative definition of the real parts as well and show that absorption goes hand in hand with anomalous dispersion for sufficiently large frequencies and fields.

  6. Considerations of coil protection and electrical connection schemes in large superconducting toroidal magnet system

    International Nuclear Information System (INIS)

    Yeh, H.T.

    1976-03-01

    A preliminary comparison of several different coil protection and electrical connection schemes for large superconducting toroidal magnet systems (STMS) is carried out. The tentative recommendation is to rely on external dump resistors for coil protection and to connect the coils in the toroidal magnet in several parallel loops (e.g., every fourth coil is connected into a single series loop). For the fault condition when a single coil quenches, the quenched coil should be isolated from its loop by switching devices. The magnet, as a whole, should probably be discharged if more than a few coils have quenched

  7. Assessing Programming Costs of Explicit Memory Localization on a Large Scale Shared Memory Multiprocessor

    Directory of Open Access Journals (Sweden)

    Silvio Picano

    1992-01-01

    We present detailed experimental work involving a commercially available large-scale shared-memory multiple instruction stream-multiple data stream (MIMD) parallel computer having a software-controlled cache coherence mechanism. To make effective use of such an architecture, the programmer is responsible for designing the program's structure to match the underlying multiprocessor's capabilities. We describe the techniques used to exploit our multiprocessor (the BBN TC2000) on a network simulation program, showing the resulting performance gains and the associated programming costs. We show that an efficient implementation relies heavily on the user's ability to explicitly manage the memory system.

  8. Differences between child and adult large-scale functional brain networks for reading tasks.

    Science.gov (United States)

    Liu, Xin; Gao, Yue; Di, Qiqi; Hu, Jiali; Lu, Chunming; Nan, Yun; Booth, James R; Liu, Li

    2018-02-01

    Reading is an important high-level cognitive function of the human brain, requiring interaction among multiple brain regions. Revealing differences between children's large-scale functional brain networks for reading tasks and those of adults helps us to understand how the functional network changes over reading development. Here we used functional magnetic resonance imaging data of 17 adults (19-28 years old) and 16 children (11-13 years old), and graph theoretical analyses to investigate age-related changes in large-scale functional networks during rhyming and meaning judgment tasks on pairs of visually presented Chinese characters. We found that: (1) adults had stronger inter-regional connectivity and nodal degree in occipital regions, while children had stronger inter-regional connectivity in temporal regions, suggesting that adults rely more on visual orthographic processing whereas children rely more on auditory phonological processing during reading. (2) Only adults showed between-task differences in inter-regional connectivity and nodal degree, whereas children showed no task differences, suggesting the topological organization of adults' reading network is more specialized. (3) Children showed greater inter-regional connectivity and nodal degree than adults in multiple subcortical regions; the hubs in children were more distributed in subcortical regions while the hubs in adults were more distributed in cortical regions. These findings suggest that reading development is manifested by a shift from reliance on subcortical to cortical regions. Taken together, our study suggests that Chinese reading development is supported by developmental changes in brain connectivity properties, and some of these changes may be domain-general while others may be specific to the reading domain. © 2017 Wiley Periodicals, Inc.
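
    As a minimal sketch of the graph-theoretical measures named above, the following computes nodal degree from a thresholded functional-connectivity matrix; the random time series, region count and threshold are placeholders, not the study's data or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 90))      # 200 time points x 90 brain regions

corr = np.corrcoef(ts.T)                 # region-by-region connectivity
np.fill_diagonal(corr, 0.0)              # ignore self-connections
adj = (np.abs(corr) > 0.3).astype(int)   # threshold into a binary graph
degree = adj.sum(axis=1)                 # nodal degree per region
print(degree[:10])
```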

  9. Fighting fire in the heat of the day: An analysis of operational and environmental conditions of use for large airtankers in United States fire suppression

    Science.gov (United States)

    Crystal S. Stonesifer; Dave Calkin; Matthew P. Thompson; Keith D. Stockmann

    2016-01-01

    Large airtanker use is widespread in wildfire suppression in the United States. The current approach to nationally dispatching the fleet of federal contract airtankers relies on filling requests for airtankers to achieve suppression objectives identified by fire managers at the incident level. In general, demand is met if resources are available, and the...

  10. Can Student Teachers Acquire Core Skills for Teaching from Part-Time Employment?

    Science.gov (United States)

    Wylie, Ken; Cummins, Brian

    2013-01-01

    Part-time employment among university students has become commonplace internationally. Research has largely focused on the impact of part-time employment on academic performance. This research takes an original approach in that it poses the question whether students can acquire core skills relevant to teaching from their part-time employment. The…

  11. The Socialization of Part-Time Faculty at Comprehensive Public Colleges

    Science.gov (United States)

    Frias, Mary Lou

    2010-01-01

    Fiscal constraints, understaffing, increased enrollments, demand for professional education, and the need for a more flexible workforce account for increases in the employment of part-time faculty in higher education. Part-time faculty tend to teach large, introductory courses for first and second-year students, who are in the "risk…

  12. Learning Similar Actions by Reinforcement or Sensory-Prediction Errors Rely on Distinct Physiological Mechanisms.

    Science.gov (United States)

    Uehara, Shintaro; Mawase, Firas; Celnik, Pablo

    2017-09-14

    Humans can acquire knowledge of new motor behavior via different forms of learning. The two forms most commonly studied have been the development of internal models based on sensory-prediction errors (error-based learning) and success-based feedback (reinforcement learning). Human behavioral studies suggest these are distinct learning processes, though the neurophysiological mechanisms that are involved have not been characterized. Here, we evaluated physiological markers from the cerebellum and the primary motor cortex (M1) using noninvasive brain stimulation while healthy participants trained on finger-reaching tasks. We manipulated the extent to which subjects rely on error-based or reinforcement learning by providing either vector or binary feedback about task performance. Our results demonstrated a double dissociation: learning the task mainly via error-based mechanisms leads to cerebellar plasticity modifications but not long-term potentiation (LTP)-like plasticity changes in M1, while learning a similar action via reinforcement mechanisms elicited M1 LTP-like plasticity but not cerebellar plasticity changes. Our findings indicate that learning complex motor behavior is mediated by the interplay of different forms of learning, weighing distinct neural mechanisms in M1 and the cerebellum. Our study provides insights for designing effective interventions to enhance human motor learning. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  13. Inventory Control of Spare Parts for Operating Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jong-Hyuck; Jang, Se-Jin; Hwang, Eui-Youp; Yoo, Sung-Soo; Yoo, Keun-Bae; Lee, Sang-Guk; Hong, Sung-Yull [Korea Electric Power Research Institute, Taejon (Korea, Republic of)

    2006-07-01

    Inventory control of spare parts plays an increasingly important role in operations management. The trade-off is clear: on one hand, a large stock of spare parts ties up a large amount of capital, while on the other hand, too little inventory may result in extremely costly emergency actions. This is why, during the last few decades, spare-parts inventory control has been the topic of many publications. Recently, management systems such as manufacturing resource planning (MRP) and enterprise resource planning (ERP) have been added. However, most of these contributions have a similar theoretical background: the concepts and techniques are mainly based on mathematical assumptions and on modeling spare-parts inventory situations. Nuclear utilities in Korea have several problems managing optimum spare-parts levels even though they use an MRP system, because most items have long lead times and are imported from the United States, Canada, France and elsewhere. In this paper, we examine the available inventory optimization models applicable to nuclear power plants and then select an optimum model and assumptions for devising spare-parts inventory strategies. We then develop a computer program to select and determine the optimum level of spare parts, to be controlled automatically by the KHNP ERP system. The main contribution of this paper is the development of a spare-parts inventory control model that can be applied to nuclear power plants in Korea.
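
    The record does not reproduce the selected model; as one classical building block of spare-parts inventory control, the sketch below computes a reorder point for a slow-moving part under Poisson lead-time demand (all parameter values are hypothetical).

```python
from math import exp

def poisson_cdf(k, lam):
    """P(X <= k) for Poisson-distributed demand with mean lam."""
    term, total = exp(-lam), exp(-lam)
    for i in range(1, k + 1):
        term *= lam / i
        total += term
    return total

def reorder_point(annual_demand, lead_time_years, service_level):
    """Smallest stock level covering lead-time demand with the target probability."""
    lam = annual_demand * lead_time_years
    s = 0
    while poisson_cdf(s, lam) < service_level:
        s += 1
    return s

# Hypothetical slow mover: 4 demands/year, 18-month lead time, 95% service.
print(reorder_point(4, 1.5, 0.95))  # -> 10 units on hand before reordering
```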

  14. Inventory Control of Spare Parts for Operating Nuclear Power Plants

    International Nuclear Information System (INIS)

    Park, Jong-Hyuck; Jang, Se-Jin; Hwang, Eui-Youp; Yoo, Sung-Soo; Yoo, Keun-Bae; Lee, Sang-Guk; Hong, Sung-Yull

    2006-01-01

    Inventory control of spare parts plays an increasingly important role in operations management. The trade-off is clear: on one hand, a large stock of spare parts ties up a large amount of capital, while on the other hand, too little inventory may result in extremely costly emergency actions. This is why, during the last few decades, spare-parts inventory control has been the topic of many publications. Recently, management systems such as manufacturing resource planning (MRP) and enterprise resource planning (ERP) have been added. However, most of these contributions have a similar theoretical background: the concepts and techniques are mainly based on mathematical assumptions and on modeling spare-parts inventory situations. Nuclear utilities in Korea have several problems managing optimum spare-parts levels even though they use an MRP system, because most items have long lead times and are imported from the United States, Canada, France and elsewhere. In this paper, we examine the available inventory optimization models applicable to nuclear power plants and then select an optimum model and assumptions for devising spare-parts inventory strategies. We then develop a computer program to select and determine the optimum level of spare parts, to be controlled automatically by the KHNP ERP system. The main contribution of this paper is the development of a spare-parts inventory control model that can be applied to nuclear power plants in Korea

  15. Termination of T cell priming relies on a phase of unresponsiveness promoting disengagement from APCs and T cell division.

    Science.gov (United States)

    Bohineust, Armelle; Garcia, Zacarias; Beuneu, Hélène; Lemaître, Fabrice; Bousso, Philippe

    2018-05-07

    T cells are primed in secondary lymphoid organs by establishing stable interactions with antigen-presenting cells (APCs). However, the cellular mechanisms underlying the termination of T cell priming and the initiation of clonal expansion remain largely unknown. Using intravital imaging, we observed that T cells typically divide without being associated with APCs. Supporting these findings, we demonstrate that recently activated T cells have an intrinsic defect in establishing stable contacts with APCs, a feature that was reflected by a blunted capacity to stop upon T cell receptor (TCR) engagement. T cell unresponsiveness was caused, in part, by a general block in extracellular calcium entry. Forcing TCR signals in activated T cells antagonized cell division, suggesting that T cell hyporesponsiveness acts as a safeguard mechanism against signals detrimental to mitosis. We propose that transient unresponsiveness represents an essential phase of T cell priming that promotes T cell disengagement from APCs and favors effective clonal expansion. © 2018 Bohineust et al.

  16. Influence of Computerized Sounding Out on Spelling Performance for Children who do and do not rely on AAC

    Science.gov (United States)

    McCarthy, Jillian H.; Hogan, Tiffany P.; Beukelman, David R.; Schwarz, Ilsa E.

    2015-01-01

    Purpose Spelling is an important skill for individuals who rely on augmentative and alternative communication (AAC). The purpose of this study was to investigate how computerized sounding out influenced the spelling accuracy of pseudo-words. Computerized sounding out was defined as elongating a word, providing an opportunity for a child to hear all the sounds in the word at a slower rate. Methods Seven children with cerebral palsy, four who use AAC and three who do not, participated in a single-subject AB design. Results The results of the study indicated that the use of computerized sounding out increased the phonological accuracy of the pseudo-words produced by participants. Conclusion The study provides preliminary evidence for the use of computerized sounding out during spelling tasks for children with cerebral palsy who do and do not use AAC. Future directions and clinical implications are discussed. PMID:24512195

  17. Large reservoirs: Chapter 17

    Science.gov (United States)

    Miranda, Leandro E.; Bettoli, Phillip William

    2010-01-01

    Large impoundments, defined as those with surface area of 200 ha or greater, are relatively new aquatic ecosystems in the global landscape. They represent important economic and environmental resources that provide benefits such as flood control, hydropower generation, navigation, water supply, commercial and recreational fisheries, and various other recreational and esthetic values. Construction of large impoundments was initially driven by economic needs, and ecological consequences received little consideration. However, in recent decades environmental issues have come to the forefront. In the closing decades of the 20th century societal values began to shift, especially in the developed world. Society is no longer willing to accept environmental damage as an inevitable consequence of human development, and it is now recognized that continued environmental degradation is unsustainable. Consequently, construction of large reservoirs has virtually stopped in North America. Nevertheless, in other parts of the world construction of large reservoirs continues. The emergence of systematic reservoir management in the early 20th century was guided by concepts developed for natural lakes (Miranda 1996). However, we now recognize that reservoirs are different and that reservoirs are not independent aquatic systems inasmuch as they are connected to upstream rivers and streams, the downstream river, other reservoirs in the basin, and the watershed. Reservoir systems exhibit longitudinal patterns both within and among reservoirs. Reservoirs are typically arranged sequentially as elements of an interacting network, filter water collected throughout their watersheds, and form a mosaic of predictable patterns. Traditional approaches to fisheries management such as stocking, regulating harvest, and in-lake habitat management do not always produce desired effects in reservoirs. As a result, managers may expend resources with little benefit to either fish or fishing. Some locally

  18. Large Scale Computing for the Modelling of Whole Brain Connectivity

    DEFF Research Database (Denmark)

    Albers, Kristoffer Jon

    organization of the brain in continuously increasing resolution. From these images, networks of structural and functional connectivity can be constructed. Bayesian stochastic block modelling provides a prominent data-driven approach for uncovering the latent organization, by clustering the networks into groups...... of neurons. Relying on Markov Chain Monte Carlo (MCMC) simulations as the workhorse in Bayesian inference however poses significant computational challenges, especially when modelling networks at the scale and complexity supported by high-resolution whole-brain MRI. In this thesis, we present how to overcome...... these computational limitations and apply Bayesian stochastic block models for un-supervised data-driven clustering of whole-brain connectivity in full image resolution. We implement high-performance software that allows us to efficiently apply stochastic blockmodelling with MCMC sampling on large complex networks...
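
    As a deliberately small-scale sketch of Bayesian stochastic block modelling with MCMC (the thesis concerns a far more sophisticated high-performance implementation), the following samples node-to-block assignments with Metropolis moves under a Bernoulli SBM with Beta(1,1) priors; the synthetic two-block network is purely illustrative.

```python
import numpy as np
from math import lgamma

def log_evidence(adj, z, K):
    """log P(A | z) for a Bernoulli SBM, block probabilities integrated
    out under Beta(1,1) priors (beta-binomial marginal per block pair)."""
    logp = 0.0
    for k in range(K):
        for l in range(k, K):
            ik, il = np.where(z == k)[0], np.where(z == l)[0]
            if k == l:
                pairs = len(ik) * (len(ik) - 1) // 2
                edges = int(adj[np.ix_(ik, ik)].sum()) // 2
            else:
                pairs = len(ik) * len(il)
                edges = int(adj[np.ix_(ik, il)].sum())
            # log Beta(edges + 1, pairs - edges + 1)
            logp += lgamma(edges + 1) + lgamma(pairs - edges + 1) - lgamma(pairs + 2)
    return logp

def mcmc_sbm(adj, K, steps, rng):
    """Metropolis sampler over cluster assignments z."""
    n = adj.shape[0]
    z = rng.integers(0, K, n)
    logp = log_evidence(adj, z, K)
    for _ in range(steps):
        i = rng.integers(n)
        old = z[i]
        z[i] = rng.integers(K)               # propose a new block for node i
        new_logp = log_evidence(adj, z, K)
        if np.log(rng.random()) < new_logp - logp:
            logp = new_logp                  # accept
        else:
            z[i] = old                       # reject: restore
    return z

rng = np.random.default_rng(1)
truth = np.repeat([0, 1], 20)                # two planted blocks of 20 nodes
probs = np.where(truth[:, None] == truth[None, :], 0.4, 0.05)
adj = (rng.random((40, 40)) < probs).astype(int)
adj = np.triu(adj, 1); adj = adj + adj.T     # undirected, no self-loops
print(mcmc_sbm(adj, 2, 3000, rng))           # block labels (up to permutation)
```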

  19. Lightweight computational steering of very large scale molecular dynamics simulations

    International Nuclear Information System (INIS)

    Beazley, D.M.

    1996-01-01

    We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy to use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages

  20. STEP flight experiments Large Deployable Reflector (LDR) telescope

    Science.gov (United States)

    Runge, F. C.

    1984-01-01

    Flight testing plans for a large deployable infrared reflector telescope to be tested on a space platform are discussed. Subsystem parts, subassemblies, and whole assemblies are discussed. Assurance of operational deployability, rigidization, alignment, and serviceability will be sought.

  1. Longitudinal emittance blowup in the large hadron collider

    CERN Document Server

    Baudrenghien, P

    2013-01-01

    The Large Hadron Collider (LHC) relies on Landau damping for longitudinal stability. To avoid decreasing the stability margin at high energy, the longitudinal emittance must be continuously increased during the acceleration ramp. Longitudinal blowup provides the required emittance growth. The method was implemented through the summer of 2010. Band-limited RF phase noise is injected in the main accelerating cavities during the whole ramp of about 11 min. Synchrotron frequencies change along the energy ramp, but the digitally created noise tracks the frequency change. The position of the noise band, relative to the nominal synchrotron frequency, and the bandwidth of the spectrum are set by pre-defined constants, making the diffusion stop at the edges of the demanded distribution. The noise amplitude is controlled by feedback using the measurement of the average bunch length. This algorithm reproducibly achieves the programmed bunch length of about 1.2 ns, at flat top with low bunch-to-bunch scatter and provides a...
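
    The record describes feedback that adjusts the injected noise amplitude from the measured average bunch length; a minimal proportional-control sketch of that idea follows, with the gain, target and limits invented for illustration only.

```python
def update_noise_amplitude(amplitude, measured_bunch_length_ns,
                           target_ns=1.2, gain=0.5, max_amplitude=1.0):
    """Raise the phase-noise amplitude when bunches are shorter than the
    target length (more diffusion needed), lower it when they are longer."""
    error = target_ns - measured_bunch_length_ns
    amplitude += gain * error
    return min(max(amplitude, 0.0), max_amplitude)

# One feedback step: bunches at 1.1 ns are too short, so drive harder.
print(update_noise_amplitude(0.30, 1.10))  # -> 0.35
```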

  2. The large dictionary on chemical engineering

    International Nuclear Information System (INIS)

    1995-03-01

    This book is a large dictionary of chemical engineering. It opens with a preface, an introduction from the publication committee, the signatures of the publication committee, and explanatory notes. The body of the book gives descriptions of the chemical engineering glossary, followed by appendixes and an index. The book consists of seven parts explaining the chemical engineering glossary and was compiled by the chemical engineering dictionary publication committee.

  3. Rucio - The next generation of large scale distributed system for ATLAS Data Management

    CERN Document Server

    Garonne, V; The ATLAS collaboration; Beermann, T; Goossens, L; Lassnig, M; Nairz, A; Stewart, GA; Vigne, V; Serfon, C

    2013-01-01

    Rucio is the next-generation Distributed Data Management (DDM) system, benefiting from recent advances in cloud and "Big Data" computing to address the scaling requirements of HEP experiments. Rucio is an evolution of the ATLAS DDM system Don Quijote 2 (DQ2), which has demonstrated very large scale data management capabilities, with more than 140 petabytes spread worldwide across 130 sites and accesses from 1,000 active users. However, DQ2 is reaching its limits in terms of scalability: it requires a large support staff to operate and is hard to extend with new technologies. Rucio will address these issues by relying on a conceptual data model and new technology to ensure system scalability, address new user requirements and employ a new automation framework to reduce operational overheads. We present the key concepts of Rucio, including its data organization/representation and a model of how ATLAS central group and user activities will be managed. The Rucio design, and the technology it employs, is described...

  4. Rucio - The next generation of large scale distributed system for ATLAS Data Management

    CERN Document Server

    Garonne, V; The ATLAS collaboration; Beermann, T; Goossens, L; Lassnig, M; Nairz, A; Stewart, GA; Vigne, V; Serfon, C

    2014-01-01

    Rucio is the next-generation Distributed Data Management (DDM) system, benefiting from recent advances in cloud and "Big Data" computing to address the scaling requirements of HEP experiments. Rucio is an evolution of the ATLAS DDM system Don Quijote 2 (DQ2), which has demonstrated very large scale data management capabilities, with more than 140 petabytes spread worldwide across 130 sites and accesses from 1,000 active users. However, DQ2 is reaching its limits in terms of scalability: it requires a large support staff to operate and is hard to extend with new technologies. Rucio will address these issues by relying on a conceptual data model and new technology to ensure system scalability, address new user requirements and employ a new automation framework to reduce operational overheads. We present the key concepts of Rucio, including its data organization/representation and a model of how ATLAS central group and user activities will be managed. The Rucio design, and the technology it employs, is descr...

  5. Humans rely on the same rules to assess emotional valence and intensity in conspecific and dog vocalizations.

    Science.gov (United States)

    Faragó, Tamás; Andics, Attila; Devecseri, Viktor; Kis, Anna; Gácsi, Márta; Miklósi, Adám

    2014-01-01

    Humans excel at assessing conspecific emotional valence and intensity, based solely on non-verbal vocal bursts that are also common in other mammals. It is not known, however, whether human listeners rely on similar acoustic cues to assess emotional content in conspecific and heterospecific vocalizations, and which acoustical parameters affect their performance. Here, for the first time, we directly compared the emotional valence and intensity perception of dog and human non-verbal vocalizations. We revealed similar relationships between acoustic features and emotional valence and intensity ratings of human and dog vocalizations: those with shorter call lengths were rated as more positive, whereas those with a higher pitch were rated as more intense. Our findings demonstrate that humans rate conspecific emotional vocalizations along basic acoustic rules, and that they apply similar rules when processing dog vocal expressions. This suggests that humans may utilize similar mental mechanisms for recognizing human and heterospecific vocal emotions.

  6. Assessing End-Of-Supply Risk of Spare Parts Using the Proportional Hazard Model

    NARCIS (Netherlands)

    X. Li (Xishu); R. Dekker (Rommert); C. Heij (Christiaan); M. Hekimoğlu (Mustafa)

    2016-01-01

    Operators of long field-life systems like airplanes are faced with hazards in the supply of spare parts. If the original manufacturers or suppliers of parts end their supply, this may have large impacts on the operating costs of firms needing these parts. Existing end-of-supply evaluation

  7. Impairments in part-whole representations of objects in two cases of integrative visual agnosia.

    Science.gov (United States)

    Behrmann, Marlene; Williams, Pepper

    2007-10-01

    How complex multipart visual objects are represented perceptually remains a subject of ongoing investigation. One source of evidence that has been used to shed light on this issue comes from the study of individuals who fail to integrate disparate parts of visual objects. This study reports a series of experiments that examine the ability of two such patients with this form of agnosia (integrative agnosia; IA), S.M. and C.R., to discriminate and categorize exemplars of a rich set of novel objects, "Fribbles", whose visual similarity (number of shared parts) and category membership (shared overall shape) can be manipulated. Both patients performed increasingly poorly as the number of parts required for differentiating one Fribble from another increased. Both patients were also impaired at determining when two Fribbles belonged in the same category, a process that relies on abstracting spatial relations between parts. C.R., the less impaired of the two, but not S.M., eventually learned to categorize the Fribbles but required substantially more training than normal perceivers. S.M.'s failure is not attributable to a problem in learning to use a label for identification nor is it obviously attributable to a visual memory deficit. Rather, the findings indicate that, although the patients may be able to represent a small number of parts independently, in order to represent multipart images, the parts need to be integrated or chunked into a coherent whole. It is this integrative process that is impaired in IA and appears to play a critical role in the normal object recognition of complex images.

  8. Regulatory theory: commercially sustainable markets rely upon satisfying the public interest in obtaining credible goods.

    Science.gov (United States)

    Warren-Jones, Amanda

    2017-10-01

    Regulatory theory is premised on the failure of markets, prompting a focus on regulators and industry from economic perspectives. This article argues that overlooking the public interest in the sustainability of commercial markets risks markets failing completely. This point is exemplified through health care markets - meeting an essential need - and focuses upon innovative medicines as the most desired products in that market. If this seemingly invulnerable market risks failure, there is a pressing need to consider the public interest in sustainable markets within regulatory literature and practice. Innovative medicines are credence goods, meaning that the sustainability of the market fundamentally relies upon the public trusting regulators to vouch for product quality. Yet, quality is being eroded by patent bodies focused on economic benefits from market growth, rather than ensuring innovatory value. Remunerative bodies are not funding medicines relative to market value, and market authorisation bodies are not vouching for robust safety standards or confining market entry to products for 'unmet medical need'. Arguably, this failure to assure quality heightens the risk of the market failing where it cannot be substituted by the reputation or credibility of providers of goods and/or information such as health care professionals/institutions, patient groups or industry.

  9. Research on the Application of Rapid Surveying and Mapping for Large Scale Topographic Maps by UAV Aerial Photography System

    Science.gov (United States)

    Gao, Z.; Song, Y.; Li, C.; Zeng, F.; Wang, F.

    2017-08-01

    This paper studies a method for the rapid acquisition and processing of large-scale topographic map data that relies on an unmanned aerial vehicle (UAV) low-altitude aerial photogrammetry system, and elaborates the main workflow. Key technologies of UAV photogrammetric mapping are also studied, and a rapid mapping system based on an electronic plane-table mapping system is developed, changing the traditional mapping mode and greatly improving mapping efficiency. Production tests and accuracy evaluations of Digital Orthophoto Map (DOM), Digital Line Graph (DLG) and other digital products were carried out in combination with a city basic topographic map update project, providing a new technique for rapid large-scale surveying with clear technical advantages and good application prospects.

  10. Shared wilderness, shared responsibility, shared vision: Protecting migratory wildlife

    Science.gov (United States)

    Will Meeks; Jimmy Fox; Nancy Roeper

    2011-01-01

    Wilderness plays a vital role in global and landscape-level conservation of wildlife. Millions of migratory birds and mammals rely on wilderness lands and waters during critical parts of their life. As large, ecologically intact landscapes, wilderness areas also play a vital role in addressing global climate change by increasing carbon sequestration, reducing...

  11. Exploiting multi-scale parallelism for large scale numerical modelling of laser wakefield accelerators

    International Nuclear Information System (INIS)

    Fonseca, R A; Vieira, J; Silva, L O; Fiuza, F; Davidson, A; Tsung, F S; Mori, W B

    2013-01-01

    A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ∼10⁶ cores and sustained performance over ∼2 PFlops is demonstrated, opening the way for large scale modelling of LWFA scenarios. (paper)

  12. A spongy graphene based bimorph actuator with ultra-large displacement towards biomimetic application.

    Science.gov (United States)

    Hu, Ying; Lan, Tian; Wu, Guan; Zhu, Zicai; Chen, Wei

    2014-11-07

    Bimorph actuators, consisting of two layers with asymmetric expansion that generate bending displacement, have been widely researched. Their actuation performance relies greatly on the difference in the coefficients of thermal expansion (CTE) between the two material layers. Here, by introducing a spongy graphene (sG) paper with a large negative CTE as well as high electrical-to-thermal properties, an electromechanical sG/PDMS bimorph actuator is designed and fabricated, showing an ultra-large bending displacement output under low-voltage stimulation (curvature of about 1.2 cm⁻¹ at 10 V for 3 s), a high displacement-to-length ratio (∼0.79), and vibration motion at AC voltage (up to 10 Hz), much larger and faster than other electromechanical bimorph actuators. Based on the sG/PDMS bimorph serving as the "finger", a mechanical gripper is constructed to realize the fast manipulation of objects under 0.1 Hz square-wave voltage stimulation (0-8 V). The designed bimorph actuator, coupling ultra-large bending displacement, low driving voltage, and ease of fabrication, may open up substantial possibilities for the utilization of electromechanical actuators in practical biomimetic device applications.
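
    The premise that the bending output is governed by the CTE mismatch is classically quantified by Timoshenko's bimorph curvature formula, reproduced below as background (a standard textbook result, not taken from the paper); here t_i, E_i and α_i are the layer thicknesses, Young's moduli and CTEs, and ΔT is the temperature change:

```latex
\kappa = \frac{6\,(\alpha_2-\alpha_1)\,\Delta T\,(1+m)^{2}}
              {h\left[3(1+m)^{2} + (1+mn)\left(m^{2}+\frac{1}{mn}\right)\right]},
\qquad m=\frac{t_1}{t_2},\quad n=\frac{E_1}{E_2},\quad h=t_1+t_2
```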

  13. Future development of large steam turbines

    International Nuclear Information System (INIS)

    Chevance, A.

    1975-01-01

    An attempt is made to forecast the future of large steam turbines up to 1985. Three parameters affect the development of large turbines: 1) unit output, where outputs of 2000 to 2500 MW may be scheduled; 2) steam quality, where two steam qualities may be considered: medium-pressure saturated or slightly superheated steam (light water, heavy water) with a low enthalpic drop, and high-pressure, high-temperature steam with a high enthalpic drop; and 3) the quality of the cooling supply. The largest range to be considered might be: open-system cooling for sea sites, wet-tower cooling and dry-tower cooling; bi-fluid cooling cycles should also be mentioned. From the study of these influencing factors, it appears that for an output of about 2500 MW the constructor should have at his disposal the following: two construction technologies for inlet parts and for high- and intermediate-pressure parts corresponding to both steam qualities, and exhaust sections suitable for the different qualities of cooling supply. The two construction technologies for the two steam qualities already exist and involve no major developments. The exhaust section, however, raises the question of rotational speed [fr

  14. Large-scale DNS and DNSSEC data sets for network security research

    NARCIS (Netherlands)

    van Rijswijk, Roland M.; Sperotto, Anna; Pras, Aiko

    The Domain Name System protocol is often abused to perform denial-of-service attacks. These attacks, called DNS amplification, rely on two properties of the DNS. Firstly, DNS is vulnerable to source address spoofing because it relies on the asynchronous connectionless UDP protocol. Secondly, DNS

  15. Malaysia: where big is still better. For Malays, large families are part of the plan.

    Science.gov (United States)

    1993-11-03

    The benefits of various-sized families in Malaysia were discussed by several women and supplemented with official statements on family planning (FP). The Director of the National Population and Family Development, Dr. Raj Karim, advised that maternal health is jeopardized when women have more than five children. About 30% of reproductive age women in Malaysia have five or more children. A Federation of FP Associations spokesperson agreed that women should be advised of the dangers of bearing over five children, of the importance of spacing births two to four years apart, and of the ideal age of childbearing (21-39 years). The government lacks an official policy on family size. The government position is, however, compatible with Islamic teachings on spacing in order to protect the health of the mother and child. Islamic law does not permit sterilization or abortion. The "fatwas" of Islamic teaching may have been misconstrued by those not using any form of contraception. Dr. Karim, who has five children, reported that having a large family can be difficult for a woman with a job, a career, and a husband or when both parents work. Most Malays desire large families. The average Malay family size was 4.1 children in 1990; Malaysian Chinese have fertility of 2.3 children and Malaysian Indians have 2.6 children. People say that the benefits outweigh the hardships of a large family.

  16. An NLO calculation of the electroproduction of large-E⊥ hadrons

    International Nuclear Information System (INIS)

    Aurenche, P.; Basu, Rahul; Fontannaz, M.; Godbole, R.M.

    2004-01-01

    We present a next-to-leading order calculation of the cross section for the leptoproduction of large-E⊥ hadrons and we compare our predictions with H1 data on the forward production of π⁰. We find large higher-order corrections and an important sensitivity to the renormalization and factorization scales. These large corrections are shown to arise in part from BFKL-like diagrams at the lowest order. (orig.)

  17. Structural concepts for very large (400-meter-diameter) solar concentrators

    Science.gov (United States)

    Mikulas, Martin M., Jr.; Hedgepeth, John M.

    1989-01-01

    A general discussion of various types of large space structures is presented. A brief overview of the history of space structures is presented to provide insight into the current state of the art. Finally, the results of a structural study to assess the viability of very large solar concentrators are presented. These results include weight, stiffness, part count, and in-space construction time.

  18. Intermediation in Foreign Trade: When do Exporters Rely on Intermediaries?

    DEFF Research Database (Denmark)

    Schröder, Philipp J.H.; Trabold, H.; Trübswetter, P.

    2005-01-01

    The paper explores the question of why trade intermediaries (TIs) are frequently used as agents for exports to some countries but not to others. First, we adapt a standard intra-industry trade model with variable export costs (e.g. transport) and fixed export costs (e.g. market access) to include...... a TI that is able to pool market access cost. This framework suggests explanatory factors for the TI share in a country's exports, which are largely in line with the literature. Second, we test these explanatory factors with a new data set based on French customs information. The paper finds that: (i......) higher market access costs increase the TI share, (ii) smaller export markets feature a larger TI share, (iii) network effects are important determinants of trade intermediation....

  19. Part-based deep representation for product tagging and search

    Science.gov (United States)

    Chen, Keqing

    2017-06-01

    Despite previous studies, tagging and indexing product images remain challenging due to the large inner-class variation of the products. In traditional methods, quantized hand-crafted features such as SIFTs are extracted as the representation of the product images, which are not discriminative enough to handle the inner-class variation. For discriminative image representation, this paper firstly presents a novel deep convolutional neural networks (DCNNs) architecture pre-trained on a large-scale general image dataset. Compared to the traditional features, our DCNNs representation has more discriminative power with fewer dimensions. Moreover, we incorporate a part-based model into the framework to overcome the negative effects of bad alignment and cluttered background, and hence the descriptive ability of the deep representation is further enhanced. Finally, we collect and contribute a well-labeled shoe image database, i.e., the TBShoes, on which we apply the part-based deep representation for product image tagging and search, respectively. The experimental results highlight the advantages of the proposed part-based deep representation.
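    As a rough illustration of the idea, the sketch below builds a part-based deep descriptor by pooling features from a generic pretrained backbone over the whole image and over candidate part crops; the backbone (torchvision's resnet18) and the part boxes are stand-ins, not the architecture or part model used in the paper.

        import torch
        import torch.nn as nn
        from torchvision import models

        # Generic CNN backbone as feature extractor (a stand-in for the
        # paper's DCNN); the final classification layer is dropped.
        backbone = models.resnet18(weights=None)
        extractor = nn.Sequential(*list(backbone.children())[:-1])  # -> (N, 512, 1, 1)
        extractor.eval()

        def part_based_descriptor(image, part_boxes):
            """Concatenate global and per-part deep features.

            image: (3, H, W) float tensor; part_boxes: (top, left, h, w)
            crops, e.g. from a part detector (hypothetical here).
            """
            crops = [image] + [image[:, t:t + h, l:l + w] for (t, l, h, w) in part_boxes]
            feats = []
            with torch.no_grad():
                for crop in crops:
                    # Resize each crop to the backbone's expected input size.
                    x = nn.functional.interpolate(crop.unsqueeze(0), size=(224, 224))
                    feats.append(extractor(x).flatten())
            return torch.cat(feats)  # one vector for tagging or nearest-neighbour search

        # Toy usage: a random "shoe image" with two assumed part boxes.
        img = torch.rand(3, 256, 256)
        print(part_based_descriptor(img, [(0, 0, 128, 256), (128, 0, 128, 256)]).shape)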

  20. Imputation of missing genotypes within LD-blocks relying on the basic coalescent and beyond: consideration of population growth and structure.

    Science.gov (United States)

    Kabisch, Maria; Hamann, Ute; Lorenzo Bermejo, Justo

    2017-10-17

    Genotypes not directly measured in genetic studies are often imputed to improve statistical power and to increase mapping resolution. The accuracy of standard imputation techniques strongly depends on the similarity of linkage disequilibrium (LD) patterns in the study and reference populations. Here we develop a novel approach for genotype imputation in low-recombination regions that relies on the coalescent and permits explicit accounting for population demographic factors. To test the new method, study and reference haplotypes were simulated and gene trees were inferred under the basic coalescent and also considering population growth and structure. The reference haplotypes that first coalesced with study haplotypes were used as templates for genotype imputation. Computer simulations were complemented with the analysis of real data. Genotype concordance rates were used to compare the accuracies of coalescent-based and standard (IMPUTE2) imputation. Simulations revealed that, in LD-blocks, imputation accuracy relying on the basic coalescent was higher and less variable than with IMPUTE2. Explicit consideration of population growth and structure, even if present, did not practically improve accuracy. The advantage of coalescent-based over standard imputation increased with the minor allele frequency and decreased with population stratification. Results based on real data indicated that, even in low-recombination regions, further research is needed to incorporate recombination in coalescence inference, in particular for studies with genetically diverse and admixed individuals. To exploit the full potential of coalescent-based methods for the imputation of missing genotypes in genetic studies, further methodological research is needed to reduce computer time, to take into account recombination, and to implement these methods in user-friendly computer programs. Here we provide reproducible code which takes advantage of publicly available software to facilitate
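    A minimal sketch of the template idea, using Hamming similarity over the typed sites as a crude stand-in for the paper's coalescent-tree inference (all data here are toy values):

        import numpy as np

        def impute_ld_block(study_hap, reference_haps):
            """Impute missing alleles (coded -1) in one low-recombination block.

            Crude proxy for the approach above: the reference haplotype that
            would coalesce first with the study haplotype is approximated by
            the one with the fewest mismatches at the observed sites.
            """
            observed = study_hap >= 0
            mismatches = (reference_haps[:, observed] != study_hap[observed]).sum(axis=1)
            template = reference_haps[np.argmin(mismatches)]
            imputed = study_hap.copy()
            imputed[~observed] = template[~observed]  # copy template alleles into gaps
            return imputed

        study = np.array([0, 1, -1, 1, -1, 0])        # -1 marks untyped SNPs
        refs = np.array([[0, 1, 1, 1, 0, 0],
                         [1, 0, 0, 0, 1, 1],
                         [0, 1, 0, 1, 0, 1]])
        print(impute_ld_block(study, refs))           # -> [0 1 1 1 0 0]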

  1. How to correct long-term system externality of large scale wind power development by a capacity mechanism?

    International Nuclear Information System (INIS)

    Cepeda, Mauricio; Finon, Dominique

    2013-04-01

    This paper deals with the practical problems related to long-term security of supply in electricity markets in the presence of large-scale wind power development. The success of renewable promotion schemes adds a new dimension to ensuring long-term security of supply. It necessitates designing second-best policies to prevent large-scale wind power development from distorting long-run equilibrium prices and investments in conventional generation, in particular in peaking units. We rely upon a long-term simulation model which simulates electricity market players' investment decisions in a market regime and incorporates large-scale wind power development, either with subsidised wind production or with market-driven development. We test the use of capacity mechanisms to compensate for the long-term effects of large-scale wind power development on system reliability. The first finding is that capacity mechanisms can help to reduce the social cost of large-scale wind power development in terms of a reduced loss-of-load probability. The second finding is that, in a market-based wind power deployment without subsidy, wind generators are penalized for insufficient contribution to long-term system reliability. (authors)

  2. Application of bamboo laminates in large-scale wind turbine blade design?

    Institute of Scientific and Technical Information of China (English)

    Long WANG; Hui LI; Tongguang WANG

    2016-01-01

    From the viewpoint of material and structure in the design of bamboo blades for large-scale wind turbines, a series of mechanical property tests of bamboo laminates, the major enhancement material for the blades, is presented. The basic mechanical characteristics needed in the design of bamboo blades are briefly introduced. Based on these data, the aerodynamic-structural integrated design of a 1.5 MW wind turbine bamboo blade relying on a conventional platform of upwind, variable speed, variable pitch, and doubly-fed generator is carried out. The process of the structural layer design of bamboo blades is documented in detail. The structural strength and fatigue life of the designed wind turbine blades are certified. The technical issues raised by the design are discussed. Key problems and directions for future study are also summarized.

  3. The large quark mass expansion of Γ(Z⁰ → hadrons) and Γ(τ⁻ → ν_τ + hadrons) in the order α_s³

    International Nuclear Information System (INIS)

    Larin, S.A.; Ritbergen, T. van; Vermaseren, J.A.M.

    1994-09-01

    We present the analytical α_s³ correction to the Z⁰ decay rate into hadrons. We calculate this correction up to (and including) terms of the order (m_Z²/m_top²)³ in the large top quark mass expansion. We rely on the technique of the large mass expansion of individual Feynman diagrams and treat its application in detail. We convert the obtained results of six-flavour QCD to the results in the effective theory with five active flavours, checking the decoupling relation of the QCD coupling constant. We also derive the large charm quark mass expansion of the semihadronic τ lepton decay rate in the α_s³ approximation. (orig.)
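    Schematically, the expansion described above has the following structure (the coefficients r_i and c_i computed in the paper are not reproduced here):

        \Gamma(Z^0 \to \mathrm{hadrons}) \;\propto\;
          1 + \frac{\alpha_s}{\pi}\, r_1
            + \Big(\frac{\alpha_s}{\pi}\Big)^{2} r_2
            + \Big(\frac{\alpha_s}{\pi}\Big)^{3}
              \sum_{i=0}^{3} c_i \Big(\frac{m_Z^2}{m_{\mathrm{top}}^2}\Big)^{i}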

  4. Preserving biological diversity in the face of large-scale demands for biofuels

    International Nuclear Information System (INIS)

    Cook, J.J.; Beyea, J.; Keeler, K.H.

    1991-01-01

    Large-scale production and harvesting of biomass to replace fossil fuels could reduce biological diversity by eliminating habitat for native species. Forests would be managed and harvested more intensively, and virtually all arable land unsuitable for high-value agriculture or silviculture might be used to grow crops dedicated to energy. Given the prospects for a potentially large increase in biofuel production, it is time now to develop strategies for mitigating the loss of biodiversity that might ensue. Planning at micro to macro scales will be crucial to minimize the ecological impacts of producing biofuels. In particular, cropping and harvesting systems will need to provide the biological, spatial, and temporal diversity characteristics of natural ecosystems and successional sequences, if we are to have this technology support the environmental health of the world rather than compromise it. Incorporation of these ecological values will be necessary to forestall costly environmental restoration, even at the cost of submaximal biomass productivity. It is therefore doubtful that all managers will take the longer view. Since the costs of biodiversity loss are largely external to economic markets, society cannot rely on the market to protect biodiversity, and some sort of intervention will be necessary. 116 refs., 1 tab

  5. Foundations of Large-Scale Multimedia Information Management and Retrieval

    CERN Document Server

    Chang, Edward Y

    2011-01-01

    "Foundations of Large-Scale Multimedia Information Management and Retrieval - Mathematics of Perception" covers knowledge representation and semantic analysis of multimedia data and scalability in signal extraction, data mining, and indexing. The book is divided into two parts: Part I - Knowledge Representation and Semantic Analysis focuses on the key components of mathematics of perception as it applies to data management and retrieval. These include feature selection/reduction, knowledge representation, semantic analysis, distance function formulation for measuring similarity, and

  6. All projects related to | Page 289 | IDRC - International Development ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    From farm to fork: Improving nutrition in the Caribbean. Project. Obesity rates are on the rise in the Caribbean. This is due in large part to the limited attention Caribbean countries have paid to local food production. They have relied instead on economic development through exports of plantation crops. Start Date: March 1, ...

  7. Black Holes and the Large Hadron Collider

    Science.gov (United States)

    Roy, Arunava

    2011-01-01

    The European Center for Nuclear Research or CERN's Large Hadron Collider (LHC) has caught our attention partly due to the film "Angels and Demons." In the movie, an antimatter bomb attack on the Vatican is foiled by the protagonist. Perhaps just as controversial is the formation of mini black holes (BHs). Recently, the American Physical Society…

  8. Challenges in Managing Trustworthy Large-scale Digital Science

    Science.gov (United States)

    Evans, B. J. K.

    2017-12-01

    The increased use of large-scale international digital science has opened a number of challenges for managing, handling, using and preserving scientific information. The large volumes of information are driven by three main categories - model outputs including coupled models and ensembles, data products that have been processed to a level of usability, and increasingly heuristically driven data analysis. These data products are increasingly the ones that are usable by the broad communities, far in excess of the raw instrument data outputs. The data, software and workflows are then shared and replicated to allow broad use at an international scale, which places further demands on the infrastructure that supports how the information is managed reliably across distributed resources. Users necessarily rely on these underlying "black boxes" so that they can be productive and produce new scientific outcomes. The software for these systems depends on computational infrastructure, interconnected software systems, and information capture systems. This ranges from the fundamentals of the reliability of the compute hardware, through system software stacks and libraries, to the model software. Due to these complexities and the capacity of the infrastructure, there is an increased emphasis on transparency of approach and robustness of methods over full reproducibility. Furthermore, with large-volume data management, it is increasingly difficult to store the historical versions of all model and derived data. Instead, the emphasis is on the ability to access the updated products and the reliability with which previous outcomes remain relevant and can be updated with the new information. We will discuss these challenges and some of the approaches underway that are being used to address these issues.

  9. Argot2: a large scale function prediction tool relying on semantic similarity of weighted Gene Ontology terms.

    Science.gov (United States)

    Falda, Marco; Toppo, Stefano; Pescarolo, Alessandro; Lavezzo, Enrico; Di Camillo, Barbara; Facchinetti, Andrea; Cilia, Elisa; Velasco, Riccardo; Fontana, Paolo

    2012-03-28

    Predicting protein function has become increasingly demanding in the era of next generation sequencing technology. The task to assign a curator-reviewed function to every single sequence is impracticable. Bioinformatics tools, easy to use and able to provide automatic and reliable annotations at a genomic scale, are necessary and urgent. In this scenario, the Gene Ontology has provided the means to standardize the annotation classification with a structured vocabulary which can be easily exploited by computational methods. Argot2 is a web-based function prediction tool able to annotate nucleic or protein sequences from small datasets up to entire genomes. It accepts as input a list of sequences in FASTA format, which are processed using BLAST and HMMER searches against the UniProtKB and Pfam databases, respectively; these sequences are then annotated with GO terms retrieved from the UniProtKB-GOA database and the terms are weighted using the e-values from BLAST and HMMER. The weighted GO terms are processed according to both their semantic similarity relations described by the Gene Ontology and their associated score. The algorithm is based on the original idea developed in a previous tool called Argot. The entire engine has been completely rewritten to improve both accuracy and computational efficiency, thus allowing for the annotation of complete genomes. The revised algorithm has already been employed and successfully tested during in-house genome projects of grape and apple, and has proven to have high precision and recall in all our benchmark conditions. It has also been successfully compared with Blast2GO, one of the methods most commonly employed for sequence annotation. The server is freely accessible at http://www.medcomp.medicina.unipd.it/Argot2.
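    The e-value-based weighting step can be pictured with a short sketch (hit data are invented; the semantic-similarity processing that follows in Argot2 is omitted):

        import math
        from collections import defaultdict

        def weight_go_terms(hits):
            """Accumulate a weight for each GO term over all BLAST/HMMER hits.

            hits: (evalue, [GO terms]) pairs, as retrieved for each matching
            sequence from UniProtKB-GOA; terms inherit -log10(e-value) from
            the hit that carries them, following the idea described above.
            """
            scores = defaultdict(float)
            for evalue, terms in hits:
                weight = -math.log10(max(evalue, 1e-300))  # cap avoids log(0)
                for term in terms:
                    scores[term] += weight
            return dict(scores)

        hits = [(1e-50, ["GO:0003824", "GO:0008152"]),
                (1e-10, ["GO:0003824"]),
                (1e-3,  ["GO:0016787"])]
        print(weight_go_terms(hits))   # GO:0003824 dominates with weight 60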

  10. 3D full-field quantification of cell-induced large deformations in fibrillar biomaterials by combining non-rigid image registration with label-free second harmonic generation.

    Science.gov (United States)

    Jorge-Peñas, Alvaro; Bové, Hannelore; Sanen, Kathleen; Vaeyens, Marie-Mo; Steuwe, Christian; Roeffaers, Maarten; Ameloot, Marcel; Van Oosterwyck, Hans

    2017-08-01

    To advance our current understanding of cell-matrix mechanics and its importance for biomaterials development, advanced three-dimensional (3D) measurement techniques are necessary. Cell-induced deformations of the surrounding matrix are commonly derived from the displacement of embedded fiducial markers, as part of traction force microscopy (TFM) procedures. However, these fluorescent markers may alter the mechanical properties of the matrix or can be taken up by the embedded cells, and therefore influence cellular behavior and fate. In addition, the currently developed methods for calculating cell-induced deformations are generally limited to relatively small deformations, with displacement magnitudes and strains typically of the order of a few microns and less than 10% respectively. Yet, large, complex deformation fields can be expected from cells exerting tractions in fibrillar biomaterials, like collagen. To circumvent these hurdles, we present a technique for the 3D full-field quantification of large cell-generated deformations in collagen, without the need of fiducial markers. We applied non-rigid, Free Form Deformation (FFD)-based image registration to compute full-field displacements induced by MRC-5 human lung fibroblasts in a collagen type I hydrogel by solely relying on second harmonic generation (SHG) from the collagen fibrils. By executing comparative experiments, we show that comparable displacement fields can be derived from both fibrils and fluorescent beads. SHG-based fibril imaging can circumvent all described disadvantages of using fiducial markers. This approach allows measuring 3D full-field deformations under large displacement (of the order of 10 μm) and strain regimes (up to 40%). As such, it holds great promise for the study of large cell-induced deformations as an inherent component of cell-biomaterial interactions and cell-mediated biomaterial remodeling. Copyright © 2017 Elsevier Ltd. All rights reserved.
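    For readers who want to experiment with the registration step, one possible setup of a marker-free FFD (B-spline) registration between two SHG stacks in SimpleITK could look as follows; the file names, grid size, metric, and optimizer settings are illustrative assumptions, not the parameters used in the paper:

        import SimpleITK as sitk

        # Hypothetical SHG volumes: a relaxed (traction-free) stack as the
        # fixed image and a stressed stack as the moving image.
        fixed = sitk.ReadImage("shg_relaxed.tif", sitk.sitkFloat32)
        moving = sitk.ReadImage("shg_stressed.tif", sitk.sitkFloat32)

        # Free Form Deformation = B-spline transform on a coarse control grid.
        tx = sitk.BSplineTransformInitializer(fixed, [8, 8, 4])

        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsCorrelation()          # intensity-based, no fiducial markers
        reg.SetInterpolator(sitk.sitkLinear)
        reg.SetOptimizerAsLBFGSB(numberOfIterations=200)
        reg.SetInitialTransform(tx, inPlace=True)
        transform = reg.Execute(fixed, moving)

        # The dense displacement field encodes the cell-induced deformation.
        field = sitk.TransformToDisplacementField(
            transform, sitk.sitkVectorFloat64, fixed.GetSize(),
            fixed.GetOrigin(), fixed.GetSpacing(), fixed.GetDirection())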

  11. Large-scale transportation network congestion evolution prediction using deep learning theory.

    Science.gov (United States)

    Ma, Xiaolei; Yu, Haiyang; Wang, Yunpeng; Wang, Yinhai

    2015-01-01

    Understanding how congestion at one location can cause ripples throughout a large-scale transportation network is vital for transportation researchers and practitioners to pinpoint traffic bottlenecks for congestion mitigation. Traditional studies rely on either mathematical equations or simulation techniques to model traffic congestion dynamics. However, most of these approaches have limitations, largely due to unrealistic assumptions and cumbersome parameter calibration processes. With the development of Intelligent Transportation Systems (ITS) and the Internet of Things (IoT), transportation data become more and more ubiquitous. This has triggered a series of data-driven research efforts to investigate transportation phenomena. Among them, deep learning theory is considered one of the most promising techniques to tackle tremendous volumes of high-dimensional data. This study attempts to extend deep learning theory into large-scale transportation network analysis. A deep Restricted Boltzmann Machine and Recurrent Neural Network architecture is utilized to model and predict traffic congestion evolution based on Global Positioning System (GPS) data from taxis. A numerical study in Ningbo, China is conducted to validate the effectiveness and efficiency of the proposed method. Results show that the prediction accuracy can reach as high as 88% within less than 6 minutes when the model is implemented in a Graphics Processing Unit (GPU)-based parallel computing environment. The predicted congestion evolution patterns can be visualized temporally and spatially through a map-based platform to identify the vulnerable links for proactive congestion mitigation.
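    The recurrent part of such a predictor can be reduced to a few lines; the sketch below rolls a plain NumPy recurrent cell over past network-wide congestion snapshots (weights are random here, whereas the study learns them and pre-trains the representation with a Restricted Boltzmann Machine):

        import numpy as np

        rng = np.random.default_rng(0)
        n_links, n_hidden = 50, 32             # 50 road links, toy hidden size

        W_in = rng.normal(0, 0.1, (n_hidden, n_links))
        W_h = rng.normal(0, 0.1, (n_hidden, n_hidden))
        W_out = rng.normal(0, 0.1, (n_links, n_hidden))

        def predict_next(congestion_seq):
            """Roll a simple recurrent cell over past congestion snapshots
            (one value in [0, 1] per link) and emit the next snapshot."""
            h = np.zeros(n_hidden)
            for x in congestion_seq:
                h = np.tanh(W_in @ x + W_h @ h)
            return 1 / (1 + np.exp(-(W_out @ h)))   # sigmoid keeps output in [0, 1]

        history = rng.random((12, n_links))     # e.g. twelve 5-minute intervals
        print(predict_next(history).shape)      # -> (50,)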

  12. Research Advances: Pacific Northwest National Laboratory Finds New Way to Detect Destructive Enzyme Activity--Hair Dye Relies on Nanotechnology--Ways to Increase Shelf Life of Milk

    Science.gov (United States)

    King, Angela G.

    2007-01-01

    Recent advances in various research fields are described. Scientists at the Pacific Northwest National Laboratory have found a new way to detect destructive enzyme activity, scientists in France have found that a hair dye used in ancient Greece and Rome relied on nanotechnology, and in the U.S. scientists are developing new…

  13. Toward a Better Nutritional Aiding in Disasters: Relying on Lessons Learned during the Bam Earthquake.

    Science.gov (United States)

    Nekouie Moghadam, Mahmoud; Amiresmaieli, Mohammadreza; Hassibi, Mohammad; Doostan, Farideh; Khosravi, Sajad

    2017-08-01

    Introduction Examining various problems in the aftermath of disasters is very important to the disaster victims. Managing and coordinating food supply and its distribution among the victims is one of the most important problems after an earthquake. Therefore, the purpose of this study was to recognize problems and experiences in the field of nutritional aiding during an earthquake. This qualitative study was of phenomenological type. Using the purposive sampling method, 10 people who had experienced nutritional aiding during the Bam Earthquake (Iran; 2003) were interviewed. Colaizzi's method of analysis was used to analyze interview data. The findings of this study identified four main categories and 19 sub-categories concerning challenges in the nutritional aiding during the Bam Earthquake. The main topics included managerial, aiding, infrastructural, and administrative problems. The major problems in nutritional aiding include lack of prediction and development of a specific program of suitable nutritional pattern and nutritional assessment of the victims in critical conditions. Forming specialized teams, educating team members about nutrition, and making use of experts' knowledge are the most important steps to resolve these problems in the critical conditions; these measures are the duties of the relevant authorities. Nekouie Moghadam M , Amiresmaieli M , Hassibi M , Doostan F , Khosravi S . Toward a better nutritional aiding in disasters: relying on lessons learned during the Bam Earthquake. Prehosp Disaster Med. 2017;32(4):382-386.

  14. Deep Adaptive Log-Demons: Diffeomorphic Image Registration with Very Large Deformations

    Directory of Open Access Journals (Sweden)

    Liya Zhao

    2015-01-01

    Full Text Available This paper proposes a new framework for capturing large and complex deformation in image registration. Traditionally, this challenging problem relies firstly on a preregistration, usually an affine matrix containing rotation, scale, and translation, and afterwards on a nonrigid transformation. In the preregistration, the directly calculated affine matrix, obtained from limited pixel information, may misregister when large biases exist, thereby misleading the subsequent registration. To address this problem, for two-dimensional (2D) images, the two-layer deep adaptive registration framework proposed in this paper firstly accurately classifies the rotation parameter through multilayer convolutional neural networks (CNNs) and then identifies scale and translation parameters separately. For three-dimensional (3D) images, the affine matrix is located through feature correspondences by triplanar 2D CNNs. Then deformation removal is done iteratively through preregistration and demons registration. By comparison with state-of-the-art registration frameworks, our method gains more accurate registration results on both synthetic and real datasets. Besides, principal component analysis (PCA) is combined with correlation measures like Pearson and Spearman to form new similarity standards in 2D and 3D registration. Experiment results also show faster convergence speed.

  15. Large-Scale Agriculture and Outgrower Schemes in Ethiopia

    DEFF Research Database (Denmark)

    Wendimu, Mengistu Assefa

    , the impact of large-scale agriculture and outgrower schemes on productivity, household welfare and wages in developing countries is highly contentious. Chapter 1 of this thesis provides an introduction to the study, while also reviewing the key debate in the contemporary land ‘grabbing’ and historical large...... sugarcane outgrower scheme on household income and asset stocks. Chapter 5 examines the wages and working conditions in ‘formal’ large-scale and ‘informal’ small-scale irrigated agriculture. The results in Chapter 2 show that moisture stress, the use of untested planting materials, and conflict over land...... commands a higher wage than ‘formal’ large-scale agriculture, while rather different wage determination mechanisms exist in the two sectors. Human capital characteristics (education and experience) partly explain the differences in wages within the formal sector, but play no significant role...

  16. Female College Students' Perceptions of Organ Donation

    Science.gov (United States)

    Boland, Kathleen; Baker, Kerrie

    2010-01-01

    The current process of organ donation in the U.S. relies on the premise of altruism or voluntary consent. Yet, human organs available for donation and transplant do not meet current demands. The literature has suggested that college students, who represent a large group of potential healthy organ donors, often are not part of donor pools. Before…

  17. Jonckheere Double Star Photometry - Part VII: Aquarius

    Science.gov (United States)

    Knapp, Wilfried R. A.

    2017-10-01

    If any double star discoverer is in urgent need of photometry then it is Jonckheere. There are over 3000 Jonckheere objects listed in the WDS catalog and a good part of them with magnitudes obviously far too bright. This report covers the Jonckheere objects in the constellation Aquarius. One image per object was taken with V-filter to allow for visual magnitude measurement by differential photometry. All objects were additionally checked for common proper motion by comparing 2MASS to GAIA DR1 positions and a rather surprisingly large part of the objects qualify indeed as potential CPM pairs. For a few objects also WDS position errors were found.
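    The differential photometry step reduces to a magnitude difference computed from a flux ratio; a minimal sketch with invented counts:

        import math

        def differential_magnitude(flux_target, flux_ref, mag_ref):
            """Visual magnitude of a target from a reference star measured on
            the same V-filter image (classical differential photometry)."""
            return mag_ref - 2.5 * math.log10(flux_target / flux_ref)

        # Toy numbers: the target collects 40% of the reference star's counts.
        print(differential_magnitude(4000.0, 10000.0, 9.5))   # ~10.49 mag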

  18. Java The Good Parts

    CERN Document Server

    Waldo, Jim

    2010-01-01

    What if you could condense Java down to its very best features and build better applications with that simpler version? In this book, veteran Sun Labs engineer Jim Waldo reveals which parts of Java are most useful, and why those features make Java among the best programming languages available. Every language eventually builds up crud, Java included. The core language has become increasingly large and complex, and the libraries associated with it have grown even more. Learn how to take advantage of Java's best features by working with an example application throughout the book. You may not l

  19. R806 (part 1)

    CERN Multimedia

    CERN PhotoLab

    1976-01-01

    R806 was designed by the BNL-CERN-Syracuse-Yale Collaboration (Bill Willis spokesman) to study large transverse momentum phenomena, and installed in intersection 8 of the ISR. The main detectors were Lithium foil transition radiation detectors to identify electrons and liquid argon calorimeters to measure the energy of the electrons and photons (among the first such calorimeters to be used in an experiment). In part 1 there were two modules, top and bottom of the horizontal beam pipe; the black vertical pipe contains the cryogenics (LN2 and LAr) and is connected to the two modules with the horizontal piping.

  20. Dynamic state estimation techniques for large-scale electric power systems

    International Nuclear Information System (INIS)

    Rousseaux, P.; Pavella, M.

    1991-01-01

    This paper presents the use of dynamic-type state estimators for energy management in electric power systems. Various dynamic-type estimators have been developed, but have never been implemented. This is primarily because of dimensionality problems posed by the conjunction of an extended Kalman filter with a large-scale power system. This paper focuses precisely on how to circumvent this high dimensionality, which is especially prohibitive in the filtering step, by using a decomposition-aggregation hierarchical scheme; to appropriately model the power system dynamics, the authors introduce new state variables in the prediction step and rely on a load forecasting method. The combination of these two techniques succeeds in solving the overall dynamic state estimation problem not only in a tractable and realistic way, but also in compliance with real-time computational requirements. Further improvements are also suggested, bound to the specifics of high-voltage electric transmission systems
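    The filtering machinery such estimators build on is the extended Kalman predict/update cycle sketched below (generic textbook form, not the paper's hierarchical decomposition-aggregation scheme; f, h and the matrices are assumed models):

        import numpy as np

        def ekf_step(x, P, f, F, h, H, Q, R, z):
            """One extended-Kalman-filter cycle: x, P are the state estimate
            and covariance; f/h the process and measurement models with
            Jacobians F/H; Q, R the noise covariances; z the new measurement."""
            # Prediction step (in the paper, aided by load forecasting).
            x_pred = f(x)
            P_pred = F @ P @ F.T + Q
            # Filtering step (the dimensionality bottleneck discussed above).
            S = H @ P_pred @ H.T + R
            K = P_pred @ H.T @ np.linalg.inv(S)
            x_new = x_pred + K @ (z - h(x_pred))
            P_new = (np.eye(len(x)) - K @ H) @ P_pred
            return x_new, P_new

        # Toy usage with a linear two-state model (F, H constant).
        F = np.array([[1.0, 0.1], [0.0, 1.0]]); H = np.array([[1.0, 0.0]])
        x, P = np.zeros(2), np.eye(2)
        x, P = ekf_step(x, P, lambda s: F @ s, F, lambda s: H @ s, H,
                        0.01 * np.eye(2), np.array([[0.1]]), np.array([1.0]))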

  1. Application of a sensitive collection heuristic for very large protein families: Evolutionary relationship between adipose triglyceride lipase (ATGL and classic mammalian lipases

    Directory of Open Access Journals (Sweden)

    Berezovsky Igor

    2006-03-01

    Full Text Available Abstract Background Manually finding subtle yet statistically significant links to distantly related homologues becomes practically impossible for very populated protein families due to the sheer number of similarity searches to be invoked and analyzed. The unclear evolutionary relationship between classical mammalian lipases and the recently discovered human adipose triglyceride lipase (ATGL; a patatin family member is an exemplary case for such a problem. Results We describe an unsupervised, sensitive sequence segment collection heuristic suitable for assembling very large protein families. It is based on fan-like expanding, iterative database searches. To prevent inclusion of unrelated hits, additional criteria are introduced: minimal alignment length and overlap with starting sequence segments, finding starting sequences in reciprocal searches, automated filtering for compositional bias and repetitive patterns. This heuristic was implemented as FAMILYSEARCHER in the ANNIE sequence analysis environment and applied to search for protein links between the classical lipase family and the patatin-like group. Conclusion The FAMILYSEARCHER is an efficient tool for tracing distant evolutionary relationships involving large protein families. Although classical lipases and ATGL have no obvious sequence similarity and differ with regard to fold and catalytic mechanism, homology links detected with FAMILYSEARCHER show that they are evolutionarily related. The conserved sequence parts can be narrowed down to an ancestral core module consisting of three β-strands, one α-helix and a turn containing the typical nucleophilic serine. Moreover, this ancestral module also appears in numerous enzymes with various substrate specificities, but that critically rely on nucleophilic attack mechanisms.
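    The fan-like expansion can be summarised in a short loop; search_database below is a stub standing in for a real BLAST/HMMER call, and the acceptance tests of FAMILYSEARCHER (reciprocal hits, overlap with start segments, compositional-bias filters) are reduced here to a single alignment-length check:

        def collect_family(seed_sequences, search_database, max_rounds=10,
                           min_alignment_len=80):
            """Fan-like, iterative collection of a protein family.

            seed_sequences: {id: sequence}; search_database(seq) stands in
            for a similarity search returning (hit_id, hit_seq, aln_len).
            """
            family = dict(seed_sequences)
            frontier = list(seed_sequences.values())
            for _ in range(max_rounds):
                new_frontier = []
                for query in frontier:                    # expand the search fan
                    for hit_id, hit_seq, aln_len in search_database(query):
                        if hit_id not in family and aln_len >= min_alignment_len:
                            family[hit_id] = hit_seq
                            new_frontier.append(hit_seq)  # next round's queries
                if not new_frontier:                      # converged: nothing new
                    break
                frontier = new_frontier
            return family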

  2. Quantum Devices Bonded Beneath a Superconducting Shield: Part 2

    Science.gov (United States)

    McRae, Corey Rae; Abdallah, Adel; Bejanin, Jeremy; Earnest, Carolyn; McConkey, Thomas; Pagel, Zachary; Mariantoni, Matteo

    The next-generation quantum computer will rely on physical quantum bits (qubits) organized into arrays to form error-robust logical qubits. In the superconducting quantum circuit implementation, this architecture will require the use of larger and larger chip sizes. In order for on-chip superconducting quantum computers to be scalable, various issues found in large chips must be addressed, including the suppression of box modes (due to the sample holder) and the suppression of slot modes (due to fractured ground planes). By bonding a metallized shield layer over a superconducting circuit using thin-film indium as a bonding agent, we have demonstrated proof of concept of an extensible circuit architecture that holds the key to the suppression of spurious modes. Microwave characterization of shielded transmission lines and measurement of superconducting resonators were compared to identical unshielded devices. The elimination of box modes was investigated, as well as bond characteristics including bond homogeneity and the presence of a superconducting connection.

  3. Large-scale bioenergy production: how to resolve sustainability trade-offs?

    Science.gov (United States)

    Humpenöder, Florian; Popp, Alexander; Bodirsky, Benjamin Leon; Weindl, Isabelle; Biewald, Anne; Lotze-Campen, Hermann; Dietrich, Jan Philipp; Klein, David; Kreidenweis, Ulrich; Müller, Christoph; Rolinski, Susanne; Stevanovic, Miodrag

    2018-02-01

    SDG agenda. Based on this, we argue that the development of policies for regulating externalities of large-scale bioenergy production should rely on broad sustainability assessments to discover potential trade-offs with the SDG agenda before implementation.

  4. Multilocus lod scores in large pedigrees: combination of exact and approximate calculations.

    Science.gov (United States)

    Tong, Liping; Thompson, Elizabeth

    2008-01-01

    To detect the positions of disease loci, lod scores are calculated at multiple chromosomal positions given trait and marker data on members of pedigrees. Exact lod score calculations are often impossible when the size of the pedigree and the number of markers are both large. In this case, a Markov Chain Monte Carlo (MCMC) approach provides an approximation. However, to provide accurate results, mixing performance is always a key issue in these MCMC methods. In this paper, we propose two methods to improve MCMC sampling and hence obtain more accurate lod score estimates in shorter computation time. The first improvement generalizes the block-Gibbs meiosis (M) sampler to multiple meiosis (MM) sampler in which multiple meioses are updated jointly, across all loci. The second one divides the computations on a large pedigree into several parts by conditioning on the haplotypes of some 'key' individuals. We perform exact calculations for the descendant parts where more data are often available, and combine this information with sampling of the hidden variables in the ancestral parts. Our approaches are expected to be most useful for data on a large pedigree with a lot of missing data. (c) 2007 S. Karger AG, Basel
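    For the simple phase-known, fully informative case, the two-point lod score that the MCMC machinery approximates in harder settings is direct to compute:

        import math

        def lod(recombinants, nonrecombinants, theta):
            """Two-point lod score for phase-known meioses: log10 of the
            likelihood ratio of recombination fraction theta against 1/2."""
            r, n = recombinants, nonrecombinants
            loglik = r * math.log10(theta) + n * math.log10(1 - theta)
            return loglik - (r + n) * math.log10(0.5)

        # 2 recombinants out of 20 meioses, evaluated at theta = 0.1:
        print(round(lod(2, 18, 0.1), 2))   # ~3.2, the conventional evidence threshold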

  5. Measuring coalignment of retroreflectors with large lateral incoming-outgoing beam offset

    Energy Technology Data Exchange (ETDEWEB)

    Schütze, Daniel, E-mail: Daniel.Schuetze@aei.mpg.de; Sheard, Benjamin S.; Heinzel, Gerhard; Danzmann, Karsten [Max Planck Institute for Gravitational Physics (Albert Einstein Institute) and Institute for Gravitational Physics, Leibniz Universität Hannover, Callinstr. 38, 30167 Hanover (Germany); Farrant, David [Commonwealth Scientific and Industrial Research Organisation, Bradfield Road, Lindfield, NSW 2070 (Australia); Shaddock, Daniel A. [Centre for Gravitational Physics, Australian National University, Acton, ACT 0200 (Australia)

    2014-03-15

    A method based on phase-shifting Fizeau interferometry is presented with which retroreflectors with large incoming-outgoing beam separations can be tested. The method relies on a flat Reference Bar that is used to align two auxiliary mirrors parallel to each other to extend the aperture of the interferometer. The method is applied to measure the beam coalignment of a prototype Triple Mirror Assembly of the GRACE Follow-On Laser Ranging Interferometer, a future satellite-to-satellite tracking device for Earth gravimetry. The Triple Mirror Assembly features a lateral beam offset of incoming and outgoing beam of 600 mm, whereas the acceptance angle for the incoming beam is only about ±2 mrad. With the developed method, the beam coalignment of the prototype Triple Mirror Assembly was measured to be 9 μrad with a repeatability of below 1 μrad.

  6. Measuring coalignment of retroreflectors with large lateral incoming-outgoing beam offset

    International Nuclear Information System (INIS)

    Schütze, Daniel; Sheard, Benjamin S.; Heinzel, Gerhard; Danzmann, Karsten; Farrant, David; Shaddock, Daniel A.

    2014-01-01

    A method based on phase-shifting Fizeau interferometry is presented with which retroreflectors with large incoming-outgoing beam separations can be tested. The method relies on a flat Reference Bar that is used to align two auxiliary mirrors parallel to each other to extend the aperture of the interferometer. The method is applied to measure the beam coalignment of a prototype Triple Mirror Assembly of the GRACE Follow-On Laser Ranging Interferometer, a future satellite-to-satellite tracking device for Earth gravimetry. The Triple Mirror Assembly features a lateral beam offset of incoming and outgoing beam of 600 mm, whereas the acceptance angle for the incoming beam is only about ±2 mrad. With the developed method, the beam coalignment of the prototype Triple Mirror Assembly was measured to be 9 μrad with a repeatability of below 1 μrad

  7. High performance nanostructured Silicon heterojunction for water splitting on large scales

    KAUST Repository

    Bonifazi, Marcella

    2017-11-02

    In past years the global demand for energy has been increasing steeply, as has the awareness that new sources of clean energy are essential. Photo-electrochemical devices (PEC) for water splitting applications have stirred great interest, and different approaches have been explored to improve the efficiency of these devices and to avoid optical losses at the interfaces with water. These include engineering materials and nanostructuring the device's surfaces [1]-[2]. Despite the promising initial results, there are still many drawbacks that need to be overcome to reach large-scale production with optimized performances [3]. We present a new device that relies on the optimization of a nanostructuring process that exploits suitably disordered surfaces. Additionally, this device could harvest light on both sides to efficiently gain and store the energy to keep the photocatalytic reaction active.

  8. High performance nanostructured Silicon heterojunction for water splitting on large scales

    KAUST Repository

    Bonifazi, Marcella; Fu, Hui-chun; He, Jr-Hau; Fratalocchi, Andrea

    2017-01-01

    In past years the global demand for energy has been increasing steeply, as has the awareness that new sources of clean energy are essential. Photo-electrochemical devices (PEC) for water splitting applications have stirred great interest, and different approaches have been explored to improve the efficiency of these devices and to avoid optical losses at the interfaces with water. These include engineering materials and nanostructuring the device's surfaces [1]-[2]. Despite the promising initial results, there are still many drawbacks that need to be overcome to reach large-scale production with optimized performances [3]. We present a new device that relies on the optimization of a nanostructuring process that exploits suitably disordered surfaces. Additionally, this device could harvest light on both sides to efficiently gain and store the energy to keep the photocatalytic reaction active.

  9. Frugivorous bats maintain functional habitat connectivity in agricultural landscapes but rely strongly on natural forest fragments.

    Science.gov (United States)

    Ripperger, Simon P; Kalko, Elisabeth K V; Rodríguez-Herrera, Bernal; Mayer, Frieder; Tschapka, Marco

    2015-01-01

    Anthropogenic changes in land use threaten biodiversity and ecosystem functioning by the conversion of natural habitat into agricultural mosaic landscapes, often with drastic consequences for the associated fauna. The first step in the development of efficient conservation plans is to understand movement of animals through complex habitat mosaics. Therefore, we studied ranging behavior and habitat use in Dermanura watsoni (Phyllostomidae), a frugivorous bat species that is a valuable seed disperser in degraded ecosystems. Radio-tracking of sixteen bats showed that the animals strongly rely on natural forest. Day roosts were exclusively located within mature forest fragments. Selection ratios showed that the bats foraged selectively within the available habitat and positively selected natural forest. However, larger daily ranges were associated with higher use of degraded habitats. Home range geometry and composition of focal foraging areas indicated that wider ranging bats performed directional foraging bouts from natural to degraded forest sites traversing the matrix over distances of up to three hundred meters. This behavior demonstrates the potential of frugivorous bats to functionally connect fragmented areas by providing ecosystem services between natural and degraded sites, and highlights the need for conservation of natural habitat patches within agricultural landscapes that meet the roosting requirements of bats.

  10. Does brain creatine content rely on exogenous creatine in healthy youth? A proof-of-principle study.

    Science.gov (United States)

    Merege-Filho, Carlos Alberto Abujabra; Otaduy, Maria Concepción Garcia; de Sá-Pinto, Ana Lúcia; de Oliveira, Maira Okada; de Souza Gonçalves, Lívia; Hayashi, Ana Paula Tanaka; Roschel, Hamilton; Pereira, Rosa Maria Rodrigues; Silva, Clovis Artur; Brucki, Sonia Maria Dozzi; da Costa Leite, Claudia; Gualano, Bruno

    2017-02-01

    It has been hypothesized that dietary creatine could influence cognitive performance by increasing brain creatine in developing individuals. This double-blind, randomized, placebo-controlled, proof-of-principle study aimed to investigate the effects of creatine supplementation on cognitive function and brain creatine content in healthy youth. The sample comprised 67 healthy participants aged 10 to 12 years. The participants were given creatine or placebo supplementation for 7 days. At baseline and after the intervention, participants undertook a battery of cognitive tests. In a random subsample of participants, brain creatine content was also assessed in the regions of left dorsolateral prefrontal cortex, left hippocampus, and occipital lobe by proton magnetic resonance spectroscopy (1H-MRS) technique. The scores obtained from verbal learning and executive functions tests did not significantly differ between groups at baseline or after the intervention (all p > 0.05). Creatine content was not significantly different between groups in left dorsolateral prefrontal cortex, left hippocampus, and occipital lobe (all p > 0.05). In conclusion, a 7-day creatine supplementation protocol did not elicit improvements in brain creatine content or cognitive performance in healthy youth, suggesting that this population mainly relies on brain creatine synthesis rather than exogenous creatine intake to maintain brain creatine homeostasis.

  11. Frugivorous bats maintain functional habitat connectivity in agricultural landscapes but rely strongly on natural forest fragments.

    Directory of Open Access Journals (Sweden)

    Simon P Ripperger

    Full Text Available Anthropogenic changes in land use threaten biodiversity and ecosystem functioning by the conversion of natural habitat into agricultural mosaic landscapes, often with drastic consequences for the associated fauna. The first step in the development of efficient conservation plans is to understand movement of animals through complex habitat mosaics. Therefore, we studied ranging behavior and habitat use in Dermanura watsoni (Phyllostomidae), a frugivorous bat species that is a valuable seed disperser in degraded ecosystems. Radio-tracking of sixteen bats showed that the animals strongly rely on natural forest. Day roosts were exclusively located within mature forest fragments. Selection ratios showed that the bats foraged selectively within the available habitat and positively selected natural forest. However, larger daily ranges were associated with higher use of degraded habitats. Home range geometry and composition of focal foraging areas indicated that wider ranging bats performed directional foraging bouts from natural to degraded forest sites traversing the matrix over distances of up to three hundred meters. This behavior demonstrates the potential of frugivorous bats to functionally connect fragmented areas by providing ecosystem services between natural and degraded sites, and highlights the need for conservation of natural habitat patches within agricultural landscapes that meet the roosting requirements of bats.

  12. Rucio - The next generation of large scale distributed system for ATLAS Data Management

    Science.gov (United States)

    Garonne, V.; Vigne, R.; Stewart, G.; Barisits, M.; Beermann, T.; Lassnig, M.; Serfon, C.; Goossens, L.; Nairz, A.; Atlas Collaboration

    2014-06-01

    Rucio is the next-generation Distributed Data Management (DDM) system benefiting from recent advances in cloud and "Big Data" computing to address the scaling requirements of HEP experiments. Rucio is an evolution of the ATLAS DDM system Don Quijote 2 (DQ2), which has demonstrated very large scale data management capabilities with more than 140 petabytes spread worldwide across 130 sites, and accesses from 1,000 active users. However, DQ2 is reaching its limits in terms of scalability, requiring a large number of support staff to operate and being hard to extend with new technologies. Rucio will address these issues by relying on a conceptual data model and new technology to ensure system scalability, address new user requirements and employ a new automation framework to reduce operational overheads. We present the key concepts of Rucio, including its data organization/representation and a model of how to manage central group and user activities. The Rucio design, and the technology it employs, is described, specifically looking at its RESTful architecture and the various software components it uses. We also show the performance of the system.

  13. Large-scale Rectangular Ruler Automated Verification Device

    Science.gov (United States)

    Chen, Hao; Chang, Luping; Xing, Minjian; Xie, Xie

    2018-03-01

    This paper introduces a large-scale rectangular ruler automated verification device, which consists of a photoelectric autocollimator, a self-designed mechanical drive car, and an automatic data acquisition system. The mechanical design of the device covers the optical axis, the drive unit, the fixture device, and the wheels. The control system design covers hardware and software: the hardware is based on a single-chip microcontroller, and the software drives the photoelectric autocollimator and the automatic data acquisition process. The device can acquire vertical measurement data automatically. Its reliability is verified by experimental comparison, and the results meet the requirements of the right-angle test procedure.

  14. The interaction region of the large detector concept

    Indian Academy of Sciences (India)

    design to study alternatives, e.g. the surface assembly of the detector. References. [1] T Behnke et al, Eds., TESLA Technical Design Report Part IV, DESY 2001-011, 2001. [2] The LDC Working Group: Detector outline document for the large detector concept, http://www.ilcldc.org/documents/dod/ (2006). Pramana – J. Phys.

  15. The Sum of the Parts

    DEFF Research Database (Denmark)

    Gross, Fridolin; Green, Sara

    2017-01-01

    Systems biologists often distance themselves from reductionist approaches and formulate their aim as understanding living systems “as a whole”. Yet, it is often unclear what kind of reductionism they have in mind, and in what sense their methodologies offer a more comprehensive approach. To addre......-up”. Specifically, we point out that system-level properties constrain lower-scale processes. Thus, large-scale modeling reveals how living systems at the same time are more and less than the sum of the parts.

  16. Bayesian nonlinear regression for large p small n problems

    KAUST Repository

    Chakraborty, Sounak; Ghosh, Malay; Mallick, Bani K.

    2012-01-01

    Statistical modeling and inference problems with sample sizes substantially smaller than the number of available covariates are challenging. This is known as large p small n problem. Furthermore, the problem is more complicated when we have multiple correlated responses. We develop multivariate nonlinear regression models in this setup for accurate prediction. In this paper, we introduce a full Bayesian support vector regression model with Vapnik's ε-insensitive loss function, based on reproducing kernel Hilbert spaces (RKHS) under the multivariate correlated response setup. This provides a full probabilistic description of support vector machine (SVM) rather than an algorithm for fitting purposes. We have also introduced a multivariate version of the relevance vector machine (RVM). Instead of the original treatment of the RVM relying on the use of type II maximum likelihood estimates of the hyper-parameters, we put a prior on the hyper-parameters and use Markov chain Monte Carlo technique for computation. We have also proposed an empirical Bayes method for our RVM and SVM. Our methods are illustrated with a prediction problem in the near-infrared (NIR) spectroscopy. A simulation study is also undertaken to check the prediction accuracy of our models. © 2012 Elsevier Inc.
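    The ε-insensitive loss at the heart of the SVR formulation is easy to state in code (toy values; the Bayesian treatment above goes far beyond this loss alone):

        import numpy as np

        def eps_insensitive_loss(y_true, y_pred, eps=0.1):
            """Vapnik's epsilon-insensitive loss: residuals inside the
            [-eps, eps] tube cost nothing; larger ones grow linearly."""
            return np.maximum(np.abs(y_true - y_pred) - eps, 0.0)

        print(eps_insensitive_loss(np.array([1.0, 1.0, 1.0]),
                                   np.array([1.05, 1.3, 0.5])))  # -> [0. 0.2 0.4]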

  17. Bayesian nonlinear regression for large p small n problems

    KAUST Repository

    Chakraborty, Sounak

    2012-07-01

    Statistical modeling and inference problems with sample sizes substantially smaller than the number of available covariates are challenging. This is known as large p small n problem. Furthermore, the problem is more complicated when we have multiple correlated responses. We develop multivariate nonlinear regression models in this setup for accurate prediction. In this paper, we introduce a full Bayesian support vector regression model with Vapnik's ε-insensitive loss function, based on reproducing kernel Hilbert spaces (RKHS) under the multivariate correlated response setup. This provides a full probabilistic description of support vector machine (SVM) rather than an algorithm for fitting purposes. We have also introduced a multivariate version of the relevance vector machine (RVM). Instead of the original treatment of the RVM relying on the use of type II maximum likelihood estimates of the hyper-parameters, we put a prior on the hyper-parameters and use Markov chain Monte Carlo technique for computation. We have also proposed an empirical Bayes method for our RVM and SVM. Our methods are illustrated with a prediction problem in the near-infrared (NIR) spectroscopy. A simulation study is also undertaken to check the prediction accuracy of our models. © 2012 Elsevier Inc.

  18. Policy challenges for the pediatric rheumatology workforce: Part I. Education and economics

    Directory of Open Access Journals (Sweden)

    Henrickson Michael

    2011-08-01

    Full Text Available Abstract For children with rheumatic conditions, the available pediatric rheumatology workforce mitigates their access to care. While the subspecialty experiences steady growth, a critical workforce shortage constrains access. This three-part review proposes both national and international interim policy solutions for the multiple causes of the existing unacceptable shortfall. Part I explores the impact of current educational deficits and economic obstacles which constrain appropriate access to care. Proposed policy solutions follow each identified barrier. Challenges consequent to obsolete, limited or unavailable exposure to pediatric rheumatology include: absent or inadequate recognition or awareness of rheumatic disease; referral patterns that commonly foster delays in timely diagnosis; and primary care providers' inappropriate or outdated perception of outcomes. Varying models of pediatric rheumatology care delivery consequent to market competition, inadequate reimbursement and uneven institutional support serve as additional barriers to care. A large proportion of pediatrics residency programs offer pediatric rheumatology rotations. However, a minority of pediatrics residents participate. The current generalist pediatrician workforce has relatively poor musculoskeletal physical examination skills, lacking basic competency in musculoskeletal medicine. To compensate, many primary care providers rely on blood tests, generating referrals that divert scarce resources away from patients who merit accelerated access to care for rheumatic disease. Pediatric rheumatology exposure could be enhanced during residency by providing a mandatory musculoskeletal medicine rotation that includes related musculoskeletal subspecialties. An important step is the progressive improvement of many providers' fixed referral and laboratory testing patterns in lieu of sound physical examination skills. Changing demographics and persistent reimbursement disparities will

  19. The relationship between large-scale and convective states in the tropics - Towards an improved representation of convection in large-scale models

    Energy Technology Data Exchange (ETDEWEB)

    Jakob, Christian [Monash Univ., Melbourne, VIC (Australia)

    2015-02-26

    This report summarises an investigation into the relationship of tropical thunderstorms to the atmospheric conditions they are embedded in. The study is based on the use of radar observations at the Atmospheric Radiation Measurement site in Darwin run under the auspices of the DOE Atmospheric Systems Research program. Linking the larger scales of the atmosphere with the smaller scales of thunderstorms is crucial for the development of the representation of thunderstorms in weather and climate models, which is carried out by a process termed parametrisation. Through the analysis of radar and wind profiler observations the project made several fundamental discoveries about tropical storms and quantified the relationship of the occurrence and intensity of these storms to the large-scale atmosphere. We were able to show that the rainfall averaged over an area the size of a typical climate model grid-box is largely controlled by the number of storms in the area, and less so by the storm intensity. This allows us to completely rethink the way we represent such storms in climate models. We also found that storms occur in three distinct categories based on their depth and that the transition between these categories is strongly related to the larger scale dynamical features of the atmosphere more so than its thermodynamic state. Finally, we used our observational findings to test and refine a new approach to cumulus parametrisation which relies on the stochastic modelling of the area covered by different convective cloud types.
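    The storm-count finding suggests a very compact stochastic sketch: draw the number of storms in a grid box from a Poisson law whose rate follows a large-scale predictor, and let grid-box rain scale with that count (all constants below are invented for illustration):

        import numpy as np

        rng = np.random.default_rng(1)

        def gridbox_rain(large_scale_forcing, mean_storms=4.0, rain_per_storm=2.5):
            """Toy stochastic parametrisation: grid-box rainfall is dominated
            by the number of storms, echoing the observational finding above;
            the rate constants are hypothetical."""
            rate = mean_storms * max(large_scale_forcing, 0.0)
            return rng.poisson(rate) * rain_per_storm   # mm per interval

        print([gridbox_rain(f) for f in (0.2, 1.0, 2.0)])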

  20. Inclusion of Part-Time Faculty for the Benefit of Faculty and Students

    Science.gov (United States)

    Meixner, Cara; Kruck, S. E.; Madden, Laura T.

    2010-01-01

    The new majority of faculty in today's colleges and universities are part-time, yet sizable gaps exist in the research on their needs, interests, and experiences. Further, the peer-reviewed scholarship is largely quantitative. Principally, it focuses on the utility of the adjunct work force, comparisons between part-time and full-time faculty, and…

  1. Large Eddy Simulation of Vertical Axis Wind Turbine wakes; Part II: effects of inflow turbulence

    Science.gov (United States)

    Duponcheel, Matthieu; Chatelain, Philippe; Caprace, Denis-Gabriel; Winckelmans, Gregoire

    2017-11-01

    The aerodynamics of Vertical Axis Wind Turbines (VAWTs) is inherently unsteady, which leads to vorticity shedding mechanisms due to both the lift distribution along the blade and its time evolution. Large-scale, fine-resolution Large Eddy Simulations of the flow past Vertical Axis Wind Turbines have been performed using a state-of-the-art Vortex Particle-Mesh (VPM) method combined with immersed lifting lines. Inflow turbulence with a prescribed turbulence intensity (TI) is injected at the inlet of the simulation from a precomputed synthetic turbulence field obtained using the Mann algorithm. The wake of a standard, medium-solidity, H-shaped machine is simulated for several TI levels. The complex wake development is captured in detail and over long distances: from the blades to the near-wake coherent vortices, then through the transitional ones to the fully developed turbulent far wake. Mean flow and turbulence statistics are computed over more than 10 diameters downstream of the machine. The sensitivity of the wake topology and decay to the TI level is assessed.

  2. Large-scale modeling of rain fields from a rain cell deterministic model

    Science.gov (United States)

    Féral, Laurent; Sauvageot, Henri; Castanet, Laurent; Lemorton, Joël; Cornet, Frédéric; Leconte, Katia

    2006-04-01

    A methodology to simulate two-dimensional rain rate fields at large scale (1000 × 1000 km2, the scale of a satellite telecommunication beam or a terrestrial fixed broadband wireless access network) is proposed. It relies on a rain rate field cellular decomposition. At small scale (˜20 × 20 km2), the rain field is split up into its macroscopic components, the rain cells, described by the Hybrid Cell (HYCELL) cellular model. At midscale (˜150 × 150 km2), the rain field results from the conglomeration of rain cells modeled by HYCELL. To account for the rain cell spatial distribution at midscale, the latter is modeled by a doubly aggregative isotropic random walk, the optimal parameterization of which is derived from radar observations at midscale. The extension of the simulation area from the midscale to the large scale (1000 × 1000 km2) requires the modeling of the weather frontal area. The latter is first modeled by a Gaussian field with anisotropic covariance function. The Gaussian field is then turned into a binary field, giving the large-scale locations over which it is raining. This transformation requires the definition of the rain occupation rate over large-scale areas. Its probability distribution is determined from observations by the French operational radar network ARAMIS. The coupling with the rain field modeling at midscale is immediate whenever the large-scale field is split up into midscale subareas. The rain field thus generated accounts for the local CDF at each point, defining a structure spatially correlated at small scale, midscale, and large scale. It is then suggested that this approach be used by system designers to evaluate diversity gain, terrestrial path attenuation, or slant path attenuation for different azimuth and elevation angle directions.
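    The large-scale thresholding step can be prototyped in a few lines: smooth white noise to impose spatial correlation (isotropic here, whereas the model above uses an anisotropic covariance), then threshold at the quantile matching the prescribed rain occupation rate:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def rain_occupancy_mask(shape, occupation_rate, correlation_px=20, seed=0):
            """Binary large-scale rain mask: Gaussian-correlated field
            thresholded so exactly `occupation_rate` of the area is raining."""
            rng = np.random.default_rng(seed)
            field = gaussian_filter(rng.normal(size=shape), sigma=correlation_px)
            threshold = np.quantile(field, 1.0 - occupation_rate)
            return field > threshold

        mask = rain_occupancy_mask((1000, 1000), occupation_rate=0.08)
        print(mask.mean())   # ~0.08 of the 1000 x 1000 km domain is raining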

  3. Database management system for large container inspection system

    International Nuclear Information System (INIS)

    Gao Wenhuan; Li Zheng; Kang Kejun; Song Binshan; Liu Fang

    1998-01-01

    Large Container Inspection System (LCIS) based on radiation imaging technology is a powerful tool for the Customs to check the contents inside a large container without opening it. The author has discussed a database application system, as a part of the Signal and Image System (SIS), for the LCIS. The basic requirements analysis was done first. The computer hardware, operating system, and database management system were then selected in light of current technology and market offerings. Based on the above considerations, a database application system with central management and distributed operation features has been implemented

  4. The Demand of Part-time in European Companies: A Multilevel Modeling Approach

    OpenAIRE

    2011-01-01

    Part-time work is one of the best-known "atypical" working time arrangements in Europe, shaping working time regimes across countries and mapping work-life balance patterns. Comparative studies on part-time work across European countries have pointed to large differences in the development, extent and quality of part-time employment. To explain such differences, the focus has been mainly on labor supply considerations and on public policies and/or institutional arrangements pertaining t...

  5. Exact Results in Non-Supersymmetric Large N Orientifold Field Theories

    CERN Document Server

    Armoni, Adi; Veneziano, Gabriele

    2003-01-01

    We consider non-supersymmetric large N orientifold field theories. Specifically, we discuss a gauge theory with a Dirac fermion in the anti-symmetric tensor representation. We argue that, at large N and in a large part of its bosonic sector, this theory is non-perturbatively equivalent to N=1 SYM, so that exact results established in the latter (parent) theory also hold in the daughter orientifold theory. In particular, the non-supersymmetric theory has an exactly calculable bifermion condensate, exactly degenerate parity doublets, and a vanishing cosmological constant (all this to leading order in 1/N).

  6. Inventory Centralization Decision Framework for Spare Parts

    DEFF Research Database (Denmark)

    Gregersen, Nicklas; Herbert-Hansen, Zaza Nadja Lee

    2018-01-01

    Within the current literature, there is a lack of a holistic and multidisciplinary approach to managing spare parts and their inventory configuration. This paper addresses this research gap by examining the key contextual factors which influence the degree of inventory centralization and proposes a novel holistic theoretical framework, the Inventory Centralization Decision Framework (ICDF), useful for practitioners. Through an extensive review of inventory management literature, six contextual factors influencing the degree of inventory centralization have been identified. Using the ICDF, practitioners can assess the most advantageous inventory configuration of spare parts. The framework is tested on a large global company which, as a result, today actively uses the ICDF; thus showing its practical applicability.

  7. Really big data: Processing and analysis of large datasets

    Science.gov (United States)

    Modern animal breeding datasets are large and getting larger, due in part to the recent availability of DNA data for many animals. Computational methods for efficiently storing and analyzing those data are under development. The amount of storage space required for such datasets is increasing rapidl...

  8. Symmetric Encryption Relying on Chaotic Henon System for Secure Hardware-Friendly Wireless Communication of Implantable Medical Systems

    Directory of Open Access Journals (Sweden)

    Taha Belkhouja

    2018-05-01

    Healthcare remote devices are recognized as a promising technology for treating health-related issues. Among them are the wireless Implantable Medical Devices (IMDs): these electronic devices are manufactured to treat, monitor, support or replace defective vital organs while being implanted in the human body. Thus, they play a critical role in healing and even saving lives. Current IMD research trends concentrate on their medical reliability. However, deploying wireless technology in such applications without considering security measures may offer adversaries an easy way to compromise them. With the aim of securing these devices, we explore a new scheme that creates symmetric encryption keys to encrypt the wireless communication portion. We will rely on chaotic systems to obtain a synchronized pseudo-random key. The latter will be generated separately in the system in such a way that avoids a wireless key exchange, thus protecting patients from key theft. Once the key is defined, a simple encryption system that we propose in this paper will be used. We analyze the performance of this system from a cryptographic point of view to ensure that it offers better safety and protection for patients.
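    As a rough illustration of the scheme's main ingredient, the chaotic Hénon map (x_{n+1} = 1 - a·x_n² + y_n, y_{n+1} = b·x_n, with the classical parameters a = 1.4, b = 0.3) can drive a keystream that both sides regenerate from a shared initial state, avoiding a wireless key exchange. The byte-extraction rule and the seed below are illustrative assumptions, not the paper's construction, and a toy like this carries no security guarantee:

        # Toy sketch only: keystream from the Henon map, XORed with the plaintext.
        def henon_keystream(x, y, n, a=1.4, b=0.3):
            out = []
            for _ in range(n):
                x, y = 1.0 - a * x * x + y, b * x    # Henon iteration
                out.append(int(abs(x) * 1e6) % 256)  # crude byte extraction (assumed)
            return bytes(out)

        def xor_cipher(data: bytes, x0: float, y0: float) -> bytes:
            ks = henon_keystream(x0, y0, len(data))
            return bytes(d ^ k for d, k in zip(data, ks))

        msg = b"heart rate: 72 bpm"
        ct = xor_cipher(msg, 0.1, 0.1)   # encrypt with shared seed (x0, y0)
        pt = xor_cipher(ct, 0.1, 0.1)    # decrypt: same map, same seed
        assert pt == msg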

  9. Data Mining and Visualization of Large Human Behavior Data Sets

    DEFF Research Database (Denmark)

    Cuttone, Andrea

    and credit card transactions – have provided us new sources for studying our behavior. In particular smartphones have emerged as new tools for collecting data about human activity, thanks to their sensing capabilities and their ubiquity. This thesis investigates the question of what we can learn about human behavior from this rich and pervasive mobile sensing data. In the first part, we describe a large-scale data collection deployment collecting high-resolution data for over 800 students at the Technical University of Denmark using smartphones, including location, social proximity, calls and SMS. We provide an overview of the technical infrastructure, the experimental design, and the privacy measures. The second part investigates the usage of this mobile sensing data for understanding personal behavior. We describe two large-scale user studies on the deployment of self-tracking apps, in order to understand...

  10. Large Top-Quark Mass and Nonlinear Representation of Flavor Symmetry

    International Nuclear Information System (INIS)

    Feldmann, Thorsten; Mannel, Thomas

    2008-01-01

    We consider an effective theory (ET) approach to flavor-violating processes beyond the standard model, where the breaking of flavor symmetry is described by spurion fields whose low-energy vacuum expectation values are identified with the standard model Yukawa couplings. Insisting on canonical mass dimensions for the spurion fields, the large top-quark Yukawa coupling also implies a large expectation value for the associated spurion, which breaks part of the flavor symmetry already at the UV scale Λ of the ET. Below that scale, flavor symmetry in the ET is represented in a nonlinear way by introducing Goldstone modes for the partly broken flavor symmetry and spurion fields transforming under the residual symmetry. As a result, the dominance of certain flavor structures in rare quark decays can be understood in terms of the 1/Λ expansion in the ET

  11. Very large eddy simulation of the Red Sea overflow

    Science.gov (United States)

    Ilıcak, Mehmet; Özgökmen, Tamay M.; Peters, Hartmut; Baumert, Helmut Z.; Iskandarani, Mohamed

    Mixing between overflows and ambient water masses is a critical problem of deep-water mass formation in the downwelling branch of the meridional overturning circulation of the ocean. Modeling approaches that have been tested so far rely either on algebraic parameterizations in hydrostatic ocean circulation models, or on large eddy simulations that resolve most of the mixing using nonhydrostatic models. In this study, we examine the performance of a set of turbulence closures that have not previously been tested against observational data for overflows. We employ the so-called very large eddy simulation (VLES) technique, which allows the use of k-ɛ models in nonhydrostatic models. This is done by applying a dynamic spatial filtering to the k-ɛ equations. To our knowledge, this is the first time that the VLES approach has been adopted for an ocean modeling problem. The performance of the k-ɛ and VLES models is evaluated by conducting numerical simulations of the Red Sea overflow and comparing them to observations from the Red Sea Outflow Experiment (REDSOX). The computations are constrained to one of the main channels transporting the overflow, which is narrow enough to permit the use of a two-dimensional (and nonhydrostatic) model. A large set of experiments is conducted using different closure models, Reynolds numbers and spatial resolutions. It is found that, when no turbulence closure is used, the basic structure of the overflow, consisting of a well-mixed bottom layer (BL) and an entraining interfacial layer (IL), cannot be reproduced. The k-ɛ model leads to unrealistic thicknesses for both the BL and IL, while VLES results in the most realistic reproduction of the REDSOX observations.

  12. Large-scale transportation network congestion evolution prediction using deep learning theory.

    Directory of Open Access Journals (Sweden)

    Xiaolei Ma

    Understanding how congestion at one location can cause ripples throughout a large-scale transportation network is vital for transportation researchers and practitioners to pinpoint traffic bottlenecks for congestion mitigation. Traditional studies rely on either mathematical equations or simulation techniques to model traffic congestion dynamics. However, most of the approaches have limitations, largely due to unrealistic assumptions and cumbersome parameter calibration processes. With the development of Intelligent Transportation Systems (ITS) and the Internet of Things (IoT), transportation data become more and more ubiquitous. This has triggered a series of data-driven research efforts to investigate transportation phenomena. Among them, deep learning theory is considered one of the most promising techniques to tackle tremendous high-dimensional data. This study attempts to extend deep learning theory into large-scale transportation network analysis. A deep Restricted Boltzmann Machine and Recurrent Neural Network architecture is utilized to model and predict traffic congestion evolution based on Global Positioning System (GPS) data from taxis. A numerical study in Ningbo, China is conducted to validate the effectiveness and efficiency of the proposed method. Results show that the prediction accuracy can reach as high as 88% within less than 6 minutes when the model is implemented in a Graphics Processing Unit (GPU)-based parallel computing environment. The predicted congestion evolution patterns can be visualized temporally and spatially through a map-based platform to identify vulnerable links for proactive congestion mitigation.
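    As a sketch of the deep-learning building block named above, the snippet below performs one contrastive-divergence (CD-1) update for a binary Restricted Boltzmann Machine. Dimensions, learning rate, and the synthetic congestion snapshot are illustrative assumptions; the paper's full RBM-plus-RNN architecture on taxi GPS features is not reproduced here, and biases are omitted for brevity:

        import numpy as np

        rng = np.random.default_rng(1)
        n_vis, n_hid, lr = 64, 32, 0.05            # e.g. 64 binary road-link states
        W = 0.01 * rng.standard_normal((n_vis, n_hid))

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def cd1_update(v0):
            ph0 = sigmoid(v0 @ W)                         # hidden probs given data
            h0 = (rng.random(n_hid) < ph0).astype(float)  # sample hidden units
            pv1 = sigmoid(h0 @ W.T)                       # reconstruct visible layer
            ph1 = sigmoid(pv1 @ W)                        # hidden probs given recon
            return lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))

        v = (rng.random(n_vis) < 0.3).astype(float)  # one synthetic congestion snapshot
        W += cd1_update(v)                           # single learning step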

  13. A large area diamond-based beam tagging hodoscope for ion therapy monitoring

    Science.gov (United States)

    Gallin-Martel, M.-L.; Abbassi, L.; Bes, A.; Bosson, G.; Collot, J.; Crozes, T.; Curtoni, S.; Dauvergne, D.; De Nolf, W.; Fontana, M.; Gallin-Martel, L.; Hostachy, J.-Y.; Krimmer, J.; Lacoste, A.; Marcatili, S.; Morse, J.; Motte, J.-F.; Muraz, J.-F.; Rarbi, F. E.; Rossetto, O.; Salomé, M.; Testa, É.; Vuiart, R.; Yamouni, M.

    2018-01-01

    The MoniDiam project is part of the French national collaboration CLaRyS (Contrôle en Ligne de l'hAdronthérapie par RaYonnements Secondaires) for on-line monitoring of hadron therapy. It relies on the imaging of nuclear reaction products that is related to the ion range. The goal here is to provide large-area beam detectors with a high detection efficiency for carbon or proton beams, giving time and position measurements at 100 MHz count rates (a beam tagging hodoscope). High radiation hardness and intrinsic electronic properties make diamonds reliable and very fast detectors with a good signal-to-noise ratio. Commercial Chemical Vapor Deposited (CVD) polycrystalline, heteroepitaxial and monocrystalline diamonds were studied. Their applicability as particle detectors was investigated using α and β radioactive sources, 95 MeV/u carbon ion beams at GANIL, and 8.5 keV X-ray photon bunches from the ESRF. The latter facility offers the unique capability of providing a focused (about 1 μm) beam in bunches of 100 ps duration, with an almost uniform energy deposition in the irradiated detector volume, therefore mimicking the interaction of single ions. A signal rise time resolution ranging from 20 to 90 ps rms and an energy resolution of 7 to 9% were measured using diamonds with aluminum disk-shaped surface metallization. This enabled us to conclude that polycrystalline CVD diamond detectors are good candidates for our beam tagging hodoscope development. Recently, double-side stripped metallized diamonds were tested using the XBIC (X-ray Beam Induced Current) set-up of the ID21 beamline at the ESRF, which permits us to evaluate the capability of diamond to be used as a position-sensitive detector. The final detector will consist of a mosaic arrangement of double-side stripped diamond sensors read out by dedicated fast integrated electronics with several hundred channels.

  14. Clinical leadership: Part 2. Transforming leadership.

    Science.gov (United States)

    Sheridan, Mary; Corney, Barbra

    2003-08-01

    The second article in a series of three focuses on group-driven approaches to tackling problems and shows how good leadership relies on teamwork and respect for colleagues, helping to enhance problem-solving and enabling you to build on your team's successes.

  15. Time-gated ballistic imaging using a large aperture switching beam.

    Science.gov (United States)

    Mathieu, Florian; Reddemann, Manuel A; Palmer, Johannes; Kneer, Reinhold

    2014-03-24

    Ballistic imaging commonly denotes the formation of line-of-sight shadowgraphs through turbid media by suppression of multiply scattered photons. The technique relies on a femtosecond laser acting as the light source for the images and as the switch for an optical Kerr gate that separates ballistic photons from multiply scattered ones. The achievable image resolution is one major limitation for the investigation of small objects. In this study, practical influences on the optical Kerr gate and image quality are discussed theoretically and experimentally, applying a switching beam with a large aperture (D = 19 mm). It is shown how the switching pulse energy and the synchronization of the switching and imaging pulses in the Kerr cell influence the gate's transmission. Image quality of ballistic imaging and standard shadowgraphy is evaluated and compared, showing that the present ballistic imaging setup is advantageous for optical densities in the range of 8 […], and how to convert the ballistic imaging setup into a schlieren-type system with an optical schlieren edge.

  16. High-Luminosity Large Hadron Collider (HL-LHC) Preliminary Design Report

    CERN Document Server

    Apollinari, G; Béjar Alonso, I; Brüning, O; Lamont, M; Rossi, L

    2015-01-01

    The Large Hadron Collider (LHC) is one of the largest scientific instruments ever built. Since opening up a new energy frontier for exploration in 2010, it has gathered a global user community of about 7,000 scientists working in fundamental particle physics and the physics of hadronic matter at extreme temperature and density. To sustain and extend its discovery potential, the LHC will need a major upgrade in the 2020s. This will increase its luminosity (rate of collisions) by a factor of five beyond the original design value and the integrated luminosity (total collisions created) by a factor ten. The LHC is already a highly complex and exquisitely optimised machine so this upgrade must be carefully conceived and will require about ten years to implement. The new configuration, known as High Luminosity LHC (HL-LHC), will rely on a number of key innovations that push accelerator technology beyond its present limits. Among these are cutting-edge 11-12 tesla superconducting magnets, compact superconducting cav...

  17. ABOUT MODELING COMPLEX ASSEMBLIES IN SOLIDWORKS – LARGE AXIAL BEARING

    Directory of Open Access Journals (Sweden)

    Cătălin IANCU

    2017-12-01

    This paper presents the modeling strategy used in SOLIDWORKS for modeling special items such as a large axial bearing, together with the steps to be taken in order to obtain a better design. The features used for modeling parts are presented, and then the steps that must be taken in order to obtain the 3D model of a large axial bearing used in bucket-wheel equipment for charcoal moving.

  18. Parts of the Whole: Approaching Education as a System

    Directory of Open Access Journals (Sweden)

    Dorothy Wallace

    2009-07-01

    An educational system is a highly coupled complex system of inputs, outputs, sensors and actuators. Using an engineering perspective, this column begins the process of naming and categorizing parts of the system. It then focuses on teachers as one part of a large system, and analyzes the forces that influence how teachers work, and that draw or repel individuals to a teaching career. The growing shortage of qualified teachers can be explained by properties of the system as a whole that determine the context in which teachers do their job.

  19. Continuum Thermodynamics - Part II: Applications and Examples

    Science.gov (United States)

    Albers, Bettina; Wilmanski, Krzysztof

    The intention in writing Part II of the book on continuum thermodynamics was the deepening of some issues covered in Part I as well as the development of certain skills in dealing with practical problems of macroscopic processes. However, the main motivation for this part is the presentation of the main facets of thermodynamics which appear when interdisciplinary problems are considered. There are many monographs on the subjects of solid mechanics and thermomechanics, on fluid mechanics and on coupled fields, but most of them cover only special problems in great detail which are characteristic of the chosen field. It is rather seldom that relations between these fields are discussed. This concerns, for instance, large deformations of the skeleton of porous materials with diffusion (e.g. lungs), couplings of deformable particles with the fluid motion in suspensions, couplings of adsorption processes and chemical reactions in immiscible mixtures with diffusion, various multi-component aspects of the motion, e.g. of avalanches, such as segregation processes, etc.

  20. N=4 super-Yang-Mills in LHC superspace. Part II: Non-chiral correlation functions of the stress-tensor multiplet

    CERN Document Server

    Chicherin, Dmitry

    2017-03-09

    We study the multipoint super-correlation functions of the full non-chiral stress-tensor multiplet in N=4 super-Yang-Mills theory in the Born approximation. We derive effective supergraph Feynman rules for them. Surprisingly, the Feynman rules for the non-chiral correlators differ only slightly from those for the chiral correlators. We rely on the formulation of the theory in Lorentz harmonic chiral (LHC) superspace elaborated in the twin paper (Part I). In this approach only the chiral half of the supersymmetry is manifest. The other half is realized by nonlinear and nonlocal transformations of the LHC superfields. However, at Born level only the simple linear part of the transformations is relevant. It corresponds to effectively working in the self-dual sector of the theory. Our method is also applicable to a wider class of supermultiplets like all the half-BPS operators and the Konishi multiplet.

  1. A large-scale soil-structure interaction experiment: Part I design and construction

    International Nuclear Information System (INIS)

    Tang, H.T.; Tang, Y.K.; Wall, I.B.; Lin, E.

    1987-01-01

    In the simulated earthquake experiments (SIMQUAKE) sponsored by EPRI, the detonation of vertical arrays of explosives propagated wave motions through the ground to the model structures. Although such a simulation can provide information about dynamic soil-structure interaction (SSI) characteristics in a strong-motion environment, it lacks the seismic wave scattering characteristics needed for studying seismic input to the soil-structure system and the effect of different kinds of wave composition on the soil-structure response. To remedy the inadequacy of the simulated earthquake SSI experiment, the Electric Power Research Institute (EPRI) and the Taiwan Power Company (Taipower) jointly sponsored a large-scale SSI experiment in the field. The objectives of the experiment are: (1) to obtain an actual strong-motion, earthquake-induced database in a soft-soil environment which will substantiate predictive and design SSI models; and (2) to assess the dynamic response and margins of nuclear power plant reactor containment internal components under actual earthquake-induced excitation. These objectives are accomplished by recording and analyzing data from two instrumented, scaled-down (1/4- and 1/12-scale) reinforced concrete containments sited in a high-seismicity region of Taiwan where a strong-motion seismic array network is located

  2. The perception of (naked only) bodies and faceless heads relies on holistic processing: Evidence from the inversion effect.

    Science.gov (United States)

    Bonemei, Rob; Costantino, Andrea I; Battistel, Ilenia; Rivolta, Davide

    2018-05-01

    Faces and bodies are more difficult to perceive when presented inverted than when presented upright (i.e., the stimulus inversion effect), an effect that has been attributed to the disruption of holistic processing. The features that can trigger holistic processing in faces and bodies, however, still remain elusive. In this study, using a sequential matching task, we tested whether stimulus inversion affects various categories of visual stimuli: faces, faceless heads, faceless heads in body context, headless naked bodies, whole naked bodies, headless clothed bodies, and whole clothed bodies. Both accuracy and inversion efficiency score results show inversion effects for all categories except clothed bodies (with and without heads). In addition, the magnitude of the inversion effect for faces, naked bodies, and faceless heads was similar. Our findings demonstrate that the perception of faces, faceless heads, and naked bodies relies on holistic processing. Clothed bodies (with and without heads), on the other hand, may trigger clothes-sensitive rather than body-sensitive perceptual mechanisms. © 2017 The British Psychological Society.

  3. Numerical simulation of complex part manufactured by selective laser melting process

    Science.gov (United States)

    Van Belle, Laurent

    2017-10-01

    The Selective Laser Melting (SLM) process, which belongs to the family of Additive Manufacturing (AM) technologies, enables parts to be built layer by layer from metallic powder and a CAD model. The physical phenomena that occur in the process raise the same issues as conventional welding: thermal gradients generate significant residual stresses and distortions in the parts. Moreover, large and complex manufactured parts accentuate these undesirable effects. Therefore, it is essential for manufacturers to gain a better understanding of the process and to ensure production reliability of parts with high added value. This paper focuses on the simulation of manufacturing a turbine by the SLM process in order to calculate residual stresses and distortions. Numerical results will be presented.

  4. Insights into Hox protein function from a large scale combinatorial analysis of protein domains.

    Directory of Open Access Journals (Sweden)

    Samir Merabet

    2011-10-01

    Protein function is encoded within protein sequence and protein domains. However, how protein domains cooperate within a protein to modulate overall activity and how this impacts functional diversification at the molecular and organism levels remains largely unaddressed. Focusing on three domains of the central class Drosophila Hox transcription factor AbdominalA (AbdA), we used combinatorial domain mutations and most known AbdA developmental functions as biological readouts to investigate how protein domains collectively shape protein activity. The results uncover redundancy, interactivity, and multifunctionality of protein domains as salient features underlying overall AbdA protein activity, providing means to apprehend functional diversity and accounting for the robustness of Hox-controlled developmental programs. Importantly, the results highlight context-dependency in protein domain usage and interaction, allowing major modifications in domains to be tolerated without general functional loss. The non-pleiotropic effect of domain mutation suggests that protein modification may contribute more broadly to molecular changes underlying morphological diversification during evolution, so far thought to rely largely on modifications in gene cis-regulatory sequences.

  5. A survey and experimental comparison of distributed SPARQL engines for very large RDF data

    KAUST Repository

    Abdelaziz, Ibrahim; Harbi, Razen; Khayyat, Zuhair; Kalnis, Panos

    2017-01-01

    Distributed SPARQL engines promise to support very large RDF datasets by utilizing shared-nothing computer clusters. Some are based on distributed frameworks such as MapReduce; others implement proprietary distributed processing; and some rely on expensive preprocessing for data partitioning. These systems exhibit a variety of trade-offs that are not well-understood, due to the lack of any comprehensive quantitative and qualitative evaluation. In this paper, we present a survey of 22 state-of-the-art systems that cover the entire spectrum of distributed RDF data processing and categorize them by several characteristics. Then, we select 12 representative systems and perform extensive experimental evaluation with respect to preprocessing cost, query performance, scalability and workload adaptability, using a variety of synthetic and real large datasets with up to 4.3 billion triples. Our results provide valuable insights for practitioners to understand the trade-offs for their usage scenarios. Finally, we publish online our evaluation framework, including all datasets and workloads, for researchers to compare their novel systems against the existing ones.
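    For a concrete sense of the workload such engines evaluate, the snippet below runs a two-hop join, the basic pattern behind distributed SPARQL benchmarks, on a tiny in-memory graph with the single-machine rdflib library. The example triples and query are invented; the surveyed systems answer this kind of query over billions of triples:

        from rdflib import Graph, Literal, Namespace

        EX = Namespace("http://example.org/")
        g = Graph()
        g.add((EX.alice, EX.knows, EX.bob))
        g.add((EX.bob, EX.knows, EX.carol))
        g.add((EX.alice, EX.name, Literal("Alice")))

        q = """
        PREFIX ex: <http://example.org/>
        SELECT ?a ?c WHERE {
          ?a ex:knows ?b .     # first hop
          ?b ex:knows ?c .     # second hop: a join on ?b
        }
        """
        for row in g.query(q):
            print(row.a, row.c)   # -> http://example.org/alice http://example.org/carol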

  6. A survey and experimental comparison of distributed SPARQL engines for very large RDF data

    KAUST Repository

    Abdelaziz, Ibrahim

    2017-10-19

    Distributed SPARQL engines promise to support very large RDF datasets by utilizing shared-nothing computer clusters. Some are based on distributed frameworks such as MapReduce; others implement proprietary distributed processing; and some rely on expensive preprocessing for data partitioning. These systems exhibit a variety of trade-offs that are not well-understood, due to the lack of any comprehensive quantitative and qualitative evaluation. In this paper, we present a survey of 22 state-of-the-art systems that cover the entire spectrum of distributed RDF data processing and categorize them by several characteristics. Then, we select 12 representative systems and perform extensive experimental evaluation with respect to preprocessing cost, query performance, scalability and workload adaptability, using a variety of synthetic and real large datasets with up to 4.3 billion triples. Our results provide valuable insights for practitioners to understand the trade-offs for their usage scenarios. Finally, we publish online our evaluation framework, including all datasets and workloads, for researchers to compare their novel systems against the existing ones.

  7. Factors Influencing Retention Among Part-Time Clinical Nursing Faculty.

    Science.gov (United States)

    Carlson, Joanne S

    This study sought to determine job characteristics influencing retention of part-time clinical nurse faculty teaching in pre-licensure nursing education. Large numbers of part-time faculty are needed to educate students in the clinical setting. Faculty retention helps maintain consistency and may positively influence student learning. A national sample of part-time clinical nurse faculty teaching in baccalaureate programs responded to a web-based survey. Respondents were asked to identify the primary reason for wanting or not wanting to continue working for a school of nursing (SON). Affinity for students, pay and benefits, support, and feeling valued were the top three reasons given for continuing to work at an SON. Conflicts with life and other job responsibilities, low pay, and workload were the top three reasons given for not continuing. Results from this study can assist nursing programs in finding strategies to help reduce attrition among part-time clinical faculty.

  8. Parameterization of a Hydrological Model for a Large, Ungauged Urban Catchment

    Directory of Open Access Journals (Sweden)

    Gerald Krebs

    2016-10-01

    Urbanization leads to the replacement of natural areas by impervious surfaces and affects the catchment hydrological cycle with adverse environmental impacts. Low impact development (LID) tools that mimic the hydrological processes of natural areas have been developed and applied to mitigate these impacts. Hydrological simulations are one way to evaluate LID performance, but the associated small-scale processes require a highly spatially distributed and explicit modeling approach. However, detailed data for model development are often not available for large urban areas, hampering model parameterization. In this paper we propose a methodology to parameterize a hydrological model for a large, ungauged urban area while maintaining a detailed surface discretization for direct parameter manipulation for LID simulation and a firm reliance on available data for model conceptualization. Catchment delineation was based on a high-resolution digital elevation model (DEM) and model parameterization relied on a novel model regionalization approach. The impact of automated delineation and model regionalization on simulation results was evaluated for three monitored study catchments (5.87–12.59 ha). The simulated runoff peak was most sensitive to accurate catchment discretization and calibration, while both the runoff volume and the fit of the hydrograph were less affected.

  9. Dendro-dendritic interactions between motion-sensitive large-field neurons in the fly.

    Science.gov (United States)

    Haag, Juergen; Borst, Alexander

    2002-04-15

    For visual course control, flies rely on a set of motion-sensitive neurons called lobula plate tangential cells (LPTCs). Among these cells, the so-called CH (centrifugal horizontal) cells shape by their inhibitory action the receptive field properties of other LPTCs called FD (figure detection) cells specialized for figure-ground discrimination based on relative motion. Studying the ipsilateral input circuitry of CH cells by means of dual-electrode and combined electrical-optical recordings, we find that CH cells receive graded input from HS (large-field horizontal system) cells via dendro-dendritic electrical synapses. This particular wiring scheme leads to a spatial blur of the motion image on the CH cell dendrite, and, after inhibiting FD cells, to an enhancement of motion contrast. This could be crucial for enabling FD cells to discriminate object from self motion.

  10. Simultaneous electrical and mechanical resonance drive for large signal amplification of micro resonators

    KAUST Repository

    Hasan, M. H.

    2018-01-12

    Achieving a large signal-to-noise ratio using low levels of excitation signal is a key requirement for practical applications of micro and nano electromechanical resonators. In this work, we introduce the double electromechanical resonance drive concept to achieve an order-of-magnitude dynamic signal amplification in micro resonators. The concept relies on simultaneously activating the micro-resonator's mechanical and electrical resonance frequencies. We report an input voltage amplification of up to 15 times for a micro-resonator when its electrical resonance is tuned to match the mechanical resonance, which leads to dynamic signal amplification in air (quality factor enhancement). Furthermore, using a multi-frequency excitation technique, input voltage and vibrational amplification of up to 30 times were shown for the same micro-resonator while relaxing the need to match its mechanical and electrical resonances.
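    The matching condition behind the double-resonance drive can be written generically, with L and C standing for the interface circuit and k_eff, m_eff for the effective stiffness and mass of the resonator (symbols chosen here for illustration, not taken from the record):

        $$ f_{\mathrm{elec}} = \frac{1}{2\pi\sqrt{LC}} \;=\; f_{\mathrm{mech}} = \frac{1}{2\pi}\sqrt{\frac{k_{\mathrm{eff}}}{m_{\mathrm{eff}}}} $$

    When the two resonances coincide, the electrical gain multiplies the mechanical one, which is consistent with the order-of-magnitude amplification reported.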

  11. Simultaneous electrical and mechanical resonance drive for large signal amplification of micro resonators

    KAUST Repository

    Hasan, M. H.; Alsaleem, F. M.; Jaber, Nizar; Hafiz, Md Abdullah Al; Younis, Mohammad I.

    2018-01-01

    Achieving a large signal-to-noise ratio using low levels of excitation signal is a key requirement for practical applications of micro and nano electromechanical resonators. In this work, we introduce the double electromechanical resonance drive concept to achieve an order-of-magnitude dynamic signal amplification in micro resonators. The concept relies on simultaneously activating the micro-resonator's mechanical and electrical resonance frequencies. We report an input voltage amplification of up to 15 times for a micro-resonator when its electrical resonance is tuned to match the mechanical resonance, which leads to dynamic signal amplification in air (quality factor enhancement). Furthermore, using a multi-frequency excitation technique, input voltage and vibrational amplification of up to 30 times were shown for the same micro-resonator while relaxing the need to match its mechanical and electrical resonances.

  12. ISLA: An Isochronous Spectrometer with Large Acceptances

    Energy Technology Data Exchange (ETDEWEB)

    Bazin, D., E-mail: bazin@nscl.msu.edu; Mittig, W.

    2013-12-15

    A novel type of recoil mass spectrometer and separator is proposed for the future secondary radioactive beams of the ReA12 accelerator at NSCL/FRIB, inspired from the TOFI spectrometer developed at the Los Alamos National Laboratory for online mass measurements. The Isochronous Spectrometer with Large Acceptances (ISLA) is able to achieve superior characteristics without the compromises that usually plague the design of large acceptance spectrometers. ISLA can provide mass-to-charge ratio (m/q) measurements to better than 1 part in 1000 by using an optically isochronous time-of-flight independent of the momentum vector of the recoiling ions, despite large acceptances of 20% in momentum and 64 msr in solid angle. The characteristics of this unique design are shown, including requirements for auxiliary detectors around the target and the various types of reactions to be used with the re-accelerated radioactive beams of the future ReA12 accelerator.
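    The principle behind the m/q measurement is the cyclotron relation: in a magnetic system tuned to be isochronous, the revolution period depends only on the mass-to-charge ratio and the field, not on the ion velocity (a textbook relation given here for orientation, not a formula quoted from the record):

        $$ T = \frac{2\pi}{B}\,\frac{m}{q} \qquad\Longrightarrow\qquad \frac{m}{q} = \frac{B\,T}{2\pi} $$

    so timing the flight to one part in 1000 translates directly into the quoted m/q resolution.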

  13. Expected Future Conditions for Secure Power Operation with Large Scale of RES Integration

    International Nuclear Information System (INIS)

    Majstrovic, G.; Majstrovic, M.; Sutlovic, E.

    2015-01-01

    EU energy strategy is strongly focused on the large-scale integration of renewable energy sources. The most dominant part here is taken by variable sources - wind power plants. Grid integration of intermittent sources, along with keeping the system stable and secure, is one of the biggest challenges for the TSOs. This aspect is often neglected by energy policy makers, so this paper deals with the expected future conditions for secure power system operation with large-scale wind integration. It gives an overview of expected wind integration development in the EU, as well as expected P/f regulation and control needs. The paper concludes with several recommendations. (author).

  14. Efficacy and safety of dabigatran compared with warfarin at different levels of international normalised ratio control for stroke prevention in atrial fibrillation: an analysis of the RE-LY trial.

    Science.gov (United States)

    Wallentin, Lars; Yusuf, Salim; Ezekowitz, Michael D; Alings, Marco; Flather, Marcus; Franzosi, Maria Grazia; Pais, Prem; Dans, Antonio; Eikelboom, John; Oldgren, Jonas; Pogue, Janice; Reilly, Paul A; Yang, Sean; Connolly, Stuart J

    2010-09-18

    Effectiveness and safety of warfarin is associated with the time in therapeutic range (TTR) with an international normalised ratio (INR) of 2·0-3·0. In the Randomised Evaluation of Long-term Anticoagulation Therapy (RE-LY) trial, dabigatran versus warfarin reduced both stroke and haemorrhage. We aimed to investigate the primary and secondary outcomes of the RE-LY trial in relation to each centre's mean TTR (cTTR) in the warfarin population. In the RE-LY trial, 18 113 patients at 951 sites were randomly assigned to 110 mg or 150 mg dabigatran twice daily versus warfarin dose adjusted to INR 2·0-3·0. Median follow-up was 2·0 years. For 18 024 patients at 906 sites, the cTTR was estimated by averaging TTR for individual warfarin-treated patients calculated by the Rosendaal method. We compared the outcomes of RE-LY across the three treatment groups within four groups defined by the quartiles of cTTR. RE-LY is registered with ClinicalTrials.gov, number NCT00262600. The quartiles of cTTR for patients in the warfarin group were: less than 57·1%, 57·1-65·5%, 65·5-72·6%, and greater than 72·6%. There were no significant interactions between cTTR and prevention of stroke and systemic embolism with either 110 mg dabigatran (interaction p=0·89) or 150 mg dabigatran (interaction p=0·20) versus warfarin. Neither were any significant interactions recorded with cTTR with regards to intracranial bleeding with 110 mg dabigatran (interaction p=0·71) or 150 mg dabigatran (interaction p=0·89) versus warfarin. There was a significant interaction between cTTR and major bleeding when comparing 150 mg dabigatran with warfarin (interaction p=0·03), with less bleeding events at lower cTTR but similar events at higher cTTR, whereas rates of major bleeding were lower with 110 mg dabigatran than with warfarin irrespective of cTTR. There were significant interactions between cTTR and effects of both 110 mg and 150 mg dabigatran versus warfarin on the composite of all
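    The Rosendaal method mentioned above linearly interpolates the INR between successive measurements and counts the fraction of person-days falling inside the 2.0-3.0 window. A minimal sketch, with invented visit days and INR values:

        # Sketch of the Rosendaal linear-interpolation TTR; the data are invented.
        import numpy as np

        def rosendaal_ttr(days, inr, lo=2.0, hi=3.0):
            daily = np.interp(np.arange(days[0], days[-1] + 1), days, inr)
            return np.mean((daily >= lo) & (daily <= hi))

        days = [0, 14, 28, 42, 70]          # days of INR measurement
        inr = [1.6, 2.4, 3.4, 2.6, 2.2]     # INR value at each visit
        print(f"TTR = {rosendaal_ttr(days, inr):.0%}")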

  15. Transplant recipients are vulnerable to coverage denial under Medicare Part D.

    Science.gov (United States)

    Potter, Lisa M; Maldonado, Angela Q; Lentine, Krista L; Schnitzler, Mark A; Zhang, Zidong; Hess, Gregory P; Garrity, Edward; Kasiske, Bertram L; Axelrod, David A

    2018-02-15

    Transplant immunosuppressants are often used off-label because of insufficient randomized prospective trial data to achieve organ-specific US Food and Drug Administration (FDA) approval. Transplant recipients who rely on Medicare Part D for immunosuppressant drug coverage are vulnerable to coverage denial for off-label prescriptions, unless use is supported by Centers for Medicare & Medicaid Services (CMS)-approved compendia. An integrated dataset including national transplant registry data and 3 years of dispensed pharmacy records was used to identify the prevalence of immunosuppression use that is both off-label and not supported by CMS-approved compendia. Numbers of potentially vulnerable transplant recipients were identified. Off-label and off-compendia immunosuppression regimens are frequently prescribed (3-year mean: lung 66.5%, intestine 34.2%, pancreas 33.4%, heart 21.8%, liver 16.5%, kidney 0%). The annual retail cost of these at-risk medications exceeds $30 million. This population-based study of transplant immunosuppressants vulnerable to claim denials under Medicare Part D coverage demonstrates a substantial gap between clinical practice, current FDA approval processes, and policy mandates for pharmaceutical coverage. This coverage barrier reduces access to life-saving medications for patients without alternative resources and may increase the risk of graft loss and death from medication nonadherence. © 2018 The American Society of Transplantation and the American Society of Transplant Surgeons.

  16. Knowing how to reach all parts of the South

    NARCIS (Netherlands)

    Clancy, Joy S.

    1999-01-01

    The economic contribution of small and medium scale industries (SMI) in the South is considerable. However, the pollution they generate is considered to be generally greater per unit of output than for the equivalent large-scale enterprises. The SMI sector in the South is a sector of two parts.

  17. Continuity and the persistence of objects: when the whole is greater than the sum of the parts.

    Science.gov (United States)

    Hall, D G

    1998-10-01

    In three experiments, a total of 480 participants heard a version of the story of the ship of Theseus (Hobbes, 1672/1913), in which a novel object, labeled with a possessive noun phrase, underwent a transformation in which its parts were replaced one at a time. Participants then had to decide which of two objects carried the same possessive noun phrase as the original: the one made entirely of new parts (that could be inferred to be continuous with the original) or one reassembled from the original parts (that could not be inferred to be continuous with the original). Participants often selected the object made of new parts, despite the radical transformation. However, the tendency to do so was significantly stronger (1) if the object was described as an animal than if it was described as an artifact, (2) if the animal's transformation lacked a human cause than if it possessed one, and (3) if the selection was made by adults or 7-year-olds than if it was made by 5-year-olds. The findings suggest that knowledge about specific kinds of objects and their canonical transformations exerts an increasingly powerful effect, over the course of development, upon people's tendency to rely on continuity as a criterion for attributing persistence to objects that undergo change. Copyright 1998 Academic Press.

  18. Low-frequency noise from large wind turbines.

    Science.gov (United States)

    Møller, Henrik; Pedersen, Christian Sejer

    2011-06-01

    As wind turbines get larger, worries have emerged that the turbine noise would move down in frequency and that the low-frequency noise would cause annoyance for the neighbors. The noise emission from 48 wind turbines with nominal electric power up to 3.6 MW is analyzed and discussed. The relative amount of low-frequency noise is higher for large turbines (2.3-3.6 MW) than for small turbines (≤ 2 MW), and the difference is statistically significant. The difference can also be expressed as a downward shift of the spectrum of approximately one-third of an octave. A further shift of similar size is suggested for future turbines in the 10-MW range. Due to the air absorption, the higher low-frequency content becomes even more pronounced, when sound pressure levels in relevant neighbor distances are considered. Even when A-weighted levels are considered, a substantial part of the noise is at low frequencies, and for several of the investigated large turbines, the one-third-octave band with the highest level is at or below 250 Hz. It is thus beyond any doubt that the low-frequency part of the spectrum plays an important role in the noise at the neighbors. © 2011 Acoustical Society of America
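    For reference, the A-weighting referred to above is the standard IEC 61672 curve (quoted from the standard, not from the paper); it attenuates low frequencies strongly, by about 19 dB at 100 Hz:

        $$ R_A(f) = \frac{12194^2 f^4}{(f^2+20.6^2)\,\sqrt{(f^2+107.7^2)(f^2+737.9^2)}\,(f^2+12194^2)}, \qquad A(f) = 20\log_{10} R_A(f) + 2.00\ \mathrm{dB} $$

    which is why low-frequency content that still dominates after A-weighting, as reported here, is substantial.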

  19. Integrating Infrastructure and Institutions for Water Security in Large Urban Areas

    Science.gov (United States)

    Padowski, J.; Jawitz, J. W.; Carrera, L.

    2015-12-01

    Urban growth has forced cities to procure more freshwater to meet demands; however the relationship between urban water security, water availability and water management is not well understood. This work quantifies the urban water security of 108 large cities in the United States (n=50) and Africa (n=58) based on their hydrologic, hydraulic and institutional settings. Using publicly available data, urban water availability was estimated as the volume of water available from local water resources and those captured via hydraulic infrastructure (e.g. reservoirs, wellfields, aqueducts) while urban water institutions were assessed according to their ability to deliver, supply and regulate water resources to cities. When assessing availability, cities relying on local water resources comprised a minority (37%) of those assessed. The majority of cities (55%) instead rely on captured water to meet urban demands, with African cities reaching farther and accessing a greater number and variety of sources for water supply than US cities. Cities using captured water generally had poorer access to local water resources and maintained significantly more complex strategies for water delivery, supply and regulatory management. Eight cities, all African, are identified in this work as having water insecurity issues. These cities lack sufficient infrastructure and institutional complexity to capture and deliver adequate amounts of water for urban use. Together, these findings highlight the important interconnection between infrastructure investments and management techniques for urban areas with a limited or dwindling natural abundance of water. Addressing water security challenges in the future will require that more attention be placed not only on increasing water availability, but on developing the institutional support to manage captured water supplies.

  20. Combining rotating-coil measurements of large-aperture accelerator magnets

    CERN Document Server

    AUTHOR|(CDS)2089510

    2016-10-05

    The rotating coil is a widely used tool for measuring the magnetic field and the field errors in accelerator magnets. The coil has a length that covers the entire magnetic field along the longitudinal dimension of the magnet and therefore gives a two-dimensional representation of the integrated field. While it has very good precision, the rotating coil lacks versatility. Its fixed dimensions make it impractical or inapplicable when the radial coil dimension is much smaller than the aperture or when the aperture is only partially covered by the coil. That is the case for rectangular apertures with a large aspect ratio, where a single rotating-coil measurement describes the field in only a small area of the magnet. The combination of several measurements at different positions is the topic of this work. Very important for such a combination is the error distribution on the measured field harmonics. To preserve the good precision of the higher-order harmonics, the combination must not rely on the main ...
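    The field harmonics in question are the coefficients of the standard two-dimensional multipole expansion of the integrated field (the usual accelerator-magnet convention, not notation taken from the thesis):

        $$ B_y(x,y) + i\,B_x(x,y) = \sum_{n=1}^{\infty} \left(B_n + i\,A_n\right) \left(\frac{x+iy}{R_{\mathrm{ref}}}\right)^{n-1} $$

    A single coil measurement determines the normal and skew coefficients B_n, A_n only within its own reference radius R_ref, which covers little of a wide, flat aperture; hence the combination of measurements at several transverse positions studied in this work.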

  1. Optical technologies for data communication in large parallel systems

    International Nuclear Information System (INIS)

    Ritter, M B; Vlasov, Y; Kash, J A; Benner, A

    2011-01-01

    Large, parallel systems have greatly aided scientific computation and data collection, but performance scaling now relies on chip and system-level parallelism. This has happened because power density limits have caused processor frequency growth to stagnate, driving the new multi-core architecture paradigm, which would seem to provide generations of performance increases as transistors scale. However, this paradigm will be constrained by electrical I/O bandwidth limits; first off the processor card, then off the processor module itself. We will present best-estimates of these limits, then show how optical technologies can help provide more bandwidth to allow continued system scaling. We will describe the current status of optical transceiver technology which is already being used to exceed off-board electrical bandwidth limits, then present work on silicon nanophotonic transceivers and 3D integration technologies which, taken together, promise to allow further increases in off-module and off-card bandwidth. Finally, we will show estimated limits of nanophotonic links and discuss breakthroughs that are needed for further progress, and will speculate on whether we will reach Exascale-class machine performance at affordable powers.

  2. Optical technologies for data communication in large parallel systems

    Energy Technology Data Exchange (ETDEWEB)

    Ritter, M B; Vlasov, Y; Kash, J A [IBM T.J. Watson Research Center, Yorktown Heights, NY (United States); Benner, A, E-mail: mritter@us.ibm.com [IBM Poughkeepsie, Poughkeepsie, NY (United States)

    2011-01-15

    Large, parallel systems have greatly aided scientific computation and data collection, but performance scaling now relies on chip and system-level parallelism. This has happened because power density limits have caused processor frequency growth to stagnate, driving the new multi-core architecture paradigm, which would seem to provide generations of performance increases as transistors scale. However, this paradigm will be constrained by electrical I/O bandwidth limits; first off the processor card, then off the processor module itself. We will present best-estimates of these limits, then show how optical technologies can help provide more bandwidth to allow continued system scaling. We will describe the current status of optical transceiver technology which is already being used to exceed off-board electrical bandwidth limits, then present work on silicon nanophotonic transceivers and 3D integration technologies which, taken together, promise to allow further increases in off-module and off-card bandwidth. Finally, we will show estimated limits of nanophotonic links and discuss breakthroughs that are needed for further progress, and will speculate on whether we will reach Exascale-class machine performance at affordable powers.

  3. Large-scale brain networks in affective and social neuroscience: Towards an integrative functional architecture of the brain

    Science.gov (United States)

    Barrett, Lisa Feldman; Satpute, Ajay

    2013-01-01

    Understanding how a human brain creates a human mind ultimately depends on mapping psychological categories and concepts to physical measurements of neural response. Although it has long been assumed that emotional, social, and cognitive phenomena are realized in the operations of separate brain regions or brain networks, we demonstrate that it is possible to understand the body of neuroimaging evidence using a framework that relies on domain general, distributed structure-function mappings. We review current research in affective and social neuroscience and argue that the emerging science of large-scale intrinsic brain networks provides a coherent framework for a domain-general functional architecture of the human brain. PMID:23352202

  4. Research into condensed matter using large-scale apparatus. Physics, chemistry, biology. Progress report 1992-1995. Summarizing reports

    International Nuclear Information System (INIS)

    1996-01-01

    Activities for research into condensed matter were supported by the German BMBF with approximately 102 million Deutschmarks in the years 1992 through 1995. These funds were distributed among 314 research projects in the fields of physics, chemistry, biology, materials science, and other fields, all of which rely on the intensive utilization of photon and particle beams generated in large-scale facilities of institutions for basic research. The present volume first gives general information and statistical data on the distribution of funds for a number of priority research projects. The project reports are summarizing reports on the progress achieved in the various projects. (CB) [de]

  5. Large animal models for vaccine development and testing.

    Science.gov (United States)

    Gerdts, Volker; Wilson, Heather L; Meurens, Francois; van Drunen Littel-van den Hurk, Sylvia; Wilson, Don; Walker, Stewart; Wheler, Colette; Townsend, Hugh; Potter, Andrew A

    2015-01-01

    The development of human vaccines continues to rely on the use of animals for research. Regulatory authorities require novel vaccine candidates to undergo preclinical assessment in animal models before being permitted to enter the clinical phase in human subjects. Substantial progress has been made in recent years in reducing and replacing the number of animals used for preclinical vaccine research through the use of bioinformatics and computational biology to design new vaccine candidates. However, the ultimate goal of a new vaccine is to instruct the immune system to elicit an effective immune response against the pathogen of interest, and no alternatives to live animal use currently exist for evaluation of this response. Studies identifying the mechanisms of immune protection; determining the optimal route and formulation of vaccines; establishing the duration and onset of immunity, as well as the safety and efficacy of new vaccines, must be performed in a living system. Importantly, no single animal model provides all the information required for advancing a new vaccine through the preclinical stage, and research over the last two decades has highlighted that large animals more accurately predict vaccine outcome in humans than do other models. Here we review the advantages and disadvantages of large animal models for human vaccine development and demonstrate that much of the success in bringing a new vaccine to market depends on choosing the most appropriate animal model for preclinical testing. © The Author 2015. Published by Oxford University Press on behalf of the Institute for Laboratory Animal Research. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  6. Underground large scale test facility for rocks

    International Nuclear Information System (INIS)

    Sundaram, P.N.

    1981-01-01

    This brief note discusses two advantages of locating the facility for testing rock specimens of large dimensions in an underground space. Such an environment can be made to contribute part of the enormous axial load and stiffness requirements needed to get complete stress-strain behavior. The high pressure vessel may also be located below the floor level since the lateral confinement afforded by the rock mass may help to reduce the thickness of the vessel

  7. The three-large-primes variant of the number field sieve

    NARCIS (Netherlands)

    S.H. Cavallar

    2002-01-01

    The Number Field Sieve (NFS) is the asymptotically fastest known factoring algorithm for large integers. This method was proposed by John Pollard in 1988. Since then several variants have been implemented with the objective of improving the siever, which is the most time-consuming part of
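    The "large prime" idea behind such variants can be illustrated directly: a sieve report is kept if, after removing all primes of the factor base, the remaining cofactor splits into at most three primes below a large-prime bound. The bounds below are toy values, and trial factorization via sympy stands in for real sieving, which works very differently:

        # Toy illustration of "smooth up to k large primes"; bounds are invented.
        from sympy import factorint

        def smooth_with_large_primes(n, fb_bound=100, lp_bound=10_000, k=3):
            factors = factorint(n)                          # prime -> exponent
            large = {p: e for p, e in factors.items() if p > fb_bound}
            return (sum(large.values()) <= k                # at most k large primes
                    and all(p < lp_bound for p in large))   # each below the bound

        print(smooth_with_large_primes(2**5 * 3 * 101 * 499 * 9973))  # True: 3 large primes
        print(smooth_with_large_primes(2 * 104729))                   # False: 104729 too large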

  8. Workshop Report on Additive Manufacturing for Large-Scale Metal Components - Development and Deployment of Metal Big-Area-Additive-Manufacturing (Large-Scale Metals AM) System

    Energy Technology Data Exchange (ETDEWEB)

    Babu, Sudarsanam Suresh [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Manufacturing Demonstration Facility; Love, Lonnie J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Manufacturing Demonstration Facility; Peter, William H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Manufacturing Demonstration Facility; Dehoff, Ryan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Manufacturing Demonstration Facility

    2016-05-01

    Additive manufacturing (AM) is considered an emerging technology that is expected to transform the way industry can make low-volume, high value complex structures. This disruptive technology promises to replace legacy manufacturing methods for the fabrication of existing components in addition to bringing new innovation for new components with increased functional and mechanical properties. This report outlines the outcome of a workshop on large-scale metal additive manufacturing held at Oak Ridge National Laboratory (ORNL) on March 11, 2016. The charter for the workshop was outlined by the Department of Energy (DOE) Advanced Manufacturing Office program manager. The status and impact of the Big Area Additive Manufacturing (BAAM) for polymer matrix composites was presented as the background motivation for the workshop. Following, the extension of underlying technology to low-cost metals was proposed with the following goals: (i) High deposition rates (approaching 100 lbs/h); (ii) Low cost (<$10/lbs) for steel, iron, aluminum, nickel, as well as, higher cost titanium, (iii) large components (major axis greater than 6 ft) and (iv) compliance of property requirements. The above concept was discussed in depth by representatives from different industrial sectors including welding, metal fabrication machinery, energy, construction, aerospace and heavy manufacturing. In addition, DOE’s newly launched High Performance Computing for Manufacturing (HPC4MFG) program was reviewed. This program will apply thermo-mechanical models to elucidate deeper understanding of the interactions between design, process, and materials during additive manufacturing. Following these presentations, all the attendees took part in a brainstorming session where everyone identified the top 10 challenges in large-scale metal AM from their own perspective. The feedback was analyzed and grouped in different categories including, (i) CAD to PART software, (ii) selection of energy source, (iii

  9. How to explain mistakes

    DEFF Research Database (Denmark)

    Hallerstede, Stefan; Leuschel, Michael

    2009-01-01

    to improve understanding of formal models. We demonstrate how animation can help find an explanation for a failing proof. We also demonstrate where animation or model-checking may not help and where proving may not help. For the most part, use of another tool pays off. Proof obligations present intentionally......Usually we teach formal methods relying for a large part on one kind of reasoning technique about a formal model. For instance, we either use formal proof or we use model-checking. It would appear that it is already hard enough to learn one technique, and having to cope with two puts just another...... burden on the students. This is not our experience. Especially model-checking is easily used to complement formal proof. It only relies on an intuitive operational understanding of a formal model. In this article we show how model-checking, animation, and formal proof can be used together...

  10. Energetic solutions of Rock Sandpipers to harsh winter conditions rely on prey quality

    Science.gov (United States)

    Ruthrauff, Daniel R.; Dekinga, Anne; Gill, Robert E.; Piersma, Theunis

    2018-01-01

    Rock Sandpipers Calidris ptilocnemis have the most northerly non-breeding distribution of any shorebird in the Pacific Basin (upper Cook Inlet, Alaska; 61°N, 151°W). In terms of freezing temperatures, persistent winds and pervasive ice, this site is the harshest used by shorebirds during winter. We integrated physiological, metabolic, behavioural and environmental aspects of the non-breeding ecology of Rock Sandpipers at the northern extent of their range to determine the relative importance of these factors in facilitating their unique non-breeding ecology. Not surprisingly, estimated daily energetic demands were greatest during January, the coldest period of winter. These estimates were greatest for foraging birds, and exceeded basal metabolic rates by a factor of 6.5, a scope of increase that approaches the maximum sustained rate of energetic output by shorebirds during periods of migration, but far exceeds these periods in duration. We assessed the quality of their primary prey, the bivalve Macoma balthica, to determine the daily foraging duration required by Rock Sandpipers to satisfy such energetic demands. Based on size-specific estimates of M. balthica quality, Rock Sandpipers require over 13 h/day of foraging time in upper Cook Inlet in January, even when feeding on the highest quality prey. This range approaches the average daily duration of mudflat availability in this region (c. 18 h), a maximum value that annually decreases due to the accumulation of shore-fast ice. Rock Sandpipers are likely to maximize access to foraging sites by following the exposure of ice-free mudflats across the upper Cook Inlet region and by selecting smaller, higher quality M. balthica to minimize foraging times. Ultimately, this unusual non-breeding ecology relies on the high quality of their prey resources. Compared with other sites across their range, M. balthica from upper Cook Inlet have relatively light shells, potentially the result of the region

  11. Joint optimization of redundancy level and spare part inventories

    International Nuclear Information System (INIS)

    Sleptchenko, Andrei; Heijden, Matthieu van der

    2016-01-01

    We consider a “k-out-of-N” system with different standby modes. Each of the N components consists of multiple part types. Upon failure, a component can be repaired within a certain time by replacing the failed part with a spare, if one is available. We develop both an exact and a fast approximate analysis to compute the system availability. Next, we jointly optimize the component redundancy level with the inventories of the various spare parts. We find that our approximations are very accurate and suitable for large systems. We apply our model to a case study at a public organization in Qatar, and find that we can improve the availability-to-cost ratio by reducing the redundancy level and increasing the spare-part inventories. In general, high redundancy levels appear to be useful only when components are relatively cheap and part replacement times are long. - Highlights: • We analyze a redundant system (k-out-of-N) with multiple parts and spares. • We jointly optimize the redundancy level and the spare part inventories. • We develop an exact method and an approximation to evaluate the system availability. • Adding spare parts and reducing the redundancy level cuts cost by 50% in a case study. • The availability is not very sensitive to the shape of the failure time distribution.
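
    As a point of reference for this model class, the availability of a k-out-of-N system of independent, identical components with steady-state availability a is a binomial tail probability. The sketch below shows only this baseline; the paper's exact analysis additionally couples component uptime to spare-part stocks and repair times.

    ```python
    # Baseline availability of a k-out-of-N system of independent components with
    # steady-state availability `a` (the paper's model additionally couples
    # component uptime to spare-part stock levels and repair times).
    from math import comb

    def k_out_of_n_availability(k: int, n: int, a: float) -> float:
        """P(at least k of n independent components are up)."""
        return sum(comb(n, j) * a**j * (1 - a)**(n - j) for j in range(k, n + 1))

    for a in (0.90, 0.95, 0.99):
        print(f"a={a:.2f}  A(2-out-of-3)={k_out_of_n_availability(2, 3, a):.4f}")
    ```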

  12. Business excellence as a success factor for the performance of large Croatian enterprises

    Directory of Open Access Journals (Sweden)

    Ivica Zdrilić

    2016-06-01

    Full Text Available Croatian companies need a new approach, based on business excellence, that will provide them with sufficient competitive strength. Focusing only on financial indicators and measures is insufficient. New concepts should therefore be introduced, especially by large companies that are traditionally inert, exposed to global competition, and situated in countries still undergoing transition, such as Croatia. Today 75% of the sources of value within a company can no longer be measured by means of standard accounting techniques, and in the 21st century it is impossible to rely exclusively on measuring financial parameters. According to the authors, in addition to financial measurement, a way should be found to measure non-financial parameters within a company. The paper is therefore aimed at exploring the influence of business excellence and its values on business in Croatian business practice. The authors carried out research on 106 large Croatian enterprises with more than 250 employees, exploring the connection between the values of business excellence and company performance. The results show a positive correlation between applying the principles of business excellence and successful company performance in practice.

  13. Measuring patient-centered medical home access and continuity in clinics with part-time clinicians.

    Science.gov (United States)

    Rosland, Ann-Marie; Krein, Sarah L; Kim, Hyunglin Myra; Greenstone, Clinton L; Tremblay, Adam; Ratz, David; Saffar, Darcy; Kerr, Eve A

    2015-05-01

    Common patient-centered medical home (PCMH) performance measures value access to a single primary care provider (PCP), which may have unintended consequences for clinics that rely on part-time PCPs and team-based care. Retrospective analysis of 110,454 primary care visits from 2 Veterans Health Administration clinics from 2010 to 2012. Multi-level models examined associations between PCP availability in clinic and performance on access and continuity measures. Patient experiences with access and continuity were compared using 2012 patient survey data (N = 2881). Patients of PCPs with fewer half-day clinic sessions per week were significantly less likely to get a requested same-day appointment with their usual PCP (predicted probability 17% for PCPs with 2 sessions/week, 20% for 5 sessions/week, and 26% for 10 sessions/week). Among requests that did not result in a same-day appointment with the usual PCP, there were no significant differences in same-day access to a different PCP, or access within 2 to 7 days with patients' usual PCP. Overall, patients had >92% continuity with their usual PCP at the hospital-based site regardless of PCP sessions/week. Patients of full-time PCPs reported timely appointments for urgent needs more often than patients of part-time PCPs (82% vs 71%). Part-time PCP performance appeared worse when using measures focused on same-day access to patients' usual PCP. However, clinic-level same-day access, same-week access to the usual PCP, and overall continuity were similar for patients of part-time and full-time PCPs. Measures of in-person access to a usual PCP do not capture alternate access approaches encouraged by the PCMH model, and often used by part-time providers, such as team-based or non-face-to-face care.

  14. Combining electronic structure and many-body theory with large databases: A method for predicting the nature of 4f states in Ce compounds

    Science.gov (United States)

    Herper, H. C.; Ahmed, T.; Wills, J. M.; Di Marco, I.; Björkman, T.; Iuşan, D.; Balatsky, A. V.; Eriksson, O.

    2017-08-01

    Recent progress in materials informatics has opened up the possibility of a new approach to accessing properties of materials, in which one assays the aggregate properties of a large set of materials within the same class in addition to a detailed investigation of each compound in that class. Here we present a large-scale investigation of electronic properties and correlated magnetism in Ce-based compounds, systematically studying the electronic structure and 4f-hybridization function of a large body of Ce compounds with the goal of elucidating the nature of the 4f states and their interrelation with the measured Kondo energy in these compounds. The hybridization function has been analyzed for more than 350 data sets (part of the IMS database) of cubic Ce compounds using electronic structure theory that relies on a full-potential approach. We demonstrate that the strength of the hybridization function, evaluated in this way, allows us to draw precise conclusions about the degree of localization of the 4f states in these compounds. The theoretical results are entirely consistent with all experimental information relevant to the degree of 4f localization for all investigated materials. Furthermore, a more detailed analysis of the electronic structure and the hybridization function allows us to make precise statements about Kondo correlations in these systems. The calculated hybridization functions, together with the corresponding densities of states, reproduce the expected exponential behavior of the observed Kondo temperatures and reveal a consistent trend in real materials. This trend allows us to predict which systems may be correctly identified as Kondo systems. A strong anticorrelation between the size of the hybridization function and the volume of the systems has been observed. The information entropy for this set of systems is
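
    As a hedged illustration of the exponential behavior mentioned above, the textbook Kondo-model estimate T_K ≈ D·exp(−1/(2ρJ)) shows how sensitively a Kondo temperature depends on the coupling; all symbols and values below are placeholders, not parameters from the paper's database.

    ```python
    # Textbook Kondo-model estimate T_K ~ D * exp(-1/(2*rho*J)); D, rho and J are
    # placeholder values, not parameters from the paper's database.
    import numpy as np

    D = 1.0      # half-bandwidth (units of energy)
    rho = 1.0    # density of states at the Fermi level (states per unit energy)
    for J in (0.05, 0.10, 0.15, 0.20):   # stronger hybridization -> larger J
        T_K = D * np.exp(-1.0 / (2.0 * rho * J))
        print(f"J = {J:.2f}  ->  T_K ~ {T_K:.2e} (units of D)")
    ```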

  15. Analysis of the Dynamic Response in the Railway Vehicles to the Track Vertical Irregularities. Part I: The Theoretical Model and the Vehicle Response Functions

    Directory of Open Access Journals (Sweden)

    M. Dumitriu

    2015-11-01

    Full Text Available The paper focuses on the dynamic response of a two-bogie vehicle to excitations derived from track vertical irregularities. The symmetrical and antisymmetrical modes due to the bounce and pitch motions of the axle planes in the two bogies are considered. The analysis of the vehicle's dynamic response relies on the response functions at three reference points of the carbody, composed from the response functions to the symmetrical and antisymmetrical excitation modes. Similarly, the dynamic response of the vehicle to stochastic track irregularities is examined and expressed as the power spectral density of the carbody vertical acceleration and the root mean square of the acceleration, and the partial comfort index for vertical vibrations is calculated. The paper is structured in two parts: Part I includes the theoretical elements required for the analysis of the vehicle's dynamic response, while Part II presents the results of the numerical analysis.
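
    For orientation, the standard relation behind the quoted quantities is a_rms = sqrt(∫ G_a(f) df). The sketch below applies it to a made-up acceleration PSD, not to the vehicle model of the paper.

    ```python
    # Standard PSD -> r.m.s. relation used in ride-comfort work:
    # a_rms = sqrt( integral of G_a(f) df ).  The PSD below is a made-up
    # placeholder, not the vehicle model from the paper.
    import numpy as np

    f = np.linspace(0.1, 20.0, 2000)       # frequency axis [Hz]
    G_a = 0.05 / (1.0 + (f / 1.2)**4)      # placeholder acceleration PSD [(m/s^2)^2/Hz]
    a_rms = np.sqrt(np.trapz(G_a, f))      # root mean square acceleration [m/s^2]
    print(f"a_rms = {a_rms:.4f} m/s^2")
    ```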

  16. Reliability, availability and maintenance aspects of large-scale offshore wind farms, a concepts study

    NARCIS (Netherlands)

    Van Bussel, G.J.W.; Zaayer, M.B.

    2001-01-01

    The DOWEC project aims at the implementation of large wind turbines in large-scale wind farms. As part of the DOWEC project, a concepts study was performed regarding the achievable reliability and availability levels. A reduction by a factor of 2 with regard to the present state of the art seems fairly

  17. Information and processes underlying semantic and episodic memory across tasks, items, and individuals.

    Science.gov (United States)

    Cox, Gregory E; Hemmer, Pernille; Aue, William R; Criss, Amy H

    2018-04-01

    The development of memory theory has been constrained by a focus on isolated tasks rather than the processes and information that are common to situations in which memory is engaged. We present results from a study in which 453 participants took part in five different memory tasks: single-item recognition, associative recognition, cued recall, free recall, and lexical decision. Using hierarchical Bayesian techniques, we jointly analyzed the correlations between tasks within individuals, reflecting the degree to which tasks rely on shared cognitive processes, and within items, reflecting the degree to which tasks rely on the same information conveyed by the item. Among other things, we find that (a) the processes involved in lexical access and episodic memory are largely separate and rely on different kinds of information, (b) access to lexical memory is driven primarily by perceptual aspects of a word, (c) all episodic memory tasks rely to an extent on a set of shared processes which make use of semantic features to encode both single words and associations between words, and (d) recall involves additional processes likely related to contextual cuing and response production. These results provide a large-scale picture of memory across different tasks which can serve to drive the development of comprehensive theories of memory. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  18. Hybrid Laser Welding of Large Steel Structures

    DEFF Research Database (Denmark)

    Farrokhi, Farhang

    Manufacturing of large steel structures requires the processing of thick-section steels. Welding is one of the main processes in the manufacturing of such structures and accounts for a significant part of the production costs. One of the ways to reduce the production costs is to use hybrid...... laser welding technology instead of conventional arc welding methods. However, hybrid laser welding is a complicated process that involves several complex physical phenomena that are highly coupled. Understanding of the process is very important for obtaining quality welds in an efficient way....... This thesis investigates two different challenges related to the hybrid laser welding of thick-section steel plates. Employing empirical and analytical approaches, this thesis attempts to provide further knowledge towards obtaining quality welds in the manufacturing of large steel structures.

  19. Automated assembly of micro mechanical parts in a Microfactory setup

    DEFF Research Database (Denmark)

    Eriksson, Torbjörn Gerhard; Hansen, Hans Nørgaard; Gegeckaite, Asta

    2006-01-01

    Many micro products in use today are manufactured using semi-automatic assembly. Handling, assembly and transport of the parts are especially labour-intensive processes. Automation of these processes holds a large potential, especially if flexible, modular microfactories can be developed. This paper...... focuses on the issues that have to be taken into consideration in order to move from semi-automatic production to an automated microfactory. The application in this study is a switch consisting of 7 parts. The development of a microfactory setup to handle the automated assembly of the switch...

  20. A conceptual analysis of standard setting in large-scale assessments

    NARCIS (Netherlands)

    van der Linden, Willem J.

    1994-01-01

    Elements of arbitrariness in the standard setting process are explored, and an alternative to the use of cut scores is presented. The first part of the paper analyzes the use of cut scores in large-scale assessments, discussing three different functions: (1) cut scores define the qualifications used

  1. Dephasing due to Nuclear Spins in Large-Amplitude Electric Dipole Spin Resonance.

    Science.gov (United States)

    Chesi, Stefano; Yang, Li-Ping; Loss, Daniel

    2016-02-12

    We analyze effects of the hyperfine interaction on electric dipole spin resonance when the amplitude of the quantum-dot motion becomes comparable to or larger than the quantum dot's size. Away from the well-known small-drive regime, the important role played by transverse nuclear fluctuations leads to a Gaussian decay with a characteristic dependence on drive strength and detuning. A characterization of spin-flip gate fidelity, in the presence of such additional drive-dependent dephasing, shows that vanishingly small errors can still be achieved at sufficiently large amplitudes. Based on our theory, we analyze recent electric dipole spin resonance experiments relying on spin-orbit interactions or the slanting field of a micromagnet. We find that such experiments are already in a regime with significant effects of transverse nuclear fluctuations, and the form of decay of the Rabi oscillations can be reproduced well by our theory.
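
    For illustration only, Rabi oscillations damped by a Gaussian envelope of the kind described here can be sketched as follows; the drive and dephasing parameters are placeholders, not values from the paper.

    ```python
    # Illustration only: Rabi oscillations under a Gaussian decay envelope.
    # Omega (drive-dependent Rabi frequency) and sigma (effective dephasing rate
    # from transverse nuclear fluctuations) are placeholder values.
    import numpy as np

    t = np.linspace(0.0, 2e-6, 1000)      # time [s]
    Omega = 2 * np.pi * 5e6               # Rabi frequency [rad/s]
    sigma = 1.5e6                         # dephasing rate [1/s]
    p_flip = 0.5 * (1.0 - np.cos(Omega * t) * np.exp(-0.5 * (sigma * t)**2))
    print(p_flip.max())                   # oscillations decay toward 0.5
    ```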

  2. Displacement and deformation measurement for large structures by camera network

    Science.gov (United States)

    Shang, Yang; Yu, Qifeng; Yang, Zhen; Xu, Zhiqiang; Zhang, Xiaohu

    2014-03-01

    A displacement and deformation measurement method for large structures by a series-parallel connection camera network is presented. By taking the dynamic monitoring of a large-scale crane in lifting operation as an example, a series-parallel connection camera network is designed, and the displacement and deformation measurement method by using this series-parallel connection camera network is studied. The movement range of the crane body is small, and that of the crane arm is large. The displacement of the crane body, the displacement of the crane arm relative to the body and the deformation of the arm are measured. Compared with a pure series or parallel connection camera network, the designed series-parallel connection camera network can be used to measure not only the movement and displacement of a large structure but also the relative movement and deformation of some interesting parts of the large structure by a relatively simple optical measurement system.

  3. Multi-megajoule Nd: glass fusion laser design

    International Nuclear Information System (INIS)

    Manes, K.R.

    1986-01-01

    New technologies make multi-megajoule glass lasers economically feasible. Laser architectures using harmonic switchout, target-plane holographic injection, phase conjugation, continuous apodization and higher amplifier efficiencies have been devised. A plan for a multi-megajoule laser which can be built at an acceptable cost relies on manufacturing economies of scale and the demonstration of the new technologies presented here. These include continuous-pour glass production, rapid harmonic crystal growth, switching of large blocks of power using larger capacitors packed more economically, and the use of large counts of identical parts

  4. Mare Orientale: Widely Accepted Large Impact or a Regular Tectonic Depression?

    Science.gov (United States)

    Kochemasov, G. G.

    2018-04-01

    Mare Orientale is one of the critical features on the Moon's surface for explaining its tectonics. Its impact origin is widely accepted, but an attentive examination shows that this large mare is part of an endogenous tectonic structure, not a random impact feature.

  5. Job sharing. Part 1.

    Science.gov (United States)

    Anderson, K; Forbes, R

    1989-01-01

    This article is the first of a three-part series discussing the impact of nurses' job sharing at University Hospital, London, Ontario. This first article explores the advantages and disadvantages of job sharing for staff nurses and their supervising nurse manager, as discussed in the literature. The results of a survey conducted on a unit with a large number of job-sharing positions concur with the literature findings. The second article will present the evaluation of a pilot project in which two nurses job share a first-line managerial position in the Operating Room. The third article will relate the effects of job sharing on women's perceived general well-being. Job sharing in all areas is regarded as a positive experience by both nurses and administrators.

  6. TrackingNet: A Large-Scale Dataset and Benchmark for Object Tracking in the Wild

    KAUST Repository

    Müller, Matthias; Bibi, Adel Aamer; Giancola, Silvio; Al-Subaihi, Salman; Ghanem, Bernard

    2018-01-01

    Despite the numerous developments in object tracking, further development of current tracking algorithms is limited by small and mostly saturated datasets. As a matter of fact, data-hungry trackers based on deep-learning currently rely on object detection datasets due to the scarcity of dedicated large-scale tracking datasets. In this work, we present TrackingNet, the first large-scale dataset and benchmark for object tracking in the wild. We provide more than 30K videos with more than 14 million dense bounding box annotations. Our dataset covers a wide selection of object classes in broad and diverse context. By releasing such a large-scale dataset, we expect deep trackers to further improve and generalize. In addition, we introduce a new benchmark composed of 500 novel videos, modeled with a distribution similar to our training dataset. By sequestering the annotation of the test set and providing an online evaluation server, we provide a fair benchmark for future development of object trackers. Deep trackers fine-tuned on a fraction of our dataset improve their performance by up to 1.6% on OTB100 and up to 1.7% on TrackingNet Test. We provide an extensive benchmark on TrackingNet by evaluating more than 20 trackers. Our results suggest that object tracking in the wild is far from being solved.
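
    Tracking benchmarks of this kind typically score a tracker by the intersection-over-union (IoU) between predicted and ground-truth boxes. The sketch below shows the generic metric and a success rate at a fixed threshold; it is not the official TrackingNet evaluation toolkit.

    ```python
    # Generic tracking-evaluation sketch (not the official TrackingNet toolkit):
    # IoU between axis-aligned boxes (x, y, w, h) and the fraction of frames
    # whose IoU clears a threshold.
    def iou(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
        iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
        inter = ix * iy
        union = aw * ah + bw * bh - inter
        return inter / union if union > 0 else 0.0

    def success_rate(pred_boxes, gt_boxes, thr=0.5):
        scores = [iou(p, g) for p, g in zip(pred_boxes, gt_boxes)]
        return sum(s >= thr for s in scores) / len(scores)

    print(iou((0, 0, 10, 10), (5, 5, 10, 10)))  # 25/175 ~ 0.143
    ```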

  8. Monitoring of large steam turbines, as seen by the constructor and the operator

    International Nuclear Information System (INIS)

    Blanchet, J.M.; Bourcier, P.B.; Malherbe, C.

    1986-01-01

    The electricity in France is produced by large steam turbines in the range of 125 000 kW to 1 300 000 kW in nuclear power plants. Some operation problems are encountered on these large machines. The aim of this study is to justify and to describe the monitoring process implemented on the large steam turbines. This short study is divided into three parts: the monitoring justification during the start-up period, one example of a monitoring system, the turbine monitoring during the operation period [fr

  9. The large quark mass expansion of Γ(Z^0 → hadrons) and Γ(τ^- → ν_τ + hadrons) in the order α_s^3

    Energy Technology Data Exchange (ETDEWEB)

    Larin, S.A.; Ritbergen, T. van; Vermaseren, J.A.M.

    1994-09-01

    We present the analytical α_s^3 correction to the Z^0 decay rate into hadrons. We calculate this correction up to (and including) terms of the order (m_Z^2/m_top^2)^3 in the large top quark mass expansion. We rely on the technique of the large mass expansion of individual Feynman diagrams and treat its application in detail. We convert the obtained results of six-flavour QCD to the results in the effective theory with five active flavours, checking the decoupling relation of the QCD coupling constant. We also derive the large charm quark mass expansion of the semihadronic τ lepton decay rate in the α_s^3 approximation. (orig.)

  10. Study of a large scale neutron measurement channel

    International Nuclear Information System (INIS)

    Amarouayache, Anissa; Ben Hadid, Hayet.

    1982-12-01

    A large-scale measurement channel allows the processing of the signal coming from a single neutronic sensor during three different running modes: pulse, fluctuation and current. The study described in this note includes three parts: - A theoretical study of the large-scale channel and a brief description of it are given, and the results obtained so far in this domain are presented. - The fluctuation mode is studied thoroughly and the improvements to be made are defined. The study of a linear fluctuation channel with automatic scale commutation is described and the test results are given. In this large-scale channel, the data processing method is analog. - To become independent of the problems generated by the use of analog processing of the fluctuation signal, a digital data processing method is tested. The validity of that method is demonstrated. The results obtained on a test system realized according to this method are given and a preliminary plan for further research is defined [fr

  11. Large-scale fracture mechanics testing -- requirements and possibilities

    International Nuclear Information System (INIS)

    Brumovsky, M.

    1993-01-01

    Application of fracture mechanics to very important and/or complicated structures, like reactor pressure vessels, also raises questions about the reliability and precision of such calculations. These problems become more pronounced in cases of elastic-plastic loading conditions and/or in parts with non-homogeneous materials (base metal and austenitic cladding, property gradients through the material thickness) or with non-homogeneous stress fields (nozzles, bolt threads, residual stresses, etc.). For such special cases some verification by large-scale testing is necessary and valuable. This paper discusses problems connected with the planning of such experiments with respect to their limitations and the requirements for a good transfer of the results to an actual vessel. At the same time, an analysis of the possibilities of small-scale model experiments is presented, mostly in connection with the transfer of results between standard, small-scale and large-scale experiments. Experience from 30 years of large-scale testing at SKODA is used as an example to support this analysis. 1 fig

  12. Characterising large scenario earthquakes and their influence on NDSHA maps

    Science.gov (United States)

    Magrin, Andrea; Peresan, Antonella; Panza, Giuliano F.

    2016-04-01

    The neo-deterministic approach to seismic zoning, NDSHA, relies on physically sound modelling of ground shaking from a large set of credible scenario earthquakes, which can be defined based on seismic history and seismotectonics, as well as by incorporating information from a wide set of geological and geophysical data (e.g. morphostructural features and present-day deformation processes identified by Earth observations). NDSHA is based on the calculation of complete synthetic seismograms; hence it does not make use of empirical attenuation models (i.e. ground motion prediction equations). From the set of synthetic seismograms, maps of seismic hazard that describe the maximum of different ground shaking parameters at the bedrock can be produced. As a rule, NDSHA defines the hazard as the envelope of ground shaking at the site, computed from all of the defined seismic sources; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated with each site. In this way, the standard NDSHA maps permit accounting for the largest observed or credible earthquake sources identified in the region in a quite straightforward manner. This study aims to assess the influence of unavoidable uncertainties in the characterisation of large scenario earthquakes on the NDSHA estimates. The treatment of uncertainties is performed by sensitivity analyses for key modelling parameters and accounts for the uncertainty in the prediction of fault radiation and in the use of Green's functions for a given medium. Results from sensitivity analyses with respect to the definition of possible seismic sources are discussed. A key parameter is the magnitude of the seismic sources used in the simulation, which is based on information from the earthquake catalogue, seismogenic zones and seismogenic nodes. The largest part of the existing Italian catalogues is based on macroseismic intensities; a rough estimate of the error in peak values of ground motion can
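
    The envelope step described above reduces, per site, to a maximum over scenario simulations. A minimal sketch with random placeholder data:

    ```python
    # Minimal sketch of the envelope step: for each site, keep the maximum of a
    # ground-motion parameter over all simulated scenario earthquakes.  The
    # array below is random placeholder data, not NDSHA output.
    import numpy as np

    rng = np.random.default_rng(0)
    pga = rng.lognormal(mean=-2.0, sigma=0.8, size=(50, 200))  # (scenarios, sites), [g]
    hazard_map = pga.max(axis=0)   # envelope over scenarios, one value per site
    print(hazard_map.shape, hazard_map[:3])
    ```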

  13. A regularized vortex-particle mesh method for large eddy simulation

    Science.gov (United States)

    Spietz, H. J.; Walther, J. H.; Hejlesen, M. M.

    2017-11-01

    We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The method relies on a parallel higher-order FFT-based solver for the Poisson equation. Arbitrarily high order is achieved through regularization of singular Green's function solutions to the Poisson equation, and recently we have derived novel high-order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier-Stokes equations; hence we use the method for Large Eddy Simulation by including a dynamic subfilter-scale model based on test filters compatible with the aforementioned regularization functions. Furthermore, the subfilter-scale model uses Lagrangian averaging, which is a natural candidate in light of the Lagrangian nature of vortex particle methods. A multiresolution variant of the method is applied to simulate the benchmark problem of the flow past a square cylinder at Re = 22000, and the obtained results are compared to results from the literature.
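
    The core of an FFT-based Poisson solver on a periodic domain can be sketched in a few lines. The version below is plain spectral division, not the higher-order regularized solver of the paper, which also handles mixed open/periodic boundaries.

    ```python
    # Core spectral idea of an FFT Poisson solver on a periodic box, solving
    # laplacian(psi) = -omega on [0, 2*pi)^2.  The paper's solver is higher-order
    # and regularized, and handles mixed open/periodic domains.
    import numpy as np

    n = 128
    x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")
    omega = np.sin(X) * np.cos(2 * Y)                     # vorticity field

    k = 2 * np.pi * np.fft.fftfreq(n, d=2 * np.pi / n)    # integer wavenumbers
    KX, KY = np.meshgrid(k, k, indexing="ij")
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                                        # avoid 0/0 for the mean mode

    psi_hat = np.fft.fft2(omega) / k2                     # psi_hat = omega_hat / |k|^2
    psi_hat[0, 0] = 0.0
    psi = np.real(np.fft.ifft2(psi_hat))

    # Check: -laplacian(psi) reproduces omega to machine precision.
    lap = np.real(np.fft.ifft2(-(KX**2 + KY**2) * np.fft.fft2(psi)))
    print(np.max(np.abs(-lap - omega)))
    ```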

  14. GreekLex 2: A comprehensive lexical database with part-of-speech, syllabic, phonological, and stress information.

    Science.gov (United States)

    Kyparissiadis, Antonios; van Heuven, Walter J B; Pitchford, Nicola J; Ledgeway, Timothy

    2017-01-01

    Databases containing lexical properties of any given orthography are crucial for psycholinguistic research. In the last ten years, a number of lexical databases have been developed for Greek. However, these lack important part-of-speech information, and alternative procedures for calculating syllabic measurements and stress information, as well as a combination of several metrics for investigating linguistic properties of the Greek language, are needed. To address these issues, we present a new extensive lexical database of Modern Greek (GreekLex 2) with part-of-speech information for each word, accurate syllabification and orthographic information predictive of stress, as well as several measurements of word similarity and phonetic information. The addition of detailed statistical information about Greek part-of-speech, syllabification, and stress neighbourhood allowed novel analyses of stress distribution within different grammatical categories and syllabic lengths to be carried out. Results showed that the statistical preponderance of stress on the pre-final syllable that is reported for the Greek language is dependent upon grammatical category. Additionally, analyses showed that more than 90% of the tokens in the database would be stressed correctly solely by relying on stress neighbourhood information. The database and the scripts for orthographic and phonological syllabification as well as phonetic transcription are available at http://www.psychology.nottingham.ac.uk/greeklex/.

  15. Automated Bug Assignment: Ensemble-based Machine Learning in Large Scale Industrial Contexts

    OpenAIRE

    Jonsson, Leif; Borg, Markus; Broman, David; Sandahl, Kristian; Eldh, Sigrid; Runeson, Per

    2016-01-01

    Bug report assignment is an important part of software maintenance. In particular, incorrect assignments of bug reports to development teams can be very expensive in large software development projects. Several studies propose automating bug assignment techniques using machine learning in open source software contexts, but no study exists for large-scale proprietary projects in industry. The goal of this study is to evaluate automated bug assignment techniques that are based on machine learni...
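
    A hedged sketch of ensemble-based bug assignment with scikit-learn follows; the study evaluates stacked generalization on large industrial data sets, and the reports and teams below are made up.

    ```python
    # Hedged sketch of ensemble-based bug assignment with scikit-learn; the study
    # uses stacked generalization on large industrial data sets, and the reports
    # and teams below are made up.
    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.ensemble import RandomForestClassifier, StackingClassifier

    reports = [
        "crash in audio driver after resume",
        "audio playback stutters on bluetooth",
        "login page returns http 500",
        "session cookie not set over https",
    ]
    teams = ["audio", "audio", "web", "web"]

    ensemble = StackingClassifier(
        estimators=[("nb", MultinomialNB()),
                    ("rf", RandomForestClassifier(n_estimators=50))],
        final_estimator=LogisticRegression(),
        cv=2,
    )
    model = make_pipeline(TfidfVectorizer(), ensemble)
    model.fit(reports, teams)
    print(model.predict(["no sound from speakers after sleep"]))  # likely 'audio'
    ```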

  16. Oligopolistic competition in wholesale electricity markets: Large-scale simulation and policy analysis using complementarity models

    Science.gov (United States)

    Helman, E. Udi

    This dissertation conducts research into the large-scale simulation of oligopolistic competition in wholesale electricity markets. The dissertation has two parts. Part I is an examination of the structure and properties of several spatial, or network, equilibrium models of oligopolistic electricity markets formulated as mixed linear complementarity problems (LCP). Part II is a large-scale application of such models to the electricity system that encompasses most of the United States east of the Rocky Mountains, the Eastern Interconnection. Part I consists of Chapters 1 to 6. The models developed in this part continue research into mixed LCP models of oligopolistic electricity markets initiated by Hobbs [67] and subsequently developed by Metzler [87] and Metzler, Hobbs and Pang [88]. Hobbs' central contribution is a network market model with Cournot competition in generation and a price-taking spatial arbitrage firm that eliminates spatial price discrimination by the Cournot firms. In one variant, the solution to this model is shown to be equivalent to the "no arbitrage" condition in a "pool" market, in which a Regional Transmission Operator optimizes spot sales such that the congestion price between two locations is exactly equivalent to the difference in the energy prices at those locations (commonly known as locational marginal pricing). Extensions to this model are presented in Chapters 5 and 6. One of these is a market model with a profit-maximizing arbitrage firm. This model is structured as a mathematical program with equilibrium constraints (MPEC), but due to the linearity of its constraints, can be solved as a mixed LCP. Part II consists of Chapters 7 to 12. The core of these chapters is a large-scale simulation of the U.S. Eastern Interconnection applying one of the Cournot competition with arbitrage models. This is the first oligopolistic equilibrium market model to encompass the full Eastern Interconnection with a realistic network representation (using
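
    The Cournot building block underlying these models can be illustrated at a single node; the dissertation's models are network LCPs with transmission constraints, so the sketch below is only the textbook special case.

    ```python
    # Textbook single-node Cournot illustration (the dissertation's models are
    # network LCPs with transmission constraints).  Inverse demand p = a - b*Q,
    # two identical firms with marginal cost c.
    a, b, c = 100.0, 1.0, 20.0

    q_star = (a - c) / (3.0 * b)              # symmetric Cournot equilibrium output
    p_star = a - b * 2.0 * q_star

    def profit(qi, qj):
        return (a - b * (qi + qj) - c) * qi

    # Grid search confirms no profitable unilateral deviation at q_star.
    grid = [q_star + d / 100.0 for d in range(-500, 501)]
    best_q = max(grid, key=lambda q: profit(q, q_star))
    print(f"q* = {q_star:.2f}, p* = {p_star:.2f}, best response ~ {best_q:.2f}")
    ```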

  17. Rucio – The next generation of large scale distributed system for ATLAS data management

    International Nuclear Information System (INIS)

    Garonne, V; Vigne, R; Stewart, G; Barisits, M; Eermann, T B; Lassnig, M; Serfon, C; Goossens, L; Nairz, A

    2014-01-01

    Rucio is the next-generation Distributed Data Management (DDM) system, benefiting from recent advances in cloud and 'Big Data' computing to address the scaling requirements of HEP experiments. Rucio is an evolution of the ATLAS DDM system Don Quijote 2 (DQ2), which has demonstrated very large scale data management capabilities with more than 140 petabytes spread worldwide across 130 sites, and accesses from 1,000 active users. However, DQ2 is reaching its limits in terms of scalability, requiring a large support staff to operate and being hard to extend with new technologies. Rucio addresses these issues by relying on a conceptual data model and new technology to ensure system scalability, address new user requirements and employ a new automation framework to reduce operational overheads. We present the key concepts of Rucio, including its data organization and representation and a model of how to manage central group and user activities. The Rucio design, and the technology it employs, is described, specifically looking at its RESTful architecture and the various software components it uses. We also show the performance of the system.

  18. Application of Logic Models in a Large Scientific Research Program

    Science.gov (United States)

    O'Keefe, Christine M.; Head, Richard J.

    2011-01-01

    It is the purpose of this article to discuss the development and application of a logic model in the context of a large scientific research program within the Commonwealth Scientific and Industrial Research Organisation (CSIRO). CSIRO is Australia's national science agency and is a publicly funded part of Australia's innovation system. It conducts…

  19. In-Situ Printing of Conductive Polylactic Acid (cPLA) Strain Sensors Embedded into Additively Manufactured Parts using Fused Deposition Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ouellette, Brittany Joy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-02-16

    Additive Manufacturing (AM) technology has been around for decades, but until recently machines have been expensive, relatively large, and not available to most institutions. Technological advances in 3D printing and growing awareness throughout industry, universities, and even among hobbyists have increased demand to substitute AM parts for traditionally (subtractively) manufactured designs; however, there is large variability in part quality and mechanical behavior due to the inherent printing process, which must be understood before AM parts are used for load-bearing and structural design.

  20. Why science? to know, to understand, and to rely on results

    CERN Document Server

    Newton, Roger G

    2012-01-01

    This book aims to describe, for readers uneducated in science, the development of humanity's desire to know and understand the world around us through the various stages of its development to the present, when science is almost universally recognized - at least in the Western world - as the most reliable way of knowing. The book describes the history of the large-scale exploration of the surface of the earth by sea, beginning with the Vikings and the Chinese, and of the unknown interiors of the American and African continents by foot and horseback. After the invention of the telescope, visual exploration of the surfaces of the Moon and Mars were made possible, and finally a visit to the Moon. The book then turns to our legacy from the ancient Greeks of wanting to understand rather than just know, and why the scientific way of understanding is valued. For concreteness, it relates the lives and accomplishments of six great scientists, four from the nineteenth century and two from the twentieth. Finally, the boo...

  1. Gold-catalyzed aerobic epoxidation of trans-stilbene in methylcyclohexane. Part II: Identification and quantification of a key reaction intermediate

    KAUST Repository

    Guillois, Kevin

    2013-03-01

    The gold-catalyzed aerobic oxidations of alkenes are thought to rely on the in situ synthesis of hydroperoxide species, which have, however, never been clearly identified. Here, we show direct experimental evidence for the presence of 1-methylcyclohexyl hydroperoxide in the aerobic co-oxidation of stilbene and methylcyclohexane catalyzed by the Au/SiO2-R972 optimized catalyst prepared in Part I. Determination of its response in gas chromatography, by triphenylphosphine titration followed by 31P NMR, makes it easy to follow its concentration throughout the co-oxidation process and clearly highlights the simultaneous existence of the methylcyclohexane autoxidation pathway and the stilbene epoxidation pathway. © 2012 Elsevier B.V. All rights reserved.

  3. How to correct for long-term externalities of large-scale wind power development by a capacity mechanism?

    International Nuclear Information System (INIS)

    Cepeda, Mauricio; Finon, Dominique

    2013-01-01

    This paper deals with the practical problems related to long-term security of supply in electricity markets in the presence of large-scale wind power development. The success of recent renewable promotion schemes adds a new dimension to ensuring long-term security of supply: it necessitates designing second-best policies to prevent large-scale wind power development from distorting long-run equilibrium prices and investments in conventional generation, in particular in peaking units. We rely upon a long-term simulation model that simulates electricity market players' investment decisions in a market regime and incorporates large-scale wind power development under either subsidized or market-driven development scenarios. We test the use of capacity mechanisms to compensate for the long-term effects of large-scale wind power development on prices and reliability of supply. The first finding is that capacity mechanisms can help to reduce the social cost of large-scale wind power development in terms of the decrease in loss of load probability. The second finding is that, in a market-based wind power deployment without subsidy, wind generators are penalised for insufficient contribution to the system's long-term reliability. - Highlights: • We model power market players’ investment decisions incorporating wind power. • We examine two market designs: an energy-only market and a capacity mechanism. • We test two types of wind power development paths: subsidised and market-driven. • Capacity mechanisms compensate for the externalities of wind power developments
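
    The reliability metric used here, loss of load probability (LOLP), can be estimated by Monte Carlo as the fraction of sampled hours in which available generation falls short of load; all numbers below are placeholders, not taken from the paper's model.

    ```python
    # Monte Carlo estimate of loss of load probability (LOLP): the fraction of
    # sampled hours in which available generation falls short of load.  All
    # numbers are placeholders, not taken from the paper's model.
    import numpy as np

    rng = np.random.default_rng(1)
    n_hours = 100_000
    thermal = 60.0 * (rng.random(n_hours) > 0.05)    # 60 GW fleet, 5% forced outages
    wind = 20.0 * rng.beta(2.0, 5.0, n_hours)        # 20 GW wind, variable output
    load = rng.normal(55.0, 8.0, n_hours)            # demand [GW]

    lolp = np.mean(thermal + wind < load)
    print(f"LOLP ~ {lolp:.3%}")
    ```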

  4. Study of a large BGO crystal in a charged particle beam

    International Nuclear Information System (INIS)

    Zhang, N.; Ding, Z.; Wu, Y.; Salomon, M.

    1990-01-01

    We have studied two large crystals of Bismuth Germanate (BGO) with sources and in a pion beam. The response and uniformity have been investigated with several types of reflectors. The temperature dependence of the emitted light was determined, as well as the timing resolution. As the crystal is intended to be part of a large array with very good energy resolution for the detection of high-energy gamma rays and electrons, uniformities of better than 0.5% are required. Various methods to achieve this will be discussed

  5. Update of Part 61 impacts analysis methodology

    International Nuclear Information System (INIS)

    Oztunali, O.I.; Roles, G.W.

    1986-01-01

    The US Nuclear Regulatory Commission is expanding the impacts analysis methodology used during the development of the 10 CFR Part 61 rule to allow improved consideration of costs and impacts of disposal of waste that exceeds Class C concentrations. The project includes updating the computer codes that comprise the methodology, reviewing and updating data assumptions on waste streams and disposal technologies, and calculation of costs for small as well as large disposal facilities. This paper outlines work done to date on this project

  7. Large-D gravity and low-D strings.

    Science.gov (United States)

    Emparan, Roberto; Grumiller, Daniel; Tanabe, Kentaro

    2013-06-21

    We show that in the limit of a large number of dimensions a wide class of nonextremal neutral black holes has a universal near-horizon limit. The limiting geometry is the two-dimensional black hole of string theory with a two-dimensional target space. Its conformal symmetry explains the properties of massless scalars found recently in the large-D limit. For black branes with string charges, the near-horizon geometry is that of the three-dimensional black strings of Horne and Horowitz. The analogies between the α' expansion in string theory and the large-D expansion in gravity suggest a possible effective string description of the large-D limit of black holes. We comment on applications to several subjects, in particular to the problem of critical collapse.

  8. Loss of locality in gravitational correlators with a large number of insertions

    Science.gov (United States)

    Ghosh, Sudip; Raju, Suvrat

    2017-09-01

    We review lessons from the AdS/CFT correspondence that indicate that the emergence of locality in quantum gravity is contingent upon considering observables with a small number of insertions. Correlation functions, where the number of insertions scales with a power of the central charge of the CFT, are sensitive to nonlocal effects in the bulk theory, which arise from a combination of the effects of the bulk Gauss law and a breakdown of perturbation theory. To examine whether a similar effect occurs in flat space, we consider the scattering of massless particles in the bosonic string and the superstring in the limit where the number of external particles, n, becomes very large. We use estimates of the volume of the Weil-Petersson moduli space of punctured Riemann surfaces to argue that string amplitudes grow factorially in this limit. We verify this factorial behavior through an extensive numerical analysis of string amplitudes at large n. Our numerical calculations rely on the observation that, in the large n limit, the string scattering amplitude localizes on the Gross-Mende saddle points, even though individual particle energies are small. This factorial growth implies the breakdown of string perturbation theory for n ∼ (M_pl/E)^(d-2) in d dimensions, where E is the typical individual particle energy. We explore the implications of this breakdown for the black hole information paradox. We show that the loss of locality suggested by this breakdown is precisely sufficient to resolve the cloning and strong subadditivity paradoxes.

  9. The effects of an invasive alien plant (Chromolaena odorata) on large African mammals

    Directory of Open Access Journals (Sweden)

    Lihle Dumalisile

    2017-11-01

    Full Text Available Alien plants have invaded most ecosystem types (terrestrial, fresh water and marine) and are responsible for the loss of irreplaceable natural services on which humankind relies. They alter food quantity, quality and accessibility, and may result in declines in native species richness, which may ultimately lead to extinction. For effective management of invasive alien plants, it is important to understand the effects that such plants have on all levels of biodiversity. However, the effects that invasive alien plants, such as the Triffid weed (Chromolaena odorata), have on mammalian biodiversity, especially large mammalian species, are not well known, although they play major ecological roles in areas such as nutrient cycling. Also, little is known about the recovery of the ecosystem following alien plant removal. This study investigated the effects of C. odorata invasion on large mammalian herbivores in Hluhluwe-iMfolozi Park and whether clearing of this plant helped in rehabilitating the habitat. We used track counts to estimate and compare species richness, diversity and abundance indices for large mammalian species between areas with differing C. odorata invasion durations (ca 2 years, ca 10 years, ca 20 years), areas with differing clearing times (cl < 2 years, cl 3–5 years) and an area without any history of C. odorata invasion as a control. The results from this study show that large mammalian species utilised the uninvaded and the cleared areas more than the invaded areas. Species richness, abundance and diversity decreased with increasing invasion duration, and cleared areas showed increasing species richness and abundance. We conclude that this invasive alien plant modifies habitats and that its removal does aid in the restoration of the ecosystem.

  10. Shellfish Fishery Severely Reduces Condition and Survival of Oystercatchers Despite Creation of Large Marine Protected Areas

    Directory of Open Access Journals (Sweden)

    Simon Verhulst

    2004-06-01

    Full Text Available Fisheries and other human activities pose a global threat to the marine environment. Marine protected areas (MPAs) are an emerging tool to cope with such threats. In the Dutch Wadden Sea, large MPAs (covering 31% of all intertidal flats) have been created to protect shellfish-eating birds and allow recovery of important habitats. Even though shellfish fishing is prohibited in these areas, populations of shellfish-eating birds in the Wadden Sea have declined sharply. The role of shellfish fisheries in these declines is hotly debated; therefore, we investigated the effectiveness of MPAs for protecting oystercatcher (Haematopus ostralegus) populations. Shellfish stocks (cockles, Cerastoderma edule) were substantially higher in the MPAs, but surprisingly this has not resulted in a redistribution of wintering oystercatchers. Oystercatchers in unprotected areas had less shellfish in their diet and lower condition (a combined measure of mass and haematological parameters), and their estimated mortality was 43% higher. It is likely, therefore, that shellfish fishing explains at least part of the 40% decline in oystercatcher numbers in recent years. Condition and mortality effects were strongest in males, and the population sex ratio was female-biased, in agreement with the fact that males rely more on shellfish. The unprotected areas apparently function as an "ecological trap," because oystercatchers did not respond as anticipated to the artificial spatial heterogeneity in food supply. Consequently, the MPAs are effective on a local scale, but not on a global scale. Similar problems are likely to exist in terrestrial ecosystems, and distribution strategies of target species need to be considered when designing terrestrial and marine protected areas if they are to be effective.

  11. Lying relies on the truth

    NARCIS (Netherlands)

    Debey, E.; De Houwer, J.; Verschuere, B.

    2014-01-01

    Cognitive models of deception focus on the conflict-inducing nature of the truth activation during lying. Here we tested the counterintuitive hypothesis that the truth can also serve a functional role in the act of lying. More specifically, we examined whether the construction of a lie can involve a

  12. Sociological investigation Students of Universities' Social-Political Trust in Iran: relying on secondary analysis of some national surveys

    Directory of Open Access Journals (Sweden)

    Sayed Mahdi Etemadifard

    2013-12-01

    Full Text Available A social system continues to exist on the basis of mutual trust among its members, and social trust in the modern era is more important than in earlier periods. The subject of the current report is the trust of Iranian students in different respects. The main question in this investigation concerns the social-political trust of these students (based on trust in the Islamic Republic of Iran in the past and at present). This matter is explored through secondary analysis of data from several national surveys. Based on the data and the findings of other studies, we illustrate the objective aspects of students' trust in recent decades. The main sources for data collection at this stage include all the public surveys conducted in the past four decades, general data about students and related studies. Students' trust was evaluated on the following dimensions: trust in trade unions and various groups, trust in the clergy, trust in public directors, and confidence in judges. Furthermore, the level of political engagement and participation in elections, satisfaction with the economic and political situation, and confidence in radio and television news were examined. A reduction in public trust leads to reduced student trust, especially in its social and political dimensions.

  13. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    The fields of sensitivity and uncertainty analysis have traditionally been dominated by statistical techniques when large-scale modeling codes are being analyzed. These methods are able to estimate sensitivities, generate response surfaces, and estimate response probability distributions given the input parameter probability distributions. Because the statistical methods are computationally costly, they are usually applied only to problems with relatively small parameter sets. Deterministic methods, on the other hand, are very efficient and can handle large data sets, but generally require simpler models because of the considerable programming effort required for their implementation. The first part of this paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability in existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The second part of the paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions and obtain result probability distributions. The approach is applicable to low-level radioactive waste disposal system performance assessment
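
    In the spirit of derivative-based propagation, a first-order sketch is var(y) ≈ Σ_i (∂y/∂x_i)² var(x_i). The example below obtains sensitivities by finite differences (GRESS/ADGEN generate them by automated differentiation) and cross-checks against Monte Carlo; the model and numbers are placeholders.

    ```python
    # First-order, derivative-based propagation in the spirit of DUA:
    # var(y) ~ sum_i (dy/dx_i)^2 * var(x_i).  Sensitivities here come from finite
    # differences; GRESS/ADGEN generate them by automated differentiation.
    import numpy as np

    def model(x):
        return x[0] ** 2 * np.exp(-x[1]) + 3.0 * x[2]

    x0 = np.array([1.0, 0.5, 2.0])          # nominal parameter values
    var_x = np.array([0.01, 0.04, 0.02])    # parameter variances

    eps = 1e-6
    grad = np.array([(model(x0 + eps * e) - model(x0 - eps * e)) / (2 * eps)
                     for e in np.eye(3)])
    var_y = np.sum(grad**2 * var_x)

    # Cross-check against brute-force Monte Carlo sampling.
    rng = np.random.default_rng(2)
    samples = rng.normal(x0, np.sqrt(var_x), size=(200_000, 3))
    print(var_y, np.var(model(samples.T)))
    ```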

  14. Innovative Methods for Displaying Large Schematics on Small Screens

    National Research Council Canada - National Science Library

    Wang, Jing

    2000-01-01

    This SBIR topic identifies an important and widespread emerging need, as more and more military tasks and civilian applications rely on information display and input using portable computers and mobile devices...

  15. Large scale structure from the Higgs fields of the supersymmetric standard model

    International Nuclear Information System (INIS)

    Bastero-Gil, M.; Di Clemente, V.; King, S.F.

    2003-01-01

    We propose an alternative implementation of the curvaton mechanism for generating the curvature perturbations which does not rely on a late decaying scalar decoupled from inflation dynamics. In our mechanism the supersymmetric Higgs scalars are coupled to the inflaton in a hybrid inflation model, and this allows the conversion of the isocurvature perturbations of the Higgs fields to the observed curvature perturbations responsible for large scale structure to take place during reheating. We discuss an explicit model which realizes this mechanism in which the μ term in the Higgs superpotential is generated after inflation by the vacuum expectation value of a singlet field. The main prediction of the model is that the spectral index should deviate significantly from unity, |n−1| ∼ 0.1. We also expect relic isocurvature perturbations in neutralinos and baryons, but no significant departures from Gaussianity and no observable effects of gravity waves in the CMB spectrum

  16. First Very Large Telescope/X-shooter spectroscopy of early-type stars outside the Local Group

    NARCIS (Netherlands)

    Hartoog, O.E.; Sana, H.; de Koter, A.; Kaper, L.

    2012-01-01

    As part of the Very Large Telescope (VLT)/X-shooter science verification, we obtained the first optical medium-resolution spectrum of a previously identified bright O-type object in NGC 55, a Large Magellanic Cloud (LMC)-like galaxy at a distance of ∼2.0 Mpc. Based on the stellar and nebular

  17. Job tasks, computer use, and the decreasing part-time pay penalty for women in the UK

    OpenAIRE

    Elsayed, A.E.A.; de Grip, A.; Fouarge, D.

    2014-01-01

    Using data from the UK Skills Surveys, we show that the part-time pay penalty for female workers within low- and medium-skilled occupations decreased significantly over the period 1997-2006. The convergence in computer use between part-time and full-time workers within these occupations explains a large share of the decrease in the part-time pay penalty. However, the lower part-time pay penalty is also related to lower wage returns to reading and writing which are performed more intensively b...

  18. The description-experience gap and its relation to instructional control: Do people rely more on their experience than on objective descriptions?

    Directory of Open Access Journals (Sweden)

    Álvaro Viúdez

    2017-11-01

    Full Text Available The present work aims to address contradictory results obtained in two different fields: two studies conducted in the description-experience gap field showing that descriptions are neglected when personal experience is available (1, 2), and several others conducted in the instructional control field reaching the opposite conclusion (3–8). To account for this contradiction, we hypothesized that participants in the studies of Jessup, Bishara and Busemeyer (1) and Lejarraga and Gonzalez (2) relied on their experience rather than on the descriptions because of the difficult, demanding nature of the probabilistic descriptions they faced. Enriched descriptions were created in our experiment to assess the contribution of this factor to the differential influence of descriptions on choice behavior. Nonetheless, our hypothesis did not find support in the results, and further research is needed to account for the aforementioned contradiction.

  19. Mapping resistance to powdery mildew in barley reveals a large-effect nonhost resistance QTL.

    Science.gov (United States)

    Romero, Cynara C T; Vermeulen, Jasper P; Vels, Anton; Himmelbach, Axel; Mascher, Martin; Niks, Rients E

    2018-05-01

    Resistance factors against non-adapted powdery mildews were mapped in barley. Some QTLs seem effective only against non-adapted mildews, while others also play a role in defense against the adapted form. The durability and effectiveness of nonhost resistance suggest promising practical applications for crop breeding, relying upon elucidation of key aspects of this type of resistance. We investigated which genetic factors determine the nonhost status of barley (Hordeum vulgare L.) to powdery mildews (Blumeria graminis). We set out to verify whether genes involved in nonhost resistance have a wide effectiveness spectrum, and whether nonhost resistance genes confer resistance to the barley-adapted powdery mildew. Two barley lines, SusBgtSC and SusBgtDC, with some susceptibility to the wheat powdery mildew B. graminis f.sp. tritici (Bgt), were crossed with cv Vada to generate two mapping populations. Each population was assessed for level of infection by four B. graminis ff. spp., and QTL mapping analyses were performed. Our results demonstrate polygenic inheritance of nonhost resistance, with some QTLs effective only against non-adapted mildews, while others play a role against adapted and non-adapted forms. Histological analyses of the nonhost interaction show that most penetration attempts are stopped in association with papillae, and also suggest independent layers of defence at haustorium establishment and conidiophore formation. Nonhost resistance of barley to powdery mildew relies mostly on non-hypersensitive mechanisms. A large-effect nonhost resistance QTL mapped to a 1.4 cM interval is suitable for map-based cloning.

  20. Cosmological bounds on large extra dimensions from nonthermal production of Kaluza-Klein modes

    International Nuclear Information System (INIS)

    Allahverdi, Rouzbeh; Bird, Chris; Groot Nibbelink, Stefan; Pospelov, Maxim

    2004-01-01

    The existing cosmological constraints on theories with large extra dimensions rely on the thermal production of the Kaluza-Klein (KK) modes of gravitons and radions in the early Universe. Successful inflation and reheating, as well as baryogenesis, typically require the existence of a TeV-scale field in the bulk, most notably the inflaton. The nonthermal production of KK modes with masses of order 100 GeV accompanying the inflaton decay sets the lower bounds on the fundamental scale M*. For a 1-TeV inflaton, the late decay of these modes distorts the successful predictions of big bang nucleosynthesis unless M* > 35, 13, 7, 5, and 3 TeV for two, three, four, five, and six extra dimensions, respectively. This improves the existing bounds from cosmology on M* for four, five, and six extra dimensions. Even more stringent bounds are derived for a heavier inflaton

  1. Resonator reset in circuit QED by optimal control for large open quantum systems

    Science.gov (United States)

    Boutin, Samuel; Andersen, Christian Kraglund; Venkatraman, Jayameenakshi; Ferris, Andrew J.; Blais, Alexandre

    2017-10-01

    We study an implementation of the open GRAPE (gradient ascent pulse engineering) algorithm well suited for large open quantum systems. While typical implementations of optimal control algorithms for open quantum systems rely on explicit matrix exponential calculations, our implementation avoids these operations, leading to a polynomial speedup of the open GRAPE algorithm in cases of interest. This speedup, as well as the reduced memory requirements of our implementation, are illustrated by comparison to a standard implementation of open GRAPE. As a practical example, we apply this open-system optimization method to active reset of a readout resonator in circuit QED. In this problem, the shape of a microwave pulse is optimized so as to empty the cavity of measurement photons as fast as possible. Using our open GRAPE implementation, we obtain pulse shapes leading to a reset time over 4 times faster than passive reset.
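
    As a rough illustration of gradient-based pulse shaping that avoids matrix exponentials, the toy below time-steps a classical damped-cavity equation and descends a finite-difference gradient to empty the cavity. It is a sketch under assumed dynamics and parameters, not the authors' open GRAPE implementation.

```python
import numpy as np

# Toy illustration (not the authors' open GRAPE code): shape a
# piecewise-constant drive eps[t] that empties a damped cavity,
# alpha' = -(kappa/2) * alpha - 1j * eps(t), using plain time stepping
# instead of matrix exponentials. All numbers are made up.

kappa, dt, steps = 1.0, 0.05, 100

def final_photons(eps):
    alpha = 2.0 + 0.0j            # initial coherent-state amplitude
    for e in eps:                 # Euler integration of the cavity ODE
        alpha += dt * (-0.5 * kappa * alpha - 1j * e)
    return abs(alpha) ** 2        # residual photon number at time T

eps = np.zeros(steps)             # start from "no drive" (passive decay)
lr, fd = 0.5, 1e-6
for it in range(200):             # finite-difference gradient descent
    base = final_photons(eps)
    grad = np.empty(steps)
    for k in range(steps):
        eps[k] += fd
        grad[k] = (final_photons(eps) - base) / fd
        eps[k] -= fd
    eps -= lr * grad

print("passive residual photons:  ", final_photons(np.zeros(steps)))
print("optimized residual photons:", final_photons(eps))
```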

  2. Technique investigation on large area neutron scintillation detector array

    International Nuclear Information System (INIS)

    Chen Jiabin

    2006-12-01

    The detailed plan for developing the Large Area Neutron Scintillation Detector Array (LaNSA), to be used for measuring fusion fuel areal density on the Shenguang III prototype, is presented, including the experimental principle, the detector working principle, the electronics system design, and the requirements on the target chamber. Detailed parameters of the components are given and the main factors affecting system performance are analyzed. The implementation path is introduced. (authors)

  3. Limits to the development of feed-forward structures in large recurrent neuronal networks

    Directory of Open Access Journals (Sweden)

    Susanne Kunkel

    2011-02-01

    Full Text Available Spike-timing dependent plasticity (STDP) has traditionally been of great interest to theoreticians, as it seems to provide an answer to the question of how the brain can develop functional structure in response to repeated stimuli. However, despite this high level of interest, convincing demonstrations of this capacity in large, initially random networks have not been forthcoming. Such demonstrations as there are typically rely on constraining the problem artificially. Techniques include employing additional pruning mechanisms or STDP rules that enhance symmetry breaking, simulating networks with low connectivity that magnify competition between synapses, or combinations of the above. In this paper we first review modeling choices that carry particularly high risks of producing non-generalizable results in the context of STDP in recurrent networks. We then develop a theory for the development of feed-forward structure in random networks and conclude that an unstable fixed point in the dynamics prevents the stable propagation of structure in recurrent networks with weight-dependent STDP. We demonstrate that the key predictions of the theory hold in large-scale simulations. The theory provides insight into the reasons why such development does not take place in unconstrained systems and enables us to identify candidate biologically motivated adaptations to the balanced random network model that might enable it.
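
    For readers unfamiliar with the rule under discussion, the snippet below sketches a standard pair-based additive STDP window; the amplitudes and time constants are illustrative choices, not parameters from the paper.

```python
import numpy as np

# Minimal sketch of a pair-based additive STDP rule: a synapse is
# potentiated when the presynaptic spike precedes the postsynaptic one
# and depressed otherwise, with exponentially decaying windows.
# Parameters below are illustrative, not taken from the paper.

A_plus, A_minus = 0.005, 0.00525   # slight asymmetry favoring depression
tau_plus = tau_minus = 20.0        # STDP time constants (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for a single pre/post spike pair."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post -> potentiation (causal pair)
        return A_plus * np.exp(-dt / tau_plus)
    else:         # post before pre -> depression (anti-causal pair)
        return -A_minus * np.exp(dt / tau_minus)

# Causal pairs strengthen feed-forward links, anti-causal pairs weaken them:
print(stdp_dw(10.0, 15.0))   # > 0
print(stdp_dw(15.0, 10.0))   # < 0
```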

  4. A Spatial Interpolation Framework for Efficient Valuation of Large Portfolios of Variable Annuities

    Directory of Open Access Journals (Sweden)

    Seyed Amir Hejazi

    2017-07-01

    Full Text Available Variable Annuity (VA) products expose insurance companies to considerable risk because of the guarantees they provide to buyers of these products. Managing and hedging these risks requires insurers to find the values of key risk metrics for a large portfolio of VA products. In practice, many companies rely on nested Monte Carlo (MC) simulations to find key risk metrics. MC simulations are computationally demanding, forcing insurance companies to invest hundreds of thousands of dollars in computational infrastructure per year. Moreover, existing academic methodologies are focused on fair valuation of a single VA contract, exploiting ideas in option theory and regression. In most cases, the computational complexity of these methods surpasses the computational requirements of MC simulations. Therefore, academic methodologies cannot scale well to large portfolios of VA contracts. In this paper, we present a framework for valuing such portfolios based on spatial interpolation. We provide a comprehensive study of this framework and compare existing interpolation schemes. Our numerical results show superior performance, in terms of both computational efficiency and accuracy, for these methods compared to nested MC simulations. We also present insights into the challenge of finding an effective interpolation scheme in this framework, and suggest guidelines that help us build a fully automated scheme that is efficient and accurate.
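
    A minimal sketch of the interpolation idea: value only a small set of representative contracts with the expensive routine, then interpolate the rest in contract-feature space. The inverse-distance kernel and the stand-in valuation below are our own choices; the paper compares several interpolation schemes rather than prescribing this one.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_value(contract):
    """Stand-in for a costly nested Monte Carlo valuation (toy formula)."""
    age, premium, guarantee = contract
    return premium * (0.1 + 0.002 * age + 0.05 * guarantee)

# Contract features: (age, account value, guarantee ratio) -- hypothetical.
portfolio = rng.uniform([30, 1e4, 0.0], [80, 1e6, 1.0], size=(10_000, 3))
reps      = rng.uniform([30, 1e4, 0.0], [80, 1e6, 1.0], size=(200, 3))
rep_vals  = np.array([mc_value(c) for c in reps])   # the only "MC" calls

# Inverse-distance-weighted interpolation over standardized features.
mu, sd = portfolio.mean(0), portfolio.std(0)
P, R = (portfolio - mu) / sd, (reps - mu) / sd
d = np.linalg.norm(P[:, None, :] - R[None, :, :], axis=2) + 1e-9
w = 1.0 / d ** 2
estimates = (w * rep_vals).sum(1) / w.sum(1)
print("estimated portfolio value:", estimates.sum())
```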

  5. 27 CFR 41.39 - Determination of sale price of large cigars.

    Science.gov (United States)

    2010-04-01

    ... addition to money, goods or services exchanged for cigars may be considered as part of the sale price. See... TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY (CONTINUED) TOBACCO IMPORTATION OF TOBACCO PRODUCTS, CIGARETTE PAPERS AND TUBES, AND PROCESSED TOBACCO Taxes Classification of Large Cigars and Cigarettes § 41...

  6. Vibration amplitude rule study for rotor under large time scale

    International Nuclear Information System (INIS)

    Yang Xuan; Zuo Jianli; Duan Changcheng

    2014-01-01

    The rotor is an important part of rotating machinery; its vibration performance is one of the key factors affecting its service life. This paper presents both theoretical analyses and experimental demonstrations of the vibration rule of the rotor under large time scales. The rule can be used to estimate the service life of the rotor. (authors)

  7. 3D large-scale calculations using the method of characteristics

    International Nuclear Information System (INIS)

    Dahmani, M.; Roy, R.; Koclas, J.

    2004-01-01

    An overview of the computational requirements and the numerical developments made in order to be able to solve 3D large-scale problems using the characteristics method will be presented. To accelerate the MCI solver, efficient acceleration techniques were implemented and parallelization was performed. However, for the very large problems, the size of the tracking file used to store the tracks can still become prohibitive and exceed the capacity of the machine. The new 3D characteristics solver MCG will now be introduced. This methodology is dedicated to solve very large 3D problems (a part or a whole core) without spatial homogenization. In order to eliminate the input/output problems occurring when solving these large problems, we define a new computing scheme that requires more CPU resources than the usual one, based on sweeps over large tracking files. The huge capacity of storage needed in some problems and the related I/O queries needed by the characteristics solver are replaced by on-the-fly recalculation of tracks at each iteration step. Using this technique, large 3D problems are no longer I/O-bound, and distributed CPU resources can be efficiently used. (author)
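
    The trade-off the abstract describes, sweeping a stored tracking file versus regenerating tracks on the fly, can be caricatured with toy stand-ins (none of this is the MCG source; the "tracks" below are fabricated):

```python
import math

# Toy contrast of the two computing schemes: keeping tracks stored and
# sweeping them each iteration (I/O-bound in the real solver) versus
# recomputing them on the fly each iteration (CPU-bound, distributable).

def trace_tracks(n_angles, n_tracks):
    """Stand-in for geometric ray tracing: yield (angle, offset) tracks."""
    for a in range(n_angles):
        for t in range(n_tracks):
            yield (math.pi * a / n_angles, t / n_tracks)

def sweep_stored(tracks, iterations):
    acc = 0.0
    for _ in range(iterations):          # tracks held in memory/disk
        for angle, offset in tracks:
            acc += math.cos(angle) * offset
    return acc

def sweep_on_the_fly(n_angles, n_tracks, iterations):
    acc = 0.0
    for _ in range(iterations):          # tracks recomputed each pass
        for angle, offset in trace_tracks(n_angles, n_tracks):
            acc += math.cos(angle) * offset
    return acc

print(sweep_stored(list(trace_tracks(8, 100)), 3))
print(sweep_on_the_fly(8, 100, 3))       # same result, nothing stored
```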

  8. ALICE A Large Ion Collider Experiment

    CERN Multimedia

    Mager, M; Rohr, D M; Suljic, M; Miskowiec, D C; Donigus, B; Mercado-perez, J; Lohner, D; Bertelsen, H; Kox, S; Cheynis, B; Sambyal, S S; Usai, G; Agnello, M; Toscano, L; Miake, Y; Inaba, M; Maldonado cervantes, I A; Fernandez tellez, A; Kulibaba, V; Zinovjev, G; Martynov, Y; Usenko, E; Pshenichnov, I; Nikolaev, S; Vasiliev, A; Vinogradov, A; Moukhanova, T; Vasilyev, A; Kozlov, Y; Voloshin, K; Kiselev, S; Kirilko, Y; Lyublev, E; Kondratyeva, N; Gameiro munhoz, M; Alarcon do passo suaide, A; Lagana fernandes, C; Carlin filho, N; Yin, Z; Zhu, J; Luo, J; Pikna, M; Bombara, M; Pastircak, B; Marangio, G; Gianotti, P; Muccifora, V; Sputowska, I A; Ilkiv, I; Christiansen, P; Dodokhov, V; Yurevich, V; Fedunov, A; Malakhov, A; Efremov, A; Feofilov, G; Vinogradov, L; Asryan, A; Kovalenko, V; Piyarathna, D; Myers, C J; Martashvili, I; Oh, H; Cherney, M G; D'erasmo, G; Wagner, V; Smakal, R; Sartorelli, G; Xaplanteris karampatsos, L; Mlynarz, J; Murray, C J; Oh, S; Becker, B; Zbroszczyk, H P; Feldkamp, L; Pappalardo, G; Khlebnikov, A; Basmanov, V; Punin, V; Demanov, V; Naseer, M A; Gotovac, S; Zgura, S I; Yang, H; Vernet, R; Son, C; Shtejer diaz, K; Hwang, S; Alfaro molina, J R; Jahnke, C; Richter, M R; Garcia-solis, E J; Hitchcock, T M; Bazo alba, J L; Utrobicic, A; Brun, R; Divia, R; Hillemanns, H; Schukraft, J; Riedler, P; Eulisse, G; Von haller, B; Kushpil, V; Ivanov, M; Malzacher, P; Schweda, K O; Renfordt, R A E; Reygers, K J; Pachmayer, Y C; Gaardhoeje, J J; Bearden, I G; Porteboeuf, S J; Borel, H; Pereira da costa, H D A; Faivre, J; Germain, M; Schutz, Y R; Delagrange, H; Batigne, G; Stocco, D; Estienne, M D; Bergognon, A A E; Zoccarato, Y D; Jones, P G; Levai, P; Bencedi, G; Khan, M M; Mahapatra, D P; Ghosh, P; Das, T K; Cicalo, C; De falco, A; Mazzoni, A M; Cerello, P; De marco, N; Riccati, L; Saavedra san martin, O; Paic, G; Ovchynnyk, V; Karavicheva, T; Kucheryaeva, M; Skuratovskiy, O; Mal kevich, D; Bogdanov, A; Pereira, L G; Cai, X; Zhu, X; Wang, M; Kar, S; Fan, F; Sitar, B; Cerny, V; Aggarwal, M M; Bianchi, N; Torii, H; Hori, Y; Tsuji, T; Herrera corral, G A; Kowalski, M; Rybicki, A; Deloff, A; Petrovici, A; Nomokonov, P; Parfenov, A; Koshurnikov, E; Shahaliyev, E; Rogochaya, E; Kondratev, V; Oreshkina, N; Tarasov, A; Norenberg, M; Bodnya, E; Bogolyubskiy, M; Symons, T; Blanco, F; Madagodahettige don, D M; Umaka, E N; Schaefer, B; De pasquale, S; Fusco girard, M; Kim, J; Jeon, H; Nandi, B K; Kumar, J; Sarkar - sinha, T; Arcelli, S; Scapparone, E; Shevel, A; Nikulin, V; Komkov, B; Voloshin, S; Hille, P T; Kannan, S; Dainese, A; Matynia, R M; Dabala, L B; Zimmermann, M B; Vinogradov, Y; Vikhlyantsev, O; Telnov, A; Tumkin, A; Van leeuwen, M; Erdal, H A; Keidel, R; Rui, R; Yeo, I; Vilakazi, Z; Klay, J L; Boswell, B D; Lindenstruth, V; Tveter, T S; Batzing, P C; Breitner, T G; Sahoo, R; Roy, A; Musa, L; Perini, D; Vande vyvre, P; Fuchs, U; Oberegger, M; Aglieri rinella, G; Salgueiro domingues da silva, R M; Kalweit, A P; Greco, V; Bellini, F; Bond, P M; Mohammadi, N; Marin, A M; Glassel, P; Schicker, R M; Staley, F M; Castillo castellanos, J E; Furget, C; Real, J; Martino, J F; Evans, D; Sahu, P K; Sahu, S K; Ahammed, Z; Saini, J; Bala, R; Gupta, R; Di bari, D; Biasotto, M; Nappi, G; Esumi, S; Sano, M; Roehrich, D; Lonne, P; Drakin, Y; Manko, V; Nikulin, S; Yushmanov, I; Kozlov, K; Kerbikov, B; Stavinskiy, A; Sultanov, R; Raniwala, R; Zhou, D; Zhu, H; Meres, M; Kralik, I; Parmar, S; Rizzi, V; Orlandi, A; Lea, R; Kuijer, P G; Figiel, J; Gorlich, L M; Shabratova, G; Lobanov, V; Zaporozhets, 
S; Ivanov, A; Iglovikov, V; Ochirov, A; Petrov, V; Jacobs, P M; De gruttola, D; Corsi, F; Varma, R; Nania, R; Wilkinson, J J; Samsonov, V; Pruneau, C A; Caines, H L; Aronsson, T; Adare, A M; Zwick, S M; Fearick, R W; Ostrowski, P K; Kulasinski, K; Heine, N; Wilk, A; Ilkaev, R; Ilkaeva, L; Pavlov, V; Mikhaylyukov, K; Rybin, A; Naumov, N; Mudnic, E; Cortese, P; Listratenko, O; Stan, I; Nooren, G; Song, J; Krawutschke, T; Kim, S Y; Hwang, D S; Lee, S H; Leon monzon, I; Vorobyev, I; Skaali, B; Wikne, J; Dordic, O; Yan, Y; Mazumder, R; Shahoyan, R; Kluge, A; Pellegrino, F; Safarik, K; Tauro, A; Foka, P; Frankenfeld, U M; Masciocchi, S; Schwarz, K E; Bailhache, R M; Anguelov, V; Hansen, A; Vulpescu, B; Baldisseri, A; Aphecetche, L B; Berenyi, D; Sahoo, S; Nayak, T K; Muhuri, S; Patra, R N; Adhya, S P; Potukuchi, B; Masoni, A; Scomparin, E; Beole, S; Mizuno, S; Enyo, H; Cuautle flores, E; Gonzalez zamora, P; Djuvsland, O; Altinpinar, S; Wagner, B; Fehlker, D; Velure, A; Potin, S; Kurepin, A; Ryabinkin, E; Kiselev, I; Pestov, Y; Hayrapetyan, A; Manukyan, N; Lutz, J; Belikov, I; Roy, C S; Takahashi, J; Araujo silva figueredo, M; Tang, S; Szarka, I; Kapusta, S; Hasko, J; Putis, M; Sandor, L; Vrlakova, J; Das, S; Hayashi, S; Van rijn, A J; Siemiarczuk, T; Petrovici, M; Petris, M; Stenlund, E A; Malinina, L; Fateev, O; Kolozhvari, A; Altsybeev, I; Sadovskiy, S; Soloviev, A; Ploskon, M A; Mayes, B W; Sorensen, S P; Mazer, J A; Awes, T; Virgili, T; Pagano, P; Krus, M; Sett, P; Bhatt, H; Sinha, B; Khan, P; Antonioli, P; Scioli, G; Sakaguchi, H; Volkov, S; Khanzadeev, A; Malaev, M; Lisa, M A; Loggins, V R; Schuster, T R; Scharenberg, R P; Turrisi, R; Debski, P R; Oleniacz, J; Westerhoff, U; Yanovskiy, V; Domrachev, S; Smirnova, Y; Zimmermann, S; Veldhoen, M; Van der maarel, J; Kileng, B; Seo, J; Lopez torres, E; Camerini, P; Jang, H J; Buthelezi, E Z; Suleymanov, M K O; Belmont moreno, E; Zhao, C; Perales, M; Kobdaj, C; Spyropoulou-stassinaki, M; Roukoutakis, F; Keil, M; Morsch, A; Rademakers, A; Soos, C; Zampolli, C; Grigoras, C; Chibante barroso, V M; Schuchmann, S; Grigoras, A G; Lafuente mazuecos, A; Wegrzynek, A T; Bielcikova, J; Kushpil, S; Braun-munzinger, P; Andronic, A; Zimmermann, A; Rosnet, P; Ramillien barret, V; Lopez, X B; Arbor, N; Erazmus, B E; Pichot, P; Pillot, P; Grossiord, J; Boldizsar, L; Khan, S; Puddu, G; Marras, D; Siddhanta, S; Costanza, S; Botta, E; Gallio, M; Masera, M; Simonetti, L; Prino, F; Oppedisano, C; Vargas trevino, A D; Nystrand, J I; Ullaland, K; Haaland, O S; Huang, M; Naumov, S; Zinovjev, M; Trubnikov, V; Alkin, A; Ivanytskyi, O; Guber, F; Karavichev, O; Nyanin, A; Sibiryak, Y; Peresunko, D Y; Patarakin, O; Aleksandrov, D; Blau, D; Yasnopolskiy, S; Chumakov, M; Vetlitskiy, I; Nedosekin, A; Selivanov, A; Okorokov, V; Grigoryan, A; Papikyan, V; Kuhn, C C; Wan, R; Cajko, F; Siska, M; Mares, J; Zavada, P; Ceballos sanchez, C; Reolon, A R; Gunji, T; Snellings, R; Mayer, C; Klusek-gawenda, M J; Schiaua, C C; Andrei, C; Herghelegiu, A I; Soegaard, C; Panebrattsev, Y; Penev, V; Efimov, L; Zanevskiy, Y; Vechernin, V; Zarochentsev, A; Kolevatov, R; Agapov, A; Polishchuk, B; Nattrass, C; Anticic, T; Kwon, Y; Kim, M; Moon, T; Seger, J E; Petran, M; Sahoo, B; Das bose, L; Hushnud, H; Hatzifotiadou, D; Shigaki, K; Jha, D M; Murray, S; Badala, A; Putevskoy, S; Shapovalova, E; Haiduc, M; Mitu, C M; Mischke, A; Grelli, A; Hetland, K F; Rachevski, A; Menchaca-rocha, A A; De cuveland, J; Hutter, D; Langhammer, M; Dahms, T; Watkins, E P; Gago medina, A M; Planinic, M; Riegler, W; 
Telesca, A; Knichel, M L; Lazaridis, L; Ferencei, J; Martin, N A; Appelshaeuser, H; Heckel, S T; Windelband, B S; Nielsen, B S; Chojnacki, M; Baldit, A; Manso, F; Crochet, P; Espagnon, B; Uras, A; Lietava, R; Lemmon, R C; Agocs, A G; Viyogi, Y; Pal, S K; Singhal, V; Khan, S A; Alam, S N; Rodriguez cahuantzi, M; Maslov, M; Kurepin, A; Ippolitov, M; Lebedev, V; Tsvetkov, A; Klimov, A; Agafonov, G; Martemiyanov, A; Loginov, V; Kononov, S; Hnatic, M; Kalinak, P; Trzaska, W H; Raha, S; Canoa roman, V; Cruz albino, R; Botje, M; Gladysz-dziadus, E; Marszal, T; Oskarsson, A N E; Otterlund, I; Tydesjo, H; Ljunggren, H M; Vodopyanov, A; Akichine, P; Kuznetsov, A; Vedeneyev, V; Naumenko, P; Bilov, N; Rogalev, R; Evdokimov, S; Braidot, E; Bellwied, R; De caro, A; Kang, J H; Gorbunov, Y; Lee, J; Pachr, M; Dash, S; Roy, P K; Cifarelli, L; Laurenti, G; Margotti, A; Sugitate, T; Ivanov, V; Zhalov, M; Salzwedel, J S N; Pavlinov, A; Harris, J W; Caballero orduna, D; Fiore, E M; Pluta, J M; Kisiel, A R; Wrobel, D; Klein-boesing, C; Grimaldi, A; Zhitnik, A; Nazarenko, S; Zavyalov, N; Miroshnikov, D; Kuryakin, A; Vyushin, A; Mamonov, A; Vickovic, L; Niculescu, M; Fragiacomo, E; Ahn, S U; Ahn, S; Foertsch, S V; Brown, C R; Lovhoiden, G; Harton, A V; Khosonthongkee, K; Langoy, R; Schmidt, H R; Betev, L; Buncic, P; Di mauro, A; Martinengo, P; Gargiulo, C; Grosse-oetringhaus, J F; Costa, F; Baltasar dos santos pedrosa, F; Laudi, E; Adamova, D; Lippmann, C; Schmidt, C J; Book, J H; Grajcarek, R; Christensen, C H; Dupieux, P; Bastid, N; Rakotozafindrabe, A M; Conesa balbastre, G; Martinez-garcia, G; Suire, C P; Ducroux, L; Tieulent, R N; Jusko, A; Barnafoldi, G G; Pochybova, S; Hussain, T; Dubey, A K; Acharya, S; Gupta, A; Ricci, R A; Meddi, F; Vercellin, E; Chujo, T; Watanabe, K; Onishi, H; Akiba, Y; Vergara limon, S; Tejeda munoz, G; Skjerdal, K; Svistunov, S; Reshetin, A; Maevskaya, A; Antonenko, V; Mishustin, N; Meleshko, E; Korsheninnikov, A; Balygin, K; Zagreev, B; Akindinov, A; Mikhaylov, K; Gushchin, O; Grigoryev, V; Gulkanyan, H; Sanchez castro, X; Peretti pezzi, R; Oliveira da silva, A C; Harmanova, Z; Vokal, S; Beitlerova, A; Rak, J; Ghosh, S K; Bhati, A K; Spiriti, E; Ronchetti, F; Casanova diaz, A O; Kuzmin, N; Melkumov, G; Zinchenko, A; Shklovskaya, A; Bunzarov, Z I; Chernenko, S; Rogachevskiy, O; Toulina, T; Kompaniets, M; Titov, A; Kharlov, Y; Dantsevich, G; Stolpovskiy, M; Porter, R J; Datskova, O V; Kim, D S; Jung, W W; Kim, H; Bielcik, J; Pospisil, V; Cepila, J; Das, D; Williams, C; Pesci, A; Roshchin, E; Grounds, A; Humanic, T; Steinpreis, M D; Yaldo, C G; Smirnov, N; Heinz, M T; Connors, M E; Barile, F; Lunardon, M; Orzan, G; Wielanek, D H; Servais, E L J; Patecki, M; Passfeld, A; Zhelezov, S; Morkin, A; Zabelin, O; Hobbs, D A; Gul, M; Ramello, L; Van den brink, A; Bertens, R A; Lodato, D F; Haque, M R; Kim, E J; Coccetti, F; Margagliotti, G V; Rauf, A W; Sandoval, A; Berger, M E; Munzer, R H; Qvigstad, H; Lindal, S; Cervantes jr, M; Kebschull, U W; Engel, H; Karasu uysal, A; Lien, J A; Hess, B A; Calvo villar, E; Augustinus, A; Carena, W; Chochula, P; Chapeland, S; Dobrin, A F; Reidt, F; Bock, F; Festanti, A; Galdames perez, A; Sumbera, M; Averbeck, R P; Garabatos cuadrado, J; Reichelt, P S; Marquard, M; Stachel, J; Wang, Y; Boggild, H; Gulbrandsen, K H; Hansen, J C; Charvet, J F; Shabetai, A; Hadjidakis, C M; Krivda, M; Vertesi, R; Mitra, J; Altini, V; Ferretti, A; Gagliardi, M; Sakata, D; Niida, T; Martinez hernandez, M I; Yang, S; Karpechev, E; Veselovskiy, A; Konevskikh, A; Finogeev, D; 
Fokin, S; Karadzhev, K; Kucheryaev, Y; Plotnikov, V; Ryabinin, M; Golubev, A; Kaplin, V; Ter-minasyan, A; Abramyan, A; Raniwala, S; Hippolyte, B; Strmen, P; Krivan, F; Kalliokoski, T E A; Chang, B; De cataldo, G; Paticchio, V; Fantoni, A; Gomez jimenez, R; Christakoglou, P; Cyz, A; Wilk, G A; Kurashvili, P; Pop, A; Arefiev, V; Batyunya, B; Lioubochits, V; Zryuev, V; Sokolov, M; Patalakha, D; Pinsky, L; Timmins, A R; Petracek, V; Krelina, M; Chattopadhyay, S; Basile, M; Falchieri, D; Miftakhov, N; Garner, R M; Konyushikhin, M; Joseph, N; Srivastava, B K; Cleymans, J W A; Dietel, T; Soramel, F; Pawlak, T J; Kucinski, M; Janik, M A; Surma, K D; Wessels, J P; Riggi, F; Ivanov, A; Selin, I; Budnikov, D; Filchagin, S; Sitta, M; Gheata, M; Danu, A; Peitzmann, T; Reicher, M; Helstrup, H; Subasi, M; Mathis, A M; Nilsson, M S; Rist, J A S; Jena, C; Lara martinez, C E; Vasileiou, M

    2002-01-01

    ALICE is a general-purpose heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma in nucleus-nucleus collisions at the LHC. It currently includes more than 750 physicists and ∼70 institutions in 27 countries. The detector is designed to cope with the highest particle multiplicities anticipated for Pb-Pb reactions (dN/dy ≈ 8000) and it will be operational at the start-up of the LHC. In addition to heavy systems, the ALICE Collaboration will study collisions of lower-mass ions, which are a means of varying the energy density, and protons (both pp and p-nucleus), which provide reference data for the nucleus-nucleus collisions. ALICE consists of a central part, which measures event-by-event hadrons, electrons and photons, and a forward spectrometer to measure muons. The central part, which covers polar angles from 45° to 135° (|η| < 0.9) over the full azimuth, is embedded in the large L3 solenoidal mag...

  9. A Brief Analysis of Large Classroom’s English Teaching Management Skills

    Directory of Open Access Journals (Sweden)

    Weixuan Zhong

    2014-05-01

    Full Text Available The classroom is the basic setting of teaching, where a variety of teaching factors intertwine and form various kinds of connections. Scientific and effective classroom management is a necessary and powerful means of improving teaching quality, and effective English teaching management skills are among the elements of successful large-class teaching. Under the new educational situation, how to organize, regulate and manage large classes so as to develop students' English proficiency within a limited time is very important for improving the management efficiency and teaching quality of English classes.

  10. Part I. A study of the decays D → Kππeν and D → K*πeν. Part II. SLD Cherenkov Ring Imaging Detector development

    International Nuclear Information System (INIS)

    Huber, J.S.

    1992-01-01

    A thesis in two independent halves. Part I. A search for the exclusive semileptonic decay modes D⁺ → K̄ππe⁺νₑ and D⁺ → K̄*πe⁺νₑ is presented using data from the Fermilab photoproduction experiment E691. With good sensitivity, the author observes no signals in the channels D⁺ → K⁻π⁺π⁰e⁺νₑ and D⁺ → K̄⁰π⁺π⁻e⁺νₑ, and sets upper limits that represent only a small fraction of the inclusive semileptonic branching ratio. The experiment was conducted at the Fermilab Tagged Photon Laboratory, using a large acceptance spectrometer with a silicon microvertex detector to extract a large, clean charm sample. Part II. The physics, design, and results of the Stanford Large Detector (SLD) Cherenkov Ring Imaging Detector (CRID) are described. The physics motivation and performance of the SLD CRID, the principles of Cherenkov detection, and a description of the SLD CRID are combined with a detailed description of the production and testing of the mirrors. In addition, results from the engineering run and cosmic ray tests demonstrate the current status of the system

  11. 76 FR 27897 - Security and Safety Zone Regulations, Large Passenger Vessel Protection, Captain of the Port...

    Science.gov (United States)

    2011-05-13

    ... DEPARTMENT OF HOMELAND SECURITY Coast Guard 33 CFR Part 165 [Docket No. USCG-2011-0342] Security and Safety Zone Regulations, Large Passenger Vessel Protection, Captain of the Port Columbia River... will enforce the security and safety zone in 33 CFR 165.1318 for large passenger vessels operating in...

  12. National Ignition Facility Incorporates P2/E2 in Aqueous Parts Cleaning of Optics Hardware

    International Nuclear Information System (INIS)

    Gabor, K

    2001-01-01

    When completed, Lawrence Livermore National Laboratory's (LLNL) National Ignition Facility (NIF) will be the world's largest laser with experimental capabilities applicable to stockpile stewardship, energy research, science and astrophysics. As construction of the conventional facilities nears completion, operations supporting the installation of specialized laser equipment have come online. Playing a critical role in the precision cleaning of mechanical parts from the NIF beamline are three pieces of aqueous cleaning equipment. Housed in the Optics Assembly Building (OAB), adjacent to NIF's laser bay, are the large mechanical parts gross cleaner (LMPGC), the large mechanical parts precision cleaner (LMPPC), and the small mechanical parts gross and precision cleaner (SMPGPC). These aqueous units, designed and built by Sonic Systems, Inc., of Newtown, Pennsylvania, not only accommodate parts that vary greatly in size, weight, geometry, surface finish and material, but also produce cleaned parts that meet the stringent NIF cleanliness standards (MIL-STD-1246C Level 83 for particles and A/10 for non-volatile residue). Each unit was designed with extensive water- and energy-conserving features, and the technology used minimizes hazardous waste generation associated with solvent wipe cleaning, the traditional method for cleaning laser mechanical components. The LMPGC provides preliminary gross cleaning for large mechanical parts. Collection, filtration and reuse of the wash and primary rinse water in the LMPGC limit its routine discharge to the volume of the low-pressure, deionized secondary rinse. After an initial gross cleaning in the LMPGC, a large mechanical part goes to the LMPPC. This piece of equipment, unique because of its size, consists of four 2700-gallon tanks. Parts held securely on specialized metal pallets (jointly weighing up to 1500 pounds) move through the tanks on an automated system. Operators program all movement, speeds and process times to

  13. Large Scale GW Calculations on the Cori System

    Science.gov (United States)

    Deslippe, Jack; Del Ben, Mauro; da Jornada, Felipe; Canning, Andrew; Louie, Steven

    The NERSC Cori system, powered by 9000+ Intel Xeon-Phi processors, represents one of the largest HPC systems for open-science in the United States and the world. We discuss the optimization of the GW methodology for this system, including both node level and system-scale optimizations. We highlight multiple large scale (thousands of atoms) case studies and discuss both absolute application performance and comparison to calculations on more traditional HPC architectures. We find that the GW method is particularly well suited for many-core architectures due to the ability to exploit a large amount of parallelism across many layers of the system. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, as part of the Computational Materials Sciences Program.

  14. Arsenate tolerance in Silene paradoxa does not rely on phytochelatin-dependent sequestration

    International Nuclear Information System (INIS)

    Arnetoli, Miluscia; Vooijs, Riet; Bookum, Wilma ten; Galardi, Francesca; Gonnelli, Cristina; Gabbrielli, Roberto; Schat, Henk; Verkleij, Jos A.C.

    2008-01-01

    Arsenate tolerance, As accumulation and As-induced phytochelatin accumulation were compared in populations of Silene paradoxa, one from a mine site enriched in As, Cu and Zn, the other from an uncontaminated site. The mine population was significantly more arsenate-tolerant. Arsenate uptake and root-to-shoot transport were slightly but significantly higher in the non-mine plants. The difference in uptake was quantitatively insufficient to explain the difference in tolerance between the populations. As accumulation in the roots was similar in both populations, but the mine plants accumulated much less phytochelatins than the non-mine plants. The mean phytochelatin chain length, however, was higher in the mine population, possibly due to a constitutively lower cellular glutathione level. It is argued that the mine plants must possess an arsenic detoxification mechanism other than arsenate reduction and subsequent phytochelatin-based sequestration. This alternative mechanism might explain at least some part of the superior tolerance in the mine plants. - Neither decreased uptake nor phytochelatins seem to play a role in the As tolerance in Silene paradoxa

  15. Titanium Powder Sintering in a Graphite Furnace and Mechanical Properties of Sintered Parts

    Directory of Open Access Journals (Sweden)

    Changzhou Yu

    2017-02-01

    Full Text Available Recent accreditation of titanium powder products for commercial aircraft applications marks a milestone in titanium powder metallurgy. Currently, powder metallurgical titanium production relies primarily on vacuum sintering. This work reports on the feasibility of powder sintering in a non-vacuum furnace and on the tensile properties of the as-sintered Ti. Specifically, we investigated atmospheric sintering of commercially pure (C.P.) titanium in a graphite furnace backfilled with argon and studied the effects of common contaminants (C, O, N) on the sintering densification of titanium. It is found that a severely contaminated porous scale, identified as titanium oxycarbonitride, forms on the surface of the as-sintered titanium. Despite the porous surface, the sintered density in the sample interiors increased with increasing sintering temperature and holding time. Tensile specimens cut from different positions within a large sintered cylinder reveal different tensile properties, strongly dependent on the impurity level, mainly carbon and oxygen. Depending on where the specimen is taken from the sintered compact, the ultimate tensile strength varied from 300 to 580 MPa. An average tensile elongation of 5% to 7% was observed. Largely depending on the interstitial contents, the fracture modes ranged from typical brittle intergranular fracture to typical ductile fracture.

  16. Control problems in very large accelerators

    International Nuclear Information System (INIS)

    Crowley-Milling, M.C.

    1985-06-01

    There is no fundamental difference of kind in the control requirements between a small and a large accelerator since they are built of the same types of components, which individually have similar control inputs and outputs. The main difference is one of scale; the large machine has many more components of each type, and the distances involved are much greater. Both of these factors must be taken into account in determining the optimum way of carrying out the control functions. Small machines should use standard equipment and software for control as much as possible, as special developments for small quantities cannot normally be justified if all costs are taken into account. On the other hand, the very great number of devices needed for a large machine means that, if special developments can result in simplification, they may make possible an appreciable reduction in the control equipment costs. It is the purpose of this report to look at the special control problems of large accelerators, which the author shall arbitrarily define as those with a length of circumference in excess of 10 km, and point out where special developments, or the adoption of developments from outside the accelerator control field, can be of assistance in minimizing the cost of the control system. Most of the first part of this report was presented as a paper to the 1985 Particle Accelerator Conference. It has now been extended to include a discussion on the special case of the controls for the SSC

  17. Spatiotemporal Fractionation Schemes for Irradiating Large Cerebral Arteriovenous Malformations

    Energy Technology Data Exchange (ETDEWEB)

    Unkelbach, Jan, E-mail: junkelbach@mgh.harvard.edu [Department of Radiation Oncology, Massachusetts General Hospital, Boston, Massachusetts (United States); Bussière, Marc R. [Department of Radiation Oncology, Massachusetts General Hospital, Boston, Massachusetts (United States); Chapman, Paul H. [Department of Neurosurgery, Massachusetts General Hospital, Boston, Massachusetts (United States); Loeffler, Jay S.; Shih, Helen A. [Department of Radiation Oncology, Massachusetts General Hospital, Boston, Massachusetts (United States)

    2016-07-01

    Purpose: To optimally exploit fractionation effects in the context of radiosurgery treatments of large cerebral arteriovenous malformations (AVMs). In current practice, fractionated treatments divide the dose evenly into several fractions, which generally leads to low obliteration rates. In this work, we investigate the potential benefit of delivering distinct dose distributions in different fractions. Methods and Materials: Five patients with large cerebral AVMs were reviewed and replanned for intensity modulated arc therapy delivered with conventional photon beams. Treatment plans allowing for different dose distributions in all fractions were obtained by performing treatment plan optimization based on the cumulative biologically effective dose delivered at the end of treatment. Results: We show that distinct treatment plans can be designed for different fractions, such that high single-fraction doses are delivered to complementary parts of the AVM. All plans create a similar dose bath in the surrounding normal brain and thereby exploit the fractionation effect. This partial hypofractionation in the AVM along with fractionation in normal brain achieves a net improvement of the therapeutic ratio. We show that a biological dose reduction of approximately 10% in the healthy brain can be achieved compared with reference treatment schedules that deliver the same dose distribution in all fractions. Conclusions: Boosting complementary parts of the target volume in different fractions may provide a therapeutic advantage in fractionated radiosurgery treatments of large cerebral AVMs. The strategy allows for a mean dose reduction in normal brain that may be valuable for a patient population with an otherwise normal life expectancy.
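
    The optimization is formulated on cumulative biologically effective dose; assuming the standard linear-quadratic BED model (our assumption, not a formula quoted from the paper), the mechanism can be written compactly:

```latex
% Linear-quadratic biologically effective dose (BED) for n fractions
% delivering voxel doses d_1, ..., d_n:
\[
  \mathrm{BED} = \sum_{i=1}^{n} d_{i} \left( 1 + \frac{d_{i}}{\alpha/\beta} \right).
\]
% For a fixed total physical dose, the quadratic term rewards
% concentrating dose into few fractions (large d_i inside parts of the
% AVM) and stays small where the dose is spread evenly over all
% fractions (small d_i in normal brain): this asymmetry is what the
% distinct-per-fraction plans exploit.
```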

  18. Piotron at SIN - a large superconducting double torus spectrometer

    International Nuclear Information System (INIS)

    Horvath, I.; Vecsey, G.; Zellweger, J.

    1981-01-01

    A test facility for radiation therapy with negative π-mesons was constructed in Switzerland. The facility is a large double torus spectrometer similar to the Stanford design. For variation of stopping depth different momenta are selected by variation of the magnetic field. Superconducting ac magnets are necessary for tumor scanning and represent a major part of such a facility. Main design features and performance are reported. 10 refs

  19. Comments on correlation functions of large spin operators and null polygonal Wilson loops

    Energy Technology Data Exchange (ETDEWEB)

    Cardona, Carlos A., E-mail: cargicar@iafe.uba.ar [Instituto de Astronomia y Fisica del Espacio (CONICET-UBA), C.C. 67 - Suc. 28, 1428 Buenos Aires (Argentina); Physics Department, University of Buenos Aires, CONICET, Ciudad Universitaria, 1428 Buenos Aires (Argentina)

    2013-02-11

    We discuss the relation between correlation functions of twist-two large spin operators and expectation values of Wilson loops along light-like trajectories. After presenting some heuristic field theoretical arguments suggesting this relation, we compute the divergent part of the correlator in the limit of large 't Hooft coupling and large spins, using a semi-classical world-sheet which asymptotically looks like a GKP rotating string. We show this diverges as expected from the expectation value of a null Wilson loop, namely, as (ln μ⁻²)², μ being a cut-off of the theory.

  20. Comments on correlation functions of large spin operators and null polygonal Wilson loops

    International Nuclear Information System (INIS)

    Cardona, Carlos A.

    2013-01-01

    We discuss the relation between correlation functions of twist-two large spin operators and expectation values of Wilson loops along light-like trajectories. After presenting some heuristic field theoretical arguments suggesting this relation, we compute the divergent part of the correlator in the limit of large 't Hooft coupling and large spins, using a semi-classical world-sheet which asymptotically looks like a GKP rotating string. We show this diverges as expected from the expectation value of a null Wilson loop, namely, as (ln μ⁻²)², μ being a cut-off of the theory.

  1. Retrofitting adjustable speed drives for large induction motors

    International Nuclear Information System (INIS)

    Wuestefeld, M.R.; Merriam, C.H.; Porter, N.S.

    2004-01-01

    Adjustable speed drives (ASDs) are used in many power plants to control process flow by varying the speed of synchronous and induction motors. In applications where the flow requirements vary significantly, ASDs reduce energy and maintenance requirements when compared with drag valves, dampers or other methods to control flow. Until recently, high horsepower ASDs were not available for induction motors. However, advances in power electronics technology have demonstrated the reliability and cost effectiveness of ASDs for large horsepower induction motors. Emphasis on reducing operation and maintenance costs and increasing the capacity factor of nuclear power plants has led some utilities to consider replacing flow control devices in systems powered by large induction motors with ASDs. ASDs provide a high degree of reliability and significant energy savings in situations where full flow operation is not needed for a substantial part of the time. This paper describes the basic adjustable speed drive technologies available for large induction motor applications, ASD operating experience and retrofitting ASDs to replace the existing GE Boiling Water Reactor recirculation flow control system
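
    The energy argument rests on the usual pump/fan affinity laws, a textbook approximation rather than figures from the paper: for such loads, shaft power scales roughly with the cube of speed, so slowing the motor beats throttling the flow at full speed. A back-of-the-envelope example:

```python
# Back-of-the-envelope illustration of ASD energy savings on pump/fan
# loads (standard affinity laws; all numbers are hypothetical).

rated_power_kw = 5000.0      # hypothetical large induction motor
flow_fraction  = 0.70        # plant needs only 70% flow much of the time

asd_power_kw = rated_power_kw * flow_fraction ** 3   # speed ~ flow
print(f"ASD power at 70% flow: {asd_power_kw:.0f} kW")        # ~1715 kW
print(f"approximate saving vs. full-speed throttling: "
      f"{rated_power_kw - asd_power_kw:.0f} kW")
```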

  2. Survey of large protein complexes in D. vulgaris reveals great structural diversity

    Energy Technology Data Exchange (ETDEWEB)

    Han, B.-G.; Dong, M.; Liu, H.; Camp, L.; Geller, J.; Singer, M.; Hazen, T. C.; Choi, M.; Witkowska, H. E.; Ball, D. A.; Typke, D.; Downing, K. H.; Shatsky, M.; Brenner, S. E.; Chandonia, J.-M.; Biggin, M. D.; Glaeser, R. M.

    2009-08-15

    An unbiased survey has been made of the stable, most abundant multi-protein complexes in Desulfovibrio vulgaris Hildenborough (DvH) that are larger than Mr ≈ 400 k. The quaternary structures for 8 of the 16 complexes purified during this work were determined by single-particle reconstruction of negatively stained specimens, a success rate ≈10 times greater than that of previous 'proteomic' screens. In addition, the subunit compositions and stoichiometries of the remaining complexes were determined by biochemical methods. Our data show that the structures of only two of these large complexes, out of the 13 in this set that have recognizable functions, can be modeled with confidence based on the structures of known homologs. These results indicate that there is significantly greater variability in the way that homologous prokaryotic macromolecular complexes are assembled than has generally been appreciated. As a consequence, we suggest that relying solely on previously determined quaternary structures for homologous proteins may not be sufficient to properly understand their role in another cell of interest.

  3. The Event Communication, a Vector of Efficiency for Large Moroccan Companies

    Directory of Open Access Journals (Sweden)

    Najwa El Omari

    2016-06-01

    Full Text Available Event communication aims to give another dimension to a company or brand by taking it out of its daily routine and developing relations with its target public around their centres of interest: sharing the same passions, letting a group live an emotion together, federating and creating links, because today communication needs a more emotional and more authentic component. For some years now, event communication seems to have been "revisited" by companies and appears to stand out as an alternative to media and other more traditional tools. For proponents of relationship marketing, this communication delivers "a social message which affects the spectator or the listener in their aspiration to be part of a social, sports or artistic community" (Perlstein and Picket, 1985). We therefore present our research and try to answer the following problem: what is the impact of event communication on large Moroccan companies, independently of any confounding variable? The objective of our research is to clarify the notions surrounding event communication and, especially, to evaluate its added value for the efficiency of large Moroccan companies. To address the questions derived from this problem, our research consists of a first, theoretical part built around a set of concepts, and a second part devoted to an empirical study.

  4. Summary record of the topical session at WPDD-10: Management of large components from decommissioning to storage and disposal, 18-19 November 2009

    International Nuclear Information System (INIS)

    Dutzer, Michel

    2010-01-01

    At its tenth meeting, the WPDD held a topical session on Management of Large Components from Decommissioning to Storage and Disposal. The topical session was organised by a new task group of the WPDD that recently began work on this topic. The group is aiming to prepare a technical guide that provides a methodology to assess different management options and facilitates involvement of the different interested parties in the process of selecting the preferred management option. This report consists of three parts: Part 1 presents the main outcomes of the topical session (summary of presentations and discussions, and the rapporteur's report); Part 2 presents the agenda of the topical session; and Part 3 is the list of participants

  5. Using Flipped Classroom Approach to Explore Deep Learning in Large Classrooms

    Science.gov (United States)

    Danker, Brenda

    2015-01-01

    This project used two Flipped Classroom approaches to stimulate deep learning in large classrooms during the teaching of a film module as part of a Diploma in Performing Arts course at Sunway University, Malaysia. The flipped classes utilized either a blended learning approach where students first watched online lectures as homework, and then…

  6. A Single Sex Pheromone Receptor Determines Chemical Response Specificity of Sexual Behavior in the Silkmoth Bombyx mori

    OpenAIRE

    Sakurai, Takeshi; Mitsuno, Hidefumi; Haupt, Stephan Shuichi; Uchino, Keiro; Yokohari, Fumio; Nishioka, Takaaki; Kobayashi, Isao; Sezutsu, Hideki; Tamura, Toshiki; Kanzaki, Ryohei

    2011-01-01

    In insects and other animals, intraspecific communication between individuals of the opposite sex is mediated in part by chemical signals called sex pheromones. In most moth species, male moths rely heavily on species-specific sex pheromones emitted by female moths to identify and orient towards an appropriate mating partner among a large number of sympatric insect species. The silkmoth, Bombyx mori, utilizes the simplest possible pheromone system, in which a single pheromone component, (E, Z...

  7. Working, declarative and procedural memory in specific language impairment

    OpenAIRE

    Lum, Jarrad A.G.; Conti-Ramsden, Gina; Page, Debra; Ullman, Michael T.

    2012-01-01

    According to the Procedural Deficit Hypothesis (PDH), abnormalities of brain structures underlying procedural memory largely explain the language deficits in children with specific language impairment (SLI). These abnormalities are posited to result in core deficits of procedural memory, which in turn explain the grammar problems in the disorder. The abnormalities are also likely to lead to problems with other, non-procedural functions, such as working memory, that rely at least partly on the...

  8. A highly efficient multi-core algorithm for clustering extremely large datasets

    Directory of Open Access Journals (Sweden)

    Kraus Johann M

    2010-04-01

    Full Text Available Abstract Background In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies. This demand is likely to increase. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches for parallelizing algorithms largely rely on network communication protocols connecting and requiring multiple computers. One answer to this problem is to utilize the intrinsic capabilities in current multi-core hardware to distribute the tasks among the different cores of one computer. Results We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms based on the design principles of transactional memory for clustering gene expression microarray type data and categorical SNP data. Our new shared-memory parallel algorithms prove to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis employing repeated runs with slightly changed parameters. Computation speed of our Java-based algorithm was increased by a factor of 10 for large data sets while preserving computational accuracy compared to single-core implementations and a recently published network-based parallelization. Conclusions Most desktop computers and even notebooks provide at least dual-core processors. Our multi-core algorithms show that using modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on the laboratory computer.
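
    The structure of such a shared-memory parallelization, with the distance/assignment step fanned out across cores, can be sketched as follows; this Python/multiprocessing toy only loosely mirrors the authors' Java and transactional-memory design.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

# Simplified sketch of a parallel k-means: the assignment step dominates
# the cost and is embarrassingly parallel, so chunks of rows go to
# separate cores; the centroid update stays on the main process.

def assign_chunk(args):
    chunk, centers = args
    d = ((chunk[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return d.argmin(1)

def parallel_kmeans(X, k=4, iters=20, workers=4):
    rng = np.random.default_rng(0)
    centers = X[rng.choice(len(X), k, replace=False)]
    chunks = np.array_split(X, workers)
    with ProcessPoolExecutor(workers) as pool:
        for _ in range(iters):
            labels = np.concatenate(list(
                pool.map(assign_chunk, [(c, centers) for c in chunks])))
            for j in range(k):
                if (labels == j).any():
                    centers[j] = X[labels == j].mean(0)
    return centers, labels

if __name__ == "__main__":
    X = np.random.default_rng(1).normal(size=(20_000, 10))
    centers, labels = parallel_kmeans(X)
    print(centers.shape, np.bincount(labels))
```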

  9. Phenomenology from SIDIS and e+e- multiplicities: multiplicities and phenomenology - part I

    Science.gov (United States)

    Bacchetta, Alessandro; Echevarria, Miguel G.; Radici, Marco; Signori, Andrea

    2015-01-01

    This study is part of a project to investigate the transverse momentum dependence in parton distribution and fragmentation functions, analyzing (semi-)inclusive high-energy processes within a proper QCD framework. We calculate the transverse-momentum-dependent (TMD) multiplicities for e+e- annihilation into two hadrons (considering different combinations of pions and kaons) aiming to investigate the impact of intrinsic and radiative partonic transverse momentum and their mixing with flavor. Different descriptions of the non-perturbative evolution kernel (see, e.g., Refs. [1-5]) are available on the market and there are 200 sets of flavor configurations for the unpolarized TMD fragmentation functions (FFs) resulting from a Monte Carlo fit of Semi-Inclusive Deep-Inelastic Scattering (SIDIS) data at Hermes (see Ref. [6]). We build our predictions of e+e- multiplicities relying on this rich phenomenology. The comparison of these calculations with future experimental data (from Belle and BaBar collaborations) will shed light on non-perturbative aspects of hadron structure, opening important insights into the physics of spin, flavor and momentum structure of hadrons.

  10. Large-spored Alternaria pathogens in section Porri disentangled

    OpenAIRE

    Woudenberg, J.H.C.; Truter, M.; Groenewald, J.Z.; Crous, P.W.

    2014-01-01

    The omnipresent fungal genus Alternaria was recently divided into 24 sections based on molecular and morphological data. Alternaria sect. Porri is the largest section, containing almost all Alternaria species with medium to large conidia and long beaks, some of which are important plant pathogens (e.g. Alternaria porri, A. solani and A. tomatophila). We constructed a multi-gene phylogeny on parts of the ITS, GAPDH, RPB2, TEF1 and Alt a 1 gene regions, which, supplemented with morphological an...

  11. Global optimization of truss topology with discrete bar areas-Part II: Implementation and numerical results

    DEFF Research Database (Denmark)

    Achtziger, Wolfgang; Stolpe, Mathias

    2009-01-01

    we use the theory developed in Part I to design a convergent nonlinear branch-and-bound method tailored to solve large-scale instances of the original discrete problem. The problem formulation and the needed theoretical results from Part I are repeated such that this paper is self-contained. We focus...... the largest discrete topology design problems solved by means of global optimization....

  12. Product management of making large pieces through Rapid Prototyping PolyJet® technology

    Science.gov (United States)

    Belgiu, G.; Cărăuşu, C.; Şerban, D.; Turc, C. G.

    2017-08-01

    The rapid prototyping process has already become a classic manufacturing process for parts and assemblies, whether polymeric or metallic. Besides the well-known advantages and disadvantages of the process, the use of 3D printers has one great inconvenience: the overall dimensions of the parts are limited. Obviously, there is the possibility to purchase a larger (and more expensive) 3D printer, but there are always larger pieces to be manufactured. One solution to this problem is the splitting of parts into several components that can be manufactured. The component parts can then be assembled into a single piece by known methods such as welding, gluing, screwing, etc. This paper presents our experience in making large pieces on the Stratasys® Objet24 printer, pieces larger than the tray size. The results obtained are valid for any 3D printer using the PolyJet® process.
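
    A minimal helper for the splitting strategy described above, as our own sketch: the tray dimensions below are placeholders, not the Objet24 specification, and the margin parameter is a hypothetical allowance for the joint faces.

```python
import math

# Toy helper: how many segments must an oversized part be cut into so
# that every segment fits the printer tray? All dimensions are in mm
# and purely illustrative.

def split_counts(part_mm, tray_mm, joint_margin_mm=2.0):
    """Segments per axis, leaving a margin on each cut face for gluing."""
    usable = [t - joint_margin_mm for t in tray_mm]
    return [math.ceil(p / u) for p, u in zip(part_mm, usable)]

part = (500.0, 180.0, 120.0)     # hypothetical oversized piece
tray = (230.0, 190.0, 145.0)     # placeholder tray size
counts = split_counts(part, tray)
print(counts, "->", counts[0] * counts[1] * counts[2], "segments")
```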

  13. Small part ultrasound in childhood and adolescence

    Energy Technology Data Exchange (ETDEWEB)

    Wunsch, R., E-mail: R.Wunsch@kinderklinik-datteln.de [Department of Pediatric Radiology, Vestic Children' s Hospital Datteln, University of Witten/Herdecke, Dr.-Friedrich-Steiner-Strasse 5, D-45711 Datteln (Germany); Rohden, L. von, E-mail: l.vonrohden@gmx.de [Department of Pediatric Radiology, Otto-von-Guericke-University Magdeburg, Klinik f. Radiologie und Nuklearmedizin – Kinderradiologie, Leipziger Straße 44, D-39120 Magdeburg (Germany); Cleaveland, R. [Department of Pediatric Radiology, Vestic Children' s Hospital Datteln, University of Witten/Herdecke, Dr.-Friedrich-Steiner-Strasse 5, D-45711 Datteln (Germany); Aumann, V., E-mail: volker.aumann@med.ovgu.de [Department of Pediatric Haematology and Oncology, Otto-von-Guericke-University Magdeburg, Universitätskinderklinik (H 10), Pädiatrische Hämatologie und Onkologie, Leipziger Straße 44, D-39120 Magdeburg (Germany)

    2014-09-15

    Small-part sonography refers to the display of small, near-surface structures using high-frequency linear array transducers. Traditional applications of small-part ultrasound imaging include visualization and differential diagnostic evaluation of unclear superficial bodily structures with solid, liquid or mixed texture, as well as of similar structures in nearly superficial organs such as the thyroid gland and the testes. Further indications in the head and neck region are the assessment of the outer CSF spaces in infants, sonography of the orbit, sonography of the walls of the large neck vessels, and the visualization of superficially situated lymph nodes and neoplasms. Clinical evidence indicates that sonography, which of all imaging modalities has the highest spatial resolution in the millimeter and micrometer range (100–1000 μm), can be considered the technique best suited for examining superficial pathological formations and near-surface organs. In addition, it delivers important information about characteristic, often pathognomonic tissue architecture in pathological processes.

  14. Small part ultrasound in childhood and adolescence

    International Nuclear Information System (INIS)

    Wunsch, R.; Rohden, L. von; Cleaveland, R.; Aumann, V.

    2014-01-01

    Small-part sonography refers to the display of small, near-surface structures using high-frequency linear array transducers. Traditional applications of small-part ultrasound imaging include the visualization and differential diagnostic evaluation of unclear superficial bodily structures with solid, liquid or mixed texture, as well as similar structures in nearly superficial organs such as the thyroid gland and the testes. Further indications in the head and neck region are the assessment of the outer CSF spaces in infants, sonography of the orbit, sonography of the walls of the large neck vessels, and the visualization of superficially situated lymph nodes and neoplasms. Clinical evidence indicates that sonography, which of all imaging modalities has the highest spatial resolution, in the millimeter and micrometer range (100–1000 μm), is the technique best suited for examining superficial pathological formations and near-surface organs. In addition, it delivers important information about characteristic, often pathognomonic tissue architecture in pathological processes.

  15. Entropy-stable summation-by-parts discretization of the Euler equations on general curved elements

    Science.gov (United States)

    Crean, Jared; Hicken, Jason E.; Del Rey Fernández, David C.; Zingg, David W.; Carpenter, Mark H.

    2018-03-01

    We present and analyze an entropy-stable semi-discretization of the Euler equations based on high-order summation-by-parts (SBP) operators. In particular, we consider general multidimensional SBP elements, building on and generalizing previous work with tensor-product discretizations. In the absence of dissipation, we prove that the semi-discrete scheme conserves entropy; significantly, this proof of nonlinear L2 stability does not rely on integral exactness. Furthermore, interior penalties can be incorporated into the discretization to ensure that the total (mathematical) entropy decreases monotonically, producing an entropy-stable scheme. SBP discretizations with curved elements remain accurate, conservative, and entropy stable provided the mapping Jacobian satisfies the discrete metric invariants; polynomial mappings at most one degree higher than the SBP operators automatically satisfy the metric invariants in two dimensions. In three dimensions, we describe an elementwise optimization that leads to suitable Jacobians in the case of polynomial mappings. The properties of the semi-discrete scheme are verified and investigated using numerical experiments.
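
    For readers new to this literature, the SBP property referenced above is the discrete mimicry of integration by parts. With H a symmetric positive-definite norm matrix, D = H⁻¹Q the difference operator and E the boundary operator, the defining relations read (standard notation, not specific to this paper):

```latex
D = H^{-1} Q, \qquad Q + Q^{\mathsf{T}} = E,
\qquad\text{so that}\qquad
u^{\mathsf{T}} H (D v) + (D u)^{\mathsf{T}} H v = u^{\mathsf{T}} E v,
% the discrete analogue of integration by parts:
\quad\text{mimicking}\quad
\int_{\Omega} u\, \partial_x v \, dx + \int_{\Omega} (\partial_x u)\, v \, dx
  = \oint_{\partial\Omega} u\, v\, n_x \, ds .
```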

  16. Methods for preparation of mixtures of gases in air at the parts-per-billion to parts-per-million concentration range for calibration of monitors

    International Nuclear Information System (INIS)

    Karpas, Z.; Melloul, S.; Pollevoy, Y.; Matmor, A.

    1992-05-01

    Static and dynamic methods for generating mixtures of gases and vapors in air at the parts-per-billion (ppb) to parts-per-million (ppm) concentration range were surveyed. The dynamic methods include: a dynamic flow and mixing system; injection of samples into large volumes of air; exponential dilution; permeation and diffusion tubes; and generation of the target gas by chemical reaction or electrolysis. The static methods include preparation of mixtures by weighing the components, by volumetric mixing, and by the partial pressure method. The principles governing the use of these methods in the appropriate applications are discussed, and examples are given in which they were used to calibrate an ion mobility spectrometer (IMS). (authors)
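
    Of the dynamic methods listed, exponential dilution has a particularly simple closed form: a well-mixed vessel of volume V flushed with clean diluent at flow rate Q decays an initial concentration C₀ as C(t) = C₀·exp(−Qt/V). A minimal sketch with invented numbers:

```python
import math

def exp_dilution_ppm(c0_ppm, flow_lpm, volume_l, t_min):
    """Concentration in a well-mixed exponential-dilution flask after t minutes."""
    return c0_ppm * math.exp(-flow_lpm * t_min / volume_l)

# Diluting a 10 ppm mixture in a 2 L flask flushed at 0.5 L/min:
for t in (0, 5, 10, 20):
    print(t, "min:", round(exp_dilution_ppm(10.0, 0.5, 2.0, t), 3), "ppm")
```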

  17. Large liquid-scintillator trackers for neutrino experiments

    CERN Document Server

    Benussi, L; D'Ambrosio, N; Déclais, Y; Dupraz, J P; Fabre, Jean-Paul; Fanti, V; Forton, E; Frekers, D; Frenkel, A; Girerd, C; Golovkin, S V; Grégoire, G; Harrison, K; Jonkmans, G; Jonsson, P; Katsanevas, S; Kreslo, I; Marteau, J; Martellotti, G; Martínez, S; Medvedkov, A M; Moret, G; Niwa, K; Novikov, V; Van Beek, G; Penso, G; Vasilchenko, V G; Vuilleumier, J L; Wilquet, G; Zucchelli, P; Kreslo, I E

    2002-01-01

    Results are given on tests of large particle trackers for the detection of neutrino interactions in long-baseline experiments. Module prototypes have been assembled using TiO₂-doped polycarbonate panels. These were subdivided into cells of ~1 cm² cross section and 6 m length, filled with liquid scintillator. A wavelength-shifting fibre inserted in each cell captured a part of the scintillation light emitted when a cell was traversed by an ionizing particle. Two different fibre-readout systems have been tested: an optoelectronic chain comprising an image intensifier and an Electron Bombarded CCD (EBCCD); and a hybrid photodiode (HPD). New, low-cost liquid scintillators have been investigated for applications in large underground detectors. Testbeam studies have been performed using a commercially available liquid scintillator. The number of detected photoelectrons for minimum-ionizing particles crossing a module at different distances from the fibre readout end was 6 to 12 with the EBCCD chain and ...

  18. A Cost-Effective Two-Part Experiment for Teaching Introductory Organic Chemistry Techniques

    Science.gov (United States)

    Sadek, Christopher M.; Brown, Brenna A.; Wan, Hayley

    2011-01-01

    This two-part laboratory experiment is designed to be a cost-effective method for teaching basic organic laboratory techniques (recrystallization, thin-layer chromatography, column chromatography, vacuum filtration, and melting point determination) to large classes of introductory organic chemistry students. Students are exposed to different…

  19. High-Luminosity Large Hadron Collider (HL-LHC) Technical Design Report V. 0.1

    CERN Document Server

    Béjar Alonso I.; Brüning O.; Fessia P.; Lamont M.; Rossi L.; Tavian L.

    2017-01-01

    The Large Hadron Collider (LHC) is one of the largest scientific instruments ever built. Since opening up a new energy frontier for exploration in 2010, it has gathered a global user community of about 7,000 scientists working in fundamental particle physics and the physics of hadronic matter at extreme temperature and density. To sustain and extend its discovery potential, the LHC will need a major upgrade in the 2020s. This will increase its instantaneous luminosity (rate of collisions) by a factor of five beyond the original design value and the integrated luminosity (total collisions created) by a factor ten. The LHC is already a highly complex and exquisitely optimised machine so this upgrade must be carefully conceived and will require about ten years to implement. The new configuration, known as High Luminosity LHC (HL-LHC), relies on a number of key innovations that push accelerator technology beyond its present limits. Among these are cutting-edge 11-12 tesla superconducting magnets, compact superconduc...

  20. High Luminosity Large Hadron Collider A description for the European Strategy Preparatory Group

    CERN Document Server

    Rossi, L

    2012-01-01

    The Large Hadron Collider (LHC) is the largest scientific instrument ever built. It has been exploring the new energy frontier since 2009, gathering a global user community of 7,000 scientists. It will remain the most powerful accelerator in the world for at least two decades, and its full exploitation is the highest priority in the European Strategy for Particle Physics, adopted by the CERN Council and integrated into the ESFRI Roadmap. To extend its discovery potential, the LHC will need a major upgrade around 2020 to increase its luminosity (rate of collisions) by a factor of 10 beyond its design value. As a highly complex and optimized machine, such an upgrade of the LHC must be carefully studied and requires about 10 years to implement. The novel machine configuration, called High Luminosity LHC (HL-LHC), will rely on a number of key innovative technologies, representing exceptional technological challenges, such as cutting-edge 13 tesla superconducting magnets, very compact and ultra-precise superconduc...

  1. Pure crystal orientation and anisotropic charge transport in large-area hybrid perovskite films

    KAUST Repository

    Cho, Nam Chul

    2016-11-10

    Controlling crystal orientations and macroscopic morphology is vital to develop the electronic properties of hybrid perovskites. Here we show that a large-area, orientationally pure crystalline (OPC) methylammonium lead iodide (MAPbI3) hybrid perovskite film can be fabricated using a thermal-gradient-assisted directional crystallization method that relies on the sharp liquid-to-solid transition of MAPbI3 from ionic liquid solution. We find that the OPC films spontaneously form periodic microarrays that are distinguishable from general polycrystalline perovskite materials in terms of their crystal orientation, film morphology and electronic properties. X-ray diffraction patterns reveal that the film is strongly oriented in the (112) and (200) planes parallel to the substrate. This film is structurally confined by directional crystal growth, inducing intense anisotropy in charge transport. In addition, the low trap-state density (7.9 × 10¹³ cm⁻³) leads to strong amplified stimulated emission. This ability to control crystal orientation and morphology could be widely adopted in optoelectronic devices.

  2. Large-scale production of diesel-like biofuels - process design as an inherent part of microorganism development.

    Science.gov (United States)

    Cuellar, Maria C; Heijnen, Joseph J; van der Wielen, Luuk A M

    2013-06-01

    Industrial biotechnology is playing an important role in the transition to a bio-based economy. Currently, however, industrial implementation is still modest, despite the advances made in microorganism development. Given that the fuels and commodity chemicals sectors are characterized by tight economic margins, we propose to address overall process design and efficiency at the start of bioprocess development. While current microorganism development is targeted at product formation and product yield, addressing process design at the start of bioprocess development means that microorganism selection can also be extended to other critical targets for process technology and process scale implementation, such as enhancing cell separation or increasing cell robustness at operating conditions that favor the overall process. In this paper we follow this approach for the microbial production of diesel-like biofuels. We review current microbial routes with both oleaginous and engineered microorganisms. For the routes leading to extracellular production, we identify the process conditions for large scale operation. The process conditions identified are finally translated to microorganism development targets. We show that microorganism development should be directed at anaerobic production, increasing robustness at extreme process conditions and tailoring cell surface properties. At the same time, novel process configurations integrating fermentation and product recovery, cell reuse and low-cost technologies for product separation are mandatory. This review provides a state-of-the-art summary of the latest challenges in large-scale production of diesel-like biofuels. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Hierarchical and Matrix Structures in a Large Organizational Email Network: Visualization and Modeling Approaches

    OpenAIRE

    Sims, Benjamin H.; Sinitsyn, Nikolai; Eidenbenz, Stephan J.

    2014-01-01

    This paper presents findings from a study of the email network of a large scientific research organization, focusing on methods for visualizing and modeling organizational hierarchies within large, complex network datasets. In the first part of the paper, we find that visualization and interpretation of complex organizational network data is facilitated by integration of network data with information on formal organizational divisions and levels. By aggregating and visualizing email traffic b...

  4. 3D knitting using large circular knitting machines

    Science.gov (United States)

    Simonis, K.; Gloy, Y.-S.; Gries, T.

    2017-10-01

    For the first time, 3D structures can now be produced on large circular knitting machines. To date, such structures could only be manufactured on flat knitting machines. Since large circular knitting machines operate much faster, this development increases the overall productivity of 3D knits. It thus opens up a totally new avenue for cost reduction for applications in sportswear, upholstery, and the aerospace and automotive industries. The following paper presents the state of the art regarding the realisation of three-dimensional fabrics. In addition, current knitting technologies for three-dimensional formations are explained. Results of pre-trials carried out at the Institut für Textiltechnik of RWTH Aachen University, explaining the change in the knitted fabrics' dimensions, are presented. Finally, a description of the 3D knit prototype developed is provided as part of this paper.

  5. Active power reserves evaluation in large scale PVPPs

    DEFF Research Database (Denmark)

    Crăciun, Bogdan-Ionut; Kerekes, Tamas; Sera, Dezso

    2013-01-01

    The present trend of investing in renewable ways of producing electricity to the detriment of conventional fossil fuel-based plants will lead to a certain point where these plants have to provide ancillary services and contribute to overall grid stability. Photovoltaic (PV) power has the fastest...... growth among all renewable energies and has managed to reach high penetration levels, creating instabilities which at the moment are corrected by conventional generation. This paradigm will change in future scenarios where most of the power is supplied by large scale renewable plants and parts...... of the ancillary services have to be shared by the renewable plants. The main focus of the proposed paper is to technically and economically analyze the possibility of having active power reserves in large scale PV power plants (PVPPs) without any auxiliary storage equipment. The provided reserves should......

  6. Plan selection in Medicare Part D: Evidence from administrative data

    Science.gov (United States)

    Heiss, Florian; Leive, Adam; McFadden, Daniel; Winter, Joachim

    2014-01-01

    We study the Medicare Part D prescription drug insurance program as a bellwether for designs of private, non-mandatory health insurance markets, focusing on the ability of consumers to evaluate and optimize their choices of plans. Our analysis of administrative data on medical claims in Medicare Part D suggests that fewer than 25 percent of individuals enroll in plans that are ex ante as good as the least cost plan specified by the Plan Finder tool made available to seniors by the Medicare administration, and that consumers on average have expected excess spending of about $300 per year, or about 15 percent of expected total out-of-pocket cost for drugs and Part D insurance. These numbers are hard to reconcile with decision costs alone; it appears that unless a sizeable fraction of consumers place large values on plan features other than cost, they are not optimizing effectively. PMID:24308882

  7. Differences in the occurrence and characteristics of injuries between full-time and part-time dancers.

    Science.gov (United States)

    Vassallo, Amy Jo; Pappas, Evangelos; Stamatakis, Emmanuel; Hiller, Claire E

    2018-01-01

    Professional dancers are at significant risk of injury due to the physical demands of their career. Despite their high numbers, the experience of injury in freelance or part-time dancers is not well understood. Therefore, the aim of this study was to examine the occurrence and characteristics of injury in part-time compared with full-time Australian professional dancers. Data were collected using a cross-sectional survey distributed to employees of small and large dance companies and freelance dancers in Australia. Statistical comparisons between full-time and part-time dancer demographics, dance training, injury prevalence and characteristics were made using χ², two-tailed Fisher's exact tests, independent t-tests and Mann-Whitney U tests. A total of 89 full-time and 57 part-time dancers were included for analysis. A higher proportion of full-time dancers (79.8%) than part-time dancers (63.2%) experienced an injury that impacted on their ability to dance in the past 12 months (p=0.035). Injury characteristics were similar between groups, with fatigue being the most frequently cited contributing factor. Part-time dancers took longer to seek treatment, while a higher proportion of full-time dancers were unable to dance in any capacity following their injury. More full-time dancers sustained an injury in the past 12 months and were unable to dance in any capacity following their injury. However, injuries still commonly occurred in part-time dancers without necessarily a large volume of dance activity. Part-time dancers often access general community clinicians for treatment, who may need additional education to give practical advice on appropriate return to dance.

  8. Prediction Model of Machining Failure Trend Based on Large Data Analysis

    Science.gov (United States)

    Li, Jirong

    2017-12-01

    Machining is highly complex, with strong coupling and many control factors, and is therefore prone to failure. To improve the accuracy of fault detection in large mechanical equipment, fault trend prediction for machining requires a prediction model built on fault data. The approach characterizes machining fault data using genetic-algorithm-seeded K-means clustering and extracts features that reflect the correlation dimension of faults, together with the spectral characteristics of abnormal vibration during the machining of complex mechanical parts. Abnormal-vibration features are extracted by multi-component spectral decomposition and Hilbert-based empirical mode decomposition; the decomposition results populate the knowledge base of an intelligent expert system, which is combined with large-data analysis methods to predict machining fault trends. The simulation results show that this method predicts fault trends in mechanical machining with good accuracy and judges faults in the machining process reliably, so it has good application value for analysis and fault diagnosis in the machining process.
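
    The original data are not available, but the Hilbert-envelope step named in the abstract is compact enough to sketch: the envelope spectrum of an amplitude-modulated vibration signal exposes the modulation (fault) frequency. A synthetic example (assumed 10 kHz sampling; not the authors' code):

```python
import numpy as np
from scipy.signal import hilbert

fs = 10_000  # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
# Synthetic vibration: 2 kHz carrier amplitude-modulated at a 120 Hz fault frequency.
signal = (1 + 0.5 * np.sin(2 * np.pi * 120 * t)) * np.sin(2 * np.pi * 2000 * t)

envelope = np.abs(hilbert(signal))            # instantaneous amplitude
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(len(envelope), 1 / fs)
print(freqs[spectrum.argmax()])               # ~120 Hz, the modulation (fault) frequency
```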

  9. Technique for simultaneous adjustment of large nuclear data libraries

    International Nuclear Information System (INIS)

    Harris, D.R.; Wilson, W.B.

    1975-01-01

    Adjustment of the nuclear data base to agree with integral observations in design work has been limited in part by problems in the required inversion of matrices. It is shown that this inversion problem can be circumvented and arbitrarily large nuclear data libraries can be adjusted simultaneously when the basic data are uncorrelated. The technique is illustrated by adjusting nuclear data to integral observations made on fast reactor benchmark critical assemblies. 3 tables
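
    The abstract does not spell out the technique, but the general idea of sidestepping a large matrix inversion can be illustrated: with uncorrelated (diagonal-covariance) prior data, a generalized-least-squares adjustment can absorb one integral observation at a time, so each update inverts only a scalar. A sketch under those assumptions, not the authors' actual algorithm:

```python
import numpy as np

def adjust_sequential(x, var, obs):
    """Sequentially adjust parameters x with uncorrelated prior variances `var`
    against integral observations obs = [(a, y, v_y)], each modeled as y ~ a.x.
    Each GLS/Kalman-style update inverts only the scalar a C a^T + v_y."""
    C = np.diag(var.astype(float))
    x = x.astype(float).copy()
    for a, y, v_y in obs:
        a = np.asarray(a, dtype=float)
        s = a @ C @ a + v_y          # scalar innovation variance
        k = (C @ a) / s              # gain vector
        x += k * (y - a @ x)         # adjust parameters toward the observation
        C -= np.outer(k, a @ C)      # rank-one covariance update
    return x, C

x0, v0 = np.array([1.0, 2.0]), np.array([0.04, 0.09])
x1, C1 = adjust_sequential(x0, v0, [(np.array([1.0, 1.0]), 3.2, 0.01)])
print(x1)  # prior predicts 3.0; the integral observation 3.2 pulls both values up
```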

  10. Correction of population stratification in large multi-ethnic association studies.

    Directory of Open Access Journals (Sweden)

    David Serre

    2008-01-01

    Full Text Available The vast majority of genetic risk factors for complex diseases have, taken individually, a small effect on the end phenotype. Population-based association studies therefore need very large sample sizes to detect significant differences between affected and non-affected individuals. Including thousands of affected individuals in a study requires recruitment in numerous centers, possibly from different geographic regions. Unfortunately such a recruitment strategy is likely to complicate the study design and to generate concerns regarding population stratification. We analyzed 9,751 individuals representing three main ethnic groups - Europeans, Arabs and South Asians - that had been enrolled from 154 centers involving 52 countries for a global case/control study of acute myocardial infarction. All individuals were genotyped at 103 candidate genes using 1,536 SNPs selected with a tagging strategy that captures most of the genetic diversity in different populations. We show that relying solely on self-reported ethnicity is not sufficient to exclude population stratification and we present additional methods to identify and correct for stratification. Our results highlight the importance of carefully addressing population stratification and of carefully "cleaning" the sample prior to analyses to obtain stronger signals of association and to avoid spurious results.
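
    The correction methods alluded to typically follow the principal-components approach: compute genome-wide PCs from the genotype matrix and include the leading components as covariates in the association test. A schematic sketch with random stand-in data (not the study's 9,751-sample data set):

```python
import numpy as np
from sklearn.decomposition import PCA
import statsmodels.api as sm

rng = np.random.default_rng(0)
genotypes = rng.integers(0, 3, size=(500, 1000)).astype(float)  # individuals x SNPs
phenotype = rng.integers(0, 2, size=500)                        # case/control status

# Leading principal components of the centered genotype matrix capture ancestry.
pcs = PCA(n_components=10).fit_transform(genotypes - genotypes.mean(axis=0))

snp = genotypes[:, 0]                                 # SNP under test
X = sm.add_constant(np.column_stack([snp, pcs]))      # adjust for ancestry PCs
fit = sm.Logit(phenotype, X).fit(disp=0)
print(fit.params[1], fit.pvalues[1])                  # SNP effect after adjustment
```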

  11. Large scale atmospheric tropical circulation changes and consequences during global warming

    International Nuclear Information System (INIS)

    Gastineau, G.

    2008-01-01

    Changes in the large-scale tropical atmospheric circulation during climate change can have large impacts on human activities. In the first part, the meridional atmospheric tropical circulation was studied in the different coupled models. During climate change we find, on the one hand, that the meridional Hadley circulation and the subtropical jet are significantly shifted poleward and, on the other hand, that the intensity of the tropical circulation weakens. The slowdown of the atmospheric circulation results from the changes in dry static stability affecting the tropical troposphere. Secondly, idealized simulations are used to explain the tropical circulation changes. Ensemble simulations using the LMDZ4 model are set up to study the results of the coupled model IPSLCM4. The weakening of the large-scale tropical circulation and the poleward shift of the Hadley cells are explained by both the uniform change and the meridional gradient change of the sea surface temperature. Then, we use the atmospheric model LMDZ4 in an aqua-planet configuration. The Hadley circulation changes are explained in a simple framework by the required poleward energy transport. In the last part, we focus on the water vapour distribution and feedback in the climate models. The Hadley circulation changes are shown to have a significant impact on the water vapour feedback during climate change. (author)

  12. CT imaging in a large part of the world: what we know and what we can learn

    Energy Technology Data Exchange (ETDEWEB)

    Rehani, Madan M. [Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States)

    2014-10-15

    This paper describes how cooperation among international organizations, as modeled in Europe, can work to improve imaging safety and standards for children throughout the world. This is demonstrated in the mechanisms employed in a large-scale multi-national study on CT imaging safety practices described elsewhere in this issue of Pediatric Radiology. Here we learn approaches through which CT safety standards have been achieved and the international resources available to help in standardizing safety practices in medical imaging. There are unique strengths of the approach in Europe, which has mandatory requirements on member states to facilitate strengthening of radiation protection. Most countries have national regulatory mechanisms for radiation protection in medicine. International organizations play a significant role in supporting projects in lower-resource countries such that a large proportion of radiologic professionals in low-resource countries are trained through assistance by these organizations. Many of these international organizations make it possible for professionals worldwide to download free training material. Collaboration among international organizations and the Image Gently campaign toward consensus with regard to radiation protection can go further than individual opinions in promoting a higher standard of radiation protection around the world. (orig.)

  13. CT imaging in a large part of the world: what we know and what we can learn

    International Nuclear Information System (INIS)

    Rehani, Madan M.

    2014-01-01

    This paper describes how cooperation among international organizations, as modeled in Europe, can work to improve imaging safety and standards for children throughout the world. This is demonstrated in the mechanisms employed in a large-scale multi-national study on CT imaging safety practices described elsewhere in this issue of Pediatric Radiology. Here we learn approaches through which CT safety standards have been achieved and the international resources available to help in standardizing safety practices in medical imaging. There are unique strengths of the approach in Europe, which has mandatory requirements on member states to facilitate strengthening of radiation protection. Most countries have national regulatory mechanisms for radiation protection in medicine. International organizations play a significant role in supporting projects in lower-resource countries such that a large proportion of radiologic professionals in low-resource countries are trained through assistance by these organizations. Many of these international organizations make it possible for professionals worldwide to download free training material. Collaboration among international organizations and the Image Gently campaign toward consensus with regard to radiation protection can go further than individual opinions in promoting a higher standard of radiation protection around the world. (orig.)

  14. Turbulent mixed convection from a large, high temperature, vertical flat surface

    International Nuclear Information System (INIS)

    Evans, G.; Greif, R.; Siebers, D.; Tieszen, S.

    2005-01-01

    Turbulent mixed convection heat transfer at high temperatures and large length scales is an important and seldom studied phenomenon that can represent a significant part of the overall heat transfer in applications ranging from solar central receivers to objects in fires. This work is part of a study to validate turbulence models for predicting heat transfer to or from surfaces at large temperature differences and large length scales. Here, turbulent, three-dimensional, mixed convection heat transfer in air from a large (3 m square) vertical flat surface at high temperatures is studied using two RANS turbulence models: a standard k-ε model and the v2-f model. Predictions for three cases spanning the range of the experiment (Siebers, D.L., Schwind, R.G., Moffat, R.F., 1982. Experimental mixed convection from a large, vertical plate in a horizontal flow. Paper MC13, vol. 3, Proc. 7th Int. Heat Transfer Conf., Munich; Siebers, D.L., 1983. Experimental mixed convection heat transfer from a large, vertical surface in a horizontal flow. PhD thesis, Stanford University), from forced (Gr_H/Re_L² = 0.18) to mixed (Gr_H/Re_L² = 3.06) to natural (Gr_H/Re_L² = ∞) convection, are compared with data. The results show a decrease in the heat transfer coefficient as Gr_H/Re_L² is increased from 0.18 to 3.06, for a free-stream velocity of 4.4 m/s. In the natural convection case, the experimental heat transfer coefficient is approximately constant in the fully turbulent region, whereas the calculated heat transfer coefficients show a slight increase with height. For the three cases studied, the calculated and experimental heat transfer coefficients agree to within 5-35% over most of the surface, with the v2-f model results showing better agreement with the data. Calculated temperature and velocity profiles show good agreement with the data.
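
    The regime parameter quoted in the abstract is easy to evaluate directly; a small sketch with illustrative air properties (the experiment evaluated properties more carefully, so the numbers differ):

```python
def gr_over_re2(u_inf, delta_T, height, length, nu=1.6e-5, beta=1 / 300.0, g=9.81):
    """Ratio Gr_H / Re_L^2 for a heated vertical surface in a horizontal flow.
    Gr uses the plate height H, Re the streamwise length L.
    Small ratio -> forced convection; order one -> mixed; large -> natural."""
    gr = g * beta * delta_T * height**3 / nu**2   # Grashof number
    re = u_inf * length / nu                      # Reynolds number
    return gr / re**2

# Illustrative values: a 3 m square plate, 300 K temperature difference,
# 4.4 m/s free stream, nominal air properties.
print(gr_over_re2(u_inf=4.4, delta_T=300.0, height=3.0, length=3.0))  # ~1.5, mixed
```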

  15. Including investment risk in large-scale power market models

    DEFF Research Database (Denmark)

    Lemming, Jørgen Kjærgaard; Meibom, P.

    2003-01-01

    Long-term energy market models can be used to examine investments in production technologies, however, with market liberalisation it is crucial that such models include investment risks and investor behaviour. This paper analyses how the effect of investment risk on production technology selection...... can be included in large-scale partial equilibrium models of the power market. The analyses are divided into a part about risk measures appropriate for power market investors and a more technical part about the combination of a risk-adjustment model and a partial-equilibrium model. To illustrate...... the analyses quantitatively, a framework based on an iterative interaction between the equilibrium model and a separate risk-adjustment module was constructed. To illustrate the features of the proposed modelling approach we examined how uncertainty in demand and variable costs affects the optimal choice...

  16. Part-whole reasoning in medical ontologies revisited--introducing SEP triplets into classification-based description logics.

    Science.gov (United States)

    Schulz, S; Romacker, M; Hahn, U

    1998-01-01

    The development of powerful and comprehensive medical ontologies that support formal reasoning on a large scale is one of the key requirements for clinical computing in the next millennium. Taxonomic medical knowledge, a major portion of these ontologies, is mainly characterized by generalization and part-whole relations between concepts. While reasoning in generalization hierarchies is quite well understood, no fully conclusive mechanism as yet exists for part-whole reasoning. The approach we take emulates part-whole reasoning via classification-based reasoning using SEP triplets, a special data structure for encoding part-whole relations that is fully embedded in the formal framework of standard description logics.
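
    The SEP encoding can be illustrated without description-logic machinery: each concept X gets a structure node X_S with two children, the entity node X_E and the parts node X_P, and "part is part of whole" is stated as part_S is-a whole_P. Plain is-a (subsumption) chaining then yields transitive part-whole inferences. A toy sketch, not the authors' implementation:

```python
# Toy subsumption hierarchy emulating SEP triplets: declaring
# "finger is part of hand" as Finger_S is-a Hand_P makes
# "finger is part of body" follow from ordinary is-a reasoning.
subsumes = {}  # child -> direct parent (single inheritance suffices for the toy)

def isa(child, parent):
    subsumes[child] = parent

def triplet(x):
    isa(f"{x}_E", f"{x}_S")   # the entity itself
    isa(f"{x}_P", f"{x}_S")   # the node gathering all of its parts

def part_of(part, whole):
    isa(f"{part}_S", f"{whole}_P")

def ancestors(node):
    while node in subsumes:
        node = subsumes[node]
        yield node

for concept in ("Body", "Hand", "Finger"):
    triplet(concept)
part_of("Hand", "Body")
part_of("Finger", "Hand")
print("Body_P" in set(ancestors("Finger_E")))  # True: a finger is part of the body
```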

  17. Decontamination of large horizontal concrete surfaces outdoors

    International Nuclear Information System (INIS)

    Barbier, M.M.; Chester, C.V.

    1980-01-01

    A study is being conducted of the resources and planning that would be required to clean up an extensive contamination of the outdoor environment. As part of this study, an assessment of the fleet of machines needed for decontaminating large outdoor surfaces of horizontal concrete will be attempted. The operations required are described. The performance of applicable existing equipment is analyzed in terms of area cleaned per unit time, and the comprehensive cost of decontamination per unit area is derived. Shielded equipment for measuring directional radiation and continuously monitoring decontamination work are described. Shielding of drivers' cabs and remote control vehicles is addressed

  18. Linux software for large topology optimization problems

    DEFF Research Database (Denmark)

    evolving product, which allows a parallel solution of the PDE, but it lacks the important feature that the matrix-generation part of the computations is localized to each processor. This is well known to be critical for obtaining a useful speedup on a Linux cluster, and it motivates the search for a COMSOL......-like package for large topology optimization problems. One candidate for such software, developed for Linux by Sandia Nat'l Lab in the USA, is the Sundance system. Sundance also uses a symbolic representation of the PDE, and a scalable numerical solution is achieved by employing the underlying Trilinos...

  19. Strong discontinuity with cam clay under large deformations

    DEFF Research Database (Denmark)

    Katic, Natasa; Hededal, Ole

    2008-01-01

    The work shows the simultaneous implementation of the Strong Discontinuity Approach (SDA) by means of Enhanced Assumed Strain (EAS) and Critical State Soil Mechanics (CSSM) in the large strain regime. The numerical model is based on an additive decomposition of the displacement gradient into a conforming and ...... and an enhanced part. The localized deformations are approximated by means of a discontinuous displacement field. The applied algorithm leads to a predictor/corrector procedure which is formally identical to the return-mapping algorithm of the classical (local and continuous) Cam clay model.

  20. Technical instrumentation R&D for ILD SiW ECAL large scale device

    Science.gov (United States)

    Balagura, V.

    2018-03-01

    Calorimeters with silicon detectors have many unique features and are proposed for several world-leading experiments. We describe the R&D program of the large scale detector element with up to 12 000 readout channels for the International Large Detector (ILD) at the future e⁺e⁻ ILC collider. The program is focused on the readout front-end electronics embedded inside the calorimeter. The first part with 2 000 channels and two small silicon sensors has already been constructed; the full prototype is planned for the beginning of 2018.

  1. Technical instrumentation R&D for ILD SiW ECAL large scale device

    CERN Document Server

    Balagura, V. (on behalf of SIW ECAL ILD collaboration)

    2018-01-01

    Calorimeters with silicon detectors have many unique features and are proposed for several world-leading experiments. We describe the R&D program of the large scale detector element with up to 12 000 readout channels for the International Large Detector (ILD) at the future e+e- ILC collider. The program is focused on the readout front-end electronics embedded inside the calorimeter. The first part with 2 000 channels and two small silicon sensors has already been constructed, the full prototype is planned for the beginning of 2018.

  2. Photorealistic large-scale urban city model reconstruction.

    Science.gov (United States)

    Poullis, Charalambos; You, Suya

    2009-01-01

    The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games, and films. A realistic representation of the large-scale environments is therefore imperative for the success of such applications since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, the task of creating such large-scale virtual environments still remains a time-consuming and manual work. In this work, we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel, extendible, parameterized geometric primitive is presented for the automatic building identification and reconstruction of building structures. In addition, buildings with complex roofs containing complex linear and nonlinear surfaces are reconstructed interactively using a linear polygonal and a nonlinear primitive, respectively. Second, we present a rendering pipeline for the composition of photorealistic textures, which unlike existing techniques, can recover missing or occluded texture information by integrating multiple information captured from different optical sensors (ground, aerial, and satellite).

  3. Reducing Information Overload in Large Seismic Data Sets

    Energy Technology Data Exchange (ETDEWEB)

    Hampton, Jeffery W.; Young, Christopher J.; Merchant, Bion J.; Carr, Dorthe B.; Aguilar-Chang, Julio

    2000-08-02

    Event catalogs for seismic data can become very large. Furthermore, as researchers collect multiple catalogs and reconcile them into a single catalog that is stored in a relational database, the reconciled set becomes even larger. The sheer number of these events makes searching for relevant events to compare with events of interest problematic. Information overload in this form can lead to the data sets being under-utilized and/or used incorrectly or inconsistently. Thus, efforts have been initiated to research techniques and strategies for helping researchers to make better use of large data sets. In this paper, the authors present their efforts to do so in two ways: (1) the Event Search Engine, which is a waveform correlation tool, and (2) some content analysis tools, which are a combination of custom-built and commercial off-the-shelf tools for accessing, managing, and querying seismic data stored in a relational database. The current Event Search Engine is based on a hierarchical clustering tool known as the dendrogram tool, which is written as a MatSeis graphical user interface. The dendrogram tool allows the user to build dendrogram diagrams for a set of waveforms by controlling phase windowing, down-sampling, filtering, enveloping, and the clustering method (e.g. single linkage, complete linkage, flexible method). It also allows the clustering to be based on two or more stations simultaneously, which is important to bridge gaps in the sparsely recorded event sets anticipated in such a large reconciled event set. Current efforts are focusing on tools to help the researcher winnow the clusters defined using the dendrogram tool down to the minimum optimal identification set. This will become critical as the number of reference events in the reconciled event set continually grows. The dendrogram tool is part of the MatSeis analysis package, which is available on the Nuclear Explosion Monitoring Research and Engineering Program Web Site. As part of the research
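
    The dendrogram tool described above is, in essence, agglomerative clustering on waveform similarity. The same idea fits in a few lines of SciPy, using one minus the peak normalized cross-correlation as the distance between events (synthetic waveforms stand in for real seismograms; not the MatSeis code):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(1)
base1, base2 = rng.standard_normal(200), rng.standard_normal(200)
# Six synthetic "events": two families of similar waveforms plus noise.
waves = [b + 0.2 * rng.standard_normal(200)
         for b in (base1, base1, base1, base2, base2, base2)]

def xcorr_dist(a, b):
    """1 minus peak normalized cross-correlation: 0 means identical up to a lag."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return 1.0 - np.correlate(a, b, mode="full").max() / len(a)

n = len(waves)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = xcorr_dist(waves[i], waves[j])

Z = linkage(squareform(D), method="single")      # single-linkage dendrogram
print(fcluster(Z, t=0.3, criterion="distance"))  # -> two event families
```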

  4. The Rights and Responsibility of Test Takers When Large-Scale Testing Is Used for Classroom Assessment

    Science.gov (United States)

    van Barneveld, Christina; Brinson, Karieann

    2017-01-01

    The purpose of this research was to identify conflicts in the rights and responsibility of Grade 9 test takers when some parts of a large-scale test are marked by teachers and used in the calculation of students' class marks. Data from teachers' questionnaires and students' questionnaires from a 2009-10 administration of a large-scale test of…

  5. Fluorescent dyes with large Stokes shifts for super-resolution optical microscopy of biological objects: a review

    International Nuclear Information System (INIS)

    Sednev, Maksim V; Belov, Vladimir N; Hell, Stefan W

    2015-01-01

    The review deals with commercially available organic dyes possessing large Stokes shifts and their applications as fluorescent labels in optical microscopy based on stimulated emission depletion (STED). STED microscopy breaks Abbe's diffraction barrier and provides optical resolution beyond the diffraction limit. STED microscopy is non-invasive and requires photostable fluorescent markers attached to biomolecules or other objects of interest. Up to now, in most biology-related STED experiments, bright and photoresistant dyes with small Stokes shifts of 20–40 nm were used. The rapid progress in STED microscopy showed that organic fluorophores possessing large Stokes shifts are indispensable in multi-color super-resolution techniques. The ultimate result of the imaging relies on the optimal combination of a dye, the bio-conjugation procedure and the performance of the optical microscope. Modern bioconjugation methods, basics of STED microscopy, as well as structures and spectral properties of the presently available fluorescent markers are reviewed and discussed. In particular, the spectral properties of the commercial dyes are tabulated and correlated with the available depletion wavelengths found in STED microscopes produced by LEICA Microsystems, Abberior Instruments and PicoQuant GmbH. (topical review)

  6. Optimization of large bore gas engine

    International Nuclear Information System (INIS)

    Laiminger, S.

    1999-01-01

    This doctoral thesis is concerned with the experimental part of the combustion optimization of a large bore gas engine. Nevertheless, there was a very close co-operation with the simultaneous numerical simulation. The terms of reference were a systematic investigation of the optimization potential of the current combustion mode, with the target of higher brake efficiency and lower NOₓ emissions. In a second part, a new combustion mode was to be developed for fuels containing H₂, for fuels with very low heating value, and for special fuels. The optimization covered all relevant components of the engine, so as to achieve a stable and well-suited combustion of short duration even with a very lean mixture. After the optimization the engine ran stably with substantially lower NOₓ emissions. It was the first time worldwide that a medium-sized gas engine reached a total electrical efficiency of more than 40 percent. Finally, a combustion mode for gaseous fuels containing H₂ was developed. The engine now runs both with direct ignition and with prechamber ignition; both modes reach approximately the same efficiency and thermodynamic stability. (author)

  7. Assessment of small versus large hydro-power developments - a Norwegian case study

    Energy Technology Data Exchange (ETDEWEB)

    Bakken, Tor Haakon; Harby, Atle

    2010-07-01

    Full text: The era of new, large hydro-power development projects seems to be over in Norway. Partly as a response to this, a large number of applications for the development of small-scale hydro-power projects of up to 10 MW flood the Water Resources and Energy Directorate, resulting in an extensive development of small tributaries and water courses in Norway. This study has developed a framework for the assessment and comparison of many small versus one large hydro-power project based on a multi-criteria analysis (MCA) approach, and has tested this approach on planned or developed projects in the Helgeland region, Norway. Multi-criteria analysis is a decision-support tool that provides a systematic approach for comparing alternatives with often non-commensurable and conflicting attributes. At the same time, the technique enables complex problems and various alternatives to be assessed in a transparent and simple way. The MCA software was in our case equipped with two overall criteria (objectives), each with a number of sub-criteria: Production, with sub-criteria such as volume of energy production, installed effect, storage capacity and economic profit; and Environmental impacts, with sub-criteria such as fishing interests, biodiversity and protection of unexploited nature. The data used in the case study are based on the planned development of Vefsna (the large project), with the energy/effect production estimated and the environmental impacts identified as part of the feasibility studies (the project never reached the authorities' licensing system with a formal EIA). The small-scale hydro-power projects used for comparison are based on realized projects in the Helgeland region and a number of proposed projects, scaled up to the size of the proposed Vefsna development. The results from the study indicate that a large number of small-scale hydro-power projects need to be implemented in order to balance the volume of produced electricity/effect from one
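
    In its simplest weighted-sum form, the MCA aggregation used in such studies reduces to criterion scores combined with stakeholder weights; a toy sketch with invented scores and weights (not the study's data):

```python
# Criteria scored 0-10 (higher = better); weights sum to 1.
criteria = ["energy", "storage", "profit", "fish", "biodiversity", "unexploited"]
weights  = [0.25, 0.10, 0.15, 0.15, 0.20, 0.15]

alternatives = {
    "one large project":   [9, 9, 8, 3, 4, 2],
    "many small projects": [7, 2, 6, 6, 5, 4],
}

# Weighted-sum aggregation: higher total = preferred under these weights.
for name, scores in alternatives.items():
    total = sum(w * s for w, s in zip(weights, scores))
    print(f"{name}: {total:.2f}")
```

    In practice the weights are the contested part: different stakeholder groups (producers, regulators, conservation interests) would supply different weight vectors, and the comparison is rerun for each.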

  8. Advancing the match-mismatch framework for large herbivores in the Arctic: Evaluating the evidence for a trophic mismatch in caribou.

    Directory of Open Access Journals (Sweden)

    David Gustine

    Full Text Available Climate-induced shifts in plant phenology may adversely affect animals that cannot or do not shift the timing of their reproductive cycle. The realized effect of potential trophic "mismatches" between a consumer and its food varies with the degree to which species rely on dietary income and stored capital. Large Arctic herbivores rely heavily on maternal capital to reproduce and give birth near the onset of the growing season, but are they vulnerable to trophic mismatch? We evaluated the long-term changes in the temperatures and characteristics of the growing seasons (1970-2013), and compared growing conditions and the dynamics of forage quality for caribou at peak parturition, peak lactation, peak forage biomass, and plant senescence between two distinct time periods over 36 years (1977 and 2011-13). Despite advanced thaw dates (7-12 days earlier), increased growing season lengths (15-21 days longer), and consistent parturition dates, we found no decline in forage quality and therefore no evidence within this dataset for a trophic mismatch at peak parturition or peak lactation from 1977 to 2011-13. In Arctic ungulates that use stored capital for reproduction, reproductive demands are largely met by body stores deposited in the previous summer and autumn, which reduces the potential adverse effects of any mismatch between food availability and the timing of parturition. Climate-induced effects on forages growing in the summer and autumn ranges, however, do correspond with the demands of female caribou and their offspring to gain mass for the next reproductive cycle and winter. Therefore, we suggest the window of time to examine the match-mismatch framework in Arctic ungulates is not at parturition but in late summer-autumn, where the multiplier effects of small changes in forage quality are amplified by forage abundance, peak forage intake, and the resultant mass gains in mother-offspring pairs.

  9. Single-field consistency relations of large scale structure part III: test of the equivalence principle

    Energy Technology Data Exchange (ETDEWEB)

    Creminelli, Paolo [Abdus Salam International Centre for Theoretical Physics, Strada Costiera 11, Trieste, 34151 (Italy); Gleyzes, Jérôme; Vernizzi, Filippo [CEA, Institut de Physique Théorique, Gif-sur-Yvette cédex, F-91191 France (France); Hui, Lam [Physics Department and Institute for Strings, Cosmology and Astroparticle Physics, Columbia University, New York, NY, 10027 (United States); Simonović, Marko, E-mail: creminel@ictp.it, E-mail: jerome.gleyzes@cea.fr, E-mail: lhui@astro.columbia.edu, E-mail: msimonov@sissa.it, E-mail: filippo.vernizzi@cea.fr [SISSA, via Bonomea 265, Trieste, 34136 (Italy)

    2014-06-01

    The recently derived consistency relations for Large Scale Structure do not hold if the Equivalence Principle (EP) is violated. We show it explicitly in a toy model with two fluids, one of which is coupled to a fifth force. We explore the constraints that galaxy surveys can set on EP violation by looking at the squeezed limit of the 3-point function involving two populations of objects. We find that one can explore EP violations of order 10⁻³–10⁻⁴ on cosmological scales. Chameleon models are already very constrained by the requirement of screening within the Solar System and only a very tiny region of the parameter space can be explored with this method. We show that no violation of the consistency relations is expected in Galileon models.

  10. Techniques for Large-Scale Bacterial Genome Manipulation and Characterization of the Mutants with Respect to In Silico Metabolic Reconstructions.

    Science.gov (United States)

    diCenzo, George C; Finan, Turlough M

    2018-01-01

    The rate at which all genes within a bacterial genome can be identified far exceeds the ability to characterize these genes. To assist in associating genes with cellular functions, a large-scale bacterial genome deletion approach can be employed to rapidly screen tens to thousands of genes for desired phenotypes. Here, we provide a detailed protocol for the generation of deletions of large segments of bacterial genomes that relies on the activity of a site-specific recombinase. In this procedure, two recombinase recognition target sequences are introduced into known positions of a bacterial genome through single cross-over plasmid integration. Subsequent expression of the site-specific recombinase mediates recombination between the two target sequences, resulting in the excision of the intervening region and its loss from the genome. We further illustrate how this deletion system can be readily adapted to function as a large-scale in vivo cloning procedure, in which the region excised from the genome is captured as a replicative plasmid. We next provide a procedure for the metabolic analysis of bacterial large-scale genome deletion mutants using the Biolog Phenotype MicroArray™ system. Finally, a pipeline is described, and a sample Matlab script is provided, for the integration of the obtained data with a draft metabolic reconstruction for the refinement of the reactions and gene-protein-reaction relationships in a metabolic reconstruction.
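
    For the final integration step, one common route (assumed here, not prescribed by the protocol) is to run the deletions in silico against the draft reconstruction with COBRApy and flag disagreements with the observed Biolog phenotypes for refinement; the model file name below is hypothetical:

```python
import cobra
from cobra.flux_analysis import single_gene_deletion

# Load a draft metabolic reconstruction (hypothetical file name).
model = cobra.io.read_sbml_model("draft_reconstruction.xml")

# Predicted growth for every single-gene knockout.
results = single_gene_deletion(model)

# Genes whose in-silico deletion abolishes growth; any disagreement with an
# observed Biolog growth phenotype flags a reaction or gene-protein-reaction
# rule in the reconstruction to refine.
essential = results[results["growth"] < 1e-6]
print(essential.head())
```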

  11. Combustion and heat transfer monitoring in large utility boilers

    Energy Technology Data Exchange (ETDEWEB)

    Diez, L.I.; Cortes, C.; Arauzo, I.; Valero, A. [University of Zaragoza, Zaragoza (Spain). Center of Power Plant Efficiency Research

    2001-05-01

    The optimization and control of complex energy systems can presently take advantage of highly sophisticated engineering techniques, such as CFD calculations and correlation algorithms based on artificial intelligence concepts. However, the most advanced numerical prediction still relies on strong simplifications of the exact transport equations. Likewise, the output of a neural network is actually based on a long record of observed past responses. Therefore, the implementation of modern diagnosis tools generally requires a great amount of experimental data, in order to achieve an adequate validation of the method. Consequently, a sort of paradox results, since the validation data cannot be less accurate or complete than the predictions sought. To remedy this situation, there are several alternatives. In contrast to laboratory work or well-instrumented pilot plants, the information obtained in the full scale installation offers the advantages of realism and low cost. This paper presents the case-study of a large, pulverized-coal fired utility boiler, discussing both the evaluation of customary measurements and the adoption of supplementary instruments. The generic outcome is that it is possible to significantly improve the knowledge on combustion and heat transfer performance within a reasonable cost. Based on the experience and results, a general methodology is outlined to cope with this kind of analysis.

  12. Visual Data Analysis as an Integral Part of Environmental Management

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Joerg; Bethel, E. Wes; Horsman, Jennifer L.; Hubbard, Susan S.; Krishnan, Harinarayan; Romosan,, Alexandru; Keating, Elizabeth H.; Monroe, Laura; Strelitz, Richard; Moore, Phil; Taylor, Glenn; Torkian, Ben; Johnson, Timothy C.; Gorton, Ian

    2012-10-01

    The U.S. Department of Energy's (DOE) Office of Environmental Management (DOE/EM) currently supports an effort to understand and predict the fate of nuclear contaminants and their transport in natural and engineered systems. Geologists, hydrologists, physicists and computer scientists are working together to create models of existing nuclear waste sites, to simulate their behavior and to extrapolate it into the future. We use visualization as an integral part in each step of this process. In the first step, visualization is used to verify model setup and to estimate critical parameters. High-performance computing simulations of contaminant transport produces massive amounts of data, which is then analyzed using visualization software specifically designed for parallel processing of large amounts of structured and unstructured data. Finally, simulation results are validated by comparing simulation results to measured current and historical field data. We describe in this article how visual analysis is used as an integral part of the decision-making process in the planning of ongoing and future treatment options for the contaminated nuclear waste sites. Lessons learned from visually analyzing our large-scale simulation runs will also have an impact on deciding on treatment measures for other contaminated sites.

  13. Protection of the CERN Large Hadron Collider

    Science.gov (United States)

    Schmidt, R.; Assmann, R.; Carlier, E.; Dehning, B.; Denz, R.; Goddard, B.; Holzer, E. B.; Kain, V.; Puccio, B.; Todd, B.; Uythoven, J.; Wenninger, J.; Zerlauth, M.

    2006-11-01

    The Large Hadron Collider (LHC) at CERN will collide two counter-rotating proton beams, each with an energy of 7 TeV. The energy stored in the superconducting magnet system will exceed 10 GJ, and each beam has a stored energy of 362 MJ which could cause major damage to accelerator equipment in the case of uncontrolled beam loss. Safe operation of the LHC will therefore rely on a complex system for equipment protection. The systems for protection of the superconducting magnets in case of quench must be fully operational before powering the magnets. For safe injection of the 450 GeV beam into the LHC, beam absorbers must be in their correct positions and specific procedures must be applied. Requirements for safe operation throughout the cycle necessitate early detection of failures within the equipment, and active monitoring of the beam with fast and reliable beam instrumentation, mainly beam loss monitors (BLM). When operating with circulating beams, the time constant for beam loss after a failure extends from milliseconds to a few minutes; failures must be detected sufficiently early and transmitted to the beam interlock system that triggers a beam dump. It is essential that the beams are properly extracted onto the dump blocks at the end of a fill and in case of emergency, since the beam dump blocks are the only elements of the LHC that can withstand the impact of the full beam.

  14. Recent Advances in Multidisciplinary Analysis and Optimization, part 3

    Science.gov (United States)

    Barthelemy, Jean-Francois M. (Editor)

    1989-01-01

    This three-part document contains a collection of technical papers presented at the Second NASA/Air Force Symposium on Recent Advances in Multidisciplinary Analysis and Optimization, held September 28-30, 1988 in Hampton, Virginia. The topics covered include: aircraft design, aeroelastic tailoring, control of aeroelastic structures, dynamics and control of flexible structures, structural design, design of large engineering systems, application of artificial intelligence, shape optimization, software development and implementation, and sensitivity analysis.

  15. Associations Between Medicare Part D and Out-of-Pocket Spending, HIV Viral Load, Adherence, and ADAP Use in Dual Eligibles With HIV.

    Science.gov (United States)

    Belenky, Nadya; Pence, Brian W; Cole, Stephen R; Dusetzina, Stacie B; Edmonds, Andrew; Oberlander, Jonathan; Plankey, Michael W; Adedimeji, Adebola; Wilson, Tracey E; Cohen, Jennifer; Cohen, Mardge H; Milam, Joel E; Golub, Elizabeth T; Adimora, Adaora A

    2018-01-01

    The implementation of Medicare Part D on January 1, 2006 required all adults who were dually enrolled in Medicaid and Medicare (dual eligibles) to transition prescription drug coverage from Medicaid to Medicare Part D. Changes in payment systems and utilization management, along with the loss of Medicaid protections, had the potential to disrupt medication access, with uncertain consequences for dual eligibles with human immunodeficiency virus (HIV), who rely on consistent prescription coverage to suppress their HIV viral load (VL). To estimate the effect of Medicare Part D on self-reported out-of-pocket prescription drug spending, AIDS Drug Assistance Program (ADAP) use, antiretroviral adherence, and HIV VL suppression among dual eligibles with HIV, we used 2003-2008 data from the Women's Interagency HIV Study to create a propensity score-matched cohort, and applied a difference-in-differences approach to compare dual eligibles' outcomes pre- and post-Medicare Part D to those of women enrolled in Medicaid alone. The transition to Medicare Part D was associated with a sharp increase in the proportion of dual eligibles with self-reported out-of-pocket prescription drug costs, followed by an increase in ADAP use. Despite the increase in out-of-pocket costs, both adherence and HIV VL suppression remained stable. Medicare Part D was associated with increased out-of-pocket spending, although the increased spending did not seem to compromise antiretroviral therapy adherence or HIV VL suppression. It is possible that increased ADAP use mitigated the increase in out-of-pocket spending, suggesting successful coordination between Medicare Part D and ADAP as well as the vital role of ADAP during insurance transitions.
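
    The study design maps onto a standard difference-in-differences regression on the propensity-matched cohort, where the coefficient on the treated-by-post interaction estimates the Part D effect. A schematic sketch with simulated stand-in data (not the WIHS data):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "dual": rng.integers(0, 2, n),   # dual eligible (treated) vs Medicaid-only
    "post": rng.integers(0, 2, n),   # observation after January 1, 2006
})
# Simulate out-of-pocket spending with a $120 effect on the treated, post-policy.
df["oop"] = (50 + 20 * df["dual"] + 10 * df["post"]
             + 120 * df["dual"] * df["post"] + rng.normal(0, 30, n))

fit = smf.ols("oop ~ dual * post", data=df).fit()
print(fit.params["dual:post"])  # difference-in-differences estimate, ~120
```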

  16. Kinota: An Open-Source NoSQL implementation of OGC SensorThings for large-scale high-resolution real-time environmental monitoring

    Science.gov (United States)

    Miles, B.; Chepudira, K.; LaBar, W.

    2017-12-01

    The Open Geospatial Consortium (OGC) SensorThings API (STA) specification, ratified in 2016, is a next-generation open standard for enabling real-time communication of sensor data. Building on over a decade of OGC Sensor Web Enablement (SWE) standards, STA offers a rich data model that can represent a range of sensor and phenomenon types (e.g., fixed sensors sensing fixed phenomena, fixed sensors sensing moving phenomena, mobile sensors sensing fixed phenomena, and mobile sensors sensing moving phenomena) and is data agnostic. Additionally, and in contrast to previous SWE standards, STA is developer-friendly, as is evident from its convenient JSON serialization and expressive OData-based query language (with support for geospatial queries); with its Message Queue Telemetry Transport (MQTT) extension, STA is also well suited to efficient real-time data publishing and discovery. All these attributes make STA potentially useful for environmental monitoring sensor networks. Here we present Kinota(TM), an open-source NoSQL implementation of OGC SensorThings for large-scale high-resolution real-time environmental monitoring. Kinota, which roughly stands for Knowledge from Internet of Things Analyses, relies on Cassandra as its underlying data store, a horizontally scalable, fault-tolerant open-source database that is often used to store time-series data for Big Data applications (though integration with other NoSQL or relational databases is possible). With this foundation, Kinota can scale to store data from an arbitrary number of sensors collecting data every 500 milliseconds. Additionally, the Kinota architecture is highly modular, allowing adopters to replace parts of the existing implementation when desirable. It is also highly portable, providing the flexibility to choose between cloud providers such as Azure, Amazon, and Google. The scalable, flexible, and cloud-friendly architecture of Kinota makes it ideal for use in next-generation environmental monitoring.
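
    To make the developer-friendliness concrete, the sketch below queries recent observations from a hypothetical STA v1.0 endpoint using the spec's OData-style query options ($filter, $orderby, $top). The base URL and datastream ID are assumptions; the query shape follows the SensorThings standard rather than anything Kinota-specific.

```python
# Query the 100 most recent observations of one datastream from a hypothetical
# SensorThings v1.0 service; $filter/$orderby/$top are standard STA query options.
import requests

BASE = "https://sta.example.org/v1.0"  # hypothetical deployment URL

resp = requests.get(
    f"{BASE}/Datastreams(42)/Observations",  # datastream ID is an assumption
    params={
        "$filter": "phenomenonTime ge 2017-12-01T00:00:00Z",
        "$orderby": "phenomenonTime desc",
        "$top": 100,
    },
    timeout=10,
)
resp.raise_for_status()
for obs in resp.json().get("value", []):
    print(obs["phenomenonTime"], obs["result"])
```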

  17. Aero-elastic Stability Analysis for Large-Scale Wind Turbines

    NARCIS (Netherlands)

    Meng, F.

    2011-01-01

    Nowadays, many modern countries are relying heavily on non-renewable resources. One common example of non-renewable resources is fossil fuel. Non-renewable resources are finite resources that will eventually dwindle, becoming too expensive or too environmentally damaging to retrieve. In contrast,

  18. Parts and Components Reliability Assessment: A Cost Effective Approach

    Science.gov (United States)

    Lee, Lydia

    2009-01-01

    System reliability assessment is a methodology which incorporates reliability analyses performed at the parts and components level, such as Reliability Prediction, Failure Modes and Effects Analysis (FMEA), and Fault Tree Analysis (FTA), to assess risks, perform design tradeoffs, and thereby help ensure effective productivity and/or mission success. The system reliability is used to optimize the product design to accommodate today's mandated budget, manpower, and schedule constraints. Standard-based reliability assessment is an effective approach consisting of reliability predictions together with other reliability analyses for electronic, electrical, and electro-mechanical (EEE) complex parts and components of large systems, based on failure rate estimates published in United States (U.S.) military or commercial standards and handbooks. Many of these standards are globally accepted and recognized. The reliability assessment is especially useful during the initial stages, when the system design is still in development and hard failure data is not yet available, or when manufacturers are not contractually obliged by their customers to publish reliability estimates/predictions for their parts and components. This paper presents a methodology to assess system reliability using parts and components reliability estimates to ensure effective productivity and/or mission success in an efficient, low-cost manner and on a tight schedule.
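
    A minimal sketch of the parts-count style of prediction described above: handbook failure rates for parts in series are summed to a system failure rate, which is converted to mission reliability via the exponential model. The part names and rates below are invented; real handbook predictions also apply environment and quality adjustment factors.

```python
# Parts-count style sketch: sum hypothetical handbook failure rates for a
# series system, then convert to mission reliability with R(t) = exp(-lambda*t).
import math

part_failure_rates = {  # failures per million hours (invented values)
    "capacitor": 0.012,
    "connector": 0.045,
    "microcontroller": 0.230,
}

lambda_system = sum(part_failure_rates.values())  # series system: rates add
mission_hours = 5000.0
reliability = math.exp(-lambda_system * mission_hours / 1e6)

print(f"system rate = {lambda_system:.3f} per 1e6 h; "
      f"R({mission_hours:.0f} h) = {reliability:.4f}")
```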

  19. Pharmaceutical ethnobotany in the western part of Granada province (southern Spain): ethnopharmacological synthesis.

    Science.gov (United States)

    Benítez, G; González-Tejero, M R; Molero-Mesa, J

    2010-05-04

    The aim of this work is to catalogue, document, and make known the uses of plants in folk medicine in the western part of the province of Granada (southern Spain). An analysis was made of the species used, the parts of the plant employed, preparation methods, means of administration, and the ailments treated in relation to pathological groups. The work was performed in 16 municipalities within the study zone. Participants were located mainly by questionnaires distributed in public and private centres. The information, gathered through semi-structured open interviews with a total of 279 people, was included in a database for subsequent analysis. A floristic catalogue of the territory was compiled, enabling analyses of the relevance of certain botanical families in popular medicine. Great diversity was established among medicinal species in the region. A total of 229 species of plants were catalogued for use in human medicine to prevent or treat 100 different health problems covering 14 pathological groups. The number of references reached 1963. The popular pharmacopoeia of this area relies primarily on plants to treat digestive, respiratory, and circulatory problems, using mainly the soft parts of the plant (leaves and flowers) prepared in simple ways (decoction, infusion). An analysis of the ritual medicinal uses of 34 species and the different symptoms treated reflected a certain acculturation of ethnobotanical knowledge over the last 20 years. Traditional knowledge of plants was documented in relation to medicinal use, reflecting a striking diversity of species and uses, as well as their importance in popular plant therapy in the study zone. These traditions could pave the way for future phytochemical and pharmacological studies and thereby give rise to new medicinal resources.

  20. Ductile fracture of cylindrical vessels containing a large flaw

    Science.gov (United States)

    Erdogan, F.; Irwin, G. R.; Ratwani, M.

    1976-01-01

    The fracture process in pressurized cylindrical vessels containing a relatively large flaw is considered. The flaw is assumed to be a part-through or through meridional crack. The flaw geometry, the yield behavior of the material, and the internal pressure are assumed to be such that in the neighborhood of the flaw the cylinder wall undergoes large-scale plastic deformations. Thus, the problem falls outside the range of applicability of conventional brittle fracture theories. To study the problem, plasticity considerations are introduced into the shell theory through the assumptions of fully-yielded net ligaments using a plastic strip model. Then a ductile fracture criterion is developed which is based on the concept of net ligament plastic instability. A limited verification is attempted by comparing the theoretical predictions with some existing experimental results.
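
    As a hedged illustration of this family of criteria: for a through-wall meridional crack of half-length c in a thin cylinder of radius R and wall thickness t, ductile failure is commonly expressed as the hoop stress reaching a flow stress reduced by a shell bulging factor. The Folias-type approximation for M below is the widely quoted form, offered as an illustration rather than the authors' exact expression.

```latex
% Illustrative ductile-failure criterion for a through-wall axial crack
% (\bar{\sigma}: flow stress, M: Folias bulging factor, p: internal pressure)
\[
  \sigma_h = \frac{pR}{t}, \qquad
  \sigma_h^{\mathrm{fail}} = \frac{\bar{\sigma}}{M}, \qquad
  M \approx \sqrt{\,1 + 1.255\,\frac{c^{2}}{Rt} - 0.0135\,\frac{c^{4}}{R^{2}t^{2}}\,}.
\]
```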