WorldWideScience

Sample records for models involving large

  1. Severities of transportation accidents involving large packages

    Energy Technology Data Exchange (ETDEWEB)

    Dennis, A.W.; Foley, J.T. Jr.; Hartman, W.F.; Larson, D.W.

    1978-05-01

    The study was undertaken to define in a quantitative nonjudgmental technical manner the abnormal environments to which a large package (total weight over 2 tons) would be subjected as the result of a transportation accident. Because of this package weight, air shipment was not considered as a normal transportation mode and was not included in the study. The abnormal transportation environments for shipment by motor carrier and train were determined and quantified. In all cases the package was assumed to be transported on an open flat-bed truck or an open flat-bed railcar. In an earlier study, SLA-74-0001, the small-package environments were investigated. A third transportation study, related to the abnormal environment involving waterways transportation, is now under way at Sandia Laboratories and should complete the description of abnormal transportation environments. Five abnormal environments were defined and investigated, i.e., fire, impact, crush, immersion, and puncture. The primary interest of the study was directed toward the type of large package used to transport radioactive materials; however, the findings are not limited to this type of package but can be applied to a much larger class of material shipping containers.

  2. Severities of transportation accidents involving large packages

    International Nuclear Information System (INIS)

    Dennis, A.W.; Foley, J.T. Jr.; Hartman, W.F.; Larson, D.W.

    1978-05-01

    The study was undertaken to define in a quantitative nonjudgmental technical manner the abnormal environments to which a large package (total weight over 2 tons) would be subjected as the result of a transportation accident. Because of this package weight, air shipment was not considered as a normal transportation mode and was not included in the study. The abnormal transportation environments for shipment by motor carrier and train were determined and quantified. In all cases the package was assumed to be transported on an open flat-bed truck or an open flat-bed railcar. In an earlier study, SLA-74-0001, the small-package environments were investigated. A third transportation study, related to the abnormal environment involving waterways transportation, is now under way at Sandia Laboratories and should complete the description of abnormal transportation environments. Five abnormal environments were defined and investigated, i.e., fire, impact, crush, immersion, and puncture. The primary interest of the study was directed toward the type of large package used to transport radioactive materials; however, the findings are not limited to this type of package but can be applied to a much larger class of material shipping containers

  3. A Reasoned Action Model of Male Client Involvement in Commercial Sex Work in Kibera, A Large Informal Settlement in Nairobi, Kenya.

    Science.gov (United States)

    Roth, Eric Abella; Ngugi, Elizabeth; Benoit, Cecilia; Jansson, Mikael; Hallgrimsdottir, Helga

    2014-01-01

    Male clients of female sex workers (FSWs) are epidemiologically important because they can form bridge groups linking high- and low-risk subpopulations. However, because male clients are hard to locate, they are not frequently studied. Recent research emphasizes searching for high-risk behavior groups in locales where new sexual partnerships form and the threat of HIV transmission is high. Public drinking venues in sub-Saharan Africa satisfy these criteria. Accordingly, this study developed and implemented a rapid assessment methodology to survey men in bars throughout the large informal settlement of Kibera, Nairobi, Kenya, with the goal of delineating cultural and economic rationales associated with male participation in commercial sex. The study sample consisted of 220 male patrons of 110 bars located throughout Kibera's 11 communities. Logistic regression analysis incorporating a modified Reasoned Action Model indicated that a social norm condoning commercial sex among male peers and the cultural belief that men should practice sex before marriage support commercial sex involvement. Conversely, lacking money to drink and/or pay for sexual services was a barrier to male commercial sex involvement. Results are interpreted in light of possible harm reduction programs focusing on FSWs' male clients.
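    The analysis described above is a standard binary logistic regression. A minimal sketch on simulated data follows; the predictor names are hypothetical stand-ins for the survey items, and the coefficients are invented for illustration, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical binary predictors: peer norm condoning commercial sex,
# and having money available to drink (stand-ins for the survey items).
peer_norm = rng.integers(0, 2, n)
has_money = rng.integers(0, 2, n)
X = np.column_stack([np.ones(n), peer_norm, has_money])

# Invented "true" coefficients used to simulate outcomes.
true_beta = np.array([-1.0, 1.2, 0.8])
p = 1 / (1 + np.exp(-X @ true_beta))
y = (rng.random(n) < p).astype(float)

# Fit by Newton-Raphson (iteratively reweighted least squares).
beta = np.zeros(3)
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ beta))
    W = mu * (1 - mu)                      # working weights
    grad = X.T @ (y - mu)                  # score vector
    hess = X.T @ (X * W[:, None])          # observed information
    beta += np.linalg.solve(hess, grad)

print(np.round(beta, 2))  # estimates should land near true_beta
```

With 1000 observations the recovered coefficients typically fall within a few standard errors of the simulated values; exponentiating a coefficient gives the odds ratio reported in such analyses.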

  4. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

    Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large-scale model testing performed on the large ZZ 8000 testing machine (maximum load 80 MN) at the SKODA WORKS. Results are reported from tests of the materials' resistance to non-ductile fracture, covering both base materials and welded joints. The rated specimen thickness was 150 mm, with defects of a depth between 15 and 100 mm. Results are also presented for 1:3-scale models of nozzles with an inner diameter of 850 mm; static, cyclic, and dynamic tests were performed with and without surface defects (15, 30 and 45 mm deep). During the cyclic tests, the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  5. Fatal crashes involving large numbers of vehicles and weather.

    Science.gov (United States)

    Wang, Ying; Liang, Liming; Evans, Leonard

    2017-12-01

    Adverse weather has been recognized as a significant threat to traffic safety. However, relationships between weather and fatal crashes involving large numbers of vehicles are rarely studied, owing to the low occurrence of such crashes. By using all 1,513,792 fatal crashes in the Fatality Analysis Reporting System (FARS) data, 1975-2014, we successfully described these relationships. We found: (a) fatal crashes involving more than 35 vehicles are most likely to occur in snow or fog; (b) fatal crashes in rain are three times as likely to involve 10 or more vehicles as fatal crashes in good weather; (c) fatal crashes in snow [or fog] are 24 times [35 times] as likely to involve 10 or more vehicles as fatal crashes in good weather. Had the threshold been 20 vehicles, the risk ratios would be 6 for rain, 158 for snow, and 171 for fog. To reduce the risk of involvement in fatal crashes with large numbers of vehicles, drivers should slow down more than they currently do under adverse weather conditions. Driver deaths per fatal crash increase slowly with increasing numbers of involved vehicles when it is snowing or raining, but more steeply when clear or foggy. We conclude that in order to reduce the risk of involvement in crashes involving large numbers of vehicles, drivers must reduce speed in fog, and in snow or rain, reduce speed by even more than they already do. Copyright © 2017 National Safety Council and Elsevier Ltd. All rights reserved.
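    The risk ratios quoted above are plain conditional-probability ratios: the proportion of fatal crashes involving 10 or more vehicles under a given weather condition, divided by the same proportion in good weather. A minimal sketch with hypothetical counts (the abstract does not give the underlying FARS tallies; the numbers below are chosen only to reproduce a ratio of 3 for rain):

```python
def risk_ratio(hits_w, total_w, hits_good, total_good):
    """Ratio of the proportion of large (>=10-vehicle) fatal crashes under a
    given weather condition to the proportion under good weather."""
    return (hits_w / total_w) / (hits_good / total_good)

# Hypothetical example: 30 of 10,000 fatal crashes in rain involved >=10
# vehicles, versus 100 of 100,000 fatal crashes in good weather.
rr_rain = risk_ratio(30, 10_000, 100, 100_000)
print(rr_rain)  # 3.0 (to within floating-point rounding)
```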

  6. Consumers' Reaction towards Involvement of Large Retailers in ...

    African Journals Online (AJOL)

    User

    markets, fair trade products need LRs distribution channels and not the old system of using speciality ... analysis employed to identify customers' reaction to large retailers' involvement in selling ...... The Journal of Socio-Economics. Vol. 37: pp.

  7. Modeling interdisciplinary activities involving Mathematics

    DEFF Research Database (Denmark)

    Iversen, Steffen Møllegaard

    2006-01-01

    In this paper a didactical model is presented. The goal of the model is to work as a didactical tool, or conceptual frame, for developing, carrying through and evaluating interdisciplinary activities involving the subject of mathematics and philosophy in the high schools. Through the terms...... of Horizontal Intertwining, Vertical Structuring and Horizontal Propagation the model consists of three phases, each considering different aspects of the nature of interdisciplinary activities. The theoretical modelling is inspired by work which focuses on the students' abilities to concept formation in expanded...... domains (Michelsen, 2001, 2005a, 2005b). Furthermore the theoretical description rests on a series of qualitative interviews with teachers from the Danish high school (grades 9-11) conducted recently. The special case of concrete interdisciplinary activities between mathematics and philosophy is also...

  8. LARGE VESSEL INVOLVEMENT IN BEHCET’S DISEASE

    Directory of Open Access Journals (Sweden)

    Jamshidi, A.R.; Davatchi, F.

    2004-08-01

    Large vessel involvement is one of the hallmarks of Behcet's disease (BD), but its prevalence varies widely due to ethnic variation or environmental factors. The aim of this study is to find the characteristics of vasculo-Behcet (VB) in Iran. In a cohort of 4769 patients with BD, those with vascular involvement were selected. Different manifestations of disease were compared with the remaining group of patients. A confidence interval at 95% (CI) was calculated for each item. Vascular involvement was seen in 409 cases (8.6%; CI, 0.8). Venous involvement was seen in 396 cases, deep vein thrombosis in 294 (6.2%; CI, 0.7), superficial phlebitis in 108 (2.3%; CI, 0.4) and large vein thrombosis in 45 (0.9%; CI, 0.3). Arterial involvement was seen in 28 patients (25 aneurysms and 4 thromboses). Thirteen patients showed both arterial and venous involvement. The mean age of the patients with VB was slightly higher (P<0.03), but the disease duration was significantly longer (P<0.0003). VB was more common in men. As the presenting sign, ocular lesions were less frequent in VB (P<0.0006), while skin lesions were over 2 times more common in these cases (P<0.000001). VB was associated with a higher frequency of genital aphthosis, skin involvement, joint manifestations, epididymitis, CNS lesions and GI involvement. The juvenile form was less common in VB (P<0.03). High ESR was more frequent in VB (P=0.000002), but the frequency of false positive VDRL, pathergy phenomenon, HLA-B5 or HLA-B27 showed no significant difference between the two groups. In Iranian patients with BD, vascular involvement is not common and large vessel involvement is rare. It may be sex-related, and is more common in well-established disease with multiple organ involvement and longer disease duration.
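    The 95% confidence intervals quoted in the abstract above are consistent with the standard normal approximation for a proportion. A quick check of the headline figure (vascular involvement in 409 of 4769 patients, 8.6% with CI half-width 0.8):

```python
import math

def prevalence_ci(k, n, z=1.96):
    """Proportion k/n and its normal-approximation 95% CI half-width."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, half

p, half = prevalence_ci(409, 4769)
print(round(100 * p, 1), round(100 * half, 1))  # 8.6 0.8
```

The same formula reproduces the other half-widths in the abstract (e.g. 294/4769 gives 6.2% ± 0.7).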

  9. Large vessel involvement by IgG4-related disease

    Science.gov (United States)

    Perugino, Cory A.; Wallace, Zachary S.; Meyersohn, Nandini; Oliveira, George; Stone, James R.; Stone, John H.

    2016-01-01

    Objectives: IgG4-related disease (IgG4-RD) is an immune-mediated fibroinflammatory condition that can affect multiple organs and lead to tumefactive, tissue-destructive lesions. Reports have described inflammatory aortitis and periaortitis, the latter in the setting of retroperitoneal fibrosis (RPF), but have not distinguished adequately between these 2 manifestations. The frequency, radiologic features, and response of vascular complications to B cell depletion remain poorly defined. We describe the clinical features, radiology findings, and treatment response in a cohort of 36 patients with IgG4-RD affecting large blood vessels. Methods: Clinical records of all patients diagnosed with IgG4-RD in our center were reviewed. All radiologic studies were reviewed. We distinguished between primary large blood vessel inflammation and secondary vascular involvement. Primary involvement was defined as inflammation in the blood vessel wall as a principal focus of disease. Secondary vascular involvement was defined as disease caused by the effects of adjacent inflammation on the blood vessel wall. Results: Of the 160 IgG4-RD patients in this cohort, 36 (22.5%) had large-vessel involvement. The mean age at disease onset of the patients with large-vessel IgG4-RD was 54.6 years. Twenty-eight patients (78%) were male and 8 (22%) were female. Thirteen patients (36%) had primary IgG4-related vasculitis and aortitis with aneurysm formation comprised the most common manifestation. This affected 5.6% of the entire IgG4-RD cohort and was observed in the thoracic aorta in 8 patients, the abdominal aorta in 4, and both the thoracic and abdominal aorta in 3. Three of these aneurysms were complicated by aortic dissection or contained perforation. Periaortitis secondary to RPF accounted for 27 of 29 patients (93%) of secondary vascular involvement by IgG4-RD. Only 5 patients demonstrated evidence of both primary and secondary blood vessel involvement. Of those treated with

  10. Observations involving broadband impedance modelling

    Energy Technology Data Exchange (ETDEWEB)

    Berg, J S [Stanford Linear Accelerator Center, Menlo Park, CA (United States)

    1996-08-01

    Results for single- and multi-bunch instabilities can be significantly affected by the precise model that is used for the broadband impedance. This paper discusses three aspects of broadband impedance modelling. The first is an observation of the effect that a seemingly minor change in an impedance model has on the single-bunch mode coupling threshold. The second is a successful attempt to construct a model for the high-frequency tails of an r.f. cavity. The last is a discussion of requirements for the mathematical form of an impedance which follow from the general properties of impedances. (author)

  11. Observations involving broadband impedance modelling

    International Nuclear Information System (INIS)

    Berg, J.S.

    1995-08-01

    Results for single- and multi-bunch instabilities can be significantly affected by the precise model that is used for the broadband impedance. This paper discusses three aspects of broadband impedance modeling. The first is an observation of the effect that a seemingly minor change in an impedance model has on the single-bunch mode coupling threshold. The second is a successful attempt to construct a model for the high-frequency tails of an r.f. cavity. The last is a discussion of requirements for the mathematical form of an impedance which follow from the general properties of impedances

  12. Multifocal Extranodal Involvement of Diffuse Large B-Cell Lymphoma

    Directory of Open Access Journals (Sweden)

    Devrim Cabuk

    2013-01-01

    Endobronchial involvement by extrapulmonary malignant tumors is uncommon and mostly associated with breast, kidney, colon, and rectum carcinomas. A 68-year-old male with a prior diagnosis of colonic non-Hodgkin lymphoma (NHL) was admitted to the hospital with complaints of cough, sputum, and dyspnea. The chest radiograph showed right hilar enlargement and an opacity at the right middle zone suggestive of a mass lesion. Computed tomography of the thorax revealed a right-sided mass lesion extending to the thoracic wall with destruction of the third and fourth ribs, and a right hilar mass lesion. Fiberoptic bronchoscopy was performed in order to evaluate endobronchial involvement and showed stenosis with mucosal tumor infiltration in the right upper lobe bronchus. Pathological examination of the bronchoscopic biopsy specimen reported diffuse large B-cell lymphoma, and the patient was accepted as having endobronchial recurrence of sigmoid colon NHL. The patient is still under treatment with R-ICE (rituximab-ifosfamide-carboplatin-etoposide) chemotherapy, and partial regression of the pulmonary lesions was noted after 3 courses of treatment.

  13. Metal-Oxide Film Conversions Involving Large Anions

    International Nuclear Information System (INIS)

    Pretty, S.; Zhang, X.; Shoesmith, D.W.; Wren, J.C.

    2008-01-01

    The main objective of my research is to establish the mechanism and kinetics of metal-oxide film conversions involving large anions (I⁻, Br⁻, S²⁻). Within a given group, the anions will provide insight on the effect of anion size on the film conversion, while comparison of Group 6 and Group 7 anions will provide insight on the effect of anion charge. This research has a range of industrial applications; for example, hazardous radioiodine can be immobilized by reaction with Ag to yield AgI. From the perspective of public safety, radioiodine is one of the most important fission products from the uranium fuel because of its large fuel inventory, high volatility, and radiological hazard. Additionally, because of its mobility, the gaseous iodine concentration is a critical parameter for safety assessment and post-accident management. A full kinetic analysis using electrochemical techniques has been performed on the conversion of Ag₂O to (1) AgI and (2) AgBr. (authors)

  14. Metal-Oxide Film Conversions Involving Large Anions

    Energy Technology Data Exchange (ETDEWEB)

    Pretty, S.; Zhang, X.; Shoesmith, D.W.; Wren, J.C. [The University of Western Ontario, Chemistry Department, 1151 Richmond St., N6A 5B7, London, Ontario (Canada)

    2008-07-01

    The main objective of my research is to establish the mechanism and kinetics of metal-oxide film conversions involving large anions (I⁻, Br⁻, S²⁻). Within a given group, the anions will provide insight on the effect of anion size on the film conversion, while comparison of Group 6 and Group 7 anions will provide insight on the effect of anion charge. This research has a range of industrial applications; for example, hazardous radioiodine can be immobilized by reaction with Ag to yield AgI. From the perspective of public safety, radioiodine is one of the most important fission products from the uranium fuel because of its large fuel inventory, high volatility, and radiological hazard. Additionally, because of its mobility, the gaseous iodine concentration is a critical parameter for safety assessment and post-accident management. A full kinetic analysis using electrochemical techniques has been performed on the conversion of Ag₂O to (1) AgI and (2) AgBr. (authors)

  15. Bullying Prevention and the Parent Involvement Model

    Science.gov (United States)

    Kolbert, Jered B.; Schultz, Danielle; Crothers, Laura M.

    2014-01-01

    A recent meta-analysis of bullying prevention programs provides support for social-ecological theory, in which parent involvement addressing child bullying behaviors is seen as important in preventing school-based bullying. The purpose of this manuscript is to suggest how Epstein and colleagues' parent involvement model can be used as a…

  16. Model of large pool fires

    Energy Technology Data Exchange (ETDEWEB)

    Fay, J.A. [Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)]. E-mail: jfay@mit.edu

    2006-08-21

    A two zone entrainment model of pool fires is proposed to depict the fluid flow and flame properties of the fire. Consisting of combustion and plume zones, it provides a consistent scheme for developing non-dimensional scaling parameters for correlating and extrapolating pool fire visible flame length, flame tilt, surface emissive power, and fuel evaporation rate. The model is extended to include grey gas thermal radiation from soot particles in the flame zone, accounting for emission and absorption in both optically thin and thick regions. A model of convective heat transfer from the combustion zone to the liquid fuel pool, and from a water substrate to cryogenic fuel pools spreading on water, provides evaporation rates for both adiabatic and non-adiabatic fires. The model is tested against field measurements of large scale pool fires, principally of LNG, and is generally in agreement with experimental values of all variables.
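    The model above correlates visible flame length through non-dimensional scaling parameters built from the fuel evaporation (burning) rate. As an illustration of that kind of correlation, not the paper's own result, the classical Thomas correlation for pool-fire flame length can be sketched as follows (the burning-rate and pool-size values are assumed, roughly representative of a large LNG pool fire):

```python
import math

def thomas_flame_length(D, m_dot, rho_a=1.2, g=9.81):
    """Visible flame length L (m) from the Thomas correlation:
        L/D = 42 * (m'' / (rho_a * sqrt(g*D)))**0.61
    D     : pool diameter (m)
    m_dot : mass burning rate per unit pool area (kg/m^2/s)
    rho_a : ambient air density (kg/m^3)
    The group m''/(rho_a*sqrt(g*D)) is the non-dimensional burning rate."""
    return D * 42.0 * (m_dot / (rho_a * math.sqrt(g * D))) ** 0.61

# Assumed values: 20 m pool, LNG-like burning rate ~0.08 kg/m^2/s.
L = thomas_flame_length(20.0, 0.08)
print(round(L, 1))  # flame length in metres, on the order of 1.5 pool diameters
```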

  17. Model of large pool fires

    International Nuclear Information System (INIS)

    Fay, J.A.

    2006-01-01

    A two zone entrainment model of pool fires is proposed to depict the fluid flow and flame properties of the fire. Consisting of combustion and plume zones, it provides a consistent scheme for developing non-dimensional scaling parameters for correlating and extrapolating pool fire visible flame length, flame tilt, surface emissive power, and fuel evaporation rate. The model is extended to include grey gas thermal radiation from soot particles in the flame zone, accounting for emission and absorption in both optically thin and thick regions. A model of convective heat transfer from the combustion zone to the liquid fuel pool, and from a water substrate to cryogenic fuel pools spreading on water, provides evaporation rates for both adiabatic and non-adiabatic fires. The model is tested against field measurements of large scale pool fires, principally of LNG, and is generally in agreement with experimental values of all variables

  18. Learning models of activities involving interacting objects

    DEFF Research Database (Denmark)

    Manfredotti, Cristina; Pedersen, Kim Steenstrup; Hamilton, Howard J.

    2013-01-01

    We propose the LEMAIO multi-layer framework, which makes use of hierarchical abstraction to learn models for activities involving multiple interacting objects from time sequences of data concerning the individual objects. Experiments in the sea navigation domain yielded learned models that were t...

  19. Modeling and simulation of large HVDC systems

    Energy Technology Data Exchange (ETDEWEB)

    Jin, H.; Sood, V.K.

    1993-01-01

    This paper addresses the complexity of, and the amount of work involved in, preparing simulation data and implementing various converter control schemes, as well as the excessive simulation time required for modelling and simulation of large HVDC systems. The Power Electronic Circuit Analysis program (PECAN) is used to address these problems, and a large HVDC system with two dc links is simulated using PECAN. A benchmark HVDC system is studied to compare the simulation results with those from other packages. The simulation time and results are provided in the paper.

  20. Involving parents from the start: formative evaluation for a large ...

    African Journals Online (AJOL)

    While HIV prevention research conducted among adolescent populations may encounter parental resistance, the active engagement of parents from inception to trial completion may alleviate opposition. In preparation for implementing a large randomised controlled trial (RCT) examining the efficacy of a behavioural ...

  1. Predicted occurrence rate of severe transportation accidents involving large casks

    International Nuclear Information System (INIS)

    Dennis, A.W.

    1978-01-01

    A summary of the results of an investigation of the severities of highway and railroad accidents as they relate to the shipment of large radioactive materials casks is presented. The accident environments considered are fire, impact, crush, immersion, and puncture. For each of these environments, the accident severities and their predicted frequencies of occurrence are presented. These accident environments are presented in tabular and graphic form to allow the reader to evaluate the probabilities of occurrence of the accident parameter severities he selects

  2. A large duplication involving the IHH locus mimics acrocallosal syndrome.

    Science.gov (United States)

    Yuksel-Apak, Memnune; Bögershausen, Nina; Pawlik, Barbara; Li, Yun; Apak, Selcuk; Uyguner, Oya; Milz, Esther; Nürnberg, Gudrun; Karaman, Birsen; Gülgören, Ayan; Grzeschik, Karl-Heinz; Nürnberg, Peter; Kayserili, Hülya; Wollnik, Bernd

    2012-06-01

    Indian hedgehog (Ihh) signaling is a major determinant of various processes during embryonic development and has a pivotal role in embryonic skeletal development. A specific spatial and temporal expression of Ihh within the developing limb buds is essential for accurate digit outgrowth and correct digit number. Although missense mutations in IHH cause brachydactyly type A1, small tandem duplications involving the IHH locus have recently been described in patients with mild syndactyly and craniosynostosis. In contrast, a ∼600-kb deletion 5' of IHH in the doublefoot mouse mutant (Dbf) leads to severe polydactyly without craniosynostosis, but with craniofacial dysmorphism. We now present a patient resembling acrocallosal syndrome (ACS) with extensive polysyndactyly of the hands and feet, craniofacial abnormalities including macrocephaly, agenesis of the corpus callosum, dysplastic and low-set ears, severe hypertelorism and profound psychomotor delay. Single-nucleotide polymorphism (SNP) array copy number analysis identified a ∼900-kb duplication of the IHH locus, which was confirmed by an independent quantitative method. A fetus from a second pregnancy of the mother by a different spouse showed similar craniofacial and limb malformations and the same duplication of the IHH-locus. We defined the exact breakpoints and showed that the duplications are identical tandem duplications in both sibs. No copy number changes were observed in the healthy mother. To our knowledge, this is the first report of a human phenotype similar to the Dbf mutant and strikingly overlapping with ACS that is caused by a copy number variation involving the IHH locus on chromosome 2q35.

  3. Spatial occupancy models for large data sets

    Science.gov (United States)

    Johnson, Devin S.; Conn, Paul B.; Hooten, Mevin B.; Ray, Justina C.; Pond, Bruce A.

    2013-01-01

    Since its development, occupancy modeling has become a popular and useful tool for ecologists wishing to learn about the dynamics of species occurrence over time and space. Such models require presence–absence data to be collected at spatially indexed survey units. However, only recently have researchers recognized the need to correct for spatially induced overdispersion by explicitly accounting for spatial autocorrelation in occupancy probability. Previous efforts to incorporate such autocorrelation have largely focused on logit-normal formulations for occupancy, with spatial autocorrelation induced by a random effect within a hierarchical modeling framework. Although useful, computational time generally limits such an approach to relatively small data sets, and there are often problems with algorithm instability, yielding unsatisfactory results. Further, recent research has revealed a hidden form of multicollinearity in such applications, which may lead to parameter bias if not explicitly addressed. Combining several techniques, we present a unifying hierarchical spatial occupancy model specification that is particularly effective over large spatial extents. This approach employs a probit mixture framework for occupancy and can easily accommodate a reduced-dimensional spatial process to resolve issues with multicollinearity and spatial confounding while improving algorithm convergence. Using open-source software, we demonstrate this new model specification using a case study involving occupancy of caribou (Rangifer tarandus) over a set of 1080 survey units spanning a large contiguous region (108,000 km²) in northern Ontario, Canada. Overall, the combination of a more efficient specification and open-source software allows for a facile and stable implementation of spatial occupancy models for large data sets.
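    In the probit formulation described above, occupancy probability is linked to covariates through the standard normal CDF, and imperfect detection is layered on top. A minimal single-season sketch (the coefficient, covariate, and detection values below are hypothetical, not the caribou study's estimates):

```python
from math import erf, sqrt

def probit(eta):
    """Standard normal CDF, the link function of the probit occupancy model."""
    return 0.5 * (1 + erf(eta / sqrt(2)))

# Hypothetical linear predictor for one survey unit with covariate x.
beta0, beta1, x = -0.5, 1.0, 0.8
psi = probit(beta0 + beta1 * x)               # occupancy probability

# Imperfect detection: per-visit detection probability p over J visits.
p_detect, J = 0.3, 4
p_observed = psi * (1 - (1 - p_detect) ** J)  # P(detected at least once)

print(round(psi, 3), round(p_observed, 3))
```

The full hierarchical model additionally places a (reduced-dimensional) spatial random effect inside the linear predictor; the sketch shows only the link and detection layers.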

  4. Structuring very large domain models

    DEFF Research Database (Denmark)

    Störrle, Harald

    2010-01-01

    View/Viewpoint approaches like IEEE 1471-2000, or Kruchten's 4+1-view model are used to structure software architectures at a high level of granularity. While research has focused on architectural languages and with consistency between multiple views, practical questions such as the structuring a...

  5. Computational Modeling of Large Wildfires: A Roadmap

    KAUST Repository

    Coen, Janice L.; Douglas, Craig C.

    2010-01-01

    Wildland fire behavior, particularly that of large, uncontrolled wildfires, has not been well understood or predicted. Our methodology to simulate this phenomenon uses high-resolution dynamic models made of numerical weather prediction (NWP) models

  6. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  7. Efficient querying of large process model repositories

    NARCIS (Netherlands)

    Jin, Tao; Wang, Jianmin; La Rosa, M.; Hofstede, ter A.H.M.; Wen, Lijie

    2013-01-01

    Recent years have seen an increased uptake of business process management technology in industries. This has resulted in organizations trying to manage large collections of business process models. One of the challenges facing these organizations concerns the retrieval of models from large business

  8. Establishing model credibility involves more than validation

    International Nuclear Information System (INIS)

    Kirchner, T.

    1991-01-01

    One widely used definition of validation is the quantitative testing of a model's performance through comparison of model predictions with independent sets of observations from the system being simulated. The ability to show that model predictions compare well with observations is often thought to be the most rigorous test that can be used to establish credibility for a model in the scientific community. However, such tests are only part of the process used to establish credibility, and in some cases may be either unnecessary or misleading. Naylor and Finger extended the concept of validation to include the establishment of validity for the postulates embodied in the model and the testing of the assumptions used to select postulates for the model. Validity of postulates is established through concurrence by experts in the field of study that the mathematical or conceptual model contains the structural components and mathematical relationships necessary to adequately represent the system with respect to the goals for the model. This extended definition of validation provides for consideration of the structure of the model, not just its performance, in establishing credibility. Evaluation of a simulation model should establish both the correctness of the code and the efficacy of the model within its domain of applicability. (24 refs., 6 figs.)

  9. Modeling human learning involved in car driving

    NARCIS (Netherlands)

    Wewerinke, P.H.

    1994-01-01

    In this paper, car driving is considered at the level of human tracking and maneuvering in the context of other traffic. A model analysis revealed the most salient features determining driving performance and safety. Learning car driving is modelled based on a system theoretical approach and based

  10. Buried Waste Integrated Demonstration stakeholder involvement model

    International Nuclear Information System (INIS)

    Kaupanger, R.M.; Kostelnik, K.M.; Milam, L.M.

    1994-04-01

    The Buried Waste Integrated Demonstration (BWID) is a program funded by the US Department of Energy (DOE) Office of Technology Development. BWID supports the applied research, development, demonstration, and evaluation of a suite of advanced technologies that together form a comprehensive remediation system for the effective and efficient remediation of buried waste. Stakeholder participation in the DOE Environmental Management decision-making process is critical to remediation efforts. Appropriate mechanisms for communication with the public, private sector, regulators, elected officials, and others are being aggressively pursued by BWID to permit informed participation. This document summarizes public outreach efforts during FY-93 and presents a strategy for expanded stakeholder involvement during FY-94

  11. Regularization modeling for large-eddy simulation

    NARCIS (Netherlands)

    Geurts, Bernardus J.; Holm, D.D.

    2003-01-01

    A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of

  12. Models for large superconducting toroidal magnet systems

    International Nuclear Information System (INIS)

    Arendt, F.; Brechna, H.; Erb, J.; Komarek, P.; Krauth, H.; Maurer, W.

    1976-01-01

    Prior to the design of large GJ toroidal magnet systems it is appropriate to procure small-scale models, which can simulate their pertinent properties and allow investigation of the relevant phenomena. The important feature of a model is to show under which circumstances the system performance can be extrapolated to large magnets. Based on parameters such as the maximum magnetic field, the current density, and the maximum tolerable magneto-mechanical stresses, a simple method of designing model magnets is presented. It is shown how pertinent design parameters change when the toroidal dimensions are altered. In addition, some conductor cost estimates are given based on reactor power output and wall loading

  13. Constituent models and large transverse momentum reactions

    International Nuclear Information System (INIS)

    Brodsky, S.J.

    1975-01-01

    The discussion of constituent models and large transverse momentum reactions includes the structure of hard scattering models, dimensional counting rules for large transverse momentum reactions, dimensional counting and exclusive processes, the deuteron form factor, applications to inclusive reactions, predictions for meson and photon beams, the charge-cubed test for the e±p → e±γX asymmetry, the quasi-elastic peak in inclusive hadronic reactions, correlations, and the multiplicity bump at large transverse momentum. Also covered are the partition method for bound state calculations, proofs of dimensional counting, minimal neutralization and quark-quark scattering, the development of the constituent interchange model, and the A dependence of high transverse momentum reactions

  14. Large Mammalian Animal Models of Heart Disease

    Directory of Open Access Journals (Sweden)

    Paula Camacho

    2016-10-01

    Full Text Available Due to the biological complexity of the cardiovascular system, animal models are an urgent pre-clinical need for advancing our knowledge of cardiovascular disease and for exploring new drugs to repair the damaged heart. Ideally, a model system should be inexpensive, easily manipulated, reproducible, biologically representative of human disease, and ethically sound. Although a larger animal model is more expensive and difficult to manipulate, its genetic, structural, functional, and even disease similarities to humans make it an ideal model to consider first. This review presents the commonly used large animals—dog, sheep, pig, and non-human primates—while less-used large animals—cows, horses—are excluded. The review introduces unique points for each species regarding its biological properties, degree of susceptibility to developing certain types of heart disease, and methodology for inducing those conditions. For example, dogs barely develop myocardial infarction, while dilated cardiomyopathy develops quite often. Based on the similarities of each species to the human, model selection may first consider non-human primates—pig, sheep, then dog—but it also depends on other factors, for example, purpose, funding, ethics, and policy. We hope this review can serve as a basic outline of large animal models for cardiovascular researchers and clinicians.

  15. Managing large-scale models: DBS

    International Nuclear Information System (INIS)

    1981-05-01

    A set of fundamental management tools for developing and operating a large-scale model and data base system is presented. Experience in operating and developing a large-scale computerized system shows that the only reasonable way to gain strong management control of such a system is to implement appropriate controls and procedures. Chapter I discusses the purpose of the book. Chapter II classifies a broad range of generic management problems into three groups: documentation, operations, and maintenance. First, system problems are identified, then solutions for gaining management control are discussed. Chapters III, IV, and V present practical methods for dealing with these problems. These methods were developed for managing SEAS but have general application for large-scale models and data bases

  16. Modelling and control of large cryogenic refrigerator

    International Nuclear Information System (INIS)

    Bonne, Francois

    2014-01-01

    This manuscript is concerned with both the modeling and the derivation of control schemes for large cryogenic refrigerators. The particular case of refrigerators subjected to highly variable pulsed heat loads is studied. A model of each object that normally composes a large cryo-refrigerator is proposed, together with the methodology for gathering object models into the model of a subsystem. The manuscript also shows how to obtain a linear equivalent model of the subsystem. Based on the derived models, advanced control schemes are proposed. Precisely, a linear quadratic controller for the warm compression station, working with both two- and three-pressure states, is derived, and a constrained predictive controller for the cold-box is obtained. The particularity of these control schemes is that they fit the computing and data storage capabilities of Programmable Logic Controllers (PLCs), which are widely used in industry. The open-loop model prediction capability is assessed using experimental data. The developed control schemes are validated in simulation and experimentally on the 400W1.8K SBT cryogenic test facility and on the CERN LHC warm compression station. (author) [fr

  17. Iron Malabsorption in a Patient With Large Cell Lymphoma Involving the Duodenum

    Science.gov (United States)

    1992-01-01

    The chest radiograph in May demonstrated an anterior mediastinal mass. The presenting symptoms of lymphomas involving the small bowel mimic those of celiac disease. Iron malabsorption compounded the anemia in a patient with diffuse large cell lymphoma involving the duodenum, whereas the malabsorption seen in celiac disease is reversible by the institution of a gluten-free diet. A rise in serum iron (usually within 2-3 h) is expected in patients who are iron deficient and have normal absorption; its absence suggests malabsorptive etiologies (e.g., celiac disease, pancreatic insufficiency).

  18. Isolated cutaneous involvement in a child with nodal anaplastic large cell lymphoma

    Directory of Open Access Journals (Sweden)

    Vibhu Mendiratta

    2016-01-01

    Full Text Available Non-Hodgkin lymphoma is a common childhood T-cell and B-cell neoplasm that originates primarily from lymphoid tissue. Cutaneous involvement can be in the form of a primary extranodal lymphoma, or secondary to metastasis from a non-cutaneous location. The latter is uncommon, and isolated cutaneous involvement is rarely reported. We report a case of isolated secondary cutaneous involvement from nodal anaplastic large cell lymphoma (CD30+ and ALK+ in a 7-year-old boy who was on chemotherapy. This case is reported for its unusual clinical presentation as an acute febrile, generalized papulonodular eruption that mimicked deep fungal infection, with the absence of other foci of systemic metastasis.

  19. ALK-positive anaplastic large cell lymphoma with soft tissue involvement in a young woman

    Directory of Open Access Journals (Sweden)

    Gao KH

    2016-07-01

    Full Text Available Kehai Gao, Hongtao Li, Caihong Huang, Huazhuang Li, Jun Fang, Chen Tian Department of Orthopaedics, Yidu Central Hospital, Shandong, People’s Republic of China Introduction: Anaplastic large cell lymphoma (ALCL is a type of non-Hodgkin lymphoma with strong expression of CD30. ALCL can sometimes involve the bone marrow, and in advanced stages it can produce destructive extranodal lesions. But anaplastic lymphoma kinase-positive (ALK+ ALCL with soft tissue involvement is very rare. Case report: A 35-year-old woman presented with waist pain of over 1 month’s duration. Biopsy of the soft tissue lesions showed that the cells were positive for ALK-1, CD30, TIA-1, GranzymeB, CD4, CD8, and Ki67 (90%+ and negative for CD3, CD5, CD20, CD10, cytokeratin (CK, TdT, HMB-45, epithelial membrane antigen (EMA, and pan-CK, which identified ALCL. After six cycles of the Hyper-CVAD/MA regimen, she achieved partial remission. Three months later, she died due to disease progression. Conclusion: This case illustrates an unusual presentation of ALCL in soft tissue with a poor response to chemotherapy. Because of the tendency for rapid progression, ALCL in young adults with extranodal lesions is often treated with high-grade chemotherapy, such as Hyper-CVAD/MA. Keywords: anaplastic large cell lymphoma, ALK+, soft tissue involvement, Hyper-CVAD/MA

  20. Secondary pancreatic involvement by a diffuse large B-cell lymphoma presenting as acute pancreatitis

    Institute of Scientific and Technical Information of China (English)

    M Wasif Saif; Sapna Khubchandani; Marek Walczak

    2007-01-01

    Diffuse large B-cell lymphoma is the most common type of non-Hodgkin's lymphoma. More than 50% of patients have some site of extra-nodal involvement at diagnosis, including the gastrointestinal tract and bone marrow. However, a diffuse large B-cell lymphoma presenting as acute pancreatitis is rare. A 57-year-old female presented with abdominal pain and matted lymph nodes in her axilla. She was admitted with a diagnosis of acute pancreatitis. Abdominal computed tomography (CT) scan showed a diffusely enlarged pancreas due to an infiltrative neoplasm, with peripancreatic lymphadenopathy. Biopsy of the axillary mass revealed a large B-cell lymphoma. The patient was classified as stage IV, based on the Ann Arbor Classification, and as having a high-risk lymphoma, based on the International Prognostic Index. She was started on chemotherapy with CHOP (cyclophosphamide, doxorubicin, vincristine and prednisone). Within a week after chemotherapy, the patient's abdominal pain resolved. Follow-up CT scan of the abdomen revealed a marked decrease in the size of the pancreas and the peripancreatic lymphadenopathy. A literature search revealed only seven cases of primary involvement of the pancreas in B-cell lymphoma presenting as acute pancreatitis, and only one published case of secondary pancreatic involvement by B-cell lymphoma presenting as acute pancreatitis. Our case appears to be the second report of such a manifestation. Both cases responded well to chemotherapy.

  1. Large-scale modelling of neuronal systems

    International Nuclear Information System (INIS)

    Castellani, G.; Verondini, E.; Giampieri, E.; Bersani, F.; Remondini, D.; Milanesi, L.; Zironi, I.

    2009-01-01

    The brain is, without any doubt, the most complex system of the human body. Its complexity is due in part to the extremely high number of neurons, as well as the huge number of synapses connecting them. Each neuron is capable of performing complex tasks, like learning and memorizing a large class of patterns. The simulation of large neuronal systems is challenging for both technological and computational reasons, and can open new perspectives for the comprehension of brain functioning. A well-known and widely accepted model of bidirectional synaptic plasticity, the BCM model, is stated by a differential equation approach based on bistability and selectivity properties. We have modified the BCM model, extending it from a single-neuron to a whole-network model. This new model is capable of generating interesting network topologies starting from a small number of local parameters describing the interaction between incoming and outgoing links from each neuron. We have characterized this model in terms of complex network theory, showing how this learning rule can support network generation.
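
    The BCM rule referred to in this record is conventionally written as a pair of coupled ODEs: a synaptic weight that potentiates when postsynaptic activity exceeds a sliding threshold and depresses below it, and a threshold that tracks the squared activity. The sketch below is not the authors' code; it integrates the standard single-neuron form with forward Euler, and all parameter values are illustrative assumptions.

    ```python
    # Minimal single-neuron BCM plasticity sketch (forward Euler).
    # dw/dt = eta * x * y * (y - theta);  dtheta/dt = (y^2 - theta) / tau.

    def simulate_bcm(x=1.0, w0=0.5, theta0=0.1, dt=0.01, steps=10000,
                     eta=0.1, tau=1.0):
        """Integrate the BCM weight and sliding threshold; return (w, theta)."""
        w, theta = w0, theta0
        for _ in range(steps):
            y = w * x                          # linear neuron response
            w += dt * eta * x * y * (y - theta)
            theta += dt * (y * y - theta) / tau
        return w, theta

    w, theta = simulate_bcm()
    # The nontrivial fixed point satisfies theta = y^2 and y = theta,
    # so for x = 1 the dynamics settle toward w = theta = 1.
    ```

    With a fast threshold (small tau relative to the weight dynamics) the system relaxes onto the slow manifold theta ≈ y², which is what makes the fixed point selective and stable.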

  2. Computational Modeling of Large Wildfires: A Roadmap

    KAUST Repository

    Coen, Janice L.

    2010-08-01

    Wildland fire behavior, particularly that of large, uncontrolled wildfires, has not been well understood or predicted. Our methodology to simulate this phenomenon uses high-resolution dynamic models made of numerical weather prediction (NWP) models coupled to fire behavior models to simulate fire behavior. NWP models are capable of modeling very high resolution (< 100 m) atmospheric flows. The wildland fire component is based upon semi-empirical formulas for fireline rate of spread, post-frontal heat release, and a canopy fire. The fire behavior is coupled to the atmospheric model such that low level winds drive the spread of the surface fire, which in turn releases sensible heat, latent heat, and smoke fluxes into the lower atmosphere, feeding back to affect the winds directing the fire. These coupled dynamic models capture the rapid spread downwind, flank runs up canyons, bifurcations of the fire into two heads, and rough agreement in area, shape, and direction of spread at periods for which fire location data is available. Yet, intriguing computational science questions arise in applying such models in a predictive manner, including physical processes that span a vast range of scales, processes such as spotting that cannot be modeled deterministically, estimating the consequences of uncertainty, the efforts to steer simulations with field data ("data assimilation"), lingering issues with short term forecasting of weather that may show skill only on the order of a few hours, and the difficulty of gathering pertinent data for verification and initialization in a dangerous environment. © 2010 IEEE.

  3. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications

  4. On spinfoam models in large spin regime

    International Nuclear Information System (INIS)

    Han, Muxin

    2014-01-01

    We study the semiclassical behavior of the Lorentzian Engle–Pereira–Rovelli–Livine (EPRL) spinfoam model, taking into account the sum over spins in the large spin regime. We employ the method of stationary phase analysis with parameters and the so-called almost-analytic machinery in order to find the asymptotic behavior of the contributions from all possible large spin configurations in the spinfoam model. The spins contributing to the sum are written as J_f = λj_f, where λ is a large parameter, resulting in an asymptotic expansion via stationary phase approximation. The analysis shows that, at least for simplicial Lorentzian geometries (as spinfoam critical configurations), they contribute the leading order approximation of the spinfoam amplitude only when their deficit angles satisfy γΘ̊_f ≤ λ^(−1/2) mod 4πZ. Our analysis results in a curvature expansion of the semiclassical low energy effective action from the spinfoam model, where the UV modifications of Einstein gravity appear as subleading high-curvature corrections. (paper)

  5. Abnormal binding and disruption in large scale networks involved in human partial seizures

    Directory of Open Access Journals (Sweden)

    Bartolomei Fabrice

    2013-12-01

    Full Text Available There is a marked increase in the number of electrophysiological and neuroimaging studies dealing with large scale brain connectivity in the epileptic brain. Our view of the epileptogenic process has evolved considerably over the last twenty years, from the historical concept of the “epileptic focus” to a more complex description of “epileptogenic networks” involved in the genesis and “propagation” of epileptic activities. In particular, a large number of studies have been dedicated to the analysis of intracerebral EEG signals to characterize the dynamics of interactions between brain areas during temporal lobe seizures. These studies have reported that large scale functional connectivity is dramatically altered during seizures, particularly during temporal lobe seizure genesis and development. Dramatic changes in neural synchrony provoked by epileptic rhythms are also responsible for the production of ictal symptoms and changes in the patient’s behaviour, such as automatisms, emotional changes or alteration of consciousness. Besides these studies dedicated to seizures, large-scale network connectivity during the interictal state has also been investigated, not only to define biomarkers of epileptogenicity but also to better understand the cognitive impairments observed between seizures.

  6. Investigation of large α production in reactions involving weakly bound 7Li

    Science.gov (United States)

    Pandit, S. K.; Shrivastava, A.; Mahata, K.; Parkar, V. V.; Palit, R.; Keeley, N.; Rout, P. C.; Kumar, A.; Ramachandran, K.; Bhattacharyya, S.; Nanal, V.; Palshetkar, C. S.; Nag, T. N.; Gupta, Shilpi; Biswas, S.; Saha, S.; Sethi, J.; Singh, P.; Chatterjee, A.; Kailas, S.

    2017-10-01

    The origin of the large α -particle production cross sections in systems involving weakly bound 7Li projectiles has been investigated by measuring the cross sections of all possible fragment-capture as well as complete fusion using the particle-γ coincidence, in-beam, and off-beam γ -ray counting techniques for the 7Li+93Nb system at near Coulomb barrier energies. Almost all of the inclusive α -particle yield has been accounted for. While the t -capture mechanism is found to be dominant (˜70 % ), compound nuclear evaporation and breakup processes contribute ˜15 % each to the inclusive α -particle production in the measured energy range. Systematic behavior of the t capture and inclusive α cross sections for reactions involving 7Li over a wide mass range is also reported.

  7. Mathematical modeling of large floating roof reservoir temperature arena

    Directory of Open Access Journals (Sweden)

    Liu Yang

    2018-03-01

    Full Text Available The current study simplifies the relevant components of a large floating roof tank and models its three-dimensional temperature field. The heat transfer involves exchange within the hot fluid in the oil tank, between the hot fluid and the tank wall, and between the tank wall and the external environment. A mathematical model of heat transfer and oil flow in the tank simulates the temperature field of the stored oil. The oil temperature field of the large floating roof tank is obtained by numerical simulation; the dynamics of the central temperature over time are mapped, and the axial and radial temperature distributions of the storage tank are analyzed. The location of the low-temperature region in the tank is determined from the temperature distribution through the depth of the stored oil. Finally, the calculated results are compared with field test data, which validates the model.
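
    The conduction part of a tank thermal problem like the one above can be illustrated, in a drastically reduced one-dimensional form, by an explicit finite-difference scheme. This is not the paper's model: the slab geometry, diffusivity, temperatures, and boundary conditions below are all illustrative assumptions.

    ```python
    # Hypothetical 1D analogue: transient heat conduction through an oil
    # layer, hot interior on one side, fixed ambient temperature on the
    # other (explicit FTCS finite differences).

    def cool_oil_layer(n=21, length=1.0, alpha=1e-4, t_hot=60.0, t_amb=0.0,
                       dt=10.0, steps=2000):
        """Return the temperature profile after `steps` time steps.

        Stability of the explicit scheme requires alpha*dt/dx**2 <= 0.5.
        """
        dx = length / (n - 1)
        r = alpha * dt / dx ** 2
        assert r <= 0.5, "explicit scheme unstable"
        temp = [t_hot] * n
        temp[-1] = t_amb                  # outer wall held at ambient
        for _ in range(steps):
            new = temp[:]
            for i in range(1, n - 1):
                new[i] = temp[i] + r * (temp[i + 1] - 2 * temp[i] + temp[i - 1])
            temp = new
        return temp

    profile = cool_oil_layer()
    # The interior stays warmest; temperature falls monotonically toward
    # the cold wall, approaching the linear steady-state profile.
    ```

    Because each updated value is a convex combination of its neighbours when r ≤ 0.5, the scheme preserves the monotone shape of the profile, which is the discrete counterpart of the maximum principle for the heat equation.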

  8. Five challenges for stochastic epidemic models involving global transmission

    Directory of Open Access Journals (Sweden)

    Tom Britton

    2015-03-01

    Full Text Available The most basic stochastic epidemic models are those involving global transmission, meaning that infection rates depend only on the type and state of the individuals involved, and not on their location in the population. Simple as they are, there are still several open problems for such models. For example, when will such an epidemic go extinct, and with what probability (the answers depending on whether the population is fixed, changing, or growing)? How can a model be defined that explains the sometimes-observed scenario of frequent mid-sized epidemic outbreaks? How can evolution of the infectious agent's transmission rates be modelled and fitted to data in a robust way?
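
    A minimal member of the "global transmission" class discussed in this record is the Markovian SIR model, in which the infection rate depends only on the counts of susceptibles and infectives. The sketch below simulates the embedded jump chain of the Gillespie algorithm to obtain final outbreak sizes; all parameter values are illustrative assumptions.

    ```python
    # Stochastic SIR with global (mass-action) transmission: each event is
    # either an infection, with rate beta*S*I/N, or a recovery, with rate
    # gamma*I. Only the jump chain is needed for the final size.
    import random

    def gillespie_sir(n=1000, i0=5, beta=2.0, gamma=1.0, rng=None):
        """Return the final number recovered (outbreak size); R0 = beta/gamma."""
        rng = rng or random.Random()
        s, i, r = n - i0, i0, 0
        while i > 0:
            infect = beta * s * i / n
            recover = gamma * i
            if rng.random() < infect / (infect + recover):
                s, i = s - 1, i + 1       # infection event
            else:
                i, r = i - 1, r + 1       # recovery event
        return r

    rng = random.Random(42)
    sizes = [gillespie_sir(rng=rng) for _ in range(20)]
    # With R0 = 2 > 1, runs either go extinct early or infect a large
    # fraction of the population: the bimodal final-size distribution.
    ```

    The extinction-versus-major-outbreak dichotomy seen here is exactly the first open question in the abstract: with a few initial infectives the early epidemic behaves like a branching process, and the extinction probability is approximately (gamma/beta) raised to the number of initial cases.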

  9. Black holes from large N singlet models

    Science.gov (United States)

    Amado, Irene; Sundborg, Bo; Thorlacius, Larus; Wintergerst, Nico

    2018-03-01

    The emergent nature of spacetime geometry and black holes can be directly probed in simple holographic duals of higher spin gravity and tensionless string theory. To this end, we study time dependent thermal correlation functions of gauge invariant observables in suitably chosen free large N gauge theories. At low temperature and on short time scales the correlation functions encode propagation through an approximate AdS spacetime while interesting departures emerge at high temperature and on longer time scales. This includes the existence of evanescent modes and the exponential decay of time dependent boundary correlations, both of which are well known indicators of bulk black holes in AdS/CFT. In addition, a new time scale emerges after which the correlation functions return to a bulk thermal AdS form up to an overall temperature dependent normalization. A corresponding length scale was seen in equal time correlation functions in the same models in our earlier work.

  10. On the modelling of microsegregation in steels involving thermodynamic databases

    International Nuclear Information System (INIS)

    You, D; Bernhard, C; Michelic, S; Wieser, G; Presoly, P

    2016-01-01

    A microsegregation model involving a thermodynamic database, based on Ohnaka's model, is proposed. In the model, the thermodynamic database is used for the equilibrium calculation. Multicomponent alloy effects on partition coefficients and equilibrium temperatures are accounted for. Microsegregation and partition coefficients calculated using different databases exhibit significant differences. The segregated concentrations predicted using the optimized database are in good agreement with the measured inter-dendritic concentrations. (paper)
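
    For orientation, the no-back-diffusion limit of microsegregation is the classical Scheil-Gulliver equation, which Ohnaka-type models interpolate against the equilibrium lever rule via a back-diffusion parameter. The sketch below contrasts the two limits for a constant partition coefficient; this is textbook material, not the paper's database-coupled model, and the solute values are illustrative.

    ```python
    # Scheil-Gulliver limit: C_L = C_0 * (1 - f_s)**(k - 1), i.e. the liquid
    # enriches without bound as the solid fraction f_s approaches 1.

    def scheil_liquid_conc(c0, k, fs):
        """Liquid concentration after solid fraction fs has formed (no
        back-diffusion in the solid)."""
        return c0 * (1.0 - fs) ** (k - 1.0)

    def lever_liquid_conc(c0, k, fs):
        """Equilibrium (complete back-diffusion) lever-rule counterpart."""
        return c0 / (1.0 - fs * (1.0 - k))

    # Illustrative solute: k = 0.2, nominal concentration 0.1 wt%.
    c_scheil = scheil_liquid_conc(0.1, 0.2, 0.9)
    c_lever = lever_liquid_conc(0.1, 0.2, 0.9)
    # Scheil predicts much stronger inter-dendritic enrichment than the
    # lever rule at high solid fraction.
    ```

    The gap between the two curves at high solid fraction is where the choice of database and back-diffusion treatment matters most, which is consistent with the abstract's observation that different databases give significantly different segregated concentrations.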

  11. Extending SME to Handle Large-Scale Cognitive Modeling.

    Science.gov (United States)

    Forbus, Kenneth D; Ferguson, Ronald W; Lovett, Andrew; Gentner, Dedre

    2017-07-01

    Analogy and similarity are central phenomena in human cognition, involved in processes ranging from visual perception to conceptual change. To capture this centrality, a model of comparison must be able to integrate with other processes and handle the size and complexity of the representations required by the tasks being modeled. This paper describes extensions to the Structure-Mapping Engine (SME) since its inception in 1986 that have increased its scope of operation. We first review the basic SME algorithm, describe psychological evidence for SME as a process model, and summarize its role in simulating similarity-based retrieval and generalization. Then we describe five techniques now incorporated into SME that have enabled it to tackle large-scale modeling tasks: (a) Greedy merging rapidly constructs one or more best interpretations of a match in polynomial time: O(n² log n); (b) Incremental operation enables mappings to be extended as new information is retrieved or derived about the base or target, to model situations where information in a task is updated over time; (c) Ubiquitous predicates model the varying degrees to which items may suggest alignment; (d) Structural evaluation of analogical inferences models aspects of plausibility judgments; (e) Match filters enable large-scale task models to communicate constraints to SME to influence the mapping process. We illustrate via examples from published studies how these enable it to capture a broader range of psychological phenomena than before. Copyright © 2016 Cognitive Science Society, Inc.
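
    The greedy-merging idea in (a) can be caricatured as follows. This is an illustrative sketch, not the actual SME algorithm: structural consistency is reduced here to a one-to-one constraint between base and target items, whereas SME's full consistency checks are what give the reported O(n² log n) bound.

    ```python
    # Toy greedy merge: local match hypotheses are sorted by score and
    # added to the global mapping whenever they are consistent, here
    # meaning each base and target item is mapped at most once.

    def greedy_merge(hypotheses):
        """hypotheses: list of (score, base_item, target_item) tuples."""
        mapped_base, mapped_target = set(), set()
        mapping, total = [], 0.0
        for score, base, target in sorted(hypotheses, reverse=True):
            if base not in mapped_base and target not in mapped_target:
                mapped_base.add(base)
                mapped_target.add(target)
                mapping.append((base, target))
                total += score
        return mapping, total

    # Hypothetical solar-system/atom correspondences with made-up scores.
    hyps = [(0.9, "sun", "nucleus"), (0.8, "planet", "electron"),
            (0.4, "sun", "electron"), (0.3, "planet", "nucleus")]
    mapping, score = greedy_merge(hyps)
    # Picks sun->nucleus and planet->electron; the weaker cross matches
    # are excluded by the one-to-one constraint.
    ```

    The sort dominates this toy version at O(n log n); the extra factor of n in SME's bound comes from the structural consistency checks performed as each hypothesis is merged.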

  12. Comparing the performance of SIMD computers by running large air pollution models

    DEFF Research Database (Denmark)

    Brown, J.; Hansen, Per Christian; Wasniewski, J.

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on these computers. Using a realistic large-scale model, we gained detailed insight into the performance of the computers involved when used to solve large-scale scientific problems that involve several types of numerical computations. The computers used in our study are the Connection Machines CM-200 and CM-5, and the MasPar MP-2216

  13. Questionnaire: involved actors in large disused components management - Summary Of Responses To The Questionnaire

    International Nuclear Information System (INIS)

    2012-01-01

    The aim of the Questionnaire is to establish an overview of the various bodies [Actors] that have responsibilities for, or input to, the issue of large component decommissioning. In answering, the intent is to cover the overall organisation and those parts that have most relevance to large components. The answers should reflect the areas from site operations to decommissioning, as well as the wider issue of disposal at another location. The Questionnaire covers the following points: 1 - What is the country's (institutional) structure for decommissioning? 2 - Who does what, and where do the responsibilities lie? 3 - Which bodies have responsibility for on-site safety regulation, discharges and disposal? 4 - Which body(s) owns the facilities? 5 - Describe the responsibilities for funding of the decommissioning plan and the disposal plan; are they one and the same body? Whilst there are differences between countries, there are some common threads. Regulation is through the state, though the number of regulators involved may vary. In summary, the IAEA principles concerning independence of the regulatory body are followed. Funding arrangements vary, but there are plans. Similarly, ownership of facilities is a mix of state and private. Some systems require a separate decommissioning licence, with Spain having the clearest demarcation of responsibilities for the decommissioning phase and waste management responsibilities

  14. Compilation of information on uncertainties involved in deposition modeling

    International Nuclear Information System (INIS)

    Lewellen, W.S.; Varma, A.K.; Sheng, Y.P.

    1985-04-01

    The current generation of dispersion models contains very simple parameterizations of deposition processes. The analysis here looks at the physical mechanisms governing these processes in an attempt to see if more valid parameterizations are available and what level of uncertainty is involved in either these simple parameterizations or any more advanced parameterization. The report is composed of three parts. The first, on dry deposition model sensitivity, provides an estimate of the uncertainty existing in current estimates of the deposition velocity due to uncertainties in independent variables such as meteorological stability, particle size, surface chemical reactivity and canopy structure. The range of uncertainty estimated for an appropriate dry deposition velocity for a plume generated by a nuclear power plant accident is three orders of magnitude. The second part discusses the uncertainties involved in precipitation scavenging rates for effluents resulting from a nuclear reactor accident. The conclusion is that major uncertainties are involved both as a result of the natural variability of the atmospheric precipitation process and due to our incomplete understanding of the underlying process. The third part involves a review of the important problems associated with modeling the interaction between the atmosphere and a forest. It gives an indication of the magnitude of the problem involved in modeling dry deposition in such environments. Separate analytics have been done for each section and are contained in the EDB

  15. Pituitary and adrenal involvement in diffuse large B-cell lymphoma, with recovery of their function after chemotherapy

    OpenAIRE

    Nakashima, Yasuhiro; Shiratsuchi, Motoaki; Abe, Ichiro; Matsuda, Yayoi; Miyata, Noriyuki; Ohno, Hirofumi; Ikeda, Motohiko; Matsushima, Takamitsu; Nomura, Masatoshi; Takayanagi, Ryoichi

    2013-01-01

    Background Diffuse large B-cell lymphoma sometimes involves the endocrine organs, but involvement of both the pituitary and adrenal glands is extremely rare. Involvement of these structures can lead to hypopituitarism and adrenal insufficiency, and subsequent recovery of their function is rarely seen. The present report describes an extremely rare case of pituitary and adrenal diffuse large B-cell lymphoma presenting with hypopituitarism and adrenal insufficiency with subsequent recovery of p...

  16. Large scale injection test (LASGIT) modelling

    International Nuclear Information System (INIS)

    Arnedo, D.; Olivella, S.; Alonso, E.E.

    2010-01-01

    Document available in extended abstract form only. With the objective of understanding gas flow processes through clay barriers in radioactive waste disposal schemes, the Lasgit in situ experiment was planned and is currently in progress. Modelling the experiment will permit a better understanding of the responses, confirm hypotheses about mechanisms and processes, and inform the design of future experiments. The experiment and modelling activities are included in the project FORGE (FP7). The in situ large scale injection test Lasgit is currently being performed at the Aespoe Hard Rock Laboratory by SKB and BGS. A schematic layout of the test is shown. The deposition hole follows the KBS3 scheme. A copper canister is installed on the axis of the deposition hole, surrounded by blocks of highly compacted MX-80 bentonite. A concrete plug is placed at the top of the buffer. A metallic lid anchored to the surrounding host rock is included in order to prevent vertical movements of the whole system during gas injection stages (high gas injection pressures are expected to be reached). Hydration of the buffer material is achieved by injecting water through filter mats, two placed at the rock walls and two at the interfaces between bentonite blocks. Water is also injected through the 12 canister filters. Gas injection stages are performed by injecting gas into some of the canister injection filters. Since the water pressure and the stresses (swelling pressure development) will be high during gas injection, it is necessary to inject at high gas pressures. This implies mechanical couplings, as gas penetrates once the gas entry pressure is reached and may produce deformations which in turn lead to permeability increments. A 3D hydro-mechanical numerical model of the test using CODE-BRIGHT is presented. The domain considered for the modelling is shown.
The materials considered in the simulation are the MX-80 bentonite blocks (cylinders and rings), the concrete plug

  17. Environmental Management Model for Road Maintenance Operation Involving Community Participation

    Science.gov (United States)

    Triyono, A. R. H.; Setyawan, A.; Sobriyah; Setiono, P.

    2017-07-01

    Public expectations in Central Java for infrastructure provision, especially roads, are very high, as reflected in the number of complaints and expectations expressed via Twitter, Short Message Service (SMS), e-mail, and public reports in various media. The Highways Department of Central Java province therefore requires a model of environmental management for routine road maintenance that involves the community, so that roads remain in representative condition and serve road users safely and comfortably. This study used a survey method with SEM and SWOT analysis, with the latent independent variables (X): community participation in road regulation, development, construction, and supervision (PSM); public behavior in road utilization (PMJ); provincial road service (PJP); safety on provincial roads (KJP); and integrated management system (SMT); and the latent dependent variable (Y): routine maintenance of provincial roads integrated with an environmental management system and involving community participation (MML). The results showed that routine road maintenance in Central Java province has yet to implement environmental management involving the community; an environmental management model was therefore developed, with the following results: H1: community participation (PSM) has a positive influence on the environmental management model (MML); H2: public behavior in road utilization (PMJ) has a positive effect on MML; H3: provincial road service (PJP) has a positive effect on MML; H4: safety on provincial roads (KJP) has a positive effect on MML; H5: the integrated management system (SMT) has a positive influence on MML.
From the analysis, a model was formulated describing the relationship/influence of the independent variables PSM, PMJ, PJP, KJP, and SMT on the dependent variable

  18. Modeling of modification experiments involving neutral-gas release

    International Nuclear Information System (INIS)

    Bernhardt, P.A.

    1983-01-01

    Many experiments involve the injection of neutral gases into the upper atmosphere. Examples are critical velocity experiments, MHD wave generation, ionospheric hole production, plasma striation formation, and ion tracing. Many of these experiments are discussed in other sessions of the Active Experiments Conference. This paper limits its discussion to: (1) the modeling of the neutral gas dynamics after injection, (2) the subsequent formation of ionospheric holes, and (3) the use of such holes as experimental tools

  19. Fires involving radioactive materials : transference model; operative recommendations

    International Nuclear Information System (INIS)

    Rodriguez, C.E.; Puntarulo, L.J.; Canibano, J.A.

    1988-01-01

    In all aspects related to nuclear activity, the occurrence of an explosion, fire, or burst-type accident, with or without victims, is directly related to the characteristics of the site. The present work analyses the different parameters involved, describing a transference model and recommendations for evaluating and controlling the radiological risk to firemen. Special emphasis is placed on the measurement of the variables existing in this kind of operations

  20. Modeling, Analysis, and Optimization Issues for Large Space Structures

    Science.gov (United States)

    Pinson, L. D. (Compiler); Amos, A. K. (Compiler); Venkayya, V. B. (Compiler)

    1983-01-01

    Topics concerning the modeling, analysis, and optimization of large space structures are discussed including structure-control interaction, structural and structural dynamics modeling, thermal analysis, testing, and design.

  1. Patterns of failure of diffuse large B-cell lymphoma patients after involved-site radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Holzhaeuser, Eva; Berlin, Maximilian; Bezold, Thomas; Mayer, Arnulf; Schmidberger, Heinz [University Medical Center Mainz, Department of Radiation Oncology and Radiotherapy, Mainz (Germany); Wollschlaeger, Daniel [University Medical Center Mainz, Institute for Medical Biostatistics, Epidemiology and Informatics, Mainz (Germany); Hess, Georg [University Medical Center Mainz, Department of Internal Medicine, Mainz (Germany)

    2017-12-15

    Radiotherapy (RT) in combination with chemoimmunotherapy is highly efficient in the treatment of diffuse large B-cell lymphoma (DLBCL). This retrospective analysis evaluated the efficacy of the treatment volume and the dose concept of involved-site RT (ISRT). We identified 60 histologically confirmed stage I-IV DLBCL patients treated with multimodal cytotoxic chemoimmunotherapy and followed by consolidative ISRT from 2005-2015. Progression-free survival (PFS) and overall survival (OS) were estimated by Kaplan-Meier method. Univariate analyses were performed by log-rank test and Mann-Whitney U-test. After initial chemoimmunotherapy (mostly R-CHOP; rituximab, cyclophosphamide, doxorubicin, vincristine, and prednisolone), 19 (36%) patients achieved complete response (CR), 34 (64%) partial response (PR) or less. Excluded were 7 (12%) patients with progressive disease after chemoimmunotherapy. All patients underwent ISRT with a dose of 40 Gy. After a median follow-up of 44 months, 79% of the patients remained disease free, while 21% presented with failure, progressive systemic disease, or death. All patients who achieved CR after chemoimmunotherapy remained in CR. Of the patients achieving PR after chemotherapy only 2 failed at the initial site within the ISRT volume. No marginal relapse was observed. Ann Arbor clinical stage I/II showed significantly improved PFS compared to stage III/IV (93% vs 65%; p ≤ 0.021). International Prognostic Index (IPI) score of 0 or 1 compared to 2-5 has been associated with significantly increased PFS (100% vs 70%; p ≤ 0.031). Postchemoimmunotherapy status of CR compared to PR was associated with significantly increased PFS (100% vs 68%; p ≤ 0.004) and OS (100% vs 82%; p ≤ 0.026). Only 3 of 53 patients developed grade II late side effects, whereas grade III or IV side effects have not been observed. These data suggest that a reduction of the RT treatment volume from involved-field (IF) to involved-site (IS) is sufficient because

  2. Small scale models equal large scale savings

    International Nuclear Information System (INIS)

    Lee, R.; Segroves, R.

    1994-01-01

    A physical scale model of a reactor is a tool which can be used to reduce the time spent by workers in the containment during an outage and thus to reduce the radiation dose and save money. The model can be used for worker orientation, and for planning maintenance, modifications, manpower deployment and outage activities. Examples of the use of models are presented. These were for the La Salle 2 and Dresden 1 and 2 BWRs. In each case cost-effectiveness and exposure reduction due to the use of a scale model is demonstrated. (UK)

  3. Random Coefficient Logit Model for Large Datasets

    NARCIS (Netherlands)

    C. Hernández-Mireles (Carlos); D. Fok (Dennis)

    2010-01-01

    We present an approach for analyzing market shares and product price elasticities based on large datasets containing aggregate sales data for many products, several markets, and relatively long time periods. We consider the recently proposed Bayesian approach of Jiang et al [Jiang,

  4. Concepts for Future Large Fire Modeling

    Science.gov (United States)

    A. P. Dimitrakopoulos; R. E. Martin

    1987-01-01

    A small number of fires escape initial attack suppression efforts and become large, but their effects are significant and disproportionate. In 1983, of 200,000 wildland fires in the United States, only 4,000 exceeded 100 acres. However, these escaped fires accounted for roughly 95 percent of wildfire-related costs and damages (Pyne, 1984). Thus, future research efforts...

  5. Coarse-Grained Model for Water Involving a Virtual Site.

    Science.gov (United States)

    Deng, Mingsen; Shen, Hujun

    2016-02-04

    In this work, we propose a new coarse-grained (CG) model for water by combining the features of two popular CG water models (the BMW and MARTINI models) and by adopting a topology similar to that of the TIP4P water model. In this CG model, a CG unit, representing four real water molecules, consists of a virtual site, two positively charged particles, and a van der Waals (vdW) interaction center. A distance constraint is applied to the bonds formed between the vdW interaction center and the positively charged particles. The virtual site, which carries a negative charge, is determined by the locations of the two positively charged particles and the vdW interaction center. For the new CG model of water, we coined the name "CAVS" (charge is attached to a virtual site) due to the involvement of the virtual site. After being tested in molecular dynamics (MD) simulations of bulk water at various time steps, under different temperatures, and at different salt (NaCl) concentrations, the CAVS model offers encouraging predictions for some bulk properties of water (such as density, dielectric constant, etc.) when compared to experimental values.
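
As in TIP4P, a virtual site is massless and its position is a function of the positions of the other sites in the unit. A minimal sketch of such a construction follows; the geometry and the weight `a` are hypothetical illustrations, not the published CAVS parameters:

```python
import numpy as np

def virtual_site(r_vdw, r_p1, r_p2, a=0.3):
    """Place a massless virtual site at a fixed fractional offset `a`
    from the vdW center toward the midpoint of the two charged particles.
    Geometry and weight are illustrative, not the published CAVS values."""
    mid = 0.5 * (r_p1 + r_p2)
    return r_vdw + a * (mid - r_vdw)

# Example: vdW center at the origin, charged particles symmetric about the z-axis.
r_vdw = np.array([0.0, 0.0, 0.0])
r_p1 = np.array([0.8, 0.0, 0.6])
r_p2 = np.array([-0.8, 0.0, 0.6])
vs = virtual_site(r_vdw, r_p1, r_p2)
print(vs)
```

Because the site is a pure function of the other positions, forces acting on it are redistributed to the defining particles during dynamics, which is what keeps the site massless.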

  6. Approximate Model Checking of PCTL Involving Unbounded Path Properties

    Science.gov (United States)

    Basu, Samik; Ghosh, Arka P.; He, Ru

    We study the problem of applying statistical methods for approximate model checking of probabilistic systems against properties encoded as PCTL formulas. Such approximate methods have been proposed primarily to deal with the state-space explosion that makes exact model checking by numerical methods practically infeasible for large systems. However, the existing statistical methods either consider a restricted subset of PCTL, specifically the subset that can only express bounded until properties, or rely on a user-specified finite bound on the sample path length. We propose a new method that does not have such restrictions and can be effectively used to reason about unbounded until properties. We approximate the probabilistic characteristics of an unbounded until property by those of a bounded until property for a suitably chosen value of the bound. In essence, our method is a two-phase process: (a) the first phase is concerned with identifying the bound k0; (b) the second phase computes the probability of satisfying the k0-bounded until property as an estimate for the probability of satisfying the corresponding unbounded until property. In both phases, it is sufficient to verify bounded until properties, which can be effectively done using existing statistical techniques. We prove the correctness of our technique and present its prototype implementations. We empirically show the practical applicability of our method by considering different case studies including a simple infinite-state model, and large finite-state models such as the IPv4 zeroconf protocol and the dining philosophers protocol modeled as discrete-time Markov chains.
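
The two-phase idea can be sketched with a plain Monte Carlo estimator on a toy discrete-time Markov chain. The stabilization criterion used to pick the bound below is a simplified stand-in for the paper's phase (a), and all names and parameters are illustrative:

```python
import random

def simulate_until(trans, start, good, target, k):
    """One sampled path of at most k transitions; True if a `target` state
    is reached while staying inside `good` states (bounded until)."""
    s = start
    for _ in range(k):
        if s in target:
            return True
        if s not in good:
            return False
        states, probs = zip(*trans[s].items())
        s = random.choices(states, probs)[0]
    return s in target

def estimate(trans, start, good, target, k, n=20000):
    """Monte Carlo estimate of the k-bounded until probability."""
    hits = sum(simulate_until(trans, start, good, target, k) for _ in range(n))
    return hits / n

# Toy 4-state DTMC: state 2 is an absorbing target, state 3 an absorbing failure.
trans = {
    0: {0: 0.5, 1: 0.5},
    1: {1: 0.4, 2: 0.4, 3: 0.2},
    2: {2: 1.0},
    3: {3: 1.0},
}
good, target = {0, 1}, {2}

random.seed(0)
# Phase (a): grow the bound until the estimate stabilizes (hypothetical criterion).
k, prev = 8, -1.0
while True:
    p = estimate(trans, 0, good, target, k)
    if abs(p - prev) < 0.01:
        break
    prev, k = p, k * 2
# Phase (b): the k-bounded estimate approximates the unbounded-until probability
# (for this chain the exact value is 0.4 / (0.4 + 0.2) = 2/3).
print(k, p)
```

Each phase only ever verifies bounded until properties, which is exactly why existing statistical sampling techniques suffice.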

  7. Modeling of nonlinear responses for reciprocal transducers involving polarization switching

    DEFF Research Database (Denmark)

    Willatzen, Morten; Wang, Linxiang

    2007-01-01

    Nonlinearities and hysteresis effects in a reciprocal PZT transducer are examined by use of a dynamical mathematical model on the basis of phase-transition theory. In particular, we consider the perovskite piezoelectric ceramic in which the polarization process in the material can be modeled by Landau theory for the first-order phase transformation, in which each polarization state is associated with a minimum of the Landau free-energy function. Nonlinear constitutive laws are obtained by using thermodynamical equilibrium conditions, and hysteretic behavior of the material can be modeled intrinsically. The time-dependent Ginzburg-Landau theory is used in the parameter identification involving hysteresis effects. We use the Chebyshev collocation method in the numerical simulations. The elastic field is assumed to be coupled linearly with other fields, and the nonlinearity is in the E-D coupling...

  8. Fluid Methods for Modeling Large, Heterogeneous Networks

    National Research Council Canada - National Science Library

    Towsley, Don; Gong, Weibo; Hollot, Kris; Liu, Yong; Misra, Vishal

    2005-01-01

    The resulting fluid models were used to develop novel active queue management mechanisms, resulting in more stable TCP performance, and novel rate controllers for the purpose of providing minimum rate...

  9. Laboratory modeling of aspects of large fires

    Science.gov (United States)

    Carrier, G. F.; Fendell, F. E.; Fleeter, R. D.; Gat, N.; Cohen, L. M.

    1984-04-01

    The design, construction, and use of a laboratory-scale combustion tunnel for simulating aspects of large-scale free-burning fires are described. The facility consists of an enclosed, rectangular-cross-section (1.12 m wide x 1.27 m high) test section about 5.6 m in length, fitted with large sidewall windows for viewing. A long upwind section permits smoothing (by screens and honeycombs) of a forced-convective flow, generated by a fan and adjustable in wind speed (up to a maximum of about 20 m/s prior to smoothing). Special provision is made for unconstrained ascent of a strongly buoyant plume, the duct over the test section being about 7 m in height. Also, a translatable test-section ceiling can be used to prevent jet-type spreading of the impressed flow into the duct; that is, the wind arriving at a site (say) half-way along the test section can be made (by ceiling movement) approximately the same as that at the leading edge of the test section with a fully open duct (fully retracted ceiling). Of particular interest here are the rate and structure of wind-aided flame spread streamwise along a uniform matrix of vertically oriented small fuel elements (such as toothpicks or coffee stirrers) implanted in a clay stratum on the test-section floor; this experiment is motivated by flame spread across strewn debris, such as may be anticipated in an urban environment after severe blast damage.

  10. Grid computing in large pharmaceutical molecular modeling.

    Science.gov (United States)

    Claus, Brian L; Johnson, Stephen R

    2008-07-01

    Most major pharmaceutical companies have employed grid computing to expand their compute resources with the intention of minimizing additional financial expenditure. Historically, one of the issues restricting widespread utilization of the grid resources in molecular modeling is the limited set of suitable applications amenable to coarse-grained parallelization. Recent advances in grid infrastructure technology coupled with advances in application research and redesign will enable fine-grained parallel problems, such as quantum mechanics and molecular dynamics, which were previously inaccessible to the grid environment. This will enable new science as well as increase resource flexibility to load balance and schedule existing workloads.

  11. Does Business Model Affect CSR Involvement? A Survey of Polish Manufacturing and Service Companies

    Directory of Open Access Journals (Sweden)

    Marzanna Katarzyna Witek-Hajduk

    2016-02-01

    The study explores links between types of business models used by companies and their involvement in CSR. As the main part of our conceptual framework we used a business model taxonomy developed by Dudzik and Witek-Hajduk, which identifies five types of models: traditionalists, market players, contractors, distributors, and integrators. From shared characteristics of the business model profiles, we proposed that market players and integrators will show significantly higher levels of involvement in CSR than the three other classes of companies. Among other things, both market players and integrators relied strongly on building their own brand value and fostering harmonious supply channel relations, which served as a rationale for our hypothesis. The data for the study were obtained through a combined CATI and CAWI survey of a group of 385 managers of medium and large enterprises. The sample was representative of the three Polish industries of chemical manufacturing, food production, and retailing. Statistical methods included confirmatory factor analysis and one-way ANOVA with contrasts and post hoc tests. The findings supported our hypothesis, showing that market players and integrators were indeed more engaged in CSR than other groups of firms. This may suggest that managers in control of these companies could bolster the integrity of their business models by increasing CSR involvement. Another important contribution of the study was to propose and validate a versatile scale for assessing CSR involvement, which showed measurement invariance for all involved industries.

  12. Exactly soluble models for surface partition of large clusters

    International Nuclear Information System (INIS)

    Bugaev, K.A.; Bugaev, K.A.; Elliott, J.B.

    2007-01-01

    The surface partition of large clusters is studied analytically within the framework of the 'Hills and Dales Model'. Three formulations are solved exactly by using the Laplace-Fourier transformation method. In the limit of small-amplitude deformations, the 'Hills and Dales Model' gives the upper and lower bounds for the surface entropy coefficient of large clusters. The surface entropy coefficients found are compared with those of large clusters within the 2- and 3-dimensional Ising models

  13. Using Quality Circles to Enhance Student Involvement and Course Quality in a Large Undergraduate Food Science and Human Nutrition Course

    Science.gov (United States)

    Schmidt, S. J.; Parmer, M. S.; Bohn, D. M.

    2005-01-01

    Large undergraduate classes are a challenge to manage, to engage, and to assess, yet such formidable classes can flourish when student participation is facilitated. One method of generating authentic student involvement is implementation of quality circles by means of a Student Feedback Committee (SFC), which is a volunteer problem-solving and…

  14. Emergency preparedness: medical management of nuclear accidents involving large groups of victims

    International Nuclear Information System (INIS)

    Parmentier, N.; Nenot, J.C.

    1988-01-01

    The treatment of overexposed individuals implies hospitalisation in a specialized unit providing intensive haematological care. If an accident results in a small number of casualties, medical management does not raise major problems in most countries where specialized units exist, as roughly 7% of the beds are available at any time. But an accident involving tens or hundreds of people raises far greater problems for hospitalization. Such problems are also completely different and will involve staged medical handling, mainly triage (combined injuries), determination of whole-body dose levels, and transient hospitalization. In this case, preplanning is necessary, adapted to the given country's system of medical care for catastrophic events, with the main basic principles: the emergency essentially concerns classical injuries (burns and trauma), and contamination problems in some cases; treatment of the radiation syndrome is not an emergency during the first days, but some essential actions have to be taken, such as early blood sampling for biological dosimetry and for HLA typing

  15. ABOUT MODELING COMPLEX ASSEMBLIES IN SOLIDWORKS – LARGE AXIAL BEARING

    Directory of Open Access Journals (Sweden)

    Cătălin IANCU

    2017-12-01

    This paper presents the modeling strategy used in SOLIDWORKS for modeling special items such as a large axial bearing, and the steps to be taken in order to obtain a better design. The paper presents the features used for modeling parts, and then the steps that must be taken to obtain the 3D model of a large axial bearing used in bucket-wheel equipment for moving charcoal.

  16. Heart of Lymphoma: Primary Mediastinal Large B-Cell Lymphoma with Endomyocardial Involvement

    Directory of Open Access Journals (Sweden)

    Elisa Rogowitz

    2013-01-01

    Primary mediastinal B-cell lymphoma (PMBCL) is an uncommon aggressive subset of diffuse large B-cell lymphomas. Although PMBCL frequently spreads locally from the thymus into the pleura or pericardium, it rarely invades directly through the heart. Herein, we report a case of a young Mexican female diagnosed with PMBCL with clear infiltration of lymphoma through the cardiac wall and into the right atrium and tricuspid valve, leading to tricuspid regurgitation. This was demonstrated by cardiac MRI and transthoracic echocardiogram. In addition, cardiac MRI and CT scan of the chest revealed the large mediastinal mass completely surrounding and eroding into the superior vena cava (SVC) wall, causing a Stokes collar. The cardiac and SVC infiltration created a significant therapeutic challenge: lymphomas are very responsive to chemotherapy, and treatment could potentially lead to vascular wall rupture and hemorrhage. Despite the lack of conclusive data on chemotherapy-induced hemodynamic compromise in such scenarios, her progressive severe SVC syndrome and respiratory distress necessitated urgent intervention. In addition to the unique presentation of this rare lymphoma, our case report highlights the safety of R-CHOP treatment.

  17. Long-Term Calculations with Large Air Pollution Models

    DEFF Research Database (Denmark)

    Ambelas Skjøth, C.; Bastrup-Birk, A.; Brandt, J.

    1999-01-01

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  18. A Low-involvement Choice Model for Consumer Panel Data

    OpenAIRE

    Brugha, Cathal; Turley, Darach

    1987-01-01

    The long overdue surge of interest in consumer behaviour texts in low-involvement purchasing has only begun to gather momentum. It often takes the form of asking whether concepts usually associated with high-involvement purchasing can be applied, albeit in a modified form, to low-involvement purchasing. One such concept is the evoked set, that is, the range of brands deemed acceptable by a consumer in a particular product area. This has characteristically been associated with consumption involving...

  19. Using Agent Base Models to Optimize Large Scale Network for Large System Inventories

    Science.gov (United States)

    Shameldin, Ramez Ahmed; Bowling, Shannon R.

    2010-01-01

    The aim of this paper is to use agent-based models (ABM) to optimize large-scale network handling capabilities for large system inventories and to implement strategies for reducing capital expenses. The models used in this paper rely on computational algorithms and procedures implemented in Matlab to simulate agent-based models, run on clusters that provide the high computational performance needed to execute the program in parallel. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.

  20. Constituent rearrangement model and large transverse momentum reactions

    International Nuclear Information System (INIS)

    Igarashi, Yuji; Imachi, Masahiro; Matsuoka, Takeo; Otsuki, Shoichiro; Sawada, Shoji.

    1978-01-01

    In this chapter, two models based on the constituent rearrangement picture for large p_T phenomena are summarized. One is the quark-junction model, and the other is the correlating quark rearrangement model. Counting rules of the models apply to both two-body reactions and hadron production. (author)

  1. Mesothelioma With a Large Prevascular Lymph Node: N1 Involvement or Something Different?

    Science.gov (United States)

    Berzenji, Lawek; Van Schil, Paul E; Snoeckx, Annemie; Hertoghs, Marjan; Carp, Laurens

    2018-05-01

    A 64-year-old man presented with a large amount of right-sided pleural fluid on imaging, together with calcified pleural plaques and an enlarged nodular structure in the prevascular mediastinum, presumably an enlarged lymph node. Pleural biopsies were obtained during video-assisted thoracoscopic surgery to exclude malignancy. Histopathology showed an epithelial malignant pleural mesothelioma. Induction chemotherapy with cisplatin and pemetrexed was administered followed by an extended pleurectomy and decortication with systematic nodal dissection. Histopathology confirmed the diagnosis of a ypT3N0M0 (stage IB) mesothelioma, and an unexpected thymoma type B2 (stage II) was discovered in the prevascular nodule. Simultaneous occurrence of a mesothelioma and thymoma is extremely rare. Copyright © 2018 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  2. A deployable in vivo EPR tooth dosimeter for triage after a radiation event involving large populations

    International Nuclear Information System (INIS)

    Williams, Benjamin B.; Dong, Ruhong; Flood, Ann Barry; Grinberg, Oleg; Kmiec, Maciej; Lesniewski, Piotr N.; Matthews, Thomas P.; Nicolalde, Roberto J.; Raynolds, Tim; Salikhov, Ildar K.; Swartz, Harold M.

    2011-01-01

    In order to meet the potential need for emergency large-scale retrospective radiation biodosimetry following an accident or attack, we have developed instrumentation and methodology for in vivo electron paramagnetic resonance spectroscopy to quantify concentrations of radiation-induced radicals within intact teeth. This technique has several very desirable characteristics for triage, including independence from confounding biologic factors, a non-invasive measurement procedure, the capability to make measurements at any time after the event, suitability for use by non-expert operators at the site of an event, and the ability to provide immediate estimates of individual doses. Throughout development there has been a particular focus on the need for a deployable system, including instrumental requirements for transport and field use, the need for high throughput, and use by minimally trained operators. Numerous measurements have been performed using this system in clinical and other non-laboratory settings, including in vivo measurements with unexposed populations as well as patients undergoing radiation therapies. The collection and analyses of sets of three serially-acquired spectra with independent placements of the resonator, in a data collection process lasting approximately 5 min, provides dose estimates with standard errors of prediction of approximately 1 Gy. As an example, measurements were performed on incisor teeth of subjects who had either received no irradiation or 2 Gy total body irradiation for prior bone marrow transplantation; this exercise provided a direct and challenging test of our capability to identify subjects who would be in need of acute medical care. -- Highlights: → Advances in radiation biodosimetry are needed for large-scale emergency response. → Radiation-induced radicals in tooth enamel can be measured using in vivo EPR. → A novel transportable spectrometer was applied in the laboratory and at remote sites. → The current

  3. A deployable in vivo EPR tooth dosimeter for triage after a radiation event involving large populations

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Benjamin B., E-mail: Benjamin.B.Williams@dartmouth.edu [Dartmouth Physically Based Biodosimetry Center for Medical Countermeasures Against Radiation (Dart-Dose CMCR), Dartmouth Medical School, Hanover, NH 03768 (United States); Section of Radiation Oncology, Department of Medicine, Dartmouth Hitchcock Medical Center, Lebanon, NH (United States); Dong, Ruhong [Dartmouth Physically Based Biodosimetry Center for Medical Countermeasures Against Radiation (Dart-Dose CMCR), Dartmouth Medical School, Hanover, NH 03768 (United States); Flood, Ann Barry [Dartmouth Physically Based Biodosimetry Center for Medical Countermeasures Against Radiation (Dart-Dose CMCR), Dartmouth Medical School, Hanover, NH 03768 (United States); Clin-EPR, LLC, Lyme, NH (United States); Grinberg, Oleg [Clin-EPR, LLC, Lyme, NH (United States); Kmiec, Maciej; Lesniewski, Piotr N.; Matthews, Thomas P.; Nicolalde, Roberto J.; Raynolds, Tim [Dartmouth Physically Based Biodosimetry Center for Medical Countermeasures Against Radiation (Dart-Dose CMCR), Dartmouth Medical School, Hanover, NH 03768 (United States); Salikhov, Ildar K. [Clin-EPR, LLC, Lyme, NH (United States); Swartz, Harold M. [Dartmouth Physically Based Biodosimetry Center for Medical Countermeasures Against Radiation (Dart-Dose CMCR), Dartmouth Medical School, Hanover, NH 03768 (United States); Clin-EPR, LLC, Lyme, NH (United States)

    2011-09-15

    In order to meet the potential need for emergency large-scale retrospective radiation biodosimetry following an accident or attack, we have developed instrumentation and methodology for in vivo electron paramagnetic resonance spectroscopy to quantify concentrations of radiation-induced radicals within intact teeth. This technique has several very desirable characteristics for triage, including independence from confounding biologic factors, a non-invasive measurement procedure, the capability to make measurements at any time after the event, suitability for use by non-expert operators at the site of an event, and the ability to provide immediate estimates of individual doses. Throughout development there has been a particular focus on the need for a deployable system, including instrumental requirements for transport and field use, the need for high throughput, and use by minimally trained operators. Numerous measurements have been performed using this system in clinical and other non-laboratory settings, including in vivo measurements with unexposed populations as well as patients undergoing radiation therapies. The collection and analyses of sets of three serially-acquired spectra with independent placements of the resonator, in a data collection process lasting approximately 5 min, provides dose estimates with standard errors of prediction of approximately 1 Gy. As an example, measurements were performed on incisor teeth of subjects who had either received no irradiation or 2 Gy total body irradiation for prior bone marrow transplantation; this exercise provided a direct and challenging test of our capability to identify subjects who would be in need of acute medical care. -- Highlights: > Advances in radiation biodosimetry are needed for large-scale emergency response. > Radiation-induced radicals in tooth enamel can be measured using in vivo EPR. > A novel transportable spectrometer was applied in the laboratory and at remote sites. > The current instrument

  4. Comparison Between Overtopping Discharge in Small and Large Scale Models

    DEFF Research Database (Denmark)

    Helgason, Einar; Burcharth, Hans F.

    2006-01-01

    The present paper presents overtopping measurements from small scale model tests performed at the Hydraulic & Coastal Engineering Laboratory, Aalborg University, Denmark, and large scale model tests performed at the Large Wave Channel, Hannover, Germany. Comparison between results obtained from small and large scale model tests shows no clear evidence of scale effects for overtopping above a threshold value. In the large scale model no overtopping was measured for wave heights below Hs = 0.5 m, as the water sank into the voids between the stones on the crest. For low overtopping, scale effects...

  5. Multistability in Large Scale Models of Brain Activity.

    Directory of Open Access Journals (Sweden)

    Mathieu Golos

    2015-12-01

    Full Text Available Noise driven exploration of a brain network's dynamic repertoire has been hypothesized to be causally involved in cognitive function, aging and neurodegeneration. The dynamic repertoire crucially depends on the network's capacity to store patterns, as well as their stability. Here we systematically explore the capacity of networks derived from human connectomes to store attractor states, as well as various network mechanisms to control the brain's dynamic repertoire. Using a deterministic graded response Hopfield model with connectome-based interactions, we reconstruct the system's attractor space through a uniform sampling of the initial conditions. Large fixed-point attractor sets are obtained in the low temperature condition, with a larger number of attractors than previously reported. Different variants of the initial model, including (i) a uniform activation threshold or (ii) a global negative feedback, produce a similarly robust multistability in a limited parameter range. A numerical analysis of the distribution of the attractors identifies spatially-segregated components, with a centro-medial core and several well-delineated regional patches. These different modes share similarity with the fMRI independent components observed in the "resting state" condition. We demonstrate non-stationary behavior in noise-driven generalizations of the models, with different meta-stable attractors visited along the same time course. Only the model with a global dynamic density control is found to display robust and long-lasting non-stationarity with no tendency toward either overactivity or extinction. The best fit with empirical signals is observed at the edge of multistability, a parameter region that also corresponds to the highest entropy of the attractors.
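    The attractor-sampling procedure described in this abstract (relax a deterministic graded-response Hopfield network from uniformly sampled initial states and collect the distinct fixed points) can be sketched as follows. The tiny network, the Hebbian toy weights, and all parameter values are illustrative assumptions, not the connectome-based setup of the paper:

    ```python
    import math
    import random

    def find_attractors(W, n_starts=50, temperature=0.2, tol=1e-6, max_iter=500, seed=0):
        """Relax x_i <- tanh((W x)_i / T) from uniformly sampled initial
        states and collect the distinct fixed points (attractors)."""
        rng = random.Random(seed)
        n = len(W)
        attractors = []
        for _ in range(n_starts):
            x = [rng.uniform(-1.0, 1.0) for _ in range(n)]
            for _ in range(max_iter):
                x_new = [math.tanh(sum(W[i][j] * x[j] for j in range(n)) / temperature)
                         for i in range(n)]
                converged = max(abs(a - b) for a, b in zip(x, x_new)) < tol
                x = x_new
                if converged:
                    break
            key = tuple(round(v, 2) for v in x)  # merge near-identical states
            if key not in attractors:
                attractors.append(key)
        return attractors

    # Toy stand-in for connectome-based interactions: Hebbian weights storing one pattern
    p = [1, -1, 1, -1]
    W = [[0.0 if i == j else p[i] * p[j] / len(p) for j in range(len(p))]
         for i in range(len(p))]
    ```

    At low temperature the tanh saturates, so the surviving fixed points are the stored pattern and its sign-flipped mirror; with connectome-sized networks the same sampling yields the much larger attractor sets the study reports.
    
    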

  6. How and Why Fathers Are Involved in Their Children's Education: Gendered Model of Parent Involvement

    Science.gov (United States)

    Kim, Sung won

    2018-01-01

    Accumulating evidence points to the unique contributions fathers make to their children's academic outcomes. However, the large body of multi-disciplinary literature on fatherhood does not address how fathers engage in specific practices relevant to education, while the educational research in the United States focused on parent involvement often…

  7. Large-signal modeling method for power FETs and diodes

    Energy Technology Data Exchange (ETDEWEB)

    Sun Lu; Wang Jiali; Wang Shan; Li Xuezheng; Shi Hui; Wang Na; Guo Shengping, E-mail: sunlu_1019@126.co [School of Electromechanical Engineering, Xidian University, Xi'an 710071 (China)

    2009-06-01

    Under a large signal drive level, a frequency domain black box model based on the nonlinear scattering function is introduced for power FETs and diodes. A time domain measurement system and a calibration method based on a digital oscilloscope are designed to extract the nonlinear scattering function of semiconductor devices. The extracted models reflect the real electrical performance of semiconductor devices and offer a new large-signal model for the design of microwave semiconductor circuits.

  8. Large-signal modeling method for power FETs and diodes

    International Nuclear Information System (INIS)

    Sun Lu; Wang Jiali; Wang Shan; Li Xuezheng; Shi Hui; Wang Na; Guo Shengping

    2009-01-01

    Under a large signal drive level, a frequency domain black box model based on the nonlinear scattering function is introduced for power FETs and diodes. A time domain measurement system and a calibration method based on a digital oscilloscope are designed to extract the nonlinear scattering function of semiconductor devices. The extracted models reflect the real electrical performance of semiconductor devices and offer a new large-signal model for the design of microwave semiconductor circuits.

  9. Multivariate statistical analysis software technologies for astrophysical research involving large data bases

    Science.gov (United States)

    Djorgovski, S. George

    1994-01-01

    We developed a package to process and analyze the data from the digital version of the Second Palomar Sky Survey. This system, called SKICAT, incorporates the latest in machine learning and expert systems software technology, in order to classify the detected objects objectively and uniformly, and facilitate handling of the enormous data sets from digital sky surveys and other sources. The system provides a powerful, integrated environment for the manipulation and scientific investigation of catalogs from virtually any source. It serves three principal functions: image catalog construction, catalog management, and catalog analysis. Through use of the GID3* Decision Tree artificial induction software, SKICAT automates the process of classifying objects within CCD and digitized plate images. To exploit these catalogs, the system also provides tools to merge them into a large, complete database which may be easily queried and modified when new data or better methods of calibrating or classifying become available. The most innovative feature of SKICAT is the facility it provides to experiment with and apply the latest in machine learning technology to the tasks of catalog construction and analysis. SKICAT provides a unique environment for implementing these tools for any number of future scientific purposes. Initial scientific verification and performance tests have been made using galaxy counts and measurements of galaxy clustering from small subsets of the survey data, and a search for very high redshift quasars. All of the tests were successful, and produced new and interesting scientific results. Attachments to this report give detailed accounts of the technical aspects. We also developed a package for multivariate statistical analysis of small and moderate-size data sets, called STATPROG. The package was tested extensively on a number of real scientific applications, and has produced real, published results.

  10. Multivariate Statistical Analysis Software Technologies for Astrophysical Research Involving Large Data Bases

    Science.gov (United States)

    Djorgovski, S. G.

    1994-01-01

    We developed a package to process and analyze the data from the digital version of the Second Palomar Sky Survey. This system, called SKICAT, incorporates the latest in machine learning and expert systems software technology, in order to classify the detected objects objectively and uniformly, and facilitate handling of the enormous data sets from digital sky surveys and other sources. The system provides a powerful, integrated environment for the manipulation and scientific investigation of catalogs from virtually any source. It serves three principal functions: image catalog construction, catalog management, and catalog analysis. Through use of the GID3* Decision Tree artificial induction software, SKICAT automates the process of classifying objects within CCD and digitized plate images. To exploit these catalogs, the system also provides tools to merge them into a large, complex database which may be easily queried and modified when new data or better methods of calibrating or classifying become available. The most innovative feature of SKICAT is the facility it provides to experiment with and apply the latest in machine learning technology to the tasks of catalog construction and analysis. SKICAT provides a unique environment for implementing these tools for any number of future scientific purposes. Initial scientific verification and performance tests have been made using galaxy counts and measurements of galaxy clustering from small subsets of the survey data, and a search for very high redshift quasars. All of the tests were successful and produced new and interesting scientific results. Attachments to this report give detailed accounts of the technical aspects of the SKICAT system, and of some of the scientific results achieved to date. We also developed a user-friendly package for multivariate statistical analysis of small and moderate-size data sets, called STATPROG. The package was tested extensively on a number of real scientific applications and has produced real, published results.

  11. Plastic limit pressure of spherical vessels with combined hardening involving large deformation

    International Nuclear Information System (INIS)

    Leu, S.-Y.; Liao, K.-C.; Lin, Y.-C.

    2014-01-01

    The paper aims to investigate plastic limit pressure of spherical vessels of nonlinear combined isotropic/kinematic hardening materials. The Armstrong-Frederick kinematic hardening model is adopted and the Voce hardening law is incorporated for isotropic hardening behavior. Analytically, we extend sequential limit analysis to deal with combined isotropic/kinematic hardening materials. Further, exact solutions of plastic limit pressure were developed analytically by conducting both static and kinematic limit analysis. The onset of instability was also derived and solved iteratively by Newton's method. Numerically, elastic–plastic analysis is also performed by the commercial finite-element code ABAQUS incorporated with the user subroutine UMAT implemented with user materials of combined hardening. Finally, the problem formulation and the solution derivations presented here are validated by a very good agreement between the numerical results of exact solutions and the results of elastic–plastic finite-element analysis by ABAQUS. -- Highlights: • Sequential limit analysis is extended to consider combined hardening. • Exact solutions of plastic limit pressure are developed. • The onset of instability of a spherical vessel is derived and solved numerically

  12. A model of the supplier involvement in the product innovation

    Directory of Open Access Journals (Sweden)

    Kumar Manoj

    2017-01-01

    Full Text Available In this paper we examine product innovation in a supply chain by a supplier and derive a model for a supplier’s product innovation policy. A supplier’s product innovation can contribute to the long-term competitiveness of the supply chain, and since it is a major factor for many supply chains, it should be considered in the development of supplier strategies. Here, we evaluate the effectiveness of supplier product innovation as a strategic tool to enhance the competitiveness and viability of the supply chain. This paper explores the dynamic research performance of a supplier with endogenous time preference under a given arrangement of product innovation. We find that the optimal effort level and the achieved product innovation follow a saddle point path, or show large fluctuations even without introducing the stochastic nature of product innovative activity. We also find that the fluctuation frequency depends largely both on the supplier’s characteristics, such as product innovative ability, and on the nature of the product innovation process itself. Short-run analyses are also made of the effect of supply chain cooperation on the product innovation process.

  13. Large scale stochastic spatio-temporal modelling with PCRaster

    NARCIS (Netherlands)

    Karssenberg, D.J.; Drost, N.; Schmitz, O.; Jong, K. de; Bierkens, M.F.P.

    2013-01-01

    PCRaster is a software framework for building spatio-temporal models of land surface processes (http://www.pcraster.eu). Building blocks of models are spatial operations on raster maps, including a large suite of operations for water and sediment routing. These operations are available to model

  14. An accurate and simple large signal model of HEMT

    DEFF Research Database (Denmark)

    Liu, Qing

    1989-01-01

    A large-signal model of discrete HEMTs (high-electron-mobility transistors) has been developed. It is simple and suitable for SPICE simulation of hybrid digital ICs. The model parameters are extracted by using computer programs and data provided by the manufacturer. Based on this model, a hybrid...

  15. Estimation in a multiplicative mixed model involving a genetic relationship matrix

    Directory of Open Access Journals (Sweden)

    Eccleston John A

    2009-04-01

    Full Text Available Abstract Genetic models partitioning additive and non-additive genetic effects for populations tested in replicated multi-environment trials (METs in a plant breeding program have recently been presented in the literature. For these data, the variance model involves the direct product of a large numerator relationship matrix A, and a complex structure for the genotype by environment interaction effects, generally of a factor analytic (FA form. With MET data, we expect a high correlation in genotype rankings between environments, leading to non-positive definite covariance matrices. Estimation methods for reduced rank models have been derived for the FA formulation with independent genotypes, and we employ these estimation methods for the more complex case involving the numerator relationship matrix. We examine the performance of differing genetic models for MET data with an embedded pedigree structure, and consider the magnitude of the non-additive variance. The capacity of existing software packages to fit these complex models is largely due to the use of the sparse matrix methodology and the average information algorithm. Here, we present an extension to the standard formulation necessary for estimation with a factor analytic structure across multiple environments.

  16. Shell model in large spaces and statistical spectroscopy

    International Nuclear Information System (INIS)

    Kota, V.K.B.

    1996-01-01

    For many nuclear structure problems of current interest it is essential to deal with the shell model in large spaces. For this, three different approaches are now in use, two of which are: (i) the conventional shell model diagonalization approach, taking into account new advances in computer technology; (ii) the shell model Monte Carlo method. A brief overview of these two methods is given. Large space shell model studies raise fundamental questions regarding the information content of the shell model spectrum of complex nuclei. This led to the third approach: the statistical spectroscopy methods. The principles of statistical spectroscopy have their basis in nuclear quantum chaos and are described in some detail, substantiated by large scale shell model calculations. (author)

  17. Nuclear spectroscopy in large shell model spaces: recent advances

    International Nuclear Information System (INIS)

    Kota, V.K.B.

    1995-01-01

    Three different approaches are now available for carrying out nuclear spectroscopy studies in large shell model spaces and they are: (i) the conventional shell model diagonalization approach but taking into account new advances in computer technology; (ii) the recently introduced Monte Carlo method for the shell model; (iii) the spectral averaging theory, based on central limit theorems, in indefinitely large shell model spaces. The various principles, recent applications and possibilities of these three methods are described and the similarity between the Monte Carlo method and the spectral averaging theory is emphasized. (author). 28 refs., 1 fig., 5 tabs

  18. Detonation and fragmentation modeling for the description of large scale vapor explosions

    International Nuclear Information System (INIS)

    Buerger, M.; Carachalios, C.; Unger, H.

    1985-01-01

    The thermal detonation modeling of large-scale vapor explosions is shown to be indispensable for realistic safety evaluations. A steady-state as well as a transient detonation model have been developed, including detailed descriptions of the dynamics as well as the fragmentation processes inside a detonation wave. Strong restrictions for large-scale vapor explosions are obtained from this modeling, and they indicate that the reactor pressure vessel would withstand even explosions involving unrealistically high masses of corium. The modeling is supported by comparisons with a detonation experiment and - concerning its key part - hydrodynamic fragmentation experiments. (orig.) [de]

  19. Numerical Modeling of Large-Scale Rocky Coastline Evolution

    Science.gov (United States)

    Limber, P.; Murray, A. B.; Littlewood, R.; Valvo, L.

    2008-12-01

    Seventy-five percent of the world's ocean coastline is rocky. On large scales (i.e. greater than a kilometer), many intertwined processes drive rocky coastline evolution, including coastal erosion and sediment transport, tectonics, antecedent topography, and variations in sea cliff lithology. In areas such as California, an additional aspect of rocky coastline evolution involves submarine canyons that cut across the continental shelf and extend into the nearshore zone. These types of canyons intercept alongshore sediment transport and flush sand to abyssal depths during periodic turbidity currents, thereby delineating coastal sediment transport pathways and affecting shoreline evolution over large spatial and time scales. How tectonic, sediment transport, and canyon processes interact with inherited topographic and lithologic settings to shape rocky coastlines remains an unanswered, and largely unexplored, question. We will present numerical model results of rocky coastline evolution that starts with an immature fractal coastline. The initial shape is modified by headland erosion, wave-driven alongshore sediment transport, and submarine canyon placement. Our previous model results have shown that, as expected, an initial sediment-free irregularly shaped rocky coastline with homogeneous lithology will undergo smoothing in response to wave attack; headlands erode and mobile sediment is swept into bays, forming isolated pocket beaches. As this diffusive process continues, pocket beaches coalesce, and a continuous sediment transport pathway results. However, when a randomly placed submarine canyon is introduced to the system as a sediment sink, the end results are wholly different: sediment cover is reduced, which in turn increases weathering and erosion rates and causes the entire shoreline to move landward more rapidly. The canyon's alongshore position also affects coastline morphology. 
When placed offshore of a headland, the submarine canyon captures local sediment

  20. Rotation sequence to report humerothoracic kinematics during 3D motion involving large horizontal component: application to the tennis forehand drive.

    Science.gov (United States)

    Creveaux, Thomas; Sevrez, Violaine; Dumas, Raphaël; Chèze, Laurence; Rogowski, Isabelle

    2018-03-01

    The aim of this study was to examine the respective aptitudes of three rotation sequences (Yt-Xf'-Yh'', Zt-Xf'-Yh'', and Xt-Zf'-Yh'') to effectively describe the orientation of the humerus relative to the thorax during a movement involving a large horizontal abduction/adduction component: the tennis forehand drive. An optoelectronic system was used to record the movements of eight elite male players, each performing ten forehand drives. The occurrences of gimbal lock, phase angle discontinuity and incoherency in the time course of the three angles defining humerothoracic rotation were examined for each rotation sequence. Our results demonstrated that no single sequence effectively describes humerothoracic motion without discontinuities throughout the forehand motion. The humerothoracic joint angles can nevertheless be described without singularities when considering the backswing/forward-swing and the follow-through phases separately. Our findings stress that the sequence choice may have implications for the report and interpretation of 3D joint kinematics during large shoulder range of motion. Consequently, the use of Euler/Cardan angles to represent 3D orientation of the humerothoracic joint in sport tasks requires the evaluation of the rotation sequence regarding singularity occurrence before analysing the kinematic data, especially when the task involves a large shoulder range of motion in the horizontal plane.
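    As a sketch of how one such Cardan sequence is evaluated, the following decomposes a rotation matrix with an intrinsic Z-X'-Y'' sequence (the form of the Zt-Xf'-Yh'' candidate above) and flags the gimbal-lock configuration where the middle rotation nears ±90°. The helper names and tolerance are illustrative, not taken from the paper:

    ```python
    import math

    def rz(a):  # rotation about Z
        c, s = math.cos(a), math.sin(a)
        return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

    def rx(a):  # rotation about X
        c, s = math.cos(a), math.sin(a)
        return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

    def ry(a):  # rotation about Y
        c, s = math.cos(a), math.sin(a)
        return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    def zxy_angles(R, eps=1e-6):
        """Recover (a, b, c) from R = rz(a) * rx(b) * ry(c), i.e. an
        intrinsic Z-X'-Y'' Cardan sequence. Gimbal lock occurs when the
        middle (X') rotation approaches +/-90 degrees, where the first and
        last angles are no longer separable."""
        sb = R[2][1]                       # R32 = sin(b)
        if abs(sb) > 1.0 - eps:
            raise ValueError("gimbal lock: middle rotation near +/-90 degrees")
        b = math.asin(sb)
        a = math.atan2(-R[0][1], R[1][1])  # R12 = -sin(a)cos(b), R22 = cos(a)cos(b)
        c = math.atan2(-R[2][0], R[2][2])  # R31 = -cos(b)sin(c), R33 = cos(b)cos(c)
        return a, b, c
    ```

    The same pattern, with the axes permuted, gives the other two candidate sequences; checking `abs(R[2][1])` against the threshold is one way to detect the singularity occurrences the study counts.
    
    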

  1. Differences in passenger car and large truck involved crash frequencies at urban signalized intersections: an exploratory analysis.

    Science.gov (United States)

    Dong, Chunjiao; Clarke, David B; Richards, Stephen H; Huang, Baoshan

    2014-01-01

    The influence of intersection features on safety has been examined extensively because intersections experience a relatively large proportion of motor vehicle conflicts and crashes. Although there are distinct differences between passenger cars and large trucks in size, operating characteristics, dimensions, and weight, modeling crash counts across vehicle types is rarely addressed. This paper develops and presents a multivariate regression model of crash frequencies by collision vehicle type using crash data for urban signalized intersections in Tennessee. In addition, the performance of univariate Poisson-lognormal (UVPLN), multivariate Poisson (MVP), and multivariate Poisson-lognormal (MVPLN) regression models in establishing the relationship between crashes, traffic factors, and the geometric design of roadway intersections is investigated. Bayesian methods are used to estimate the unknown parameters of these models. The evaluation results suggest that the MVPLN model possesses most of the desirable statistical properties for developing these relationships. Compared to the UVPLN and MVP models, the MVPLN model better identifies significant factors and predicts crash frequencies. The findings suggest that traffic volume, truck percentage, lighting condition, and intersection angle significantly affect intersection safety. Important differences in car, car-truck, and truck crash frequencies with respect to various risk factors were found to exist between models. The paper provides some new or more comprehensive observations that have not been covered in previous studies. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. Modeling and Forecasting Large Realized Covariance Matrices and Portfolio Choice

    NARCIS (Netherlands)

    Callot, Laurent A.F.; Kock, Anders B.; Medeiros, Marcelo C.

    2017-01-01

    We consider modeling and forecasting large realized covariance matrices by penalized vector autoregressive models. We consider Lasso-type estimators to reduce the dimensionality and provide strong theoretical guarantees on the forecast capability of our procedure. We show that we can forecast

  3. Dynamic Modeling, Optimization, and Advanced Control for Large Scale Biorefineries

    DEFF Research Database (Denmark)

    Prunescu, Remus Mihail

    with a complex conversion route. Computational fluid dynamics is used to model transport phenomena in large reactors capturing tank profiles, and delays due to plug flows. This work publishes for the first time demonstration scale real data for validation showing that the model library is suitable...

  4. Modelling and measurements of wakes in large wind farms

    International Nuclear Information System (INIS)

    Barthelmie, R J; Rathmann, O; Frandsen, S T; Hansen, K S; Politis, E; Prospathopoulos, J; Rados, K; Cabezon, D; Schlez, W; Phillips, J; Neubert, A; Schepers, J G; Pijl, S P van der

    2007-01-01

    The paper presents research conducted in the Flow workpackage of the EU funded UPWIND project which focuses on improving models of flow within and downwind of large wind farms in complex terrain and offshore. The main activity is modelling the behaviour of wind turbine wakes in order to improve power output predictions

  5. Modelling and measurements of wakes in large wind farms

    DEFF Research Database (Denmark)

    Barthelmie, Rebecca Jane; Rathmann, Ole; Frandsen, Sten Tronæs

    2007-01-01

    The paper presents research conducted in the Flow workpackage of the EU funded UPWIND project which focuses on improving models of flow within and downwind of large wind farms in complex terrain and offshore. The main activity is modelling the behaviour of wind turbine wakes in order to improve...

  6. Testing hypotheses involving Cronbach's alpha using marginal models

    NARCIS (Netherlands)

    Kuijpers, R.E.; van der Ark, L.A.; Croon, M.A.

    2013-01-01

    We discuss the statistical testing of three relevant hypotheses involving Cronbach's alpha: one where alpha equals a particular criterion; a second testing the equality of two alpha coefficients for independent samples; and a third testing the equality of two alpha coefficients for dependent

  7. Fast sampling from a Hidden Markov Model posterior for large data

    DEFF Research Database (Denmark)

    Bonnevie, Rasmus; Hansen, Lars Kai

    2014-01-01

    Hidden Markov Models are of interest in a broad set of applications including modern data driven systems involving very large data sets. However, approximate inference methods based on Bayesian averaging are precluded in such applications as each sampling step requires a full sweep over the data...

  8. Research on Francis Turbine Modeling for Large Disturbance Hydropower Station Transient Process Simulation

    Directory of Open Access Journals (Sweden)

    Guangtao Zhang

    2015-01-01

    Full Text Available In the field of hydropower station transient process simulation (HSTPS), the characteristic graph-based iterative hydroturbine model (CGIHM) has been widely used when large disturbance hydroturbine modeling is involved. However, with this model, iteration must be used to calculate speed and pressure, and slow convergence or non-convergence may be encountered for reasons such as a special characteristic graph profile, an inappropriate iterative algorithm, or an inappropriate interpolation algorithm. Also, other conventional large disturbance hydroturbine models have various disadvantages and are difficult to use widely in HSTPS. Therefore, to obtain an accurate simulation result, a simple method for hydroturbine modeling is proposed. With this method, both the initial operating point and the transfer coefficients of the linear hydroturbine model keep changing during simulation. Hence, it can reflect the nonlinearity of the hydroturbine and be used for Francis turbine simulation under large disturbance conditions. To validate the proposed method, both large disturbance and small disturbance simulations of a single hydrounit supplying a resistive, isolated load were conducted. It was shown that the simulation results are consistent with those of the field test. Consequently, the proposed method is an attractive option for HSTPS involving Francis turbine modeling under large disturbance conditions.

  9. An improved large signal model of InP HEMTs

    Science.gov (United States)

    Li, Tianhao; Li, Wenjun; Liu, Jun

    2018-05-01

    An improved large signal model for InP HEMTs is proposed in this paper. The channel current and charge model equations are constructed based on the Angelov model equations. The equations for both the channel current and gate charge models are continuous and differentiable to high order, and the proposed gate charge model satisfies charge conservation. To capture the strong leakage induced barrier reduction effect of InP HEMTs, the Angelov current model equations are improved so that the channel current model fits the DC performance of the devices. A 2 × 25 μm × 70 nm InP HEMT device is used to demonstrate the extraction and validation of the model, which predicts the DC I–V, C–V and bias-related S parameters accurately. Project supported by the National Natural Science Foundation of China (No. 61331006).

  10. Active Exploration of Large 3D Model Repositories.

    Science.gov (United States)

    Gao, Lin; Cao, Yan-Pei; Lai, Yu-Kun; Huang, Hao-Zhi; Kobbelt, Leif; Hu, Shi-Min

    2015-12-01

    With broader availability of large-scale 3D model repositories, the need for efficient and effective exploration becomes more and more urgent. Existing model retrieval techniques do not scale well with the size of the database since often a large number of very similar objects are returned for a query, and the possibilities to refine the search are quite limited. We propose an interactive approach where the user feeds an active learning procedure by labeling either entire models or parts of them as "like" or "dislike" such that the system can automatically update an active set of recommended models. To provide an intuitive user interface, candidate models are presented based on their estimated relevance for the current query. From the methodological point of view, our main contribution is to exploit not only the similarity between a query and the database models but also the similarities among the database models themselves. We achieve this by an offline pre-processing stage, where global and local shape descriptors are computed for each model and a sparse distance metric is derived that can be evaluated efficiently even for very large databases. We demonstrate the effectiveness of our method by interactively exploring a repository containing over 100 K models.

  11. Modeling and Simulation Techniques for Large-Scale Communications Modeling

    National Research Council Canada - National Science Library

    Webb, Steve

    1997-01-01

    .... Tests of random number generators were also developed and applied to CECOM models. It was found that synchronization of random number strings in simulations is easy to implement and can provide significant savings for making comparative studies. If synchronization is in place, then statistical experiment design can be used to provide information on the sensitivity of the output to input parameters. The report concludes with recommendations and an implementation plan.

  12. Feasibility of an energy conversion system in Canada involving large-scale integrated hydrogen production using solid fuels

    International Nuclear Information System (INIS)

    Gnanapragasam, Nirmal V.; Reddy, Bale V.; Rosen, Marc A.

    2010-01-01

    A large-scale hydrogen production system is proposed using solid fuels and designed to increase the sustainability of alternative energy forms in Canada, and the technical and economic aspects of the system within the Canadian energy market are examined. The work investigates the feasibility and constraints in implementing such a system within the energy infrastructure of Canada. The proposed multi-conversion and single-function system produces hydrogen in large quantities using energy from solid fuels such as coal, tar sands, biomass, municipal solid waste (MSW) and agricultural/forest/industrial residue. The proposed system involves significant technology integration, with various energy conversion processes (such as gasification, chemical looping combustion, anaerobic digestion, combustion power cycles-electrolysis and solar-thermal converters) interconnected to increase the utilization of solid fuels as much as feasible within cost, environmental and other constraints. The analysis involves quantitative and qualitative assessments based on (i) energy resources availability and demand for hydrogen, (ii) commercial viability of primary energy conversion technologies, (iii) academia, industry and government participation, (iv) sustainability and (v) economics. An illustrative example provides an initial road map for implementing such a system. (author)

  13. Statistical process control charts for attribute data involving very large sample sizes: a review of problems and solutions.

    Science.gov (United States)

    Mohammed, Mohammed A; Panesar, Jagdeep S; Laney, David B; Wilson, Richard

    2013-04-01

    The use of statistical process control (SPC) charts in healthcare is increasing. The primary purpose of SPC is to distinguish between common-cause variation which is attributable to the underlying process, and special-cause variation which is extrinsic to the underlying process. This is important because improvement under common-cause variation requires action on the process, whereas special-cause variation merits an investigation to first find the cause. Nonetheless, when dealing with attribute or count data (eg, number of emergency admissions) involving very large sample sizes, traditional SPC charts often produce tight control limits with most of the data points appearing outside the control limits. This can give a false impression of common and special-cause variation, and potentially misguide the user into taking the wrong actions. Given the growing availability of large datasets from routinely collected databases in healthcare, there is a need to present a review of this problem (which arises because traditional attribute charts only consider within-subgroup variation) and its solutions (which consider within and between-subgroup variation), which involve the use of the well-established measurements chart and the more recently developed attribute charts based on Laney's innovative approach. We close by making some suggestions for practice.
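    The contrast this review draws can be sketched with simple illustrative data: classic p-chart limits shrink as the subgroup size grows, while Laney's p'-chart inflates them by a between-subgroup factor sigma_z estimated from the moving range of the z-scores. The data values below are made up for illustration:

    ```python
    import math

    def laney_p_limits(counts, sizes):
        """Laney p'-chart control limits for attribute (proportion) data.

        Classic p-chart limits, pbar +/- 3*sqrt(pbar*(1-pbar)/n_i), consider
        only within-subgroup variation, so with very large n_i nearly every
        point can signal. Laney's correction scales the limits by sigma_z,
        the short-term variation of the z-scores, which also accounts for
        between-subgroup variation.
        """
        pbar = sum(counts) / sum(sizes)
        sigma = [math.sqrt(pbar * (1.0 - pbar) / n) for n in sizes]
        z = [(c / n - pbar) / s for c, n, s in zip(counts, sizes, sigma)]
        moving_ranges = [abs(z2 - z1) for z1, z2 in zip(z, z[1:])]
        sigma_z = (sum(moving_ranges) / len(moving_ranges)) / 1.128  # d2 for n=2
        return [(pbar - 3.0 * s * sigma_z, pbar + 3.0 * s * sigma_z) for s in sigma]
    ```

    With overdispersed data, as is typical for counts from very large samples, sigma_z exceeds 1 and the limits widen relative to the classic chart, removing the spurious special-cause signals described above.
    
    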

  14. Estimation and Inference for Very Large Linear Mixed Effects Models

    OpenAIRE

    Gao, K.; Owen, A. B.

    2016-01-01

    Linear mixed models with large imbalanced crossed random effects structures pose severe computational problems for maximum likelihood estimation and for Bayesian analysis. The costs can grow as fast as $N^{3/2}$ when there are N observations. Such problems arise in any setting where the underlying factors satisfy a many-to-many relationship (instead of a nested one), and in electronic commerce applications N can be quite large. Methods that do not account for the correlation structure can...

  15. Bayesian hierarchical model for large-scale covariance matrix estimation.

    Science.gov (United States)

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
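    The overfitting referred to above comes from estimating p(p+1)/2 free covariance parameters from few samples. The authors' hierarchical Bayesian model is not reproduced here; as a minimal stand-in, the sketch below applies a generic shrinkage-toward-diagonal estimator, which illustrates the same bias-variance trade-off (the data matrix and the shrinkage weight lam are invented).

```python
def sample_covariance(X):
    """Unbiased sample covariance of data X (rows are observations)."""
    n, p = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(p)]
    return [[sum((row[i] - means[i]) * (row[j] - means[j]) for row in X)
             / (n - 1) for j in range(p)] for i in range(p)]

def shrink_covariance(S, lam):
    """Shrink S toward its diagonal: (1-lam)*S + lam*diag(S).
    lam in [0, 1]; larger lam = more regularization, lower variance."""
    p = len(S)
    return [[(1 - lam) * S[i][j] + (lam * S[i][j] if i == j else 0.0)
             for j in range(p)] for i in range(p)]

# Toy data: 4 observations of 3 variables (far fewer variables than a
# real OMICS problem, purely to show the mechanics)
X = [[1.0, 2.0, 0.5],
     [2.0, 1.5, 1.0],
     [0.5, 2.5, 0.0],
     [1.5, 1.0, 1.5]]
S = sample_covariance(X)
S_shrunk = shrink_covariance(S, lam=0.5)
```

    The diagonal is left untouched while off-diagonal entries are scaled by (1 - lam), trading a little bias for a large reduction in variance when samples are scarce.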

  16. Involving stakeholders in building integrated fisheries models using Bayesian methods

    DEFF Research Database (Denmark)

    Haapasaari, Päivi Elisabet; Mäntyniemi, Samu; Kuikka, Sakari

    2013-01-01

    the potential of the study to contribute to the development of participatory modeling practices. It is concluded that the subjective perspective to knowledge, that is fundamental in Bayesian theory, suits participatory modeling better than a positivist paradigm that seeks the objective truth. The methodology...

  17. Large-scale groundwater modeling using global datasets: A test case for the Rhine-Meuse basin

    NARCIS (Netherlands)

    Sutanudjaja, E.H.; Beek, L.P.H. van; Jong, S.M. de; Geer, F.C. van; Bierkens, M.F.P.

    2011-01-01

    Large-scale groundwater models involving aquifers and basins of multiple countries are still rare due to a lack of hydrogeological data which are usually only available in developed countries. In this study, we propose a novel approach to construct large-scale groundwater models by using global

  18. Large-scale groundwater modeling using global datasets: a test case for the Rhine-Meuse basin

    NARCIS (Netherlands)

    Sutanudjaja, E.H.; Beek, L.P.H. van; Jong, S.M. de; Geer, F.C. van; Bierkens, M.F.P.

    2011-01-01

    The current generation of large-scale hydrological models does not include a groundwater flow component. Large-scale groundwater models, involving aquifers and basins of multiple countries, are still rare mainly due to a lack of hydro-geological data which are usually only available in

  19. Large-scale groundwater modeling using global datasets: A test case for the Rhine-Meuse basin

    NARCIS (Netherlands)

    Sutanudjaja, E.H.; Beek, L.P.H. van; Jong, S.M. de; Geer, F.C. van; Bierkens, M.F.P.

    2011-01-01

    The current generation of large-scale hydrological models does not include a groundwater flow component. Large-scale groundwater models, involving aquifers and basins of multiple countries, are still rare mainly due to a lack of hydro-geological data which are usually only available in developed

  20. Disinformative data in large-scale hydrological modelling

    Directory of Open Access Journals (Sweden)

    A. Kauffeldt

    2013-07-01

    Full Text Available Large-scale hydrological modelling has become an important tool for the study of global and regional water resources, climate impacts, and water-resources management. However, modelling efforts over large spatial domains are fraught with problems of data scarcity, uncertainties and inconsistencies between model forcing and evaluation data. Model-independent methods to screen and analyse data for such problems are needed. This study aimed at identifying data inconsistencies in global datasets using a pre-modelling analysis, inconsistencies that can be disinformative for subsequent modelling. The consistency between (i) basin areas for different hydrographic datasets, and (ii) between climate data (precipitation and potential evaporation) and discharge data, was examined in terms of how well basin areas were represented in the flow networks and the possibility of water-balance closure. It was found that (i) most basins could be well represented in both gridded basin delineations and polygon-based ones, but some basins exhibited large area discrepancies between flow-network datasets and archived basin areas, (ii) basins exhibiting too-high runoff coefficients were abundant in areas where precipitation data were likely affected by snow undercatch, and (iii) the occurrence of basins exhibiting losses exceeding the potential-evaporation limit was strongly dependent on the potential-evaporation data, both in terms of numbers and geographical distribution. Some inconsistencies may be resolved by considering sub-grid variability in climate data, surface-dependent potential-evaporation estimates, etc., but further studies are needed to determine the reasons for the inconsistencies found. Our results emphasise the need for pre-modelling data analysis to identify dataset inconsistencies as an important first step in any large-scale study. 
Applying data-screening methods before modelling should also increase our chances to draw robust conclusions from subsequent
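    The screening checks described in the abstract amount to simple per-basin water-balance tests. A minimal sketch of such a pre-modelling screen is given below; the flag names, thresholds and basin records are invented for illustration and are not the paper's dataset.

```python
def screen_basin(precip, discharge, pot_evap):
    """Flag water-balance inconsistencies for one basin.
    All inputs are long-term means in mm/yr. Returns a list of flags."""
    flags = []
    runoff_coeff = discharge / precip
    if runoff_coeff > 1.0:
        # More water leaves than arrives: suspect forcing data,
        # e.g. precipitation undercatch in snow-dominated basins
        flags.append("runoff_coefficient_gt_1")
    loss = precip - discharge
    if loss > pot_evap:
        # Apparent evaporative loss exceeds the atmospheric demand limit
        flags.append("loss_exceeds_potential_evaporation")
    return flags

# Hypothetical basins: (precip, discharge, potential evaporation), mm/yr
basins = {
    "consistent":  (800.0, 300.0, 700.0),
    "undercatch":  (400.0, 520.0, 500.0),  # Q > P
    "excess_loss": (900.0, 100.0, 600.0),  # P - Q = 800 > PET
}
report = {name: screen_basin(*vals) for name, vals in basins.items()}
```

    A basin failing the first test exports more water than it receives (suggesting, e.g., precipitation undercatch), while one failing the second apparently evaporates more than the atmospheric demand allows; both are disinformative for subsequent modelling.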

  1. A hierarchical causal modeling for large industrial plants supervision

    International Nuclear Information System (INIS)

    Dziopa, P.; Leyval, L.

    1994-01-01

    A supervision system has to analyse the current state of the process and the way it will evolve after a modification of the inputs or a disturbance. It is proposed to base this analysis on a hierarchy of models, which differ in the number of involved variables and the abstraction level used to describe their temporal evolution. In a first step, special attention is paid to building the causal models, starting from the most abstract one. Once the hierarchy of models has been built, the parameters of the most detailed model are estimated. Several models of different abstraction levels can be used for on-line prediction. These methods have been applied to a nuclear reprocessing plant. The abstraction level can be chosen on line by the operator. Moreover, when an abnormal process behaviour is detected, a more detailed model is automatically triggered in order to focus the operator's attention on the suspected subsystem. (authors). 11 refs., 11 figs

  2. Including investment risk in large-scale power market models

    DEFF Research Database (Denmark)

    Lemming, Jørgen Kjærgaard; Meibom, P.

    2003-01-01

    Long-term energy market models can be used to examine investments in production technologies, however, with market liberalisation it is crucial that such models include investment risks and investor behaviour. This paper analyses how the effect of investment risk on production technology selection...... can be included in large-scale partial equilibrium models of the power market. The analyses are divided into a part about risk measures appropriate for power market investors and a more technical part about the combination of a risk-adjustment model and a partial-equilibrium model. To illustrate...... the analyses quantitatively, a framework based on an iterative interaction between the equilibrium model and a separate risk-adjustment module was constructed. To illustrate the features of the proposed modelling approach we examined how uncertainty in demand and variable costs affects the optimal choice...

  3. Model Experiments for the Determination of Airflow in Large Spaces

    DEFF Research Database (Denmark)

    Nielsen, Peter V.

    Model experiments are one of the methods used for the determination of airflow in large spaces. This paper will discuss the formation of the governing dimensionless numbers. It is shown that experiments with a reduced scale often will necessitate a fully developed turbulence level of the flow....... Details of the flow from supply openings are very important for the determination of room air distribution. It is in some cases possible to make a simplified supply opening for the model experiment....

  4. A two-level solvable model involving competing pairing interactions

    International Nuclear Information System (INIS)

    Dussel, G.G.; Maqueda, E.E.; Perazzo, R.P.J.; Evans, J.A.

    1986-01-01

    A model is considered consisting of nucleons moving in two non-degenerate l-shells and interacting through two pairing residual interactions with (S, T) = (1, 0) and (0, 1). These, together with the single-particle hamiltonian, induce mutually destructive correlations, giving rise to various collective pictures that can be discussed as representing a two-dimensional space of phases. The model is solved exactly using an O(8)xO(8) group theoretical classification scheme. The transfer of correlated pairs and quartets is also discussed. (orig.)

  5. Searches for phenomena beyond the Standard Model at the Large ...

    Indian Academy of Sciences (India)

    A key signature for supersymmetry searches at the LHC is thus the channel with large missing transverse momentum and jets of high transverse momentum. No excess above the expected SM background is observed and limits are set on supersymmetric models. Figures 1 and 2 show the limits from ATLAS [11] and CMS [12]. In addition to setting limits ...

  6. A stochastic large deformation model for computational anatomy

    DEFF Research Database (Denmark)

    Arnaudon, Alexis; Holm, Darryl D.; Pai, Akshay Sadananda Uppinakudru

    2017-01-01

    In the study of shapes of human organs using computational anatomy, variations are found to arise from inter-subject anatomical differences, disease-specific effects, and measurement noise. This paper introduces a stochastic model for incorporating random variations into the Large Deformation...

  7. Solving large linear systems in an implicit thermohaline ocean model

    NARCIS (Netherlands)

    de Niet, Arie Christiaan

    2007-01-01

    The climate on earth is largely determined by the global ocean circulation. Hence it is important to predict how the flow will react to perturbation by for example melting icecaps. To answer questions about the stability of the global ocean flow, a computer model has been developed that is able to

  8. Penalized Estimation in Large-Scale Generalized Linear Array Models

    DEFF Research Database (Denmark)

    Lund, Adam; Vincent, Martin; Hansen, Niels Richard

    2017-01-01

    Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension...

  9. Application of Logic Models in a Large Scientific Research Program

    Science.gov (United States)

    O'Keefe, Christine M.; Head, Richard J.

    2011-01-01

    It is the purpose of this article to discuss the development and application of a logic model in the context of a large scientific research program within the Commonwealth Scientific and Industrial Research Organisation (CSIRO). CSIRO is Australia's national science agency and is a publicly funded part of Australia's innovation system. It conducts…

  10. Large scale solar district heating. Evaluation, modelling and designing - Appendices

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

    The appendices present the following: A) Cad-drawing of the Marstal CSHP design. B) Key values - large-scale solar heating in Denmark. C) Monitoring - a system description. D) WMO-classification of pyranometers (solarimeters). E) The computer simulation model in TRNSYS. F) Selected papers from the author. (EHS)

  11. Large p sub(T) pion production and clustered parton model

    Energy Technology Data Exchange (ETDEWEB)

    Kanki, T [Osaka Univ., Toyonaka (Japan). Coll. of General Education

    1977-05-01

    Recent experimental results on the large p sub(T) inclusive π0 productions by pp and πp collisions are interpreted by the parton model in which the constituent quarks are defined to be the clusters of the quark-partons and gluons.

  12. Verifying large SDL-specifications using model checking

    NARCIS (Netherlands)

    Sidorova, N.; Steffen, M.; Reed, R.; Reed, J.

    2001-01-01

    In this paper we propose a methodology for model-checking based verification of large SDL specifications. The methodology is illustrated by a case study of an industrial medium-access protocol for wireless ATM. To cope with the state space explosion, the verification exploits the layered and modular

  13. Modeling of 3D Aluminum Polycrystals during Large Deformations

    International Nuclear Information System (INIS)

    Maniatty, Antoinette M.; Littlewood, David J.; Lu Jing; Pyle, Devin

    2007-01-01

    An approach for generating, meshing, and modeling 3D polycrystals, with a focus on aluminum alloys, subjected to large deformation processes is presented. A Potts-type model is used to generate statistically representative grain structures with periodicity to allow scale-linking. The grain structures are compared to experimentally observed grain structures to validate that they are representative. A procedure for generating a geometric model from the voxel data is developed, allowing for adaptive meshing of the generated grain structure. Material behavior is governed by an appropriate crystal elasto-viscoplastic constitutive model. The elastic-viscoplastic model is implemented in a three-dimensional, finite deformation, mixed finite element program. In order to handle the large-scale problems of interest, a parallel implementation is utilized. A multiscale procedure is used to link larger scale models of deformation processes to the polycrystal model, where periodic boundary conditions on the fluctuation field are enforced. Finite-element models of 3D polycrystal grain structures are presented along with observations made from these simulations
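    The Potts-type grain-structure generator mentioned above can be illustrated with a toy version: a zero-temperature Potts model on a periodic 2-D grid (the paper's generator is 3-D; the grid size, label count and sweep count below are arbitrary choices for the sketch).

```python
import random

def potts_grain_growth(n=32, q=20, sweeps=30, seed=0):
    """Zero-temperature Potts model on an n x n periodic grid.
    Each site holds a grain label in 0..q-1; a site may adopt a
    neighbor's label only if that does not raise the boundary energy."""
    rng = random.Random(seed)
    grid = [[rng.randrange(q) for _ in range(n)] for _ in range(n)]

    def neighbors(i, j):
        return [grid[(i - 1) % n][j], grid[(i + 1) % n][j],
                grid[i][(j - 1) % n], grid[i][(j + 1) % n]]

    def site_energy(i, j, label):
        # Number of unlike nearest-neighbor bonds at this site
        return sum(1 for lab in neighbors(i, j) if lab != label)

    def total_energy():
        return sum(site_energy(i, j, grid[i][j])
                   for i in range(n) for j in range(n))

    e_initial = total_energy()
    for _ in range(sweeps * n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        proposal = rng.choice(neighbors(i, j))
        if site_energy(i, j, proposal) <= site_energy(i, j, grid[i][j]):
            grid[i][j] = proposal
    return e_initial, total_energy(), grid

e0, e1, grid = potts_grain_growth()
```

    Each accepted move can only keep or lower the boundary energy, so grain boundaries shorten and the random label map coarsens into compact grains, which is the qualitative behaviour such generators rely on.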

  14. Particle production at large transverse momentum and hard collision models

    International Nuclear Information System (INIS)

    Ranft, G.; Ranft, J.

    1977-04-01

    The majority of the presently available experimental data is consistent with hard scattering models. Therefore the hard scattering model seems to be well established. There is good evidence for jets in large transverse momentum reactions as predicted by these models. The overall picture is however not yet well enough understood. We mention only the empirical hard scattering cross section introduced in most of the models, the lack of a deep theoretical understanding of the interplay between quark confinement and jet production, and the fact that we are not yet able to discriminate conclusively between the many proposed hard scattering models. The status of different hard collision models discussed in this paper is summarized. (author)

  15. Introducing an Intervention Model for Fostering Affective Involvement with Persons Who Are Congenitally Deafblind

    NARCIS (Netherlands)

    Martens, M.A.W.; Janssen, M.J.; Ruijssenaars, A.J.J.M.; Riksen-Walraven, J.M.A.

    2014-01-01

    The article presented here introduces the Intervention Model for Affective Involvement (IMAI), which was designed to train staff members (for example, teachers, caregivers, support workers) to foster affective involvement during interaction and communication with persons who have congenital

  16. Large deflection of viscoelastic beams using fractional derivative model

    International Nuclear Information System (INIS)

    Bahranini, Seyed Masoud Sotoodeh; Eghtesad, Mohammad; Ghavanloo, Esmaeal; Farid, Mehrdad

    2013-01-01

    This paper deals with large deflection of viscoelastic beams using a fractional derivative model. For this purpose, a nonlinear finite element formulation of viscoelastic beams in conjunction with the fractional derivative constitutive equations has been developed. The four-parameter fractional derivative model has been used to describe the constitutive equations. The deflected configuration for a uniform beam with different boundary conditions and loads is presented. The effect of the order of the fractional derivative on the large deflection of the cantilever viscoelastic beam is investigated after 10, 100, and 1000 hours. The main contribution of this paper is a finite element implementation for nonlinear analysis of the viscoelastic fractional model using the storage of both strain and stress histories. The validity of the present analysis is confirmed by comparing the results with those found in the literature.
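    Fractional-derivative constitutive models like the four-parameter model above are typically evaluated numerically with the Grünwald-Letnikov series, which is also why the full strain and stress histories must be stored. A minimal sketch of that discretization (my own, not the paper's finite element implementation) is:

```python
def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_j = (-1)^j * C(alpha, j),
    computed via the stable recursion w_j = w_{j-1} * (j - 1 - alpha) / j."""
    w = [1.0]
    for j in range(1, n + 1):
        w.append(w[-1] * (j - 1 - alpha) / j)
    return w

def gl_derivative(f, t, alpha, h=1e-3):
    """Approximate the order-alpha fractional derivative of f at t
    (lower terminal 0), first-order accurate in the step h."""
    n = int(t / h)
    w = gl_weights(alpha, n)
    return sum(w[j] * f(t - j * h) for j in range(n + 1)) / h ** alpha

# Sanity checks against known results for f(t) = t on [0, 1]:
d1 = gl_derivative(lambda t: t, 1.0, 1.0)      # ordinary derivative: 1
d_half = gl_derivative(lambda t: t, 1.0, 0.5)  # exact: 2*sqrt(t/pi) at t=1
```

    Note that the order-alpha derivative at time t needs f at every earlier grid point, i.e. the whole history, which is exactly the storage burden the abstract mentions.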

  17. Traffic assignment models in large-scale applications

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær

    the potential of the method proposed and the possibility to use individual-based GPS units for travel surveys in real-life large-scale multi-modal networks. Congestion is known to highly influence the way we act in the transportation network (and organise our lives), because of longer travel times...... of observations of actual behaviour to obtain estimates of the (monetary) value of different travel time components, thereby increasing the behavioural realism of large-scale models. The generation of choice sets is a vital component in route choice models. This is, however, not a straight-forward task in real......, but the reliability of the travel time also has a large impact on our travel choices. Consequently, in order to improve the realism of transport models, correct understanding and representation of two values that are related to the value of time (VoT) are essential: (i) the value of congestion (VoC), as the Vo...

  18. Research on large-scale wind farm modeling

    Science.gov (United States)

    Ma, Longfei; Zhang, Baoqun; Gong, Cheng; Jiao, Ran; Shi, Rui; Chi, Zhongjun; Ding, Yifeng

    2017-01-01

    Due to the intermittent and fluctuating properties of wind energy, a large-scale wind farm connected to the grid has a much greater impact on the power system than traditional power plants do. It is therefore necessary to establish an effective wind farm model to simulate and analyze both the influence wind farms have on the grid and the transient characteristics of the wind turbines when the grid is at fault. An effective WTG model must be established first. As the doubly-fed VSCF wind turbine is currently the mainstream wind turbine type, this article first reviews the research progress on the doubly-fed VSCF wind turbine and then describes the detailed process of building its model. Common wind farm modeling methods are then surveyed and their problems pointed out. As WAMS is widely used in the power system, online parameter identification of the wind farm model based on the output characteristics of the wind farm becomes possible; the article focuses on this new idea of identification-based modeling of large wind farms, which can be realized by two concrete methods.

  19. A large deformation viscoelastic model for double-network hydrogels

    Science.gov (United States)

    Mao, Yunwei; Lin, Shaoting; Zhao, Xuanhe; Anand, Lallit

    2017-03-01

    We present a large deformation viscoelasticity model for recently synthesized double network hydrogels which consist of a covalently-crosslinked polyacrylamide network with long chains, and an ionically-crosslinked alginate network with short chains. Such double-network gels are highly stretchable and at the same time tough, because when stretched the crosslinks in the ionically-crosslinked alginate network rupture which results in distributed internal microdamage which dissipates a substantial amount of energy, while the configurational entropy of the covalently-crosslinked polyacrylamide network allows the gel to return to its original configuration after deformation. In addition to the large hysteresis during loading and unloading, these double network hydrogels also exhibit a substantial rate-sensitive response during loading, but exhibit almost no rate-sensitivity during unloading. These features of large hysteresis and asymmetric rate-sensitivity are quite different from the response of conventional hydrogels. We limit our attention to modeling the complex viscoelastic response of such hydrogels under isothermal conditions. Our model is restricted in the sense that we have limited our attention to conditions under which one might neglect any diffusion of the water in the hydrogel - as might occur when the gel has a uniform initial value of the concentration of water, and the mobility of the water molecules in the gel is low relative to the time scale of the mechanical deformation. We also do not attempt to model the final fracture of such double-network hydrogels.

  20. Global Bedload Flux Modeling and Analysis in Large Rivers

    Science.gov (United States)

    Islam, M. T.; Cohen, S.; Syvitski, J. P.

    2017-12-01

    Proper sediment transport quantification has long been an area of interest for both scientists and engineers in the fields of geomorphology and management of rivers and coastal waters. Bedload flux is important for monitoring water quality and for sustainable development of coastal and marine bioservices. Bedload measurements, especially for large rivers, are extremely scarce across time, and many rivers have never been monitored. The problem is particularly acute in developing countries, where changes in sediment yield are high. The paucity of bedload measurements is the result of (1) the nature of the problem (large spatial and temporal uncertainties), and (2) field costs, including the time-consuming nature of the measurement procedures (repeated bedform migration tracking, bedload samplers). Here we present a first-of-its-kind methodology for calculating bedload in large global rivers (basins are >1,000 km. Evaluation of model skill is based on 113 bedload measurements. The model predictions are compared with an empirical model developed from the observational dataset in an attempt to evaluate the differences between a physically-based numerical model and a lumped relationship between bedload flux and fluvial and basin parameters (e.g., discharge, drainage area, lithology). The initial success of the study opens up various applications in global fluvial geomorphology (e.g. the relationship between suspended sediment (wash load) and bedload). Simulated results with known uncertainties offer a new research product as a valuable resource for the whole scientific community.

  1. Application of simplified models to CO2 migration and immobilization in large-scale geological systems

    KAUST Repository

    Gasda, Sarah E.

    2012-07-01

    Long-term stabilization of injected carbon dioxide (CO2) is an essential component of risk management for geological carbon sequestration operations. However, migration and trapping phenomena are inherently complex, involving processes that act over multiple spatial and temporal scales. One example involves centimeter-scale density instabilities in the dissolved CO2 region leading to large-scale convective mixing that can be a significant driver for CO2 dissolution. Another example is the potentially important effect of capillary forces, in addition to buoyancy and viscous forces, on the evolution of mobile CO2. Local capillary effects lead to a capillary transition zone, or capillary fringe, where both fluids are present in the mobile state. This small-scale effect may have a significant impact on large-scale plume migration as well as long-term residual and dissolution trapping. Computational models that can capture both large and small-scale effects are essential to predict the role of these processes on the long-term storage security of CO2 sequestration operations. Conventional modeling tools are unable to resolve sufficiently all of these relevant processes when modeling CO2 migration in large-scale geological systems. Herein, we present a vertically-integrated approach to CO2 modeling that employs upscaled representations of these subgrid processes. We apply the model to the Johansen formation, a prospective site for sequestration of Norwegian CO2 emissions, and explore the sensitivity of CO2 migration and trapping to subscale physics. Model results show the relative importance of different physical processes in large-scale simulations. The ability of models such as this to capture the relevant physical processes at large spatial and temporal scales is important for prediction and analysis of CO2 storage sites. © 2012 Elsevier Ltd.

  2. Precise MRI-based stereotaxic surgery in large animal models

    DEFF Research Database (Denmark)

    Glud, Andreas Nørgaard; Bech, Johannes; Tvilling, Laura

    BACKGROUND: Stereotaxic neurosurgery in large animals is used widely in different sophisticated models, where precision is becoming more crucial as desired anatomical target regions are becoming smaller. Individually calculated coordinates are necessary in large animal models with cortical...... and subcortical anatomical differences. NEW METHOD: We present a convenient method to make an MRI-visible skull fiducial for 3D MRI-based stereotaxic procedures in larger experimental animals. Plastic screws were filled with either copper-sulphate solution or MRI-visible paste from a commercially available...... cranial head marker. The screw fiducials were inserted in the animal skulls and T1 weighted MRI was performed allowing identification of the inserted skull marker. RESULTS: Both types of fiducial markers were clearly visible on the MRIs. This allows high precision in the stereotaxic space. COMPARISON...

  3. Mechanical test of the model coil wound with large conductor

    International Nuclear Information System (INIS)

    Hiue, Hisaaki; Sugimoto, Makoto; Nakajima, Hideo; Yasukawa, Yukio; Yoshida, Kiyoshi; Hasegawa, Mitsuru; Ito, Ikuo; Konno, Masayuki.

    1992-09-01

    High rigidity and strength of the winding pack are required to realize the large superconducting magnets for the fusion reactor. This paper describes mechanical tests concerning the rigidity of the winding pack. Samples were prepared to evaluate the adhesive strength between conductors and insulators. Epoxy and Bismaleimide-Triazine resin (BT resin) were used as the conductor insulation. Stainless steel (SS) 304 bars, whose surfaces were treated mechanically and chemically, served as the model conductor. The model coil was wound with the model conductors covered with the insulator and finished with ground insulation. A winding model combining 3 x 3 conductors was produced for measuring shearing rigidity; the sample was loaded with pure shearing force at LN2 temperature. The bending rigidity of a bar winding sample of 8 x 6 conductors was measured in three-point bending tests carried out at room temperature. The pancake winding sample was loaded with compressive forces to measure the compressive rigidity of the winding. (author)

  4. Towards a 'standard model' of large scale structure formation

    International Nuclear Information System (INIS)

    Shafi, Q.

    1994-01-01

    We explore constraints on inflationary models employing data on large scale structure, mainly from COBE temperature anisotropies and IRAS-selected galaxy surveys. In models where the tensor contribution to the COBE signal is negligible, we find that the spectral index of density fluctuations n must exceed 0.7. Furthermore, the COBE signal cannot be dominated by the tensor component, implying n > 0.85 in such models. The data favor cold plus hot dark matter models with n equal to or close to unity and Ω HDM ∼ 0.2 - 0.35. Realistic grand unified theories, including supersymmetric versions, which produce inflation with these properties are presented. (author). 46 refs, 8 figs

  5. Aero-Acoustic Modelling using Large Eddy Simulation

    International Nuclear Information System (INIS)

    Shen, W Z; Soerensen, J N

    2007-01-01

    The splitting technique for aero-acoustic computations is extended to simulate three-dimensional flow and acoustic waves from airfoils. The aero-acoustic model is coupled to a sub-grid-scale turbulence model for Large-Eddy Simulations. In the first test case, the model is applied to compute laminar flow past a NACA 0015 airfoil at a Reynolds number of 800, a Mach number of 0.2 and an angle of attack of 20 deg. The model is then applied to compute turbulent flow past a NACA 0015 airfoil at a Reynolds number of 100 000, a Mach number of 0.2 and an angle of attack of 20 deg. The predicted noise spectrum is compared to experimental data

  6. Perturbation theory instead of large scale shell model calculations

    International Nuclear Information System (INIS)

    Feldmeier, H.; Mankos, P.

    1977-01-01

    Results of large scale shell model calculations for (sd)-shell nuclei are compared with perturbation theory, which provides an excellent approximation when the SU(3)-basis is used as a starting point. The results indicate that a perturbation-theory treatment in an SU(3)-basis including 2ħω excitations should be preferable to a full diagonalization within the (sd)-shell. (orig.) [de

  7. Field theory of large amplitude collective motion. A schematic model

    International Nuclear Information System (INIS)

    Reinhardt, H.

    1978-01-01

    By using path integral methods the equation for large amplitude collective motion for a schematic two-level model is derived. The original fermion theory is reformulated in terms of a collective (Bose) field. The classical equation of motion for the collective field coincides with the time-dependent Hartree-Fock equation. Its classical solution is quantized by means of the field-theoretical generalization of the WKB method. (author)

  8. Large urban fire environment: trends and model city predictions

    International Nuclear Information System (INIS)

    Larson, D.A.; Small, R.D.

    1983-01-01

    The urban fire environment that would result from a megaton-yield nuclear weapon burst is considered. The dependence of temperatures and velocities on fire size, burning intensity, turbulence, and radiation is explored, and specific calculations for three model urban areas are presented. In all cases, high velocity fire winds are predicted. The model-city results show the influence of building density and urban sprawl on the fire environment. Additional calculations consider large-area fires with the burning intensity reduced in a blast-damaged urban center

  9. ARMA modelling of neutron stochastic processes with large measurement noise

    International Nuclear Information System (INIS)

    Zavaljevski, N.; Kostic, Lj.; Pesic, M.

    1994-01-01

    An autoregressive moving average (ARMA) model of the neutron fluctuations with large measurement noise is derived from Langevin stochastic equations and validated using time series data obtained during prompt neutron decay constant measurements at the zero power reactor RB in Vinca. Model parameters are estimated using the maximum likelihood (ML) off-line algorithm and an adaptive pole estimation algorithm based on the recursive prediction error method (RPE). The results show that subcriticality can be determined from real data with high measurement noise using a much shorter statistical sample than in standard methods. (author)
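    An AR(1) signal observed through additive white measurement noise is the simplest case of the situation described: the observed series is exactly an ARMA(1,1) process. The sketch below recovers the pole with a moment-based autocovariance ratio rather than the authors' ML/RPE algorithms; the parameter values are invented, and the measurement-noise variance is deliberately larger than the driving-noise variance.

```python
import random

def simulate(phi, sig_w, sig_v, n, seed=1):
    """AR(1) state x_k = phi*x_{k-1} + w_k, observed as y_k = x_k + v_k."""
    rng = random.Random(seed)
    x, ys = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, sig_w)
        ys.append(x + rng.gauss(0.0, sig_v))
    return ys

def autocov(y, k):
    """Biased sample autocovariance at lag k."""
    m = sum(y) / len(y)
    return sum((y[i] - m) * (y[i + k] - m) for i in range(len(y) - k)) / len(y)

def estimate_pole(y):
    """For AR(1) plus white noise, gamma(k) = sig_x^2 * phi^k for k >= 1,
    so the measurement noise cancels in the ratio gamma(2)/gamma(1)."""
    return autocov(y, 2) / autocov(y, 1)

# Measurement noise variance (4.0) exceeds driving noise variance (1.0)
y = simulate(phi=0.8, sig_w=1.0, sig_v=2.0, n=200000)
phi_hat = estimate_pole(y)
```

    The measurement noise inflates only the lag-0 autocovariance, so the lag ratio gamma(2)/gamma(1) cancels it and still recovers the pole phi.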

  10. Involvement of herbal medicine as a cause of mesenteric phlebosclerosis: results from a large-scale nationwide survey.

    Science.gov (United States)

    Shimizu, Seiji; Kobayashi, Taku; Tomioka, Hideo; Ohtsu, Kensei; Matsui, Toshiyuki; Hibi, Toshifumi

    2017-03-01

    Mesenteric phlebosclerosis (MP) is a rare disease characterized by venous calcification extending from the colonic wall to the mesentery, with chronic ischemic changes in the intestine caused by impaired venous return. It is an idiopathic disease, but increasing attention has been paid to the potential involvement of herbal medicine, or Kampo, in its etiology. To date there have been scattered case reports, but no large-scale studies have been conducted to unravel the clinical characteristics and etiology of the disease. A nationwide survey was conducted using questionnaires to assess possible etiology (particularly the involvement of herbal medicine), clinical manifestations, disease course, and treatment of MP. Data from 222 patients were collected. Among the 169 patients (76.1 %) whose history of herbal medicine use was obtained, 147 (87.0 %) used herbal medicines. The use of herbal medicines containing sanshishi (gardenia fruit, Gardenia jasminoides Ellis) was reported in 119 of 147 patients (81.0 %). Thus, the use of herbal medicine containing sanshishi was confirmed in 70.4 % of the 169 patients whose history of herbal medicine use was obtained. The duration of sanshishi use ranged from 3 to 51 years (mean 13.6 years). Patients who discontinued sanshishi showed a better outcome than those who continued it. The use of herbal medicine containing sanshishi is associated with the etiology of MP. Although it may not be the causative factor, gastroenterologists need to be aware of the potential risk of herbal medicine containing sanshishi for the development of MP.

  11. Protein homology model refinement by large-scale energy optimization.

    Science.gov (United States)

    Park, Hahnbeom; Ovchinnikov, Sergey; Kim, David E; DiMaio, Frank; Baker, David

    2018-03-20

    Proteins fold to their lowest free-energy structures, and hence the most straightforward way to increase the accuracy of a partially incorrect protein structure model is to search for the lowest-energy nearby structure. This direct approach has met with little success for two reasons: first, energy function inaccuracies can lead to false energy minima, resulting in model degradation rather than improvement; and second, even with an accurate energy function, the search problem is formidable because the energy only drops considerably in the immediate vicinity of the global minimum, and there are a very large number of degrees of freedom. Here we describe a large-scale energy optimization-based refinement method that incorporates advances in both search and energy function accuracy that can substantially improve the accuracy of low-resolution homology models. The method refined low-resolution homology models into correct folds for 50 of 84 diverse protein families and generated improved models in recent blind structure prediction experiments. Analyses of the basis for these improvements reveal contributions from both the improvements in conformational sampling techniques and the energy function.

  12. Homogenization of Large-Scale Movement Models in Ecology

    Science.gov (United States)

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
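
    A toy sketch of the central idea, that small-scale habitat variability sets the effective large-scale coefficient: in homogenization of ecological diffusion the effective motility behaves like a harmonic-type average, which slow-movement patches dominate. The motility values below are hypothetical, not from the paper:

```python
import numpy as np

# Hypothetical small-scale habitat motilities (e.g. m^2/day) along a transect;
# one slow patch (dense cover) among fast, open patches.
mu = np.array([10.0, 10.0, 0.5, 10.0])

arithmetic = float(mu.mean())                 # naive average
harmonic = len(mu) / float(np.sum(1.0 / mu))  # harmonic-type average

# The harmonic average is pulled far below the arithmetic one by the single
# slow patch: large-scale movement is controlled by where animals linger.
print(arithmetic, harmonic)
```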

  13. Multiresolution comparison of precipitation datasets for large-scale models

    Science.gov (United States)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparisons of gridded precipitation products with ground observations provide another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin plate spline smoothing algorithm (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing spatial coherence. In addition to the product comparison, various downscaling methods are surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.

  14. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    Science.gov (United States)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large scale surface models with fine spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on classic Boolean visibility, which is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. A case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). A multiple Boolean viewshed analysis and a global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. In addition, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large scale models are an appropriate data source for visibility analyses at the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
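
    Both the Boolean viewshed and the global-horizon viewshed mentioned above reduce, along each sight line, to tracking a running maximum elevation angle. A minimal 1-D sketch of that test (the function name, observer height and terrain values are illustrative assumptions, not from the article):

```python
import math

def visible_cells(elev, observer_height=1.8, cell_size=1.0):
    """Boolean line-of-sight along a 1-D terrain profile.

    elev[0] is the observer's ground cell; a cell is visible when the
    sight-line angle to its surface exceeds every angle seen before it
    (the running local horizon).
    """
    eye = elev[0] + observer_height
    horizon = -math.inf
    out = []
    for i in range(1, len(elev)):
        angle = math.atan2(elev[i] - eye, i * cell_size)
        out.append(angle > horizon)
        horizon = max(horizon, angle)
    return out

# A ridge at index 5 hides the cell behind it; the dip at 3-4 is hidden
# by the closer rise at index 2.
profile = [100, 100, 105, 103, 104, 110, 104]
print(visible_cells(profile))  # [True, True, False, False, True, False]
```

    A 2-D viewshed repeats this test along rays from the observer to every cell; the "angle difference above the local horizon" of the extended viewshed is simply `angle - horizon` instead of the Boolean comparison.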

  15. The Missing Stakeholder Group: Why Patients Should be Involved in Health Economic Modelling.

    Science.gov (United States)

    van Voorn, George A K; Vemer, Pepijn; Hamerlijnck, Dominique; Ramos, Isaac Corro; Teunissen, Geertruida J; Al, Maiwenn; Feenstra, Talitha L

    2016-04-01

    Evaluations of healthcare interventions, e.g. new drugs or other new treatment strategies, commonly include a cost-effectiveness analysis (CEA) that is based on the application of health economic (HE) models. As end users, patients are important stakeholders regarding the outcomes of CEAs, yet their knowledge of HE model development and application, or their involvement therein, is absent. This paper considers possible benefits and risks of patient involvement in HE model development and application for modellers and patients. An exploratory review of the literature has been performed on stakeholder-involved modelling in various disciplines. In addition, Dutch patient experts have been interviewed about their experience in, and opinion about, the application of HE models. Patients have little to no knowledge of HE models and are seldom involved in HE model development and application. Benefits of becoming involved would include a greater understanding and possible acceptance by patients of HE model application, improved model validation, and a more direct infusion of patient expertise. Risks would include patient bias and increased costs of modelling. Patient involvement in HE modelling seems to carry several benefits as well as risks. We claim that the benefits may outweigh the risks and that patients should become involved.

  16. Parents as Role Models: Parental Behavior Affects Adolescents' Plans for Work Involvement

    Science.gov (United States)

    Wiese, Bettina S.; Freund, Alexandra M.

    2011-01-01

    This study (N = 520 high-school students) investigates the influence of parental work involvement on adolescents' plans regarding their future work involvement. As expected, adolescents' perceptions of parental work behavior affected their plans for their own work involvement. Same-sex parents served as the main role models for the adolescents' own…

  17. Validating modeled turbulent heat fluxes across large freshwater surfaces

    Science.gov (United States)

    Lofgren, B. M.; Fujisaki-Manome, A.; Gronewold, A.; Anderson, E. J.; Fitzpatrick, L.; Blanken, P.; Spence, C.; Lenters, J. D.; Xiao, C.; Charusambot, U.

    2017-12-01

    Turbulent fluxes of latent and sensible heat are important physical processes that influence the energy and water budgets of the Great Lakes. Validation and improvement of bulk flux algorithms to simulate these turbulent heat fluxes are critical for accurate prediction of hydrodynamics, water levels, weather, and climate over the region. Here we consider five heat flux algorithms from several model systems: the Finite-Volume Community Ocean Model (FVCOM), the Weather Research and Forecasting model, and the Large Lake Thermodynamics Model, which are used in research and operational environments, concentrate on different aspects of the Great Lakes' physical system, and interface at the lake surface. The heat flux algorithms were isolated from each model and driven by meteorological data from over-lake stations in the Great Lakes Evaporation Network. The simulation results were compared with eddy covariance flux measurements at the same stations. All models show the capacity to capture the seasonal cycle of the turbulent heat fluxes. Overall, the Coupled Ocean Atmosphere Response Experiment algorithm in FVCOM has the best agreement with eddy covariance measurements. Simulations with the other four algorithms are improved overall by updating the parameterization of the roughness length scales for temperature and humidity. Agreement between modelled and observed fluxes varied notably with the geographical locations of the stations. For example, at the Long Point station in Lake Erie, observed fluxes are likely influenced by the upwind land surface; the simulations do not account for this land surface influence, and therefore the agreement there is generally worse.
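
    Bulk flux algorithms of the kind compared here estimate turbulent fluxes from mean meteorological quantities via relations of the form H = rho * cp * C_H * U * (T_s - T_a). A minimal sketch of such a bulk estimate for sensible heat (all parameter values are illustrative assumptions, not taken from any of the cited models):

```python
# Bulk aerodynamic estimate of the sensible heat flux over a lake surface.
# The transfer coefficient C_H is the quantity the compared algorithms
# parameterize (via roughness lengths for temperature and humidity).
rho = 1.2            # air density, kg m^-3
cp = 1005.0          # specific heat of air at constant pressure, J kg^-1 K^-1
C_H = 1.3e-3         # bulk transfer coefficient for heat (assumed)
U = 8.0              # wind speed at reference height, m s^-1
T_s, T_a = 6.0, 2.0  # water-surface and air temperature, deg C

H = rho * cp * C_H * U * (T_s - T_a)   # W m^-2, positive upward
print(round(H, 1))
```

    Latent heat is estimated analogously from the humidity difference between the surface and the air, with its own transfer coefficient C_E.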

  18. Challenges of Modeling Flood Risk at Large Scales

    Science.gov (United States)

    Guin, J.; Simic, M.; Rowe, J.

    2009-04-01

    Flood risk management is a major concern for many nations and for the insurance sector in places where this peril is insured. A prerequisite for risk management, whether in the public or private sector, is an accurate estimation of the risk. Mitigation measures and traditional flood management techniques are most successful when the problem is viewed at a large regional scale such that all inter-dependencies in a river network are well understood. From an insurance perspective, the jury is still out on whether flood is an insurable peril. However, with advances in modeling techniques and computer power it is possible to develop models that allow proper risk quantification at a scale suitable for a viable insurance market for the flood peril. In order to serve the insurance market, a model has to be event-simulation based and has to provide financial risk estimates that form the basis for risk pricing, risk transfer and risk management at all levels of the insurance industry at large. In short, for a collection of properties, henceforth referred to as a portfolio, the critical output of the model is an annual probability distribution of economic losses from a single flood occurrence (flood event) or from an aggregation of all events in any given year. In this paper, the challenges of developing such a model are discussed in the context of Great Britain, for which a model has been developed. The model comprises several physically motivated components so that the primary attributes of the phenomenon are accounted for. The first component, the rainfall generator, simulates a continuous series of rainfall events in space and time over thousands of years, which are physically realistic while maintaining the statistical properties of rainfall at all locations over the model domain. A physically based runoff generation module feeds all the rivers in Great Britain, whose total length of stream links amounts to about 60,000 km. A dynamical flow routing

  19. Large degeneracy of excited hadrons and quark models

    International Nuclear Information System (INIS)

    Bicudo, P.

    2007-01-01

    The pattern of a large approximate degeneracy of the excited hadron spectra (larger than the chiral restoration degeneracy) is present in the recent experimental report of Bugg. Here we try to model this degeneracy with state-of-the-art quark models. We review how the Coulomb gauge chiral invariant and confining Bethe-Salpeter equation simplifies, in the case of very excited quark-antiquark mesons including angular or radial excitations, to a Salpeter equation with an ultrarelativistic kinetic energy and the spin-independent part of the potential. The resulting meson spectrum is solved, and the excited chiral restoration is recovered, for all mesons with J>0. Applying the ultrarelativistic simplification to a linear equal-time potential, linear Regge trajectories are obtained for both angular and radial excitations. The spectrum is also compared with the semiclassical Bohr-Sommerfeld quantization relation. However, the excited angular and radial spectra do not coincide exactly. We then search, with the classical Bertrand theorem, for central potentials that always produce closed classical orbits with the ultrarelativistic kinetic energy. We find that no such potential exists, which implies that no exact larger degeneracy can be obtained in our equal-time framework with a single principal quantum number comparable to the nonrelativistic Coulomb or harmonic oscillator potentials. Nevertheless, we find it plausible that the large experimental approximate degeneracy will be modeled in the future by quark models beyond the present state of the art

  20. Assisted reproduction involving gestational surrogacy: an analysis of the medical, psychosocial and legal issues: experience from a large surrogacy program.

    Science.gov (United States)

    Dar, Shir; Lazer, Tal; Swanson, Sonja; Silverman, Jan; Wasser, Cindy; Moskovtsev, Sergey I; Sojecki, Agata; Librach, Clifford L

    2015-02-01

    What are the medical, psychosocial and legal aspects of gestational surrogacy (GS), including pregnancy outcomes and complications, in a large series? Meticulous multidisciplinary teamwork, involving medical, legal and psychosocial input for both the intended parent(s) (IP) and the gestational carrier (GC), is critical to achieving a successful GS program. Small case series have described pregnancy rates of 17-50% for GS. There are no large case series, and the medical, legal and psychological aspects of GS have not been addressed in most of these studies. To our knowledge, this is the largest reported GS case series. A retrospective cohort study was performed. Data were collected from 333 consecutive GC cycles between 1998 and 2012. There were 178 pregnancies achieved out of 333 stimulation cycles, including fresh and frozen transfers. The indications for a GC were divided into two groups. Those who 'failed to carry' included women with recurrent implantation failure (RIF), recurrent pregnancy loss (RPL) and previous poor pregnancy outcome (n = 96; 132 cycles, pregnancy rate 50.0%). The second group consisted of those who 'cannot carry', including those with severe Asherman's syndrome, uterine malformations/uterine agenesis and maternal medical diseases (n = 108; 139 cycles, pregnancy rate 54.0%). A third group, of same-sex male couples and single men, was analyzed separately (n = 52; 62 cycles, pregnancy rate 59.7%). In 49.2% of cycles autologous oocytes were used, and 50.8% of cycles involved donor oocytes. The 'failed to carry' group consisted of 96 patients who underwent 132 cycles at a mean age of 40.3 years. There were 66 pregnancies (50.0%) with 17 miscarriages (25.8%) and 46 confirmed births (34.8%). The 'cannot carry' group consisted of 108 patients who underwent 139 cycles at a mean age of 35.9 years. There were 75 pregnancies (54.0%) with 15 miscarriages (20.0%) and 56 confirmed births (40.3%). The pregnancy, miscarriage and live birth

  1. Large Scale Computing for the Modelling of Whole Brain Connectivity

    DEFF Research Database (Denmark)

    Albers, Kristoffer Jon

    organization of the brain in continuously increasing resolution. From these images, networks of structural and functional connectivity can be constructed. Bayesian stochastic block modelling provides a prominent data-driven approach for uncovering the latent organization, by clustering the networks into groups...... of neurons. Relying on Markov Chain Monte Carlo (MCMC) simulations as the workhorse in Bayesian inference however poses significant computational challenges, especially when modelling networks at the scale and complexity supported by high-resolution whole-brain MRI. In this thesis, we present how to overcome...... these computational limitations and apply Bayesian stochastic block models for un-supervised data-driven clustering of whole-brain connectivity in full image resolution. We implement high-performance software that allows us to efficiently apply stochastic blockmodelling with MCMC sampling on large complex networks...

  2. Design and modelling of innovative machinery systems for large ships

    DEFF Research Database (Denmark)

    Larsen, Ulrik

    Eighty percent of the growing global merchandise trade is transported by sea. The shipping industry is required to reduce the pollution and increase the energy efficiency of ships in the near future. There is a relatively large potential for approaching these requirements by implementing waste heat...... consisting of a two-zone combustion and NOx emission model, a double Wiebe heat release model, the Redlich-Kwong equation of state and the Woschni heat loss correlation. A novel methodology is presented and used to determine the optimum organic Rankine cycle process layout, working fluid and process......, are evaluated with regards to the fuel consumption and NOx emissions trade-off. The results of the calibration and validation of the engine model suggest that the main performance parameters can be predicted with adequate accuracies for the overall purpose. The results of the ORC and the Kalina cycle...

  3. Hydrogen combustion modelling in large-scale geometries

    International Nuclear Information System (INIS)

    Studer, E.; Beccantini, A.; Kudriakov, S.; Velikorodny, A.

    2014-01-01

    Hydrogen risk mitigation based on catalytic recombiners cannot exclude the formation of flammable clouds during the course of a severe accident in a nuclear power plant. The consequences of combustion processes therefore have to be assessed based on existing knowledge and the state of the art in CFD combustion modelling. The Fukushima accidents have also revealed the need to take hydrogen explosion phenomena into account in risk management. Combustion modelling in large-scale geometries is thus one of the remaining severe accident safety issues. At present, no combustion model exists that can accurately describe a combustion process inside a geometrical configuration typical of the Nuclear Power Plant (NPP) environment. Therefore, the major attention in model development has to be paid to the adoption of existing approaches, or the creation of new ones, capable of reliably predicting the possibility of flame acceleration in geometries of that type. A set of experiments performed previously in the RUT facility and the Heiss Dampf Reactor (HDR) facility is used as a validation database for the development of a three-dimensional gas dynamic model for the simulation of hydrogen-air-steam combustion in large-scale geometries. The combustion regimes include slow deflagration, fast deflagration, and detonation. Modelling is based on the Reactive Discrete Equation Method (RDEM), where the flame is represented as an interface separating reactants and combustion products. The transport of the progress variable is governed by different flame surface wrinkling factors. The results of the numerical simulations are presented together with comparisons, critical discussion and conclusions. (authors)

  4. Effective models of new physics at the Large Hadron Collider

    International Nuclear Information System (INIS)

    Llodra-Perez, J.

    2011-07-01

    With the start of the Large Hadron Collider runs in 2010, particle physicists will soon be able to gain a better understanding of electroweak symmetry breaking. They may also answer many experimental and theoretical open questions raised by the Standard Model. Building on this favourable situation, we first present in this thesis a highly model-independent parametrization to characterize the effects of new physics on the production and decay mechanisms of the Higgs boson. This tool will be easily and directly usable in the data analyses of CMS and ATLAS, the two large general-purpose LHC experiments, and will help to significantly exclude or validate new theories beyond the Standard Model. In a second, model-building approach, we consider a new physics scenario in which the Standard Model fields can propagate in a flat six-dimensional space. The new spatial extra dimensions are compactified on a real projective plane. This orbifold is the unique six-dimensional geometry that possesses chiral fermions and a natural dark matter candidate. The scalar photon, which is the lightest particle of the first Kaluza-Klein tier, is stabilized by a symmetry relic of the six-dimensional Lorentz invariance. Using current constraints from cosmological observations and our first analytical calculations, we derive a characteristic mass range of a few hundred GeV for the Kaluza-Klein scalar photon. The new states of our Universal Extra-Dimension model are therefore light enough to be produced with clear signatures at the Large Hadron Collider. We then used a more sophisticated analysis of the particle mass spectrum and couplings, including one-loop radiative corrections, to establish our first predictions and constraints on the expected LHC phenomenology. (author)

  5. Introducing an Intervention Model for Fostering Affective Involvement with Persons Who Are Congenitally Deafblind

    Science.gov (United States)

    Martens, Marga A. W.; Janssen, Marleen J.; Ruijssenaars, Wied A. J. J. M.; Riksen-Walraven, J. Marianne

    2014-01-01

    The article presented here introduces the Intervention Model for Affective Involvement (IMAI), which was designed to train staff members (for example, teachers, caregivers, support workers) to foster affective involvement during interaction and communication with persons who have congenital deaf-blindness. The model is theoretically underpinned,…

  6. Improving CASINO performance for models with large number of electrons

    International Nuclear Information System (INIS)

    Anton, L.; Alfe, D.; Hood, R.Q.; Tanqueray, D.

    2009-01-01

    Quantum Monte Carlo calculations have at their core algorithms based on statistical ensembles of multidimensional random walkers, which are straightforward to use on parallel computers. Nevertheless, some computations have reached the limit of available memory for models with more than 1000 electrons because of the need to store a large amount of electronic-orbital data. Besides that, for systems with a large number of electrons it is interesting to study whether the evolution of one configuration of random walkers can be done faster in parallel. We present a comparative study of two ways to solve these problems: (1) distributed orbital data, implemented with MPI or Unix inter-process communication tools; (2) second-level parallelism for the configuration computation

  7. Functional and Aesthetic Outcome of Reconstruction of Large Oro-Facial Defects Involving the Lip after Tumor Resection

    International Nuclear Information System (INIS)

    Denewer, A.D.; Setie, A.E.; Hussein, O.A.; Aly, O.F.

    2006-01-01

    Background: Squamous cell carcinoma of the head and neck is a challenging disease for both surgeons and radiation oncologists because of the proximity of many important anatomical structures. Surgery can be curative, as these cancers usually metastasize very late via the blood stream. Aim of the Work: This work addresses the oncologic, functional and aesthetic factors affecting reconstruction of large orofacial defects involving the lip following tumor resection. Patients and Methods: The study reviews the surgical outcome of one hundred and twelve patients with invasive tumors at, or extending to, the lip(s), treated at the Mansoura University Surgical Oncology Department from January 2000 to January 2005. Tumor stages were T2 (43), T3 (56) and T4 (13). Nodal state was N0 in 80, N1 in 29 and N2 in three cases. AJCC stage grouping was II (T2N0) in 33 patients, stage III (T3N0 or T1-3N1) in 64 cases and stage IV (T4 due to bone erosion, or N2) in 15 cases. The techniques used for lip reconstruction were: unilateral or bilateral myocutaneous depressor anguli oris flap (MCDAOF) for isolated lip defects (n=63); bilateral MCDAOF plus a local cervical rotational flap for chin defects (n=3); pectoralis major myocutaneous pedicled flap for cheek defects involving the lip, together with a tongue flap for mucosal reconstruction (n=35); and sternocleidomastoid clavicular myo-osseous flap for concomitant mandibular defects (n=12). Results: Aesthetic and functional results were evaluated with regard to appearance, oral incompetence, disabling microstomia and eating difficulties. Depressor anguli oris reconstruction provided functioning static and dynamic oral competence in all cases, in contrast to the pectoralis major flap: within the second group there were 18 cases of oral incompetence (46.1%), nine cases of speech difficulty (23%) and five patients with poor cosmetic appearance. Total flap loss was not encountered; partial flap loss affected thirteen

  8. Large animal models for vaccine development and testing.

    Science.gov (United States)

    Gerdts, Volker; Wilson, Heather L; Meurens, Francois; van Drunen Littel-van den Hurk, Sylvia; Wilson, Don; Walker, Stewart; Wheler, Colette; Townsend, Hugh; Potter, Andrew A

    2015-01-01

    The development of human vaccines continues to rely on the use of animals for research. Regulatory authorities require novel vaccine candidates to undergo preclinical assessment in animal models before being permitted to enter the clinical phase in human subjects. Substantial progress has been made in recent years in reducing and replacing the number of animals used for preclinical vaccine research through the use of bioinformatics and computational biology to design new vaccine candidates. However, the ultimate goal of a new vaccine is to instruct the immune system to elicit an effective immune response against the pathogen of interest, and no alternatives to live animal use currently exist for evaluation of this response. Studies identifying the mechanisms of immune protection; determining the optimal route and formulation of vaccines; establishing the duration and onset of immunity, as well as the safety and efficacy of new vaccines, must be performed in a living system. Importantly, no single animal model provides all the information required for advancing a new vaccine through the preclinical stage, and research over the last two decades has highlighted that large animals more accurately predict vaccine outcome in humans than do other models. Here we review the advantages and disadvantages of large animal models for human vaccine development and demonstrate that much of the success in bringing a new vaccine to market depends on choosing the most appropriate animal model for preclinical testing. © The Author 2015. Published by Oxford University Press on behalf of the Institute for Laboratory Animal Research. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  9. Solving large mixed linear models using preconditioned conjugate gradient iteration.

    Science.gov (United States)

    Strandén, I; Lidauer, M

    1999-12-01

    Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in the multiplication of a vector by a matrix were reordered into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third of that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20 and 435% more time to solve the univariate and multivariate animal models, respectively. Computations with the second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than with the new program. The good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
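
    A minimal sketch of the preconditioned conjugate gradient iteration itself, here with a Jacobi (diagonal) preconditioner on a tiny dense system standing in for the mixed model equations (the paper's three-step iteration-on-data technique is not reproduced; names and values are illustrative):

```python
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=500):
    """Conjugate gradients with a Jacobi (diagonal) preconditioner,
    the kind of iteration used for large sparse mixed-model equations."""
    M_inv = 1.0 / np.diag(A)          # preconditioner: inverse of diag(A)
    x = np.zeros_like(b)
    r = b - A @ x                     # residual
    z = M_inv * r                     # preconditioned residual
    p = z.copy()                      # search direction
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        z_new = M_inv * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

# Small symmetric positive-definite system standing in for the MME.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = pcg(A, b)
print(np.allclose(A @ x, b))  # True
```

    In iteration-on-data variants, the product `A @ p` is never formed from an explicit matrix; it is accumulated record by record from the data file, which is what the paper's three-step reordering accelerates.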

  10. Monte Carlo technique for very large Ising models

    Science.gov (United States)

    Kalle, C.; Winkelmann, V.

    1982-08-01

    Rebbi's multispin coding technique is improved and applied to the kinetic Ising model with size 600*600*600. We give the central part of our computer program (for a CDC Cyber 76), which will also be helpful in simulations of smaller systems, and describe the other tricks necessary to go to large lattices. The magnetization M at T = 1.4 Tc is found to decay asymptotically as exp(-t/2.90) if t is measured in Monte Carlo steps per spin, and M(t=0) = 1 initially.
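
    A minimal sketch of the Metropolis single-spin-flip dynamics underlying such a simulation, here on a small 2-D lattice started fully magnetized above the critical temperature, as in the decay measurement described. The paper's actual contribution, multispin coding that packs many spins per machine word, is only noted in a comment; lattice size, sweep count and seed are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Metropolis single-spin-flip dynamics for a small 2-D Ising model.
# (Multispin coding would pack many such spins into one machine word and
# update them bitwise in parallel; this sketch shows only the update rule.)
L = 16
T = 1.4 * 2.269          # temperature relative to the 2-D Tc ~ 2.269
spins = np.ones((L, L), dtype=int)   # ordered start, so M(t=0) = 1

def sweep(s):
    """One Monte Carlo step per spin with periodic boundaries."""
    for i in range(L):
        for j in range(L):
            nb = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                  + s[i, (j + 1) % L] + s[i, (j - 1) % L])
            dE = 2 * s[i, j] * nb
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i, j] = -s[i, j]

m0 = spins.mean()
for _ in range(20):
    sweep(spins)
m = abs(spins.mean())   # magnetization decays toward 0 above Tc
print(m0, m)
```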

  11. Inviscid Wall-Modeled Large Eddy Simulations for Improved Efficiency

    Science.gov (United States)

    Aikens, Kurt; Craft, Kyle; Redman, Andrew

    2015-11-01

    The accuracy of an inviscid flow assumption for wall-modeled large eddy simulations (LES) is examined because of its ability to reduce simulation costs. This assumption is not generally applicable for wall-bounded flows due to the high velocity gradients found near walls. In wall-modeled LES, however, neither the viscous near-wall region nor the viscous length scales in the outer flow are resolved. Therefore, the viscous terms in the Navier-Stokes equations have little impact on the resolved flowfield. Zero-pressure-gradient flat plate boundary layer results are presented for both viscous and inviscid simulations using a wall model developed previously. The results are very similar and compare favorably to those from another wall model methodology and to experimental data. Furthermore, the inviscid assumption reduces simulation costs by about 25% and 39% for supersonic and subsonic flows, respectively. Future research directions are discussed, as are preliminary efforts to extend the wall model to include the effects of unresolved wall roughness. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.

  12. Resin infusion of large composite structures modeling and manufacturing process

    Energy Technology Data Exchange (ETDEWEB)

    Loos, A.C. [Michigan State Univ., Dept. of Mechanical Engineering, East Lansing, MI (United States)

    2006-07-01

    The resin infusion processes resin transfer molding (RTM), resin film infusion (RFI) and vacuum assisted resin transfer molding (VARTM) are cost-effective techniques for the fabrication of complex-shaped composite structures. The dry fibrous preform is placed in the mold, consolidated, resin impregnated and cured in a single-step process. The fibrous preforms are often constructed near net shape using highly automated textile processes such as knitting, weaving and braiding. In this paper, the infusion processes RTM, RFI and VARTM are discussed along with the advantages of each technique compared with traditional composite fabrication methods such as prepreg tape lay-up and autoclave cure. The large number of processing variables and the complex material behavior during infiltration and cure make experimental optimization of the infusion processes costly and inefficient. Numerical models have been developed which can be used to simulate the resin infusion processes. The model formulation and solution procedures for the VARTM process are presented. A VARTM process simulation of a carbon fiber preform is presented to demonstrate the type of information that can be generated by the model and to compare the model predictions with experimental measurements. Overall, the predicted flow front positions, resin pressures and preform thicknesses agree well with the measured values. The results of the simulation show the potential cost and performance benefits that can be realized by using a simulation model as part of the development process. (au)

  13. Large-Signal DG-MOSFET Modelling for RFID Rectification

    Directory of Open Access Journals (Sweden)

    R. Rodríguez

    2016-01-01

    This paper analyses the capability of undoped DG-MOSFETs to operate as rectifiers for RFIDs and Wireless Power Transmission (WPT) at microwave frequencies. For this purpose, a large-signal compact model has been developed and implemented in Verilog-A. The model has been numerically validated against a device simulator (Sentaurus). It is found that the number of stages needed to achieve optimal rectifier performance is lower than that required with conventional MOSFETs. In addition, the DC output voltage could be increased by using appropriate mid-gap metals, such as TiN, for the gate. The minor impact of short-channel effects (SCEs) on rectification is also pointed out.

  14. Photorealistic large-scale urban city model reconstruction.

    Science.gov (United States)

    Poullis, Charalambos; You, Suya

    2009-01-01

    The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games, and films. A realistic representation of the large-scale environments is therefore imperative for the success of such applications, since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, the task of creating such large-scale virtual environments remains time-consuming, largely manual work. In this work, we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel, extendible, parameterized geometric primitive is presented for the automatic identification and reconstruction of building structures. In addition, buildings with complex roofs containing complex linear and nonlinear surfaces are reconstructed interactively using a linear polygonal and a nonlinear primitive, respectively. Second, we present a rendering pipeline for the composition of photorealistic textures which, unlike existing techniques, can recover missing or occluded texture information by integrating information captured from multiple optical sensors (ground, aerial, and satellite).

  15. Numerically modelling the large scale coronal magnetic field

    Science.gov (United States)

    Panja, Mayukh; Nandi, Dibyendu

    2016-07-01

    The solar corona spews out vast amounts of magnetized plasma into the heliosphere, which has a direct impact on the Earth's magnetosphere. Thus it is important that we develop an understanding of the dynamics of the solar corona. With our present technology it has not been possible to generate 3D magnetic maps of the solar corona; this warrants the use of numerical simulations to study the coronal magnetic field. A very popular method of doing this is to extrapolate the photospheric magnetic field using nonlinear force-free field (NLFF) or potential field source surface (PFSS) codes. However, the extrapolations at different time intervals are completely independent of each other and do not capture the temporal evolution of magnetic fields. On the other hand, full MHD simulations of the global coronal field, apart from being computationally very expensive, would be physically less transparent, owing to the large number of free parameters that are typically used in such codes. This brings us to the magnetofrictional model, which is relatively simpler and computationally more economical. We have developed a magnetofrictional model in 3D spherical polar coordinates to study the large-scale global coronal field. Here we present studies of changing connectivities between active regions in response to photospheric motions.

  16. Modeling Resource Utilization of a Large Data Acquisition System

    CERN Document Server

    AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger

    2017-01-01

    The ATLAS 'Phase-II' upgrade, scheduled to start in 2024, will significantly change the requirements under which the data-acquisition system operates. The input data rate, currently fixed around 150 GB/s, is anticipated to reach 5 TB/s. In order to deal with the challenging conditions, and exploit the capabilities of newer technologies, a number of architectural changes are under consideration. Of particular interest is a new component, known as the Storage Handler, which will provide a large buffer area decoupling real-time data taking from event filtering. Dynamic operational models of the upgraded system can be used to identify the required resources and to select optimal techniques. In order to achieve a robust and dependable model, the current data-acquisition architecture has been used as a test case. This makes it possible to verify and calibrate the model against real operation data. Such a model can then be evolved toward the future ATLAS Phase-II architecture. In this paper we introduce the current ...

  17. Modelling Resource Utilization of a Large Data Acquisition System

    CERN Document Server

    Santos, Alejandro; The ATLAS collaboration

    2017-01-01

    The ATLAS 'Phase-II' upgrade, scheduled to start in 2024, will significantly change the requirements under which the data-acquisition system operates. The input data rate, currently fixed around 150 GB/s, is anticipated to reach 5 TB/s. In order to deal with the challenging conditions, and exploit the capabilities of newer technologies, a number of architectural changes are under consideration. Of particular interest is a new component, known as the Storage Handler, which will provide a large buffer area decoupling real-time data taking from event filtering. Dynamic operational models of the upgraded system can be used to identify the required resources and to select optimal techniques. In order to achieve a robust and dependable model, the current data-acquisition architecture has been used as a test case. This makes it possible to verify and calibrate the model against real operation data. Such a model can then be evolved toward the future ATLAS Phase-II architecture. In this paper we introduce the current ...

  18. Modelling large scale human activity in San Francisco

    Science.gov (United States)

    Gonzalez, Marta

    2010-03-01

    A diverse group of people with a wide variety of schedules, activities and travel needs composes our cities nowadays. This represents a big challenge for modeling travel behavior in urban environments; such models are of crucial interest for a wide variety of applications such as traffic forecasting, the spreading of viruses, or measuring human exposure to air pollutants. The traditional means of obtaining knowledge about travel behavior is limited to surveys on travel journeys. The information obtained is based on questionnaires that are usually costly to implement, have intrinsic limitations in covering large numbers of individuals, and suffer from reliability problems. Using mobile phone data, we explore the basic characteristics of a model of human travel: the distribution of agents is proportional to the population density of a given region, and each agent has a characteristic trajectory size that contains information on the frequency of visits to different locations. Additionally, we use a complementary data set from smart subway fare cards, which gives us the exact time of each passenger entering or exiting a subway station, together with its coordinates. This allows us to uncover the temporal aspects of mobility. Since we have the actual time and place of each individual's origin and destination, we can understand the temporal patterns of each visited location in further detail. Integrating the two data sets, we provide a dynamical model of human travel that incorporates the different aspects observed empirically.

  19. Modelling of decay heat removal using large water pools

    International Nuclear Information System (INIS)

    Munther, R.; Raussi, P.; Kalli, H.

    1992-01-01

    The main task in investigating passive safety systems typical of ALWRs (Advanced Light Water Reactors) has been reviewing decay heat removal systems. The reference system for the calculations is Hitachi's SBWR concept. The calculations of energy transfer to the suppression pool were made using two different fluid mechanics codes, FIDAP and PHOENICS. FIDAP is based on finite element methodology and PHOENICS uses finite differences; these codes were chosen to compare their modelling and calculating abilities. The thermal stratification behaviour and the natural circulation were modelled with several turbulent flow models. Energy transport to the suppression pool was also calculated for laminar flow conditions. These calculations required a large amount of computer resources, so the CRAY supercomputer of the state computing centre was used. The results of the calculations indicated that the capabilities of these codes for modelling the turbulent flow regime are limited. Output from these codes should be considered carefully and, whenever possible, experimentally determined parameters should be used as input to enhance code reliability. (orig.). (31 refs., 21 figs., 3 tabs.)

  20. The Oskarshamn model for public involvement in the siting of nuclear facilities

    Energy Technology Data Exchange (ETDEWEB)

    Aahagen, H. [Ahagen and Co (Sweden); Carlsson, Torsten [Mayor, Oskarshamn (Sweden); Hallberg, K. [Local Competence Building, Oskarshamn (Sweden); Andersson, Kjell [Karinta-Konsult, Taeby (Sweden)

    1999-12-01

    The Oskarshamn model has so far worked extremely well as a tool to achieve openness and public participation. The municipality involvement has been successful in several aspects, e.g.: It has been possible to influence the program, to a large extent, to meet certain municipality conditions and to ensure the local perspective. The local competence has increased to a considerable degree. The activities generated by the six working groups with a total of 40 members have generated a large number of contacts with various organisations, schools, mass media, individuals in the general public and interest groups. For the future, clarification of the disposal method and site selection criteria as well as the site selection process as such is crucial. The municipality has also emphasised the importance of SKB having shown the integration between site selection criteria, the feasibility study and the safety assessment. Furthermore, the programs for the encapsulation facility and the repository must be co-ordinated. For Oskarshamn it will be of utmost importance that the repository is well under way to be realised before the encapsulation facility can be built.

  1. The Oskarshamn model for public involvement in the siting of nuclear facilities

    International Nuclear Information System (INIS)

    Aahagen, H.; Carlsson, Torsten; Hallberg, K.; Andersson, Kjell

    1999-01-01

    The Oskarshamn model has so far worked extremely well as a tool to achieve openness and public participation. The municipality involvement has been successful in several aspects, e.g.: It has been possible to influence the program, to a large extent, to meet certain municipality conditions and to ensure the local perspective. The local competence has increased to a considerable degree. The activities generated by the six working groups with a total of 40 members have generated a large number of contacts with various organisations, schools, mass media, individuals in the general public and interest groups. For the future, clarification of the disposal method and site selection criteria as well as the site selection process as such is crucial. The municipality has also emphasised the importance of SKB having shown the integration between site selection criteria, the feasibility study and the safety assessment. Furthermore, the programs for the encapsulation facility and the repository must be co-ordinated. For Oskarshamn it will be of utmost importance that the repository is well under way to be realised before the encapsulation facility can be built

  2. Comparison of void strengthening in fcc and bcc metals: Large-scale atomic-level modelling

    International Nuclear Information System (INIS)

    Osetsky, Yu.N.; Bacon, D.J.

    2005-01-01

    Strengthening due to voids can be a significant radiation effect in metals. Treatment of this by the elasticity theory of dislocations is difficult when the atomic structure of the obstacle and the dislocation is influential. In this paper, we report results of large-scale atomic-level modelling of edge dislocation-void interaction in fcc (copper) and bcc (iron) metals. Voids of up to 5 nm diameter were studied over the temperature range from 0 to 600 K. We demonstrate that atomistic modelling is able to reveal important effects which are beyond the continuum approach. Some arise from features of the dislocation core and crystal structure; others involve dislocation climb and temperature effects

  3. Modeling containment of large wildfires using generalized linear mixed-model analysis

    Science.gov (United States)

    Mark Finney; Isaac C. Grenfell; Charles W. McHugh

    2009-01-01

    Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...
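    The generalized linear mixed model itself cannot be reproduced from the abstract, but its fixed-effects core, a logistic regression for containment probability, can be sketched on synthetic data. The variable names and coefficients below are invented for illustration; the actual study additionally models repeated measures per fire as random effects.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic daily fire records: containment ~ logistic(weather, effort).
    n = 2000
    weather = rng.normal(size=n)       # e.g. standardized fire-danger index
    effort = rng.normal(size=n)        # e.g. standardized suppression effort
    true_beta = np.array([-1.0, -1.5, 2.0])        # intercept, weather, effort
    X = np.column_stack([np.ones(n), weather, effort])
    p = 1.0 / (1.0 + np.exp(-X @ true_beta))
    contained = rng.random(n) < p

    # Fit by Newton-Raphson (iteratively reweighted least squares), the
    # standard fitting algorithm for generalized linear models.
    beta = np.zeros(3)
    for _ in range(25):
        mu = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (contained - mu)
        hessian = (X * (mu * (1.0 - mu))[:, None]).T @ X
        beta += np.linalg.solve(hessian, grad)
    ```

    The fitted coefficients recover the data-generating values; adding a per-fire random intercept to the linear predictor turns this into the mixed-model form used in the paper.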

  4. Improving large-scale groundwater models by considering fossil gradients

    Science.gov (United States)

    Schulz, Stephan; Walther, Marc; Michelsen, Nils; Rausch, Randolf; Dirks, Heiko; Al-Saud, Mohammed; Merz, Ralf; Kolditz, Olaf; Schüth, Christoph

    2017-05-01

    Due to limited availability of surface water, many arid to semi-arid countries rely on their groundwater resources. Despite the quasi-absence of present-day replenishment, some of these groundwater bodies contain large amounts of water, which was recharged during pluvial periods of the Late Pleistocene to Early Holocene. These mostly fossil, non-renewable resources require different management schemes from those usually applied in renewable systems. Fossil groundwater is a finite resource, and its withdrawal implies mining of aquifer storage reserves. Although they receive almost no recharge, some of these systems show notable hydraulic gradients and flow towards their discharge areas, even without pumping. As a result, these systems have more discharge than recharge and hence are not in steady state, which makes their modelling, in particular the calibration, very challenging. In this study, we introduce a new calibration approach composed of four steps: (i) estimating the fossil discharge component, (ii) determining the origin of fossil discharge, (iii) fitting the hydraulic conductivity with a pseudo steady-state model, and (iv) fitting the storage capacity with a transient model by reconstructing head drawdown induced by pumping activities. Finally, we test the relevance of our approach and evaluate the effect of considering or ignoring fossil gradients on aquifer parameterization for the Upper Mega Aquifer (UMA) on the Arabian Peninsula.
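    Steps (iii) and (iv) of such a calibration can be illustrated with a deliberately lumped toy model. Every number below is invented, and the real study calibrates a distributed 3D groundwater model rather than these closed-form balances.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Step (iii): pseudo steady-state. With negligible recharge, Darcy's law
    # Q = K * A * (dh/dx) links the estimated fossil discharge and the
    # observed head gradient to the hydraulic conductivity K.
    Q_fossil = 2.0e4      # m^3/day, estimated fossil discharge (assumed)
    A_cross = 1.0e7       # m^2, aquifer cross-sectional area (assumed)
    grad_h = 2.0e-4       # observed hydraulic gradient, dimensionless (assumed)
    K = Q_fossil / (A_cross * grad_h)     # hydraulic conductivity, m/day

    # Step (iv): transient. Fit the storage coefficient S to pumping-induced
    # drawdown via a lumped water balance: d(drawdown)/dt = Q_pump/(S * A_plan).
    S_true = 0.1          # storage coefficient used to synthesize "observations"
    A_plan = 1.0e10       # m^2, plan-view aquifer area (assumed)
    Q_pump = 5.0e6        # m^3/day, total abstraction (assumed)
    t = np.arange(0.0, 365.0 * 20, 30.0)             # 20 years, roughly monthly
    drawdown = Q_pump * t / (S_true * A_plan) + rng.normal(0.0, 0.2, t.size)

    slope = (t @ drawdown) / (t @ t)     # least-squares slope through the origin
    S_fit = Q_pump / (slope * A_plan)
    ```

    The two parameters are identified from different data: K from the quasi-stationary fossil gradient, S from the reconstructed drawdown history, mirroring the decoupling the authors exploit.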

  5. Monte Carlo modelling of large scale NORM sources using MCNP.

    Science.gov (United States)

    Wallace, J D

    2013-12-01

    The representative Monte Carlo modelling of large-scale planar sources (for comparison to external environmental radiation fields) is undertaken using substantial-diameter, thin-profile planar cylindrical sources. The relative impact of source extent, soil thickness and sky-shine is investigated to guide decisions relating to representative geometries. In addition, the impact of source-to-detector distance on the nature of the detector response, for a range of source sizes, has been investigated. These investigations, using an MCNP-based model, indicate that a soil cylinder of greater than 20 m diameter and no less than 50 cm depth/height, combined with a 20 m deep sky section above the soil cylinder, is needed to representatively model a semi-infinite plane of uniformly distributed NORM sources. Initial investigation of the effect of detector placement indicates that smaller source sizes may be used to achieve a representative response at shorter source-to-detector distances. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  6. Effects of deceptive packaging and product involvement on purchase intention: an elaboration likelihood model perspective.

    Science.gov (United States)

    Lammers, H B

    2000-04-01

    From an Elaboration Likelihood Model perspective, it was hypothesized that postexposure awareness of deceptive packaging claims would have a greater negative effect on purchase-intention scores for consumers with low rather than high product involvement (n = 40). Undergraduates who were classified as either highly or lowly involved with M&Ms (ns = 20 and 20) examined either a deceptive or non-deceptive package design for M&Ms candy and were subsequently informed of the deception employed in the packaging before finally rating their intention to purchase. As anticipated, deceived subjects who were low in involvement rated intention to purchase lower than their highly involved peers. Overall, the results attest to the robustness of the model and suggest that it has implications beyond advertising effects and into packaging effects.

  7. A turbulence model for large interfaces in high Reynolds two-phase CFD

    International Nuclear Information System (INIS)

    Coste, P.; Laviéville, J.

    2015-01-01

    Highlights: • Two-phase CFD commonly involves interfaces much larger than the computational cells. • A two-phase turbulence model is developed to better take them into account. • It solves k–epsilon transport equations in each phase. • The special treatments and transfer terms at large interfaces are described. • Validation cases are presented. - Abstract: A model for two-phase (six-equation) CFD modelling of turbulence is presented, for the regions of the flow where the liquid–gas interface takes place on length scales which are much larger than the typical computational cell size. In the other regions of the flow, the liquid or gas volume fractions range from 0 to 1. Heat and mass transfer, compressibility of the fluids, are included in the system, which is used at high Reynolds numbers in large scale industrial calculations. In this context, a model based on k and ε transport equations in each phase was chosen. The paper describes the model, with a focus on the large interfaces, which require special treatments and transfer terms between the phases, including some approaches inspired from wall functions. The validation of the model is based on high Reynolds number experiments with turbulent quantities measurements of a liquid jet impinging a free surface and an air water stratified flow. A steam–water stratified condensing flow experiment is also used for an indirect validation in the case of heat and mass transfer

  8. Modeling the behaviour of shape memory materials under large deformations

    Science.gov (United States)

    Rogovoy, A. A.; Stolbova, O. S.

    2017-06-01

    In this study, the models describing the behavior of shape memory alloys, ferromagnetic materials and polymers have been constructed, using a formalized approach to develop the constitutive equations for complex media under large deformations. The kinematic and constitutive equations, satisfying the principles of thermodynamics and objectivity, have been derived. The application of the Galerkin procedure to the systems of equations of solid mechanics allowed us to obtain the Lagrange variational equation and variational formulation of the magnetostatics problems. These relations have been tested in the context of the problems of finite deformation in shape memory alloys and ferromagnetic materials during forward and reverse martensitic transformations and in shape memory polymers during forward and reverse relaxation transitions from a highly elastic to a glassy state.

  9. A large animal model for boron neutron capture therapy

    International Nuclear Information System (INIS)

    Gavin, P.R.; Kraft, S.L.; DeHaan, C.E.; Moore, M.P.; Griebenow, M.L.

    1992-01-01

    An epithermal neutron beam is needed to treat relatively deep-seated tumors. The scattering characteristics of neutrons in this energy range dictate that in vivo experiments be conducted in a large animal to prevent unacceptable total body irradiation. The canine species has proven an excellent model for evaluating the various problems of boron neutron capture utilizing an epithermal neutron beam. This paper discusses three major components of the authors' study: (1) the pharmacokinetics of borocaptate sodium (Na2B12H11SH, or BSH) in dogs with spontaneously occurring brain tumors, (2) the radiation tolerance of normal tissues in the dog using an epithermal beam alone and in combination with borocaptate sodium, and (3) initial treatment of dogs with spontaneously occurring brain tumors utilizing borocaptate sodium and an epithermal neutron beam

  10. Modeling of large-scale oxy-fuel combustion processes

    DEFF Research Database (Denmark)

    Yin, Chungen

    2012-01-01

    Numerous studies have been conducted on implementing oxy-fuel combustion with flue gas recycle in conventional utility boilers as part of carbon capture and storage efforts. However, combustion under oxy-fuel conditions is significantly different from conventional air-fuel firing; radiative heat transfer under oxy-fuel conditions is one of the fundamental issues. This paper demonstrates the nongray-gas effects in modeling of large-scale oxy-fuel combustion processes. Oxy-fuel combustion of natural gas in a 609 MW utility boiler is numerically studied, in which the gray calculation of the oxy-fuel WSGGM remarkably over-predicts the radiative heat transfer to the furnace walls and under-predicts the gas temperature at the furnace exit plane, which also results in higher incomplete combustion in the gray calculation. Moreover, the gray and non-gray calculations of the same ...
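    The gray versus non-gray distinction can be illustrated with the weighted-sum-of-gray-gases (WSGG) emissivity formula. The coefficients below are invented for illustration and are not a published oxy-fuel WSGGM parameter set.

    ```python
    import numpy as np

    # Weighted-sum-of-gray-gases total emissivity:
    #   eps(pL) = sum_i a_i * (1 - exp(-k_i * p * L))
    k = np.array([0.3, 5.0, 120.0])   # gray-gas absorption coefficients, 1/(atm m), assumed
    a = np.array([0.4, 0.3, 0.1])     # weights; temperature dependence ignored here

    def emissivity_wsgg(pL):
        return float(np.sum(a * (1.0 - np.exp(-k * pL))))

    # A "gray" calculation collapses the mixture to a single absorption
    # coefficient, calibrated at one reference path length...
    pL_ref = 1.0
    eps_ref = emissivity_wsgg(pL_ref)
    k_gray = -np.log(1.0 - eps_ref) / pL_ref

    # ...and then mispredicts at other path lengths: at a long, furnace-scale
    # path the gray model over-predicts emissivity (hence radiative transfer).
    pL = 10.0
    eps_nongray = emissivity_wsgg(pL)
    eps_gray = 1.0 - np.exp(-k_gray * pL)
    ```

    With these illustrative coefficients the gray emissivity at the long path is close to 1 while the non-gray value saturates near the sum of the weights, consistent with the over-prediction described in the abstract.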

  11. Modeling the Relations among Parental Involvement, School Engagement and Academic Performance of High School Students

    Science.gov (United States)

    Al-Alwan, Ahmed F.

    2014-01-01

    The author proposed a model to explain how parental involvement and school engagement relate to academic performance. Participants were 671 ninth- and tenth-grade students who completed two scales, "parental involvement" and "school engagement," in their regular classrooms. Results of the path analysis suggested that the…

  12. Large scale solar district heating. Evaluation, modelling and designing

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build a simulation tool and to demonstrate the application of the tool in design studies and on a local energy planning case. The evaluation of the central solar heating technology is based on measurements from the case plant in Marstal, Denmark, and on published and unpublished data for other, mainly Danish, CSDHP plants. Evaluations of the thermal, economic and environmental performance are reported, based on the experiences of the last decade. The measurements from the Marstal case are analysed, experiences extracted and minor improvements to the plant design proposed. For the detailed design and energy planning of CSDHPs, a computer simulation model is developed and validated against the measurements from the Marstal case. The final model is then generalised to a 'generic' model for CSDHPs in general. The meteorological reference data set, the Danish Reference Year, is applied to find the mean performance for the plant designs. To find the expected variation in the thermal performance of such plants, a method is proposed in which data from a year with poor solar irradiation and a year with strong solar irradiation are applied. Equipped with the simulation tool, design studies are carried out, ranging from parameter analysis, through energy planning for a new settlement, to a proposal for combining plane solar collectors with high-performance solar collectors, exemplified by a trough solar collector. The methodology of utilising computer simulation proved to be a cheap and relevant tool in the design of future solar heating plants. The thesis also exposed the need to develop computer models for the more advanced solar collector designs and especially for the control operation of CSDHPs. In the final chapter the CSDHP technology is put into perspective with respect to other possible technologies to assess the relevance of the application

  13. Simulating large-scale pedestrian movement using CA and event driven model: Methodology and case study

    Science.gov (United States)

    Li, Jun; Fu, Siyao; He, Haibo; Jia, Hongfei; Li, Yanzhong; Guo, Yi

    2015-11-01

    Large-scale regional evacuation is an important part of national security emergency response planning. The emergency evacuation of large commercial shopping areas, as typical service systems, is one of the hot research topics. A systematic methodology based on Cellular Automata with a Dynamic Floor Field and an event-driven model has been proposed, and the methodology has been examined within the context of a case study involving evacuation from a commercial shopping mall. Pedestrian movement is based on Cellular Automata and the event-driven model. In this paper, the event-driven model is adopted to simulate pedestrian movement patterns; the simulation process is divided into normal situations and emergency evacuation. The model is composed of four layers: an environment layer, a customer layer, a clerk layer and a trajectory layer. For simulating the movement routes of pedestrians, the model takes into account the purchase intentions of customers and the density of pedestrians. Based on the evacuation model combining Cellular Automata with a Dynamic Floor Field and the event-driven model, we can reflect the behavior characteristics of customers and clerks both in normal situations and during emergency evacuation. The distribution of individual evacuation times as a function of initial position and the dynamics of the evacuation process are studied. Our results indicate that an evacuation model combining Cellular Automata with a Dynamic Floor Field and event-driven scheduling can be used to simulate the evacuation of pedestrian flows in indoor areas with complicated surroundings and to investigate the layout of shopping malls.
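    The static-floor-field component of such CA models can be sketched in a few lines: compute a distance-to-exit field by breadth-first search, then let agents greedily descend it under an exclusion rule. This toy omits the dynamic floor field, the event-driven scheduling and the customer/clerk layers described above, and its geometry and parameters are invented.

    ```python
    import numpy as np
    from collections import deque

    # Static floor field: BFS distance-to-exit over a small grid with an obstacle.
    H, W = 10, 12
    walls = np.zeros((H, W), dtype=bool)
    walls[4, 2:9] = True                    # internal obstacle, e.g. a shelf row
    exit_cell = (9, 6)

    field = np.full((H, W), np.inf)
    field[exit_cell] = 0.0
    queue = deque([exit_cell])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < H and 0 <= nc < W and not walls[nr, nc] and np.isinf(field[nr, nc]):
                field[nr, nc] = field[r, c] + 1
                queue.append((nr, nc))

    # Agents start along the top row and greedily step down the field,
    # with at most one agent per cell (hard-core exclusion).
    agents = {(0, j) for j in range(W)}
    t, evac_times = 0, []
    while agents:
        t += 1
        for a in sorted(agents):
            r, c = a
            moves = [(field[r + dr, c + dc], (r + dr, c + dc))
                     for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= r + dr < H and 0 <= c + dc < W
                     and (r + dr, c + dc) not in agents]
            if moves:
                best_val, best_cell = min(moves)
                if best_val < field[r, c]:
                    agents.remove(a)
                    if best_cell == exit_cell:
                        evac_times.append(t)      # agent leaves through the exit
                    else:
                        agents.add(best_cell)
    ```

    The histogram of `evac_times` against initial position is exactly the kind of distribution the study analyses; a dynamic floor field would additionally bias moves toward cells recently vacated by other pedestrians.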

  14. Applying the Plan-Do-Study-Act (PDSA) approach to a large pragmatic study involving safety net clinics.

    Science.gov (United States)

    Coury, Jennifer; Schneider, Jennifer L; Rivelli, Jennifer S; Petrik, Amanda F; Seibel, Evelyn; D'Agostini, Brieshon; Taplin, Stephen H; Green, Beverly B; Coronado, Gloria D

    2017-06-19

    The Plan-Do-Study-Act (PDSA) cycle is a commonly used improvement process in health care settings, although its documented use in pragmatic clinical research is rare. A recent pragmatic clinical research study, called the Strategies and Opportunities to STOP Colon Cancer in Priority Populations (STOP CRC), used this process to optimize the research implementation of an automated colon cancer screening outreach program in intervention clinics. We describe the process of using this PDSA approach, the selection of PDSA topics by clinic leaders, and project leaders' reactions to using PDSA in pragmatic research. STOP CRC is a cluster-randomized pragmatic study that aims to test the effectiveness of a direct-mail fecal immunochemical testing (FIT) program involving eight Federally Qualified Health Centers in Oregon and California. We and a practice improvement specialist trained in the PDSA process delivered structured presentations to leaders of these centers; the presentations addressed how to apply the PDSA process to improve implementation of a mailed outreach program offering colorectal cancer screening through FIT tests. Center leaders submitted PDSA plans and delivered reports via webinar at quarterly meetings of the project's advisory board. Project staff conducted one-on-one, 45-min interviews with project leads from each health center to assess the reaction to and value of the PDSA process in supporting the implementation of STOP CRC. Clinic-selected PDSA activities included refining the intervention staffing model, improving outreach materials, and changing workflow steps. Common benefits of using PDSA cycles in pragmatic research were that it provided a structure for staff to focus on improving the program and it allowed staff to test the change they wanted to see. A commonly reported challenge was measuring the success of the PDSA process with the available electronic medical record tools. Understanding how the PDSA process can be applied to pragmatic

  15. An Overview of Westinghouse Realistic Large Break LOCA Evaluation Model

    Directory of Open Access Journals (Sweden)

    Cesare Frepoli

    2008-01-01

    Full Text Available Since the 1988 amendment of the 10 CFR 50.46 rule, Westinghouse has been developing and applying realistic or best-estimate methods to perform LOCA safety analyses. A realistic analysis requires the execution of various realistic LOCA transient simulations where the effects of both model and input uncertainties are ranged and propagated throughout the transients. The outcome is typically a range of results with associated probabilities. The thermal-hydraulic code is the engine of the methodology, but a procedure is developed to assess the code and determine its biases and uncertainties. In addition, inputs to the simulation are also affected by uncertainty, and these uncertainties are incorporated into the process. Several approaches have been proposed and applied in the industry in the framework of best-estimate methods. Most of the implementations, including Westinghouse's, follow the Code Scaling, Applicability and Uncertainty (CSAU) methodology. The Westinghouse methodology is based on the use of the WCOBRA/TRAC thermal-hydraulic code. The paper starts with an overview of the regulations and their interpretation in the context of realistic analysis. The CSAU roadmap is reviewed in the context of its implementation in the Westinghouse evaluation model. An overview of the code (WCOBRA/TRAC) and methodology is provided. Finally, the recent evolution to nonparametric statistics in the current edition of the Westinghouse methodology is discussed. Sample results of a typical large break LOCA analysis for a PWR are provided.
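
The nonparametric statistics mentioned in this abstract typically rest on Wilks' order-statistic formula, which fixes the number of code runs needed so that the largest sampled result bounds a given quantile with a given confidence (the classic 95/95 case). A minimal sketch follows; the sampled peak cladding temperatures and their distribution are invented for illustration and are not Westinghouse data:

```python
import random

def wilks_sample_size(coverage=0.95, confidence=0.95):
    # First-order Wilks formula: the probability that the maximum of n runs
    # bounds the `coverage` quantile is 1 - coverage**n; find the smallest n
    # for which this reaches the required confidence.
    n = 1
    while 1.0 - coverage ** n < confidence:
        n += 1
    return n

n_runs = wilks_sample_size()
print(n_runs)  # 59, the classic 95/95 result

# Toy propagation: sample hypothetical peak cladding temperatures (PCT, K)
# from an assumed distribution and take the maximum as the 95/95 bound.
random.seed(0)
pct_samples = [1200.0 + random.gauss(0.0, 40.0) for _ in range(n_runs)]
pct_9595 = max(pct_samples)
```

With 59 runs, no assumption about the shape of the output distribution is needed, which is why this nonparametric route replaced earlier response-surface approaches in several best-estimate methodologies.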

  16. Diagnosis of abdominal abscess: A large animal model

    International Nuclear Information System (INIS)

    Harper, R.A.; Meek, A.C.; Chidlow, A.D.; Galvin, D.A.J.; McCollum, C.N.

    1988-01-01

    In order to evaluate potential isotopic techniques for the diagnosis of occult sepsis, an experimental model in large animals is required. Sponges placed in the abdomen of pigs were injected with mixed colonic bacteria. In 4 animals Kefzol (500 mg IV) and Metronidazole (1 g PR) were administered before the sponges were inserted and compared to 4 given no antibiotics. Finally, in 12 pigs, 20 ml of autologous blood was injected into the sponge before antibiotic prophylaxis and bacterial inoculation. ¹¹¹In-leucocyte scans and post mortem were then performed 2 weeks later. Without antibiotic cover, purulent peritonitis developed in all 4 pigs. Prophylactic antibiotics prevented overwhelming sepsis, but at 2 weeks there was only brown fluid surrounding the sponge. Blood added to the sponge produced abscesses in every animal, confirmed by a leucocytosis of 25.35 × 10⁹ cells/L, ¹¹¹In-leucocyte scanning and post mortem. Culturing the thick yellow pus showed a mixed colony of aerobes and anaerobes, similar to those cultured in clinical practice. An intra-abdominal sponge containing blood and faecal organisms in a pig on prophylactic antibiotics reliably produced a chronic abscess. This model is ideal for studies on alternative methods of abscess diagnosis and radiation dosimetry. (orig.)

  17. EXO-ZODI MODELING FOR THE LARGE BINOCULAR TELESCOPE INTERFEROMETER

    Energy Technology Data Exchange (ETDEWEB)

    Kennedy, Grant M.; Wyatt, Mark C.; Panić, Olja; Shannon, Andrew [Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom); Bailey, Vanessa; Defrère, Denis; Hinz, Philip M.; Rieke, George H.; Skemer, Andrew J.; Su, Katherine Y. L. [Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721 (United States); Bryden, Geoffrey; Mennesson, Bertrand; Morales, Farisa; Serabyn, Eugene [Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109 (United States); Danchi, William C.; Roberge, Aki; Stapelfeldt, Karl R. [NASA Goddard Space Flight Center, Exoplanets and Stellar Astrophysics, Code 667, Greenbelt, MD 20771 (United States); Haniff, Chris [Cavendish Laboratory, University of Cambridge, JJ Thomson Avenue, Cambridge CB3 0HE (United Kingdom); Lebreton, Jérémy [Infrared Processing and Analysis Center, MS 100-22, California Institute of Technology, 770 South Wilson Avenue, Pasadena, CA 91125 (United States); Millan-Gabet, Rafael [NASA Exoplanet Science Institute, California Institute of Technology, 770 South Wilson Avenue, Pasadena, CA 91125 (United States); and others

    2015-02-01

    Habitable zone dust levels are a key unknown that must be understood to ensure the success of future space missions to image Earth analogs around nearby stars. Current detection limits are several orders of magnitude above the level of the solar system's zodiacal cloud, so characterization of the brightness distribution of exo-zodi down to much fainter levels is needed. To this end, the Large Binocular Telescope Interferometer (LBTI) will detect thermal emission from habitable zone exo-zodi a few times brighter than solar system levels. Here we present a modeling framework for interpreting LBTI observations, which yields dust levels from detections and upper limits that are then converted into predictions and upper limits for the scattered light surface brightness. We apply this model to the HOSTS survey sample of nearby stars; assuming a null depth uncertainty of 10⁻⁴ the LBTI will be sensitive to dust a few times above the solar system level around Sun-like stars, and to even lower dust levels for more massive stars.

  18. ADAPTIVE TEXTURE SYNTHESIS FOR LARGE SCALE CITY MODELING

    Directory of Open Access Journals (Sweden)

    G. Despine

    2015-02-01

    Full Text Available Large scale city models textured with aerial images are well suited for bird's-eye navigation, but generally the image resolution does not allow pedestrian navigation. One solution to this problem is to use high resolution terrestrial photos, but this requires a huge amount of manual work to remove occlusions. Another solution is to synthesize generic textures with a set of procedural rules and elementary patterns like bricks, roof tiles, doors and windows. This solution may give realistic textures but with no correlation to the ground truth. Instead of using pure procedural modelling, we present a method to extract information from aerial images and adapt the texture synthesis to each building. We describe a workflow allowing the user to drive the information extraction and to select the appropriate texture patterns. We also emphasize the importance of organizing the knowledge about elementary patterns in a texture catalogue that allows attaching physical information and semantic attributes and executing selection requests. Roofs are processed according to the detected building material. Façades are first described in terms of principal colours, then opening positions are detected and some window features are computed. These features allow selecting the most appropriate patterns from the texture catalogue. We tested this workflow on two samples with 20 cm and 5 cm resolution images. The roof texture synthesis and opening detection were successfully conducted on hundreds of buildings. The window characterization is still sensitive to the distortions inherent to the projection of aerial images onto the facades.
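
A texture catalogue that answers selection requests of the kind described can be sketched as below. The pattern names, attributes and principal colours are hypothetical stand-ins, not the authors' actual catalogue; the selection rule shown is a simple nearest-colour match among patterns of the right semantic kind:

```python
# Hypothetical texture catalogue: each elementary pattern carries semantic
# attributes (kind of surface) and a principal colour in RGB.
CATALOGUE = [
    {"name": "brick_red",   "kind": "wall", "colour": (150, 60, 50)},
    {"name": "render_grey", "kind": "wall", "colour": (180, 180, 175)},
    {"name": "tile_clay",   "kind": "roof", "colour": (160, 80, 55)},
    {"name": "slate_dark",  "kind": "roof", "colour": (70, 75, 85)},
]

def select_pattern(kind, principal_colour):
    """Pick the catalogue pattern of the requested kind whose principal
    colour is closest (squared Euclidean distance in RGB) to the colour
    extracted from the aerial image."""
    candidates = [p for p in CATALOGUE if p["kind"] == kind]
    return min(
        candidates,
        key=lambda p: sum((a - b) ** 2
                          for a, b in zip(p["colour"], principal_colour)),
    )

# A facade whose principal colour was extracted from the aerial image:
print(select_pattern("wall", (155, 65, 52))["name"])  # brick_red
```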

  19. Adaptive Texture Synthesis for Large Scale City Modeling

    Science.gov (United States)

    Despine, G.; Colleu, T.

    2015-02-01

    Large scale city models textured with aerial images are well suited for bird's-eye navigation, but generally the image resolution does not allow pedestrian navigation. One solution to this problem is to use high resolution terrestrial photos, but this requires a huge amount of manual work to remove occlusions. Another solution is to synthesize generic textures with a set of procedural rules and elementary patterns like bricks, roof tiles, doors and windows. This solution may give realistic textures but with no correlation to the ground truth. Instead of using pure procedural modelling, we present a method to extract information from aerial images and adapt the texture synthesis to each building. We describe a workflow allowing the user to drive the information extraction and to select the appropriate texture patterns. We also emphasize the importance of organizing the knowledge about elementary patterns in a texture catalogue that allows attaching physical information and semantic attributes and executing selection requests. Roofs are processed according to the detected building material. Façades are first described in terms of principal colours, then opening positions are detected and some window features are computed. These features allow selecting the most appropriate patterns from the texture catalogue. We tested this workflow on two samples with 20 cm and 5 cm resolution images. The roof texture synthesis and opening detection were successfully conducted on hundreds of buildings. The window characterization is still sensitive to the distortions inherent to the projection of aerial images onto the facades.

  20. Application of Pareto-efficient combustion modeling framework to large eddy simulations of turbulent reacting flows

    Science.gov (United States)

    Wu, Hao; Ihme, Matthias

    2017-11-01

    The modeling of turbulent combustion requires the consideration of different physico-chemical processes, involving a vast range of time and length scales as well as a large number of scalar quantities. To reduce the computational complexity, various combustion models have been developed. Many of them can be abstracted using a lower-dimensional manifold representation. A key issue in using such lower-dimensional combustion models is assessing whether a particular combustion model is adequate for representing a certain flame configuration. The Pareto-efficient combustion (PEC) modeling framework was developed to perform dynamic combustion model adaptation based on various existing manifold models. In this work, the PEC model is applied to a turbulent flame simulation, in which a computationally efficient flamelet-based combustion model is used together with a high-fidelity finite-rate chemistry model. The combination of these two models achieves high accuracy in predicting pollutant species at a relatively low computational cost. The relevant numerical methods and parallelization techniques are also discussed in this work.

  1. Modelling hydrologic and hydrodynamic processes in basins with large semi-arid wetlands

    Science.gov (United States)

    Fleischmann, Ayan; Siqueira, Vinícius; Paris, Adrien; Collischonn, Walter; Paiva, Rodrigo; Pontes, Paulo; Crétaux, Jean-François; Bergé-Nguyen, Muriel; Biancamaria, Sylvain; Gosset, Marielle; Calmant, Stephane; Tanimoun, Bachir

    2018-06-01

    Hydrologic and hydrodynamic modelling proves to be an important tool for integrated evaluation of hydrological processes in such poorly gauged, large scale basins. We hope that this model application provides new ways forward for large scale model development in such systems, involving semi-arid regions and complex floodplains.

  2. Modelling of heat transfer during torrefaction of large lignocellulosic biomass

    Science.gov (United States)

    Regmi, Bharat; Arku, Precious; Tasnim, Syeda Humaira; Mahmud, Shohel; Dutta, Animesh

    2018-07-01

    Preparation of feedstock is a major energy-intensive process for the thermochemical conversion of biomass into fuel. Eliminating the need to grind biomass prior to the torrefaction process would yield a potential gain in energy requirements, as the entire step would be removed. With regard to commercialization of torrefaction technology, this study examined heat transfer inside large cylindrical biomass both numerically and experimentally during torrefaction. A numerical axisymmetric 2-D model for heat transfer during torrefaction at 270 °C for 1 h was created in COMSOL Multiphysics 5.1, considering heat generation evaluated from the experiment. The model analyzed the temperature distribution within the core and on the surface of the biomass during torrefaction for various sizes. The model results showed similarities with experimental results. The effect of the L/D ratio on the temperature distribution within the biomass was observed by varying length and diameter and compared with experiments in the literature to find an optimal range of cylindrical biomass sizes suitable for torrefaction. The research demonstrated that a cylindrical biomass sample of 50 mm length with an L/D ratio of 2 can be torrefied with a core-surface temperature difference of less than 30 °C. The research also demonstrated that sample length has a negligible effect on the core-surface temperature difference during torrefaction when the diameter is fixed at 25 mm. This information will help to design a torrefaction processing system and develop a value chain for biomass supply without using an energy-intensive grinding process.
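
The core-surface temperature behaviour described above can be reproduced with a simple explicit finite-difference model of radial conduction in a 25 mm diameter cylinder held at the torrefaction temperature. The material properties and the volumetric heat-generation rate below are rough literature-style assumptions for wood, not the paper's fitted values:

```python
# Assumed wood properties (illustrative, not from the paper):
alpha = 1.5e-7       # thermal diffusivity, m^2/s
k     = 0.15         # thermal conductivity, W/(m K)
q_gen = 2.0e4        # exothermic volumetric heat generation, W/m^3
R     = 0.0125       # cylinder radius (25 mm diameter), m
T_surf, T_init = 543.0, 298.0    # surface at 270 C, start at 25 C (K)

n = 26
dr = R / (n - 1)
dt = 0.2             # s, below the explicit stability limit dr**2 / (4*alpha)
T = [T_init] * n
T[-1] = T_surf       # surface node held at the furnace temperature

for _ in range(int(3600 / dt)):          # 1 h of torrefaction
    Tn = T[:]
    # Axis node (r = 0): by symmetry the radial Laplacian -> 4*(T1 - T0)/dr^2
    Tn[0] = T[0] + dt * (alpha * 4.0 * (T[1] - T[0]) / dr**2
                         + alpha * q_gen / k)          # q/(rho*c) = alpha*q/k
    for i in range(1, n - 1):
        r = i * dr
        lap = ((T[i+1] - 2*T[i] + T[i-1]) / dr**2
               + (T[i+1] - T[i-1]) / (2 * r * dr))     # cylindrical Laplacian
        Tn[i] = T[i] + dt * (alpha * lap + alpha * q_gen / k)
    T = Tn

core_surface_dT = T[0] - T_surf
print(round(core_surface_dT, 1))
```

After an hour the Fourier number alpha*t/R^2 is well above 1, so the transient has decayed and the residual core-surface difference is set by the generation term (analytically q_gen*R^2/(4k), about 5 K for these assumed values), consistent with the under-30 °C finding quoted above.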

  3. Modelling of heat transfer during torrefaction of large lignocellulosic biomass

    Science.gov (United States)

    Regmi, Bharat; Arku, Precious; Tasnim, Syeda Humaira; Mahmud, Shohel; Dutta, Animesh

    2018-02-01

    Preparation of feedstock is a major energy-intensive process for the thermochemical conversion of biomass into fuel. Eliminating the need to grind biomass prior to the torrefaction process would yield a potential gain in energy requirements, as the entire step would be removed. With regard to commercialization of torrefaction technology, this study examined heat transfer inside large cylindrical biomass both numerically and experimentally during torrefaction. A numerical axisymmetric 2-D model for heat transfer during torrefaction at 270 °C for 1 h was created in COMSOL Multiphysics 5.1, considering heat generation evaluated from the experiment. The model analyzed the temperature distribution within the core and on the surface of the biomass during torrefaction for various sizes. The model results showed similarities with experimental results. The effect of the L/D ratio on the temperature distribution within the biomass was observed by varying length and diameter and compared with experiments in the literature to find an optimal range of cylindrical biomass sizes suitable for torrefaction. The research demonstrated that a cylindrical biomass sample of 50 mm length with an L/D ratio of 2 can be torrefied with a core-surface temperature difference of less than 30 °C. The research also demonstrated that sample length has a negligible effect on the core-surface temperature difference during torrefaction when the diameter is fixed at 25 mm. This information will help to design a torrefaction processing system and develop a value chain for biomass supply without using an energy-intensive grinding process.

  4. Environmental Impacts of Large Scale Biochar Application Through Spatial Modeling

    Science.gov (United States)

    Huber, I.; Archontoulis, S.

    2017-12-01

    In an effort to study the environmental (emissions, soil quality) and production (yield) impacts of biochar application at regional scales, we coupled the APSIM-Biochar model with the pSIMS parallel platform. So far the majority of biochar research has concentrated on lab-to-field studies to advance scientific knowledge. Regional scale assessments are highly needed to assist decision making. The overall objective of this simulation study was to identify areas in the USA that gain the most environmentally from biochar application, as well as areas where our model predicts a notable yield increase due to the addition of biochar. We present the modifications in both the APSIM-Biochar and pSIMS components that were necessary to facilitate these large scale model runs across several regions in the United States at a resolution of 5 arcminutes. This study uses the AgMERRA global climate data set (1980-2010) and the Global Soil Dataset for Earth Systems modeling as a basis for creating its simulations, as well as local management operations for maize and soybean cropping systems and different biochar application rates. The regional scale simulation analysis is in progress. Preliminary results showed that the model predicts that high quality soils (particularly those common to Iowa cropping systems) do not receive much, if any, production benefit from biochar. However, soils with low soil organic matter (below 0.5%) do get a noteworthy yield increase of around 5-10% in the best cases. We also found N2O emissions to be spatially and temporally specific, increasing in some areas and decreasing in others due to biochar application. In contrast, we found increases in soil organic carbon and plant available water in all soils (top 30 cm) due to biochar application. The magnitude of these increases (% change from the control) was larger in soils with low organic matter (below 1.5%) and smaller in soils with high organic matter (above 3%) and also dependent on biochar

  5. Modeling and analysis of large-eddy simulations of particle-laden turbulent boundary layer flows

    KAUST Repository

    Rahman, Mustafa M.

    2017-01-05

    We describe a framework for the large-eddy simulation of solid particles suspended and transported within an incompressible turbulent boundary layer (TBL). For the fluid phase, the large-eddy simulation (LES) employs a stretched spiral vortex subgrid-scale model and a virtual wall model similar to the work of Cheng, Pullin & Samtaney (J. Fluid Mech., 2015). This LES model is virtually parameter free and involves no active filtering of the computed velocity field. Furthermore, a recycling method to generate turbulent inflow is implemented. For the particle phase, the direct quadrature method of moments (DQMOM) is chosen, in which the weights and abscissas of the quadrature approximation are tracked directly rather than the moments themselves. The numerical method in this framework is based on a fractional-step method with an energy-conservative fourth-order finite difference scheme on a staggered mesh. The code is parallelized based on the standard message passing interface (MPI) protocol and is designed for distributed-memory machines. It is proposed to utilize this framework to examine the transport of particles in very large-scale simulations. The solver is validated using the well-known Taylor-Green vortex case. A large-scale sandstorm case is simulated, and the altitude variations of number density along with its fluctuations are quantified.

  6. The use of public participation and economic appraisal for public involvement in large-scale hydropower projects: Case study of the Nam Theun 2 Hydropower Project

    International Nuclear Information System (INIS)

    Mirumachi, Naho; Torriti, Jacopo

    2012-01-01

    Gaining public acceptance is one of the main issues with large-scale low-carbon projects such as hydropower development. The World Commission on Dams has recommended that, to gain public acceptance, public involvement is necessary in the decision-making process. Because international financial institutions are financially significant actors in the planning and implementation of large-scale hydropower projects in developing-country contexts, the paper examines the ways in which they may influence public involvement. Using the case study of the Nam Theun 2 Hydropower Project in Laos, the paper analyses how public involvement facilitated by the Asian Development Bank had a bearing on procedural and distributional justice. The paper analyses the extent of public participation and the assessment of full social and environmental costs of the project in the Cost-Benefit Analysis conducted during the project appraisal stage. It is argued that while efforts were made to involve the public, several factors influenced procedural and distributional justice: the late contribution of the Asian Development Bank in the project appraisal stage, and the issue of non-market values and the discount rate used to calculate the full social and environmental costs. - Highlights: ► Public acceptance in large-scale hydropower projects is examined. ► Both procedural and distributional justice are important for public acceptance. ► International Financial Institutions can influence the level of public involvement. ► Public involvement benefits consideration of non-market values and discount rates.

  7. A logistics model for large space power systems

    Science.gov (United States)

    Koelle, H. H.

    Space Power Systems (SPS) have to overcome two hurdles: (1) to find an attractive design, manufacturing and assembly concept, and (2) to have available a space transportation system that can provide economical logistic support during the construction and operational phases. An initial system feasibility study, some five years ago, was based on a reference system that used terrestrial resources only and was based partially on electric propulsion systems. The conclusion was: it is feasible but not yet economically competitive with other options. This study is based on terrestrial and extraterrestrial resources and on chemical (LH2/LOX) propulsion systems. These engines are available from the Space Shuttle production line and require only small changes. Other so-called advanced propulsion systems investigated did not prove economically superior if lunar LOX is available. We assume that a Shuttle-derived Heavy Lift Launch Vehicle (HLLV) will become available around the turn of the century and that this will be used to establish a research base on the lunar surface. This lunar base has the potential to grow into a lunar factory producing LOX and construction materials for supporting, among other projects, the construction of space power systems in geostationary orbit. A model was developed to simulate the logistics support of such an operation for a 50-year life cycle. After 50 years, 111 SPS units of 5 GW each with an availability of 90% will effectively produce 100 × 5 = 500 GW. The model comprises 60 equations and requires 29 assumptions about the parameters involved. The 60 state variables calculated with the equations mentioned above are given on an annual basis and as averages for the 50-year life cycle. Recycling of defective parts in geostationary orbit is one of the features of the model. The state-of-the-art with respect to SPS technology is introduced as a variable: Mg of mass per MW of electric power delivered.
If the space manufacturing facility, a maintenance and repair facility

  8. Exploiting multi-scale parallelism for large scale numerical modelling of laser wakefield accelerators

    International Nuclear Information System (INIS)

    Fonseca, R A; Vieira, J; Silva, L O; Fiuza, F; Davidson, A; Tsung, F S; Mori, W B

    2013-01-01

    A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ∼10⁶ cores and sustained performance over ∼2 PFlops is demonstrated, opening the way for large scale modelling of LWFA scenarios. (paper)

  9. Relationships among Adolescents' Leisure Motivation, Leisure Involvement, and Leisure Satisfaction: A Structural Equation Model

    Science.gov (United States)

    Chen, Ying-Chieh; Li, Ren-Hau; Chen, Sheng-Hwang

    2013-01-01

    The purpose of this cross-sectional study was to test a cause-and-effect model of factors affecting leisure satisfaction among Taiwanese adolescents. A structural equation model was proposed in which the relationships among leisure motivation, leisure involvement, and leisure satisfaction were explored. The study collected data from 701 adolescent…

  10. The Role of Student Involvement and Perceptions of Integration in a Causal Model of Student Persistence.

    Science.gov (United States)

    Berger, Joseph B.; Milem, Jeffrey F.

    1999-01-01

    This study refined and applied an integrated model of undergraduate persistence (accounting for both behavioral and perceptual components) to examine first-year retention at a private, highly selective research university. Results suggest that including behaviorally based measures of involvement improves the model's explanatory power concerning…

  11. A Reformulated Model of Barriers to Parental Involvement in Education: Comment on Hornby and Lafaele (2011)

    Science.gov (United States)

    Fan, Weihua; Li, Nan; Sandoval, Jaime Robert

    2018-01-01

    In a 2011 article in this journal, Hornby and Lafaele provided a comprehensive model to understand barriers that may adversely impact effectiveness of parental involvement (PI) in education. The proposed explanatory model provides researchers with a new comprehensive and systematic perspective of the phenomenon in question with references from an…

  12. Adolescents and Music Media: Toward an Involvement-Mediational Model of Consumption and Self-Concept

    Science.gov (United States)

    Kistler, Michelle; Rodgers, Kathleen Boyce; Power, Thomas; Austin, Erica Weintraub; Hill, Laura Griner

    2010-01-01

    Using social cognitive theory and structural regression modeling, we examined pathways between early adolescents' music media consumption, involvement with music media, and 3 domains of self-concept (physical appearance, romantic appeal, and global self-worth; N=124). A mediational model was supported for 2 domains of self-concept. Music media…

  13. Empirical Models of Social Learning in a Large, Evolving Network.

    Directory of Open Access Journals (Sweden)

    Ayşe Başar Bener

    Full Text Available This paper advances theories of social learning through an empirical examination of how social networks change over time. Social networks are important for learning because they constrain individuals' access to information about the behaviors and cognitions of other people. Using data on a large social network of mobile device users over a one-month time period, we test three hypotheses: (1) attraction homophily causes individuals to form ties on the basis of attribute similarity, (2) aversion homophily causes individuals to delete existing ties on the basis of attribute dissimilarity, and (3) social influence causes individuals to adopt the attributes of others they share direct ties with. Statistical models offer varied degrees of support for all three hypotheses and show that these mechanisms are more complex than assumed in prior work. Although homophily is normally thought of as a process of attraction, people also avoid relationships with others who are different. These mechanisms have distinct effects on network structure. While social influence does help explain behavior, people tend to follow global trends more than they follow their friends.
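
The attraction and aversion mechanisms in the abstract can be sketched as a toy tie-update rule. The attributes, the logistic form, and its steepness parameter are illustrative assumptions, not the paper's fitted statistical model:

```python
import math

def similarity(a, b):
    """Fraction of attributes two users share (attribute vectors of equal length)."""
    return sum(1 for x, y in zip(a, b) if x == y) / len(a)

def p_form(a, b, beta=4.0):
    """Attraction homophily: tie-formation probability rises with similarity.
    beta is an assumed steepness parameter for the logistic link."""
    return 1.0 / (1.0 + math.exp(-beta * (similarity(a, b) - 0.5)))

def p_delete(a, b, beta=4.0):
    """Aversion homophily: tie-deletion probability rises with dissimilarity."""
    return 1.0 - p_form(a, b, beta)

# Hypothetical mobile-device users described by (platform, app genre, region):
alice = ("ios", "games", "urban")
bob   = ("ios", "games", "rural")
carol = ("android", "news", "rural")

print(p_form(alice, bob) > p_form(alice, carol))    # True: more alike, more likely to tie
print(p_delete(alice, carol) > p_delete(alice, bob))  # True: less alike, more likely to cut
```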

  14. Lumped hydrological models as an Occam's razor for runoff modeling in large Russian Arctic basins

    OpenAIRE

    Ayzel Georgy

    2018-01-01

    This study aims to investigate the ability of three lumped hydrological models to predict the daily runoff of large-scale Arctic basins for the modern period (1979-2014) in a case of substantial data scarcity. All models were driven only by a meteorological forcing reanalysis dataset, without any additional information about the landscape, soil or vegetation cover properties of the studied basins. We found limitations of model parameter calibration in ungauged basins using global optimization alg...
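
A minimal version of this setup calibrates a one-parameter lumped "bucket" model against runoff by global random search, scoring candidates with the Nash-Sutcliffe efficiency (NSE). Everything here is synthetic and illustrative; the actual study used three published lumped models and reanalysis forcing:

```python
import random

def simulate(precip, k, s0=10.0):
    """One-bucket lumped model: storage fills with precipitation and drains
    linearly, runoff q = k * s. No landscape or soil data needed."""
    s, flows = s0, []
    for p in precip:
        s += p
        q = k * s
        s -= q
        flows.append(q)
    return flows

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the mean."""
    mean = sum(obs) / len(obs)
    num = sum((o - m) ** 2 for o, m in zip(obs, sim))
    den = sum((o - mean) ** 2 for o in obs)
    return 1.0 - num / den

random.seed(1)
precip = [random.uniform(0.0, 5.0) for _ in range(200)]   # synthetic forcing
obs = simulate(precip, k=0.3)                             # synthetic "observations"

# Global optimization by plain random search over the single parameter:
best_k, best_score = None, float("-inf")
for _ in range(500):
    k = random.uniform(0.01, 0.99)
    score = nse(obs, simulate(precip, k))
    if score > best_score:
        best_k, best_score = k, score

print(round(best_k, 2), round(best_score, 2))
```

With gauged "observations" the search recovers the true recession parameter; the abstract's point is that in ungauged basins no such target series exists, which is exactly where this calibration step breaks down.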

  15. Large Animal Stroke Models vs. Rodent Stroke Models, Pros and Cons, and Combination?

    Science.gov (United States)

    Cai, Bin; Wang, Ning

    2016-01-01

    Stroke is a leading cause of serious long-term disability worldwide and the second leading cause of death in many countries. Long-standing attempts to salvage dying neurons via various neuroprotective agents have failed in stroke translational research, owing in part to the huge gap between animal stroke models and stroke patients, which also suggests that rodent models have limited predictive value and that alternative large animal models are likely to become important in future translational research. The genetic background, physiological characteristics, behavioral characteristics, and brain structure of large animals, especially nonhuman primates, are analogous to humans, and resemble humans in stroke. Moreover, relatively new regional imaging techniques, measurements of regional cerebral blood flow, and sophisticated physiological monitoring can be more easily performed on the same animal at multiple time points. As a result, we can use large animal stroke models to narrow the gap and promote the translation of basic stroke research. At the same time, we should not neglect the disadvantages of large animal stroke models, such as the significant expense and ethical considerations, which can be overcome by rodent models. Rodents should be selected as stroke models for initial testing, and primates or cats are desirable as a second species, as recommended by the Stroke Therapy Academic Industry Roundtable (STAIR) group in 2009.

  16. The Management Challenge: Handling Exams Involving Large Quantities of Students, on and off Campus--A Design Concept

    Science.gov (United States)

    Larsson, Ken

    2014-01-01

    This paper looks at the process of managing large numbers of exams efficiently and securely with the use of dedicated IT support. The system integrates regulations on different levels, from national to local (even down to departments), and ensures that the rules are employed in all stages of handling the exams. The system has a proven record of…

  17. How senior entomologists can be involved in the annual meeting: organization and the coming together of a large event

    Science.gov (United States)

    The Annual Meeting for the Entomological Society of America is a large event where planning is started at the end of the previous years’ meeting. The President of the Society named the Program Committee Co-Chairs for Entomology 2017 at the 2015 Annual Meeting, so that they could handle the duties o...

  18. Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam’s Window*

    Science.gov (United States)

    Onorante, Luca; Raftery, Adrian E.

    2015-01-01

    Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam’s window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods. PMID:26917859
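
The dynamic Occam's window idea can be sketched on a toy streaming example: candidate models are reweighted recursively with a forgetting factor, and any model whose probability falls below a fraction of the best model's is dropped from the active set, so the whole model space is never carried along. The predictive means, forgetting factor, and window threshold below are invented for illustration:

```python
import math

# Toy candidates: each "model" predicts the next observation with a fixed mean.
MODELS = {"m_low": 0.0, "m_mid": 1.0, "m_high": 2.0}   # hypothetical means
SIGMA, FORGET, WINDOW = 0.5, 0.99, 0.1                 # assumed tuning values

def gauss_lik(x, mu):
    """Gaussian predictive likelihood with fixed standard deviation SIGMA."""
    return math.exp(-0.5 * ((x - mu) / SIGMA) ** 2) / (SIGMA * math.sqrt(2 * math.pi))

probs = {m: 1.0 / len(MODELS) for m in MODELS}
active = set(MODELS)
for y in [1.1, 0.9, 1.2, 1.0, 0.8]:                    # simulated data stream
    # Forgetting step flattens the probabilities, then Bayes' rule updates them:
    w = {m: probs[m] ** FORGET * gauss_lik(y, MODELS[m]) for m in active}
    z = sum(w.values())
    probs = {m: w[m] / z for m in active}
    # Dynamic Occam's window: keep only models close enough to the best one.
    best = max(probs.values())
    active = {m for m in active if probs[m] >= WINDOW * best}
    probs = {m: probs[m] for m in active}
    s = sum(probs.values())
    probs = {m: p / s for m, p in probs.items()}

print(max(probs, key=probs.get))  # m_mid, whose mean matches the data
```

In the full method the candidates are regression models over subsets of explanatory variables and dropped models can re-enter the window later; the pruning step is what keeps the procedure tractable when the model space is too large for standard DMA.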

  19. Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam's Window.

    Science.gov (United States)

    Onorante, Luca; Raftery, Adrian E

    2016-01-01

    Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam's window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods.
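The recursive scheme described in the abstract above can be sketched in a few lines: keep a probability for each candidate model, flatten the probabilities with a forgetting factor between observations, update with each model's predictive likelihood, and drop models whose probability falls out of the window. This is an illustrative Python sketch, not the authors' code; the parameter names `alpha` and `occam_c` and the likelihood values are assumptions for the example.

```python
import numpy as np

def dma_update(post, lik, alpha=0.99, occam_c=0.001):
    """One step of Dynamic Model Averaging with a dynamic Occam's window.

    post: current model probabilities (sums to 1)
    lik: predictive likelihood of the new observation under each model
    alpha: forgetting factor that flattens probabilities between steps
    occam_c: models whose probability falls below occam_c * max(prob)
             are dropped (the dynamic Occam's window)
    Illustrative notation only, not the paper's.
    """
    pred = post ** alpha            # forgetting step
    pred = pred / pred.sum()
    upd = pred * lik                # Bayes update with the new data point
    upd = upd / upd.sum()
    upd[upd < occam_c * upd.max()] = 0.0   # dynamic Occam's window
    return upd / upd.sum()

probs = np.array([0.25, 0.25, 0.25, 0.25])      # four candidate models
lik = np.array([0.9, 0.5, 0.1, 0.0001])         # assumed predictive likelihoods
probs = dma_update(probs, lik)                   # model 4 drops out of the window
```

In a nowcasting loop this update would be applied at each new data release, so the active model subset changes over time as the abstract describes.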

  20. Medical staff involvement in nursing homes: development of a conceptual model and research agenda.

    Science.gov (United States)

    Shield, Renée; Rosenthal, Marsha; Wetle, Terrie; Tyler, Denise; Clark, Melissa; Intrator, Orna

    2014-02-01

    Medical staff (physicians, nurse practitioners, physicians' assistants) involvement in nursing homes (NH) is limited by professional guidelines, government policies, regulations, and reimbursements, creating bureaucratic burden. The conceptual NH Medical Staff Involvement Model, based on our mixed-methods research, applies the Donabedian "structure-process-outcomes" framework to the NH, identifying measures for a coordinated research agenda. Quantitative surveys and qualitative interviews conducted with medical directors, administrators and directors of nursing, other experts, residents and family members and Minimum Data Set, the Online Certification and Reporting System and Medicare Part B claims data related to NH structure, process, and outcomes were analyzed. NH control of medical staff, or structure, affects medical staff involvement in care processes and is associated with better outcomes (e.g., symptom management, appropriate transitions, satisfaction). The model identifies measures clarifying the impact of NH medical staff involvement on care processes and resident outcomes and has strong potential to inform regulatory policies.

  1. Customer involvement in greening the supply chain: an interpretive structural modeling methodology

    Science.gov (United States)

    Kumar, Sanjay; Luthra, Sunil; Haleem, Abid

    2013-04-01

    The role of customers in green supply chain management needs to be identified and recognized as an important research area. This paper is an attempt to explore the involvement aspect of customers towards greening of the supply chain (SC). An empirical research approach has been used to collect primary data to rank different variables for effective customer involvement in green concept implementation in the SC. An interpretive structural-based model has been presented, and variables have been classified using matrice d'impacts croisés multiplication appliquée à un classement (MICMAC) analysis. Contextual relationships among variables have been established using experts' opinions. The research may help practicing managers to understand the interaction among variables affecting customer involvement. Further, this understanding may be helpful in framing the policies and strategies to green the SC. Analyzing the interaction among variables for effective customer involvement in greening the SC to develop the structural model in the Indian perspective is an effort towards promoting environmental consciousness.

  2. Statistical aspects of carbon fiber risk assessment modeling. [fire accidents involving aircraft

    Science.gov (United States)

    Gross, D.; Miller, D. R.; Soland, R. M.

    1980-01-01

    The probabilistic and statistical aspects of the carbon fiber risk assessment modeling of fire accidents involving commercial aircraft are examined. Three major sources of uncertainty in the modeling effort are identified. These are: (1) imprecise knowledge in establishing the model; (2) parameter estimation; and (3) Monte Carlo sampling error. All three sources of uncertainty are treated and statistical procedures are utilized and/or developed to control them wherever possible.
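Of the three uncertainty sources listed above, Monte Carlo sampling error is the one with a standard control: estimate the standard error of the sample mean as std/√n and increase the sample size until it is acceptably small. A minimal illustration; the exponential "model output" is a stand-in, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a risk model's Monte Carlo output (assumed distribution).
samples = rng.exponential(scale=2.0, size=10_000)

# The Monte Carlo estimate and its sampling error, std / sqrt(n).
estimate = samples.mean()
std_error = samples.std(ddof=1) / np.sqrt(samples.size)
```

Quadrupling the sample size halves `std_error`, which is the usual way this third source of uncertainty is driven below a tolerance.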

  3. Bone marrow involvement in diffuse large B-cell lymphoma: correlation between FDG-PET uptake and type of cellular infiltrate

    International Nuclear Information System (INIS)

    Paone, Gaetano; Itti, Emmanuel; Lin, Chieh; Meignan, Michel; Haioun, Corinne; Dupuis, Jehan; Gaulard, Philippe

    2009-01-01

    To assess, in patients with diffuse large B-cell lymphoma (DLBCL), whether the low sensitivity of 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) for bone marrow assessment may be explained by histological characteristics of the cellular infiltrate. From a prospective cohort of 110 patients with newly diagnosed aggressive lymphoma, 21 patients with DLBCL had bone marrow involvement. Pretherapeutic FDG-PET images were interpreted visually and semiquantitatively, then correlated with the type of cellular infiltrate and known prognostic factors. Of these 21 patients, 7 (33%) had lymphoid infiltrates with a prominent component of large transformed lymphoid cells (concordant bone marrow involvement, CBMI) and 14 (67%) had lymphoid infiltrates composed of small cells (discordant bone marrow involvement, DBMI). Only 10 patients (48%) had abnormal bone marrow FDG uptake, 6 of the 7 with CBMI and 4 of the 14 with DBMI. Therefore, FDG-PET positivity in the bone marrow was significantly associated with CBMI, while FDG-PET negativity was associated with DBMI (Fisher's exact test, p=0.024). There were no significant differences in gender, age and overall survival between patients with CBMI and DBMI, while the international prognostic index was significantly higher in patients with CBMI. Our study suggests that, in DLBCL patients with bone marrow involvement, bone marrow FDG uptake depends on two types of infiltrate, comprising small (DBMI) or large (CBMI) cells. This may explain the apparent low sensitivity of FDG-PET previously reported for detecting bone marrow involvement. (orig.)

  4. The Effects of Group Relaxation Training/Large Muscle Exercise, and Parental Involvement on Attention to Task, Impulsivity, and Locus of Control among Hyperactive Boys.

    Science.gov (United States)

    Porter, Sally S.; Omizo, Michael M.

    1984-01-01

    The study examined the effects of group relaxation training/large muscle exercise and parental involvement on attention to task, impulsivity, and locus of control among 34 hyperactive boys. Following treatment both experimental groups recorded significantly higher attention to task, lower impulsivity, and lower locus of control scores. (Author/CL)

  5. Concordant bone marrow involvement of diffuse large B-cell lymphoma represents a distinct clinical and biological entity in the era of immunotherapy

    DEFF Research Database (Denmark)

    Yao, Zhilei; Deng, Lijuan; Xu-Monette, Z Y

    2018-01-01

    In diffuse large B-cell lymphoma (DLBCL), the clinical and biological significance of concordant and discordant bone marrow (BM) involvement have not been well investigated. We evaluated 712 de novo DLBCL patients with front-line rituximab-containing treatment, including 263 patients with positiv...

  6. Virtualizing ancient Rome: 3D acquisition and modeling of a large plaster-of-Paris model of imperial Rome

    Science.gov (United States)

    Guidi, Gabriele; Frischer, Bernard; De Simone, Monica; Cioci, Andrea; Spinetti, Alessandro; Carosso, Luca; Micoli, Laura L.; Russo, Michele; Grasso, Tommaso

    2005-01-01

    Computer modeling through digital range images has been used for many applications, including 3D modeling of objects belonging to our cultural heritage. The scales involved range from small objects (e.g. pottery), to middle-sized works of art (statues, architectural decorations), up to very large structures (architectural and archaeological monuments). For any of these applications, suitable sensors and methodologies have been explored by different authors. The object to be modeled within this project is the "Plastico di Roma antica," a large plaster-of-Paris model of imperial Rome (16x17 meters) created in the last century. Its overall size therefore demands an acquisition approach typical of large structures, but it is also characterized by extremely tiny details typical of small objects (houses are a few centimeters high; their doors, windows, etc. are smaller than 1 centimeter). This paper gives an account of the procedures followed for solving this "contradiction" and describes how a huge 3D model was acquired and generated by using a special metrology Laser Radar. The procedures for reorienting into a single reference system the huge point clouds obtained after each acquisition phase, thanks to the measurement of fixed redundant references, are described. The data set was split into smaller sub-areas, 2 x 2 meters each, for purposes of mesh editing. This subdivision was necessary owing to the huge number of points in each individual scan (50-60 million). The final merge of the edited parts made it possible to create a single mesh. All these processes were carried out with software specifically designed for this project, since no commercial package could be found that was suitable for managing such a large number of points. Preliminary models are presented. Finally, the significance of the project is discussed in terms of the overall project known as "Rome Reborn," of which the present acquisition is an important component.

  7. Automatic Generation of Connectivity for Large-Scale Neuronal Network Models through Structural Plasticity.

    Science.gov (United States)

    Diaz-Pier, Sandra; Naveau, Mikaël; Butz-Ostendorf, Markus; Morrison, Abigail

    2016-01-01

    With the emergence of new high performance computation technology in the last decade, the simulation of large scale neural networks which are able to reproduce the behavior and structure of the brain has finally become an achievable target of neuroscience. Due to the number of synaptic connections between neurons and the complexity of biological networks, most contemporary models have manually defined or static connectivity. However, it is expected that modeling the dynamic generation and deletion of the links among neurons, locally and between different regions of the brain, is crucial to unravel important mechanisms associated with learning, memory and healing. Moreover, for many neural circuits that could potentially be modeled, activity data is more readily and reliably available than connectivity data. Thus, a framework that enables networks to wire themselves on the basis of specified activity targets can be of great value in specifying network models where connectivity data is incomplete or has large error margins. To address these issues, in the present work we present an implementation of a model of structural plasticity in the neural network simulator NEST. In this model, synapses consist of two parts, a pre- and a post-synaptic element. Synapses are created and deleted during the execution of the simulation following local homeostatic rules until a mean level of electrical activity is reached in the network. We assess the scalability of the implementation in order to evaluate its potential usage in the self generation of connectivity of large scale networks. We show and discuss the results of simulations on simple two population networks and more complex models of the cortical microcircuit involving 8 populations and 4 layers using the new framework.

  8. A dynamic performance model for redox-flow batteries involving soluble species

    International Nuclear Information System (INIS)

    Shah, A.A.; Watt-Smith, M.J.; Walsh, F.C.

    2008-01-01

    A transient modelling framework for a vanadium redox-flow battery (RFB) is developed and experiments covering a range of vanadium concentration and electrolyte flow rate are conducted. The two-dimensional model is based on a comprehensive description of mass, charge and momentum transport and conservation, and is combined with a global kinetic model for reactions involving vanadium species. The model is validated against the experimental data and is used to study the effects of variations in concentration, electrolyte flow rate and electrode porosity. Extensions to the model and future work are suggested

  9. Large-scale modeling of rain fields from a rain cell deterministic model

    Science.gov (United States)

    FéRal, Laurent; Sauvageot, Henri; Castanet, Laurent; Lemorton, JoëL.; Cornet, FréDéRic; Leconte, Katia

    2006-04-01

    A methodology to simulate two-dimensional rain rate fields at large scale (1000 × 1000 km2, the scale of a satellite telecommunication beam or a terrestrial fixed broadband wireless access network) is proposed. It relies on a rain rate field cellular decomposition. At small scale (˜20 × 20 km2), the rain field is split up into its macroscopic components, the rain cells, described by the Hybrid Cell (HYCELL) cellular model. At midscale (˜150 × 150 km2), the rain field results from the conglomeration of rain cells modeled by HYCELL. To account for the rain cell spatial distribution at midscale, the latter is modeled by a doubly aggregative isotropic random walk, the optimal parameterization of which is derived from radar observations at midscale. The extension of the simulation area from the midscale to the large scale (1000 × 1000 km2) requires the modeling of the weather frontal area. The latter is first modeled by a Gaussian field with anisotropic covariance function. The Gaussian field is then turned into a binary field, giving the large-scale locations over which it is raining. This transformation requires the definition of the rain occupation rate over large-scale areas. Its probability distribution is determined from observations by the French operational radar network ARAMIS. The coupling with the rain field modeling at midscale is immediate whenever the large-scale field is split up into midscale subareas. The rain field thus generated accounts for the local CDF at each point, defining a structure spatially correlated at small scale, midscale, and large scale. It is then suggested that this approach be used by system designers to evaluate diversity gain, terrestrial path attenuation, or slant path attenuation for different azimuth and elevation angle directions.
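The large-scale step described above (turning a Gaussian field into a binary rain/no-rain field with a prescribed rain occupation rate) amounts to thresholding the field at the matching quantile. The sketch below is illustrative only: it uses an uncorrelated Gaussian field and an assumed occupation rate, whereas the paper uses an anisotropic covariance function and occupation rates derived from the ARAMIS radar network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed fraction of the large-scale area over which it is raining.
occupation_rate = 0.3

# Stand-in Gaussian field (the paper's field is spatially correlated;
# this one is not, to keep the sketch short).
field = rng.standard_normal((100, 100))

# Threshold at the (1 - rate) quantile so that exactly that fraction
# of the domain exceeds it, giving the binary rain-location field.
threshold = np.quantile(field, 1.0 - occupation_rate)
rain_mask = field > threshold
```

The midscale cell-based simulation would then be run only inside the `True` regions of `rain_mask`, as in the coupling the abstract describes.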

  10. Models of user involvement in the mental health context: intentions and implementation challenges.

    Science.gov (United States)

    Storm, Marianne; Edwards, Adrian

    2013-09-01

    Patient-centered care, shared decision-making, patient participation and the recovery model are models of care which incorporate user involvement and patients' perspectives on their treatment and care. The aims of this paper are to examine these different care models and their association with user involvement in the mental health context and discuss some of the challenges associated with their implementation. The sources used are health policy documents and published literature and research on patient-centered care, shared decision-making, patient participation and recovery. The policy documents advocate that mental health services should be oriented towards patients' or users' needs, participation and involvement. These policies also emphasize recovery and integration of people with mental disorders in the community. However, these collaborative care models have generally been subject to limited empirical research about effectiveness. There are also challenges to implementation of the models in inpatient care. What evidence there is indicates tensions between patients' and providers' perspectives on treatment and care. There are issues related to risk and the person's capacity for user involvement, and concerns about what role patients themselves wish to play in decision-making. Lack of competence and awareness among providers are further issues. Further work on training, evaluation and implementation is needed to ensure that inpatient mental health services are adapting user oriented care models at all levels of services.

  11. Economic Model Predictive Control for Large-Scale and Distributed Energy Systems

    DEFF Research Database (Denmark)

    Standardi, Laura

    In this thesis, we consider control strategies for large and distributed energy systems that are important for the implementation of smart grid technologies. An electrical grid has to ensure reliability and avoid long-term interruptions in the power supply. Moreover, the share of Renewable Energy Sources (RESs) in the smart grids is increasing. These energy sources bring uncertainty to the production due to their fluctuations. Hence, smart grids need suitable control systems that are able to continuously balance power production and consumption. We apply the Economic Model Predictive Control (EMPC) strategy to optimise the economic performances of the energy systems and to balance the power production and consumption. In the case of large-scale energy systems, the electrical grid connects a high number of power units. Because of this, the related control problem involves a high number of variables…

  12. Childhood craniopharyngioma: greater hypothalamic involvement before surgery is associated with higher homeostasis model insulin resistance index

    Directory of Open Access Journals (Sweden)

    Sainte-Rose Christian

    2009-04-01

    Background: Obesity seems to be linked to the hypothalamic involvement in craniopharyngioma. We evaluated the pre-surgery relationship between the degree of this involvement on magnetic resonance imaging and insulin resistance, as evaluated by the homeostasis model insulin resistance index (HOMA). As insulin-like growth factor 1, leptin, soluble leptin receptor (sOB-R) and ghrelin may also be involved, we compared their plasma concentrations and their link to weight change. Methods: 27 children with craniopharyngioma were classified as either grade 0 (n = 7, no hypothalamic involvement), grade 1 (n = 8, compression without involvement), or grade 2 (n = 12, severe involvement). Results: Despite having similar body mass indexes (BMI), the grade 2 patients had higher glucose, insulin and HOMA before surgery than the grade 0 patients (P = 0.02). The data for the whole population before and 6–18 months after surgery showed increases in BMI (P …). Conclusion: The hypothalamic involvement by the craniopharyngioma before surgery seems to determine the degree of insulin resistance, regardless of the BMI. The pre-surgery HOMA values were correlated with the post-surgery weight gain. This suggests that obesity should be prevented by reducing insulin secretion in those cases with hypothalamic involvement.
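For reference, the HOMA insulin resistance index used in this study is conventionally computed from fasting glucose and insulin. The formula below is the standard textbook definition (glucose in mmol/L times insulin in µU/mL, divided by 22.5), not taken from this abstract, which does not spell it out.

```python
def homa_ir(fasting_glucose_mmol_l: float, fasting_insulin_uU_ml: float) -> float:
    """Homeostasis model assessment of insulin resistance (HOMA-IR).

    Standard formula: glucose [mmol/L] * insulin [uU/mL] / 22.5.
    """
    return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

# Example fasting values (illustrative, not study data):
value = homa_ir(5.0, 9.0)   # = 2.0
```

Higher values indicate greater insulin resistance, which is why the grade 2 (severe hypothalamic involvement) group's higher HOMA is the key finding.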

  13. Towards large scale stochastic rainfall models for flood risk assessment in trans-national basins

    Science.gov (United States)

    Serinaldi, F.; Kilsby, C. G.

    2012-04-01

    While extensive research has been devoted to rainfall-runoff modelling for risk assessment in small and medium size watersheds, less attention has been paid, so far, to large scale trans-national basins, where flood events have severe societal and economic impacts with magnitudes quantified in billions of Euros. As an example, in the April 2006 flood events along the Danube basin at least 10 people lost their lives and up to 30 000 people were displaced, with overall damages estimated at more than half a billion Euros. In this context, refined analytical methods are fundamental to improve the risk assessment and, then, the design of structural and non structural measures of protection, such as hydraulic works and insurance/reinsurance policies. Since flood events are mainly driven by exceptional rainfall events, suitable characterization and modelling of space-time properties of rainfall fields is a key issue to perform a reliable flood risk analysis based on alternative precipitation scenarios to be fed in a new generation of large scale rainfall-runoff models. Ultimately, this approach should be extended to a global flood risk model. However, as the need for rainfall models able to account for and simulate spatio-temporal properties of rainfall fields over large areas is rather new, the development of new rainfall simulation frameworks is a challenging task: it requires overcoming the drawbacks of the existing modelling schemes (devised for smaller spatial scales) while keeping their desirable properties. In this study, we critically summarize the most widely used approaches for rainfall simulation. Focusing on stochastic approaches, we stress the importance of introducing suitable climate forcings in these simulation schemes in order to account for the physical coherence of rainfall fields over wide areas. Based on preliminary considerations, we suggest a modelling framework relying on the Generalized Additive Models for Location, Scale and Shape (GAMLSS).

  14. Investigation of deep inelastic scattering processes involving large p$_{t}$ direct photons in the final state

    CERN Multimedia

    2002-01-01

    This experiment will investigate various aspects of photon-parton scattering and will be performed in the H2 beam of the SPS North Area with high intensity hadron beams up to 350 GeV/c. (a) The directly produced photon yield in deep inelastic hadron-hadron collisions. Large p$_{t}$ direct photons from hadronic interactions are presumably a result of a simple annihilation process of quarks and antiquarks or of a QCD-Compton process. The relative contribution of the two processes can be studied by using various incident beam projectiles $\pi^{+}, \pi^{-}, p$ and in the future $\bar{p}$. (b) The correlations between directly produced photons and their accompanying hadronic jets. We will examine events with a large p$_{t}$ direct photon for away-side jets. If jets are recognised, their properties will be investigated. Differences between a gluon and a quark jet may become observable by comparing reactions where valence quark annihilations (away-side jet originates from a gluon) dominate over the QCD-Compton...

  15. The PHD Domain of Np95 (mUHRF1) Is Involved in Large-Scale Reorganization of Pericentromeric Heterochromatin

    Science.gov (United States)

    Papait, Roberto; Pistore, Christian; Grazini, Ursula; Babbio, Federica; Cogliati, Sara; Pecoraro, Daniela; Brino, Laurent; Morand, Anne-Laure; Dechampesme, Anne-Marie; Spada, Fabio; Leonhardt, Heinrich; McBlane, Fraser; Oudet, Pierre

    2008-01-01

    Heterochromatic chromosomal regions undergo large-scale reorganization and progressively aggregate, forming chromocenters. These are dynamic structures that rapidly adapt to various stimuli that influence gene expression patterns, cell cycle progression, and differentiation. Np95-ICBP90 (m- and h-UHRF1) is a histone-binding protein expressed only in proliferating cells. During pericentromeric heterochromatin (PH) replication, Np95 specifically relocalizes to chromocenters where it highly concentrates in the replication factories that correspond to less compacted DNA. Np95 recruits HDAC and DNMT1 to PH and depletion of Np95 impairs PH replication. Here we show that Np95 causes large-scale modifications of chromocenters independently from the H3:K9 and H4:K20 trimethylation pathways, from the expression levels of HP1, from DNA methylation and from the cell cycle. The PHD domain is essential to induce this effect. The PHD domain is also required in vitro to increase access of a restriction enzyme to DNA packaged into nucleosomal arrays. We propose that the PHD domain of Np95-ICBP90 contributes to the opening and/or stabilization of dense chromocenter structures to support the recruitment of modifying enzymes, like HDAC and DNMT1, required for the replication and formation of PH. PMID:18508923

  16. An approach to industrial water conservation--a case study involving two large manufacturing companies based in Australia.

    Science.gov (United States)

    Agana, Bernard A; Reeve, Darrell; Orbell, John D

    2013-01-15

    This study presents the application of an integrated water management strategy at two large Australian manufacturing companies that are contrasting in terms of their respective products. The integrated strategy, consisting of water audit, pinch analysis and membrane process application, was deployed in series to systematically identify water conservation opportunities. Initially, a water audit was deployed to completely characterize all water streams found at each production site. This led to the development of a water balance diagram which, together with water test results, served as a basis for subsequent enquiry. After the water audit, commercially available water pinch software was utilized to identify possible water reuse opportunities, some of which were subsequently implemented on site. Finally, utilizing a laboratory-scale test rig, membrane processes such as UF, NF and RO were evaluated for their suitability to treat the various wastewater streams. The membranes tested generally showed good contaminant rejection rates, slow flux decline rates, low energy usage and were well suited for treatment of specific wastewater streams. The synergy between the various components of this strategy has the potential to reduce substantial amounts of Citywater consumption and wastewater discharge across a diverse range of large manufacturing companies. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.

  17. Environmental Disturbance Modeling for Large Inflatable Space Structures

    National Research Council Canada - National Science Library

    Davis, Donald

    2001-01-01

    Tightening space budgets and stagnating spacelift capabilities are driving the Air Force and other space agencies to focus on inflatable technology as a reliable, inexpensive means of deploying large structures in orbit...

  18. Parallel runs of a large air pollution model on a grid of Sun computers

    DEFF Research Database (Denmark)

    Alexandrov, V.N.; Owczarz, W.; Thomsen, Per Grove

    2004-01-01

    Large-scale air pollution models can successfully be used in different environmental studies. These models are described mathematically by systems of partial differential equations. Splitting procedures followed by discretization of the spatial derivatives leads to several large systems

  19. Giant-cell arteritis. Concordance study between aortic CT angiography and FDG-PET/CT in detection of large-vessel involvement

    International Nuclear Information System (INIS)

    Boysson, Hubert de; Dumont, Anael; Boutemy, Jonathan; Maigne, Gwenola; Martin Silva, Nicolas; Sultan, Audrey; Bienvenu, Boris; Aouba, Achille; Liozon, Eric; Ly, Kim Heang; Lambert, Marc; Aide, Nicolas; Manrique, Alain

    2017-01-01

    The purpose of our study was to assess the concordance of aortic CT angiography (CTA) and FDG-PET/CT in the detection of large-vessel involvement at diagnosis in patients with giant-cell arteritis (GCA). We created a multicenter cohort of patients with GCA diagnosed between 2010 and 2015, and who underwent both FDG-PET/CT and aortic CTA before or in the first ten days following treatment introduction. Eight vascular segments were studied on each procedure. We calculated concordance between both imaging techniques in a per-patient and a per-segment analysis, using Cohen's kappa concordance index. We included 28 patients (21/7 women/men, median age 67 [56-82]). Nineteen patients had large-vessel involvement on PET/CT and 18 of these patients also presented positive findings on CTA. In a per-segment analysis, a median of 5 [1-7] and 3 [1-6] vascular territories were involved on positive PET/CT and CTA, respectively (p = 0.03). In qualitative analysis, i.e., positivity of the procedure suggesting a large-vessel involvement, the concordance rate between both procedures was 0.85 [0.64-1]. In quantitative analysis, i.e., per-segment analysis in both procedures, the global concordance rate was 0.64 [0.54-0.75]. Using FDG-PET/CT as a reference, CTA showed excellent sensitivity (95%) and specificity (100%) in a per-patient analysis. In a per-segment analysis, sensitivity and specificity were 61% and 97.9%, respectively. CTA and FDG-PET/CT were both able to detect large-vessel involvement in GCA with comparable results in a per-patient analysis. However, PET/CT showed higher performance in a per-segment analysis, especially in the detection of inflammation of the aorta's branches. (orig.)
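The concordance index used in this study, Cohen's kappa, corrects the observed agreement between two raters (here, the two imaging modalities) for the agreement expected by chance from their marginal frequencies: kappa = (p_o − p_e) / (1 − p_e). A minimal sketch; the example per-segment ratings are invented, not the study's data.

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa for two categorical raters over the same items."""
    a, b = np.asarray(a), np.asarray(b)
    cats = np.union1d(a, b)
    p_o = np.mean(a == b)                                   # observed agreement
    p_e = sum(np.mean(a == c) * np.mean(b == c) for c in cats)  # chance agreement
    return (p_o - p_e) / (1.0 - p_e)

# Hypothetical per-segment positivity calls (1 = involved, 0 = not):
pet = [1, 1, 0, 1, 0, 0, 1, 1]
cta = [1, 0, 0, 1, 0, 0, 1, 1]
kappa = cohens_kappa(pet, cta)
```

A kappa of 0 means no better than chance and 1 means perfect agreement, so the study's per-patient value of 0.85 indicates substantial concordance.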

  20. Giant-cell arteritis. Concordance study between aortic CT angiography and FDG-PET/CT in detection of large-vessel involvement

    Energy Technology Data Exchange (ETDEWEB)

    Boysson, Hubert de; Dumont, Anael; Boutemy, Jonathan; Maigne, Gwenola; Martin Silva, Nicolas; Sultan, Audrey; Bienvenu, Boris; Aouba, Achille [Caen University Hospital, Department of Internal Medicine, Caen (France); Liozon, Eric; Ly, Kim Heang [Limoges University Hospital, Department of Internal Medicine, Limoges (France); Lambert, Marc [Lille University Hospital, Department of Internal Medicine, Lille (France); Aide, Nicolas [Caen University Hospital, Department of Nuclear Medicine, Caen (France); INSERM U1086 ' ' ANTICIPE' ' , Francois Baclesse Cancer Centre, Caen (France); Manrique, Alain [Caen University Hospital, Department of Nuclear Medicine, Caen (France); Normandy University, Caen (France)

    2017-12-15

    The purpose of our study was to assess the concordance of aortic CT angiography (CTA) and FDG-PET/CT in the detection of large-vessel involvement at diagnosis in patients with giant-cell arteritis (GCA). We created a multicenter cohort of patients with GCA diagnosed between 2010 and 2015, and who underwent both FDG-PET/CT and aortic CTA before or in the first ten days following treatment introduction. Eight vascular segments were studied on each procedure. We calculated concordance between both imaging techniques in a per-patient and a per-segment analysis, using Cohen's kappa concordance index. We included 28 patients (21/7 women/men, median age 67 [56-82]). Nineteen patients had large-vessel involvement on PET/CT and 18 of these patients also presented positive findings on CTA. In a per-segment analysis, a median of 5 [1-7] and 3 [1-6] vascular territories were involved on positive PET/CT and CTA, respectively (p = 0.03). In qualitative analysis, i.e., positivity of the procedure suggesting a large-vessel involvement, the concordance rate between both procedures was 0.85 [0.64-1]. In quantitative analysis, i.e., per-segment analysis in both procedures, the global concordance rate was 0.64 [0.54-0.75]. Using FDG-PET/CT as a reference, CTA showed excellent sensitivity (95%) and specificity (100%) in a per-patient analysis. In a per-segment analysis, sensitivity and specificity were 61% and 97.9%, respectively. CTA and FDG-PET/CT were both able to detect large-vessel involvement in GCA with comparable results in a per-patient analysis. However, PET/CT showed higher performance in a per-segment analysis, especially in the detection of inflammation of the aorta's branches. (orig.)

  1. MODELLING OF CARBON MONOXIDE AIR POLLUTION IN LARGE CITIES BY EVALUATION OF SPECTRAL LANDSAT8 IMAGES

    Directory of Open Access Journals (Sweden)

    M. Hamzelo

    2015-12-01

    Air pollution in large cities is one of the major problems; resolving and reducing it requires multiple measures and environmental management. The main sources of this pollution are industrial, urban and transport activities, which release large amounts of contaminants into the air and reduce its quality. Given the variety of pollutants, the high volume of production and the local distribution of manufacturing centers, testing and measuring emissions is difficult. Substances such as carbon monoxide, sulfur dioxide, unburned hydrocarbons and lead compounds cause air pollution, and carbon monoxide is the most important. Today, systems for data exchange, processing, analysis and modeling are important pillars of air quality management and control. In this study, the spatial distribution of carbon monoxide in Tehran was modeled in 11 maps over a period of one year, from the beginning of 2014 until the beginning of 2015, using the spectral signature of carbon monoxide as the most significant pollutant gas, LANDSAT8 images (which offer better spatial resolution than weather satellites in the appropriate spectral bands), the SAM classification algorithm and a Geographic Information System (GIS). For validation, the created maps were compared with the maps provided by the Tehran air quality control company. The comparison was carried out with an error matrix, and the results were examined with four accuracy measures: overall accuracy, producer accuracy, user accuracy and the kappa coefficient. The average accuracy was about 80%, which indicates the suitability of the method and data used for modeling.
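The SAM (Spectral Angle Mapper) classifier mentioned above assigns a pixel to the class whose reference signature makes the smallest spectral angle with the pixel's spectrum, treating each spectrum as a vector in band space. A minimal sketch of the angle computation (illustrative, not the study's implementation):

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral Angle Mapper (SAM) distance: the angle in radians between
    a pixel spectrum and a reference spectral signature.
    Smaller angle = closer spectral match; the angle is insensitive to
    overall brightness because it depends only on vector direction."""
    pixel = np.asarray(pixel, dtype=float)
    reference = np.asarray(reference, dtype=float)
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))   # clip guards rounding error
```

Classification then picks `argmin` of the angle over all reference signatures (here, the carbon monoxide signature versus other cover types).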

  2. Wall modeled large eddy simulations of complex high Reynolds number flows with synthetic inlet turbulence

    International Nuclear Information System (INIS)

    Patil, Sunil; Tafti, Danesh

    2012-01-01

    Highlights: ► Large eddy simulation. ► Wall layer modeling. ► Synthetic inlet turbulence. ► Swirl flows. - Abstract: Large eddy simulations of complex high Reynolds number flows are carried out with the near wall region being modeled with a zonal two layer model. A novel formulation for solving the turbulent boundary layer equation for the effective tangential velocity in a generalized co-ordinate system is presented and applied in the near wall zonal treatment. This formulation reduces the computational time in the inner layer significantly compared to the conventional two layer formulations present in the literature and is most suitable for complex geometries involving body fitted structured and unstructured meshes. The cost effectiveness and accuracy of the proposed wall model, used with the synthetic eddy method (SEM) to generate inlet turbulence, is investigated in turbulent channel flow, flow over a backward facing step, and confined swirling flows at moderately high Reynolds numbers. Predictions are compared with available DNS, experimental LDV data, as well as wall resolved LES. In all cases, there is at least an order of magnitude reduction in computational cost with no significant loss in prediction accuracy.
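    Wall models of this kind ultimately supply the wall shear stress to the outer LES. The paper solves a turbulent boundary-layer equation in generalized coordinates; as a much simpler illustration of the same ingredient, the friction velocity can be recovered from the log law by fixed-point iteration (the constants κ = 0.41, B = 5.2 are conventional, and the inputs in the test are illustrative, not the paper's):

```python
import math

def u_tau_from_loglaw(u, y, nu, kappa=0.41, B=5.2, iters=50):
    """Friction velocity u_tau from the log law u/u_tau = (1/kappa) ln(y u_tau / nu) + B,
    solved by fixed-point iteration. Assumes the matching point y lies in the
    logarithmic region; a zonal two-layer model solves a fuller equation instead."""
    ut = max(u * 0.05, 1e-6)   # initial guess
    for _ in range(iters):
        ut = u / (math.log(y * ut / nu) / kappa + B)
    return ut
```

    The wall shear stress then follows as tau_w = rho * u_tau**2, which is what the outer LES actually consumes as a boundary condition.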

  3. Childhood craniopharyngioma: greater hypothalamic involvement before surgery is associated with higher homeostasis model insulin resistance index

    Science.gov (United States)

    Trivin, Christine; Busiah, Kanetee; Mahlaoui, Nizar; Recasens, Christophe; Souberbielle, Jean-Claude; Zerah, Michel; Sainte-Rose, Christian; Brauner, Raja

    2009-01-01

    Background Obesity seems to be linked to the hypothalamic involvement in craniopharyngioma. We evaluated the pre-surgery relationship between the degree of this involvement on magnetic resonance imaging and insulin resistance, as evaluated by the homeostasis model insulin resistance index (HOMA). As insulin-like growth factor 1, leptin, soluble leptin receptor (sOB-R) and ghrelin may also be involved, we compared their plasma concentrations and their link to weight change. Methods 27 children with craniopharyngioma were classified as either grade 0 (n = 7, no hypothalamic involvement), grade 1 (n = 8, compression without involvement), or grade 2 (n = 12, severe involvement). Results Despite having similar body mass indexes (BMI), the grade 2 patients had higher glucose, insulin and HOMA before surgery than the grade 0 patients (P = 0.02). The hypothalamic involvement of the craniopharyngioma before surgery thus seems to determine the degree of insulin resistance, regardless of the BMI. The pre-surgery HOMA values were correlated with the post-surgery weight gain. This suggests that obesity should be prevented by reducing insulin secretion in those cases with hypothalamic involvement. PMID:19341477
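    The HOMA index used in this study has a standard closed form: fasting glucose (mmol/L) times fasting insulin (µU/mL) divided by 22.5. A one-line sketch of that standard formula (function name is ours):

```python
def homa_ir(glucose_mmol_per_l, insulin_uU_per_ml):
    """Homeostasis model assessment of insulin resistance (HOMA-IR).

    Standard formula: fasting glucose (mmol/L) x fasting insulin (uU/mL) / 22.5.
    Example: homa_ir(5.0, 4.5) == 1.0 (roughly normal fasting values).
    """
    return glucose_mmol_per_l * insulin_uU_per_ml / 22.5
```

    Note the units: if glucose is reported in mg/dL, the equivalent constant is 405 rather than 22.5.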

  4. Job involvement of primary healthcare employees: does a service provision model play a role?

    Science.gov (United States)

    Koponen, Anne M; Laamanen, Ritva; Simonsen-Rehn, Nina; Sundell, Jari; Brommels, Mats; Suominen, Sakari

    2010-05-01

    To investigate whether the development of job involvement of primary healthcare (PHC) employees in Southern Municipality (SM), where PHC services were outsourced to an independent non-profit organisation, differed from that in three comparison municipalities (M1, M2, M3) with municipal service providers. The associations of job involvement with factors describing the psychosocial work environment were also investigated. A panel mail survey was conducted in 2000-02 in Finland (n=369, response rates 73% and 60%). The data were analysed by descriptive statistics and multivariate linear regression analysis. Despite the favourable development in the psychosocial work environment, job involvement decreased most in SM, which faced the biggest organisational changes. Job involvement also decreased in M3, where the psychosocial work environment deteriorated most. Job involvement in 2002 was best predicted by a high baseline level of interactional justice and work control, a positive change in interactional justice, and higher age. Other factors, such as organisational stability, also seemed to play a role; after controlling for the effect of the psychosocial work characteristics, job involvement was higher in M3 than in SM. Outsourcing of PHC services may decrease job involvement at least during the first years. A particular service provision model is better than the others only if it is superior in providing a favourable and stable psychosocial work environment.

  5. Software Tools For Large Scale Interactive Hydrodynamic Modeling

    NARCIS (Netherlands)

    Donchyts, G.; Baart, F.; van Dam, A; Jagers, B; van der Pijl, S.; Piasecki, M.

    2014-01-01

    Developing easy-to-use software that combines components for simultaneous visualization, simulation and interaction is a great challenge, mainly because it involves a number of disciplines, such as computational fluid dynamics, computer graphics, and high-performance computing. One of the main…

  6. Dynamics Modeling and Simulation of Large Transport Airplanes in Upset Conditions

    Science.gov (United States)

    Foster, John V.; Cunningham, Kevin; Fremaux, Charles M.; Shah, Gautam H.; Stewart, Eric C.; Rivers, Robert A.; Wilborn, James E.; Gato, William

    2005-01-01

    As part of NASA's Aviation Safety and Security Program, research has been in progress to develop aerodynamic modeling methods for simulations that accurately predict the flight dynamics characteristics of large transport airplanes in upset conditions. The motivation for this research stems from the recognition that simulation is a vital tool for addressing loss-of-control accidents, including applications to pilot training, accident reconstruction, and advanced control system analysis. The ultimate goal of this effort is to contribute to the reduction of the fatal accident rate due to loss-of-control. Research activities have involved accident analyses, wind tunnel testing, and piloted simulation. Results have shown that significant improvements in simulation fidelity for upset conditions, compared to current training simulations, can be achieved using state-of-the-art wind tunnel testing and aerodynamic modeling methods. This paper provides a summary of research completed to date and includes discussion on key technical results, lessons learned, and future research needs.

  7. An International Collaborative Study of Outcome and Prognostic Factors in Patients with Secondary CNS Involvement By Diffuse Large B-Cell Lymphoma

    DEFF Research Database (Denmark)

    El-Galaly, Tarec Christoffer; Cheah, Chan Yoon; Bendtsen, Mette Dahl

    2016-01-01

    Background: Secondary CNS involvement (SCNS) is a detrimental complication seen in ~5% of patients with diffuse large B-cell lymphoma (DLBCL) treated with modern immunochemotherapy. Data from older series report short survival following SCNS, typically <6 months. However, data in patients… A further aim was to determine prognostic factors after SCNS. Patients and methods: We performed a retrospective study of patients diagnosed with SCNS during or after frontline immunochemotherapy (R-CHOP or equivalently effective regimens). SCNS was defined as new involvement of the CNS (parenchymal, leptomeningeal, and/or eye)…

  8. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    NARCIS (Netherlands)

    Loon, van A.F.; Huijgevoort, van M.H.J.; Lanen, van H.A.J.

    2012-01-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is how well large-scale models simulate the propagation from meteorological to hydrological drought…

  9. Helpful Components Involved in the Cognitive-Experiential Model of Dream Work

    Science.gov (United States)

    Tien, Hsiu-Lan Shelley; Chen, Shuh-Chi; Lin, Chia-Huei

    2009-01-01

    The purpose of the study was to examine the helpful components involved in Hill's cognitive-experiential dream work model. Participants were 27 volunteer clients from colleges and universities in the northern and central parts of Taiwan. Each of the clients received 1-2 sessions of dream interpretation. The cognitive-experiential dream work model…

  10. Predicting Preschoolers' Attachment Security from Fathers' Involvement, Internal Working Models, and Use of Social Support

    Science.gov (United States)

    Newland, Lisa A.; Coyl, Diana D.; Freeman, Harry

    2008-01-01

    Associations between preschoolers' attachment security, fathers' involvement (i.e. parenting behaviors and consistency) and fathering context (i.e. fathers' internal working models (IWMs) and use of social support) were examined in a subsample of 102 fathers, taken from a larger sample of 235 culturally diverse US families. The authors predicted…

  11. Model correction factor method for reliability problems involving integrals of non-Gaussian random fields

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  12. Retrospective review of adverse incidents involving passengers seated in wheeled mobility devices while traveling in large accessible transit vehicles.

    Science.gov (United States)

    Frost, Karen L; Bertocci, Gina

    2010-04-01

    To characterize wheeled mobility device (WhMD) adverse incidents on large accessible transit vehicles (LATVs) based on vehicle motion, WhMD activity during the incident, incident scenario and injury. Retrospective records review of WhMD passengers traveling on LATVs while remaining seated in their WhMDs. Adverse incidents were characterized based on vehicle motion, WhMD activity during the incident, and incident scenario. Injury was characterized based on outcome, medical attention sought, vehicle activity, WhMD activity and incident scenario. 115 WhMD-related incident reports for the years 2000-2005 were analyzed. Most incidents occurred while the LATV was stopped (73.9%) and during ingress/egress (42.6%); when the LATV was moving, incidents occurred most often at the securement station (33.9%). The combination of WhMD tipping and passenger falling (43.4%) occurred most frequently, and was 1.8 times more likely to occur during ingress/egress than at the securement station. One-third (33.6%) of all incidents resulted in injury, and injuries were equally distributed between ingress/egress (43.6%) and the securement station (43.6%). WhMD users have a greater chance of incurring injury during ingress/egress than during transit. Research is needed to objectively assess the real-world transportation experiences of WhMD passengers, and to assess the adequacy of existing federal legislation/guidelines for accessible ramps used in public transportation. Copyright 2009 IPEM. Published by Elsevier Ltd. All rights reserved.

  13. A Research Framework for Understanding the Practical Impact of Family Involvement in the Juvenile Justice System: The Juvenile Justice Family Involvement Model.

    Science.gov (United States)

    Walker, Sarah Cusworth; Bishop, Asia S; Pullmann, Michael D; Bauer, Grace

    2015-12-01

    Family involvement is recognized as a critical element of service planning for children's mental health, welfare and education. For the juvenile justice system, however, parents' roles in this system are complex due to youths' legal rights, public safety, a process which can legally position parents as plaintiffs, and a historical legacy of blaming parents for youth indiscretions. Three recent national surveys of juvenile justice-involved parents reveal that the current paradigm elicits feelings of stress, shame and distrust among parents and is likely leading to worse outcomes for youth, families and communities. While research on the impact of family involvement in the justice system is starting to emerge, the field currently has no organizing framework to guide a research agenda, interpret outcomes or translate findings for practitioners. We propose a research framework for family involvement that is informed by a comprehensive review and content analysis of current, published arguments for family involvement in juvenile justice along with a synthesis of family involvement efforts in other child-serving systems. In this model, family involvement is presented as an ascending, ordinal concept beginning with (1) exclusion, and moving toward climates characterized by (2) information-giving, (3) information-eliciting and (4) full, decision-making partnerships. Specific examples of how courts and facilities might align with these levels are described. Further, the model makes predictions for how involvement will impact outcomes at multiple levels with applications for other child-serving systems.

  14. Large scale hydro-economic modelling for policy support

    Science.gov (United States)

    de Roo, Ad; Burek, Peter; Bouraoui, Faycal; Reynaud, Arnaud; Udias, Angel; Pistocchi, Alberto; Lanzanova, Denis; Trichakis, Ioannis; Beck, Hylke; Bernhard, Jeroen

    2014-05-01

    To support European Union water policy making and policy monitoring, a hydro-economic modelling environment has been developed to assess optimum combinations of water retention measures, water savings measures, and nutrient reduction measures for continental Europe. This modelling environment consists of linking the agricultural CAPRI model, the LUMP land use model, the LISFLOOD water quantity model, the EPIC water quality model, the LISQUAL combined water quantity, quality and hydro-economic model, and a multi-criteria optimisation routine. With this modelling environment, river basin scale simulations are carried out to assess the effects of water-retention measures, water-saving measures, and nutrient-reduction measures on several hydro-chemical indicators, such as the Water Exploitation Index (WEI), nitrate and phosphate concentrations in rivers, the 50-year return period river discharge as an indicator for flooding, and economic losses due to water scarcity for the agricultural sector, the manufacturing-industry sector, the energy-production sector and the domestic sector, as well as the economic loss due to flood damage. Recently, this modelling environment has been extended with a groundwater model to evaluate the effects of measures on the average groundwater table and available resources. Also, water allocation rules are addressed, with environmental flow included as a minimum requirement for the environment. Economic functions are currently being updated as well. Recent developments and examples will be shown and discussed, as well as open challenges.
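    Among the indicators named above, the Water Exploitation Index is conventionally the ratio of water abstraction to long-term renewable freshwater resources. A minimal sketch; the function name and the stress threshold mentioned in the comment are common conventions, not values taken from the paper:

```python
def water_exploitation_index(abstraction_hm3, renewable_resources_hm3):
    """Water Exploitation Index (%) = abstraction / long-term renewable resources.

    Values above roughly 20% are conventionally read as water stress
    (a common threshold in European water reporting, not from this paper).
    """
    return 100.0 * abstraction_hm3 / renewable_resources_hm3
```

    The index is usually computed per river basin and per year, so the two inputs must cover the same territory and period.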

  15. A devolved model for public involvement in the field of mental health research: case study learning.

    Science.gov (United States)

    Moule, Pam; Davies, Rosie

    2016-12-01

    Patient and public involvement in all aspects of research is espoused and there is continued interest in understanding its wider impact. Existing investigations have identified both beneficial outcomes and remaining issues. This paper presents the impact of public involvement in one case study led by a mental health charity, conducted as part of a larger research project. The case study used a devolved model of working, contracting with service user-led organizations to maximize the benefits of local knowledge on the implementation of personalized budgets, support recruitment and local user-led organizations. The aim was to understand the processes and impact of public involvement in a devolved model of working with user-led organizations. Multiple data collection methods were employed throughout 2012. These included interviews with the researchers (n = 10) and research partners (n = 5), observation of two case study meetings and the review of key case study documentation. Analysis was conducted in NVivo10 using a coding framework developed following a literature review. Five key themes emerged from the data: Devolved model, Nature of involvement, Enabling factors, Implementation challenges and Impact. While there were some challenges in implementing the devolved model, it is clear that our findings add to the growing understanding of the positive benefits research partners can bring to complex research. A devolved model can support the involvement of user-led organizations in research if there is a clear understanding of the underpinning philosophy and support mechanisms are in place. © 2015 The Authors. Health Expectations Published by John Wiley & Sons Ltd.

  16. Large scale debris-flow hazard assessment: a geotechnical approach and GIS modelling

    Directory of Open Access Journals (Sweden)

    G. Delmonaco

    2003-01-01

    Full Text Available A deterministic distributed model has been developed for large-scale debris-flow hazard analysis in the basin of the River Vezza (Tuscany Region, Italy). This area (51.6 km²) was affected by over 250 landslides, classified as debris/earth flows mainly involving the metamorphic geological formations outcropping in the area and triggered by the pluviometric event of 19 June 1996. In recent decades, landslide hazard and risk analysis has benefited from the development of GIS techniques, which permit the generalisation, synthesis and modelling of stability conditions in large-scale investigations (>1:10 000). In this work, the main results of applying a geotechnical model coupled with a hydrological model for debris-flow hazard assessment are reported. The analysis was developed through the following steps: a landslide inventory map derived from aerial photo interpretation and direct field survey; generation of a database and digital maps; elaboration of a DTM and derived themes (i.e. a slope angle map); definition of a superficial soil thickness map; geotechnical soil characterisation through back-analysis on test slopes and laboratory test analysis; inference of the influence of precipitation, for distinct return times, on ponding time and pore-pressure generation; implementation of a slope stability model (infinite slope model); and generalisation of the safety factor for estimated rainfall events with different return times. This approach allowed the identification of potential source areas of debris-flow triggering for precipitation events with estimated return times of 10, 50, 75 and 100 years. The model shows a dramatic decrease in safety conditions for the simulation related to a 75-year return-time rainfall event, corresponding to an estimated cumulated daily intensity of 280-330 mm. This value can be considered the hydrological triggering threshold.
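    The infinite slope model named above has a standard textbook factor-of-safety expression. A sketch under those textbook assumptions (planar slip parallel to the slope, uniform soil); the symbols and default values are conventional, and the paper's exact parameterisation may differ:

```python
import math

def infinite_slope_fs(c, phi_deg, gamma, z, beta_deg, m=0.0, gamma_w=9.81):
    """Factor of safety for the infinite slope model; FS < 1 indicates instability.

    c: effective cohesion (kPa), phi_deg: effective friction angle (deg),
    gamma: soil unit weight (kN/m^3), z: soil thickness (m),
    beta_deg: slope angle (deg), m: fraction of z that is saturated.
    """
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    u = gamma_w * m * z * math.cos(beta) ** 2                       # pore pressure
    resisting = c + (gamma * z * math.cos(beta) ** 2 - u) * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving
```

    Raising the saturation fraction m (the role of the rainfall scenarios in the study) increases pore pressure and lowers the factor of safety, which is the mechanism behind the dramatic drop reported for the 75-year event.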

  17. Modeling Temporal Behavior in Large Networks: A Dynamic Mixed-Membership Model

    Energy Technology Data Exchange (ETDEWEB)

    Rossi, R; Gallagher, B; Neville, J; Henderson, K

    2011-11-11

    Given a large time-evolving network, how can we model and characterize the temporal behaviors of individual nodes (and network states)? How can we model the behavioral transition patterns of nodes? We propose a temporal behavior model that captures the 'roles' of nodes in the graph and how they evolve over time. The proposed dynamic behavioral mixed-membership model (DBMM) is scalable, fully automatic (no user-defined parameters), non-parametric/data-driven (no specific functional form or parameterization), interpretable (identifies explainable patterns), and flexible (applicable to dynamic and streaming networks). Moreover, the interpretable behavioral roles are generalizable and computationally efficient, and the model natively supports attributes. We applied our model for (a) identifying patterns and trends of nodes and network states based on the temporal behavior, (b) predicting future structural changes, and (c) detecting unusual temporal behavior transitions. We use eight large real-world datasets from different time-evolving settings (dynamic and streaming). In particular, we model the evolving mixed-memberships and the corresponding behavioral transitions of Twitter, Facebook, IP-Traces, Email (University), Internet AS, Enron, Reality, and IMDB. The experiments demonstrate the scalability, flexibility, and effectiveness of our model for identifying interesting patterns, detecting unusual structural transitions, and predicting the future structural changes of the network and individual nodes.
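    Mixed-membership role models of this kind are commonly built on a non-negative matrix factorization of a node-feature matrix, giving each node a soft membership over latent roles. A generic sketch using Lee-Seung multiplicative updates; this is a stand-in illustration of the idea, not the authors' DBMM code:

```python
import numpy as np

def nmf_roles(F, r, iters=2000, seed=0):
    """Factor a nonnegative node-feature matrix F (nodes x features) into
    role memberships G (nodes x r) and role definitions H (r x features),
    so that F ~ G @ H, via Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    n, m = F.shape
    G = rng.random((n, r)) + 0.1
    H = rng.random((r, m)) + 0.1
    eps = 1e-9                                # guard against division by zero
    for _ in range(iters):
        H *= (G.T @ F) / (G.T @ G @ H + eps)
        G *= (F @ H.T) / (G @ H @ H.T + eps)
    return G, H
```

    Rows of G can be normalized to sum to one to read them as mixed-memberships; tracking how those rows change between snapshots is the kind of behavioral transition the DBMM models.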

  18. Truck Route Choice Modeling using Large Streams of GPS Data

    Science.gov (United States)

    2017-07-31

    The primary goal of this research was to use large streams of truck-GPS data to analyze travel routes (or paths) chosen by freight trucks to travel between different origin and destination (OD) location pairs in metropolitan regions of Florida. Two s...

  19. Identification of low order models for large scale processes

    NARCIS (Netherlands)

    Wattamwar, S.K.

    2010-01-01

    Many industrial chemical processes are complex, multi-phase and large scale in nature. These processes are characterized by various nonlinear physiochemical effects and fluid flows. Such processes often show coexistence of fast and slow dynamics during their time evolution. The increasing demand…

  20. Modelling the fathering role: Experience in the family of origin and father involvement

    Directory of Open Access Journals (Sweden)

    Mihić Ivana

    2012-01-01

    Full Text Available The study presented in this paper deals with the effects of experiences with the father in the family of origin on the fathering role in the family of procreation. The results of studies so far point to the great importance of such experiences in parental role modelling, while recent approaches have suggested the concept of an introjected notion, or an internal working model of the fathering role, as the way to operationalise the transgenerational transfer. The study included 247 two-parent families whose oldest child attended preschool education. Fathers provided information on self-assessed involvement via the Inventory of father involvement, while both fathers and mothers gave information on introjected experiences from the family of origin via the inventory Presence of the father in the family of origin. It was shown that the father's experiences from the family of origin had significant direct effects on his involvement in child-care. Very important experiences were those of negative emotional exchange, physical closeness and availability of the father, as well as beliefs about the importance of the father as a parent. Although maternal experiences from the family of origin did not contribute significantly to father involvement, shared beliefs about the father's importance as a parent in the parenting alliance had an effect on greater involvement in child-care. The data confirm the hypotheses on modelling of the fathering role, but also raise the issue of intergenerational maintenance of traditional forms of father involvement in families in Serbia.

  1. A numerical shoreline model for shorelines with large curvature

    DEFF Research Database (Denmark)

    Kærgaard, Kasper Hauberg; Fredsøe, Jørgen

    2013-01-01

    …orthogonal horizontal directions are used. The volume error in the sediment continuity equation which is thereby introduced is removed through an iterative procedure. The model treats the shoreline changes by computing the sediment transport in a 2D coastal area model, and then integrating the sediment…

  2. Modeling Large Time Series for Efficient Approximate Query Processing

    DEFF Research Database (Denmark)

    Perera, Kasun S; Hahmann, Martin; Lehner, Wolfgang

    2015-01-01

    …query statistics derived from experiments and when running the system. Our approach can also reduce communication load by exchanging models instead of data. To allow seamless integration of model-based querying into traditional data warehouses, we introduce a SQL compatible query terminology. Our…

  3. Large scale experiments as a tool for numerical model development

    DEFF Research Database (Denmark)

    Kirkegaard, Jens; Hansen, Erik Asp; Fuchs, Jesper

    2003-01-01

    Experimental modelling is an important tool for study of hydrodynamic phenomena. The applicability of experiments can be expanded by the use of numerical models and experiments are important for documentation of the validity of numerical tools. In other cases numerical tools can be applied...

  4. Misspecified poisson regression models for large-scale registry data

    DEFF Research Database (Denmark)

    Grøn, Randi; Gerds, Thomas A.; Andersen, Per K.

    2016-01-01

    …working models that are then likely misspecified. To support and improve conclusions drawn from such models, we discuss methods for sensitivity analysis, for estimation of average exposure effects using aggregated data, and a semi-parametric bootstrap method to obtain robust standard errors. The methods…

  5. Fast Kalman-like filtering for large-dimensional linear and Gaussian state-space models

    KAUST Repository

    Ait-El-Fquih, Boujemaa; Hoteit, Ibrahim

    2015-01-01

    This paper considers the filtering problem for linear and Gaussian state-space models with large dimensions, a setup in which the optimal Kalman Filter (KF) might not be applicable owing to the excessive cost of manipulating huge covariance matrices. Among the most popular alternatives that enable cheaper and reasonable computation is the Ensemble KF (EnKF), a Monte Carlo-based approximation. In this paper, we consider a class of a posteriori distributions with diagonal covariance matrices and propose fast approximate deterministic-based algorithms based on the Variational Bayesian (VB) approach. More specifically, we derive two iterative KF-like algorithms that differ in the way they operate between two successive filtering estimates; one involves a smoothing estimate and the other involves a prediction estimate. Despite its iterative nature, the prediction-based algorithm provides a computational cost that is, on the one hand, independent of the number of iterations in the limit of very large state dimensions, and on the other hand, always much smaller than the cost of the EnKF. The cost of the smoothing-based algorithm depends on the number of iterations that may, in some situations, make this algorithm slower than the EnKF. The performances of the proposed filters are studied and compared to those of the KF and EnKF through a numerical example.
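    The EnKF that this paper uses as its baseline replaces the exact error covariance with a sample covariance over an ensemble of states. A minimal stochastic-EnKF analysis step (perturbed observations); the shapes and the scalar example in the test are illustrative, not from the paper:

```python
import numpy as np

def enkf_analysis(ensemble, y_obs, H, R, rng):
    """One stochastic-EnKF analysis step (perturbed observations).

    ensemble: (n_members, n_state) prior ensemble; y_obs: (n_obs,) observation;
    H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs-error covariance.
    """
    n = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)            # ensemble anomalies
    P = X.T @ X / (n - 1)                           # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    # one perturbed observation per member, drawn from N(y_obs, R)
    Y = y_obs + rng.multivariate_normal(np.zeros(len(y_obs)), R, size=n)
    return ensemble + (Y - ensemble @ H.T) @ K.T
```

    For large state dimensions one never forms P explicitly as done here; the gain is computed from the anomaly matrices instead, which is exactly the cost bottleneck the paper's VB-based alternatives aim to undercut.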

  7. Benchmarking Deep Learning Models on Large Healthcare Datasets.

    Science.gov (United States)

    Purushotham, Sanjay; Meng, Chuizheng; Che, Zhengping; Liu, Yan

    2018-06-04

    Deep learning models (aka Deep Neural Networks) have revolutionized many fields, including computer vision, natural language processing, and speech recognition, and are increasingly used in clinical healthcare applications. However, few works exist which have benchmarked the performance of deep learning models against state-of-the-art machine learning models and prognostic scoring systems on publicly available healthcare datasets. In this paper, we present benchmarking results for several clinical prediction tasks, such as mortality prediction, length-of-stay prediction, and ICD-9 code group prediction, using deep learning models, an ensemble of machine learning models (the Super Learner algorithm), and the SAPS II and SOFA scores. We used the Medical Information Mart for Intensive Care III (MIMIC-III) (v1.4) publicly available dataset, which includes all patients admitted to an ICU at the Beth Israel Deaconess Medical Center from 2001 to 2012, for the benchmarking tasks. Our results show that deep learning models consistently outperform all the other approaches, especially when the 'raw' clinical time series data is used as input features to the models. Copyright © 2018 Elsevier Inc. All rights reserved.

  8. Multilevel method for modeling large-scale networks.

    Energy Technology Data Exchange (ETDEWEB)

    Safro, I. M. (Mathematics and Computer Science)

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of real networks, generating artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to understanding network modeling, investigating network structure and properties, and generating artificial networks. Most modern network generation methods are based either on various random graph models (reinforced by a set of properties such as a power-law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of the network hierarchy but with the same finest elements of the network. However, in many cases the methods that include randomization and replication elements on the finest relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability of algorithms on real (artificial) data that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, randomization and satisfying some attribute at the same time can abolish those topological attributes that have been undefined or hidden from…

  9. Multiscale modeling of large deformations in 3-D polycrystals

    International Nuclear Information System (INIS)

    Lu Jing; Maniatty, Antoinette; Misiolek, Wojciech; Bandar, Alexander

    2004-01-01

    An approach for modeling 3-D polycrystals, linking to the macroscale, is presented. A Potts-type model is used to generate statistically representative grain structures with periodicity to allow scale-linking. The grain structures are compared to experimentally observed grain structures to validate that they are representative. A macroscale model of a compression test is compared against an experimental compression test for an Al-Mg-Si alloy to determine various deformation paths at different locations in the samples. These deformation paths are then applied to the experimental grain structure using a scale-bridging technique. Preliminary results from this work will be presented and discussed

  10. A model for recovery kinetics of aluminum after large strain

    DEFF Research Database (Denmark)

    Yu, Tianbo; Hansen, Niels

    2012-01-01

    A model is suggested to analyze recovery kinetics of heavily deformed aluminum. The model is based on the hardness of isothermal annealed samples before recrystallization takes place, and it can be extrapolated to longer annealing times to factor out the recrystallization component of the hardness...... for conditions where recovery and recrystallization overlap. The model is applied to the isothermal recovery at temperatures between 140 and 220°C of commercial purity aluminum deformed to true strain 5.5. EBSD measurements have been carried out to detect the onset of discontinuous recrystallization. Furthermore...

  11. Large Scale Community Detection Using a Small World Model

    Directory of Open Access Journals (Sweden)

    Ranjan Kumar Behera

    2017-11-01

    Full Text Available In a social network, small or large communities within the network play a major role in deciding the functionalities of the network. Despite diverse definitions, communities in a network may be defined as groups of nodes that are more densely connected internally than to nodes outside the group. Revealing such hidden communities is one of the challenging research problems. A real-world social network follows the small-world phenomenon, which indicates that any two social entities can be reachable in a small number of steps. In this paper, nodes are mapped into communities based on random walks in the network. However, uncovering communities in large-scale networks is a challenging task due to the unprecedented growth in the size of social networks. A good number of community detection algorithms based on random walks exist in the literature; when large-scale social networks are considered, however, these algorithms are observed to take a considerably longer time. In this work, with the objective of improving efficiency, a parallel programming framework like Map-Reduce has been considered for uncovering the hidden communities in a social network. The proposed approach has been compared with some standard existing community detection algorithms on both synthetic and real-world datasets in order to examine its performance, and it is observed that the proposed algorithm is more efficient than the existing ones.
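
    The random-walk intuition behind such methods can be sketched as a toy label-propagation scheme. This is not the authors' algorithm and has none of the Map-Reduce parallelism; it only illustrates why short random walks reveal communities: each node repeatedly adopts the most frequent community label observed along a short walk from itself, so densely interconnected groups converge to a shared label.

```python
import random
from collections import Counter

def walk_label_communities(adj, walk_len=6, iters=20, seed=0):
    """Toy random-walk community detection: every node starts in its own
    community; on each sweep a node takes a short random walk and adopts
    the most frequent label it sees. adj maps node -> list of neighbours
    (every node is assumed to have at least one neighbour)."""
    rng = random.Random(seed)
    labels = {v: v for v in adj}
    for _ in range(iters):
        for v in adj:
            seen = Counter()
            cur = v
            for _ in range(walk_len):
                cur = rng.choice(adj[cur])
                seen[labels[cur]] += 1
            labels[v] = seen.most_common(1)[0][0]
    return labels
```

Since a walk can never leave a connected component, labels never leak between disconnected groups, which is the property the test below checks.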

  12. Modelling binaural processes involved in simultaneous reflection masking: limitations of current models

    DEFF Research Database (Denmark)

    Buchholz, Jörg

    2007-01-01

    Masked thresholds were measured for a single test reflection, masked by the direct sound, as a function of the reflection delay. This was done for diotic as well as for dichotic stimulus presentations and all stimuli were presented via headphones. The input signal was a 200-ms long broadband noise......, such as normalized cross-correlation models (e.g., Bernstein et al., 1999, JASA, pp. 870-876), the power-addition model (Zurek, 1979, JASA, pp. 1750-1757), or Equalization-Cancellation-based models (e.g., Breebaart et al., 2001, JASA, pp. 1074-1088), cannot account for the psychoacoustical data. The present talk...

  13. Canadian Whole-Farm Model Holos - Development, Stakeholder Involvement, and Model Application

    Science.gov (United States)

    Kroebel, R.; Janzen, H.; Beauchemin, K. A.

    2017-12-01

    Agriculture and Agri-Food Canada's Holos model, based mostly on emission factors, aims to explore the effect of management on Canadian whole-farm greenhouse gas emissions. The model includes 27 commonly grown annual and perennial crops, summer fallow, grassland, and 8 types of tree plantings, along with beef, dairy, sheep, swine and other livestock or poultry operations. Model outputs encompass net emissions of CO2, CH4, and N2O (in CO2 equivalents), calculated for various farm components. Where possible, algorithms are drawn from peer-reviewed publications. For consistency, Holos is aligned with the Canadian sustainability indicator and national greenhouse gas inventory objectives. Although primarily an exploratory tool for research, the model's design makes it accessible and instructive to agricultural producers, educators, and policy makers as well. Model development, therefore, proceeds iteratively, with extensive stakeholder feedback from training sessions and annual workshops. To make the model accessible to diverse users, the team developed a multi-layered interface, offering general farming scenarios for general use but giving researchers access to detailed coefficients and assumptions. The model relies on extensive climate, soil, and agronomic databases to populate regionally applicable default values, thereby minimizing keyboard entries. In an initial application, the model was used to assess greenhouse gas emissions from the Canadian beef production system; it showed that enteric methane accounted for 63% of total GHG emissions and that 84% of emissions originated from the cow-calf herd. The model further showed that GHG emission intensity per kg of beef, nationally, declined by 14% from 1981 to 2011, owing to gains in production efficiency. Holos is now being used to consider further potential advances through improved rations or other management options. We are now aiming to expand into questions of grazing management, and are developing a novel carbon

  14. Large-Scale Topic Detection and Language Model Adaptation

    National Research Council Canada - National Science Library

    Seymore, Kristie

    1997-01-01

    .... We have developed a language model adaptation scheme that takes a piece of text, chooses the most similar topic clusters from a set of over 5000 elemental topics, and uses topic specific language...

  15. A Large Scale, High Resolution Agent-Based Insurgency Model

    Science.gov (United States)

    2013-09-30

    CUDA) is NVIDIA Corporation's software development model for General Purpose Programming on Graphics Processing Units (GPGPU) (NVIDIA Corporation ...Conference. Argonne National Laboratory, Argonne, IL, October, 2005. NVIDIA Corporation. NVIDIA CUDA Programming Guide 2.0 [Online]. NVIDIA Corporation

  16. Large scale Bayesian nuclear data evaluation with consistent model defects

    International Nuclear Information System (INIS)

    Schnabel, G

    2015-01-01

    The aim of nuclear data evaluation is the reliable determination of cross sections and related quantities of the atomic nuclei. To this end, evaluation methods are applied which combine the information of experiments with the results of model calculations. The evaluated observables with their associated uncertainties and correlations are assembled into data sets, which are required for the development of novel nuclear facilities, such as fusion reactors for energy supply, and accelerator driven systems for nuclear waste incineration. The efficiency and safety of such future facilities depend on the quality of these data sets and thus also on the reliability of the applied evaluation methods. This work investigated the performance of the majority of available evaluation methods in two scenarios. The study indicated the importance of an essential component in these methods: the frequently ignored deficiency of nuclear models. Usually, nuclear models are based on approximations and thus their predictions may deviate from reliable experimental data. As demonstrated in this thesis, neglecting this possibility in evaluation methods can lead to estimates of observables which are inconsistent with experimental data. Due to this finding, an extension of Bayesian evaluation methods is proposed to take into account the deficiency of the nuclear models. The deficiency is modeled as a random function in terms of a Gaussian process and combined with the model prediction. This novel formulation conserves sum rules and allows the magnitude of the model deficiency to be estimated explicitly. Both features have been missing in available evaluation methods so far. Furthermore, two improvements of existing methods were developed in the course of this thesis. The first improvement concerns methods relying on Monte Carlo sampling. A Metropolis-Hastings scheme with a specific proposal distribution is suggested, which proved to be more efficient in the studied scenarios than the
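
    The core ingredient described here, modeling the defect as a Gaussian process added to the model prediction, can be sketched in a few lines. This is an illustrative toy only: the RBF kernel, its hyperparameters, and the tiny dense solver are assumptions, not taken from the thesis.

```python
import math

def rbf(a, b, ell=1.0, sig=1.0):
    """Squared-exponential covariance between two scalar inputs."""
    return sig * sig * math.exp(-0.5 * ((a - b) / ell) ** 2)

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def defect_posterior_mean(xs, residuals, xstar, noise=1e-6):
    """Posterior mean of a zero-mean GP model defect at xstar, conditioned
    on observed residuals (data minus model prediction) at points xs."""
    K = [[rbf(a, b) + (noise if i == j else 0.0) for j, b in enumerate(xs)]
         for i, a in enumerate(xs)]
    alpha = solve(K, residuals)
    return sum(rbf(xstar, a) * al for a, al in zip(xs, alpha))
```

With zero residuals the inferred defect vanishes everywhere, i.e. the evaluation falls back to the bare model prediction; nonzero residuals pull the combined prediction toward the data.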

  17. Use of models in large-area forest surveys: comparing model-assisted, model-based and hybrid estimation

    Science.gov (United States)

    Goran Stahl; Svetlana Saarela; Sebastian Schnell; Soren Holm; Johannes Breidenbach; Sean P. Healey; Paul L. Patterson; Steen Magnussen; Erik Naesset; Ronald E. McRoberts; Timothy G. Gregoire

    2016-01-01

    This paper focuses on the use of models for increasing the precision of estimators in large-area forest surveys. It is motivated by the increasing availability of remotely sensed data, which facilitates the development of models predicting the variables of interest in forest surveys. We present, review and compare three different estimation frameworks where...

  18. Characteristics of the large corporation-based, bureaucratic model among oecd countries - an foi model analysis

    Directory of Open Access Journals (Sweden)

    Bartha Zoltán

    2014-03-01

    Full Text Available Deciding on the development path of the economy has been a delicate question in economic policy, not least because of the trade-off effects which immediately worsen certain economic indicators as steps are taken to improve others. The aim of the paper is to present a framework that helps decide on such policy dilemmas. This framework is based on an analysis conducted among OECD countries with the FOI model (focusing on future, outside and inside potentials). Several development models can be deduced by this method, out of which only the large corporation-based, bureaucratic model is discussed in detail. The large corporation-based, bureaucratic model implies a development strategy focused on the creation of domestic safe havens. Based on country studies, it is concluded that well-performing safe havens require the active participation of the state. We find that, in countries adhering to this model, business competitiveness is sustained through intensive public support, and an active role taken by the government in education, research and development, in detecting and exploiting special market niches, and in encouraging sectorial cooperation.

  19. Degree of multicollinearity and variables involved in linear dependence in additive-dominant models

    Directory of Open Access Journals (Sweden)

    Juliana Petrini

    2012-12-01

    Full Text Available The objective of this work was to assess the degree of multicollinearity and to identify the variables involved in linear dependence relations in additive-dominant models. Data on birth weight (n=141,567), yearling weight (n=58,124), and scrotal circumference (n=20,371) of Montana Tropical composite cattle were used. Diagnosis of multicollinearity was based on the variance inflation factor (VIF) and on the evaluation of the condition indexes and eigenvalues of the correlation matrix among explanatory variables. The first model studied (RM) included the fixed effect of dam age class at calving and the covariates associated with the direct and maternal additive and non-additive effects. The second model (R) included all the effects of the RM model except the maternal additive effects. Multicollinearity was detected in both models for all traits considered, with VIF values of 1.03 - 70.20 for RM and 1.03 - 60.70 for R. Collinearity increased with the increase of variables in the model and the decrease in the number of observations, and it was classified as weak, with condition index values between 10.00 and 26.77. In general, the variables associated with additive and non-additive effects were involved in multicollinearity, partially due to the natural connection between these covariables as fractions of the biological types in breed composition.
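
    The VIF diagnostic used in this study is easy to reproduce for the special case of two predictors, where the auxiliary R² of regressing one predictor on the other is simply their squared Pearson correlation, so VIF = 1/(1 - r²). A hedged sketch (not the authors' computation or data):

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def vif_two_predictors(x1, x2):
    """VIF for a model with exactly two explanatory variables:
    the auxiliary regression R^2 equals r^2, so VIF = 1/(1 - r^2)."""
    r2 = pearson_r(x1, x2) ** 2
    return 1.0 / (1.0 - r2)
```

VIF near 1 indicates negligible collinearity; values above roughly 10 are the conventional warning level, consistent with the 1.03 - 70.20 range reported above.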

  20. Comprehensive personal witness: a model to enlarge missional involvement of the local church

    Directory of Open Access Journals (Sweden)

    Hancke, Frans

    2013-06-01

    Full Text Available In The Split-Level Fellowship, Wesley Baker analysed the role of individual members in the Church. He gave a name to a tragic phenomenon with which Church leaders are familiar. Although true of society in general, it is especially true of the church. Baker called the difference between the committed few and the uninvolved many Factor Beta. This reality triggers the question: Why are the majority of Christians in the world not missionally involved through personal witness, and which factors consequently influence personal witness and missional involvement? This article explains how the range of personal witness and missional involvement found in local churches is rooted in certain fundamental factors and conditions which mutually influence each other and ultimately contribute towards forming a certain paradigm. This paradigm acts as the basis from which certain behavioural patterns (witness) will manifest. The factors influencing witness are described as either accelerators or decelerators, and their relativity and mutual relationships are considered. Factors acting as decelerators can severely hamper or even annul witness, while accelerators, on the other hand, can have an immensely positive effect, enlarging the transformational influence of witness. In conclusion, a transformational model is developed through which paradigms can be influenced and eventually changed. This model fulfils a diagnostic and remedial function and will support local churches in enlarging the individual and corporate missional involvement of believers.

  1. Consumer input into health care: Time for a new active and comprehensive model of consumer involvement.

    Science.gov (United States)

    Hall, Alix E; Bryant, Jamie; Sanson-Fisher, Rob W; Fradgley, Elizabeth A; Proietto, Anthony M; Roos, Ian

    2018-03-07

    To ensure the provision of patient-centred health care, it is essential that consumers are actively involved in the process of determining and implementing health-care quality improvements. However, common strategies used to involve consumers in quality improvements, such as consumer membership on committees and collection of patient feedback via surveys, are ineffective and have a number of limitations, including: limited representativeness; tokenism; a lack of reliable and valid patient feedback data; infrequent assessment of patient feedback; delays in acquiring feedback; and uncertainty about how collected feedback is used to drive health-care improvements. We propose a new active model of consumer engagement that aims to overcome these limitations. This model involves the following: (i) the development of a new measure of consumer perceptions; (ii) low cost and frequent electronic data collection of patient views of quality improvements; (iii) efficient feedback to the health-care decision makers; and (iv) active involvement of consumers that fosters power to influence health system changes. © 2018 The Authors Health Expectations published by John Wiley & Sons Ltd.

  2. Portraiture of constructivist parental involvement: A model to develop a community of practice

    Science.gov (United States)

    Dignam, Christopher Anthony

    This qualitative research study addressed the problem of the lack of parental involvement in secondary school science. Increasing parental involvement is vital in supporting student academic achievement and social growth. The purpose of this emergent phenomenological study was to identify conditions required to successfully construct a supportive learning environment to form partnerships between students, parents, and educators. The overall research question in this study investigated the conditions necessary to successfully enlist parental participation with students during science inquiry investigations at the secondary school level. One hundred thirteen pairs of parents and students engaged in a 6-week scientific inquiry activity and recorded attitudinal data in dialogue journals, questionnaires, open-ended surveys, and during one-on-one interviews conducted by the researcher between individual parents and students. Comparisons and cross-interpretations of inter-rater, codified, triangulated data were utilized for identifying emergent themes. Data analysis revealed that the active involvement of parents in researching with their child during inquiry investigations, engaging in journaling, and assessing student performance fostered partnerships among students, parents, and educators and supported students' social skills development. The resulting model, employing constructivist leadership and enlisting parent involvement, provides conditions and strategies required to develop a community of practice that can help effect social change. The active involvement of parents fostered improved efficacy and a holistic mindset in parents, students, and teachers. Based on these findings, the interactive collaboration of parents in science learning activities can proactively facilitate a community of practice that will assist educators in facilitating social change.

  3. CFD modeling of pool swell during large break LOCA

    International Nuclear Information System (INIS)

    Yan, Jin; Bolger, Francis; Li, Guangjun; Mintz, Saul; Pappone, Daniel

    2009-01-01

    GE had conducted a series of one-third scale three-vent air tests in support of the horizontal vent pressure suppression system used in the Mark III containment design for General Electric BWR plants. During the tests, the air-water interface was tracked by conductivity probes, and many pressure monitors were installed inside the test rig. The purpose of the tests was to provide a basis for the pool swell load definition for the Mark III containment. In this paper, a transient 3-dimensional CFD model of the one-third scale Mark III suppression pool swell process is constructed. The Volume of Fluid (VOF) multiphase model is used to explicitly track the interface between the liquid water and the air. The CFD results, such as flow velocity, pressure, and interface locations, are compared to those from the test. Through these comparisons, a technical approach to numerically model the pool swell phenomenon is established and benchmarked. (author)

  4. Modeling and simulation of large scale stirred tank

    Science.gov (United States)

    Neuville, John R.

    The purpose of this dissertation is to provide a written record of the evaluation performed on the DWPF mixing process by the construction of numerical models that resemble the geometry of this process. Seven numerical models were constructed to evaluate the DWPF mixing process and four pilot plants. The models were developed with Fluent software, and the results from these models were used to evaluate the structure of the flow field and the power demand of the agitator. The results from the numerical models were compared with empirical data collected from these pilot plants, which had been operated at an earlier date. Mixing is commonly used in a variety of ways throughout industry to blend miscible liquids, disperse gas through liquid, form emulsions, promote heat transfer, and suspend solid particles. The DOE sites at Hanford in Richland, Washington, West Valley in New York, and the Savannah River Site in Aiken, South Carolina have developed a process that immobilizes highly radioactive liquid waste. The radioactive liquid waste at DWPF is an opaque sludge that is mixed in a stirred tank with glass frit particles and water to form a slurry of specified proportions. The DWPF mixing process is composed of a flat-bottom cylindrical mixing vessel with a centrally located helical coil and an agitator. The helical coil is used to heat and cool the contents of the tank and can improve flow circulation. The agitator shaft has two impellers: a radial blade and a hydrofoil blade. The hydrofoil is used to circulate the mixture between the top region and bottom region of the tank. The radial blade sweeps the bottom of the tank and pushes the fluid in the outward radial direction. The full scale vessel contains about 9500 gallons of slurry with flow behavior characterized as a Bingham Plastic. Particles in the mixture have an abrasive characteristic that causes excessive erosion to internal vessel components at higher impeller speeds. The desire for this mixing process is to ensure the

  5. Simplified local density model for adsorption over large pressure ranges

    International Nuclear Information System (INIS)

    Rangarajan, B.; Lira, C.T.; Subramanian, R.

    1995-01-01

    Physical adsorption of high-pressure fluids onto solids is of interest in the transportation and storage of fuel and radioactive gases; the separation and purification of lower hydrocarbons; solid-phase extractions; adsorbent regenerations using supercritical fluids; supercritical fluid chromatography; and critical point drying. A mean-field model is developed that superimposes the fluid-solid potential on a fluid equation of state to predict adsorption on a flat wall from vapor, liquid, and supercritical phases. A van der Waals-type equation of state is used to represent the fluid phase, and is simplified with a local density approximation for calculating the configurational energy of the inhomogeneous fluid. The simplified local density approximation makes the model tractable for routine calculations over wide pressure ranges. The model is capable of predicting Type 2 and 3 subcritical isotherms for adsorption on a flat wall, and shows the characteristic cusplike behavior and crossovers seen experimentally near the fluid critical point

  6. REPORT ON THE MODELING OF THE LARGE MIS CANS

    International Nuclear Information System (INIS)

    MOODY, E.; LYMAN, J.; VEIRS, K.

    2000-01-01

    Changes in gas composition and gas pressure for closed systems containing plutonium dioxide and water are studied using a model that incorporates both radiolysis and chemical reactions. The model is used to investigate the behavior of material stored in storage containers conforming to DOE-STD-3013-99 storage standard. Scaling of the container to allow use of smaller amounts of nuclear material in experiments designed to bound the behavior of all material destined for long-term storage is studied. It is found that the container volume must be scaled along with the amount of material to achieve applicable results

  7. Modeling and analysis of a large deployable antenna structure

    Science.gov (United States)

    Chu, Zhengrong; Deng, Zongquan; Qi, Xiaozhi; Li, Bing

    2014-02-01

    One kind of large deployable antenna (LDA) structure is proposed by combining a number of basic deployable units in this paper. In order to avoid vibration caused by fast deployment of the mechanism, a braking system is used to control the spring-actuated system. Comparisons between the LDA structure and a similar structure used by the large deployable reflector (LDR) indicate that the former has potential for use in antennas with up to 30 m aperture due to its lighter weight. The LDA structure is designed to form a spherical surface found by the least squares fitting method so that it can be symmetrical. In this case, the positions of the terminal points in the structure are determined by two principles. A method to calculate the cable network stretched on the LDA structure is developed, which combines the original force density method with the parabolic surface constraint. A genetic algorithm is applied to ensure that each cable reaches a desired tension, which avoids the non-convergence issue effectively. We find that the pattern for the front and rear cable nets must be the same when finding the shape of the rear cable net; otherwise an anticlastic surface would be generated.
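
    The force density method referenced above linearizes cable-net equilibrium by fixing a force density q = tension/length for each cable; with no external load, a free node then settles at the q-weighted average of the points it is tied to. A one-free-node toy case makes this concrete (an illustrative assumption, not the paper's genetic-algorithm form-finding):

```python
def force_density_node(anchors, q):
    """Equilibrium position of one free node connected to fixed anchor
    points by cables with force densities q_i: with zero external load,
    sum_i q_i * (anchor_i - node) = 0, so the node is the q-weighted
    average of the anchors."""
    qs = sum(q)
    x = sum(qi * ax for qi, (ax, ay) in zip(q, anchors)) / qs
    y = sum(qi * ay for qi, (ax, ay) in zip(q, anchors)) / qs
    return x, y
```

Raising one cable's force density pulls the node toward that anchor, which is the lever the genetic algorithm in the paper tunes, cable by cable, to reach the desired tensions.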

  8. Searches for phenomena beyond the Standard Model at the Large

    Indian Academy of Sciences (India)

    The LHC has delivered several fb-1 of data in spring and summer 2011, opening new windows of opportunity for discovering phenomena beyond the Standard Model. A summary of the searches conducted by the ATLAS and CMS experiments based on about 1 fb-1 of data is presented.

  9. Symmetry-guided large-scale shell-model theory

    Czech Academy of Sciences Publication Activity Database

    Launey, K. D.; Dytrych, Tomáš; Draayer, J. P.

    2016-01-01

    Roč. 89, JUL (2016), s. 101-136 ISSN 0146-6410 R&D Projects: GA ČR GA16-16772S Institutional support: RVO:61389005 Keywords: Ab initio shell-model theory * Symplectic symmetry * Collectivity * Clusters * Hoyle state * Orderly patterns in nuclei from first principles Subject RIV: BE - Theoretical Physics Impact factor: 11.229, year: 2016

  10. Large-area dry bean yield prediction modeling in Mexico

    Science.gov (United States)

    Given the importance of dry bean in Mexico, crop yield predictions before harvest are valuable for authorities of the agricultural sector, in order to define support for producers. The aim of this study was to develop an empirical model to estimate the yield of dry bean at the regional level prior t...

  11. Soil carbon management in large-scale Earth system modelling

    DEFF Research Database (Denmark)

    Olin, S.; Lindeskog, M.; Pugh, T. A. M.

    2015-01-01

    , carbon sequestration and nitrogen leaching from croplands are evaluated and discussed. Compared to the version of LPJ-GUESS that does not include land-use dynamics, estimates of soil carbon stocks and nitrogen leaching from terrestrial to aquatic ecosystems were improved. Our model experiments allow us...

  12. An Effect of the Environmental Pollution via Mathematical Model Involving the Mittag-Leffler Function

    Directory of Open Access Journals (Sweden)

    Anjali Goswami

    2017-08-01

    Full Text Available Under existing conditions, estimating the effect of pollution on the environment is a big challenge for all of us. In this study we develop a new approach to estimate the effect of pollution on the environment via a mathematical model which involves the generalized Mittag-Leffler function of one variable $E_{\alpha_{2},\delta_{1};\alpha_{3},\delta_{2}}^{\gamma_{1},\alpha_{1}}(z)$, which we introduce here.

  13. Misspecified poisson regression models for large-scale registry data: inference for 'large n and small p'.

    Science.gov (United States)

    Grøn, Randi; Gerds, Thomas A; Andersen, Per K

    2016-03-30

    Poisson regression is an important tool in register-based epidemiology where it is used to study the association between exposure variables and event rates. In this paper, we will discuss the situation with 'large n and small p', where n is the sample size and p is the number of available covariates. Specifically, we are concerned with modeling options when there are time-varying covariates that can have time-varying effects. One problem is that tests of the proportional hazards assumption, of no interactions between exposure and other observed variables, or of other modeling assumptions have large power due to the large sample size and will often indicate statistical significance even for numerically small deviations that are unimportant for the subject matter. Another problem is that information on important confounders may be unavailable. In practice, this situation may lead to simple working models that are then likely misspecified. To support and improve conclusions drawn from such models, we discuss methods for sensitivity analysis, for estimation of average exposure effects using aggregated data, and a semi-parametric bootstrap method to obtain robust standard errors. The methods are illustrated using data from the Danish national registries investigating the diabetes incidence for individuals treated with antipsychotics compared with the general unexposed population. Copyright © 2015 John Wiley & Sons, Ltd.
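
    The Poisson regression at the core of this paper can be sketched with a bare Newton-Raphson (equivalently, IRLS) fit for a single covariate. This is a generic illustration of the estimator, not the registry analysis, its aggregation, or its sensitivity methods:

```python
import math

def poisson_fit(x, y, iters=25):
    """Newton-Raphson fit of the Poisson log-linear model
    log E[y] = b0 + b1*x. The score and Hessian of the log-likelihood
    are accumulated in closed form and the 2x2 Newton step is solved
    by Cramer's rule."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for xi, yi in zip(x, y):
            mu = math.exp(b0 + b1 * xi)   # fitted Poisson mean
            g0 += yi - mu                 # score w.r.t. intercept
            g1 += (yi - mu) * xi          # score w.r.t. slope
            h00 += mu                     # observed-information entries
            h01 += mu * xi
            h11 += mu * xi * xi
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1
```

With 'large n, small p', each Newton sweep is a single pass over the data, which is why the paper can work with aggregated registry counts; the robust standard errors it discusses are a separate layer on top of this point estimate.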

  14. CP violation in beauty decays the standard model paradigm of large effects

    CERN Document Server

    Bigi, Ikaros I.Y.

    1994-01-01

    The Standard Model contains a natural source for CP asymmetries in weak decays, which is described by the KM mechanism. Beyond \epsilon_K it generates only elusive manifestations of CP violation in light-quark systems. On the other hand it naturally leads to large asymmetries in certain non-leptonic beauty decays. In particular when B^0-\bar B^0 oscillations are involved, theoretical uncertainties in the hadronic matrix elements either drop out or can be controlled, and one predicts asymmetries well in excess of 10\% with high parametric reliability. It is briefly described how the KM triangle can be determined experimentally and then subjected to sensitive consistency tests. Any failure would constitute indirect, but unequivocal evidence for the intervention of New Physics; some examples are sketched. Any outcome of a comprehensive program of CP studies in B decays -- short of technical failure -- will provide us with fundamental and unique insights into nature's design.

  15. Enhanced ICP for the Registration of Large-Scale 3D Environment Models: An Experimental Study

    Directory of Open Access Journals (Sweden)

    Jianda Han

    2016-02-01

    Full Text Available One of the main applications of mobile robots is the large-scale perception of the outdoor environment. One of the main challenges of this application is fusing environmental data obtained by multiple robots, especially heterogeneous robots. This paper proposes an enhanced iterative closest point (ICP method for the fast and accurate registration of 3D environmental models. First, a hierarchical searching scheme is combined with the octree-based ICP algorithm. Second, an early-warning mechanism is used to perceive the local minimum problem. Third, a heuristic escape scheme based on sampled potential transformation vectors is used to avoid local minima and achieve optimal registration. Experiments involving one unmanned aerial vehicle and one unmanned surface vehicle were conducted to verify the proposed technique. The experimental results were compared with those of normal ICP registration algorithms to demonstrate the superior performance of the proposed method.
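
    The baseline ICP that this paper enhances alternates two steps: match each source point to its nearest destination point, then apply the closed-form rigid transform minimizing the squared error over those pairs. A minimal 2-D version of that loop (no octree, hierarchical search, early-warning, or escape scheme, all of which are the paper's contributions):

```python
import math

def best_rigid_2d(src, dst):
    """Closed-form least-squares rotation + translation mapping the
    paired points src onto dst (2-D analogue of the Kabsch solution)."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    sxx = sxy = syx = syy = 0.0
    for (ax, ay), (bx, by) in zip(src, dst):
        ax -= csx; ay -= csy; bx -= cdx; by -= cdy
        sxx += ax * bx; sxy += ax * by
        syx += ay * bx; syy += ay * by
    theta = math.atan2(sxy - syx, sxx + syy)
    c, s = math.cos(theta), math.sin(theta)
    return theta, cdx - (c * csx - s * csy), cdy - (s * csx + c * csy)

def icp_2d(src, dst, iters=20):
    """Basic ICP: alternate brute-force O(n^2) nearest-neighbour matching
    with the closed-form alignment above, applying each transform to src."""
    cur = list(src)
    for _ in range(iters):
        pairs = [min(dst, key=lambda q, p=p: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)
                 for p in cur]
        th, tx, ty = best_rigid_2d(cur, pairs)
        c, s = math.cos(th), math.sin(th)
        cur = [(c * x - s * y + tx, s * x + c * y + ty) for x, y in cur]
    return cur
```

The brute-force matching step is exactly what the octree-based hierarchical search accelerates, and the greedy alternation is what exposes plain ICP to the local minima that the paper's escape scheme addresses.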

  16. Airflow and radon transport modeling in four large buildings

    International Nuclear Information System (INIS)

    Fan, J.B.; Persily, A.K.

    1995-01-01

    Computer simulations of multizone airflow and contaminant transport were performed in four large buildings using the program CONTAM88. This paper describes the physical characteristics of the buildings and their idealizations as multizone building airflow systems. These buildings include a twelve-story multifamily residential building, a five-story mechanically ventilated office building with an atrium, a seven-story mechanically ventilated office building with an underground parking garage, and a one-story school building. The air change rates and interzonal airflows of these buildings are predicted for a range of wind speeds, indoor-outdoor temperature differences, and percentages of outdoor air intake in the supply air. Simulations of radon transport were also performed in the buildings to investigate the effects of indoor-outdoor temperature difference and wind speed on indoor radon concentrations

  17. Uncertainty Quantification for Large-Scale Ice Sheet Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ghattas, Omar [Univ. of Texas, Austin, TX (United States)

    2016-02-05

    This report summarizes our work to develop advanced forward and inverse solvers and uncertainty quantification capabilities for a nonlinear 3D full Stokes continental-scale ice sheet flow model. The components include: (1) forward solver: a new state-of-the-art parallel adaptive scalable high-order-accurate mass-conservative Newton-based 3D nonlinear full Stokes ice sheet flow simulator; (2) inverse solver: a new adjoint-based inexact Newton method for solution of deterministic inverse problems governed by the above 3D nonlinear full Stokes ice flow model; and (3) uncertainty quantification: a novel Hessian-based Bayesian method for quantifying uncertainties in the inverse ice sheet flow solution and propagating them forward into predictions of quantities of interest such as ice mass flux to the ocean.
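    The Hessian-based Bayesian step can be illustrated on a linear toy problem, where the Laplace approximation is exact: the posterior covariance is the inverse of the Gauss-Newton Hessian of the negative log-posterior, and it is pushed forward to a scalar quantity of interest through that quantity's gradient. All sizes and values below are invented for illustration; the real solver works matrix-free at continental scale.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 40, 6                      # observations, parameters (toy sizes)
J = rng.normal(size=(n, m))       # linearized parameter-to-observable map
noise_var, prior_var = 0.1, 4.0

prior_cov = prior_var * np.eye(m)
# Laplace approximation: posterior covariance = inverse Hessian of the
# negative log-posterior (Gauss-Newton Hessian for a linear map)
H = J.T @ J / noise_var + np.linalg.inv(prior_cov)
post_cov = np.linalg.inv(H)

g = rng.normal(size=m)            # gradient of a scalar quantity of interest
qoi_prior_var = g @ prior_cov @ g
qoi_post_var = g @ post_cov @ g   # pushed-forward posterior variance
```

    Because the posterior precision is the prior precision plus a positive semi-definite data term, the pushed-forward variance can only shrink when data are assimilated.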

  18. A comparison of updating algorithms for large N reduced models

    Energy Technology Data Exchange (ETDEWEB)

    Pérez, Margarita García [Instituto de Física Teórica UAM-CSIC, Universidad Autónoma de Madrid,Nicolás Cabrera 13-15, E-28049-Madrid (Spain); González-Arroyo, Antonio [Instituto de Física Teórica UAM-CSIC, Universidad Autónoma de Madrid,Nicolás Cabrera 13-15, E-28049-Madrid (Spain); Departamento de Física Teórica, C-XI Universidad Autónoma de Madrid,E-28049 Madrid (Spain); Keegan, Liam [PH-TH, CERN,CH-1211 Geneva 23 (Switzerland); Okawa, Masanori [Graduate School of Science, Hiroshima University,Higashi-Hiroshima, Hiroshima 739-8526 (Japan); Core of Research for the Energetic Universe, Hiroshima University,Higashi-Hiroshima, Hiroshima 739-8526 (Japan); Ramos, Alberto [PH-TH, CERN,CH-1211 Geneva 23 (Switzerland)

    2015-06-29

    We investigate Monte Carlo updating algorithms for simulating SU(N) Yang-Mills fields on a single-site lattice, such as for the Twisted Eguchi-Kawai model (TEK). We show that performing only over-relaxation (OR) updates of the gauge links is a valid simulation algorithm for the Fabricius and Haan formulation of this model, and that this decorrelates observables faster than using heat-bath updates. We consider two different methods of implementing the OR update: either updating the whole SU(N) matrix at once, or iterating through SU(2) subgroups of the SU(N) matrix. We find the same critical exponent in both cases, and only a slight difference between the two.
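    For the SU(2)-subgroup variant, a single microcanonical over-relaxation step can be sketched directly: a sum of SU(2) matrices is proportional to an SU(2) matrix, so the link can be reflected about the normalised staple sum without changing the local action Re tr(U S†). This is the generic textbook form of the update, not the TEK-specific code; the staple below is a random stand-in.

```python
import numpy as np

def random_su2(rng):
    """Random SU(2) matrix built from a unit quaternion (x0, x1, x2, x3)."""
    x = rng.normal(size=4)
    x /= np.linalg.norm(x)
    return np.array([[ x[0] + 1j * x[3],  x[2] + 1j * x[1]],
                     [-x[2] + 1j * x[1],  x[0] - 1j * x[3]]])

def overrelax(U, S):
    """Microcanonical over-relaxation: reflect U about the staple sum S.

    A sum of SU(2) matrices satisfies S = a * V with V in SU(2) and
    a = sqrt(det S) real; the update U -> V U^dagger V leaves the local
    action Re tr(U S^dagger) unchanged."""
    a = np.sqrt(np.linalg.det(S).real)
    V = S / a
    return V @ U.conj().T @ V
```

    Because the action is exactly preserved, the move is always accepted; ergodicity in the full theory comes from the fluctuating staples of the neighbouring links.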

  19. A comparison of updating algorithms for large $N$ reduced models

    CERN Document Server

    Pérez, Margarita García; Keegan, Liam; Okawa, Masanori; Ramos, Alberto

    2015-01-01

    We investigate Monte Carlo updating algorithms for simulating $SU(N)$ Yang-Mills fields on a single-site lattice, such as for the Twisted Eguchi-Kawai model (TEK). We show that performing only over-relaxation (OR) updates of the gauge links is a valid simulation algorithm for the Fabricius and Haan formulation of this model, and that this decorrelates observables faster than using heat-bath updates. We consider two different methods of implementing the OR update: either updating the whole $SU(N)$ matrix at once, or iterating through $SU(2)$ subgroups of the $SU(N)$ matrix. We find the same critical exponent in both cases, and only a slight difference between the two.

  20. The Waterfall Model in Large-Scale Development

    Science.gov (United States)

    Petersen, Kai; Wohlin, Claes; Baca, Dejan

    Waterfall development is still a widely used way of working in software development companies. Many problems have been reported related to the model. Commonly accepted problems include, for example, coping with change and the fact that defects are all too often detected too late in the software development process. However, many of the problems mentioned in the literature are based on beliefs and experiences, and not on empirical evidence. To address this research gap, we compare the problems in the literature with the results of a case study at Ericsson AB in Sweden, investigating issues in the waterfall model. The case study aims at validating or contradicting the beliefs of what the problems are in waterfall development through empirical research.

  1. The waterfall model in large-scale development

    OpenAIRE

    Petersen, Kai; Wohlin, Claes; Baca, Dejan

    2009-01-01

    Waterfall development is still a widely used way of working in software development companies. Many problems have been reported related to the model. Commonly accepted problems are for example to cope with change and that defects all too often are detected too late in the software development process. However, many of the problems mentioned in literature are based on beliefs and experiences, and not on empirical evidence. To address this research gap, we compare the problems in literature wit...

  2. The magnetic model of the large hadron collider

    CERN Document Server

    Auchmann, B; Buzio, M; Deniau, L; Fiscarelli, L; Giovannozzi, M; Hagen, P; Lamont, M; Montenero, G; Mueller, G; Pereira, M; Redaelli, S; Remondino, V; Schmidt, F; Steinhagen, R; Strzelczyk, M; Tomas Garcia, R; Todesco, E; Delsolaro, W Venturini; Walckiers, L; Wenninger, J; Wolf, R; Zimmermann, F

    2010-01-01

    The beam commissioning carried out in 2009 has proved that we have a good understanding of the field-current relation in the LHC magnets and of its reproducibility. In this paper we summarize the main issues of beam commissioning as far as the magnetic model is concerned. An outline of what can be expected in 2010, when the LHC will be pushed to 3.5 TeV, is also given.

  3. Validity of scale modeling for large deformations in shipping containers

    International Nuclear Information System (INIS)

    Burian, R.J.; Black, W.E.; Lawrence, A.A.; Balmert, M.E.

    1979-01-01

    The principal overall objective of this phase of the continuing program for DOE/ECT is to evaluate the validity of applying scaling relationships to accurately assess the response of unprotected model shipping containers to severe impact conditions -- specifically, free fall from heights of up to 140 ft onto a hard surface in several orientations considered most likely to produce severe damage to the containers. The objective was achieved by studying the following with three sizes of model casks subjected to the various impact conditions: (1) impact rebound response of the containers; (2) structural damage and deformation modes; (3) effect on the containment; (4) changes in shielding effectiveness; (5) approximate free-fall threshold height for various orientations at which excessive damage occurs; (6) the impact orientation(s) that tend to produce the most severe damage; and (7) vulnerable aspects of the casks which should be examined. To meet the objective, the tests were intentionally designed to produce extreme structural damage to the cask models. In addition to the principal objective, this phase of the program had the secondary objectives of establishing a scientific data base for assessing the safety and environmental control provided by DOE nuclear shipping containers under impact conditions, and providing experimental data for verification and correlation with dynamic-structural-analysis computer codes being developed by the Los Alamos Scientific Laboratory for DOE/ECT.

  4. Hydrogeochemical modeling of large fluvial basins: impact of climate change

    International Nuclear Information System (INIS)

    Beaulieu, E.

    2011-01-01

    The chemical weathering of continental surfaces represents one of the carbon sinks at the Earth's surface, which regulates the climate through a feedback mechanism. The weathering intensity is controlled by climate but also by lithology, vegetal cover, hydrology and the presence of smectites and acids in soils. In this work, a study at the global scale on grid cells highlighted that a CO2 concentration increase in the atmosphere would involve a decrease of evapotranspiration due to progressive stomatal closure, and a rise of soil acidity related to enhanced biospheric productivity. These changes would promote the chemical weathering of silicates and, as a result, would lead to a CO2 consumption increase of 3% for each 100 ppmv rise of the CO2 concentration in the atmosphere. Then, a study of one of the most important catchments located in an arctic environment, the Mackenzie basin (Canada), showed the high sensitivity of chemical weathering to sulfuric acid production. Indeed, the mean CO2 consumption of the Mackenzie has decreased by 56% when the presence of pyrite in the catchment is taken into account. In addition, the mean CO2 consumption of this basin could rise by 53% between today's climate and a climatic scenario predicted for the end of the century. (author)

  5. Implementation of an Online Chemistry Model to a Large Eddy Simulation Model (PALM-4U)

    Science.gov (United States)

    Mauder, M.; Khan, B.; Forkel, R.; Banzhaf, S.; Russo, E. E.; Sühring, M.; Kanani-Sühring, F.; Raasch, S.; Ketelsen, K.

    2017-12-01

    Large Eddy Simulation (LES) models resolve the relevant scales of turbulent motion, so that these models can capture the inherent unsteadiness of atmospheric turbulence. However, LES models have so far hardly been applied to urban air quality studies, in particular the chemical transformation of pollutants. In this context, the BMBF (Bundesministerium für Bildung und Forschung) funded a joint project, MOSAIK (Modellbasierte Stadtplanung und Anwendung im Klimawandel / Model-based city planning and application in climate change), with the main goal to develop a new highly efficient urban climate model (UCM) that also includes atmospheric chemical processes. The state-of-the-art LES model PALM (Maronga et al., 2015, Geosci. Model Dev., 8, doi:10.5194/gmd-8-2515-2015) has been used as the core model for the new UCM, named PALM-4U. For the gas-phase chemistry, a fully coupled 'online' chemistry model has been implemented into PALM. The latest version of the Kinetic PreProcessor (KPP), Version 2.3, has been utilized for the numerical integration of chemical species. Due to the high computational demands of the LES model, compromises in the description of chemical processes are required. Therefore, a reduced chemistry mechanism has been implemented, which includes only major pollutants, namely O3, NO, NO2, CO, a highly simplified VOC chemistry and a small number of products. This work shows preliminary results of the advection and chemical transformation of atmospheric pollutants. Non-cyclic boundaries have been used for inflow and outflow in the east-west direction, while periodic boundary conditions have been applied at the south-north lateral boundaries. For practical applications, our approach is to go beyond the simulation of single street canyons to the chemical transformation, advection and deposition of air pollutants in the larger urban canopy. Tests of chemistry schemes and initial studies of chemistry-turbulence interaction, transport and transformations are presented.

  6. Involvement of TRPM2 in a wide range of inflammatory and neuropathic pain mouse models

    Directory of Open Access Journals (Sweden)

    Kanako So

    2015-03-01

    Full Text Available Recent evidence suggests a role of transient receptor potential melastatin 2 (TRPM2 in immune and inflammatory responses. We previously reported that TRPM2 deficiency attenuated inflammatory and neuropathic pain in some pain mouse models, including formalin- or carrageenan-induced inflammatory pain, and peripheral nerve injury-induced neuropathic pain models, while it had no effect on the basal mechanical and thermal nociceptive sensitivities. In this study, we further explored the involvement of TRPM2 in various pain models using TRPM2-knockout mice. There were no differences in the chemonociceptive behaviors evoked by intraplantar injection of capsaicin or hydrogen peroxide between wildtype and TRPM2-knockout mice, while acetic acid-induced writhing behavior was significantly attenuated in TRPM2-knockout mice. In the postoperative incisional pain model, no difference in mechanical allodynia was observed between the two genotypes. By contrast, mechanical allodynia in the monosodium iodoacetate-induced osteoarthritis pain model and the experimental autoimmune encephalomyelitis model was significantly attenuated in TRPM2-knockout mice. Furthermore, mechanical allodynia in the paclitaxel-induced peripheral neuropathy and streptozotocin-induced painful diabetic neuropathy models was significantly attenuated in TRPM2-knockout mice. Taken together, these results suggest that TRPM2 plays roles in a wide range of pathological pain models based on peripheral and central neuroinflammation, rather than in physiological nociceptive pain.

  7. Modelling and transient stability of large wind farms

    DEFF Research Database (Denmark)

    Akhmatov, Vladislav; Knudsen, Hans; Nielsen, Arne Hejde

    2003-01-01

    by a physical model of grid-connected windmills. The windmill generators are conventional induction generators and the wind farm is ac-connected to the power system. Improvements of short-term voltage stability in case of failure events in the external power system are treated with use of conventional generator...... technology. This subject is treated as a parameter study with respect to the windmill electrical and mechanical parameters and with use of control strategies within the conventional generator technology. Stability improvements on the wind farm side of the connection point lead to significant reduction......

  8. Large meteorite impacts: The K/T model

    Science.gov (United States)

    Bohor, B. F.

    1992-01-01

    The Cretaceous/Tertiary (K/T) boundary event represents probably the largest meteorite impact known on Earth. It is the only impact event conclusively linked to a worldwide mass extinction, a reflection of its gigantic scale and global influence. Until recently, the impact crater was not definitively located and only the distal ejecta of this impact was available for study. However, detailed investigations of this ejecta's mineralogy, geochemistry, microstratigraphy, and textures have allowed its modes of ejection and dispersal to be modeled without benefit of a source crater of known size and location.

  9. Numerical Model for Solidification Zones Selection in the Large Ingots

    Directory of Open Access Journals (Sweden)

    Wołczyński W.

    2015-12-01

    Full Text Available A vertical cut at the mid-depth of a 15-ton forging steel ingot has been performed by courtesy of the CELSA - Huta Ostrowiec plant. Metallographic studies were able to reveal not only the chilled undersized grains under the ingot surface but the columnar grains and large equiaxed grains as well. Additionally, the structural zone within which columnar and equiaxed structure formation compete was also revealed by the metallographic study. Therefore, it seemed justified to reproduce some of the observed structural zones by means of a numerical calculation of the temperature field. The formation of the chilled-grain zone is the result of unconstrained rapid solidification and was not a subject of the simulation. Contrary to equiaxed structure formation, columnar or columnar-branched structure formation occurs under a steep thermal gradient. Thus, the performed simulation is able to separate both discussed structural zones and indicate their localization along the ingot radius as well as their appearance in terms of solidification time.
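    The temperature-field calculation underlying such zone selection rests on transient heat conduction; a minimal explicit finite-difference (FTCS) scheme for the 1D case shows the kind of computation involved. The real ingot simulation is multidimensional and includes solidification (latent-heat) terms; the grid, diffusivity and temperatures below are invented.

```python
import numpy as np

def heat_1d(T0, alpha, dx, dt, steps, T_left, T_right):
    """Explicit finite-difference (FTCS) solution of 1D transient conduction,
    dT/dt = alpha * d2T/dx2, with fixed-temperature boundaries."""
    r = alpha * dt / dx**2
    assert r <= 0.5, "FTCS stability limit violated"
    T = T0.copy()
    for _ in range(steps):
        # interior update from the discrete Laplacian
        T[1:-1] = T[1:-1] + r * (T[2:] - 2 * T[1:-1] + T[:-2])
        # Dirichlet boundaries (e.g. mould wall temperature)
        T[0], T[-1] = T_left, T_right
    return T
```

    The discrete maximum principle (for r ≤ 0.5) guarantees that the computed field stays between the boundary and initial temperatures, a useful sanity check on any such solver.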

  10. Modeling a Large Data Acquisition Network in a Simulation Framework

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00337030; The ATLAS collaboration; Froening, Holger; Garcia, Pedro Javier; Vandelli, Wainer

    2015-01-01

    The ATLAS detector at CERN records particle collision “events” delivered by the Large Hadron Collider. Its data-acquisition system is a distributed software system that identifies, selects, and stores interesting events in near real-time, with an aggregate throughput of several tens of GB/s. It is executed on a farm of roughly 2000 commodity worker nodes communicating via TCP/IP on an Ethernet network. Event data fragments are received from the many detector readout channels and are buffered, collected together, analyzed and either stored permanently or discarded. This system, and data-acquisition systems in general, are sensitive to the latency of the data transfer from the readout buffers to the worker nodes. Challenges affecting this transfer include the many-to-one communication pattern and the inherently bursty nature of the traffic. In this paper we introduce the main performance issues brought about by this workload, focusing in particular on the so-called TCP incast pathol...

  11. Effects of uncertainty in model predictions of individual tree volume on large area volume estimates

    Science.gov (United States)

    Ronald E. McRoberts; James A. Westfall

    2014-01-01

    Forest inventory estimates of tree volume for large areas are typically calculated by adding model predictions of volumes for individual trees. However, the uncertainty in the model predictions is generally ignored with the result that the precision of the large area volume estimates is overestimated. The primary study objective was to estimate the effects of model...
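    The effect described, ignoring model error when summing tree-level predictions, can be demonstrated with a small Monte Carlo sketch: because every tree's volume comes from the same fitted model, the prediction errors are correlated across trees and do not average out in the large-area total, unlike independent errors. All numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(42)
n_trees = 1000
true_vol = rng.uniform(0.2, 2.0, n_trees)   # m^3 per tree, synthetic stand

n_sim, rel_err = 5000, 0.10                 # 10% relative model error
# fully shared error: one multiplicative factor per realization, because
# the same fitted model mispredicts every tree in the same direction
shared = 1 + rel_err * rng.normal(size=(n_sim, 1))
# contrast case: an independent error per tree, which averages out
indep = 1 + rel_err * rng.normal(size=(n_sim, n_trees))

tot_shared = (true_vol * shared).sum(axis=1)
tot_indep = (true_vol * indep).sum(axis=1)
```

    The standard deviation of the total is roughly rel_err times the total volume in the shared case, but shrinks by about 1/sqrt(n_trees) in the independent case, which is why treating model predictions as error-free (or independent) overstates the precision of large-area estimates.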

  12. Non sentinel node involvement prediction for sentinel node micrometastases in breast cancer: nomogram validation and comparison with other models.

    Science.gov (United States)

    Houvenaeghel, Gilles; Bannier, Marie; Nos, Claude; Giard, Sylvia; Mignotte, Herve; Jacquemier, Jocelyne; Martino, Marc; Esterni, Benjamin; Belichard, Catherine; Classe, Jean-Marc; Tunon de Lara, Christine; Cohen, Monique; Payan, Raoul; Blanchot, Jerome; Rouanet, Philippe; Penault-Llorca, Frederique; Bonnier, Pascal; Fournet, Sandrine; Agostini, Aubert; Marchal, Frederique; Garbay, Jean-Remi

    2012-04-01

    The risk of non sentinel node (NSN) involvement varies as a function of the characteristics of the sentinel nodes (SN) and the primary tumor. Our aim was to determine and validate a statistical tool (a nomogram) able to predict the risk of NSN involvement in case of SN micro- or sub-micrometastasis of breast cancer, and to compare this nomogram with other models described in the literature. We collected data on 905 patients, then on a further 484 patients, to build and validate the nomogram and compare it with other published scores and nomograms. Multivariate analysis conducted on the data of the first cohort allowed us to define a nomogram based on 5 criteria: the method of SN detection (immunohistochemistry or standard staining with HES); the ratio of positive SN out of total removed SN; the pathologic size of the tumor; the histological type; and the presence (or not) of lympho-vascular invasion. The nomogram developed here is the only one dedicated to micrometastases and developed on the basis of two large cohorts. The performance of this statistical tool in calculating the risk of NSN involvement is similar to that of the MSKCC nomogram (the most effective comparable nomogram according to the literature), with a lower rate of false negatives. This nomogram is dedicated specifically to cases of SN involvement by metastases of 2 mm or less. It could be used in clinical practice to omit ALND when the risk of NSN involvement is low. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. NST: Thermal Modeling for a Large Aperture Solar Telescope

    Science.gov (United States)

    Coulter, Roy

    2011-05-01

    Late in the 1990s the Dutch Open Telescope demonstrated that internal seeing in open, large aperture solar telescopes can be controlled by flushing air across the primary mirror and other telescope structures exposed to sunlight. In that system natural wind provides a uniform air temperature throughout the imaging volume, while efficiently sweeping heated air away from the optics and mechanical structure. Big Bear Solar Observatory's New Solar Telescope (NST) was designed to realize that same performance in an enclosed system by using both natural wind through the dome and forced air circulation around the primary mirror to provide the uniform air temperatures required within the telescope volume. The NST is housed in a conventional, ventilated dome with a circular opening, in place of the standard dome slit, that allows sunlight to fall only on an aperture stop and the primary mirror. The primary mirror is housed deep inside a cylindrical cell with only minimal openings in the side at the level of the mirror. To date, the forced air and cooling systems designed for the NST primary mirror have not been implemented, yet the telescope regularly produces solar images indicative of the absence of mirror seeing. Computational Fluid Dynamics (CFD) analysis of the NST primary mirror system along with measurements of air flows within the dome, around the telescope structure, and internal to the mirror cell are used to explain the origin of this seemingly incongruent result. The CFD analysis is also extended to hypothetical systems of various scales. We will discuss the results of these investigations.

  14. Large-scale groundwater modeling using global datasets: a test case for the Rhine-Meuse basin

    Directory of Open Access Journals (Sweden)

    E. H. Sutanudjaja

    2011-09-01

    Full Text Available The current generation of large-scale hydrological models does not include a groundwater flow component. Large-scale groundwater models, involving aquifers and basins of multiple countries, are still rare mainly due to a lack of hydro-geological data which are usually only available in developed countries. In this study, we propose a novel approach to construct large-scale groundwater models by using global datasets that are readily available. As the test-bed, we use the combined Rhine-Meuse basin that contains groundwater head data used to verify the model output. We start by building a distributed land surface model (at 30 arc-second resolution) to estimate groundwater recharge and river discharge. Subsequently, a MODFLOW transient groundwater model is built and forced by the recharge and surface water levels calculated by the land surface model. Results are promising despite the fact that we still use an offline procedure to couple the land surface and MODFLOW groundwater models (i.e. the simulations of both models are separately performed). The simulated river discharges compare well to the observations. Moreover, based on our sensitivity analysis, in which we run several groundwater model scenarios with various hydro-geological parameter settings, we observe that the model can reasonably well reproduce the observed groundwater head time series. However, we note that there are still some limitations in the current approach, specifically because the offline-coupling technique simplifies the dynamic feedbacks between surface water levels and groundwater heads, and between soil moisture states and groundwater heads. Also the current sensitivity analysis ignores the uncertainty of the land surface model output. Despite these limitations, we argue that the results of the current model show a promise for large-scale groundwater modeling practices, including for data-poor environments and at the global scale.
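    The offline (one-way) coupling procedure can be caricatured in a few lines: the land-surface step is run for the whole period first, its recharge is stored, and a groundwater store, here collapsed to a single linear reservoir rather than MODFLOW, is then forced with it. All coefficients are invented, and there is no feedback from head back to soil moisture, which is exactly the simplification the authors flag.

```python
import numpy as np

rng = np.random.default_rng(7)
precip = rng.gamma(2.0, 2.0, size=365)        # mm/day, synthetic forcing

# step 1: land-surface model -> recharge series
# (toy bucket: a fixed fraction of precipitation percolates)
recharge = 0.3 * precip                       # mm/day

# step 2: groundwater model forced by the *stored* recharge (one-way)
k, Sy = 0.02, 0.1                             # recession coeff (1/day), specific yield
head = np.empty(365)                          # groundwater head rise (m)
h = 0.0
for t in range(365):
    # linear reservoir: recharge raises the head, drainage lowers it
    h = h + recharge[t] / (Sy * 1000.0) - k * h
    head[t] = h
```

    In a fully coupled scheme, `h` would be fed back into the land-surface step (capillary rise, saturated areas) before computing the next day's recharge; the offline variant simply omits that return path.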

  15. Models for physics of the very small and very large

    CERN Document Server

    Buckholtz, Thomas J

    2016-01-01

    This monograph tackles three challenges. First, show math that matches known elementary particles. Second, apply the math to match other known physics data. Third, predict future physics data. The math features solutions to isotropic pairs of isotropic quantum harmonic oscillators. This monograph matches some solutions to known elementary particles. Matched properties include spin and the types of interactions in which the particles partake. Other solutions point to possible elementary particles. This monograph applies the math and the extended particle list. Results narrow gaps between physics data and theory. Results pertain to elementary particles, astrophysics, and cosmology. For example, this monograph predicts properties for beyond-the-Standard-Model elementary particles, proposes descriptions of dark matter and dark energy, provides new relationships between known physics constants, includes theory that dovetails with the ratio of dark matter to ordinary matter, and includes math that dovetails with the number of ...

  16. One Patient, Two Uncommon B-Cell Neoplasms: Solitary Plasmacytoma following Complete Remission from Intravascular Large B-Cell Lymphoma Involving Central Nervous System

    Directory of Open Access Journals (Sweden)

    Joycelyn Lee

    2014-01-01

    Full Text Available Second lymphoid neoplasms are an uncommon but recognized feature of non-Hodgkin’s lymphomas, putatively arising secondary to common genetic or environmental risk factors. Previous limited evaluations of clonal relatedness between successive mature B-cell malignancies have yielded mixed results. We describe the case of a man with intravascular large B-cell lymphoma involving the central nervous system who went into clinical remission following immunochemotherapy and brain radiation, only to relapse 2 years later with a plasmacytoma of bone causing cauda equina syndrome. The plasmacytoma stained strongly for the cell cycle regulator cyclin D1 on immunohistochemistry, while the original intravascular large cell lymphoma was negative, a disparity providing no support for clonal identity between the 2 neoplasms. Continued efforts at cataloging and evaluating unique associations of B-cell malignancies are critical to improving understanding of overarching disease biology in B-cell malignancies.

  17. Simplified Model for the Population Dynamics Involved in a Malaria Crisis

    International Nuclear Information System (INIS)

    Kenfack-Jiotsa, A.; Fotsa-Ngaffo, F.

    2009-12-01

    We adapt a simple predator-prey model to the populations involved in a crisis of malaria. The study is confined to the blood stream inside the human body, except for the liver. In particular, we look at the dynamics of the malaria parasites ('merozoites') and their interaction with the blood components, more specifically the red blood cells (RBC) and the immune response grouped under the white blood cells (WBC). The stability analysis of the system reveals an important practical direction to investigate as regards the ratio of WBC over RBC, since it is a fundamental parameter that characterizes the stable regions. The model numerically presents a wide range of possible features of the disease. Even in its simplified form, the model not only recovers well-known results but in addition predicts possible hidden phenomena and an interesting clinical feature of a malaria crisis. (author)
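    A Lotka-Volterra-style reading of this abstract, merozoites as prey and the white-blood-cell response as predator, can be written down with forward-Euler time stepping. The equations and coefficients are a generic illustration, not the authors' exact system (which also tracks the red blood cells).

```python
def simulate(m0, w0, a=1.0, b=0.5, c=0.2, d=0.3, dt=0.001, steps=20000):
    """Forward-Euler integration of a Lotka-Volterra-type pair:
    merozoites m (prey) and white-blood-cell response w (predator):
        dm/dt = a*m - b*m*w
        dw/dt = c*m*w - d*w
    """
    m, w = m0, w0
    for _ in range(steps):
        dm = (a * m - b * m * w) * dt
        dw = (c * m * w - d * w) * dt
        m, w = m + dm, w + dw
    return m, w
```

    The coexistence fixed point sits at m* = d/c, w* = a/b; its character (here, neutral cycles in the idealized system) is what a stability analysis of the ratio of immune cells to parasites probes.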

  18. A comparative modeling and molecular docking study on Mycobacterium tuberculosis targets involved in peptidoglycan biosynthesis.

    Science.gov (United States)

    Fakhar, Zeynab; Naiker, Suhashni; Alves, Claudio N; Govender, Thavendran; Maguire, Glenn E M; Lameira, Jeronimo; Lamichhane, Gyanu; Kruger, Hendrik G; Honarparvar, Bahareh

    2016-11-01

    An alarming rise of multidrug-resistant Mycobacterium tuberculosis strains and the continuous high global morbidity of tuberculosis have reinvigorated the need to identify novel targets to combat the disease. The enzymes that catalyze the biosynthesis of peptidoglycan in M. tuberculosis are essential and noteworthy therapeutic targets. In this study, the biochemical function and homology modeling of MurI, MurG, MraY, DapE, DapA, Alr, and Ddl enzymes of the CDC1551 M. tuberculosis strain involved in the biosynthesis of peptidoglycan cell wall are reported. Generation of the 3D structures was achieved with Modeller 9.13. To assess the structural quality of the obtained homology modeled targets, the models were validated using PROCHECK, PDBsum, QMEAN, and ERRAT scores. Molecular dynamics simulations were performed to calculate root mean square deviation (RMSD) and radius of gyration (Rg) of MurI and MurG target proteins and their corresponding templates. For further model validation, RMSD and Rg for selected targets/templates were investigated to compare the close proximity of their dynamic behavior in terms of protein stability and average distances. To identify the potential binding mode required for molecular docking, binding site information of all modeled targets was obtained using two prediction algorithms. A docking study was performed for MurI to determine the potential mode of interaction between the inhibitor and the active site residues. This study presents the first accounts of the 3D structural information for the selected M. tuberculosis targets involved in peptidoglycan biosynthesis.

  19. Modeling of dengue occurrences early warning involving temperature and rainfall factors

    Directory of Open Access Journals (Sweden)

    Prama Setia Putra

    2017-07-01

    Full Text Available Objective: To understand the dengue transmission process and its vector dynamics, and to develop an early warning model of dengue occurrences based on mosquito population and host-vector threshold values, considering temperature and rainfall. Methods: To obtain the early warning model, mosquito population and host-vector models are developed first. Both are formulated as differential equations. The basic offspring number (R0m) and the basic reproductive ratio (R0d), which are the threshold values, are derived from the models under the assumption of constant parameters. Temperature and rainfall effects on mosquito and dengue enter through the entomological and disease transmission parameters. Some parameters are set as functions of temperature or rainfall while the others are held constant. Thereafter, both threshold values are computed from those parameters. Monthly dengue occurrence data are categorized as zero or one, where one means that an outbreak does occur in that month. Logistic regression is chosen to bridge the threshold values and the categorized data, with the threshold values as the input of the early warning model. Semarang city is selected as the sample for developing this early warning model. Results: The derived threshold values R0m and R0d show the relation that the mosquito, as dengue vector, affects transmission of the disease. The output of the early warning model is a value between zero and one; it is categorized as an outbreak when the value is larger than 0.5, and as no outbreak otherwise. Using a single predictor, the model achieves approximately 68% accuracy. Conclusions: The extinction of mosquitoes will be followed by disease disappearance, while mosquito existence can lead to disease-free or endemic states. Model simulations show that the mosquito population is more affected by weather factors than humans are. Involving weather factors implicitly in the threshold value and linking them
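    The last stage described, logistic regression from a threshold value to an outbreak flag, reduces to a one-line link function. The coefficients below are invented stand-ins for the fitted ones; the 0.5 cut-off follows the text.

```python
import math

def outbreak_probability(r0d, beta0=-4.0, beta1=3.5):
    """Logistic link from the reproductive ratio R0d (the single predictor)
    to an outbreak probability. beta0 and beta1 are illustrative values;
    in practice they are fitted to the 0/1 monthly outbreak record."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * r0d)))

def warn(r0d):
    """Early warning flag: outbreak predicted when probability exceeds 0.5."""
    return outbreak_probability(r0d) > 0.5
```

    With these placeholder coefficients the decision boundary sits at R0d = -beta0/beta1; fitting shifts that boundary to wherever the historical record separates outbreak from non-outbreak months.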

  20. Towards agile large-scale predictive modelling in drug discovery with flow-based programming design principles.

    Science.gov (United States)

    Lampa, Samuel; Alvarsson, Jonathan; Spjuth, Ola

    2016-01-01

    Predictive modelling in drug discovery is challenging to automate as it often contains multiple analysis steps and might involve cross-validation and parameter tuning that create complex dependencies between tasks. With large-scale data or when using computationally demanding modelling methods, e-infrastructures such as high-performance or cloud computing are required, adding to the existing challenges of fault-tolerant automation. Workflow management systems can aid in many of these challenges, but the currently available systems are lacking in the functionality needed to enable agile and flexible predictive modelling. We here present an approach inspired by elements of the flow-based programming paradigm, implemented as an extension of the Luigi system, which we name SciLuigi. We also discuss the experiences from using the approach when modelling a large set of biochemical interactions using a shared computer cluster.
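    The flow-based idea, wiring tasks together by their data dependencies instead of hard-coding upstream classes inside each task, can be conveyed with a tiny dependency-driven scheduler in plain Python. This illustrates the paradigm only; it is not the SciLuigi or Luigi API.

```python
class Task:
    """A task with named inputs; the wiring lives outside the task body."""
    def __init__(self, name, fn, inputs=()):
        self.name, self.fn, self.inputs = name, fn, list(inputs)
        self.output = None

    def run(self):
        # consume upstream outputs, produce this task's output
        self.output = self.fn(*[t.output for t in self.inputs])

def run_workflow(tasks):
    """Execute tasks in dependency order: a task runs once every one of
    its inputs has run (a simple repeated-sweep topological schedule)."""
    done = set()
    while len(done) < len(tasks):
        for t in tasks:
            if t.name not in done and all(u.name in done for u in t.inputs):
                t.run()
                done.add(t.name)
    return {t.name: t.output for t in tasks}
```

    Reconnecting a pipeline (say, swapping the cross-validation step) then means changing only the `inputs` wiring, not any task's implementation, which is the agility argument the paper makes.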

  1. An empirical velocity scale relation for modelling a design of large mesh pelagic trawl

    NARCIS (Netherlands)

    Ferro, R.S.T.; Marlen, van B.; Hansen, K.E.

    1996-01-01

    Physical models of fishing nets are used in fishing technology research at scales of 1:40 or smaller. As with all modelling involving fluid flow, a set of rules is required to determine the geometry of the model and its velocity relative to the water. Appropriate rules ensure that the model is
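As a concrete example of the kind of velocity rule the abstract refers to, classical Froude similarity (a common starting point for free-surface scale-model tests, not the paper's empirical relation) scales velocity with the square root of the geometric scale:

```python
import math

def froude_model_speed(full_scale_speed, length_scale):
    """Classical Froude similarity: the model speed equals the full-scale
    speed times the square root of the geometric scale, so a 1:40 model
    is towed at about 16% of the full-scale speed. The paper above
    derives an empirical relation refining rules of this kind for
    large-mesh pelagic trawls."""
    return full_scale_speed * math.sqrt(length_scale)

speed = froude_model_speed(2.0, 1 / 40)   # 2 m/s tow, 1:40 scale model
print(round(speed, 3))                    # -> 0.316 m/s
```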

  2. RELAP5/SCDAPSIM model development for AP1000 and verification for large break LOCA

    Energy Technology Data Exchange (ETDEWEB)

    Trivedi, A.K. [Nuclear Engineering and Technology Program, Indian Institute of Technology, Kanpur 208016 (India); Allison, C. [Innovative Systems Software, Idaho Falls, ID 83406 (United States); Khanna, A., E-mail: akhanna@iitk.ac.in [Nuclear Engineering and Technology Program, Indian Institute of Technology, Kanpur 208016 (India); Munshi, P. [Nuclear Engineering and Technology Program, Indian Institute of Technology, Kanpur 208016 (India)

    2016-08-15

    Highlights: • RELAP5/SCDAPSIM model of AP1000 has been developed. • Analysis involves an LBLOCA (double-ended guillotine break) study in the cold leg. • Results are compared with those of WCOBRA–TRAC and TRACE. • Concluded that the PCT does not violate the safety criterion of 1477 K. - Abstract: The AP1000 is a Westinghouse 2-loop pressurized water reactor (PWR) with all emergency core cooling systems based on natural circulation. Its core design is very similar to that of a 3-loop PWR, with 157 fuel assemblies. Westinghouse has reported its safety analysis results in the design control document (DCD) for a large break loss of coolant accident (LOCA) using WCOBRA/TRAC and for a small break LOCA using NOTRUMP. The current study involves the development of a representative RELAP5/SCDAPSIM model for the AP1000 based on publicly available data, and its verification for a double-ended cold leg (DECL) break in one of the cold legs in the loop containing the core makeup tanks (CMT). The calculated RELAP5/SCDAPSIM results have been compared to publicly available WCOBRA–TRAC and TRACE results for a DECL break in the AP1000. The objective of this study is to benchmark the thermal-hydraulic model for later severe accident analyses using the 2D SCDAP fuel rod component in place of the RELAP5 heat structures which currently represent the fuel rods. Results from this comparison provide sufficient confidence in the model, which will be used for further studies such as a station blackout. The primary circuit pumps, pressurizer and steam generators (including the necessary secondary side) are modeled using RELAP5 components following all the necessary recommendations for nodalization. The core has been divided into 6 radial rings and 10 axial nodes. For the RELAP5 thermal-hydraulic calculation, the six groups of fuel assemblies have been modeled as pipe components with equivalent flow areas. The fuel, including the gap and cladding, is modeled as a 1D heat structure. The final input deck achieved
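The core lumping described above (groups of fuel assemblies represented as pipe components with equivalent flow areas) amounts to preserving the total coolant flow area of each group. A toy sketch with hypothetical numbers, not actual AP1000 design data:

```python
def equivalent_flow_area(n_assemblies, area_per_assembly):
    """Lump a ring of fuel assemblies into a single pipe component by
    preserving the total coolant flow area. The per-assembly area used
    below is a hypothetical value, not AP1000 design data."""
    return n_assemblies * area_per_assembly

# Hypothetical split of the 157 assemblies into 6 radial rings:
rings = [4, 12, 20, 28, 40, 53]
area_per_assembly = 0.025                        # m^2, hypothetical
areas = [equivalent_flow_area(n, area_per_assembly) for n in rings]
print(sum(rings), round(sum(areas), 3))          # -> 157 3.925
```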

  3. Ethics Literacy and "Ethics University": Two Intertwined Models for Public Involvement and Empowerment in Bioethics.

    Science.gov (United States)

    Strech, Daniel; Hirschberg, Irene; Meyer, Antje; Baum, Annika; Hainz, Tobias; Neitzke, Gerald; Seidel, Gabriele; Dierks, Marie-Luise

    2015-01-01

    Informing lay citizens about complex health-related issues and their related ethical, legal, and social aspects (ELSA) is one important component of democratic health care/research governance. Public information activities may be especially valuable when they are used in multi-staged processes that also include elements of information and deliberation. This paper presents a new model for a public involvement activity on ELSA (Ethics University) and evaluation data for a pilot event. The Ethics University is structurally based on the "patient university," an already established institution in some German medical schools, and the newly developed concept of "ethics literacy." The concept of "ethics literacy" consists of three levels: information, interaction, and reflection. The pilot project consisted of two series of events (lasting 4 days each). The thematic focus of the Ethics University pilot was ELSA of regenerative medicine. In this pilot, the concept of "ethics literacy" could be validated as its components were clearly visible in discussions with participants at the end of the event. The participants reacted favorably to the Ethics University by stating that they felt more educated with regard to the ELSA of regenerative medicine and with regard to their own abilities in normative reasoning on this topic. The Ethics University is an innovative model for public involvement and empowerment activities on ELSA theoretically underpinned by a concept for "ethics literacy." This model deserves further refinement, testing in other ELSA topics and evaluation in outcome research.

  4. Mechanism and models for collisional energy transfer in highly excited large polyatomic molecules

    International Nuclear Information System (INIS)

    Gilbert, R. G.

    1995-01-01

    Collisional energy transfer in highly excited molecules (say, 200-500 kJ mol⁻¹ above the zero-point energy of reactant, or of product, for a recombination reaction) is reviewed. An understanding of this energy transfer is important in predicting and interpreting the pressure dependence of gas-phase rate coefficients for unimolecular and recombination reactions. For many years it was thought that this pressure dependence could be calculated from a single energy-transfer quantity, such as the average energy transferred per collision. However, the discovery of 'super collisions' (a small but significant fraction of collisions which transfer abnormally large amounts of energy) means that this simplistic approach needs some revision. The 'ordinary' (non-super) component of the distribution function for collisional energy transfer can be quantified either by empirical models (e.g., an exponential-down functional form) or by models with a physical basis, such as biased random walk (applicable to monatomic or diatomic collision partners) or ergodic (for polyatomic collision partners) treatments. The latter two models enable approximate expressions for the average energy transfer to be estimated from readily available molecular parameters. Rotational energy transfer, important for finding the pressure dependence for recombination reactions, can for these purposes usually be taken as transferring sufficient energy so that the explicit functional form is not required to predict the pressure dependence. The mechanism of 'ordinary' energy transfer seems to be dominated by low-frequency modes of the substrate, whereby there is sufficient time during a vibrational period for significant energy flow between the collision partners.
Super collisions may involve sudden energy flow as an outer atom of the substrate is squashed between the substrate and the bath gas, and then is moved away from the interaction by large-amplitude motion such as a ring vibration or a rotation; improved
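The 'exponential-down' functional form mentioned above is easy to sketch numerically: the energy lost per deactivating collision is exponentially distributed with mean α, the average downward energy transfer. All numbers below are illustrative only:

```python
import random

def exponential_down_step(E, alpha, rng):
    """Draw a post-collision energy E' <= E from the exponential-down
    model, P(E'|E) ~ exp(-(E - E') / alpha). Activating (upward)
    collisions and super collisions are neglected in this sketch."""
    while True:
        dE = rng.expovariate(1.0 / alpha)    # energy lost, mean alpha
        if dE <= E:                          # reject unphysical E' < 0
            return E - dE

rng = random.Random(0)
E0, alpha = 300.0, 5.0                       # kJ/mol, illustrative values
losses = [E0 - exponential_down_step(E0, alpha, rng) for _ in range(20000)]
mean_loss = sum(losses) / len(losses)
print(abs(mean_loss - alpha) < 0.2)          # sample mean is close to alpha
```

A super-collision tail would show up as rare draws far beyond α; the single-parameter model above is exactly the "simplistic approach" the review says needs revision.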

  5. Validating a Model of Motivational Factors Influencing Involvement for Parents of Transition-Age Youth with Disabilities

    Science.gov (United States)

    Hirano, Kara A.; Shanley, Lina; Garbacz, S. Andrew; Rowe, Dawn A.; Lindstrom, Lauren; Leve, Leslie D.

    2018-01-01

    Parent involvement is a predictor of postsecondary education and employment outcomes, but rigorous measures of parent involvement for youth with disabilities are lacking. Hirano, Garbacz, Shanley, and Rowe adapted scales based on the Hoover-Dempsey and Sandler model of parent involvement for use with parents of youth with disabilities aged 14 to 23.…

  6. Large-n limit of the Heisenberg model: The decorated lattice and the disordered chain

    International Nuclear Information System (INIS)

    Khoruzhenko, B.A.; Pastur, L.A.; Shcherbina, M.V.

    1989-01-01

    The critical temperature of the generalized spherical model (large-component limit of the classical Heisenberg model) on a cubic lattice, in which every bond is decorated by L spins, is found. When L → ∞, the asymptotics of the critical temperature is T_c ∼ aL⁻¹. The reduction in the number of spherical constraints for the model is found to be fairly large. The free energy of the one-dimensional generalized spherical model with random nearest-neighbor interaction is calculated.

  7. An explanatory model of maths achievement: Perceived parental involvement and academic motivation.

    Science.gov (United States)

    Rodríguez, Susana; Piñeiro, Isabel; Gómez-Taibo, Mª L; Regueiro, Bibiana; Estévez, Iris; Valle, Antonio

    2017-05-01

    Although numerous studies have tried to explain performance in maths, very few have deeply explored the relationship between different variables and how they jointly explain mathematical performance. With a sample of 897 students in 5th and 6th grade of Primary Education and using structural equation modeling (SEM), this study analyzes how the perception of parents’ beliefs is related to children’s beliefs, their involvement in mathematical tasks and their performance. Perceived parental involvement contributes to children’s motivation in mathematics. Direct supervision of students’ academic work by parents may increase concern about the children’s image and marks, but not their academic performance. In fact, maths achievement depends directly and positively on the parents’ expectations and the children’s maths self-efficacy, and negatively on the parents’ help with tasks and on performance goal orientation. Perceived parental involvement contributes to children’s motivation in maths essentially by conveying confidence in their abilities and showing interest in their progress and schoolwork.

  8. University Physics Students' Use of Models in Explanations of Phenomena Involving Interaction between Metals and Electromagnetic Radiation.

    Science.gov (United States)

    Redfors, Andreas; Ryder, Jim

    2001-01-01

    Examines third year university physics students' use of models when explaining familiar phenomena involving interaction between metals and electromagnetic radiation. Concludes that few students use a single model consistently. (Contains 27 references.) (DDR)

  9. A Computational Model of a Descending Mechanosensory Pathway Involved in Active Tactile Sensing.

    Directory of Open Access Journals (Sweden)

    Jan M Ache

    2015-07-01

    Full Text Available Many animals, including humans, rely on active tactile sensing to explore the environment and negotiate obstacles, especially in the dark. Here, we model a descending neural pathway that mediates short-latency proprioceptive information from a tactile sensor on the head to thoracic neural networks. We studied the nocturnal stick insect Carausius morosus, a model organism for the study of adaptive locomotion, including tactually mediated reaching movements. Like mammals, insects need to move their tactile sensors for probing the environment. Cues about sensor position and motion are therefore crucial for the spatial localization of tactile contacts and the coordination of fast, adaptive motor responses. Our model explains how proprioceptive information about motion and position of the antennae, the main tactile sensors in insects, can be encoded by a single type of mechanosensory afferents. Moreover, it explains how this information is integrated and mediated to thoracic neural networks by a diverse population of descending interneurons (DINs. First, we quantified responses of a DIN population to changes in antennal position, motion and direction of movement. Using principal component (PC analysis, we find that only two PCs account for a large fraction of the variance in the DIN response properties. We call the two-dimensional space spanned by these PCs 'coding-space' because it captures essential features of the entire DIN population. Second, we model the mechanoreceptive input elements of this descending pathway, a population of proprioceptive mechanosensory hairs monitoring deflection of the antennal joints. Finally, we propose a computational framework that can model the response properties of all important DIN types, using the hair field model as its only input. This DIN model is validated by comparison of tuning characteristics, and by mapping the modelled neurons into the two-dimensional coding-space of the real DIN population. This
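The dimensionality-reduction step described above (a DIN population whose response properties collapse onto two principal components) can be sketched with plain NumPy on synthetic data; nothing below comes from the actual recordings:

```python
import numpy as np

def leading_components(X, k=2):
    """PCA via SVD of the mean-centred data matrix X (rows = neurons,
    columns = response features). Returns the first k component axes
    and the fraction of total variance they explain."""
    Xc = X - X.mean(axis=0)
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (s ** 2) / (s ** 2).sum()
    return Vt[:k], float(explained[:k].sum())

# Synthetic population: 50 'neurons' whose 6 response features are
# driven by two latent factors plus a little noise, mimicking a
# population well captured by a two-dimensional coding space.
rng = np.random.default_rng(0)
latent = rng.normal(size=(50, 2))
mixing = rng.normal(size=(2, 6))
X = latent @ mixing + 0.05 * rng.normal(size=(50, 6))

axes, frac = leading_components(X, k=2)
print(axes.shape, frac > 0.95)   # two PCs capture nearly all variance here
```

Projecting a modelled neuron into the plane spanned by `axes` is the analogue of mapping the model DINs into the paper's two-dimensional coding space.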

  10. Involving regional expertise in nationwide modeling for adequate prediction of climate change effects on different demands for fresh water

    Science.gov (United States)

    de Lange, W. J.

    2014-05-01

    Wim J. de Lange, Geert F. Prinsen, Jacco H. Hoogewoud, Ab A. Veldhuizen, Joachim Hunink, Erik F.W. Ruijgh, Timo Kroon. Nationwide modeling aims to produce a balanced distribution of climate change effects (e.g. harm to crops) and possible compensation (e.g. volumes of fresh water) based on consistent calculations. The present work is based on the Netherlands Hydrological Instrument (NHI, www.nhi.nu), which is a national, integrated, hydrological model that simulates distribution, flow and storage of all water in the surface water and groundwater systems. The instrument is developed to assess the impact of water use on the land surface (sprinkling of crops, drinking water) and in surface water (navigation, cooling). The regional expertise involved in the development of the NHI comes from all parties involved in the use, production and management of water, such as water boards, drinking water supply companies, provinces, NGOs, and so on. Adequate prediction implies that the model computes changes of the order of magnitude that is relevant to the effects. In scenarios related to drought, adequate prediction applies to the water demand and the hydrological effects during average, dry, very dry and extremely dry periods. The NHI acts as a part of the so-called Deltamodel (www.deltamodel.nl), which aims to predict the effects of climate change, and compensating measures, both on safety against flooding and on water shortage during drought. To assess the effects, a limited number of well-defined scenarios is used within the Deltamodel. The effects on the demand for fresh water consist of an increase of the demand, e.g. for surface water level control to prevent dike bursts, for flushing salt from ditches, for sprinkling of crops, for preserving wet nature, and so on. Many of the effects are dealt with by regional and local parties. Therefore, these parties have a large interest in the outcome of the scenario analyses, and they participated in the assessment of the NHI prior to the start of the analyses.

  11. Social Work Involvement in Advance Care Planning: Findings from a Large Survey of Social Workers in Hospice and Palliative Care Settings.

    Science.gov (United States)

    Stein, Gary L; Cagle, John G; Christ, Grace H

    2017-03-01

    Few data are available describing the involvement and activities of social workers in advance care planning (ACP). We sought to provide data about (1) social worker involvement and leadership in ACP conversations with patients and families; and (2) the extent of functions and activities when these discussions occur. We conducted a large web-based survey of social workers employed in hospice, palliative care, and related settings to explore their role, participation, and self-rated competency in facilitating ACP discussions. Respondents were recruited through the Social Work Hospice and Palliative Care Network and the National Hospice and Palliative Care Organization. Descriptive analyses were conducted on the full sample of respondents (N = 641) and a subsample of clinical social workers (N = 456). Responses were analyzed to explore differences in ACP involvement by practice setting. Most clinical social workers (96%) reported that social workers in their department are conducting ACP discussions with patients/families. Majorities also participate in, and lead, ACP discussions (69% and 60%, respectively). Most respondents report that social workers are responsible for educating patients/families about ACP options (80%) and are the team members responsible for documenting ACP (68%). Compared with other settings, oncology and inpatient palliative care social workers were less likely to be responsible for ensuring that patients/families are informed of ACP options and documenting ACP preferences. Social workers are prominently involved in facilitating, leading, and documenting ACP discussions. Policy-makers, administrators, and providers should incorporate the vital contributions of social work professionals in policies and programs supporting ACP.

  12. Simulation of a Large Wildfire in a Coupled Fire-Atmosphere Model

    Directory of Open Access Journals (Sweden)

    Jean-Baptiste Filippi

    2018-06-01

    Full Text Available The Aullene fire devastated more than 3000 ha of Mediterranean maquis and pine forest in July 2009. The simulation of combustion processes, as well as of atmospheric dynamics, represents a challenge for such scenarios because of the various scales involved, from the scale of the individual flames to the larger regional scale. A coupled approach between the Meso-NH (Meso-scale Non-Hydrostatic) atmospheric model running in LES (Large Eddy Simulation) mode and the ForeFire fire spread model is proposed for predicting fine- to large-scale effects of this extreme wildfire, showing that such a simulation is possible in a reasonable time using current supercomputers. The coupling involves the surface wind to drive the fire, while heat from combustion and water vapor fluxes are injected into the atmosphere at each atmospheric time step. To be representative of the phenomenon, a sub-meter resolution was used for the simulation of the fire front, while atmospheric simulations were performed with nested grids from 2400-m to 50-m resolution. Simulations were run with or without feedback from the fire to the atmospheric model, or without coupling from the atmosphere to the fire. In the two-way mode, the burnt area was reproduced with a good degree of realism at the local scale, where an acceleration in the valley wind and over sloping terrain pushed the fire line to locations in accordance with fire passing point observations. At the regional scale, the simulated fire plume compares well with the satellite image. The study explores the strong fire-atmosphere interactions leading to intense convective updrafts extending above the boundary layer, significant downdrafts behind the fire line in the upper plume, and horizontal wind speeds feeding strong inflow into the base of the convective updrafts. The fire-induced dynamics is driven by strong near-surface sensible heat fluxes reaching maximum values of 240 kW m⁻². The dynamical production of turbulent kinetic
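The two-way coupling described above (surface wind drives the fire at each atmospheric time step, while heat flux is injected back into the atmosphere) follows a simple exchange pattern. The sketch below shows only that pattern; all the physics is placeholder, not Meso-NH/ForeFire code:

```python
def fire_step(front_position, surface_wind):
    """Advance the fire front; spread rate increases with wind
    (toy spread law, not a real rate-of-spread model)."""
    spread_rate = 0.1 + 0.05 * surface_wind      # m/s, hypothetical
    heat_flux = 240.0 * spread_rate              # kW/m^2, toy scaling
    return front_position + spread_rate, heat_flux

def atmosphere_step(surface_wind, heat_flux):
    """Fire heating strengthens the fire-induced inflow (toy feedback)."""
    return surface_wind + 0.01 * heat_flux

wind, front = 3.0, 0.0
for _ in range(10):                              # 10 coupled time steps
    front, flux = fire_step(front, wind)         # fire driven by the wind
    wind = atmosphere_step(wind, flux)           # heat fed back to the air
print(front > 2.0, wind > 3.0)                   # both grow under coupling
```

Running the same loop with `atmosphere_step` disabled corresponds to the paper's one-way (uncoupled) experiments, which is why those runs spread the fire more slowly.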

  13. Large-eddy simulation of the temporal mixing layer using the Clark model

    NARCIS (Netherlands)

    Vreman, A.W.; Geurts, B.J.; Kuerten, J.G.M.

    1996-01-01

    The Clark model for the turbulent stress tensor in large-eddy simulation is investigated from a theoretical and computational point of view. In order to be applicable to compressible turbulent flows, the Clark model has been reformulated. Actual large-eddy simulation of a weakly compressible,

  14. The Cauchy problem for a model of immiscible gas flow with large data

    Energy Technology Data Exchange (ETDEWEB)

    Sande, Hilde

    2008-12-15

    The thesis consists of an introduction and two papers: 1. The solution of the Cauchy problem with large data for a model of a mixture of gases. 2. Front tracking for a model of immiscible gas flow with large data.

  15. Large-N limit of the two-Hermitian-matrix model by the hidden BRST method

    International Nuclear Information System (INIS)

    Alfaro, J.

    1993-01-01

    This paper discusses the large-N limit of the two-Hermitian-matrix model in zero dimensions, using the hidden Becchi-Rouet-Stora-Tyutin method. A system of integral equations previously found is solved, showing that it contained the exact solution of the model in leading order of large N

  16. Modeling and analysis of large-eddy simulations of particle-laden turbulent boundary layer flows

    KAUST Repository

    Rahman, Mustafa M.; Samtaney, Ravi

    2017-01-01

    layer employs stretched spiral vortex subgrid-scale model and a virtual wall model similar to the work of Cheng, Pullin & Samtaney (J. Fluid Mech., 2015). This LES model is virtually parameter free and involves no active filtering of the computed

  17. Ethics literacy and 'ethics university'. Two intertwined models for public involvement and empowerment in bioethics

    Directory of Open Access Journals (Sweden)

    Daniel eStrech

    2016-02-01

    Full Text Available Background: Informing lay citizens about complex health-related issues and their related ethical, legal and social aspects (ELSA) is one important component of democratic health care/research governance. Public information activities may be especially valuable when they are used in multi-staged processes that also include elements of information and deliberation. Objectives: This paper presents a new model for a public involvement activity on ELSA (ethics university) and evaluation data for a pilot event. Methods: The ethics university is structurally based on the ‘patient university’, an already established institution in some German medical schools, and the newly developed concept of ‘ethics literacy’. The concept of ‘ethics literacy’ consists of three levels: information, interaction, and reflection. The pilot project consisted of two series of events (lasting four days each). Results: The thematic focus of the ethics university pilot was the ELSA of regenerative medicine. In this pilot the concept of ‘ethics literacy’ could be validated as its components were clearly visible in discussions with participants at the end of the event. The participants reacted favorably to the ethics university by stating that they felt more educated with regard to the ELSA of regenerative medicine and with regard to their own abilities in normative reasoning on this topic. Conclusion: The ethics university is an innovative model for public involvement and empowerment activities on ELSA, theoretically underpinned by a concept for ‘ethics literacy’. This model deserves further refinement, testing in other ELSA topics and evaluation in outcome research.

  18. Air quality models and unusually large ozone increases: Identifying model failures, understanding environmental causes, and improving modeled chemistry

    Science.gov (United States)

    Couzo, Evan A.

    Several factors combine to make ozone (O3) pollution in Houston, Texas, unique when compared to other metropolitan areas. These include complex meteorology, intense clustering of industrial activity, and significant precursor emissions from the heavily urbanized eight-county area. Decades of air pollution research have borne out two different causes, or conceptual models, of O3 formation. One conceptual model describes a gradual region-wide increase in O3 concentrations "typical" of many large U.S. cities. The other conceptual model links episodic emissions of volatile organic compounds to spatially limited plumes of high O3, which lead to large hourly increases that have exceeded 100 parts per billion (ppb) per hour. These large hourly increases are known to lead to violations of the federal O3 standard and impact Houston's status as a non-attainment area. There is a need to further understand and characterize the causes of peak O3 levels in Houston and simulate them correctly so that environmental regulators can find the most cost-effective pollution controls. This work provides a detailed understanding of unusually large O3 increases in the natural and modeled environments. First, we probe regulatory model simulations and assess their ability to reproduce the observed phenomenon. As configured for the purpose of demonstrating future attainment of the O3 standard, the model fails to predict the spatially limited O3 plumes observed in Houston. Second, we combine ambient meteorological and pollutant measurement data to identify the most likely geographic origins and preconditions of the concentrated O3 plumes. We find evidence that the O3 plumes are the result of photochemical activity accelerated by industrial emissions. And, third, we implement changes to the modeled chemistry to add missing formation mechanisms of nitrous acid, which is an important radical precursor. Radicals control the chemical reactivity of atmospheric systems, and perturbations to
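The signature discussed above (hourly O3 increases exceeding 100 ppb per hour, as opposed to a gradual regional rise) is straightforward to flag in a monitor time series; the data below are synthetic:

```python
def flag_large_hourly_increases(hourly_o3, threshold_ppb=100.0):
    """Return indices of hours whose O3 rose by more than threshold_ppb
    relative to the previous hour -- the concentrated-plume signature,
    as opposed to a gradual region-wide increase."""
    return [i for i in range(1, len(hourly_o3))
            if hourly_o3[i] - hourly_o3[i - 1] > threshold_ppb]

# Synthetic monitor record (ppb): a gradual rise, then a plume at hour 5.
obs = [35, 42, 50, 58, 61, 175, 160, 120, 90]
print(flag_large_hourly_increases(obs))   # -> [5]
```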

  19. Clinical response to a lomustine/cytarabine-based chemotherapy protocol in a case of canine large granular lymphocyte T-cell lymphoma with spinal involvement

    Directory of Open Access Journals (Sweden)

    Elisabetta Treggiari

    2018-05-01

    Full Text Available A 7-year-old, female neutered cross-breed dog was referred to our institution with a history of progressive hind limb weakness, which then progressed to paraplegia. An MRI of the spine revealed severe meningeal infiltrate consistent with lymphoma involvement, located at the level of L2-L7 with concurrent lymph node enlargement and abnormal bone marrow. Abdominal ultrasonography also identified changes in the spleen and confirmed enlargement of the lumbar aortic lymph node. Cytology of lymph nodes and spleen confirmed a high-grade lymphoma with features of a large granular lymphocyte (LGL) variant; PCR for antigen receptor re-arrangements (PARR) was positive for a clonal T-cell receptor rearrangement. The dog was started on a chemotherapy protocol incorporating lomustine and cytarabine and had a rapid improvement in neurological status. Chemotherapy was continued until relapse, and rescue treatment was used at that time. The dog was euthanased at the time of recurrence of neurological signs, 195 days after medical treatment was started. This case report suggests that combination chemotherapy may be of use when treating LGL lymphoma with spinal involvement, and survival time may potentially exceed 6 months.

  20. Possible Role of GADD45γ Methylation in Diffuse Large B-Cell Lymphoma: Does It Affect the Progression and Tissue Involvement?

    Directory of Open Access Journals (Sweden)

    İkbal Cansu Barış

    2015-12-01

    Full Text Available INTRODUCTION: Diffuse large B-cell lymphoma (DLBCL) is the most common type of non-Hodgkin lymphoma among adults and is characterized by heterogeneous clinical, immunophenotypic, and genetic features. Different mechanisms deregulating cell cycle and apoptosis play a role in the pathogenesis of DLBCL. Growth arrest DNA damage-inducible 45 (GADD45γ) is an important gene family involved in these mechanisms. The aims of this study are to determine the frequency of GADD45γ methylation, to evaluate the correlation between GADD45γ methylation and protein expression, and to investigate the relation between methylation status and clinicopathologic parameters in DLBCL tissues and reactive lymphoid node tissues from patients with reactive lymphoid hyperplasia. METHODS: Thirty-six tissue samples of DLBCL and 40 nonmalignant reactive lymphoid node tissues were analyzed in this study. Methylation-sensitive high-resolution melting analysis was used for the determination of GADD45γ methylation status. The GADD45γ protein expression was determined by immunohistochemistry. RESULTS: GADD45γ methylation was frequent (50.0%) in DLBCL. It was also significantly higher in advanced-stage tumors compared with early-stage (p=0.041). In contrast, unmethylated GADD45γ was associated with nodal involvement as the primary anatomical site (p=0.040). DISCUSSION AND CONCLUSION: The results of this study show that, in contrast to solid tumors, the frequency of GADD45γ methylation is higher and this epigenetic alteration of GADD45γ may be associated with progression in DLBCL. In addition, nodal involvement is more likely to be present in patients with unmethylated GADD45γ.

  1. The emission of α,ω-diphenylpolyenes: A model involving several molecular structures

    International Nuclear Information System (INIS)

    Catalan, Javier

    2007-01-01

    Available photophysical evidence for the emission of α,ω-diphenylpolyenes is shown to be consistent with a previously reported model [J. Catalan, J.L.G. de Paz, J. Chem. Phys. 124 (2006) 034306] involving two electronically excited molecular structures of 1Bu and Cs symmetry, respectively. The 1Bu structure is produced by direct light absorption from the all-trans form of the α,ω-diphenylpolyene in the ground state, and its emission exhibits mirror symmetry with respect to the absorption of the compound. On the other hand, the Cs structure is generated from the 1Bu structure of the α,ω-diphenylpolyene by rotation about a C-C single bond in the polyene chain, its emission being red-shifted with respect to the previous one and exhibiting markedly decreased vibrational structure. At room temperature, both emissions give excitation spectra that are ascribed to the first absorption band of the compound. It is shown that some polyenes may exist in more than one structure of Cs symmetry in the excited electronic state with lower energy than that of the 1Bu state, from which the Cs structures are produced. Hence, more than one electronic structure may be involved in the deactivation processes of the 1Bu state, which is initially populated upon photo-excitation of the polyene molecule in the ground electronic state.

  2. Cadmium Handling, Toxicity and Molecular Targets Involved during Pregnancy: Lessons from Experimental Models

    Directory of Open Access Journals (Sweden)

    Tania Jacobo-Estrada

    2017-07-01

    Full Text Available Even decades after the discovery of cadmium (Cd) toxicity, research on this heavy metal is still a hot topic in the scientific literature: as we wrote this review, more than 1440 scientific articles had been published and listed by the PubMed.gov website during 2017. Cadmium is one of the most common and harmful heavy metals present in our environment. Since pregnancy is a very particular physiological condition that could impact and modify essential pathways involved in the handling of Cd, prenatal life is a critical stage for exposure to this non-essential element. To give the reader an overview of the possible mechanisms involved in the multiple-organ toxic effects in fetuses after exposure to Cd during pregnancy, we decided to compile some of the most relevant studies performed in experimental models and to summarize the advances in this field, such as the distribution of Cd and the factors that could alter it (diet, binding proteins and membrane transporters), the Cd-induced toxicity in dams (preeclampsia, fertility, kidney injury, alterations in essential element homeostasis and bone mineralization), in the placenta and in the fetus (teratogenicity, central nervous system, liver and kidney).

  3. Clustering mechanism of oxocarboxylic acids involving hydration reaction: Implications for the atmospheric models

    Science.gov (United States)

    Liu, Ling; Kupiainen-Määttä, Oona; Zhang, Haijie; Li, Hao; Zhong, Jie; Kurtén, Theo; Vehkamäki, Hanna; Zhang, Shaowen; Zhang, Yunhong; Ge, Maofa; Zhang, Xiuhui; Li, Zesheng

    2018-06-01

    The formation of atmospheric aerosol particles from condensable gases is a dominant source of particulate matter in the boundary layer, but the mechanism is still ambiguous. During the clustering process, precursors with different reactivities can induce various chemical reactions in addition to the formation of hydrogen bonds. However, the clustering mechanism involving chemical reactions is rarely considered in most nucleation process models. Oxocarboxylic acids are common constituents of secondary organic aerosol, but the role of oxocarboxylic acids in secondary organic aerosol formation is still not fully understood. In this paper, glyoxylic acid, the simplest and most abundant atmospheric oxocarboxylic acid, has been selected as a representative oxocarboxylic acid in order to study the clustering mechanism involving hydration reactions, using density functional theory combined with the Atmospheric Clusters Dynamic Code. The hydration reaction of glyoxylic acid can occur either in the gas phase or during the clustering process. Under atmospheric conditions, the total conversion ratio of glyoxylic acid to its hydration product (2,2-dihydroxyacetic acid) in both the gas phase and clusters can be up to 85%, and the product can further participate in the clustering process. The differences in cluster structures and properties induced by the hydration reaction lead to significant differences in cluster formation rates and pathways at relatively low temperatures.

  4. Processes and parameters involved in modeling radionuclide transport from bedded salt repositories. Final report. Technical memorandum

    International Nuclear Information System (INIS)

    Evenson, D.E.; Prickett, T.A.; Showalter, P.A.

    1979-07-01

    The parameters necessary to model radionuclide transport in salt beds are identified and described. A proposed plan for disposal of the radioactive wastes generated by nuclear power plants is to store waste canisters in repository sites contained in stable salt formations approximately 600 meters below the ground surface. Among the principal radioactive wastes contained in these canisters will be radioactive isotopes of neptunium, americium, uranium, and plutonium along with many highly radioactive fission products. A concern with this form of waste disposal is the possibility of ground-water flow occurring in the salt beds and endangering water supplies and the public health. Specifically, the research investigated the processes involved in the movement of radioactive wastes from the repository site by groundwater flow. Since the radioactive waste canisters also generate heat, temperature is an important factor. Among the processes affecting movement of radioactive wastes from a repository site in a salt bed are thermal conduction, groundwater movement, ion exchange, radioactive decay, dissolution and precipitation of salt, dispersion and diffusion, adsorption, and thermomigration. In addition, structural changes in the salt beds as a result of temperature changes are important. Based upon the half-lives of the radioactive wastes, the period of concern is on the order of a million years. As a result, major geologic phenomena that could affect both the salt bed and groundwater flow in the salt beds were considered. These phenomena include items such as volcanism, faulting, erosion, glaciation, and the impact of meteorites. CDM reviewed all of the critical processes involved in regional groundwater movement of radioactive wastes and identified and described the parameters that must be included to mathematically model their behavior. In addition, CDM briefly reviewed available techniques to measure these parameters

  5. De novo characterization of the spleen transcriptome of the large yellow croaker (Pseudosciaena crocea) and analysis of the immune relevant genes and pathways involved in the antiviral response

    KAUST Repository

    Mu, Yinnan

    2014-05-12

    The large yellow croaker (Pseudosciaena crocea) is an economically important marine fish in China. To understand the molecular basis for antiviral defense in this species, we used Illumina paired-end sequencing to characterize the spleen transcriptome of polyriboinosinic:polyribocytidylic acid [poly(I:C)]-induced large yellow croakers. The library produced 56,355,728 reads, which were assembled into 108,237 contigs. In total, 15,192 unigenes were identified from this transcriptome. Gene ontology analysis showed that 4,759 genes were involved in three major functional categories: biological process, cellular component, and molecular function. We further ascertained that numerous consensus sequences were homologous to known immune-relevant genes. Kyoto Encyclopedia of Genes and Genomes orthology mapping annotated 5,389 unigenes and identified numerous immune-relevant pathways. These immune-relevant genes and pathways revealed major antiviral immunity effectors, including but not limited to: pattern recognition receptors, adaptors and signal transducers, the interferons and interferon-stimulated genes, inflammatory cytokines and receptors, complement components, and B-cell and T-cell antigen activation molecules. Moreover, the partial genes of the Toll-like receptor signaling pathway, RIG-I-like receptor signaling pathway, Janus kinase-Signal Transducer and Activator of Transcription (JAK-STAT) signaling pathway, and T-cell receptor (TCR) signaling pathway were found to be changed after poly(I:C) induction by real-time polymerase chain reaction (PCR) analysis, suggesting that these signaling pathways may be regulated by poly(I:C), a viral mimic. Overall, the antivirus-related genes and signaling pathways that were identified in response to poly(I:C) challenge provide valuable leads for further investigation of the antiviral defense mechanism in the large yellow croaker. © 2014 Mu et al.

  6. De novo characterization of the spleen transcriptome of the large yellow croaker (Pseudosciaena crocea) and analysis of the immune relevant genes and pathways involved in the antiviral response.

    Directory of Open Access Journals (Sweden)

    Yinnan Mu

    Full Text Available The large yellow croaker (Pseudosciaena crocea) is an economically important marine fish in China. To understand the molecular basis for antiviral defense in this species, we used Illumina paired-end sequencing to characterize the spleen transcriptome of polyriboinosinic:polyribocytidylic acid [poly(I:C)]-induced large yellow croakers. The library produced 56,355,728 reads, which were assembled into 108,237 contigs. In total, 15,192 unigenes were identified from this transcriptome. Gene ontology analysis showed that 4,759 genes were involved in three major functional categories: biological process, cellular component, and molecular function. We further ascertained that numerous consensus sequences were homologous to known immune-relevant genes. Kyoto Encyclopedia of Genes and Genomes orthology mapping annotated 5,389 unigenes and identified numerous immune-relevant pathways. These immune-relevant genes and pathways revealed major antiviral immunity effectors, including but not limited to: pattern recognition receptors, adaptors and signal transducers, the interferons and interferon-stimulated genes, inflammatory cytokines and receptors, complement components, and B-cell and T-cell antigen activation molecules. Moreover, the partial genes of the Toll-like receptor signaling pathway, RIG-I-like receptor signaling pathway, Janus kinase-Signal Transducer and Activator of Transcription (JAK-STAT) signaling pathway, and T-cell receptor (TCR) signaling pathway were found to be changed after poly(I:C) induction by real-time polymerase chain reaction (PCR) analysis, suggesting that these signaling pathways may be regulated by poly(I:C), a viral mimic. Overall, the antivirus-related genes and signaling pathways that were identified in response to poly(I:C) challenge provide valuable leads for further investigation of the antiviral defense mechanism in the large yellow croaker.

  7. De novo characterization of the spleen transcriptome of the large yellow croaker (Pseudosciaena crocea) and analysis of the immune relevant genes and pathways involved in the antiviral response

    KAUST Repository

    Mu, Yinnan; Li, Mingyu; Ding, Feng; Ding, Yang; Ao, Jingqun; Hu, Songnian; Chen, Xinhua

    2014-01-01

    The large yellow croaker (Pseudosciaena crocea) is an economically important marine fish in China. To understand the molecular basis for antiviral defense in this species, we used Illumina paired-end sequencing to characterize the spleen transcriptome of polyriboinosinic:polyribocytidylic acid [poly(I:C)]-induced large yellow croakers. The library produced 56,355,728 reads, which were assembled into 108,237 contigs. In total, 15,192 unigenes were identified from this transcriptome. Gene ontology analysis showed that 4,759 genes were involved in three major functional categories: biological process, cellular component, and molecular function. We further ascertained that numerous consensus sequences were homologous to known immune-relevant genes. Kyoto Encyclopedia of Genes and Genomes orthology mapping annotated 5,389 unigenes and identified numerous immune-relevant pathways. These immune-relevant genes and pathways revealed major antiviral immunity effectors, including but not limited to: pattern recognition receptors, adaptors and signal transducers, the interferons and interferon-stimulated genes, inflammatory cytokines and receptors, complement components, and B-cell and T-cell antigen activation molecules. Moreover, the partial genes of the Toll-like receptor signaling pathway, RIG-I-like receptor signaling pathway, Janus kinase-Signal Transducer and Activator of Transcription (JAK-STAT) signaling pathway, and T-cell receptor (TCR) signaling pathway were found to be changed after poly(I:C) induction by real-time polymerase chain reaction (PCR) analysis, suggesting that these signaling pathways may be regulated by poly(I:C), a viral mimic. Overall, the antivirus-related genes and signaling pathways that were identified in response to poly(I:C) challenge provide valuable leads for further investigation of the antiviral defense mechanism in the large yellow croaker. © 2014 Mu et al.

  8. A wave propagation model of blood flow in large vessels using an approximate velocity profile function

    NARCIS (Netherlands)

    Bessems, D.; Rutten, M.C.M.; Vosse, van de F.N.

    2007-01-01

    Lumped-parameter models (zero-dimensional) and wave-propagation models (one-dimensional) for pressure and flow in large vessels, as well as fully three-dimensional fluid–structure interaction models for pressure and velocity, can contribute valuably to answering physiological and patho-physiological

  9. Beyond the Situational Model: Bystander Action Consequences to Intervening in Situations Involving Sexual Violence.

    Science.gov (United States)

    Moschella, Elizabeth A; Bennett, Sidney; Banyard, Victoria L

    2016-03-02

    Sexual violence is a widely reported problem in college communities. To date, research has largely focused on bystander intervention as one way to help prevent this problem. Although perceived consequences of bystander intervention, such as the weighting of costs and benefits, have been examined, little research has explored what happens after a bystander intervenes. The current study investigated what bystanders report as perceived outcomes and actual consequences of their bystander actions in response to risk for sexual assault. Of the 545 surveyed, 150 reported having taken bystander action in the past month and qualitatively described their bystander behavior and the responses of the parties involved. A range of behavioral responses and intervention methods were identified. The most frequent responses reported by participants were victims conveying positive and perpetrators conveying negative responses. Different types of helping were associated with bystanders reporting different types of responses to their actions. Future research should incorporate additional measures of consequences of bystander intervention. Implications for policy and bystander intervention programs are discussed, stressing the need for bystander intervention programs to address a range of bystander behaviors and explain the potential consequences and risks of intervening. © The Author(s) 2016.

  10. A Hybrid Neuro-Fuzzy Model For Integrating Large Earth-Science Datasets

    Science.gov (United States)

    Porwal, A.; Carranza, J.; Hale, M.

    2004-12-01

    A GIS-based hybrid neuro-fuzzy approach to integration of large earth-science datasets for mineral prospectivity mapping is described. It implements a Takagi-Sugeno type fuzzy inference system in the framework of a four-layered feed-forward adaptive neural network. Each unique combination of the datasets is considered a feature vector whose components are derived by knowledge-based ordinal encoding of the constituent datasets. A subset of feature vectors with a known output target vector (i.e., unique conditions known to be associated with either a mineralized or a barren location) is used for the training of an adaptive neuro-fuzzy inference system. Training involves iterative adjustment of parameters of the adaptive neuro-fuzzy inference system using a hybrid learning procedure for mapping each training vector to its output target vector with minimum sum of squared error. The trained adaptive neuro-fuzzy inference system is used to process all feature vectors. The output for each feature vector is a value that indicates the extent to which a feature vector belongs to the mineralized class or the barren class. These values are used to generate a prospectivity map. The procedure is demonstrated by an application to regional-scale base metal prospectivity mapping in a study area located in the Aravalli metallogenic province (western India). A comparison of the hybrid neuro-fuzzy approach with pure knowledge-driven fuzzy and pure data-driven neural network approaches indicates that the former offers a superior method for integrating large earth-science datasets for predictive spatial mathematical modelling.
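    As a rough illustration of the Takagi-Sugeno inference step at the core of such a neuro-fuzzy system, the sketch below computes a firing strength per rule from Gaussian memberships and returns the firing-strength-weighted average of linear rule consequents. The rule parameters here are hypothetical, not taken from the paper; in an adaptive neuro-fuzzy inference system these are the parameters tuned during training.

```python
import math

def gaussian_mf(x, center, sigma):
    """Gaussian membership degree of x in a fuzzy set."""
    return math.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def takagi_sugeno(x1, x2, rules):
    """First-order Takagi-Sugeno inference: each rule has two Gaussian
    antecedents and a linear consequent p*x1 + q*x2 + r. The crisp output
    is the firing-strength-weighted average of the rule consequents."""
    weights, outputs = [], []
    for (c1, s1), (c2, s2), (p, q, r) in rules:
        w = gaussian_mf(x1, c1, s1) * gaussian_mf(x2, c2, s2)  # AND via product
        weights.append(w)
        outputs.append(p * x1 + q * x2 + r)
    return sum(w * o for w, o in zip(weights, outputs)) / sum(weights)

# Two illustrative rules: ((c1, s1), (c2, s2), (p, q, r)).
rules = [
    ((0.0, 1.0), (0.0, 1.0), (1.0, 1.0, 0.0)),
    ((1.0, 1.0), (1.0, 1.0), (0.5, 0.5, 1.0)),
]
print(round(takagi_sugeno(0.5, 0.5, rules), 4))  # both rules fire equally -> 1.25
```

    Training an ANFIS amounts to adjusting the membership centers/widths and the consequent coefficients so that this output matches the target (mineralized vs. barren) with minimum squared error.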

  11. A structural model of customer satisfaction and trust in vendors involved in mobile commerce

    Directory of Open Access Journals (Sweden)

    Suki, N.M.

    2011-01-01

    Full Text Available The purpose of this paper is to provide an explanation of factors influencing customer satisfaction and trust in vendors involved in mobile commerce (m-commerce). The study sample consists of 200 respondents. Data were analyzed by employing structural equation modelling (SEM) supported by AMOS 5.0 with maximum likelihood estimation in order to test the proposed hypotheses. The proposed model was empirically tested and results confirmed that users’ satisfaction with vendors in m-commerce was not significantly influenced by two antecedents of the vendor’s website quality, interactivity and customisation, nor by two antecedents of mobile technology quality, usefulness and ease-of-use. Meanwhile, users’ trust towards the vendor in m-commerce is affected by users’ satisfaction with the vendor. Interestingly, vendor quality dimensions such as responsiveness and brand image influence customer satisfaction with vendors in m-commerce. Based on the findings, vendors in m-commerce should focus on the factors which generate more satisfaction and trust among customers. For vendors in general, the results can help them to better develop customer trust in m-commerce. Vendors of m-commerce can provide a more satisfying experience for customers.

  12. Yeast Mitochondrial Interactosome Model: Metabolon Membrane Proteins Complex Involved in the Channeling of ADP/ATP

    Directory of Open Access Journals (Sweden)

    Benjamin Clémençon

    2012-02-01

    Full Text Available The existence of a mitochondrial interactosome (MI) is now well established in mammalian cells, but the exact composition of this super-complex is not precisely known, and its organization seems to be different from that in yeast. One major difference is the absence of mitochondrial creatine kinase (MtCK) in yeast, unlike that described in the organization model of the MI, especially in cardiac, skeletal muscle and brain cells. The aim of this review is to provide a detailed description of the different partner proteins involved in the synergistic ADP/ATP transport across the mitochondrial membranes in the yeast Saccharomyces cerevisiae and to propose a new mitochondrial interactosome model. The ADP/ATP (Aacp) and inorganic phosphate (PiC) carriers as well as the VDAC (or mitochondrial porin) catalyze the import and export of ADP, ATP and Pi across the mitochondrial membranes. Aacp and PiC appear to be associated with the ATP synthase, which consists of two nanomotors (F0, F1), and under specific conditions they form the ATP synthasome. Identification and characterization of such a complex were described for the first time by Pedersen and co-workers in 2003.

  13. Involving mental health service users in suicide-related research: a qualitative inquiry model.

    Science.gov (United States)

    Lees, David; Procter, Nicholas; Fassett, Denise; Handley, Christine

    2016-03-01

    To describe the research model developed and successfully deployed as part of a multi-method qualitative study investigating suicidal service-users' experiences of mental health nursing care. Quality mental health care is essential to limiting the occurrence and burden of suicide; however, there is a lack of relevant research informing practice in this context. Research utilising first-person accounts of suicidality is of particular importance to expanding the existing evidence base, but conducting ethical research to support this imperative is challenging. The model discussed here illustrates specific and more generally applicable principles for qualitative research regarding sensitive topics and involving potentially vulnerable service-users. Research involving mental health service users with first-person experience of suicidality requires stakeholder and institutional support, researcher competency, and participant recruitment, consent, confidentiality, support and protection. Research with service users into their experiences of sensitive issues such as suicidality can result in rich and valuable data, and may also provide positive experiences of collaboration and inclusivity. If these challenges are not met, objectification and marginalisation of service-users may be reinforced, and limitations in the evidence base and service provision may be perpetuated.

  14. Comparison of hard scattering models for particle production at large transverse momentum. 2

    International Nuclear Information System (INIS)

    Schiller, A.; Ilgenfritz, E.M.; Kripfganz, J.; Moehring, H.J.; Ranft, G.; Ranft, J.

    1977-01-01

    Single-particle distributions of π+ and π− at large transverse momentum are analysed using various hard-collision models: qq → qq, qq̄ → MM̄, qM → qM. The transverse momentum dependence at θcm = 90° is well described in all models except qq̄ → MM̄. This model has problems with the ratios (pp → π+ + X)/(π± p → π0 + X). Presently available data on rapidity distributions of pions in π−p and pp̄ collisions are at rather low transverse momentum (however large x⊥ = 2p⊥/√s), where it is not obvious that hard-collision models should dominate. The data, in particular the π−/π+ asymmetry, are well described by all models except qM → Mq (CIM). At large values of transverse momentum significant differences between the models are predicted. (author)
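    The scaling variable used in this abstract, x⊥ = 2p⊥/√s, is a simple ratio; a minimal numeric sketch (the example values are illustrative, not taken from the paper):

```python
def x_perp(p_t, sqrt_s):
    """Transverse scaling variable x_perp = 2 * p_T / sqrt(s)."""
    return 2.0 * p_t / sqrt_s

# A modest p_T = 2 GeV at sqrt(s) = 20 GeV already gives x_perp = 0.2,
# illustrating how low-p_T data can still sit at large x_perp.
print(x_perp(2.0, 20.0))  # -> 0.2
```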

  15. Using radar altimetry to update a large-scale hydrological model of the Brahmaputra river basin

    DEFF Research Database (Denmark)

    Finsen, F.; Milzow, Christian; Smith, R.

    2014-01-01

    Measurements of river and lake water levels from space-borne radar altimeters (past missions include ERS, Envisat, Jason, Topex) are useful for calibration and validation of large-scale hydrological models in poorly gauged river basins. Altimetry data availability over the downstream reaches of the Brahmaputra is excellent (17 high-quality virtual stations from ERS-2, 6 from Topex and 10 from Envisat are available for the Brahmaputra). In this study, altimetry data are used to update a large-scale Budyko-type hydrological model of the Brahmaputra river basin in real time. Altimetry measurements improved model performance considerably: the Nash-Sutcliffe model efficiency increased from 0.77 to 0.83. Real-time river basin modelling using radar altimetry has the potential to improve the predictive capability of large-scale hydrological models elsewhere on the planet.
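    The Nash-Sutcliffe efficiency quoted above (an increase from 0.77 to 0.83) has a standard definition: one minus the ratio of the squared model error to the variance of the observations. A minimal sketch with made-up discharge values:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model efficiency: 1 - SSE / variance of observations.
    NSE = 1 for a perfect model; NSE <= 0 means the model is no better
    than simply predicting the observed mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / var

# Hypothetical observed vs. simulated discharge series.
obs = [10.0, 12.0, 15.0, 11.0, 9.0]
sim = [10.5, 11.5, 14.0, 11.5, 9.5]
print(round(nash_sutcliffe(obs, sim), 3))  # -> 0.906
```

    An efficiency gain such as 0.77 to 0.83 thus corresponds to a substantial reduction in squared error relative to the natural variability of the observed flows.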

  16. A Model for the Detailed Analysis of Radio Links Involving Tree Canopies

    Directory of Open Access Journals (Sweden)

    F. Perez-Fontan

    2016-12-01

    Full Text Available Detailed analysis of tree canopy interaction with incident radiowaves has mainly been limited to remote sensing for the purpose of forest classification, among many other applications. This represents a monostatic configuration, unlike the case of communication links, which are bistatic. In general, link analyses have been limited to the application of simple, empirical formulas based on the use of specific attenuation values in dB/m and the traversed vegetated mass as, e.g., the model in Recommendation ITU-R P.833-8 [1]. In remote sensing, two main techniques are used: Multiple Scattering Theory (MST) [2]-[5] and Radiative Transfer Theory (RT) [5], [6]. We have paid attention in the past to MST [7]-[10]. It was shown that a full application of MST leads to very long computation times, which are unacceptable in the case where we have to analyze a scenario with several trees. Extensive work using MST has also been presented by others in [11]-[16], showing the interest in this technique. We have proposed a simplified model for scattering from tree canopies based on a hybridization of MST and a modified physical optics (PO) approach [16]. We assume that propagation through a canopy is accounted for by using the complex-valued propagation constant obtained by MST. Unlike the case when the full MST is applied, the proposed approach offers significant benefits including a direct software implementation and acceptable computation times even for high frequencies and electrically large canopies. The proposed model thus replaces the coherent component in MST, significant in the forward direction, but keeps the incoherent or diffuse scattering component present in all directions. The incoherent component can be calculated within reasonable times. Here, we present tests of the proposed model against MST using an artificial single-tree scenario at 2 GHz and 10 GHz.

  17. Large Deviations for Stochastic Models of Two-Dimensional Second Grade Fluids

    International Nuclear Information System (INIS)

    Zhai, Jianliang; Zhang, Tusheng

    2017-01-01

    In this paper, we establish a large deviation principle for stochastic models of incompressible second grade fluids. The weak convergence method introduced by Budhiraja and Dupuis (Probab Math Statist 20:39–61, 2000) plays an important role.

  18. Large Deviations for Stochastic Models of Two-Dimensional Second Grade Fluids

    Energy Technology Data Exchange (ETDEWEB)

    Zhai, Jianliang, E-mail: zhaijl@ustc.edu.cn [University of Science and Technology of China, School of Mathematical Sciences (China); Zhang, Tusheng, E-mail: Tusheng.Zhang@manchester.ac.uk [University of Manchester, School of Mathematics (United Kingdom)

    2017-06-15

    In this paper, we establish a large deviation principle for stochastic models of incompressible second grade fluids. The weak convergence method introduced by Budhiraja and Dupuis (Probab Math Statist 20:39–61, 2000) plays an important role.

  19. Critical behavior in some D = 1 large-N matrix models

    International Nuclear Information System (INIS)

    Das, S.R.; Dhar, A.; Sengupta, A.M.; Wadia, D.R.

    1990-01-01

    The authors study the critical behavior in D = 1 large-N matrix models. The authors also look at the subleading terms in susceptibility in order to find out the dimensions of some of the operators in the theory

  20. Various approaches to the modelling of large scale 3-dimensional circulation in the Ocean

    Digital Repository Service at National Institute of Oceanography (India)

    Shaji, C.; Bahulayan, N.; Rao, A.D.; Dube, S.K.

    In this paper, the three different approaches to the modelling of large scale 3-dimensional flow in the ocean such as the diagnostic, semi-diagnostic (adaptation) and the prognostic are discussed in detail. Three-dimensional solutions are obtained...

  1. Large-Signal Code TESLA: Improvements in the Implementation and in the Model

    National Research Council Canada - National Science Library

    Chernyavskiy, Igor A; Vlasov, Alexander N; Anderson, Jr., Thomas M; Cooke, Simon J; Levush, Baruch; Nguyen, Khanh T

    2006-01-01

    We describe the latest improvements made in the large-signal code TESLA, which include transformation of the code to a Fortran-90/95 version with dynamical memory allocation and extension of the model...

  2. A Model for Teaching Large Classes: Facilitating a "Small Class Feel"

    Science.gov (United States)

    Lynch, Rosealie P.; Pappas, Eric

    2017-01-01

    This paper presents a model for teaching large classes that facilitates a "small class feel" to counteract the distance, anonymity, and formality that often characterize large lecture-style courses in higher education. One author (E. P.) has been teaching a 300-student general education critical thinking course for ten years, and the…

  3. Patient involvement in research programming and implementation: a responsive evaluation of the Dialogue Model for research agenda setting

    NARCIS (Netherlands)

    Abma, T.A.; Pittens, C.A.C.M.; Visse, M.; Elberse, J.E.; Broerse, J.E.W.

    2015-01-01

    Background: The Dialogue Model for research agenda-setting, involving multiple stakeholders including patients, was developed and validated in the Netherlands. However, there is little insight into whether and how patient involvement is sustained during the programming and implementation of research

  4. Applying the Intervention Model for Fostering Affective Involvement with Persons Who Are Congenitally Deafblind: An Effect Study

    Science.gov (United States)

    Martens, Marga A. W.; Janssen, Marleen J.; Ruijssenaars, Wied A. J. J. M.; Huisman, Mark; Riksen-Walraven, J. Marianne

    2014-01-01

    Introduction: In this study, we applied the Intervention Model for Affective Involvement (IMAI) to four participants who are congenitally deafblind and their 16 communication partners in 3 different settings (school, a daytime activities center, and a group home). We examined whether the intervention increased affective involvement between the…

  5. A dynamic globalization model for large eddy simulation of complex turbulent flow

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Hae Cheon; Park, No Ma; Kim, Jin Seok [Seoul National Univ., Seoul (Korea, Republic of)

    2005-07-01

    A dynamic subgrid-scale model is proposed for large eddy simulation of turbulent flows in complex geometry. The eddy viscosity model by Vreman [Phys. Fluids, 16, 3670 (2004)] is considered as a base model. A priori tests with the original Vreman model show that it predicts the correct profile of subgrid-scale dissipation in turbulent channel flow but the optimal model coefficient is far from universal. Dynamic procedures of determining the model coefficient are proposed based on the 'global equilibrium' between the subgrid-scale dissipation and viscous dissipation. An important feature of the proposed procedures is that the model coefficient determined is globally constant in space but varies only in time. Large eddy simulations with the present dynamic model are conducted for forced isotropic turbulence, turbulent channel flow and flow over a sphere, showing excellent agreements with previous results.
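    The "global equilibrium" idea described above, a single model coefficient that is constant in space but varies in time, can be sketched schematically: at each time step, choose the coefficient so that the volume-averaged modeled subgrid-scale dissipation balances the volume-averaged viscous dissipation. This is an assumption-laden illustration of the balance stated in the abstract, not the paper's actual numerical procedure:

```python
def global_dynamic_coefficient(eps_viscous, eps_sgs_unit):
    """Schematic 'global equilibrium' closure: pick one spatially constant
    coefficient C(t) so that the volume-averaged modeled SGS dissipation
    balances the volume-averaged viscous dissipation.

    eps_viscous  : per-cell viscous dissipation values
    eps_sgs_unit : per-cell modeled SGS dissipation, evaluated with the
                   model coefficient set to 1 (hypothetical inputs)."""
    avg_visc = sum(eps_viscous) / len(eps_viscous)
    avg_sgs = sum(eps_sgs_unit) / len(eps_sgs_unit)
    return avg_visc / avg_sgs

# Illustrative per-cell dissipation fields (arbitrary units).
c_t = global_dynamic_coefficient([0.2, 0.3, 0.25], [1.0, 1.5, 1.25])
print(round(c_t, 6))  # -> 0.2
```

    Because the averaging is over the whole domain, the resulting coefficient avoids the locally negative or wildly varying values that plague pointwise dynamic procedures, which is what makes it attractive for complex geometries.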

  6. A model-based eco-routing strategy for electric vehicles in large urban networks

    OpenAIRE

    De Nunzio , Giovanni; Thibault , Laurent; Sciarretta , Antonio

    2016-01-01

    A novel eco-routing navigation strategy and energy consumption modeling approach for electric vehicles are presented in this work. Speed fluctuations and road network infrastructure have a large impact on vehicular energy consumption. Neglecting these effects may lead to large errors in eco-routing navigation, which could trivially select the route with the lowest average speed. We propose an energy consumption model that considers both accelerations and impact of the ...

  7. Regional modeling of large wildfires under current and potential future climates in Colorado and Wyoming, USA

    Science.gov (United States)

    West, Amanda; Kumar, Sunil; Jarnevich, Catherine S.

    2016-01-01

    Regional analysis of large wildfire potential given climate change scenarios is crucial to understanding areas most at risk in the future, yet wildfire models are not often developed and tested at this spatial scale. We fit three historical climate suitability models for large wildfires (i.e. ≥ 400 ha) in Colorado and Wyoming using topography and decadal climate averages corresponding to wildfire occurrence at the same temporal scale. The historical models classified points of known large wildfire occurrence with high accuracies. Using a novel approach in wildfire modeling, we applied the historical models to independent climate and wildfire datasets, and the resulting sensitivities were 0.75, 0.81, and 0.83 for Maxent, Generalized Linear, and Multivariate Adaptive Regression Splines, respectively. We projected the historical models into future climate space using data from 15 global circulation models and two representative concentration pathway scenarios. Maps from these geospatial analyses can be used to evaluate the changing spatial distribution of climate suitability of large wildfires in these states. April relative humidity was the most important covariate in all models, providing insight into the climate space of large wildfires in this region. These methods incorporate monthly and seasonal climate averages at a spatial resolution relevant to land management (i.e. 1 km²) and provide a tool that can be modified for other regions of North America, or adapted for other parts of the world.
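    The sensitivities reported above (0.75, 0.81, 0.83) are true-positive rates: the fraction of known large-wildfire locations the model flags as climatically suitable. An illustrative computation with hypothetical data:

```python
def sensitivity(predictions, actuals):
    """Sensitivity (true positive rate): fraction of actual positives
    (here, known large-wildfire locations) that the model classifies
    as suitable. Inputs are parallel 0/1 sequences."""
    tp = sum(1 for p, a in zip(predictions, actuals) if p and a)
    fn = sum(1 for p, a in zip(predictions, actuals) if not p and a)
    return tp / (tp + fn)

# Hypothetical example: 8 known fire locations, the model flags 6 of them.
preds  = [1, 1, 1, 0, 1, 1, 0, 1, 0, 0]
actual = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]
print(sensitivity(preds, actual))  # 6 of 8 positives found -> 0.75
```

    Note that sensitivity alone says nothing about false alarms over non-fire locations; that is why model comparisons usually pair it with specificity or an overall skill score.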

  8. Fast and accurate focusing analysis of large photon sieve using pinhole ring diffraction model.

    Science.gov (United States)

    Liu, Tao; Zhang, Xin; Wang, Lingjie; Wu, Yanxiong; Zhang, Jizhen; Qu, Hemeng

    2015-06-10

    In this paper, we developed a pinhole ring diffraction model for the focusing analysis of a large photon sieve. Instead of analyzing individual pinholes, we discuss the focusing of all of the pinholes in a single ring. An explicit equation for the diffracted field of an individual pinhole ring is proposed. We investigated the validity range of this generalized model and analytically described the sufficient conditions for its validity. A practical example and investigation reveal the high accuracy of the pinhole ring diffraction model. This simulation method can be used for fast and accurate focusing analysis of a large photon sieve.

  9. Large-watershed flood simulation and forecasting based on different-resolution distributed hydrological model

    Science.gov (United States)

    Li, J.

    2017-12-01

    Large-watershed flood simulation and forecasting is an important application of distributed hydrological models, and it faces challenges including the effect of the model's spatial resolution on model performance and accuracy. To study this resolution effect, the distributed hydrological model used here, the Liuxihe model, was built at resolutions of 1000 m × 1000 m, 600 m × 600 m, 500 m × 500 m, 400 m × 400 m and 200 m × 200 m, with the aim of finding the best resolution for large-watershed flood simulation and forecasting. This study sets up a physically based distributed hydrological model for flood forecasting of the Liujiang River basin in south China. Terrain data (digital elevation model, DEM), soil type and land use type were downloaded from freely available websites. The model parameters are optimized using an improved Particle Swarm Optimization (PSO) algorithm; parameter optimization reduces the uncertainty that exists when model parameters are derived physically. Across the resolutions tested (200 m × 200 m to 1000 m × 1000 m), the best resolution for flood simulation and forecasting was 200 m × 200 m, and model performance and accuracy worsened as the spatial resolution was coarsened. At 1000 m × 1000 m resolution the flood simulation and forecasting results were the worst, and the river channel network delineated at this resolution differed from the actual one. To keep the model at an acceptable performance, a minimum spatial resolution is needed: the suggested threshold resolution for modeling floods of the Liujiang River basin is a 500 m × 500 m grid cell, but a 200 m × 200 m grid cell is recommended in this study to keep the model at its best performance.
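    The parameter optimization mentioned above uses an improved Particle Swarm Optimization algorithm. As a hedged sketch of the underlying idea only, here is plain, standard PSO on a toy objective; this is not the paper's improved variant, and the hyperparameters and objective are illustrative, not the hydrological calibration problem:

```python
import random

def pso(objective, dim, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0, seed=0):
    """Minimal standard PSO: each particle tracks its personal best and
    the swarm tracks a global best; velocity updates blend inertia,
    cognitive (personal-best) and social (global-best) terms."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective: sphere function, minimum 0 at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
print(round(best_val, 6))
```

    In a hydrological calibration, the objective would instead be a misfit measure (e.g. one minus the Nash-Sutcliffe efficiency) between observed and simulated hydrographs, and each particle position would be a vector of model parameters.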

  10. Presence of a large β(1-3)glucan linked to chitin at the Saccharomyces cerevisiae mother-bud neck suggests involvement in localized growth control.

    Science.gov (United States)

    Cabib, Enrico; Blanco, Noelia; Arroyo, Javier

    2012-04-01

    Previous results suggested that the chitin ring present at the yeast mother-bud neck, which is linked specifically to the nonreducing ends of β(1-3)glucan, may help to suppress cell wall growth at the neck by competing with β(1-6)glucan and thereby with mannoproteins for their attachment to the same sites. Here we explored whether the linkage of chitin to β(1-3)glucan may also prevent the remodeling of this polysaccharide that would be necessary for cell wall growth. By a novel mild procedure, β(1-3)glucan was isolated from cell walls, solubilized by carboxymethylation, and fractionated by size exclusion chromatography, giving rise to a very high-molecular-weight peak and to highly polydisperse material. The latter material, soluble in alkali, may correspond to glucan being remodeled, whereas the large-size fraction would be the final cross-linked structural product. In fact, the β(1-3)glucan of buds, where growth occurs, is solubilized by alkali. A gas1 mutant with an expected defect in glucan elongation showed a large increase in the polydisperse fraction. By a procedure involving sodium hydroxide treatment, carboxymethylation, fractionation by affinity chromatography on wheat germ agglutinin-agarose, and fractionation by size chromatography on Sephacryl columns, it was shown that the β(1-3)glucan attached to chitin consists mostly of high-molecular-weight material. Therefore, it appears that linkage to chitin results in a polysaccharide that cannot be further remodeled and does not contribute to growth at the neck. In the course of these experiments, the new finding was made that part of the chitin forms a noncovalent complex with β(1-3)glucan.

  11. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    Directory of Open Access Journals (Sweden)

    A. F. Van Loon

    2012-11-01

    Full Text Available Hydrological drought is increasingly studied using large-scale models. It is, however, unclear whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is how well large-scale models simulate the propagation from meteorological to hydrological drought. To answer this question, we evaluated the simulation of drought propagation in an ensemble mean of ten large-scale models, both land-surface models and global hydrological models, that participated in the model intercomparison project of WATCH (WaterMIP). For a selection of case study areas, we studied drought characteristics (number of droughts, duration, severity), drought propagation features (pooling, attenuation, lag, lengthening), and hydrological drought typology (classical rainfall deficit drought, rain-to-snow-season drought, wet-to-dry-season drought, cold snow season drought, warm snow season drought, composite drought).

    Drought characteristics simulated by large-scale models clearly reflected drought propagation; i.e. drought events became fewer and longer when moving through the hydrological cycle. However, more differentiation was expected between fast and slowly responding systems, with slowly responding systems having fewer and longer droughts in runoff than fast responding systems. This was not found using large-scale models. Drought propagation features were poorly reproduced by the large-scale models, because runoff reacted immediately to precipitation in all case study areas. This fast reaction to precipitation, even in cold climates in winter and in semi-arid climates in summer, also greatly influenced the hydrological drought typology as identified by the large-scale models. In general, the large-scale models represented the drought types correctly, but the percentages of occurrence showed some important mismatches, e.g. an overestimation of classical rainfall deficit droughts, and an …
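
    The drought characteristics evaluated above (number of events, duration, severity) are commonly derived with a threshold-level method. A minimal sketch, assuming a fixed threshold on a runoff series (the WATCH-style analysis uses variable monthly thresholds and event pooling, which are omitted here):

```python
# Threshold-level drought identification: an event is a run of values below
# the threshold; severity is the cumulative deficit over the event.

def drought_events(series, threshold):
    """Return a list of (start_index, duration, severity) tuples."""
    events, start, deficit = [], None, 0.0
    for i, x in enumerate(series):
        if x < threshold:
            if start is None:
                start, deficit = i, 0.0
            deficit += threshold - x
        elif start is not None:
            events.append((start, i - start, deficit))
            start = None
    if start is not None:                       # event still open at series end
        events.append((start, len(series) - start, deficit))
    return events

runoff = [5, 4, 2, 1, 3, 6, 2, 2, 5]            # illustrative runoff values
print(drought_events(runoff, threshold=3))      # two events found
```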

  12. Biomembrane models and drug-biomembrane interaction studies: Involvement in drug design and development

    Directory of Open Access Journals (Sweden)

    R Pignatello

    2011-01-01

    Full Text Available Contact with many different biological membranes marks the fate of a drug after its systemic administration. From circulating macrophage cells to the vessel endothelium to more complex absorption barriers, the interaction of a biomolecule with these membranes largely affects its rate and extent of biodistribution in the body and at the target sites. Therefore, investigating the phenomena occurring at cell membranes, and their different interactions with drugs under physiological or pathological conditions, is important for exploring the molecular basis of many diseases and for identifying new potential therapeutic strategies. The complexity of the structure and functions of biological and cell membranes has pushed researchers toward the proposal and validation of simpler two- and three-dimensional membrane models, whose utility and drawbacks are discussed. This review also describes the analytical methods used to study the interactions of bioactive compounds with biological membrane models, with particular emphasis on calorimetric techniques. These studies can be considered a powerful tool for medicinal chemistry and pharmaceutical technology in designing new drugs and optimizing the activity and safety profile of compounds already used in therapy.

  13. Large animal and primate models of spinal cord injury for the testing of novel therapies.

    Science.gov (United States)

    Kwon, Brian K; Streijger, Femke; Hill, Caitlin E; Anderson, Aileen J; Bacon, Mark; Beattie, Michael S; Blesch, Armin; Bradbury, Elizabeth J; Brown, Arthur; Bresnahan, Jacqueline C; Case, Casey C; Colburn, Raymond W; David, Samuel; Fawcett, James W; Ferguson, Adam R; Fischer, Itzhak; Floyd, Candace L; Gensel, John C; Houle, John D; Jakeman, Lyn B; Jeffery, Nick D; Jones, Linda Ann Truett; Kleitman, Naomi; Kocsis, Jeffery; Lu, Paul; Magnuson, David S K; Marsala, Martin; Moore, Simon W; Mothe, Andrea J; Oudega, Martin; Plant, Giles W; Rabchevsky, Alexander Sasha; Schwab, Jan M; Silver, Jerry; Steward, Oswald; Xu, Xiao-Ming; Guest, James D; Tetzlaff, Wolfram

    2015-07-01

    Large animal and primate models of spinal cord injury (SCI) are being increasingly utilized for the testing of novel therapies. While these represent intermediary animal species between rodents and humans and offer the opportunity to pose unique research questions prior to clinical trials, the role that such large animal and primate models should play in the translational pipeline is unclear. In this initiative we engaged members of the SCI research community in a questionnaire and round-table focus group discussion around the use of such models. Forty-one SCI researchers from academia, industry, and granting agencies were asked to complete a questionnaire about their opinion regarding the use of large animal and primate models in the context of testing novel therapeutics. The questions centered around how large animal and primate models of SCI would be best utilized in the spectrum of preclinical testing, and how much testing in rodent models was warranted before employing these models. Further questions were posed at a focus group meeting attended by the respondents. The group generally felt that large animal and primate models of SCI serve a potentially useful role in the translational pipeline for novel therapies, and that the rational use of these models would depend on the type of therapy and specific research question being addressed. While testing within these models should not be mandatory, the detection of beneficial effects using these models lends additional support for translating a therapy to humans. These models provide an opportunity to evaluate and refine surgical procedures prior to use in humans, and to assess safety and biodistribution in a spinal cord more similar in size and anatomy to that of humans. Our results reveal that while many feel that these models are valuable in the testing of novel therapies, important questions remain unanswered about how they should be used and how data derived from them should be interpreted. Copyright © 2015 Elsevier.

  14. Modeling economic costs of disasters and recovery involving positive effects of reconstruction: analysis using a dynamic CGE model

    Science.gov (United States)

    Xie, W.; Li, N.; Wu, J.-D.; Hao, X.-L.

    2013-11-01

    Disaster damages have negative effects on the economy, whereas reconstruction investments have positive effects. The aim of this study is to model the economic costs of disasters and recovery while accounting for the positive effects of reconstruction activities. Computable general equilibrium (CGE) modeling is a promising approach because it can incorporate these two kinds of shocks into a unified framework and thereby avoid the double-counting problem. To capture both shocks in the CGE model, direct loss is set as a reduction of capital stock on the supply side of the economy; a portion of investment restores the capital stock in each period. An investment-driven dynamic model is formulated from the available reconstruction data, with the rest of the country's saving set as an endogenous variable. The 2008 Wenchuan Earthquake is selected as a case study to illustrate the model, and three scenarios are constructed: S0 (no disaster occurs), S1 (disaster occurs with reconstruction investment) and S2 (disaster occurs without reconstruction investment). S0 is taken as business as usual, and the differences between S1 and S0, and between S2 and S0, can be interpreted as economic losses including and excluding reconstruction, respectively. The study showed that output under S1 is closer to the real data than that under S2; S2 overestimates the economic loss by roughly a factor of two relative to S1. The gap in economic aggregate between S1 and S0 narrows to 3% in 2011, a level that would take another four years to reach under S2.
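
    The scenario-differencing logic above reduces to simple subtraction against the no-disaster baseline. A minimal sketch with made-up output paths (index numbers for illustration, not the Wenchuan results):

```python
# Scenario differencing: losses are measured against the no-disaster baseline S0.
# The output paths below are illustrative numbers, not CGE model results.
s0 = [100, 104, 108, 112]   # S0: business as usual
s1 = [100,  98, 104, 110]   # S1: disaster with reconstruction investment
s2 = [100,  96, 100, 105]   # S2: disaster without reconstruction

loss_with_recon    = [a - b for a, b in zip(s0, s1)]   # loss including reconstruction
loss_without_recon = [a - b for a, b in zip(s0, s2)]   # loss excluding reconstruction
print(loss_with_recon, loss_without_recon)
```

Excluding reconstruction, the cumulative loss is markedly larger, mirroring the paper's finding that S2 overestimates the loss relative to S1.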

  15. Modeling and Control of a Large Nuclear Reactor A Three-Time-Scale Approach

    CERN Document Server

    Shimjith, S R; Bandyopadhyay, B

    2013-01-01

    Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady-state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of several complex dynamic phenomena existing in a reactor. Quite often, the models developed are of prohibitively large order, non-linear and of complex structure, not readily amenable to control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller-order model in standard state-space form, thus overcoming these difficulties. It further brings in innovative methods for controller design for systems exhibiting multi-time-scale property,...

  16. Water and salt balance modelling to predict the effects of land-use changes in forested catchments. 3. The large catchment model

    Science.gov (United States)

    Sivapalan, Murugesu; Viney, Neil R.; Jeevaraj, Charles G.

    1996-03-01

    This paper presents an application of a long-term, large catchment-scale water balance model developed to predict the effects of forest clearing in the south-west of Western Australia. The conceptual model simulates the basic daily water balance fluxes in forested catchments before and after clearing. The large catchment is divided into a number of sub-catchments (1-5 km2 in area), which are taken as the fundamental building blocks of the large catchment model. The responses of the individual subcatchments to rainfall and pan evaporation are conceptualized in terms of three interdependent subsurface stores A, B and F, which are considered to represent the moisture states of the subcatchments. Details of the subcatchment-scale water balance model were presented in Part 1 of this series of papers. The response of any subcatchment is a function of its local moisture state, as measured by the local values of the stores. The variations of the initial values of the stores among the subcatchments are described in the large catchment model through simple, linear equations involving a number of similarity indices representing topography, mean annual rainfall and level of forest clearing. The model is applied to the Conjurunup catchment, a medium-sized (39.6 km2) catchment in the south-west of Western Australia. The catchment has been heterogeneously (in space and time) cleared for bauxite mining and subsequently rehabilitated. For this application, the catchment is divided into 11 subcatchments. The model parameters are estimated by calibration, comparing observed and predicted runoff values over an 18-year period for the large catchment and two of the subcatchments. Excellent fits are obtained.
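
    The store-based daily water balance idea can be sketched as follows. This is a much-simplified toy with a single conceptual store standing in for the paper's three interdependent stores A, B and F; all parameter values are illustrative:

```python
# Toy daily water balance for one subcatchment: a single conceptual store
# receives rainfall, loses moisture-limited evapotranspiration, and produces
# runoff as saturation excess plus slow drainage. Parameters are illustrative.

def daily_water_balance(rain, pan_evap, capacity=100.0, k=0.05, store=50.0):
    """Return a daily runoff series (saturation excess plus slow drainage)."""
    runoff_series = []
    for p, e in zip(rain, pan_evap):
        store += p                                       # add rainfall
        et = min(store, e * min(1.0, store / capacity))  # moisture-limited ET
        store -= et
        excess = max(0.0, store - capacity)              # saturation excess runoff
        store -= excess
        base = k * store                                 # slow drainage from store
        store -= base
        runoff_series.append(excess + base)
    return runoff_series

flows = daily_water_balance([0.0, 20.0, 0.0, 60.0, 0.0], [5.0] * 5)
print([round(q, 2) for q in flows])
```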

  17. Optimization of large-scale heterogeneous system-of-systems models.

    Energy Technology Data Exchange (ETDEWEB)

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Lee, Herbert K. H. (University of California, Santa Cruz, Santa Cruz, CA); Hart, William Eugene; Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Woodruff, David L. (University of California, Davis, Davis, CA)

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  18. Oxidative stress may be involved in distant organ failure in tourniquet shock model mice.

    Science.gov (United States)

    Nishikata, Rie; Kato, Naho; Hiraiwa, Kouichi

    2014-03-01

    Crush syndrome is characterized by prolonged shock resulting from extensive muscle damage and multiple organ failure. However, the pathogenesis of multiple organ failure has not yet been completely elucidated. Therefore, we investigated the molecular biological and histopathological aspects of distant organ injury in crush syndrome by using tourniquet shock model mice. DNA microarray analysis of the soleus muscle showed an increase in the mRNA levels of Cox-2, Hsp70, c-fos, and IL-6, at 3 h after ischemia/reperfusion injury at the lower extremity. In vivo staining with hematoxylin and eosin (HE) showed edema and degeneration in the soleus muscle, but no change in the distant organs. Immunohistological staining of the HSP70 protein revealed nuclear translocation in the soleus muscle, kidney, liver, and lung. The c-fos mRNA levels were elevated in the soleus muscle, kidney, and liver, displaying nuclear translocation of c-FOS protein. Terminal deoxynucleotidyl transferase dUTP nick-end labeling (TUNEL) analysis suggested the involvement of apoptosis in ischemia/reperfusion injury in the soleus muscle. Apoptotic cells were not found in greater quantities in the kidney. Oxidative stress, as determined using a free radical elective evaluator (d-ROM test), markedly increased after ischemia/reperfusion injury. Therefore, examination of immunohistological changes and determination of oxidative stress are proposed to be useful in evaluating the extent of tourniquet shock, even before changes are observed by HE staining. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  19. The Hamburg large scale geostrophic ocean general circulation model. Cycle 1

    International Nuclear Information System (INIS)

    Maier-Reimer, E.; Mikolajewicz, U.

    1992-02-01

    The rationale for the Large Scale Geostrophic ocean circulation model (LSG-OGCM) is based on the observations that for a large scale ocean circulation model designed for climate studies, the relevant characteristic spatial scales are large compared with the internal Rossby radius throughout most of the ocean, while the characteristic time scales are large compared with the periods of gravity modes and barotropic Rossby wave modes. In the present version of the model, the fast modes have been filtered out by a conventional technique of integrating the full primitive equations, including all terms except the nonlinear advection of momentum, by an implicit time integration method. The free surface is also treated prognostically, without invoking a rigid lid approximation. The numerical scheme is unconditionally stable and has the additional advantage that it can be applied uniformly to the entire globe, including the equatorial and coastal current regions. (orig.)

  20. Developing a conceptual model for the application of patient and public involvement in the healthcare system in Iran.

    Science.gov (United States)

    Azmal, Mohammad; Sari, Ali Akbari; Foroushani, Abbas Rahimi; Ahmadi, Batoul

    2016-06-01

    Patient and public involvement means engaging patients, providers, community representatives, and the public in healthcare planning and decision making. The purpose of this study was to develop a model for the application of patient and public involvement in decision making in the Iranian healthcare system. A mixed qualitative-quantitative approach was used to develop a conceptual model. Thirty-three key informants were purposively recruited in the qualitative stage, and 420 people (patients and their companions) were included in a study protocol implemented in five steps: 1) identifying antecedents, consequences, and variables associated with patient and public involvement in healthcare decision making through a comprehensive literature review; 2) determining the main variables in the context of Iran's health system using conceptual framework analysis; 3) prioritizing and weighting variables by Shannon entropy; 4) designing and validating a tool for patient and public involvement in healthcare decision making; and 5) providing a conceptual model of patient and public involvement in planning and developing healthcare using structural equation modeling. We used various software programs, including SPSS (17), MAXQDA (10), Excel, and LISREL. Content analysis, Shannon entropy, and descriptive and analytic statistics were used to analyze the data. In this study, seven antecedent variables, five dimensions of involvement, and six consequences were identified. These variables were used to design a valid tool. A logical model was derived that explains the logical relationships between antecedent and consequent variables and the dimensions of patient and public involvement. Given the specific political, social, and innovative context in Iran, it was necessary to design a model compatible with these features. It can improve the quality of care and promote patient and public satisfaction with healthcare and …
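
    Step 3 above, Shannon entropy weighting, can be sketched as follows. The score matrix is illustrative, not the study's survey data:

```python
import math

# Shannon entropy weighting: criteria whose scores vary more across
# respondents carry more information and receive larger weights.

def entropy_weights(matrix):
    """Columns are criteria; returns one normalized weight per column."""
    m, n = len(matrix), len(matrix[0])
    diversification = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        probs = [x / total for x in col]
        h = -sum(p * math.log(p) for p in probs if p > 0) / math.log(m)
        diversification.append(1.0 - h)        # low entropy -> high information
    s = sum(diversification)
    return [d / s for d in diversification]

scores = [[0.8, 0.2, 0.5],
          [0.6, 0.9, 0.5],
          [0.7, 0.4, 0.5]]
weights = entropy_weights(scores)
print([round(w, 3) for w in weights])          # uniform third column gets ~0 weight
```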

  1. Use of a statistical model of the whole femur in a large scale, multi-model study of femoral neck fracture risk.

    Science.gov (United States)

    Bryan, Rebecca; Nair, Prasanth B; Taylor, Mark

    2009-09-18

    Interpatient variability is often overlooked in orthopaedic computational studies due to the substantial challenges involved in sourcing and generating large numbers of bone models. A statistical model of the whole femur incorporating both geometric and material property variation was developed as a potential solution to this problem. The statistical model was constructed using principal component analysis, applied to 21 individual computed tomography scans. To test the ability of the statistical model to generate realistic, unique, finite element (FE) femur models, it was used as a source of 1000 femurs to drive a study on femoral neck fracture risk. The study simulated the impact of an oblique fall to the side, a scenario known to account for a large proportion of hip fractures in the elderly and to have a lower fracture load than alternative loading approaches. FE model generation, application of subject-specific loading and boundary conditions, FE processing and post-processing of the solutions were completed automatically. The generated models were within the bounds of the training data used to create the statistical model, with a high mesh quality, able to be used directly by the FE solver without remeshing. The results indicated that 28 of the 1000 femurs were at highest risk of fracture. Closer analysis revealed the percentage of cortical bone in the proximal femur to be a crucial differentiator between the failed and non-failed groups. The likely fracture location was indicated to be intertrochanteric. Comparison to previous computational, clinical and experimental work revealed support for these findings.
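
    The core of such a statistical model is principal component analysis of flattened training instances, after which new instances are generated by perturbing the principal modes. A minimal sketch, assuming toy 2-D "shapes" of three points stand in for femur geometry and material fields (the training data below are made up):

```python
import numpy as np

# PCA-based statistical shape model sketch: compute a mean shape and principal
# modes from training samples, then synthesize new instances from mode weights.

def build_ssm(training):
    """Return mean shape, principal components, per-mode standard deviations."""
    X = np.asarray(training, dtype=float)
    mean = X.mean(axis=0)
    _, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    std = s / np.sqrt(len(X) - 1)              # mode standard deviations
    return mean, Vt, std

def sample_shape(mean, Vt, std, rng, n_modes=2):
    """Generate a new instance with mode weights drawn within +/-3 SD."""
    b = rng.uniform(-3.0, 3.0, size=n_modes) * std[:n_modes]
    return mean + b @ Vt[:n_modes]

rng = np.random.default_rng(0)
training = [[0.0, 0.0, 1.0, 0.0, 1.0, 1.0],
            [0.1, 0.0, 1.1, 0.1, 1.0, 1.2],
            [-0.1, 0.1, 0.9, 0.0, 1.1, 1.0]]
mean, Vt, std = build_ssm(training)
new_shape = sample_shape(mean, Vt, std, rng)
print(new_shape.round(3))
```

Bounding the mode weights (here ±3 SD) is what keeps generated instances within the envelope of the training data, as the study reports.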

  2. Neuroprotective effect of lurasidone via antagonist activities on histamine in a rat model of cranial nerve involvement.

    Science.gov (United States)

    He, Baoming; Yu, Liang; Li, Suping; Xu, Fei; Yang, Lili; Ma, Shuai; Guo, Yi

    2018-04-01

    Cranial nerve involvement frequently entails neuronal damage and often leads to psychiatric disorders with multiple inducements. Lurasidone is a novel antipsychotic agent approved in several countries for the treatment of cranial nerve involvement and a number of mental health conditions. In the present study, the neuroprotective effect of lurasidone via its antagonist activities on histamine was investigated in a rat model of cranial nerve involvement. The antagonist activities of lurasidone on serotonin 5‑HT7, 5‑HT2A, 5‑HT1A and 5‑HT6 receptors were analyzed, and the preclinical therapeutic effects of lurasidone were examined in a rat model of cranial nerve involvement. The safety, maximum tolerated dose (MTD) and preliminary antitumor activity of lurasidone were also assessed in the cranial nerve involvement model. The therapeutic dose of lurasidone was 0.32 mg once daily, administered continuously in 14‑day cycles. The present study found that the preclinical regimen induced positive behavioral responses following treatment with lurasidone. The MTD was identified as a once-daily administration of 0.32 mg lurasidone. Long‑term treatment with lurasidone for cranial nerve involvement was shown to improve the therapeutic effects and reduce anxiety in the experimental rats. In addition, treatment with lurasidone did not affect body weight. The expression of the language competence protein Forkhead‑BOX P2 was increased, and the levels of the neuroprotective SxIP motif and microtubule end‑binding protein were increased in the hippocampal cells of rats with cranial nerve involvement treated with lurasidone. Lurasidone therapy reinforced memory capability and decreased anxiety. Taken together, lurasidone treatment appeared to protect against language disturbances associated with negative and cognitive impairment in the rat model of cranial nerve involvement, providing a basis for its use in the clinical treatment of …

  3. Material model for non-linear finite element analyses of large concrete structures

    NARCIS (Netherlands)

    Engen, Morten; Hendriks, M.A.N.; Øverli, Jan Arve; Åldstedt, Erik; Beushausen, H.

    2016-01-01

    A fully triaxial material model for concrete was implemented in a commercial finite element code. The only required input parameter was the cylinder compressive strength. The material model was suitable for non-linear finite element analyses of large concrete structures. The importance of including

  4. Finite Mixture Multilevel Multidimensional Ordinal IRT Models for Large Scale Cross-Cultural Research

    Science.gov (United States)

    de Jong, Martijn G.; Steenkamp, Jan-Benedict E. M.

    2010-01-01

    We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups of countries have different measurement operations, while…

  5. The pig as a large animal model for influenza a virus infection

    DEFF Research Database (Denmark)

    Skovgaard, Kerstin; Brogaard, Louise; Larsen, Lars Erik

    It is increasingly realized that large animal models like the pig are exceptionally human-like and serve as an excellent model for disease and inflammation. Pigs are fully susceptible to human influenza and share many similarities with humans regarding lung physiology and innate immune cell...

  6. Modeling the impact of large-scale energy conversion systems on global climate

    International Nuclear Information System (INIS)

    Williams, J.

    There are three energy options which could satisfy a projected energy requirement of about 30 TW: the solar, nuclear and (to a lesser extent) coal options. Climate models can be used to assess the impact of large-scale deployment of these options. The impact of waste heat has been assessed using energy balance models and general circulation models (GCMs). Results suggest that the impacts are significant when the heat input is very high, and studies of more realistic scenarios are required. Energy balance models, radiative-convective models and a GCM have been used to study the impact of doubling the atmospheric CO2 concentration. State-of-the-art models estimate a surface temperature increase of 1.5-3.0 °C with large amplification near the poles, but much uncertainty remains. Very few model studies have been made of the impact of particles on global climate; more information on the characteristics of particle input is required. The impact of large-scale deployment of solar energy conversion systems has received little attention, but model studies suggest that large-scale changes in surface characteristics associated with such systems (surface heat balance, roughness and hydrological characteristics, and ocean surface temperature) could have significant global climatic effects. (Auth.)

  7. Induction of continuous expanding infrarenal aortic aneurysms in a large porcine animal model

    DEFF Research Database (Denmark)

    Kloster, Brian Ozeraitis; Lund, Lars; Lindholt, Jes S.

    2015-01-01

    Background: A large animal model with a continuously expanding infrarenal aortic aneurysm gives access to a more realistic AAA model with anatomy and physiology similar to humans, and thus allows for new experimental research on the natural history and treatment options of the disease. Methods: 10 pigs...

  8. Expression profiles of genes involved in xenobiotic metabolism and disposition in human renal tissues and renal cell models

    Energy Technology Data Exchange (ETDEWEB)

    Van der Hauwaert, Cynthia; Savary, Grégoire [EA4483, Université de Lille 2, Faculté de Médecine de Lille, Pôle Recherche, 59045 Lille (France); Buob, David [Institut de Pathologie, Centre de Biologie Pathologie Génétique, Centre Hospitalier Régional Universitaire de Lille, 59037 Lille (France); Leroy, Xavier; Aubert, Sébastien [Institut de Pathologie, Centre de Biologie Pathologie Génétique, Centre Hospitalier Régional Universitaire de Lille, 59037 Lille (France); Institut National de la Santé et de la Recherche Médicale, UMR837, Centre de Recherche Jean-Pierre Aubert, Equipe 5, 59045 Lille (France); Flamand, Vincent [Service d' Urologie, Hôpital Huriez, Centre Hospitalier Régional Universitaire de Lille, 59037 Lille (France); Hennino, Marie-Flore [EA4483, Université de Lille 2, Faculté de Médecine de Lille, Pôle Recherche, 59045 Lille (France); Service de Néphrologie, Hôpital Huriez, Centre Hospitalier Régional Universitaire de Lille, 59037 Lille (France); Perrais, Michaël [Institut National de la Santé et de la Recherche Médicale, UMR837, Centre de Recherche Jean-Pierre Aubert, Equipe 5, 59045 Lille (France); and others

    2014-09-15

    Numerous xenobiotics have been shown to be harmful to the kidney. Thus, to improve our knowledge of the cellular processing of these nephrotoxic compounds, we evaluated, by real-time PCR, the mRNA expression level of 377 genes encoding xenobiotic-metabolizing enzymes (XMEs), transporters, as well as nuclear receptors and transcription factors that coordinate their expression, in eight normal human renal cortical tissues. Additionally, since several renal in vitro models are commonly used in pharmacological and toxicological studies, we investigated their metabolic capacities and compared them with those of renal tissues. The same set of genes was thus investigated in HEK293 and HK2 immortalized cell lines, in commercial primary cultures of epithelial renal cells, and in proximal tubular cell primary cultures. Altogether, our data offer a comprehensive description of the kidney's ability to process xenobiotics. Moreover, by hierarchical clustering, we observed large variations in gene expression profiles between renal cell lines and renal tissues. Primary cultures of proximal tubular epithelial cells exhibited the highest similarity with renal tissue in terms of transcript profiling. Moreover, compared to other renal cell models, Tacrolimus dose-dependent toxic effects were lower in proximal tubular cell primary cultures, which display the highest metabolism and disposition capacity. Therefore, primary cultures appear to be the most relevant in vitro model for investigating the metabolism and bioactivation of nephrotoxic compounds and for toxicological and pharmacological studies. - Highlights: • Renal proximal tubular (PT) cells are highly sensitive to xenobiotics. • Expression of genes involved in xenobiotic disposition was measured. • PT cells exhibited the highest similarities with renal tissue.

  9. Uterine, but not ovarian, female reproductive organ involvement at presentation by diffuse large B-cell lymphoma is associated with poor outcomes and a high frequency of secondary CNS involvement

    DEFF Research Database (Denmark)

    El-Galaly, Tarec Christoffer; Cheah, Chan Y; Hutchings, Martin

    2016-01-01

    progression-free survival and overall survival compared to those without reproductive organ involvement, whereas ovarian DLBCL was not predictive of outcome. Secondary central nervous system (CNS) involvement (SCNS) occurred in 7/17 (41%) women with uterine DLBCL (two patients with concomitant ovarian DLBCL...

  10. Large transverse momentum processes in a non-scaling parton model

    International Nuclear Information System (INIS)

    Stirling, W.J.

    1977-01-01

    The production of large transverse momentum mesons in hadronic collisions by the quark fusion mechanism is discussed in a parton model which gives logarithmic corrections to Bjorken scaling. It is found that the moments of the large transverse momentum structure function exhibit a simple scale breaking behaviour similar to the behaviour of the Drell-Yan and deep inelastic structure functions of the model. An estimate of corresponding experimental consequences is made and the extent to which analogous results can be expected in an asymptotically free gauge theory is discussed. A simple set of rules is presented for incorporating the logarithmic corrections to scaling into all covariant parton model calculations. (Auth.)

  11. Investigation on the integral output power model of a large-scale wind farm

    Institute of Scientific and Technical Information of China (English)

    BAO Nengsheng; MA Xiuqian; NI Weidou

    2007-01-01

    The integral output power model of a large-scale wind farm is needed when estimating the wind farm's output over a period of time in the future. The actual wind speed power model and calculation method of a wind farm made up of many wind turbine units are discussed. After analyzing the incoming wind flow characteristics and their energy distributions, and after considering the multi-effects among the wind turbine units and certain assumptions, the incoming wind flow model of multi-units is built. The calculation algorithms and steps of the integral output power model of a large-scale wind farm are provided. Finally, an actual power output of the wind farm is calculated and analyzed by using the practical measurement wind speed data. The characteristics of a large-scale wind farm are also discussed.

  12. The sheep as a large osteoporotic model for orthopaedic research in humans

    DEFF Research Database (Denmark)

    Cheng, L.; Ding, Ming; Li, Z.

    2008-01-01

    Although small animals such as rodents are very popular for osteoporosis models, large animal models are necessary for research into human osteoporotic diseases. Sheep osteoporosis models are becoming more important because of their unique advantages for osteoporosis research. Sheep are docile...... in nature and large in size, which facilitates obtaining blood samples, urine samples and bone tissue samples for different biochemical and histological tests, and surgical manipulation and instrument examinations. Their physiology is similar to that of humans. To induce osteoporosis, OVX and calcium...... intake restriction and glucocorticoid application are the most effective methods for the sheep osteoporosis model. The sheep osteoporosis model is an ideal animal model for studying various medicines for osteoporosis and other treatment methods such as prosthetic replacement for osteoporotic...

  13. Knowledge discovery in large model datasets in the marine environment: the THREDDS Data Server example

    Directory of Open Access Journals (Sweden)

    A. Bergamasco

    2012-06-01

    Full Text Available In order to monitor, describe and understand the marine environment, many research institutions are involved in the acquisition and distribution of ocean data, both from observations and models. Scientists from these institutions are spending too much time looking for, accessing, and reformatting data: they need better tools and procedures to make the science they do more efficient. The U.S. Integrated Ocean Observing System (US-IOOS) is working on making large amounts of distributed data usable in an easy and efficient way. It is essentially a network of scientists, technicians and technologies designed to acquire, collect and disseminate observational and modelled data from investigations of coastal and oceanic marine regions to researchers, stakeholders and policy makers. In order to be successful, this effort requires standard data protocols, web services and standards-based tools. Starting from the US-IOOS approach, which is being adopted throughout much of the oceanographic and meteorological sectors, we describe here the CNR-ISMAR Venice experience in the direction of setting up a national Italian IOOS framework using the THREDDS (THematic Real-time Environmental Distributed Data Services) Data Server (TDS), a middleware designed to fill the gap between data providers and data users. The TDS provides services that allow data users to find the data sets pertaining to their scientific needs, to access, to visualize and to use them in an easy way, without downloading files to the local workspace. In order to achieve this, it is necessary that the data providers make their data available in a standard form that the TDS understands, and with sufficient metadata to allow the data to be read and searched in a standard way. The core idea is then to utilize a Common Data Model (CDM), a unified conceptual model that describes different datatypes within each dataset. More specifically, Unidata (www.unidata.ucar.edu) has developed CDM

  14. A novel model of inflammatory pain in human skin involving topical application of sodium lauryl sulfate.

    Science.gov (United States)

    Petersen, L J; Lyngholm, A M; Arendt-Nielsen, L

    2010-09-01

    Sodium lauryl sulfate (SLS) is a known irritant. It releases pro-inflammatory mediators considered pivotal in inflammatory pain. The sensory effects of SLS in the skin remain largely unexplored. In this study, SLS was evaluated for its effect on skin sensory functions. Eight healthy subjects were recruited for this study. Skin sites were randomized to topical SLS 0.25, 0.5, 1, 2% and vehicle for 24 h. Topical capsaicin 1% was applied for 30 min at 24 h after SLS application. Assessments included laser Doppler imaging of local vasodilation and flare reactions, rating of spontaneous pain, assessment of primary thermal and tactile hyperalgesia, and determination of secondary dynamic and static hyperalgesia. SLS induced significant and dose-dependent local inflammation and primary hyperalgesia to tactile and thermal stimulation at 24 h after application, with SLS 2% treatment eliciting results comparable to those observed following treatment with capsaicin 1%. SLS induced no spontaneous pain, small areas of flare, and minimal secondary hyperalgesia. The primary hyperalgesia vanished within 2-3 days, whereas the skin inflammation persisted and was only partly normalized by Day 6. SLS induces profound perturbations of skin sensory functions lasting 2-3 days. SLS-induced inflammation may be a useful model for studying the mechanisms of inflammatory pain.

  15. Model to predict radiological consequences of transportation accidents involving dispersal of radioactive material in urban areas

    International Nuclear Information System (INIS)

    Taylor, J.M.; Daniel, S.L.

    1978-01-01

    The analysis of accidental releases of radioactive material which may result from transportation accidents in high-density urban areas is influenced by several urban characteristics which make computer simulation the calculational method of choice. These urban features fall into four categories. Each of these categories contains time- and location-dependent parameters which must be coupled to the actual time and location of the release in the calculation of the anticipated radiological consequences. Due to the large number of dependent parameters a computer model, METRAN, has been developed to quantify these radiological consequences. Rather than attempt to describe an urban area as a single entity, a specific urban area is subdivided into a set of cells of fixed size to permit more detailed characterization. Initially, the study area is subdivided into a set of 2-dimensional cells. A uniform set of time-dependent physical characteristics which describe the land use, population distribution, traffic density, etc., within that cell are then computed from various data sources. The METRAN code incorporates several details of urban areas. A principal limitation of the analysis is the limited availability of accurate information to use as input data. Although the code was originally developed to analyze dispersal of radioactive material, it is currently being evaluated for use in analyzing the effects of dispersal of other hazardous materials in both urban and rural areas

  16. Metallogenic model for continental volcanic-type rich and large uranium deposits

    International Nuclear Information System (INIS)

    Chen Guihua

    1998-01-01

    A metallogenic model for continental volcanic-type rich and large/super large uranium deposits has been established on the basis of analysis of occurrence features and ore-forming mechanism of some continental volcanic-type rich and large/super large uranium deposits in the world. The model proposes that uranium-enriched granite or granitic basement is the foundation, premetallogenic polycyclic and multistage volcanic eruptions are prerequisites, intense tectonic-extensional environment is the key for the ore formation, and relatively enclosed geologic setting is the reliable protection condition of the deposit. By using the model the author explains the occurrence regularities of some rich and large/super large uranium deposits such as Strelichof uranium deposit in Russia, Dornot uranium deposit in Mongolia, Olympic Dam Cu-U-Au-REE deposit in Australia, uranium deposit No.460 and Zhoujiashan uranium deposit in China, and then compares the above deposits with a large poor uranium deposit No.661 as well

  17. A Simple Method to Estimate Large Fixed Effects Models Applied to Wage Determinants and Matching

    OpenAIRE

    Mittag, Nikolas

    2016-01-01

    Models with high dimensional sets of fixed effects are frequently used to examine, among others, linked employer-employee data, student outcomes and migration. Estimating these models is computationally difficult, so simplifying assumptions that are likely to cause bias are often invoked to make computation feasible and specification tests are rarely conducted. I present a simple method to estimate large two-way fixed effects (TWFE) and worker-firm match effect models without additional assum...
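The within transformation underlying two-way fixed-effects (TWFE) models of this kind can be sketched by alternating group demeaning; this is a generic illustration of the technique, not Mittag's specific estimator, and the function names and toy data are hypothetical:

```python
import numpy as np

def within_transform(v, worker, firm, n_iter=100):
    """Iteratively demean a vector by worker-group and firm-group means
    (alternating projections), approximating the two-way fixed-effects
    within transformation that sweeps out both sets of dummies."""
    v = np.asarray(v, dtype=float).copy()
    for _ in range(n_iter):
        for g in (worker, firm):
            means = np.bincount(g, weights=v) / np.bincount(g)
            v -= means[g]
    return v

def twfe_slope(y, x, worker, firm):
    """Slope of y on x after sweeping out worker and firm fixed effects."""
    yt = within_transform(y, worker, firm)
    xt = within_transform(x, worker, firm)
    return (xt @ yt) / (xt @ xt)
```

Because demeaning is linear, any worker or firm effect added to y is annihilated by the transformation, so the slope on the demeaned data recovers the within coefficient without forming the high-dimensional dummy matrix.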

  18. Simple Model for Simulating Characteristics of River Flow Velocity in Large Scale

    Directory of Open Access Journals (Sweden)

    Husin Alatas

    2015-01-01

    Full Text Available We propose a simple computer-based phenomenological model to simulate the characteristics of river flow velocity on a large scale. We use a Shuttle Radar Topography Mission based digital elevation model in grid form to define the terrain of the catchment area. The model relies on the mass-momentum conservation law and a modified equation of motion of a falling body on an inclined plane. We assume an inelastic collision occurs at every junction of two river branches to describe the dynamics of the merged flow velocity.
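The two ingredients named in the abstract, the inclined-plane equation of motion and the inelastic-collision merging rule, can be sketched as follows; the function names, the lumped friction term, and the clipping at zero velocity are illustrative assumptions, not details taken from the paper:

```python
import math

def merged_flow_velocity(m1, v1, m2, v2):
    """Velocity after a perfectly inelastic 'collision' of two merging
    river branches: momentum is conserved, kinetic energy is not."""
    return (m1 * v1 + m2 * v2) / (m1 + m2)

def slope_velocity(v0, slope_angle_deg, length, g=9.81, friction=0.0):
    """Speed at the end of an inclined reach from the equation of motion
    of a body on an inclined plane, with an optional lumped deceleration
    term:  v^2 = v0^2 + 2*(g*sin(theta) - friction)*L  (clipped at 0)."""
    theta = math.radians(slope_angle_deg)
    v_sq = v0 ** 2 + 2.0 * (g * math.sin(theta) - friction) * length
    return math.sqrt(max(v_sq, 0.0))
```

For example, a 2-unit-mass flow at 3 m/s merging with a 1-unit-mass stagnant branch yields (2·3 + 1·0)/3 = 2 m/s.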

  19. Dynamic subgrid scale model of large eddy simulation of cross bundle flows

    International Nuclear Information System (INIS)

    Hassan, Y.A.; Barsamian, H.R.

    1996-01-01

    The dynamic subgrid scale closure model of Germano et al. (1991) is used in the large eddy simulation code GUST for incompressible isothermal flows. Tube bundle geometries of staggered and non-staggered arrays are considered in deep bundle simulations. The advantage of the dynamic subgrid scale model is that it requires no input model coefficient: the coefficient is evaluated dynamically at each nodal location in the flow domain. Dynamic subgrid scale results are obtained in the form of power spectral densities and flow visualization of turbulent characteristics. Comparisons are performed among the dynamic subgrid scale model, the Smagorinsky eddy viscosity model (which is used as the base model for the dynamic subgrid scale model) and available experimental data. Spectral results of the dynamic subgrid scale model correlate better with experimental data. Satisfactory turbulence characteristics are observed through flow visualization
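For reference, the base Smagorinsky eddy-viscosity closure that the dynamic procedure builds on can be sketched as below; the dynamic model of Germano et al. replaces the fixed coefficient with one evaluated locally from a test filter (Germano identity), which is omitted here, and the function name and coefficient value are illustrative:

```python
import numpy as np

def smagorinsky_eddy_viscosity(dudy, delta, cs=0.17):
    """Base Smagorinsky model: nu_t = (Cs * Delta)^2 * |S|.
    For a pure shear profile u(y), the strain-rate magnitude
    |S| = sqrt(2 * S_ij * S_ij) reduces to |du/dy|.
    A dynamic procedure would compute Cs per node from a test
    filter instead of fixing it as done here."""
    return (cs * delta) ** 2 * np.abs(np.asarray(dudy, dtype=float))
```

With a filter width Delta of 0.1 and a shear rate of 2 s^-1, this gives nu_t = (0.17 · 0.1)^2 · 2 = 5.78e-4.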

  20. An integrated model for assessing both crop productivity and agricultural water resources at a large scale

    Science.gov (United States)

    Okada, M.; Sakurai, G.; Iizumi, T.; Yokozawa, M.

    2012-12-01

    Agricultural production utilizes regional resources (e.g. river water and ground water) as well as local resources (e.g. temperature, rainfall, solar energy). Future climate change and increasing demand due to population growth and economic development would strongly affect the availability of water resources for agricultural production. While many studies have assessed the impacts of climate change on agriculture, few studies dynamically account for changes in water resources and crop production. This study proposes an integrated model for assessing both crop productivity and agricultural water resources at a large scale. Irrigation management in response to subseasonal variability in weather and crop growth also varies for each region and each crop; to deal with such variations, we used the Markov Chain Monte Carlo technique to quantify region-specific parameters associated with crop growth and irrigation water estimations. We coupled a large-scale crop model (Sakurai et al. 2012) with a global water resources model, H08 (Hanasaki et al. 2008). The integrated model consists of five sub-models for the following processes: land surface, crop growth, river routing, reservoir operation, and anthropogenic water withdrawal. The land surface sub-model is based on a watershed hydrology model, SWAT (Neitsch et al. 2009). Surface and subsurface runoffs simulated by the land surface sub-model were input to the river routing sub-model of the H08 model. A part of the regional water resources available for agriculture, simulated by the H08 model, was input as irrigation water to the land surface sub-model. The timing and amount of irrigation water were simulated at a daily step. The integrated model reproduced the observed streamflow in an individual watershed. Additionally, the model accurately reproduced the trends and interannual variations of crop yields. 
To demonstrate the usefulness of the integrated model, we compared two types of impact assessment of
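The MCMC calibration step mentioned in this record can be illustrated with a minimal random-walk Metropolis sampler; this is a generic sketch of the technique (the target log-density, step size, and names are placeholders, not the study's actual likelihood for crop and irrigation parameters):

```python
import math
import random

def metropolis(log_post, x0, n_samples=20000, step=2.4, seed=7):
    """Random-walk Metropolis: propose x' = x + N(0, step^2) and accept
    with probability min(1, p(x')/p(x)), working in log densities."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    out = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        out.append(x)
    return out
```

Run on a standard-normal log-density, the sample mean and variance should approach 0 and 1, which is the usual sanity check before pointing the sampler at a real posterior.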

  1. Additional Survival Benefit of Involved-Lesion Radiation Therapy After R-CHOP Chemotherapy in Limited Stage Diffuse Large B-Cell Lymphoma

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Jeanny [Department of Radiation Oncology, Seoul National University College of Medicine, Seoul (Korea, Republic of); Kim, Il Han, E-mail: ihkim@snu.ac.kr [Department of Radiation Oncology, Seoul National University College of Medicine, Seoul (Korea, Republic of); Cancer Research Institute, Seoul National University College of Medicine, Seoul (Korea, Republic of); Institute of Radiation Medicine, Medical Research Center, Seoul National University, Seoul (Korea, Republic of); Kim, Byoung Hyuck [Department of Radiation Oncology, Seoul National University College of Medicine, Seoul (Korea, Republic of); Kim, Tae Min; Heo, Dae Seog [Department of Internal Medicine, Seoul National University Hospital, Seoul (Korea, Republic of)

    2015-05-01

    Purpose: The purpose of this study was to evaluate the role of involved-lesion radiation therapy (ILRT) after rituximab, cyclophosphamide, doxorubicin, vincristine, and prednisone (R-CHOP) chemotherapy in limited stage diffuse large B-cell lymphoma (DLBCL) by comparing outcomes of R-CHOP therapy alone with R-CHOP followed by ILRT. Methods and Materials: We identified 198 patients treated with R-CHOP (median, 6 cycles) for pathologically confirmed DLBCL of limited stage from July 2004 to December 2012. Clinical characteristics of these patients were 33% with stage I and 66.7% with stage II; 79.8% were in the low or low-intermediate risk group; 13.6% had B symptoms; 29.8% had bulky tumors (≥7 cm); and 75.3% underwent ≥6 cycles of R-CHOP therapy. RT was given to 43 patients (21.7%) using ILRT technique, which included the prechemotherapy tumor volume with a median margin of 2 cm (median RT dose: 36 Gy). Results: After a median follow-up of 40 months, 3-year progression-free survival (PFS) and overall survival (OS) were 85.8% and 88.9%, respectively. Multivariate analysis showed ≥6 cycles of R-CHOP (PFS, P=.004; OS, P=.004) and ILRT (PFS, P=.021; OS, P=.014) were favorable prognosticators of PFS and OS. A bulky tumor (P=.027) and response to R-CHOP (P=.012) were also found to be independent factors of OS. In subgroup analysis, the effect of ILRT was prominent in patients with a bulky tumor (PFS, P=.014; OS, P=.030) or an elevated level of serum lactate dehydrogenase (LDH; PFS, P=.004; OS, P=.012). Conclusions: Our results suggest that ILRT after R-CHOP therapy improves PFS and OS in patients with limited stage DLBCL, especially in those with bulky disease or an elevated serum LDH level.

  2. Additional survival benefit of involved-lesion radiation therapy after R-CHOP chemotherapy in limited stage diffuse large B-cell lymphoma.

    Science.gov (United States)

    Kwon, Jeanny; Kim, Il Han; Kim, Byoung Hyuck; Kim, Tae Min; Heo, Dae Seog

    2015-05-01

    The purpose of this study was to evaluate the role of involved-lesion radiation therapy (ILRT) after rituximab, cyclophosphamide, doxorubicin, vincristine, and prednisone (R-CHOP) chemotherapy in limited stage diffuse large B-cell lymphoma (DLBCL) by comparing outcomes of R-CHOP therapy alone with R-CHOP followed by ILRT. We identified 198 patients treated with R-CHOP (median, 6 cycles) for pathologically confirmed DLBCL of limited stage from July 2004 to December 2012. Clinical characteristics of these patients were 33% with stage I and 66.7% with stage II; 79.8% were in the low or low-intermediate risk group; 13.6% had B symptoms; 29.8% had bulky tumors (≥ 7 cm); and 75.3% underwent ≥ 6 cycles of R-CHOP therapy. RT was given to 43 patients (21.7%) using ILRT technique, which included the prechemotherapy tumor volume with a median margin of 2 cm (median RT dose: 36 Gy). After a median follow-up of 40 months, 3-year progression-free survival (PFS) and overall survival (OS) were 85.8% and 88.9%, respectively. Multivariate analysis showed ≥ 6 cycles of R-CHOP (PFS, P=.004; OS, P=.004) and ILRT (PFS, P=.021; OS, P=.014) were favorable prognosticators of PFS and OS. A bulky tumor (P=.027) and response to R-CHOP (P=.012) were also found to be independent factors of OS. In subgroup analysis, the effect of ILRT was prominent in patients with a bulky tumor (PFS, P=.014; OS, P=.030) or an elevated level of serum lactate dehydrogenase (LDH; PFS, P=.004; OS, P=.012). Our results suggest that ILRT after R-CHOP therapy improves PFS and OS in patients with limited stage DLBCL, especially in those with bulky disease or an elevated serum LDH level. Copyright © 2015. Published by Elsevier Inc.

  3. Large fractions of CO2-fixing microorganisms in pristine limestone aquifers appear to be involved in the oxidation of reduced sulfur and nitrogen compounds

    Science.gov (United States)

    Herrmann, Martina; Rusznyák, Anna; Akob, Denise M.; Schulze, Isabel; Opitz, Sebastian; Totsche, Kai Uwe; Küsel, Kirsten

    2015-01-01

    The traditional view of the dependency of subsurface environments on surface-derived allochthonous carbon inputs is challenged by increasing evidence for the role of lithoautotrophy in aquifer carbon flow. We linked information on autotrophy (Calvin-Benson-Bassham cycle) with that from total microbial community analysis in groundwater at two superimposed limestone groundwater reservoirs (upper and lower aquifers). Quantitative PCR revealed that up to 17% of the microbial population had the genetic potential to fix CO2 via the Calvin cycle, with abundances of cbbM and cbbL genes, encoding RubisCO (ribulose-1,5-bisphosphate carboxylase/oxygenase) forms I and II, ranging from 1.14 × 103 to 6 × 106 genes liter−1 over a 2-year period. The structure of the active microbial communities based on 16S rRNA transcripts differed between the two aquifers, with a larger fraction of heterotrophic, facultative anaerobic, soil-related groups in the oxygen-deficient upper aquifer. Most identified CO2-assimilating phylogenetic groups appeared to be involved in the oxidation of sulfur or nitrogen compounds and harbored both RubisCO forms I and II, allowing efficient CO2 fixation in environments with strong oxygen and CO2 fluctuations. The genera Sulfuricella and Nitrosomonas were represented by read fractions of up to 78 and 33%, respectively, within the cbbM and cbbL transcript pool and accounted for 5.6 and 3.8% of 16S rRNA sequence reads, respectively, in the lower aquifer. Our results indicate that a large fraction of bacteria in pristine limestone aquifers has the genetic potential for autotrophic CO2 fixation, with energy most likely provided by the oxidation of reduced sulfur and nitrogen compounds.

  4. Analogue scale modelling of extensional tectonic processes using a large state-of-the-art centrifuge

    Science.gov (United States)

    Park, Heon-Joon; Lee, Changyeol

    2017-04-01

    Analogue scale modelling of extensional tectonic processes such as rifting and basin opening has been conducted numerous times. Among the controlling factors, gravitational acceleration (g) on the scale models was regarded as a constant (Earth's gravity) in most of these studies, and only a few considered larger gravitational acceleration by using a centrifuge (an apparatus generating a large centrifugal force by rotating the model at high speed). Although analogue models using a centrifuge allow large scale-down and accelerated deformation driven by density differences, such as salt diapirs, the possible model size is mostly limited to about 10 cm. A state-of-the-art centrifuge installed at the KOCED Geotechnical Centrifuge Testing Center, Korea Advanced Institute of Science and Technology (KAIST), allows a large surface area of the scale models, up to 70 by 70 cm, under the maximum capacity of 240 g-tons. Using the centrifuge, we will conduct analogue scale modelling of extensional tectonic processes such as the opening of back-arc basins. Acknowledgement This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (grant number 2014R1A6A3A04056405).

  5. Subgrid-scale models for large-eddy simulation of rotating turbulent channel flows

    Science.gov (United States)

    Silvis, Maurits H.; Bae, Hyunji Jane; Trias, F. Xavier; Abkar, Mahdi; Moin, Parviz; Verstappen, Roel

    2017-11-01

    We aim to design subgrid-scale models for large-eddy simulation of rotating turbulent flows. Rotating turbulent flows form a challenging test case for large-eddy simulation due to the presence of the Coriolis force. The Coriolis force conserves the total kinetic energy while transporting it from small to large scales of motion, leading to the formation of large-scale anisotropic flow structures. The Coriolis force may also cause partial flow laminarization and the occurrence of turbulent bursts. Many subgrid-scale models for large-eddy simulation are, however, primarily designed to parametrize the dissipative nature of turbulent flows, ignoring the specific characteristics of transport processes. We, therefore, propose a new subgrid-scale model that, in addition to the usual dissipative eddy viscosity term, contains a nondissipative nonlinear model term designed to capture transport processes, such as those due to rotation. We show that the addition of this nonlinear model term leads to improved predictions of the energy spectra of rotating homogeneous isotropic turbulence as well as of the Reynolds stress anisotropy in spanwise-rotating plane-channel flows. This work is financed by the Netherlands Organisation for Scientific Research (NWO) under Project Number 613.001.212.

  6. Optimizing Prediction Using Bayesian Model Averaging: Examples Using Large-Scale Educational Assessments.

    Science.gov (United States)

    Kaplan, David; Lee, Chansoon

    2018-01-01

    This article provides a review of Bayesian model averaging as a means of optimizing the predictive performance of common statistical models applied to large-scale educational assessments. The Bayesian framework recognizes that in addition to parameter uncertainty, there is uncertainty in the choice of models themselves. A Bayesian approach to addressing the problem of model uncertainty is the method of Bayesian model averaging. Bayesian model averaging searches the space of possible models for a set of submodels that satisfy certain scientific principles and then averages the coefficients across these submodels weighted by each model's posterior model probability (PMP). Using the weighted coefficients for prediction has been shown to yield optimal predictive performance according to certain scoring rules. We demonstrate the utility of Bayesian model averaging for prediction in education research with three examples: Bayesian regression analysis, Bayesian logistic regression, and a recently developed approach for Bayesian structural equation modeling. In each case, the model-averaged estimates are shown to yield better prediction of the outcome of interest than any submodel based on predictive coverage and the log-score rule. Implications for the design of large-scale assessments when the goal is optimal prediction in a policy context are discussed.
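The averaging step described in this abstract, weighting submodel coefficients or predictions by posterior model probabilities, can be sketched with the common BIC approximation to those probabilities; this generic sketch is not the article's exact procedure, and all names are illustrative:

```python
import numpy as np

def bma_weights_from_bic(bics):
    """Posterior model probabilities from BIC values via the standard
    approximation PMP_k proportional to exp(-0.5 * (BIC_k - BIC_min))."""
    b = np.asarray(bics, dtype=float)
    w = np.exp(-0.5 * (b - b.min()))
    return w / w.sum()

def bma_predict(predictions, bics):
    """Model-averaged prediction: each row of `predictions` holds one
    submodel's predictions; rows are averaged with PMP weights."""
    w = bma_weights_from_bic(bics)
    return np.asarray(predictions, dtype=float).T @ w
```

Two submodels with equal BIC receive equal weight, so their predictions are simply averaged; a submodel whose BIC is worse by 2k is down-weighted by a factor of e^(-k).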

  7. Large-scale hydrology in Europe : observed patterns and model performance

    Energy Technology Data Exchange (ETDEWEB)

    Gudmundsson, Lukas

    2011-06-15

    In a changing climate, terrestrial water storages are of great interest as water availability impacts key aspects of ecosystem functioning. Thus, a better understanding of the variations of wet and dry periods will contribute to fully grasping processes of the earth system such as nutrient cycling and vegetation dynamics. Currently, river runoff from small, nearly natural, catchments is one of the few variables of the terrestrial water balance that is regularly monitored with detailed spatial and temporal coverage on large scales. River runoff, therefore, provides a foundation to approach European hydrology with respect to observed patterns on large scales, and with regard to the ability of models to capture these. The analysis of observed river flow from small catchments focused on the identification and description of spatial patterns of simultaneous temporal variations of runoff. These are dominated by large-scale variations of climatic variables but also altered by catchment processes. It was shown that time series of annual low, mean and high flows follow the same atmospheric drivers. The observation that high flows are more closely coupled to large-scale atmospheric drivers than low flows indicates the increasing influence of catchment properties on runoff under dry conditions. Further, it was shown that the low-frequency variability of European runoff is dominated by two opposing centres of simultaneous variations, such that dry years in the north are accompanied by wet years in the south. Large-scale hydrological models are simplified representations of our current perception of the terrestrial water balance on large scales. Quantification of a model's strengths and weaknesses is the prerequisite for a reliable interpretation of simulation results. Model evaluations may also help detect shortcomings in model assumptions and thus enable a refinement of the current perception of hydrological systems. The ability of a multi model ensemble of nine large

  8. Attitudes towards people with mental illness among psychiatrists, psychiatric nurses, involved family members and the general population in a large city in Guangzhou, China.

    Science.gov (United States)

    Sun, Bin; Fan, Ni; Nie, Sha; Zhang, Minglin; Huang, Xini; He, Hongbo; Rosenheck, Robert A

    2014-01-01

    Stigma towards people with mental illness is believed to be widespread in low and middle income countries. This study assessed the attitudes towards people with mental illness among psychiatrists, psychiatric nurses, involved family members of patients in a psychiatric facility and the general public using a standard 43-item survey (N = 535). Exploratory factor analysis identified four distinctive attitudes which were then compared using Analysis of Covariance (ANCOVA) among the four groups, all with ties to the largest psychiatric facility in Guangzhou, China, adjusting for sociodemographic differences. Four uncorrelated factors expressed preferences for 1) community-based treatment, social integration and a biopsychosocial model of causation, 2) direct personal relationships with people with mental illness, 3) a lack of fear and positive views of personal interactions with people with mental illness, 4) disbelief in superstitious explanations of mental illness. Statistically significant differences favored community-based treatment and biopsychosocial causation (factor 1) among professional groups (psychiatrists and nurses) as compared with family members and the general public (p problems of their relatives and support in their care.

  9. Adaptation of streeter model - Phelps for water quality modeling in a large semi-arid basin.

    OpenAIRE

    Wagner José da Silva Mendes

    2014-01-01

    This paper presents an adaptation of the classical Streeter-Phelps model for the modeling of Dissolved Oxygen (DO) and Biochemical Oxygen Demand (BOD) in the basin of the Upper Jaguaribe (25,000 km2), State of Ceara, Brazil. The adaptation consisted of the numerical solution of the Streeter-Phelps differential equations, considering the effect of incremental flows and sewage releases over the sections, as well as the variability of the sections of rivers and tributaries. For model calibra...
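The classical Streeter-Phelps system that the paper solves numerically can be sketched with a forward-Euler integrator; the scheme, step size, and names here are illustrative, and the paper's additions (incremental flows, sewage releases, varying reach geometry) are not reproduced:

```python
def streeter_phelps(L0, D0, kd, ka, t_end, dt=0.001):
    """Forward-Euler solution of the classical Streeter-Phelps system:
        dL/dt = -kd * L           (BOD decay)
        dD/dt =  kd * L - ka * D  (oxygen deficit)
    L is the remaining BOD, D the dissolved-oxygen deficit,
    kd the deoxygenation rate, ka the reaeration rate."""
    L, D = float(L0), float(D0)
    steps = int(round(t_end / dt))
    for _ in range(steps):
        # evaluate both right-hand sides at the old state, then update
        L, D = L + dt * (-kd * L), D + dt * (kd * L - ka * D)
    return L, D
```

The result can be checked against the closed-form solution D(t) = kd·L0/(ka−kd)·(e^(−kd·t) − e^(−ka·t)) + D0·e^(−ka·t); with a small step the Euler trajectory tracks it closely.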

  10. Evaluating two model reduction approaches for large scale hedonic models sensitive to omitted variables and multicollinearity

    DEFF Research Database (Denmark)

    Panduro, Toke Emil; Thorsen, Bo Jellesmark

    2014-01-01

    Hedonic models in environmental valuation studies have grown in terms of number of transactions and number of explanatory variables. We focus on the practical challenge of model reduction, when aiming for reliable parsimonious models, sensitive to omitted variable bias and multicollinearity. We...

  11. Hierarchical and Matrix Structures in a Large Organizational Email Network: Visualization and Modeling Approaches

    OpenAIRE

    Sims, Benjamin H.; Sinitsyn, Nikolai; Eidenbenz, Stephan J.

    2014-01-01

    This paper presents findings from a study of the email network of a large scientific research organization, focusing on methods for visualizing and modeling organizational hierarchies within large, complex network datasets. In the first part of the paper, we find that visualization and interpretation of complex organizational network data is facilitated by integration of network data with information on formal organizational divisions and levels. By aggregating and visualizing email traffic b...

  12. Groundwater Flow and Thermal Modeling to Support a Preferred Conceptual Model for the Large Hydraulic Gradient North of Yucca Mountain

    International Nuclear Information System (INIS)

    McGraw, D.; Oberlander, P.

    2007-01-01

    The purpose of this study is to report on the results of a preliminary modeling framework to investigate the causes of the large hydraulic gradient north of Yucca Mountain. This study builds on the Saturated Zone Site-Scale Flow and Transport Model (referenced herein as the Site-scale model (Zyvoloski, 2004a)), which is a three-dimensional saturated zone model of the Yucca Mountain area. Groundwater flow was simulated under natural conditions. The model framework and grid design describe the geologic layering, and the calibration parameters describe the hydrogeology. The Site-scale model is calibrated to hydraulic heads, fluid temperature, and groundwater flowpaths. One area of interest in the Site-scale model represents the large hydraulic gradient north of Yucca Mountain. Nearby water levels suggest over 200 meters of hydraulic head difference in less than 1,000 meters horizontal distance. Given the geologic conceptual models defined by various hydrogeologic reports (Faunt, 2000, 2001; Zyvoloski, 2004b), no definitive explanation has been found for the cause of the large hydraulic gradient. Luckey et al. (1996) present several possible explanations for the large hydraulic gradient: (1) the gradient is simply the result of flow through the upper volcanic confining unit, which is nearly 300 meters thick near the large gradient; (2) the gradient represents a semi-perched system in which flow in the upper and lower aquifers is predominantly horizontal, whereas flow in the upper confining unit would be predominantly vertical; (3) the gradient represents a drain down a buried fault from the volcanic aquifers to the lower Carbonate Aquifer; (4) the gradient represents a spillway in which a fault marks the effective northern limit of the lower volcanic aquifer; (5) the large gradient results from the presence at depth of the Eleana Formation, a part of the Paleozoic upper confining unit, which overlies the lower Carbonate Aquifer in much of the Death Valley region. The

  13. Surface accuracy analysis and mathematical modeling of deployable large aperture elastic antenna reflectors

    Science.gov (United States)

    Coleman, Michael J.

    One class of deployable large aperture antenna consists of thin light-weight parabolic reflectors. A reflector of this type is a deployable structure that consists of an inflatable elastic membrane that is supported about its perimeter by a set of elastic tendons and is subjected to a constant hydrostatic pressure. A design may not hold the parabolic shape to within a desired tolerance due to an elastic deformation of the surface, particularly near the rim. We can compute the equilibrium configuration of the reflector system using an optimization-based solution procedure that calculates the total system energy and determines a configuration of minimum energy. Analysis of the equilibrium configuration reveals the behavior of the reflector shape under various loading conditions. The pressure, film strain energy, tendon strain energy, and gravitational energy are all considered in this analysis. The surface accuracy of the antenna reflector is measured by an RMS calculation, while the reflector phase error component of the efficiency is determined by computing the power density at boresight. Our error computation methods are tailored for the faceted surface of our model, and they are more accurate for this particular problem than the commonly applied Ruze Equation. Previous analytical work on parabolic antennas focused on axisymmetric geometries and loads. Symmetric equilibria are not assumed in our analysis. In addition, this dissertation contains two principal original findings: (1) the typical supporting tendon system tends to flatten a parabolic reflector near its edge. We find that surface accuracy can be significantly improved by fixing the edge of the inflated reflector to a rigid structure; (2) for large membranes assembled from flat sheets of thin material, we demonstrate that the surface accuracy of the resulting inflated membrane reflector can be improved by altering the cutting pattern of the flat components. Our findings demonstrate that the proper choice
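
The abstract's equilibrium-by-energy-minimization idea can be illustrated with a toy 1-D analogue (a hypothetical sketch, not the dissertation's actual membrane model): a pressurized elastic membrane discretized as a chain of springs with a pinned rim, whose equilibrium shape is found by gradient descent on the total energy.

```python
import numpy as np

def equilibrium_shape(n=21, k=50.0, p=1.0, steps=20000, lr=1e-3):
    """Minimize E = sum 0.5*k*(y[j+1]-y[j])**2 - p*sum(y) with pinned ends.

    Toy stand-in for the reflector's minimum-energy configuration search:
    spring strain energy plays the role of film/tendon strain energy and a
    uniform transverse load p plays the role of the hydrostatic pressure.
    """
    y = np.zeros(n)                      # transverse deflection, ends pinned at 0
    for _ in range(steps):
        grad = np.zeros(n)
        # strain-energy gradient at interior nodes
        grad[1:-1] += k * (2 * y[1:-1] - y[:-2] - y[2:])
        grad[1:-1] -= p                  # pressure does work p*y_i on each node
        y -= lr * grad                   # descend toward the energy minimum
        y[0] = y[-1] = 0.0               # rim fixed to a rigid structure
    return y

y = equilibrium_shape()                  # converges to a parabola-like sag
```

For this discretization the exact equilibrium is y_i = p/(2k) * i*(n-1-i), so the computed shape can be checked against a closed form, mirroring how the dissertation validates its optimization-based procedure.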

  14. Modeling and control of a large nuclear reactor. A three-time-scale approach

    Energy Technology Data Exchange (ETDEWEB)

    Shimjith, S.R. [Indian Institute of Technology Bombay, Mumbai (India); Bhabha Atomic Research Centre, Mumbai (India); Tiwari, A.P. [Bhabha Atomic Research Centre, Mumbai (India); Bandyopadhyay, B. [Indian Institute of Technology Bombay, Mumbai (India). IDP in Systems and Control Engineering

    2013-07-01

    This monograph reports recent research on the modeling and control of a large nuclear reactor, presenting a three-time-scale approach; it is written by leading experts in the field. Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady-state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of several complex dynamic phenomena existing in a reactor. Quite often, the models developed are of prohibitively large order, non-linear, and of a complex structure not readily amenable to control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, obtaining a comparatively smaller-order model in standard state-space form and thus overcoming these difficulties. It further brings in innovative methods of controller design for systems exhibiting the multi-time-scale property, with emphasis on three-time-scale systems.

  15. The three-point function as a probe of models for large-scale structure

    International Nuclear Information System (INIS)

    Frieman, J.A.; Gaztanaga, E.

    1993-01-01

    The authors analyze the consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime. Several variations of the standard Ω = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, R_p ∼ 20 h⁻¹ Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower, et al. The authors show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r ≳ R_p. Current observational constraints on the three-point amplitudes Q_3 and S_3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales

  16. An interactive display system for large-scale 3D models

    Science.gov (United States)

    Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman

    2018-04-01

    With the improvement of 3D reconstruction theory and the rapid development of computer hardware technology, the reconstructed 3D models are enlarging in scale and increasing in complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing power limitations, it is difficult to achieve real-time display and interaction with large scale 3D models for some common 3D display software, such as MeshLab. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance, and then use an out-of-core view-dependent multi-resolution rendering scheme to realize the real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming in the reconstructed scene, and 3D camera poses can also be displayed. Furthermore, the memory consumption can be significantly decreased via an internal and external memory exchange mechanism, so that it is possible to display a large scale reconstructed scene with over millions of 3D points or triangular meshes in a regular PC with only 4GB RAM.
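
The core of a view-dependent multi-resolution scheme is choosing a coarser level of detail as the camera moves away from the model. A minimal sketch of that selection rule (the function name, base distance, and doubling heuristic are illustrative assumptions, not the paper's actual system):

```python
import math

def lod_level(distance, base_distance=10.0, num_levels=5):
    """Return a coarser LOD level as the camera moves away.

    Level 0 is the full-resolution mesh; each doubling of the viewing
    distance beyond base_distance drops one level, clamped to the
    coarsest level available.
    """
    if distance <= base_distance:
        return 0
    level = int(math.log2(distance / base_distance)) + 1
    return min(level, num_levels - 1)
```

An out-of-core renderer would use such a rule per scene chunk, keeping only the chunks' selected levels resident in RAM and swapping the rest to disk, which is how memory consumption stays bounded.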

  17. Wind and Photovoltaic Large-Scale Regional Models for hourly production evaluation

    DEFF Research Database (Denmark)

    Marinelli, Mattia; Maule, Petr; Hahmann, Andrea N.

    2015-01-01

    This work presents two large-scale regional models used for the evaluation of normalized power output from wind turbines and photovoltaic power plants on a European regional scale. The models give an estimate of renewable production on a regional scale with 1 h resolution, starting from a mesoscale … of the transmission system, especially regarding the cross-border power flows. The tuning of these regional models is done using historical meteorological data acquired on a per-country basis and using publicly available data of installed capacity.

  18. Model checking methodology for large systems, faults and asynchronous behaviour. SARANA 2011 work report

    International Nuclear Information System (INIS)

    Lahtinen, J.; Launiainen, T.; Heljanko, K.; Ropponen, J.

    2012-01-01

    Digital instrumentation and control (I and C) systems are challenging to verify. They enable complicated control functions, and the state spaces of the models easily become too large for comprehensive verification through traditional methods. Model checking is a formal method that can be used for system verification. A number of efficient model checking systems are available that provide analysis tools to determine automatically whether a given state machine model satisfies the desired safety properties. This report reviews the work performed in the Safety Evaluation and Reliability Analysis of Nuclear Automation (SARANA) project in 2011 regarding model checking. We have developed new, more exact modelling methods that are able to capture the behaviour of a system more realistically. In particular, we have developed more detailed fault models depicting the hardware configuration of a system, and methodology to model function-block-based systems asynchronously. In order to improve the usability of our model checking methods, we have developed an algorithm for model checking large modular systems. The algorithm can be used to verify properties of a model that could otherwise not be verified in a straightforward manner. (orig.)

  19. Model checking methodology for large systems, faults and asynchronous behaviour. SARANA 2011 work report

    Energy Technology Data Exchange (ETDEWEB)

    Lahtinen, J. [VTT Technical Research Centre of Finland, Espoo (Finland); Launiainen, T.; Heljanko, K.; Ropponen, J. [Aalto Univ., Espoo (Finland). Dept. of Information and Computer Science

    2012-07-01

    Digital instrumentation and control (I and C) systems are challenging to verify. They enable complicated control functions, and the state spaces of the models easily become too large for comprehensive verification through traditional methods. Model checking is a formal method that can be used for system verification. A number of efficient model checking systems are available that provide analysis tools to determine automatically whether a given state machine model satisfies the desired safety properties. This report reviews the work performed in the Safety Evaluation and Reliability Analysis of Nuclear Automation (SARANA) project in 2011 regarding model checking. We have developed new, more exact modelling methods that are able to capture the behaviour of a system more realistically. In particular, we have developed more detailed fault models depicting the hardware configuration of a system, and methodology to model function-block-based systems asynchronously. In order to improve the usability of our model checking methods, we have developed an algorithm for model checking large modular systems. The algorithm can be used to verify properties of a model that could otherwise not be verified in a straightforward manner. (orig.)
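
The safety-property verification described in the two SARANA reports can be boiled down to a reachability question: does any "bad" state lie in the reachable state space? A hypothetical miniature of explicit-state model checking (real tools such as NuSMV work symbolically over far larger state spaces, but the underlying idea is the same):

```python
from collections import deque

def check_safety(initial, transitions, bad):
    """Breadth-first search of the state space.

    Returns (True, None) if no bad state is reachable, otherwise
    (False, counterexample) where the counterexample is a shortest
    trace from the initial state to a property violation.
    """
    parent = {initial: None}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if state in bad:
            # reconstruct the counterexample trace back to the initial state
            trace = []
            while state is not None:
                trace.append(state)
                state = parent[state]
            return False, list(reversed(trace))
        for nxt in transitions.get(state, ()):
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return True, None

# invented toy I&C state machine: the safety property "never reach error" fails
safe, trace = check_safety('idle',
                           {'idle': ['run'], 'run': ['idle', 'error']},
                           bad={'error'})
# trace is the shortest violating run: ['idle', 'run', 'error']
```

The counterexample trace is exactly what makes model checking useful in practice: it tells the analyst *how* the modelled system can violate the property.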

  20. Topic modeling for cluster analysis of large biological and medical datasets.

    Science.gov (United States)

    Zhao, Weizhong; Zou, Wen; Chen, James J

    2014-01-01

    The big data moniker is nowhere better deserved than to describe the ever-increasing prodigiousness and complexity of biological and medical datasets. New methods are needed to generate and test hypotheses, foster biological interpretation, and build validated predictors. Although multivariate techniques such as cluster analysis may allow researchers to identify groups, or clusters, of related variables, the accuracies and effectiveness of traditional clustering methods diminish for large and hyperdimensional datasets. Topic modeling is an active research field in machine learning and has been mainly used as an analytical tool to structure large textual corpora for data mining. Its ability to reduce high dimensionality to a small number of latent variables makes it suitable as a means for clustering or overcoming clustering difficulties in large biological and medical datasets. In this study, three topic model-derived clustering methods, highest probable topic assignment, feature selection and feature extraction, are proposed and tested on the cluster analysis of three large datasets: a Salmonella pulsed-field gel electrophoresis (PFGE) dataset, a lung cancer dataset, and a breast cancer dataset, which represent various types of large biological or medical datasets. All three methods are shown to improve the efficacy/effectiveness of clustering results on the three datasets in comparison to traditional methods. A preferable cluster analysis method emerged for each of the three datasets on the basis of replicating known biological truths. Topic modeling could be advantageously applied to the large datasets of biological or medical research. The three proposed topic model-derived clustering methods, highest probable topic assignment, feature selection and feature extraction, yield clustering improvements for the three different data types. Clusters more efficaciously represent truthful groupings and subgroupings in the data than traditional methods, suggesting
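
The first of the three methods, "highest probable topic assignment", is simple to state: once a topic model has reduced each sample to a distribution over K latent topics, the cluster label is the most probable topic. A minimal sketch (the doc-topic matrix below is invented for illustration; in practice it would come from fitting, e.g., LDA to the biological dataset):

```python
import numpy as np

def assign_clusters(doc_topic):
    """doc_topic: (n_samples, n_topics) array with rows summing to 1.

    Highest-probable-topic assignment: each sample's cluster is the
    index of its largest topic probability.
    """
    return np.argmax(doc_topic, axis=1)

# hypothetical topic distributions for four samples over three latent topics
doc_topic = np.array([
    [0.8, 0.1, 0.1],   # dominated by topic 0
    [0.2, 0.7, 0.1],   # dominated by topic 1
    [0.1, 0.2, 0.7],   # dominated by topic 2
    [0.6, 0.3, 0.1],   # also topic 0 -> same cluster as the first sample
])
labels = assign_clusters(doc_topic)   # -> [0, 1, 2, 0]
```

The "feature selection" and "feature extraction" variants differ only in what is fed to a downstream clustering algorithm (selected topic-discriminative features versus the latent-topic coordinates themselves).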

  1. Software engineering the mixed model for genome-wide association studies on large samples.

    Science.gov (United States)

    Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J

    2009-11-01

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.
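
The computational core of the mixed-model association approach reviewed above is generalized least squares under a covariance built from the kinship matrix. A minimal sketch with variance components taken as known (real packages estimate them, e.g. by REML; the synthetic data and parameter values here are illustrative assumptions):

```python
import numpy as np

def gls_beta(y, X, K, sg2=1.0, se2=1.0):
    """Fixed-effect estimates under the mixed model y = X*beta + g + e.

    The polygenic term g has covariance sg2*K (K = kinship matrix) and the
    residual e has covariance se2*I, so marginally Var(y) = V and
    beta_hat = (X' V^-1 X)^-1 X' V^-1 y.
    """
    V = sg2 * K + se2 * np.eye(len(y))
    Vinv = np.linalg.inv(V)
    XtVinv = X.T @ Vinv
    return np.linalg.solve(XtVinv @ X, XtVinv @ y)

# synthetic check: unrelated individuals (K = I), intercept + one marker,
# true effects beta = (1.0, 0.5); with K = I the GLS fit reduces to OLS
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.integers(0, 3, n).astype(float)])
y = X @ np.array([1.0, 0.5]) + 0.1 * rng.standard_normal(n)
beta = gls_beta(y, X, K=np.eye(n))
```

For real GWAS sample sizes, inverting V directly is exactly the "impractical resource consumption" the abstract refers to; production software exploits spectral decompositions of K to avoid it.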

  2. Prospectus: towards the development of high-fidelity models of wall turbulence at large Reynolds number.

    Science.gov (United States)

    Klewicki, J C; Chini, G P; Gibson, J F

    2017-03-13

    Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier-Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  3. Prospectus: towards the development of high-fidelity models of wall turbulence at large Reynolds number

    Science.gov (United States)

    Klewicki, J. C.; Chini, G. P.; Gibson, J. F.

    2017-01-01

    Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier–Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted. This article is part of the themed issue ‘Toward the development of high-fidelity models of wall turbulence at large Reynolds number’. PMID:28167585

  4. Linear velocity fields in non-Gaussian models for large-scale structure

    Science.gov (United States)

    Scherrer, Robert J.

    1992-01-01

    Linear velocity fields in two types of physically motivated non-Gaussian models are examined for large-scale structure: seed models, in which the density field is a convolution of a density profile with a distribution of points, and local non-Gaussian fields, derived from a local nonlinear transformation on a Gaussian field. The distribution of a single component of the velocity is derived for seed models with randomly distributed seeds, and these results are applied to the seeded hot dark matter model and the global texture model with cold dark matter. An expression for the distribution of a single component of the velocity in arbitrary local non-Gaussian models is given, and these results are applied to such fields with chi-squared and lognormal distributions. It is shown that all seed models with randomly distributed seeds and all local non-Guassian models have single-component velocity distributions with positive kurtosis.
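
The "local non-Gaussian field" construction in this abstract, a pointwise nonlinear transformation of a Gaussian field, can be illustrated numerically. The toy below (an illustration of the construction only, not the paper's analytic velocity-distribution calculation) applies a lognormal transform to Gaussian draws and checks that the result has positive excess kurtosis, the same qualitative signature the paper derives for the velocity components:

```python
import numpy as np

def excess_kurtosis(x):
    """Sample excess kurtosis: <z^4>/<z^2>^2 - 3, zero for a Gaussian."""
    z = x - x.mean()
    return np.mean(z**4) / np.mean(z**2)**2 - 3.0

rng = np.random.default_rng(42)
gauss = rng.standard_normal(200_000)   # stand-in for Gaussian field values
lognorm = np.exp(0.5 * gauss)          # local nonlinear (lognormal) transform

# the Gaussian sample has excess kurtosis near 0;
# the lognormally transformed sample has clearly positive excess kurtosis
```

A chi-squared transform (e.g. `gauss**2`) exhibits the same positive-kurtosis behavior, consistent with the abstract's claim that all such local non-Gaussian models share it.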

  5. The Undergraduate ALFALFA Team: A Model for Involving Undergraduates in Major Legacy Astronomy Research

    Science.gov (United States)

    Troischt, Parker; Koopmann, Rebecca A.; Haynes, Martha P.; Higdon, Sarah; Balonek, Thomas J.; Cannon, John M.; Coble, Kimberly A.; Craig, David; Durbala, Adriana; Finn, Rose; Hoffman, G. Lyle; Kornreich, David A.; Lebron, Mayra E.; Crone-Odekon, Mary; O'Donoghue, Aileen A.; Olowin, Ronald Paul; Pantoja, Carmen; Rosenberg, Jessica L.; Venkatesan, Aparna; Wilcots, Eric M.; Alfalfa Team

    2015-01-01

    The NSF-sponsored Undergraduate ALFALFA (Arecibo Legacy Fast ALFA) Team (UAT) is a consortium of 19 institutions founded to promote undergraduate research and faculty development within the extragalactic ALFALFA HI blind survey project and follow-up programs. The collaborative nature of the UAT allows faculty and students from a wide range of public and private colleges and especially those with small astronomy programs to develop scholarly collaborations. Components of the program include an annual undergraduate workshop at Arecibo Observatory, observing runs at Arecibo, computer infrastructure, summer and academic year research projects, and dissemination at national meetings (e.g., Alfvin et al., Martens et al., Sanders et al., this meeting). Through this model, faculty and students are learning how science is accomplished in a large collaboration while contributing to the scientific goals of a major legacy survey. In the 7 years of the program, 23 faculty and more than 220 undergraduate students have participated at a significant level. 40% of them have been women and members of underrepresented groups. Faculty, many of whom were new to the collaboration and had expertise in other fields, contribute their diverse sets of skills to ALFALFA related projects via observing, data reduction, collaborative research, and research with students. 142 undergraduate students have attended the annual workshops at Arecibo Observatory, interacting with faculty, graduate students, their peers, and Arecibo staff in lectures, group activities, tours, and observing runs. Team faculty have supervised 131 summer research projects and 94 academic year (e.g., senior thesis) projects. 62 students have traveled to Arecibo Observatory for observing runs and 46 have presented their results at national meetings. 93% of alumni are attending graduate school and/or pursuing a career in STEM. Half of those pursuing graduate degrees in Physics or Astronomy are women. This work has been

  6. Designing a Model for Medical Error Prediction in Outpatient Visits According to Organizational Commitment and Job Involvement

    Directory of Open Access Journals (Sweden)

    SM Mirhosseini

    2015-09-01

    Introduction: A wide range of variables affect medical errors, among them job involvement and organizational commitment. The joint relationship of these two variables with medical errors during outpatient visits was investigated in order to design a model. Methods: A field study with 114 physicians during outpatient visits established the mean rate of medical errors. The Azimi and Allen-Meyer questionnaires were used to measure job involvement and organizational commitment. Physicians were divided into four groups according to job involvement and organizational commitment (Zone 1: high job involvement and high organizational commitment; Zone 2: high job involvement and low organizational commitment; Zone 3: low job involvement and high organizational commitment; Zone 4: low job involvement and low organizational commitment). ANOVA and the Scheffé test were conducted in SPSS 22 to analyse the medical errors in the four zones. A guideline was presented according to the relationship between errors and the two other variables. Results: The mean organizational commitment was 79.50±12.30 and mean job involvement 12.72±3.66; medical errors were 0.32 in the first group, 0.51 in the second, 0.41 in the third, and 0.50 in the fourth. The ANOVA (F=22.20, sig=0.00) and Scheffé tests were significant, except between the second and fourth groups. The validity of the model was 73.60%. Conclusion: Applying strategies to boost organizational commitment and job involvement can help diminish medical errors during outpatient visits. Thus, investigating the factors contributing to organizational commitment and job involvement can be helpful.
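
The one-way ANOVA used to compare mean error rates across the four commitment/involvement zones can be sketched directly (the group data below are invented for illustration, loosely echoing the reported zone means, not the study's measurements):

```python
import numpy as np

def anova_f(groups):
    """One-way ANOVA F statistic for a list of 1-D sample arrays:
    between-group mean square divided by within-group mean square."""
    all_data = np.concatenate(groups)
    grand = all_data.mean()
    k, n = len(groups), len(all_data)
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# hypothetical per-physician error rates in the four zones
zone1 = np.array([0.31, 0.32, 0.33])   # high commitment, high involvement
zone2 = np.array([0.50, 0.51, 0.52])
zone3 = np.array([0.40, 0.41, 0.42])
zone4 = np.array([0.49, 0.50, 0.51])
f = anova_f([zone1, zone2, zone3, zone4])   # large F: group means differ
```

A significant F only says *some* means differ; the Scheffé post-hoc test in the study is what identifies which pairs of zones differ (here, all pairs except zones 2 and 4).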

  7. Global models underestimate large decadal declining and rising water storage trends relative to GRACE satellite data

    Science.gov (United States)

    Scanlon, Bridget R.; Zhang, Zizhan; Save, Himanshu; Sun, Alexander Y.; van Beek, Ludovicus P. H.; Wiese, David N.; Reedy, Robert C.; Longuevergne, Laurent; Döll, Petra; Bierkens, Marc F. P.

    2018-01-01

    Assessing reliability of global models is critical because of increasing reliance on these models to address past and projected future climate and human stresses on global water resources. Here, we evaluate model reliability based on a comprehensive comparison of decadal trends (2002–2014) in land water storage from seven global models (WGHM, PCR-GLOBWB, GLDAS NOAH, MOSAIC, VIC, CLM, and CLSM) to trends from three Gravity Recovery and Climate Experiment (GRACE) satellite solutions in 186 river basins (∼60% of global land area). Medians of modeled basin water storage trends greatly underestimate GRACE-derived large decreasing (≤−0.5 km³/y) and increasing (≥0.5 km³/y) trends. Decreasing trends from GRACE are mostly related to human use (irrigation) and climate variations, whereas increasing trends reflect climate variations. For example, in the Amazon, GRACE estimates a large increasing trend of ∼43 km³/y, whereas most models estimate decreasing trends (−71 to 11 km³/y). Land water storage trends, summed over all basins, are positive for GRACE (∼71–82 km³/y) but negative for models (−450 to −12 km³/y), contributing opposing trends to global mean sea level change. Impacts of climate forcing on decadal land water storage trends exceed those of modeled human intervention by about a factor of 2. The model-GRACE comparison highlights potential areas of future model development, particularly simulated water storage. The inability of models to capture large decadal water storage trends based on GRACE indicates that model projections of climate and human-induced water storage changes may be underestimated. PMID:29358394
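
The decadal trends compared in this study are, at bottom, least-squares slopes of basin storage time series. A minimal sketch of that extraction (the synthetic series below stands in for a GRACE basin series; the 43 km³/y rate mirrors the Amazon example above but the data are fabricated for illustration):

```python
import numpy as np

def storage_trend(years, storage):
    """Least-squares linear trend of a storage series,
    in storage units per year (here km^3/y)."""
    slope, _intercept = np.polyfit(years, storage, 1)
    return slope

years = np.arange(2002, 2015) + 0.5          # annual means over 2002-2014
storage = 43.0 * (years - years[0])          # basin storage rising at 43 km^3/y
trend = storage_trend(years, storage)
```

Real GRACE processing also removes the seasonal cycle and propagates solution uncertainty before trends from models and satellite solutions are compared basin by basin.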

  8. How uncertainty in socio-economic variables affects large-scale transport model forecasts

    DEFF Research Database (Denmark)

    Manzo, Stefano; Nielsen, Otto Anker; Prato, Carlo Giacomo

    2015-01-01

    A strategic task assigned to large-scale transport models is to forecast the demand for transport over long periods of time in order to assess transport projects. However, because they model complex systems, transport models have an inherent uncertainty which increases over time. As a consequence, the longer the period forecasted, the less reliable is the forecasted model output. Describing uncertainty propagation patterns over time is therefore important in order to provide complete information to the decision makers. Among the existing literature, only a few studies analyze uncertainty propagation patterns over…

  9. Sizing and scaling requirements of a large-scale physical model for code validation

    International Nuclear Information System (INIS)

    Khaleel, R.; Legore, T.

    1990-01-01

    Model validation is an important consideration in application of a code for performance assessment and therefore in assessing the long-term behavior of the engineered and natural barriers of a geologic repository. Scaling considerations relevant to porous media flow are reviewed. An analysis approach is presented for determining the sizing requirements of a large-scale, hydrology physical model. The physical model will be used to validate performance assessment codes that evaluate the long-term behavior of the repository isolation system. Numerical simulation results for sizing requirements are presented for a porous medium model in which the media properties are spatially uncorrelated

  10. Hybrid Reynolds-Averaged/Large Eddy Simulation of a Cavity Flameholder; Assessment of Modeling Sensitivities

    Science.gov (United States)

    Baurle, R. A.

    2015-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. The cases simulated corresponded to those used to examine this flowfield experimentally using particle image velocimetry. A variety of turbulence models were used for the steady-state Reynolds-averaged simulations which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged / large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to formally assess the performance of the hybrid Reynolds-averaged / large eddy simulation modeling approach in a flowfield of interest to the scramjet research community. The numerical errors were quantified for both the steady-state and scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results showed a high degree of variability when comparing the predictions obtained from each turbulence model, with the non-linear eddy viscosity model (an explicit algebraic stress model) providing the most accurate prediction of the measured values. The hybrid Reynolds-averaged/large eddy simulation results were carefully scrutinized to ensure that even the coarsest grid had an acceptable level of resolution for large eddy simulation, and that the time-averaged statistics were acceptably accurate. The autocorrelation and its Fourier transform were the primary tools used for this assessment. The statistics extracted from the hybrid simulation strategy proved to be more accurate than the Reynolds-averaged results obtained using the linear eddy viscosity models. However, there was no predictive improvement noted over the results obtained from the explicit
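
The autocorrelation check described in this abstract, used to verify that time-averaged statistics from the scale-resolving simulation are converged, can be sketched on a synthetic probe signal (the AR(1) series below is an illustrative stand-in for a velocity time trace, not the scramjet data):

```python
import numpy as np

def autocorr(x, max_lag):
    """Sample autocorrelation coefficients rho[0..max_lag] of a 1-D signal.

    The decay of rho with lag sets the integral timescale, and hence how
    long an averaging window is needed for converged statistics."""
    z = x - x.mean()
    var = np.dot(z, z) / len(z)
    return np.array([np.dot(z[:len(z) - k], z[k:]) / (len(z) * var)
                     for k in range(max_lag + 1)])

# correlated synthetic "turbulence" signal: AR(1) with coefficient 0.9
rng = np.random.default_rng(1)
x = np.empty(50_000)
x[0] = 0.0
for i in range(1, len(x)):
    x[i] = 0.9 * x[i - 1] + rng.standard_normal()

rho = autocorr(x, 20)   # rho[0] = 1, decaying roughly like 0.9**k
```

In the assessment described above, the Fourier transform of this autocorrelation (the spectrum) additionally reveals whether the grid resolves an inertial range, i.e. whether the "LES content" of the hybrid simulation is adequate.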

  11. Oligopolistic competition in wholesale electricity markets: Large-scale simulation and policy analysis using complementarity models

    Science.gov (United States)

    Helman, E. Udi

    This dissertation conducts research into the large-scale simulation of oligopolistic competition in wholesale electricity markets. The dissertation has two parts. Part I is an examination of the structure and properties of several spatial, or network, equilibrium models of oligopolistic electricity markets formulated as mixed linear complementarity problems (LCP). Part II is a large-scale application of such models to the electricity system that encompasses most of the United States east of the Rocky Mountains, the Eastern Interconnection. Part I consists of Chapters 1 to 6. The models developed in this part continue research into mixed LCP models of oligopolistic electricity markets initiated by Hobbs [67] and subsequently developed by Metzler [87] and Metzler, Hobbs and Pang [88]. Hobbs' central contribution is a network market model with Cournot competition in generation and a price-taking spatial arbitrage firm that eliminates spatial price discrimination by the Cournot firms. In one variant, the solution to this model is shown to be equivalent to the "no arbitrage" condition in a "pool" market, in which a Regional Transmission Operator optimizes spot sales such that the congestion price between two locations is exactly equivalent to the difference in the energy prices at those locations (commonly known as locational marginal pricing). Extensions to this model are presented in Chapters 5 and 6. One of these is a market model with a profit-maximizing arbitrage firm. This model is structured as a mathematical program with equilibrium constraints (MPEC), but due to the linearity of its constraints, can be solved as a mixed LCP. Part II consists of Chapters 7 to 12. The core of these chapters is a large-scale simulation of the U.S. Eastern Interconnection applying one of the Cournot competition with arbitrage models. This is the first oligopolistic equilibrium market model to encompass the full Eastern Interconnection with a realistic network representation (using
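
The Cournot building block underlying these network complementarity models can be illustrated in its simplest, non-networked form (a two-firm toy with invented numbers, not the dissertation's Eastern Interconnection model): each generator best-responds to the other's output under linear inverse demand, and the iteration converges to the Nash equilibrium.

```python
def cournot_equilibrium(a, b, c1, c2, iters=200):
    """Two-firm Cournot equilibrium under inverse demand p = a - b*(q1 + q2)
    with constant marginal costs c1, c2, found by best-response iteration.

    For this linear case the fixed point matches the closed form
    q_i = (a - 2*c_i + c_j) / (3*b) whenever both quantities are positive.
    """
    q1 = q2 = 0.0
    for _ in range(iters):
        q1 = max(0.0, (a - c1 - b * q2) / (2 * b))  # firm 1 best response
        q2 = max(0.0, (a - c2 - b * q1) / (2 * b))  # firm 2 best response
    return q1, q2

q1, q2 = cournot_equilibrium(a=100.0, b=1.0, c1=10.0, c2=20.0)
```

The full models replace this scalar iteration with a mixed LCP: the stacked first-order conditions of all firms, the transmission constraints, and the arbitrageur's no-profit conditions are solved simultaneously over the network.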

  12. Large tau and tau neutrino electric dipole moments in models with vectorlike multiplets

    International Nuclear Information System (INIS)

    Ibrahim, Tarek; Nath, Pran

    2010-01-01

    It is shown that an electric dipole moment of the τ lepton several orders of magnitude larger than the standard-model prediction can be generated from mixings in models with vectorlike multiplets. The electric dipole moment (EDM) of the τ lepton arises from loops involving the exchange of the W, the charginos, the neutralinos, the sleptons, the mirror leptons, and the mirror sleptons. The EDM of the Dirac τ neutrino is also computed from loops involving the exchange of the W, the charginos, the mirror leptons, and the mirror sleptons. A numerical analysis is presented, and it is shown that EDMs of the τ lepton and the τ neutrino lying just a couple of orders of magnitude below the sensitivity of the current experiment can be achieved. Thus the predictions of the model are testable in an improved experiment on the EDM of the τ and the τ neutrino.

  13. Microfluidic very large scale integration (VLSI) modeling, simulation, testing, compilation and physical synthesis

    CERN Document Server

    Pop, Paul; Madsen, Jan

    2016-01-01

    This book presents the state-of-the-art techniques for the modeling, simulation, testing, compilation and physical synthesis of mVLSI biochips. The authors describe a top-down modeling and synthesis methodology for mVLSI biochips, inspired by microelectronics VLSI methodologies. They introduce a modeling framework for the components and the biochip architecture, and a high-level microfluidic protocol language. Coverage includes a topology graph-based model for the biochip architecture, and a sequencing graph to model the biochemical application, showing how the application model can be obtained from the protocol language. The techniques described facilitate programmability and automation, enabling developers in the emerging, large biochip market. · Presents the current models used for the research on compilation and synthesis techniques of mVLSI biochips in a tutorial fashion; · Includes a set of "benchmarks" that are presented in great detail and includes the source code of several of the techniques p...

  14. Large eddy simulation of spanwise rotating turbulent channel flow with dynamic variants of eddy viscosity model

    Science.gov (United States)

    Jiang, Zhou; Xia, Zhenhua; Shi, Yipeng; Chen, Shiyi

    2018-04-01

    A fully developed spanwise rotating turbulent channel flow has been numerically investigated utilizing large-eddy simulation. Our focus is to assess the performance of the dynamic variants of eddy viscosity models, including the dynamic Vreman's model (DVM), the dynamic wall adapting local eddy viscosity (DWALE) model, the dynamic σ (Dσ) model, and the dynamic volumetric strain-stretching (DVSS) model, in this canonical flow. The results with the dynamic Smagorinsky model (DSM) and direct numerical simulations (DNS) are used as references. Our results show that the DVM has an incorrect asymptotic behavior in the near-wall region, while the other three models predict it correctly. In the high rotation case, the DWALE model yields a reliable mean velocity profile, but the turbulence intensities in the wall-normal and spanwise directions show clear deviations from the DNS data. The DVSS model predicts both the mean velocity profile and the turbulence intensities poorly. In all three cases, Dσ performs the best.

  15. Impact of resilience and job involvement on turnover intention of new graduate nurses using structural equation modeling.

    Science.gov (United States)

    Yu, Mi; Lee, Haeyoung

    2018-03-06

    Nurses' turnover intention is not just a result of their maladjustment to the field; it is an organizational issue. This study aimed to construct a structural model to verify the effects of new graduate nurses' work environment satisfaction, emotional labor, and burnout on their turnover intention, with consideration of resilience and job involvement, and to test the adequacy of the developed model. A cross-sectional study and a structural equation modelling approach were used. A nationwide survey was conducted of 371 new nurses who were working in hospitals for ≤18 months between July and October, 2014. The final model accounted for 40% of the variance in turnover intention. Emotional labor and burnout had a significant positive direct effect and an indirect effect on nurses' turnover intention. Resilience had a positive direct effect on job involvement. Job involvement had a negative direct effect on turnover intention. Resilience and job involvement mediated the effect of work environment satisfaction, emotional labor, and burnout on turnover intention. It is important to strengthen new graduate nurses' resilience in order to increase their job involvement and to reduce their turnover intention. © 2018 Japan Academy of Nursing Science.

  16. The Large Office Environment - Measurement and Modeling of the Wideband Radio Channel

    DEFF Research Database (Denmark)

    Andersen, Jørgen Bach; Nielsen, Jesper Ødum; Bauch, Gerhard

    2006-01-01

    In a future 4G or WLAN wideband application we can imagine multiple users in a large office environment consisting of a single room with partitions. Up to now, indoor radio channel measurement and modelling has mainly concentrated on scenarios with several office rooms and corridors. We present here measurements at 5.8 GHz with 100 MHz bandwidth and a novel modelling approach for the wideband radio channel in a large office room environment. An acoustic-like reverberation theory is proposed that allows a tapped delay line model to be specified just from the room dimensions and an average … calculated from the measurements. The proposed model can likely also be applied to indoor hot spot scenarios.

  17. Large deformation analysis of adhesive by Eulerian method with new material model

    International Nuclear Information System (INIS)

    Maeda, K; Nishiguchi, K; Iwamoto, T; Okazawa, S

    2010-01-01

    A material model describing the large deformation of a pressure-sensitive adhesive (PSA) is presented. The relationship between stress and strain of a PSA involves both viscoelasticity and rubber-elasticity. We therefore propose a material model describing viscoelasticity and rubber-elasticity, and extend it to rate form for three-dimensional finite element analysis. After proposing the material model for the PSA, we formulate the Eulerian method to simulate large deformation behavior. In the Eulerian calculation, the Piecewise Linear Interface Calculation (PLIC) method is employed for capturing the material surface. By using the PLIC method, we can impose dynamic and kinematic boundary conditions on the captured material surface. Two representative computational examples are calculated to check the validity of the present methods.
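As a much simpler reference point for the viscoelastic ingredient of such a model, consider a single Maxwell element under a step strain (a textbook toy, not the paper's combined viscoelastic/rubber-elastic formulation; the function name and parameter values are invented). Its stress relaxes as σ(t) = E·ε₀·e^(−t/τ):

```python
import math

def maxwell_stress_relaxation(e0, E, tau, times):
    """Stress history of one Maxwell element (spring E in series with a
    dashpot of relaxation time tau) after a step strain e0:
    sigma(t) = E * e0 * exp(-t / tau)."""
    return [E * e0 * math.exp(-t / tau) for t in times]

# step strain of 1% on a 1 MPa element with a 2 s relaxation time
stresses = maxwell_stress_relaxation(e0=0.01, E=1.0e6, tau=2.0, times=[0.0, 2.0])
# at t = 0 the stress is E*e0 = 10 kPa; by t = tau it has decayed by a factor 1/e
```

Real PSA models combine several such elements with a rubber-elastic (strain-hardening) branch, which is what the rate-form extension in the abstract accomplishes for 3-D analysis.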

  18. The large-scale peculiar velocity field in flat models of the universe

    International Nuclear Information System (INIS)

    Vittorio, N.; Turner, M.S.

    1986-10-01

    The inflationary Universe scenario predicts a flat Universe and both adiabatic and isocurvature primordial density perturbations with the Zel'dovich spectrum. The two simplest realizations, models dominated by hot or cold dark matter, seem to be in conflict with observations. Flat models are examined with two components of mass density, where one of the components is smoothly distributed, and the large-scale (≥ 10 h⁻¹ Mpc) peculiar velocity field for these models is considered. For the smooth component, relativistic particles, a relic cosmological term, and light strings are considered. At present the observational situation is unsettled; but, in principle, the large-scale peculiar velocity field is a very powerful discriminator between these different models. 61 refs

  19. Bilevel Traffic Evacuation Model and Algorithm Design for Large-Scale Activities

    Directory of Open Access Journals (Sweden)

    Danwen Bao

    2017-01-01

    Full Text Available This paper establishes a bilevel planning model with one master and multiple slaves to solve traffic evacuation problems. The minimum evacuation network saturation and shortest evacuation time are used as the objective functions for the upper- and lower-level models, respectively. The optimizing conditions of this model are also analyzed. An improved particle swarm optimization (PSO method is proposed by introducing an electromagnetism-like mechanism to solve the bilevel model and enhance its convergence efficiency. A case study is carried out using the Nanjing Olympic Sports Center. The results indicate that, for large-scale activities, the average evacuation time of the classic model is shorter but the road saturation distribution is more uneven. Thus, the overall evacuation efficiency of the network is not high. For induced emergencies, the evacuation time of the bilevel planning model is shortened. When the audience arrival rate is increased from 50% to 100%, the evacuation time is shortened from 22% to 35%, indicating that the optimization effect of the bilevel planning model is more effective compared to the classic model. Therefore, the model and algorithm presented in this paper can provide a theoretical basis for the traffic-induced evacuation decision making of large-scale activities.
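The improved PSO with an electromagnetism-like mechanism is specific to this paper, but the baseline particle swarm update it builds on is standard and small enough to sketch (plain PSO on a toy objective; the function name and all parameters are illustrative, and the electromagnetism-like term is omitted):

```python
import random

def pso_minimize(f, dim, bounds, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain particle swarm optimization over the box bounds^dim.
    Velocity update: inertia + pull toward the personal best + pull
    toward the global best (no electromagnetism-like mechanism)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# minimize the sphere function; the optimum is at the origin
best, best_val = pso_minimize(lambda x: sum(v * v for v in x),
                              dim=2, bounds=(-5.0, 5.0))
```

Each particle is pulled toward both its own best-seen position and the swarm's best; the paper's modification adds an attraction/repulsion term among particles to improve convergence on the bilevel evacuation model.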

  20. Phase-field-based lattice Boltzmann modeling of large-density-ratio two-phase flows

    Science.gov (United States)

    Liang, Hong; Xu, Jiangrong; Chen, Jiangxing; Wang, Huili; Chai, Zhenhua; Shi, Baochang

    2018-03-01

    In this paper, we present a simple and accurate lattice Boltzmann (LB) model for immiscible two-phase flows, which is able to deal with large density contrasts. This model utilizes two LB equations, one of which is used to solve the conservative Allen-Cahn equation, and the other is adopted to solve the incompressible Navier-Stokes equations. A forcing distribution function is elaborately designed in the LB equation for the Navier-Stokes equations, which makes it much simpler than the existing LB models. In addition, the proposed model can achieve superior numerical accuracy compared with previous Allen-Cahn types of LB models. Several benchmark two-phase problems, including static droplet, layered Poiseuille flow, and spinodal decomposition, are simulated to validate the present LB model. It is found that the present model achieves relatively small spurious velocities compared with other LB models, and the obtained numerical results also show good agreement with the analytical solutions or available reference results. Lastly, we use the present model to investigate droplet impact on a thin liquid film with a large density ratio of 1000 and Reynolds numbers ranging from 20 to 500. The fascinating phenomenon of droplet splashing is successfully reproduced by the present model, and the numerically predicted spreading radius is found to obey the power law reported in the literature.

  1. Parameterization of a Hydrological Model for a Large, Ungauged Urban Catchment

    Directory of Open Access Journals (Sweden)

    Gerald Krebs

    2016-10-01

    Urbanization leads to the replacement of natural areas by impervious surfaces and affects the catchment hydrological cycle with adverse environmental impacts. Low impact development (LID) tools that mimic the hydrological processes of natural areas have been developed and applied to mitigate these impacts. Hydrological simulation is one way to evaluate LID performance, but the associated small-scale processes require a highly spatially distributed and explicit modeling approach. However, detailed data for model development are often not available for large urban areas, hampering model parameterization. In this paper we propose a methodology to parameterize a hydrological model for a large, ungauged urban area while maintaining both a detailed surface discretization, allowing direct parameter manipulation for LID simulation, and a firm reliance on available data for model conceptualization. Catchment delineation was based on a high-resolution digital elevation model (DEM), and model parameterization relied on a novel model regionalization approach. The impact of automated delineation and model regionalization on simulation results was evaluated for three monitored study catchments (5.87–12.59 ha). The simulated runoff peak was most sensitive to accurate catchment discretization and calibration, while both the runoff volume and the fit of the hydrograph were less affected.

  2. Large scale structures in the kinetic gravity braiding model that can be unbraided

    International Nuclear Information System (INIS)

    Kimura, Rampei; Yamamoto, Kazuhiro

    2011-01-01

    We study the cosmological consequences of a kinetic gravity braiding model, which is proposed as an alternative to the dark energy model. The kinetic braiding model we study is characterized by a parameter n, which corresponds to the original galileon cosmological model for n = 1. We find that the background expansion of the universe in the kinetic braiding model is the same as that of the Dvali-Turner model, which reduces to that of the standard cold dark matter model with a cosmological constant (ΛCDM model) for n equal to infinity. We also find that the evolution of the linear cosmological perturbation in the kinetic braiding model reduces to that of the ΛCDM model for n = ∞. We then focus our study on the growth history of the linear density perturbation, as well as on spherical collapse in the nonlinear regime of the density perturbations, which might be important for distinguishing between the kinetic braiding model and the ΛCDM model when n is finite. The theoretical prediction for the large scale structure is confronted with the multipole power spectrum of the luminous red galaxy sample of the Sloan Digital Sky Survey. We also discuss future prospects for constraining the kinetic braiding model using a future redshift survey like the WFMOS/SuMIRe PFS survey, as well as the cluster redshift distribution in the South Pole Telescope survey.

  3. Scale breaking effects in the quark-parton model for large p⊥ phenomena

    International Nuclear Information System (INIS)

    Baier, R.; Petersson, B.

    1977-01-01

    We discuss how the scaling violations suggested by an asymptotically free parton model, i.e., the Q²-dependence of the transverse momentum of partons within hadrons, may affect the parton model description of large p⊥ phenomena. We show that such a mechanism can provide an explanation for the magnitude of the opposite-side correlations and their dependence on the trigger momentum. (author)

  4. RELAP5 choked flow model and application to a large scale flow test

    International Nuclear Information System (INIS)

    Ransom, V.H.; Trapp, J.A.

    1980-01-01

    The RELAP5 code was used to simulate a large scale choked flow test. The fluid system used in the test was modeled in RELAP5 using a uniform, but coarse, nodalization. The choked mass discharge rate was calculated using the RELAP5 choked flow model. The calculations were in good agreement with the test data, and the flow was calculated to be near thermal equilibrium.

  5. Modifying a dynamic global vegetation model for simulating large spatial scale land surface water balance

    Science.gov (United States)

    Tang, G.; Bartlein, P. J.

    2012-01-01

    Water balance models of simple structure are easier to grasp and more clearly connect cause and effect than models of complex structure. Such models are essential for studying large spatial scale land surface water balance in the context of climate and land cover change, both natural and anthropogenic. This study aims to (i) develop a large spatial scale water balance model by modifying a dynamic global vegetation model (DGVM), and (ii) test the model's performance in simulating actual evapotranspiration (ET), soil moisture and surface runoff for the coterminous United States (US). Toward these ends, we first introduce the "LPJ-Hydrology" (LH) model, developed by incorporating satellite-based land covers into the Lund-Potsdam-Jena (LPJ) DGVM instead of simulating them dynamically. We then ran LH using historical (1982-2006) climate data and satellite-based land covers at 2.5 arc-min grid cells. The simulated ET, soil moisture and surface runoff were compared to existing sets of observed or simulated data for the US. The results indicated that LH captures well the variations of monthly actual ET (R² = 0.61), soil moisture (R² > 0.46) and surface runoff (R² > 0.52) relative to observed values over the years 1982-2006. The modeled spatial patterns of annual ET and surface runoff are in accordance with previously published data. Compared to its predecessor, LH simulates monthly stream flow in winter and early spring better by incorporating the effects of solar radiation on snowmelt. Overall, this study demonstrates the feasibility of incorporating satellite-based land covers into a DGVM for simulating large spatial scale land surface water balance. LH developed in this study should be a useful tool for studying the effects of climate and land cover change on land surface hydrology at large spatial scales.
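A full DGVM-based water balance is far beyond a snippet, but the conceptual core shared by simple-structure water balance models can be shown with a single-bucket scheme (a hypothetical toy, not LPJ-Hydrology: evapotranspiration limited by available water, storage overflow becoming runoff; the function name and all values are invented):

```python
def bucket_water_balance(precip, pet, capacity=150.0, s0=75.0):
    """Toy monthly water balance for one soil 'bucket' (mm units).
    Actual ET is the smaller of potential ET and available water;
    storage above capacity leaves the cell as runoff."""
    storage = s0
    aet_series, runoff_series = [], []
    for p, pet_m in zip(precip, pet):
        water = storage + p
        aet = min(pet_m, water)              # supply-limited evapotranspiration
        water -= aet
        runoff = max(0.0, water - capacity)  # overflow becomes surface runoff
        storage = water - runoff
        aet_series.append(aet)
        runoff_series.append(runoff)
    return aet_series, runoff_series

# three months: wet, average, dry (mm/month)
aet, runoff = bucket_water_balance(precip=[200, 20, 0], pet=[50, 60, 80])
# → aet = [50, 60, 80], runoff = [75.0, 0.0, 0.0]
```

A DGVM-based model replaces this single bucket with gridded soil layers, vegetation-dependent ET and snowmelt, but the per-cell bookkeeping is of this kind.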

  6. Development of a transverse mixing model for large scale impulsion phenomenon in tight lattice

    International Nuclear Information System (INIS)

    Liu, Xiaojing; Ren, Shuo; Cheng, Xu

    2017-01-01

    Highlights: • Experiment data of Krauss are used to validate the feasibility of the CFD simulation method. • CFD simulation is performed to simulate the large scale impulsion phenomenon for a tight-lattice bundle. • A mixing model to simulate the large scale impulsion phenomenon is proposed based on fitting of CFD results. • The newly developed mixing model has been added to the subchannel code. - Abstract: Tight lattices are widely adopted in innovative reactor fuel bundle designs since they can increase the conversion ratio and improve the heat transfer between fuel bundles and coolant. It has been noticed that a large scale impulsion of cross-velocity exists in the gap region, which plays an important role in transverse mixing flow and heat transfer. Although many experiments and numerical simulations have been carried out to study the impulsion of velocity, a model describing the wavelength, amplitude and frequency of the mixing coefficient is still missing. This work uses the CFD method to simulate the experiment of Krauss and compares the experimental data with the simulation results in order to demonstrate the feasibility of the simulation method and turbulence model. Then, based on this verified method and model, several simulations are performed with different Reynolds numbers and different pitch-to-diameter ratios. By fitting the CFD results, a mixing model to simulate the large scale impulsion phenomenon is proposed and adopted in the current subchannel code. The new mixing model is applied to fuel assembly analysis by subchannel calculation; it is found that the newly developed mixing model reduces the hot channel factor and contributes to a uniform distribution of outlet temperature.

  7. DMPy: a Python package for automated mathematical model construction of large-scale metabolic systems.

    Science.gov (United States)

    Smith, Robert W; van Rosmalen, Rik P; Martins Dos Santos, Vitor A P; Fleck, Christian

    2018-06-19

    Models of metabolism are often used in biotechnology and pharmaceutical research to identify drug targets or increase the direct production of valuable compounds. Due to the complexity of large metabolic systems, a number of conclusions have been drawn using mathematical methods with simplifying assumptions. For example, constraint-based models describe changes of internal concentrations that occur much quicker than alterations in cell physiology. Thus, metabolite concentrations and reaction fluxes are fixed to constant values. This greatly reduces the mathematical complexity, while providing a reasonably good description of the system in steady state. However, without a large number of constraints, many different flux sets can describe the optimal model and we obtain no information on how metabolite levels dynamically change. Thus, to accurately determine what is taking place within the cell, finer quality data and more detailed models need to be constructed. In this paper we present a computational framework, DMPy, that uses a network scheme as input to automatically search for kinetic rates and produce a mathematical model that describes temporal changes of metabolite fluxes. The parameter search utilises several online databases to find measured reaction parameters. From this, we take advantage of previous modelling efforts, such as Parameter Balancing, to produce an initial mathematical model of a metabolic pathway. We analyse the effect of parameter uncertainty on model dynamics and test how recent flux-based model reduction techniques alter system properties. To our knowledge this is the first time such analysis has been performed on large models of metabolism. Our results highlight that good estimates of at least 80% of the reaction rates are required to accurately model metabolic systems. Furthermore, reducing the size of the model by grouping reactions together based on fluxes alters the resulting system dynamics. The presented pipeline automates the

  8. Large-scale 3-D modeling by integration of resistivity models and borehole data through inversion

    DEFF Research Database (Denmark)

    Foged, N.; Marker, Pernille Aabye; Christiansen, A. V.

    2014-01-01

    resistivity and the clay fraction. Through inversion we use the lithological data and the resistivity data to determine the optimum spatially distributed translator function. Applying the translator function we get a 3-D clay fraction model, which holds information from the resistivity data set … and the borehole data set in one variable. Finally, we use k-means clustering to generate a 3-D model of the subsurface structures. We apply the procedure to the Norsminde survey in Denmark, integrating approximately 700 boreholes and more than 100 000 resistivity models from an airborne survey …
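The k-means step in this workflow clusters every cell of the clay-fraction model into a few structural units. The assign-then-update iteration can be sketched in one dimension (a toy stand-in with invented resistivity values, not the survey's 3-D implementation):

```python
def kmeans_1d(values, k=2, iters=50):
    """Tiny 1-D k-means: assign each value to its nearest center, then
    move each center to the mean of its cluster, until convergence."""
    # crude initialization: evenly spaced picks from the sorted values
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            i = min(range(len(centers)), key=lambda j: abs(v - centers[j]))
            clusters[i].append(v)
        new = [sum(c) / len(c) if c else centers[i]
               for i, c in enumerate(clusters)]
        if new == centers:
            break
        centers = new
    return centers

# resistivities (ohm-m): low values ~ clay-rich units, high values ~ sandy units
centers = kmeans_1d([8, 10, 12, 90, 100, 110], k=2)
# → converges to cluster means [10.0, 100.0]
```

In the actual procedure each model cell contributes a feature vector rather than a scalar, but the iteration that partitions the subsurface into structural classes is the same.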

  9. Imaging the Chicxulub central crater zone from large scale seismic acoustic wave propagation and gravity modeling

    Science.gov (United States)

    Fucugauchi, J. U.; Ortiz-Aleman, C.; Martin, R.

    2017-12-01

    Large complex craters are characterized by central uplifts that represent large-scale differential movement of deep basement from the transient cavity. Here we investigate the central sector of the large multiring Chicxulub crater, which has been surveyed by an array of marine, aerial and land-borne geophysical methods. Despite high contrasts in physical properties, contrasting results for the central uplift have been obtained, with seismic reflection surveys showing lack of resolution in the central zone. We develop an integrated seismic and gravity model for the main structural elements, imaging the central basement uplift and melt and breccia units. The 3-D velocity model built from interpolation of seismic data is validated using perfectly matched layer seismic acoustic wave propagation modeling, optimized at grazing incidence using shift in the frequency domain. Modeling shows significant lack of illumination in the central sector, masking presence of the central uplift. Seismic energy remains trapped in an upper low velocity zone corresponding to the sedimentary infill, melt/breccias and surrounding faulted blocks. After conversion of seismic velocities into a volume of density values, we use massive parallel forward gravity modeling to constrain the size and shape of the central uplift that lies at 4.5 km depth, providing a high-resolution image of crater structure. The Bouguer anomaly and gravity response of modeled units show asymmetries, corresponding to the crater structure and distribution of post-impact carbonates, breccias, melt and target sediments.

  10. Absorption and scattering coefficient dependence of laser-Doppler flowmetry models for large tissue volumes

    International Nuclear Information System (INIS)

    Binzoni, T; Leung, T S; Ruefenacht, D; Delpy, D T

    2006-01-01

    Based on quasi-elastic scattering theory (and random walk on a lattice approach), a model of laser-Doppler flowmetry (LDF) has been derived which can be applied to measurements in large tissue volumes (e.g. when the interoptode distance is >30 mm). The model holds for a semi-infinite medium and takes into account the transport-corrected scattering coefficient and the absorption coefficient of the tissue, and the scattering coefficient of the red blood cells. The model holds for anisotropic scattering and for multiple scattering of the photons by the moving scatterers of finite size. In particular, it has also been possible to take into account the simultaneous presence of both Brownian and pure translational movements. An analytical and simplified version of the model has also been derived and its validity investigated, for the case of measurements in human skeletal muscle tissue. It is shown that at large optode spacing it is possible to use the simplified model, taking into account only a 'mean' light pathlength, to predict the blood flow related parameters. It is also demonstrated that the 'classical' blood volume parameter, derived from LDF instruments, may not represent the actual blood volume variations when the investigated tissue volume is large. The simplified model does not need knowledge of the tissue optical parameters and thus should allow the development of very simple and cost-effective LDF hardware

  11. Two-Dimensional Physical and CFD Modelling of Large Gas Bubble Behaviour in Bath Smelting Furnaces

    Directory of Open Access Journals (Sweden)

    Yuhua Pan

    2010-09-01

    The behaviour of large gas bubbles in a liquid bath and the mechanisms of splash generation due to gas bubble rupture in high-intensity bath smelting furnaces were investigated by means of physical and mathematical (CFD) modelling techniques. In the physical modelling work, a two-dimensional Perspex model of the pilot plant furnace at CSIRO Process Science and Engineering was established in the laboratory. An aqueous glycerol solution was used to simulate liquid slag. Air was injected via a submerged lance into the liquid bath and the bubble behaviour and the resultant splashing phenomena were observed and recorded with a high-speed video camera. In the mathematical modelling work, a two-dimensional CFD model was developed to simulate the free surface flows due to motion and deformation of large gas bubbles in the liquid bath and rupture of the bubbles at the bath free surface. It was concluded from these modelling investigations that the splashes generated in high-intensity bath smelting furnaces are mainly caused by the rupture of fast rising large gas bubbles. The acceleration of the bubbles into the preceding bubbles and the rupture of the coalescent bubbles at the bath surface contribute significantly to splash generation.

  12. 5D Modelling: An Efficient Approach for Creating Spatiotemporal Predictive 3D Maps of Large-Scale Cultural Resources

    Science.gov (United States)

    Doulamis, A.; Doulamis, N.; Ioannidis, C.; Chrysouli, C.; Grammalidis, N.; Dimitropoulos, K.; Potsiou, C.; Stathopoulou, E.-K.; Ioannides, M.

    2015-08-01

    Outdoor large-scale cultural sites are mostly sensitive to environmental, natural and human-made factors, implying an imminent need for a spatio-temporal assessment to identify regions of potential cultural interest (material degradation, structuring, conservation). On the other hand, quite different actors are involved in Cultural Heritage research (archaeologists, curators, conservators, simple users), each with diverse needs. All these statements advocate that 5D modelling (3D geometry plus time plus levels of detail) is ideally required for the preservation and assessment of outdoor large-scale cultural sites, which is currently implemented as a simple aggregation of 3D digital models at different times and levels of detail. The main bottleneck of such an approach is its complexity, which makes 5D modelling impossible to validate in real-life conditions. In this paper, a cost-effective and affordable framework for 5D modelling is proposed, based on a spatial-temporal dependent aggregation of 3D digital models that incorporates a predictive assessment procedure to indicate which regions (surfaces) of an object should be reconstructed at higher levels of detail at the next time instances and which at lower ones. In this way, dynamic change history maps are created, indicating spatial probabilities of regions needing further 3D modelling at forthcoming instances. Using these maps, a predictive assessment can be made, that is, surfaces can be localized within the objects where a high-accuracy reconstruction process needs to be activated at the forthcoming time instances. The proposed 5D Digital Cultural Heritage Model (5D-DCHM) is implemented using open interoperable standards based on the CityGML framework, which also allows the description of additional semantic metadata information. Visualization aspects are also supported to allow easy manipulation, interaction and representation of the 5D-DCHM geometry and the respective semantic information. The open source 3DCity

  13. Zone modelling of the thermal performances of a large-scale bloom reheating furnace

    International Nuclear Information System (INIS)

    Tan, Chee-Keong; Jenkins, Joana; Ward, John; Broughton, Jonathan; Heeley, Andy

    2013-01-01

    This paper describes the development and comparison of two-dimensional (2D) and three-dimensional (3D) mathematical models, based on the zone method of radiation analysis, to simulate the thermal performance of a large bloom reheating furnace. The modelling approach adopted in the current paper differs from previous work since it takes into account the net radiation interchanges between the top and bottom firing sections of the furnace and also allows for enthalpy exchange due to the flows of combustion products between these sections. The models were initially validated at two different furnace throughput rates using experimental and plant model data supplied by Tata Steel. The results to date demonstrate that the model predictions are in good agreement with measured heating profiles of the blooms encountered in the actual furnace. No significant differences were found between the predictions from the 2D and 3D models. Following the validation, the 2D model was then used to assess the furnace response to changing throughput rate. It was found that the furnace response to a changing throughput rate influences the settling time of the furnace to the next steady state operation. Overall, the current work demonstrates the feasibility and practicality of zone modelling and its potential for incorporation into a model-based furnace control system. - Highlights: ► 2D and 3D zone models of a large-scale bloom reheating furnace. ► The models were validated with experimental and plant model data. ► Examines the transient furnace response to changing furnace throughput rates. ► No significant differences found between the predictions from the 2D and 3D models.

  14. Full-Scale Approximations of Spatio-Temporal Covariance Models for Large Datasets

    KAUST Repository

    Zhang, Bohai; Sang, Huiyan; Huang, Jianhua Z.

    2014-01-01

    of dataset and application of such models is not feasible for large datasets. This article extends the full-scale approximation (FSA) approach by Sang and Huang (2012) to the spatio-temporal context to reduce computational complexity. A reversible jump Markov

  15. A large deviations approach to the transient of the Erlang loss model

    NARCIS (Netherlands)

    Mandjes, M.R.H.; Ridder, Annemarie

    2001-01-01

    This paper deals with the transient behavior of the Erlang loss model. After scaling both arrival rate and number of trunks, an asymptotic analysis of the blocking probability is given. Apart from that, the most likely path to blocking is given. Compared to Shwartz and Weiss [Large Deviations for
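
For reference, the steady-state blocking probability of the Erlang loss model is given by the Erlang B formula, computable with a standard numerically stable recursion. This is background to the record above (which analyzes the transient regime via large deviations), not the paper's method itself.

```python
def erlang_b(offered_load, trunks):
    """Erlang B blocking probability via the stable recursion:
    B(0) = 1;  B(n) = A*B(n-1) / (n + A*B(n-1)),
    where A = lambda/mu is the offered load in erlangs."""
    b = 1.0
    for n in range(1, trunks + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# Example: 2 erlangs offered to 2 trunks blocks 40% of arrivals
p_block = erlang_b(2.0, 2)
```

The recursion avoids the overflow-prone factorials of the closed-form expression, which matters once the number of trunks is scaled up as in the asymptotic analysis above.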

  16. Modeling very large-fire occurrences over the continental United States from weather and climate forcing

    Science.gov (United States)

    R Barbero; J T Abatzoglou; E A Steel

    2014-01-01

    Very large-fires (VLFs) have widespread impacts on ecosystems, air quality, fire suppression resources, and in many regions account for a majority of total area burned. Empirical generalized linear models of the largest fires (>5000 ha) across the contiguous United States (US) were developed at ~60 km spatial and weekly temporal resolutions using solely atmospheric...

  17. Modeling of Ammonia Dry Deposition to a Pocosin Landscape Downwind of a Large Poultry Facility

    Science.gov (United States)

    A semi-empirical bi-directional flux modeling approach is used to estimate NH3 air concentrations and dry deposition fluxes to a portion of the Pocosin Lakes National Wildlife Refuge (PLNWR) downwind of a large poultry facility. Meteorological patterns at PLNWR are such that som...

  18. Large Deviations for the Annealed Ising Model on Inhomogeneous Random Graphs: Spins and Degrees

    Science.gov (United States)

    Dommers, Sander; Giardinà, Cristian; Giberti, Claudio; Hofstad, Remco van der

    2018-04-01

    We prove a large deviations principle for the total spin and the number of edges under the annealed Ising measure on generalized random graphs. We also give detailed results on how the annealing over the Ising model changes the degrees of the vertices in the graph and show how it gives rise to interesting correlated random graphs.

  19. Mixed-signal instrumentation for large-signal device characterization and modelling

    NARCIS (Netherlands)

    Marchetti, M.

    2013-01-01

    This thesis concentrates on the development of advanced large-signal measurement and characterization tools to support technology development, model extraction and validation, and power amplifier (PA) designs that address the newly introduced third and fourth generation (3G and 4G) wideband

  20. Model Predictive Control for Flexible Power Consumption of Large-Scale Refrigeration Systems

    DEFF Research Database (Denmark)

    Shafiei, Seyed Ehsan; Stoustrup, Jakob; Rasmussen, Henrik

    2014-01-01

    A model predictive control (MPC) scheme is introduced to directly control the electrical power consumption of large-scale refrigeration systems. Deviations of the consumption from its baseline correspond to the storing and delivering of thermal energy. By virtue of such correspondence...

  1. Model of large scale man-machine systems with an application to vessel traffic control

    NARCIS (Netherlands)

    Wewerinke, P.H.; van der Ent, W.I.; ten Hove, D.

    1989-01-01

    Mathematical models are discussed to deal with complex large-scale man-machine systems such as vessel (air, road) traffic and process control systems. Only interrelationships between subsystems are assumed. Each subsystem is controlled by a corresponding human operator (HO). Because of the

  2. The Impact of a Flipped Classroom Model of Learning on a Large Undergraduate Statistics Class

    Science.gov (United States)

    Nielson, Perpetua Lynne; Bean, Nathan William; Larsen, Ross Allen Andrew

    2018-01-01

    We examine the impact of a flipped classroom model of learning on student performance and satisfaction in a large undergraduate introductory statistics class. Two professors each taught a lecture-section and a flipped-class section. Using MANCOVA, a linear combination of final exam scores, average quiz scores, and course ratings was compared for…

  3. Small- and large-signal modeling of InP HBTs in transferred-substrate technology

    DEFF Research Database (Denmark)

    Johansen, Tom Keinicke; Rudolph, Matthias; Jensen, Thomas

    2014-01-01

    In this paper, the small- and large-signal modeling of InP heterojunction bipolar transistors (HBTs) in transferred substrate (TS) technology is investigated. The small-signal equivalent circuit parameters for TS-HBTs in two-terminal and three-terminal configurations are determined by employing...

  4. The use of soil moisture - remote sensing products for large-scale groundwater modeling and assessment

    NARCIS (Netherlands)

    Sutanudjaja, E.H.

    2012-01-01

    In this thesis, the possibilities of using spaceborne remote sensing for large-scale groundwater modeling are explored. We focus on a soil moisture product called European Remote Sensing Soil Water Index (ERS SWI, Wagner et al., 1999) - representing the upper profile soil moisture. As a test-bed, we

  5. Modelling Morphological Response of Large Tidal Inlet Systems to Sea Level Rise

    NARCIS (Netherlands)

    Dissanayake, P.K.

    2011-01-01

    This dissertation qualitatively investigates the morphodynamic response of a large inlet system to the IPCC-projected relative sea level rise (RSLR). The adopted numerical approach (Delft3D) used a highly schematised model domain analogous to the Ameland inlet in the Dutch Wadden Sea. Predicted inlet

  6. Software engineering the mixed model for genome-wide association studies on large samples

    Science.gov (United States)

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample siz...

  7. A large-scale multi-species spatial depletion model for overwintering waterfowl

    NARCIS (Netherlands)

    Baveco, J.M.; Kuipers, H.; Nolet, B.A.

    2011-01-01

    In this paper, we develop a model to evaluate the capacity of accommodation areas for overwintering waterfowl, at a large spatial scale. Each day geese are distributed over roosting sites. Based on the energy minimization principle, the birds daily decide which surrounding fields to exploit within

  8. Large-order behavior of nondecoupling effects in the standard model and triviality

    International Nuclear Information System (INIS)

    Aoki, K.

    1994-01-01

    We compute some nondecoupling effects in the standard model, such as the ρ parameter, to all orders in the coupling constant expansion. We analyze their large order behavior and explicitly show how they are related to the nonperturbative cutoff dependence of these nondecoupling effects due to the triviality of the theory

  9. The Oncopig Cancer Model: An Innovative Large Animal Translational Oncology Platform

    DEFF Research Database (Denmark)

    Schachtschneider, Kyle M.; Schwind, Regina M.; Newson, Jordan

    2017-01-01

    -the Oncopig Cancer Model (OCM)-as a next-generation large animal platform for the study of hematologic and solid tumor oncology. With mutations in key tumor suppressor and oncogenes, TP53R167H and KRASG12D , the OCM recapitulates transcriptional hallmarks of human disease while also exhibiting clinically...

  10. Analysis and Design Environment for Large Scale System Models and Collaborative Model Development, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — As NASA modeling efforts grow more complex and more distributed among many working groups, new tools and technologies are required to integrate their efforts...

  11. Atlantis model outputs - Developing end-to-end models of the California Current Large Marine Ecosystem

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The purpose of this project is to develop spatially discrete end-to-end models of the California Current LME, linking oceanography, biogeochemistry, food web...

  12. SMILE: experimental results of the WP4 PTS large scale test performed on a component in terms of cracked cylinder involving warm pre-stress

    International Nuclear Information System (INIS)

    Kerkhof, K.; Bezdikian, G.; Moinereau, D.; Dahl, A.; Wadier, Y.; Gilles, P.; Keim, E.; Chapuliot, S.; Taylor, N.; Lidbury, D.; Sharples, J.; Budden, P.; Siegele, D.; Nagel, G.; Bass, R.; Emond, D.

    2005-01-01

    The Reactor Pressure Vessel (RPV) is an essential component, which is liable to limit the lifetime duration of PWR plants. The assessment of defects in RPVs subjected to pressurized thermal shock (PTS) transients made at a European level generally does not consider the beneficial effect of the load history (warm pre-stress, WPS). The SMILE project (Structural Margin Improvements in aged embrittled RPV with Load history Effects) aims to give sufficient elements to demonstrate, model, and validate the beneficial WPS effect. It also aims to harmonize the different approaches in the national codes and standards regarding the inclusion of the WPS effect in an RPV structural integrity assessment. The project includes significant experimental work on WPS type experiments with C(T) specimens and a PTS type transient experiment on a large component. This paper deals with the results of the PTS type transient experiment on a component-like specimen subjected to WPS loading, the so-called Validation Test, carried out within the framework of work package WP4. The test specimen is a cylindrical thick-walled specimen with a thickness of 40 mm and an outer diameter of 160 mm, provided with an internal fully circumferential crack with a depth of about 15 mm. The specified load path type is Load-Cool-Unload-Fracture (LCUF). No crack initiation occurred during cooling (thermal shock loading), although the loading path crossed the fracture toughness curve in the transition region. The benefit of the WPS effect was shown clearly by final re-loading up to fracture in the lower shelf region: the corresponding fracture load during reloading was significantly higher than the crack initiation values of the original material in the lower shelf region. The post-test fractographic evaluation showed that the fracture mode was predominantly cleavage fracture, with some secondary cracks emanating from the major crack. (authors)

  13. Identifiability in N-mixture models: a large-scale screening test with bird data.

    Science.gov (United States)

    Kéry, Marc

    2018-02-01

    Binomial N-mixture models have proven very useful in ecology, conservation, and monitoring: they allow estimation and modeling of abundance separately from detection probability using simple counts. Recently, doubts about parameter identifiability have been voiced. I conducted a large-scale screening test with 137 bird data sets from 2,037 sites. I found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets. The corresponding multinomial N-mixture models had no problems. Parameter estimates under Poisson and ZIP binomial and multinomial N-mixture models were extremely similar. Identifiability problems became a little more frequent with smaller sample sizes (267 and 50 sites), but were unaffected by whether the models did or did not include covariates. Hence, binomial N-mixture model parameters with Poisson and ZIP mixtures typically appeared identifiable. In contrast, NB mixtures were often unidentifiable, which is worrying since these were often selected by Akaike's information criterion. Identifiability of binomial N-mixture models should always be checked. If problems are found, simpler models, integrated models that combine different observation models or the use of external information via informative priors or penalized likelihoods, may help. © 2017 by the Ecological Society of America.

  14. Compensatory hypertrophy of the teres minor muscle after large rotator cuff tear model in adult male rat.

    Science.gov (United States)

    Ichinose, Tsuyoshi; Yamamoto, Atsushi; Kobayashi, Tsutomu; Shitara, Hitoshi; Shimoyama, Daisuke; Iizuka, Haku; Koibuchi, Noriyuki; Takagishi, Kenji

    2016-02-01

    Rotator cuff tear (RCT) is a common musculoskeletal disorder in the elderly. A large RCT is often irreparable due to the retraction and degeneration of the rotator cuff muscle. The integrity of the teres minor (TM) muscle is thought to affect postoperative functional recovery in some surgical treatments. Hypertrophy of the TM is found in some patients with large RCTs; however, the process underlying this hypertrophy is still unclear. The objective of this study was to determine if compensatory hypertrophy of the TM muscle occurs in a large RCT rat model. Twelve Wistar rats underwent transection of the suprascapular nerve and the supraspinatus and infraspinatus tendons in the left shoulder. The rats were euthanized 4 weeks after the surgery, and the cuff muscles were collected and weighed. The cross-sectional area and the involvement of Akt/mammalian target of rapamycin (mTOR) signaling were examined in the remaining TM muscle. The weight and cross-sectional area of the TM muscle were higher on the operated-on side than on the control side. The phosphorylated Akt/Akt protein ratio was not significantly different between these sides. The phosphorylated-mTOR/mTOR protein ratio was significantly higher on the operated-on side. Transection of the suprascapular nerve and the supraspinatus and infraspinatus tendons activates mTOR signaling in the TM muscle, which results in muscle hypertrophy. The Akt-signaling pathway may not be involved in this process. Nevertheless, activation of mTOR signaling in the TM muscle after RCT may be an effective therapeutic target for a large RCT. Copyright © 2016 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  15. Covariance approximation for large multivariate spatial data sets with an application to multiple climate model errors

    KAUST Repository

    Sang, Huiyan

    2011-12-01

    This paper investigates the cross-correlations across multiple climate model errors. We build a Bayesian hierarchical model that accounts for the spatial dependence of individual models as well as cross-covariances across different climate models. Our method allows for a nonseparable and nonstationary cross-covariance structure. We also present a covariance approximation approach to facilitate the computation in the modeling and analysis of very large multivariate spatial data sets. The covariance approximation consists of two parts: a reduced-rank part to capture the large-scale spatial dependence, and a sparse covariance matrix to correct the small-scale dependence error induced by the reduced rank approximation. We pay special attention to the case that the second part of the approximation has a block-diagonal structure. Simulation results of model fitting and prediction show substantial improvement of the proposed approximation over the predictive process approximation and the independent blocks analysis. We then apply our computational approach to the joint statistical modeling of multiple climate model errors. © 2012 Institute of Mathematical Statistics.
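
The two-part covariance approximation described above can be written compactly. A sketch of the full-scale approximation form, following Sang and Huang's general construction, with notation assumed here (knots u_1, ..., u_m, knot covariance matrix C_m, and cross-covariance vector c(s) = (C(s, u_1), ..., C(s, u_m))^T); the block structure used in the paper may differ in detail:

```latex
% Full-scale approximation of a covariance function C(s, s'):
% a reduced-rank (predictive-process) part built from m knots,
% plus a residual kept only within blocks to restore local dependence.
C_{\mathrm{FSA}}(s, s') =
  \underbrace{c(s)^{\top} C_{m}^{-1} c(s')}_{\text{reduced rank, large scale}}
  + \underbrace{\bigl[C(s, s') - c(s)^{\top} C_{m}^{-1} c(s')\bigr]\,
      \mathbf{1}\{s,\, s' \text{ in the same block}\}}_{\text{sparse correction, small scale}}
```

Keeping the correction only within blocks is what makes the second term block-diagonal, so the approximate covariance matrix can be factorized at far lower cost than the full one.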

  16. A 2D nonlinear multiring model for blood flow in large elastic arteries

    Science.gov (United States)

    Ghigo, Arthur R.; Fullana, Jose-Maria; Lagrée, Pierre-Yves

    2017-12-01

    In this paper, we propose a two-dimensional nonlinear ;multiring; model to compute blood flow in axisymmetric elastic arteries. This model is designed to overcome the numerical difficulties of three-dimensional fluid-structure interaction simulations of blood flow without using the over-simplifications necessary to obtain one-dimensional blood flow models. This multiring model is derived by integrating over concentric rings of fluid the simplified long-wave Navier-Stokes equations coupled to an elastic model of the arterial wall. The resulting system of balance laws provides a unified framework in which both the motion of the fluid and the displacement of the wall are dealt with simultaneously. The mathematical structure of the multiring model allows us to use a finite volume method that guarantees the conservation of mass and the positivity of the numerical solution and can deal with nonlinear flows and large deformations of the arterial wall. We show that the finite volume numerical solution of the multiring model provides at a reasonable computational cost an asymptotically valid description of blood flow velocity profiles and other averaged quantities (wall shear stress, flow rate, ...) in large elastic and quasi-rigid arteries. In particular, we validate the multiring model against well-known solutions such as the Womersley or the Poiseuille solutions as well as against steady boundary layer solutions in quasi-rigid constricted and expanded tubes.
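
One of the validations mentioned above is against the Poiseuille solution. As a minimal illustration of the "integrate over concentric rings" idea, the sketch below checks that the parabolic Poiseuille profile integrated over annular rings recovers the analytic flow rate Q = πR²·u_mean (the radius and mean velocity are hypothetical values, not taken from the paper):

```python
import math

def poiseuille_velocity(r, radius, u_mean):
    """Axial velocity of steady Poiseuille flow in a rigid tube:
    u(r) = 2 * u_mean * (1 - (r/R)^2)."""
    return 2.0 * u_mean * (1.0 - (r / radius) ** 2)

def flow_rate_over_rings(radius, u_mean, n_rings=1000):
    """Integrate u(r) over concentric annular rings (midpoint rule),
    echoing the multiring idea of integrating over rings of fluid."""
    dr = radius / n_rings
    q = 0.0
    for i in range(n_rings):
        r_mid = (i + 0.5) * dr
        q += poiseuille_velocity(r_mid, radius, u_mean) * 2.0 * math.pi * r_mid * dr
    return q

# The ring-integrated flow rate should approach Q = pi * R^2 * u_mean
q = flow_rate_over_rings(radius=0.01, u_mean=0.3)
```

In the actual multiring model the rings carry coupled momentum balances and a moving wall rather than a prescribed profile; this sketch only shows why a ring discretization conserves the integral quantities the model is built around.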

  17. Hybrid Reynolds-Averaged/Large Eddy Simulation of the Flow in a Model SCRamjet Cavity Flameholder

    Science.gov (United States)

    Baurle, R. A.

    2016-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. Experimental data available for this configuration include velocity statistics obtained from particle image velocimetry. Several turbulence models were used for the steady-state Reynolds-averaged simulations which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to not only assess the performance of the hybrid Reynolds-averaged / large eddy simulation modeling approach in a flowfield of interest to the scramjet research community, but to also begin to understand how this capability can best be used to augment standard Reynolds-averaged simulations. The numerical errors were quantified for the steady-state simulations, and at least qualitatively assessed for the scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results displayed a high degree of variability when comparing the flameholder fuel distributions obtained from each turbulence model. This prompted the consideration of applying the higher-fidelity scale-resolving simulations as a surrogate "truth" model to calibrate the Reynolds-averaged closures in a non-reacting setting prior to their use for the combusting simulations. In general, the Reynolds-averaged velocity profile predictions at the lowest fueling level matched the particle imaging measurements almost as well as was observed for the non-reacting condition. However, the velocity field predictions proved to be more sensitive to the flameholder fueling rate than was indicated in the measurements.

  18. Traffic Flow Prediction Model for Large-Scale Road Network Based on Cloud Computing

    Directory of Open Access Journals (Sweden)

    Zhaosheng Yang

    2014-01-01

    Full Text Available To increase the efficiency and precision of large-scale road network traffic flow prediction, a genetic algorithm-support vector machine (GA-SVM) model based on cloud computing is proposed in this paper, which is based on the analysis of the characteristics and defects of the genetic algorithm and the support vector machine. In the cloud computing environment, the SVM parameters are first optimized by a parallel genetic algorithm, and this optimized parallel SVM model is then used to predict traffic flow. On the basis of the traffic flow data of Haizhu District in Guangzhou City, the proposed model was verified and compared with the serial GA-SVM model and a parallel GA-SVM model based on MPI (message passing interface). The results demonstrate that the parallel GA-SVM model based on cloud computing has higher prediction accuracy, shorter running time, and higher speedup.
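
The GA half of a GA-SVM couples an evolutionary search to the SVM's hyperparameters (typically C and gamma). Below is a minimal elitist GA sketch over those two parameters; the fitness function is a stand-in with a known optimum at C = 10, gamma = 0.1, where a real GA-SVM would instead return cross-validated SVM accuracy, so all names and values here are hypothetical.

```python
import random

def fitness(c, gamma):
    """Stand-in for cross-validated SVM accuracy: peaks at C=10, gamma=0.1.
    A real GA-SVM would train and score an SVM with these parameters."""
    return -((c - 10.0) ** 2 + (100.0 * (gamma - 0.1)) ** 2)

def genetic_search(pop_size=20, generations=40, seed=0):
    rng = random.Random(seed)
    pop = [(rng.uniform(0.1, 100.0), rng.uniform(0.001, 1.0)) for _ in range(pop_size)]
    best = max(pop, key=lambda ind: fitness(*ind))
    for _ in range(generations):
        children = [best]  # elitism: the best individual survives unchanged
        while len(children) < pop_size:
            # tournament selection, blend crossover, Gaussian mutation
            p1 = max(rng.sample(pop, 3), key=lambda ind: fitness(*ind))
            p2 = max(rng.sample(pop, 3), key=lambda ind: fitness(*ind))
            a = rng.random()
            c = a * p1[0] + (1 - a) * p2[0] + rng.gauss(0.0, 1.0)
            g = a * p1[1] + (1 - a) * p2[1] + rng.gauss(0.0, 0.01)
            children.append((min(max(c, 0.1), 100.0), min(max(g, 0.001), 1.0)))
        pop = children
        best = max(pop, key=lambda ind: fitness(*ind))
    return best

best_c, best_gamma = genetic_search()
```

The cloud-computing contribution of the record lies in evaluating the population's fitness calls in parallel, since each SVM training run is independent; the GA logic itself is unchanged.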

  19. Flexible non-linear predictive models for large-scale wind turbine diagnostics

    DEFF Research Database (Denmark)

    Bach-Andersen, Martin; Rømer-Odgaard, Bo; Winther, Ole

    2017-01-01

    We demonstrate how flexible non-linear models can provide accurate and robust predictions on turbine component temperature sensor data using data-driven principles and only a minimum of system modeling. The merits of different model architectures are evaluated using data from a large set...... of turbines operating under diverse conditions. We then go on to test the predictive models in a diagnostic setting, where the output of the models are used to detect mechanical faults in rotor bearings. Using retrospective data from 22 actual rotor bearing failures, the fault detection performance...... of the models are quantified using a structured framework that provides the metrics required for evaluating the performance in a fleet wide monitoring setup. It is demonstrated that faults are identified with high accuracy up to 45 days before a warning from the hard-threshold warning system....
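
The diagnostic setting above uses model output as a residual detector: a fault is flagged when measured component temperature departs from the predicted value in a sustained way. A minimal sketch of such a rule (the threshold, persistence length, and data are hypothetical, and the paper's detection framework is more elaborate):

```python
def detect_fault(measured, predicted, threshold, min_consecutive):
    """Return the index at which an alarm is raised, or None.
    An alarm fires when |measured - predicted| exceeds `threshold`
    for `min_consecutive` samples in a row, suppressing one-off
    spikes while catching sustained temperature drift."""
    run = 0
    for i, (m, p) in enumerate(zip(measured, predicted)):
        if abs(m - p) > threshold:
            run += 1
            if run >= min_consecutive:
                return i
        else:
            run = 0
    return None

# Healthy period, then a sustained +8 K offset from sample 50 onward
predicted = [60.0] * 100
measured = [60.2] * 50 + [68.0] * 50
alarm_at = detect_fault(measured, predicted, threshold=5.0, min_consecutive=3)
```

Trading off the threshold against the persistence requirement is what produces the fleet-wide detection/false-alarm metrics the record describes.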

  20. On a Numerical and Graphical Technique for Evaluating some Models Involving Rational Expectations

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders Rygh

    Campbell and Shiller (1987) proposed a graphical technique for the present value model which consists of plotting the spread and theoretical spread as calculated from the cointegrated vector autoregressive model. We extend these techniques to a number of rational expectation models and give...

  1. On a numerical and graphical technique for evaluating some models involving rational expectations

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders Rygh

    Campbell and Shiller (1987) proposed a graphical technique for the present value model which consists of plotting the spread and theoretical spread as calculated from the cointegrated vector autoregressive model. We extend these techniques to a number of rational expectation models and give...

  2. Longitudinal Modeling of Adolescents' Activity Involvement, Problem Peer Associations, and Youth Smoking

    Science.gov (United States)

    Metzger, Aaron; Dawes, Nickki; Mermelstein, Robin; Wakschlag, Lauren

    2011-01-01

    Longitudinal associations among different types of organized activity involvement, problem peer associations, and cigarette smoking were examined in a sample of 1040 adolescents (mean age = 15.62 at baseline, 16.89 at 15-month assessment, 17.59 at 24 months) enriched for smoking experimentation (83% had tried smoking). A structural equation model…

  3. Modelling of phase equilibria and related properties of mixtures involving lipids

    DEFF Research Database (Denmark)

    Cunico, Larissa

    Many challenges involving physical and thermodynamic properties in the production of edible oils and biodiesel are observed, such as availability of experimental data and realiable prediction. In the case of lipids, a lack of experimental data for pure components and also for their mixtures in open...

  4. Expanding the Work Phases Model: User and Expert Involvement in the Construction of Online Specialised Dictionaries

    DEFF Research Database (Denmark)

    Leroyer, Patrick

    The purpose of this article is to establish new proposals for the lexicographic process and the involvement of experts and users in the construction of online specialised dictionaries. It is argued that the ENeL action should also have a view to the development of innovative theories and methodol...

  5. Distributed HUC-based modeling with SUMMA for ensemble streamflow forecasting over large regional domains.

    Science.gov (United States)

    Saharia, M.; Wood, A.; Clark, M. P.; Bennett, A.; Nijssen, B.; Clark, E.; Newman, A. J.

    2017-12-01

    Most operational streamflow forecasting systems rely on a forecaster-in-the-loop approach in which some parts of the forecast workflow require an experienced human forecaster. But this approach faces challenges surrounding process reproducibility, hindcasting capability, and extension to large domains. The operational hydrologic community is increasingly moving towards `over-the-loop' (completely automated) large-domain simulations, yet recent developments indicate a widespread lack of community knowledge about the strengths and weaknesses of such systems for forecasting. A realistic representation of land surface hydrologic processes is a critical element for improving forecasts, but often comes at the substantial cost of forecast system agility and efficiency. While popular grid-based models support the distributed representation of land surface processes, intermediate-scale Hydrologic Unit Code (HUC)-based modeling could provide a more efficient and process-aligned spatial discretization, reducing the need for tradeoffs between model complexity and critical forecasting requirements such as ensemble methods and comprehensive model calibration. The National Center for Atmospheric Research is collaborating with the University of Washington, the Bureau of Reclamation and the USACE to implement, assess, and demonstrate real-time, over-the-loop distributed streamflow forecasting for several large western US river basins and regions. In this presentation, we present early results from short- to medium-range hydrologic and streamflow forecasts for the Pacific Northwest (PNW). We employ real-time 1/16th degree daily ensemble model forcings as well as downscaled Global Ensemble Forecast System (GEFS) meteorological forecasts. These datasets drive an intermediate-scale configuration of the Structure for Unifying Multiple Modeling Alternatives (SUMMA) model, which represents the PNW using over 11,700 HUCs.
The system produces not only streamflow forecasts (using the Mizu

  6. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations.

    Science.gov (United States)

    Hu, Eric Y; Bouteiller, Jean-Marie C; Song, Dong; Baudry, Michel; Berger, Theodore W

    2015-01-01

    Chemical synapses are comprised of a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model capable of capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compared its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics that were represented in the original mechanistic model and provide a method to replicate complex and diverse synaptic transmission within neuron network simulations.
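
The Volterra functional power series represents a nonlinear input-output system through a hierarchy of memory kernels. A minimal discrete-time sketch truncated at second order is shown below; the kernel values are hypothetical illustrative numbers, whereas the record's IO synapse model identifies its kernels (up to third order) from a detailed mechanistic glutamatergic synapse model.

```python
def volterra_output(x, k1, k2):
    """Discrete-time Volterra series truncated at second order:
    y[n] = sum_i k1[i] * x[n-i]
         + sum_i sum_j k2[i][j] * x[n-i] * x[n-j].
    k1 (list) and k2 (square list of lists) are the memory kernels."""
    m = len(k1)
    y = []
    for n in range(len(x)):
        past = [x[n - i] if n - i >= 0 else 0.0 for i in range(m)]
        lin = sum(k1[i] * past[i] for i in range(m))
        quad = sum(k2[i][j] * past[i] * past[j]
                   for i in range(m) for j in range(m))
        y.append(lin + quad)
    return y

# A unit impulse input exposes the kernel values directly in the output
y = volterra_output([1.0, 0.0, 0.0], k1=[0.5, 0.2], k2=[[0.1, 0.0], [0.0, 0.05]])
```

Once the kernels are identified, evaluating these sums is far cheaper than integrating the underlying kinetic equations, which is what makes the representation attractive for large-scale network simulations.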

  7. SWAT meta-modeling as support of the management scenario analysis in large watersheds.

    Science.gov (United States)

    Azzellino, A; Çevirgen, S; Giupponi, C; Parati, P; Ragusa, F; Salvetti, R

    2015-01-01

    In the last two decades, numerous models and modeling techniques have been developed to simulate nonpoint source pollution effects. Most models simulate the hydrological, chemical, and physical processes involved in the entrainment and transport of sediment, nutrients, and pesticides. Very often these models require a distributed modeling approach and are limited in scope by the requirement of homogeneity and by the need to manipulate extensive data sets. Physically based models are extensively used in this field as decision support for managing nonpoint source emissions. A common characteristic of this type of model is a demanding input of several state variables, which makes calibration more difficult and raises the cost and effort of implementing any simulation scenario. In this study the USDA Soil and Water Assessment Tool (SWAT) was used to model the Venice Lagoon Watershed (VLW), Northern Italy. A Multi-Layer Perceptron (MLP) network was trained on SWAT simulations and used as a meta-model for scenario analysis. The MLP meta-model was successfully trained and showed an overall accuracy higher than 70% both on the training and on the evaluation set, allowing a significant simplification in conducting scenario analysis.

  8. Modeling and experiments of biomass combustion in a large-scale grate boiler

    DEFF Research Database (Denmark)

    Yin, Chungen; Rosendahl, Lasse; Kær, Søren Knudsen

    2007-01-01

    is inherently more difficult due to the complexity of the solid biomass fuel bed on the grate, the turbulent reacting flow in the combustion chamber and the intensive interaction between them. This paper presents the CFD validation efforts for a modern large-scale biomass-fired grate boiler. Modeling...... and experiments are both done for the grate boiler. The comparison between them shows an overall acceptable agreement in tendency. However at some measuring ports, big discrepancies between the modeling and the experiments are observed, mainly because the modeling-based boundary conditions (BCs) could differ...

  9. A simple atmospheric boundary layer model applied to large eddy simulations of wind turbine wakes

    DEFF Research Database (Denmark)

    Troldborg, Niels; Sørensen, Jens Nørkær; Mikkelsen, Robert Flemming

    2014-01-01

    A simple model for including the influence of the atmospheric boundary layer in connection with large eddy simulations of wind turbine wakes is presented and validated by comparing computed results with measurements as well as with direct numerical simulations. The model is based on an immersed...... boundary type technique where volume forces are used to introduce wind shear and atmospheric turbulence. The application of the model for wake studies is demonstrated by combining it with the actuator line method, and predictions are compared with field measurements. Copyright © 2013 John Wiley & Sons, Ltd....

  10. A dynamic programming approach for quickly estimating large network-based MEV models

    DEFF Research Database (Denmark)

    Mai, Tien; Frejinger, Emma; Fosgerau, Mogens

    2017-01-01

    We propose a way to estimate a family of static Multivariate Extreme Value (MEV) models with large choice sets in short computational time. The resulting model is also straightforward and fast to use for prediction. Following Daly and Bierlaire (2006), the correlation structure is defined by a ro...... to converge (4.3 h on an Intel(R) 3.2 GHz machine using a non-parallelized code). We also show that our approach allows to estimate a cross-nested logit model of 111 nests with a real data set of more than 100,000 observations in 14 h....
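
The dynamic programming idea behind such network-based models is a backward logsum recursion: the expected maximum utility of reaching the destination from each node satisfies V(dest) = 0 and V(i) = log Σ_j exp(u_ij + V(j)). A minimal sketch on a tiny acyclic network follows; the graph and utilities are hypothetical, and the paper's method additionally handles correlation structure and estimation, which this sketch omits.

```python
import math

def expected_max_utility(arcs, dest, order):
    """Backward logsum recursion used in network-based logit models:
    V(dest) = 0;  V(i) = log( sum_j exp(u_ij + V(j)) ).
    `arcs` maps node -> list of (next_node, utility); `order` is a
    reverse topological order of the acyclic network."""
    v = {dest: 0.0}
    for node in order:
        if node == dest:
            continue
        v[node] = math.log(sum(math.exp(u + v[j]) for j, u in arcs[node]))
    return v

# Two equally attractive one-step routes from 'a' to 'd' give V(a) = log 2
arcs = {"a": [("d", 0.0), ("d", 0.0)]}
v = expected_max_utility(arcs, dest="d", order=["d", "a"])
```

Because the recursion touches each arc once, value functions over very large choice sets can be computed quickly, which is the source of the short estimation times reported above.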

  11. REQUIREMENTS FOR SYSTEMS DEVELOPMENT LIFE CYCLE MODELS FOR LARGE-SCALE DEFENSE SYSTEMS

    Directory of Open Access Journals (Sweden)

    Kadir Alpaslan DEMIR

    2015-10-01

    Full Text Available Large-scale defense system projects are strategic for maintaining and increasing the national defense capability. Therefore, governments spend billions of dollars in the acquisition and development of large-scale defense systems. The scale of defense systems is always increasing and the costs to build them are skyrocketing. Today, defense systems are software intensive and they are either a system of systems or a part of it. Historically, the project performances observed in the development of these systems have been significantly poor when compared to other types of projects. It is obvious that the currently used systems development life cycle models are insufficient to address today’s challenges of building these systems. Using a systems development life cycle model that is specifically designed for large-scale defense system developments and is effective in dealing with today’s and near-future challenges will help to improve project performances. The first step in the development of a large-scale defense systems development life cycle model is the identification of requirements for such a model. This paper contributes to the body of literature in the field by providing a set of requirements for system development life cycle models for large-scale defense systems. Furthermore, a research agenda is proposed.

  12. Finite element modelling for fatigue stress analysis of large suspension bridges

    Science.gov (United States)

    Chan, Tommy H. T.; Guo, L.; Li, Z. X.

    2003-03-01

    Fatigue is an important failure mode for large suspension bridges under traffic loadings. However, large suspension bridges have so many attributes that it is difficult to analyze their fatigue damage using experimental measurement methods. Numerical simulation is a feasible method of studying such fatigue damage. In British standards, the finite element method is recommended as a rigorous method for steel bridge fatigue analysis. This paper aims at developing a finite element (FE) model of a large suspension steel bridge for fatigue stress analysis. As a case study, a FE model of the Tsing Ma Bridge is presented. The verification of the model is carried out with the help of the measured bridge modal characteristics and the online data measured by the structural health monitoring system installed on the bridge. The results show that the constructed FE model is efficient for bridge dynamic analysis. Global structural analyses using the developed FE model are presented to determine the components of the nominal stress generated by railway loadings and some typical highway loadings. The critical locations in the bridge main span are also identified with the numerical results of the global FE stress analysis. Local stress analysis of a typical weld connection is carried out to obtain the hot-spot stresses in the region. These results provide a basis for evaluating fatigue damage and predicting the remaining life of the bridge.
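    The hot-spot stresses referred to at the end of the abstract are commonly obtained by linear extrapolation of surface stresses from reference points at 0.4t and 1.0t ahead of the weld toe (t being the plate thickness), as in the IIW recommendations. A minimal sketch under that assumption; the paper's actual post-processing may differ:

```python
def hot_spot_stress(sigma_04t, sigma_10t):
    # Linearly extrapolate the surface stresses at 0.4*t and 1.0*t
    # ahead of the weld toe back to the toe itself (x = 0):
    #   sigma_hs = sigma(0.4t) + (sigma(0.4t) - sigma(1.0t)) * 0.4/0.6
    #            = 1.67 * sigma(0.4t) - 0.67 * sigma(1.0t)
    return (5.0 * sigma_04t - 2.0 * sigma_10t) / 3.0
```

    The two reference stresses would come from the local FE model of the weld connection; the extrapolated hot-spot value then feeds the fatigue damage evaluation.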

  13. Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems

    Science.gov (United States)

    Koch, Patrick Nathan

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, which facilitates concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis; (2) statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and (3) noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method and associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system; the turbofan system-level problem is partitioned into engine cycle and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.
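    The statistical experimentation and approximation techniques cited in item (2) are often realized as second-order response-surface surrogates fitted to designed experiments. The following is a hedged sketch of such a surrogate fit; the dissertation's actual approximation scheme is not specified in the abstract, and the function names are illustrative:

```python
import numpy as np

def fit_quadratic_rsm(X, y):
    # Fit y ~ b0 + sum_i b_i x_i + sum_{i<=j} b_ij x_i x_j
    # by linear least squares and return a predictor for new points.
    n, d = X.shape

    def design(Xq):
        cols = [np.ones(Xq.shape[0])]
        cols += [Xq[:, i] for i in range(d)]
        cols += [Xq[:, i] * Xq[:, j] for i in range(d) for j in range(i, d)]
        return np.column_stack(cols)

    beta, *_ = np.linalg.lstsq(design(X), y, rcond=None)

    def predict(Xq):
        return design(Xq) @ beta

    return predict
```

    In a hierarchical design exploration, such cheap surrogates replace expensive subsystem analyses so that system- and subsystem-level searches can iterate concurrently.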

  14. REIONIZATION ON LARGE SCALES. I. A PARAMETRIC MODEL CONSTRUCTED FROM RADIATION-HYDRODYNAMIC SIMULATIONS

    International Nuclear Information System (INIS)

    Battaglia, N.; Trac, H.; Cen, R.; Loeb, A.

    2013-01-01

    We present a new method for modeling inhomogeneous cosmic reionization on large scales. Utilizing high-resolution radiation-hydrodynamic simulations with 2048^3 dark matter particles, 2048^3 gas cells, and 17 billion adaptive rays in an L = 100 Mpc h⁻¹ box, we show that the density and reionization redshift fields are highly correlated on large scales (≳ 1 Mpc h⁻¹). This correlation can be statistically represented by a scale-dependent linear bias. We construct a parametric function for the bias, which is then used to filter any large-scale density field to derive the corresponding spatially varying reionization redshift field. The parametric model has three free parameters that can be reduced to one free parameter when we fit the two bias parameters to simulation results. We can differentiate degenerate combinations of the bias parameters by combining results for the global ionization histories and correlation length between ionized regions. Unlike previous semi-analytic models, the evolution of the reionization redshift field in our model is directly compared cell by cell against simulations and performs well in all tests. Our model maps the high-resolution, intermediate-volume radiation-hydrodynamic simulations onto lower-resolution, larger-volume N-body simulations (≳ 2 Gpc h⁻¹) in order to make mock observations and theoretical predictions.
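    The filtering step described above amounts to a Fourier-space multiplication of the density field by the scale-dependent bias. The functional form b(k) = b0 / (1 + k/k0)^alpha and all parameter values below are assumptions for illustration, not the paper's fitted values:

```python
import numpy as np

def filter_density_to_zre(delta, box_size, b0, k0, alpha):
    # FFT the density contrast field, multiply each mode by the
    # scale-dependent linear bias b(k), and transform back to obtain
    # the fluctuation field of the reionization redshift.
    n = delta.shape[0]
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k = np.sqrt(kx**2 + ky**2 + kz**2)
    bias = b0 / (1.0 + k / k0) ** alpha      # assumed parametric form
    return np.fft.ifftn(np.fft.fftn(delta) * bias).real
```

    Applied to a low-resolution N-body density field, the output fluctuation field can then be rescaled around a mean reionization redshift to produce the mock reionization maps the abstract mentions.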

  15. Simulation of hydrogen release and combustion in large scale geometries: models and methods

    International Nuclear Information System (INIS)

    Beccantini, A.; Dabbene, F.; Kudriakov, S.; Magnaud, J.P.; Paillere, H.; Studer, E.

    2003-01-01

    The simulation of H2 distribution and combustion in confined geometries such as nuclear reactor containments is a challenging task from the point of view of numerical simulation, as it involves quite disparate length and time scales, which need to be resolved appropriately and efficiently. CEA is involved in the development and validation of codes to model such problems, for external clients such as IRSN (TONUS code) and Technicatome (NAUTILUS code), and for its own safety studies. This paper provides an overview of the physical and numerical models developed for such applications, as well as some insight into the current research topics being pursued. Examples of H2 mixing and combustion simulations are given. (authors)

  16. A 3D thermal runaway propagation model for a large format lithium ion battery module

    International Nuclear Information System (INIS)

    Feng, Xuning; Lu, Languang; Ouyang, Minggao; Li, Jiangqiu; He, Xiangming

    2016-01-01

    In this paper, a 3D thermal runaway (TR) propagation model is built for a large format lithium ion battery module. The 3D TR propagation model is based on the energy balance equation. Empirical equations are utilized to simplify the calculation of the chemical kinetics of TR, whereas an equivalent thermal resistance layer is employed to simplify the heat transfer through the thin thermal layer. The 3D TR propagation model is validated by experiment and supports a useful discussion of the mechanisms of TR propagation. According to the modeling analysis, TR propagation can be delayed or prevented by: 1) increasing the TR triggering temperature; 2) reducing the total electric energy released during TR; 3) enhancing the heat dissipation level; 4) adding an extra thermal resistant layer between adjacent batteries. TR propagation is successfully prevented in the model, and this result is validated by experiment. The model with 3D temperature distribution provides a beneficial tool for researchers to study TR propagation mechanisms and for engineers to design a safer battery pack. - Highlights: • A 3D thermal runaway (TR) propagation model for a Li-ion battery pack is built. • The 3D TR propagation model fits experimental results well. • Temperature distributions during TR propagation are presented using the 3D model. • Modeling analysis provides solutions for the prevention of TR propagation. • Quantified solutions to prevent TR propagation in a battery pack are discussed.
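    The energy-balance idea behind such a propagation model can be illustrated with a lumped 1D sketch: each cell exchanges heat with its neighbours through a thermal resistance, loses heat to ambient, and releases a fixed amount of heat once its temperature crosses a triggering threshold. This is a toy reduction of the paper's 3D model, and all parameter values are invented for illustration:

```python
def simulate_tr_propagation(n_cells, t_end, dt,
                            m_cp=800.0,       # cell heat capacity, J/K (assumed)
                            r_th=1.0,         # cell-to-cell resistance, K/W (assumed)
                            h_a=0.2,          # dissipation to ambient, W/K (assumed)
                            t_amb=25.0, t_trig=180.0,
                            q_tr=6.0e5, tr_duration=30.0):
    # Explicit-Euler integration of the per-cell energy balance:
    #   m*cp * dT/dt = conduction from neighbours + TR source - dissipation
    T = [t_amb] * n_cells
    tr_start = [None] * n_cells
    tr_start[0] = 0.0                         # cell 0 is triggered at t = 0
    t = 0.0
    while t < t_end:
        q = [0.0] * n_cells
        for i in range(n_cells):
            if tr_start[i] is None and T[i] >= t_trig:
                tr_start[i] = t               # thermal runaway triggered
            if tr_start[i] is not None and t - tr_start[i] < tr_duration:
                q[i] += q_tr / tr_duration    # TR heat release, W
            if i > 0:
                q[i] += (T[i - 1] - T[i]) / r_th
            if i < n_cells - 1:
                q[i] += (T[i + 1] - T[i]) / r_th
            q[i] -= h_a * (T[i] - t_amb)
        T = [Ti + dt * qi / m_cp for Ti, qi in zip(T, q)]
        t += dt
    return T, tr_start
```

    With these assumed numbers the runaway of cell 0 eventually triggers its neighbour; raising t_trig, lowering q_tr, raising h_a, or raising r_th delays or prevents propagation, mirroring the four countermeasures listed in the abstract.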

  17. Large-eddy simulation of ethanol spray combustion using a finite-rate combustion model

    Energy Technology Data Exchange (ETDEWEB)

    Li, K.; Zhou, L.X. [Tsinghua Univ., Beijing (China). Dept. of Engineering Mechanics; Chan, C.K. [Hong Kong Polytechnic Univ. (China). Dept. of Applied Mathematics

    2013-07-01

    Large-eddy simulation (LES) of spray combustion is developing rapidly, but its combustion models are rarely validated against detailed experimental data. In this paper, large-eddy simulation of ethanol-air spray combustion was carried out using an Eulerian-Lagrangian approach, a subgrid-scale kinetic energy stress model, and a finite-rate combustion model. The simulation results are validated in detail against experiments. The LES-obtained statistically averaged temperature is in agreement with the experimental results in most regions. The instantaneous LES results show the coherent structures of the shear region near the high-temperature flame zone and the fuel vapor concentration map, indicating that the droplets are concentrated in this shear region. The droplet sizes are found to be in the range of 20-100 μm. The instantaneous temperature map shows the close interaction between the coherent structures and the combustion reaction.

  18. Large-x dependence of νW₂ in the generalized vector-dominance model

    International Nuclear Information System (INIS)

    Argyres, E.N.; Lam, C.S.

    1977-01-01

    It is well known that the usual generalized vector-meson-dominance (GVMD) model gives too large a contribution to νW₂ for large x. Various heuristic modifications, for example making use of the t_min effect, have been proposed in order to achieve a reduction of this contribution. In this paper we examine within the GVMD context whether such reductions can rigorously be achieved. This is done utilizing a potential model as well as a relativistic eikonal model. We find that whereas a reduction equivalent to that of t_min can be arranged in vector-meson photoproduction, the same is not true for virtual-photon Compton scattering in such diagonal models. The reason for this difference is discussed in detail. Finally we show that the desired reduction can be obtained if nondiagonal vector-meson scattering terms are properly taken into account

  19. A testing facility for large scale models at 100 bar and 300°C to 1000°C

    International Nuclear Information System (INIS)

    Zemann, H.

    1978-07-01

    A testing facility for large scale model tests is under construction with the support of Austrian industry. It will contain a Prestressed Concrete Pressure Vessel (PCPV) with a hot liner (300°C at 100 bar), an electrical heating system (1.2 MW, 1000°C), a gas supply system, and a cooling system for the testing space. The components themselves are models for advanced high temperature applications. The first main component to be tested successfully was the PCPV. Basic investigations of the building materials, improvements of concrete gauges, large scale model tests, and measurements within the structural concrete and on the liner have been made from the beginning of construction, through the periods of prestressing and stabilization, to the final pressurizing tests. On the basis of these investigations a computer controlled safety surveillance system for long term high pressure, high temperature tests has been developed. (author)

  20. Design of roundness measurement model with multi-systematic error for cylindrical components with large radius.

    Science.gov (United States)

    Sun, Chuanzhi; Wang, Lei; Tan, Jiubin; Zhao, Bo; Tang, Yangchao

    2016-02-01

    The paper designs a roundness measurement model with multiple systematic errors, taking eccentricity, probe offset, the radius of the probe tip, and tilt error into account for roundness measurement of cylindrical components. The effects of these systematic errors and of the component radius on roundness measurement are analysed. The proposed method is built on an instrument with a high precision rotating spindle. The effectiveness of the proposed method is verified by experiment with a standard cylindrical component measured on a roundness measuring machine. Compared to the traditional limacon measurement model, the accuracy of roundness measurement can be increased by about 2.2 μm using the proposed model for an object with a large radius of around 37 mm. The proposed method can improve the accuracy of roundness measurement and can be used for error separation, calibration, and comparison, especially for cylindrical components with a large radius.
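    The traditional limacon model that the abstract compares against removes spindle eccentricity by fitting a first-order harmonic to the radial trace. A minimal sketch of that baseline follows (illustrative only; the paper's multi-error model additionally accounts for probe offset, tip radius, and tilt):

```python
import numpy as np

def limacon_fit(theta, r):
    # First-order (limacon) model of a roundness trace:
    #   r(theta) ~= R + e_x*cos(theta) + e_y*sin(theta)
    # Solve for R, e_x, e_y by linear least squares; the residual is the
    # roundness profile with the eccentricity harmonic removed.
    A = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
    coeffs, *_ = np.linalg.lstsq(A, r, rcond=None)
    residual = r - A @ coeffs
    roundness = residual.max() - residual.min()   # peak-to-valley value
    R, ex, ey = coeffs
    return R, ex, ey, roundness
```

    For a large-radius part such as the ~37 mm component in the abstract, the errors ignored by this first-order model (probe offset, tip radius, tilt) become comparable to the roundness value itself, which is the motivation for the multi-systematic-error model.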