WorldWideScience

Sample records for models involving large

  1. An MCDA and GIS coupling conceptual model to be used in a circular decision process by stakeholders involved in large wind farm projects

    Energy Technology Data Exchange (ETDEWEB)

    Vazquez, M. [Quebec Univ., Rimouski, PQ (Canada). Laboratoire de Recherche en Energie Eolienne; Quebec Univ., Montreal, PQ (Canada). GEIGER; Waaub, J.P. [Quebec Univ., Montreal, PQ (Canada). GEIGER; Ilinca, A. [Quebec Univ., Rimouski, PQ (Canada). Laboratoire de Recherche en Energie Eolienne

    2010-07-01

    This poster presentation described an MCDA and geographic information system (GIS) coupling conceptual model designed for use in stakeholder decision-making processes for large wind farm projects. The model comprised 4 modules and 4 stakeholder categories that considered the environment and communities involved in the project. The integrated modelling approach was designed to ensure a transparent decision-making process. The modules included: (1) an MCDA module, (2) a local expertise and scientific knowledge module, (3) a stakeholder involvement module, and (4) a participatory GIS module. The model can be used to structure issues during consultation procedures, as well as to conduct preference analyses and to identify indicators. Examples of stakeholder weighting were included. tabs., figs.

  2. Severities of transportation accidents involving large packages

    Energy Technology Data Exchange (ETDEWEB)

    Dennis, A.W.; Foley, J.T. Jr.; Hartman, W.F.; Larson, D.W.

    1978-05-01

    The study was undertaken to define in a quantitative nonjudgmental technical manner the abnormal environments to which a large package (total weight over 2 tons) would be subjected as the result of a transportation accident. Because of this package weight, air shipment was not considered as a normal transportation mode and was not included in the study. The abnormal transportation environments for shipment by motor carrier and train were determined and quantified. In all cases the package was assumed to be transported on an open flat-bed truck or an open flat-bed railcar. In an earlier study, SLA-74-0001, the small-package environments were investigated. A third transportation study, related to the abnormal environment involving waterways transportation, is now under way at Sandia Laboratories and should complete the description of abnormal transportation environments. Five abnormal environments were defined and investigated, i.e., fire, impact, crush, immersion, and puncture. The primary interest of the study was directed toward the type of large package used to transport radioactive materials; however, the findings are not limited to this type of package but can be applied to a much larger class of material shipping containers.

  3. Severities of transportation accidents involving large packages

    International Nuclear Information System (INIS)

    Dennis, A.W.; Foley, J.T. Jr.; Hartman, W.F.; Larson, D.W.

    1978-05-01

    The study was undertaken to define in a quantitative nonjudgmental technical manner the abnormal environments to which a large package (total weight over 2 tons) would be subjected as the result of a transportation accident. Because of this package weight, air shipment was not considered as a normal transportation mode and was not included in the study. The abnormal transportation environments for shipment by motor carrier and train were determined and quantified. In all cases the package was assumed to be transported on an open flat-bed truck or an open flat-bed railcar. In an earlier study, SLA-74-0001, the small-package environments were investigated. A third transportation study, related to the abnormal environment involving waterways transportation, is now under way at Sandia Laboratories and should complete the description of abnormal transportation environments. Five abnormal environments were defined and investigated, i.e., fire, impact, crush, immersion, and puncture. The primary interest of the study was directed toward the type of large package used to transport radioactive materials; however, the findings are not limited to this type of package but can be applied to a much larger class of material shipping containers

  4. A Reasoned Action Model of Male Client Involvement in Commercial Sex Work in Kibera, A Large Informal Settlement in Nairobi, Kenya.

    Science.gov (United States)

    Roth, Eric Abella; Ngugi, Elizabeth; Benoit, Cecilia; Jansson, Mikael; Hallgrimsdottir, Helga

    2014-01-01

    Male clients of female sex workers (FSWs) are epidemiologically important because they can form bridge groups linking high- and low-risk subpopulations. However, because male clients are hard to locate, they are not frequently studied. Recent research emphasizes searching for high-risk behavior groups in locales where new sexual partnerships form and the threat of HIV transmission is high. Public drinking venues in sub-Saharan Africa satisfy these criteria. Accordingly, this study developed and implemented a rapid assessment methodology to survey men in bars throughout the large informal settlement of Kibera, Nairobi, Kenya, with the goal of delineating cultural and economic rationales associated with male participation in commercial sex. The study sample consisted of 220 male patrons of 110 bars located throughout Kibera's 11 communities. Logistic regression analysis incorporating a modified Reasoned Action Model indicated that a social norm condoning commercial sex among male peers and the cultural belief that men should practice sex before marriage support commercial sex involvement. Conversely, lacking money to drink and/or pay for sexual services was a barrier to male commercial sex involvement. Results are interpreted in light of possible harm reduction programs focusing on FSWs' male clients.

  5. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

    Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large scale model testing performed using large testing machine ZZ 8000 (with a maximum load of 80 MN) at the SKODA WORKS. The results are described from testing the material resistance to fracture (non-ductile). The testing included the base materials and welded joints. The rated specimen thickness was 150 mm with defects of a depth between 15 and 100 mm. The results are also presented of nozzles of 850 mm inner diameter in a scale of 1:3; static, cyclic, and dynamic tests were performed without and with surface defects (15, 30 and 45 mm deep). During cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  6. Modeling interdisciplinary activities involving Mathematics

    DEFF Research Database (Denmark)

    Iversen, Steffen Møllegaard

    2006-01-01

    In this paper a didactical model is presented. The goal of the model is to work as a didactical tool, or conceptual frame, for developing, carrying through and evaluating interdisciplinary activities involving the subject of mathematics and philosophy in the high schools. Through the terms...... domains (Michelsen, 2001, 2005a, 2005b). Furthermore the theoretical description rests on a series of qualitative interviews with teachers from the Danish high school (grades 9-11) conducted recently. The special case of concrete interdisciplinary activities between mathematics and philosophy is also...

  7. LARGE VESSEL INVOLVEMENT IN BEHCET’S DISEASE

    Directory of Open Access Journals (Sweden)

    A.R. Jamshidi, F. Davatchi

    2004-08-01

    Large vessel involvement is one of the hallmarks of Behcet’s disease (BD), but its prevalence varies widely due to ethnic variation or environmental factors. The aim of this study is to find the characteristics of vasculo-Behcet (VB) in Iran. In a cohort of 4769 patients with BD, those with vascular involvement were selected. Different manifestations of disease were compared with the remaining group of patients. A 95% confidence interval (CI) was calculated for each item. Vascular involvement was seen in 409 cases (8.6%; CI, 0.8). Venous involvement was seen in 396 cases: deep vein thrombosis in 294 (6.2%; CI, 0.7), superficial phlebitis in 108 (2.3%; CI, 0.4), and large vein thrombosis in 45 (0.9%; CI, 0.3). Arterial involvement was seen in 28 patients (25 aneurysms and 4 thromboses). Thirteen patients showed both arterial and venous involvement. The mean age of the patients with VB was slightly higher (P<0.03), but the disease duration was significantly longer (P<0.0003). VB was more common in men. As the presenting sign, ocular lesions were less frequent in VB (P<0.0006), while skin lesions were over 2 times more common in these cases (P<0.000001). VB was associated with a higher frequency of genital aphthosis, skin involvement, joint manifestations, epididymitis, CNS lesions, and GI involvement. The juvenile form was less common in VB (P<0.03). High ESR was more frequent in VB (P=0.000002), but the frequency of false positive VDRL, pathergy phenomenon, HLA-B5 or HLA-B27 showed no significant difference between the two groups. In Iranian patients with BD, vascular involvement is not common and large vessel involvement is rare. It may be sex-related, and is more common in well-established disease with multiple organ involvement and longer disease duration.
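
    The intervals quoted above appear consistent with the standard normal-approximation confidence interval for a proportion, with half-width z·sqrt(p(1-p)/n). A quick check in Python (a sketch, not the authors' code) reproduces two of the reported figures:

```python
import math

def prop_ci(k, n, z=1.96):
    """Point estimate and half-width of a normal-approximation 95% CI
    for a proportion of k events among n patients."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, half

# Overall vascular involvement: 409 of 4769 patients
p, half = prop_ci(409, 4769)
print(f"{100 * p:.1f}% (CI, {100 * half:.1f})")  # -> 8.6% (CI, 0.8)

# Deep vein thrombosis: 294 of 4769 patients
p, half = prop_ci(294, 4769)
print(f"{100 * p:.1f}% (CI, {100 * half:.1f})")  # -> 6.2% (CI, 0.7)
```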

  8. Large scale composting model

    OpenAIRE

    Henon , Florent; Debenest , Gérald; Tremier , Anne; Quintard , Michel; Martel , Jean-Luc; Duchalais , Guy

    2012-01-01

    One way to treat organic wastes in accordance with environmental policies is to develop biological treatments such as composting. Nevertheless, this development largely relies on the quality of the final product and, as a consequence, on the quality of the biological activity during the treatment. Favourable conditions (oxygen concentration, temperature and moisture content) in the waste bed largely contribute to the establishment of a good aerobic biological activity an...

  9. Work zone fatal crashes involving large trucks, 2012 : [analysis brief].

    Science.gov (United States)

    2014-11-01

    In 2012, 30,800 fatal crashes took place on our Nation's roadways, with 11.2 percent (3,464) involving at least 1 large truck. While the majority of all fatal crashes (98.2 percent) took place outside of a work zone in 2012, 547 fatal crashes (1.8 ...
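
    The percentages in this brief are internally consistent with the reported counts; a quick arithmetic check (taking the published counts as exact):

```python
# Counts reported in the 2012 work-zone analysis brief
total_fatal = 30_800   # all fatal crashes
large_truck = 3_464    # involving at least one large truck
work_zone = 547        # occurring in a work zone

print(f"{100 * large_truck / total_fatal:.1f}%")                # -> 11.2%
print(f"{100 * work_zone / total_fatal:.1f}%")                  # -> 1.8%
print(f"{100 * (total_fatal - work_zone) / total_fatal:.1f}%")  # -> 98.2%
```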

  10. Consumers' reaction towards involvement of large retailers in selling ...

    African Journals Online (AJOL)

    ... analysis employed to identify customers' reaction to large retailers' involvement in selling fair trade coffee. The study indicates that credence processing attributes such as 'retailers image', 'fair deal', 'fair trade promotion', 'social responsibility' and 'against own label' are the major factors that influence consumers' intention ...

  11. Primary Mediastinal Large B-cell Lymphoma Exhibiting Endobronchial Involvement.

    Science.gov (United States)

    Shimada, Midori; Fukuda, Minoru; Horio, Kensuke; Suyama, Takayuki; Kitazaki, Takeshi; Hashiguchi, Kohji; Fukuda, Masaaki; Shigematsu, Kazuto; Nakamura, Yoichi; Honda, Takuya; Ashizawa, Kazuto; Mukae, Hiroshi

    Primary mediastinal large B-cell lymphoma (PMLBCL) is one of the subtypes of diffuse large B-cell lymphoma. We experienced a rare case of PMLBCL that exhibited endobronchial involvement. A 33-year-old Japanese female with the chief complaints of epigastralgia, back pain, and nausea visited a primary care hospital. Computed tomography of the chest and abdomen demonstrated a bulky mass in the left anterior mediastinum, multiple pulmonary nodules, axillary lymph node swelling, and a pancreatic tumor. Fiberoptic bronchoscopy showed a white-tinged irregularly shaped endobronchial tumor accompanied by capillary vessel dilation in the left upper lobar bronchus. Taken together, these findings resulted in a diagnosis of PMLBCL.

  12. Observations involving broadband impedance modelling

    International Nuclear Information System (INIS)

    Berg, J.S.

    1995-08-01

    Results for single- and multi-bunch instabilities can be significantly affected by the precise model that is used for the broadband impedance. This paper discusses three aspects of broadband impedance modelling. The first is an observation of the effect that a seemingly minor change in an impedance model has on the single-bunch mode coupling threshold. The second is a successful attempt to construct a model for the high-frequency tails of an r.f. cavity. The last is a discussion of requirements for the mathematical form of an impedance which follow from the general properties of impedances.

  13. Observations involving broadband impedance modelling

    Energy Technology Data Exchange (ETDEWEB)

    Berg, J.S. [Stanford Linear Accelerator Center, Menlo Park, CA (United States)

    1996-08-01

    Results for single- and multi-bunch instabilities can be significantly affected by the precise model that is used for the broadband impedance. This paper discusses three aspects of broadband impedance modelling. The first is an observation of the effect that a seemingly minor change in an impedance model has on the single-bunch mode coupling threshold. The second is a successful attempt to construct a model for the high-frequency tails of an r.f. cavity. The last is a discussion of requirements for the mathematical form of an impedance which follow from the general properties of impedances. (author)

  14. Multifocal Extranodal Involvement of Diffuse Large B-Cell Lymphoma

    Directory of Open Access Journals (Sweden)

    Devrim Cabuk

    2013-01-01

    Endobronchial involvement of extrapulmonary malignant tumors is uncommon and mostly associated with breast, kidney, colon, and rectum carcinomas. A 68-year-old male with a prior diagnosis of colon non-Hodgkin lymphoma (NHL) was admitted to the hospital with complaints of cough, sputum, and dyspnea. The chest radiograph showed right hilar enlargement and an opacity in the right middle zone suggestive of a mass lesion. Computed tomography of the thorax revealed a right-sided mass lesion extending to the thoracic wall with destruction of the third and fourth ribs, and a right hilar mass lesion. Fiberoptic bronchoscopy was performed in order to evaluate endobronchial involvement and showed stenosis with mucosal tumor infiltration in the right upper lobe bronchus. The pathological examination of the bronchoscopic biopsy specimen reported diffuse large B-cell lymphoma, and the case was accepted as an endobronchial recurrence of the sigmoid colon NHL. The patient is still under treatment with R-ICE (rituximab-ifosfamide-carboplatin-etoposide) chemotherapy, and partial regression of the pulmonary lesions was noted after 3 courses of treatment.

  15. Metal-Oxide Film Conversions Involving Large Anions

    International Nuclear Information System (INIS)

    Pretty, S.; Zhang, X.; Shoesmith, D.W.; Wren, J.C.

    2008-01-01

    The main objective of my research is to establish the mechanism and kinetics of metal-oxide film conversions involving large anions (I⁻, Br⁻, S²⁻). Within a given group, the anions will provide insight on the effect of anion size on the film conversion, while comparison of Group 6 and Group 7 anions will provide insight on the effect of anion charge. This research has a range of industrial applications; for example, hazardous radioiodine can be immobilized by reaction with Ag to yield AgI. From the perspective of public safety, radioiodine is one of the most important fission products of uranium fuel because of its large fuel inventory, high volatility, and radiological hazard. Additionally, because of its mobility, the gaseous iodine concentration is a critical parameter for safety assessment and post-accident management. A full kinetic analysis using electrochemical techniques has been performed on the conversion of Ag₂O to (1) AgI and (2) AgBr. (authors)

  16. Functional involvement of human discs large tumor suppressor in cytokinesis

    International Nuclear Information System (INIS)

    Unno, Kenji; Hanada, Toshihiko; Chishti, Athar H.

    2008-01-01

    Cytokinesis is the final step of cell division that completes the separation of two daughter cells. We found that the human discs large (hDlg) tumor suppressor homologue is functionally involved in cytokinesis. The guanylate kinase (GUK) domain of hDlg mediates the localization of hDlg to the midbody during cytokinesis, and over-expression of the GUK domain in U2OS and HeLa cells impaired cytokinesis. Mouse embryonic fibroblasts (MEFs) derived from dlg mutant mice contained an increased number of multinucleated cells and showed reduced proliferation in culture. A kinesin-like motor protein, GAKIN, which binds directly to the GUK domain of hDlg, exhibited a similar intracellular distribution pattern with hDlg throughout mitosis and localized to the midbody during cytokinesis. However, the targeting of hDlg and GAKIN to the midbody appeared to be independent of each other. The midbody localization of GAKIN required its functional kinesin-motor domain. Treatment of cells with the siRNA specific for hDlg and GAKIN caused formation of multinucleated cells and delayed cytokinesis. Together, these results suggest that hDlg and GAKIN play functional roles in the maintenance of midbody architecture during cytokinesis

  17. Model of large pool fires

    International Nuclear Information System (INIS)

    Fay, J.A.

    2006-01-01

    A two zone entrainment model of pool fires is proposed to depict the fluid flow and flame properties of the fire. Consisting of combustion and plume zones, it provides a consistent scheme for developing non-dimensional scaling parameters for correlating and extrapolating pool fire visible flame length, flame tilt, surface emissive power, and fuel evaporation rate. The model is extended to include grey gas thermal radiation from soot particles in the flame zone, accounting for emission and absorption in both optically thin and thick regions. A model of convective heat transfer from the combustion zone to the liquid fuel pool, and from a water substrate to cryogenic fuel pools spreading on water, provides evaporation rates for both adiabatic and non-adiabatic fires. The model is tested against field measurements of large scale pool fires, principally of LNG, and is generally in agreement with experimental values of all variables

  18. Bullying Prevention and the Parent Involvement Model

    Science.gov (United States)

    Kolbert, Jered B.; Schultz, Danielle; Crothers, Laura M.

    2014-01-01

    A recent meta-analysis of bullying prevention programs provides support for social-ecological theory, in which parent involvement addressing child bullying behaviors is seen as important in preventing school-based bullying. The purpose of this manuscript is to suggest how Epstein and colleagues' parent involvement model can be used as a…

  19. Abdominal War Wounds With Large Bowel Involvement: The Medina ...

    African Journals Online (AJOL)

    Marco Baldan (ICRC Regional Surgeon for Africa, ICRC Nairobi Regional Delegation). ... present their experience in the treatment of penetrating abdominal war wounds involving the colon in Medina Hospital. ... surgical care and "on the job training" to six "local surgeons" who were young ...

  20. Learning models of activities involving interacting objects

    DEFF Research Database (Denmark)

    Manfredotti, Cristina; Pedersen, Kim Steenstrup; Hamilton, Howard J.

    2013-01-01

    We propose the LEMAIO multi-layer framework, which makes use of hierarchical abstraction to learn models for activities involving multiple interacting objects from time sequences of data concerning the individual objects. Experiments in the sea navigation domain yielded learned models that were...... then successfully applied to activity recognition, activity simulation and multi-target tracking. Our method compares favourably with previously reported results using Hidden Markov Models and Relational Particle Filtering....

  1. Involving parents from the start: formative evaluation for a large ...

    African Journals Online (AJOL)

    While HIV prevention research conducted among adolescent populations may encounter parental resistance, the active engagement of parents from inception to trial completion may alleviate opposition. In preparation for implementing a large randomised controlled trial (RCT) examining the efficacy of a behavioural ...

  2. Very Large System Dynamics Models - Lessons Learned

    Energy Technology Data Exchange (ETDEWEB)

    Jacob J. Jacobson; Leonard Malczynski

    2008-10-01

    This paper provides lessons learned from developing several large system dynamics (SD) models. System dynamics modeling practice emphasizes the need to keep models small so that they are manageable and understandable. This practice is generally reasonable and prudent; however, there are times when large SD models are necessary. This paper outlines two large SD projects that were done at two Department of Energy National Laboratories, the Idaho National Laboratory and Sandia National Laboratories. This paper summarizes the models and then discusses some of the valuable lessons learned during these two modeling efforts.

  3. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  4. Regularization modeling for large-eddy simulation

    NARCIS (Netherlands)

    Geurts, Bernardus J.; Holm, D.D.

    2003-01-01

    A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of

  5. Models for large superconducting toroidal magnet systems

    International Nuclear Information System (INIS)

    Arendt, F.; Brechna, H.; Erb, J.; Komarek, P.; Krauth, H.; Maurer, W.

    1976-01-01

    Prior to the design of large GJ toroidal magnet systems it is appropriate to procure small scale models, which can simulate their pertinent properties and allow investigation of the relevant phenomena. The important feature of the model is to show under which circumstances the system performance can be extrapolated to large magnets. Based on parameters such as the maximum magnetic field, the current density, and the maximum tolerable magneto-mechanical stresses, a simple method of designing model magnets is presented. It is shown how pertinent design parameters change when the toroidal dimensions are altered. In addition some conductor cost estimations are given based on reactor power output and wall loading.

  6. Modeling human learning involved in car driving

    NARCIS (Netherlands)

    Wewerinke, P.H.

    1994-01-01

    In this paper, car driving is considered at the level of human tracking and maneuvering in the context of other traffic. A model analysis revealed the most salient features determining driving performance and safety. Learning car driving is modelled based on a system theoretical approach and on a neural network approach.

  7. Stochastic species abundance models involving special copulas

    Science.gov (United States)

    Huillet, Thierry E.

    2018-01-01

    Copulas offer a very general tool to describe the dependence structure of random variables supported by the hypercube. Inspired by problems of species abundances in biology, we study three distinct toy models where copulas play a key role. In the first, a Marshall-Olkin copula arises in a species extinction model with catastrophe. In the second, a quasi-copula problem arises in a flagged species abundance model. In the third, we study completely random species abundance models in the hypercube: those, not of product type, with uniform margins and a singular component. These can be understood from a singular copula supported by an inflated simplex. An exchangeable singular Dirichlet copula is also introduced, together with its induced completely random species abundance vector.
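
    For concreteness, the Marshall-Olkin copula mentioned in the first model can be sampled via the classical common-shock construction. The sketch below (stdlib only; the rates 1.0, 1.0, 2.0 are arbitrary illustrative values, not from the paper) checks that the margins are uniform and that the shared shock induces upper-tail dependence well above independence:

```python
import math
import random

def marshall_olkin_pair(l1, l2, l12, rng):
    """One draw (U, V) from the Marshall-Olkin survival copula via the
    common-shock construction: component i fails at the earlier of its
    own shock (rate l_i) and a shared shock (rate l12)."""
    z1, z2, z12 = rng.expovariate(l1), rng.expovariate(l2), rng.expovariate(l12)
    t1, t2 = min(z1, z12), min(z2, z12)
    # T_i ~ Exp(l_i + l12), so the survival transform yields Uniform(0,1) margins
    return math.exp(-(l1 + l12) * t1), math.exp(-(l2 + l12) * t2)

rng = random.Random(42)
sample = [marshall_olkin_pair(1.0, 1.0, 2.0, rng) for _ in range(20_000)]
mean_u = sum(u for u, _ in sample) / len(sample)
tail = sum(1 for u, v in sample if u > 0.9 and v > 0.9) / len(sample)
print(round(mean_u, 2))  # close to 0.5: margins are uniform
print(tail)              # around 0.07, far above the 0.01 of independent uniforms
```

    Note that the shared shock puts positive mass on the diagonal U = V, so this copula has a singular component, the same phenomenon the third model studies in a more general setting.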

  8. Constituent models and large transverse momentum reactions

    International Nuclear Information System (INIS)

    Brodsky, S.J.

    1975-01-01

    The discussion of constituent models and large transverse momentum reactions includes the structure of hard scattering models, dimensional counting rules for large transverse momentum reactions, dimensional counting and exclusive processes, the deuteron form factor, applications to inclusive reactions, predictions for meson and photon beams, the charge-cubed test for the e±p → e±γX asymmetry, the quasi-elastic peak in inclusive hadronic reactions, correlations, and the multiplicity bump at large transverse momentum. Also covered are the partition method for bound state calculations, proofs of dimensional counting, minimal neutralization and quark-quark scattering, the development of the constituent interchange model, and the A dependence of high transverse momentum reactions.

  9. Buried Waste Integrated Demonstration stakeholder involvement model

    International Nuclear Information System (INIS)

    Kaupanger, R.M.; Kostelnik, K.M.; Milam, L.M.

    1994-04-01

    The Buried Waste Integrated Demonstration (BWID) is a program funded by the US Department of Energy (DOE) Office of Technology Development. BWID supports the applied research, development, demonstration, and evaluation of a suite of advanced technologies that together form a comprehensive remediation system for the effective and efficient remediation of buried waste. Stakeholder participation in the DOE Environmental Management decision-making process is critical to remediation efforts. Appropriate mechanisms for communication with the public, private sector, regulators, elected officials, and others are being aggressively pursued by BWID to permit informed participation. This document summarizes public outreach efforts during FY-93 and presents a strategy for expanded stakeholder involvement during FY-94

  10. Modeling human learning involved in car driving

    OpenAIRE

    Wewerinke, P.H.

    1994-01-01

    In this paper, car driving is considered at the level of human tracking and maneuvering in the context of other traffic. A model analysis revealed the most salient features determining driving performance and safety. Learning car driving is modelled based on a system theoretical approach and based on a neural network approach. The aim of this research is to assess the relative merit of both approaches to describe human learning behavior in car driving specifically and in operating dynamic systems...

  11. Large Mammalian Animal Models of Heart Disease

    Directory of Open Access Journals (Sweden)

    Paula Camacho

    2016-10-01

    Due to the biological complexity of the cardiovascular system, animal models are an urgent pre-clinical need to advance our knowledge of cardiovascular disease and to explore new drugs to repair the damaged heart. Ideally, a model system should be inexpensive, easily manipulated, reproducible, a biological representative of human disease, and ethically sound. Although a larger animal model is more expensive and difficult to manipulate, its genetic, structural, functional, and even disease similarities to humans make it an ideal model to first consider. This review presents the commonly-used large animals (dog, sheep, pig, and non-human primates), while the less-used other large animals (cows, horses) are excluded. The review attempts to introduce unique points for each species regarding its biological properties, degree of susceptibility to developing certain types of heart disease, and methodology of induced conditions. For example, dogs barely develop myocardial infarction, while dilated cardiomyopathy develops quite often. Based on the similarities of each species to the human, the model selection may first consider non-human primates, then pig, sheep, and dog, but it also depends on other factors, for example, purposes, funding, ethics, and policy. We hope this review can serve as a basic outline of large animal models for cardiovascular researchers and clinicians.

  12. Managing large-scale models: DBS

    International Nuclear Information System (INIS)

    1981-05-01

    A set of fundamental management tools for developing and operating a large scale model and data base system is presented. Experience in operating and developing a large scale computerized system indicates that the only reasonable way to gain strong management control of such a system is to implement appropriate controls and procedures. Chapter I discusses the purpose of the book. Chapter II classifies a broad range of generic management problems into three groups: documentation, operations, and maintenance. First, system problems are identified; then solutions for gaining management control are discussed. Chapters III, IV, and V present practical methods for dealing with these problems. These methods were developed for managing SEAS but have general application for large scale models and data bases.

  13. Modeling capillary forces for large displacements

    NARCIS (Netherlands)

    Mastrangeli, M.; Arutinov, G.; Smits, E.C.P.; Lambert, P.

    2014-01-01

    Originally applied to the accurate, passive positioning of submillimetric devices, capillary self-alignment has recently been shown to be effective also for larger components and relatively large initial offsets. In this paper, we describe an analytic quasi-static model of 1D capillary restoring forces.

  14. Pronunciation Modeling for Large Vocabulary Speech Recognition

    Science.gov (United States)

    Kantor, Arthur

    2010-01-01

    The large pronunciation variability of words in conversational speech is one of the major causes of low accuracy in automatic speech recognition (ASR). Many pronunciation modeling approaches have been developed to address this problem. Some explicitly manipulate the pronunciation dictionary as well as the set of the units used to define the…

  15. Generation and analysis of large reliability models

    Science.gov (United States)

    Palumbo, Daniel L.; Nicol, David M.

    1990-01-01

    An effort has been underway for several years at NASA's Langley Research Center to extend the capability of Markov modeling techniques for reliability analysis to the designers of highly reliable avionic systems. This effort has been focused in the areas of increased model abstraction and increased computational capability. The reliability model generator (RMG), a software tool which uses as input a graphical, object-oriented block diagram of the system, is discussed. RMG uses an automated failure modes-effects analysis algorithm to produce the reliability model from the graphical description. Also considered is the ASSURE software tool, a parallel processing program which uses the ASSIST modeling language and SURE semi-Markov solution technique. An executable failure modes-effects analysis is used by ASSURE. The successful combination of the power of graphical representation, automated model generation, and parallel computation leads to the conclusion that large system architectures can now be analyzed.

  16. Dark radiation in LARGE volume models

    Science.gov (United States)

    Cicoli, Michele; Conlon, Joseph P.; Quevedo, Fernando

    2013-02-01

    We consider reheating driven by volume modulus decays in the LARGE volume scenario. Such reheating always generates nonzero dark radiation through the decays to the axion partner, while the only competitive visible sector decays are Higgs pairs via the Giudice-Masiero term. In the framework of sequestered models where the cosmological moduli problem is absent, the simplest model with a shift-symmetric Higgs sector generates 1.56≤ΔNeff≤1.74. For more general cases, the known experimental bounds on ΔNeff strongly constrain the parameters and matter content of the models.

  17. Using a Person-Environment Fit Model to Predict Job Involvement and Organizational Commitment.

    Science.gov (United States)

    Blau, Gary J.

    1987-01-01

    Using a sample of registered nurses (N=228) from a large urban hospital, this longitudinal study tested the applicability of a person-environment fit model for predicting job involvement and organizational commitment. Results indicated the proposed person-environment fit model is useful for predicting job involvement, but not organizational…

  18. Modelling and control of large cryogenic refrigerator

    International Nuclear Information System (INIS)

    Bonne, Francois

    2014-01-01

    This manuscript is concerned with both the modeling and the derivation of control schemes for large cryogenic refrigerators, in particular those subjected to highly variable pulsed heat loads. A model of each object that normally composes a large cryo-refrigerator is proposed, together with a methodology for assembling object models into the model of a subsystem; the manuscript also shows how to obtain a linear equivalent model of a subsystem. Based on the derived models, advanced control schemes are proposed: a linear quadratic controller for warm compression stations working with either two or three pressure states, and a constrained predictive controller for the cold box. The particularity of these control schemes is that they fit the computing and data storage capabilities of the programmable logic controllers (PLCs) widely used in industry. The open-loop prediction capability of the models is assessed using experimental data. The developed control schemes are validated in simulation and experimentally on the 400 W at 1.8 K SBT cryogenic test facility and on CERN's LHC warm compression station. (author) [fr

  19. Computational Modeling of Large Wildfires: A Roadmap

    KAUST Repository

    Coen, Janice L.

    2010-08-01

    Wildland fire behavior, particularly that of large, uncontrolled wildfires, has not been well understood or predicted. Our methodology to simulate this phenomenon uses high-resolution dynamic models made of numerical weather prediction (NWP) models coupled to fire behavior models to simulate fire behavior. NWP models are capable of modeling very high resolution (< 100 m) atmospheric flows. The wildland fire component is based upon semi-empirical formulas for fireline rate of spread, post-frontal heat release, and a canopy fire. The fire behavior is coupled to the atmospheric model such that low level winds drive the spread of the surface fire, which in turn releases sensible heat, latent heat, and smoke fluxes into the lower atmosphere, feeding back to affect the winds directing the fire. These coupled dynamic models capture the rapid spread downwind, flank runs up canyons, bifurcations of the fire into two heads, and rough agreement in area, shape, and direction of spread at periods for which fire location data is available. Yet, intriguing computational science questions arise in applying such models in a predictive manner, including physical processes that span a vast range of scales, processes such as spotting that cannot be modeled deterministically, estimating the consequences of uncertainty, the efforts to steer simulations with field data ("data assimilation"), lingering issues with short term forecasting of weather that may show skill only on the order of a few hours, and the difficulty of gathering pertinent data for verification and initialization in a dangerous environment. © 2010 IEEE.
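The two-way coupling described above (low-level winds drive the spread of the surface fire, whose heat release in turn strengthens the winds) can be caricatured in a few lines. All constants and the linear laws below are invented for illustration and bear no relation to the NWP-based coupled models discussed:

```python
# Toy fire-atmosphere feedback loop (illustrative only): wind sets the
# fireline rate of spread via a semi-empirical linear law, and the heat
# released by the spreading fire accelerates the local inflow wind.

def coupled_spread(u0=3.0, hours=5, alpha=0.4, gamma=0.05):
    """u0: ambient wind (m/s); alpha: spread per unit wind (km/h per m/s);
    gamma: feedback strength of spread rate on wind. Made-up constants."""
    wind, front = u0, 0.0
    history = []
    for _ in range(hours):
        rate = alpha * wind                # semi-empirical: spread ~ wind
        front += rate                      # advance fireline position (km)
        wind = u0 + gamma * rate * wind    # heat flux strengthens the inflow
        history.append((round(front, 2), round(wind, 2)))
    return history

if __name__ == "__main__":
    for front, wind in coupled_spread():
        print(front, wind)
```

Even this caricature reproduces the qualitative point of the abstract: with feedback enabled, spread accelerates beyond what the ambient wind alone would give.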

  20. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications

  1. On spinfoam models in large spin regime

    International Nuclear Information System (INIS)

    Han, Muxin

    2014-01-01

    We study the semiclassical behavior of the Lorentzian Engle–Pereira–Rovelli–Livine (EPRL) spinfoam model by taking into account the sum over spins in the large spin regime. We employ the method of stationary phase analysis with parameters and the so-called almost-analytic machinery in order to find the asymptotic behavior of the contributions from all possible large spin configurations in the spinfoam model. The spins contributing to the sum are written as J_f = λj_f, where λ is a large parameter, resulting in an asymptotic expansion via stationary phase approximation. The analysis shows that, at least for simplicial Lorentzian geometries (as spinfoam critical configurations), they contribute the leading-order approximation of the spinfoam amplitude only when their deficit angles satisfy γΘ̊_f ≤ λ^(−1/2) mod 4πZ. Our analysis results in a curvature expansion of the semiclassical low-energy effective action from the spinfoam model, where the UV modifications of Einstein gravity appear as subleading high-curvature corrections. (paper)
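The λ expansion referred to above rests on the stationary phase method; for orientation, the textbook multidimensional leading order (not the paper's refined almost-analytic version) reads:

```latex
\int_{\mathbb{R}^n} a(x)\, e^{i\lambda S(x)}\, \mathrm{d}x
\;\sim\;
\left(\frac{2\pi}{\lambda}\right)^{n/2}
\sum_{x_c \,:\, \nabla S(x_c)=0}
\frac{a(x_c)\, e^{i\lambda S(x_c)}\, e^{\frac{i\pi}{4}\,\operatorname{sgn} H(x_c)}}
     {\lvert \det H(x_c)\rvert^{1/2}}
\left[\, 1 + O(\lambda^{-1}) \,\right],
```

where \(H(x_c)\) is the Hessian of the phase \(S\) at the critical point \(x_c\) and \(\operatorname{sgn}\) its signature; the sum over critical points is the analogue of the sum over spinfoam critical configurations in the abstract.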

  2. Iron Malabsorption in a Patient With Large Cell Lymphoma Involving the Duodenum

    Science.gov (United States)

    1992-01-01

    (Abstract garbled in the scanned source; only fragments are recoverable: iron malabsorption compounded the anemia in a patient with diffuse large cell lymphoma involving the duodenum, and the presenting symptoms of such lymphomas mimic those of celiac disease.)

  3. Modelling large-scale hydrogen infrastructure development

    International Nuclear Information System (INIS)

    De Groot, A.; Smit, R.; Weeda, M.

    2005-08-01

    In modelling a possible H2 infrastructure development the following questions are answered in this presentation: How could the future demand for H2 develop in the Netherlands?; and In which year and where would it be economically viable to construct a H2 infrastructure in the Netherlands? Conclusions are that: A model for describing a possible future H2 infrastructure is successfully developed; The model is strongly regional and time dependent; Decrease of fuel cell cost appears to be a sensitive parameter for development of H2 demand; Cost-margin between large-scale and small-scale H2 production is a main driver for development of a H2 infrastructure; A H2 infrastructure seems economically viable in the Netherlands starting from the year 2022

  4. Large animal models for stem cell therapy.

    Science.gov (United States)

    Harding, John; Roberts, R Michael; Mirochnitchenko, Oleg

    2013-03-28

    The field of regenerative medicine is approaching translation to clinical practice, and significant safety concerns and knowledge gaps have become clear as clinical practitioners are considering the potential risks and benefits of cell-based therapy. It is necessary to understand the full spectrum of stem cell actions and preclinical evidence for safety and therapeutic efficacy. The role of animal models for gaining this information has increased substantially. There is an urgent need for novel animal models to expand the range of current studies, most of which have been conducted in rodents. Extant models are providing important information but have limitations for a variety of disease categories and can have different size and physiology relative to humans. These differences can preclude the ability to reproduce the results of animal-based preclinical studies in human trials. Larger animal species, such as rabbits, dogs, pigs, sheep, goats, and non-human primates, are better predictors of responses in humans than are rodents, but in each case it will be necessary to choose the best model for a specific application. There is a wide spectrum of potential stem cell-based products that can be used for regenerative medicine, including embryonic and induced pluripotent stem cells, somatic stem cells, and differentiated cellular progeny. The state of knowledge and availability of these cells from large animals vary among species. In most cases, significant effort is required for establishing and characterizing cell lines, comparing behavior to human analogs, and testing potential applications. Stem cell-based therapies present significant safety challenges, which cannot be addressed by traditional procedures and require the development of new protocols and test systems, for which the rigorous use of larger animal species more closely resembling human behavior will be required. 
In this article, we discuss the current status and challenges of and several major directions

  5. [Diffuse large B-cell lymphoma with primary involvement of mediastinal lymph nodes: diagnosis and treatment].

    Science.gov (United States)

    Mangasarova, Ia K; Magomedova, A U; Kravchenko, S K; Zvonkov, E E; Kremenetskaia, A M; Vorob'ev, V I; Mar'in, D S; Gubkin, A V; Skidan, N I; Kaplanskaia, I B; Vorob'ev, I A; Samoĭlova, R S; Vorob'ev, A I

    2010-01-01

    To diagnose diffuse large B-cell lymphosarcoma (DLBCLS) with primary involvement of the mediastinal lymph nodes (LN) and to evaluate the efficiency of aggressive polychemotherapy (PCT). The study included 15 patients (6 men and 9 women aged 18 to 70 years; median 38 years) followed up at the Hematology Research Center, Russian Academy of Medical Sciences, in 2004 to 2009. Three and 12 patients had Stages II and IE DLBCLS, respectively. B symptoms were found in 14 (93.4%) patients. Increased lactate dehydrogenase (LDH) concentrations were detectable in 14 (93.4%) patients; tumors of 10 cm or more (bulky disease) were seen in 11 (73.3%). Enlarged cervical, supraclavicular, and axillary lymph nodes were found in 9 (60%) patients; lung involvement via extension in 9 (60%); invasion into the pericardium in 5 (33.3%); and invasion into the soft tissues of the anterior thoracic wall in (13.3%). There were no signs of involvement of extranodal organs at the moment of diagnosis. All 15 patients received PCT according to the modified NHL-BFM-90 program: 4 to 6 courses depending on the response to therapy; 10 (66.6%) and 5 (33.3%) patients had 4 and 6 courses, respectively. For consolidation purposes, 11 (78.5%) patients were prescribed radiotherapy applied to the mediastinum at a cumulative dose of 36 Gy because they had a residual mass. Thirteen (86.6%) patients achieved a complete remission (CR). Primary PCT resistance was confirmed in one case. Another patient was stated to have near-complete remission. No recurrences were noted during the follow-up. The mean CR duration was 24.5 (range 2-49) months. DLBCLS with primary mediastinal LN involvement is a distinct nosological entity to be differentiated from primary mediastinal large B-cell lymphosarcoma. In most cases, DLBCLS shows signs of a poor prognosis, which makes aggressive PCT necessary.

  6. Black holes from large N singlet models

    Science.gov (United States)

    Amado, Irene; Sundborg, Bo; Thorlacius, Larus; Wintergerst, Nico

    2018-03-01

    The emergent nature of spacetime geometry and black holes can be directly probed in simple holographic duals of higher spin gravity and tensionless string theory. To this end, we study time dependent thermal correlation functions of gauge invariant observables in suitably chosen free large N gauge theories. At low temperature and on short time scales the correlation functions encode propagation through an approximate AdS spacetime while interesting departures emerge at high temperature and on longer time scales. This includes the existence of evanescent modes and the exponential decay of time dependent boundary correlations, both of which are well known indicators of bulk black holes in AdS/CFT. In addition, a new time scale emerges after which the correlation functions return to a bulk thermal AdS form up to an overall temperature dependent normalization. A corresponding length scale was seen in equal time correlation functions in the same models in our earlier work.

  7. Comparing the performance of SIMD computers by running large air pollution models

    DEFF Research Database (Denmark)

    Brown, J.; Hansen, Per Christian; Wasniewski, J.

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on these computers. Using a realistic large-scale model, we gained detailed insight into the performance of the computers involved when used to solve large-scale scientific problems that involve several types of numerical computations. The computers used in our study are the Connection Machines CM-200 and CM-5, and the MasPar MP-2216.

  8. Competency modeling targeted on promotion of organizations towards VO involvement

    NARCIS (Netherlands)

    Ermilova, E.; Afsarmanesh, H.

    2008-01-01

    During the last decades, a number of models have been introduced in research addressing different perspectives on organizations' competencies in collaborative networks. This paper introduces the "4C-model", developed to address the competencies of organizations involved in Virtual organizations Breeding…

  9. On the modelling of microsegregation in steels involving thermodynamic databases

    International Nuclear Information System (INIS)

    You, D; Bernhard, C; Michelic, S; Wieser, G; Presoly, P

    2016-01-01

    A microsegregation model based on Ohnaka's model and involving a thermodynamic database is proposed. In the model, the thermodynamic database is used for the equilibrium calculations, and multicomponent alloy effects on partition coefficients and equilibrium temperatures are accounted for. Microsegregation and partition coefficients calculated using different databases exhibit significant differences. The segregated concentrations predicted using the optimized database are in good agreement with the measured inter-dendritic concentrations. (paper)
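Ohnaka's model corrects the classical Scheil–Gulliver limit for back diffusion in the solid. As a minimal illustration of how a partition coefficient k controls inter-dendritic enrichment, here is a sketch of the Scheil limit only; k and the nominal composition are made-up constants, whereas in the model above k comes from a thermodynamic database and varies with composition:

```python
# Scheil-Gulliver microsegregation (no back-diffusion), the limiting case
# that Ohnaka-type models refine. Constant partition coefficient k is an
# illustrative simplification; database-driven models compute k locally.

def scheil_liquid_conc(c0, k, fs):
    """Liquid concentration after solid fraction fs has formed."""
    return c0 * (1.0 - fs) ** (k - 1.0)

def scheil_solid_conc(c0, k, fs):
    """Concentration of the solid forming at solid fraction fs."""
    return k * scheil_liquid_conc(c0, k, fs)

if __name__ == "__main__":
    c0, k = 0.8, 0.34   # invented nominal solute content (wt%) and k < 1
    for fs in (0.0, 0.5, 0.9):
        print(fs, scheil_solid_conc(c0, k, fs))
```

With k < 1 the solute is rejected into the liquid, so the last solid to form (fs → 1) is strongly enriched — the inter-dendritic segregation the abstract's measurements probe.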

  10. Pituitary and adrenal involvement in diffuse large B-cell lymphoma, with recovery of their function after chemotherapy

    OpenAIRE

    Nakashima, Yasuhiro; Shiratsuchi, Motoaki; Abe, Ichiro; Matsuda, Yayoi; Miyata, Noriyuki; Ohno, Hirofumi; Ikeda, Motohiko; Matsushima, Takamitsu; Nomura, Masatoshi; Takayanagi, Ryoichi

    2013-01-01

    Background Diffuse large B-cell lymphoma sometimes involves the endocrine organs, but involvement of both the pituitary and adrenal glands is extremely rare. Involvement of these structures can lead to hypopituitarism and adrenal insufficiency, and subsequent recovery of their function is rarely seen. The present report describes an extremely rare case of pituitary and adrenal diffuse large B-cell lymphoma presenting with hypopituitarism and adrenal insufficiency with subsequent recovery of p...

  11. Questionnaire: involved actors in large disused components management - Summary Of Responses To The Questionnaire

    International Nuclear Information System (INIS)

    2012-01-01

    The aim of the Questionnaire is to establish an overview of the various bodies [Actors] that have responsibilities or input regarding large component decommissioning. The answers are intended to cover the overall organisation, with emphasis on the parts most relevant to large components, and to reflect areas from site operations to decommissioning as well as the wider issue of disposal at another location. The Questionnaire covers the following points: 1 - What is the country's (institutional) structure for decommissioning? 2 - Who does what, and where do the responsibilities lie? 3 - Which bodies have responsibility for onsite safety regulation, discharges and disposal? 4 - Which body(s) owns the facilities? 5 - Describe the responsibilities for funding of the decommissioning plan and disposal plan; are they one and the same body? Whilst there are differences between countries, there are some common threads. Regulation is through the state, though the number of regulators involved may vary; in summary, the IAEA principles concerning independence of the regulatory body are followed. Funding arrangements vary, but plans exist. Similarly, ownership of facilities is a mix of state and private. Some systems require a separate decommissioning licence, with Spain having the clearest demarcation of responsibilities for the decommissioning phase and waste management responsibilities.

  12. The contribution of advisory committees and public involvement to large studies: case study

    Directory of Open Access Journals (Sweden)

    Tew Jerry

    2010-12-01

    Full Text Available Abstract Background Many large studies have complex advisory committee structures, yet there is no empirical evidence regarding their optimal composition, scope and contribution. The aim of this study was to inform the committee and advice infrastructure for future research studies. Methods In the context of a five-year study funded by the UK National Institute for Health Research, three advisory committees were formed. In addition, advice was obtained from individual experts. All recommendations received in the start-up phase (first seven months of the study were recorded, along with the decision about implementation of the recommendation. A particular focus was on the impact of public involvement. Results A total of 172 recommendations were made, including 70 from 20 individual experts. The recommendations were grouped into five emergent themes: Scientific, Pragmatic, Resources, Committee and Collaboration. Most recommendations related to strengthening existing components or adding new components to the study protocol. Very few recommendations either proposed removing study components or contradicted other recommendations. Three 'implementation criteria' were identified: scientific value, pragmatic feasibility, and paradigmatic consistency. 103 (60% of recommendations were implemented and 25 (15% were not implemented. The benefits identified by the research team were improved quality and confidence, and the costs were increased cognitive demands, protocol revision time, and slower progress. Conclusions The findings are discussed in the context of the wider literature on public involvement in research. Six recommendations are identified. First, have a clear rationale for each advisory committee expressed as terms of reference, and consider the best balance between committees and individual consultation with experts. Second, an early concern of committees is inter-committee communication, so consider cross-representation and copying minutes

  13. Acquisition Integration Models: How Large Companies Successfully Integrate Startups

    Directory of Open Access Journals (Sweden)

    Peter Carbone

    2011-10-01

    Full Text Available Mergers and acquisitions (M&A have been popular means for many companies to address the increasing pace and level of competition that they face. Large companies have pursued acquisitions to more quickly access technology, markets, and customers, and this approach has always been a viable exit strategy for startups. However, not all deals deliver the anticipated benefits, in large part due to poor integration of the acquired assets into the acquiring company. Integration can greatly impact the success of the acquisition and, indeed, the combined company’s overall market success. In this article, I explore the implementation of several integration models that have been put into place by a large company and extract principles that may assist negotiating parties with maximizing success. This perspective may also be of interest to smaller companies as they explore exit options while trying to ensure continued market success after acquisition. I assert that business success with acquisitions is dependent on an appropriate integration model, but that asset integration is not formulaic. Any integration effort must consider the specific market context and personnel involved.

  14. Simulation of large-scale rule-based models

    Energy Technology Data Exchange (ETDEWEB)

    Hlavacek, William S [Los Alamos National Laboratory; Monnie, Michael I [Los Alamos National Laboratory; Colvin, Joshua [NON LANL; Faseder, James [NON LANL

    2008-01-01

    Interactions of molecules, such as signaling proteins, with multiple binding sites and/or multiple sites of post-translational covalent modification can be modeled using reaction rules. Rules comprehensively, but implicitly, define the individual chemical species and reactions that molecular interactions can potentially generate. Although rules can be automatically processed to define a biochemical reaction network, the network implied by a set of rules is often too large to generate completely or to simulate using conventional procedures. To address this problem, we present DYNSTOC, a general-purpose tool for simulating rule-based models. DYNSTOC implements a null-event algorithm for simulating chemical reactions in a homogeneous reaction compartment. The simulation method does not require that a reaction network be specified explicitly in advance, but rather takes advantage of the availability of the reaction rules in a rule-based specification of a network to determine whether a randomly selected set of molecular components participates in a reaction during a time step. DYNSTOC reads reaction rules written in the BioNetGen language (BNGL), which is useful for modeling protein-protein interactions involved in signal transduction. The method of DYNSTOC is closely related to that of STOCHSIM; DYNSTOC differs from STOCHSIM by allowing for model specification in terms of BNGL, which extends the range of protein complexes that can be considered in a model. DYNSTOC enables the simulation of rule-based models that cannot be simulated by conventional methods. We demonstrate the ability of DYNSTOC to simulate models accounting for multisite phosphorylation and multivalent binding processes that are characterized by large numbers of reactions. DYNSTOC is free for non-commercial use. The C source code, supporting documentation and example input files are available at .
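The null-event idea described above can be shown in a minimal sketch, in the spirit of STOCHSIM/DYNSTOC but not their implementation: each step selects individual molecules at random and fires a reaction only when the selection matches a rule; all other selections are "null events". The single rule A + B → C, the reaction probability, and the molecule counts are invented for the example:

```python
import random

# Minimal null-event stochastic simulation (sketch only): molecules are
# tracked individually; a time step picks two at random and fires the
# rule A + B -> C with probability p_react, otherwise it is a null event.

def simulate(n_a, n_b, p_react, steps, seed=0):
    rng = random.Random(seed)
    pool = ["A"] * n_a + ["B"] * n_b   # individual molecules, as in STOCHSIM
    n_c = 0
    for _ in range(steps):
        if len(pool) < 2:
            break
        i, j = rng.sample(range(len(pool)), 2)
        # rule check: only an (A, B) pair can react; everything else is null
        if {pool[i], pool[j]} == {"A", "B"} and rng.random() < p_react:
            for idx in sorted((i, j), reverse=True):
                pool.pop(idx)
            n_c += 1
    return pool.count("A"), pool.count("B"), n_c

if __name__ == "__main__":
    a, b, c = simulate(100, 100, 0.5, 20_000)
    print(a, b, c)   # A and B deplete together; mass is conserved
```

The per-step cost is independent of the number of possible reactions, which is why this family of methods scales to rule sets whose implied networks are too large to enumerate.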

  15. Environmental Management Model for Road Maintenance Operation Involving Community Participation

    Science.gov (United States)

    Triyono, A. R. H.; Setyawan, A.; Sobriyah; Setiono, P.

    2017-07-01

    Public expectations in Central Java for the fulfillment of demand, especially for road infrastructure, are very high, as reflected in the number of complaints and expectations expressed via Twitter, short message service (SMS), e-mail, and public reports in various media. The Highways Department of Central Java Province therefore requires a model of environmental management for routine road maintenance that involves the community, so that roads remain in representative condition and serve road users safely and comfortably. This study used a survey method with SEM and SWOT analyses, with the latent independent variables (X): community participation in road regulation, development, construction and supervision (PSM); public behavior in road utilization (PMJ); provincial road service (PJP); safety on provincial roads (KJP); and integrated management system (SMT); and the latent dependent variable (Y): routine maintenance of provincial roads integrated with an environmental management system involving community participation (MML). The results showed that routine road maintenance in Central Java Province has not yet implemented environmental management involving the community. An environmental management model was therefore developed, with the results: H1: community participation (PSM) has a positive influence on the environmental management model (MML); H2: public behavior in road utilization (PMJ) has a positive effect on MML; H3: provincial road service (PJP) has a positive effect on MML; H4: safety on provincial roads (KJP) has a positive effect on MML; H5: the integrated management system (SMT) has a positive influence on MML. From the analysis, a model formulation was obtained describing the relationship/influence of the independent variables PSM, PMJ, PJP, KJP, and SMT on the dependent variable

  16. Random Coefficient Logit Model for Large Datasets

    NARCIS (Netherlands)

    C. Hernández-Mireles (Carlos); D. Fok (Dennis)

    2010-01-01

    We present an approach for analyzing market shares and product price elasticities based on large datasets containing aggregate sales data for many products, several markets, and relatively long time periods. We consider the recently proposed Bayesian approach of Jiang et al [Jiang,
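A random coefficient logit evaluates market shares by averaging ordinary logit shares over draws of heterogeneous coefficients. The Monte Carlo sketch below is generic, with invented mean utilities, prices, and a normally distributed price coefficient; it is not the authors' Bayesian estimator:

```python
import math
import random

# Monte Carlo market shares under a random-coefficient (mixed) logit:
# each consumer draw has its own price sensitivity beta, and shares are
# averaged over draws. All inputs here are invented for illustration.

def mixed_logit_shares(mean_utils, prices, beta_mean, beta_sd,
                       draws=5000, seed=1):
    rng = random.Random(seed)
    n = len(mean_utils)
    shares = [0.0] * n
    for _ in range(draws):
        beta = rng.gauss(beta_mean, beta_sd)   # heterogeneous price coefficient
        v = [u + beta * p for u, p in zip(mean_utils, prices)]
        m = max(v)                             # subtract max for stability
        e = [math.exp(x - m) for x in v]
        denom = sum(e)
        for j in range(n):
            shares[j] += e[j] / denom / draws
    return shares

if __name__ == "__main__":
    s = mixed_logit_shares([1.0, 0.5, 0.0], [2.0, 1.0, 0.0], -0.5, 0.2)
    print(s, sum(s))   # shares are positive and sum to one
```

Averaging over draws is what lets the model escape the restrictive substitution patterns of plain logit; the computational burden of doing this for many products and markets is the scaling problem the abstract addresses.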

  17. The Cortlandt complex: evidence for large-scale liquid immiscibility involving granodiorite and diorite magmas

    Science.gov (United States)

    Bender, J. F.; Hanson, G. N.; Bence, A. E.

    1982-05-01

    Granodiorite and diorite plutons of the Rosetown complex, N.Y., which are associated with the nearby Cortlandt complex, have chemical and textural characteristics indicating that large-scale liquid immiscibility played a major role in their petrogenesis. Rare earth element, zirconium, niobium and phosphorus abundances are much greater in the diorite, precluding the possibility that the Rosetown diorite and granodiorite are related by fractional crystallization. The trace element data also eliminate the possibility that the granodiorite represents: (1) a partial melt of crustal rocks, including basalt; (2) a granitic cumulate; or (3) a residue from an aqueous fluid derived either from a silicate melt or crustal rocks. Liquid immiscibility thus appears to be a viable model for the origin of the Rosetown granodiorite and iron-rich diorite. This model is supported by the following: (1) the major element compositions occur in a two-liquid field on a Greig diagram; (2) both bodies have similar Sr isotope compositions; (3) common phases in the two rock types have overlapping compositions; (4) the major and trace element data of the diorite and granodiorite are similar to the experimentally determined partition data of immiscible liquid pairs; and (5) possible ocelli of iron-rich diorite are found in the granodiorite.

  18. Modeling of modification experiments involving neutral-gas release

    International Nuclear Information System (INIS)

    Bernhardt, P.A.

    1983-01-01

    Many experiments involve the injection of neutral gases into the upper atmosphere. Examples are critical velocity experiments, MHD wave generation, ionospheric hole production, plasma striation formation, and ion tracing. Many of these experiments are discussed in other sessions of the Active Experiments Conference. This paper limits its discussion to: (1) the modeling of the neutral gas dynamics after injection, (2) subsequent formation of ionosphere holes, and (3) use of such holes as experimental tools

  19. Fires involving radioactive materials : transference model; operative recommendations

    International Nuclear Information System (INIS)

    Rodriguez, C.E.; Puntarulo, L.J.; Canibano, J.A.

    1988-01-01

    In all aspects related to the nuclear activity, the occurrence of an explosion, fire or burst type accident, with or without victims, is directly related to the characteristics of the site. The present work analyses the different parameters involved, describing a transference model and recommendations for evaluation and control of the radiological risk for firemen. Special emphasis is placed on the measurement of the variables existing in this kind of operations

  20. Patterns of failure of diffuse large B-cell lymphoma patients after involved-site radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Holzhaeuser, Eva; Berlin, Maximilian; Bezold, Thomas; Mayer, Arnulf; Schmidberger, Heinz [University Medical Center Mainz, Department of Radiation Oncology and Radiotherapy, Mainz (Germany); Wollschlaeger, Daniel [University Medical Center Mainz, Institute for Medical Biostatistics, Epidemiology and Informatics, Mainz (Germany); Hess, Georg [University Medical Center Mainz, Department of Internal Medicine, Mainz (Germany)

    2017-12-15

    Radiotherapy (RT) in combination with chemoimmunotherapy is highly efficient in the treatment of diffuse large B-cell lymphoma (DLBCL). This retrospective analysis evaluated the efficacy of the treatment volume and the dose concept of involved-site RT (ISRT). We identified 60 histologically confirmed stage I-IV DLBCL patients treated with multimodal cytotoxic chemoimmunotherapy and followed by consolidative ISRT from 2005-2015. Progression-free survival (PFS) and overall survival (OS) were estimated by Kaplan-Meier method. Univariate analyses were performed by log-rank test and Mann-Whitney U-test. After initial chemoimmunotherapy (mostly R-CHOP; rituximab, cyclophosphamide, doxorubicin, vincristine, and prednisolone), 19 (36%) patients achieved complete response (CR), 34 (64%) partial response (PR) or less. Excluded were 7 (12%) patients with progressive disease after chemoimmunotherapy. All patients underwent ISRT with a dose of 40 Gy. After a median follow-up of 44 months, 79% of the patients remained disease free, while 21% presented with failure, progressive systemic disease, or death. All patients who achieved CR after chemoimmunotherapy remained in CR. Of the patients achieving PR after chemotherapy only 2 failed at the initial site within the ISRT volume. No marginal relapse was observed. Ann Arbor clinical stage I/II showed significantly improved PFS compared to stage III/IV (93% vs 65%; p ≤ 0.021). International Prognostic Index (IPI) score of 0 or 1 compared to 2-5 has been associated with significantly increased PFS (100% vs 70%; p ≤ 0.031). Postchemoimmunotherapy status of CR compared to PR was associated with significantly increased PFS (100% vs 68%; p ≤ 0.004) and OS (100% vs 82%; p ≤ 0.026). Only 3 of 53 patients developed grade II late side effects, whereas grade III or IV side effects have not been observed. These data suggest that a reduction of the RT treatment volume from involved-field (IF) to involved-site (IS) is sufficient because
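Progression-free and overall survival in the study above are estimated with the Kaplan–Meier method. The product-limit sketch below, run on invented follow-up data, illustrates the estimator only; it is not the study's analysis code:

```python
# Minimal Kaplan-Meier product-limit estimator. At each event time t the
# survival curve is multiplied by (1 - deaths_at_t / number_at_risk);
# censored subjects leave the risk set without an event.

def kaplan_meier(times, events):
    """times: follow-up (e.g. months); events: 1 = progression/death, 0 = censored.
    Returns [(time, survival probability)] at each event time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)   # events at time t
        m = sum(1 for tt, _ in data if tt == t)   # all leaving risk set at t
        if d > 0:
            surv *= 1.0 - d / n_at_risk
            curve.append((t, surv))
        n_at_risk -= m
        i += m
    return curve

if __name__ == "__main__":
    # invented follow-up data: (months, event indicator)
    print(kaplan_meier([6, 13, 21, 30, 31, 37], [1, 0, 1, 0, 1, 1]))
```

Log-rank comparisons such as stage I/II versus III/IV in the abstract are then tests of whether two such curves differ more than chance would allow.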

  1. Solutions of large-scale electromagnetics problems involving dielectric objects with the parallel multilevel fast multipole algorithm.

    Science.gov (United States)

    Ergül, Özgür

    2011-11-01

    Fast and accurate solutions of large-scale electromagnetics problems involving homogeneous dielectric objects are considered. Problems are formulated with the electric and magnetic current combined-field integral equation and discretized with the Rao-Wilton-Glisson functions. Solutions are performed iteratively by using the multilevel fast multipole algorithm (MLFMA). For the solution of large-scale problems discretized with millions of unknowns, MLFMA is parallelized on distributed-memory architectures using a rigorous technique, namely, the hierarchical partitioning strategy. Efficiency and accuracy of the developed implementation are demonstrated on very large problems involving as many as 100 million unknowns.

  2. Holonic Modelling of Large Scale Geographic Environments

    Science.gov (United States)

    Mekni, Mehdi; Moulin, Bernard

    In this paper, we propose a novel approach to model Virtual Geographic Environments (VGE) which uses the holonic approach as a computational geographic methodology and holarchy as the organizational principle. Our approach allows us to automatically build VGE using data provided by Geographic Information Systems (GIS) and enables an explicit representation of the geographic environment for Situated Multi-Agent Systems (SMAS) in which agents are situated and with which they interact. In order to take into account the geometric, topological, and semantic characteristics of the geographic environment, we propose the use of the holonic approach to build the environment holarchy. We illustrate our holonic model using two different environments: an urban environment and a natural environment.

  3. Approximate Model Checking of PCTL Involving Unbounded Path Properties

    Science.gov (United States)

    Basu, Samik; Ghosh, Arka P.; He, Ru

    We study the problem of applying statistical methods for approximate model checking of probabilistic systems against properties encoded as PCTL formulas. Such approximate methods have been proposed primarily to deal with state-space explosion that makes the exact model checking by numerical methods practically infeasible for large systems. However, the existing statistical methods either consider a restricted subset of PCTL, specifically, the subset that can only express bounded until properties; or rely on user-specified finite bound on the sample path length. We propose a new method that does not have such restrictions and can be effectively used to reason about unbounded until properties. We approximate probabilistic characteristics of an unbounded until property by that of a bounded until property for a suitably chosen value of the bound. In essence, our method is a two-phase process: (a) the first phase is concerned with identifying the bound k 0; (b) the second phase computes the probability of satisfying the k 0-bounded until property as an estimate for the probability of satisfying the corresponding unbounded until property. In both phases, it is sufficient to verify bounded until properties which can be effectively done using existing statistical techniques. We prove the correctness of our technique and present its prototype implementations. We empirically show the practical applicability of our method by considering different case studies including a simple infinite-state model, and large finite-state models such as IPv4 zeroconf protocol and dining philosopher protocol modeled as Discrete Time Markov chains.
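The two-phase scheme described in this abstract can be illustrated with a minimal Monte Carlo sketch on a toy discrete-time Markov chain (the chain, thresholds, and sample sizes below are illustrative assumptions, not the authors' benchmarks): phase (a) grows the bound k until the estimated bounded-until probability stabilizes, and phase (b) reports that bounded estimate as the approximation for the unbounded until property.

```python
import random

random.seed(0)

# Toy DTMC: a random walk on {0..4}; state 4 is the goal, state 0 is absorbing failure.
P = {
    0: [(0, 1.0)],
    1: [(0, 0.5), (2, 0.5)],
    2: [(1, 0.5), (3, 0.5)],
    3: [(2, 0.5), (4, 0.5)],
    4: [(4, 1.0)],
}

def step(s):
    r, acc = random.random(), 0.0
    for t, p in P[s]:
        acc += p
        if r < acc:
            return t
    return t  # guard against floating-point rounding

def estimate_bounded_until(start, goal, k, n=20000):
    """Statistical estimate of Pr[reach `goal` within k steps] (a bounded until)."""
    hits = 0
    for _ in range(n):
        s = start
        for _ in range(k):
            if s == goal:
                break
            s = step(s)
        hits += (s == goal)
    return hits / n

def find_k0(start, goal, eps=0.01):
    """Phase (a): double the bound k until the bounded estimate changes by less than eps."""
    k = 8
    prev = estimate_bounded_until(start, goal, k)
    while True:
        k *= 2
        cur = estimate_bounded_until(start, goal, k)  # phase (b) estimate at bound k
        if abs(cur - prev) < eps:
            return k, cur
        prev = cur

k0, p_hat = find_k0(3, 4)
```

For this walk the unbounded reachability probability from state 3 is 3/4 (a gambler's-ruin computation), so the bounded estimate at k0 should settle near 0.75.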

  4. Cerebrovascular effects of angiotensin converting enzyme inhibition involve large artery dilatation in rats

    DEFF Research Database (Denmark)

    Postiglione, A; Bobkiewicz, T; Vinholdt-Pedersen, E

    1991-01-01

    The aim of the study was to selectively examine the effects of converting enzyme inhibition on the large brain arteries by using concomitant inhibition of carbonic anhydrase to cause severe dilatation of mainly parenchymal resistance vessels…

  5. Bowel perforation from occult ileal involvement after diagnosis in a case of primary mediastinal large B-cell lymphoma.

    Science.gov (United States)

    De Philippis, Chiara; Di Chio, Maria Chiara; Sabattini, Elena; Bolli, Niccolo

    2016-07-14

    Primary mediastinal large B-cell lymphoma (PMBCL) is confined to the mediastinum or contiguous nodal areas in most cases. Extramediastinal and abdominal involvement, especially at diagnosis, is extremely rare. We describe the first case of histologically proven ileal involvement of PMBCL at diagnosis, which led to ileal perforation. Positron emission tomography CT could increase the sensitivity of staging by detecting unusual sites of disease localisation, and could impact clinical management. 2016 BMJ Publishing Group Ltd.

  6. Long-Term Calculations with Large Air Pollution Models

    DEFF Research Database (Denmark)

    Ambelas Skjøth, C.; Bastrup-Birk, A.; Brandt, J.

    1999-01-01

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  7. Constituent rearrangement model and large transverse momentum reactions

    International Nuclear Information System (INIS)

    Igarashi, Yuji; Imachi, Masahiro; Matsuoka, Takeo; Otsuki, Shoichiro; Sawada, Shoji.

    1978-01-01

    In this chapter, two models based on the constituent rearrangement picture for large p_t phenomena are summarized. One is the quark-junction model, and the other is the correlating quark rearrangement model. Counting rules of the models apply to both two-body reactions and hadron productions. (author)

  8. Multistability in Large Scale Models of Brain Activity.

    Directory of Open Access Journals (Sweden)

    Mathieu Golos

    2015-12-01

    Noise driven exploration of a brain network's dynamic repertoire has been hypothesized to be causally involved in cognitive function, aging and neurodegeneration. The dynamic repertoire crucially depends on the network's capacity to store patterns, as well as their stability. Here we systematically explore the capacity of networks derived from human connectomes to store attractor states, as well as various network mechanisms to control the brain's dynamic repertoire. Using a deterministic graded response Hopfield model with connectome-based interactions, we reconstruct the system's attractor space through a uniform sampling of the initial conditions. Large fixed-point attractor sets are obtained in the low temperature condition, with a larger number of attractors than previously reported. Different variants of the initial model, including (i) a uniform activation threshold or (ii) a global negative feedback, produce a similarly robust multistability in a limited parameter range. A numerical analysis of the distribution of the attractors identifies spatially-segregated components, with a centro-medial core and several well-delineated regional patches. Those different modes share similarity with the fMRI independent components observed in the "resting state" condition. We demonstrate non-stationary behavior in noise-driven generalizations of the models, with different meta-stable attractors visited along the same time course. Only the model with a global dynamic density control is found to display robust and long-lasting non-stationarity with no tendency toward either overactivity or extinction. The best fit with empirical signals is observed at the edge of multistability, a parameter region that also corresponds to the highest entropy of the attractors.
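The attractor-scanning procedure described above can be mimicked in a few lines (a deliberately small stand-in: a random symmetric coupling matrix replaces the connectome, and damped tanh dynamics replace the paper's graded-response model; the size, gain, and tolerances are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 40
# Random symmetric coupling as a stand-in for a connectome-derived matrix (assumption).
W = rng.normal(size=(N, N))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
W /= np.sqrt(N)  # keep the spectral radius O(1)

def run_to_fixed_point(x, beta=4.0, dt=0.2, steps=3000, tol=1e-6):
    """Damped graded-response dynamics: dx/dt = -x + tanh(beta * W x)."""
    for _ in range(steps):
        x_new = x + dt * (-x + np.tanh(beta * (W @ x)))
        if np.max(np.abs(x_new - x)) < tol * dt:  # residual below tol
            return x_new
        x = x_new
    return x

# Uniform sampling of initial conditions; attractors identified up to rounding.
attractors = set()
finals = []
for _ in range(200):
    x0 = rng.uniform(-1, 1, size=N)
    xf = run_to_fixed_point(x0)
    finals.append(xf)
    attractors.add(tuple(np.round(xf, 2)))

n_attractors = len(attractors)
```

Distinct attractors are identified by rounding the converged states; because the dynamics are odd in x, fixed points come in ± pairs, so more than one attractor is expected at this gain.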

  9. Using Quality Circles to Enhance Student Involvement and Course Quality in a Large Undergraduate Food Science and Human Nutrition Course

    Science.gov (United States)

    Schmidt, S. J.; Parmer, M. S.; Bohn, D. M.

    2005-01-01

    Large undergraduate classes are a challenge to manage, to engage, and to assess, yet such formidable classes can flourish when student participation is facilitated. One method of generating authentic student involvement is implementation of quality circles by means of a Student Feedback Committee (SFC), which is a volunteer problem-solving and…

  10. Ocular involvement in pemphigus vulgaris - a retrospective study of a large Spanish cohort.

    Science.gov (United States)

    España, Agustin; Iranzo, Pilar; Herrero-González, Josep; Mascaro, José M; Suárez, Ricardo

    2017-04-01

    Ocular/periocular involvement in pemphigus vulgaris (OPV) has rarely been reported. The objective of the present study was to investigate the pattern of OPV and define the prognostic value of its manifestation. From 1985 to 2014, a total of 167 patients with pemphigus vulgaris (PV) were treated at four tertiary Spanish hospitals. In this retrospective study, we included all patients with OPV. Clinical data and information on associated symptoms were obtained from patients' medical records. Only 24 (14.3 %) of all PV patients had ocular lesions. In most cases, ocular involvement was preceded by PV lesions at various other sites (mean duration: 33.7 months). Ocular PV lesions occurred during flares of mucocutaneous pemphigus, and were never the only mucosal manifestation. The most common clinical signs were conjunctival hyperemia (87.5 %), erosions on the eyelids (41.6 %) as well as of the palpebral/bulbar conjunctiva (33.3 %) and at the medial epicanthus (20.8 %). The most relevant associated symptoms included local pain/stinging (71.4 %), irritation (47.6 %), photophobia (38.1 %), and epiphora (23.9 %). Ocular PV improved with systemic and adjuvant topical therapies. Only two patients experienced sequelae. In patients with PV, ocular involvement is the exception. Ocular PV is associated with greater disease activity, and usually follows a benign course. Sites affected are the conjunctiva, the eyelids, or both. © 2017 Deutsche Dermatologische Gesellschaft (DDG). Published by John Wiley & Sons Ltd.

  11. Heart of Lymphoma: Primary Mediastinal Large B-Cell Lymphoma with Endomyocardial Involvement

    Directory of Open Access Journals (Sweden)

    Elisa Rogowitz

    2013-01-01

    Primary mediastinal B-cell lymphoma (PMBCL) is an uncommon aggressive subset of diffuse large B-cell lymphomas. Although PMBCL frequently spreads locally from the thymus into the pleura or pericardium, it rarely invades directly through the heart. Herein, we report a case of a young Mexican female diagnosed with PMBCL with clear infiltration of lymphoma through the cardiac wall and into the right atrium and tricuspid valve leading to tricuspid regurgitation. This was demonstrated by cardiac MRI and transthoracic echocardiogram. In addition, cardiac MRI and CT scan of the chest revealed the large mediastinal mass completely surrounding and eroding into the superior vena cava (SVC) wall causing a collar of Stokes. The cardiac and SVC infiltration created a significant therapeutic challenge as lymphomas are very responsive to chemotherapy, and treatment could potentially lead to vascular wall rupture and hemorrhage. Despite the lack of conclusive data on chemotherapy-induced hemodynamic compromise in such scenarios, her progressive severe SVC syndrome and respiratory distress necessitated urgent intervention. In addition to the unique presentation of this rare lymphoma, our case report highlights the safety of R-CHOP treatment.

  12. Skin involvement as the first manifestation of breast implant-associated anaplastic large cell lymphoma.

    Science.gov (United States)

    Alcalá, Rebeca; Llombart, Beatriz; Lavernia, Javier; Traves, Víctor; Guillén, Carlos; Sanmartín, Onofre

    2016-07-01

    Breast implant-associated anaplastic large cell lymphoma (ALCL) is a newly described clinical and pathologic entity that typically presents as seroma in the fibrous scar around the implant. Less frequently, it presents as a solid peri-implant mass, and there have been no reports to date of cutaneous lesions as the presenting manifestation. We report the case of a 56-year-old woman with a history of bilateral breast reconstruction following breast cancer of the right breast who consulted with several papules on the right breast suggestive of metastasis. Histopathology showed a proliferation of large epithelioid lymphocytes with highly pleomorphic cells and nuclei. The neoplastic cells were CD15 and CD30 positive and ALK-1 negative. The epithelial markers were all negative except for epithelial membrane antigen (EMA), which was weakly positive. Molecular analysis showed monoclonal T-cell receptor γ gene rearrangement, confirming a diagnosis of breast implant-associated ALCL. The non-specific morphology of the skin lesions, the epithelioid nature of the neoplastic cells and the expression of EMA can lead to an erroneous diagnosis of skin metastases from a poorly differentiated adenocarcinoma of the breast. We recommend immunohistochemical staining for CD30 and ALK-1 for patients with breast implants who develop anaplastic lesions. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  13. Large scale stochastic spatio-temporal modelling with PCRaster

    NARCIS (Netherlands)

    Karssenberg, D.J.; Drost, N.; Schmitz, O.; Jong, K. de; Bierkens, M.F.P.

    2013-01-01

    PCRaster is a software framework for building spatio-temporal models of land surface processes (http://www.pcraster.eu). Building blocks of models are spatial operations on raster maps, including a large suite of operations for water and sediment routing. These operations are available to model…
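To give a flavour of what such a raster routing building block computes, the sketch below implements a generic D8-style flow accumulation on a tiny digital elevation model in plain NumPy (this is an illustrative stand-in, not the PCRaster API): each cell sends its accumulated water to its steepest-descent neighbour.

```python
import numpy as np

# Toy 3x3 DEM; every cell ultimately drains toward the lowest corner (2, 2).
dem = np.array([
    [9, 8, 7],
    [8, 6, 5],
    [7, 5, 3],
], dtype=float)

def flow_accumulation(dem):
    """D8-style accumulation: route each cell's water to its lowest 8-neighbour."""
    rows, cols = dem.shape
    # Process cells from highest to lowest so upstream inflow is complete.
    cells = sorted(((dem[r, c], r, c) for r in range(rows) for c in range(cols)),
                   reverse=True)
    acc = np.ones_like(dem)  # each cell contributes its own unit of water
    for _, r, c in cells:
        best_drop, best_rc = 0.0, None
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr, dc) != (0, 0) and 0 <= rr < rows and 0 <= cc < cols:
                    drop = dem[r, c] - dem[rr, cc]
                    if drop > best_drop:
                        best_drop, best_rc = drop, (rr, cc)
        if best_rc is not None:  # pits keep their accumulated water
            acc[best_rc] += acc[r, c]
    return acc

acc = flow_accumulation(dem)
```

On this DEM all nine cells drain to the lowest corner, so the accumulated flow there equals the number of cells.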

  14. An accurate and simple large signal model of HEMT

    DEFF Research Database (Denmark)

    Liu, Qing

    1989-01-01

    A large-signal model of discrete HEMTs (high-electron-mobility transistors) has been developed. It is simple and suitable for SPICE simulation of hybrid digital ICs. The model parameters are extracted by using computer programs and data provided by the manufacturer. Based on this model, a hybrid...

  15. Mesothelioma With a Large Prevascular Lymph Node: N1 Involvement or Something Different?

    Science.gov (United States)

    Berzenji, Lawek; Van Schil, Paul E; Snoeckx, Annemie; Hertoghs, Marjan; Carp, Laurens

    2018-05-01

    A 64-year-old man presented with a large amount of right-sided pleural fluid on imaging, together with calcified pleural plaques and an enlarged nodular structure in the prevascular mediastinum, presumably an enlarged lymph node. Pleural biopsies were obtained during video-assisted thoracoscopic surgery to exclude malignancy. Histopathology showed an epithelial malignant pleural mesothelioma. Induction chemotherapy with cisplatin and pemetrexed was administered followed by an extended pleurectomy and decortication with systematic nodal dissection. Histopathology confirmed the diagnosis of a ypT3N0M0 (stage IB) mesothelioma, and an unexpected thymoma type B2 (stage II) was discovered in the prevascular nodule. Simultaneous occurrence of a mesothelioma and thymoma is extremely rare. Copyright © 2018 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  16. A deployable in vivo EPR tooth dosimeter for triage after a radiation event involving large populations

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Benjamin B., E-mail: Benjamin.B.Williams@dartmouth.edu [Dartmouth Physically Based Biodosimetry Center for Medical Countermeasures Against Radiation (Dart-Dose CMCR), Dartmouth Medical School, Hanover, NH 03768 (United States); Section of Radiation Oncology, Department of Medicine, Dartmouth Hitchcock Medical Center, Lebanon, NH (United States); Dong, Ruhong [Dartmouth Physically Based Biodosimetry Center for Medical Countermeasures Against Radiation (Dart-Dose CMCR), Dartmouth Medical School, Hanover, NH 03768 (United States); Flood, Ann Barry [Dartmouth Physically Based Biodosimetry Center for Medical Countermeasures Against Radiation (Dart-Dose CMCR), Dartmouth Medical School, Hanover, NH 03768 (United States); Clin-EPR, LLC, Lyme, NH (United States); Grinberg, Oleg [Clin-EPR, LLC, Lyme, NH (United States); Kmiec, Maciej; Lesniewski, Piotr N.; Matthews, Thomas P.; Nicolalde, Roberto J.; Raynolds, Tim [Dartmouth Physically Based Biodosimetry Center for Medical Countermeasures Against Radiation (Dart-Dose CMCR), Dartmouth Medical School, Hanover, NH 03768 (United States); Salikhov, Ildar K. [Clin-EPR, LLC, Lyme, NH (United States); Swartz, Harold M. [Dartmouth Physically Based Biodosimetry Center for Medical Countermeasures Against Radiation (Dart-Dose CMCR), Dartmouth Medical School, Hanover, NH 03768 (United States); Clin-EPR, LLC, Lyme, NH (United States)

    2011-09-15

    In order to meet the potential need for emergency large-scale retrospective radiation biodosimetry following an accident or attack, we have developed instrumentation and methodology for in vivo electron paramagnetic resonance spectroscopy to quantify concentrations of radiation-induced radicals within intact teeth. This technique has several very desirable characteristics for triage, including independence from confounding biologic factors, a non-invasive measurement procedure, the capability to make measurements at any time after the event, suitability for use by non-expert operators at the site of an event, and the ability to provide immediate estimates of individual doses. Throughout development there has been a particular focus on the need for a deployable system, including instrumental requirements for transport and field use, the need for high throughput, and use by minimally trained operators. Numerous measurements have been performed using this system in clinical and other non-laboratory settings, including in vivo measurements with unexposed populations as well as patients undergoing radiation therapies. The collection and analysis of sets of three serially-acquired spectra with independent placements of the resonator, in a data collection process lasting approximately 5 min, provides dose estimates with standard errors of prediction of approximately 1 Gy. As an example, measurements were performed on incisor teeth of subjects who had either received no irradiation or 2 Gy total body irradiation for prior bone marrow transplantation; this exercise provided a direct and challenging test of our capability to identify subjects who would be in need of acute medical care. Highlights: advances in radiation biodosimetry are needed for large-scale emergency response; radiation-induced radicals in tooth enamel can be measured using in vivo EPR; a novel transportable spectrometer was applied in the laboratory and at remote sites; the current instrument…

  17. Regularization modeling for large-eddy simulation of diffusion flames

    NARCIS (Netherlands)

    Geurts, Bernardus J.; Wesseling, P.; Oñate, E.; Périaux, J.

    We analyze the evolution of a diffusion flame in a turbulent mixing layer using large-eddy simulation. The large-eddy simulation includes Leray regularization of the convective transport and approximate inverse filtering to represent the chemical source terms. The Leray model is compared to the more…

  18. Advances in Modelling of Large Scale Coastal Evolution

    NARCIS (Netherlands)

    Stive, M.J.F.; De Vriend, H.J.

    1995-01-01

    The attention for climate change impact on the world's coastlines has established large scale coastal evolution as a topic of wide interest. Some more recent advances in this field, focusing on the potential of mathematical models for the prediction of large scale coastal evolution, are discussed.

  19. Multiple mechanisms involved in the large-spectrum therapeutic potential of cannabidiol in psychiatric disorders

    Science.gov (United States)

    Campos, Alline Cristina; Moreira, Fabricio Araújo; Gomes, Felipe Villela; Del Bel, Elaine Aparecida; Guimarães, Francisco Silveira

    2012-01-01

    Cannabidiol (CBD) is a major phytocannabinoid present in the Cannabis sativa plant. It lacks the psychotomimetic and other psychotropic effects of the main plant compound Δ9-tetrahydrocannabinol (THC) and is, on the contrary, able to antagonize these effects. This property, together with its safety profile, was an initial stimulus for the investigation of CBD pharmacological properties. It is now clear that CBD has therapeutic potential over a wide range of non-psychiatric and psychiatric disorders such as anxiety, depression and psychosis. Although the pharmacological effects of CBD in different biological systems have been extensively investigated by in vitro studies, the mechanisms responsible for its therapeutic potential are still not clear. Here, we review recent in vivo studies indicating that these mechanisms are not unitary but rather depend on the behavioural response being measured. Acute anxiolytic and antidepressant-like effects seem to rely mainly on facilitation of 5-HT1A-mediated neurotransmission in key brain areas related to defensive responses, including the dorsal periaqueductal grey, bed nucleus of the stria terminalis and medial prefrontal cortex. Other effects, such as anti-compulsive effects, increased extinction and impaired reconsolidation of aversive memories, and facilitation of adult hippocampal neurogenesis, could depend on potentiation of anandamide-mediated neurotransmission. Finally, activation of TRPV1 channels may help us to explain the antipsychotic effect and the bell-shaped dose-response curves commonly observed with CBD. Considering its safety profile and wide range of therapeutic potential, however, further studies are needed to investigate the involvement of other possible mechanisms (e.g. inhibition of adenosine uptake, inverse agonism at the CB2 receptor, CB1 receptor antagonism, GPR55 antagonism, PPARγ receptor agonism, intracellular Ca2+ increase, etc.) in CBD behavioural effects. PMID:23108553

  20. Nuclear spectroscopy in large shell model spaces: recent advances

    International Nuclear Information System (INIS)

    Kota, V.K.B.

    1995-01-01

    Three different approaches are now available for carrying out nuclear spectroscopy studies in large shell model spaces and they are: (i) the conventional shell model diagonalization approach but taking into account new advances in computer technology; (ii) the recently introduced Monte Carlo method for the shell model; (iii) the spectral averaging theory, based on central limit theorems, in indefinitely large shell model spaces. The various principles, recent applications and possibilities of these three methods are described and the similarity between the Monte Carlo method and the spectral averaging theory is emphasized. (author). 28 refs., 1 fig., 5 tabs

  1. Mild clinical involvement in two males with a large FMR1 premutation

    Energy Technology Data Exchange (ETDEWEB)

    Hagerman, R.; O'Connor, R.; Staley, L. [Children's Hospital, Denver, CO (United States)] [and others]

    1994-09-01

    Both male and female individuals who carry the FMR1 premutation are considered to be clinically unaffected and have been reported to have normal transcription of their FMR1 gene and normal FMR1 protein (FMRP) production. We have evaluated two males who are mildly affected clinically with features of fragile X syndrome and demonstrate a large premutation on DNA studies. The first patient is a 2-year-8-month-old boy who demonstrated the fragile X chromosome in 3% of his lymphocytes on cytogenetic testing. His physical features include mildly prominent ears and hyperextensible finger joints. He has language delays along with behavioral problems including tantrums and attention deficit. Developmental testing revealed a mental scale of 116 on the Bayley Scales of Infant Development, which is in the normal range. DNA testing demonstrated a premutation with 161 CGG repeats. This premutation was methylated in a small percentage of his cells (<2%). These findings were observed in both blood leukocytes and buccal cells. Protein studies of transformed lymphocytes from this boy showed approximately 50 to 70% of the normal level of FMRP. The second patient is a 14 year old male who was cytogenetically negative for fragile X expression. His physical exam demonstrates a long face, a high palate and macroorchidism (testicular volume of approximately 35 ml). His overall full scale IQ on the WISC-III is 73. He has language deficits and visual spatial perceptual deficits which have caused significant learning problems in school. Behaviorally he has problems with shyness and social anxiety, although he does not have attention deficit hyperactivity disorder. DNA testing revealed an FMR1 mutation of approximately 210 CGG repeats that is methylated in 4.7% of his cells.

  2. How and Why Fathers Are Involved in Their Children's Education: Gendered Model of Parent Involvement

    Science.gov (United States)

    Kim, Sung won

    2018-01-01

    Accumulating evidence points to the unique contributions fathers make to their children's academic outcomes. However, the large body of multi-disciplinary literature on fatherhood does not address how fathers engage in specific practices relevant to education, while the educational research in the United States focused on parent involvement often…

  3. Estimation in a multiplicative mixed model involving a genetic relationship matrix

    Directory of Open Access Journals (Sweden)

    Eccleston John A

    2009-04-01

    Genetic models partitioning additive and non-additive genetic effects for populations tested in replicated multi-environment trials (METs in a plant breeding program have recently been presented in the literature. For these data, the variance model involves the direct product of a large numerator relationship matrix A, and a complex structure for the genotype by environment interaction effects, generally of a factor analytic (FA form. With MET data, we expect a high correlation in genotype rankings between environments, leading to non-positive definite covariance matrices. Estimation methods for reduced rank models have been derived for the FA formulation with independent genotypes, and we employ these estimation methods for the more complex case involving the numerator relationship matrix. We examine the performance of differing genetic models for MET data with an embedded pedigree structure, and consider the magnitude of the non-additive variance. The capacity of existing software packages to fit these complex models is largely due to the use of the sparse matrix methodology and the average information algorithm. Here, we present an extension to the standard formulation necessary for estimation with a factor analytic structure across multiple environments.
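The variance model mentioned above, a factor-analytic structure across environments direct-producted with the relationship matrix A, can be written down concretely. The sketch below uses toy sizes and a random positive-definite stand-in for the pedigree-derived A (all numbers are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n_env, n_gen, k = 4, 5, 1  # environments, genotypes, FA rank (toy sizes)

# Stand-in for the numerator relationship matrix A (positive definite by construction).
M = rng.normal(size=(n_gen, n_gen))
A = M @ M.T / n_gen + np.eye(n_gen)

# Factor-analytic covariance across environments: G_e = Lambda Lambda' + Psi.
Lam = rng.normal(size=(n_env, k))           # environment loadings on k factors
Psi = np.diag(rng.uniform(0.1, 0.5, n_env)) # environment-specific variances
G_env = Lam @ Lam.T + Psi
# Reduced-rank variant (Psi = 0) is singular: the "non-positive definite" case.
G_reduced = Lam @ Lam.T

# Variance model for the stacked genetic effects: Var(g) = G_env (x) A.
V = np.kron(G_env, A)

eig_full = np.linalg.eigvalsh(G_env).min()
eig_red = np.linalg.eigvalsh(G_reduced).min()
```

With Psi > 0 the environment covariance is positive definite, while the reduced-rank form has rank k and is only positive semi-definite, which is why specialized reduced-rank estimation methods are needed.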

  4. A model of the supplier involvement in the product innovation

    Directory of Open Access Journals (Sweden)

    Kumar Manoj

    2017-01-01

    In this paper we examine product innovation in a supply chain by a supplier and derive a model for a supplier's product innovation policy. The product innovation of a supplier can contribute to the long-term competitiveness of the supply chain and, as it is a major factor for many supply chains, it should be considered in the development of strategies for a supplier. Here, we evaluate the effectiveness of supplier product innovation as a strategic tool to enhance the competitiveness and viability of the supply chain. This paper explores the dynamic research performance of a supplier with endogenous time preference under a given arrangement of product innovation. We find that the optimal effort level and the achieved product innovation obey a saddle point path, or show tremendous fluctuations even without introducing the stochastic nature of product innovative activity. We also find that the fluctuation frequency is largely dependent both on the supplier's characteristics, such as the supplier's product innovative ability, and on the nature of the product innovation process per se. Short-run analyses are also made on the effect of supply chain cooperation in the product innovation process.

  5. Modelling and measurements of wakes in large wind farms

    DEFF Research Database (Denmark)

    Barthelmie, Rebecca Jane; Rathmann, Ole; Frandsen, Sten Tronæs

    2007-01-01

    The paper presents research conducted in the Flow workpackage of the EU funded UPWIND project which focuses on improving models of flow within and downwind of large wind farms in complex terrain and offshore. The main activity is modelling the behaviour of wind turbine wakes in order to improve p...

  6. Modeling and Forecasting Large Realized Covariance Matrices and Portfolio Choice

    NARCIS (Netherlands)

    Callot, Laurent A.F.; Kock, Anders B.; Medeiros, Marcelo C.

    2017-01-01

    We consider modeling and forecasting large realized covariance matrices by penalized vector autoregressive models. We consider Lasso-type estimators to reduce the dimensionality and provide strong theoretical guarantees on the forecast capability of our procedure. We show that we can forecast…
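A miniature version of the idea, a Lasso-penalized VAR(1) on the half-vectorized covariance series, can be sketched as follows (the data-generating process, the hand-rolled ISTA solver, and the penalty level are all illustrative assumptions, not the authors' estimator):

```python
import numpy as np

def lasso_ista(X, y, lam=0.1, n_iter=500):
    """Minimal ISTA solver for 0.5/n * ||y - Xb||^2 + lam * ||b||_1 (small problems)."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n  # Lipschitz constant of the gradient
    b = np.zeros(p)
    for _ in range(n_iter):
        g = X.T @ (X @ b - y) / n
        z = b - g / L
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return b

def vech(S):
    """Half-vectorization: stack the lower triangle of a symmetric matrix."""
    i, j = np.tril_indices(S.shape[0])
    return S[i, j]

rng = np.random.default_rng(2)
d, T = 3, 300
# Simulate a persistent sequence of 3x3 realized covariance matrices (toy DGP).
S = np.eye(d)
series = []
for _ in range(T):
    E = rng.normal(size=(d, 60))
    S = 0.9 * S + 0.1 * (E @ E.T / 60)
    series.append(vech(S))
Y = np.array(series)

# Penalized VAR(1) on the vech series: one Lasso regression per coordinate.
X, Ynext = Y[:-1], Y[1:]
B = np.column_stack([lasso_ista(X, Ynext[:, j]) for j in range(Y.shape[1])])
forecast = Y[-1] @ B  # one-step-ahead forecast of vech(Sigma_{T+1})
```

Fitting each coordinate separately with an L1 penalty shrinks irrelevant lagged entries to zero, which is the dimensionality-reduction role the Lasso-type estimators play in the paper.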

  7. Including investment risk in large-scale power market models

    DEFF Research Database (Denmark)

    Lemming, Jørgen Kjærgaard; Meibom, P.

    2003-01-01

    The paper analyses how investment risk can be included in large-scale partial equilibrium models of the power market. The analyses are divided into a part about risk measures appropriate for power market investors and a more technical part about the combination of a risk-adjustment model and a partial-equilibrium model. To illustrate the analyses quantitatively, a framework based on an iterative interaction between the equilibrium model and a separate risk-adjustment module was constructed. To illustrate the features of the proposed modelling approach we examined how uncertainty in demand and variable costs affects the optimal choice…

  8. Large scale experiments as a tool for numerical model development

    DEFF Research Database (Denmark)

    Kirkegaard, Jens; Hansen, Erik Asp; Fuchs, Jesper

    2003-01-01

    Experimental modelling is an important tool for the study of hydrodynamic phenomena. The applicability of experiments can be expanded by the use of numerical models, and experiments are important for documentation of the validity of numerical tools. In other cases numerical tools can be applied for improvement of the reliability of physical model results. This paper demonstrates by examples that numerical modelling benefits in various ways from experimental studies (in large and small laboratory facilities). The examples range from very general hydrodynamic descriptions of wave phenomena to specific hydrodynamic interaction with structures. The examples also show that numerical model development benefits from international co-operation and sharing of high quality results.

  9. Active Exploration of Large 3D Model Repositories.

    Science.gov (United States)

    Gao, Lin; Cao, Yan-Pei; Lai, Yu-Kun; Huang, Hao-Zhi; Kobbelt, Leif; Hu, Shi-Min

    2015-12-01

    With broader availability of large-scale 3D model repositories, the need for efficient and effective exploration becomes more and more urgent. Existing model retrieval techniques do not scale well with the size of the database since often a large number of very similar objects are returned for a query, and the possibilities to refine the search are quite limited. We propose an interactive approach where the user feeds an active learning procedure by labeling either entire models or parts of them as "like" or "dislike" such that the system can automatically update an active set of recommended models. To provide an intuitive user interface, candidate models are presented based on their estimated relevance for the current query. From the methodological point of view, our main contribution is to exploit not only the similarity between a query and the database models but also the similarities among the database models themselves. We achieve this by an offline pre-processing stage, where global and local shape descriptors are computed for each model and a sparse distance metric is derived that can be evaluated efficiently even for very large databases. We demonstrate the effectiveness of our method by interactively exploring a repository containing over 100K models.
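The ingredients described, offline descriptors, a sparse similarity structure over the database, and like/dislike feedback that updates a recommended set, can be caricatured in a few lines (random descriptors and a kNN graph are stand-ins; nothing here reproduces the paper's learned metric):

```python
import numpy as np

rng = np.random.default_rng(3)
n_models, d = 200, 16
# Offline stage: one descriptor per model (random stand-ins for shape descriptors).
desc = rng.normal(size=(n_models, d))
desc /= np.linalg.norm(desc, axis=1, keepdims=True)

# Sparse similarity: keep only each model's k nearest neighbours (kNN graph).
k = 10
sim = desc @ desc.T
np.fill_diagonal(sim, -np.inf)            # exclude self-similarity
S = np.zeros_like(sim)
idx = np.argsort(sim, axis=1)[:, -k:]     # top-k neighbours per row
rows = np.repeat(np.arange(n_models), k)
S[rows, idx.ravel()] = sim[rows, idx.ravel()].clip(min=0)

def recommend(liked, disliked, n_steps=3, top=5):
    """Propagate like/dislike labels over the similarity graph and rank the rest."""
    score = np.zeros(n_models)
    score[list(liked)] = 1.0
    score[list(disliked)] = -1.0
    # Row-normalised propagation mixes each model's score with its neighbours'.
    P = S / (S.sum(axis=1, keepdims=True) + 1e-12)
    for _ in range(n_steps):
        score = 0.5 * score + 0.5 * (P @ score)
        score[list(liked)], score[list(disliked)] = 1.0, -1.0  # clamp the labels
    ranked = np.argsort(-score)
    return [m for m in ranked if m not in liked | disliked][:top]

recs = recommend(liked={0, 1}, disliked={2})
```

Because scores propagate through similarities among the database models themselves, models similar to a liked model's neighbours can surface even when they are not direct neighbours of the query, which is the key point the abstract makes about exploiting database-internal similarity.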

  10. Modeling and Simulation Techniques for Large-Scale Communications Modeling

    National Research Council Canada - National Science Library

    Webb, Steve

    1997-01-01

    .... Tests of random number generators were also developed and applied to CECOM models. It was found that synchronization of random number strings in simulations is easy to implement and can provide significant savings for making comparative studies. If synchronization is in place, then statistical experiment design can be used to provide information on the sensitivity of the output to input parameters. The report concludes with recommendations and an implementation plan.

  11. CMAQ Involvement in Air Quality Model Evaluation International Initiative

    Science.gov (United States)

    Description of Air Quality Model Evaluation International Initiative (AQMEII). Different chemical transport models are applied by different groups over North America and Europe and evaluated against observations.

  12. Estimation and Inference for Very Large Linear Mixed Effects Models

    OpenAIRE

    Gao, K.; Owen, A. B.

    2016-01-01

    Linear mixed models with large imbalanced crossed random effects structures pose severe computational problems for maximum likelihood estimation and for Bayesian analysis. The costs can grow as fast as $N^{3/2}$ when there are N observations. Such problems arise in any setting where the underlying factors satisfy a many to many relationship (instead of a nested one) and in electronic commerce applications, the N can be quite large. Methods that do not account for the correlation structure can...
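
The crossed (many-to-many) structure the abstract refers to can be made concrete with a small simulation. The rater/item setup below is a hypothetical e-commerce example of our own; it only illustrates why the random-effects design matrix couples all levels of both factors, which is the source of the computational cost.

```python
import numpy as np

rng = np.random.default_rng(0)

# N observations from R raters crossed with I items (non-nested structure).
R, I, N = 50, 40, 600
rater = rng.integers(0, R, size=N)
item = rng.integers(0, I, size=N)

# Random effects: rater effect a_r, item effect b_i, plus noise.
a = rng.normal(0, 1.0, R)
b = rng.normal(0, 0.5, I)
y = 3.0 + a[rater] + b[item] + rng.normal(0, 0.1, N)

# Likelihood computations involve an N x (R + I) random-effects design matrix;
# when R and I grow like sqrt(N), the associated linear algebra can scale
# like N^{3/2}, as the abstract states.
Z = np.zeros((N, R + I))
Z[np.arange(N), rater] = 1.0
Z[np.arange(N), R + item] = 1.0

print(Z.shape)  # each row touches exactly one rater column and one item column
```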

  13. Spectra of operators in large N tensor models

    Science.gov (United States)

    Bulycheva, Ksenia; Klebanov, Igor R.; Milekhin, Alexey; Tarnopolsky, Grigory

    2018-01-01

    We study the operators in the large N tensor models, focusing mostly on the fermionic quantum mechanics with O(N)^3 symmetry, which may be either global or gauged. In the model with global symmetry, we study the spectra of bilinear operators, which are in either the symmetric traceless or the antisymmetric representation of one of the O(N) groups. In the symmetric traceless case, the spectrum of scaling dimensions is the same as in the Sachdev-Ye-Kitaev (SYK) model with real fermions; it includes the h = 2 zero mode. For the operators antisymmetric in the two indices, the scaling dimensions are the same as in the additional sector found in the complex tensor and SYK models; the lowest h = 0 eigenvalue corresponds to the conserved O(N) charges. A class of singlet operators may be constructed from contracted combinations of m symmetric traceless or antisymmetric two-particle operators. Their two-point functions receive contributions from m melonic ladders. Such multiple ladders are a new phenomenon in the tensor model, which does not seem to be present in the SYK model. The more typical 2k-particle operators do not receive any ladder corrections and have quantized large N scaling dimensions k/2. We construct pictorial representations of various singlet operators with low k. For larger k, we use available techniques to count the operators and show that their number grows as 2^k k!. As a consequence, the theory has a Hagedorn phase transition at a temperature which approaches zero in the large N limit. We also study the large N spectrum of low-lying operators in the Gurau-Witten model, which has O(N)^6 symmetry. We argue that it corresponds to one of the generalized SYK models constructed by Gross and Rosenhaus. Our paper also includes studies of the invariants in large N tensor integrals with various symmetries.

  14. Modelling and transient stability of large wind farms

    DEFF Research Database (Denmark)

    Akhmatov, Vladislav; Knudsen, Hans; Nielsen, Arne Hejde

    2003-01-01

    The paper is dealing with modelling and short-term voltage stability considerations of large wind farms. A physical model of a large offshore wind farm consisting of a large number of windmills is implemented in the dynamic simulation tool PSS/E; each windmill in the wind farm is represented by a physical model of grid-connected windmills. The windmill generators are conventional induction generators and the wind farm is ac-connected to the power system. Improvements of short-term voltage stability in case of failure events in the external power system are treated with use of conventional generator technology. This subject is treated as a parameter study with respect to the windmill electrical and mechanical parameters and with use of control strategies within the conventional generator technology. Stability improvements on the wind farm side of the connection point lead to significant reduction...

  15. Differences in passenger car and large truck involved crash frequencies at urban signalized intersections: an exploratory analysis.

    Science.gov (United States)

    Dong, Chunjiao; Clarke, David B; Richards, Stephen H; Huang, Baoshan

    2014-01-01

    The influence of intersection features on safety has been examined extensively because intersections experience a relatively large proportion of motor vehicle conflicts and crashes. Although there are distinct differences between passenger cars and large trucks (size, operating characteristics, dimensions, and weight), modeling crash counts across vehicle types is rarely addressed. This paper develops and presents a multivariate regression model of crash frequencies by collision vehicle type using crash data for urban signalized intersections in Tennessee. In addition, the performance of univariate Poisson-lognormal (UVPLN), multivariate Poisson (MVP), and multivariate Poisson-lognormal (MVPLN) regression models in establishing the relationship between crashes, traffic factors, and geometric design of roadway intersections is investigated. Bayesian methods are used to estimate the unknown parameters of these models. The evaluation results suggest that the MVPLN model possesses most of the desirable statistical properties in developing the relationships. Compared to the UVPLN and MVP models, the MVPLN model better identifies significant factors and predicts crash frequencies. The findings suggest that traffic volume, truck percentage, lighting condition, and intersection angle significantly affect intersection safety. Important differences in car, car-truck, and truck crash frequencies with respect to various risk factors were found to exist between models. The paper provides some new or more comprehensive observations that have not been covered in previous studies. Copyright © 2013 Elsevier Ltd. All rights reserved.
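
The Poisson-lognormal structure underlying the UVPLN model can be sketched by simulation. The covariates and coefficients below are hypothetical, not the paper's estimates; the point is only that the lognormal mixing term produces the extra-Poisson variation these crash models are built to capture.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical crash counts at n intersections: the log-mean depends on log
# traffic volume (AADT) and truck percentage, plus lognormal heterogeneity.
n = 5000
log_aadt = rng.normal(9.0, 0.5, n)
truck_pct = rng.uniform(0.0, 0.3, n)

beta0, beta_aadt, beta_truck = -7.0, 0.8, 1.5
sigma = 0.4  # site-level heterogeneity (the "lognormal" in Poisson-lognormal)

eps = rng.normal(0.0, sigma, n)
mu = np.exp(beta0 + beta_aadt * log_aadt + beta_truck * truck_pct + eps)
crashes = rng.poisson(mu)

# Under the mixture, the sample variance exceeds the sample mean
# (overdispersion), unlike a plain Poisson model.
print(crashes.mean(), crashes.var())
```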

  16. Exposing earth surface process model simulations to a large audience

    Science.gov (United States)

    Overeem, I.; Kettner, A. J.; Borkowski, L.; Russell, E. L.; Peddicord, H.

    2015-12-01

    The Community Surface Dynamics Modeling System (CSDMS) represents a diverse group of >1300 scientists who develop and apply numerical models to better understand the Earth's surface. CSDMS has a mandate to make the public more aware of model capabilities and therefore started sharing state-of-the-art surface process modeling results with large audiences. One platform to reach audiences outside the science community is through museum displays on 'Science on a Sphere' (SOS). Developed by NOAA, SOS is a giant globe, linked with computers and multiple projectors, that can display data and animations on a sphere. CSDMS has developed and contributed model simulation datasets for the SOS system since 2014, including hydrological processes, coastal processes, and human interactions with the environment. Model simulations of a hydrological and sediment transport model (WBM-SED) illustrate global river discharge patterns. WAVEWATCH III simulations have been specifically processed to show the impacts of hurricanes on ocean waves, with focus on hurricane Katrina and super storm Sandy. A large world dataset of dams built over the last two centuries gives an impression of the profound influence of humans on water management. Given the exposure of SOS, CSDMS aims to contribute at least 2 model datasets a year, and will soon provide displays of global river sediment fluxes and changes of the sea ice free season along the Arctic coast. Over 100 facilities worldwide show these numerical model displays to an estimated 33 million people every year. Datasets, storyboards, and teacher follow-up materials associated with the simulations are developed to address common core science K-12 standards. CSDMS dataset documentation aims to make people aware of the fact that they look at numerical model results, that underlying models have inherent assumptions and simplifications, and that limitations are known. CSDMS contributions aim to familiarize large audiences with the use of numerical...

  17. Disinformative data in large-scale hydrological modelling

    Directory of Open Access Journals (Sweden)

    A. Kauffeldt

    2013-07-01

    Full Text Available Large-scale hydrological modelling has become an important tool for the study of global and regional water resources, climate impacts, and water-resources management. However, modelling efforts over large spatial domains are fraught with problems of data scarcity, uncertainties and inconsistencies between model forcing and evaluation data. Model-independent methods to screen and analyse data for such problems are needed. This study aimed at identifying data inconsistencies in global datasets using a pre-modelling analysis, inconsistencies that can be disinformative for subsequent modelling. The consistency between (i) basin areas for different hydrographic datasets, and (ii) between climate data (precipitation and potential evaporation) and discharge data, was examined in terms of how well basin areas were represented in the flow networks and the possibility of water-balance closure. It was found that (i) most basins could be well represented in both gridded basin delineations and polygon-based ones, but some basins exhibited large area discrepancies between flow-network datasets and archived basin areas, (ii) basins exhibiting too-high runoff coefficients were abundant in areas where precipitation data were likely affected by snow undercatch, and (iii) the occurrence of basins exhibiting losses exceeding the potential-evaporation limit was strongly dependent on the potential-evaporation data, both in terms of numbers and geographical distribution. Some inconsistencies may be resolved by considering sub-grid variability in climate data, surface-dependent potential-evaporation estimates, etc., but further studies are needed to determine the reasons for the inconsistencies found. Our results emphasise the need for pre-modelling data analysis to identify dataset inconsistencies as an important first step in any large-scale study. Applying data-screening methods before modelling should also increase our chances to draw robust conclusions from subsequent...
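
The two water-balance screens described in the abstract (too-high runoff coefficients, and losses exceeding the potential-evaporation limit) reduce to simple per-basin checks. The basin values below are invented for illustration; the thresholds follow directly from long-term water-balance closure, Q ≤ P and P − Q ≤ PET.

```python
import numpy as np

# Hypothetical long-term basin averages (mm/yr): precipitation P,
# observed discharge Q, and potential evaporation PET.
P = np.array([800.0, 400.0, 1200.0, 500.0])
Q = np.array([300.0, 450.0, 600.0, 100.0])
PET = np.array([600.0, 900.0, 500.0, 900.0])

runoff_coeff = Q / P              # Q/P > 1 is physically implausible
et_estimate = P - Q               # long-term actual ET by water balance

flags_runoff = runoff_coeff > 1.0    # e.g. precipitation undercatch in snow areas
flags_pet = et_estimate > PET        # losses exceed the potential-evaporation limit

print(flags_runoff)
print(flags_pet)
```

Basins flagged by either screen would be treated as potentially disinformative and examined before any model calibration or evaluation.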

  18. Coupled SWAT-MODFLOW Model Development for Large Basins

    Science.gov (United States)

    Aliyari, F.; Bailey, R. T.; Tasdighi, A.

    2017-12-01

    Water management in semi-arid river basins requires allocating water resources between urban, industrial, energy, and agricultural sectors, with the latter competing for necessary irrigation water to sustain crop yield. Competition between these sectors will intensify due to changes in climate and population growth. In this study, the recently developed SWAT-MODFLOW coupled hydrologic model is modified for application in a large managed river basin that provides both surface water and groundwater resources for urban and agricultural areas. Specific modifications include the linkage of groundwater pumping and irrigation practices and code changes to allow for the large number of SWAT hydrologic response units (HRU) required for a large river basin. The model is applied to the South Platte River Basin (SPRB), a 56,980 km2 basin in northeastern Colorado dominated by large urban areas along the front range of the Rocky Mountains and agriculture regions to the east. Irregular seasonal and annual precipitation and 150 years of urban and agricultural water management history in the basin provide an ideal test case for the SWAT-MODFLOW model. SWAT handles land surface and soil zone processes whereas MODFLOW handles groundwater flow and all sources and sinks (pumping, injection, bedrock inflow, canal seepage, recharge areas, groundwater/surface water interaction), with recharge and stream stage provided by SWAT. The model is tested against groundwater levels, deep percolation estimates, and stream discharge. The model will be used to quantify spatial groundwater vulnerability in the basin under scenarios of climate change and population growth.

  19. A hierarchical causal modeling for large industrial plants supervision

    International Nuclear Information System (INIS)

    Dziopa, P.; Leyval, L.

    1994-01-01

    A supervision system has to analyse the process current state and the way it will evolve after a modification of the inputs or a disturbance. It is proposed to base this analysis on a hierarchy of models, which differ by the number of involved variables and the abstraction level used to describe their temporal evolution. In a first step, special attention is paid to causal model building, starting from the most abstract one. Once the hierarchy of models has been built, the most detailed model parameters are estimated. Several models of different abstraction levels can be used for on-line prediction. These methods have been applied to a nuclear reprocessing plant. The abstraction level could be chosen on line by the operator. Moreover, when an abnormal process behaviour is detected, a more detailed model is automatically triggered in order to focus the operator's attention on the suspected subsystem. (authors). 11 refs., 11 figs

  20. FDG PET-CT Finding in Bilateral Renal and Bone Involvement of Diffuse Large B-Cell Lymphoma

    Directory of Open Access Journals (Sweden)

    Yusuf Ziya Tan

    2014-10-01

    Full Text Available A thirty-six-year-old male patient with a pathological fracture of the left tibia underwent intramedullary and soft tissue curettage. The histopathological examination revealed diffuse large B-cell lymphoma. The patient underwent F18-FDG PET-CT scanning for initial staging. The FDG PET-CT scan revealed hypermetabolic lesions at the left tibia and in bilateral kidneys. After systemic chemotherapy and local radiotherapy to the tibia, a repeated FDG PET-CT scan showed improvement of the previous hypermetabolic lesions, suggesting good response to therapy. Bone and renal involvement is an uncommon variant of diffuse large B-cell lymphoma, and FDG PET-CT is a useful whole-body imaging modality in these cases.

  1. Hippocampal and striatal involvement in cognitive tasks: a computational model

    OpenAIRE

    Fabian, Chersi; Neil, Burgess

    2018-01-01

    The hippocampus and the striatum support episodic and procedural memory, respectively, and "place" and "response" learning within spatial navigation. Recently this dichotomy has been linked to "model-based" and "model-free" reinforcement learning. Here we present a well-constrained neural model of how both systems support spatial navigation, and apply the same model to more abstract problems such as sequential decision making. In particular, we show that if a task can be transformed into a Ma...

  2. Spatial Extent Models for Natural Language Phrases Involving Directional Containment

    NARCIS (Netherlands)

    Singh, G.; de By, R.A.

    2015-01-01

    We study the problem of assigning a spatial extent to a text phrase such as 'central northern California', with the objective of allowing spatial interpretations of natural language, and consistency testing of complex utterances that involve multiple phrases from which spatial extent can be derived.

  3. Large N scalars: From glueballs to dynamical Higgs models

    Science.gov (United States)

    Sannino, Francesco

    2016-05-01

    We construct effective Lagrangians, and corresponding counting schemes, valid to describe the dynamics of the lowest lying large N stable massive composite state emerging in strongly coupled theories. The large N counting rules can now be employed when computing quantum corrections via an effective Lagrangian description. The framework allows for systematic investigations of composite dynamics of a non-Goldstone nature. Relevant examples are the lightest glueball states emerging in any Yang-Mills theory. We further apply the effective approach and associated counting scheme to composite models at the electroweak scale. To illustrate the formalism we consider the possibility that the Higgs emerges as the lightest glueball of a new composite theory; the large N scalar meson in models of dynamical electroweak symmetry breaking; the large N pseudodilaton useful also for models of near-conformal dynamics. For each of these realizations we determine the leading N corrections to the electroweak precision parameters. The results nicely elucidate the underlying large N dynamics and can be used to confront first principle lattice results featuring composite scalars with a systematic effective approach.

  4. The pig as a large animal model for characterization of host-pathogen interactions

    DEFF Research Database (Denmark)

    Skovgaard, Kerstin; Brogaard, Louise; Heegaard, Peter M. H.

    Large animal models are essential in understanding the mechanisms involved in human infectious disease. To study the expression of host and bacterial genes involved in defense and survival mechanisms, we analyzed lung tissue from pigs experimentally infected with the Gram-negative bacterium A...... experimental H1N2 virus infection of pigs, and found the regulation of several swine encoded miRNAs and cytokines to mimic key findings from influenza studies in human patients. By employing the pig as a model we were able to perform highly controlled experimental infections and to study changes of symptoms...

  5. Solving large linear systems in an implicit thermohaline ocean model

    NARCIS (Netherlands)

    de Niet, Arie Christiaan

    2007-01-01

    The climate on earth is largely determined by the global ocean circulation. Hence it is important to predict how the flow will react to perturbation by for example melting icecaps. To answer questions about the stability of the global ocean flow, a computer model has been developed that is able to

  6. Application of Logic Models in a Large Scientific Research Program

    Science.gov (United States)

    O'Keefe, Christine M.; Head, Richard J.

    2011-01-01

    It is the purpose of this article to discuss the development and application of a logic model in the context of a large scientific research program within the Commonwealth Scientific and Industrial Research Organisation (CSIRO). CSIRO is Australia's national science agency and is a publicly funded part of Australia's innovation system. It conducts…

  7. Searches for phenomena beyond the Standard Model at the Large ...

    Indian Academy of Sciences (India)

    metry searches at the LHC is thus the channel with large missing transverse momentum and jets of high transverse momentum. No excess above the expected SM background is observed and limits are set on supersymmetric models. Figures 1 and 2 show the limits from ATLAS [11] and CMS [12]. In addition to setting limits ...

  8. A stochastic large deformation model for computational anatomy

    DEFF Research Database (Denmark)

    Arnaudon, Alexis; Holm, Darryl D.; Pai, Akshay Sadananda Uppinakudru

    2017-01-01

    In the study of shapes of human organs using computational anatomy, variations are found to arise from inter-subject anatomical differences, disease-specific effects, and measurement noise. This paper introduces a stochastic model for incorporating random variations into the Large Deformation...

  9. Adaptive Gaussian Predictive Process Models for Large Spatial Datasets

    Science.gov (United States)

    Guhaniyogi, Rajarshi; Finley, Andrew O.; Banerjee, Sudipto; Gelfand, Alan E.

    2011-01-01

    Large point referenced datasets occur frequently in the environmental and natural sciences. Use of Bayesian hierarchical spatial models for analyzing these datasets is undermined by onerous computational burdens associated with parameter estimation. Low-rank spatial process models attempt to resolve this problem by projecting spatial effects to a lower-dimensional subspace. This subspace is determined by a judicious choice of “knots” or locations that are fixed a priori. One such representation yields a class of predictive process models (e.g., Banerjee et al., 2008) for spatial and spatial-temporal data. Our contribution here expands upon predictive process models with fixed knots to models that accommodate stochastic modeling of the knots. We view the knots as emerging from a point pattern and investigate how such adaptive specifications can yield more flexible hierarchical frameworks that lead to automated knot selection and substantial computational benefits. PMID:22298952
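
The fixed-knot predictive process that this work extends has a compact linear-algebra form: the spatial effect is projected onto m knots, w̃(s) = c(s)ᵀ C*⁻¹ w*, giving a rank-m covariance. The sketch below assumes a 1-D exponential covariance of our own choosing and does not show the paper's stochastic (adaptive) knot modeling.

```python
import numpy as np

rng = np.random.default_rng(1)

def exp_cov(x1, x2, phi=1.0, s2=1.0):
    # Exponential covariance on 1-D locations (an illustrative kernel choice).
    return s2 * np.exp(-phi * np.abs(x1[:, None] - x2[None, :]))

# n observed locations, m << n knots fixed a priori (the choice the paper
# replaces with a stochastic point-pattern model).
n, m = 200, 10
s = np.sort(rng.uniform(0, 10, n))   # data locations
knots = np.linspace(0, 10, m)        # knot locations

C_star = exp_cov(knots, knots)       # m x m knot covariance
c = exp_cov(s, knots)                # n x m cross-covariance

# Predictive-process covariance: a rank-m approximation of the full n x n
# covariance, which is what delivers the computational savings.
C_tilde = c @ np.linalg.solve(C_star, c.T)

print(C_tilde.shape, np.linalg.matrix_rank(C_tilde))
```

Because the projection can only reduce variance, the diagonal of `C_tilde` sits at or below the full-model variance; remedies for that bias are part of the predictive-process literature.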

  10. Particle production at large transverse momentum and hard collision models

    International Nuclear Information System (INIS)

    Ranft, G.; Ranft, J.

    1977-04-01

    The majority of the presently available experimental data is consistent with hard scattering models. Therefore the hard scattering model seems to be well established. There is good evidence for jets in large transverse momentum reactions as predicted by these models. The overall picture is however not yet well enough understood. We mention only the empirical hard scattering cross section introduced in most of the models, the lack of a deep theoretical understanding of the interplay between quark confinement and jet production, and the fact that we are not yet able to discriminate conclusively between the many proposed hard scattering models. The status of different hard collision models discussed in this paper is summarized. (author)

  11. A numerical shoreline model for shorelines with large curvature

    DEFF Research Database (Denmark)

    Kærgaard, Kasper Hauberg; Fredsøe, Jørgen

    2013-01-01

    This paper presents a new numerical model for shoreline change which can be used to model the evolution of shorelines with large curvature. The model is based on a one-line formulation in terms of coordinates which follow the shape of the shoreline, instead of the more common approach where the two orthogonal horizontal directions are used. The volume error in the sediment continuity equation which is thereby introduced is removed through an iterative procedure. The model treats the shoreline changes by computing the sediment transport in a 2D coastal area model, and then integrating the sediment transport field across the coastal profile to obtain the longshore sediment transport variation along the shoreline. The model is used to compute the evolution of a shoreline with a 90° change in shoreline orientation; due to this drastic change in orientation a migrating shoreline spit develops...

  12. Feasibility of an energy conversion system in Canada involving large-scale integrated hydrogen production using solid fuels

    International Nuclear Information System (INIS)

    Gnanapragasam, Nirmal V.; Reddy, Bale V.; Rosen, Marc A.

    2010-01-01

    A large-scale hydrogen production system is proposed using solid fuels and designed to increase the sustainability of alternative energy forms in Canada, and the technical and economic aspects of the system within the Canadian energy market are examined. The work investigates the feasibility and constraints in implementing such a system within the energy infrastructure of Canada. The proposed multi-conversion and single-function system produces hydrogen in large quantities using energy from solid fuels such as coal, tar sands, biomass, municipal solid waste (MSW) and agricultural/forest/industrial residue. The proposed system involves significant technology integration, with various energy conversion processes (such as gasification, chemical looping combustion, anaerobic digestion, combustion power cycles-electrolysis and solar-thermal converters) interconnected to increase the utilization of solid fuels as much as feasible within cost, environmental and other constraints. The analysis involves quantitative and qualitative assessments based on (i) energy resources availability and demand for hydrogen, (ii) commercial viability of primary energy conversion technologies, (iii) academia, industry and government participation, (iv) sustainability and (v) economics. An illustrative example provides an initial road map for implementing such a system. (author)

  14. Large deflection of viscoelastic beams using fractional derivative model

    International Nuclear Information System (INIS)

    Bahranini, Seyed Masoud Sotoodeh; Eghtesad, Mohammad; Ghavanloo, Esmaeal; Farid, Mehrdad

    2013-01-01

    This paper deals with large deflection of viscoelastic beams using a fractional derivative model. For this purpose, a nonlinear finite element formulation of viscoelastic beams in conjunction with the fractional derivative constitutive equations has been developed. The four-parameter fractional derivative model has been used to describe the constitutive equations. The deflected configuration for a uniform beam with different boundary conditions and loads is presented. The effect of the order of fractional derivative on the large deflection of the cantilever viscoelastic beam, is investigated after 10, 100, and 1000 hours. The main contribution of this paper is finite element implementation for nonlinear analysis of viscoelastic fractional model using the storage of both strain and stress histories. The validity of the present analysis is confirmed by comparing the results with those found in the literature.
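
The abstract's key implementation point, storing full strain and stress histories, comes from the non-local nature of fractional derivatives. A minimal sketch of a fractional time derivative via the Grünwald-Letnikov sum is shown below; the function name and the test signal are our own, and the four-parameter constitutive model itself is not reproduced.

```python
import numpy as np

def gl_fractional_derivative(f, alpha, dt):
    """Grunwald-Letnikov approximation of the order-alpha derivative of a
    uniformly sampled signal f. Each step sums over the entire past, which
    is why fractional viscoelastic models must keep the full history."""
    n = len(f)
    w = np.ones(n)                       # weights w_k = (-1)^k * binom(alpha, k)
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    d = np.array([np.dot(w[:j + 1], f[j::-1]) for j in range(n)])
    return d / dt**alpha

# Check against a known result: the half-derivative of f(t) = t is 2*sqrt(t/pi).
t = np.linspace(0.0, 1.0, 200)
dt = t[1] - t[0]
approx = gl_fractional_derivative(t, 0.5, dt)
exact = 2.0 * np.sqrt(t / np.pi)
print(np.max(np.abs(approx[10:] - exact[10:])))  # small discretization error
```

The O(n) history sum per step (O(n²) overall) is the storage/cost burden the finite element implementation must manage.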

  15. Engineering Large Animal Species to Model Human Diseases.

    Science.gov (United States)

    Rogers, Christopher S

    2016-07-01

    Animal models are an important resource for studying human diseases. Genetically engineered mice are the most commonly used species and have made significant contributions to our understanding of basic biology, disease mechanisms, and drug development. However, they often fail to recreate important aspects of human diseases and thus can have limited utility as translational research tools. Developing disease models in species more similar to humans may provide a better setting in which to study disease pathogenesis and test new treatments. This unit provides an overview of the history of genetically engineered large animals and the techniques that have made their development possible. Factors to consider when planning a large animal model, including choice of species, type of modification and methodology, characterization, production methods, and regulatory compliance, are also covered. © 2016 by John Wiley & Sons, Inc.

  16. Involving stakeholders in building integrated fisheries models using Bayesian methods

    DEFF Research Database (Denmark)

    Haapasaari, Päivi Elisabet; Mäntyniemi, Samu; Kuikka, Sakari

    2013-01-01

    A participatory Bayesian approach was used to investigate how the views of stakeholders could be utilized to develop models to help understand the Central Baltic herring fishery. In task one, we applied the Bayesian belief network methodology to elicit the causal assumptions of six stakeholders...... on factors that influence natural mortality, growth, and egg survival of the herring stock in probabilistic terms. We also integrated the expressed views into a meta-model using the Bayesian model averaging (BMA) method. In task two, we used influence diagrams to study qualitatively how the stakeholders frame...... the potential of the study to contribute to the development of participatory modeling practices. It is concluded that the subjective perspective to knowledge, that is fundamental in Bayesian theory, suits participatory modeling better than a positivist paradigm that seeks the objective truth. The methodology...

  17. Ecohydrological modeling for large-scale environmental impact assessment.

    Science.gov (United States)

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Abouali, Mohammad; Herman, Matthew R; Esfahanian, Elaheh; Hamaamin, Yaseen A; Zhang, Zhen

    2016-02-01

    Ecohydrological models are frequently used to assess the biological integrity of unsampled streams. These models vary in complexity and scale, and their utility depends on their final application. Tradeoffs are usually made in model scale, where large-scale models are useful for determining broad impacts of human activities on biological conditions, and regional-scale (e.g. watershed or ecoregion) models provide stakeholders greater detail at the individual stream reach level. Given these tradeoffs, the objective of this study was to develop large-scale stream health models with reach level accuracy similar to regional-scale models thereby allowing for impacts assessments and improved decision-making capabilities. To accomplish this, four measures of biological integrity (Ephemeroptera, Plecoptera, and Trichoptera taxa (EPT), Family Index of Biotic Integrity (FIBI), Hilsenhoff Biotic Index (HBI), and fish Index of Biotic Integrity (IBI)) were modeled based on four thermal classes (cold, cold-transitional, cool, and warm) of streams that broadly dictate the distribution of aquatic biota in Michigan. The Soil and Water Assessment Tool (SWAT) was used to simulate streamflow and water quality in seven watersheds and the Hydrologic Index Tool was used to calculate 171 ecologically relevant flow regime variables. Unique variables were selected for each thermal class using a Bayesian variable selection method. The variables were then used in development of adaptive neuro-fuzzy inference systems (ANFIS) models of EPT, FIBI, HBI, and IBI. ANFIS model accuracy improved when accounting for stream thermal class rather than developing a global model. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Modeling of nonlinear responses for reciprocal transducers involving polarization switching

    DEFF Research Database (Denmark)

    Willatzen, Morten; Wang, Linxiang

    2007-01-01

    Nonlinearities and hysteresis effects in a reciprocal PZT transducer are examined by use of a dynamical mathematical model on the basis of phase-transition theory. In particular, we consider the perovskite piezoelectric ceramic, in which the polarization process in the material can be modeled by Landau theory for the first-order phase transformation, in which each polarization state is associated with a minimum of the Landau free-energy function. Nonlinear constitutive laws are obtained by using thermodynamical equilibrium conditions, and hysteretic behavior of the material can be modeled...
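
The Landau picture described above, polarization states as minima of a free-energy function that an applied field tilts, can be sketched with a minimal double-well polynomial. The coefficients are illustrative only (a full first-order description would use a higher-order Landau polynomial fitted to the ceramic), not parameters of the paper's model.

```python
import numpy as np

def landau_free_energy(P, E=0.0):
    # Quartic double well minus the work term of the applied field E:
    # the two minima play the role of the two polarization states.
    return -0.5 * P**2 + 0.25 * P**4 - E * P

P = np.linspace(-2.0, 2.0, 4001)

f0 = landau_free_energy(P)           # no field: symmetric wells at P = -1, +1
f1 = landau_free_energy(P, E=0.3)    # field tilts the landscape toward +P

print(P[np.argmin(f1)] > 0)          # global minimum now at positive polarization
```

Switching between wells as the field cycles is what produces the hysteresis loops the transducer model reproduces.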

  19. Large time periodic solutions to coupled chemotaxis-fluid models

    Science.gov (United States)

    Jin, Chunhua

    2017-12-01

    In this paper, we deal with the time periodic problem to coupled chemotaxis-fluid models. We prove the existence of large time periodic strong solutions for the full chemotaxis-Navier-Stokes system in spatial dimension N=2, and the existence of large time periodic strong solutions for the chemotaxis-Stokes system in spatial dimension N=3. On the basis of these, the regularity of the solutions can be further improved. More precisely speaking, if the time periodic source g and the potential force...

  20. Temporal modeling of highway crash severity by involved person age.

    Science.gov (United States)

    2012-07-01

    This project consisted of three studies, each described in the following sections. Three published documents were generated; these are listed in the last section. Study 1: Temporal Modeling of Highway Crash Counts for Senior and Non-Senior Drivers;…

  1. Large Animal Models for Foamy Virus Vector Gene Therapy

    Directory of Open Access Journals (Sweden)

    Peter A. Horn

    2012-12-01

    Full Text Available Foamy virus (FV) vectors have shown great promise for hematopoietic stem cell (HSC) gene therapy. Their ability to efficiently deliver transgenes to multi-lineage long-term repopulating cells in large animal models suggests they will be effective for several human hematopoietic diseases. Here, we review FV vector studies in large animal models, including the use of FV vectors with the mutant O6-methylguanine-DNA methyltransferase MGMT(P140K) to increase the number of genetically modified cells after transplantation. In these studies, FV vectors have mediated efficient gene transfer to polyclonal repopulating cells using short ex vivo transduction protocols designed to minimize the negative effects of ex vivo culture on stem cell engraftment. In this regard, FV vectors appear superior to gammaretroviral vectors, which require longer ex vivo culture to effect efficient transduction. FV vectors have also compared favorably with lentiviral vectors when directly compared in the dog model. FV vectors have corrected leukocyte adhesion deficiency and pyruvate kinase deficiency in the dog large animal model. FV vectors also appear safer than gammaretroviral vectors based on a reduced frequency of integrants near promoters and also near proto-oncogenes in canine repopulating cells. Together, these studies suggest that FV vectors should be highly effective for several human hematopoietic diseases, including those that will require relatively high percentages of gene-modified cells to achieve clinical benefit.

  2. Global Bedload Flux Modeling and Analysis in Large Rivers

    Science.gov (United States)

    Islam, M. T.; Cohen, S.; Syvitski, J. P.

    2017-12-01

    Proper sediment transport quantification has long been an area of interest for both scientists and engineers in the fields of geomorphology and management of rivers and coastal waters. Bedload flux is important for monitoring water quality and for sustainable development of coastal and marine bioservices. Bedload measurements, especially for large rivers, are extremely scarce across time, and many rivers have never been monitored. The lack of measurements is particularly acute in developing countries, where changes in sediment yield are high. The paucity of bedload measurements is the result of 1) the nature of the problem (large spatial and temporal uncertainties), and 2) field costs, including the time-consuming nature of the measurement procedures (repeated bedform migration tracking, bedload samplers). Here we present a first-of-its-kind methodology for calculating bedload in large global rivers (basins are >1,000 km). Evaluation of model skill is based on 113 bedload measurements. The model predictions are compared with an empirical model developed from the observational dataset in an attempt to evaluate the differences between a physically-based numerical model and a lumped relationship between bedload flux and fluvial and basin parameters (e.g., discharge, drainage area, lithology). The initial success of the study opens up various applications in global fluvial geomorphology (e.g. the relationship between suspended sediment (wash load) and bedload). Simulated results with known uncertainties offer a new research product as a valuable resource for the whole scientific community.
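
    Lumped empirical bedload relationships of the kind the authors compare against are typically excess-Shields-stress power laws; the classic Meyer-Peter and Müller (1948) formula is sketched below. This is a generic illustration, not the study's model, and the input values are invented:

```python
import math

def mpm_bedload_flux(tau, d50, rho_s=2650.0, rho=1000.0, g=9.81):
    """Meyer-Peter & Mueller (1948) bedload transport per unit channel width.

    tau : bed shear stress [Pa];  d50 : median grain size [m]
    Returns the volumetric bedload flux q_b [m^2/s].
    """
    R = (rho_s - rho) / rho                    # submerged specific gravity
    shields = tau / ((rho_s - rho) * g * d50)  # dimensionless Shields stress
    excess = max(shields - 0.047, 0.0)         # 0.047 = critical Shields stress
    qb_star = 8.0 * excess ** 1.5              # dimensionless Einstein flux
    return qb_star * math.sqrt(R * g * d50 ** 3)

# Illustrative sand-bed reach: tau = 4 Pa, d50 = 0.5 mm
q_b = mpm_bedload_flux(tau=4.0, d50=0.0005)
```

    Below the critical Shields stress the formula predicts zero transport, which is one reason bedload flux is so sensitive to discharge and grain size in large-river applications.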

  3. Nonlinear Synapses for Large-Scale Models: An Efficient Representation Enables Complex Synapse Dynamics Modeling in Large-Scale Simulations

    Directory of Open Access Journals (Sweden)

    Eric eHu

    2015-09-01

    Full Text Available Chemical synapses are comprised of a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model, capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics that were represented in the original mechanistic model, and it provides a method to replicate complex and diverse synaptic transmission within neuron network simulations.
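
    The core idea, predicting the output from a Volterra functional power series of the input, can be sketched in discrete time truncated at second order (the paper goes to third order; the kernels below are toy placeholders, not fitted synaptic kernels):

```python
import numpy as np

def volterra_response(x, h1, h2, h0=0.0):
    """Discrete-time Volterra series output truncated at second order.

    x  : (N,) input train (e.g. presynaptic spikes)
    h1 : (M,) first-order (linear) kernel over an M-sample memory window
    h2 : (M, M) symmetric second-order kernel
    """
    M = len(h1)
    N = len(x)
    y = np.full(N, h0)
    for n in range(N):
        # memory window x(n), x(n-1), ..., x(n-M+1), zero-padded at the start
        past = np.array([x[n - k] if n - k >= 0 else 0.0 for k in range(M)])
        y[n] += h1 @ past            # first-order (linear) term
        y[n] += past @ h2 @ past     # second-order (pairwise interaction) term
    return y

# Toy kernels: decaying linear response plus a weak paired-pulse nonlinearity
h1 = np.array([1.0, 0.5, 0.25])
h2 = 0.05 * np.ones((3, 3))
x = np.array([1.0, 0.0, 1.0, 0.0, 0.0])
y = volterra_response(x, h1, h2)
```

    At the second spike the response exceeds the purely linear prediction because the quadratic kernel adds a paired-pulse interaction between the two inputs — precisely the kind of nonlinearity that a simple spike or exponential synapse representation cannot capture.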

  4. Application of simplified models to CO2 migration and immobilization in large-scale geological systems

    KAUST Repository

    Gasda, Sarah E.

    2012-07-01

    Long-term stabilization of injected carbon dioxide (CO2) is an essential component of risk management for geological carbon sequestration operations. However, migration and trapping phenomena are inherently complex, involving processes that act over multiple spatial and temporal scales. One example involves centimeter-scale density instabilities in the dissolved CO2 region leading to large-scale convective mixing that can be a significant driver for CO2 dissolution. Another example is the potentially important effect of capillary forces, in addition to buoyancy and viscous forces, on the evolution of mobile CO2. Local capillary effects lead to a capillary transition zone, or capillary fringe, where both fluids are present in the mobile state. This small-scale effect may have a significant impact on large-scale plume migration as well as long-term residual and dissolution trapping. Computational models that can capture both large and small-scale effects are essential to predict the role of these processes on the long-term storage security of CO2 sequestration operations. Conventional modeling tools are unable to resolve sufficiently all of these relevant processes when modeling CO2 migration in large-scale geological systems. Herein, we present a vertically-integrated approach to CO2 modeling that employs upscaled representations of these subgrid processes. We apply the model to the Johansen formation, a prospective site for sequestration of Norwegian CO2 emissions, and explore the sensitivity of CO2 migration and trapping to subscale physics. Model results show the relative importance of different physical processes in large-scale simulations. The ability of models such as this to capture the relevant physical processes at large spatial and temporal scales is important for prediction and analysis of CO2 storage sites. © 2012 Elsevier Ltd.

  5. Modeling and Compensatory Processes Underlying Involvement in Child Care among Kibbutz-Reared Fathers

    Science.gov (United States)

    Gaunt, Ruth; Bassi, Liat

    2012-01-01

    This study examined modeling and compensatory processes underlying the effects of an early paternal model on father involvement in child care. Drawing on social learning theory, it was hypothesized that father-son relationships would moderate the association between a father's involvement and his own father's involvement. A sample of 136 kibbutz…

  6. Modelling binaural processes involved in simultaneous reflection masking: limitations of current models

    DEFF Research Database (Denmark)

    Buchholz, Jörg

    2007-01-01

    Masked thresholds were measured for a single test reflection, masked by the direct sound, as a function of the reflection delay. This was done for diotic as well as for dichotic stimulus presentations, and all stimuli were presented via headphones. The input signal was a 200-ms long broadband noise. It is shown that current binaural models, such as normalized cross-correlation models (e.g., Bernstein et al., 1999, JASA, pp. 870-876), the power-addition model (Zurek, 1979, JASA, pp. 1750-1757), or Equalization-Cancellation-based models (e.g., Breebaart et al., 2001, JASA, pp. 1074-1088), cannot account for the psychoacoustical data. The present talk aims at understanding why these binaural models in their current form cannot describe the binaural mechanisms involved in reflection masking, and a number of model modifications are discussed that might help to overcome this deficiency.

  7. Penalized Estimation in Large-Scale Generalized Linear Array Models

    DEFF Research Database (Denmark)

    Lund, Adam; Vincent, Martin; Hansen, Niels Richard

    2017-01-01

    Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of the tensor product design matrix can be impossible due to time and memory constraints, and previously considered design-matrix-free algorithms do not scale well with the dimension of the parameter vector. A new design-matrix-free algorithm is proposed for computing the penalized maximum likelihood estimate for GLAMs, which, in particular, handles nondifferentiable penalty functions. The proposed algorithm is implemented and available via the R package glamlasso. It combines several ideas…
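
    The design-matrix-free idea rests on a standard Kronecker identity: for a two-dimensional GLAM the full tensor product design matrix never needs to be formed, because vec(X1 B X2ᵀ) = (X2 ⊗ X1) vec(B). A numpy sketch of the identity (an illustration of the principle, not the glamlasso implementation, which is in R):

```python
import numpy as np

def glam_linear_predictor(X1, X2, B):
    """Evaluate (X2 kron X1) vec(B) without forming the Kronecker product.

    X1 : (m, p) marginal design matrix for array dimension 1
    X2 : (n, q) marginal design matrix for array dimension 2
    B  : (p, q) coefficient array
    Returns the (m, n) array of linear predictors.
    """
    # vec(X1 @ B @ X2.T) == kron(X2, X1) @ vec(B)   (column-major vec)
    return X1 @ B @ X2.T

rng = np.random.default_rng(1)
X1 = rng.standard_normal((50, 4))
X2 = rng.standard_normal((60, 5))
B = rng.standard_normal((4, 5))

eta = glam_linear_predictor(X1, X2, B)              # two small matrix products
# Same result via the explicit (3000 x 20) Kronecker design matrix:
eta_kron = (np.kron(X2, X1) @ B.flatten(order="F")).reshape((50, 60), order="F")
```

    The array form costs O(mpq + mnq) flops and never stores the mn-by-pq design matrix, which is what makes penalized fitting feasible at large scale.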

  8. Precise MRI-based stereotaxic surgery in large animal models

    DEFF Research Database (Denmark)

    Glud, A. N.; Bech, J.; Tvilling, L.

    BACKGROUND: Stereotaxic neurosurgery in large animals is used widely in different sophisticated models, where precision is becoming more crucial as desired anatomical target regions are becoming smaller. Individually calculated coordinates are necessary in large animal models with cortical and subcortical anatomical differences. NEW METHOD: We present a convenient method to make an MRI-visible skull fiducial for 3D MRI-based stereotaxic procedures in larger experimental animals. Plastic screws were filled with either copper-sulphate solution or MRI-visible paste from a commercially available cranial head marker. The screw fiducials were inserted in the animal skulls and T1-weighted MRI was performed, allowing identification of the inserted skull marker. RESULTS: Both types of fiducial markers were clearly visible on the MRIs. This allows high precision in the stereotaxic space. COMPARISON…

  9. Model for large scale circulation of nuclides in nature, 1

    Energy Technology Data Exchange (ETDEWEB)

    Ohnishi, Teruaki

    1988-12-01

    A model for the large scale circulation of nuclides was developed, and a computer code named COCAIN was written to simulate this circulation using system dynamics. The natural environment considered in the present paper consists of 2 atmospheres, 8 geospheres and 2 lithospheres. The biosphere is composed of 4 types of edible plants, 5 kinds of cattle and their products, 4 water biota and 16 human organs, and is assumed to receive nuclides from the natural environment mentioned above. With the use of COCAIN, two numerical case studies were carried out: one on nuclear pollution in nature by the radioactive nuclides originating from past nuclear bomb tests, and the other on the response of the environment and biota to a pulse injection of nuclides into one compartment. The former case study verified that this model can well explain the observations and properly simulate the large scale circulation of nuclides in nature.
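
    A circulation model of this kind is a linear system of inter-compartment transfer equations, and the pulse-injection experiment amounts to integrating it from a concentrated initial condition. A minimal sketch with three invented compartments and illustrative rates (COCAIN itself tracks far more compartments, plus decay chains):

```python
import numpy as np

def simulate_compartments(K, x0, dt=0.01, steps=10_000):
    """Linear compartment model dx/dt = K x, integrated by forward Euler.

    K  : (n, n) transfer-rate matrix; K[i, j] is the rate from j to i, and
         each column sums to zero so the total inventory is conserved
         (radioactive decay is omitted in this sketch).
    x0 : initial inventories, e.g. a pulse injection into one compartment
    """
    x = x0.astype(float).copy()
    for _ in range(steps):
        x += dt * (K @ x)
    return x

# Three compartments (say, atmosphere -> geosphere -> biosphere and back)
k_ag, k_gb, k_ba = 0.5, 0.2, 0.1   # illustrative transfer rates [1/time]
K = np.array([
    [-k_ag,  0.0,   k_ba],
    [ k_ag, -k_gb,  0.0 ],
    [ 0.0,   k_gb, -k_ba],
])
x0 = np.array([1.0, 0.0, 0.0])     # unit pulse into the atmosphere
x_final = simulate_compartments(K, x0)
```

    Because every column of K sums to zero, the total inventory is conserved exactly at each Euler step, and the system relaxes toward the steady state where the inter-compartment flows balance.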

  10. Large animal models and new therapies for glycogen storage disease.

    Science.gov (United States)

    Brooks, Elizabeth D; Koeberl, Dwight D

    2015-05-01

    Glycogen storage diseases (GSD), a unique category of inherited metabolic disorders, were first described early in the twentieth century. Since then, the biochemical and genetic bases of these disorders have been determined, and an increasing number of animal models for GSD have become available. At least seven large mammalian models have been developed for laboratory research on GSDs. These models have facilitated the development of new therapies, including gene therapy, which are undergoing clinical translation. For example, gene therapy prolonged survival and prevented hypoglycemia during fasting for greater than one year in dogs with GSD type Ia, and the need for periodic re-administration to maintain efficacy was demonstrated in that dog model. The further development of gene therapy could provide curative therapy for patients with GSD and other inherited metabolic disorders.

  11. ARMA modelling of neutron stochastic processes with large measurement noise

    International Nuclear Information System (INIS)

    Zavaljevski, N.; Kostic, Lj.; Pesic, M.

    1994-01-01

    An autoregressive moving average (ARMA) model of the neutron fluctuations with large measurement noise is derived from Langevin stochastic equations and validated using time series data obtained during prompt neutron decay constant measurements at the zero power reactor RB in Vinca. Model parameters are estimated using the maximum likelihood (ML) off-line algorithm and an adaptive pole estimation algorithm based on the recursive prediction error method (RPE). The results show that subcriticality can be determined from real data with high measurement noise using a much shorter statistical sample than in standard methods. (author)
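
    The structure exploited here is that an AR process observed with additive white measurement noise is exactly an ARMA process: the noise inflates only the lag-0 autocovariance, while lags k ≥ 1 are untouched. A numpy sketch of this principle on a toy AR(1) signal (an illustration only, not the ML or RPE estimators of the paper, and the rates are invented):

```python
import numpy as np

rng = np.random.default_rng(42)

# AR(1) "reactor" signal s buried in strong white measurement noise
a_true = 0.8                     # pole, playing the role of the decay constant
N = 200_000
w = rng.standard_normal(N)
s = np.zeros(N)
for t in range(1, N):
    s[t] = a_true * s[t - 1] + w[t]
y = s + 2.0 * rng.standard_normal(N)   # measurement noise with variance 4

def acov(x, k):
    """Biased sample autocovariance at lag k."""
    x = x - x.mean()
    if k == 0:
        return float(x @ x) / len(x)
    return float(x[:-k] @ x[k:]) / len(x)

# Naive lag-1 regression is biased low: the noise inflates the lag-0 variance
a_naive = acov(y, 1) / acov(y, 0)
# Lags >= 1 are unaffected by white noise, so their ratio recovers the pole;
# this is the structure an ARMA(1,1) representation of AR(1)+noise exploits
a_arma = acov(y, 2) / acov(y, 1)
```

    The naive estimate lands far below the true pole of 0.8, while the noise-immune ratio recovers it, which is the qualitative reason ARMA modelling tolerates high measurement noise with short records.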

  12. Modeling large underground experimental halls for the superconducting super collider

    International Nuclear Information System (INIS)

    Duan, F.; Mrugala, M.

    1993-01-01

    Geomechanical aspects of the excavation design, and analysis of two large underground experimental halls for the Superconducting Super Collider (SSC), being built in Texas, have been extensively investigated using computer modeling. Each chamber, measuring approximately 350 ft long, 110 ft wide, and 190 ft high, is to be excavated mainly through soft marl and overlying competent limestone. Wall stability is essential not only for ensuring excavation safety but also for meeting strict requirements for chamber stability over the 30-yr design life of the facility. Extensive numerical modeling has played a significant role in the selection of excavation methods, excavation sequence, and rock reinforcement systems. (Author)

  13. A Modeling & Simulation Implementation Framework for Large-Scale Simulation

    Directory of Open Access Journals (Sweden)

    Song Xiao

    2012-10-01

    Full Text Available Classical High Level Architecture (HLA) systems face development problems because they lack support for fine-grained component integration and interoperation in large-scale complex simulation applications. To address this issue, an extensible, reusable and composable simulation framework is proposed. To promote reusability from coarse-grained federates to fine-grained components, this paper proposes a modelling & simulation framework which consists of a component-based architecture, modelling methods, and simulation services to support and simplify the construction of complex simulation applications. Moreover, a standard process and simulation tools are developed to ensure the rapid and effective development of simulation applications.

  14. Simulating large cosmology surveys with calibrated halo models

    OpenAIRE

    Lynn, Stuart

    2011-01-01

    In this thesis I present a novel method for constructing large scale mock galaxy and halo catalogues and apply this model to a number of important topics in modern cosmology. Traditionally such mocks are created through first evolving a high resolution particle simulation from a set of initial conditions to the present epoch, identifying bound structures and their evolution, and finally applying a semi-analytic prescription for galaxy formation. In contrast to this computatio...

  15. Shear viscosity from a large-Nc NJL model

    Energy Technology Data Exchange (ETDEWEB)

    Lang, Robert; Kaiser, Norbert [TUM Physik Department, Garching (Germany); Weise, Wolfram [ECT, Villa Tambosi, Villazzano (Italy); TUM Physik Department, Garching (Germany)

    2015-07-01

    We calculate the ratio of shear viscosity to entropy density within a large-Nc Nambu-Jona-Lasinio model. A consistent treatment of the Kubo formalism incorporating the full Dirac structure of the quark self-energy from mesonic fluctuations is presented. We compare our results to common approximation schemes applied to the Kubo formalism and to the quark self-energy.

  16. Protein homology model refinement by large-scale energy optimization.

    Science.gov (United States)

    Park, Hahnbeom; Ovchinnikov, Sergey; Kim, David E; DiMaio, Frank; Baker, David

    2018-03-20

    Proteins fold to their lowest free-energy structures, and hence the most straightforward way to increase the accuracy of a partially incorrect protein structure model is to search for the lowest-energy nearby structure. This direct approach has met with little success for two reasons: first, energy function inaccuracies can lead to false energy minima, resulting in model degradation rather than improvement; and second, even with an accurate energy function, the search problem is formidable because the energy only drops considerably in the immediate vicinity of the global minimum, and there are a very large number of degrees of freedom. Here we describe a large-scale energy optimization-based refinement method that incorporates advances in both search and energy function accuracy that can substantially improve the accuracy of low-resolution homology models. The method refined low-resolution homology models into correct folds for 50 of 84 diverse protein families and generated improved models in recent blind structure prediction experiments. Analyses of the basis for these improvements reveal contributions from both the improvements in conformational sampling techniques and the energy function.

  17. A Computational Modeling Mystery Involving Airfoil Trailing Edge Treatments

    Science.gov (United States)

    Choo, Yeunun; Epps, Brenden

    2015-11-01

    In a curious result, Fairman (2002) observed that steady RANS calculations predicted larger lift than the experimentally-measured data for six different airfoils with non-traditional trailing edge treatments, whereas the time average of unsteady RANS calculations matched the experiments almost exactly. Are these results reproducible? If so, is the difference between steady and unsteady RANS calculations a numerical artifact, or is there a physical explanation? The goals of this project are to solve this thirteen-year-old mystery and, further, to model viscous/load coupling for airfoils with non-traditional trailing edges. These include cupped, beveled, and blunt trailing edges, which are common anti-singing treatments for marine propeller sections. In this talk, we present steady and unsteady RANS calculations (ANSYS Fluent) with careful attention paid to the possible effects of asymmetric unsteady vortex shedding and the modeling of turbulence anisotropy. The effects of non-traditional trailing edge treatments are visualized and explained.

  18. Introducing an Intervention Model for Fostering Affective Involvement with Persons Who Are Congenitally Deafblind

    NARCIS (Netherlands)

    Martens, M.A.W.; Janssen, M.J.; Ruijssenaars, A.J.J.M.; Riksen-Walraven, J.M.A.

    2014-01-01

    The article presented here introduces the Intervention Model for Affective Involvement (IMAI), which was designed to train staff members (for example, teachers, caregivers, support workers) to foster affective involvement during interaction and communication with persons who have congenital deafblindness.

  19. Validating modeled turbulent heat fluxes across large freshwater surfaces

    Science.gov (United States)

    Lofgren, B. M.; Fujisaki-Manome, A.; Gronewold, A.; Anderson, E. J.; Fitzpatrick, L.; Blanken, P.; Spence, C.; Lenters, J. D.; Xiao, C.; Charusambot, U.

    2017-12-01

    Turbulent fluxes of latent and sensible heat are important physical processes that influence the energy and water budgets of the Great Lakes. Validation and improvement of bulk flux algorithms to simulate these turbulent heat fluxes are critical for accurate prediction of hydrodynamics, water levels, weather, and climate over the region. Here we consider five heat flux algorithms from several model systems: the Finite-Volume Community Ocean Model (FVCOM), the Weather Research and Forecasting model, and the Large Lake Thermodynamics Model, which are used in research and operational environments, concentrate on different aspects of the Great Lakes' physical system, and interface at the lake surface. The heat flux algorithms were isolated from each model and driven by meteorological data from over-lake stations in the Great Lakes Evaporation Network. The simulation results were compared with eddy covariance flux measurements at the same stations. All models show the capacity to capture the seasonal cycle of the turbulent heat fluxes. Overall, the Coupled Ocean Atmosphere Response Experiment algorithm in FVCOM has the best agreement with eddy covariance measurements. Simulations with the other four algorithms are overall improved by updating the parameterization of the roughness length scales for temperature and humidity. Agreement between modelled and observed fluxes notably varied with the geographical locations of the stations. For example, at the Long Point station in Lake Erie, observed fluxes are likely influenced by the upwind land surface while the simulations do not account for the land surface influence, and therefore the agreement is worse in general.
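
    Bulk flux algorithms of this family share the same aerodynamic form and differ mainly in how the transfer coefficients are derived from roughness lengths and atmospheric stability. A generic sketch with constant transfer coefficients (the actual algorithms compute them iteratively; all numbers below are illustrative):

```python
def bulk_fluxes(u, t_sfc, t_air, q_sfc, q_air,
                ch=1.3e-3, ce=1.3e-3, rho=1.2, cp=1004.0, lv=2.5e6):
    """Bulk aerodynamic estimates of sensible (H) and latent (LE) heat flux.

    u            : wind speed [m/s]
    t_sfc, t_air : water-surface and air temperature [K]
    q_sfc, q_air : saturation specific humidity at the surface and ambient
                   specific humidity [kg/kg]
    ch, ce       : transfer coefficients; in real algorithms these follow
                   from the roughness lengths for temperature and humidity
    Returns (H, LE) in W/m^2, positive upward (lake losing heat).
    """
    H = rho * cp * ch * u * (t_sfc - t_air)    # sensible heat
    LE = rho * lv * ce * u * (q_sfc - q_air)   # latent heat (evaporation)
    return H, LE

# Example: cold, dry air advected over a still-warm lake in autumn
H, LE = bulk_fluxes(u=10.0, t_sfc=288.0, t_air=280.0,
                    q_sfc=0.0105, q_air=0.0040)
```

    Because both fluxes scale with the transfer coefficients, re-parameterizing the roughness lengths for temperature and humidity directly rescales the simulated H and LE, which is why that update improves four of the five algorithms above.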

  20. Architectural Large Constructed Environment. Modeling and Interaction Using Dynamic Simulations

    Science.gov (United States)

    Fiamma, P.

    2011-09-01

    How can the simulation coming from a large-size data model be used for architectural design? The topic concerns the phase that usually follows data acquisition: the construction of the model and, especially, the stage at which designers must interact with the simulation in order to develop and verify their ideas. In the case study, the concept of interaction includes the concept of real-time "flows". The work develops contents and results that can be part of the large debate about the current connection between "architecture" and "movement". The focus of the work is to realize a collaborative and participative virtual environment in which the different specialist actors, the client and the final users can share knowledge, targets and constraints to better achieve the aimed result. The goal is to use a dynamic micro-simulation digital resource that allows all the actors to explore the model in a powerful and realistic way and to have a new type of interaction in a complex architectural scenario. On the one hand, the work represents a base of knowledge that can be implemented more and more; on the other hand, it represents an attempt to understand the simulation of large constructed architecture as a way of life, a way of being in time and space. The architectural design before, and the architectural fact after, both happen in a sort of "Spatial Analysis System". The way is open to offer this "system" knowledge and theories that can support architectural design work for every application and scale. We think that the presented work represents an attempt to understand the simulation of large constructed architecture as a way of life, a way of being in time and space. Architecture is seen as a spatial configuration that can itself be reconfigured through design.

  1. ARCHITECTURAL LARGE CONSTRUCTED ENVIRONMENT. MODELING AND INTERACTION USING DYNAMIC SIMULATIONS

    Directory of Open Access Journals (Sweden)

    P. Fiamma

    2012-09-01

    Full Text Available How can the simulation coming from a large-size data model be used for architectural design? The topic concerns the phase that usually follows data acquisition: the construction of the model and, especially, the stage at which designers must interact with the simulation in order to develop and verify their ideas. In the case study, the concept of interaction includes the concept of real-time "flows". The work develops contents and results that can be part of the large debate about the current connection between "architecture" and "movement". The focus of the work is to realize a collaborative and participative virtual environment in which the different specialist actors, the client and the final users can share knowledge, targets and constraints to better achieve the aimed result. The goal is to use a dynamic micro-simulation digital resource that allows all the actors to explore the model in a powerful and realistic way and to have a new type of interaction in a complex architectural scenario. On the one hand, the work represents a base of knowledge that can be implemented more and more; on the other hand, it represents an attempt to understand the simulation of large constructed architecture as a way of life, a way of being in time and space. The architectural design before, and the architectural fact after, both happen in a sort of "Spatial Analysis System". The way is open to offer this "system" knowledge and theories that can support architectural design work for every application and scale. We think that the presented work represents an attempt to understand the simulation of large constructed architecture as a way of life, a way of being in time and space. Architecture is seen as a spatial configuration that can itself be reconfigured through design.

  2. Hydrogen combustion modelling in large-scale geometries

    International Nuclear Information System (INIS)

    Studer, E.; Beccantini, A.; Kudriakov, S.; Velikorodny, A.

    2014-01-01

    Hydrogen risk mitigation based on catalytic recombiners cannot exclude the formation of flammable clouds during the course of a severe accident in a Nuclear Power Plant. The consequences of combustion processes have to be assessed based on existing knowledge and the state of the art in CFD combustion modelling. The Fukushima accidents have also revealed the need to take hydrogen explosion phenomena into account in risk management. Combustion modelling in large-scale geometries thus remains one of the open severe accident safety issues. At present, no combustion model can accurately describe a combustion process inside a geometrical configuration typical of the Nuclear Power Plant (NPP) environment. The major attention in model development therefore has to be paid to adapting existing approaches, or creating new ones, capable of reliably predicting the possibility of flame acceleration in geometries of that type. A set of experiments performed previously in the RUT facility and the Heiss Dampf Reactor (HDR) facility is used as a validation database for the development of a three-dimensional gas dynamic model for the simulation of hydrogen-air-steam combustion in large-scale geometries. The combustion regimes include slow deflagration, fast deflagration, and detonation. Modelling is based on the Reactive Discrete Equation Method (RDEM), where the flame is represented as an interface separating reactants and combustion products. The transport of the progress variable is governed by different flame surface wrinkling factors. The results of the numerical simulations are presented together with comparisons, critical discussions and conclusions. (authors)

  3. Effective models of new physics at the Large Hadron Collider

    International Nuclear Information System (INIS)

    Llodra-Perez, J.

    2011-07-01

    With the start of the Large Hadron Collider runs in 2010, particle physicists will soon be able to gain a better understanding of electroweak symmetry breaking. They might also answer many experimental and theoretical open questions raised by the Standard Model. In this favorable situation, we first present in this thesis a highly model-independent parametrization to characterize the effects of new physics on the production and decay mechanisms of the Higgs boson. This original tool will be easily and directly usable in the data analyses of CMS and ATLAS, the two general-purpose LHC experiments, and will indeed help to exclude or validate significantly some new theories beyond the Standard Model. In another approach, based on model building, we consider a scenario of new physics in which the Standard Model fields can propagate in a flat six-dimensional space. The new spatial extra dimensions are compactified on a Real Projective Plane. This orbifold is the unique six-dimensional geometry which possesses chiral fermions and a natural Dark Matter candidate. The scalar photon, which is the lightest particle of the first Kaluza-Klein tier, is stabilized by a symmetry relic of the six-dimensional Lorentz invariance. Using the current constraints from cosmological observations and our first analytical calculation, we derive a characteristic mass range around a few hundred GeV for the Kaluza-Klein scalar photon. The new states of our Universal Extra-Dimension model are therefore light enough to be produced through clear signatures at the Large Hadron Collider. We thus used a more sophisticated analysis of the particle mass spectrum and couplings, including radiative corrections at one loop, to establish our first predictions and constraints on the expected LHC phenomenology. (author)

  4. The pig as a large animal model for influenza a virus infection

    DEFF Research Database (Denmark)

    Skovgaard, Kerstin; Brogaard, Louise; Larsen, Lars Erik

    It is increasingly realized that large animal models like the pig are exceptionally human-like and serve as an excellent model for disease and inflammation. Pigs are fully susceptible to human influenza and share many similarities with humans regarding lung physiology and innate immune cell infiltration. […] Expression of immune factors, including several genes known to be centrally involved in the viral defence, was quantified by high-throughput qPCR (BioMark, Fluidigm). Likewise, miRNAs were quantified using the BioMark (Fluidigm) as well as by MiRCURY LNATM (Exiqon). During the first 24 hours of infection we found…

  5. The independent spreaders involved SIR Rumor model in complex networks

    Science.gov (United States)

    Qian, Zhen; Tang, Shaoting; Zhang, Xiao; Zheng, Zhiming

    2015-07-01

    Recent studies of the rumor or information diffusion process in complex networks show that, in contrast to the traditional understanding, individuals who participate in rumor spreading within one network do not always get the rumor from their neighbors. They can obtain the rumor from different sources, like online social networks, and then publish it on their personal sites. In our paper, we discuss this phenomenon in complex networks by adopting the concept of independent spreaders: rather than getting the rumor from neighbors, independent spreaders learn it through other channels. We further develop the classic "ignorant-spreaders-stiflers" (SIR) model of the rumor diffusion process in complex networks. A steady-state analysis is conducted to investigate the final spectrum of rumor spreading under various spreading rates, stifling rates, densities of independent spreaders, and average degrees of the network. Results show that independent spreaders effectively enhance the rumor diffusion process by delivering the rumor to regions far away from the currently infected regions. And although the rumor spreading process in SF networks is faster than that in ER networks, the final size of the rumor spreading in ER networks is larger than that in SF networks.
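
    A mean-field sketch of the extension: alongside the usual neighbor-to-neighbor terms of the ignorant-spreader-stifler (SIR) rumor model, ignorants also become spreaders at an external rate mu, representing independent spreaders. The equations and rates below are an illustrative reconstruction, not necessarily the paper's exact formulation:

```python
def rumor_sir(lam, sigma, mu, k=6.0, s0=0.01, dt=0.01, steps=20_000):
    """Mean-field ignorant-spreader-stifler model with independent spreaders.

    lam   : spreading rate per ignorant-spreader contact
    sigma : stifling rate per spreader-informed contact
    mu    : rate at which ignorants become independent spreaders
            (learning the rumor from channels outside the network)
    k     : average network degree
    Returns the final densities (i, s, r), which always sum to 1.
    """
    i, s, r = 1.0 - s0, s0, 0.0
    for _ in range(steps):
        new_spread = lam * k * i * s + mu * i    # neighbours + external source
        new_stifle = sigma * k * s * (s + r)     # spreader meets informed node
        i += dt * (-new_spread)
        s += dt * (new_spread - new_stifle)
        r += dt * new_stifle
    return i, s, r

# Final rumor size (density of stiflers r) with and without independent spreaders
_, _, r_without = rumor_sir(lam=0.2, sigma=0.3, mu=0.0)
_, _, r_with = rumor_sir(lam=0.2, sigma=0.3, mu=0.05)
```

    With mu > 0 the rumor eventually reaches every ignorant, even those in regions the network-driven process never penetrates, so the final stifler density is larger than in the purely network-driven case, consistent with the enhancement reported above.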

  6. Interactive modeling, design and analysis of large spacecraft

    Science.gov (United States)

    Garrett, L. B.

    1982-01-01

    An efficient computer-aided design and analysis capability applicable to large space structures was developed to relieve the engineer of much of the effort required in the past. The automated capabilities can be used to rapidly synthesize, evaluate, and determine performance characteristics and costs for future large spacecraft concepts. The Interactive Design and Evaluation of Advanced Spacecraft (IDEAS) program is used to illustrate the power, efficiency, and versatility of the approach. The coupling of space-environment modeling algorithms with simplified analysis and design modules in the IDEAS program permits rapid evaluation of competing spacecraft and mission designs. The approach is particularly useful in the conceptual design phase of advanced space missions, when a multiplicity of concepts must be considered before a limited set can be selected for more detailed analysis. Integrated spacecraft systems-level data and data files are generated for subsystem and mission reexamination and/or refinement and for more rigorous analyses.

  7. Involvement of herbal medicine as a cause of mesenteric phlebosclerosis: results from a large-scale nationwide survey.

    Science.gov (United States)

    Shimizu, Seiji; Kobayashi, Taku; Tomioka, Hideo; Ohtsu, Kensei; Matsui, Toshiyuki; Hibi, Toshifumi

    2017-03-01

    Mesenteric phlebosclerosis (MP) is a rare disease characterized by venous calcification extending from the colonic wall to the mesentery, with chronic ischemic changes from venous return impairment in the intestine. It is an idiopathic disease, but increasing attention has been paid to the potential involvement of herbal medicine, or Kampo, in its etiology. Until now, there have been only scattered case reports, and no large-scale studies have been conducted to unravel the clinical characteristics and etiology of the disease. A nationwide survey was conducted using questionnaires to assess possible etiology (particularly the involvement of herbal medicine), clinical manifestations, disease course, and treatment of MP. Data from 222 patients were collected. Among the 169 patients (76.1 %) whose history of herbal medicine use was obtained, 147 (87.0 %) used herbal medicines. The use of herbal medicines containing sanshishi (gardenia fruit, Gardenia jasminoides Ellis) was reported in 119 of 147 patients (81.0 %). Thus, the use of herbal medicine containing sanshishi was confirmed in 70.4 % of the 169 patients whose herbal medicine history was obtained. The duration of sanshishi use ranged from 3 to 51 years (mean 13.6 years). Patients who discontinued sanshishi showed a better outcome compared with those who continued it. The use of herbal medicine containing sanshishi is associated with the etiology of MP. Although it may not be the causative factor, it is necessary for gastroenterologists to be aware of the potential risk of herbal medicine containing sanshishi for the development of MP.

  8. Large animal models for vaccine development and testing.

    Science.gov (United States)

    Gerdts, Volker; Wilson, Heather L; Meurens, Francois; van Drunen Littel-van den Hurk, Sylvia; Wilson, Don; Walker, Stewart; Wheler, Colette; Townsend, Hugh; Potter, Andrew A

    2015-01-01

    The development of human vaccines continues to rely on the use of animals for research. Regulatory authorities require novel vaccine candidates to undergo preclinical assessment in animal models before being permitted to enter the clinical phase in human subjects. Substantial progress has been made in recent years in reducing and replacing the number of animals used for preclinical vaccine research through the use of bioinformatics and computational biology to design new vaccine candidates. However, the ultimate goal of a new vaccine is to instruct the immune system to elicit an effective immune response against the pathogen of interest, and no alternatives to live animal use currently exist for evaluation of this response. Studies identifying the mechanisms of immune protection; determining the optimal route and formulation of vaccines; establishing the duration and onset of immunity, as well as the safety and efficacy of new vaccines, must be performed in a living system. Importantly, no single animal model provides all the information required for advancing a new vaccine through the preclinical stage, and research over the last two decades has highlighted that large animals more accurately predict vaccine outcome in humans than do other models. Here we review the advantages and disadvantages of large animal models for human vaccine development and demonstrate that much of the success in bringing a new vaccine to market depends on choosing the most appropriate animal model for preclinical testing.

  9. Crime scene investigation involving a large turbidite - a 1h-teaching unit in Limnogeology/Sedimentology

    Science.gov (United States)

    Gilli, Adrian; Kremer, Katrina

    2017-04-01

    In 1996, a dead body (named "Brienzi") was found at the banks of the picturesque Lake Brienz in the Bernese Alps, Switzerland. What is the origin of this corpse, and which chain of events led to this crime scene? This is the starting position for a 1-h exercise/game for undergraduate students in Earth Sciences/Geosciences. The students are provided with a wealth of evidence, such as statements from people potentially involved in the case, age data on the human corpse, and monitoring data from the lake and its surroundings. The students are guided through the game step by step. After solving a task, the students get feedback and can check whether their interpretation was correct. Interestingly for earth science students, a lacustrine mass movement plays an important role in these investigations, but more should not be given away at this point. The exercise also lets us check whether the content taught in the previous lessons has been acquired correctly by the students, as it deals with diverse limnogeological and sedimentological aspects. The game is strongly based on a study by Girardclos et al. (2007) and uses their argumentation for the occurrence of a large mass movement in Lake Brienz in 1996. A copy of the game is available from the author upon request. Reference: Girardclos, S., Schmidt, O.T., Sturm, M., Ariztegui, D., Pugin, A. and Anselmetti, F.S., 2007. The 1996 AD delta collapse and large turbidite in Lake Brienz. Marine Geology, 241, pp. 137-154. doi:10.1016/j.margeo.2007.03.011

  10. A stochastic large deformation model for computational anatomy

    DEFF Research Database (Denmark)

    Arnaudon, Alexis; Holm, Darryl D.; Pai, Akshay Sadananda Uppinakudru

    2017-01-01

    In the study of shapes of human organs using computational anatomy, variations are found to arise from inter-subject anatomical differences, disease-specific effects, and measurement noise. This paper introduces a stochastic model for incorporating random variations into the Large Deformation...... Diffeomorphic Metric Mapping (LDDMM) framework. By accounting for randomness in a particular setup which is crafted to fit the geometrical properties of LDDMM, we formulate the template estimation problem for landmarks with noise and give two methods for efficiently estimating the parameters of the noise fields...
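
    As background for the record above, the deterministic LDDMM landmark-matching problem that the stochastic model perturbs can be written in its textbook form (standard notation; this is not taken from the truncated abstract):

```latex
\min_{v} \; E(v) = \int_0^1 \lVert v_t \rVert_V^2 \, dt
  + \frac{1}{\sigma^2} \sum_{i=1}^{N} \lVert q_i(1) - y_i \rVert^2,
\qquad
\dot{q}_i(t) = v_t\bigl(q_i(t)\bigr), \quad q_i(0) = x_i,
```

    where v_t is a time-dependent velocity field in a reproducing-kernel Hilbert space V, the x_i are the template landmarks and the y_i the targets; the stochastic variant described in the record adds noise fields to the landmark dynamics.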

  11. Parents as Role Models: Parental Behavior Affects Adolescents' Plans for Work Involvement

    Science.gov (United States)

    Wiese, Bettina S.; Freund, Alexandra M.

    2011-01-01

    This study (N = 520 high-school students) investigates the influence of parental work involvement on adolescents' own plans regarding their future work involvement. As expected, adolescents' perceptions of parental work behavior affected their plans for own work involvement. Same-sex parents served as main role models for the adolescents' own…

  12. Contribution of Large Pig for Renal Ischemia-Reperfusion and Transplantation Studies: The Preclinical Model

    Directory of Open Access Journals (Sweden)

    S. Giraud

    2011-01-01

    Full Text Available Animal experimentation is necessary to characterize human diseases and design adequate therapeutic interventions. In renal transplantation research, the limited number of in vitro models gives a crucial role to in vivo models, and particularly to the porcine model. Pig and human kidneys are anatomically similar, both characterized by a multilobular structure, in contrast to the unilobular kidneys of rodents and dogs. The proximity of porcine physiology and immune systems to those of humans provides basic knowledge of graft recovery and inflammatory physiopathology through in vivo studies. In addition, the pig's large body size allows surgical procedures similar to those in humans and repeated collection of peripheral blood or renal biopsies, making pigs ideal for medical training and for the assessment of preclinical technologies. However, its size is also its main drawback, implying expensive housing. Nevertheless, pig models are relevant alternatives to primate models, offering promising perspectives with developments of transgenic modulation and marginal donor models facilitating data extrapolation to human conditions.

  13. Soil carbon management in large-scale Earth system modelling

    DEFF Research Database (Denmark)

    Olin, S.; Lindeskog, M.; Pugh, T. A. M.

    2015-01-01

    Croplands are vital ecosystems for human well-being and provide important ecosystem services such as crop yields, retention of nitrogen and carbon storage. On large (regional to global)-scale levels, assessment of how these different services will vary in space and time, especially in response......, carbon sequestration and nitrogen leaching from croplands are evaluated and discussed. Compared to the version of LPJ-GUESS that does not include land-use dynamics, estimates of soil carbon stocks and nitrogen leaching from terrestrial to aquatic ecosystems were improved. Our model experiments allow us...... modelling C–N interactions in agricultural ecosystems under future environmental change and the effects these have on terrestrial biogeochemical cycles....

  14. Large-Signal DG-MOSFET Modelling for RFID Rectification

    Directory of Open Access Journals (Sweden)

    R. Rodríguez

    2016-01-01

    Full Text Available This paper analyses the capability of undoped DG-MOSFETs to operate as rectifiers for RFIDs and Wireless Power Transmission (WPT) at microwave frequencies. For this purpose, a large-signal compact model has been developed and implemented in Verilog-A. The model has been numerically validated with a device simulator (Sentaurus). It is found that the number of stages needed to achieve the optimal rectifier performance is lower than that required with conventional MOSFETs. In addition, the DC output voltage could be increased with the use of appropriate mid-gap metals for the gate, such as TiN. A minor impact of short-channel effects (SCEs) on rectification is also pointed out.
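
    The trade-off described above (fewer stages, and a higher DC output when the effective device drop is reduced, e.g. by a mid-gap gate metal such as TiN) can be illustrated with the textbook first-order Dickson-multiplier estimate. This is not the paper's compact model, and all numbers below are illustrative assumptions.

```python
import math

def dickson_vout(n_stages, v_amp, v_drop):
    """Textbook first-order estimate of an N-stage Dickson rectifier's DC
    output: each stage contributes twice the RF amplitude minus the device
    drops. Illustrative only, not the paper's Verilog-A model."""
    return 2 * n_stages * (v_amp - v_drop)

def stages_needed(v_target, v_amp, v_drop):
    """Smallest stage count reaching v_target under the same estimate."""
    per_stage = 2 * (v_amp - v_drop)
    if per_stage <= 0:
        raise ValueError("amplitude must exceed the device drop")
    return math.ceil(v_target / per_stage)

# A lower effective drop (e.g. from a mid-gap gate metal) reduces the
# stage count needed to reach the same DC target:
print(stages_needed(1.8, v_amp=0.4, v_drop=0.25))  # higher, conventional-like drop
print(stages_needed(1.8, v_amp=0.4, v_drop=0.10))  # reduced drop
```

    The second call needs half the stages of the first, mirroring the qualitative conclusion of the abstract that fewer stages suffice when the effective threshold is lowered.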

  15. Design and modelling of innovative machinery systems for large ships

    DEFF Research Database (Denmark)

    Larsen, Ulrik

    recovery (WHR) systems. Studies of alternative WHR systems in other applications suggest that the Kalina cycle and the organic Rankine cycle (ORC) can provide significant advantages over the steam Rankine cycle, which is currently used for marine WHR. This thesis aims at creating a better understanding...... consisting of a two-zone combustion and NOx emission model, a double Wiebe heat release model, the Redlich-Kwong equation of state and the Woschni heat loss correlation. A novel methodology is presented and used to determine the optimum organic Rankine cycle process layout, working fluid and process...... of the Kalina cycle and the ORC in the application on board large ships; the thermodynamic performances of the mentioned power cycles are compared. Recommendations of suitable system layouts and working fluids for the marine applications are provided along with methodologies useful for the design...

  16. Photorealistic large-scale urban city model reconstruction.

    Science.gov (United States)

    Poullis, Charalambos; You, Suya

    2009-01-01

    The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games, and films. A realistic representation of the large-scale environments is therefore imperative for the success of such applications since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, the task of creating such large-scale virtual environments still remains time-consuming, manual work. In this work, we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel, extendible, parameterized geometric primitive is presented for the automatic building identification and reconstruction of building structures. In addition, buildings with complex roofs containing complex linear and nonlinear surfaces are reconstructed interactively using a linear polygonal and a nonlinear primitive, respectively. Second, we present a rendering pipeline for the composition of photorealistic textures, which, unlike existing techniques, can recover missing or occluded texture information by integrating multiple information captured from different optical sensors (ground, aerial, and satellite).

  17. Assisted reproduction involving gestational surrogacy: an analysis of the medical, psychosocial and legal issues: experience from a large surrogacy program.

    Science.gov (United States)

    Dar, Shir; Lazer, Tal; Swanson, Sonja; Silverman, Jan; Wasser, Cindy; Moskovtsev, Sergey I; Sojecki, Agata; Librach, Clifford L

    2015-02-01

    What are the medical, psychosocial and legal aspects of gestational surrogacy (GS), including pregnancy outcomes and complications, in a large series? Meticulous multidisciplinary teamwork, involving medical, legal and psychosocial input for both the intended parent(s) (IP) and the gestational carrier (GC), is critical to achieving a successful GS program. Small case series have described pregnancy rates of 17-50% for GS. There are no large case series, and the medical, legal and psychological aspects of GS have not been addressed in most of these studies. To our knowledge, this is the largest reported GS case series. A retrospective cohort study was performed. Data were collected from 333 consecutive GC cycles between 1998 and 2012. There were 178 pregnancies achieved out of 333 stimulation cycles, including fresh and frozen transfers. The indications for a GC were divided into two groups. Those who 'failed to carry' included women with recurrent implantation failure (RIF), recurrent pregnancy loss (RPL) and previous poor pregnancy outcome (n = 96; 132 cycles, pregnancy rate 50.0%). The second group consisted of those who 'cannot carry', including those with severe Asherman's syndrome, uterine malformations/uterine agenesis and maternal medical diseases (n = 108, 139 cycles, pregnancy rate 54.0%). A third group, of same-sex male couples and single men, was analyzed separately (n = 52, 62 cycles, pregnancy rate 59.7%). In 49.2% of cycles, autologous oocytes were used and 50.8% of cycles involved donor oocytes. The 'failed to carry' group consisted of 96 patients who underwent 132 cycles at a mean age of 40.3 years. There were 66 pregnancies (50.0%) with 17 miscarriages (25.8%) and 46 confirmed births (34.8%). The 'cannot carry pregnancy' group consisted of 108 patients who underwent 139 cycles at a mean age of 35.9 years. There were 75 pregnancies (54.0%) with 15 miscarriages (20.0%) and 56 confirmed births (40.3%). The pregnancy, miscarriage and live birth

  18. Geometric algorithms for electromagnetic modeling of large scale structures

    Science.gov (United States)

    Pingenot, James

    With the rapid increase in the speed and complexity of integrated circuit designs, 3D full-wave and time-domain simulation of chip, package, and board systems becomes more and more important for the engineering of modern designs. Much effort has been applied to the problem of electromagnetic (EM) simulation of such systems in recent years. Major advances in boundary element EM simulations have led to O(n log n) simulations using iterative methods and advanced Fast Fourier Transform (FFT), Multi-Level Fast Multipole Method (MLFMM), and low-rank matrix compression techniques. These advances have been augmented by an explosion of multi-core and distributed computing technologies; however, realization of the full scale of these capabilities has been hindered by cumbersome and inefficient geometric processing. Anecdotal evidence from industry suggests that users may spend around 80% of turn-around time manipulating the geometric model and mesh. This dissertation addresses this problem by developing fast and efficient data structures and algorithms for 3D modeling of chips, packages, and boards. The methods proposed here harness the regular, layered 2D nature of the models (often referred to as "2.5D") to optimize these systems for large geometries. First, an architecture is developed for efficient storage and manipulation of 2.5D models. The architecture gives special attention to native representation of structures across various input models and special issues particular to 3D modeling. The 2.5D structure is then used to optimize the mesh systems. First, circuit/EM co-simulation techniques are extended to provide electrical connectivity between objects. This concept is used to connect independently meshed layers, allowing simple and efficient 2D mesh algorithms to be used in creating a 3D mesh. Here, adaptive meshing is used to ensure that the mesh accurately models the physical unknowns (current and charge). Utilizing the regularized nature of 2.5D objects and

  19. Modelling large scale human activity in San Francisco

    Science.gov (United States)

    Gonzalez, Marta

    2010-03-01

    Diverse groups of people with a wide variety of schedules, activities, and travel needs compose our cities nowadays. This represents a big challenge for modeling travel behavior in urban environments; such models are of crucial interest for a wide variety of applications such as traffic forecasting, the spreading of viruses, or measuring human exposure to air pollutants. The traditional means of obtaining knowledge about travel behavior is limited to surveys on travel journeys. The information obtained is based on questionnaires that are usually costly to implement, with intrinsic limitations in covering large numbers of individuals and some problems of reliability. Using mobile phone data, we explore the basic characteristics of a model of human travel: the distribution of agents is proportional to the population density of a given region, and each agent has a characteristic trajectory size containing information on the frequency of visits to different locations. Additionally, we use a complementary data set from smart subway fare cards, offering information about the exact time each passenger enters or exits a subway station and its coordinates. This allows us to uncover the temporal aspects of mobility. Since we have the actual time and place of each individual's origin and destination, we can understand the temporal patterns in each visited location in further detail. Integrating the two described data sets, we provide a dynamical model of human travel that incorporates the different aspects observed empirically.
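
    Two ingredients of the model described above (agents placed proportionally to population density, and agents visiting ranked locations with characteristic frequencies) can be sketched as follows. The Zipf-like visit-frequency form and all parameter values are illustrative assumptions, not the paper's fitted model.

```python
import random

def sample_trajectories(densities, n_agents=1000, n_locs=20, zipf_exp=1.2,
                        trips_per_agent=50, seed=1):
    """Place agents in regions with probability proportional to population
    density, then draw each agent's visits over ranked locations with a
    Zipf-like frequency (assumed form)."""
    rng = random.Random(seed)
    total = sum(densities)
    cdf, acc = [], 0.0
    for d in densities:
        acc += d / total
        cdf.append(acc)

    def pick_region():
        r = rng.random()
        for i, c in enumerate(cdf):
            if r <= c:
                return i
        return len(cdf) - 1            # guard against float round-off

    homes = [pick_region() for _ in range(n_agents)]
    # Zipf-like weights over each agent's ranked locations (rank 1 = most visited)
    weights = [1.0 / (k ** zipf_exp) for k in range(1, n_locs + 1)]
    wsum = sum(weights)
    probs = [w / wsum for w in weights]
    visits = [rng.choices(range(n_locs), probs, k=trips_per_agent)
              for _ in range(n_agents)]
    return homes, visits

densities = [100, 300, 50, 550]        # toy population densities per region
homes, visits = sample_trajectories(densities)
```

    Denser regions end up hosting proportionally more agents, and each agent's top-ranked location dominates its trip log, reproducing the two stylized facts the record mentions.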

  20. Modeling Resource Utilization of a Large Data Acquisition System

    CERN Document Server

    The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger

    2017-01-01

    The ATLAS 'Phase-II' upgrade, scheduled to start in 2024, will significantly change the requirements under which the data-acquisition system operates. The input data rate, currently fixed around 150 GB/s, is anticipated to reach 5 TB/s. In order to deal with the challenging conditions, and exploit the capabilities of newer technologies, a number of architectural changes are under consideration. Of particular interest is a new component, known as the Storage Handler, which will provide a large buffer area decoupling real-time data taking from event filtering. Dynamic operational models of the upgraded system can be used to identify the required resources and to select optimal techniques. In order to achieve a robust and dependable model, the current data-acquisition architecture has been used as a test case. This makes it possible to verify and calibrate the model against real operation data. Such a model can then be evolved toward the future ATLAS Phase-II architecture. In this paper we introduce the current ...

  1. Modelling Resource Utilization of a Large Data Acquisition System

    CERN Document Server

    Santos, Alejandro; The ATLAS collaboration

    2017-01-01

    The ATLAS 'Phase-II' upgrade, scheduled to start in 2024, will significantly change the requirements under which the data-acquisition system operates. The input data rate, currently fixed around 150 GB/s, is anticipated to reach 5 TB/s. In order to deal with the challenging conditions, and exploit the capabilities of newer technologies, a number of architectural changes are under consideration. Of particular interest is a new component, known as the Storage Handler, which will provide a large buffer area decoupling real-time data taking from event filtering. Dynamic operational models of the upgraded system can be used to identify the required resources and to select optimal techniques. In order to achieve a robust and dependable model, the current data-acquisition architecture has been used as a test case. This makes it possible to verify and calibrate the model against real operation data. Such a model can then be evolved toward the future ATLAS Phase-II architecture. In this paper we introduce the current ...
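
    The two records above describe a Storage Handler whose buffer decouples real-time data taking from event filtering. A back-of-the-envelope sizing of such a decoupling buffer can be sketched as follows; only the 5 TB/s input rate is taken from the abstract, while the drain rate and capacity are illustrative assumptions.

```python
def time_to_fill(capacity_tb, in_rate_tbs, drain_rate_tbs):
    """Seconds a buffer can absorb input before filling, given a sustained
    input rate and a slower drain (event-filtering) rate. Illustrative
    first-order estimate, not an ATLAS design calculation."""
    net = in_rate_tbs - drain_rate_tbs
    if net <= 0:
        return float("inf")   # filtering keeps up; buffer never fills
    return capacity_tb / net

# 5 TB/s input (Phase-II figure quoted in the abstract), with an assumed
# 4 TB/s filtering drain and an assumed 3600 TB buffer:
print(time_to_fill(3600, 5.0, 4.0))   # seconds of headroom
```

    Estimates like this are the kind of question the dynamic operational model in the records is meant to answer rigorously, with queueing effects and realistic rate profiles instead of constant rates.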

  2. Functional and Aesthetic Outcome of Reconstruction of Large Oro-Facial Defects Involving the Lip after Tumor Resection

    International Nuclear Information System (INIS)

    Denewer, A.D.; Setie, A.E.; Hussein, O.A.; Aly, O.F.

    2006-01-01

    Background: Squamous cell carcinoma of the head and neck is a challenging disease for both surgeons and radiation oncologists due to the proximity of many important anatomical structures. Surgery can be curative, as these cancers usually metastasize via the bloodstream very late. Aim of the Work: This work addresses the oncologic, functional, and aesthetic factors affecting reconstruction of large orofacial defects involving the lip following tumor resection. Patients and Methods: The study reviews the surgical outcome of one hundred and twelve patients with invasive tumors at, or extending to, the lip(s), treated at the Mansoura University Surgical Oncology Department from January 2000 to January 2005. Tumor stage was T2 (43), T3 (56), and T4 (13). Nodal state was N0 in 80, N1 in 29, and N2 in three cases. AJCC stage grouping was II (T2N0) in 33 patients, III (T3N0 or T1-3N1) in 64 cases, and IV (T4 due to bone erosion, or N2) in 15 cases. The techniques used for lip reconstruction were a unilateral or bilateral myocutaneous depressor anguli oris flap (MCDAOF) for isolated lip defects (n=63); bilateral MCDAOF plus a local cervical rotational flap for chin defects (n=3); a pectoralis major myocutaneous pedicled flap for cheek defects involving the lip, together with a tongue flap for mucosal reconstruction (n=35); and a sternocleidomastoid clavicular myo-osseous flap for concomitant mandibular defects (n=12). Results: Aesthetic and functional results were evaluated regarding appearance, oral incompetence, disabling microstomia, and eating difficulties. Depressor anguli oris reconstruction allowed static and dynamic oral function in all cases, in contrast to the pectoralis major flap: within the second group there were 18 cases of oral incompetence (46.1%), nine cases of speech difficulty (23%), and five patients with poor cosmetic appearance. Total flap loss was not encountered; partial flap loss affected thirteen

  3. Modeling containment of large wildfires using generalized linear mixed-model analysis

    Science.gov (United States)

    Mark Finney; Isaac C. Grenfell; Charles W. McHugh

    2009-01-01

    Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...
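
    A minimal sketch of the modeling idea in the record above, reduced to a fixed-effects-only logistic regression fitted by gradient descent: the paper uses a generalized linear mixed model with a per-fire random effect (repeated daily observations per fire), which is omitted here to keep the sketch self-contained. All data below are synthetic.

```python
import math, random

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit logit(P(contained)) = b0 + b1 * x by batch gradient descent.
    Fixed effects only; the paper's model adds a random intercept per fire."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += (p - y)          # gradient of log-loss w.r.t. intercept
            g1 += (p - y) * x      # gradient w.r.t. slope
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

# Synthetic repeated daily observations: x could stand for a fire-danger
# index, y = 1 if the fire was contained that day (illustrative data).
rng = random.Random(0)
xs = [rng.uniform(-2, 2) for _ in range(400)]
ys = [1 if rng.random() < 1 / (1 + math.exp(-(0.5 - 1.5 * x))) else 0
      for x in xs]
b0, b1 = fit_logistic(xs, ys)
print(b0, b1)   # recovered coefficients, near the generating (0.5, -1.5)
```

    In the mixed-model version, each fire would get its own intercept drawn from a shared distribution, which is what lets the paper treat containment as a repeated-measures problem.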

  4. Dynamics of a large extra dimension inspired hybrid inflation model

    International Nuclear Information System (INIS)

    Green, Anne M.; Mazumdar, Anupam

    2002-01-01

    In low scale quantum gravity scenarios the fundamental scale of nature can be as low as 1 TeV, in order to address the naturalness of the electroweak scale. A number of difficulties arise in constructing specific models: stabilization of the radius of the extra dimensions, avoidance of overproduction of Kaluza-Klein modes, achieving successful baryogenesis and production of a close to scale-invariant spectrum of density perturbations with the correct amplitude. We examine in detail the dynamics, including radion stabilization, of a hybrid inflation model that has been proposed in order to address these difficulties, where the inflaton is a gauge singlet residing in the bulk. We find that for a low fundamental scale the phase transition, which in standard four dimensional hybrid models usually ends inflation, is slow and there is a second phase of inflation lasting for a large number of e-foldings. The density perturbations on cosmologically interesting scales exit the Hubble radius during this second phase of inflation, and we find that their amplitude is far smaller than is required. We find that the duration of the second phase of inflation can be short, so that cosmologically interesting scales exit the Hubble radius prior to the phase transition, and the density perturbations have the correct amplitude, only if the fundamental scale takes an intermediate value. Finally we comment briefly on the implications of an intermediate fundamental scale for the production of primordial black holes and baryogenesis

  5. Improving large-scale groundwater models by considering fossil gradients

    Science.gov (United States)

    Schulz, Stephan; Walther, Marc; Michelsen, Nils; Rausch, Randolf; Dirks, Heiko; Al-Saud, Mohammed; Merz, Ralf; Kolditz, Olaf; Schüth, Christoph

    2017-05-01

    Due to the limited availability of surface water, many arid to semi-arid countries rely on their groundwater resources. Despite the quasi-absence of present-day replenishment, some of these groundwater bodies contain large amounts of water, which was recharged during pluvial periods of the Late Pleistocene to Early Holocene. These mostly fossil, non-renewable resources require different management schemes compared to those usually applied in renewable systems. Fossil groundwater is a finite resource and its withdrawal implies mining of aquifer storage reserves. Although they receive almost no recharge, some of them show notable hydraulic gradients and a flow towards their discharge areas, even without pumping. As a result, these systems have more discharge than recharge and hence are not in steady state, which makes their modelling, in particular the calibration, very challenging. In this study, we introduce a new calibration approach composed of four steps: (i) estimating the fossil discharge component, (ii) determining the origin of fossil discharge, (iii) fitting the hydraulic conductivity with a pseudo steady-state model, and (iv) fitting the storage capacity with a transient model by reconstructing head drawdown induced by pumping activities. Finally, we test the relevance of our approach and evaluate the effect of considering or ignoring fossil gradients on aquifer parameterization for the Upper Mega Aquifer (UMA) on the Arabian Peninsula.

  6. AtMic60 Is Involved in Plant Mitochondria Lipid Trafficking and Is Part of a Large Complex.

    Science.gov (United States)

    Michaud, Morgane; Gros, Valérie; Tardif, Marianne; Brugière, Sabine; Ferro, Myriam; Prinz, William A; Toulmay, Alexandre; Mathur, Jaideep; Wozny, Michael; Falconet, Denis; Maréchal, Eric; Block, Maryse A; Jouhet, Juliette

    2016-03-07

    The mitochondrion is an organelle originating from an endosymbiotic event and playing a role in several fundamental processes such as energy production, metabolite syntheses, and programmed cell death. This organelle is delineated by two membranes whose synthesis requires an extensive exchange of phospholipids with other cellular organelles such as endoplasmic reticulum (ER) and vacuolar membranes in yeast. These transfers of phospholipids are thought to occur by a non-vesicular pathway at contact sites between two closely apposed membranes. In plants, little is known about the biogenesis of mitochondrial membranes. Contact sites between ER and mitochondria are suspected to play a similar role in phospholipid trafficking as in yeast, but this has never been demonstrated. In contrast, it has been shown that plastids are able to transfer lipids to mitochondria during phosphate starvation. However, the proteins involved in such transfer are still unknown. Here, we identified in Arabidopsis thaliana a large lipid-enriched complex called the mitochondrial transmembrane lipoprotein (MTL) complex. The MTL complex contains proteins located in the two mitochondrial membranes and conserved in all eukaryotic cells, such as the TOM complex and AtMic60, a component of the MICOS complex. We demonstrate that AtMic60 contributes to the export of phosphatidylethanolamine from mitochondria and the import of galactoglycerolipids from plastids during phosphate starvation. Furthermore, AtMic60 promotes lipid desorption from membranes, likely as an initial step for lipid transfer, and binds to Tom40, suggesting that AtMic60 could regulate the tethering between the inner and outer membranes of mitochondria.

  7. A turbulence model for large interfaces in high Reynolds two-phase CFD

    International Nuclear Information System (INIS)

    Coste, P.; Laviéville, J.

    2015-01-01

    Highlights: • Two-phase CFD commonly involves interfaces much larger than the computational cells. • A two-phase turbulence model is developed to better take them into account. • It solves k–epsilon transport equations in each phase. • The special treatments and transfer terms at large interfaces are described. • Validation cases are presented. - Abstract: A model for two-phase (six-equation) CFD modelling of turbulence is presented, for the regions of the flow where the liquid–gas interface takes place on length scales which are much larger than the typical computational cell size. In the other regions of the flow, the liquid or gas volume fractions range from 0 to 1. Heat and mass transfer, compressibility of the fluids, are included in the system, which is used at high Reynolds numbers in large scale industrial calculations. In this context, a model based on k and ε transport equations in each phase was chosen. The paper describes the model, with a focus on the large interfaces, which require special treatments and transfer terms between the phases, including some approaches inspired from wall functions. The validation of the model is based on high Reynolds number experiments with turbulent quantities measurements of a liquid jet impinging a free surface and an air water stratified flow. A steam–water stratified condensing flow experiment is also used for an indirect validation in the case of heat and mass transfer
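
    The record above states that the model solves k and ε transport equations in each phase. For background only, the standard single-phase high-Reynolds k–ε transport equations read (textbook form; the paper's two-phase closure adds per-phase equations, interfacial transfer terms, and wall-function-like treatments at large interfaces):

```latex
\frac{\partial k}{\partial t} + U_j \frac{\partial k}{\partial x_j}
  = \frac{\partial}{\partial x_j}\!\left[\left(\nu + \frac{\nu_t}{\sigma_k}\right)
    \frac{\partial k}{\partial x_j}\right] + P_k - \varepsilon,
\qquad
\frac{\partial \varepsilon}{\partial t} + U_j \frac{\partial \varepsilon}{\partial x_j}
  = \frac{\partial}{\partial x_j}\!\left[\left(\nu + \frac{\nu_t}{\sigma_\varepsilon}\right)
    \frac{\partial \varepsilon}{\partial x_j}\right]
    + C_{1\varepsilon}\frac{\varepsilon}{k}P_k
    - C_{2\varepsilon}\frac{\varepsilon^2}{k},
\qquad
\nu_t = C_\mu \frac{k^2}{\varepsilon},
```

    with the usual constants C_mu = 0.09, C_1e = 1.44, C_2e = 1.92, sigma_k = 1.0, sigma_e = 1.3. P_k denotes the production of turbulent kinetic energy by mean shear and nu_t the eddy viscosity.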

  8. Large geospatial images discovery: metadata model and technological framework

    Directory of Open Access Journals (Sweden)

    Lukáš Brůha

    2015-12-01

    The advancements in geospatial web technology triggered efforts for disclosure of valuable resources of historical collections. This paper focuses on the role of spatial data infrastructures (SDI) in such efforts. The work describes the interplay between SDI technologies and potential use cases in libraries such as cartographic heritage. The metadata model is introduced to link up the sources from these two distinct fields. To enhance the data search capabilities, the work focuses on the representation of the content-based metadata of raster images, which is the crucial prerequisite to target the search in a more effective way. The architecture of the prototype system for automatic raster data processing, storage, analysis and distribution is introduced. The architecture responds to the characteristics of input datasets, namely to the continuous flow of very large raster data and related metadata. Proposed solutions are illustrated on the case study of cartometric analysis of digitised early maps and related metadata encoding.

  9. A large animal model for boron neutron capture therapy

    International Nuclear Information System (INIS)

    Gavin, P.R.; Kraft, S.L.; DeHaan, C.E.; Moore, M.P.; Griebenow, M.L.

    1992-01-01

    An epithermal neutron beam is needed to treat relatively deep-seated tumors. The scattering characteristics of neutrons in this energy range dictate that in vivo experiments be conducted in a large animal to prevent unacceptable total-body irradiation. The canine species has proven an excellent model for evaluating the various problems of boron neutron capture utilizing an epithermal neutron beam. This paper discusses three major components of the authors' study: (1) the pharmacokinetics of borocaptate sodium (Na2B12H11SH, or BSH) in dogs with spontaneously occurring brain tumors, (2) the radiation tolerance of normal tissues in the dog using an epithermal beam alone and in combination with borocaptate sodium, and (3) initial treatment of dogs with spontaneously occurring brain tumors utilizing borocaptate sodium and an epithermal neutron beam.

  10. Modeling of large-scale oxy-fuel combustion processes

    DEFF Research Database (Denmark)

    Yin, Chungen

    2012-01-01

    Quite a few studies have been conducted in order to implement oxy-fuel combustion with flue gas recycle in conventional utility boilers as an effort toward carbon capture and storage. However, combustion under oxy-fuel conditions is significantly different from conventional air-fuel firing......, among which radiative heat transfer under oxy-fuel conditions is one of the fundamental issues. This paper demonstrates the nongray-gas effects in modeling of large-scale oxy-fuel combustion processes. Oxy-fuel combustion of natural gas in a 609 MW utility boiler is numerically studied, in which...... calculation of the oxy-fuel WSGGM remarkably over-predicts the radiative heat transfer to the furnace walls and under-predicts the gas temperature at the furnace exit plane, which also results in a higher incomplete combustion in the gray calculation. Moreover, the gray and non-gray calculations of the same...
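
    The gray vs. non-gray distinction turns on how gas emissivity is evaluated. In a weighted-sum-of-gray-gases model (WSGGM) the total emissivity over a path length L is ε = Σᵢ aᵢ(T)·(1 − exp(−kᵢ·p·L)), while a gray calculation collapses the mixture to a single effective gas. A sketch with illustrative placeholder coefficients (not fitted oxy-fuel values from the literature):

```python
import math

# Weighted-sum-of-gray-gases emissivity: eps = sum_i a_i * (1 - exp(-k_i * p * L)).
# The coefficients below are illustrative placeholders, NOT fitted oxy-fuel values;
# real WSGGM coefficients are temperature-dependent polynomials from the literature.
K_I = [0.4, 7.0, 130.0]   # absorption coefficients, 1/(atm*m)  (assumed)
A_I = [0.4, 0.3, 0.2]     # gray-gas weights, sum <= 1          (assumed)

def wsgg_emissivity(p_atm, path_m):
    """Total emissivity of the gas mixture over a given optical path."""
    return sum(a * (1.0 - math.exp(-k * p_atm * path_m))
               for a, k in zip(A_I, K_I))

if __name__ == "__main__":
    for L in (0.1, 1.0, 10.0):
        print(L, wsgg_emissivity(0.3, L))  # emissivity grows with path length
```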

  11. Parameterization of Fire Injection Height in Large Scale Transport Model

    Science.gov (United States)

    Paugam, R.; Wooster, M.; Atherton, J.; Val Martin, M.; Freitas, S.; Kaiser, J. W.; Schultz, M. G.

    2012-12-01

    The parameterization of fire injection height in global chemistry transport models is currently a subject of debate in the atmospheric community. The approach usually proposed in the literature is based on relationships linking injection height and remote sensing products like the Fire Radiative Power (FRP), which can measure active fire properties. In this work we present an approach based on the Plume Rise Model (PRM) developed by Freitas et al (2007, 2010). This plume model is already used in different host models (e.g. WRF, BRAMS). In its original version, the fire is modeled by: a convective heat flux (CHF; pre-defined by the land cover and evaluated as a fixed part of the total heat released) and a plume radius (derived from the GOES Wildfire-ABBA product) which defines the fire extension where the CHF is homogeneously distributed. Here in our approach the Freitas model is modified; in particular we added (i) an equation for mass conservation, (ii) a scheme to parameterize horizontal entrainment/detrainment, and (iii) a new initialization module which estimates the sensible heat released by the fire on the basis of measured FRP rather than fuel cover type. The FRP and Active Fire (AF) area necessary for the initialization of the model are directly derived from a modified version of the Dozier algorithm applied to the MOD14 product. An optimization (using the simulated annealing method) of this new version of the PRM is then proposed, based on fire plume characteristics derived from the official MISR plume height project and atmospheric profiles extracted from the ECMWF analysis. The data set covers the main fire regions (Africa, Siberia, Indonesia, and North and South America) and is set up to (i) retain fires where plume height and FRP can be easily linked (i.e. avoid large fire clusters where individual plumes might interact), (ii) keep fires which show a decrease of FRP and AF area after the MISR overpass (i.e. to minimize the effect of the time period needed for the plume to
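
    The initialization step converts measured FRP into a sensible-heat forcing for the plume model. A common first-order assumption (used here purely as an illustration, not as the paper's exact formulation) is that radiation carries a roughly constant fraction f_rad ≈ 0.15 of the total heat release, so the sensible heat flux over the active-fire area is approximately FRP·(1 − f_rad)/f_rad divided by the AF area:

```python
# Convert Fire Radiative Power (FRP) to a sensible heat flux for plume-rise
# initialization. Assumption (illustrative): radiation carries a fixed fraction
# F_RAD of the total heat release, and the remainder is available as sensible heat.
F_RAD = 0.15  # radiant fraction, an assumed mid-range value

def sensible_heat_flux(frp_mw, af_area_km2):
    """Sensible heat flux (kW/m^2) over the active-fire area."""
    total_mw = frp_mw / F_RAD           # total heat release implied by FRP
    sensible_mw = total_mw - frp_mw     # subtract the radiated part
    area_m2 = af_area_km2 * 1e6
    return sensible_mw * 1e3 / area_m2  # MW -> kW, per square metre

if __name__ == "__main__":
    print(sensible_heat_flux(500.0, 1.0))  # a 500 MW fire over 1 km^2
```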

  12. Large scale solar district heating. Evaluation, modelling and designing

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the tool for design studies and on a local energy planning case. The evaluation of the central solar heating technology is based on measurements on the case plant in Marstal, Denmark, and on published and unpublished data for other, mainly Danish, CSDHP plants. Evaluations on the thermal, economical and environmental performances are reported, based on the experiences from the last decade. The measurements from the Marstal case are analysed, experiences extracted and minor improvements to the plant design proposed. For the detailed designing and energy planning of CSDHPs, a computer simulation model is developed and validated on the measurements from the Marstal case. The final model is then generalised to a 'generic' model for CSDHPs in general. The meteorological reference data, Danish Reference Year, is applied to find the mean performance for the plant designs. To find the expectable variety of the thermal performance of such plants, a method is proposed where data from a year with poor solar irradiation and a year with strong solar irradiation are applied. Equipped with a simulation tool design studies are carried out spreading from parameter analysis over energy planning for a new settlement to a proposal for the combination of plane solar collectors with high performance solar collectors, exemplified by a trough solar collector. The methodology of utilising computer simulation proved to be a cheap and relevant tool in the design of future solar heating plants. The thesis also exposed the demand for developing computer models for the more advanced solar collector designs and especially for the control operation of CSHPs. In the final chapter the CSHP technology is put into perspective with respect to other possible technologies to find the relevance of the application

  13. Simulating large-scale pedestrian movement using CA and event driven model: Methodology and case study

    Science.gov (United States)

    Li, Jun; Fu, Siyao; He, Haibo; Jia, Hongfei; Li, Yanzhong; Guo, Yi

    2015-11-01

    Large-scale regional evacuation is an important part of national security emergency response planning, and the emergency evacuation of large commercial shopping areas, as typical service systems, is an active research topic. A systematic methodology based on Cellular Automata with a Dynamic Floor Field and an event-driven model is proposed and examined within the context of a case study involving evacuation from a commercial shopping mall. Pedestrian walking is based on the Cellular Automata, while the event-driven model is adopted to simulate pedestrian movement patterns; the simulation process is divided into a normal situation and an emergency evacuation. The model is composed of four layers: an environment layer, a customer layer, a clerk layer and a trajectory layer. In simulating the movement routes of pedestrians, the model takes into account the purchase intentions of customers and the density of pedestrians. Based on the evacuation model combining Cellular Automata with a Dynamic Floor Field and the event-driven model, the behavioral characteristics of customers and clerks can be reflected in both normal situations and emergency evacuations. The distribution of individual evacuation times as a function of initial position and the dynamics of the evacuation process are studied. Our results indicate that the evacuation model combining Cellular Automata with a Dynamic Floor Field and event-driven scheduling can be used to simulate pedestrian flows in indoor areas with complicated surroundings and to investigate the layout of a shopping mall.
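
    The core floor-field mechanism can be sketched in a few lines: precompute each cell's distance to the nearest exit (the static floor field), then let every pedestrian greedily step to the free neighbour with the lowest field value. This is an illustration only; the paper's model adds a dynamic floor field, an event-driven scheduler and separate customer/clerk layers, none of which are reproduced here.

```python
# Minimal static-floor-field cellular automaton for evacuation (illustration
# only; the paper adds a dynamic floor field and event-driven scheduling).
from collections import deque

FREE = "."  # any other character in the grid is treated as a wall

def static_floor_field(grid, exits):
    """BFS distance from every free cell to the nearest exit."""
    dist = {e: 0 for e in exits}
    q = deque(exits)
    while q:
        r, c = q.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == FREE and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                q.append((nr, nc))
    return dist

def evacuate(grid, exits, pedestrians, max_steps=100):
    """Each pedestrian moves to the free neighbour with the lowest floor-field
    value (one pedestrian per cell); pedestrians on an exit cell leave."""
    field = static_floor_field(grid, exits)
    peds = set(pedestrians)
    for step in range(max_steps):
        if not peds:
            return step
        nxt = set()
        for r, c in sorted(peds, key=lambda p: field.get(p, 1e9)):
            if (r, c) in exits:
                continue  # evacuated
            options = [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1), (r, c)]
            options = [p for p in options if p in field and p not in nxt]
            nxt.add(min(options, key=field.get))
        peds = nxt
    return max_steps

if __name__ == "__main__":
    room = ["....", "....", "...."]
    print(evacuate(room, exits={(0, 0)}, pedestrians=[(2, 3), (1, 2)]))
```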

  14. The Oskarshamn model for public involvement in the siting of nuclear facilities

    International Nuclear Information System (INIS)

    Aahagen, H.; Carlsson, Torsten; Hallberg, K.; Andersson, Kjell

    1999-01-01

    The Oskarshamn model has so far worked extremely well as a tool to achieve openness and public participation. The municipality's involvement has been successful in several respects: it has been possible to influence the program, to a large extent, to meet certain municipality conditions and to ensure the local perspective, and local competence has increased to a considerable degree. The activities generated by the six working groups, with a total of 40 members, have generated a large number of contacts with various organisations, schools, mass media, individuals in the general public and interest groups. For the future, clarification of the disposal method and site selection criteria, as well as the site selection process as such, is crucial. The municipality has also emphasised the importance of SKB having shown the integration between the site selection criteria, the feasibility study and the safety assessment. Furthermore, the programs for the encapsulation facility and the repository must be co-ordinated. For Oskarshamn it will be of utmost importance that the repository is well under way towards realisation before the encapsulation facility is built.

  15. Assessment for Improvement: Two Models for Assessing a Large Quantitative Reasoning Requirement

    Directory of Open Access Journals (Sweden)

    Mary C. Wright

    2015-03-01

    We present two models for assessment of a large and diverse quantitative reasoning (QR) requirement at the University of Michigan. These approaches address two key challenges in assessment: (1) dissemination of findings for curricular improvement and (2) resource constraints associated with measurement of large programs. Approaches we present for data collection include convergent validation of self-report surveys, as well as use of mixed methods and learning analytics. Strategies we present for dissemination of findings include meetings with instructors to share data and best practices, sharing of results through social media, and use of easily accessible dashboards. These assessment approaches may be of particular interest to universities with large numbers of students engaging in a QR experience, projects that involve multiple courses with diverse instructional goals, or those who wish to promote evidence-based curricular improvement.

  16. GIS for large-scale watershed observational data model

    Science.gov (United States)

    Patino-Gomez, Carlos

    Because integrated management of a river basin requires the development of models that are used for many purposes, e.g., to assess risks and possible mitigation of droughts and floods, manage water rights, assess water quality, and simply to understand the hydrology of the basin, the development of a relational database from which models can access the various data needed to describe the systems being modeled is fundamental. In order for this concept to be useful and widely applicable, however, it must have a standard design. The recently developed ArcHydro data model facilitates the organization of data according to the "basin" principle and allows access to hydrologic information by models. The development of a basin-scale relational database for the Rio Grande/Bravo basin implemented in a Geographic Information System is one of the contributions of this research. This geodatabase represents the first major attempt to establish a more complete understanding of the basin as a whole, including spatial and temporal information obtained from the United States of America and Mexico. Difficulties in processing raster datasets over large regions are studied in this research. One of the most important contributions is the application of a Raster-Network Regionalization technique, which utilizes raster-based analysis at the subregional scale in an efficient manner and combines the resulting subregional vector datasets into a regional database. Another important contribution of this research is focused on implementing a robust structure for handling huge temporal data sets related to monitoring points such as hydrometric and climatic stations, reservoir inlets and outlets, water rights, etc. For the Rio Grande study area, the ArcHydro format is applied to the historical information collected in order to include and relate these time series to the monitoring points in the geodatabase. Its standard time series format is changed to include a relationship to the agency from

  17. SCIMAP: Modelling Diffuse Pollution in Large River Basins

    Science.gov (United States)

    Milledge, D.; Heathwaite, L.; Lane, S. N.; Reaney, S. M.

    2009-12-01

    Polluted rivers are a problem for the plants and animals that require clean water to survive. Watershed-scale processes can influence instream aquatic ecosystems by delivering fine sediment, solutes and organic matter from diffuse sources. To improve our rivers we need to identify the pollution sources. Models can help us to do this, but these rarely address the extent to which risky land uses are hydrologically connected, and hence able to deliver, to the drainage network. Those that do tend to apply a full hydrological scheme, which is unfeasible for large watersheds. Here we develop a risk-based modelling framework, SCIMAP, for diffuse pollution from agriculture (nitrate, phosphate and fine sediment). In each case the basis of the analysis is the joint consideration of the probability of a unit of land (a 25 m² cell) producing a particular environmental risk and then of that risk reaching the river. The components share a common treatment of hydrological connectivity but differ in their treatment of each pollution type. We test and apply SCIMAP using spatially distributed instream water quality data for some of the UK's largest catchments to infer the processes, and the associated process parameters, that matter in defining their concentrations. We use these to identify a series of risky field locations where risky land use is readily connected to the river system by overland flow.
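
    The joint-probability idea can be sketched directly: the in-stream risk contributed by a cell is the probability that it generates the pollutant times the probability that flow from that cell actually reaches the channel, i.e. the product of delivery probabilities along its flow path. A minimal 1-D hillslope illustration with invented numbers (SCIMAP derives its probabilities from land cover and topographic indices, which is not reproduced here):

```python
# SCIMAP-style risk propagation on a 1-D flow path: delivered risk of a cell =
# P(generation) * product of delivery probabilities of every cell downslope.
# Numbers are invented for illustration only.

def delivered_risk(p_generate, p_deliver):
    """p_generate[i]: probability cell i produces the pollutant.
    p_deliver[i]: probability the pollutant crosses cell i toward the stream.
    Cell 0 is furthest upslope; the stream lies below the last cell."""
    risks = []
    for i, p_gen in enumerate(p_generate):
        connectivity = 1.0
        for p in p_deliver[i:]:  # every cell between the source and the stream
            connectivity *= p
        risks.append(p_gen * connectivity)
    return risks

if __name__ == "__main__":
    # A risky field (0.9) far upslope can deliver less than a moderate
    # source (0.5) sitting right next to the stream.
    gen = [0.9, 0.2, 0.5]
    dlv = [0.5, 0.4, 0.9]
    print(delivered_risk(gen, dlv))
```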

  18. Comparative transcriptome analysis of muscular dystrophy models Large(myd), Dmd(mdx)/Large(myd) and Dmd(mdx): what makes them different?

    Science.gov (United States)

    Almeida, Camila F; Martins, Poliana Cm; Vainzof, Mariz

    2016-08-01

    Muscular dystrophies (MD) are a clinically and genetically heterogeneous group of Mendelian diseases. The underlying pathophysiology and phenotypic variability in each form are much more complex, suggesting the involvement of many other genes. Thus, here we studied the whole genome expression profile in muscles from three mice models for MD, at different time points: Dmd(mdx) (mutation in dystrophin gene), Large(myd-/-) (mutation in Large) and Dmd(mdx)/Large(myd-/-) (both mutations). The identification of altered biological functions can contribute to understand diseases and to find prognostic biomarkers and points for therapeutic intervention. We identified a substantial number of differentially expressed genes (DEGs) in each model, reflecting diseases' complexity. The main biological process affected in the three strains was immune system, accounting for the majority of enriched functional categories, followed by degeneration/regeneration and extracellular matrix remodeling processes. The most notable differences were in 21-day-old Dmd(mdx), with a high proportion of DEGs related to its regenerative capacity. A higher number of positive embryonic myosin heavy chain (eMyHC) fibers confirmed this. The new Dmd(mdx)/Large(myd-/-) model did not show a highly different transcriptome from the parental lineages, with a profile closer to Large(myd-/-), but not bearing the same regenerative potential as Dmd(mdx). This is the first report about transcriptome profile of a mouse model for congenital MD and Dmd(mdx)/Large(myd). By comparing the studied profiles, we conclude that alterations in biological functions due to the dystrophic process are very similar, and that the intense regeneration in Dmd(mdx) involves a large number of activated genes, not differentially expressed in the other two strains.

  19. Application of Pareto-efficient combustion modeling framework to large eddy simulations of turbulent reacting flows

    Science.gov (United States)

    Wu, Hao; Ihme, Matthias

    2017-11-01

    The modeling of turbulent combustion requires the consideration of different physico-chemical processes involving a vast range of time and length scales as well as a large number of scalar quantities. To reduce the computational complexity, various combustion models have been developed. Many of them can be abstracted using a lower-dimensional manifold representation. A key issue in using such lower-dimensional combustion models is the assessment as to whether a particular combustion model is adequate for representing a certain flame configuration. The Pareto-efficient combustion (PEC) modeling framework was developed to perform dynamic combustion model adaptation based on various existing manifold models. In this work, the PEC model is applied to a turbulent flame simulation, in which a computationally efficient flamelet-based combustion model is used together with a high-fidelity finite-rate chemistry model. The combination of these two models achieves high accuracy in predicting pollutant species at a relatively low computational cost. The relevant numerical methods and parallelization techniques are also discussed in this work.

  20. Modelling of heat transfer during torrefaction of large lignocellulosic biomass

    Science.gov (United States)

    Regmi, Bharat; Arku, Precious; Tasnim, Syeda Humaira; Mahmud, Shohel; Dutta, Animesh

    2018-02-01

    Preparation of feedstock is a major energy-intensive process in the thermochemical conversion of biomass into fuel. Eliminating the need to grind biomass prior to the torrefaction process would yield a potential gain in energy requirements, as an entire step would be removed. With regard to the commercialization of torrefaction technology, this study examined heat transfer inside large cylindrical biomass both numerically and experimentally during torrefaction. A numerical axisymmetric 2-D model for heat transfer during torrefaction at 270 °C for 1 h was created in COMSOL Multiphysics 5.1, considering heat generation evaluated from the experiment. The model analyzed the temperature distribution within the core and on the surface of the biomass during torrefaction for various sizes. The model results showed similarities with the experimental results. The effect of the L/D ratio on the temperature distribution within the biomass was observed by varying length and diameter and compared with experiments in the literature to find an optimal range of cylindrical biomass sizes suitable for torrefaction. The research demonstrated that a cylindrical biomass sample of 50 mm length with an L/D ratio of 2 can be torrefied with a core-surface temperature difference of less than 30 °C. The research also demonstrated that sample length has a negligible effect on the core-surface temperature difference during torrefaction when the diameter is fixed at 25 mm. This information will help to design a torrefaction processing system and develop a value chain for biomass supply without using an energy-intensive grinding process.
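
    The core-surface temperature difference can be cross-checked against the steady-state conduction solution for an infinite cylinder with uniform volumetric heat generation q̇, namely ΔT = q̇R²/(4k). The sketch below uses assumed typical-of-wood property values, not the paper's inputs, and the paper's COMSOL model is transient and 2-D, so this is only an order-of-magnitude check:

```python
# Steady-state core-surface temperature difference for an infinite cylinder
# with uniform volumetric heat generation:  dT = q * R**2 / (4 * k).
# Property values are assumed, typical-of-wood figures, not the paper's inputs.
K_WOOD = 0.15  # thermal conductivity, W/(m*K)    (assumed)
Q_GEN = 15e3   # exothermic heat generation, W/m^3 (assumed)

def core_surface_dT(radius_m, k=K_WOOD, q=Q_GEN):
    """Analytic steady-state core-to-surface temperature rise."""
    return q * radius_m**2 / (4.0 * k)

if __name__ == "__main__":
    for d_mm in (10, 25, 50):
        r = d_mm / 2 / 1000.0
        print(d_mm, core_surface_dT(r))  # difference grows with the square of R
```

    With these assumed values a 50 mm diameter cylinder gives ΔT ≈ 16 K, consistent in magnitude with the sub-30 °C differences reported above.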

  1. Environmental Impacts of Large Scale Biochar Application Through Spatial Modeling

    Science.gov (United States)

    Huber, I.; Archontoulis, S.

    2017-12-01

    In an effort to study the environmental (emissions, soil quality) and production (yield) impacts of biochar application at regional scales, we coupled the APSIM-Biochar model with the pSIMS parallel platform. So far the majority of biochar research has concentrated on lab-to-field studies to advance scientific knowledge. Regional-scale assessments are highly needed to assist decision making. The overall objective of this simulation study was to identify areas in the USA that gain the most environmentally from biochar application, as well as areas where our model predicts a notable yield increase due to the addition of biochar. We present the modifications in both the APSIM biochar and pSIMS components that were necessary to facilitate these large-scale model runs across several regions in the United States at a resolution of 5 arcminutes. This study uses the AgMERRA global climate data set (1980-2010) and the Global Soil Dataset for Earth Systems modeling as a basis for creating its simulations, as well as local management operations for maize and soybean cropping systems and different biochar application rates. The regional-scale simulation analysis is in progress. Preliminary results showed that the model predicts that high-quality soils (particularly those common to Iowa cropping systems) do not receive much, if any, production benefit from biochar. However, soils with low soil organic matter (<0.5%) do get a noteworthy yield increase of around 5-10% in the best cases. We also found N2O emissions to be spatially and temporally specific, increasing in some areas and decreasing in others due to biochar application. In contrast, we found increases in soil organic carbon and plant-available water in all soils (top 30 cm) due to biochar application. The magnitude of these increases (% change from the control) was larger in soils with low organic matter (below 1.5%) and smaller in soils with high organic matter (above 3%) and also dependent on biochar

  2. Computation of Large Molecules with the Hartree-Fock Model

    Science.gov (United States)

    Clementi, Enrico

    1972-01-01

    The usual way to compute Hartree-Fock type functions for molecules is by an expansion of the one-electron functions (molecular orbitals) in a linear combination of analytical functions (LCAO-MO-SCF, linear combination of atomic orbitals—Molecular Orbital—Self Consistent field). The expansion coefficients are obtained variationally. This technique requires the computation of several multicenter two-electron integrals (representing the electron-electron interaction) proportional to the fourth power of the basis set size. There are several types of basis sets; the Gaussian type introduced by S. F. Boys is used herein. Since it requires from a minimum of 10 (or 15) Gaussian-type functions to about 25 (or 30) Gaussian functions to describe a second-row atom in a molecule, the fourth power dependency of the basis set has been the de facto bottleneck of quantum chemical computations in the last decade. In this paper, the concept is introduced of a “dynamical” basis set, which allows for drastic computational simplifications while retaining full numerical accuracy. Examples are given that show that computational saving in computer time of more than a factor of one hundred is achieved and that large basis sets (up to the order of several hundred Gaussian functions per molecule) can be used routinely. It is noted that the limitation in the Hartree-Fock energy (correlation energy error) can be easily computed by use of a statistical model introduced by Wigner for solid-state systems in 1934. Thus, large molecules can now be simulated by computational techniques without reverting to semi-empirical parameterization and without requiring enormous computational time and storage. PMID:16592020
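
    The fourth-power dependence on basis set size can be made concrete: the two-electron integrals (ij|kl) over N basis functions have 8-fold permutational symmetry, leaving roughly N⁴/8 unique integrals. A quick count (standard combinatorics, not taken from the paper):

```python
# Number of unique two-electron integrals (ij|kl) over N basis functions,
# using the 8-fold permutational symmetry i>=j, k>=l, (ij)>=(kl).
def n_unique_eri(n_basis):
    n_pairs = n_basis * (n_basis + 1) // 2  # unique (ij) index pairs
    return n_pairs * (n_pairs + 1) // 2     # unique pairs of pairs

if __name__ == "__main__":
    for n in (10, 25, 100, 300):
        print(n, n_unique_eri(n))  # grows roughly as n**4 / 8
```

    Already at a few hundred basis functions the count runs into the billions, which is the "de facto bottleneck" the abstract refers to.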

  3. A cooperative strategy for parameter estimation in large scale systems biology models.

    Science.gov (United States)

    Villaverde, Alejandro F; Egea, Jose A; Banga, Julio R

    2012-06-22

    Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows experimentally verifiable predictions to be made. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs ("threads") that run in parallel in different processors. Each thread implements a state-of-the-art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models related to the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. The cooperative CeSS strategy is a general purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here can be easily extended to incorporate other global and
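
    The cooperation mechanism can be illustrated with a much simpler stand-in than eSS: several greedy random-search workers run in parallel threads and read/write a shared best solution, so an improvement found by one thread immediately redirects the others. This toy sketch is not the CeSS algorithm, only its information-sharing skeleton:

```python
import random
import threading

# Cooperative parallel search, toy version: each worker perturbs the SHARED
# best solution (cooperation = information sharing) and publishes improvements.
# Stand-in for CeSS: the real method runs enhanced Scatter Search per thread.

def cooperative_minimize(cost, x0, n_workers=3, n_iters=300, sigma=0.5):
    best = {"x": list(x0), "f": cost(x0)}
    lock = threading.Lock()

    def worker(seed):
        rng = random.Random(seed)
        for _ in range(n_iters):
            with lock:                  # read the current shared best
                x = list(best["x"])
            cand = [xi + rng.gauss(0.0, sigma) for xi in x]
            f = cost(cand)
            with lock:                  # publish an improvement
                if f < best["f"]:
                    best["x"], best["f"] = cand, f

    threads = [threading.Thread(target=worker, args=(s,)) for s in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return best

if __name__ == "__main__":
    sphere = lambda x: sum(xi * xi for xi in x)
    result = cooperative_minimize(sphere, [5.0, 5.0])
    print(result["f"])  # far below the starting cost of 50.0
```

    Each worker reads the shared best under a lock before perturbing it; this is the "information sharing between threads" described in the abstract, reduced to its simplest possible form.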

  5. Modeling and analysis of large-eddy simulations of particle-laden turbulent boundary layer flows

    KAUST Repository

    Rahman, Mustafa M.

    2017-01-05

    We describe a framework for the large-eddy simulation of solid particles suspended and transported within an incompressible turbulent boundary layer (TBL). For the fluid phase, the large-eddy simulation (LES) of the incompressible turbulent boundary layer employs a stretched spiral vortex subgrid-scale model and a virtual wall model similar to the work of Cheng, Pullin & Samtaney (J. Fluid Mech., 2015). This LES model is virtually parameter free and involves no active filtering of the computed velocity field. Furthermore, a recycling method to generate turbulent inflow is implemented. For the particle phase, the direct quadrature method of moments (DQMOM) is chosen, in which the weights and abscissas of the quadrature approximation are tracked directly rather than the moments themselves. The numerical method in this framework is based on a fractional-step method with an energy-conservative fourth-order finite difference scheme on a staggered mesh. This code is parallelized based on the standard message passing interface (MPI) protocol and is designed for distributed-memory machines. It is proposed to utilize this framework to examine transport of particles in very large-scale simulations. The solver is validated using the well-known Taylor–Green vortex case. A large-scale sandstorm case is simulated and the altitude variations of number density along with its fluctuations are quantified.

  6. Effects of deceptive packaging and product involvement on purchase intention: an elaboration likelihood model perspective.

    Science.gov (United States)

    Lammers, H B

    2000-04-01

    From an Elaboration Likelihood Model perspective, it was hypothesized that postexposure awareness of deceptive packaging claims would have a greater negative effect on scores for purchase intention by consumers lowly involved rather than highly involved with a product (n = 40). Undergraduates who were classified as either highly or lowly (ns = 20 and 20) involved with M&Ms examined either a deceptive or non-deceptive package design for M&Ms candy and were subsequently informed of the deception employed in the packaging before finally rating their intention to purchase. As anticipated, highly deceived subjects who were low in involvement rated intention to purchase lower than their highly involved peers. Overall, the results attest to the robustness of the model and suggest that the model has implications beyond advertising effects and into packaging effects.

  7. Modeling the Relations among Parental Involvement, School Engagement and Academic Performance of High School Students

    Science.gov (United States)

    Al-Alwan, Ahmed F.

    2014-01-01

    The author proposed a model to explain how parental involvement and school engagement relate to academic performance. Participants were 671 ninth- and tenth-grade students who completed two scales of "parental involvement" and "school engagement" in their regular classrooms. Results of the path analysis suggested that the…

  8. The Development of a Structural Equation Model to Demonstrate the Correlations between Marijuana Use and Involvement

    Science.gov (United States)

    Borcherding, Matthew J.

    2017-01-01

    This quantitative study examined the effects of marijuana on academic and social involvement in undergraduates using a structural equation model. The study was conducted at a midsized comprehensive community college in the Midwest and was guided by Astin's (1985) theory of student involvement. A survey link was e-mailed to all 4,527 eligible…

  9. Examining a Causal Model of Early Drug Involvement Among Inner City Junior High School Youths.

    Science.gov (United States)

    Dembo, Richard; And Others

    Reflecting the need to construct more inclusive, socially and culturally relevant conceptions of drug use than currently exist, the determinants of drug involvement among inner-city youths within the context of a causal model were investigated. The drug involvement of the Black and Puerto Rican junior high school girls and boys was hypothesized to…

  10. Empirical Models of Social Learning in a Large, Evolving Network.

    Directory of Open Access Journals (Sweden)

    Ayşe Başar Bener

    This paper advances theories of social learning through an empirical examination of how social networks change over time. Social networks are important for learning because they constrain individuals' access to information about the behaviors and cognitions of other people. Using data on a large social network of mobile device users over a one-month time period, we test three hypotheses: 1) attraction homophily causes individuals to form ties on the basis of attribute similarity, 2) aversion homophily causes individuals to delete existing ties on the basis of attribute dissimilarity, and 3) social influence causes individuals to adopt the attributes of others they share direct ties with. Statistical models offer varied degrees of support for all three hypotheses and show that these mechanisms are more complex than assumed in prior work. Although homophily is normally thought of as a process of attraction, people also avoid relationships with others who are different. These mechanisms have distinct effects on network structure. While social influence does help explain behavior, people tend to follow global trends more than they follow their friends.

  11. Empirical Models of Social Learning in a Large, Evolving Network.

    Science.gov (United States)

    Bener, Ayşe Başar; Çağlayan, Bora; Henry, Adam Douglas; Prałat, Paweł

    2016-01-01

    This paper advances theories of social learning through an empirical examination of how social networks change over time. Social networks are important for learning because they constrain individuals' access to information about the behaviors and cognitions of other people. Using data on a large social network of mobile device users over a one-month time period, we test three hypotheses: 1) attraction homophily causes individuals to form ties on the basis of attribute similarity, 2) aversion homophily causes individuals to delete existing ties on the basis of attribute dissimilarity, and 3) social influence causes individuals to adopt the attributes of others they share direct ties with. Statistical models offer varied degrees of support for all three hypotheses and show that these mechanisms are more complex than assumed in prior work. Although homophily is normally thought of as a process of attraction, people also avoid relationships with others who are different. These mechanisms have distinct effects on network structure. While social influence does help explain behavior, people tend to follow global trends more than they follow their friends.
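
    The three hypothesized mechanisms can be illustrated with a toy tie-update rule. This is a deliberately simplified sketch, not the statistical models estimated in the paper; the similarity threshold and attribute values are invented.

    ```python
    import random

    def step(attrs, ties, sim_thresh=0.2, rng=None):
        """One update of a toy homophily/influence dynamic.
        attrs: dict node -> float attribute; ties: set of frozenset pairs."""
        rng = rng or random.Random(0)
        a, b = rng.sample(list(attrs), 2)
        pair = frozenset((a, b))
        d = abs(attrs[a] - attrs[b])
        if pair not in ties and d <= sim_thresh:   # (1) attraction homophily
            ties.add(pair)
        elif pair in ties and d > sim_thresh:      # (2) aversion homophily
            ties.discard(pair)
        if pair in ties:                           # (3) social influence
            mean = (attrs[a] + attrs[b]) / 2
            attrs[a] = attrs[b] = round(mean, 6)
        return attrs, ties

    # two similar nodes (0, 1) and one dissimilar node (2)
    attrs = {0: 0.1, 1: 0.15, 2: 0.9}
    ties = set()
    rng = random.Random(42)
    for _ in range(50):
        attrs, ties = step(attrs, ties, rng=rng)
    ```

    Under this rule the similar pair can tie and converge, while the dissimilar node never forms a tie, mirroring attraction and aversion operating together.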

  12. Applying the Plan-Do-Study-Act (PDSA) approach to a large pragmatic study involving safety net clinics.

    Science.gov (United States)

    Coury, Jennifer; Schneider, Jennifer L; Rivelli, Jennifer S; Petrik, Amanda F; Seibel, Evelyn; D'Agostini, Brieshon; Taplin, Stephen H; Green, Beverly B; Coronado, Gloria D

    2017-06-19

    The Plan-Do-Study-Act (PDSA) cycle is a commonly used improvement process in health care settings, although its documented use in pragmatic clinical research is rare. A recent pragmatic clinical research study, called the Strategies and Opportunities to STOP Colon Cancer in Priority Populations (STOP CRC), used this process to optimize the research implementation of an automated colon cancer screening outreach program in intervention clinics. We describe the process of using this PDSA approach, the selection of PDSA topics by clinic leaders, and project leaders' reactions to using PDSA in pragmatic research. STOP CRC is a cluster-randomized pragmatic study that aims to test the effectiveness of a direct-mail fecal immunochemical testing (FIT) program involving eight Federally Qualified Health Centers in Oregon and California. We and a practice improvement specialist trained in the PDSA process delivered structured presentations to leaders of these centers; the presentations addressed how to apply the PDSA process to improve implementation of a mailed outreach program offering colorectal cancer screening through FIT tests. Center leaders submitted PDSA plans and delivered reports via webinar at quarterly meetings of the project's advisory board. Project staff conducted one-on-one, 45-min interviews with project leads from each health center to assess the reaction to and value of the PDSA process in supporting the implementation of STOP CRC. Clinic-selected PDSA activities included refining the intervention staffing model, improving outreach materials, and changing workflow steps. Common benefits of using PDSA cycles in pragmatic research were that it provided a structure for staff to focus on improving the program and it allowed staff to test the change they wanted to see. A commonly reported challenge was measuring the success of the PDSA process with the available electronic medical record tools. Understanding how the PDSA process can be applied to pragmatic…

  13. Large wind power plants modeling techniques for power system simulation studies

    Energy Technology Data Exchange (ETDEWEB)

    Larose, Christian; Gagnon, Richard; Turmel, Gilbert; Giroux, Pierre; Brochu, Jacques [IREQ Hydro-Quebec Research Institute, Varennes, QC (Canada); McNabb, Danielle; Lefebvre, Daniel [Hydro-Quebec TransEnergie, Montreal, QC (Canada)

    2009-07-01

    This paper presents efficient modeling techniques for the simulation of large wind power plants in the EMT domain using a parallel supercomputer. Using these techniques, large wind power plants can be simulated in detail, with each wind turbine individually represented, as well as the collector and receiving network. The simulation speed of the resulting models is fast enough to perform both EMT and transient stability studies. The techniques are applied to develop a detailed EMT model of a generic wind power plant consisting of 73 x 1.5-MW doubly-fed induction generator (DFIG) wind turbines. The modeling techniques are validated by comparison with a Matlab/SimPowerSystems simulation. To demonstrate the simulation capabilities of these techniques, simulations involving a 120-bus receiving network with two generic wind power plants (146 wind turbines) are performed. The complete system is modeled using the Hypersim simulator and Matlab/SimPowerSystems. The simulations are performed on a 32-processor supercomputer using an EMTP-like solution with a time step of 18.4 μs. The simulation runs only 10 times slower than real time, a huge gain in performance compared to traditional tools. Because the simulation is designed to run continuously, it never stops, making it possible to perform thousands of tests via automatic testing tools. (orig.)

  14. Large Animal Stroke Models vs. Rodent Stroke Models, Pros and Cons, and Combination?

    Science.gov (United States)

    Cai, Bin; Wang, Ning

    2016-01-01

    Stroke is a leading cause of serious long-term disability worldwide and the second leading cause of death in many countries. Long-time attempts to salvage dying neurons via various neuroprotective agents have failed in stroke translational research, owing in part to the huge gap between animal stroke models and stroke patients, which also suggests that rodent models have limited predictive value and that alternate large animal models are likely to become important in future translational research. The genetic background, physiological characteristics, behavioral characteristics, and brain structure of large animals, especially nonhuman primates, are analogous to humans, and resemble humans in stroke. Moreover, relatively new regional imaging techniques, measurements of regional cerebral blood flow, and sophisticated physiological monitoring can be more easily performed on the same animal at multiple time points. As a result, we can use large animal stroke models to decrease the gap and promote translation of basic science stroke research. At the same time, we should not neglect the disadvantages of the large animal stroke model such as the significant expense and ethical considerations, which can be overcome by rodent models. Rodents should be selected as stroke models for initial testing and primates or cats are desirable as a second species, which was recommended by the Stroke Therapy Academic Industry Roundtable (STAIR) group in 2009.

  15. Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam's Window.

    Science.gov (United States)

    Onorante, Luca; Raftery, Adrian E

    2016-01-01

    Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam's window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods.
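
    A minimal sketch of the dynamic Occam's window idea follows: carry model probabilities forward with a forgetting step, update them with one-step-ahead predictive likelihoods, then drop models far below the current best. This is not the authors' exact recursion; the forgetting factor, cutoff, and likelihood values are illustrative.

    ```python
    def occams_window(post, likelihoods, alpha=0.99, cut=0.001):
        """One DMA-style update followed by a dynamic Occam's window cut.
        post: dict model -> probability; likelihoods: dict model -> predictive density."""
        # forgetting step: flatten probabilities slightly toward each other
        w = {m: p**alpha for m, p in post.items()}
        s = sum(w.values())
        w = {m: v / s for m, v in w.items()}
        # Bayesian update with the one-step-ahead predictive likelihood
        upd = {m: w[m] * likelihoods[m] for m in w}
        s = sum(upd.values())
        upd = {m: v / s for m, v in upd.items()}
        # Occam's window: discard models whose probability is far below the best
        best = max(upd.values())
        kept = {m: v for m, v in upd.items() if v >= cut * best}
        s = sum(kept.values())
        return {m: v / s for m, v in kept.items()}

    post = {"M1": 0.5, "M2": 0.3, "M3": 0.2}
    # M3's data fit collapses this period, so the window prunes it
    post = occams_window(post, {"M1": 0.9, "M2": 0.8, "M3": 1e-6})
    ```

    Pruning keeps the model set small, which is what makes DMA feasible when the full model space is too large to enumerate.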

  16. Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam’s Window*

    Science.gov (United States)

    Onorante, Luca; Raftery, Adrian E.

    2015-01-01

    Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam’s window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods. PMID:26917859

  17. Virtualizing ancient Rome: 3D acquisition and modeling of a large plaster-of-Paris model of imperial Rome

    Science.gov (United States)

    Guidi, Gabriele; Frischer, Bernard; De Simone, Monica; Cioci, Andrea; Spinetti, Alessandro; Carosso, Luca; Micoli, Laura L.; Russo, Michele; Grasso, Tommaso

    2005-01-01

    Computer modeling through digital range images has been used for many applications, including 3D modeling of objects belonging to our cultural heritage. The scales involved range from small objects (e.g. pottery), to middle-sized works of art (statues, architectural decorations), up to very large structures (architectural and archaeological monuments). For each of these applications, suitable sensors and methodologies have been explored by different authors. The object modeled in this project is the "Plastico di Roma antica," a large plaster-of-Paris model of imperial Rome (16 x 17 meters) created in the last century. Its overall size demands an acquisition approach typical of large structures, yet it is also characterized by extremely tiny details typical of small objects (houses are a few centimeters high; their doors, windows, etc. are smaller than 1 centimeter). This paper gives an account of the procedures followed for resolving this "contradiction" and describes how a huge 3D model was acquired and generated by using a special metrology Laser Radar. The procedures for reorienting the huge point clouds from each acquisition phase into a single reference system, based on the measurement of fixed redundant references, are described. For mesh editing, the data set was split into sub-areas of 2 x 2 meters each; this subdivision was necessary owing to the huge number of points in each individual scan (50-60 million). The final merge of the edited parts made it possible to create a single mesh. All these processes were performed with software specifically designed for this project, since no commercial package could be found that was suitable for managing such a large number of points. Preliminary models are presented. Finally, the significance of the project is discussed in terms of the overall project known as "Rome Reborn," of which the present acquisition is an important component.
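
    The 2 x 2 m subdivision amounts to bucketing points by tile index, which can be sketched as below. The coordinates are hypothetical; the project's actual pipeline used purpose-built software.

    ```python
    from collections import defaultdict

    def tile_points(points, tile=2.0):
        """Bucket (x, y, z) points into tile x tile metre sub-areas so each
        bucket is small enough to edit independently."""
        tiles = defaultdict(list)
        for x, y, z in points:
            tiles[(int(x // tile), int(y // tile))].append((x, y, z))
        return dict(tiles)

    # hypothetical points spread over the 16 x 17 m model footprint
    pts = [(0.5, 0.5, 0.01), (1.9, 0.1, 0.02), (2.1, 0.1, 0.02), (15.9, 16.9, 0.05)]
    tiles = tile_points(pts)
    ```

    Each tile can then be meshed and edited on its own, with the final merge stitching the tile meshes back into a single model.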

  18. The use of public participation and economic appraisal for public involvement in large-scale hydropower projects: Case study of the Nam Theun 2 Hydropower Project

    International Nuclear Information System (INIS)

    Mirumachi, Naho; Torriti, Jacopo

    2012-01-01

    Gaining public acceptance is one of the main issues with large-scale low-carbon projects such as hydropower development. The World Commission on Dams has recommended that, to gain public acceptance, public involvement is necessary in the decision-making process. Because international financial institutions are financially significant actors in the planning and implementation of large-scale hydropower projects in developing countries, the paper examines the ways in which they may influence public involvement. Using the case study of the Nam Theun 2 Hydropower Project in Laos, the paper analyses how public involvement facilitated by the Asian Development Bank had a bearing on procedural and distributional justice. The paper analyses the extent of public participation and the assessment of full social and environmental costs of the project in the Cost-Benefit Analysis conducted during the project appraisal stage. It is argued that while efforts were made to involve the public, several factors influenced procedural and distributional justice: the late contribution of the Asian Development Bank in the project appraisal stage, and the treatment of non-market values and the discount rate used to calculate the full social and environmental costs. - Highlights: ► Public acceptance in large-scale hydropower projects is examined. ► Both procedural and distributional justice are important for public acceptance. ► International Financial Institutions can influence the level of public involvement. ► Public involvement benefits consideration of non-market values and discount rates.
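
    The sensitivity of project appraisal to the discount rate can be illustrated with a toy cost-benefit calculation: with hypothetical cash flows that place environmental costs late in the project life, the sign of the net present value flips depending on the rate chosen. All figures below are invented for illustration.

    ```python
    def npv(cashflows, rate):
        """Net present value; cashflows[t] is the net benefit in year t."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

    # hypothetical project: up-front cost, 30 years of power revenue,
    # then 20 years of environmental (non-market) costs
    flows = [-1000.0] + [80.0] * 30 + [-50.0] * 20

    npv_high = npv(flows, 0.10)  # a high discount rate shrinks distant costs and benefits
    npv_low = npv(flows, 0.02)   # a low rate gives late environmental costs more weight
    ```

    Here the high-rate appraisal rejects the project while the low-rate appraisal accepts it, which is why the choice of discount rate for non-market values matters for distributional justice.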

  19. Towards large scale stochastic rainfall models for flood risk assessment in trans-national basins

    Science.gov (United States)

    Serinaldi, F.; Kilsby, C. G.

    2012-04-01

    While extensive research has been devoted to rainfall-runoff modelling for risk assessment in small and medium size watersheds, less attention has been paid, so far, to large trans-national basins, where flood events have severe societal and economic impacts with magnitudes quantified in billions of Euros. As an example, in the April 2006 flood events along the Danube basin at least 10 people lost their lives and up to 30,000 people were displaced, with overall damages estimated at more than half a billion Euros. In this context, refined analytical methods are fundamental to improve the risk assessment and, in turn, the design of structural and non-structural measures of protection, such as hydraulic works and insurance/reinsurance policies. Since flood events are mainly driven by exceptional rainfall events, suitable characterization and modelling of the space-time properties of rainfall fields is a key issue in performing a reliable flood risk analysis based on alternative precipitation scenarios to be fed into a new generation of large-scale rainfall-runoff models. Ultimately, this approach should be extended to a global flood risk model. However, as the need for rainfall models able to account for and simulate spatio-temporal properties of rainfall fields over large areas is rather new, the development of new rainfall simulation frameworks is a challenging task that involves overcoming the drawbacks of the existing modelling schemes (devised for smaller spatial scales) while keeping their desirable properties. In this study, we critically summarize the most widely used approaches for rainfall simulation. Focusing on stochastic approaches, we stress the importance of introducing suitable climate forcings in these simulation schemes in order to account for the physical coherence of rainfall fields over wide areas. Based on preliminary considerations, we suggest a modelling framework relying on the Generalized Additive Models for Location, Scale and Shape (GAMLSS).
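
    As a minimal illustration of the stochastic approach, the sketch below generates a single-site daily series with a two-state Markov chain for wet/dry occurrence and gamma-distributed wet-day amounts. This is far simpler than the space-time fields the study targets, and all parameter values are invented.

    ```python
    import random

    def simulate_rainfall(days=365, p_ww=0.6, p_dw=0.2, shape=0.8, scale=8.0, seed=1):
        """Toy single-site stochastic rainfall model: Markov-chain occurrence
        (p_ww = P(wet|wet), p_dw = P(wet|dry)) and gamma wet-day amounts (mm)."""
        rng = random.Random(seed)
        wet, series = False, []
        for _ in range(days):
            p = p_ww if wet else p_dw
            wet = rng.random() < p
            series.append(rng.gammavariate(shape, scale) if wet else 0.0)
        return series

    series = simulate_rainfall()
    wet_frac = sum(1 for r in series if r > 0) / len(series)
    ```

    The Markov occurrence process gives wet-day clustering; a large-scale version would replace the fixed parameters with climate-driven covariates and add spatial correlation across sites.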

  20. Economic Model Predictive Control for Large-Scale and Distributed Energy Systems

    DEFF Research Database (Denmark)

    Standardi, Laura

    In this thesis, we consider control strategies for large and distributed energy systems that are important for the implementation of smart grid technologies. An electrical grid has to ensure reliability and avoid long-term interruptions in the power supply. Moreover, the share of Renewable Energy Sources (RESs) in the smart grids is increasing. These energy sources bring uncertainty to the production due to their fluctuations. Hence, smart grids need suitable control systems that are able to continuously balance power production and consumption. We apply the Economic Model Predictive Control (EMPC) strategy to optimise the economic performance of the energy systems and to balance power production and consumption. In the case of large-scale energy systems, the electrical grid connects a high number of power units. Because of this, the related control problem involves a high number of variables...
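
    The balancing idea behind EMPC can be conveyed, in a drastically simplified single-period form, by a greedy merit-order dispatch. The thesis itself solves a receding-horizon optimisation over many units and time steps; the unit costs and capacities below are hypothetical.

    ```python
    def dispatch(demand, renewables, units):
        """Cover the residual load (demand minus renewables) with the cheapest
        units first. units: list of (marginal_cost, capacity) pairs."""
        residual = max(demand - renewables, 0.0)
        plan, cost = [], 0.0
        for mc, cap in sorted(units):  # merit order: cheapest units first
            g = min(cap, residual)
            plan.append(g)
            cost += mc * g
            residual -= g
        if residual > 1e-9:
            raise ValueError("insufficient capacity to balance the grid")
        return plan, cost

    # hypothetical: 100 MW demand, 30 MW renewable infeed, two dispatchable units
    plan, cost = dispatch(demand=100.0, renewables=30.0,
                          units=[(50.0, 40.0), (20.0, 50.0)])
    ```

    An EMPC controller solves a version of this economic trade-off jointly over a prediction horizon, re-optimising at every step as renewable forecasts change.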

  1. Relationships among Adolescents' Leisure Motivation, Leisure Involvement, and Leisure Satisfaction: A Structural Equation Model

    Science.gov (United States)

    Chen, Ying-Chieh; Li, Ren-Hau; Chen, Sheng-Hwang

    2013-01-01

    The purpose of this cross-sectional study was to test a cause-and-effect model of factors affecting leisure satisfaction among Taiwanese adolescents. A structural equation model was proposed in which the relationships among leisure motivation, leisure involvement, and leisure satisfaction were explored. The study collected data from 701 adolescent…

  2. A Reformulated Model of Barriers to Parental Involvement in Education: Comment on Hornby and Lafaele (2011)

    Science.gov (United States)

    Fan, Weihua; Li, Nan; Sandoval, Jaime Robert

    2018-01-01

    In a 2011 article in this journal, Hornby and Lafaele provided a comprehensive model to understand barriers that may adversely impact effectiveness of parental involvement (PI) in education. The proposed explanatory model provides researchers with a new comprehensive and systematic perspective of the phenomenon in question with references from an…

  3. The Role of Student Involvement and Perceptions of Integration in a Causal Model of Student Persistence.

    Science.gov (United States)

    Berger, Joseph B.; Milem, Jeffrey F.

    1999-01-01

    This study refined and applied an integrated model of undergraduate persistence (accounting for both behavioral and perceptual components) to examine first-year retention at a private, highly selective research university. Results suggest that including behaviorally based measures of involvement improves the model's explanatory power concerning…

  4. Adolescents and Music Media: Toward an Involvement-Mediational Model of Consumption and Self-Concept

    Science.gov (United States)

    Kistler, Michelle; Rodgers, Kathleen Boyce; Power, Thomas; Austin, Erica Weintraub; Hill, Laura Griner

    2010-01-01

    Using social cognitive theory and structural regression modeling, we examined pathways between early adolescents' music media consumption, involvement with music media, and 3 domains of self-concept (physical appearance, romantic appeal, and global self-worth; N=124). A mediational model was supported for 2 domains of self-concept. Music media…

  5. Modeling the spreading of large-scale wildland fires

    Science.gov (United States)

    Mohamed Drissi

    2015-01-01

    The objective of the present study is twofold. First, the latest developments and validation results of a hybrid model designed to simulate fire patterns in heterogeneous landscapes are presented. The model combines the features of a stochastic small-world network model with those of a deterministic semi-physical model of the interaction between burning and non-burning...
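
    A percolation-style toy version of stochastic spread on a lattice conveys the flavour of such network models. This is not the authors' hybrid model; the grid size and spread probability are arbitrary.

    ```python
    import random

    def spread(n=20, p=0.45, steps=30, seed=3):
        """Toy stochastic fire spread on an n x n lattice: each burning cell
        ignites each unburnt 4-neighbour with probability p per step."""
        rng = random.Random(seed)
        burning = {(n // 2, n // 2)}  # ignition point at the centre
        burnt = set(burning)
        for _ in range(steps):
            nxt = set()
            for (i, j) in burning:
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    cell = (i + di, j + dj)
                    if 0 <= cell[0] < n and 0 <= cell[1] < n \
                            and cell not in burnt and rng.random() < p:
                        nxt.add(cell)
            burnt |= nxt
            burning = nxt
            if not burning:
                break
        return burnt

    burnt = spread()
    ```

    A hybrid model along the paper's lines would replace the uniform probability with a semi-physical ignition criterion and add long-range "small-world" links for firebrands.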

  6. Modeling Large Time Series for Efficient Approximate Query Processing

    DEFF Research Database (Denmark)

    Perera, Kasun S; Hahmann, Martin; Lehner, Wolfgang

    2015-01-01

    -wise aggregation to derive the models. These models are initially created from the original data and are kept in the database along with it. Subsequent queries are answered using the stored models rather than scanning and processing the original datasets. In order to support model query processing, we maintain...
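
    The core idea, storing per-segment models and answering aggregate queries from them instead of the raw data, can be sketched with simple least-squares line segments. This illustrates the general approach, not the paper's specific model types.

    ```python
    def fit_segments(data, seg_len=4):
        """Fit a least-squares line to each fixed-length chunk and keep only
        (slope, intercept, length) per segment as the stored model."""
        models = []
        for s in range(0, len(data), seg_len):
            ys = data[s:s + seg_len]
            n = len(ys)
            xs = range(n)
            mx, my = sum(xs) / n, sum(ys) / n
            sxx = sum((x - mx) ** 2 for x in xs) or 1.0
            slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
            models.append((slope, my - slope * mx, n))
        return models

    def approx_sum(models):
        """Answer a SUM query from the stored models, never touching raw data."""
        return sum(sum(a * x + b for x in range(n)) for a, b, n in models)

    data = [1.0, 2.0, 3.0, 4.0, 10.0, 9.0, 8.0, 7.0]
    models = fit_segments(data)
    est = approx_sum(models)
    ```

    For SUM queries the least-squares fit is even exact per segment, since residuals of a fitted line sum to zero; other aggregates would carry a model-dependent approximation error.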

  7. How senior entomologists can be involved in the annual meeting: organization and the coming together of a large event

    Science.gov (United States)

    The Annual Meeting of the Entomological Society of America is a large event for which planning starts at the end of the previous year's meeting. The President of the Society named the Program Committee Co-Chairs for Entomology 2017 at the 2015 Annual Meeting, so that they could handle the duties o...

  8. Parallel runs of a large air pollution model on a grid of Sun computers

    DEFF Research Database (Denmark)

    Alexandrov, V.N.; Owczarz, W.; Thomsen, Per Grove

    2004-01-01

    Large-scale air pollution models can successfully be used in different environmental studies. These models are described mathematically by systems of partial differential equations. Splitting procedures followed by discretization of the spatial derivatives lead to several large systems of ordin...

  9. Large animal models of rare genetic disorders: sheep as phenotypically relevant models of human genetic disease.

    Science.gov (United States)

    Pinnapureddy, Ashish R; Stayner, Cherie; McEwan, John; Baddeley, Olivia; Forman, John; Eccles, Michael R

    2015-09-02

    Animals that accurately model human disease are invaluable in medical research, allowing a critical understanding of disease mechanisms, and the opportunity to evaluate the effect of therapeutic compounds in pre-clinical studies. Many types of animal models are used world-wide, with the most common being small laboratory animals, such as mice. However, rodents often do not faithfully replicate human disease, despite their predominant use in research. This discordancy is due in part to physiological differences, such as body size and longevity. In contrast, large animal models, including sheep, provide an alternative to mice for biomedical research due to their greater physiological parallels with humans. Completion of the full genome sequences of many species, and the advent of Next Generation Sequencing (NGS) technologies, means it is now feasible to screen large populations of domesticated animals for genetic variants that resemble human genetic diseases, and generate models that more accurately model rare human pathologies. In this review, we discuss the notion of using sheep as large animal models, and their advantages in modelling human genetic disease. We exemplify several existing naturally occurring ovine variants in genes that are orthologous to human disease genes, such as the Cln6 sheep model for Batten disease. These, and other sheep models, have contributed significantly to our understanding of the relevant human disease process, in addition to providing opportunities to trial new therapies in animals with similar body and organ size to humans. Therefore sheep are a significant species with respect to the modelling of rare genetic human disease, which we summarize in this review.

  10. Customer involvement in greening the supply chain: an interpretive structural modeling methodology

    Science.gov (United States)

    Kumar, Sanjay; Luthra, Sunil; Haleem, Abid

    2013-04-01

    The role of customers in green supply chain management needs to be identified and recognized as an important research area. This paper is an attempt to explore the involvement aspect of customers in the greening of the supply chain (SC). An empirical research approach has been used to collect primary data to rank different variables for effective customer involvement in green concept implementation in the SC. An interpretive structural modeling based model has been presented, and variables have been classified using matrice d'impacts croisés multiplication appliquée à un classement (MICMAC) analysis. Contextual relationships among variables have been established using experts' opinions. The research may help practicing managers to understand the interaction among variables affecting customer involvement. Further, this understanding may be helpful in framing the policies and strategies to green the SC. Analyzing the interaction among variables for effective customer involvement in greening the SC to develop the structural model in the Indian perspective is an effort towards promoting environmental consciousness.
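
    The ISM/MICMAC step of deriving each variable's driving and dependence power from a binary direct-relation matrix can be sketched as follows. The three-variable matrix is hypothetical; real studies build it from expert judgments over many variables.

    ```python
    def micmac(adj):
        """Driving and dependence power from a binary direct-relation matrix,
        via a Warshall transitive closure (the ISM reachability matrix)."""
        n = len(adj)
        # reachability starts from the direct relations, made reflexive
        r = [[bool(adj[i][j]) or (i == j) for j in range(n)] for i in range(n)]
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    r[i][j] = r[i][j] or (r[i][k] and r[k][j])
        driving = [sum(1 for v in row if v) for row in r]               # row sums
        dependence = [sum(1 for i in range(n) if r[i][j]) for j in range(n)]  # column sums
        return driving, dependence

    # hypothetical chain of influence: variable 0 -> 1 -> 2
    adj = [[0, 1, 0],
           [0, 0, 1],
           [0, 0, 0]]
    driving, dependence = micmac(adj)
    ```

    Plotting driving against dependence power is what classifies variables into the autonomous, dependent, linkage, and driver quadrants of a MICMAC analysis.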

  11. Medical staff involvement in nursing homes: development of a conceptual model and research agenda.

    Science.gov (United States)

    Shield, Renée; Rosenthal, Marsha; Wetle, Terrie; Tyler, Denise; Clark, Melissa; Intrator, Orna

    2014-02-01

    Medical staff (physicians, nurse practitioners, physicians' assistants) involvement in nursing homes (NHs) is limited by professional guidelines, government policies, regulations, and reimbursements, creating bureaucratic burden. The conceptual NH Medical Staff Involvement Model, based on our mixed-methods research, applies the Donabedian "structure-process-outcomes" framework to the NH, identifying measures for a coordinated research agenda. We analyzed quantitative surveys and qualitative interviews conducted with medical directors, administrators, directors of nursing, other experts, and residents and family members, together with Minimum Data Set, Online Certification and Reporting System, and Medicare Part B claims data related to NH structure, process, and outcomes. NH control of medical staff, or structure, affects medical staff involvement in care processes and is associated with better outcomes (e.g., symptom management, appropriate transitions, satisfaction). The model identifies measures clarifying the impact of NH medical staff involvement on care processes and resident outcomes and has strong potential to inform regulatory policies.

  12. Bone marrow involvement in diffuse large B-cell lymphoma: correlation between FDG-PET uptake and type of cellular infiltrate

    Energy Technology Data Exchange (ETDEWEB)

    Paone, Gaetano; Itti, Emmanuel; Lin, Chieh; Meignan, Michel [Universite Paris 12, Department of Nuclear Medicine, Hopital Henri Mondor, Assistance Publique-Hopitaux de Paris (AP-HP), Creteil (France); Haioun, Corinne; Dupuis, Jehan [Universite Paris 12, Department of Clinical Haematology, Hopital Henri Mondor, Assistance Publique-Hopitaux de Paris (AP-HP), Creteil (France); Gaulard, Philippe [Universite Paris 12, Department of Pathology, Hopital Henri Mondor, Assistance Publique-Hopitaux de Paris (AP-HP), Creteil (France); Universite Paris 12, INSERM U841, Hopital Henri Mondor, Assistance Publique-Hopitaux de Paris (AP-HP), Creteil (France)

    2009-05-15

    To assess, in patients with diffuse large B-cell lymphoma (DLBCL), whether the low sensitivity of 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) for bone marrow assessment may be explained by histological characteristics of the cellular infiltrate. From a prospective cohort of 110 patients with newly diagnosed aggressive lymphoma, 21 patients with DLBCL had bone marrow involvement. Pretherapeutic FDG-PET images were interpreted visually and semiquantitatively, then correlated with the type of cellular infiltrate and known prognostic factors. Of these 21 patients, 7 (33%) had lymphoid infiltrates with a prominent component of large transformed lymphoid cells (concordant bone marrow involvement, CBMI) and 14 (67%) had lymphoid infiltrates composed of small cells (discordant bone marrow involvement, DBMI). Only 10 patients (48%) had abnormal bone marrow FDG uptake, 6 of the 7 with CBMI and 4 of the 14 with DBMI. Therefore, FDG-PET positivity in the bone marrow was significantly associated with CBMI, while FDG-PET negativity was associated with DBMI (Fisher's exact test, p=0.024). There were no significant differences in gender, age and overall survival between patients with CBMI and DBMI, while the international prognostic index was significantly higher in patients with CBMI. Our study suggests that in patients with DLBCL with bone marrow involvement bone marrow FDG uptake depends on two types of infiltrate, comprising small (DBMI) or large (CBMI) cells. This may explain the apparent low sensitivity of FDG-PET previously reported for detecting bone marrow involvement. (orig.)
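
    The reported association can be reproduced from the counts in the abstract (6 of 7 CBMI patients PET-positive vs 4 of 14 DBMI patients) with a from-scratch two-sided Fisher's exact test:

    ```python
    from math import comb

    def fisher_exact_2x2(a, b, c, d):
        """Two-sided Fisher's exact test: sum the hypergeometric probabilities
        of all tables with the same margins that are no more likely than the
        observed table [[a, b], [c, d]]."""
        r1, r2, c1 = a + b, c + d, a + c
        n = r1 + r2

        def p_table(x):  # probability that the top-left cell equals x
            return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)

        p_obs = p_table(a)
        lo, hi = max(0, c1 - r2), min(r1, c1)
        return sum(p_table(x) for x in range(lo, hi + 1)
                   if p_table(x) <= p_obs + 1e-12)

    # CBMI: 6 PET-positive, 1 PET-negative; DBMI: 4 positive, 10 negative
    p = fisher_exact_2x2(6, 1, 4, 10)
    ```

    The computed p-value matches the abstract's p=0.024, supporting the association between uptake and infiltrate type.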

  13. Bone marrow involvement in diffuse large B-cell lymphoma: correlation between FDG-PET uptake and type of cellular infiltrate

    International Nuclear Information System (INIS)

    Paone, Gaetano; Itti, Emmanuel; Lin, Chieh; Meignan, Michel; Haioun, Corinne; Dupuis, Jehan; Gaulard, Philippe

    2009-01-01

    To assess, in patients with diffuse large B-cell lymphoma (DLBCL), whether the low sensitivity of 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) for bone marrow assessment may be explained by histological characteristics of the cellular infiltrate. From a prospective cohort of 110 patients with newly diagnosed aggressive lymphoma, 21 patients with DLBCL had bone marrow involvement. Pretherapeutic FDG-PET images were interpreted visually and semiquantitatively, then correlated with the type of cellular infiltrate and known prognostic factors. Of these 21 patients, 7 (33%) had lymphoid infiltrates with a prominent component of large transformed lymphoid cells (concordant bone marrow involvement, CBMI) and 14 (67%) had lymphoid infiltrates composed of small cells (discordant bone marrow involvement, DBMI). Only 10 patients (48%) had abnormal bone marrow FDG uptake, 6 of the 7 with CBMI and 4 of the 14 with DBMI. Therefore, FDG-PET positivity in the bone marrow was significantly associated with CBMI, while FDG-PET negativity was associated with DBMI (Fisher's exact test, p=0.024). There were no significant differences in gender, age and overall survival between patients with CBMI and DBMI, while the international prognostic index was significantly higher in patients with CBMI. Our study suggests that in patients with DLBCL with bone marrow involvement bone marrow FDG uptake depends on two types of infiltrate, comprising small (DBMI) or large (CBMI) cells. This may explain the apparent low sensitivity of FDG-PET previously reported for detecting bone marrow involvement. (orig.)

  14. MODELLING OF CARBON MONOXIDE AIR POLLUTION IN LARGE CITIES BY EVALUATION OF SPECTRAL LANDSAT8 IMAGES

    Directory of Open Access Journals (Sweden)

    M. Hamzelo

    2015-12-01

    Full Text Available Air pollution in large cities is a major problem whose reduction requires multiple measures and environmental management. The main sources of this pollution are industrial activity, urban activity and transport, which release large amounts of contaminants into the air and reduce its quality. Given the variety of pollutants, the high production volumes and the local distribution of manufacturing centres, testing and measuring emissions is difficult. Substances such as carbon monoxide, sulfur dioxide, unburned hydrocarbons and lead compounds cause air pollution, and carbon monoxide is the most important of these. Today, systems for data exchange, processing, analysis and modelling are among the main pillars of air quality management and control. In this study, using the spectral signature of carbon monoxide, the most significant gaseous pollutant, LANDSAT8 images (which provide appropriate spectral bands and better spatial resolution than meteorological sensors), the SAM classification algorithm and a Geographic Information System (GIS), the spatial distribution of carbon monoxide over Tehran was modelled in 11 maps covering one year, from the beginning of 2014 to the beginning of 2015. For validation, the resulting maps were compared with those published by the Tehran air quality company using an error matrix, and four accuracy measures were examined: overall accuracy, producer's accuracy, user's accuracy and the kappa coefficient. The average accuracy was about 80%, indicating that the method and data used are suitable for this modelling task.

  15. Long-Run Properties of Large-Scale Macroeconometric Models

    OpenAIRE

    Kenneth F. Wallis; John D. Whitley

    1987-01-01

    We consider alternative approaches to the evaluation of the long-run properties of dynamic nonlinear macroeconometric models, namely dynamic simulation over an extended database, or the construction and direct solution of the steady-state version of the model. An application to a small model of the UK economy is presented. The model is found to be unstable, but a stable form can be produced by simple alterations to the structure.

  16. Expanding the Work Phases Model: User and Expert Involvement in the Construction of Online Specialised Dictionaries

    DEFF Research Database (Denmark)

    Leroyer, Patrick

    The purpose of this article is to establish new proposals for the lexicographic process and the involvement of experts and users in the construction of online specialised dictionaries. It is argued that the ENeL action should also have a view to the development of innovative theories...... and methodologies for the construction of online specialised dictionaries, and a new model for user and expert involvement is proposed....

  17. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    NARCIS (Netherlands)

    Loon, van A.F.; Huijgevoort, van M.H.J.; Lanen, van H.A.J.

    2012-01-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is how well large-scale models simulate the propagation from meteorological to hydrological

  18. Software Tools For Large Scale Interactive Hydrodynamic Modeling

    NARCIS (Netherlands)

    Donchyts, G.; Baart, F.; van Dam, A; Jagers, B; van der Pijl, S.; Piasecki, M.

    2014-01-01

    Developing easy-to-use software that combines components for simultaneous visualization, simulation and interaction is a great challenge, mainly because it involves a number of disciplines, such as computational fluid dynamics, computer graphics, and high-performance computing. One of the main

  19. Integrated modeling of the Canadian Very Large Optical Telescope

    Science.gov (United States)

    Roberts, Scott C.; Pazder, John S.; Fitzsimmons, Joeleff T.; Herriot, Glen; Loewen, Nathan; Smith, Malcolm J.; Dunn, Jennifer; Saddlemyer, Leslie K.

    2004-07-01

    We describe the VLOT integrated model, which simulates the telescope optical performance under the influence of external disturbances including wind. Details of the implementation in the MATLAB/SIMULINK environment are given, and the data structures are described. The structural to optical interface is detailed, including a discussion of coordinate transformations. The optical model includes both an interface with ZEMAX to perform raytracing analysis and an efficient Linear Optics Model for producing telescope optical path differences from within MATLAB. An extensive set of optical analysis routines has been developed for use with the integrated model. The telescope finite element model, state-space formulation and the high fidelity 1500 mode modal state-space structural dynamics model are presented. Control systems and wind models are described. We present preliminary results, showing the delivered image quality under the influence of wind on the primary mirror, with and without primary mirror control.

  20. Modeling Temporal Behavior in Large Networks: A Dynamic Mixed-Membership Model

    Energy Technology Data Exchange (ETDEWEB)

    Rossi, R; Gallagher, B; Neville, J; Henderson, K

    2011-11-11

    Given a large time-evolving network, how can we model and characterize the temporal behaviors of individual nodes (and network states)? How can we model the behavioral transition patterns of nodes? We propose a temporal behavior model that captures the 'roles' of nodes in the graph and how they evolve over time. The proposed dynamic behavioral mixed-membership model (DBMM) is scalable, fully automatic (no user-defined parameters), non-parametric/data-driven (no specific functional form or parameterization), interpretable (identifies explainable patterns), and flexible (applicable to dynamic and streaming networks). Moreover, the interpretable behavioral roles are generalizable, computationally efficient, and natively support attributes. We applied our model for (a) identifying patterns and trends of nodes and network states based on the temporal behavior, (b) predicting future structural changes, and (c) detecting unusual temporal behavior transitions. We use eight large real-world datasets from different time-evolving settings (dynamic and streaming). In particular, we model the evolving mixed-memberships and the corresponding behavioral transitions of Twitter, Facebook, IP-Traces, Email (University), Internet AS, Enron, Reality, and IMDB. The experiments demonstrate the scalability, flexibility, and effectiveness of our model for identifying interesting patterns, detecting unusual structural transitions, and predicting the future structural changes of the network and individual nodes.
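As an illustration of the role-transition idea only (not the authors' DBMM, which learns mixed-memberships via matrix factorization), the following sketch estimates a row-stochastic role-transition matrix from each node's dominant role at two consecutive snapshots; the roles and counts below are invented for the example:

```python
import numpy as np

def transition_matrix(roles_t, roles_t1, n_roles):
    """Count role changes between two snapshots and row-normalize."""
    counts = np.zeros((n_roles, n_roles))
    for a, b in zip(roles_t, roles_t1):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1.0            # leave never-observed roles as all-zero rows
    return counts / rows

roles_t  = [0, 0, 1, 1, 2, 2, 0, 1]  # dominant role of 8 nodes at time t
roles_t1 = [0, 1, 1, 1, 2, 0, 0, 2]  # dominant role of the same nodes at time t+1
T = transition_matrix(roles_t, roles_t1, n_roles=3)
```

Row i of `T` then reads as "the probability that a node in role i moves to role j"; unusual transitions can be flagged as those with low estimated probability.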

  1. Large scale debris-flow hazard assessment: a geotechnical approach and GIS modelling

    Directory of Open Access Journals (Sweden)

    G. Delmonaco

    2003-01-01

    Full Text Available A deterministic distributed model has been developed for large-scale debris-flow hazard analysis in the basin of the River Vezza (Tuscany Region, Italy). This area (51.6 km²) was affected by over 250 landslides, classified as debris/earth flows, mainly involving the metamorphic geological formations outcropping in the area and triggered by the pluviometric event of 19 June 1996. In recent decades, landslide hazard and risk analysis has been favoured by the development of GIS techniques permitting the generalisation, synthesis and modelling of stability conditions in large-scale investigations (>1:10 000). In this work, the main results derived from the application of a geotechnical model coupled with a hydrological model for debris-flow hazard assessment are reported. The analysis comprised the following steps: a landslide inventory map derived from aerial photo interpretation and direct field survey; generation of a database and digital maps; elaboration of a DTM and derived themes (i.e. a slope angle map); definition of a superficial soil thickness map; geotechnical soil characterisation through back-analysis on test slopes and laboratory tests; inference of the influence of precipitation, for distinct return periods, on ponding time and pore pressure generation; implementation of a slope stability model (infinite slope model); and generalisation of the safety factor for estimated rainfall events with different return periods. This approach has allowed the identification of potential source areas of debris-flow triggering for precipitation events with estimated return periods of 10, 50, 75 and 100 years. The model shows a dramatic decrease in safety conditions for the simulation related to a 75-year return period rainfall event, corresponding to an estimated cumulated daily intensity of 280–330 mm. This value can be considered the hydrological triggering
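The infinite slope model used in such analyses has a closed-form factor of safety. A minimal sketch (with invented soil parameters, not those calibrated for the Vezza basin) shows how rising pore pressure degrades stability:

```python
from math import cos, sin, tan, radians

def factor_of_safety(c, phi_deg, gamma, gamma_w, z, beta_deg, m):
    """Infinite-slope factor of safety.
    c: effective cohesion (kPa); phi_deg: friction angle (deg);
    gamma, gamma_w: soil and water unit weights (kN/m^3);
    z: soil thickness (m); beta_deg: slope angle (deg);
    m: saturated fraction of the soil column (pore-pressure proxy)."""
    phi, beta = radians(phi_deg), radians(beta_deg)
    resisting = c + (gamma - m * gamma_w) * z * cos(beta) ** 2 * tan(phi)
    driving = gamma * z * sin(beta) * cos(beta)
    return resisting / driving

# same slope, dry versus fully saturated
fs_dry = factor_of_safety(5.0, 32.0, 18.0, 9.81, 1.5, 35.0, m=0.0)
fs_wet = factor_of_safety(5.0, 32.0, 18.0, 9.81, 1.5, 35.0, m=1.0)
```

For these illustrative values the slope is stable when dry (FS > 1) and fails when saturated (FS < 1), which is the mechanism linking rainfall return period to the hazard maps described above.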

  2. Benchmark of Deep Learning Models on Large Healthcare MIMIC Datasets

    OpenAIRE

    Purushotham, Sanjay; Meng, Chuizheng; Che, Zhengping; Liu, Yan

    2017-01-01

    Deep learning models (aka Deep Neural Networks) have revolutionized many fields, including computer vision, natural language processing and speech recognition, and are increasingly being used in clinical healthcare applications. However, few works exist which have benchmarked the performance of deep learning models with respect to state-of-the-art machine learning models and prognostic scoring systems on publicly available healthcare datasets. In this paper, we present the benchmarking res...

  3. An approach to industrial water conservation--a case study involving two large manufacturing companies based in Australia.

    Science.gov (United States)

    Agana, Bernard A; Reeve, Darrell; Orbell, John D

    2013-01-15

    This study presents the application of an integrated water management strategy at two large Australian manufacturing companies that are contrasting in terms of their respective products. The integrated strategy, consisting of water audit, pinch analysis and membrane process application, was deployed in series to systematically identify water conservation opportunities. Initially, a water audit was deployed to completely characterize all water streams found at each production site. This led to the development of a water balance diagram which, together with water test results, served as a basis for subsequent enquiry. After the water audit, commercially available water pinch software was utilized to identify possible water reuse opportunities, some of which were subsequently implemented on site. Finally, utilizing a laboratory-scale test rig, membrane processes such as UF, NF and RO were evaluated for their suitability to treat the various wastewater streams. The membranes tested generally showed good contaminant rejection rates, slow flux decline rates, low energy usage and were well suited for treatment of specific wastewater streams. The synergy between the various components of this strategy has the potential to substantially reduce city water consumption and wastewater discharge across a diverse range of large manufacturing companies. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.

  4. Investigation of deep inelastic scattering processes involving large p$_{t}$ direct photons in the final state

    CERN Multimedia

    2002-01-01

    This experiment will investigate various aspects of photon-parton scattering and will be performed in the H2 beam of the SPS North Area with high intensity hadron beams up to 350 GeV/c. (a) The directly produced photon yield in deep inelastic hadron-hadron collisions. Large p$_{t}$ direct photons from hadronic interactions are presumably a result of a simple annihilation process of quarks and antiquarks or of a QCD-Compton process. The relative contribution of the two processes can be studied by using various incident beam projectiles $\pi^{+}, \pi^{-}, p$ and in the future $\bar{p}$. (b) The correlations between directly produced photons and their accompanying hadronic jets. We will examine events with a large p$_{t}$ direct photon for away-side jets. If jets are recognised their properties will be investigated. Differences between a gluon and a quark jet may become observable by comparing reactions where valence quark annihilations (away-side jet originates from a gluon) dominate over the QCD-Compton...

  5. VLSI (Very Large Scale Integrated Circuits) Device Reliability Models.

    Science.gov (United States)

    1984-12-01

    components have been particularly effective on phased array radars, including Cobra Dane, Pave Paws, Cobra Judy and AN/TPS-59. In spite of the large number... [fragment of a vendor data-survey table: ... Quincy, MA; 55. California Devices, San Jose, CA (promised data); 56. Micro-Pac Industries, Garland, TX (promised data); 57. Teledyne Philbrick (no data available)]

  6. Truck Route Choice Modeling using Large Streams of GPS Data

    Science.gov (United States)

    2017-07-31

    The primary goal of this research was to use large streams of truck-GPS data to analyze travel routes (or paths) chosen by freight trucks to travel between different origin and destination (OD) location pairs in metropolitan regions of Florida. Two s...

  7. Symmetry in stochasticity: Random walk models of large-scale ...

    Indian Academy of Sciences (India)

    This paper describes the insights gained from the excursion set approach, in which various questions about the phenomenology of large-scale structure formation can be mapped to problems associated with the first crossing distribution of appropriately defined barriers by random walks. Much of this is summarized in R K ...

  8. Misspecified poisson regression models for large-scale registry data

    DEFF Research Database (Denmark)

    Grøn, Randi; Gerds, Thomas A.; Andersen, Per K.

    2016-01-01

    working models that are then likely misspecified. To support and improve conclusions drawn from such models, we discuss methods for sensitivity analysis, for estimation of average exposure effects using aggregated data, and a semi-parametric bootstrap method to obtain robust standard errors. The methods...
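In the same spirit as the robust-standard-error idea mentioned above (a generic sketch, not the authors' methods), a Poisson working model can be fitted by Newton's method and paired with a sandwich variance estimate that remains valid when the model is misspecified; the simulated data below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])  # intercept + one covariate
beta_true = np.array([0.5, 0.3])
y = rng.poisson(np.exp(X @ beta_true))

beta = np.zeros(2)
for _ in range(25):                          # Newton-Raphson for the Poisson MLE
    mu = np.exp(X @ beta)
    XtW = X.T * mu                           # X' diag(mu)
    beta += np.linalg.solve(XtW @ X, X.T @ (y - mu))

mu = np.exp(X @ beta)
bread = np.linalg.inv((X.T * mu) @ X)        # inverse Fisher information
meat = (X.T * (y - mu) ** 2) @ X             # empirical score variance
robust_cov = bread @ meat @ bread            # sandwich (robust) covariance
robust_se = np.sqrt(np.diag(robust_cov))
```

Under misspecification the model-based and sandwich standard errors diverge, which is exactly what a sensitivity analysis of this kind inspects.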

  9. Modelling expected train passenger delays on large scale railway networks

    DEFF Research Database (Denmark)

    Landex, Alex; Nielsen, Otto Anker

    2006-01-01

    Forecasts of regularity for railway systems have traditionally – if at all – been computed for trains, not for passengers. Relatively recently it has become possible to model and evaluate the actual passenger delays by a passenger regularity model for the operation already carried out. First...

  10. Comparing large-scale computational approaches to epidemic modeling: Agent-based versus structured metapopulation models

    Directory of Open Access Journals (Sweden)

    Merler Stefano

    2010-06-01

    Full Text Available Abstract Background In recent years large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increased frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially-structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading pattern found by different modeling approaches may differ and depend on the different approximations and assumptions used. Methods We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based models are synchronized in their initial conditions by using the same disease parameterization, and by defining the same importation of infected cases from international travels. Results The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The relative difference of the epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the structure in the intra-population contact pattern of the approaches. 
The age

  11. Comparing large-scale computational approaches to epidemic modeling: agent-based versus structured metapopulation models.

    Science.gov (United States)

    Ajelli, Marco; Gonçalves, Bruno; Balcan, Duygu; Colizza, Vittoria; Hu, Hao; Ramasco, José J; Merler, Stefano; Vespignani, Alessandro

    2010-06-29

    In recent years large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increased frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially-structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading pattern found by different modeling approaches may differ and depend on the different approximations and assumptions used. We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based models are synchronized in their initial conditions by using the same disease parameterization, and by defining the same importation of infected cases from international travels. The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The relative difference of the epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the structure in the intra-population contact pattern of the approaches. 
The age breakdown analysis shows that similar attack rates are
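A toy deterministic two-patch SIR with travel coupling (far simpler than GLEaM or the agent-based model described above; all rates are invented) illustrates the metapopulation mechanism on which such comparisons rest:

```python
import numpy as np

beta, gamma, travel = 0.4, 0.1, 0.01   # transmission, recovery, travel fraction
S = np.array([9999.0, 10000.0])
I = np.array([1.0, 0.0])               # epidemic seeded in patch 0 only
R = np.array([0.0, 0.0])
total = (S + I + R).sum()

for _ in range(200):                   # daily time steps
    N = S + I + R
    new_inf = beta * S * I / N         # within-patch mass-action infection
    new_rec = gamma * I
    S -= new_inf
    I += new_inf - new_rec
    R += new_rec
    flux = travel * I                  # infected travelers swap patches
    I += flux[::-1] - flux
```

The travel term seeds the second patch, after which the local dynamics take over; population is conserved throughout.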

  12. Giant-cell arteritis. Concordance study between aortic CT angiography and FDG-PET/CT in detection of large-vessel involvement

    Energy Technology Data Exchange (ETDEWEB)

    Boysson, Hubert de; Dumont, Anael; Boutemy, Jonathan; Maigne, Gwenola; Martin Silva, Nicolas; Sultan, Audrey; Bienvenu, Boris; Aouba, Achille [Caen University Hospital, Department of Internal Medicine, Caen (France); Liozon, Eric; Ly, Kim Heang [Limoges University Hospital, Department of Internal Medicine, Limoges (France); Lambert, Marc [Lille University Hospital, Department of Internal Medicine, Lille (France); Aide, Nicolas [Caen University Hospital, Department of Nuclear Medicine, Caen (France); INSERM U1086 "ANTICIPE", Francois Baclesse Cancer Centre, Caen (France); Manrique, Alain [Caen University Hospital, Department of Nuclear Medicine, Caen (France); Normandy University, Caen (France)

    2017-12-15

    The purpose of our study was to assess the concordance of aortic CT angiography (CTA) and FDG-PET/CT in the detection of large-vessel involvement at diagnosis in patients with giant-cell arteritis (GCA). We created a multicenter cohort of patients with GCA diagnosed between 2010 and 2015, and who underwent both FDG-PET/CT and aortic CTA before or in the first ten days following treatment introduction. Eight vascular segments were studied on each procedure. We calculated concordance between both imaging techniques in a per-patient and a per-segment analysis, using Cohen's kappa concordance index. We included 28 patients (21/7 women/men, median age 67 [56-82]). Nineteen patients had large-vessel involvement on PET/CT and 18 of these patients also presented positive findings on CTA. In a per-segment analysis, a median of 5 [1-7] and 3 [1-6] vascular territories were involved on positive PET/CT and CTA, respectively (p = 0.03). In qualitative analysis, i.e., positivity of the procedure suggesting a large-vessel involvement, the concordance rate between both procedures was 0.85 [0.64-1]. In quantitative analysis, i.e., per-segment analysis in both procedures, the global concordance rate was 0.64 [0.54-0.75]. Using FDG-PET/CT as a reference, CTA showed excellent sensitivity (95%) and specificity (100%) in a per-patient analysis. In a per-segment analysis, sensitivity and specificity were 61% and 97.9%, respectively. CTA and FDG-PET/CT were both able to detect large-vessel involvement in GCA with comparable results in a per-patient analysis. However, PET/CT showed higher performance in a per-segment analysis, especially in the detection of inflammation of the aorta's branches. (orig.)

  13. Fast Kalman-like filtering for large-dimensional linear and Gaussian state-space models

    KAUST Repository

    Ait-El-Fquih, Boujemaa

    2015-08-13

    This paper considers the filtering problem for linear and Gaussian state-space models with large dimensions, a setup in which the optimal Kalman Filter (KF) might not be applicable owing to the excessive cost of manipulating huge covariance matrices. Among the most popular alternatives that enable cheaper and reasonable computation is the Ensemble KF (EnKF), a Monte Carlo-based approximation. In this paper, we consider a class of a posteriori distributions with diagonal covariance matrices and propose fast approximate deterministic-based algorithms based on the Variational Bayesian (VB) approach. More specifically, we derive two iterative KF-like algorithms that differ in the way they operate between two successive filtering estimates; one involves a smoothing estimate and the other involves a prediction estimate. Despite its iterative nature, the prediction-based algorithm provides a computational cost that is, on the one hand, independent of the number of iterations in the limit of very large state dimensions, and on the other hand, always much smaller than the cost of the EnKF. The cost of the smoothing-based algorithm depends on the number of iterations that may, in some situations, make this algorithm slower than the EnKF. The performances of the proposed filters are studied and compared to those of the KF and EnKF through a numerical example.
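For contrast with the KF-like variational algorithms described above, the following is the standard perturbed-observation EnKF analysis step (the Monte Carlo alternative the paper compares against, not the paper's own method); all dimensions and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 4, 2, 200                       # state dim, obs dim, ensemble size
H = np.zeros((m, n))
H[0, 0] = H[1, 1] = 1.0                   # observe the first two state components
R = 0.05 * np.eye(m)                      # observation-noise covariance
x_true = np.array([1.0, -2.0, 0.5, 0.0])
y = H @ x_true + rng.multivariate_normal(np.zeros(m), R)

ens = rng.standard_normal((n, N))         # prior ensemble ~ N(0, I)
A = ens - ens.mean(axis=1, keepdims=True)
P = A @ A.T / (N - 1)                     # sample covariance (formed explicitly only
                                          # because this toy state is tiny)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)          # Kalman gain
Y = y[:, None] + rng.multivariate_normal(np.zeros(m), R, size=N).T  # perturbed obs
ens_a = ens + K @ (Y - H @ ens)           # analysis ensemble

err_prior = np.linalg.norm(ens.mean(axis=1)[:2] - x_true[:2])
err_post = np.linalg.norm(ens_a.mean(axis=1)[:2] - x_true[:2])
```

The excessive cost the paper targets is visible here: the explicit `P` is n-by-n, which is exactly what cannot be manipulated when n is very large.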

  14. Multilevel method for modeling large-scale networks.

    Energy Technology Data Exchange (ETDEWEB)

    Safro, I. M. (Mathematics and Computer Science)

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to the real networks, generating the artificial networks at different scales under special conditions, investigating a network dynamics, reconstructing missing data, predicting network response, detecting anomalies and other tasks. Network generation, reconstruction, and prediction of its future topology are central issues of this field. In this project, we address the questions related to the understanding of the network modeling, investigating its structure and properties, and generating artificial networks. Most of the modern network generation methods are based either on various random graph models (reinforced by a set of properties such as power law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization such as R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of network hierarchy but with the same finest elements of the network. However, in many cases the methods that include randomization and replication elements on the finest relationships between network nodes and modeling that addresses the problem of preserving a set of simplified properties do not fit accurately enough the real networks. Among the unsatisfactory features are numerically inadequate results, non-stability of algorithms on real (artificial) data, that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, the randomization and satisfying of some attribute at the same time can abolish those topological attributes that have been undefined or hidden from

  15. A model for recovery kinetics of aluminum after large strain

    DEFF Research Database (Denmark)

    Yu, Tianbo; Hansen, Niels

    2012-01-01

    A model is suggested to analyze recovery kinetics of heavily deformed aluminum. The model is based on the hardness of isothermal annealed samples before recrystallization takes place, and it can be extrapolated to longer annealing times to factor out the recrystallization component of the hardness...... for conditions where recovery and recrystallization overlap. The model is applied to the isothermal recovery at temperatures between 140 and 220°C of commercial purity aluminum deformed to true strain 5.5. EBSD measurements have been carried out to detect the onset of discontinuous recrystallization. Furthermore...

  16. Large Scale Community Detection Using a Small World Model

    Directory of Open Access Journals (Sweden)

    Ranjan Kumar Behera

    2017-11-01

    Full Text Available In a social network, small or large communities within the network play a major role in deciding the functionalities of the network. Despite diverse definitions, communities in a network may be defined as groups of nodes that are more densely connected to each other than to nodes outside the group. Revealing such hidden communities is a challenging research problem. A real-world social network follows the small-world phenomenon, which indicates that any two social entities can be reached in a small number of steps. In this paper, nodes are mapped into communities based on random walks in the network. Uncovering communities in large-scale networks remains a challenging task, however, owing to the unprecedented growth in the size of social networks. A good number of community detection algorithms based on random walks exist in the literature, but when large-scale social networks are considered, these algorithms are observed to take considerably longer to run. In this work, with the objective of improving efficiency, a parallel programming framework, Map-Reduce, has been used to uncover the hidden communities in a social network. The proposed approach has been compared with standard existing community detection algorithms on both synthetic and real-world datasets in order to examine its performance, and it is observed that the proposed algorithm is more efficient than the existing ones.
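The random-walk intuition (nodes inside a dense community have similar short-walk visit distributions) can be sketched on a toy graph; this illustrates the principle only, not the paper's Map-Reduce algorithm, and the graph and seed choice are invented:

```python
import numpy as np

def walk_profiles(adj, t=3):
    """Row i of the result is the t-step random-walk distribution from node i."""
    P = adj / adj.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
    return np.linalg.matrix_power(P, t)

# two 4-cliques joined by a single bridge edge (nodes 3 and 4)
A = np.zeros((8, 8))
for grp in (range(4), range(4, 8)):
    for i in grp:
        for j in grp:
            if i != j:
                A[i, j] = 1.0
A[3, 4] = A[4, 3] = 1.0

prof = walk_profiles(A)
seeds = [0, 7]   # one seed node per expected community
labels = [min(seeds, key=lambda s: np.abs(prof[i] - prof[s]).sum())
          for i in range(8)]
```

Short walks started inside a clique mostly stay there, so each node's profile is closest to the seed of its own clique, recovering the two communities.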

  17. Hybrid transfinite element modeling/analysis of nonlinear heat conduction problems involving phase change

    Science.gov (United States)

    Tamma, Kumar K.; Railkar, Sudhir B.

    1988-01-01

    The present paper describes the applicability of hybrid transfinite element modeling/analysis formulations for nonlinear heat conduction problems involving phase change. The methodology is based on application of transform approaches and classical Galerkin schemes with finite element formulations to maintain the modeling versatility and numerical features for computational analysis. In addition, in conjunction with the above, the effects due to latent heat are modeled using enthalpy formulations to enable a physically realistic approximation to be dealt computationally for materials exhibiting phase change within a narrow band of temperatures. Pertinent details of the approach and computational scheme adapted are described in technical detail. Numerical test cases of comparative nature are presented to demonstrate the applicability of the proposed formulations for numerical modeling/analysis of nonlinear heat conduction problems involving phase change.
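The enthalpy idea the paper builds on can be sketched in one dimension: advance enthalpy with an explicit conduction step, then recover temperature through a narrow phase-change band so latent heat is absorbed smoothly. All material parameters below are invented, and this explicit finite-difference toy stands in for the paper's transfinite element scheme:

```python
import numpy as np

def temperature(Hf, c, L, Tm, dT):
    """Recover temperature from volumetric enthalpy Hf, absorbing latent heat L
    over a narrow phase-change band [Tm, Tm + dT]."""
    T = np.where(Hf < c * Tm, Hf / c, Tm)                      # solid
    mushy = (Hf >= c * Tm) & (Hf <= c * Tm + L)
    T = np.where(mushy, Tm + dT * (Hf - c * Tm) / L, T)        # melting band
    return np.where(Hf > c * Tm + L, Tm + dT + (Hf - c * Tm - L) / c, T)

c, k, L = 2.0, 1.0, 100.0        # heat capacity, conductivity, latent heat
Tm, dT = 0.0, 0.1                # melt temperature and band width
nx, dx, dt = 50, 0.02, 1e-4      # explicit step: dt*k/(c*dx**2) = 0.125 < 0.5

Hf = c * np.full(nx, -1.0)       # start fully frozen at T = -1
for _ in range(2000):
    T = temperature(Hf, c, L, Tm, dT)
    T[0] = 5.0                   # hot Dirichlet boundary drives melting from the left
    lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx ** 2
    Hf[1:-1] += dt * k * lap     # explicit enthalpy update
T = temperature(Hf, c, L, Tm, dT)
```

Because temperature is a single-valued function of enthalpy, the latent-heat jump never has to be tracked explicitly, which is what makes the formulation "physically realistic" to compute with.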

  18. Job involvement of primary healthcare employees: does a service provision model play a role?

    Science.gov (United States)

    Koponen, Anne M; Laamanen, Ritva; Simonsen-Rehn, Nina; Sundell, Jari; Brommels, Mats; Suominen, Sakari

    2010-05-01

    To investigate whether the development of job involvement of primary healthcare (PHC) employees in Southern Municipality (SM), where PHC services were outsourced to an independent non-profit organisation, differed from that in the three comparison municipalities (M1, M2, M3) with municipal service providers. Also, the associations of job involvement with factors describing the psychosocial work environment were investigated. A panel mail survey 2000-02 in Finland (n=369, response rates 73% and 60%). The data were analysed by descriptive statistics and multivariate linear regression analysis. Despite the favourable development in the psychosocial work environment, job involvement decreased most in SM, which faced the biggest organisational changes. Job involvement decreased also in M3, where the psychosocial work environment deteriorated most. Job involvement in 2002 was best predicted by high baseline level of interactional justice and work control, positive change in interactional justice, and higher age. Also other factors, such as organisational stability, seemed to play a role; after controlling for the effect of the psychosocial work characteristics, job involvement was higher in M3 than in SM. Outsourcing of PHC services may decrease job involvement at least during the first years. A particular service provision model is better than the others only if it is superior in providing a favourable and stable psychosocial work environment.

  19. Traffic assignment models in large-scale applications

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær

    of observations of actual behaviour to obtain estimates of the (monetary) value of different travel time components, thereby increasing the behavioural realism of large-scale models. The generation of choice sets is a vital component in route choice models. This is, however, not a straightforward task in real......-perceptions. It is the commonly adopted assumption that the distributed elements follow unbounded distributions which induces the need to enumerate all paths in the SUE, no matter how unattractive they might be. The Deterministic User Equilibrium (DUE), on the other hand, has a built-in criterion distinguishing definitely unused...... non-universal choice sets and (ii) flow distribution according to random utility maximisation theory. One model allows distinction between used and unused routes based on the distribution of the random error terms, while the other model allows this distinction by posing restrictions on the costs...

  20. Large scale Bayesian nuclear data evaluation with consistent model defects

    International Nuclear Information System (INIS)

    Schnabel, G

    2015-01-01

    The aim of nuclear data evaluation is the reliable determination of cross sections and related quantities of the atomic nuclei. To this end, evaluation methods are applied which combine the information of experiments with the results of model calculations. The evaluated observables with their associated uncertainties and correlations are assembled into data sets, which are required for the development of novel nuclear facilities, such as fusion reactors for energy supply and accelerator-driven systems for nuclear waste incineration. The efficiency and safety of such future facilities depend on the quality of these data sets and thus also on the reliability of the applied evaluation methods. This work investigated the performance of the majority of available evaluation methods in two scenarios. The study indicated the importance of an essential component of these methods: the frequently ignored deficiency of nuclear models. Nuclear models are usually based on approximations, so their predictions may deviate from reliable experimental data. As demonstrated in this thesis, neglecting this possibility in evaluation methods can lead to estimates of observables which are inconsistent with experimental data. Based on this finding, an extension of Bayesian evaluation methods is proposed to take into account the deficiency of the nuclear models. The deficiency is modeled as a random function in terms of a Gaussian process and combined with the model prediction. This novel formulation conserves sum rules and allows explicit estimation of the magnitude of the model deficiency; both features have been missing from available evaluation methods so far. Furthermore, two improvements of existing methods were developed in the course of this thesis. The first improvement concerns methods relying on Monte Carlo sampling: a Metropolis-Hastings scheme with a specific proposal distribution is suggested, which proved to be more efficient in the studied scenarios than the...
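The sampling scheme can be illustrated generically. The thesis's specific proposal distribution is not reproduced here; the sketch below is a plain random-walk Metropolis-Hastings sampler, with a symmetric Gaussian proposal and a standard-normal target standing in for the actual posterior:

```python
import math, random

def metropolis_hastings(log_post, x0, n_steps=5000, step=0.5, seed=1):
    """Random-walk Metropolis-Hastings; returns the chain of samples."""
    random.seed(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        prop = x + random.gauss(0.0, step)            # symmetric proposal
        lp_prop = log_post(prop)
        if math.log(random.random()) < lp_prop - lp:  # accept/reject
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

# Illustrative target: a standard normal log-posterior
chain = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0)
mean = sum(chain) / len(chain)
print(round(mean, 2))
```

A tuned proposal, like the one the thesis suggests, raises the effective sample size per step but leaves this accept/reject skeleton unchanged.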

  1. Topological σ Models and Large N Matrix Integral

    Science.gov (United States)

    Eguchi, Tohru; Hori, Kentaro; Yang, Sung-Kil

    In this paper we describe in some detail the representation of the topological CP1 model in terms of a matrix integral which we have introduced in a previous article. We first discuss the integrable structure of the CP1 model and show that it is governed by an extension of the one-dimensional Toda hierarchy. We then introduce a matrix model which reproduces the sum over holomorphic maps from arbitrary Riemann surfaces onto CP1. We compute intersection numbers on the moduli space of curves using a geometrical method and show that the results agree with those predicted by the matrix model. We also develop a Landau-Ginzburg (LG) description of the CP1 model using a superpotential e^X + e^{t_{0,Q}} e^{-X} given by the Lax operator of the Toda hierarchy (X is the LG field and t_{0,Q} is the coupling constant of the Kähler class). The form of the superpotential indicates the close connection between CP1 and the N=2 supersymmetric sine-Gordon theory which was noted some time ago by several authors. We also discuss possible generalizations of our construction to other manifolds and present an LG formulation of the topological CP2 model.

  2. [Patient satisfaction of women involved in the pilot program of large scale mammography in the Ariana area of Tunisia].

    Science.gov (United States)

    Zeghal, D; Mahjoub, S; Zakraoui, M A; Ben Aissa, R; Zaanouni, E; Lazaar, I; Mbarek, F; Ouechtati, A; Zouari, F; Boussen, H; Gueddana, N

    2009-07-01

    Evaluate the degree of satisfaction of women included in the large-scale mammography program for breast cancer screening in the governorate of Ariana in Tunisia. Among the women screened by mammography, we contacted 112 patients whose positive screening result required histological verification. We administered a questionnaire covering the invitation, the clinical examination, the announcement of results, and the therapeutic management. The average age of the patients was 49 years; 64% had a primary education level. Eighty women (71.4%) were satisfied with the screening process and the method of announcement. The main cause of dissatisfaction among patients with a cancer diagnosis was delayed and difficult access to adjuvant treatments. Among patients with a histological diagnosis, 47.3% had a malignant disease (53 cases) versus 37.5% benign (42 cases). All patients whose pathological result was reassuring were satisfied at the end of the screening program. The psychosocial impact of screening must be considered in the development of new programs. The waiting period and the announcement of results are essential factors in judging the success of the project, because the quality of follow-up and adherence to screening depend on patient satisfaction.

  3. New Boundaries for School-Based Management: The High Involvement Model.

    Science.gov (United States)

    Wohlstetter, Priscilla; And Others

    1994-01-01

    The utility of school-based management (SBM) as a means of generating school improvement is examined. A model of high-involvement management is applied to show what makes SBM work and under what conditions in four school districts. The definitions of SBM must be expanded to include organizational redesign. (SLD)

  4. Alcohol Involvement and the Five-Factor Model of Personality: A Meta-Analysis

    Science.gov (United States)

    Malouff, John M.; Thorsteinsson, Einar B.; Rooke, Sally E.; Schutte, Nicola S.

    2007-01-01

    The purpose of this meta-analysis was to quantify the relationship between the Five-Factor Model of personality and alcohol involvement and to identify moderators of the relationship. The meta-analysis included 20 studies, 119 effect sizes, and 7,886 participants. Possible moderators examined included: five-factor rating type (self vs. other);…
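The pooling step of such a meta-analysis can be sketched with an inverse-variance weighted (fixed-effect) average; the effect sizes and standard errors below are illustrative stand-ins, not the paper's 119 effect sizes:

```python
import math

def fixed_effect_meta(effects, ses):
    """Inverse-variance weighted mean effect size (fixed-effect model)."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Illustrative per-study correlations and standard errors
pooled, se = fixed_effect_meta([0.15, 0.20, 0.10], [0.05, 0.04, 0.06])
print(round(pooled, 3), round(se, 3))
```

Moderator analyses like the rating-type comparison in the abstract then test whether the pooled effect differs across subgroups of studies.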

  5. Predicting Preschoolers' Attachment Security from Fathers' Involvement, Internal Working Models, and Use of Social Support

    Science.gov (United States)

    Newland, Lisa A.; Coyl, Diana D.; Freeman, Harry

    2008-01-01

    Associations between preschoolers' attachment security, fathers' involvement (i.e. parenting behaviors and consistency) and fathering context (i.e. fathers' internal working models (IWMs) and use of social support) were examined in a subsample of 102 fathers, taken from a larger sample of 235 culturally diverse US families. The authors predicted…

  6. Large Scale Computing for the Modelling of Whole Brain Connectivity

    DEFF Research Database (Denmark)

    Albers, Kristoffer Jon

    , which allows us to couple and explore different models and sampling procedures in runtime, still being applied to full-sized data. Using the implemented tools, we demonstrate that the models successfully can be applied for clustering whole-brain connectivity networks. Without being informed of spatial......The human brain constitutes an impressive network formed by the structural and functional connectivity patterns between billions of neurons. Modern functional and diffusion magnetic resonance imaging (fMRI and dMRI) provides unprecedented opportunities for exploring the functional and structural...... organization of the brain in continuously increasing resolution. From these images, networks of structural and functional connectivity can be constructed. Bayesian stochastic block modelling provides a prominent data-driven approach for uncovering the latent organization, by clustering the networks into groups...

  7. Modeling and simulation of large scale stirred tank

    Science.gov (United States)

    Neuville, John R.

    The purpose of this dissertation is to provide a written record of the evaluation performed on the DWPF mixing process by the construction of numerical models that resemble the geometry of this process. Seven numerical models were constructed to evaluate the DWPF mixing process and four pilot plants. The models were developed with Fluent software, and the results from these models were used to evaluate the structure of the flow field and the power demand of the agitator. The results from the numerical models were compared with empirical data collected from these pilot plants, which had been operated at an earlier date. Mixing is commonly used in a variety of ways throughout industry to blend miscible liquids, disperse gas through liquid, form emulsions, promote heat transfer, and suspend solid particles. The DOE sites at Hanford in Richland, Washington; West Valley in New York; and the Savannah River Site in Aiken, South Carolina have developed a process that immobilizes highly radioactive liquid waste. The radioactive liquid waste at DWPF is an opaque sludge that is mixed in a stirred tank with glass frit particles and water to form a slurry of specified proportions. The DWPF mixing process consists of a flat-bottomed cylindrical mixing vessel with a centrally located helical coil and an agitator. The helical coil is used to heat and cool the contents of the tank and can improve flow circulation. The agitator shaft has two impellers: a radial blade and a hydrofoil blade. The hydrofoil is used to circulate the mixture between the top and bottom regions of the tank. The radial blade sweeps the bottom of the tank and pushes the fluid in the outward radial direction. The full-scale vessel contains about 9500 gallons of slurry with flow behavior characterized as a Bingham plastic. Particles in the mixture have an abrasive character that causes excessive erosion of internal vessel components at higher impeller speeds. The desire for this mixing process is to ensure the...
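The agitator power demand evaluated in such models is commonly related to impeller speed and diameter through the dimensionless power number; a minimal sketch with assumed values (these are not the DWPF figures):

```python
def agitator_power(power_number, density, speed_rps, diameter_m):
    """Power draw of an impeller in the turbulent regime:
    P = Np * rho * N^3 * D^5."""
    return power_number * density * speed_rps ** 3 * diameter_m ** 5

# Illustrative values: radial turbine with Np ~ 5, slurry density
# 1300 kg/m^3, 1.5 rev/s, 1.2 m impeller diameter
P = agitator_power(5.0, 1300.0, 1.5, 1.2)
print(round(P / 1000.0, 1), "kW")
```

The strong N^3 D^5 scaling is why pilot-plant power data are so useful for validating the full-scale CFD predictions.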

  8. The pig as a large preclinical model for therapeutic human anti-cancer vaccine development

    DEFF Research Database (Denmark)

    Overgaard, Nana Haahr; Frøsig, Thomas Mørch; Welner, Simon

    2016-01-01

    Development of therapeutic cancer vaccines has largely been based on rodent models and the majority failed to establish therapeutic responses in clinical trials. We therefore used pigs as a large animal model for human cancer vaccine development due to the large similarity between the porcine...

  9. Dynamic Modeling, Optimization, and Advanced Control for Large Scale Biorefineries

    DEFF Research Database (Denmark)

    Prunescu, Remus Mihail

    Second generation biorefineries transform agricultural wastes into biochemicals with higher added value, e.g. bioethanol, which is thought to become a primary component in liquid fuels [1]. Extensive endeavors have been conducted to make the production process feasible on a large scale, and recen......Second generation biorefineries transform agricultural wastes into biochemicals with higher added value, e.g. bioethanol, which is thought to become a primary component in liquid fuels [1]. Extensive endeavors have been conducted to make the production process feasible on a large scale......-time monitoring. The Inbicon biorefinery converts wheat straw into bioethanol utilizing steam, enzymes, and genetically modified yeast. The biomass is first pretreated in a steam pressurized and continuous thermal reactor where lignin is relocated, and hemicellulose partially hydrolyzed such that cellulose...... becomes more accessible to enzymes. The biorefinery is integrated with a nearby power plant following the Integrated Biomass Utilization System (IBUS) principle for reducing steam costs [4]. During the pretreatment, by-products are also created such as organic acids, furfural, and pseudo-lignin, which act...

  10. A Research Framework for Understanding the Practical Impact of Family Involvement in the Juvenile Justice System: The Juvenile Justice Family Involvement Model.

    Science.gov (United States)

    Walker, Sarah Cusworth; Bishop, Asia S; Pullmann, Michael D; Bauer, Grace

    2015-12-01

    Family involvement is recognized as a critical element of service planning for children's mental health, welfare and education. For the juvenile justice system, however, parents' roles in this system are complex due to youths' legal rights, public safety, a process which can legally position parents as plaintiffs, and a historical legacy of blaming parents for youth indiscretions. Three recent national surveys of juvenile justice-involved parents reveal that the current paradigm elicits feelings of stress, shame and distrust among parents and is likely leading to worse outcomes for youth, families and communities. While research on the impact of family involvement in the justice system is starting to emerge, the field currently has no organizing framework to guide a research agenda, interpret outcomes or translate findings for practitioners. We propose a research framework for family involvement that is informed by a comprehensive review and content analysis of current, published arguments for family involvement in juvenile justice along with a synthesis of family involvement efforts in other child-serving systems. In this model, family involvement is presented as an ascending, ordinal concept beginning with (1) exclusion, and moving toward climates characterized by (2) information-giving, (3) information-eliciting and (4) full, decision-making partnerships. Specific examples of how courts and facilities might align with these levels are described. Further, the model makes predictions for how involvement will impact outcomes at multiple levels with applications for other child-serving systems.

  11. Large-area dry bean yield prediction modeling in Mexico

    Science.gov (United States)

    Given the importance of dry bean in Mexico, crop yield predictions before harvest are valuable for authorities of the agricultural sector, in order to define support for producers. The aim of this study was to develop an empirical model to estimate the yield of dry bean at the regional level prior t...

  12. Models of 'obesity' in large animals and birds.

    Science.gov (United States)

    Clarke, Iain J

    2008-01-01

    Most laboratory-based research on obesity is carried out in rodents, but there are a number of other interesting models in the animal kingdom that are instructive. This includes domesticated animal species such as pigs and sheep, as well as wild, migrating and hibernating species. Larger animals allow particular experimental manipulations that are not possible in smaller animals and especially useful models have been developed to address issues such as manipulation of fetal development. Although some of the most well-studied models are ruminants, with metabolic control that differs from monogastrics, the general principles of metabolic regulation still pertain. It is possible to obtain much more accurate endocrine profiles in larger animals and this has provided important data in relation to leptin and ghrelin physiology. Genetic models have been created in domesticated animals through selection and these complement those of the laboratory rodent. This short review highlights particular areas of research in domesticated and wild species that expand our knowledge of systems that are important for our understanding of obesity and metabolism.

  13. Searches for phenomena beyond the Standard Model at the Large ...

    Indian Academy of Sciences (India)

    Keywords. LHC; ATLAS; CMS; BSM; supersymmetry; exotic. Abstract. The LHC has delivered several fb-1 of data in spring and summer 2011, opening new windows of opportunity for discovering phenomena beyond the Standard Model. A summary of the searches conducted by the ATLAS and CMS experiments based on ...

  14. Optimisation of a large WWTP thanks to mathematical modelling.

    Science.gov (United States)

    Printemps, C; Baudin, A; Dormoy, T; Zug, M; Vanrolleghem, P A

    2004-01-01

    Better controlling and optimising the plant's processes has become a priority for WWTP (Wastewater Treatment Plant) managers. The main objective of this project is to develop a simplified mathematical tool able to reproduce and anticipate the behaviour of the Tougas WWTP (Nantes, France). The tool is intended to be used directly by the managers of the site. The mathematical WWTP model was created using the software WEST. This paper describes the studied site and the modelling results obtained during model calibration and validation. The simulation results showed that, despite an initially very simple description of the WWTP, the model was able to correctly predict the nitrogen composition (ammonia and nitrate) of the effluent and the daily sludge extraction. A second, more detailed configuration of the WWTP was then implemented, which made it possible to study the behaviour of each of the four biological trains independently. Once this first stage is completely achieved, the remainder of the study will focus on the operational use of a simplified simulator with the purpose of optimising the operation of the Tougas WWTP.

  15. Energy-aware semantic modeling in large scale infrastructures

    NARCIS (Netherlands)

    Zhu, H.; van der Veldt, K.; Grosso, P.; Zhao, Z.; Liao, X.; de Laat, C.

    2012-01-01

    Including the energy profile of the computing infrastructure in the decision process for scheduling computing tasks and allocating resources is essential to improve the system energy efficiency. However, the lack of an effective model of the infrastructure energy information makes it difficult for

  16. Searches for phenomena beyond the Standard Model at the Large

    Indian Academy of Sciences (India)

    The LHC has delivered several fb-1 of data in spring and summer 2011, opening new windows of opportunity for discovering phenomena beyond the Standard Model. A summary of the searches conducted by the ATLAS and CMS experiments based on about 1 fb-1 of data is presented.

  17. Symmetry-guided large-scale shell-model theory

    Czech Academy of Sciences Publication Activity Database

    Launey, K. D.; Dytrych, Tomáš; Draayer, J. P.

    2016-01-01

    Roč. 89, JUL (2016), s. 101-136 ISSN 0146-6410 R&D Projects: GA ČR GA16-16772S Institutional support: RVO:61389005 Keywords: Ab initio shell-model theory * Symplectic symmetry * Collectivity * Clusters * Hoyle state * Orderly patterns in nuclei from first principles Subject RIV: BE - Theoretical Physics Impact factor: 11.229, year: 2016

  18. [First experience of using modified high-dose therapy NHL-BFM-90 in diffuse large B-cell lymphosarcoma with primary skin involvement. A case report].

    Science.gov (United States)

    Zamiatina, V I; Magomedova, A U; Kravchenko, S K; Giliazitdinova, E A; Iliushkina, E A; Zvonkov, E E; Kaplanskaia, I B; Obukhova, T N; Kliasova, G A; Gorgidze, L A; Churakova, Zh V; Kremenetskaia, A M; Vorob'ev, A I

    2009-01-01

    Primary skin large B-cell lymphosarcomas (PLBCL) present with skin lesions; other organs and systems are not involved. As CHOP courses are not highly effective in PLBCL, we were the first to treat a patient with the modified block therapy NHL-BFM-90. A complete remission was achieved after the first course of polychemotherapy and was consolidated by two courses of treatment. Further follow-up is needed.

  19. Misspecified poisson regression models for large-scale registry data: inference for 'large n and small p'.

    Science.gov (United States)

    Grøn, Randi; Gerds, Thomas A; Andersen, Per K

    2016-03-30

    Poisson regression is an important tool in register-based epidemiology where it is used to study the association between exposure variables and event rates. In this paper, we will discuss the situation with 'large n and small p', where n is the sample size and p is the number of available covariates. Specifically, we are concerned with modeling options when there are time-varying covariates that can have time-varying effects. One problem is that tests of the proportional hazards assumption, of no interactions between exposure and other observed variables, or of other modeling assumptions have large power due to the large sample size and will often indicate statistical significance even for numerically small deviations that are unimportant for the subject matter. Another problem is that information on important confounders may be unavailable. In practice, this situation may lead to simple working models that are then likely misspecified. To support and improve conclusions drawn from such models, we discuss methods for sensitivity analysis, for estimation of average exposure effects using aggregated data, and a semi-parametric bootstrap method to obtain robust standard errors. The methods are illustrated using data from the Danish national registries investigating the diabetes incidence for individuals treated with antipsychotics compared with the general unexposed population.
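A working Poisson model with robust (sandwich) standard errors, one standard guard against misspecification in this setting, can be sketched as follows. The data are simulated and the fitting routine is a generic Newton-Raphson, not the paper's registry analysis or its semi-parametric bootstrap:

```python
import numpy as np

def poisson_fit_robust(X, y, n_iter=25):
    """Poisson regression via Newton-Raphson, with sandwich standard errors."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        grad = X.T @ (y - mu)
        hess = X.T @ (X * mu[:, None])          # Poisson variance = mean
        beta = beta + np.linalg.solve(hess, grad)
    mu = np.exp(X @ beta)
    A = X.T @ (X * mu[:, None])                 # model-based information
    B = X.T @ (X * ((y - mu) ** 2)[:, None])    # empirical "meat"
    cov = np.linalg.inv(A) @ B @ np.linalg.inv(A)
    return beta, np.sqrt(np.diag(cov))

# Simulated 'large n, small p' data: one covariate plus intercept
rng = np.random.default_rng(2)
n = 5000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = rng.poisson(np.exp(0.2 + 0.5 * x))
beta, se = poisson_fit_robust(X, y)
print(beta.round(2), se.round(3))
```

When the variance model is wrong, the sandwich estimator remains a consistent estimate of the coefficient standard errors while the naive model-based one does not.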

  20. Airflow and radon transport modeling in four large buildings

    International Nuclear Information System (INIS)

    Fan, J.B.; Persily, A.K.

    1995-01-01

    Computer simulations of multizone airflow and contaminant transport were performed in four large buildings using the program CONTAM88. This paper describes the physical characteristics of the buildings and their idealizations as multizone building airflow systems. These buildings include a twelve-story multifamily residential building, a five-story mechanically ventilated office building with an atrium, a seven-story mechanically ventilated office building with an underground parking garage, and a one-story school building. The air change rates and interzonal airflows of these buildings are predicted for a range of wind speeds, indoor-outdoor temperature differences, and percentages of outdoor air intake in the supply air. Simulations of radon transport were also performed in the buildings to investigate the effects of indoor-outdoor temperature difference and wind speed on indoor radon concentrations.
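The zone-level balance underlying such simulations can be illustrated with a single well-mixed zone; the source strength, volume, and air change rate below are assumed for illustration (CONTAM88 itself solves the coupled multizone system):

```python
RADON_DECAY = 7.56e-3  # Rn-222 decay constant, 1/h

def steady_state_radon(source_bq_per_h, volume_m3, ach):
    """Steady-state radon concentration (Bq/m^3) in one well-mixed zone.

    Balance: entry rate = (ventilation + radioactive decay) * C * V.
    """
    return source_bq_per_h / (volume_m3 * (ach + RADON_DECAY))

# Illustrative zone: 500 m^3, 10 kBq/h radon entry, 0.5 air changes per hour
c = steady_state_radon(10_000.0, 500.0, 0.5)
print(round(c, 1))
```

The inverse dependence on the air change rate is why the wind- and temperature-driven infiltration rates predicted by the multizone model directly control the simulated indoor radon levels.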

  1. Enhanced ICP for the Registration of Large-Scale 3D Environment Models: An Experimental Study

    Science.gov (United States)

    Han, Jianda; Yin, Peng; He, Yuqing; Gu, Feng

    2016-01-01

    One of the main applications of mobile robots is the large-scale perception of the outdoor environment. One of the main challenges of this application is fusing environmental data obtained by multiple robots, especially heterogeneous robots. This paper proposes an enhanced iterative closest point (ICP) method for the fast and accurate registration of 3D environmental models. First, a hierarchical searching scheme is combined with the octree-based ICP algorithm. Second, an early-warning mechanism is used to perceive the local minimum problem. Third, a heuristic escape scheme based on sampled potential transformation vectors is used to avoid local minima and achieve optimal registration. Experiments involving one unmanned aerial vehicle and one unmanned surface vehicle were conducted to verify the proposed technique. The experimental results were compared with those of normal ICP registration algorithms to demonstrate the superior performance of the proposed method. PMID:26891298
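The alternation at the heart of any ICP variant, nearest-neighbour matching followed by a closed-form rigid alignment, can be sketched with brute-force matching and the SVD (Kabsch) solution; the paper's octree search, early-warning mechanism, and heuristic escape scheme are omitted, and the point cloud below is synthetic:

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Closed-form (Kabsch/SVD) rotation R and translation t minimising
    the squared error of R @ p + t against q over matched pairs."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(3)
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(src, dst, n_iter=20):
    """Basic ICP: brute-force nearest neighbours + closed-form alignment."""
    cur = src.copy()
    for _ in range(n_iter):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        matched = dst[d2.argmin(axis=1)]   # nearest neighbour in dst
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur

# Reference model: a regular 5 x 5 x 4 grid of 100 points
dst = np.mgrid[0:10:2, 0:10:2, 0:8:2].reshape(3, -1).T.astype(float)

# Second sensor's model: same scene, slightly rotated and shifted
theta = 0.02
R0 = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
src = dst @ R0.T + np.array([0.15, -0.1, 0.1])

aligned = icp(src, dst)
err = float(np.abs(aligned - dst).max())
print(err)
```

The brute-force correspondence search is O(n^2) per iteration, which is precisely the cost the paper's octree-based search is designed to avoid on large outdoor models.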

  2. Implementation of an Online Chemistry Model to a Large Eddy Simulation Model (PALM-4U)

    Science.gov (United States)

    Mauder, M.; Khan, B.; Forkel, R.; Banzhaf, S.; Russo, E. E.; Sühring, M.; Kanani-Sühring, F.; Raasch, S.; Ketelsen, K.

    2017-12-01

    Large Eddy Simulation (LES) models resolve the relevant scales of turbulent motion, so these models can capture the inherent unsteadiness of atmospheric turbulence. However, LES models have so far rarely been applied to urban air quality studies, in particular to the chemical transformation of pollutants. In this context, the BMBF (Bundesministerium für Bildung und Forschung) funded a joint project, MOSAIK (Modellbasierte Stadtplanung und Anwendung im Klimawandel / Model-based city planning and application in climate change), with the main goal of developing a new, highly efficient urban climate model (UCM) that also includes atmospheric chemical processes. The state-of-the-art LES model PALM (Maronga et al., 2015, Geosci. Model Dev., 8, doi:10.5194/gmd-8-2515-2015) has been used as the core model for the new UCM, named PALM-4U. For the gas-phase chemistry, a fully coupled 'online' chemistry model has been implemented into PALM. The latest version of the Kinetic PreProcessor (KPP), Version 2.3, has been utilized for the numerical integration of the chemical species. Due to the high computational demands of the LES model, compromises in the description of chemical processes are required. Therefore, a reduced chemistry mechanism has been implemented, which includes only major pollutants, namely O3, NO, NO2 and CO, a highly simplified VOC chemistry, and a small number of products. This work shows preliminary results of the advection and chemical transformation of atmospheric pollutants. Non-cyclic boundaries have been used for inflow and outflow in the east-west direction, while periodic boundary conditions have been applied at the south-north lateral boundaries. For practical applications, our approach is to go beyond the simulation of single street canyons to the chemical transformation, advection, and deposition of air pollutants in the larger urban canopy. Tests of chemistry schemes and initial studies of chemistry-turbulence interaction, transport, and transformations are presented.
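The kind of gas-phase coupling that KPP generates integration code for can be illustrated with a toy NO-NO2-O3 photostationary box model; the mechanism and rate constants below are illustrative stand-ins, far simpler than even the reduced PALM-4U mechanism:

```python
def integrate_no_no2_o3(no, no2, o3, j=8.0e-3, k=4.0e-4, dt=1.0, n_steps=3600):
    """Toy NO/NO2/O3 photostationary chemistry, forward-Euler integration.

    NO2 + hv -> NO + O3   (photolysis, rate j in 1/s)
    NO + O3  -> NO2       (titration, rate k in 1/(ppb*s))
    Concentrations in ppb; dt in seconds.
    """
    for _ in range(n_steps):
        p = j * no2          # photolysis production
        l = k * no * o3      # titration loss
        no  += dt * (p - l)
        no2 += dt * (l - p)
        o3  += dt * (p - l)
    return no, no2, o3

# One simulated hour from an arbitrary initial state (ppb)
no, no2, o3 = integrate_no_no2_o3(10.0, 20.0, 30.0)
# At photostationary state, j*NO2 ~ k*NO*O3
print(round(no, 2), round(no2, 2), round(o3, 2))
```

In the real model, each LES grid box carries such a chemistry integration every time step alongside advection and diffusion, which is why the mechanism size dominates the computational cost.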

  3. Uncertainty Quantification for Large-Scale Ice Sheet Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ghattas, Omar [Univ. of Texas, Austin, TX (United States)

    2016-02-05

    This report summarizes our work to develop advanced forward and inverse solvers and uncertainty quantification capabilities for a nonlinear 3D full Stokes continental-scale ice sheet flow model. The components include: (1) forward solver: a new state-of-the-art parallel adaptive scalable high-order-accurate mass-conservative Newton-based 3D nonlinear full Stokes ice sheet flow simulator; (2) inverse solver: a new adjoint-based inexact Newton method for solution of deterministic inverse problems governed by the above 3D nonlinear full Stokes ice flow model; and (3) uncertainty quantification: a novel Hessian-based Bayesian method for quantifying uncertainties in the inverse ice sheet flow solution and propagating them forward into predictions of quantities of interest such as ice mass flux to the ocean.

  4. A comparison of updating algorithms for large $N$ reduced models

    CERN Document Server

    Pérez, Margarita García; Keegan, Liam; Okawa, Masanori; Ramos, Alberto

    2015-01-01

    We investigate Monte Carlo updating algorithms for simulating $SU(N)$ Yang-Mills fields on a single-site lattice, such as for the Twisted Eguchi-Kawai model (TEK). We show that performing only over-relaxation (OR) updates of the gauge links is a valid simulation algorithm for the Fabricius and Haan formulation of this model, and that this decorrelates observables faster than using heat-bath updates. We consider two different methods of implementing the OR update: either updating the whole $SU(N)$ matrix at once, or iterating through $SU(2)$ subgroups of the $SU(N)$ matrix, we find the same critical exponent in both cases, and only a slight difference between the two.
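For the SU(2)-subgroup variant, the over-relaxation move reflects a link about the SU(2) projection of its staple sum, which leaves the local action unchanged. A numpy sketch follows, in which random SU(2) matrices stand in for the actual staple contributions of a lattice configuration:

```python
import numpy as np

def random_su2(rng):
    """Random SU(2) matrix built from a normalised quaternion."""
    q = rng.normal(size=4)
    q /= np.linalg.norm(q)
    return np.array([[q[0] + 1j * q[3],  q[2] + 1j * q[1]],
                     [-q[2] + 1j * q[1], q[0] - 1j * q[3]]])

def su2_from_staple(W):
    """Project a staple sum onto SU(2): a real-weighted sum of SU(2)
    matrices satisfies W = a * V with a = sqrt(det W) and V in SU(2)."""
    return W / np.sqrt(np.linalg.det(W))

def overrelax(U, V):
    """Over-relaxation reflection U -> V U^dagger V; preserves the local
    action term Re tr(U V^dagger)."""
    return V @ U.conj().T @ V

rng = np.random.default_rng(4)
U = random_su2(rng)
W = 0.7 * random_su2(rng) + 1.3 * random_su2(rng)  # mock staple sum
V = su2_from_staple(W)
U_new = overrelax(U, V)

action_before = np.trace(U @ V.conj().T).real
action_after = np.trace(U_new @ V.conj().T).real
print(round(action_before - action_after, 12))
```

Because the move is microcanonical (action-preserving), it decorrelates the configuration without an accept/reject step, which is the efficiency gain over heat-bath updates reported in the abstract.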

  5. A noise generation and propagation model for large wind farms

    DEFF Research Database (Denmark)

    Bertagnolio, Franck

    2016-01-01

    A wind turbine noise calculation model is combined with a ray tracing method in order to estimate wind farm noise in its surroundings, assuming an arbitrary topography. The wind turbine noise model is used to generate noise spectra for which each turbine is approximated as a point source. However......, the detailed three-dimensional directivity features are taken into account for the further calculation of noise propagation over the surrounding terrain. An arbitrary number of turbines constituting a wind farm can be spatially distributed. The noise from each individual turbine is propagated into the far......-field using the ray tracing method. These results are added up assuming the noise from each turbine is uncorrelated. The methodology makes it possible to estimate a wind farm noise map over the surrounding terrain in a reasonable amount of computational time on a personal computer....
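The incoherent summation of per-turbine contributions can be sketched as follows; a free-field spherical-spreading term stands in for the ray-tracing propagation, and the source power level is an assumed value:

```python
import math

def wind_farm_spl(turbine_positions, receiver, source_lw=105.0):
    """Sound pressure level (dB) at a receiver from incoherent point sources.

    Each turbine is a point source of sound power level source_lw; only
    spherical spreading is modelled here (free field, no directivity,
    no ground or atmospheric effects).
    """
    total = 0.0
    for x, y in turbine_positions:
        r = math.hypot(x - receiver[0], y - receiver[1])
        spl = source_lw - 10.0 * math.log10(4.0 * math.pi * r * r)
        total += 10.0 ** (spl / 10.0)   # incoherent (energy) summation
    return 10.0 * math.log10(total)

# Three turbines on a 400 m line; receiver 500 m away from the middle one
spl = wind_farm_spl([(0.0, 0.0), (400.0, 0.0), (800.0, 0.0)], (400.0, 500.0))
print(round(spl, 1))
```

Evaluating this sum on a grid of receiver positions yields exactly the kind of noise map the abstract describes, with the ray tracer replacing the simple spreading term.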

  6. Toward a New Model of Fatherhood? Discourses on the Process of Paternal Involvement in Urban Spain

    Directory of Open Access Journals (Sweden)

    Marc Barbeta-Viñas

    2017-01-01

    In recent decades, quantitative studies have documented an increase in paternal involvement. These changes have led to hypotheses of a new model of fatherhood. The aim of this paper is to explore the discourses of Spanish fathers regarding paternal involvement, identifying its structure and changing tendencies. The analysis is based on eight focus groups conducted in Madrid and Barcelona. Among the main findings we emphasize that the transition of fatherhood, not without contradictions, is causing the traditional homogeneous fatherhood to evolve toward a more complex and multidimensional conceptualization.

  7. Biofidelic Human Activity Modeling and Simulation with Large Variability

    Science.gov (United States)

    2014-11-25

    exact match or a close representation. Efforts were made to ensure that the activity models can be integrated into widely used game engines and image generators. Modeling and simulation (M&S) has been increasingly used in simulation-based training and virtual reality (VR). However, human M&S technology currently used in various...

  8. Large area application of a corn hazard model. [Soviet Union

    Science.gov (United States)

    Ashburn, P.; Taylor, T. W. (Principal Investigator)

    1981-01-01

    An application test of the crop calendar portion of a corn (maize) stress indicator model developed by the early warning, crop condition assessment component of AgRISTARS was performed over the corn-for-grain producing regions of the U.S.S.R. during the 1980 crop year using real data. Performance of the crop calendar submodel was favorable; efficiency gains in meteorological data analysis time were on the order of 85 to 90 percent.

  9. Chemical Modeling for Large-Eddy Simulation of Turbulent Combustion

    Science.gov (United States)

    2009-03-31

    Report topics include a swirl burner and the development of an interactive platform for the generation, comparison, and evaluation of kinetic models for JP-8 surrogate fuels. ...the refined mesh resolution is increased. Application of the RLSG to a turbulent Bunsen flame, however, showed that the flame front solution remained... A schematic of this LES is shown in Fig. 4. The contour cut plane shows the temperature field, while the isocontour shows the level...

  10. Hydrogeochemical modeling of large fluvial basins: impact of climate change

    International Nuclear Information System (INIS)

    Beaulieu, E.

    2011-01-01

    The chemical weathering of continental surfaces represents one of the carbon sinks at the Earth's surface that regulate the climate through a feedback mechanism. The weathering intensity is controlled by climate but also by lithology, vegetation cover, hydrology, and the presence of smectites and acids in soils. In this work, a study at the global scale on grid cells highlighted that an increase in the atmospheric CO2 concentration would involve a decrease in evapotranspiration due to progressive stomatal closure, and a rise in soil acidity related to enhanced biospheric productivity. These changes would promote the chemical weathering of silicates and, as a result, would increase CO2 consumption by 3% for each 100 ppmv rise in the atmospheric CO2 concentration. A study of the Mackenzie basin (Canada), one of the most important catchments in an arctic environment, then showed the high sensitivity of chemical weathering to sulfuric acid production: taking into account the presence of pyrite in the catchment, the mean CO2 consumption of the Mackenzie decreased by 56%. In addition, the mean CO2 consumption of this basin could rise by 53% between today's climate and a climate scenario predicted for the end of the century. (author)

  11. A two-dimensional analytical model of vapor intrusion involving vertical heterogeneity

    Science.gov (United States)

    Yao, Yijun; Verginelli, Iason; Suuberg, Eric M.

    2017-05-01

    In this work, we present an analytical chlorinated vapor intrusion (CVI) model that can estimate source-to-indoor-air concentration attenuation by simulating the two-dimensional (2-D) vapor concentration profile in vertically heterogeneous soils overlying a homogeneous vapor source. The analytical solution describing the 2-D soil gas transport was obtained by applying a modified Schwarz-Christoffel mapping method. A partial field validation showed that the developed model provides results (especially in terms of indoor emission rates) in line with the measured data from a case involving a building overlying a layered soil. In further testing, it was found that the new analytical model can very closely replicate the results of three-dimensional (3-D) numerical models at steady state in scenarios involving layered soils overlying homogeneous groundwater sources. By contrast, with the two-layer approach (capillary fringe and vadose zone) employed in the EPA implementation of the Johnson and Ettinger model, the spatially and temporally averaged indoor concentrations in the case of groundwater sources can be higher than the ones estimated by the numerical model by up to two orders of magnitude. In short, the model proposed in this work can represent an easy-to-use tool that simulates the subsurface soil gas concentration in layered soils overlying a homogeneous vapor source while keeping the simplicity of an analytical approach that requires much less computational effort.
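    The full 2-D Schwarz-Christoffel solution is not reproduced here, but the role of vertical layering in vapor transport can be illustrated with a much simpler ingredient common to screening models in the Johnson and Ettinger family: steady-state diffusion through soil layers treated as resistances in series, with a Millington-Quirk effective diffusivity per layer. All parameter values in this sketch are illustrative assumptions, not data from the paper.

```python
# Illustrative sketch (not the paper's 2-D analytical model):
# steady-state vapor diffusion through a layered vadose zone treated as
# resistances in series, with Millington-Quirk effective diffusivities.
# All numbers below are hypothetical examples.

D_AIR = 0.7  # m^2/day, free-air diffusion coefficient (illustrative)

def millington_quirk(theta_a, theta_t, d_air=D_AIR):
    """Effective vapor diffusivity for one soil layer."""
    return d_air * theta_a ** (10.0 / 3.0) / theta_t ** 2

# layers: (thickness in m, air-filled porosity, total porosity)
layers = [
    (1.0, 0.28, 0.38),   # sandy layer
    (0.5, 0.12, 0.40),   # wetter silty layer (more resistive)
]

total_L = sum(L for L, _, _ in layers)
# series resistance: D_overall = L_total / sum(L_i / D_i)
resistance = sum(L / millington_quirk(ta, tt) for L, ta, tt in layers)
D_overall = total_L / resistance

print(f"overall effective diffusivity: {D_overall:.4f} m^2/day")
```

    The series-resistance combination shows why a single resistive layer can dominate the whole profile, which is the regime where a one-dimensional two-layer approximation starts to diverge from the 2-D solution.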

  13. Canadian Whole-Farm Model Holos - Development, Stakeholder Involvement, and Model Application

    Science.gov (United States)

    Kroebel, R.; Janzen, H.; Beauchemin, K. A.

    2017-12-01

    Agriculture and Agri-Food Canada's Holos model, based mostly on emission factors, aims to explore the effect of management on Canadian whole-farm greenhouse gas emissions. The model includes 27 commonly grown annual and perennial crops, summer fallow, grassland, and 8 types of tree plantings, along with beef, dairy, sheep, swine, and other livestock or poultry operations. Model outputs encompass net emissions of CO2, CH4, and N2O (in CO2 equivalents), calculated for various farm components. Where possible, algorithms are drawn from peer-reviewed publications. For consistency, Holos is aligned with the Canadian sustainability indicator and national greenhouse gas inventory objectives. Although primarily an exploratory tool for research, the model's design makes it accessible and instructive to agricultural producers, educators, and policy makers as well. Model development therefore proceeds iteratively, with extensive stakeholder feedback from training sessions and annual workshops. To make the model accessible to diverse users, the team developed a multi-layered interface, with general farming scenarios for general use but with access to detailed coefficients and assumptions for researchers. The model relies on extensive climate, soil, and agronomic databases to populate regionally applicable default values, thereby minimizing manual data entry. In an initial application, the model was used to assess greenhouse gas emissions from the Canadian beef production system; it showed that enteric methane accounted for 63% of total GHG emissions and that 84% of emissions originated from the cow-calf herd. The model further showed that GHG emission intensity per kg of beef, nationally, declined by 14% from 1981 to 2011, owing to gains in production efficiency. Holos is now being used to consider further potential advances through improved rations or other management options. We are now aiming to expand into questions of grazing management, and are developing a novel carbon...
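    The emission-factor bookkeeping at the core of a whole-farm model can be sketched in a few lines: per-gas emissions from each farm component are converted to CO2 equivalents with 100-year global warming potentials and summed. The GWP values below (CH4 = 25, N2O = 298, from IPCC AR4) are one common convention, and the component numbers are invented for illustration; Holos's actual coefficients and component breakdown differ.

```python
# Minimal sketch of CO2-equivalent aggregation as performed by
# emission-factor models like Holos. GWP100 values follow IPCC AR4;
# the per-component emissions (kg of gas) are illustrative assumptions.
GWP = {"CO2": 1.0, "CH4": 25.0, "N2O": 298.0}

def co2e(emissions_kg):
    """Convert a {gas: kg} dict to total kg CO2-equivalent."""
    return sum(kg * GWP[gas] for gas, kg in emissions_kg.items())

farm_components = {
    "enteric_fermentation": {"CH4": 120.0},
    "manure_storage": {"CH4": 30.0, "N2O": 2.0},
    "cropped_soils": {"N2O": 5.0, "CO2": 800.0},
}

by_component = {name: co2e(e) for name, e in farm_components.items()}
total = sum(by_component.values())
print(by_component, total)
```

    Reporting per-component CO2e, as in the dict above, is what allows statements like "enteric methane accounted for 63% of total GHG emissions" to be read directly off the model output.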

  14. Effects of uncertainty in model predictions of individual tree volume on large area volume estimates

    Science.gov (United States)

    Ronald E. McRoberts; James A. Westfall

    2014-01-01

    Forest inventory estimates of tree volume for large areas are typically calculated by adding model predictions of volumes for individual trees. However, the uncertainty in the model predictions is generally ignored with the result that the precision of the large area volume estimates is overestimated. The primary study objective was to estimate the effects of model...
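    The effect the abstract describes can be demonstrated with a small Monte Carlo sketch: when every tree's predicted volume shares the same model (parameter) error, that error does not average out over a large area, so the naive precision of the total is overstated. All error magnitudes here are hypothetical.

```python
import random
import statistics

# Monte Carlo illustration: a shared (systematic) model error affects
# every tree identically, so it persists in the large-area total, while
# independent per-tree residuals largely cancel. Numbers are invented.
random.seed(42)

n_trees = 1000
true_vols = [random.uniform(0.2, 2.0) for _ in range(n_trees)]  # m^3

def predicted_total():
    model_bias = random.gauss(0.0, 0.03)   # 3% shared parameter error
    return sum(v * (1 + model_bias + random.gauss(0, 0.10))     # 10% residual
               for v in true_vols)

totals = [predicted_total() for _ in range(2000)]
mean_total = statistics.mean(totals)
sd_total = statistics.stdev(totals)
print(f"total ~ {mean_total:.0f} m^3, SE ~ {sd_total:.1f} m^3")
```

    With these settings the shared 3% error contributes roughly 0.03 times the total volume to the standard error, dwarfing the contribution of the independent 10% residuals, which shrinks as the tree count grows.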

  15. Modeling a Large Data Acquisition Network in a Simulation Framework

    CERN Document Server

    Colombo, Tommaso; The ATLAS collaboration

    2015-01-01

    The ATLAS detector at CERN records particle collision “events” delivered by the Large Hadron Collider. Its data-acquisition system is a distributed software system that identifies, selects, and stores interesting events in near real-time, with an aggregate throughput of several tens of GB/s. It is executed on a farm of roughly 2000 commodity worker nodes communicating via TCP/IP on an Ethernet network. Event data fragments are received from the many detector readout channels and are buffered, collected together, analyzed, and either stored permanently or discarded. This system, and data-acquisition systems in general, are sensitive to the latency of the data transfer from the readout buffers to the worker nodes. Challenges affecting this transfer include the many-to-one communication pattern and the inherently bursty nature of the traffic. In this paper we introduce the main performance issues brought about by this workload, focusing in particular on the so-called TCP incast pathol...

  16. Large-scale groundwater modeling using global datasets: a test case for the Rhine-Meuse basin

    Directory of Open Access Journals (Sweden)

    E. H. Sutanudjaja

    2011-09-01

    The current generation of large-scale hydrological models does not include a groundwater flow component. Large-scale groundwater models, involving aquifers and basins of multiple countries, are still rare, mainly due to a lack of hydro-geological data, which are usually only available in developed countries. In this study, we propose a novel approach to construct large-scale groundwater models by using global datasets that are readily available. As the test-bed, we use the combined Rhine-Meuse basin, which contains groundwater head data used to verify the model output. We start by building a distributed land surface model (30 arc-second resolution) to estimate groundwater recharge and river discharge. Subsequently, a MODFLOW transient groundwater model is built and forced by the recharge and surface water levels calculated by the land surface model. Results are promising despite the fact that we still use an offline procedure to couple the land surface and MODFLOW groundwater models (i.e. the simulations of both models are performed separately). The simulated river discharges compare well to the observations. Moreover, based on our sensitivity analysis, in which we run several groundwater model scenarios with various hydro-geological parameter settings, we observe that the model can reasonably well reproduce the observed groundwater head time series. However, we note that there are still some limitations in the current approach, specifically because the offline-coupling technique simplifies the dynamic feedbacks between surface water levels and groundwater heads, and between soil moisture states and groundwater heads. Also, the current sensitivity analysis ignores the uncertainty of the land surface model output. Despite these limitations, we argue that the results of the current model show promise for large-scale groundwater modeling practices, including for data-poor environments and at the global scale.

  17. Degree of multicollinearity and variables involved in linear dependence in additive-dominant models

    Directory of Open Access Journals (Sweden)

    Juliana Petrini

    2012-12-01

    The objective of this work was to assess the degree of multicollinearity and to identify the variables involved in linear dependence relations in additive-dominant models. Data on birth weight (n=141,567), yearling weight (n=58,124), and scrotal circumference (n=20,371) of Montana Tropical composite cattle were used. Diagnosis of multicollinearity was based on the variance inflation factor (VIF) and on the evaluation of the condition indexes and eigenvalues of the correlation matrix among explanatory variables. The first model studied (RM) included the fixed effect of dam age class at calving and the covariates associated with the direct and maternal additive and non-additive effects. The second model (R) included all the effects of the RM model except the maternal additive effects. Multicollinearity was detected in both models for all traits considered, with VIF values of 1.03-70.20 for RM and 1.03-60.70 for R. Collinearity increased with the increase of variables in the model and the decrease in the number of observations, and it was classified as weak, with condition index values between 10.00 and 26.77. In general, the variables associated with additive and non-additive effects were involved in multicollinearity, partially due to the natural connection between these covariables as fractions of the biological types in breed composition.
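    The VIF diagnostic used in the study is easy to reproduce for the two-predictor case, where the coefficient of determination of one covariate regressed on the other is simply the squared Pearson correlation. The data below are invented to mimic two collinear additive effects; a VIF above 10 is the usual warning threshold.

```python
import math

# Sketch of the variance inflation factor (VIF) diagnostic. With two
# explanatory variables, R^2 of one regressed on the other equals the
# squared correlation, so VIF = 1 / (1 - r^2). The values below are
# illustrative, not the Montana Tropical records.

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def vif_two_predictors(x, y):
    r2 = pearson_r(x, y) ** 2
    return 1.0 / (1.0 - r2)

direct_additive = [0.50, 0.62, 0.55, 0.70, 0.66, 0.58]
maternal_additive = [0.48, 0.60, 0.57, 0.69, 0.64, 0.55]  # nearly collinear
print(f"VIF = {vif_two_predictors(direct_additive, maternal_additive):.1f}")
```

    With more than two predictors, each R² comes from a multiple regression of one covariate on all the others, but the interpretation of the resulting VIF is unchanged.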

  18. Consumer input into health care: Time for a new active and comprehensive model of consumer involvement.

    Science.gov (United States)

    Hall, Alix E; Bryant, Jamie; Sanson-Fisher, Rob W; Fradgley, Elizabeth A; Proietto, Anthony M; Roos, Ian

    2018-03-07

    To ensure the provision of patient-centred health care, it is essential that consumers are actively involved in the process of determining and implementing health-care quality improvements. However, common strategies used to involve consumers in quality improvements, such as consumer membership on committees and collection of patient feedback via surveys, are ineffective and have a number of limitations, including: limited representativeness; tokenism; a lack of reliable and valid patient feedback data; infrequent assessment of patient feedback; delays in acquiring feedback; and uncertainty about how collected feedback is used to drive health-care improvements. We propose a new active model of consumer engagement that aims to overcome these limitations. This model involves the following: (i) the development of a new measure of consumer perceptions; (ii) low-cost and frequent electronic data collection of patient views of quality improvements; (iii) efficient feedback to the health-care decision makers; and (iv) active involvement of consumers that fosters power to influence health system changes. © 2018 The Authors. Health Expectations published by John Wiley & Sons Ltd.

  19. Portraiture of constructivist parental involvement: A model to develop a community of practice

    Science.gov (United States)

    Dignam, Christopher Anthony

    This qualitative research study addressed the problem of the lack of parental involvement in secondary school science. Increasing parental involvement is vital in supporting student academic achievement and social growth. The purpose of this emergent phenomenological study was to identify conditions required to successfully construct a supportive learning environment to form partnerships between students, parents, and educators. The overall research question in this study investigated the conditions necessary to successfully enlist parental participation with students during science inquiry investigations at the secondary school level. One hundred thirteen pairs of parents and students engaged in a 6-week scientific inquiry activity and recorded attitudinal data in dialogue journals, questionnaires, open-ended surveys, and during one-on-one interviews conducted by the researcher between individual parents and students. Comparisons and cross-interpretations of inter-rater, codified, triangulated data were utilized for identifying emergent themes. Data analysis revealed that the active involvement of parents in researching with their child during inquiry investigations, engaging in journaling, and assessing student performance fostered partnerships among students, parents, and educators and supported students' social skills development. The resulting model, employing constructivist leadership and enlisting parent involvement, provides conditions and strategies required to develop a community of practice that can help effect social change. The active involvement of parents fostered improved efficacy and a holistic mindset in parents, students, and teachers. Based on these findings, the interactive collaboration of parents in science learning activities can proactively facilitate a community of practice that will assist educators in facilitating social change.

  20. Development of a Gravid Uterus Model for the Study of Road Accidents Involving Pregnant Women.

    Science.gov (United States)

    Auriault, F; Thollon, L; Behr, M

    2016-01-01

    Car accident simulations involving pregnant women are well documented in the literature and suggest that intra-uterine pressure could be responsible for the phenomenon of placental abruption, underlining the need for a realistic amniotic fluid model, including fluid-structure interactions (FSI). This study reports the development and validation of an amniotic fluid model using an Arbitrary Lagrangian Eulerian formulation in the LS-DYNA environment. Dedicated to the study of the mechanisms responsible for fetal injuries resulting from road accidents, the fluid model was validated using dynamic loading tests. Drop tests were performed on a deformable water-filled container at acceleration levels that would be experienced in a gravid uterus during a frontal car collision at 25 kph. During the test device braking phase, container deformation induced by inertial effects and FSI was recorded by kinematic analysis. These tests were then simulated in the LS-DYNA environment to validate the fluid model under dynamic loading, based on the container deformations. Finally, the coupling between the amniotic fluid model and an existing finite-element full-body pregnant woman model was validated in terms of pressure. To do so, results of experimental tests performed on four postmortem human surrogates (PMHS), in which a physical gravid uterus model was inserted, were used. The experimental intra-uterine pressure from these tests was compared to the intra-uterine pressure from a numerical simulation performed under the same loading conditions. The free-fall numerical and experimental responses are strongly correlated. The coupling between the amniotic fluid model and the pregnant woman model provides intra-uterine pressure values correlated with the experimental test responses. The use of an Arbitrary Lagrangian Eulerian formulation allows the analysis of FSI between the amniotic fluid and the gravid uterus during a road accident involving pregnant women.

  1. Towards agile large-scale predictive modelling in drug discovery with flow-based programming design principles.

    Science.gov (United States)

    Lampa, Samuel; Alvarsson, Jonathan; Spjuth, Ola

    2016-01-01

    Predictive modelling in drug discovery is challenging to automate as it often contains multiple analysis steps and might involve cross-validation and parameter tuning that create complex dependencies between tasks. With large-scale data or when using computationally demanding modelling methods, e-infrastructures such as high-performance or cloud computing are required, adding to the existing challenges of fault-tolerant automation. Workflow management systems can aid in many of these challenges, but the currently available systems lack the functionality needed to enable agile and flexible predictive modelling. We here present an approach inspired by elements of the flow-based programming paradigm, implemented as an extension of the Luigi system, which we name SciLuigi. We also discuss the experiences from using the approach when modelling a large set of biochemical interactions using a shared computer cluster.
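    The flow-based programming idea the paper borrows, tasks exposing named ports that a separate workflow definition wires together, can be sketched without any workflow library. The real SciLuigi API differs from this dependency-free toy version, which only illustrates the separation of task logic from network wiring.

```python
# Conceptual sketch of flow-based programming for pipelines: tasks are
# nodes with named input ports; the workflow wires outputs to inputs
# explicitly, rather than hard-coding dependencies inside each task.
# (Illustration only; not the SciLuigi API.)

class Task:
    def __init__(self, name, func):
        self.name, self.func = name, func
        self.upstream = {}      # in-port name -> upstream task
        self._result, self._done = None, False

    def connect(self, port, task):
        """Wire an upstream task's output into a named input port."""
        self.upstream[port] = task

    def run(self):
        if not self._done:      # naive memoisation: run each task once
            kwargs = {p: t.run() for p, t in self.upstream.items()}
            self._result, self._done = self.func(**kwargs), True
        return self._result

# toy modelling pipeline: load -> train -> evaluate
load = Task("load", lambda: list(range(10)))
train = Task("train", lambda data: sum(data) / len(data))  # "model" = mean
evaluate = Task("eval", lambda model: abs(model - 4.5))

train.connect("data", load)
evaluate.connect("model", train)
print(evaluate.run())   # prints 0.0
```

    Because the wiring lives outside the task definitions, swapping `train` for a different modelling step requires no change to the other tasks, which is the agility argument made in the abstract.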

  2. A model comparison study of large-scale mantle lithosphere dynamics driven by subduction

    Science.gov (United States)

    OzBench, Mark; Regenauer-Lieb, Klaus; Stegman, Dave R.; Morra, Gabriele; Farrington, Rebecca; Hale, Alina; May, Dave A.; Freeman, Justin; Bourgouin, Laurent; Mühlhaus, Hans; Moresi, Louis

    2008-12-01

    Modelling subduction involves solving the dynamic interaction between a rigid (solid yet deformable) plate and the fluid (easily deformable) mantle. Previous approaches neglected the solid-like behavior of the lithosphere by only considering a purely fluid description. However, over the past 5 years, a more self-consistent description of a mechanically differentiated subducting plate has emerged. The key feature in this mechanical description is the incorporation of a strong core which provides small resistance to plate bending at subduction zones while simultaneously providing adequate stretching resistance such that slab pull drives forward plate motion. Additionally, the accompanying numerical approaches for simulating large-scale lithospheric deformation processes coupled to the underlying viscous mantle flow have become available. Here we put forward three fundamentally different numerical strategies, each of which is capable of treating the advection of mechanically distinct materials that describe the subducting plate. We demonstrate their robustness by calculating the numerically challenging problem of subduction of a 6000 km wide slab at high resolution in three dimensions, the successful achievement of which only a few codes in the world can presently even attempt. In spite of the differences between the approaches, all three codes pass the simple qualitative test of developing an "S-bend" trench curvature previously observed in similar models. While reproducing this emergent feature validates that the lithosphere-mantle interaction has been correctly modelled, this is not a numerical benchmark in the traditional sense where the objective is for all codes to achieve exact agreement on a unique numerical solution. However, we do provide some quantitative comparisons, such as trench and plate kinematics, in addition to discussing the strengths and weaknesses of the individual approaches. Consequently, we believe these developed algorithms can now be applied to...

  3. Highly efficient model updating for structural condition assessment of large-scale bridges.

    Science.gov (United States)

    2015-02-01

    For efficiently updating models of large-scale structures, the response surface (RS) method based on radial basis functions (RBFs) is proposed to model the input-output relationship of structures. The key issues for applying the proposed method a...
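    The RS idea can be illustrated with a tiny radial-basis-function surrogate: evaluate an "expensive" response at a few design points, solve a small linear system for RBF weights, and interpolate elsewhere. The response function, centers, and Gaussian shape parameter below are all illustrative choices, not the paper's bridge model.

```python
import math

# Sketch of an RBF response surface: fit Gaussian RBFs to a handful of
# expensive model evaluations, then query the cheap surrogate instead of
# the full structural model. The analytic "response" is a stand-in.

def rbf(r, eps=1.0):
    return math.exp(-(eps * r) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda k: abs(M[k][i]))
        M[i], M[p] = M[p], M[i]
        for k in range(i + 1, n):
            f = M[k][i] / M[i][i]
            for j in range(i, n + 1):
                M[k][j] -= f * M[i][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

response = lambda x: math.sin(2 * x) + 0.5 * x   # "expensive" model (stand-in)
centers = [0.0, 0.5, 1.0, 1.5, 2.0]              # sampled design points
A = [[rbf(abs(c - d)) for d in centers] for c in centers]
w = solve(A, [response(c) for c in centers])

surrogate = lambda x: sum(wi * rbf(abs(x - c)) for wi, c in zip(w, centers))
err = abs(surrogate(0.75) - response(0.75))
print(f"surrogate error at x=0.75: {err:.4f}")
```

    The surrogate reproduces the sampled responses exactly at the design points, which is the interpolation property that makes RBF response surfaces attractive for model updating loops.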

  4. Small- and large-signal modeling of InP HBTs in transferred-substrate technology

    DEFF Research Database (Denmark)

    Johansen, Tom Keinicke; Rudolph, Matthias; Jensen, Thomas

    2014-01-01

    ...a direct parameter extraction methodology dedicated to III–V based HBTs. It is shown that the modeling of measured S-parameters can be improved in the millimeter-wave frequency range by augmenting the small-signal model with a description of AC current crowding. The extracted elements of the small-signal model structure are employed as a starting point for the extraction of a large-signal model. The developed large-signal model for the TS-HBTs accurately predicts the DC over temperature and small-signal performance over bias as well as the large-signal performance at millimeter-wave frequencies.

  5. Stochastic modelling of a large subduction interface earthquake in Wellington, New Zealand

    Science.gov (United States)

    Francois-Holden, C.; Zhao, J.

    2012-12-01

    The Wellington region, home of New Zealand's capital city, is cut by a number of major right-lateral strike-slip faults, and is underlain by the currently locked west-dipping subduction interface between the downgoing Pacific Plate and the overriding Australian Plate. A potential cause of significant earthquake loss in the Wellington region is a large magnitude (perhaps 8+) "subduction earthquake" on the Australia-Pacific plate interface, which lies ~23 km beneath Wellington City. "It's Our Fault" is a project involving a comprehensive study of Wellington's earthquake risk. Its objective is to position Wellington city to become more resilient, through an encompassing study of the likelihood of large earthquakes, and the effects and impacts of these earthquakes on humans and the built environment. As part of the "It's Our Fault" project, we are working on estimating ground motions from potential large plate boundary earthquakes. We present the latest results on ground motion simulations in terms of response spectra and acceleration time histories. First we characterise the potential interface rupture area based on previous geodetically derived estimates of interface slip deficit. Then, we consider a suitable range of source parameters, including various rupture areas, moment magnitudes, stress drops, slip distributions and rupture propagation directions. Our comprehensive study also includes simulations of large historical subduction events from around the world translated into the New Zealand subduction context, such as the 2003 M8.3 Tokachi-Oki (Japan) earthquake and the 2010 M8.8 Chile earthquake. To model synthetic seismograms and the corresponding response spectra we employed the EXSIM code developed by Atkinson et al. (2009), with a regional attenuation model based on the 3D attenuation model for the lower North Island developed by Eberhart-Phillips et al. (2005). The resulting rupture scenarios all produce long-duration shaking, and peak ground...

  6. Model Selection and Hypothesis Testing for Large-Scale Network Models with Overlapping Groups

    Directory of Open Access Journals (Sweden)

    Tiago P. Peixoto

    2015-03-01

    The effort to understand network systems in increasing detail has resulted in a diversity of methods designed to extract their large-scale structure from data. Unfortunately, many of these methods yield diverging descriptions of the same network, making both the comparison and understanding of their results a difficult challenge. A possible solution to this outstanding issue is to shift the focus away from ad hoc methods and move towards more principled approaches based on statistical inference of generative models. As a result, we face instead the more well-defined task of selecting between competing generative processes, which can be done under a unified probabilistic framework. Here, we consider the comparison between a variety of generative models, including features such as degree correction, where nodes with arbitrary degrees can belong to the same group, and community overlap, where nodes are allowed to belong to more than one group. Because such model variants possess an increasing number of parameters, they become prone to overfitting. In this work, we present a method of model selection based on the minimum description length criterion and posterior odds ratios that is capable of fully accounting for the increased degrees of freedom of the larger models and selects the best one according to the statistical evidence available in the data. In applying this method to many empirical unweighted networks from different fields, we observe that community overlap is very often not supported by statistical evidence and is selected as a better model only for a minority of them. On the other hand, we find that degree correction tends to be almost universally favored by the available data, implying that intrinsic node properties (as opposed to group properties) are often an essential ingredient of network formation.

  7. System Model Bias Processing Approach for Regional Coordinated States Information Involved Filtering

    Directory of Open Access Journals (Sweden)

    Zebo Zhou

    2016-01-01

    In Kalman filtering applications, the conventional dynamic model, which connects the state information of two consecutive epochs by a state transition matrix, is usually predefined and assumed to be invariant. Aiming to improve the adaptability and accuracy of the dynamic model, we propose a multiple-historical-states-involved filtering algorithm. An autoregressive model is used as the dynamic model, which is subsequently combined with the observation model for deriving the optimal window-recursive filter formulae in the sense of the minimum mean square error principle. The corresponding test statistics characteristics of the system residuals are discussed in detail. The test statistics of regional predicted residuals are then constructed in a time-window for model bias testing with two hypotheses, that is, the null and alternative hypotheses. Based on the innovation test statistics, we develop a model bias processing procedure including bias detection, location identification, and state correction. Finally, the minimum detectable bias and bias-to-noise ratio are both computed for evaluating the internal and external reliability of the overall system, respectively.
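    The innovation-based bias testing described above builds on a standard ingredient: under the null hypothesis the Kalman filter's predicted residual (innovation) is zero-mean with a variance the filter itself provides, so a normalized innovation beyond a threshold signals model or measurement bias. A minimal scalar sketch, with invented noise levels and an injected bias, is:

```python
import math
import random

# Minimal 1-D Kalman filter with an innovation test, the building block
# behind window-based bias detection. Model: x_k = a*x_{k-1} + w,
# z_k = x_k + v. All numbers are illustrative.
random.seed(1)

a, q, r = 1.0, 0.01, 0.25      # transition, process var, measurement var
x_est, p = 0.0, 1.0
flags, truth = [], 0.0

for k in range(200):
    truth = a * truth + random.gauss(0, math.sqrt(q))
    z = truth + random.gauss(0, math.sqrt(r))
    if k >= 150:
        z += 3.0               # injected measurement bias
    # predict
    x_pred = a * x_est
    p_pred = a * p * a + q
    # innovation test: nu / sqrt(s) ~ N(0,1) under the null (no bias)
    s = p_pred + r
    nu = z - x_pred
    if abs(nu / math.sqrt(s)) > 3.0:
        flags.append(k)
    # update
    kgain = p_pred / s
    x_est = x_pred + kgain * nu
    p = (1 - kgain) * p_pred

print("flagged epochs:", flags)
```

    A windowed test, as in the paper, pools several consecutive normalized innovations into one statistic, trading detection delay for sensitivity to smaller biases.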

  8. The Cauchy problem for a model of immiscible gas flow with large data

    Energy Technology Data Exchange (ETDEWEB)

    Sande, Hilde

    2008-12-15

    The thesis consists of an introduction and two papers; 1. The solution of the Cauchy problem with large data for a model of a mixture of gases. 2. Front tracking for a model of immiscible gas flow with large data. (AG) refs, figs

  9. Simplified Model for the Population Dynamics Involved in a Malaria Crisis

    International Nuclear Information System (INIS)

    Kenfack-Jiotsa, A.; Fotsa-Ngaffo, F.

    2009-12-01

    We adapt a simple predator-prey model to the populations involved in a malaria crisis. The study is restricted to the blood stream inside the human body, excluding the liver. In particular, we look at the dynamics of the malaria parasites ('merozoites') and their interaction with the blood components, more specifically the red blood cells (RBC) and the immune response grouped under the white blood cells (WBC). The stability analysis of the system reveals an important practical direction to investigate as regards the ratio of WBC over RBC, since it is a fundamental parameter that characterizes stable regions. The model numerically presents a wide range of possible features of the disease. Even in its simplified form, the model not only recovers well-known results but in addition predicts possible hidden phenomena and an interesting clinical feature of a malaria crisis. (author)
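    A minimal predator-prey system of the kind the abstract adapts can be integrated with a forward Euler scheme; here merozoites play the prey-like role and the WBC immune response the predator-like role. The parameters and time step are illustrative, not fitted clinical values.

```python
# Toy Lotka-Volterra-style sketch of the abstract's setup: merozoites m
# (prey-like) grow by invading red blood cells and are removed by white
# blood cells w (predator-like), whose activation is stimulated by the
# parasite load. All parameter values are invented for illustration.

def step(m, w, dt=0.01,
         growth=1.0,      # merozoite replication via RBC invasion
         kill=0.5,        # removal of merozoites by WBC encounters
         activate=0.2,    # immune activation stimulated by parasites
         decay=0.3):      # relaxation of the immune response
    dm = (growth - kill * w) * m
    dw = (activate * m - decay) * w
    return m + dt * dm, w + dt * dw

m, w = 1.0, 0.5
trajectory = [(m, w)]
for _ in range(5000):
    m, w = step(m, w)
    trajectory.append((m, w))

print(f"final state: m={m:.2f}, w={w:.2f}")
```

    The system cycles around the equilibrium (m* = decay/activate, w* = growth/kill); in this toy version the stable regions depend on the kill and activation rates, echoing the abstract's emphasis on the WBC/RBC ratio as the parameter that separates stable from unstable regimes.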

  10. A comparative modeling and molecular docking study on Mycobacterium tuberculosis targets involved in peptidoglycan biosynthesis.

    Science.gov (United States)

    Fakhar, Zeynab; Naiker, Suhashni; Alves, Claudio N; Govender, Thavendran; Maguire, Glenn E M; Lameira, Jeronimo; Lamichhane, Gyanu; Kruger, Hendrik G; Honarparvar, Bahareh

    2016-11-01

    An alarming rise of multidrug-resistant Mycobacterium tuberculosis strains and the continuous high global morbidity of tuberculosis have reinvigorated the need to identify novel targets to combat the disease. The enzymes that catalyze the biosynthesis of peptidoglycan in M. tuberculosis are essential and noteworthy therapeutic targets. In this study, the biochemical function and homology modeling of MurI, MurG, MraY, DapE, DapA, Alr, and Ddl enzymes of the CDC1551 M. tuberculosis strain involved in the biosynthesis of peptidoglycan cell wall are reported. Generation of the 3D structures was achieved with Modeller 9.13. To assess the structural quality of the obtained homology modeled targets, the models were validated using PROCHECK, PDBsum, QMEAN, and ERRAT scores. Molecular dynamics simulations were performed to calculate root mean square deviation (RMSD) and radius of gyration (Rg) of MurI and MurG target proteins and their corresponding templates. For further model validation, RMSD and Rg for selected targets/templates were investigated to compare the close proximity of their dynamic behavior in terms of protein stability and average distances. To identify the potential binding mode required for molecular docking, binding site information of all modeled targets was obtained using two prediction algorithms. A docking study was performed for MurI to determine the potential mode of interaction between the inhibitor and the active site residues. This study presents the first accounts of the 3D structural information for the selected M. tuberculosis targets involved in peptidoglycan biosynthesis.

  11. The sheep as a large osteoporotic model for orthopaedic research in humans

    DEFF Research Database (Denmark)

    Cheng, L.; Ding, Ming; Li, Z.

    2008-01-01

    Although small animals such as rodents are very popular for osteoporosis models, large animal models are necessary for research on human osteoporotic diseases. Sheep osteoporosis models are becoming more important because of their unique advantages for osteoporosis research. Sheep are docile... Intake restriction and glucocorticoid application are the most effective methods for establishing a sheep osteoporosis model. The sheep osteoporosis model is an ideal animal model for studying various medicines against osteoporosis and other treatment methods, such as prosthetic replacement, for osteoporotic...

  12. Peripheral nervous system involvement in systemic lupus erythematosus: Prevalence, clinical and immunological characteristics, treatment and outcome of a large cohort from a single centre.

    Science.gov (United States)

    Toledano, Pilar; Orueta, Ramón; Rodríguez-Pintó, Ignasi; Valls-Solé, Josep; Cervera, Ricard; Espinosa, Gerard

    2017-07-01

    Disorders of the peripheral nervous system in patients with systemic lupus erythematosus (PNS-SLE) are a major cause of morbidity. The aims of the present study were to determine the prevalence of PNS-SLE involvement in a large cohort of SLE patients from a single centre, to characterize such involvement, treatment modalities and outcome, and to identify the possible variables that may be associated with its presence. We performed an observational cross-sectional study that included all SLE patients being followed in our department between March and December 2015 who met at least one of the PNS-SLE case definitions proposed in 1999 by the American College of Rheumatology. Overall, 93 out of 524 (17.7%) patients presented with PNS-SLE syndrome; 90 (96.8%) of them were women. The mean age at PNS-SLE syndrome diagnosis was 44.8±14.1 years and the average time from diagnosis of SLE to PNS-SLE diagnosis was 88 (range, 541-400) months. The most frequent manifestation was polyneuropathy (36.6%), followed by non-compression mononeuropathy (23.7%), cranial neuropathy and myasthenia gravis (7.5% each), and Guillain-Barré syndrome (1.1%). The most frequent electrodiagnostic test (EDX) pattern was axonal degeneration, present in 49 patients, corresponding to 80.3% of the overall EDX patterns. Mixed sensory-motor neuropathy was the most common type of involvement, accounting for 56% of cases. Thirty-six out of 90 (40%) received glucocorticoids and/or immunosuppressant agents. Overall, global response (complete and/or partial) to treatment was achieved in 77.4% of patients, without differences between the types of PNS-SLE involvement. Older age at SLE diagnosis (37.3±14.8 versus 30.8±12; p=0.001) and absence of hematologic involvement as a cumulative SLE manifestation (11.8% versus 21.5%; p=0.034) had independent statistically significant associations with PNS-SLE development. PNS-SLE involvement is not uncommon. Its most frequent manifestation is sensory-motor axonal...

  13. Validating a Model of Motivational Factors Influencing Involvement for Parents of Transition-Age Youth with Disabilities

    Science.gov (United States)

    Hirano, Kara A.; Shanley, Lina; Garbacz, S. Andrew; Rowe, Dawn A.; Lindstrom, Lauren; Leve, Leslie D.

    2018-01-01

    Parent involvement is a predictor of postsecondary education and employment outcomes, but rigorous measures of parent involvement for youth with disabilities are lacking. Hirano, Garbacz, Shanley, and Rowe adapted scales based on the Hoover-Dempsey and Sandler model of parent involvement for use with parents of youth with disabilities aged 14 to 23.…

  14. University Physics Students' Use of Models in Explanations of Phenomena Involving Interaction between Metals and Electromagnetic Radiation.

    Science.gov (United States)

    Redfors, Andreas; Ryder, Jim

    2001-01-01

    Examines third year university physics students' use of models when explaining familiar phenomena involving interaction between metals and electromagnetic radiation. Concludes that few students use a single model consistently. (Contains 27 references.) (DDR)

  15. A Computational Model of a Descending Mechanosensory Pathway Involved in Active Tactile Sensing.

    Directory of Open Access Journals (Sweden)

    Jan M Ache

    2015-07-01

    Full Text Available Many animals, including humans, rely on active tactile sensing to explore the environment and negotiate obstacles, especially in the dark. Here, we model a descending neural pathway that mediates short-latency proprioceptive information from a tactile sensor on the head to thoracic neural networks. We studied the nocturnal stick insect Carausius morosus, a model organism for the study of adaptive locomotion, including tactually mediated reaching movements. Like mammals, insects need to move their tactile sensors for probing the environment. Cues about sensor position and motion are therefore crucial for the spatial localization of tactile contacts and the coordination of fast, adaptive motor responses. Our model explains how proprioceptive information about motion and position of the antennae, the main tactile sensors in insects, can be encoded by a single type of mechanosensory afferents. Moreover, it explains how this information is integrated and mediated to thoracic neural networks by a diverse population of descending interneurons (DINs. First, we quantified responses of a DIN population to changes in antennal position, motion and direction of movement. Using principal component (PC analysis, we find that only two PCs account for a large fraction of the variance in the DIN response properties. We call the two-dimensional space spanned by these PCs 'coding-space' because it captures essential features of the entire DIN population. Second, we model the mechanoreceptive input elements of this descending pathway, a population of proprioceptive mechanosensory hairs monitoring deflection of the antennal joints. Finally, we propose a computational framework that can model the response properties of all important DIN types, using the hair field model as its only input. This DIN model is validated by comparison of tuning characteristics, and by mapping the modelled neurons into the two-dimensional coding-space of the real DIN population. This
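    The dimensionality-reduction step described above (finding that two principal components span a 'coding-space' for the DIN population) can be sketched as follows. This is a minimal illustration on synthetic data generated from two latent factors, not the recorded antennal-response dataset; all sizes and names are made up.

```python
import numpy as np

def principal_components(X):
    """Eigenvalues (descending) and eigenvectors of the covariance of X
    (rows = neurons, columns = response properties)."""
    Xc = X - X.mean(axis=0)             # centre each response property
    cov = np.cov(Xc, rowvar=False)      # covariance across properties
    evals, evecs = np.linalg.eigh(cov)  # symmetric eigendecomposition
    order = np.argsort(evals)[::-1]     # sort by explained variance
    return evals[order], evecs[:, order]

def explained_variance_ratio(evals, k):
    """Fraction of total variance captured by the first k components."""
    return evals[:k].sum() / evals.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic "DIN population": 40 neurons, 5 response properties,
    # generated from 2 latent factors plus small noise, so two PCs
    # should capture most of the variance.
    latent = rng.normal(size=(40, 2))
    mixing = rng.normal(size=(2, 5))
    X = latent @ mixing + 0.1 * rng.normal(size=(40, 5))
    evals, _ = principal_components(X)
    print(round(explained_variance_ratio(evals, 2), 3))
```

    Projecting each neuron's response vector onto the first two eigenvectors gives its coordinates in the two-dimensional coding-space.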

  16. An explanatory model of maths achievement: Perceived parental involvement and academic motivation.

    Science.gov (United States)

    Rodríguez, Susana; Piñeiro, Isabel; Gómez-Taibo, Mª L; Regueiro, Bibiana; Estévez, Iris; Valle, Antonio

    2017-05-01

    Although numerous studies have tried to explain performance in maths very few have deeply explored the relationship between different variables and how they jointly explain mathematical performance. With a sample of 897 students in 5th and 6th grade in Primary Education and using structural equation modeling (SEM), this study analyzes how the perception of parents’ beliefs is related to children’s beliefs, their involvement in mathematical tasks and their performance. Perceived parental involvement contributes to the motivation of their children in mathematics. Direct supervision of students’ academic work by parents may increase students’ concerns about the image and rating of their children, but not their academic performance. In fact, maths achievement depends directly and positively on the parents’ expectations and children’s maths self-efficacy and negatively on the parents’ help in tasks and performance goal orientation. Perceived parental involvement contributes to children’s motivation in maths essentially conveying confidence in their abilities and showing interest in their progress and schoolwork.

  17. Using the Health Belief Model to explain patient involvement in patient safety.

    Science.gov (United States)

    Bishop, Andrea C; Baker, G Ross; Boyle, Todd A; MacKinnon, Neil J

    2015-12-01

    With the knowledge that patient safety incidents can significantly impact patients, providers and health-care organizations, greater emphasis on patient involvement as a means to mitigate risks warrants further research. To understand whether patient perceptions of patient safety play a role in patient involvement in factual and challenging patient safety practices and whether the constructs of the Health Belief Model (HBM) help to explain such perceptions. Partial least squares (PLS) analysis of survey data. Four inpatient units located in two tertiary hospitals in Atlantic Canada. Patients discharged from participating units between November 2010 and January 2011. None. A total of 217 of the 587 patient surveys were returned for a final response rate of 37.0%. The PLS analysis revealed relationships between patient perceptions of threat and self-efficacy and the performance of factual and challenging patient safety practices, explaining 46 and 42% of the variance, respectively. The results from this study provide evidence for the constructs and relationships set forth by the HBM. Perceptions of patient safety were shown to influence patient likelihood for engaging in selected patient safety practices. While perceptions of barriers and benefits and threats were found to be a contributing factor to patient involvement in patient safety practices, self-efficacy plays an important role as a mediating factor. Overall, the use of the HBM within patient safety provides for increased understanding of how such perceptions can be influenced to improve patient engagement in promoting safer health care. © 2014 John Wiley & Sons Ltd.

  18. Social Work Involvement in Advance Care Planning: Findings from a Large Survey of Social Workers in Hospice and Palliative Care Settings.

    Science.gov (United States)

    Stein, Gary L; Cagle, John G; Christ, Grace H

    2017-03-01

    Few data are available describing the involvement and activities of social workers in advance care planning (ACP). We sought to provide data about (1) social worker involvement and leadership in ACP conversations with patients and families; and (2) the extent of functions and activities when these discussions occur. We conducted a large web-based survey of social workers employed in hospice, palliative care, and related settings to explore their role, participation, and self-rated competency in facilitating ACP discussions. Respondents were recruited through the Social Work Hospice and Palliative Care Network and the National Hospice and Palliative Care Organization. Descriptive analyses were conducted on the full sample of respondents (N = 641) and a subsample of clinical social workers (N = 456). Responses were analyzed to explore differences in ACP involvement by practice setting. Most clinical social workers (96%) reported that social workers in their department are conducting ACP discussions with patients/families. Majorities also participate in, and lead, ACP discussions (69% and 60%, respectively). Most respondents report that social workers are responsible for educating patients/families about ACP options (80%) and are the team members responsible for documenting ACP (68%). Compared with other settings, oncology and inpatient palliative care social workers were less likely to be responsible for ensuring that patients/families are informed of ACP options and documenting ACP preferences. Social workers are prominently involved in facilitating, leading, and documenting ACP discussions. Policy-makers, administrators, and providers should incorporate the vital contributions of social work professionals in policies and programs supporting ACP.

  19. Modeling the Effect of Climate Change on Large Fire Size, Counts, and Intensities Using the Large Fire Simulator (FSim)

    Science.gov (United States)

    Riley, K. L.; Haas, J. R.; Finney, M.; Abatzoglou, J. T.

    2013-12-01

    Changes in climate can be expected to cause changes in wildfire activity due to a combination of shifts in weather (temperature, precipitation, relative humidity, wind speed and direction) and vegetation. Changes in vegetation could include type conversions, altered forest structure, and shifts in species composition, the effects of which could be mitigated or exacerbated by management activities. Further, changes in suppression response and effectiveness may alter potential wildfire activity, as well as the consequences of wildfire. Feedbacks among these factors are extremely complex and uncertain. The ability to anticipate changes driven by fire weather (largely outside of human control) can lead to development of fire and fuel management strategies aimed at mitigating current and future risk. Therefore, in this study we focus on isolating the effects of climate-induced changes in weather on wildfire activity. Specifically, we investigated the effect of changes in weather on fire activity in the Canadian Rockies ecoregion, which encompasses Glacier National Park and several large wilderness areas to the south. To model the ignition, growth, and containment of wildfires, we used the Large Fire Simulator (FSim), which we coupled with current and projected future climatic conditions. Weather streams were based on data from 14 downscaled Global Circulation Models (GCMs) from the Coupled Model Intercomparison Project Phase 5 (CMIP5) using Representative Concentration Pathways (RCPs) 4.5 and 8.5 for the years 2040-2060. While all GCMs indicate increases in temperature for this area, which would be expected to exacerbate fire activity, precipitation predictions for the summer wildfire season are more variable, ranging from a decrease of approximately 50 mm to an increase of approximately 50 mm. Wind speeds are generally predicted to decrease, which would reduce rates of spread and fire intensity. The net effect of these weather changes on the size, number, and intensity

  20. Using radar altimetry to update a large-scale hydrological model of the Brahmaputra river basin

    DEFF Research Database (Denmark)

    Finsen, F.; Milzow, Christian; Smith, R.

    2014-01-01

    of the Brahmaputra is excellent (17 high-quality virtual stations from ERS-2, 6 from Topex and 10 from Envisat are available for the Brahmaputra). In this study, altimetry data are used to update a large-scale Budyko-type hydrological model of the Brahmaputra river basin in real time. Altimetry measurements...... improved model performance considerably. The Nash-Sutcliffe model efficiency increased from 0.77 to 0.83. Real-time river basin modelling using radar altimetry has the potential to improve the predictive capability of large-scale hydrological models elsewhere on the planet....
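    The Nash-Sutcliffe model efficiency quoted above (rising from 0.77 to 0.83 after assimilating altimetry) is a standard goodness-of-fit score for simulated discharge. A minimal sketch, with hypothetical discharge values rather than Brahmaputra data:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations.
    1.0 is a perfect fit; 0.0 means no better than the observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    sse = np.sum((observed - simulated) ** 2)
    var = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - sse / var

if __name__ == "__main__":
    q_obs = np.array([120.0, 340.0, 560.0, 410.0, 230.0])  # observed discharge
    q_sim = np.array([100.0, 360.0, 500.0, 430.0, 250.0])  # simulated discharge
    print(round(nash_sutcliffe(q_obs, q_sim), 3))
```

    Values can be negative when the simulation is worse than simply predicting the observed mean, which is why the increase from 0.77 to 0.83 represents a substantial gain.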

  1. Analytical model of the statistical properties of contrast of large-scale ionospheric inhomogeneities.

    Science.gov (United States)

    Vsekhsvyatskaya, I. S.; Evstratova, E. A.; Kalinin, Yu. K.; Romanchuk, A. A.

    1989-08-01

    A new analytical model is proposed for the distribution of variations of the relative electron-density contrast of large-scale ionospheric inhomogeneities. The model is characterized by nonzero skewness and kurtosis. It is shown that the model is applicable for inhomogeneities with horizontal dimensions from hundreds to thousands of kilometers.
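    The skewness and excess kurtosis that characterize such a non-Gaussian distribution can be estimated from a sample as the third and fourth standardized central moments. A sketch on a lognormal sample, which stands in for right-skewed contrast variations and is not data from the paper:

```python
import numpy as np

def skewness(x):
    """Sample skewness: third standardized central moment."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return np.mean(d ** 3) / np.mean(d ** 2) ** 1.5

def excess_kurtosis(x):
    """Sample excess kurtosis: fourth standardized moment minus 3
    (zero for a Gaussian)."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return np.mean(d ** 4) / np.mean(d ** 2) ** 2 - 3.0

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # A lognormal sample is right-skewed and heavy-tailed, so both
    # statistics should come out clearly positive.
    contrast = rng.lognormal(mean=0.0, sigma=0.5, size=100000)
    print(round(skewness(contrast), 2), round(excess_kurtosis(contrast), 2))
```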

  2. Large-signal PIN diode model for ultra-fast photodetectors

    DEFF Research Database (Denmark)

    Krozer, Viktor; Fritsche, C

    2005-01-01

    A large-signal model for PIN photodetector is presented, which can be applied to ultra-fast photodetection and THz signal generation. The model takes into account the tunnelling and avalanche breakdown, which is important for avalanche photodiodes. The model is applied to ultra-fast superlattice ...

  3. How uncertainty in socio-economic variables affects large-scale transport model forecasts

    DEFF Research Database (Denmark)

    Manzo, Stefano; Nielsen, Otto Anker; Prato, Carlo Giacomo

    2015-01-01

    time, especially with respect to large-scale transport models. The study described in this paper contributes to fill the gap by investigating the effects of uncertainty in socio-economic variables growth rate projections on large-scale transport model forecasts, using the Danish National Transport......A strategic task assigned to large-scale transport models is to forecast the demand for transport over long periods of time to assess transport projects. However, by modelling complex systems transport models have an inherent uncertainty which increases over time. As a consequence, the longer...... the period forecasted the less reliable is the forecasted model output. Describing uncertainty propagation patterns over time is therefore important in order to provide complete information to the decision makers. Among the existing literature only few studies analyze uncertainty propagation patterns over...

  4. JC virus agnoprotein enhances large T antigen binding to the origin of viral DNA replication: evidence for its involvement in viral DNA replication.

    Science.gov (United States)

    Saribas, A Sami; White, Martyn K; Safak, Mahmut

    2012-11-10

    Agnoprotein is required for the successful completion of the JC virus (JCV) life cycle and was previously shown to interact with JCV large T-antigen (LT-Ag). Here, we further characterized agnoprotein's involvement in viral DNA replication. Agnoprotein enhances the DNA binding activity of LT-Ag to the viral origin (Ori) without directly interacting with DNA. The predicted amphipathic α-helix of agnoprotein plays a major role in this enhancement. All three phenylalanine (Phe) residues of agnoprotein localize to this α-helix and Phe residues in general are known to play critical roles in protein-protein interaction, protein folding and stability. The functional relevance of all Phe residues was investigated by mutagenesis. When all were mutated to alanine (Ala), the mutant virus (F31AF35AF39A) replicated significantly less efficiently than each individual Phe mutant virus alone, indicating the importance of Phe residues for agnoprotein function. Collectively, these studies indicate a close involvement of agnoprotein in viral DNA replication. Copyright © 2012 Elsevier Inc. All rights reserved.

  5. Large-Signal Code TESLA: Improvements in the Implementation and in the Model

    National Research Council Canada - National Science Library

    Chernyavskiy, Igor A; Vlasov, Alexander N; Anderson, Jr., Thomas M; Cooke, Simon J; Levush, Baruch; Nguyen, Khanh T

    2006-01-01

    We describe the latest improvements made in the large-signal code TESLA, which include transformation of the code to a Fortran-90/95 version with dynamical memory allocation and extension of the model...

  6. Large Deviations for Stochastic Models of Two-Dimensional Second Grade Fluids

    Energy Technology Data Exchange (ETDEWEB)

    Zhai, Jianliang, E-mail: zhaijl@ustc.edu.cn [University of Science and Technology of China, School of Mathematical Sciences (China); Zhang, Tusheng, E-mail: Tusheng.Zhang@manchester.ac.uk [University of Manchester, School of Mathematics (United Kingdom)

    2017-06-15

    In this paper, we establish a large deviation principle for stochastic models of incompressible second grade fluids. The weak convergence method introduced by Budhiraja and Dupuis (Probab Math Statist 20:39–61, 2000) plays an important role.
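    For reference, "establishing a large deviation principle" has a standard textbook meaning; in generic notation (not taken from the paper), a family of laws $\{\mu_\varepsilon\}$ satisfies an LDP with good rate function $I$ if, for every Borel set $A$:

```latex
-\inf_{x \in A^{\circ}} I(x)
  \;\le\; \liminf_{\varepsilon \to 0} \varepsilon \log \mu_\varepsilon(A)
  \;\le\; \limsup_{\varepsilon \to 0} \varepsilon \log \mu_\varepsilon(A)
  \;\le\; -\inf_{x \in \overline{A}} I(x),
```

    where $A^{\circ}$ and $\overline{A}$ denote the interior and closure of $A$. The weak convergence method of Budhiraja and Dupuis establishes such bounds via a variational representation of exponential functionals of the driving noise, rather than by direct exponential estimates.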

  7. Various approaches to the modelling of large scale 3-dimensional circulation in the Ocean

    Digital Repository Service at National Institute of Oceanography (India)

    Shaji, C.; Bahulayan, N.; Rao, A.D.; Dube, S.K.

    In this paper, the three different approaches to the modelling of large scale 3-dimensional flow in the ocean such as the diagnostic, semi-diagnostic (adaptation) and the prognostic are discussed in detail. Three-dimensional solutions are obtained...

  8. Analysis and Modelling of Pedestrian Movement Dynamics at Large-scale Events

    NARCIS (Netherlands)

    Duives, D.C.

    2016-01-01

    To what extent can we model the movements of pedestrians who walk across a large-scale event terrain? This dissertation answers this question by analysing the operational movement dynamics of pedestrians in crowds at several large music and sport events in the Netherlands and extracting the key

  9. Measurements in SUGRA Models with Large $\\tan\\beta$ at LHC

    CERN Document Server

    Hinchliffe, Ian

    1999-01-01

    We present an example of a scenario of particle production and decay in supersymmetry models in which the supersymmetry breaking is transmitted to the observable world via gravitational interactions. The case is chosen so that there is a large production of tau leptons in the final state. It is characteristic of large $\tan\beta$ that decays into muons and electrons may be suppressed.

  10. Possible Role of GADD45γ Methylation in Diffuse Large B-Cell Lymphoma: Does It Affect the Progression and Tissue Involvement?

    Directory of Open Access Journals (Sweden)

    İkbal Cansu Barış

    2015-12-01

    Full Text Available INTRODUCTION: Diffuse large B-cell lymphoma (DLBCL) is the most common type of non-Hodgkin lymphoma among adults and is characterized by heterogeneous clinical, immunophenotypic, and genetic features. Different mechanisms deregulating cell cycle and apoptosis play a role in the pathogenesis of DLBCL. Growth arrest DNA damage-inducible 45 (GADD45γ) is an important gene family involved in these mechanisms. The aims of this study are to determine the frequency of GADD45γ methylation, to evaluate the correlation between GADD45γ methylation and protein expression, and to investigate the relation between methylation status and clinicopathologic parameters in DLBCL tissues and reactive lymphoid node tissues from patients with reactive lymphoid hyperplasia. METHODS: Thirty-six tissue samples of DLBCL and 40 nonmalignant reactive lymphoid node tissues were analyzed in this study. Methylation-sensitive high-resolution melting analysis was used for the determination of GADD45γ methylation status. The GADD45γ protein expression was determined by immunohistochemistry. RESULTS: GADD45γ methylation was frequent (50.0%) in DLBCL. It was also significantly higher in advanced-stage tumors compared with early-stage (p=0.041). In contrast, unmethylated GADD45γ was associated with nodal involvement as the primary anatomical site (p=0.040). DISCUSSION AND CONCLUSION: The results of this study show that, in contrast to solid tumors, the frequency of GADD45γ methylation is higher and this epigenetic alteration of GADD45γ may be associated with progression in DLBCL. In addition, nodal involvement is more likely to be present in patients with unmethylated GADD45γ.

  11. Distinct regions of the large extracellular domain of tetraspanin CD9 are involved in the control of human multinucleated giant cell formation.

    Directory of Open Access Journals (Sweden)

    Rachel S Hulme

    Full Text Available Multinucleated giant cells, formed by the fusion of monocytes/macrophages, are features of chronic granulomatous inflammation associated with infections or the persistent presence of foreign material. The tetraspanins CD9 and CD81 regulate multinucleated giant cell formation: soluble recombinant proteins corresponding to the large extracellular domain (EC2) of human but not mouse CD9 can inhibit multinucleated giant cell formation, whereas human CD81 EC2 can antagonise this effect. Tetraspanin EC2 domains are all likely to have a conserved three-helix sub-domain and a much less well-conserved or hypervariable sub-domain formed by short helices and interconnecting loops stabilised by two or more disulfide bridges. Using CD9/CD81 EC2 chimeras and point mutants we have mapped the specific regions of the CD9 EC2 involved in multinucleated giant cell formation. These were primarily located in two helices, one in each sub-domain. The cysteine residues involved in the formation of the disulfide bridges in CD9 EC2 were all essential for inhibitory activity but a conserved glycine residue in the tetraspanin-defining 'CCG' motif was not. A tyrosine residue in one of the active regions that is not conserved between human and mouse CD9 EC2, predicted to be solvent-exposed, was found to be only peripherally involved in this activity. We have defined two spatially-distinct sites on the CD9 EC2 that are required for inhibitory activity. Agents that target these sites could have therapeutic applications in diseases in which multinucleated giant cells play a pathogenic role.

  12. DNA databanks and consent: a suggested policy option involving an authorization model.

    Science.gov (United States)

    Caulfield, Timothy; Upshur, Ross E G; Daar, Abdallah

    2003-01-03

    Genetic databases are becoming increasingly common as a means of determining the relationship between lifestyle, environmental exposures and genetic diseases. These databases rely on large numbers of research subjects contributing their genetic material to successfully explore the genetic basis of disease. However, as all possible research questions that can be posed of the data are unknown, an unresolved ethical issue is the status of informed consent for future research uses of genetic material. In this paper, we discuss the difficulties of an informed consent model for future ineffable uses of genetic data. We argue that variations on consent, such as presumed consent, blanket consent or constructed consent fail to meet the standards required by current informed consent doctrine and are distortions of the original concept. In this paper, we propose the concept of an authorization model whereby participants in genetic data banks are able to exercise a certain amount of control over future uses of genetic data. We argue this preserves the autonomy of individuals at the same time as allowing them to give permission and discretion to researchers for certain types of research. The authorization model represents a step forward in the debate about informed consent in genetic databases. The move towards an authorization model would require changes in the regulatory and legislative environments. Additionally, empirical support of the utility and acceptability of authorization is required.

  13. DNA databanks and consent: A suggested policy option involving an authorization model

    Directory of Open Access Journals (Sweden)

    Daar Abdallah

    2003-01-01

    Full Text Available Abstract Background Genetic databases are becoming increasingly common as a means of determining the relationship between lifestyle, environmental exposures and genetic diseases. These databases rely on large numbers of research subjects contributing their genetic material to successfully explore the genetic basis of disease. However, as all possible research questions that can be posed of the data are unknown, an unresolved ethical issue is the status of informed consent for future research uses of genetic material. Discussion In this paper, we discuss the difficulties of an informed consent model for future ineffable uses of genetic data. We argue that variations on consent, such as presumed consent, blanket consent or constructed consent fail to meet the standards required by current informed consent doctrine and are distortions of the original concept. In this paper, we propose the concept of an authorization model whereby participants in genetic data banks are able to exercise a certain amount of control over future uses of genetic data. We argue this preserves the autonomy of individuals at the same time as allowing them to give permission and discretion to researchers for certain types of research. Summary The authorization model represents a step forward in the debate about informed consent in genetic databases. The move towards an authorization model would require changes in the regulatory and legislative environments. Additionally, empirical support of the utility and acceptability of authorization is required.

  14. Monte Carlo model of light transport in scintillating fibers and large scintillators

    International Nuclear Information System (INIS)

    Chakarova, R.

    1995-01-01

    A Monte Carlo model is developed which simulates the light transport in a scintillator surrounded by a transparent layer with different surface properties. The model is applied to analyse the light collection properties of scintillating fibers and a large scintillator wrapped in aluminium foil. The influence of the fiber interface characteristics on the light yield is investigated in detail. Light output results as well as time distributions are obtained for the large scintillator case. 15 refs, 16 figs

  15. Regional modeling of large wildfires under current and potential future climates in Colorado and Wyoming, USA

    Science.gov (United States)

    West, Amanda; Kumar, Sunil; Jarnevich, Catherine S.

    2016-01-01

    Regional analysis of large wildfire potential given climate change scenarios is crucial to understanding areas most at risk in the future, yet wildfire models are not often developed and tested at this spatial scale. We fit three historical climate suitability models for large wildfires (i.e. ≥ 400 ha) in Colorado and Wyoming using topography and decadal climate averages corresponding to wildfire occurrence at the same temporal scale. The historical models classified points of known large wildfire occurrence with high accuracies. Using a novel approach in wildfire modeling, we applied the historical models to independent climate and wildfire datasets, and the resulting sensitivities were 0.75, 0.81, and 0.83 for Maxent, Generalized Linear, and Multivariate Adaptive Regression Splines, respectively. We projected the historical models into future climate space using data from 15 global circulation models and two representative concentration pathway scenarios. Maps from these geospatial analyses can be used to evaluate the changing spatial distribution of climate suitability of large wildfires in these states. April relative humidity was the most important covariate in all models, providing insight to the climate space of large wildfires in this region. These methods incorporate monthly and seasonal climate averages at a spatial resolution relevant to land management (i.e. 1 km2) and provide a tool that can be modified for other regions of North America, or adapted for other parts of the world.
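    The sensitivities reported above (0.75, 0.81, 0.83) are true-positive rates on independent occurrence data: the fraction of known large-fire points that the model classifies as climatically suitable. A minimal sketch with hypothetical labels, not the study's evaluation data:

```python
def sensitivity(observed, predicted):
    """True-positive rate: fraction of observed large-fire locations
    that the suitability model also classifies as suitable (1 = yes)."""
    true_pos = sum(1 for o, p in zip(observed, predicted) if o and p)
    actual_pos = sum(1 for o in observed if o)
    return true_pos / actual_pos

if __name__ == "__main__":
    # Hypothetical evaluation points: 1 = large wildfire occurred /
    # model predicts climatically suitable, 0 = not.
    observed  = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]
    predicted = [1, 1, 1, 1, 1, 1, 0, 0, 1, 0]
    print(sensitivity(observed, predicted))  # prints 0.75
```

    Note that sensitivity ignores false alarms on non-fire points; a full evaluation would also report specificity or an ROC-type summary.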

  16. Cavitary pulmonary involvement of diffuse large B-cell lymphoma transformed from extranodal marginal zone B-cell lymphoma of MALT type.

    Science.gov (United States)

    Yamane, Hiromichi; Ohsawa, Masahiro; Shiote, Yasuhiro; Umemura, Shigeki; Suwaki, Toshimitsu; Shirakawa, Atsuko; Kamei, Haruhito; Takigawa, Nagio; Kiura, Katsuyuki

    2011-12-01

    We describe a case of pulmonary diffuse large B-cell lymphoma (DLBCL), which was thought to arise from extranodal marginal zone lymphoma of mucosa-associated lymphoid tissue (MALT lymphoma). A 68-year-old woman presented with a 2-month history of cough and bloody sputum. The chest X-ray and computed tomography revealed a mass with cavitation in the right lower lobe. Transbronchial biopsy specimens revealed a granulomatous infiltration without malignant cells. However, diagnosis of MALT lymphoma was established from gastric biopsy specimen. Subsequently, a right lower lobectomy was performed because of hemoptysis. Examination of the resected specimen revealed a diffuse large B-cell lymphoma, which was considered to have transformed from MALT lymphoma, because both lung and stomach lesions had the chromosomal translocation t(11;18)(q21;q21) in common. In addition, there were no nodules, masses, alveolar or interstitial infiltrates in the lung fields, which are usually observed in the case of marginal zone B-cell lymphoma of bronchial mucosa-associated lymphoid tissue. These findings indicate that involvement of DLBCL have to be considered in patients with MALT lymphoma and cavitary lesion of the lung.

  17. Large-watershed flood simulation and forecasting based on different-resolution distributed hydrological model

    Science.gov (United States)

    Li, J.

    2017-12-01

    Large-watershed flood simulation and forecasting is an important and challenging application of distributed hydrological models; challenges include the effect of the model's spatial resolution on model performance and accuracy. To investigate the resolution effect, the distributed hydrological model Liuxihe was built at several resolutions (1000m*1000m, 600m*600m, 500m*500m, 400m*400m and 200m*200m) to find which resolution is best for large-watershed flood simulation and forecasting. This study sets up a physically based distributed hydrological model for flood forecasting of the Liujiang River basin in south China. Terrain data (digital elevation model, DEM), soil type and land use type were downloaded freely from the web. The model parameters were optimized with an improved Particle Swarm Optimization (PSO) algorithm; parameter optimization reduces the uncertainty that exists when model parameters are derived physically. Resolutions from 200m*200m to 1000m*1000m were used to model the Liujiang River basin flood with the Liuxihe model. The best spatial resolution for flood simulation and forecasting was 200m*200m, and model performance and accuracy worsened as the resolution coarsened. At 1000m*1000m the flood simulation and forecasting results were the worst, and the river channel network derived at this resolution differed from the actual one. To keep the model at an acceptable performance, a minimum spatial resolution is needed: the suggested threshold resolution for modeling the Liujiang River basin flood is a 500m*500m grid cell, but a 200m*200m grid cell is recommended in this study to keep the model at its best performance.
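    The calibration step above relies on particle swarm optimization. Below is a generic global-best PSO sketch (not the improved variant used in the study), minimizing a toy quadratic misfit that stands in for the real model-error function; all parameter names and values are illustrative:

```python
import numpy as np

def pso_minimize(objective, bounds, n_particles=20, n_iter=200, seed=0):
    """Minimal particle swarm optimizer (global-best variant).
    bounds: (low, high) arrays defining the search box per dimension."""
    rng = np.random.default_rng(seed)
    low, high = (np.asarray(b, dtype=float) for b in bounds)
    x = rng.uniform(low, high, size=(n_particles, low.size))  # positions
    v = np.zeros_like(x)                                      # velocities
    pbest = x.copy()                                          # personal bests
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()                  # global best
    w, c1, c2 = 0.7, 1.5, 1.5                # inertia, cognitive/social pulls
    for _ in range(n_iter):
        r1, r2 = rng.random((2, *x.shape))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, low, high)        # keep particles in bounds
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

if __name__ == "__main__":
    # Toy "calibration": recover the parameter vector minimizing a
    # quadratic misfit, standing in for a model-error function.
    target = np.array([2.0, -1.0, 0.5])
    misfit = lambda p: float(np.sum((p - target) ** 2))
    best, best_val = pso_minimize(misfit, (np.full(3, -5.0), np.full(3, 5.0)))
    print(np.round(best, 2), round(best_val, 4))
```

    In a hydrological calibration, `objective` would run the model and return an error measure against observed discharge, which is far more expensive per evaluation than this toy misfit.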

  18. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    Directory of Open Access Journals (Sweden)

    A. F. Van Loon

    2012-11-01

    Full Text Available Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is how well do large-scale models simulate the propagation from meteorological to hydrological drought? To answer this question, we evaluated the simulation of drought propagation in an ensemble mean of ten large-scale models, both land-surface models and global hydrological models, that participated in the model intercomparison project of WATCH (WaterMIP). For a selection of case study areas, we studied drought characteristics (number of droughts, duration, severity), drought propagation features (pooling, attenuation, lag, lengthening), and hydrological drought typology (classical rainfall deficit drought, rain-to-snow-season drought, wet-to-dry-season drought, cold snow season drought, warm snow season drought, composite drought).

    Drought characteristics simulated by large-scale models clearly reflected drought propagation; i.e. drought events became fewer and longer when moving through the hydrological cycle. However, more differentiation was expected between fast and slowly responding systems, with slowly responding systems having fewer and longer droughts in runoff than fast responding systems. This was not found using large-scale models. Drought propagation features were poorly reproduced by the large-scale models, because runoff reacted immediately to precipitation in all case study areas. This fast reaction to precipitation, even in cold climates in winter and in semi-arid climates in summer, also greatly influenced the hydrological drought typology as identified by the large-scale models. In general, the large-scale models had the correct representation of drought types, but the percentages of occurrence had some important mismatches, e.g. an overestimation of classical rainfall deficit droughts, and an

  19. Cadmium Handling, Toxicity and Molecular Targets Involved during Pregnancy: Lessons from Experimental Models

    Directory of Open Access Journals (Sweden)

    Tania Jacobo-Estrada

    2017-07-01

    Full Text Available Even decades after the discovery of cadmium (Cd) toxicity, research on this heavy metal is still a hot topic in the scientific literature: as we wrote this review, more than 1440 scientific articles had been published and listed by the PubMed.gov website during 2017. Cadmium is one of the most common and harmful heavy metals present in our environment. Since pregnancy is a very particular physiological condition that could impact and modify essential pathways involved in the handling of Cd, prenatal life is a critical stage for exposure to this non-essential element. To give the reader an overview of the possible mechanisms involved in the multiple organ toxic effects in fetuses after exposure to Cd during pregnancy, we compiled some of the most relevant studies performed in experimental models and summarized the advances in this field, such as Cd distribution and the factors that could alter it (diet, binding proteins and membrane transporters), and Cd-induced toxicity in dams (preeclampsia, fertility, kidney injury, alteration in essential element homeostasis and bone mineralization), in the placenta and in the fetus (teratogenicity, central nervous system, liver and kidney).

  20. De novo characterization of the spleen transcriptome of the large yellow croaker (Pseudosciaena crocea and analysis of the immune relevant genes and pathways involved in the antiviral response.

    Directory of Open Access Journals (Sweden)

    Yinnan Mu

    Full Text Available The large yellow croaker (Pseudosciaena crocea) is an economically important marine fish in China. To understand the molecular basis for antiviral defense in this species, we used Illumina paired-end sequencing to characterize the spleen transcriptome of polyriboinosinic:polyribocytidylic acid [poly(I:C)]-induced large yellow croakers. The library produced 56,355,728 reads, which were assembled into 108,237 contigs. As a result, 15,192 unigenes were found in this transcriptome. Gene ontology analysis showed that 4,759 genes were involved in three major functional categories: biological process, cellular component, and molecular function. We further ascertained that numerous consensus sequences were homologous to known immune-relevant genes. Kyoto Encyclopedia of Genes and Genomes orthology mapping annotated 5,389 unigenes and identified numerous immune-relevant pathways. These immune-relevant genes and pathways revealed major antiviral immunity effectors, including but not limited to: pattern recognition receptors, adaptors and signal transducers, the interferons and interferon-stimulated genes, inflammatory cytokines and receptors, complement components, and B-cell and T-cell antigen activation molecules. Moreover, partial genes of the Toll-like receptor signaling pathway, RIG-I-like receptor signaling pathway, Janus kinase-Signal Transducer and Activator of Transcription (JAK-STAT) signaling pathway, and T-cell receptor (TCR) signaling pathway were found to be changed after poly(I:C) induction by real-time polymerase chain reaction (PCR) analysis, suggesting that these signaling pathways may be regulated by poly(I:C), a viral mimic. Overall, the antivirus-related genes and signaling pathways that were identified in response to poly(I:C) challenge provide valuable leads for further investigation of the antiviral defense mechanism in the large yellow croaker.

  1. De novo characterization of the spleen transcriptome of the large yellow croaker (Pseudosciaena crocea) and analysis of the immune relevant genes and pathways involved in the antiviral response

    KAUST Repository

    Mu, Yinnan

    2014-05-12

    The large yellow croaker (Pseudosciaena crocea) is an economically important marine fish in China. To understand the molecular basis for antiviral defense in this species, we used Illumina paired-end sequencing to characterize the spleen transcriptome of polyriboinosinic:polyribocytidylic acid [poly(I:C)]-induced large yellow croakers. The library produced 56,355,728 reads, which were assembled into 108,237 contigs. As a result, 15,192 unigenes were found in this transcriptome. Gene ontology analysis showed that 4,759 genes were involved in three major functional categories: biological process, cellular component, and molecular function. We further ascertained that numerous consensus sequences were homologous to known immune-relevant genes. Kyoto Encyclopedia of Genes and Genomes orthology mapping annotated 5,389 unigenes and identified numerous immune-relevant pathways. These immune-relevant genes and pathways revealed major antiviral immunity effectors, including but not limited to: pattern recognition receptors, adaptors and signal transducers, the interferons and interferon-stimulated genes, inflammatory cytokines and receptors, complement components, and B-cell and T-cell antigen activation molecules. Moreover, partial genes of the Toll-like receptor signaling pathway, RIG-I-like receptor signaling pathway, Janus kinase-Signal Transducer and Activator of Transcription (JAK-STAT) signaling pathway, and T-cell receptor (TCR) signaling pathway were found to be changed after poly(I:C) induction by real-time polymerase chain reaction (PCR) analysis, suggesting that these signaling pathways may be regulated by poly(I:C), a viral mimic. Overall, the antivirus-related genes and signaling pathways that were identified in response to poly(I:C) challenge provide valuable leads for further investigation of the antiviral defense mechanism in the large yellow croaker. © 2014 Mu et al.

  2. Large animal and primate models of spinal cord injury for the testing of novel therapies.

    Science.gov (United States)

    Kwon, Brian K; Streijger, Femke; Hill, Caitlin E; Anderson, Aileen J; Bacon, Mark; Beattie, Michael S; Blesch, Armin; Bradbury, Elizabeth J; Brown, Arthur; Bresnahan, Jacqueline C; Case, Casey C; Colburn, Raymond W; David, Samuel; Fawcett, James W; Ferguson, Adam R; Fischer, Itzhak; Floyd, Candace L; Gensel, John C; Houle, John D; Jakeman, Lyn B; Jeffery, Nick D; Jones, Linda Ann Truett; Kleitman, Naomi; Kocsis, Jeffery; Lu, Paul; Magnuson, David S K; Marsala, Martin; Moore, Simon W; Mothe, Andrea J; Oudega, Martin; Plant, Giles W; Rabchevsky, Alexander Sasha; Schwab, Jan M; Silver, Jerry; Steward, Oswald; Xu, Xiao-Ming; Guest, James D; Tetzlaff, Wolfram

    2015-07-01

    Large animal and primate models of spinal cord injury (SCI) are being increasingly utilized for the testing of novel therapies. While these represent intermediary animal species between rodents and humans and offer the opportunity to pose unique research questions prior to clinical trials, the role that such large animal and primate models should play in the translational pipeline is unclear. In this initiative we engaged members of the SCI research community in a questionnaire and round-table focus group discussion around the use of such models. Forty-one SCI researchers from academia, industry, and granting agencies were asked to complete a questionnaire about their opinion regarding the use of large animal and primate models in the context of testing novel therapeutics. The questions centered around how large animal and primate models of SCI would be best utilized in the spectrum of preclinical testing, and how much testing in rodent models was warranted before employing these models. Further questions were posed at a focus group meeting attended by the respondents. The group generally felt that large animal and primate models of SCI serve a potentially useful role in the translational pipeline for novel therapies, and that the rational use of these models would depend on the type of therapy and specific research question being addressed. While testing within these models should not be mandatory, the detection of beneficial effects using these models lends additional support for translating a therapy to humans. These models provide an opportunity to evaluate and refine surgical procedures prior to use in humans, and to assess safety and bio-distribution in a spinal cord more similar in size and anatomy to that of humans. Our results reveal that while many feel that these models are valuable in the testing of novel therapies, important questions remain unanswered about how they should be used and how data derived from them should be interpreted. 
Copyright © 2015 Elsevier

  3. Processes and parameters involved in modeling radionuclide transport from bedded salt repositories. Final report. Technical memorandum

    International Nuclear Information System (INIS)

    Evenson, D.E.; Prickett, T.A.; Showalter, P.A.

    1979-07-01

    The parameters necessary to model radionuclide transport in salt beds are identified and described. A proposed plan for disposal of the radioactive wastes generated by nuclear power plants is to store waste canisters in repository sites contained in stable salt formations approximately 600 meters below the ground surface. Among the principal radioactive wastes contained in these canisters will be radioactive isotopes of neptunium, americium, uranium, and plutonium, along with many highly radioactive fission products. A concern with this form of waste disposal is the possibility of ground-water flow occurring in the salt beds and endangering water supplies and the public health. Specifically, the research investigated the processes involved in the movement of radioactive wastes from the repository site by groundwater flow. Since the radioactive waste canisters also generate heat, temperature is an important factor. Among the processes affecting movement of radioactive wastes from a repository site in a salt bed are thermal conduction, groundwater movement, ion exchange, radioactive decay, dissolution and precipitation of salt, dispersion and diffusion, adsorption, and thermomigration. In addition, structural changes in the salt beds as a result of temperature changes are important. Based upon the half-lives of the radioactive wastes, the period of concern is on the order of a million years. As a result, major geologic phenomena that could affect both the salt bed and groundwater flow in the salt beds were considered. These phenomena include items such as volcanism, faulting, erosion, glaciation, and the impact of meteorites. CDM reviewed all of the critical processes involved in regional groundwater movement of radioactive wastes and identified and described the parameters that must be included to mathematically model their behavior. In addition, CDM briefly reviewed available techniques to measure these parameters.

  4. Predicting large wildfires across western North America by modeling seasonal variation in soil water balance.

    Science.gov (United States)

    Waring, Richard H; Coops, Nicholas C

    A lengthening of the fire season, coupled with higher temperatures, increases the probability of fires throughout much of western North America. Although regional variation in the frequency of fires is well established, attempts to predict the occurrence of fire at fine spatial resolution have proven difficult. We hypothesized that predictions could be improved if soil water reserves were coupled more directly to maximum leaf area index (LAImax) and stomatal behavior. In an earlier publication, we used LAImax and a process-based forest growth model to derive and map the maximum available soil water storage capacity (ASWmax) of forested lands in western North America at 1 km resolution. To map large fires, we used data products acquired from NASA's Moderate Resolution Imaging Spectroradiometers (MODIS) over the period 2000-2009. To establish general relationships that incorporate the major biophysical processes that control evaporation and transpiration as well as the flammability of live and dead trees, we constructed a decision tree model (DT). We analyzed seasonal variation in the relative availability of soil water (fASW) for the years 2001, 2004, and 2007, representing, respectively, low, moderate, and high rankings of areas burned. For these selected years, the DT predicted where forest fires >1 km occurred and did not occur at ~100,000 randomly located pixels with an average accuracy of 69%. Extended over the decade, the area predicted burnt varied by as much as 50%. The DT identified four seasonal combinations, most of which included exhaustion of ASW during the summer as critical; two combinations involving antecedent conditions the previous spring or fall accounted for 86% of the predicted fires. The approach introduced in this paper can help identify forested areas where management efforts to reduce fire hazards might prove most beneficial.
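    The record classifies burned versus unburned pixels with a decision tree driven by seasonal soil-water availability. The sketch below illustrates that setup on synthetic data; the three seasonal fASW predictors and the fire-occurrence rule are invented stand-ins for the paper's actual inputs, not a reproduction of its model.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 5000
# Hypothetical seasonal predictors: relative available soil water (fASW)
# in spring, summer, and fall for each pixel.
X = rng.random((n, 3))
# Synthetic rule standing in for the paper's finding: fire occurs where
# summer soil water is exhausted and the antecedent spring was also dry.
y = ((X[:, 1] < 0.2) & (X[:, 0] < 0.5)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
dt = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
accuracy = dt.score(X_te, y_te)   # held-out classification accuracy
```

    Because the synthetic rule is axis-aligned, a shallow tree recovers it almost exactly; real fire-occurrence data are far noisier, which is why the paper reports ~69% accuracy.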

  5. Modeling and Control of a Large Nuclear Reactor A Three-Time-Scale Approach

    CERN Document Server

    Shimjith, S R; Bandyopadhyay, B

    2013-01-01

    Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of several complex dynamic phenomena existing in a reactor. Quite often, the models developed would be of prohibitively large order, non-linear and of complex structure not readily amenable to control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller order model in standard state space form, thus overcoming these difficulties. It further brings in innovative methods for controller design for systems exhibiting multi-time-scale property,...

  6. Recent Advances in Detailed Chemical Kinetic Models for Large Hydrocarbon and Biodiesel Transportation Fuels

    Energy Technology Data Exchange (ETDEWEB)

    Westbrook, C K; Pitz, W J; Curran, H J; Herbinet, O; Mehl, M

    2009-03-30

    n-Hexadecane and 2,2,4,4,6,8,8-heptamethylnonane represent the primary reference fuels for diesel that are used to determine cetane number, a measure of the ignition property of diesel fuel. With the development of chemical kinetics models for these two primary reference fuels for diesel, a new capability is now available to model diesel fuel ignition. Also, we have developed chemical kinetic models for a whole series of large n-alkanes and a large iso-alkane to represent these chemical classes in fuel surrogates for conventional and future fuels. Methyl decanoate and methyl stearate are large methyl esters that are closely related to biodiesel fuels, and kinetic models for these molecules have also been developed. These chemical kinetic models are used to predict the effect of the fuel molecule size and structure on ignition characteristics under conditions found in internal combustion engines.

  7. Optimization of large-scale heterogeneous system-of-systems models.

    Energy Technology Data Exchange (ETDEWEB)

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Lee, Herbert K. H. (University of California, Santa Cruz, Santa Cruz, CA); Hart, William Eugene; Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Woodruff, David L. (University of California, Davis, Davis, CA)

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  8. Water and salt balance modelling to predict the effects of land-use changes in forested catchments. 3. The large catchment model

    Science.gov (United States)

    Sivapalan, Murugesu; Viney, Neil R.; Jeevaraj, Charles G.

    1996-03-01

    This paper presents an application of a long-term, large catchment-scale, water balance model developed to predict the effects of forest clearing in the south-west of Western Australia. The conceptual model simulates the basic daily water balance fluxes in forested catchments before and after clearing. The large catchment is divided into a number of sub-catchments (1-5 km2 in area), which are taken as the fundamental building blocks of the large catchment model. The responses of the individual subcatchments to rainfall and pan evaporation are conceptualized in terms of three inter-dependent subsurface stores A, B and F, which are considered to represent the moisture states of the subcatchments. Details of the subcatchment-scale water balance model have been presented earlier in Part 1 of this series of papers. The response of any subcatchment is a function of its local moisture state, as measured by the local values of the stores. The variations of the initial values of the stores among the subcatchments are described in the large catchment model through simple, linear equations involving a number of similarity indices representing topography, mean annual rainfall and level of forest clearing. The model is applied to the Conjurunup catchment, a medium-sized (39.6 km2) catchment in the south-west of Western Australia. The catchment has been heterogeneously (in space and time) cleared for bauxite mining and subsequently rehabilitated. For this application, the catchment is divided into 11 subcatchments. The model parameters are estimated by calibration, by comparing observed and predicted runoff values, over an 18-year period, for the large catchment and two of the subcatchments. Excellent fits are obtained.

  9. Development of Numerical Codes for Modeling Electromagnetic Behavior at High Frequencies Near Large Objects

    Science.gov (United States)

    Joshi, R. P.; Deshpande, M. D. (Technical Monitor)

    2003-01-01

    A study into the problem of determining electromagnetic solutions at high frequencies for problems involving complex geometries, large sizes and multiple sources (e.g. antennas) has been initiated. Typical applications include the behavior of antennas (and radiators) installed on complex conducting structures (e.g. ships, aircraft, etc.), where the strong interaction between the antennas, the radiation patterns, and electromagnetic signals is of great interest for electromagnetic compatibility control. This includes the overall performance evaluation and control of all on-board radiating systems, electromagnetic interference, and personnel radiation hazards. Electromagnetic computational capability exists at NASA LaRC, and many of the codes developed are based on the Moment Method (MM). However, the MM is computationally intensive, and this places a limit on the size of objects and structures that can be modeled. Here, two approaches are proposed: (i) a current-based hybrid scheme that combines the MM with Physical Optics (PO), and (ii) an Alternating Direction Implicit-Finite Difference Time Domain (ADI-FDTD) method. The essence of a hybrid technique is to split the overall scattering surface(s) into two regions: (a) a MM zone (MMZ), which can be used over any part of the given geometry but is most essential over irregular and "non-smooth" geometries, and (b) a PO sub-region (POSR). Currents induced on the scattering and reflecting surfaces can then be computed in two ways depending on whether the region belongs to the MMZ or is part of the POSR. For the MMZ, the current calculations proceed in terms of basis functions with undetermined coefficients (as in the usual MM method), and the answer is obtained by solving a system of linear equations. Over the POSR, conduction is obtained as a superposition of two contributions: (i) currents due to the incident magnetic field, and (ii) currents produced by mutual induction from conduction within the MMZ. 
This effectively leads to

  10. A Regression Algorithm for Model Reduction of Large-Scale Multi-Dimensional Problems

    Science.gov (United States)

    Rasekh, Ehsan

    2011-11-01

    Model reduction is an approach for fast and cost-efficient modelling of large-scale systems governed by Ordinary Differential Equations (ODEs). Multi-dimensional model reduction has been suggested for reduction of the linear systems simultaneously with respect to frequency and any other parameter of interest. Multi-dimensional model reduction is also used to reduce the weakly nonlinear systems based on Volterra theory. Multiple dimensions degrade the efficiency of reduction by increasing the size of the projection matrix. In this paper a new methodology is proposed to efficiently build the reduced model based on regression analysis. A numerical example confirms the validity of the proposed regression algorithm for model reduction.
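    The abstract does not reproduce the paper's regression algorithm, so as background, here is a minimal sketch of the general projection idea it builds on: a large linear ODE system is projected onto a low-dimensional basis to obtain a small, fast surrogate. The basis here comes from an SVD of trajectory snapshots (a POD basis), which is an assumption of this sketch, not the paper's method; the system matrices are random illustrative data.

```python
import numpy as np

# Full-order stable linear system  x' = A x + b  with n = 200 states.
rng = np.random.default_rng(1)
n, r, dt, steps = 200, 10, 1e-3, 2000
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)

def simulate(A, b, x0, dt, steps):
    """Explicit-Euler trajectory under a constant unit input."""
    xs, x = [x0], x0
    for _ in range(steps):
        x = x + dt * (A @ x + b)
        xs.append(x)
    return np.array(xs)

X = simulate(A, b, np.zeros(n), dt, steps)             # snapshot matrix
V = np.linalg.svd(X.T, full_matrices=False)[0][:, :r]  # POD projection basis
Ar, br = V.T @ A @ V, V.T @ b                          # reduced r x r operators
Xr = simulate(Ar, br, np.zeros(r), dt, steps) @ V.T    # lift back to full space
err = np.linalg.norm(X - Xr) / np.linalg.norm(X)       # relative trajectory error
```

    The reduced system has 10 states instead of 200 yet reproduces the trajectory closely; adding extra parameter or frequency dimensions enlarges the projection matrix, which is the efficiency problem the paper's regression approach targets.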

  11. A dynamic programming approach for quickly estimating large network-based MEV models

    DEFF Research Database (Denmark)

    Mai, Tien; Frejinger, Emma; Fosgerau, Mogens

    2017-01-01

    We propose a way to estimate a family of static Multivariate Extreme Value (MEV) models with large choice sets in short computational time. The resulting model is also straightforward and fast to use for prediction. Following Daly and Bierlaire (2006), the correlation structure is defined by a rooted, directed graph where each node without successor is an alternative. We formulate a family of MEV models as dynamic discrete choice models on graphs of correlation structures and show that the dynamic models are consistent with MEV theory and generalize the network MEV model (Daly and Bierlaire...

  12. The Hamburg large scale geostrophic ocean general circulation model. Cycle 1

    International Nuclear Information System (INIS)

    Maier-Reimer, E.; Mikolajewicz, U.

    1992-02-01

    The rationale for the Large Scale Geostrophic ocean circulation model (LSG-OGCM) is based on the observations that for a large scale ocean circulation model designed for climate studies, the relevant characteristic spatial scales are large compared with the internal Rossby radius throughout most of the ocean, while the characteristic time scales are large compared with the periods of gravity modes and barotropic Rossby wave modes. In the present version of the model, the fast modes have been filtered out by a conventional technique of integrating the full primitive equations, including all terms except the nonlinear advection of momentum, by an implicit time integration method. The free surface is also treated prognostically, without invoking a rigid lid approximation. The numerical scheme is unconditionally stable and has the additional advantage that it can be applied uniformly to the entire globe, including the equatorial and coastal current regions. (orig.)
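    The LSG model's key numerical trick is implicit time integration, which filters fast gravity and barotropic modes and allows long time steps. A toy two-mode linear system (coefficients purely illustrative, not taken from the LSG model) shows why: backward Euler stays stable at a step far beyond the explicit stability limit of the fast mode.

```python
import numpy as np

# Toy system x' = L x with one fast mode (rate -100, gravity-wave-like)
# and one slow mode (rate -0.01, climate-relevant).
L = np.diag([-100.0, -0.01])
x0 = np.array([1.0, 1.0])
dt = 0.1   # far beyond the explicit stability limit 2/100 for the fast mode

def implicit_euler(L, x0, dt, steps):
    """Backward Euler: solve (I - dt L) x_{k+1} = x_k at each step."""
    M = np.linalg.inv(np.eye(len(x0)) - dt * L)
    x = x0
    for _ in range(steps):
        x = M @ x
    return x

x_impl = implicit_euler(L, x0, dt, 100)   # fast mode damped, slow mode accurate

# Explicit Euler with the same step blows up on the fast mode:
x_expl = x0.copy()
for _ in range(100):
    x_expl = x_expl + dt * (L @ x_expl)
```

    The implicit solution damps the fast mode to zero while tracking the slow mode (exact value exp(-0.01*10) = 0.905), whereas the explicit iterate on the fast mode grows as (-9)^k and diverges.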

  13. Involvement of the endocannabinoid system in phencyclidine-induced cognitive deficits modelling schizophrenia.

    Science.gov (United States)

    Vigano, Daniela; Guidali, Cinzia; Petrosino, Stefania; Realini, Natalia; Rubino, Tiziana; Di Marzo, Vincenzo; Parolaro, Daniela

    2009-06-01

    Recent advances in the neurobiology of cannabinoids have renewed interest in the association between cannabis and schizophrenia. Our studies showed that chronic-intermittent phencyclidine (PCP) treatment of rats, an animal model of schizophrenia-like cognitive deficit, impaired recognition memory in the novel object recognition (NOR) test and induced alterations in CB1 receptor functionality and in endocannabinoid levels, mainly in the prefrontal cortex. In this region, we observed a significant reduction in GTPgammaS binding (-41%) accompanied by an increase in the levels of the endocannabinoid 2-AG (+38%) in PCP-treated rats, suggesting that a maladaptation of the endocannabinoid system might contribute to the glutamatergic-related cognitive symptoms encountered in schizophrenia disorders. Moreover, we evaluated the ability of the main psychoactive ingredient of marijuana, Delta9-tetrahydrocannabinol (THC), to modulate the cognitive dysfunctions and neuroadaptations in the endocannabinoid system induced by PCP. Chronic THC co-treatment worsened PCP-induced cognitive impairment, without inducing any effect per se, and in parallel it provoked a severe reduction in the levels of the other endocannabinoid, AEA, vs. either vehicle (-73%) or PCP (-64%), whereas it reversed the PCP-induced increase in 2-AG levels. These results point to the involvement of the endocannabinoid system in this pharmacological model of cognitive dysfunction, with a potentially different role of AEA and 2-AG in schizophrenia-like behaviours, and suggest that prolonged cannabis use might aggravate the cognitive deficits induced by chronic PCP by throwing the endocannabinoid system off balance.

  14. The Involvement of the Oxidative Stress in Murine Blue LED Light-Induced Retinal Damage Model.

    Science.gov (United States)

    Nakamura, Maho; Kuse, Yoshiki; Tsuruma, Kazuhiro; Shimazawa, Masamitsu; Hara, Hideaki

    2017-01-01

    The aim of this study was to establish a mouse model of blue light emitting diode (LED) light-induced retinal damage and to evaluate the effects of the antioxidant N-acetylcysteine (NAC). Mice were exposed to 400 or 800 lx blue LED light for 2 h, and were evaluated for retinal damage 5 d later by electroretinogram amplitude and outer nuclear layer (ONL) thickness. Additionally, we investigated the effect of blue LED light exposure on short-wavelength-sensitive opsin (S-opsin) and rhodopsin expression by immunohistochemistry. Blue LED light induced light intensity-dependent retinal damage and led to collapse of S-opsin and altered rhodopsin localization from the inner and outer segments to the ONL. NAC was administered at 100 or 250 mg/kg intraperitoneally twice a day, before dark adaptation and before light exposure. NAC protected against the blue LED light-induced retinal damage in a dose-dependent manner. Further, blue LED light induced a decrease in S-opsin levels and altered rhodopsin localization, both of which were suppressed by NAC. We established a mouse model of blue LED light-induced retinal damage, and these findings indicated that oxidative stress was partially involved in blue LED light-induced retinal damage.

  15. Involving mental health service users in suicide-related research: a qualitative inquiry model.

    Science.gov (United States)

    Lees, David; Procter, Nicholas; Fassett, Denise; Handley, Christine

    2016-03-01

    To describe the research model developed and successfully deployed as part of a multi-method qualitative study investigating suicidal service-users' experiences of mental health nursing care. Quality mental health care is essential to limiting the occurrence and burden of suicide; however, there is a lack of relevant research informing practice in this context. Research utilising first-person accounts of suicidality is of particular importance to expanding the existing evidence base. However, conducting ethical research to support this imperative is challenging. The model discussed here illustrates specific and more generally applicable principles for qualitative research regarding sensitive topics and involving potentially vulnerable service-users. Research involving mental health service users with first-person experience of suicidality requires stakeholder and institutional support, researcher competency, and careful attention to participant recruitment, consent, confidentiality, support and protection. Research with service users into their experiences of sensitive issues such as suicidality can result in rich and valuable data, and may also provide positive experiences of collaboration and inclusivity. If these challenges are not met, objectification and marginalisation of service-users may be reinforced, and limitations in the evidence base and service provision may be perpetuated.

  16. Methodology for Measurement the Energy Efficiency Involving Solar Heating Systems Using Stochastic Modelling

    Directory of Open Access Journals (Sweden)

    Bruno G. Menita

    2017-01-01

    Full Text Available The purpose of the present study is to evaluate gains, through a measurement and verification methodology adapted from the International Performance Measurement and Verification Protocol, from case studies involving Energy Efficiency Projects in Goias State, Brazil. This paper also presents stochastic modelling for the generation of future scenarios of electricity savings resulting from these Energy Efficiency Projects. The model is developed using the Geometric Brownian Motion stochastic process with mean reversion, associated with the Monte Carlo simulation technique. Results show that the electricity saved by replacing electric showers with solar water heating systems in the homes of low-income families has great potential to bring financial benefits to such families, and that the reduction in peak demand obtained from this Energy Efficiency Action is advantageous to the Brazilian electrical system. Results also include future scenarios of electricity savings and a sensitivity analysis to verify how the values of some parameters influence the results, since there is no historical data available for obtaining these values.
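    The record combines a geometric Brownian motion with mean reversion and Monte Carlo simulation but gives no formulas. A common concrete form is a mean-reverting process on the logarithm of the quantity (a Schwartz-type exponential Ornstein-Uhlenbeck process); the sketch below simulates it by Monte Carlo. All parameter values are hypothetical placeholders, not the paper's fitted values.

```python
import numpy as np

def simulate_mr_gbm(s0, mu, kappa, sigma, dt, steps, n_paths, seed=0):
    """Monte Carlo paths of a mean-reverting geometric process:
    d ln S = kappa * (mu - ln S) dt + sigma dW  (Euler discretization)."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, np.log(s0))         # log-levels, start at ln(s0)
    for _ in range(steps):
        dw = rng.standard_normal(n_paths) * np.sqrt(dt)
        x += kappa * (mu - x) * dt + sigma * dw
    return np.exp(x)                          # back to level scale

# Hypothetical parameters: long-run log-level mu, reversion speed kappa,
# volatility sigma, daily steps over a 5-year horizon.
paths = simulate_mr_gbm(s0=100.0, mu=np.log(120.0), kappa=2.0,
                        sigma=0.1, dt=1 / 252, steps=252 * 5, n_paths=10000)
mean_final = paths.mean()   # reverts toward the long-run level near 120
```

    A sensitivity analysis of the kind the paper describes would rerun this simulation over a grid of kappa and sigma values and compare the resulting scenario distributions.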

  17. A structural model of customer satisfaction and trust in vendors involved in mobile commerce

    Directory of Open Access Journals (Sweden)

    Suki, N.M.

    2011-01-01

    Full Text Available The purpose of this paper is to provide an explanation of factors influencing customer satisfaction and trust in vendors involved in mobile commerce (m-commerce). The study sample consists of 200 respondents. Data were analyzed by employing structural equation modelling (SEM) supported by AMOS 5.0 with maximum likelihood estimation in order to test the proposed hypotheses. The proposed model was empirically tested and results confirmed that users’ satisfaction with vendors in m-commerce was not significantly influenced by two antecedents of the vendor’s website quality: interactivity and customisation, and also two antecedents of mobile technology quality: usefulness and ease-of-use. Meanwhile, users’ trust towards the vendor in m-commerce is affected by users’ satisfaction with the vendor. Interestingly, vendor quality dimensions such as responsiveness and brand image influence customer satisfaction with vendors in m-commerce. Based on the findings, vendors in m-commerce should focus on the factors which generate more satisfaction and trust among customers. For vendors in general, the results can help them to better develop customer trust in m-commerce. Vendors of m-commerce can provide a more satisfying experience for customers.

  18. Finite Mixture Multilevel Multidimensional Ordinal IRT Models for Large Scale Cross-Cultural Research

    Science.gov (United States)

    de Jong, Martijn G.; Steenkamp, Jan-Benedict E. M.

    2010-01-01

    We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups of countries have different measurement operations, while…

  19. On Applications of Rasch Models in International Comparative Large-Scale Assessments: A Historical Review

    Science.gov (United States)

    Wendt, Heike; Bos, Wilfried; Goy, Martin

    2011-01-01

    Several current international comparative large-scale assessments of educational achievement (ICLSA) make use of "Rasch models", to address functions essential for valid cross-cultural comparisons. From a historical perspective, ICLSA and Georg Rasch's "models for measurement" emerged at about the same time, half a century ago. However, the…

  20. A simple atmospheric boundary layer model applied to large eddy simulations of wind turbine wakes

    DEFF Research Database (Denmark)

    Troldborg, Niels; Sørensen, Jens Nørkær; Mikkelsen, Robert Flemming

    2014-01-01

    A simple model for including the influence of the atmospheric boundary layer in connection with large eddy simulations of wind turbine wakes is presented and validated by comparing computed results with measurements as well as with direct numerical simulations. The model is based on an immersed...

  1. Induction of continuous expanding infrarenal aortic aneurysms in a large porcine animal model

    DEFF Research Database (Denmark)

    Kloster, Brian Ozeraitis; Lund, Lars; Lindholt, Jes S.

    2015-01-01

    Background: A large animal model with a continuously expanding infrarenal aortic aneurysm gives access to a more realistic AAA model, with anatomy and physiology similar to humans, and thus allows for new experimental research into the natural history and treatment options of the disease. Methods: 10 pigs...

  2. Control-Oriented Model of Molar Scavenge Oxygen Fraction for Exhaust Recirculation in Large Diesel Engines

    DEFF Research Database (Denmark)

    Nielsen, Kræn Vodder; Blanke, Mogens; Eriksson, Lars

    2016-01-01

    Exhaust gas recirculation (EGR) systems have been introduced to large marine engines in order to reduce NOx formation. Adequate modelling for control design is one of the bottlenecks in designing EGR control that also meets emission requirements during transient loading conditions. This paper therefore focuses on deriving and validating a mean-value model of a large two-stroke crosshead diesel engine with EGR. The model introduces a number of amendments and extensions to previous, complex models and shows in theory and practice that a simplified nonlinear model captures all essential dynamics... the behavior of the scavenge oxygen fraction well over the entire envelope of load and blower speed range that is relevant for EGR. The simplicity of the new model makes it suitable for observer and control design, which are essential steps to meet the emission requirements for marine diesel engines that take...

  3. Presence of a large β(1-3)glucan linked to chitin at the Saccharomyces cerevisiae mother-bud neck suggests involvement in localized growth control.

    Science.gov (United States)

    Cabib, Enrico; Blanco, Noelia; Arroyo, Javier

    2012-04-01

    Previous results suggested that the chitin ring present at the yeast mother-bud neck, which is linked specifically to the nonreducing ends of β(1-3)glucan, may help to suppress cell wall growth at the neck by competing with β(1-6)glucan and thereby with mannoproteins for their attachment to the same sites. Here we explored whether the linkage of chitin to β(1-3)glucan may also prevent the remodeling of this polysaccharide that would be necessary for cell wall growth. By a novel mild procedure, β(1-3)glucan was isolated from cell walls, solubilized by carboxymethylation, and fractionated by size exclusion chromatography, giving rise to a very high-molecular-weight peak and to highly polydisperse material. The latter material, soluble in alkali, may correspond to glucan being remodeled, whereas the large-size fraction would be the final cross-linked structural product. In fact, the β(1-3)glucan of buds, where growth occurs, is solubilized by alkali. A gas1 mutant with an expected defect in glucan elongation showed a large increase in the polydisperse fraction. By a procedure involving sodium hydroxide treatment, carboxymethylation, fractionation by affinity chromatography on wheat germ agglutinin-agarose, and fractionation by size chromatography on Sephacryl columns, it was shown that the β(1-3)glucan attached to chitin consists mostly of high-molecular-weight material. Therefore, it appears that linkage to chitin results in a polysaccharide that cannot be further remodeled and does not contribute to growth at the neck. In the course of these experiments, the new finding was made that part of the chitin forms a noncovalent complex with β(1-3)glucan.

  4. Knowledge discovery in large model datasets in the marine environment: the THREDDS Data Server example

    Directory of Open Access Journals (Sweden)

    A. Bergamasco

    2012-06-01

    Full Text Available In order to monitor, describe and understand the marine environment, many research institutions are involved in the acquisition and distribution of ocean data, both from observations and models. Scientists from these institutions are spending too much time looking for, accessing, and reformatting data: they need better tools and procedures to make the science they do more efficient. The U.S. Integrated Ocean Observing System (US-IOOS) is working on making large amounts of distributed data usable in an easy and efficient way. It is essentially a network of scientists, technicians and technologies designed to acquire, collect and disseminate observational and modelled data resulting from coastal and oceanic marine regions investigations to researchers, stakeholders and policy makers. In order to be successful, this effort requires standard data protocols, web services and standards-based tools. Starting from the US-IOOS approach, which is being adopted throughout much of the oceanographic and meteorological sectors, we describe here the CNR-ISMAR Venice experience in the direction of setting up a national Italian IOOS framework using the THREDDS (THematic Real-time Environmental Distributed Data Services) Data Server (TDS), a middleware designed to fill the gap between data providers and data users. The TDS provides services that allow data users to find the data sets pertaining to their scientific needs, to access, to visualize and to use them in an easy way, without downloading files to the local workspace. In order to achieve this, it is necessary that the data providers make their data available in a standard form that the TDS understands, and with sufficient metadata to allow the data to be read and searched in a standard way. The core idea is then to utilize a Common Data Model (CDM), a unified conceptual model that describes different datatypes within each dataset. More specifically, Unidata (www.unidata.ucar.edu) has developed the CDM

  5. Modeling economic costs of disasters and recovery involving positive effects of reconstruction: analysis using a dynamic CGE model

    Science.gov (United States)

    Xie, W.; Li, N.; Wu, J.-D.; Hao, X.-L.

    2013-11-01

    Disaster damages have negative effects on the economy, whereas reconstruction investments have positive effects. The aim of this study is to model the economic costs of disasters and recovery, involving the positive effects of reconstruction activities. The computable general equilibrium (CGE) model is a promising approach because it can incorporate these two kinds of shocks into a unified framework and thereby avoid the double-counting problem. In order to factor both shocks into the CGE model, direct loss is set as the amount of capital stock reduced on the supply side of the economy; a portion of investments restores the capital stock in the existing period; an investment-driven dynamic model is formulated on the basis of available reconstruction data, and the rest of a given country's saving is set as an endogenous variable. The 2008 Wenchuan Earthquake is selected as a case study to illustrate the model, and three scenarios are constructed: S0 (no disaster occurs), S1 (disaster occurs with reconstruction investment) and S2 (disaster occurs without reconstruction investment). S0 is taken as business as usual, and the differences between S1 and S0 and between S2 and S0 can be interpreted as economic losses including and excluding reconstruction, respectively. The study showed that output from S1 is closer to real data than that from S2. S2 overestimates economic loss, at roughly twice the level under S1. The gap in economic aggregate between S1 and S0 is reduced to 3% in 2011, a level that should take another four years to achieve under S2.
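    The scenario-differencing logic described above can be illustrated with a few lines of arithmetic (the output figures below are hypothetical, not the Wenchuan results):

```python
# Hypothetical regional output per period (arbitrary units).
# S0 = no disaster (business as usual), S1 = disaster with reconstruction
# investment, S2 = disaster without reconstruction investment.
output = {
    "S0": [100.0, 104.0, 108.0, 112.0],
    "S1": [100.0,  96.0, 103.0, 108.6],
    "S2": [100.0,  95.0,  98.0, 101.0],
}

def cumulative_loss(scenario, baseline="S0"):
    """Economic loss = cumulative output gap relative to the no-disaster path."""
    return sum(b - s for b, s in zip(output[baseline], output[scenario]))

loss_with_reconstruction = cumulative_loss("S1")     # includes rebuild stimulus
loss_without_reconstruction = cumulative_loss("S2")  # excludes it
```

Because reconstruction investment pulls S1 back toward the baseline, the loss measured under S2 comes out substantially larger than under S1, which is the overestimation effect the abstract reports.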

  6. Biomembrane models and drug-biomembrane interaction studies: Involvement in drug design and development

    Science.gov (United States)

    Pignatello, R.; Musumeci, T.; Basile, L.; Carbone, C.; Puglisi, G.

    2011-01-01

    Contact with many different biological membranes accompanies the destiny of a drug after its systemic administration. From the circulating macrophage cells to the vessel endothelium, to more complex absorption barriers, the interaction of a biomolecule with these membranes largely affects its rate and time of biodistribution in the body and at the target sites. Therefore, investigating the phenomena occurring on cell membranes, as well as their different interactions with drugs under physiological or pathological conditions, is important for exploring the molecular basis of many diseases and for identifying new potential therapeutic strategies. Of course, the complexity of the structure and functions of biological and cell membranes has pushed researchers toward the proposition and validation of simpler two- and three-dimensional membrane models, whose utility and drawbacks will be discussed. This review also describes the analytical methods used to study the interactions of bioactive compounds with biological membrane models, with particular emphasis on calorimetric techniques. These studies can be considered a powerful tool for medicinal chemistry and pharmaceutical technology in the steps of designing new drugs and optimizing the activity and safety profile of compounds already used in therapy. PMID:21430952

  7. Biomembrane models and drug-biomembrane interaction studies: Involvement in drug design and development.

    Science.gov (United States)

    Pignatello, R; Musumeci, T; Basile, L; Carbone, C; Puglisi, G

    2011-01-01

    Contact with many different biological membranes accompanies the destiny of a drug after its systemic administration. From the circulating macrophage cells to the vessel endothelium, to more complex absorption barriers, the interaction of a biomolecule with these membranes largely affects its rate and time of biodistribution in the body and at the target sites. Therefore, investigating the phenomena occurring on cell membranes, as well as their different interactions with drugs under physiological or pathological conditions, is important for exploring the molecular basis of many diseases and for identifying new potential therapeutic strategies. Of course, the complexity of the structure and functions of biological and cell membranes has pushed researchers toward the proposition and validation of simpler two- and three-dimensional membrane models, whose utility and drawbacks will be discussed. This review also describes the analytical methods used to study the interactions of bioactive compounds with biological membrane models, with particular emphasis on calorimetric techniques. These studies can be considered a powerful tool for medicinal chemistry and pharmaceutical technology in the steps of designing new drugs and optimizing the activity and safety profile of compounds already used in therapy.

  8. Biomembrane models and drug-biomembrane interaction studies: Involvement in drug design and development

    Directory of Open Access Journals (Sweden)

    R Pignatello

    2011-01-01

    Full Text Available Contact with many different biological membranes accompanies the destiny of a drug after its systemic administration. From the circulating macrophage cells to the vessel endothelium, to more complex absorption barriers, the interaction of a biomolecule with these membranes largely affects its rate and time of biodistribution in the body and at the target sites. Therefore, investigating the phenomena occurring on cell membranes, as well as their different interactions with drugs under physiological or pathological conditions, is important for exploring the molecular basis of many diseases and for identifying new potential therapeutic strategies. Of course, the complexity of the structure and functions of biological and cell membranes has pushed researchers toward the proposition and validation of simpler two- and three-dimensional membrane models, whose utility and drawbacks will be discussed. This review also describes the analytical methods used to study the interactions of bioactive compounds with biological membrane models, with particular emphasis on calorimetric techniques. These studies can be considered a powerful tool for medicinal chemistry and pharmaceutical technology in the steps of designing new drugs and optimizing the activity and safety profile of compounds already used in therapy.

  9. Analogue scale modelling of extensional tectonic processes using a large state-of-the-art centrifuge

    Science.gov (United States)

    Park, Heon-Joon; Lee, Changyeol

    2017-04-01

    Analogue scale modelling of extensional tectonic processes such as rifting and basin opening has been conducted numerous times. Among the controlling factors, gravitational acceleration (g) on the scale models was regarded as a constant (Earth's gravity) in most analogue model studies, and only a few model studies considered larger gravitational acceleration by using a centrifuge (an apparatus generating a large centrifugal force by rotating the model at high speed). Although analogue models using a centrifuge allow large scale-down and accelerated deformation driven by density differences, such as salt diapirs, the possible model size is mostly limited to about 10 cm. A state-of-the-art centrifuge installed at the KOCED Geotechnical Centrifuge Testing Center, Korea Advanced Institute of Science and Technology (KAIST), allows a large surface area of the scale models, up to 70 by 70 cm, under a maximum capacity of 240 g-tons. Using the centrifuge, we will conduct analogue scale modelling of extensional tectonic processes such as the opening of back-arc basins. Acknowledgement: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (grant number 2014R1A6A3A04056405).

  10. Optimizing Prediction Using Bayesian Model Averaging: Examples Using Large-Scale Educational Assessments.

    Science.gov (United States)

    Kaplan, David; Lee, Chansoon

    2018-01-01

    This article provides a review of Bayesian model averaging as a means of optimizing the predictive performance of common statistical models applied to large-scale educational assessments. The Bayesian framework recognizes that in addition to parameter uncertainty, there is uncertainty in the choice of models themselves. A Bayesian approach to addressing the problem of model uncertainty is the method of Bayesian model averaging. Bayesian model averaging searches the space of possible models for a set of submodels that satisfy certain scientific principles and then averages the coefficients across these submodels weighted by each model's posterior model probability (PMP). Using the weighted coefficients for prediction has been shown to yield optimal predictive performance according to certain scoring rules. We demonstrate the utility of Bayesian model averaging for prediction in education research with three examples: Bayesian regression analysis, Bayesian logistic regression, and a recently developed approach for Bayesian structural equation modeling. In each case, the model-averaged estimates are shown to yield better prediction of the outcome of interest than any submodel based on predictive coverage and the log-score rule. Implications for the design of large-scale assessments when the goal is optimal prediction in a policy context are discussed.
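    The core averaging step described above, weighting each submodel's coefficient by its posterior model probability, can be sketched in a few lines (a toy illustration with hypothetical numbers, not the authors' implementation):

```python
import numpy as np

# Hypothetical posterior model probabilities (PMPs) and coefficient
# estimates for one predictor across three candidate submodels.
pmp = np.array([0.6, 0.3, 0.1])       # posterior model probabilities, sum to 1
beta = np.array([1.20, 0.90, 0.00])   # 0.0: predictor excluded from submodel 3

# Bayesian model-averaged coefficient: PMP-weighted sum across submodels.
beta_bma = float(np.dot(pmp, beta))   # 0.6*1.20 + 0.3*0.90 + 0.1*0.00
```

Predictions made with such averaged coefficients account for model uncertainty rather than conditioning on any single submodel, which is what yields the improved predictive coverage the abstract reports.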

  11. Large-scale hydrology in Europe : observed patterns and model performance

    Energy Technology Data Exchange (ETDEWEB)

    Gudmundsson, Lukas

    2011-06-15

    In a changing climate, terrestrial water storages are of great interest as water availability impacts key aspects of ecosystem functioning. Thus, a better understanding of the variations of wet and dry periods will contribute to fully grasping processes of the earth system such as nutrient cycling and vegetation dynamics. Currently, river runoff from small, nearly natural catchments is one of the few variables of the terrestrial water balance that is regularly monitored with detailed spatial and temporal coverage on large scales. River runoff, therefore, provides a foundation to approach European hydrology with respect to observed patterns on large scales and with regard to the ability of models to capture these. The analysis of observed river flow from small catchments focused on the identification and description of spatial patterns of simultaneous temporal variations of runoff. These are dominated by large-scale variations of climatic variables but are also altered by catchment processes. It was shown that time series of annual low, mean and high flows follow the same atmospheric drivers. The observation that high flows are more closely coupled to large-scale atmospheric drivers than low flows indicates the increasing influence of catchment properties on runoff under dry conditions. Further, it was shown that the low-frequency variability of European runoff is dominated by two opposing centres of simultaneous variations, such that dry years in the north are accompanied by wet years in the south. Large-scale hydrological models are simplified representations of our current perception of the terrestrial water balance on large scales. Quantification of the models' strengths and weaknesses is the prerequisite for a reliable interpretation of simulation results. Model evaluations may also enable the detection of shortcomings in model assumptions and thus enable a refinement of the current perception of hydrological systems. The ability of a multi-model ensemble of nine large

  12. MACOP-B and Involved-Field Radiotherapy Is an Effective and Safe Therapy for Primary Mediastinal Large B Cell Lymphoma

    International Nuclear Information System (INIS)

    De Sanctis, Vitaliana; Finolezzi, Erica; Osti, Mattia Falchetto; Grapulin, Lavinia; Alfo, Marco; Pescarmona, Edoardo; Berardi, Francesca; Natalino, Fiammetta; Moleti, Maria Luisa; Di Rocco, Alice; Enrici, Riccardo Maurizi; Foa, Robin; Martelli, Maurizio

    2008-01-01

    Purpose: To report the clinical findings and long-term results of front-line, third-generation MACOP-B (methotrexate, doxorubicin, cyclophosphamide, vincristine, prednisone, and bleomycin) chemotherapy and mediastinal involved-field radiotherapy (IFRT) in 85 consecutive, previously untreated patients with primary mediastinal large B cell lymphoma (PMLBCL) diagnosed and managed at a single institution. Methods and Materials: Between 1991 and April 2004, 92 consecutive, untreated patients with PMLBCL were treated at our institution. The median age was 33 years (range, 15-61 years), 46 patients (50%) showed a mediastinal syndrome at onset; 52 patients (57%) showed a low/low-intermediate (0 to 1) and 40 patients (43%) an intermediate-high/high (2 to 3) International Prognostic Index (IPI) score. Eighty-five patients were treated with standard chemotherapy (MACOP-B), and 80 underwent mediastinal IFRT at a dose of 30-36 Gy. Results: After a MACOP-B regimen, the overall response rate was 87% and the partial response rate 9%. After chemotherapy, 67 Ga scintigraphy/positron emission tomography results were positive in 43 of 52 patients (83%), whereas after IFRT 11 of 52 patients (21%) remained positive (p 67 Ga scintigraphy/positron emission tomography in patients responsive to chemotherapy

  13. From Principles to Details: Integrated Framework for Architecture Modelling of Large Scale Software Systems

    Directory of Open Access Journals (Sweden)

    Andrzej Zalewski

    2013-06-01

    Full Text Available There exist numerous models of software architecture (box models, ADLs, UML, architectural decisions), architecture modelling frameworks (views, enterprise architecture frameworks) and even standards recommending practice for architectural description. We show in this paper that there is still a gap between these rather abstract frameworks/standards and existing architecture models. Frameworks and standards define what should be modelled rather than which models should be used and how these models are related to each other. We intend to prove that a less abstract modelling framework is needed for the effective modelling of large-scale software-intensive systems. It should provide more precise guidance on the kinds of models to be employed and how they should relate to each other. The paper defines principles that can serve as the base for an integrated model. Finally, the structure of such a model is proposed. It comprises three layers: the upper one, architectural policy, reflects corporate policy and strategies in architectural terms; the middle one, system organisation pattern, represents the core structural concepts and their rationale at a given level of scope; the lower one contains detailed architecture models. Architectural decisions play an important role here: they model the core architectural concepts, explaining the detailed models, and they organise the entire integrated model and the relations between its submodels.

  14. Groundwater Flow and Thermal Modeling to Support a Preferred Conceptual Model for the Large Hydraulic Gradient North of Yucca Mountain

    International Nuclear Information System (INIS)

    McGraw, D.; Oberlander, P.

    2007-01-01

    The purpose of this study is to report on the results of a preliminary modeling framework to investigate the causes of the large hydraulic gradient north of Yucca Mountain. This study builds on the Saturated Zone Site-Scale Flow and Transport Model (referenced herein as the Site-scale model (Zyvoloski, 2004a)), which is a three-dimensional saturated zone model of the Yucca Mountain area. Groundwater flow was simulated under natural conditions. The model framework and grid design describe the geologic layering and the calibration parameters describe the hydrogeology. The Site-scale model is calibrated to hydraulic heads, fluid temperature, and groundwater flowpaths. One area of interest in the Site-scale model represents the large hydraulic gradient north of Yucca Mountain. Nearby water levels suggest over 200 meters of hydraulic head difference in less than 1,000 meters horizontal distance. Given the geologic conceptual models defined by various hydrogeologic reports (Faunt, 2000, 2001; Zyvoloski, 2004b), no definitive explanation has been found for the cause of the large hydraulic gradient. Luckey et al. (1996) presents several possible explanations for the large hydraulic gradient as provided below: The gradient is simply the result of flow through the upper volcanic confining unit, which is nearly 300 meters thick near the large gradient. The gradient represents a semi-perched system in which flow in the upper and lower aquifers is predominantly horizontal, whereas flow in the upper confining unit would be predominantly vertical. The gradient represents a drain down a buried fault from the volcanic aquifers to the lower Carbonate Aquifer. The gradient represents a spillway in which a fault marks the effective northern limit of the lower volcanic aquifer. The large gradient results from the presence at depth of the Eleana Formation, a part of the Paleozoic upper confining unit, which overlies the lower Carbonate Aquifer in much of the Death Valley region. The

  15. Groundwater Flow and Thermal Modeling to Support a Preferred Conceptual Model for the Large Hydraulic Gradient North of Yucca Mountain

    Energy Technology Data Exchange (ETDEWEB)

    McGraw, D.; Oberlander, P.

    2007-12-18

    The purpose of this study is to report on the results of a preliminary modeling framework to investigate the causes of the large hydraulic gradient north of Yucca Mountain. This study builds on the Saturated Zone Site-Scale Flow and Transport Model (referenced herein as the Site-scale model (Zyvoloski, 2004a)), which is a three-dimensional saturated zone model of the Yucca Mountain area. Groundwater flow was simulated under natural conditions. The model framework and grid design describe the geologic layering and the calibration parameters describe the hydrogeology. The Site-scale model is calibrated to hydraulic heads, fluid temperature, and groundwater flowpaths. One area of interest in the Site-scale model represents the large hydraulic gradient north of Yucca Mountain. Nearby water levels suggest over 200 meters of hydraulic head difference in less than 1,000 meters horizontal distance. Given the geologic conceptual models defined by various hydrogeologic reports (Faunt, 2000, 2001; Zyvoloski, 2004b), no definitive explanation has been found for the cause of the large hydraulic gradient. Luckey et al. (1996) presents several possible explanations for the large hydraulic gradient as provided below: The gradient is simply the result of flow through the upper volcanic confining unit, which is nearly 300 meters thick near the large gradient. The gradient represents a semi-perched system in which flow in the upper and lower aquifers is predominantly horizontal, whereas flow in the upper confining unit would be predominantly vertical. The gradient represents a drain down a buried fault from the volcanic aquifers to the lower Carbonate Aquifer. The gradient represents a spillway in which a fault marks the effective northern limit of the lower volcanic aquifer. The large gradient results from the presence at depth of the Eleana Formation, a part of the Paleozoic upper confining unit, which overlies the lower Carbonate Aquifer in much of the Death Valley region. The

  16. Modeling and Control of Direct Driven PMSG for Ultra Large Wind Turbines

    OpenAIRE

    Ahmed M. Hemeida; Wael A. Farag; Osama A. Mahgoub

    2011-01-01

    This paper focuses on developing an integrated, reliable and sophisticated model for ultra-large wind turbines, and on studying the performance and analysis of vector control on large wind turbines. With the advance of power electronics technology, the direct-driven multi-pole radial-flux PMSG (Permanent Magnet Synchronous Generator) has proven to be a good choice for wind turbine manufacturers. To study wind energy conversion systems, it is important to develop a wind turbin...

  17. Diffuse and Focal Brain Injury in a Large Animal Model of PTE: Mechanisms Underlying Epileptogenesis

    Science.gov (United States)

    2017-10-01

    Conclusions: (A) contusion injury validation and neuropathology; (B) grid electrode development and testing; (C) wireless large animal custom enclosure... In addition, we will test the NF-L and GFAP immunoassays to begin quantification of these biomarkers, as well as collecting serum from the animals pre... AWARD NUMBER: W81XWH-16-1-0675. TITLE: Diffuse and Focal Brain Injury in a Large Animal Model of PTE: Mechanisms Underlying Epileptogenesis

  18. Molecular dynamics simulations of large integral membrane proteins with an implicit membrane model.

    Science.gov (United States)

    Tanizaki, Seiichiro; Feig, Michael

    2006-01-12

    The heterogeneous dielectric generalized Born (HDGB) methodology is an extension of the GBMV model for the simulation of integral membrane proteins with an implicit membrane environment. Three large integral membrane proteins, the bacteriorhodopsin monomer and trimer and the BtuCD protein, were simulated with the HDGB model in order to evaluate how well thermodynamic and dynamic properties are reproduced. Effects of the truncation of electrostatic interactions were examined. For all proteins, the HDGB model was able to generate stable trajectories that remained close to the starting experimental structures, in excellent agreement with explicit membrane simulations. Dynamic properties, evaluated through a comparison of B-factors, are also in good agreement with experiment and explicit membrane simulations. However, overall flexibility was slightly underestimated with the HDGB model unless a very large electrostatic cutoff was employed. Results with the HDGB model are further compared with equivalent simulations in implicit aqueous solvent, demonstrating that the membrane environment leads to more realistic simulations.

  19. The three-point function as a probe of models for large-scale structure

    International Nuclear Information System (INIS)

    Frieman, J.A.; Gaztanaga, E.

    1993-01-01

    The authors analyze the consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime. Several variations of the standard Ω = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, R_p ∼ 20 h⁻¹ Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. The authors show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r ≳ R_p. Current observational constraints on the three-point amplitudes Q_3 and S_3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales
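    For reference, the hierarchical three-point amplitude discussed in this abstract is conventionally defined (standard large-scale-structure convention, not quoted from this record) as the connected three-point correlation function ζ normalized by symmetrized products of the two-point function ξ:

```latex
Q_3 = \frac{\zeta(r_{12}, r_{23}, r_{31})}
           {\xi(r_{12})\,\xi(r_{23}) + \xi(r_{23})\,\xi(r_{31}) + \xi(r_{31})\,\xi(r_{12})}
```

In the hierarchical clustering picture Q_3 is roughly constant with scale, which is why a strong fall-off of Q_J at r ≳ R_p serves as the diagnostic of scale-dependent bias described above.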

  20. Model checking methodology for large systems, faults and asynchronous behaviour. SARANA 2011 work report

    International Nuclear Information System (INIS)

    Lahtinen, J.; Launiainen, T.; Heljanko, K.; Ropponen, J.

    2012-01-01

    Digital instrumentation and control (I and C) systems are challenging to verify. They enable complicated control functions, and the state spaces of the models easily become too large for comprehensive verification through traditional methods. Model checking is a formal method that can be used for system verification. A number of efficient model checking systems are available that provide analysis tools to determine automatically whether a given state machine model satisfies the desired safety properties. This report reviews the work performed in the Safety Evaluation and Reliability Analysis of Nuclear Automation (SARANA) project in 2011 regarding model checking. We have developed new, more exact modelling methods that are able to capture the behaviour of a system more realistically. In particular, we have developed more detailed fault models depicting the hardware configuration of a system, and methodology to model function-block-based systems asynchronously. In order to improve the usability of our model checking methods, we have developed an algorithm for model checking large modular systems. The algorithm can be used to verify properties of a model that could otherwise not be verified in a straightforward manner. (orig.)
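    As a schematic illustration of what such a verification tool does (a toy explicit-state sketch, not the SARANA tool chain), a safety property can be checked by exhaustively exploring the reachable states of a state machine and returning a counterexample trace if the property fails:

```python
from collections import deque

def check_safety(initial, successors, is_safe):
    """Explicit-state reachability check: explore every reachable state and
    return a counterexample path to the first unsafe state, or None if the
    safety property holds on all reachable states."""
    queue = deque([(initial, [initial])])
    visited = {initial}
    while queue:
        state, path = queue.popleft()
        if not is_safe(state):
            return path                      # counterexample trace
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None                              # property verified

# Toy example: a modulo-8 counter must never reach state 7.
trace = check_safety(0, lambda s: [(s + 1) % 8], lambda s: s != 7)
```

Industrial model checkers apply far more sophisticated symbolic and compositional techniques, which is precisely what the modular algorithm described above targets, but the input/output contract (model plus property in, proof or counterexample out) is the same.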

  1. Model checking methodology for large systems, faults and asynchronous behaviour. SARANA 2011 work report

    Energy Technology Data Exchange (ETDEWEB)

    Lahtinen, J. [VTT Technical Research Centre of Finland, Espoo (Finland); Launiainen, T.; Heljanko, K.; Ropponen, J. [Aalto Univ., Espoo (Finland). Dept. of Information and Computer Science

    2012-07-01

    Digital instrumentation and control (I and C) systems are challenging to verify. They enable complicated control functions, and the state spaces of the models easily become too large for comprehensive verification through traditional methods. Model checking is a formal method that can be used for system verification. A number of efficient model checking systems are available that provide analysis tools to determine automatically whether a given state machine model satisfies the desired safety properties. This report reviews the work performed in the Safety Evaluation and Reliability Analysis of Nuclear Automation (SARANA) project in 2011 regarding model checking. We have developed new, more exact modelling methods that are able to capture the behaviour of a system more realistically. In particular, we have developed more detailed fault models depicting the hardware configuration of a system, and methodology to model function-block-based systems asynchronously. In order to improve the usability of our model checking methods, we have developed an algorithm for model checking large modular systems. The algorithm can be used to verify properties of a model that could otherwise not be verified in a straightforward manner. (orig.)

  2. Modelling atopic dermatitis during the morphogenetic process involved in reconstruction of a human epidermis.

    Science.gov (United States)

    De Vuyst, É; Mound, A; Lambert de Rouvroit, C; Poumay, Y

    The most crucial role of the epidermis is to maintain an efficient barrier between the organism and its environment. This barrier is perturbed in inflammatory skin conditions such as atopic dermatitis (AD), a common chronic disease. This review describes a model intended to reproduce the epidermal features of AD in vitro. First, methyl-β-cyclodextrin (MβCD) was applied during reconstruction of the epidermis to deplete cholesterol from the plasma membrane, because this treatment reproduces characteristics of AD at the transcriptomic level in monolayer cultures. Major changes were confirmed after the same treatment in reconstructed human epidermis (RHE). However, since early treatment did not impair the reconstruction of a functional epidermal barrier, and given the importance of the dysregulated Th2 immune response in AD, cholesterol-depleted RHE at day 11 of reconstruction were then incubated with three Th2-related cytokines (IL-4, IL-13 and IL-25) previously reported to play important roles in the development of AD and to alter the overall function of the epidermal barrier. When both treatments are combined, the essential epidermal features of AD are observed: RHE exhibit spongiosis, a disappearing granular layer, altered barrier function, and dysregulated expression levels of genes involved in AD pathogenesis. Moreover, in attempting to identify the individual role of each component used to create AD-like alterations, incubation with IL-4 following cholesterol depletion from the plasma membrane was found to induce most of the reported alterations. This model offers potential for better investigation of the epidermal features of AD and may be considered for eventual in vitro screening of cosmetics or therapeutic compounds. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  3. Software engineering the mixed model for genome-wide association studies on large samples.

    Science.gov (United States)

    Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J

    2009-11-01

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.
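The central mixed-model computation the review discusses can be sketched as generalized least squares (GLS) against a covariance built from a kinship matrix. The sketch below assumes the kinship matrix and variance components are already known (real GWAS packages estimate them, e.g. by REML, and use far more efficient linear algebra); all names and values are illustrative:

```python
import numpy as np

def gls_beta(y, X, V):
    """GLS estimate of beta in y = X beta + eps with Var(eps) = V:
    beta = (X' V^-1 X)^-1 X' V^-1 y."""
    Vinv = np.linalg.inv(V)
    XtVi = X.T @ Vinv
    return np.linalg.solve(XtVi @ X, XtVi @ y)

rng = np.random.default_rng(1)
n = 120
K = np.eye(n)                  # kinship matrix (identity = unrelated, toy case)
snp = rng.integers(0, 3, size=n).astype(float)   # genotype coded 0/1/2
X = np.column_stack([np.ones(n), snp])           # intercept + tested marker
beta_true = np.array([1.0, 0.5])                 # true intercept and SNP effect

# Mixed model y = X beta + g + e with Var(y) = sg2*K + se2*I
V = 0.4 * K + 0.6 * np.eye(n)
y = X @ beta_true + rng.multivariate_normal(np.zeros(n), V)

beta_hat = gls_beta(y, X, V)   # beta_hat[1] estimates the SNP effect
```

For large samples, the O(n³) inversion above is exactly the bottleneck the reviewed software engineering work addresses (e.g. via spectral decomposition of K reused across markers).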

  4. Prospectus: towards the development of high-fidelity models of wall turbulence at large Reynolds number

    Science.gov (United States)

    Klewicki, J. C.; Chini, G. P.; Gibson, J. F.

    2017-01-01

    Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier–Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted. This article is part of the themed issue ‘Toward the development of high-fidelity models of wall turbulence at large Reynolds number’. PMID:28167585

  5. Large-scale 3-D modeling by integration of resistivity models and borehole data through inversion

    DEFF Research Database (Denmark)

    Foged, N.; Marker, Pernille Aabye; Christiansen, A. V.

    2014-01-01

    … and the borehole data set in one variable. Finally, we use k-means clustering to generate a 3-D model of the subsurface structures. We apply the procedure to the Norsminde survey in Denmark, integrating approximately 700 boreholes and more than 100 000 resistivity models from an airborne survey … in the parameterization of the 3-D model covering 156 km². The final five-cluster 3-D model differentiates between clay materials and different high-resistivity materials from information held in the resistivity model and borehole observations, respectively.
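The final clustering step described in this record can be sketched with a plain k-means implementation. The two synthetic point clouds below (log-resistivity versus clay fraction, with made-up values) stand in for the clay and high-resistivity classes; the survey's real input combines inverted resistivity models and borehole lithology logs:

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: random init from the data, then alternate
    nearest-center assignment and center re-estimation."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center (squared Euclidean)
        d2 = ((points[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = np.argmin(d2, axis=1)
        # move each center to the mean of its assigned points
        centers = np.array([points[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

rng = np.random.default_rng(42)
clay = rng.normal([1.0, 0.8], 0.05, size=(100, 2))   # low resistivity, high clay
sand = rng.normal([2.5, 0.1], 0.05, size=(100, 2))   # high resistivity, low clay
points = np.vstack([clay, sand])
labels, centers = kmeans(points, k=2)
```

The survey's five-cluster model is the same operation with k = 5 on a much larger feature space; the cluster labels then become the lithological units of the 3-D model.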

  6. Topic modeling for cluster analysis of large biological and medical datasets.

    Science.gov (United States)

    Zhao, Weizhong; Zou, Wen; Chen, James J

    2014-01-01

    The big data moniker is nowhere better deserved than to describe the ever-increasing prodigiousness and complexity of biological and medical datasets. New methods are needed to generate and test hypotheses, foster biological interpretation, and build validated predictors. Although multivariate techniques such as cluster analysis may allow researchers to identify groups, or clusters, of related variables, the accuracy and effectiveness of traditional clustering methods diminish for large and high-dimensional datasets. Topic modeling is an active research field in machine learning and has been mainly used as an analytical tool to structure large textual corpora for data mining. Its ability to reduce high dimensionality to a small number of latent variables makes it suitable as a means for clustering or overcoming clustering difficulties in large biological and medical datasets. In this study, three topic model-derived clustering methods, highest probable topic assignment, feature selection and feature extraction, are proposed and tested on the cluster analysis of three large datasets: a Salmonella pulsed-field gel electrophoresis (PFGE) dataset, a lung cancer dataset, and a breast cancer dataset, which represent various types of large biological or medical datasets. All three methods are shown to improve the efficacy/effectiveness of clustering results on the three datasets in comparison to traditional methods. A preferable cluster analysis method emerged for each of the three datasets on the basis of replicating known biological truths. Topic modeling could be advantageously applied to the large datasets of biological or medical research. The three proposed topic model-derived clustering methods, highest probable topic assignment, feature selection and feature extraction, yield clustering improvements for the three different data types. Clusters more efficaciously represent truthful groupings and subgroupings in the data than traditional methods, suggesting …
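The first of the three methods, highest probable topic assignment, is simply an argmax over the document-topic distribution that a topic model (such as LDA) produces. The sketch below uses a made-up document-topic probability matrix theta; in the study this matrix would come from fitting a topic model to the biological dataset:

```python
import numpy as np

# Hypothetical document-topic probabilities theta[d, t] from a topic model;
# each row sums to 1. Values are illustrative only.
theta = np.array([
    [0.7, 0.2, 0.1],   # document 0: mostly topic 0
    [0.1, 0.8, 0.1],   # document 1: mostly topic 1
    [0.2, 0.3, 0.5],   # document 2: mostly topic 2
    [0.6, 0.3, 0.1],   # document 3: mostly topic 0
])

# Highest probable topic assignment: cluster = argmax_t theta[d, t]
clusters = theta.argmax(axis=1)
members = {t: np.where(clusters == t)[0].tolist() for t in range(theta.shape[1])}
```

The other two methods reuse theta differently: feature selection keeps only discriminative topics, and feature extraction feeds the full theta rows into a conventional clustering algorithm.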

  7. Characterization of Pneumococcal Genes Involved in Bloodstream Invasion in a Mouse Model.

    Directory of Open Access Journals (Sweden)

    Layla K Mahdi

    Streptococcus pneumoniae (the pneumococcus) continues to account for significant morbidity and mortality worldwide, causing life-threatening diseases such as pneumonia, bacteremia and meningitis, as well as less serious infections such as sinusitis, conjunctivitis and otitis media. Current polysaccharide vaccines are strictly serotype-specific and also drive the emergence of non-vaccine serotype strains. In this study, we used microarray analysis to compare gene expression patterns of either serotype 4 or serotype 6A pneumococci in the nasopharynx and blood of mice, as a model to identify genes involved in invasion of blood in the context of occult bacteremia in humans. In this manner, we identified 26 genes that were significantly up-regulated in the nasopharynx and 36 genes that were significantly up-regulated in the blood that were common to both strains. Gene Ontology classification revealed that transporter and DNA binding (transcription factor) activities constitute the significantly different molecular functional categories for genes up-regulated in the nasopharynx and blood. Targeted mutagenesis of selected genes from both niches and subsequent virulence and pathogenesis studies identified the manganese-dependent superoxide dismutase (SodA) as most likely to be essential for colonization, and the cell wall-associated serine protease (PrtA) as important for invasion of blood. This work extends our previous analyses and suggests that both PrtA and SodA warrant examination in future studies aimed at prevention and/or control of pneumococcal disease.

  8. Spinal dopaminergic involvement in the antihyperalgesic effect of antidepressants in a rat model of neuropathic pain.

    Science.gov (United States)

    Chen, Mi; Hoshino, Hajime; Saito, Shigeru; Yang, Yang; Obata, Hideaki

    2017-05-10

    Antidepressants such as tricyclic antidepressants and serotonin-noradrenaline reuptake inhibitors are a first-line treatment for neuropathic pain. Here, we aimed to determine the involvement of the spinal dopaminergic system in the antihyperalgesic effects of antidepressants in a rat model of neuropathic pain induced by spinal nerve ligation (SNL). The right L5 spinal nerve of male Sprague-Dawley rats was ligated under inhalation anesthesia to induce hyperalgesia. Behavioral testing was performed by measuring ipsilateral hindpaw withdrawal thresholds after intraperitoneal injection of amitriptyline, duloxetine, milnacipran, and fluoxetine. D2-like receptors were blocked by intrathecal administration of sulpiride. We also determined the concentrations of dopamine in the spinal cord using microdialysis after injection of antidepressants. The dopamine content in the spinal dorsal horn was also measured in normal and SNL rats at 2, 3, 4, and 8 weeks after SNL surgery. Intraperitoneal injection of amitriptyline, duloxetine, milnacipran, and fluoxetine (3–30 mg/kg) produced antihyperalgesic effects, which were prevented by intrathecal pre-injection of sulpiride (30 μg). Microdialysis revealed that dopamine levels in the spinal cord increased after intraperitoneal injection of each antidepressant (10 mg/kg). Furthermore, the dopamine content in homogenized spinal cord tissue was increased at 2 weeks after SNL and subsequently declined. Our results suggest that the effect of antidepressants against neuropathic pain involves modulation of not only noradrenaline and serotonin but also dopamine levels in the spinal cord. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Use of the LQ model with large fraction sizes results in underestimation of isoeffect doses

    International Nuclear Information System (INIS)

    Sheu, Tommy; Molkentine, Jessica; Transtrum, Mark K.; Buchholz, Thomas A.; Withers, Hubert Rodney; Thames, Howard D.; Mason, Kathy A.

    2013-01-01

    Purpose: To test the appropriateness of the linear-quadratic (LQ) model to describe survival of jejunal crypt clonogens after split doses with a variable (small 1–6 Gy, large 8–13 Gy) first dose, as a model of its appropriateness for both small and large fraction sizes. Methods: C3Hf/KamLaw mice were exposed to whole body irradiation using 300 kVp X-rays at a dose rate of 1.84 Gy/min, and the number of viable jejunal crypts was determined using the microcolony assay. A total dose of 14 Gy was split into unequal first and second fractions separated by 4 h. Data were analyzed using the LQ model, the lethal potentially lethal (LPL) model, and a repair-saturation (RS) model. Results: Cell kill was greater in the group receiving the larger fraction first, creating an asymmetry in the plot of survival vs size of first dose, as opposed to the symmetric response predicted by the LQ model. There was a significant difference in the estimated βs (higher β after larger first doses), but no significant difference in the αs, when large doses were given first vs small doses first. This difference results in underestimation (based on the present data, by approximately 8%) of isoeffect doses using LQ model parameters based on small fraction sizes. While the LPL model also predicted a symmetric response inconsistent with the data, the RS model results were consistent with the observed asymmetry. Conclusion: The LQ model underestimates doses for isoeffective crypt-cell survival with large fraction sizes (in the present setting, >9 Gy).
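For reference, the LQ bookkeeping being tested can be written down in a few lines. Parameter values below (an assumed α/β = 10 Gy and example fractionation schedules) are illustrative, not the paper's fitted values; the study's conclusion implies that such LQ-based isoeffect estimates become unreliable for fraction sizes above roughly 9 Gy:

```python
import math

def surviving_fraction(n, d, alpha, beta):
    """LQ surviving fraction S = exp(-n*(alpha*d + beta*d^2))
    for n fractions of size d (Gy)."""
    return math.exp(-n * (alpha * d + beta * d * d))

def bed(n, d, ab_ratio):
    """Biologically effective dose BED = n*d*(1 + d/(alpha/beta)).
    Schedules with equal BED are isoeffective under the LQ model."""
    return n * d * (1.0 + d / ab_ratio)

ab = 10.0                                   # assumed alpha/beta (Gy)
bed_hypo = bed(n=4, d=8.0, ab_ratio=ab)     # 4 x 8 Gy  -> BED = 57.6 Gy
bed_conv = bed(n=16, d=2.0, ab_ratio=ab)    # 16 x 2 Gy -> BED = 38.4 Gy
```

Under the paper's finding, comparing schedules this way with α and β fitted at small fraction sizes would underestimate the isoeffective dose for the 8 Gy-per-fraction schedule.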

  10. Expression profiles of genes involved in xenobiotic metabolism and disposition in human renal tissues and renal cell models

    Energy Technology Data Exchange (ETDEWEB)

    Van der Hauwaert, Cynthia; Savary, Grégoire [EA4483, Université de Lille 2, Faculté de Médecine de Lille, Pôle Recherche, 59045 Lille (France); Buob, David [Institut de Pathologie, Centre de Biologie Pathologie Génétique, Centre Hospitalier Régional Universitaire de Lille, 59037 Lille (France); Leroy, Xavier; Aubert, Sébastien [Institut de Pathologie, Centre de Biologie Pathologie Génétique, Centre Hospitalier Régional Universitaire de Lille, 59037 Lille (France); Institut National de la Santé et de la Recherche Médicale, UMR837, Centre de Recherche Jean-Pierre Aubert, Equipe 5, 59045 Lille (France); Flamand, Vincent [Service d' Urologie, Hôpital Huriez, Centre Hospitalier Régional Universitaire de Lille, 59037 Lille (France); Hennino, Marie-Flore [EA4483, Université de Lille 2, Faculté de Médecine de Lille, Pôle Recherche, 59045 Lille (France); Service de Néphrologie, Hôpital Huriez, Centre Hospitalier Régional Universitaire de Lille, 59037 Lille (France); Perrais, Michaël [Institut National de la Santé et de la Recherche Médicale, UMR837, Centre de Recherche Jean-Pierre Aubert, Equipe 5, 59045 Lille (France); and others

    2014-09-15

    Numerous xenobiotics have been shown to be harmful for the kidney. Thus, to improve our knowledge of the cellular processing of these nephrotoxic compounds, we evaluated, by real-time PCR, the mRNA expression level of 377 genes encoding xenobiotic-metabolizing enzymes (XMEs), transporters, as well as nuclear receptors and transcription factors that coordinate their expression in eight normal human renal cortical tissues. Additionally, since several renal in vitro models are commonly used in pharmacological and toxicological studies, we investigated their metabolic capacities and compared them with those of renal tissues. The same set of genes was thus investigated in HEK293 and HK2 immortalized cell lines, in commercial primary cultures of epithelial renal cells and in proximal tubular cell primary cultures. Altogether, our data offer a comprehensive description of kidney ability to process xenobiotics. Moreover, by hierarchical clustering, we observed large variations in gene expression profiles between renal cell lines and renal tissues. Primary cultures of proximal tubular epithelial cells exhibited the highest similarities with renal tissue in terms of transcript profiling. Moreover, compared to other renal cell models, tacrolimus dose-dependent toxic effects were lower in proximal tubular cell primary cultures, which display the highest metabolism and disposition capacity. Therefore, primary cultures appear to be the most relevant in vitro model for investigating the metabolism and bioactivation of nephrotoxic compounds and for toxicological and pharmacological studies. - Highlights: • Renal proximal tubular (PT) cells are highly sensitive to xenobiotics. • Expression of genes involved in xenobiotic disposition was measured. • PT cells exhibited the highest similarities with renal tissue.

  11. Global models underestimate large decadal declining and rising water storage trends relative to GRACE satellite data.

    Science.gov (United States)

    Scanlon, Bridget R; Zhang, Zizhan; Save, Himanshu; Sun, Alexander Y; Müller Schmied, Hannes; van Beek, Ludovicus P H; Wiese, David N; Wada, Yoshihide; Long, Di; Reedy, Robert C; Longuevergne, Laurent; Döll, Petra; Bierkens, Marc F P

    2018-02-06

    Assessing reliability of global models is critical because of increasing reliance on these models to address past and projected future climate and human stresses on global water resources. Here, we evaluate model reliability based on a comprehensive comparison of decadal trends (2002-2014) in land water storage from seven global models (WGHM, PCR-GLOBWB, GLDAS NOAH, MOSAIC, VIC, CLM, and CLSM) to trends from three Gravity Recovery and Climate Experiment (GRACE) satellite solutions in 186 river basins (∼60% of global land area). Medians of modeled basin water storage trends greatly underestimate GRACE-derived large decreasing (≤ −0.5 km³/y) and increasing (≥ 0.5 km³/y) trends. Decreasing trends from GRACE are mostly related to human use (irrigation) and climate variations, whereas increasing trends reflect climate variations. For example, in the Amazon, GRACE estimates a large increasing trend of ∼43 km³/y, whereas most models estimate decreasing trends (−71 to 11 km³/y). Land water storage trends, summed over all basins, are positive for GRACE (∼71–82 km³/y) but negative for models (−450 to −12 km³/y), contributing opposing trends to global mean sea level change. Impacts of climate forcing on decadal land water storage trends exceed those of modeled human intervention by about a factor of 2. The model-GRACE comparison highlights potential areas of future model development, particularly simulated water storage. The inability of models to capture large decadal water storage trends based on GRACE indicates that model projections of climate and human-induced water storage changes may be underestimated. Copyright © 2018 the Author(s). Published by PNAS.
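The quantity compared throughout this record, a decadal water-storage trend, is the slope of a least-squares line fitted to a storage-anomaly time series. The sketch below applies that to a synthetic monthly series (the seasonal amplitude and the +3 km³/y underlying trend are made-up numbers, not GRACE or model values):

```python
import numpy as np

def storage_trend(t_years, storage_km3):
    """Linear trend (km^3 per year) of a storage time series,
    via ordinary least squares."""
    slope, _intercept = np.polyfit(t_years, storage_km3, deg=1)
    return slope

t = np.arange(0, 13, 1 / 12)                 # ~13 years of monthly samples
seasonal = 20.0 * np.sin(2 * np.pi * t)      # annual storage cycle (km^3)
storage = 3.0 * t + seasonal                 # underlying trend: +3 km^3/y

trend = storage_trend(t, storage)            # recovers roughly +3 km^3/y
```

Because full annual cycles average out of the fit, the recovered slope is close to the underlying trend despite the large seasonal signal, which is why decadal trends are a robust basis for the model-vs-GRACE comparison.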

  12. Meson productions at large transverse momentum in core-valence parton model

    CERN Document Server

    Biyajima, M

    1976-01-01

    A quark parton model with an impulse approximation is presented for describing inclusive-reaction phenomena from small to large momentum. An annihilation process by quark-antiquark collision and a pair creation process by gluon-gluon collision are assumed to be dominant. By making use of these two diagrams the authors can explain the particle ratios in addition to the various large p_T distributions at FNAL and CERN-ISR. Model calculations suggest that gluon collisions play an important role as well as the quark-antiquark annihilation process. (24 refs).

  13. Large Differences in Terrestrial Vegetation Production Derived from Satellite-Based Light Use Efficiency Models

    Directory of Open Access Journals (Sweden)

    Wenwen Cai

    2014-09-01

    Terrestrial gross primary production (GPP) is the largest global CO2 flux and determines other ecosystem carbon cycle variables. Light use efficiency (LUE) models may have the most potential to adequately address the spatial and temporal dynamics of GPP, but recent studies have shown large model differences in GPP simulations. In this study, we investigated the GPP differences in the spatial and temporal patterns derived from seven widely used LUE models at the global scale. The result shows that the global annual GPP estimates over the period 2000–2010 varied from 95.10 to 139.71 Pg C·yr⁻¹ among models. The spatial and temporal variation of global GPP differs substantially between models, due to different model structures and dominant environmental drivers. In almost all models, water availability dominates the interannual variability of GPP over large vegetated areas. Solar radiation and air temperature are not the primary controlling factors for interannual variability of global GPP estimates for most models. The disagreement among the current LUE models highlights the need for further model improvement to quantify the global carbon cycle.
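The structure shared by the seven compared models is the generic LUE form GPP = ε_max · f(T) · f(W) · fPAR · PAR; the models differ mainly in their ε_max values and in how the stress scalars f(T) and f(W) are defined. The sketch below uses simplified linear-ramp scalars and made-up parameter values, not any specific model's formulation:

```python
def ramp(x, xmin, xmax):
    """Linear ramp scalar, clamped to [0, 1]."""
    return min(1.0, max(0.0, (x - xmin) / (xmax - xmin)))

def gpp(par, fpar, temp_c, moisture, eps_max=1.8):
    """Daily GPP (g C m-2 d-1) in the generic LUE form:
    GPP = eps_max * f(T) * f(W) * fPAR * PAR.
    PAR in MJ m-2 d-1; eps_max in g C per MJ (illustrative value)."""
    f_t = ramp(temp_c, 0.0, 20.0)     # temperature stress scalar
    f_w = ramp(moisture, 0.1, 0.6)    # water-availability stress scalar
    return eps_max * f_t * f_w * fpar * par

g = gpp(par=10.0, fpar=0.8, temp_c=25.0, moisture=0.7)   # unstressed case
```

The record's finding that water availability dominates interannual GPP variability corresponds, in this form, to f(W) being the scalar that varies most from year to year over large vegetated areas.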

  14. VISTopic: A visual analytics system for making sense of large document collections using hierarchical topic modeling

    Directory of Open Access Journals (Sweden)

    Yi Yang

    2017-03-01

    Effective analysis of large text collections remains a challenging problem given the growing volume of available text data. Recently, text mining techniques have been rapidly developed for automatically extracting key information from massive text data. Topic modeling, as one of the novel techniques that extracts a thematic structure from documents, is widely used to generate text summarization and foster an overall understanding of the corpus content. Although powerful, this technique may not be directly applicable for general analytics scenarios since the topics and topic–document relationships are often presented probabilistically in models. Moreover, information that plays an important role in knowledge discovery, for example, times and authors, is hardly reflected in topic modeling for comprehensive analysis. In this paper, we address this issue by presenting a visual analytics system, VISTopic, to help users make sense of large document collections based on topic modeling. VISTopic first extracts a set of hierarchical topics using a novel hierarchical latent tree model (HLTM) (Liu et al., 2014). Specifically, a topic view accounting for the model features is designed for overall understanding and interactive exploration of the topic organization. To leverage multi-perspective information for visual analytics, VISTopic further provides an evolution view to reveal the trend of topics and a document view to show details of topical documents. Three case studies based on the dataset of the IEEE VIS conference demonstrate the effectiveness of our system in gaining insights from large document collections. Keywords: Topic modeling, Text visualization, Visual analytics

  15. Microfluidic very large scale integration (VLSI) modeling, simulation, testing, compilation and physical synthesis

    CERN Document Server

    Pop, Paul; Madsen, Jan

    2016-01-01

    This book presents the state-of-the-art techniques for the modeling, simulation, testing, compilation and physical synthesis of mVLSI biochips. The authors describe a top-down modeling and synthesis methodology for the mVLSI biochips, inspired by microelectronics VLSI methodologies. They introduce a modeling framework for the components and the biochip architecture, and a high-level microfluidic protocol language. Coverage includes a topology graph-based model for the biochip architecture, and a sequencing graph to model for biochemical application, showing how the application model can be obtained from the protocol language. The techniques described facilitate programmability and automation, enabling developers in the emerging, large biochip market. · Presents the current models used for the research on compilation and synthesis techniques of mVLSI biochips in a tutorial fashion; · Includes a set of "benchmarks", that are presented in great detail and includes the source code of several of the techniques p...

  16. Large eddy simulation of spanwise rotating turbulent channel flow with dynamic variants of eddy viscosity model

    Science.gov (United States)

    Jiang, Zhou; Xia, Zhenhua; Shi, Yipeng; Chen, Shiyi

    2018-04-01

    A fully developed spanwise rotating turbulent channel flow has been numerically investigated utilizing large-eddy simulation. Our focus is to assess the performance of the dynamic variants of eddy viscosity models, including the dynamic Vreman's model (DVM), the dynamic wall-adapting local eddy viscosity (DWALE) model, the dynamic σ (Dσ) model, and the dynamic volumetric strain-stretching (DVSS) model, in this canonical flow. The results with the dynamic Smagorinsky model (DSM) and direct numerical simulations (DNS) are used as references. Our results show that the DVM has an incorrect asymptotic behavior in the near-wall region, while the other three models predict it correctly. In the high rotation case, the DWALE model can obtain a reliable mean velocity profile, but the turbulence intensities in the wall-normal and spanwise directions show clear deviations from the DNS data. DVSS exhibits poor predictions of both the mean velocity profile and turbulence intensities. In all three cases, Dσ performs the best.
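All the models compared here are eddy-viscosity closures of the form ν_t = (CΔ)²·D, differing in the differential operator D built from the resolved velocity gradient and in whether C is computed dynamically. As a baseline, the classic Smagorinsky form ν_t = (C_s Δ)²|S| with |S| = √(2 S_ij S_ij) can be evaluated as follows (the dynamic variants replace the fixed C_s with a locally computed coefficient; the values below are illustrative):

```python
import numpy as np

def eddy_viscosity(grad_u, delta, cs=0.17):
    """Smagorinsky eddy viscosity nu_t = (cs*delta)^2 * |S| from the
    resolved velocity-gradient tensor grad_u (3x3), with
    |S| = sqrt(2 S_ij S_ij) the strain-rate magnitude."""
    S = 0.5 * (grad_u + grad_u.T)            # resolved strain-rate tensor
    S_mag = np.sqrt(2.0 * np.sum(S * S))     # |S| = sqrt(2 S_ij S_ij)
    return (cs * delta) ** 2 * S_mag

# Simple shear du/dy = 2 (made-up value) on a filter width delta = 0.1
grad_u = np.array([[0.0, 2.0, 0.0],
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])
nu_t = eddy_viscosity(grad_u, delta=0.1)
```

The near-wall asymptotics discussed in the record concern how this ν_t scales as the wall is approached: |S| stays finite in a shear layer, so the fixed-coefficient form above does not vanish at the wall, which is exactly what operators like WALE and σ are designed to fix.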

  17. Large tau and tau neutrino electric dipole moments in models with vectorlike multiplets

    International Nuclear Information System (INIS)

    Ibrahim, Tarek; Nath, Pran

    2010-01-01

    It is shown that an electric dipole moment of the τ lepton several orders of magnitude larger than that predicted by the standard model can be generated from mixings in models with vectorlike multiplets. The electric dipole moment (EDM) of the τ lepton arises from loops involving the exchange of the W, the charginos, the neutralinos, the sleptons, the mirror leptons, and the mirror sleptons. The EDM of the Dirac τ neutrino is also computed from loops involving the exchange of the W, the charginos, the mirror leptons, and the mirror sleptons. A numerical analysis is presented, and it is shown that EDMs of the τ lepton and the τ neutrino lying just a couple of orders of magnitude below the sensitivity of the current experiment can be achieved. Thus the predictions of the model are testable in an improved experiment on the EDM of the τ and the τ neutrino.

  18. The Large Office Environment - Measurement and Modeling of the Wideband Radio Channel

    DEFF Research Database (Denmark)

    Andersen, Jørgen Bach; Nielsen, Jesper Ødum; Bauch, Gerhard

    2006-01-01

    In a future 4G or WLAN wideband application we can imagine multiple users in a large office environment consisting of a single room with partitions. Up to now, indoor radio channel measurement and modelling has mainly concentrated on scenarios with several office rooms and corridors. We present here measurements at 5.8 GHz over 100 MHz bandwidth and a novel modelling approach for the wideband radio channel in a large office room environment. An acoustic-like reverberation theory is proposed that allows a tapped delay line model to be specified just from the room dimensions and an average … calculated from the measurements. The proposed model can likely also be applied to indoor hot spot scenarios.
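The acoustic analogy can be sketched with Sabine's reverberation formula and an exponentially decaying tap profile. Note the 0.161 s/m constant below is the room-acoustics value (it depends on the speed of sound); the paper's radio-channel analogue uses the same exponential-decay structure but on nanosecond timescales. Room dimensions, absorption, and tap spacing here are made-up illustrative values:

```python
def sabine_t60(volume_m3, absorption_m2):
    """Sabine reverberation time T60 = 0.161 * V / A (seconds), the time
    for the reverberant field to decay by 60 dB (acoustic form)."""
    return 0.161 * volume_m3 / absorption_m2

def tap_powers_db(t60, tap_spacing_s, n_taps):
    """Relative tapped-delay-line tap powers in dB: exponential decay,
    i.e. -60 dB per T60 of additional delay."""
    return [-60.0 * (i * tap_spacing_s) / t60 for i in range(n_taps)]

t60 = sabine_t60(volume_m3=300.0, absorption_m2=60.0)   # ~0.805 s
taps = tap_powers_db(t60, tap_spacing_s=0.1, n_taps=5)  # 0, -7.45, ... dB
```

The appeal of the approach in the record is exactly this economy: a decay constant derived from room geometry and average absorption fixes the whole tapped-delay-line profile, with no per-path ray tracing.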

  19. Modeling and experiments of biomass combustion in a large-scale grate boiler

    DEFF Research Database (Denmark)

    Yin, Chungen; Rosendahl, Lasse; Kær, Søren Knudsen

    2007-01-01

    Grate furnaces are currently a main workhorse in large-scale firing of biomass for heat and power production. A biomass grate-fired furnace can be interpreted as a cross-flow reactor, where biomass is fed in a thick layer perpendicular to the primary air flow. The bottom of the biomass bed is exposed to preheated inlet air while the top of the bed resides within the furnace. Mathematical modeling is an efficient way to understand and improve the operation and design of combustion systems. Compared to modeling of pulverized fuel furnaces, CFD modeling of biomass-fired grate furnaces is inherently more difficult due to the complexity of the solid biomass fuel bed on the grate, the turbulent reacting flow in the combustion chamber and the intensive interaction between them. This paper presents the CFD validation efforts for a modern large-scale biomass-fired grate boiler. Modeling …

  20. Improved large-signal GaN HEMT model suitable for intermodulation distortion analysis

    Science.gov (United States)

    Liu, Lin-Sheng; Luo, Ji

    2011-12-01

    In this article, a complete empirical large-signal model of GaN high electron-mobility transistors (HEMTs) is presented. The developed nonlinear model employs a continuously differentiable trigonometric function to describe the drain-source current characteristic and its higher-order derivatives, making it suitable for the simulation of intermodulation distortion (IMD) in microwave circuits. In addition, an improved charge-conservative gate charge model is proposed to accurately trace the nonlinear gate-source and gate-drain capacitances. The model validity is demonstrated for different 0.25-µm gate-length GaN HEMTs. The simulation results of small-signal S-parameters, radio frequency (RF) large-signal power performance and two-tone IMD products show excellent agreement with the measured data.
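Why smooth higher-order derivatives matter for IMD can be seen with a common tanh-based empirical drain-current form (in the spirit of Angelov-type models; this is not the article's exact equations, and all parameter values are illustrative): the third derivative of Ids with respect to Vgs at the bias point governs third-order intermodulation products, so the model function must be continuously differentiable to that order.

```python
import math

def ids(vgs, vds, ipk=0.4, p1=2.0, vpk=-1.0, alpha=1.5, lam=0.01):
    """Illustrative tanh-based drain current (A):
    Ids = Ipk*(1 + tanh(p1*(Vgs - Vpk))) * tanh(alpha*Vds) * (1 + lam*Vds)."""
    return (ipk * (1.0 + math.tanh(p1 * (vgs - vpk)))
            * math.tanh(alpha * vds) * (1.0 + lam * vds))

def d3_ids_dvgs3(vgs, vds, h=1e-3):
    """Central-difference third derivative of Ids w.r.t. Vgs, a proxy for
    the transconductance nonlinearity driving third-order IMD."""
    return (ids(vgs + 2 * h, vds) - 2 * ids(vgs + h, vds)
            + 2 * ids(vgs - h, vds) - ids(vgs - 2 * h, vds)) / (2 * h ** 3)

g3 = d3_ids_dvgs3(vgs=-1.0, vds=10.0)   # smooth and well-defined at the bias
```

Piecewise-defined I-V fits can match the measured Ids curve yet have discontinuous second or third derivatives, which wrecks simulated IMD; a single smooth analytic form like the one above avoids that by construction.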

  1. Large deformation analysis of adhesive by Eulerian method with new material model

    International Nuclear Information System (INIS)

    Maeda, K; Nishiguchi, K; Iwamoto, T; Okazawa, S

    2010-01-01

    A material model to describe large deformation of a pressure-sensitive adhesive (PSA) is presented. The relationship between stress and strain of a PSA includes viscoelasticity and rubber-elasticity. Therefore, we propose a material model describing viscoelasticity and rubber-elasticity, and extend the presented material model to the rate form for three-dimensional finite element analysis. After proposing the material model for PSA, we formulate the Eulerian method to simulate large deformation behavior. In the Eulerian calculation, the Piecewise Linear Interface Calculation (PLIC) method for capturing the material surface is employed. By using the PLIC method, we can impose dynamic and kinematic boundary conditions on the captured material surface. Two representative computational examples are calculated to check the validity of the present methods.

  2. Additional Survival Benefit of Involved-Lesion Radiation Therapy After R-CHOP Chemotherapy in Limited Stage Diffuse Large B-Cell Lymphoma

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Jeanny [Department of Radiation Oncology, Seoul National University College of Medicine, Seoul (Korea, Republic of); Kim, Il Han, E-mail: ihkim@snu.ac.kr [Department of Radiation Oncology, Seoul National University College of Medicine, Seoul (Korea, Republic of); Cancer Research Institute, Seoul National University College of Medicine, Seoul (Korea, Republic of); Institute of Radiation Medicine, Medical Research Center, Seoul National University, Seoul (Korea, Republic of); Kim, Byoung Hyuck [Department of Radiation Oncology, Seoul National University College of Medicine, Seoul (Korea, Republic of); Kim, Tae Min; Heo, Dae Seog [Department of Internal Medicine, Seoul National University Hospital, Seoul (Korea, Republic of)

    2015-05-01

    Purpose: The purpose of this study was to evaluate the role of involved-lesion radiation therapy (ILRT) after rituximab, cyclophosphamide, doxorubicin, vincristine, and prednisone (R-CHOP) chemotherapy in limited stage diffuse large B-cell lymphoma (DLBCL) by comparing outcomes of R-CHOP therapy alone with R-CHOP followed by ILRT. Methods and Materials: We identified 198 patients treated with R-CHOP (median, 6 cycles) for pathologically confirmed DLBCL of limited stage from July 2004 to December 2012. Clinical characteristics of these patients were 33% with stage I and 66.7% with stage II; 79.8% were in the low or low-intermediate risk group; 13.6% had B symptoms; 29.8% had bulky tumors (≥7 cm); and 75.3% underwent ≥6 cycles of R-CHOP therapy. RT was given to 43 patients (21.7%) using ILRT technique, which included the prechemotherapy tumor volume with a median margin of 2 cm (median RT dose: 36 Gy). Results: After a median follow-up of 40 months, 3-year progression-free survival (PFS) and overall survival (OS) were 85.8% and 88.9%, respectively. Multivariate analysis showed ≥6 cycles of R-CHOP (PFS, P=.004; OS, P=.004) and ILRT (PFS, P=.021; OS, P=.014) were favorable prognosticators of PFS and OS. A bulky tumor (P=.027) and response to R-CHOP (P=.012) were also found to be independent factors of OS. In subgroup analysis, the effect of ILRT was prominent in patients with a bulky tumor (PFS, P=.014; OS, P=.030) or an elevated level of serum lactate dehydrogenase (LDH; PFS, P=.004; OS, P=.012). Conclusions: Our results suggest that ILRT after R-CHOP therapy improves PFS and OS in patients with limited stage DLBCL, especially in those with bulky disease or an elevated serum LDH level.

  3. Transcriptome analysis and discovery of genes involved in immune pathways in large yellow croaker (Larimichthys crocea) under high stocking density stress.

    Science.gov (United States)

    Sun, Peng; Bao, Peibo; Tang, Baojun

    2017-09-01

    The large yellow croaker, Larimichthys crocea, is an economically important maricultured species in southeast China. Owing to the importance of stocking density in commercial fish production, it is crucial to establish the physiological responses and molecular mechanisms that govern adaptation to crowding in order to optimize welfare and health. In the present study, an extensive immunity-related analysis was performed at the transcriptome level in L. crocea in response to crowding stress. Over 145 million high-quality reads were generated and de novo assembled into a final set of 40,123 unigenes. Gene Ontology and genome analyses revealed that molecular function, biological process, intracellular, ion binding, and cell process were the most highly enriched categories among genes differentially expressed under stress. Of all the pathways involved, 16 were related to the immune system, among which the complement and coagulation cascades pathway was the most enriched in differentially expressed immunity-related genes, followed by the chemokine signaling, Toll-like receptor signaling, and leukocyte transendothelial migration pathways. The consistently high expression of immune-related genes in the complement and coagulation cascades pathway (from 24 to 96 h after exposure to stress) suggests its importance both in the response to stress and in resistance against bacterial invasion at an early stage. These results also demonstrate that crowding can significantly induce immunological responses in fish. However, long-term exposure to stress eventually impairs the defense capability of fish.

  4. Large fractions of CO2-fixing microorganisms in pristine limestone aquifers appear to be involved in the oxidation of reduced sulfur and nitrogen compounds

    Science.gov (United States)

    Herrmann, Martina; Rusznyák, Anna; Akob, Denise M.; Schulze, Isabel; Opitz, Sebastian; Totsche, Kai Uwe; Küsel, Kirsten

    2015-01-01

    The traditional view of the dependency of subsurface environments on surface-derived allochthonous carbon inputs is challenged by increasing evidence for the role of lithoautotrophy in aquifer carbon flow. We linked information on autotrophy (Calvin-Benson-Bassham cycle) with total microbial community analysis in groundwater at two superimposed (upper and lower) limestone groundwater reservoirs (aquifers). Quantitative PCR revealed that up to 17% of the microbial population had the genetic potential to fix CO2 via the Calvin cycle, with abundances of cbbM and cbbL genes, encoding RubisCO (ribulose-1,5-bisphosphate carboxylase/oxygenase) forms I and II, ranging from 1.14 × 10³ to 6 × 10⁶ genes liter⁻¹ over a 2-year period. The structure of the active microbial communities based on 16S rRNA transcripts differed between the two aquifers, with a larger fraction of heterotrophic, facultatively anaerobic, soil-related groups in the oxygen-deficient upper aquifer. Most identified CO2-assimilating phylogenetic groups appeared to be involved in the oxidation of sulfur or nitrogen compounds and harbored both RubisCO forms I and II, allowing efficient CO2 fixation in environments with strong oxygen and CO2 fluctuations. The genera Sulfuricella and Nitrosomonas were represented by read fractions of up to 78% and 33%, respectively, within the cbbM and cbbL transcript pool, and accounted for 5.6% and 3.8% of 16S rRNA sequence reads, respectively, in the lower aquifer. Our results indicate that a large fraction of the bacteria in pristine limestone aquifers have the genetic potential for autotrophic CO2 fixation, with energy most likely provided by the oxidation of reduced sulfur and nitrogen compounds.

  5. Phase-field-based lattice Boltzmann modeling of large-density-ratio two-phase flows

    Science.gov (United States)

    Liang, Hong; Xu, Jiangrong; Chen, Jiangxing; Wang, Huili; Chai, Zhenhua; Shi, Baochang

    2018-03-01

    In this paper, we present a simple and accurate lattice Boltzmann (LB) model for immiscible two-phase flows, which is able to deal with large density contrasts. This model utilizes two LB equations: one is used to solve the conservative Allen-Cahn equation, and the other is adopted to solve the incompressible Navier-Stokes equations. A forcing distribution function is elaborately designed in the LB equation for the Navier-Stokes equations, which makes it much simpler than the existing LB models. In addition, the proposed model can achieve superior numerical accuracy compared with previous Allen-Cahn-type LB models. Several benchmark two-phase problems, including the static droplet, layered Poiseuille flow, and spinodal decomposition, are simulated to validate the present LB model. It is found that the present model achieves relatively small spurious velocities compared with others in the LB community, and the obtained numerical results also show good agreement with analytical solutions or available reference results. Lastly, we use the present model to investigate droplet impact on a thin liquid film with a large density ratio of 1000 and Reynolds numbers ranging from 20 to 500. The fascinating phenomenon of droplet splashing is successfully reproduced by the present model, and the numerically predicted spreading radius obeys the power law reported in the literature.
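To make the collide-and-stream machinery that such models build on concrete, here is a much-reduced sketch: a single-population D1Q3 lattice Boltzmann scheme for scalar diffusion. This is an illustrative toy, not the paper's coupled Allen-Cahn/Navier-Stokes model; grid size, relaxation time, and initial condition are arbitrary choices.

```python
import numpy as np

# Minimal D1Q3 lattice Boltzmann solver for 1D diffusion, showing the
# collide-and-stream steps that phase-field LB models are built from.
nx, tau, steps = 100, 0.8, 200
w = np.array([2/3, 1/6, 1/6])            # D1Q3 lattice weights
c = np.array([0, 1, -1])                 # lattice velocities

phi = np.zeros(nx); phi[nx // 2] = 1.0   # initial spike of the scalar field
f = w[:, None] * phi[None, :]            # start from the equilibrium state

for _ in range(steps):
    feq = w[:, None] * phi[None, :]      # equilibrium distribution
    f += (feq - f) / tau                 # BGK collision (relaxation to feq)
    for k in range(3):                   # streaming with periodic boundaries
        f[k] = np.roll(f[k], c[k])
    phi = f.sum(axis=0)                  # recover the macroscopic field
```

The zeroth moment (total mass) is conserved exactly by both collision and streaming, and the spike spreads diffusively with coefficient c_s²(τ − 1/2), the standard D1Q3 result.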

  6. Bilevel Traffic Evacuation Model and Algorithm Design for Large-Scale Activities

    Directory of Open Access Journals (Sweden)

    Danwen Bao

    2017-01-01

    This paper establishes a bilevel planning model with one master and multiple slaves to solve traffic evacuation problems. The minimum evacuation network saturation and the shortest evacuation time are used as the objective functions for the upper- and lower-level models, respectively. The optimality conditions of this model are also analyzed. An improved particle swarm optimization (PSO) method is proposed that introduces an electromagnetism-like mechanism to solve the bilevel model and enhance its convergence efficiency. A case study is carried out using the Nanjing Olympic Sports Center. The results indicate that, for large-scale activities, the average evacuation time of the classic model is shorter but the road saturation distribution is more uneven, so the overall evacuation efficiency of the network is not high. For induced emergencies, the evacuation time of the bilevel planning model is shortened. When the audience arrival rate increases from 50% to 100%, the evacuation time is shortened by 22% to 35%, indicating that the optimization effect of the bilevel planning model is stronger than that of the classic model. The model and algorithm presented in this paper can therefore provide a theoretical basis for traffic evacuation decision making for large-scale activities.
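The abstract does not detail the improved PSO. For reference, a bare-bones particle swarm optimizer on a toy objective can be sketched as follows; the electromagnetism-like mechanism and the bilevel master-slave structure are omitted, and all parameter values are generic defaults, not the paper's settings.

```python
import numpy as np

def pso(f, dim=2, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain PSO: each particle is pulled toward its personal best and
    the swarm's global best, with inertia weight w."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))        # particle positions
    v = np.zeros((n, dim))                  # particle velocities
    pbest = x.copy()                        # personal best positions
    pval = np.apply_along_axis(f, 1, x)     # personal best values
    g = pbest[pval.argmin()].copy()         # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# Minimize the 2D sphere function as a smoke test:
best, fbest = pso(lambda z: float((z ** 2).sum()))
```

In a bilevel setting such as the paper's, each evaluation of `f` would itself solve the lower-level (shortest-evacuation-time) problem, which is why convergence efficiency of the upper-level optimizer matters.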

  7. Parameterization of a Hydrological Model for a Large, Ungauged Urban Catchment

    Directory of Open Access Journals (Sweden)

    Gerald Krebs

    2016-10-01

    Urbanization leads to the replacement of natural areas by impervious surfaces and affects the catchment hydrological cycle, with adverse environmental impacts. Low impact development (LID) tools that mimic the hydrological processes of natural areas have been developed and applied to mitigate these impacts. Hydrological simulations are one way to evaluate LID performance, but the associated small-scale processes require a highly spatially distributed and explicit modeling approach. However, detailed data for model development are often not available for large urban areas, hampering model parameterization. In this paper we propose a methodology for parameterizing a hydrological model of a large, ungauged urban area that maintains a detailed surface discretization for direct parameter manipulation in LID simulation while relying firmly on available data for model conceptualization. Catchment delineation was based on a high-resolution digital elevation model (DEM), and model parameterization relied on a novel model regionalization approach. The impact of automated delineation and model regionalization on simulation results was evaluated for three monitored study catchments (5.87–12.59 ha). The simulated runoff peak was most sensitive to accurate catchment discretization and calibration, while both the runoff volume and the fit of the hydrograph were less affected.

  8. Scale-breaking effects in the quark-parton model for large p⊥ phenomena

    International Nuclear Information System (INIS)

    Baier, R.; Petersson, B.

    1977-01-01

    We discuss how the scaling violations suggested by an asymptotically free parton model, i.e., the Q²-dependence of the transverse momentum of partons within hadrons, may affect the parton-model description of large p⊥ phenomena. We show that such a mechanism can provide an explanation for the magnitude of the opposite-side correlations and their dependence on the trigger momentum. (author)

  9. Programming in the Large based on the Business Process Modeling Notation

    OpenAIRE

    Emig, Christian; Momm, Christof; Weisser, Jochen; Abeck, Sebastian

    2005-01-01

    A software application is related to the processes it supports. Today, UML diagrams esp. use case diagrams and activity diagrams are often used to model the relevant aspects of the processes within the analysis phase. In the design phase the models are manually mapped to the business layer of the software application. In the context of Service-oriented Architectures (SOA) Programming in the Large takes a different approach: Business processes are described in a programming language, i.e. a pr...

  10. Modeling of drug release from matrix systems involving moving boundaries: approximate analytical solutions.

    Science.gov (United States)

    Lee, Ping I

    2011-10-10

    The purpose of this review is to provide an overview of approximate analytical solutions to the general moving boundary diffusion problems encountered during the release of a dispersed drug from matrix systems. Starting from the theoretical basis of the Higuchi equation and its subsequent improvement and refinement, available approximate analytical solutions for the more complicated cases involving a heterogeneous matrix, boundary layer effects, a finite release medium, surface erosion, and a finite dissolution rate are also discussed. Among the various modeling approaches, the pseudo-steady-state assumption employed in deriving the Higuchi equation and related approximate analytical solutions appears to yield reasonably accurate results in describing the early-stage release of a dispersed drug from matrices of different geometries whenever the initial drug loading (A) is much larger than the drug solubility (C(s)) in the matrix (A≫C(s)). However, when the drug loading is not in great excess of the drug solubility (i.e. low A/C(s) values) or when the drug loading approaches the drug solubility (A→C(s)), which occurs often with drugs of high aqueous solubility, approximate analytical solutions based on the pseudo-steady-state assumption tend to fail, with the Higuchi equation for planar geometry exhibiting an 11.38% error compared with the exact solution. In contrast, approximate analytical solutions to this problem that do not make the pseudo-steady-state assumption, based either on the double-integration refinement of the heat balance integral method or on the direct simplification of available exact analytical solutions, show close agreement with the exact solutions in different geometries, particularly in the case of low A/C(s) values or drug loading approaching the drug solubility (A→C(s)). However, the double-integration heat balance integral approach is generally more useful in obtaining approximate analytical solutions, especially when exact solutions are not available.
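For planar geometry, the pseudo-steady-state Higuchi equation discussed above gives the cumulative amount released per unit area as Q(t) = sqrt(D·C_s·(2A − C_s)·t). A minimal sketch with hypothetical parameter values (not taken from the review) shows its square-root-of-time kinetics:

```python
import math

def higuchi_release(t, D, A, Cs):
    """Cumulative drug released per unit area from a planar matrix under
    the pseudo-steady-state Higuchi equation (valid when A >> Cs)."""
    return math.sqrt(D * Cs * (2.0 * A - Cs) * t)

# Hypothetical parameters: D in cm^2/s, A and Cs in mg/cm^3.
D, A, Cs = 1e-6, 50.0, 1.0
q1 = higuchi_release(3600.0, D, A, Cs)      # release after 1 h
q4 = higuchi_release(4 * 3600.0, D, A, Cs)  # release after 4 h
# sqrt(t) kinetics: quadrupling the time doubles the released amount.
```

Here A/C(s) = 50, safely within the A ≫ C(s) regime where the review reports the pseudo-steady-state result is accurate; as A approaches C(s) this formula would overestimate its own validity, which is the review's central caveat.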

  11. Study of the involvement of allogeneic MSCs in bone formation using the model of transgenic mice.

    Science.gov (United States)

    Kuznetsova, Daria; Prodanets, Natalia; Rodimova, Svetlana; Antonov, Evgeny; Meleshina, Aleksandra; Timashev, Peter; Zagaynova, Elena

    2017-05-04

    Mesenchymal stem cells (MSCs) are thought to be the most attractive type of cells for bone repair. However, much still remains unknown about MSCs and needs to be clarified before this treatment can be widely applied in clinical practice. The purpose of this study was to establish the involvement of allogeneic MSCs in bone formation in vivo, using a model of transgenic mice and genetically labeled cells. Polylactide scaffolds with hydroxyapatite, obtained by surface-selective laser sintering, were used. The scaffolds were sterilized and individually seeded with MSCs from the bone marrow of 5-week-old GFP(+) transgenic C57/Bl6 or GFP(-) C57/Bl6 mice. 4-mm-diameter critical-size defects were created on the calvarial bone of mice using a dental bur. Immediately after the generation of the cranial bone defects, the scaffolds, with or without seeded cells, were implanted into the injury sites. The cranial bones were harvested at either 6 or 12 weeks after implantation. GFP(+) transgenic mice bearing scaffolds with unlabeled MSCs were used to observe host cell migration into the scaffold. GFP(-) mice bearing scaffolds with GFP(+) MSCs were used to assess the functioning of the seeded MSCs. The obtained data demonstrated that allogeneic MSCs were found on the scaffolds 6 and 12 weeks post-implantation. By week 12, newly formed bone tissue derived from the seeded cells was observed, without osteogenic pre-differentiation. Host cells did not appear, and the control scaffolds without seeded cells remained empty. In addition, the possibility of vessel formation from the seeded MSCs was demonstrated, without preliminary cell cultivation under controlled conditions.

  12. Attitudes towards people with mental illness among psychiatrists, psychiatric nurses, involved family members and the general population in a large city in Guangzhou, China.

    Science.gov (United States)

    Sun, Bin; Fan, Ni; Nie, Sha; Zhang, Minglin; Huang, Xini; He, Hongbo; Rosenheck, Robert A

    2014-01-01

    Stigma towards people with mental illness is believed to be widespread in low and middle income countries. This study assessed the attitudes towards people with mental illness among psychiatrists, psychiatric nurses, involved family members of patients in a psychiatric facility and the general public using a standard 43-item survey (N = 535). Exploratory factor analysis identified four distinctive attitudes which were then compared using Analysis of Covariance (ANCOVA) among the four groups, all with ties to the largest psychiatric facility in Guangzhou, China, adjusting for sociodemographic differences. Four uncorrelated factors expressed preferences for 1) community-based treatment, social integration and a biopsychosocial model of causation, 2) direct personal relationships with people with mental illness, 3) a lack of fear and positive views of personal interactions with people with mental illness, 4) disbelief in superstitious explanations of mental illness. Statistically significant differences favored community-based treatment and biopsychosocial causation (factor 1) among professional groups (psychiatrists and nurses) as compared with family members and the general public (p problems of their relatives and support in their care.

  13. Imaging the Chicxulub central crater zone from large scale seismic acoustic wave propagation and gravity modeling

    Science.gov (United States)

    Fucugauchi, J. U.; Ortiz-Aleman, C.; Martin, R.

    2017-12-01

    Large complex craters are characterized by central uplifts that represent large-scale differential movement of deep basement from the transient cavity. Here we investigate the central sector of the large multiring Chicxulub crater, which has been surveyed by an array of marine, aerial and land-borne geophysical methods. Despite high contrasts in physical properties, contrasting results for the central uplift have been obtained, with seismic reflection surveys showing a lack of resolution in the central zone. We develop an integrated seismic and gravity model for the main structural elements, imaging the central basement uplift and the melt and breccia units. The 3-D velocity model built from interpolation of seismic data is validated using perfectly matched layer seismic acoustic wave propagation modeling, optimized at grazing incidence using a shift in the frequency domain. Modeling shows a significant lack of illumination in the central sector, masking the presence of the central uplift. Seismic energy remains trapped in an upper low-velocity zone corresponding to the sedimentary infill, melt/breccias and surrounding faulted blocks. After conversion of seismic velocities into a volume of density values, we use massively parallel forward gravity modeling to constrain the size and shape of the central uplift, which lies at 4.5 km depth, providing a high-resolution image of the crater structure. The Bouguer anomaly and the gravity response of the modeled units show asymmetries corresponding to the crater structure and the distribution of post-impact carbonates, breccias, melt and target sediments.
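As a toy illustration of the forward gravity modeling mentioned above (not the paper's massively parallel 3-D computation), the Bouguer-plate formula gives the gravity effect of the simplest slab-shaped density anomaly; the density contrast and thickness below are illustrative numbers, not Chicxulub values.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def bouguer_slab_mgal(density_contrast, thickness):
    """Gravity effect of an infinite horizontal slab (Bouguer plate):
    dg = 2*pi*G*drho*h, converted from m/s^2 to mGal (1 mGal = 1e-5 m/s^2).
    This is the elementary building block of forward gravity modelling."""
    return 2.0 * math.pi * G * density_contrast * thickness / 1e-5

# E.g. a 1 km thick basement block 200 kg/m^3 denser than surrounding
# breccias (hypothetical numbers) produces roughly an 8.4 mGal anomaly:
dg = bouguer_slab_mgal(200.0, 1000.0)
```

Real forward modeling sums contributions of many density prisms rather than a single infinite slab, but the slab formula sets the expected order of magnitude of anomalies over a density contrast.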

  14. Absorption and scattering coefficient dependence of laser-Doppler flowmetry models for large tissue volumes

    International Nuclear Information System (INIS)

    Binzoni, T; Leung, T S; Ruefenacht, D; Delpy, D T

    2006-01-01

    Based on quasi-elastic scattering theory (and a random walk on a lattice approach), a model of laser-Doppler flowmetry (LDF) has been derived which can be applied to measurements in large tissue volumes (e.g. when the interoptode distance is >30 mm). The model holds for a semi-infinite medium and takes into account the transport-corrected scattering coefficient and the absorption coefficient of the tissue, and the scattering coefficient of the red blood cells. The model holds for anisotropic scattering and for multiple scattering of the photons by the moving scatterers of finite size. In particular, it has also been possible to take into account the simultaneous presence of both Brownian and pure translational movements. An analytical and simplified version of the model has also been derived and its validity investigated for the case of measurements in human skeletal muscle tissue. It is shown that at large optode spacing it is possible to use the simplified model, taking into account only a 'mean' light pathlength, to predict the blood-flow-related parameters. It is also demonstrated that the 'classical' blood volume parameter derived from LDF instruments may not represent the actual blood volume variations when the investigated tissue volume is large. The simplified model does not need knowledge of the tissue optical parameters and thus should allow the development of very simple and cost-effective LDF hardware.

  15. Zone modelling of the thermal performances of a large-scale bloom reheating furnace

    International Nuclear Information System (INIS)

    Tan, Chee-Keong; Jenkins, Joana; Ward, John; Broughton, Jonathan; Heeley, Andy

    2013-01-01

    This paper describes the development and comparison of two- (2D) and three-dimensional (3D) mathematical models, based on the zone method of radiation analysis, to simulate the thermal performance of a large bloom reheating furnace. The modelling approach adopted in the current paper differs from previous work in that it takes into account the net radiation interchange between the top and bottom firing sections of the furnace and also allows for enthalpy exchange due to the flows of combustion products between these sections. The models were initially validated at two different furnace throughput rates using experimental and plant model data supplied by Tata Steel. The results to date demonstrate that the model predictions are in good agreement with the measured heating profiles of blooms in the actual furnace. No significant differences were found between the predictions of the 2D and 3D models. Following the validation, the 2D model was used to assess the furnace response to a changing throughput rate. It was found that this response influences the settling time of the furnace to the next steady-state operation. Overall, the current work demonstrates the feasibility and practicality of zone modelling and its potential for incorporation into a model-based furnace control system. - Highlights: ► 2D and 3D zone models of a large-scale bloom reheating furnace. ► The models were validated with experimental and plant model data. ► The transient furnace response to changing throughput rates is examined. ► No significant differences were found between the predictions of the 2D and 3D models.
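A full zone model exchanges radiation among many surface and gas zones; as a minimal two-zone illustration of the underlying radiation-exchange arithmetic (not the paper's furnace model), the net flux between two large parallel gray surfaces can be computed directly. Temperatures and emissivities below are illustrative assumptions.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiative_flux_parallel_plates(t_hot, t_cold, eps_hot, eps_cold):
    """Net radiative heat flux (W/m^2) between two large parallel gray
    surfaces, the simplest two-zone radiation-exchange result."""
    return SIGMA * (t_hot**4 - t_cold**4) / (1.0/eps_hot + 1.0/eps_cold - 1.0)

# A furnace roof at 1500 K radiating to a bloom surface at 800 K,
# both with an assumed emissivity of 0.8:
q = radiative_flux_parallel_plates(1500.0, 800.0, 0.8, 0.8)
```

The zone method generalizes this pairwise exchange to many zones via exchange areas, and the paper's contribution is coupling the top and bottom firing sections through such exchanges plus combustion-product enthalpy flows.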

  16. 5D Modelling: An Efficient Approach for Creating Spatiotemporal Predictive 3D Maps of Large-Scale Cultural Resources

    Science.gov (United States)

    Doulamis, A.; Doulamis, N.; Ioannidis, C.; Chrysouli, C.; Grammalidis, N.; Dimitropoulos, K.; Potsiou, C.; Stathopoulou, E.-K.; Ioannides, M.

    2015-08-01

    Outdoor large-scale cultural sites are highly sensitive to environmental, natural and human-made factors, implying an imminent need for spatio-temporal assessment to identify regions of potential cultural interest (material degradation, structuring, conservation). On the other hand, quite different actors are involved in Cultural Heritage research (archaeologists, curators, conservators, simple users), each with diverse needs. All these statements advocate that 5D modelling (3D geometry plus time plus levels of detail) is ideally required for the preservation and assessment of outdoor large-scale cultural sites, which is currently implemented as a simple aggregation of 3D digital models at different times and levels of detail. The main bottleneck of such an approach is its complexity, making 5D modelling impossible to validate in real-life conditions. In this paper, a cost-effective and affordable framework for 5D modelling is proposed, based on a spatial-temporal dependent aggregation of 3D digital models, incorporating a predictive assessment procedure to indicate which regions (surfaces) of an object should be reconstructed at higher levels of detail at the next time instances and which at lower ones. In this way, dynamic change history maps are created, indicating spatial probabilities of regions needing further 3D modelling at forthcoming instances. Using these maps, predictive assessment can be made, that is, surfaces can be localized within the objects where a high-accuracy reconstruction process needs to be activated at the forthcoming time instances. The proposed 5D Digital Cultural Heritage Model (5D-DCHM) is implemented using open interoperable standards based on the CityGML framework, which also allows the description of additional semantic metadata information. Visualization aspects are also supported to allow easy manipulation, interaction and representation of the 5D-DCHM geometry and the respective semantic information. The open source 3DCity

  17. Atlantis model outputs - Developing end-to-end models of the California Current Large Marine Ecosystem

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The purpose of this project is to develop spatially discrete end-to-end models of the California Current LME, linking oceanography, biogeochemistry, food web...

  18. Analysis and Design Environment for Large Scale System Models and Collaborative Model Development Project

    Data.gov (United States)

    National Aeronautics and Space Administration — As NASA modeling efforts grow more complex and more distributed among many working groups, new tools and technologies are required to integrate their efforts...

  19. Analysis and Design Environment for Large Scale System Models and Collaborative Model Development, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — As NASA modeling efforts grow more complex and more distributed among many working groups, new tools and technologies are required to integrate their efforts...

  20. Mixed-signal instrumentation for large-signal device characterization and modelling

    NARCIS (Netherlands)

    Marchetti, M.

    2013-01-01

    This thesis concentrates on the development of advanced large-signal measurement and characterization tools to support technology development, model extraction and validation, and power amplifier (PA) designs that address the newly introduced third and fourth generation (3G and 4G) wideband

  1. Using a Student-Manipulated Model to Enhance Student Learning in a Large Lecture Class

    Science.gov (United States)

    Gray, Kyle; Steer, David; McConnell, David; Owens, Katharine

    2010-01-01

    Despite years of formal education, approximately one-third of all undergraduate students still cannot explain the causes of the seasons. Student manipulation of a handheld model is one approach to teaching this concept; however, the large number of students in many introductory classes can dissuade instructors from utilizing this teaching…

  2. A large deviations approach to the transient of the Erlang loss model

    NARCIS (Netherlands)

    Mandjes, M.R.H.; Ridder, Annemarie

    2001-01-01

    This paper deals with the transient behavior of the Erlang loss model. After scaling both the arrival rate and the number of trunks, an asymptotic analysis of the blocking probability is given. In addition, the most likely path to blocking is identified. Compared to Shwartz and Weiss [Large Deviations for
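For context on the quantity being analyzed asymptotically: the steady-state blocking probability of the Erlang loss model (an M/M/n/n queue with n trunks and offered load a erlangs) is given by the Erlang B formula, which the standard recursion computes stably. This is textbook background, not the paper's transient result.

```python
def erlang_b(servers, offered_load):
    """Steady-state blocking probability of the Erlang loss model,
    via the numerically stable recursion
    B(0) = 1,  B(n) = a*B(n-1) / (n + a*B(n-1))."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# Two trunks offered 2 erlangs of traffic block 40% of calls:
p_block = erlang_b(2, 2.0)  # → 0.4
```

The recursion avoids the overflow that the closed-form ratio of a^n/n! terms would cause for large n, which matters precisely in the scaled (many-trunk) regime the paper studies.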

  3. Fluid-conveying flexible pipes modeled by large-deflection finite elements in multibody systems

    NARCIS (Netherlands)

    Meijaard, Jacob Philippus

    2013-01-01

    The modeling and simulation of flexible multibody systems containing fluid-conveying pipes are considered. It is assumed that the mass-flow rate is prescribed and constant and the pipe cross section is piecewise uniform. An existing beam element capable of handling large motions is modified to

  4. Wind turbine large-eddy simulations on very coarse grid resolutions using an actuator line model

    NARCIS (Netherlands)

    Martínez-Tossas, Luis A.; Stevens, Richard J.A.M.; Meneveau, Charles

    2016-01-01

    In this work the accuracy of the Actuator Line Model (ALM) in Large Eddy Simulations of wind turbine flow is studied under the specific conditions of very coarse spatial resolutions. For finely-resolved conditions, it is known that ALM provides better accuracy compared to the standard Actuator Disk

  5. Wind Farm Large-Eddy Simulations on Very Coarse Grid Resolutions using an Actuator Line Model

    NARCIS (Netherlands)

    Martinez, L.A.; Meneveau, C.; Stevens, Richard Johannes Antonius Maria

    2016-01-01

    In this work the accuracy of the Actuator Line Model (ALM) in Large Eddy Simulations of wind turbine flow is studied under the specific conditions of very coarse spatial resolutions. For finely-resolved conditions, it is known that ALM provides better accuracy compared to the standard Actuator Disk

  6. On large-scale shell-model calculations in ⁴He

    Energy Technology Data Exchange (ETDEWEB)

    Bishop, R.F.; Flynn, M.F. (Manchester Univ. (UK). Inst. of Science and Technology); Bosca, M.C.; Buendia, E.; Guardiola, R. (Granada Univ. (Spain). Dept. de Fisica Moderna)

    1990-03-01

    Most shell-model calculations of ⁴He require very large basis spaces for the energy spectrum to stabilise. Coupled cluster methods and an exact treatment of the centre-of-mass motion dramatically reduce the number of configurations. We thereby obtain almost exact results with small bases, but which include states of very high excitation energy. (author).

  7. Formation and disruption of tonotopy in a large-scale model of the auditory cortex

    Czech Academy of Sciences Publication Activity Database

    Tomková, M.; Tomek, J.; Novák, Ondřej; Zelenka, Ondřej; Syka, Josef; Brom, C.

    2015-01-01

    Roč. 39, č. 2 (2015), s. 131-153 ISSN 0929-5313 R&D Projects: GA ČR(CZ) GAP303/12/1347 Institutional support: RVO:68378041 Keywords : auditory cortex * large-scale model * spiking neuron * oscillation * STDP * tonotopy Subject RIV: FH - Neurology Impact factor: 1.871, year: 2015

  8. Large Deviations for the Annealed Ising Model on Inhomogeneous Random Graphs: Spins and Degrees

    Science.gov (United States)

    Dommers, Sander; Giardinà, Cristian; Giberti, Claudio; Hofstad, Remco van der

    2018-04-01

    We prove a large deviations principle for the total spin and the number of edges under the annealed Ising measure on generalized random graphs. We also give detailed results on how the annealing over the Ising model changes the degrees of the vertices in the graph and show how it gives rise to interesting correlated random graphs.

  9. Large-order behavior of nondecoupling effects in the standard model and triviality

    International Nuclear Information System (INIS)

    Aoki, K.

    1994-01-01

    We compute some nondecoupling effects in the standard model, such as the ρ parameter, to all orders in the coupling constant expansion. We analyze their large order behavior and explicitly show how they are related to the nonperturbative cutoff dependence of these nondecoupling effects due to the triviality of the theory

  10. Identifiability in N-mixture models: a large-scale screening test with bird data.

    Science.gov (United States)

    Kéry, Marc

    2018-02-01

    Binomial N-mixture models have proven very useful in ecology, conservation, and monitoring: they allow estimation and modeling of abundance separately from detection probability using simple counts. Recently, doubts about parameter identifiability have been voiced. I conducted a large-scale screening test with 137 bird data sets from 2,037 sites. I found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets. The corresponding multinomial N-mixture models had no problems. Parameter estimates under Poisson and ZIP binomial and multinomial N-mixture models were extremely similar. Identifiability problems became a little more frequent with smaller sample sizes (267 and 50 sites), but were unaffected by whether the models did or did not include covariates. Hence, binomial N-mixture model parameters with Poisson and ZIP mixtures typically appeared identifiable. In contrast, NB mixtures were often unidentifiable, which is worrying since these were often selected by Akaike's information criterion. Identifiability of binomial N-mixture models should always be checked. If problems are found, simpler models, integrated models that combine different observation models, or the use of external information via informative priors or penalized likelihoods may help. © 2017 by the Ecological Society of America.
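
    The likelihood these models maximize can be sketched directly: a Poisson prior on the latent abundance N at each site, multiplied by binomial detection across repeat visits, with N summed out. The sketch below is illustrative only and is not the study's code; the simulation settings, the truncation bound K, and the crude grid search are all invented for the example.

```python
import numpy as np
from math import lgamma, log, exp

# Illustrative Poisson binomial N-mixture fit (invented settings, not the
# study's code): the latent abundance N is marginalized out at each site.
rng = np.random.default_rng(1)
n_sites, n_visits = 100, 3
lam_true, p_true, K = 5.0, 0.4, 40            # K truncates the sum over N
N = rng.poisson(lam_true, n_sites)            # latent abundance per site
y = rng.binomial(N[:, None], p_true, (n_sites, n_visits))  # repeated counts

def log_pois(n, lam):
    return n * log(lam) - lam - lgamma(n + 1)

def log_binom(k, n, p):
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log(1 - p))

def nll(lam, p):
    """Negative log-likelihood with the latent N summed out per site."""
    total = 0.0
    for yi in y:
        terms = [log_pois(n, lam) + sum(log_binom(v, n, p) for v in yi)
                 for n in range(int(yi.max()), K + 1)]
        m = max(terms)                          # log-sum-exp for stability
        total -= m + log(sum(exp(t - m) for t in terms))
    return total

# crude grid search over (lambda, p); real analyses use proper optimizers
grid = [(l, p) for l in np.arange(3.0, 8.01, 0.5)
               for p in np.arange(0.2, 0.61, 0.1)]
lam_hat, p_hat = min(grid, key=lambda g: nll(*g))
```

    Because E[y] = λp, the fitted product lam_hat * p_hat should sit close to the observed mean count even when λ and p are individually harder to pin down, which is exactly the identifiability question the study screens for.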

  11. Molecular modeling and simulation of FabG, an enzyme involved in the fatty acid pathway of Streptococcus pyogenes.

    Science.gov (United States)

    Shafreen, Rajamohmed Beema; Pandian, Shunmugiah Karutha

    2013-09-01

    Streptococcus pyogenes (SP) is the major cause of pharyngitis accompanied by strep throat infections in humans. 3-keto acyl reductase (FabG), an important enzyme involved in the elongation cycle of the fatty acid pathway of S. pyogenes, is essential for synthesis of the cell-membrane, virulence factors and quorum sensing-related mechanisms. Targeting SPFabG may provide an important aid for the development of drugs against S. pyogenes. However, the absence of a crystal structure for FabG of S. pyogenes limits the development of structure-based drug designs. Hence, in the present study, a homology model of FabG was generated using the X-ray crystallographic structure of Aquifex aeolicus (PDB ID: 2PNF). The modeled structure was refined using energy minimization. Furthermore, active sites were predicted, and a large dataset of compounds was screened against SPFabG. The ligands were docked using the LigandFit module that is available from Discovery Studio version 2.5. From this list, 13 best hit ligands were chosen based on the docking score and binding energy. All of the 13 ligands were screened for Absorption, Distribution, Metabolism, Excretion and Toxicity (ADMET) properties. From this, the two best descriptors, along with one descriptor that lay outside the ADMET plot, were selected for molecular dynamic (MD) simulation. In vitro testing of the ligands using biological assays further substantiated the efficacy of the ligands that were screened based on the in silico methods. Copyright © 2013 Elsevier Inc. All rights reserved.

  12. Covariance approximation for large multivariate spatial data sets with an application to multiple climate model errors

    KAUST Repository

    Sang, Huiyan

    2011-12-01

    This paper investigates the cross-correlations across multiple climate model errors. We build a Bayesian hierarchical model that accounts for the spatial dependence of individual models as well as cross-covariances across different climate models. Our method allows for a nonseparable and nonstationary cross-covariance structure. We also present a covariance approximation approach to facilitate the computation in the modeling and analysis of very large multivariate spatial data sets. The covariance approximation consists of two parts: a reduced-rank part to capture the large-scale spatial dependence, and a sparse covariance matrix to correct the small-scale dependence error induced by the reduced rank approximation. We pay special attention to the case that the second part of the approximation has a block-diagonal structure. Simulation results of model fitting and prediction show substantial improvement of the proposed approximation over the predictive process approximation and the independent blocks analysis. We then apply our computational approach to the joint statistical modeling of multiple climate model errors. © 2012 Institute of Mathematical Statistics.

  13. Flexible non-linear predictive models for large-scale wind turbine diagnostics

    DEFF Research Database (Denmark)

    Bach-Andersen, Martin; Rømer-Odgaard, Bo; Winther, Ole

    2017-01-01

    We demonstrate how flexible non-linear models can provide accurate and robust predictions on turbine component temperature sensor data using data-driven principles and only a minimum of system modeling. The merits of different model architectures are evaluated using data from a large set of turbines operating under diverse conditions. We then go on to test the predictive models in a diagnostic setting, where the output of the models is used to detect mechanical faults in rotor bearings. Using retrospective data from 22 actual rotor bearing failures, the fault detection performance of the models is quantified using a structured framework that provides the metrics required for evaluating the performance in a fleet-wide monitoring setup. It is demonstrated that faults are identified with high accuracy up to 45 days before a warning from the hard-threshold warning system.

  14. Traffic Flow Prediction Model for Large-Scale Road Network Based on Cloud Computing

    Directory of Open Access Journals (Sweden)

    Zhaosheng Yang

    2014-01-01

    Full Text Available To increase the efficiency and precision of large-scale road network traffic flow prediction, a genetic algorithm-support vector machine (GA-SVM) model based on cloud computing is proposed in this paper, which is based on the analysis of the characteristics and defects of the genetic algorithm and support vector machine. In the cloud computing environment, SVM parameters are first optimized by the parallel genetic algorithm, and then this optimized parallel SVM model is used to predict traffic flow. On the basis of the traffic flow data of Haizhu District in Guangzhou City, the proposed model was verified and compared with the serial GA-SVM model and the parallel GA-SVM model based on MPI (message passing interface). The results demonstrate that the parallel GA-SVM model based on cloud computing has higher prediction accuracy, shorter running time, and higher speedup.
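
    The GA-SVM idea, an evolutionary search over kernel hyperparameters scored by validation error, can be sketched serially. The sketch below substitutes a small RBF kernel ridge regressor for the SVM so that it stays self-contained, and all GA settings and data are invented; the paper's actual contribution, parallelizing this loop in a cloud environment, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data: the GA tunes (gamma, lam) of an RBF kernel ridge regressor,
# standing in for SVM hyperparameter search on traffic-flow data
x = np.sort(rng.uniform(0, 1, 80))
y = np.sin(6 * x) + 0.1 * rng.normal(size=80)
xt, yt, xv, yv = x[::2], y[::2], x[1::2], y[1::2]

def val_mse(gamma, lam):
    """Fitness: validation MSE of the kernel model for one parameter pair."""
    K = np.exp(-gamma * (xt[:, None] - xt[None, :])**2)
    alpha = np.linalg.solve(K + lam * np.eye(len(xt)), yt)
    Kv = np.exp(-gamma * (xv[:, None] - xt[None, :])**2)
    return float(np.mean((Kv @ alpha - yv)**2))

def ga(pop_size=20, gens=15):
    # individuals encode (log10 gamma, log10 lam)
    pop = rng.uniform([-1, -6], [3, 0], size=(pop_size, 2))
    for _ in range(gens):
        fit = np.array([val_mse(10**g, 10**l) for g, l in pop])
        parents = pop[np.argsort(fit)[:pop_size // 2]]  # truncation selection
        kids = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            w = rng.uniform(size=2)
            child = w * a + (1 - w) * b                  # blend crossover
            child += rng.normal(scale=0.2, size=2)       # Gaussian mutation
            kids.append(child)
        pop = np.vstack([parents, np.array(kids)])
    fit = np.array([val_mse(10**g, 10**l) for g, l in pop])
    best = pop[np.argmin(fit)]
    return 10**best[0], 10**best[1], float(fit.min())

gamma_hat, lam_hat, best_mse = ga()
```

    Each fitness evaluation is independent, which is what makes the population loop a natural candidate for the parallel, cloud-hosted evaluation the paper describes.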

  15. Hybrid Reynolds-Averaged/Large Eddy Simulation of the Flow in a Model SCRamjet Cavity Flameholder

    Science.gov (United States)

    Baurle, R. A.

    2016-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. Experimental data available for this configuration include velocity statistics obtained from particle image velocimetry. Several turbulence models were used for the steady-state Reynolds-averaged simulations which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to not only assess the performance of the hybrid Reynolds-averaged / large eddy simulation modeling approach in a flowfield of interest to the scramjet research community, but to also begin to understand how this capability can best be used to augment standard Reynolds-averaged simulations. The numerical errors were quantified for the steady-state simulations, and at least qualitatively assessed for the scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results displayed a high degree of variability when comparing the flameholder fuel distributions obtained from each turbulence model. This prompted the consideration of applying the higher-fidelity scale-resolving simulations as a surrogate "truth" model to calibrate the Reynolds-averaged closures in a non-reacting setting prior to their use for the combusting simulations. In general, the Reynolds-averaged velocity profile predictions at the lowest fueling level matched the particle imaging measurements almost as well as was observed for the non-reacting condition. However, the velocity field predictions proved to be more sensitive to the flameholder fueling rate than was indicated in the measurements.

  16. Compensatory hypertrophy of the teres minor muscle after large rotator cuff tear model in adult male rat.

    Science.gov (United States)

    Ichinose, Tsuyoshi; Yamamoto, Atsushi; Kobayashi, Tsutomu; Shitara, Hitoshi; Shimoyama, Daisuke; Iizuka, Haku; Koibuchi, Noriyuki; Takagishi, Kenji

    2016-02-01

    Rotator cuff tear (RCT) is a common musculoskeletal disorder in the elderly. The large RCT is often irreparable due to the retraction and degeneration of the rotator cuff muscle. The integrity of the teres minor (TM) muscle is thought to affect postoperative functional recovery in some surgical treatments. Hypertrophy of the TM is found in some patients with large RCTs; however, the process underlying this hypertrophy is still unclear. The objective of this study was to determine if compensatory hypertrophy of the TM muscle occurs in a large RCT rat model. Twelve Wistar rats underwent transection of the suprascapular nerve and the supraspinatus and infraspinatus tendons in the left shoulder. The rats were euthanized 4 weeks after the surgery, and the cuff muscles were collected and weighed. The cross-sectional area and the involvement of Akt/mammalian target of rapamycin (mTOR) signaling were examined in the remaining TM muscle. The weight and cross-sectional area of the TM muscle was higher in the operated-on side than in the control side. The phosphorylated Akt/Akt protein ratio was not significantly different between these sides. The phosphorylated-mTOR/mTOR protein ratio was significantly higher on the operated-on side. Transection of the suprascapular nerve and the supraspinatus and infraspinatus tendons activates mTOR signaling in the TM muscle, which results in muscle hypertrophy. The Akt-signaling pathway may not be involved in this process. Nevertheless, activation of mTOR signaling in the TM muscle after RCT may be an effective therapeutic target of a large RCT. Copyright © 2016 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  17. The Undergraduate ALFALFA Team: A Model for Involving Undergraduates in Major Legacy Astronomy Research

    Science.gov (United States)

    Troischt, Parker; Koopmann, Rebecca A.; Haynes, Martha P.; Higdon, Sarah; Balonek, Thomas J.; Cannon, John M.; Coble, Kimberly A.; Craig, David; Durbala, Adriana; Finn, Rose; Hoffman, G. Lyle; Kornreich, David A.; Lebron, Mayra E.; Crone-Odekon, Mary; O'Donoghue, Aileen A.; Olowin, Ronald Paul; Pantoja, Carmen; Rosenberg, Jessica L.; Venkatesan, Aparna; Wilcots, Eric M.; Alfalfa Team

    2015-01-01

    The NSF-sponsored Undergraduate ALFALFA (Arecibo Legacy Fast ALFA) Team (UAT) is a consortium of 19 institutions founded to promote undergraduate research and faculty development within the extragalactic ALFALFA HI blind survey project and follow-up programs. The collaborative nature of the UAT allows faculty and students from a wide range of public and private colleges and especially those with small astronomy programs to develop scholarly collaborations. Components of the program include an annual undergraduate workshop at Arecibo Observatory, observing runs at Arecibo, computer infrastructure, summer and academic year research projects, and dissemination at national meetings (e.g., Alfvin et al., Martens et al., Sanders et al., this meeting). Through this model, faculty and students are learning how science is accomplished in a large collaboration while contributing to the scientific goals of a major legacy survey. In the 7 years of the program, 23 faculty and more than 220 undergraduate students have participated at a significant level. 40% of them have been women and members of underrepresented groups. Faculty, many of whom were new to the collaboration and had expertise in other fields, contribute their diverse sets of skills to ALFALFA-related projects via observing, data reduction, collaborative research, and research with students. 142 undergraduate students have attended the annual workshops at Arecibo Observatory, interacting with faculty, graduate students, their peers, and Arecibo staff in lectures, group activities, tours, and observing runs. Team faculty have supervised 131 summer research projects and 94 academic year (e.g., senior thesis) projects. 62 students have traveled to Arecibo Observatory for observing runs and 46 have presented their results at national meetings. 93% of alumni are attending graduate school and/or pursuing a career in STEM. Half of those pursuing graduate degrees in Physics or Astronomy are women. This work has been

  18. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations.

    Science.gov (United States)

    Hu, Eric Y; Bouteiller, Jean-Marie C; Song, Dong; Baudry, Michel; Berger, Theodore W

    2015-01-01

    Chemical synapses comprise a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model capable of capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compared its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics that were represented in the original mechanistic model and provide a method to replicate complex and diverse synaptic transmission within neuron network simulations.
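
    The Volterra functional expansion the authors use can be illustrated with a discrete second-order series: the output is a constant plus weighted input lags plus weighted products of lags. The toy system and memory depth below are invented for illustration and are far simpler than the glutamatergic synapse model; the point is that a system that truly is second order is recovered essentially exactly by linear least squares on the Volterra regressors.

```python
import numpy as np

rng = np.random.default_rng(2)

# toy nonlinear system with memory; it lies exactly in the span of a
# second-order Volterra series, so the fit below should be near-perfect
x = rng.normal(size=500)
y = 0.8 * x + 0.3 * np.roll(x, 1) - 0.5 * x * np.roll(x, 1)

M = 3  # memory depth (number of lags)
lags = np.stack([np.roll(x, k) for k in range(M)], axis=1)  # x[n-k]

# regressors: constant h0, first-order terms x[n-k], and second-order
# products x[n-k1] * x[n-k2] for k1 <= k2
cols = [np.ones_like(x)]
cols += [lags[:, k] for k in range(M)]
cols += [lags[:, k1] * lags[:, k2] for k1 in range(M) for k2 in range(k1, M)]
Phi = np.stack(cols, axis=1)

# drop the first M warm-up samples (np.roll wraps around at the start)
h, *_ = np.linalg.lstsq(Phi[M:], y[M:], rcond=None)
y_hat = Phi[M:] @ h
rel_err = np.linalg.norm(y_hat - y[M:]) / np.linalg.norm(y[M:])
```

    Real synaptic dynamics are of course not exactly low-order polynomial; the paper's contribution is showing that a truncated (third-order) expansion of the mechanistic model still tracks its behavior accurately at much lower cost.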

  19. SWAT meta-modeling as support of the management scenario analysis in large watersheds.

    Science.gov (United States)

    Azzellino, A; Çevirgen, S; Giupponi, C; Parati, P; Ragusa, F; Salvetti, R

    2015-01-01

    In the last two decades, numerous models and modeling techniques have been developed to simulate nonpoint source pollution effects. Most models simulate the hydrological, chemical, and physical processes involved in the entrainment and transport of sediment, nutrients, and pesticides. Very often these models require a distributed modeling approach and are limited in scope by the requirement of homogeneity and by the need to manipulate extensive data sets. Physically based models are extensively used in this field as a decision support for managing nonpoint source emissions. A common characteristic of this type of model is a demanding set of input state variables, which makes calibration more difficult and raises the effort and cost of implementing any simulation scenario. In this study the USDA Soil and Water Assessment Tool (SWAT) was used to model the Venice Lagoon Watershed (VLW), Northern Italy. A Multi-Layer Perceptron (MLP) network was trained on SWAT simulations and used as a meta-model for scenario analysis. The MLP meta-model was successfully trained and showed an overall accuracy higher than 70% both on the training and on the evaluation set, allowing a significant simplification in conducting scenario analysis.

  20. Potential large animal models for gene therapy of human genetic diseases of immune and blood cell systems.

    Science.gov (United States)

    Bauer, Thomas R; Adler, Rima L; Hickstein, Dennis D

    2009-01-01

    Genetic mutations involving the cellular components of the hematopoietic system--red blood cells, white blood cells, and platelets--manifest clinically as anemia, infection, and bleeding. Although gene targeting has recapitulated many of these diseases in mice, these murine homologues are limited as translational models by their small size and brief life span as well as the fact that mutations induced by gene targeting do not always faithfully reflect the clinical manifestations of such mutations in humans. Many of these limitations can be overcome by identifying large animals with genetic diseases of the hematopoietic system corresponding to their human disease counterparts. In this article, we describe human diseases of the cellular components of the hematopoietic system that have counterparts in large animal species, in most cases carrying mutations in the same gene (CD18 in leukocyte adhesion deficiency) or genes in interacting proteins (DNA cross-link repair 1C protein and protein kinase, DNA-activated catalytic polypeptide in radiation-sensitive severe combined immunodeficiency). Furthermore, we describe the potential of these animal models to serve as disease-specific preclinical models for testing the efficacy and safety of clinical interventions such as hematopoietic stem cell transplantation or gene therapy before their use in humans with the corresponding disease.

  1. Going the extra mile - creating a co-operative model for supporting patient and public involvement in research.

    Science.gov (United States)

    Horobin, Adele

    2016-01-01

    In 2014, the Chief Medical Officer and Director General of Research and Development commissioned a review of patient and public involvement in the National Institute for Health Research. The report on this review, entitled 'Going the Extra Mile' was published in March, 2015. It described the bold goal of expecting all people using health and social care, and increasing numbers of the public, to be aware of research and to choose to be involved in it. This requires more effort to build public awareness of research and better support for the public and researchers to do patient and public involvement in research. The author has created a new way of providing support for patient and public involvement based on co-operation between organisations. Termed 'share-banking', this model pools limited resources across organisations to deliver a regional programme of support activities for patient and public involvement over the long term. This includes helping organisations to share and learn from each other to avoid 're-inventing wheels' (where separate organisations each develop the same thing from the beginning). The 'Going the Extra Mile' report recommends that local organisations should work together to deliver public involvement activities across a region. 'Share-banking' should help fulfil this recommendation. The 'Going the Extra Mile' final report opened with the ambition to increase the public's awareness, participation and involvement in research. It stated the need for public and researchers to be better supported to do public involvement. A new co-operative model, termed 'share-banking', has been developed whereby organisations pool limited resources to create and sustain support for patient and public involvement in research. This should fulfil the 'Going the Extra Mile' report's recommendation to take a collaborative, cross-organisational and regional approach to public involvement.

  2. REIONIZATION ON LARGE SCALES. I. A PARAMETRIC MODEL CONSTRUCTED FROM RADIATION-HYDRODYNAMIC SIMULATIONS

    International Nuclear Information System (INIS)

    Battaglia, N.; Trac, H.; Cen, R.; Loeb, A.

    2013-01-01

    We present a new method for modeling inhomogeneous cosmic reionization on large scales. Utilizing high-resolution radiation-hydrodynamic simulations with 2048³ dark matter particles, 2048³ gas cells, and 17 billion adaptive rays in a L = 100 Mpc h⁻¹ box, we show that the density and reionization redshift fields are highly correlated on large scales (≳ 1 Mpc h⁻¹). This correlation can be statistically represented by a scale-dependent linear bias. We construct a parametric function for the bias, which is then used to filter any large-scale density field to derive the corresponding spatially varying reionization redshift field. The parametric model has three free parameters that can be reduced to one free parameter when we fit the two bias parameters to simulation results. We can differentiate degenerate combinations of the bias parameters by combining results for the global ionization histories and correlation length between ionized regions. Unlike previous semi-analytic models, the evolution of the reionization redshift field in our model is directly compared cell by cell against simulations and performs well in all tests. Our model maps the high-resolution, intermediate-volume radiation-hydrodynamic simulations onto lower-resolution, larger-volume N-body simulations (≳ 2 Gpc h⁻¹) in order to make mock observations and theoretical predictions
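
    The core operation of such a parametric model, turning a large-scale density field into a reionization-redshift field via a scale-dependent linear bias, amounts to a Fourier-space filter. The bias form b(k) below is a generic declining function chosen purely for illustration, not necessarily the fitted form from this paper, and the field is a toy Gaussian random field on a small periodic grid.

```python
import numpy as np

rng = np.random.default_rng(3)

# toy 2-D Gaussian "density" field on a periodic grid
n = 128
delta = rng.normal(size=(n, n))

# scale-dependent linear bias suppressing small scales (large k);
# b(k) = b0 / (1 + k/k0)**alpha is an illustrative parametric choice
kx = np.fft.fftfreq(n)
k = np.sqrt(kx[:, None]**2 + kx[None, :]**2)
b0, k0, alpha = 1.0, 0.05, 2.0
b = b0 / (1.0 + k / k0)**alpha

# filter the density field to get a reionization-redshift-like field
z_re = np.fft.ifft2(b * np.fft.fft2(delta)).real

# the filtered field should retain relatively less small-scale power
P_in = np.abs(np.fft.fft2(delta))**2
P_out = np.abs(np.fft.fft2(z_re))**2
hi = k > 0.25
lo = (k > 0) & (k < 0.05)
ratio_hi = P_out[hi].mean() / P_in[hi].mean()
ratio_lo = P_out[lo].mean() / P_in[lo].mean()
```

    Because the filter is linear, the output power spectrum is exactly b(k)² times the input spectrum, which is what makes the bias parameters fittable from simulated fields.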

  3. A 3D thermal runaway propagation model for a large format lithium ion battery module

    International Nuclear Information System (INIS)

    Feng, Xuning; Lu, Languang; Ouyang, Minggao; Li, Jiangqiu; He, Xiangming

    2016-01-01

    In this paper, a 3D thermal runaway (TR) propagation model is built for a large format lithium ion battery module. The 3D TR propagation model is built based on the energy balance equation. Empirical equations are utilized to simplify the calculation of the chemical kinetics for TR, whereas equivalent thermal resistant layer is employed to simplify the heat transfer through the thin thermal layer. The 3D TR propagation model is validated by experiment and can provide beneficial discussions on the mechanisms of TR propagation. According to the modeling analysis of the 3D model, the TR propagation can be delayed or prevented through: 1) increasing the TR triggering temperature; 2) reducing the total electric energy released during TR; 3) enhancing the heat dissipation level; 4) adding extra thermal resistant layer between adjacent batteries. The TR propagation is successfully prevented in the model and validated by experiment. The model with 3D temperature distribution provides a beneficial tool for researchers to study the TR propagation mechanisms and for engineers to design a safer battery pack. - Highlights: • A 3D thermal runaway (TR) propagation model for Li-ion battery pack is built. • The 3D TR propagation model can fit experimental results well. • Temperature distributions during TR propagation are presented using the 3D model. • Modeling analysis provides solutions for the prevention of TR propagation. • Quantified solutions to prevent TR propagation in battery pack are discussed.
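
    The energy-balance idea can be illustrated with a lumped 1-D chain of cells, a drastic simplification of the paper's 3-D model: each cell exchanges heat with its neighbours through an equivalent thermal resistance and, once a trigger temperature is reached, releases a fixed amount of reaction heat over a short interval. All parameter values below are invented for the sketch.

```python
import numpy as np

def simulate(R_th=0.5, n_cells=4, dt=0.02, t_end=600.0):
    """Lumped 1-D chain of battery cells; returns each cell's TR onset time."""
    C = 100.0        # heat capacity per cell, J/K      (illustrative value)
    T_trig = 150.0   # TR trigger temperature, degC     (illustrative value)
    Q_tr = 60e3      # total heat released per TR, J    (illustrative value)
    tau = 10.0       # duration of TR heat release, s
    T = np.full(n_cells, 25.0)
    t_on = np.full(n_cells, np.inf)
    t_on[0] = 0.0                  # cell 0 is triggered externally at t = 0
    t = 0.0
    while t < t_end:
        # chemical source term: constant power while a cell is reacting
        q = np.where((t >= t_on) & (t < t_on + tau), Q_tr / tau, 0.0)
        # conduction through the equivalent thermal resistance between cells
        net = np.zeros(n_cells)
        net[:-1] += (T[1:] - T[:-1]) / R_th
        net[1:] += (T[:-1] - T[1:]) / R_th
        T = T + dt * (q + net) / C          # explicit Euler energy balance
        newly = (T >= T_trig) & np.isinf(t_on)
        t_on[newly] = t
        t += dt
    return t_on

t_base = simulate(R_th=0.5)
t_slow = simulate(R_th=1.0)   # thicker thermal resistant layer between cells
```

    Consistent with point 4) of the abstract, increasing the interface resistance delays the onset of thermal runaway in the neighbouring cell, and the trigger times march outward from the initiating cell.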

  4. Cardiac regeneration using pluripotent stem cells—Progression to large animal models

    Directory of Open Access Journals (Sweden)

    James J.H. Chong

    2014-11-01

    Full Text Available Pluripotent stem cells (PSCs) have indisputable cardiomyogenic potential and therefore have been intensively investigated as a potential cardiac regenerative therapy. Current directed differentiation protocols are able to produce high yields of cardiomyocytes from PSCs, and studies in small animal models of cardiovascular disease have proven sustained engraftment and functional efficacy. Therefore, the time is ripe for cardiac regenerative therapies using PSC derivatives to be tested in large animal models that more closely resemble the hearts of humans. In this review, we discuss the results of our recent study using human embryonic stem cell derived cardiomyocytes (hESC-CM) in a non-human primate model of ischemic cardiac injury. Large scale remuscularization, electromechanical coupling and short-term arrhythmias demonstrated by our hESC-CM grafts are discussed in the context of other studies using adult stem cells for cardiac regeneration.

  5. Design of roundness measurement model with multi-systematic error for cylindrical components with large radius.

    Science.gov (United States)

    Sun, Chuanzhi; Wang, Lei; Tan, Jiubin; Zhao, Bo; Tang, Yangchao

    2016-02-01

    The paper designs a roundness measurement model with multi-systematic error, which takes eccentricity, probe offset, radius of tip head of probe, and tilt error into account for roundness measurement of cylindrical components. The effects of the systematic errors and radius of components are analysed in the roundness measurement. The proposed method is built on the instrument with a high precision rotating spindle. The effectiveness of the proposed method is verified by experiment with the standard cylindrical component, which is measured on a roundness measuring machine. Compared to the traditional limacon measurement model, the accuracy of roundness measurement can be increased by about 2.2 μm using the proposed roundness measurement model for the object with a large radius of around 37 mm. The proposed method can improve the accuracy of roundness measurement and can be used for error separation, calibration, and comparison, especially for cylindrical components with a large radius.
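
    The baseline limacon model that the proposed method extends separates the component radius and spindle eccentricity from the roundness profile by linear least squares. The sketch below fits that baseline to synthetic data; the radius and eccentricity values are invented (the large R echoes the ~37 mm component from the paper), and the multi-systematic-error terms of the proposed model are not included.

```python
import numpy as np

rng = np.random.default_rng(4)

# synthetic roundness trace: radius R plus eccentricity (a, b) and noise
n = 360
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
R_true, a_true, b_true = 37_000.0, 12.0, -8.0   # micrometres, illustrative
r = (R_true + a_true * np.cos(theta) + b_true * np.sin(theta)
     + 0.5 * rng.normal(size=n))

# limacon least squares: r(theta) ~ R + a*cos(theta) + b*sin(theta)
A = np.column_stack([np.ones(n), np.cos(theta), np.sin(theta)])
(R_hat, a_hat, b_hat), *_ = np.linalg.lstsq(A, r, rcond=None)

# roundness profile = residual after removing radius and eccentricity
profile = r - A @ np.array([R_hat, a_hat, b_hat])
```

    The paper's point is that for large radii this first-order limacon form leaves systematic residuals (probe offset, tip radius, tilt), so extra error terms must join the design matrix; the fitting machinery stays the same.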

  6. Scales of Langmuir circulation generated using a large-eddy simulation model

    International Nuclear Information System (INIS)

    Skyllingstad, Eric D.

    2001-01-01

    Sensitivity experiments were performed using a large-eddy simulation (LES) turbulence model of the ocean surface boundary layer. Parameters defining wind and wave forcing were varied to help understand how different forcing affects the formation and dispersive qualities of Langmuir circulation (LC). Comparison of the model with observed surface velocity variance shows a consistent linear increase in velocity scale with increasing wave Stokes drift; however, the model systematically underpredicts the velocity scale for large Stokes drift. Results using particle trajectories show that in open-ocean conditions, wave forcing dominates the structure of near-surface turbulence, causing organized LC cells that actively collect surface material. With weak waves, surface particles display a more random pattern in comparison to strong wave cases. Analysis of the turbulence kinetic energy budget shows that the reduction in wave forcing is offset by shear production, which produces less organized patterns in surface material in comparison to LC. (Author)

  7. A testing facility for large scale models at 100 bar and 300°C to 1000°C

    International Nuclear Information System (INIS)

    Zemann, H.

    1978-07-01

    A testing facility for large scale model tests is in construction under support of the Austrian Industry. It will contain a Prestressed Concrete Pressure Vessel (PCPV) with hot liner (300°C at 100 bar), an electrical heating system (1.2 MW, 1000°C), a gas supply system, and a cooling system for the testing space. The components themselves are models for advanced high temperature applications. The first main component which was tested successfully was the PCPV. Basic investigations of the building materials, improvements of concrete gauges, large scale model tests, and measurements within the structural concrete and on the liner have been made from the beginning of construction, through the period of prestressing and the period of stabilization, to the final pressurizing tests. On the basis of these investigations a computer controlled safety surveillance system for long term high pressure, high temperature tests has been developed. (author)

  8. A Novel Iterative and Dynamic Trust Computing Model for Large Scaled P2P Networks

    Directory of Open Access Journals (Sweden)

    Zhenhua Tan

    2016-01-01

    Full Text Available Trust management has emerged as an essential complement to the security mechanisms of P2P systems, and trustworthiness is one of the most important concepts driving decision making and the establishment of reliable relationships. Collusion attacks are a main challenge to distributed P2P trust models. Large-scale P2P systems have typical features, such as large volumes of data arriving at high speed, and this paper presents an iterative and dynamic trust computation model named IDTrust (Iterative and Dynamic Trust) designed around these properties. First, a three-layered distributed trust communication architecture is presented in IDTrust so as to separate evidence collection and trust decisions from the P2P service. Then an iterative and dynamic trust computation method is presented to improve efficiency, where only the latest evidence is enrolled in each iterative computation. On this basis, direct, indirect, and global trust models are presented with both explicit and implicit evidence. IDTrust considers multiple factors keyed to different malicious behaviors, such as similarity, successful transaction rate, and time-decay factors. Simulations and analysis demonstrate the correctness and efficiency of IDTrust against attacks, with quick response and high sensitivity during trust decisions.
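
    The abstract does not specify IDTrust in enough detail to reproduce it, but the generic iterative global-trust computation such models refine (in the spirit of EigenTrust) is easy to sketch: a damped power iteration over a row-normalized local trust matrix, seeded by pre-trusted peers. The ratings matrix, damping factor, and pre-trusted set below are invented.

```python
import numpy as np

# local trust ratings: three honest peers rate each other positively;
# peer 3 is malicious and receives no positive ratings
ratings = np.array([
    [0., 1., 1., 0.],
    [1., 0., 1., 0.],
    [1., 1., 0., 0.],
    [1., 1., 1., 0.],
])
C = ratings / ratings.sum(axis=1, keepdims=True)  # row-stochastic local trust

p = np.array([1/3, 1/3, 1/3, 0.])   # pre-trusted peers (the honest ones)
a = 0.15                            # damping toward the pre-trusted vector

t = p.copy()
for _ in range(50):                 # iterate to (practical) convergence
    t = (1 - a) * C.T @ t + a * p
```

    Because the matrix is row-stochastic and the damping weights sum to one, the global trust vector stays a probability distribution, and a colluding or free-riding peer that earns no ratings ends up with the lowest score; IDTrust's layered architecture and multi-factor evidence refine how C is built and updated, not this fixed-point iteration itself.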

  9. Simulation of hydrogen release and combustion in large scale geometries: models and methods

    International Nuclear Information System (INIS)

    Beccantini, A.; Dabbene, F.; Kudriakov, S.; Magnaud, J.P.; Paillere, H.; Studer, E.

    2003-01-01

    The simulation of H2 distribution and combustion in confined geometries such as nuclear reactor containments is a challenging task from the point of view of numerical simulation, as it involves quite disparate length and time scales, which need to be resolved appropriately and efficiently. CEA is involved in the development and validation of codes to model such problems, for external clients such as IRSN (TONUS code) and Technicatome (NAUTILUS code), or for its own safety studies. This paper provides an overview of the physical and numerical models developed for such applications, as well as some insight into the current research topics which are being pursued. Examples of H2 mixing and combustion simulations are given. (authors)

  10. Cloud-enabled large-scale land surface model simulations with the NASA Land Information System

    Science.gov (United States)

    Duffy, D.; Vaughan, G.; Clark, M. P.; Peters-Lidard, C. D.; Nijssen, B.; Nearing, G. S.; Rheingrover, S.; Kumar, S.; Geiger, J. V.

    2017-12-01

    Developed by the Hydrological Sciences Laboratory at NASA Goddard Space Flight Center (GSFC), the Land Information System (LIS) is a high-performance software framework for terrestrial hydrology modeling and data assimilation. LIS provides the ability to integrate satellite and ground-based observational products and advanced modeling algorithms to extract land surface states and fluxes. Through a partnership with the National Center for Atmospheric Research (NCAR) and the University of Washington, the LIS model is currently being extended to include the Structure for Unifying Multiple Modeling Alternatives (SUMMA). With the addition of SUMMA in LIS, meaningful simulations containing a large multi-model ensemble will be enabled and can provide advanced probabilistic continental-domain modeling capabilities at spatial scales relevant for water managers. The resulting LIS/SUMMA application framework is difficult for non-experts to install due to the large number of dependencies on specific versions of operating systems, libraries, and compilers. This has created a significant barrier to entry for domain scientists that are interested in using the software on their own systems or in the cloud. In addition, the requirement to support multiple run time environments across the LIS community has created a significant burden on the NASA team. To overcome these challenges, LIS/SUMMA has been deployed using Linux containers, which allow an entire software package along with all dependencies to be installed within a working runtime environment, and Kubernetes, which orchestrates the deployment of a cluster of containers. Within a cloud environment, users can now easily create a cluster of virtual machines and run large-scale LIS/SUMMA simulations. Installations that previously took weeks or months can now be performed in minutes. This presentation will discuss the steps required to create a cloud-enabled large-scale simulation, present examples of its use, and

  11. Large-scale building energy efficiency retrofit: Concept, model and control

    International Nuclear Information System (INIS)

    Wu, Zhou; Wang, Bo; Xia, Xiaohua

    2016-01-01

    BEER (Building energy efficiency retrofit) projects are initiated in many nations and regions around the world. Existing studies of BEER focus on modeling and planning based on one building and a one-year retrofitting period, which cannot be applied to large BEER projects with multiple buildings and multi-year retrofits. In this paper, the large-scale BEER problem is defined in a general TBT (time-building-technology) framework, which fits the essential requirements of real-world projects. The large-scale BEER is studied here through a control approach rather than the optimization approach commonly used before. Optimal control is proposed to design the optimal retrofitting strategy in terms of maximal energy savings and maximal NPV (net present value). The designed strategy changes dynamically along the dimensions of time, building and technology. The TBT framework and the optimal control approach are verified on a large BEER project, and results indicate that promising energy and cost savings can be achieved in the general TBT framework. - Highlights: • Energy efficiency retrofit of many buildings is studied. • A TBT (time-building-technology) framework is proposed. • The control system of the large-scale BEER is modeled. • The optimal retrofitting strategy is obtained.
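    The TBT planning idea can be illustrated with a toy allocation heuristic. This is only a sketch on invented numbers, not the authors' optimal-control formulation: each retrofit option is a hypothetical (building, technology, cost, annual savings) tuple, and each year a greedy pass picks the best savings-per-cost options within an annual budget.

```python
# Hypothetical data and a greedy heuristic, NOT the authors' optimal-control model.
def plan_retrofits(options, years, annual_budget):
    """options: list of (building, technology, cost, annual_kwh_saved) tuples."""
    remaining = list(options)
    plan = {}
    for year in range(years):
        budget = annual_budget
        # Highest savings-per-cost ratio first.
        remaining.sort(key=lambda o: o[3] / o[2], reverse=True)
        chosen = []
        for opt in list(remaining):
            if opt[2] <= budget:
                budget -= opt[2]
                chosen.append(opt)
                remaining.remove(opt)
        plan[year] = chosen
    return plan

options = [
    ("A", "lighting", 10_000, 8_000),   # ratio 0.80
    ("A", "HVAC",     40_000, 20_000),  # ratio 0.50
    ("B", "lighting", 12_000, 9_000),   # ratio 0.75
    ("B", "envelope", 60_000, 25_000),  # ratio ~0.42
]
plan = plan_retrofits(options, years=2, annual_budget=50_000)
```

    A genuine optimal-control or mathematical-programming formulation would trade off NPV over the whole horizon instead of making this year-by-year greedy choice.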

  12. Students’ Involvement in Authentic Modelling Practices as Contexts in Chemistry Education

    NARCIS (Netherlands)

    Prins, G.T.; Bulte, A.M.W.; van Driel, J.H.; Pilot, A.

    2009-01-01

    In science education students should come to understand the nature and significance of models. A promising strategy to achieve this goal is using authentic modelling practices as contexts for meaningful learning of models and modelling. An authentic practice is defined as professionals working with

  13. Functional models for large-scale gene regulation networks: realism and fiction.

    Science.gov (United States)

    Lagomarsino, Marco Cosentino; Bassetti, Bruno; Castellani, Gastone; Remondini, Daniel

    2009-04-01

    High-throughput experiments are shedding light on the topology of large regulatory networks and at the same time their functional states, namely the states of activation of the nodes (for example transcript or protein levels) in different conditions, times, environments. We now possess a certain amount of information about these two levels of description, stored in libraries, databases and ontologies. A current challenge is to bridge the gap between topology and function, i.e. developing quantitative models aimed at characterizing the expression patterns of large sets of genes. However, approaches that work well for small networks become impossible to master at large scales, mainly because parameters proliferate. In this review we discuss the state of the art of large-scale functional network models, addressing the issue of what can be considered as "realistic" and what the main limitations may be. We also show some directions for future work, trying to set the goals that future models should try to achieve. Finally, we will emphasize the possible benefits in the understanding of biological mechanisms underlying complex multifactorial diseases, and in the development of novel strategies for the description and the treatment of such pathologies.

  14. Impact of resilience and job involvement on turnover intention of new graduate nurses using structural equation modeling.

    Science.gov (United States)

    Yu, Mi; Lee, Haeyoung

    2018-03-06

    Nurses' turnover intention is not just a result of their maladjustment to the field; it is an organizational issue. This study aimed to construct a structural model to verify the effects of new graduate nurses' work environment satisfaction, emotional labor, and burnout on their turnover intention, with consideration of resilience and job involvement, and to test the adequacy of the developed model. A cross-sectional study and a structural equation modelling approach were used. A nationwide survey was conducted of 371 new nurses who were working in hospitals for ≤18 months between July and October, 2014. The final model accounted for 40% of the variance in turnover intention. Emotional labor and burnout had a significant positive direct effect and an indirect effect on nurses' turnover intention. Resilience had a positive direct effect on job involvement. Job involvement had a negative direct effect on turnover intention. Resilience and job involvement mediated the effect of work environment satisfaction, emotional labor, and burnout on turnover intention. It is important to strengthen new graduate nurses' resilience in order to increase their job involvement and to reduce their turnover intention. © 2018 Japan Academy of Nursing Science.

  15. A numerical model to evaluate the flow distribution in a large solar collector field

    DEFF Research Database (Denmark)

    Bava, Federico; Dragsted, Janne; Furbo, Simon

    2017-01-01

    This study presents a numerical model to evaluate the flow distribution in a large solar collector field, with solar collectors connected both in series and in parallel. The boundary conditions of the systems, such as flow rate, temperature, fluid type and layout of the collector field can...... be easily changed in the model. The model was developed in Matlab and the calculated pressure drop and flow distribution were compared with measurements from a solar collector field. A good agreement between model and measurements was found. The model was then used to study the flow distribution...... in different conditions. Balancing valves proved to be an effective way to achieve uniform flow distribution also in conditions different from those for which the valves were regulated. For small solar collector fields with limited number of collector rows connected in parallel, balancing valves...
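    As a rough illustration of what such a flow-distribution model computes, the sketch below (not the authors' Matlab model) assumes each parallel collector row i obeys a quadratic pressure-drop law dp = k_i * q_i^2. Since parallel rows share the same pressure drop, dp is found by bisection so that the branch flows sum to the total flow.

```python
# Illustrative sketch with assumed quadratic row resistances k_i.
import math

def flow_distribution(k, q_total):
    """Solve sum_i sqrt(dp / k_i) = q_total for the common pressure drop dp."""
    def total_flow(dp):
        return sum(math.sqrt(dp / ki) for ki in k)
    lo, hi = 0.0, 1.0
    while total_flow(hi) < q_total:   # grow the bracket until it contains dp
        hi *= 2.0
    for _ in range(200):              # bisection on dp
        mid = 0.5 * (lo + hi)
        if total_flow(mid) < q_total:
            lo = mid
        else:
            hi = mid
    dp = 0.5 * (lo + hi)
    return dp, [math.sqrt(dp / ki) for ki in k]

# Three rows in parallel; the least resistive row draws the most flow.
dp, q = flow_distribution(k=[1.0, 2.0, 4.0], q_total=10.0)
```

    Balancing valves, as studied in the paper, effectively add an adjustable term to each k_i so that the branch flows can be equalized.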

  16. Modal Measurements and Model Corrections of A Large Stroke Compliant Mechanism

    Directory of Open Access Journals (Sweden)

    Wijma W.

    2014-08-01

    Full Text Available In modelling flexure based mechanisms, generally flexures are modelled perfectly aligned and nominal values are assumed for the dimensions. To test the validity of these assumptions for a two Degrees Of Freedom (DOF large stroke compliant mechanism, eigenfrequency and mode shape measurements are compared to results obtained with a flexible multibody model. The mechanism consists of eleven cross flexures and seven interconnecting bodies. From the measurements 30% lower eigenfrequencies are observed than those obtained with the model. With a simplified model, it is demonstrated that these differences can be attributed to wrongly assumed leaf spring thickness and misalignment of the leaf springs in the cross flexures. These manufacturing tolerances thus significantly affect the behaviour of the two DOF mechanism, even though it was designed using the exact constraint design principle. This design principle avoids overconstraints to limit internal stresses due to manufacturing tolerances, yet this paper shows clearly that manufacturing imperfections can still result in significantly different dynamic behaviour.

  17. Introduction to focus issue: Synchronization in large networks and continuous media—data, models, and supermodels

    Science.gov (United States)

    Duane, Gregory S.; Grabow, Carsten; Selten, Frank; Ghil, Michael

    2017-12-01

    The synchronization of loosely coupled chaotic systems has increasingly found applications to large networks of differential equations and to models of continuous media. These applications are at the core of the present Focus Issue. Synchronization between a system and its model, based on limited observations, gives a new perspective on data assimilation. Synchronization among different models of the same system defines a supermodel that can achieve partial consensus among models that otherwise disagree in several respects. Finally, novel methods of time series analysis permit a better description of synchronization in a system that is only observed partially and for a relatively short time. This Focus Issue discusses synchronization in extended systems or in components thereof, with particular attention to data assimilation, supermodeling, and their applications to various areas, from climate modeling to macroeconomics.

  19. Reference Management Methodologies for Large Structural Models at Kennedy Space Center

    Science.gov (United States)

    Jones, Corey; Bingham, Ryan; Schmidt, Rick

    2011-01-01

    There have been many challenges associated with modeling some of NASA KSC's largest structures. Given the size of the welded structures here at KSC, it was critically important to properly organize model structure and carefully manage references. Additionally, because of the amount of hardware to be installed on these structures, it was very important to have a means to coordinate between different design teams and organizations, check for interferences, produce consistent drawings, and allow for simple release processes. Facing these challenges, the modeling team developed a unique reference management methodology and model fidelity methodology. This presentation will describe the techniques and methodologies that were developed for these projects. Attendees will learn about KSC's reference management and model fidelity methodologies for large structures, understand the goals of these methodologies, and appreciate the advantages of developing a reference management methodology.

  20. A review of dissolved oxygen and biochemical oxygen demand models for large rivers

    International Nuclear Information System (INIS)

    Haider, H.; Al, W.

    2013-01-01

    Development and modifications of mathematical models for Dissolved Oxygen (DO) are reviewed in this paper. The field and laboratory methods to estimate the kinetics of Carbonaceous Biochemical Oxygen Demand (CBOD) and Nitrogenous Biochemical Oxygen Demand (NBOD) are also presented. This review also covers recent approaches to BOD and DO modeling, besides the conventional ones, along with their applicability to natural rivers. The most widely available public-domain computer models and their applications in real-life projects are also briefly covered. The literature survey reveals that currently there is more emphasis on solution techniques than on understanding the mechanisms and processes that control DO in large rivers. DO modeling software contains built-in coefficients and parameters that may not reflect the specific conditions under study. It is therefore important that the selected mathematical and computer models incorporate the relevant processes specific to the river under study and remain within the available resources in terms of data collection. (author)

  1. Management and services for large-scale virtual 3D urban model data based on network

    Science.gov (United States)

    He, Zhengwei; Chen, Jing; Wu, Huayi

    2008-10-01

    The buildings in a modern city are complex and diverse, and their quantity is huge. This poses a very big challenge for constructing a 3D GIS in a network environment and, eventually, realizing the Digital Earth. After analyzing the characteristics of network services for massive 3D urban building model data, this paper focuses on the organization and management of spatial data and on the network service strategy, and proposes a progressive network transmission schema based on the spatial resolution and the component elements of 3D building model data. Next, the paper puts forward a multistage-link three-dimensional spatial data organization model and a spatial index encoding method based on a full-level quadtree structure. A virtual earth platform, called GeoGlobe, was then developed using the above theory. Experimental results show that the proposed 3D spatial data management model and service theory can effectively provide network services for large-scale 3D urban model data. Application results and user experience were good.
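    A quadtree-based index encoding of the kind mentioned above can be sketched as follows. The actual GeoGlobe encoding is not specified in the abstract, so the digit scheme here is a common, hypothetical convention: each level splits the extent into four quadrants and appends one digit (0-3) to the cell key.

```python
# Hypothetical quadtree cell-key encoding (not the paper's exact scheme).
def quadtree_key(x, y, extent, level):
    """extent = (xmin, ymin, xmax, ymax); returns a key string like '0312'."""
    xmin, ymin, xmax, ymax = extent
    key = []
    for _ in range(level):
        xmid, ymid = 0.5 * (xmin + xmax), 0.5 * (ymin + ymax)
        # Digit encodes the quadrant: bit 1 = upper half, bit 0 = right half.
        digit = (2 if y >= ymid else 0) + (1 if x >= xmid else 0)
        key.append(str(digit))
        xmin, xmax = (xmid, xmax) if x >= xmid else (xmin, xmid)
        ymin, ymax = (ymid, ymax) if y >= ymid else (ymin, ymid)
    return "".join(key)
```

    Keys of nearby objects share prefixes, so a prefix match retrieves all model data inside a quadrant at any level, which is what makes such codes useful for progressive, resolution-dependent transmission.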

  2. Modeling and experiments of biomass combustion in a large-scale grate boiler

    DEFF Research Database (Denmark)

    Yin, Chungen; Rosendahl, Lasse; Kær, Søren Knudsen

    2007-01-01

    is exposed to preheated inlet air while the top of the bed resides within the furnace. Mathematical modeling is an efficient way to understand and improve the operation and design of combustion systems. Compared to modeling of pulverized fuel furnaces, CFD modeling of biomass-fired grate furnaces...... is inherently more difficult due to the complexity of the solid biomass fuel bed on the grate, the turbulent reacting flow in the combustion chamber and the intensive interaction between them. This paper presents the CFD validation efforts for a modern large-scale biomass-fired grate boiler. Modeling...... quite much with the conditions in the real furnace. Combustion instabilities in the fuel bed impose big challenges to give reliable grate inlet BCs for the CFD modeling; the deposits formed on furnace walls and air nozzles make it difficult to define precisely the wall BCs and air jet BCs...

  3. Development of estrogen receptor beta binding prediction model using large sets of chemicals.

    Science.gov (United States)

    Sakkiah, Sugunadevi; Selvaraj, Chandrabose; Gong, Ping; Zhang, Chaoyang; Tong, Weida; Hong, Huixiao

    2017-11-03

    We developed an ERβ binding prediction model to facilitate identification of chemicals that specifically bind ERβ or ERα, together with our previously developed ERα binding model. Decision Forest was used to train the ERβ binding prediction model based on a large set of compounds obtained from EADB. Model performance was estimated through 1000 iterations of 5-fold cross-validations. Prediction confidence was analyzed using predictions from the cross-validations. Informative chemical features for ERβ binding were identified through analysis of the frequency data of chemical descriptors used in the models in the 5-fold cross-validations. 1000 permutations were conducted to assess the chance correlation. The average accuracy of the 5-fold cross-validations was 93.14% with a standard deviation of 0.64%. Prediction confidence analysis indicated that the higher the prediction confidence, the more accurate the predictions. Permutation testing revealed that the prediction model is unlikely to have been generated by chance. Eighteen informative descriptors were identified as important to ERβ binding prediction. Application of the prediction model to data from the ToxCast project yielded a very high sensitivity of 90-92%. Our results demonstrated that ERβ binding of chemicals can be accurately predicted using the developed model. Coupled with our previously developed ERα prediction model, this model is expected to facilitate drug development through identification of chemicals that specifically bind ERβ or ERα.
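    The validation protocol described above (repeated k-fold cross-validation plus a label-permutation test) can be sketched generically. The code below uses a toy nearest-centroid classifier on synthetic descriptors, since Decision Forest and the EADB data are not reproduced here; only the protocol, not the classifier, mirrors the paper.

```python
# Generic sketch of k-fold cross-validation plus a permutation test,
# with a toy classifier and synthetic data (NOT the paper's model or data).
import random

def nearest_centroid_acc(train, test):
    """Accuracy of a nearest-centroid classifier; rows are (vector, label)."""
    cent = {}
    for lbl in (0, 1):
        rows = [x for x, l in train if l == lbl]
        cent[lbl] = [sum(col) / len(rows) for col in zip(*rows)]
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    hits = sum(1 for x, l in test
               if min((0, 1), key=lambda lbl: dist(x, cent[lbl])) == l)
    return hits / len(test)

def cross_val(data, k=5):
    """Mean accuracy over k folds of one shuffled split."""
    fold_rng = random.Random(0)
    data = data[:]
    fold_rng.shuffle(data)
    folds = [data[i::k] for i in range(k)]
    accs = []
    for i in range(k):
        train = [d for j, f in enumerate(folds) if j != i for d in f]
        accs.append(nearest_centroid_acc(train, folds[i]))
    return sum(accs) / k

# Synthetic, well-separated "descriptors" for two binding classes.
rng = random.Random(42)
data = []
for lbl in (0, 1):
    for _ in range(40):
        data.append(([rng.gauss(2.0 * lbl, 0.5) for _ in range(3)], lbl))

cv_acc = cross_val(data)          # high on separable data

# Permutation test: shuffling labels should destroy the signal.
labels = [l for _, l in data]
rng.shuffle(labels)
perm_acc = cross_val([(x, lp) for (x, _), lp in zip(data, labels)])
```

    In the paper's setting the cross-validation is repeated 1000 times and the permutation test is run 1000 times; a real model's accuracy should sit far above the distribution of permuted-label accuracies, as it does in this toy case.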

  4. Long-Run Effects in Large Heterogeneous Panel Data Models with Cross-Sectionally Correlated Errors

    OpenAIRE

    Chudik, Alexander; Mohaddes, Kamiar; Pesaran, M Hashem; Raissi, Mehdi

    2016-01-01

    This paper develops a cross-sectionally augmented distributed lag (CS-DL) approach to the estimation of long-run effects in large dynamic heterogeneous panel data models with cross-sectionally dependent errors. The asymptotic distribution of the CS-DL estimator is derived under coefficient heterogeneity in the case where the time dimension (T) and the cross-section dimension (N) are both large. The CS-DL approach is compared with more standard panel data estimators that are based on autoregre...

  5. Long-run effects in large heterogeneous panel data models with cross-sectionally correlated errors

    OpenAIRE

    Chudik, Alexander; Mohaddes, Kamiar; Pesaran, M. Hashem; Raissi, Mehdi

    2015-01-01

    This paper develops a cross-sectionally augmented distributed lag (CS-DL) approach to the estimation of long-run effects in large dynamic heterogeneous panel data models with cross-sectionally dependent errors. The asymptotic distribution of the CS-DL estimator is derived under coefficient heterogeneity in the case where the time dimension (T) and the cross-section dimension (N) are both large. The CS-DL approach is compared with more standard panel data estimators that are based on autoregre...

  6. A phase transition between small- and large-field models of inflation

    International Nuclear Information System (INIS)

    Itzhaki, Nissan; Kovetz, Ely D

    2009-01-01

    We show that models of inflection point inflation exhibit a phase transition from a region in parameter space where they are of large-field type to a region where they are of small-field type. The phase transition is between universal behavior, with respect to the initial conditions, in the large-field region and non-universal behavior in the small-field region. The order parameter is the number of e-foldings. We find integer critical exponents at the transition between the two phases.
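    For context on the order parameter: in slow-roll inflation the number of e-foldings is N ≈ ∫ (V/V') dφ in reduced Planck units. The sketch below evaluates this integral numerically for the simple quadratic potential (not the inflection-point potential of the paper) and checks it against the closed form.

```python
# Numerical slow-roll e-fold count, illustrated on V = m^2 phi^2 / 2
# (the paper's inflection-point potential is not reproduced here).
def efolds(V, dV, phi_i, phi_e, steps=100_000):
    """N ~ integral of V/V' dphi from phi_e to phi_i (midpoint rule)."""
    h = (phi_i - phi_e) / steps
    total = 0.0
    for k in range(steps):
        phi = phi_e + (k + 0.5) * h
        total += V(phi) / dV(phi) * h
    return total

V  = lambda phi: 0.5 * phi * phi   # the overall mass scale cancels in V/V'
dV = lambda phi: phi
N = efolds(V, dV, phi_i=16.0, phi_e=1.0)   # analytic: (16**2 - 1**2)/4 = 63.75
```

    For an inflection-point potential, V' nearly vanishes near the inflection point, so the integrand spikes there and N becomes large and highly sensitive to the parameters, which is what makes it a natural order parameter.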

  7. Design and construction of a large reinforced concrete containment model to be tested to failure

    International Nuclear Information System (INIS)

    Ucciferro, J.J.; Horschel, D.S.

    1987-01-01

    The US Nuclear Regulatory Commission is investigating the performance of LWR containments subjected to severe accidents. This work is being performed by the Containment Integrity Division at Sandia National Laboratories (Sandia). The latest research effort involves the testing of a 1/6-scale reinforced concrete containment model. The containment, which was designed and constructed by United Engineers and Constructors, is the largest and most complex model of its kind. The design and construction of the containment model are the subject of this paper. The objective of the containment model tests is to generate data that can be used to qualify methods for reliably predicting the response of LWR containment buildings to severe accident loads. The data recorded during testing include deformations and leakage past sealing surfaces, as well as strains and displacements of the containment shell.

  8. A Multi-Resolution Spatial Model for Large Datasets Based on the Skew-t Distribution

    KAUST Repository

    Tagle, Felipe

    2017-12-06

    Large, non-Gaussian spatial datasets pose a considerable modeling challenge as the dependence structure implied by the model needs to be captured at different scales, while retaining feasible inference. Skew-normal and skew-t distributions have only recently begun to appear in the spatial statistics literature, without much consideration, however, for the ability to capture dependence at multiple resolutions, and simultaneously achieve feasible inference for increasingly large data sets. This article presents the first multi-resolution spatial model inspired by the skew-t distribution, where a large-scale effect follows a multivariate normal distribution and the fine-scale effects follow multivariate skew-normal distributions. The resulting marginal distribution for each region is skew-t, thereby allowing for greater flexibility in capturing skewness and heavy tails characterizing many environmental datasets. Likelihood-based inference is performed using a Monte Carlo EM algorithm. The model is applied as a stochastic generator of daily wind speeds over Saudi Arabia.
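    The two-level construction described above can be sketched with illustrative parameters (not those of the paper): a shared large-scale Gaussian effect plus independent fine-scale skew-normal effects, generated via Azzalini's representation Z = delta*|U0| + sqrt(1 - delta^2)*U1.

```python
# Illustrative two-level sample generator; parameters are invented,
# and the univariate case stands in for the paper's multivariate model.
import math
import random

def skew_normal(rng, alpha):
    """One draw from a standard skew-normal via Azzalini's construction."""
    delta = alpha / math.sqrt(1.0 + alpha * alpha)
    u0, u1 = rng.gauss(0, 1), rng.gauss(0, 1)
    return delta * abs(u0) + math.sqrt(1.0 - delta * delta) * u1

def simulate_region(rng, n_sites, alpha=4.0, large_scale_sd=0.5):
    """Shared large-scale Gaussian effect + fine-scale skew-normal effects."""
    shared = rng.gauss(0.0, large_scale_sd)
    return [shared + skew_normal(rng, alpha) for _ in range(n_sites)]

rng = random.Random(1)
draws = [skew_normal(rng, 4.0) for _ in range(4000)]
mean = sum(draws) / len(draws)
m2 = sum((x - mean) ** 2 for x in draws) / len(draws)
m3 = sum((x - mean) ** 3 for x in draws) / len(draws)
skewness = m3 / m2 ** 1.5          # positive: right-skewed, as intended
region = simulate_region(rng, n_sites=10)
```

    In the actual model the two levels are multivariate with spatial covariance structure, and the parameters are fitted by Monte Carlo EM rather than fixed by hand.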

  9. A refined regional modeling approach for the Corn Belt - Experiences and recommendations for large-scale integrated modeling

    Science.gov (United States)

    Panagopoulos, Yiannis; Gassman, Philip W.; Jha, Manoj K.; Kling, Catherine L.; Campbell, Todd; Srinivasan, Raghavan; White, Michael; Arnold, Jeffrey G.

    2015-05-01

    Nonpoint source pollution from agriculture is the main source of nitrogen and phosphorus in the stream systems of the Corn Belt region in the Midwestern US. This region comprises two large river basins, the intensely row-cropped Upper Mississippi River Basin (UMRB) and Ohio-Tennessee River Basin (OTRB), which are considered the key contributing areas for the Northern Gulf of Mexico hypoxic zone according to the US Environmental Protection Agency. Thus, in this area it is of utmost importance to ensure that intensive agriculture for food, feed and biofuel production can coexist with a healthy water environment. To address these objectives within a river basin management context, an integrated modeling system has been constructed with the hydrologic Soil and Water Assessment Tool (SWAT) model, capable of estimating river basin responses to alternative cropping and/or management strategies. To improve modeling performance compared to previous studies and provide a spatially detailed basis for scenario development, this SWAT Corn Belt application incorporates a greatly refined subwatershed structure based on 12-digit hydrologic units or 'subwatersheds' as defined by the US Geological Survey. The model setup, calibration and validation are time-demanding and challenging tasks for these large systems, given the scale-intensive data requirements and the need to ensure the reliability of flow and pollutant load predictions at multiple locations. Thus, the objectives of this study are both to comprehensively describe this large-scale modeling approach, providing estimates of pollution and crop production in the region, and to present strengths and weaknesses of integrated modeling at such a large scale along with how it can be improved on the basis of the current modeling structure and results.
The predictions were based on a semi-automatic hydrologic calibration approach for large-scale and spatially detailed modeling studies, with the use of the Sequential

  10. SMILE: experimental results of the WP4 PTS large scale test performed on a component in terms of cracked cylinder involving warm pre-stress

    International Nuclear Information System (INIS)

    Kerkhof, K.; Bezdikian, G.; Moinereau, D.; Dahl, A.; Wadier, Y.; Gilles, P.; Keim, E.; Chapuliot, S.; Taylor, N.; Lidbury, D.; Sharples, J.; Budden, P.; Siegele, D.; Nagel, G.; Bass, R.; Emond, D.

    2005-01-01

    The Reactor Pressure Vessel (RPV) is an essential component, which is liable to limit the lifetime of PWR plants. The assessment of defects in RPVs subjected to pressurized thermal shock (PTS) transients made at a European level generally does not necessarily consider the beneficial effect of the load history (warm pre-stress, WPS). The SMILE project - Structural Margin Improvements in aged embrittled RPV with Load history Effects - aims to give sufficient elements to demonstrate, model and validate the beneficial WPS effect. It also aims to harmonize the different approaches in the national codes and standards regarding the inclusion of the WPS effect in RPV structural integrity assessment. The project includes significant experimental work on WPS type experiments with C(T) specimens and a PTS type transient experiment on a large component. This paper deals with the results of the PTS type transient experiment on a component-like specimen subjected to WPS loading, the so-called Validation Test, carried out within the framework of work package WP4. The test specimen is a cylindrical thick-walled specimen with a thickness of 40 mm and an outer diameter of 160 mm, provided with an internal fully circumferential crack with a depth of about 15 mm. The specified load path type is Load-Cool-Unload-Fracture (LCUF). No crack initiation occurred during cooling (thermal shock loading), although the loading path crossed the fracture toughness curve in the transition region. The benefit of the WPS effect upon final reloading up to fracture in the lower-shelf region was shown clearly: the corresponding fracture load during reloading was significantly higher than the crack initiation values of the original material in the lower-shelf region. The post-test fractographic evaluation showed that the fracture mode was predominantly cleavage, with some secondary cracks emanating from the major crack. (authors)

  11. Modeling large offshore wind farms under different atmospheric stability regimes with the Park wake model

    DEFF Research Database (Denmark)

    Pena Diaz, Alfredo; Réthoré, Pierre-Elouan; Rathmann, Ole

    2014-01-01

    We evaluate a modified version of the Park wake model against power data from a west-east row in the middle of the Horns Rev I offshore wind farm. The evaluation is performed on data classified in four different atmospheric stability conditions, for a narrow wind speed range, and a wide range...... turbines on the row and those using the WAsP recommended value closer to the data for the first turbines. It is generally seen that under stable and unstable atmospheric conditions the power deficits are the highest and lowest, respectively, but the wind conditions under both stability regimes...

  12. Modeling large offshore wind farms under different atmospheric stability regimes with the Park wake model

    DEFF Research Database (Denmark)

    Peña, Alfredo; Réthoré, Pierre-Elouan; Rathmann, Ole

    2013-01-01

    Here, we evaluate a modified version of the Park wake model against power data from a west-east row in the middle of the Horns Rev I offshore wind farm. The evaluation is performed on data classified in four different atmospheric stability conditions, for a narrow wind speed range, and a wide range...... turbines and those using the WAsP recommended value closer to the data for the first turbines. It is generally seen that under stable and unstable atmospheric conditions the power deficits are the highest and lowest, respectively, but the wind conditions under both stability regimes are different...

  13. Findings and Challenges in Fine-Resolution Large-Scale Hydrological Modeling

    Science.gov (United States)

    Her, Y. G.

    2017-12-01

    Fine-resolution large-scale (FL) modeling can provide the overall picture of the hydrological cycle and transport while taking into account unique local conditions in the simulation. It can also help develop water resources management plans that are consistent across spatial scales by extensively describing the spatial consequences of decisions and hydrological events. FL modeling is expected to become common in the near future as global-scale remotely sensed data are emerging and computing resources have advanced rapidly. Several spatially distributed models are available for hydrological analyses. Some of them describe two-dimensional overland processes with numerical methods such as finite difference/element methods (FDM/FEM), which require either excessive computing resources to manipulate large matrices (implicit schemes) or small simulation time intervals to maintain the stability of the solution (explicit schemes). Others make unrealistic assumptions, such as constant overland flow velocity, to reduce the computational load of the simulation. Thus, simulation efficiency often comes at the expense of precision and reliability in FL modeling. Here, we introduce a new FL continuous hydrological model and its application to four watersheds in different landscapes and sizes from 3.5 km2 to 2,800 km2 at a spatial resolution of 30 m on an hourly basis. The model provided acceptable accuracy statistics in reproducing hydrological observations made in the watersheds. The modeling outputs, including maps of simulated travel time, runoff depth, soil water content, and groundwater recharge, were animated, visualizing the dynamics of hydrological processes occurring in the watersheds during and between storm events. Findings and challenges were discussed in the context of modeling efficiency, accuracy, and reproducibility, which we found can be improved by employing advanced computing techniques and hydrological understandings, by using remotely sensed hydrological
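    The time-step constraint on explicit schemes mentioned above can be demonstrated on the simplest case: an explicit finite-difference step for 1-D diffusion is stable only when r = D*dt/dx**2 <= 1/2, so halving the grid spacing forces a four-fold smaller time step. A minimal, generic sketch (not the paper's model):

```python
# Explicit FTCS update for 1-D diffusion with fixed (Dirichlet) ends,
# showing the r <= 1/2 stability limit that drives explicit-scheme cost.
def diffuse(u, r, steps):
    """Apply u_i <- u_i + r*(u_{i-1} - 2*u_i + u_{i+1}) for `steps` steps."""
    u = u[:]
    for _ in range(steps):
        u = [u[0]] + [u[i] + r * (u[i - 1] - 2.0 * u[i] + u[i + 1])
                      for i in range(1, len(u) - 1)] + [u[-1]]
    return u

u0 = [0.0] * 20 + [1.0] + [0.0] * 20       # initial spike on 41 nodes
stable   = diffuse(u0, r=0.4, steps=200)   # r <= 0.5: decays smoothly
unstable = diffuse(u0, r=0.6, steps=200)   # r > 0.5: oscillates and blows up
```

    At fine spatial resolution over large domains this constraint multiplies the number of time steps, which is exactly the efficiency-versus-stability trade-off the abstract describes.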

  14. A large-scale stochastic spatiotemporal model for Aedes albopictus-borne chikungunya epidemiology

    Science.gov (United States)

    Chandra, Nastassya L.; Proestos, Yiannis; Lelieveld, Jos; Christophides, George K.; Parham, Paul E.

    2017-01-01

    Chikungunya is a viral disease transmitted to humans primarily via the bites of infected Aedes mosquitoes. The virus caused a major epidemic in the Indian Ocean in 2004, affecting millions of inhabitants, while cases have also been observed in Europe since 2007. We developed a stochastic spatiotemporal model of Aedes albopictus-borne chikungunya transmission based on our recently developed environmentally-driven vector population dynamics model. We designed an integrated modelling framework incorporating large-scale gridded climate datasets to investigate disease outbreaks on Reunion Island and in Italy. We performed Bayesian parameter inference on the surveillance data, and investigated the validity and applicability of the underlying biological assumptions. The model successfully represents the outbreak and measures of containment in Italy, suggesting wider applicability in Europe. In its current configuration, the model implies two different viral strains, thus two different outbreaks, for the two-stage Reunion Island epidemic. Characterisation of the posterior distributions indicates a possible relationship between the second larger outbreak on Reunion Island and the Italian outbreak. The model suggests that vector control measures, with different modes of operation, are most effective when applied in combination: adult vector intervention has a high impact but is short-lived, larval intervention has a low impact but is long-lasting, and quarantining infected territories, if applied strictly, is effective in preventing large epidemics. We present a novel approach in analysing chikungunya outbreaks globally using a single environmentally-driven mathematical model. Our study represents a significant step towards developing a globally applicable Ae. albopictus-borne chikungunya transmission model, and introduces a guideline for extending such models to other vector-borne diseases. PMID:28362820
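    For flavour only, a drastically simplified stochastic epidemic step (a Reed-Frost style binomial chain with invented parameters) is sketched below; the paper's actual model additionally couples environmentally driven vector population dynamics, which are not represented here.

```python
# Toy stochastic SIR chain, NOT the paper's vector-borne model:
# per-step binomial infections and recoveries in a closed population.
import random

def step(S, I, R, beta, gamma, rng):
    """One Reed-Frost style step; beta/N is the per-infective transmission prob."""
    N = S + I + R
    p_inf = 1.0 - (1.0 - beta / N) ** I      # per-susceptible infection prob.
    new_inf = sum(1 for _ in range(S) if rng.random() < p_inf)
    new_rec = sum(1 for _ in range(I) if rng.random() < gamma)
    return S - new_inf, I + new_inf - new_rec, R + new_rec

rng = random.Random(0)
S, I, R = 990, 10, 0
for _ in range(60):
    S, I, R = step(S, I, R, beta=0.6, gamma=0.2, rng=rng)
```

    A vector-borne version replaces the direct S-to-I contact term with human-to-mosquito and mosquito-to-human transmission, with mosquito abundance driven by the climate data the paper uses.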

  15. An Axiomatic Analysis Approach for Large-Scale Disaster-Tolerant Systems Modeling

    Directory of Open Access Journals (Sweden)

    Theodore W. Manikas

    2011-02-01

    Full Text Available Disaster tolerance in computing and communications systems refers to the ability to maintain a degree of functionality throughout the occurrence of a disaster. We accomplish the incorporation of disaster tolerance within a system by simulating various threats to the system operation and identifying areas for system redesign. Unfortunately, extremely large systems are not amenable to comprehensive simulation studies due to the large computational complexity requirements. To address this limitation, an axiomatic approach that decomposes a large-scale system into smaller subsystems is developed that allows the subsystems to be independently modeled. This approach is implemented using a data communications network system example. The results indicate that the decomposition approach produces simulation responses that are similar to the full system approach, but with greatly reduced simulation time.

  16. A dynamic subgrid scale model for Large Eddy Simulations based on the Mori-Zwanzig formalism

    Science.gov (United States)

    Parish, Eric J.; Duraisamy, Karthik

    2017-11-01

    The development of reduced models for complex multiscale problems remains one of the principal challenges in computational physics. The optimal prediction framework of Chorin et al. [1], which is a reformulation of the Mori-Zwanzig (M-Z) formalism of non-equilibrium statistical mechanics, provides a framework for the development of mathematically-derived reduced models of dynamical systems. Several promising models have emerged from the optimal prediction community and have found application in molecular dynamics and turbulent flows. In this work, a new M-Z-based closure model that addresses some of the deficiencies of existing methods is developed. The model is constructed by exploiting similarities between two levels of coarse-graining via the Germano identity of fluid mechanics and by assuming that memory effects have a finite temporal support. The appeal of the proposed model, which will be referred to as the 'dynamic-MZ-τ' model, is that it is parameter-free and has a structural form imposed by the mathematics of the coarse-graining process (rather than the phenomenological assumptions made by the modeler, such as in classical subgrid scale models). To promote the applicability of M-Z models in general, two procedures are presented to compute the resulting model form, helping to bypass the tedious error-prone algebra that has proven to be a hindrance to the construction of M-Z-based models for complex dynamical systems. While the new formulation is applicable to the solution of general partial differential equations, demonstrations are presented in the context of Large Eddy Simulation closures for the Burgers equation, decaying homogeneous turbulence, and turbulent channel flow. The performance of the model and validity of the underlying assumptions are investigated in detail.
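    The Germano-identity-based dynamic procedure underlying the 'dynamic-MZ-τ' model can be illustrated in its classical setting. The sketch below computes a global least-squares dynamic coefficient for a Smagorinsky-type closure of the 1D Burgers equation; the filter widths and averaging are illustrative choices, not the paper's formulation:

```python
import numpy as np

def box_filter(u, w):
    """Periodic box filter of odd width w (in grid points)."""
    pad = w // 2
    up = np.concatenate([u[-pad:], u, u[:pad]])
    return np.convolve(up, np.ones(w) / w, mode="valid")

def dynamic_coefficient(u, dx):
    """Least-squares dynamic coefficient C via the Germano identity for a
    Smagorinsky-type closure tau = -2 C Delta^2 |u_x| u_x of 1D Burgers."""
    delta = dx                       # grid-filter width taken as the grid spacing
    uh = box_filter(u, 3)            # test-filtered field, nominal width 3*dx
    ux = np.gradient(u, dx)
    uhx = np.gradient(uh, dx)
    L = box_filter(u * u, 3) - uh * uh                  # resolved (Leonard) stress
    m_grid = -2.0 * delta**2 * np.abs(ux) * ux          # grid-level model term
    m_test = -2.0 * (3 * delta)**2 * np.abs(uhx) * uhx  # test-level model term
    M = m_test - box_filter(m_grid, 3)                  # Germano-identity model difference
    return np.sum(L * M) / np.sum(M * M)                # global least-squares fit
```

    The appeal of the dynamic procedure, as the abstract notes, is that the coefficient is computed from the resolved field itself rather than prescribed.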

  17. On a Numerical and Graphical Technique for Evaluating some Models Involving Rational Expectations

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders Rygh

    Campbell and Shiller (1987) proposed a graphical technique for the present value model which consists of plotting the spread and theoretical spread as calculated from the cointegrated vector autoregressive model. We extend these techniques to a number of rational expectation models and give...
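    A minimal sketch of the Campbell-Shiller construction: given an estimated VAR(1) for z_t = (Δx_t, S_t)', the theoretical spread plotted against the observed spread is S*_t = θ e1'(δA)(I − δA)^{-1} z_t, where θ and δ are the present value model's proportionality and discount parameters (standard notation, assumed here):

```python
import numpy as np

def theoretical_spread(A, z, theta, delta):
    """Theoretical spread S*_t = theta * e1'(delta A)(I - delta A)^{-1} z_t
    implied by a VAR(1) z_{t+1} = A z_t + eps with state z_t = (dx_t, S_t)'."""
    k = A.shape[0]
    e1 = np.zeros(k)
    e1[0] = 1.0                      # selects the first (growth) component
    B = delta * A
    g = theta * (e1 @ B @ np.linalg.inv(np.eye(k) - B))
    return float(g @ z)
```

    Plotting S*_t against S_t over the sample is then the graphical diagnostic the abstract refers to.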

  18. On a numerical and graphical technique for evaluating some models involving rational expectations

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders Rygh

    Campbell and Shiller (1987) proposed a graphical technique for the present value model which consists of plotting the spread and theoretical spread as calculated from the cointegrated vector autoregressive model. We extend these techniques to a number of rational expectation models and give...

  19. B→Xs+ missing energy in models with large extra dimensions

    International Nuclear Information System (INIS)

    Mahajan, Namit

    2003-01-01

    We study the neutral current flavor changing rare decay mode B→X_s + missing energy within the framework of theories with large extra spatial dimensions. The corresponding standard model signature is B→X_s + ν ν̄. But in theories with large extra dimensions, it is possible to have scalars and gravitons in the final state, making it quite distinct from any other scenario, where there are no gravitons and the scalars are far too heavy relative to the B meson to be present as external particles. We give an estimate of the branching ratio for such processes for different values of the number of extra dimensions and the scale of the effective theory. The predicted branching ratios can be comparable with the standard model rate for a restrictive choice of the parameters

  20. Lepton number violation in theories with a large number of standard model copies

    International Nuclear Information System (INIS)

    Kovalenko, Sergey; Schmidt, Ivan; Paes, Heinrich

    2011-01-01

    We examine lepton number violation (LNV) in theories with a saturated black hole bound on a large number of species. Such theories have been advocated recently as a possible solution to the hierarchy problem and an explanation of the smallness of neutrino masses. On the other hand, the violation of lepton number can be a potential phenomenological problem of this N-copy extension of the standard model, since, due to the low quantum gravity scale, black holes may induce TeV scale LNV operators generating unacceptably large rates of LNV processes. We show, however, that this issue can be avoided by introducing a spontaneously broken U(1)_{B-L}. Then, due to the existence of a specific compensation mechanism between contributions of different Majorana neutrino states, LNV processes in the standard model copy become extremely suppressed with rates far beyond experimental reach.

  1. A PRACTICAL ONTOLOGY FOR THE LARGE-SCALE MODELING OF SCHOLARLY ARTIFACTS AND THEIR USAGE

    Energy Technology Data Exchange (ETDEWEB)

    RODRIGUEZ, MARKO A. [Los Alamos National Laboratory; BOLLEN, JOHAN [Los Alamos National Laboratory; VAN DE SOMPEL, HERBERT [Los Alamos National Laboratory

    2007-01-30

    The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exists, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which lends the scholarly process to statistical analysis and computational support. We present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.

  2. Fumarylacetoacetate hydrolase deficient pigs are a novel large animal model of metabolic liver disease

    Directory of Open Access Journals (Sweden)

    Raymond D. Hickey

    2014-07-01

    FAH-deficiency produced a lethal defect in utero that was corrected by administration of 2-(2-nitro-4-trifluoromethylbenzoyl-1,3 cyclohexanedione (NTBC throughout pregnancy. Animals on NTBC were phenotypically normal at birth; however, the animals were euthanized approximately four weeks after withdrawal of NTBC due to clinical decline and physical examination findings of severe liver injury and encephalopathy consistent with acute liver failure. Biochemical and histological analyses, characterized by diffuse and severe hepatocellular damage, confirmed the diagnosis of severe liver injury. FAH−/− pigs provide the first genetically engineered large animal model of a metabolic liver disorder. Future applications of FAH−/− pigs include discovery research as a large animal model of HT1 and spontaneous acute liver failure, and preclinical testing of the efficacy of liver cell therapies, including transplantation of hepatocytes, liver stem cells, and pluripotent stem cell-derived hepatocytes.

  3. Centrifuge modelling of large diameter pile in sand subject to lateral loading

    DEFF Research Database (Denmark)

    Leth, Caspar Thrane

    and cyclic behaviour of large diameter rigid piles in dry sand by use of physical modelling. The physical modelling has been carried out at Department of Civil Engineering at the Danish Technical University (DTU.BYG), in the period from 2005 to 2009. The main centrifuge facilities, and especially...... the equipment for lateral load tests were at the start of the research in 2005 outdated and a major part of the work with the geotechnical centrifuge included renovation and upgrading of the facilities. The research with respect to testing of large diameter piles included:  Construction of equipment...... with embedment lengths of 6, 8 and 10 times the diameter. The tests have been carried out with a load eccentricity of 2.5 m to 6.5 m above the sand surface. The present report includes a description of the centrifuge facilities, applied test procedure and equipment along with presentation of the obtained results....

  4. Grid-connection of large offshore windfarms utilizing VSC-HVDC: Modeling and grid impact

    DEFF Research Database (Denmark)

    Xue, Yijing; Akhmatov, Vladislav

    2009-01-01

    Utilization of Voltage Source Converter (VSC) – High Voltage Direct Current (HVDC) systems for grid-connection of large offshore windfarms becomes relevant as installed power capacities as well as distances to the connection points of the on-land transmission systems increase. At the same time...... for grid-connection of large offshore windfarms. The VSC-HVDC model is implemented using a general approach of independent control of active and reactive power in normal operation situations. The on-land VSC inverter, which is also called a grid-side inverter, provides voltage support to the transmission...... system and comprises a LVFRT solution in short-circuit faults. The presented model, LVFRT solution and impact on the system stability are investigated as a case study of a 1,000 MW offshore windfarm grid-connected through four parallel VSC-HVDC systems each with a 280 MVA power rating. The investigation...

  5. Dynamic model of frequency control in Danish power system with large scale integration of wind power

    DEFF Research Database (Denmark)

    Basit, Abdul; Hansen, Anca Daniela; Sørensen, Poul Ejnar

    2013-01-01

    This work evaluates the impact of large scale integration of wind power in future power systems when 50% of load demand can be met from wind power. The focus is on active power balance control, where the main source of power imbalance is an inaccurate wind speed forecast. In this study, a Danish...... power system model with large scale of wind power is developed and a case study for an inaccurate wind power forecast is investigated. The goal of this work is to develop an adequate power system model that depicts relevant dynamic features of the power plants and compensates for load generation...... imbalances, caused by inaccurate wind speed forecast, by an appropriate control of the active power production from power plants....
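    The active power balance task described above amounts to allocating the imbalance caused by the wind forecast error across regulating plants. A minimal sketch (proportional allocation by regulating capacity; names hypothetical, not the Danish model's control scheme):

```python
def dispatch_imbalance(forecast_mw, actual_mw, capacities_mw):
    """Split the power imbalance from a wind forecast error across regulating
    plants in proportion to their regulating capacity (illustrative sketch)."""
    imbalance = forecast_mw - actual_mw   # >0: wind under-delivered, plants ramp up
    total = sum(capacities_mw)
    return [imbalance * c / total for c in capacities_mw]
```
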

  6. Longitudinal Modeling of Adolescents' Activity Involvement, Problem Peer Associations, and Youth Smoking

    Science.gov (United States)

    Metzger, Aaron; Dawes, Nickki; Mermelstein, Robin; Wakschlag, Lauren

    2011-01-01

    Longitudinal associations among different types of organized activity involvement, problem peer associations, and cigarette smoking were examined in a sample of 1040 adolescents (mean age = 15.62 at baseline, 16.89 at 15-month assessment, 17.59 at 24 months) enriched for smoking experimentation (83% had tried smoking). A structural equation model…

  7. Modelling of phase equilibria and related properties of mixtures involving lipids

    DEFF Research Database (Denmark)

    Cunico, Larissa

    Many challenges involving physical and thermodynamic properties arise in the production of edible oils and biodiesel, such as the limited availability of experimental data and the need for reliable predictions. In the case of lipids, a lack of experimental data for pure components and also for their mixtures in open...

  8. Design of Large Thinned Arrays Using Different Biogeography-Based Optimization Migration Models

    Directory of Open Access Journals (Sweden)

    Sotirios K. Goudos

    2016-01-01

    Full Text Available Array thinning is a common discrete-valued combinatorial optimization problem, and evolutionary algorithms are suitable techniques for solving it. Biogeography-Based Optimization (BBO), which is inspired by the science of biogeography, is a stochastic population-based evolutionary algorithm (EA). The original BBO uses a linear migration model to describe how species migrate from one island to another. Other nonlinear migration models have been proposed in the literature. In this paper, we apply BBO with four different migration models to five different large array design cases. Additionally we compare results with five other popular algorithms. The problem dimensions range from 150 to 300. The results show that BBO with the sinusoidal migration model generally performs better than the other algorithms. However, these results are considered to be indicative and do not generally apply to all optimization problems in antenna design.
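    The linear and sinusoidal migration models compared above can be written down directly. The sketch below gives the standard BBO immigration (λ) and emigration (μ) rates for an island hosting k of n possible species, with maximum rates I and E defaulting to 1:

```python
import math

def linear_migration(k, n, I=1.0, E=1.0):
    """Linear BBO migration model: lambda decreases and mu increases
    linearly with the species count k on the island."""
    lam = I * (1.0 - k / n)
    mu = E * (k / n)
    return lam, mu

def sinusoidal_migration(k, n, I=1.0, E=1.0):
    """Sinusoidal BBO migration model: rates follow a half-period cosine,
    flattening near k = 0 and k = n."""
    lam = (I / 2.0) * (math.cos(math.pi * k / n) + 1.0)
    mu = (E / 2.0) * (-math.cos(math.pi * k / n) + 1.0)
    return lam, mu
```

    Both models agree at the half-full island (λ = μ = 1/2 for k = n/2); they differ in how quickly rates change near empty and full islands, which is what drives the performance differences reported above.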

  9. Large scale inference in the Infinite Relational Model: Gibbs sampling is not enough

    DEFF Research Database (Denmark)

    Albers, Kristoffer Jon; Moth, Andreas Leon Aagard; Mørup, Morten

    2013-01-01

    The stochastic block-model and its non-parametric extension, the Infinite Relational Model (IRM), have become key tools for discovering group-structure in complex networks. Identifying these groups is a combinatorial inference problem which is usually solved by Gibbs sampling. However, whether...... Gibbs sampling suffices and can be scaled to the modeling of large scale real world complex networks has not been examined sufficiently. In this paper we evaluate the performance and mixing ability of Gibbs sampling in the Infinite Relational Model (IRM) by implementing a high performance Gibbs sampler....... We find that Gibbs sampling can be computationally scaled to handle millions of nodes and billions of links. Investigating the behavior of the Gibbs sampler for different sizes of networks we find that the mixing ability decreases drastically with the network size, clearly indicating a need...
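    The combinatorial inference step can be sketched as the per-node conditional a collapsed Gibbs sampler evaluates. For simplicity the sketch uses a CRP prior with fixed within/between link probabilities in place of the IRM's integrated Beta-Bernoulli likelihood (a deliberate simplification):

```python
import math

def assignment_logprobs(A, z, i, alpha=1.0, p_in=0.9, p_out=0.1):
    """Log-probabilities for reassigning node i under a CRP(alpha) prior in a
    toy block model. A is a binary adjacency matrix (list of lists), z the
    current cluster labels. Fixed p_in/p_out stand in for the IRM's
    integrated Beta-Bernoulli likelihood."""
    n = len(A)
    sizes = {}
    for j in range(n):
        if j != i:
            sizes[z[j]] = sizes.get(z[j], 0) + 1
    new_label = max(sizes) + 1
    logp = {}
    for k in list(sizes) + [new_label]:
        # CRP prior: proportional to cluster size, or alpha for a new cluster
        lp = math.log(alpha) if k == new_label else math.log(sizes[k])
        for j in range(n):
            if j == i:
                continue
            p = p_in if z[j] == k else p_out
            lp += math.log(p if A[i][j] else 1.0 - p)
        logp[k] = lp
    return logp
```

    A Gibbs sweep samples each node's label from these (normalized) probabilities in turn; the mixing issues discussed above arise because such single-node moves rarely relocate whole groups at once.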

  10. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    This paper presents a comprehensive approach to sensitivity and uncertainty analysis of large-scale computer models that is analytic (deterministic) in principle and that is firmly based on the model equations. The theory and application of two systems based upon computer calculus, GRESS and ADGEN, are discussed relative to their role in calculating model derivatives and sensitivities without a prohibitive initial manpower investment. Storage and computational requirements for these two systems are compared for a gradient-enhanced version of the PRESTO-II computer model. A Deterministic Uncertainty Analysis (DUA) method that retains the characteristics of analytically computing result uncertainties based upon parameter probability distributions is then introduced and results from recent studies are shown. 29 refs., 4 figs., 1 tab
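    The DUA idea of propagating parameter distributions through analytically computed sensitivities can be sketched at first order. Central differences stand in here for the derivatives that GRESS and ADGEN obtain by computer calculus; independent parameters are assumed:

```python
def dua_variance(model, p, var_p, h=1e-6):
    """First-order uncertainty propagation: var(R) ~ sum_i (dR/dp_i)^2 var(p_i).
    model maps a parameter list to a scalar result R; var_p holds the
    (independent) parameter variances."""
    var_R = 0.0
    for i in range(len(p)):
        up = list(p); up[i] += h
        dn = list(p); dn[i] -= h
        dRdp = (model(up) - model(dn)) / (2 * h)   # sensitivity dR/dp_i
        var_R += dRdp**2 * var_p[i]
    return var_R
```

    For a linear model the first-order result is exact, e.g. R = 2a + 3b with var(a) = 1, var(b) = 4 gives var(R) = 4 + 36 = 40.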

  11. The Topological CP1 Model and the Large-N Matrix Integral

    Science.gov (United States)

    Eguchi, Tohru; Yang, Sung-Kil

    We discuss the topological CP1 model which consists of the holomorphic maps from Riemann surfaces onto CP1. We construct a large-N matrix model which reproduces precisely the partition function of the CP1 model at all genera of Riemann surfaces. The action of our matrix model has the form $\mathrm{Tr}\, V(M) = -2\,\mathrm{Tr}\, M(\log M - 1) + 2\sum t_{n,P}\,\mathrm{Tr}\, M^n(\log M - c_n) + \sum \frac{1}{n}\, t_{n-1,Q}\,\mathrm{Tr}\, M^n$, with $c_n = \sum_{j=1}^{n} 1/j$, where M is an N × N Hermitian matrix and t_{n,P} (t_{n,Q}), (n = 0, 1, 2, …) are the coupling constants of the nth descendant of the puncture (Kähler) operator.

  12. The Oncopig Cancer Model: An Innovative Large Animal Translational Oncology Platform

    DEFF Research Database (Denmark)

    Schachtschneider, Kyle M.; Schwind, Regina M.; Newson, Jordan

    2017-01-01

    , and outcomes). To date, however, cancer research progress has been markedly hampered by lack of a genotypically, anatomically, and physiologically relevant large animal model. Without progressive cancer models, discoveries are hindered and cures are improbable. Herein, we describe a transgenic porcine model......Despite an improved understanding of cancer molecular biology, immune landscapes, and advancements in cytotoxic, biologic, and immunologic anti-cancer therapeutics, cancer remains a leading cause of death worldwide. More than 8.2 million deaths were attributed to cancer in 2012......, and it is anticipated that cancer incidence will continue to rise, with 19.3 million cases expected by 2025. The development and investigation of new diagnostic modalities and innovative therapeutic tools is critical for reducing the global cancer burden. Toward this end, transitional animal models serve a crucial role...

  13. Full-Scale Approximations of Spatio-Temporal Covariance Models for Large Datasets

    KAUST Repository

    Zhang, Bohai

    2014-01-01

    Various continuously-indexed spatio-temporal process models have been constructed to characterize spatio-temporal dependence structures, but the computational complexity for model fitting and predictions grows in a cubic order with the size of dataset and application of such models is not feasible for large datasets. This article extends the full-scale approximation (FSA) approach by Sang and Huang (2012) to the spatio-temporal context to reduce computational complexity. A reversible jump Markov chain Monte Carlo (RJMCMC) algorithm is proposed to select knots automatically from a discrete set of spatio-temporal points. Our approach is applicable to nonseparable and nonstationary spatio-temporal covariance models. We illustrate the effectiveness of our method through simulation experiments and application to an ozone measurement dataset.
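    The FSA decomposition itself is simple to state: a low-rank (predictive-process) part built from a set of knots, plus a compactly tapered residual that restores the short-range dependence. The sketch below is a 1-D spatial illustration of the idea with an exponential covariance and spherical taper, not the paper's spatio-temporal construction:

```python
import numpy as np

def exp_cov(x, y, ell=1.0):
    """Exponential covariance between two 1-D point sets."""
    return np.exp(-np.abs(x[:, None] - y[None, :]) / ell)

def fsa_cov(x, knots, ell=1.0, taper_range=0.5):
    """Full-scale approximation: low-rank predictive-process part plus a
    tapered residual (spherical taper, range taper_range)."""
    C = exp_cov(x, x, ell)
    Cxk = exp_cov(x, knots, ell)
    Ckk = exp_cov(knots, knots, ell)
    C_low = Cxk @ np.linalg.solve(Ckk, Cxk.T)          # predictive-process part
    d = np.abs(x[:, None] - x[None, :]) / taper_range
    taper = np.where(d < 1.0, (1 - d)**2 * (1 + d / 2), 0.0)
    return C_low + (C - C_low) * taper                 # exact on the diagonal
```

    Because the taper equals 1 at zero distance, the approximation reproduces the full covariance exactly on the diagonal while keeping the residual matrix sparse.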

  14. Role of subgrid-scale modeling in large eddy simulation of wind turbine wake interactions

    DEFF Research Database (Denmark)

    Sarlak, Hamid; Meneveau, C.; Sørensen, Jens Nørkær

    2015-01-01

    A series of simulations are carried out to evaluate specific features of the Large Eddy Simulation (LES) technique in wind turbine wake interactions. We aim to model wake interactions of two aligned model rotors. The effects of the rotor resolution, actuator line force filter size, and Reynolds...... number are investigated at certain tip speed ratios. The numerical results are validated against wind tunnel measurements in terms of the mean velocity, turbulence intensity and the power and thrust coefficients. Special emphasis is placed on the role played by subgrid scale (SGS) models in affecting...... the flow structures and turbine loading, as this has been studied less in prior investigations. It is found that, compared with the effects of rotor resolution and force kernel size, the SGS models have only a minor impact on the wake and predicted power performance. These observations confirm the usual...

  15. RE-Europe, a large-scale dataset for modeling a highly renewable European electricity system.

    Science.gov (United States)

    Jensen, Tue V; Pinson, Pierre

    2017-11-28

    Future highly renewable energy systems will couple to complex weather and climate dynamics. This coupling is generally not captured in detail by the open models developed in the power and energy system communities, where such open models exist. To enable modeling such a future energy system, we describe a dedicated large-scale dataset for a renewable electric power system. The dataset combines a transmission network model, as well as information for generation and demand. Generation includes conventional generators with their technical and economic characteristics, as well as weather-driven forecasts and corresponding realizations for renewable energy generation for a period of 3 years. These may be scaled according to the envisioned degrees of renewable penetration in a future European energy system. The spatial coverage, completeness and resolution of this dataset open the door to the evaluation, scaling analysis and replicability check of a wealth of proposals in, e.g., market design, network actor coordination and forecasting of renewable power generation.

  16. RE-Europe, a large-scale dataset for modeling a highly renewable European electricity system

    Science.gov (United States)

    Jensen, Tue V.; Pinson, Pierre

    2017-11-01

    Future highly renewable energy systems will couple to complex weather and climate dynamics. This coupling is generally not captured in detail by the open models developed in the power and energy system communities, where such open models exist. To enable modeling such a future energy system, we describe a dedicated large-scale dataset for a renewable electric power system. The dataset combines a transmission network model, as well as information for generation and demand. Generation includes conventional generators with their technical and economic characteristics, as well as weather-driven forecasts and corresponding realizations for renewable energy generation for a period of 3 years. These may be scaled according to the envisioned degrees of renewable penetration in a future European energy system. The spatial coverage, completeness and resolution of this dataset open the door to the evaluation, scaling analysis and replicability check of a wealth of proposals in, e.g., market design, network actor coordination and forecasting of renewable power generation.

  17. RE-Europe, a large-scale dataset for modeling a highly renewable European electricity system

    DEFF Research Database (Denmark)

    Jensen, Tue Vissing; Pinson, Pierre

    2017-01-01

    Future highly renewable energy systems will couple to complex weather and climate dynamics. This coupling is generally not captured in detail by the open models developed in the power and energy system communities, where such open models exist. To enable modeling such a future energy system......, we describe a dedicated large-scale dataset for a renewable electric power system. The dataset combines a transmission network model, as well as information for generation and demand. Generation includes conventional generators with their technical and economic characteristics, as well as weather......-driven forecasts and corresponding realizations for renewable energy generation for a period of 3 years. These may be scaled according to the envisioned degrees of renewable penetration in a future European energy system. The spatial coverage, completeness and resolution of this dataset open the door...

  18. Standard Model Higgs searches with the ATLAS detector at the Large Hadron Collider

    CERN Document Server

    Nisati, Aleandro

    2012-01-01

    The investigation of the mechanism responsible for electroweak symmetry breaking is one of the most important tasks of the scientific program of the Large Hadron Collider. The experimental results on the search for the Standard Model Higgs boson with 1 to 2 fb^-1 of proton-proton collision data at √s = 7 TeV recorded by the ATLAS detector are presented and discussed. No significant excess of events is found with respect to the expectations from Standard Model processes, and the production of a Higgs boson is excluded at 95% Confidence Level for the mass regions 144-232, 256-282 and 296-466 GeV.

  19. 56 mm twin aperture model dipole magnet for the large hadron collider

    Energy Technology Data Exchange (ETDEWEB)

    Ikaeheimo, J.; Savelainen, M.

    1996-08-01

    A 56 mm twin aperture model dipole magnet for the Large Hadron Collider has been built at the European Laboratory for Particle Physics (CERN). The magnet design incorporates stainless steel collars and a special yoke structure to minimize saturation-induced field errors. The magnet has proved to be the most successful model prototype constructed so far. In the tests the design field of 10.0 Tesla was achieved with a record-short training. In this paper, the quench performance and the electromagnetic behavior of the magnet are presented and discussed.

  20. A survey of modelling methods for high-fidelity wind farm simulations using large eddy simulation

    DEFF Research Database (Denmark)

    Breton, Simon-Philippe; Sumner, J.; Sørensen, Jens Nørkær

    2017-01-01

    Large eddy simulations (LES) of wind farms have the capability to provide valuable and detailed information about the dynamics of wind turbine wakes. For this reason, their use within the wind energy research community is on the rise, spurring the development of new models and methods. This review...... surveys the most common schemes available to model the rotor, atmospheric conditions and terrain effects within current state-of-the-art LES codes, of which an overview is provided. A summary of the experimental research data available for validation of LES codes within the context of single and multiple...

  1. Investigating the determinants of E-banking loyalty for large business customers: two empirical models

    OpenAIRE

    Fragata, A.; Moustakas, E.

    2013-01-01

    The current research paper proposes two models for the determinants of E-banking Loyalty for large business customers. The results demonstrated that five main quality dimensions were identified for the e-banking portals: assurance, reliability, convenience and quality monitoring by the financial director of the company. The results also confirm that e-banking quality has a strong impact on e-banking loyalty via the mediating effect of e-trust and switching costs have stro...

  2. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2013-12-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.
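    The basic shape of such a bandwidth-contention model can be sketched in a few lines: predicted runtime is compute time plus a memory term that grows as more cores share a socket's bandwidth, plus communication time. This is a toy illustration with hypothetical parameter names, not the paper's calibrated framework:

```python
def predict_time(t_compute, bytes_moved, bw_per_socket, cores_sharing, t_comm):
    """Toy runtime prediction: memory time is bytes moved divided by the
    per-core share of the socket's sustained bandwidth (STREAM-like)."""
    t_mem = bytes_moved / (bw_per_socket / cores_sharing)
    return t_compute + t_mem + t_comm
```

    Doubling the number of cores contending for the same socket bandwidth doubles the memory term, which is the behavior the STREAM-calibrated contention model captures.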

  3. Cross-flow turbines: progress report on physical and numerical model studies at large laboratory scale

    Science.gov (United States)

    Wosnik, Martin; Bachant, Peter

    2016-11-01

    Cross-flow turbines show potential in marine hydrokinetic (MHK) applications. A research focus is on accurately predicting device performance and wake evolution to improve turbine array layouts for maximizing overall power output, i.e., minimizing wake interference, or taking advantage of constructive wake interaction. Experiments were carried with large laboratory-scale cross-flow turbines D O (1 m) using a turbine test bed in a large cross-section tow tank, designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. Several turbines of varying solidity were employed, including the UNH Reference Vertical Axis Turbine (RVAT) and a 1:6 scale model of the DOE-Sandia Reference Model 2 (RM2) turbine. To improve parameterization in array simulations, an actuator line model (ALM) was developed to provide a computationally feasible method for simulating full turbine arrays inside Navier-Stokes models. Results are presented for the simulation of performance and wake dynamics of cross-flow turbines and compared with experiments and body-fitted mesh, blade-resolving CFD. Supported by NSF-CBET Grant 1150797, Sandia National Laboratories.
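    Comparing lab-scale and full-scale turbines relies on the standard nondimensional metrics. A short sketch of the usual definitions (power and thrust coefficients, tip speed ratio) for a cross-flow turbine with frontal area A:

```python
def performance_coefficients(power_w, thrust_n, u_inf, rho, area_m2, omega, radius):
    """Standard nondimensional turbine metrics: C_P = P / (0.5 rho A U^3),
    C_T = T / (0.5 rho A U^2), and tip speed ratio = omega R / U."""
    q = 0.5 * rho * u_inf**2 * area_m2   # dynamic pressure times frontal area
    cp = power_w / (q * u_inf)
    ct = thrust_n / q
    tsr = omega * radius / u_inf
    return cp, ct, tsr
```

    Reynolds number independence, as discussed above, means these coefficients stop changing as the scale (and hence Reynolds number) increases, which is what justifies extrapolating tow-tank results to full scale.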

  4. Ensemble modeling to predict habitat suitability for a large-scale disturbance specialist.

    Science.gov (United States)

    Latif, Quresh S; Saab, Victoria A; Dudley, Jonathan G; Hollenbeck, Jeff P

    2013-11-01

    managers attempting to balance salvage logging with habitat conservation in burned-forest landscapes where black-backed woodpecker nest location data are not immediately available. Ensemble modeling represents a promising tool for guiding conservation of large-scale disturbance specialists.

  5. Large Scale Skill in Regional Climate Modeling and the Lateral Boundary Condition Scheme

    Science.gov (United States)

    Veljović, K.; Rajković, B.; Mesinger, F.

    2009-04-01

    Several points are made concerning the somewhat controversial issue of regional climate modeling: should a regional climate model (RCM) be expected to maintain the large scale skill of the driver global model that is supplying its lateral boundary condition (LBC)? Given that this is normally desired, is it able to do so without help via the fairly popular large scale nudging? Specifically, without such nudging, will the RCM kinetic energy necessarily decrease with time compared to that of the driver model or analysis data as suggested by a study using the Regional Atmospheric Modeling System (RAMS)? Finally, can the lateral boundary condition scheme make a difference: is the almost universally used but somewhat costly relaxation scheme necessary for a desirable RCM performance? Experiments are made to explore these questions running the Eta model in two versions differing in the lateral boundary scheme used. One of these schemes is the traditional relaxation scheme, and the other the Eta model scheme in which information is used at the outermost boundary only, and not all variables are prescribed at the outflow boundary. Forecast lateral boundary conditions are used, and results are verified against the analyses. Thus, skill of the two RCM forecasts can be and is compared not only against each other but also against that of the driver global forecast. A novel verification method is used in the manner of customary precipitation verification in that forecast spatial wind speed distribution is verified against analyses by calculating bias adjusted equitable threat scores and bias scores for wind speeds greater than chosen wind speed thresholds. In this way, focusing on a high wind speed value in the upper troposphere, verification of large scale features we suggest can be done in a manner that may be more physically meaningful than verifications via spectral decomposition that are a standard RCM verification method. The results we have at this point are somewhat

  6. Coupled daily streamflow and water temperature modelling in large river basins

    Directory of Open Access Journals (Sweden)

    M. T. H. van Vliet

    2012-11-01

    Full Text Available Realistic estimates of daily streamflow and water temperature are required for effective management of water resources (e.g. for electricity and drinking water production) and freshwater ecosystems. Although hydrological and process-based water temperature modelling approaches have been successfully applied to small catchments and short time periods, much less work has been done at large spatial and temporal scales. We present a physically based modelling framework for daily river discharge and water temperature simulations applicable to large river systems on a global scale. Model performance was tested globally at 1/2 × 1/2° spatial resolution and a daily time step for the period 1971–2000. We made specific evaluations on large river basins situated in different hydro-climatic zones and characterized by different anthropogenic impacts. Effects of anthropogenic heat discharges on simulated water temperatures were incorporated by using global gridded thermoelectric water use datasets and representing thermal discharges as point sources into the heat advection equation. This resulted in a significant increase in the quality of the water temperature simulations for thermally polluted basins (Rhine, Meuse, Danube and Mississippi). Due to large reservoirs in the Columbia which affect streamflow and thermal regimes, a reservoir routing model was used. This resulted in a significant improvement in the performance of the river discharge and water temperature modelling. Overall, realistic estimates were obtained at daily time step for both river discharge (median normalized BIAS = 0.3; normalized RMSE = 1.2; r = 0.76) and water temperature (median BIAS = −0.3 °C; RMSE = 2.8 °C; r = 0.91) for the entire validation period, with similar performance during warm, dry periods. Simulated water temperatures are sensitive to headwater temperature, depending on resolution and flow velocity. A high sensitivity of water temperature to river
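    Representing a thermal discharge as a point source in the heat advection equation reduces, at the point of entry, to a fully-mixed energy balance between the river and the effluent. A minimal sketch:

```python
def mixed_temperature(q_river, t_river, q_effluent, t_effluent):
    """Fully-mixed downstream water temperature after a thermal point source:
    flow-weighted energy balance of river and effluent (discharges in m3/s,
    temperatures in degrees C)."""
    return (q_river * t_river + q_effluent * t_effluent) / (q_river + q_effluent)
```
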

  7. Assessment of large-scale water storage dynamics in the Community Land Model

    Science.gov (United States)

    Swenson, S. C.; Lawrence, D. M.

    2015-12-01

A fundamental task of the Community Land Model (CLM; the land component of the Community Earth System Model) is the partitioning of precipitation into evapotranspiration (ET), runoff, and water storage. Testing model performance against site-level observations provides important insight, but can be challenging to extrapolate to the larger spatial scales at which Earth System models typically operate. Traditionally, measurements of river discharge have provided the best, and in many cases only, metrics with which to assess the performance of land models at large spatial scales (i.e. regional to continental scale river basins). Because the quantity of discharge measurements has declined globally, and the human modification and management of rivers has increased, new methods of testing land model performance are needed. As global observations of total water storage (TWS) and ET have become available, the potential for direct assessment of the quality of the simulated water budget exists. In this presentation, we use TWS observations from the GRACE satellite project and upscaled flux tower measurements from the FLUXNET-MTE dataset to assess the performance of CLM parameterizations such as canopy interception, storage, and evaporation; soil evaporation; and soil moisture and groundwater dynamics. We then give examples of alternative model parameterizations, and show how these parameterizations improve model performance relative to GRACE and FLUXNET-MTE based metrics.
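The basic assessment step can be sketched simply: sum the modeled storage components into a total water storage series, remove the time mean (GRACE observes anomalies, not absolute storage), and score the result against the observed anomaly series. The component series below are synthetic stand-ins, not CLM or GRACE output.

```python
# Minimal sketch of a TWS-anomaly comparison. All series are invented for
# illustration; a real assessment would use gridded CLM and GRACE fields.
import math

months = list(range(24))
snow = [20.0 * max(0.0, math.cos(2 * math.pi * m / 12)) for m in months]
soil = [50.0 + 30.0 * math.sin(2 * math.pi * m / 12) for m in months]
gw = [100.0 - 0.5 * m for m in months]          # slow groundwater decline

tws = [s + so + g for s, so, g in zip(snow, soil, gw)]
mean_tws = sum(tws) / len(tws)
model_anom = [v - mean_tws for v in tws]        # anomalies, as GRACE reports

# Hypothetical "observed" anomalies: model signal plus a small perturbation
grace_anom = [a + 3.0 * math.sin(2 * math.pi * m / 6)
              for a, m in zip(model_anom, months)]

def pearson_r(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x)
                    * sum((b - my) ** 2 for b in y))
    return num / den

r = pearson_r(model_anom, grace_anom)
```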

  8. Using Simplified Models to Assist Fault Detection and Diagnosis in Large Hydrogenerators

    Directory of Open Access Journals (Sweden)

    Geraldo Carvalho Brito Junior

    2017-01-01

Full Text Available Based on experimental evidence collected in a set of twenty 700 MW hydrogenerators, this article shows that the operating conditions of large hydrogenerator journal bearings may change significantly and unpredictably, without apparent reason. These changes prevent the accurate determination of bearing dynamic coefficients and make predicting the dynamic behavior of these machines unfeasible, even with refined models. This makes it difficult to distinguish normal changes in hydrogenerator dynamics from changes created by a fault event. To overcome this difficulty, this article proposes a back-to-basics step: the use of simplified mathematical models to assist hydrogenerator vibration monitoring, exemplified by modeling a 700 MW hydrogenerator. A first model estimates the influence of changes in bearing operating conditions on the bearing stiffnesses, considering only the hydrodynamic effects of an isoviscous oil film with a linear thickness distribution. A second model simulates hydrogenerator dynamics using only 10 degrees of freedom, giving the monitored vibrations as outputs under normal operating conditions or in the presence of a fault. This article shows that simplified models may give satisfactory results when bearing operating conditions are properly determined, comparable to those obtained by more refined models or by measurements on the modeled hydrogenerator.
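In the same simplified spirit, a direct bearing stiffness can be estimated by numerically differentiating an analytic isoviscous load-capacity formula with respect to journal eccentricity. The sketch below uses the classical short-bearing (Ocvirk) approximation with invented parameter values; it is not the article's model of the 700 MW machine.

```python
# Minimal sketch: direct journal-bearing stiffness from the short-bearing
# (Ocvirk) load capacity, k = dW/de. Parameter values are illustrative.
import math

def ocvirk_load(eps, mu=0.03, omega=10.47, R=1.0, L=0.4, c=3e-4):
    """Static load capacity of a short journal bearing, isoviscous oil film.

    eps is the eccentricity ratio e/c (0 < eps < 1)."""
    U = omega * R  # journal surface speed, m/s
    return (mu * U * L**3 / (4.0 * c**2)) * (eps / (1.0 - eps**2) ** 2) \
        * math.sqrt(16.0 * eps**2 + math.pi**2 * (1.0 - eps**2))

def direct_stiffness(eps, c=3e-4, d_eps=1e-4):
    """k = dW/de with journal displacement e = eps * c (central difference)."""
    dW = ocvirk_load(eps + d_eps) - ocvirk_load(eps - d_eps)
    return dW / (2.0 * d_eps * c)

k_low = direct_stiffness(0.3)   # lightly loaded bearing
k_high = direct_stiffness(0.7)  # heavily loaded: the oil film is much stiffer
```

The strong growth of stiffness with eccentricity is exactly why unpredictable changes in bearing operating conditions (load, oil temperature) propagate into the machine's monitored dynamics.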

  9. Shallow to Deep Convection Transition over a Heterogeneous Land Surface Using the Land Model Coupled Large-Eddy Simulation

    Science.gov (United States)

    Lee, J.; Zhang, Y.; Klein, S. A.

    2017-12-01

The triggering of the land breeze, and hence the development of deep convection over heterogeneous land, should be understood as a consequence of complex processes involving various factors from the land surface and the atmosphere simultaneously. This is a sub-grid-scale process that many large-scale models have difficulty incorporating into their parameterization schemes, partly for lack of understanding. Thus, it is imperative to approach the problem with a high-resolution modeling framework. In this study, we use SAM-SLM (Lee and Khairoutdinov, 2015), a large-eddy simulation model coupled to a land model, to explore cloud effects such as the cold pool, cloud shading and soil moisture memory on the land breeze structure and the further development of cloud and precipitation over a heterogeneous land surface. The atmospheric large-scale forcing and the initial sounding are taken from the new composite case study of fair-weather, non-precipitating shallow cumuli at ARM SGP (Zhang et al., 2017). We model the land surface as a chessboard pattern with alternating leaf area index (LAI). The patch contrast of the LAI is adjusted to span weak to strong heterogeneity amplitudes. The surface sensible and latent heat fluxes are computed according to the given LAI, representing differential surface heating over a heterogeneous land surface. Separate from the surface forcing imposed by the originally modeled surface, cases that transition into moist convection can induce another layer of surface heterogeneity from 1) radiation shading by clouds, 2) the soil moisture pattern adjusted by rain, and 3) the spreading cold pool. First, we assess and quantify the individual cloud effects on the land breeze and moist convection under weak wind, to simplify the feedback processes. The same set of experiments is then repeated under a sheared background wind with a low-level jet, a typical summertime wind pattern at the ARM SGP site, to
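The chessboard surface setup can be sketched directly: alternate LAI between patches and partition a fixed available energy into sensible and latent heat according to leaf area. The partition rule and all numbers below are illustrative assumptions, not the SAM-SLM surface scheme.

```python
# Sketch of the chessboard LAI surface and a crude flux partition:
# more leaf area -> larger latent heat flux, smaller sensible heat flux.
import numpy as np

def chessboard_lai(nx, ny, patch, lai_low, lai_high):
    """Checkerboard of LAI patches of size `patch` x `patch` cells."""
    ix = (np.arange(nx) // patch)[:, None]
    iy = (np.arange(ny) // patch)[None, :]
    board = (ix + iy) % 2
    return np.where(board == 0, lai_low, lai_high)

def surface_fluxes(lai, rnet=500.0, lai_ref=4.0):
    """Partition available energy rnet (W/m^2) by an assumed evaporative
    fraction that scales with LAI (clipped to a plausible range)."""
    evap_frac = np.clip(lai / lai_ref, 0.1, 0.9)
    lh = evap_frac * rnet           # latent heat flux
    sh = rnet - lh                  # sensible heat flux
    return sh, lh

lai = chessboard_lai(64, 64, patch=16, lai_low=0.5, lai_high=4.0)
sh, lh = surface_fluxes(lai)
```

Raising the LAI contrast between patches directly raises the sensible-heat contrast, which is the differential heating that drives the land breeze.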

  10. Modeling the Hydrologic Effects of Large-Scale Green Infrastructure Projects with GIS

    Science.gov (United States)

    Bado, R. A.; Fekete, B. M.; Khanbilvardi, R.

    2015-12-01

    Impervious surfaces in urban areas generate excess runoff, which in turn causes flooding, combined sewer overflows, and degradation of adjacent surface waters. Municipal environmental protection agencies have shown a growing interest in mitigating these effects with 'green' infrastructure practices that partially restore the perviousness and water holding capacity of urban centers. Assessment of the performance of current and future green infrastructure projects is hindered by the lack of adequate hydrological modeling tools; conventional techniques fail to account for the complex flow pathways of urban environments, and detailed analyses are difficult to prepare for the very large domains in which green infrastructure projects are implemented. Currently, no standard toolset exists that can rapidly and conveniently predict runoff, consequent inundations, and sewer overflows at a city-wide scale. We demonstrate how streamlined modeling techniques can be used with open-source GIS software to efficiently model runoff in large urban catchments. Hydraulic parameters and flow paths through city blocks, roadways, and sewer drains are automatically generated from GIS layers, and ultimately urban flow simulations can be executed for a variety of rainfall conditions. With this methodology, users can understand the implications of large-scale land use changes and green/gray storm water retention systems on hydraulic loading, peak flow rates, and runoff volumes.
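One of the steps the abstract describes automating, deriving flow paths from GIS layers, can be sketched with the standard D8 (steepest-descent) routing on a gridded elevation layer. This is a generic illustration of the technique, not the authors' toolset; a real urban application would overlay sewer drains and roadway geometry.

```python
# Sketch of D8 flow routing and flow accumulation on an elevation grid.
import math
import numpy as np

def d8_downstream(elev):
    """Map each cell to its steepest-descent neighbor (itself if a pit)."""
    ny, nx = elev.shape
    down = {}
    for i in range(ny):
        for j in range(nx):
            best, best_drop = (i, j), 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ni, nj = i + di, j + dj
                    if (di, dj) != (0, 0) and 0 <= ni < ny and 0 <= nj < nx:
                        drop = (elev[i, j] - elev[ni, nj]) / math.hypot(di, dj)
                        if drop > best_drop:
                            best, best_drop = (ni, nj), drop
            down[(i, j)] = best
    return down

def flow_accumulation(elev):
    """Units of runoff passing through each cell (one unit generated per cell)."""
    down = d8_downstream(elev)
    acc = {c: 1.0 for c in down}
    for c in sorted(down, key=lambda c: -elev[c]):  # process high to low
        if down[c] != c:
            acc[down[c]] += acc[c]
    return acc

# A uniformly tilted 3x3 surface: each column drains to the bottom row
elev = np.array([[2.0, 2.0, 2.0], [1.0, 1.0, 1.0], [0.0, 0.0, 0.0]])
acc = flow_accumulation(elev)
```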

  11. Strong motion modeling at the Paducah Diffusion Facility for a large New Madrid earthquake

    International Nuclear Information System (INIS)

    Herrmann, R.B.

    1991-01-01

The Paducah Diffusion Facility is within 80 kilometers of the location of the very large New Madrid earthquakes which occurred during the winter of 1811-1812. Because of their size, seismic moment of 2.0 × 10^27 dyne-cm or moment magnitude M_w = 7.5, the possible recurrence of these earthquakes is a major element in the assessment of seismic hazard at the facility. Probabilistic hazard analysis can provide uniform hazard response spectra estimates for structure evaluation, but deterministic modeling of such a large earthquake can provide strong constraints on the expected duration of motion. The large earthquake is modeled by specifying the earthquake fault and its orientation with respect to the site, and by specifying the rupture process. Synthetic time histories, based on forward modeling of the wavefield, from each subelement are combined to yield a three-component time history at the site. Various simulations are performed to sufficiently exercise possible spatial and temporal distributions of energy release on the fault. Preliminary results demonstrate the sensitivity of the method to various assumptions, and also indicate strongly that the total duration of ground motion at the site is controlled primarily by the length of the rupture process on the fault
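The synthesis step, combining subelement time histories into one site record, amounts to lagging each subelement wavelet by its rupture delay plus wave travel time and stacking. The sketch below uses invented geometry and velocities, not the Paducah study's parameters; it only illustrates why rupture length controls total duration.

```python
# Sketch of subfault summation for a deterministic strong-motion simulation.
import numpy as np

DT = 0.01       # sample interval, s
BETA = 3.5      # shear-wave speed, km/s (assumed)
V_RUPT = 2.5    # rupture velocity along the fault, km/s (assumed)

def stack_subevents(wavelets, dist_km, rupt_km, n_out):
    """Sum subelement wavelets at the site with their individual delays."""
    site = np.zeros(n_out)
    for w, r, x in zip(wavelets, dist_km, rupt_km):
        delay_s = x / V_RUPT + r / BETA   # rupture delay + travel time
        i0 = int(round(delay_s / DT))
        n = min(len(w), n_out - i0)
        if n > 0:
            site[i0:i0 + n] += w[:n]
    return site

# Two subelements 35 km from the site; the second breaks 25 km along strike,
# so its energy arrives 10 s later and stretches the record accordingly.
wav = np.ones(10)
site = stack_subevents([wav, wav], [35.0, 35.0], [0.0, 25.0], n_out=2100)
```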

  12. LASSIE: simulating large-scale models of biochemical systems on GPUs.

    Science.gov (United States)

    Tangherloni, Andrea; Nobile, Marco S; Besozzi, Daniela; Mauri, Giancarlo; Cazzaniga, Paolo

    2017-05-10

Mathematical modeling and in silico analysis are widely acknowledged as complementary tools to biological laboratory methods, to achieve a thorough understanding of emergent behaviors of cellular processes in both physiological and perturbed conditions. However, the simulation of large-scale models, consisting of hundreds or thousands of reactions and molecular species, can rapidly exceed the capabilities of Central Processing Units (CPUs). The purpose of this work is to exploit alternative high-performance computing solutions, such as Graphics Processing Units (GPUs), to allow the investigation of these models at reduced computational costs. LASSIE is a "black-box" GPU-accelerated deterministic simulator, specifically designed for large-scale models and not requiring any expertise in mathematical modeling, simulation algorithms or GPU programming. Given a reaction-based model of a cellular process, LASSIE automatically generates the corresponding system of Ordinary Differential Equations (ODEs), assuming mass-action kinetics. The numerical solution of the ODEs is obtained by automatically switching between the Runge-Kutta-Fehlberg method in the absence of stiffness and the Backward Differentiation Formulae of first order in the presence of stiffness. The computational performance of LASSIE is assessed using a set of randomly generated synthetic reaction-based models of increasing size, ranging from 64 to 8192 reactions and species, and compared to a CPU implementation of the LSODA numerical integration algorithm. LASSIE adopts a novel fine-grained parallelization strategy to distribute across the GPU cores all the calculations required to solve the system of ODEs. By virtue of this implementation, LASSIE achieves up to 92× speed-up with respect to LSODA, reducing the running time from approximately 1 month down to 8 h to simulate models consisting of, for instance, four thousand reactions and species.
Notably, thanks to its smaller memory footprint, LASSIE
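The core transformation LASSIE automates, turning a reaction list into a mass-action ODE system, can be sketched on the CPU in a few lines. The toy enzyme model and rate constants below are invented for illustration, and a plain fixed-step RK4 integrator stands in for LASSIE's adaptive Runge-Kutta-Fehlberg/BDF switching.

```python
# CPU-side sketch of mass-action ODE generation from a reaction-based model.
import numpy as np

# Toy enzyme kinetics: S + E -> ES, ES -> S + E, ES -> P + E
species = ["S", "E", "ES", "P"]
idx = {s: i for i, s in enumerate(species)}
reactions = [  # (reactants, products, rate constant) - illustrative values
    ({"S": 1, "E": 1}, {"ES": 1}, 0.01),
    ({"ES": 1}, {"S": 1, "E": 1}, 0.1),
    ({"ES": 1}, {"P": 1, "E": 1}, 1.0),
]

def rhs(y):
    """Right-hand side of the mass-action ODE system dy/dt = S * v(y)."""
    dy = np.zeros_like(y)
    for reac, prod, k in reactions:
        rate = k
        for s, n in reac.items():
            rate *= y[idx[s]] ** n      # mass-action propensity
        for s, n in reac.items():
            dy[idx[s]] -= n * rate
        for s, n in prod.items():
            dy[idx[s]] += n * rate
    return dy

y = np.array([100.0, 10.0, 0.0, 0.0])  # initial amounts of S, E, ES, P
dt = 0.01
for _ in range(5000):                   # integrate to t = 50 with RK4
    k1 = rhs(y)
    k2 = rhs(y + 0.5 * dt * k1)
    k3 = rhs(y + 0.5 * dt * k2)
    k4 = rhs(y + dt * k3)
    y = y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
```

Because the right-hand side conserves total substrate (S + ES + P) and total enzyme (E + ES), those sums are useful correctness checks for any such simulator.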

  13. Geo-structural modelling for potential large rock slide in Machu Picchu

    Science.gov (United States)

    Spizzichino, D.; Delmonaco, G.; Margottini, C.; Mazzoli, S.

    2009-04-01

The monumental complex of the Historical Sanctuary of Machu Picchu, declared a World Heritage Site by UNESCO in 1983, is located in the Andean chain at approx. 80 km from Cuzco (Peru), at an elevation of 2430 m a.s.l. along the Urubamba River Valley. From a geological point of view, the Machu Picchu granitoid pluton, forming part of the larger "Quillabamba granite", is one of a series of plutons intruded along the axial zone of the high Eastern Cordillera Permo-Liassic rift system, including a variety of rock types, dominantly granites and granodiorites. The most evident structures at the outcrop scale consist of planar joint sets that may be variably reactivated and exhibit 4 main orientations. At present, the site is affected by geological risk due to frequent landslides that threaten security and tourist exploitation. In recent years, the international landslide scientific community has promoted a multi-discipline joint programme mainly aimed at slope deformation monitoring and analysis, after the warning, launched in 2001, of a potential collapse of the citadel caused by a huge rock slide. The contribution of the Italian research team was devoted to implementing a landslide risk analysis and innovative remote sensing techniques. The main scope of this work is to present the implementation of a geo-structural model aimed at defining present and potential slope stability conditions of the Machu Picchu Citadel. Data have been collected by geological, structural and geomechanical field surveys and laboratory tests in order to reconstruct the geomorphological evolution of the area. Landslide types and evolution are strictly controlled by regional tectonic uplift and structural setting. Several slope instability phenomena have been identified and classified according to mechanism, material involved and state of activity. Rock falls, debris flows, rock slides and debris slides are the main surveyed landslide types.
Rock slides and rock falls may produce

  14. On the stability analysis of a general discrete-time population model involving predation and Allee effects

    International Nuclear Information System (INIS)

    Merdan, H.; Duman, O.

    2009-01-01

    This paper presents the stability analysis of equilibrium points of a general discrete-time population dynamics involving predation with and without Allee effects which occur at low population density. The mathematical analysis and numerical simulations show that the Allee effect has a stabilizing role on the local stability of the positive equilibrium points of this model.
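A concrete instance of such a map (an assumption for illustration, not the paper's exact model) is a Ricker-type growth law with an Allee threshold A below which per-capita growth is negative. Iterating it shows the characteristic split the Allee effect creates: extinction from low density, convergence to the carrying capacity K from higher density.

```python
# Discrete-time growth with carrying capacity K and Allee threshold A.
# Parameter values are illustrative.
import math

def allee_step(n, r=0.1, K=100.0, A=20.0):
    """One generation: growth is positive only for A < n < K."""
    return n * math.exp(r * (1.0 - n / K) * (n / A - 1.0))

def iterate(n0, steps=500):
    n = n0
    for _ in range(steps):
        n = allee_step(n)
    return n

n_below = iterate(10.0)   # starts below the Allee threshold -> extinction
n_above = iterate(30.0)   # starts above the threshold -> tends to K
```

Linearizing at the positive equilibrium n* = K gives a multiplier 1 − r(K/A − 1) = 0.6 for these values, consistent with the stabilizing role of the Allee effect discussed in the paper.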

  15. Assessing the economic impact of paternal involvement: a comparison of the generalized linear model versus decision analysis trees.

    Science.gov (United States)

    Salihu, Hamisu M; Salemi, Jason L; Nash, Michelle C; Chandler, Kristen; Mbah, Alfred K; Alio, Amina P

    2014-08-01

    Lack of paternal involvement has been shown to be associated with adverse pregnancy outcomes, including infant morbidity and mortality, but the impact on health care costs is unknown. Various methodological approaches have been used in cost minimization and cost effectiveness analyses and it remains unclear how cost estimates vary according to the analytic strategy adopted. We illustrate a methodological comparison of decision analysis modeling and generalized linear modeling (GLM) techniques using a case study that assesses the cost-effectiveness of potential father involvement interventions. We conducted a 12-year retrospective cohort study using a statewide enhanced maternal-infant database that contains both clinical and nonclinical information. A missing name for the father on the infant's birth certificate was used as a proxy for lack of paternal involvement, the main exposure of this study. Using decision analysis modeling and GLM, we compared all infant inpatient hospitalization costs over the first year of life. Costs were calculated from hospital charges using department-level cost-to-charge ratios and were adjusted for inflation. In our cohort of 2,243,891 infants, 9.2% had a father uninvolved during pregnancy. Lack of paternal involvement was associated with higher rates of preterm birth, small-for-gestational age, and infant morbidity and mortality. Both analytic approaches estimate significantly higher per-infant costs for father uninvolved pregnancies (decision analysis model: $1,827, GLM: $1,139). This paper provides sufficient evidence that healthcare costs could be significantly reduced through enhanced father involvement during pregnancy, and buttresses the call for a national program to involve fathers in antenatal care.
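The decision-analytic side of the comparison reduces to rolling back expected costs through a small probability tree. The probabilities and costs below are invented for illustration; the study's estimates ($1,827 via decision analysis, $1,139 via GLM) come from the actual cohort data.

```python
# Sketch of expected-cost rollback for a two-branch decision tree:
# pregnancy outcome (preterm vs. term) with hypothetical probabilities/costs.
def expected_cost(p_preterm, cost_preterm, cost_term):
    """Expected first-year infant cost under a simple two-outcome tree."""
    return p_preterm * cost_preterm + (1.0 - p_preterm) * cost_term

# Hypothetical inputs: uninvolved fathers raise the preterm-birth probability
cost_uninvolved = expected_cost(0.14, 45_000.0, 3_000.0)
cost_involved = expected_cost(0.10, 45_000.0, 3_000.0)
incremental = cost_uninvolved - cost_involved   # per-infant excess cost
```

A GLM approach would instead regress observed costs on involvement status with covariate adjustment, which is why the two methods can yield different per-infant estimates from the same cohort.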

  16. A Framework for Quantitative Modeling of Neural Circuits Involved in Sleep-to-Wake Transition

    Directory of Open Access Journals (Sweden)

Siamak Sorooshyari

    2015-02-01

    Full Text Available Identifying the neuronal circuits and dynamics of sleep-to-wake transition is essential to understanding brain regulation of behavioral states, including sleep-wake cycles, arousal, and hyperarousal. Recent work by different laboratories has used optogenetics to determine the role of individual neuromodulators in state transitions. The optogenetically-driven data does not yet provide a multi-dimensional schematic of the mechanisms underlying changes in vigilance states. This work presents a modeling framework to interpret, assist, and drive research on the sleep-regulatory network. We identify feedback, redundancy, and gating hierarchy as three fundamental aspects of this model. The presented model is expected to expand as additional data on the contribution of each transmitter to a vigilance state becomes available. Incorporation of conductance-based models of neuronal ensembles into this model and existing models of cortical excitability will provide more comprehensive insight into sleep dynamics as well as sleep and arousal-related disorders.

  17. Lessons from the Large Hadron Collider for model-based experimentation : the concept of a model of data acquisition and the scope of the hierarchy of models

    NARCIS (Netherlands)

    Karaca, Koray

    2017-01-01

    According to the hierarchy of models (HoM) account of scientific experimentation developed by Patrick Suppes and elaborated by Deborah Mayo, theoretical considerations about the phenomena of interest are involved in an experiment through theoretical models that in turn relate to experimental data

  18. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex Hydrogeologic Systems

    International Nuclear Information System (INIS)

    Sig Drellack, Lance Prothro

    2007-01-01

    The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The
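The Monte Carlo step described at the end of the abstract can be sketched generically: sample an uncertain parameter (here a lognormal hydraulic conductivity) and propagate each draw through a transport prediction such as advective travel time. All values are illustrative, not NTS calibrations.

```python
# Sketch of Monte Carlo propagation of parameter uncertainty into a
# transport prediction. Hypothetical path length, gradient, and porosity.
import math
import random
import statistics

random.seed(1)
L_PATH = 5000.0   # flow path length, m
GRAD = 0.002      # hydraulic gradient
POROSITY = 0.1

def travel_time_years(K):
    """Advective travel time for hydraulic conductivity K (m/d)."""
    v = K * GRAD / POROSITY          # seepage velocity, m/d
    return L_PATH / v / 365.25

# Lognormal K with median 1 m/d and log-space sigma 0.8 (assumed)
samples = [travel_time_years(math.exp(random.gauss(math.log(1.0), 0.8)))
           for _ in range(10_000)]
t_median = statistics.median(samples)
```

Comparing such predictive distributions across the alternative conceptual models is what allows the uncertainty components (parameter vs. conceptual) to be separated.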

  19. Towards a Framework for the Stochastic Modelling of Subgrid Scale Fluxes for Large Eddy Simulation

    Directory of Open Access Journals (Sweden)

    Thomas von Larcher

    2015-04-01

Full Text Available We focus on a mixed deterministic-stochastic subgrid scale modelling strategy currently under development for application in Finite Volume Large Eddy Simulation (LES) codes. Our concept is based on the integral conservation laws for mass, momentum and energy of a flow field. We model the space-time structure of the flux correction terms to create a discrete formulation. Advanced methods of time series analysis for the data-based construction of stochastic models with inherently non-stationary statistical properties, and concepts of information theory based on a modified Akaike information criterion and on the Bayesian information criterion for model discrimination, are used to construct surrogate models for the non-resolved flux fluctuations. Vector-valued auto-regressive models with external influences form the basis of the modelling approach. The reconstruction capabilities of the modelling ansatz are tested against fully 3D turbulent channel flow data computed by direct numerical simulation and, in addition, against a turbulent Taylor-Green vortex flow showing a transition from a laminar to a turbulent flow state. The modelling approach for the LES closure differs in the two test cases. In the channel flow we consider an implicit LES ansatz; in the Taylor-Green vortex flow, we follow an explicit closure approach. We present here the outcome of our reconstruction tests and show specific results of the non-trivial time series data analysis. Having started with a generally stochastic ansatz, we found, surprisingly, that the deterministic model part already yields small residuals and is therefore good enough to fit the flux correction terms well. In the Taylor-Green vortex flow, we additionally found time-dependent features confirming that our modelling approach is capable of detecting changes in the temporal structure of the flow. The results encourage us to launch a more ambitious attempt at dynamic LES closure along these lines.
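The core of such a surrogate is a vector auto-regressive fit to the flux-correction time series: estimate A in x_{t+1} = A x_t + noise by least squares, then use the fitted model as a cheap closure. The sketch below fits a first-order model to synthetic data from a known A, purely to illustrate the estimation step; the paper's models are higher order and include external influences.

```python
# Sketch: least-squares fit of a first-order vector auto-regressive (VAR)
# surrogate to a synthetic two-component "flux correction" series.
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[0.8, 0.1],
                   [-0.2, 0.7]])      # assumed ground-truth dynamics

# Generate a stationary synthetic series x_{t+1} = A x_t + small noise
x = np.zeros((5000, 2))
for t in range(4999):
    x[t + 1] = A_true @ x[t] + 0.05 * rng.standard_normal(2)

# Least squares: solve X_prev @ C = X_next for C, so that A_hat = C.T
X_prev, X_next = x[:-1], x[1:]
A_hat = np.linalg.lstsq(X_prev, X_next, rcond=None)[0].T
```

Model-order and term selection, done in the paper with modified AIC/BIC criteria, would sit on top of this basic regression.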

  20. Global Sensitivity Analysis for Large-scale Socio-hydrological Models using the Cloud

    Science.gov (United States)

    Hu, Y.; Garcia-Cabrejo, O.; Cai, X.; Valocchi, A. J.; Dupont, B.

    2014-12-01

In the context of coupled human and natural systems (CHNS), incorporating human factors into water resource management provides us with the opportunity to understand the interactions between human and environmental systems. A multi-agent system (MAS) model is designed to couple with the physically-based Republican River Compact Administration (RRCA) groundwater model, in an attempt to understand the declining water table and base flow in the heavily irrigated Republican River basin. For MAS modelling, we defined five behavioral parameters (κ_pr, ν_pr, κ_prep, ν_prep and λ) to characterize the agent's pumping behavior given the uncertainties of future crop prices and precipitation. κ and ν describe the agent's beliefs in their prior knowledge of the mean and variance of crop prices (κ_pr, ν_pr) and precipitation (κ_prep, ν_prep), and λ describes the agent's attitude towards the fluctuation of crop profits. Note that these human behavioral parameters, as inputs to the MAS model, are highly uncertain and not even measurable. Thus, we estimate the influences of these behavioral parameters on the coupled models using Global Sensitivity Analysis (GSA). In this paper, we address two main challenges arising from GSA with such a large-scale socio-hydrological model by using Hadoop-based Cloud Computing techniques and a Polynomial Chaos Expansion (PCE) based variance decomposition approach. As a result, 1,000 scenarios of the coupled models are completed within two hours with the Hadoop framework, rather than the roughly 28 days required to run those scenarios sequentially. Based on the model results, GSA using PCE measures the impacts of the spatial and temporal variations of these behavioral parameters on crop profits and the water table, and thus identifies two influential parameters, κ_pr and λ. The major contribution of this work is a methodological framework for the application of GSA in large-scale socio-hydrological models. This framework attempts to
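The quantity this kind of GSA targets is the first-order Sobol index of each input. The paper computes these via a PCE-based variance decomposition; as a simpler illustration of the same quantity, the sketch below uses the pick-freeze Monte Carlo estimator on a toy "profit" function of two uniform inputs standing in for κ_pr and λ (the function and its coefficients are invented).

```python
# Pick-freeze Monte Carlo estimate of first-order Sobol indices for a toy
# response. Illustrative only; the paper uses a PCE-based decomposition.
import numpy as np

rng = np.random.default_rng(42)

def profit(kappa, lam):
    # Toy response standing in for the coupled MAS-RRCA model output
    return 3.0 * kappa + lam + 0.5 * kappa * lam

N = 200_000
A = rng.uniform(0.0, 1.0, (N, 2))       # base sample
B = rng.uniform(0.0, 1.0, (N, 2))       # independent resample
fA = profit(A[:, 0], A[:, 1])
fB = profit(B[:, 0], B[:, 1])
var = fA.var()

S = []
for i in range(2):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                 # replace only input i
    fABi = profit(ABi[:, 0], ABi[:, 1])
    S.append(float(np.mean(fB * (fABi - fA)) / var))
```

For this toy function the indices are analytically about 0.87 and 0.13, so the first input dominates, which is the kind of ranking (κ_pr and λ influential) the paper extracts from its PCE coefficients.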