WorldWideScience

Sample records for models involving large

  1. LARGE VESSEL INVOLVEMENT IN BEHCET’S DISEASE

    Directory of Open Access Journals (Sweden)

    AR. Jamshidi F. Davatchi

    2004-08-01

    Large vessel involvement is one of the hallmarks of Behcet’s disease (BD), but its prevalence varies widely due to ethnic variation or environmental factors. The aim of this study is to find the characteristics of vasculo-Behcet (VB) in Iran. In a cohort of 4769 patients with BD, those with vascular involvement were selected. Different manifestations of disease were compared with the remaining group of patients. A 95% confidence interval (CI) was calculated for each item. Vascular involvement was seen in 409 cases (8.6%; CI, 0.8). Venous involvement was seen in 396 cases: deep vein thrombosis in 294 (6.2%; CI, 0.7), superficial phlebitis in 108 (2.3%; CI, 0.4) and large vein thrombosis in 45 (0.9%; CI, 0.3). Arterial involvement was seen in 28 patients (25 aneurysms and 4 thromboses). Thirteen patients showed both arterial and venous involvement. The mean age of the patients with VB was slightly higher (P<0.03), but the disease duration was significantly longer (P<0.0003). VB was more common in men. As the presenting sign, ocular lesions were less frequent in VB (P<0.0006), while skin lesions were over 2 times more common in these cases (P<0.000001). VB was associated with a higher frequency of genital aphthosis, skin involvement, joint manifestations, epididymitis, CNS lesions and GI involvement. The juvenile form was less common in VB (P<0.03). High ESR was more frequent in VB (P=0.000002), but the frequency of false positive VDRL, pathergy phenomenon, HLA-B5 or HLA-B27 showed no significant difference between the two groups. In Iranian patients with BD, vascular involvement is not common and large vessel involvement is rare. It may be sex-related, and is more common in well-established disease with multiple organ involvement and longer disease duration.
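    The reported intervals are consistent with normal-approximation 95% confidence half-widths for binomial proportions, z·sqrt(p(1−p)/n) with z = 1.96. A minimal sketch reproducing them (function and variable names are mine):

```python
import math

def ci_half_width(k: int, n: int, z: float = 1.96) -> float:
    """Normal-approximation 95% CI half-width for a proportion k/n, in %."""
    p = k / n
    return 100 * z * math.sqrt(p * (1 - p) / n)

n = 4769  # cohort size reported in the abstract
for label, k in [("vascular involvement", 409), ("deep vein thrombosis", 294),
                 ("superficial phlebitis", 108), ("large vein thrombosis", 45)]:
    print(f"{label}: {100 * k / n:.1f}% +/- {ci_half_width(k, n):.1f}%")
# -> 8.6 +/- 0.8, 6.2 +/- 0.7, 2.3 +/- 0.4, 0.9 +/- 0.3, matching the abstract
```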

  2. Fatal crashes involving large numbers of vehicles and weather.

    Science.gov (United States)

    Wang, Ying; Liang, Liming; Evans, Leonard

    2017-12-01

    Adverse weather has been recognized as a significant threat to traffic safety. However, relationships between fatal crashes involving large numbers of vehicles and weather are rarely studied, owing to the low occurrence of crashes involving large numbers of vehicles. By using all 1,513,792 fatal crashes in the Fatality Analysis Reporting System (FARS) data, 1975-2014, we successfully described these relationships. We found: (a) fatal crashes involving more than 35 vehicles are most likely to occur in snow or fog; (b) fatal crashes in rain are three times as likely to involve 10 or more vehicles as fatal crashes in good weather; (c) fatal crashes in snow [or fog] are 24 times [35 times] as likely to involve 10 or more vehicles as fatal crashes in good weather. If the example had used 20 vehicles, the risk ratios would be 6 for rain, 158 for snow, and 171 for fog. To reduce the risk of involvement in fatal crashes with large numbers of vehicles, drivers should slow down more than they currently do under adverse weather conditions. Driver deaths per fatal crash increase slowly with increasing numbers of involved vehicles when it is snowing or raining, but more steeply when clear or foggy. We conclude that in order to reduce the risk of involvement in crashes involving large numbers of vehicles, drivers must reduce speed in fog, and in snow or rain, reduce speed by even more than they already do.
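    The headline comparisons are simple risk ratios between conditional proportions, P(≥10 vehicles | weather w) / P(≥10 vehicles | good weather). A minimal sketch with hypothetical counts (the abstract does not report the underlying FARS counts):

```python
def risk_ratio(k_w: int, n_w: int, k_good: int, n_good: int) -> float:
    """Share of fatal crashes involving >= 10 vehicles under weather w,
    relative to the same share in good weather."""
    return (k_w / n_w) / (k_good / n_good)

# Hypothetical counts for illustration only (not the FARS values):
# 40 of 200,000 fatal crashes in snow involved >= 10 vehicles,
# versus 100 of 1,200,000 fatal crashes in good weather.
print(risk_ratio(40, 200_000, 100, 1_200_000))  # -> 2.4
```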

  3. Large vessel involvement by IgG4-related disease

    Science.gov (United States)

    Perugino, Cory A.; Wallace, Zachary S.; Meyersohn, Nandini; Oliveira, George; Stone, James R.; Stone, John H.

    2016-01-01

    Objectives: IgG4-related disease (IgG4-RD) is an immune-mediated fibroinflammatory condition that can affect multiple organs and lead to tumefactive, tissue-destructive lesions. Reports have described inflammatory aortitis and periaortitis, the latter in the setting of retroperitoneal fibrosis (RPF), but have not distinguished adequately between these two manifestations. The frequency, radiologic features, and response of vascular complications to B cell depletion remain poorly defined. We describe the clinical features, radiologic findings, and treatment response in a cohort of 36 patients with IgG4-RD affecting large blood vessels. Methods: Clinical records of all patients diagnosed with IgG4-RD in our center were reviewed. All radiologic studies were reviewed. We distinguished between primary large blood vessel inflammation and secondary vascular involvement. Primary involvement was defined as inflammation in the blood vessel wall as a principal focus of disease. Secondary vascular involvement was defined as disease caused by the effects of adjacent inflammation on the blood vessel wall. Results: Of the 160 IgG4-RD patients in this cohort, 36 (22.5%) had large-vessel involvement. The mean age at disease onset of the patients with large-vessel IgG4-RD was 54.6 years. Twenty-eight patients (78%) were male and 8 (22%) were female. Thirteen patients (36%) had primary IgG4-related vasculitis; aortitis with aneurysm formation comprised the most common manifestation. This affected 5.6% of the entire IgG4-RD cohort and was observed in the thoracic aorta in 8 patients, the abdominal aorta in 4, and both the thoracic and abdominal aorta in 3. Three of these aneurysms were complicated by aortic dissection or contained perforation. Periaortitis secondary to RPF accounted for 27 of the 29 cases (93%) of secondary vascular involvement by IgG4-RD. Only 5 patients demonstrated evidence of both primary and secondary blood vessel involvement. Of those treated with

  4. Comparing the performance of SIMD computers by running large air pollution models

    DEFF Research Database (Denmark)

    Brown, J.; Hansen, Per Christian; Wasniewski, J.

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on these computers. Using a realistic large-scale model, we gained detailed insight into the performance of the computers involved when used to solve large-scale scientific problems that involve several types of numerical computations. The computers used in our study are the Connection Machines CM-200 and CM-5, and the MasPar MP-2216.

  5. Consumers' Reaction towards Involvement of Large Retailers in ...

    African Journals Online (AJOL)


    markets, fair trade products need LRs distribution channels and not the old system of using speciality ... analysis employed to identify customers' reaction to large retailers' involvement in selling ... The Journal of Socio-Economics, Vol. 37, pp. ...

  6. Modeling and simulation of large HVDC systems

    Energy Technology Data Exchange (ETDEWEB)

    Jin, H.; Sood, V.K.

    1993-01-01

    This paper addresses the complexity and amount of work involved in preparing simulation data, implementing various converter control schemes, and the excessive simulation times encountered in modelling and simulating large HVDC systems. The Power Electronic Circuit Analysis program (PECAN) is used to address these problems, and a large HVDC system with two dc links is simulated using PECAN. A benchmark HVDC system is studied to compare the simulation results with those from other packages. The simulation times and results are provided in the paper.

  7. Estimation in a multiplicative mixed model involving a genetic relationship matrix

    Directory of Open Access Journals (Sweden)

    Eccleston John A

    2009-04-01

    Genetic models partitioning additive and non-additive genetic effects for populations tested in replicated multi-environment trials (METs) in a plant breeding program have recently been presented in the literature. For these data, the variance model involves the direct product of a large numerator relationship matrix A and a complex structure for the genotype-by-environment interaction effects, generally of a factor analytic (FA) form. With MET data, we expect a high correlation in genotype rankings between environments, leading to non-positive definite covariance matrices. Estimation methods for reduced rank models have been derived for the FA formulation with independent genotypes, and we employ these estimation methods for the more complex case involving the numerator relationship matrix. We examine the performance of differing genetic models for MET data with an embedded pedigree structure, and consider the magnitude of the non-additive variance. The capacity of existing software packages to fit these complex models is largely due to the use of sparse matrix methodology and the average information algorithm. Here, we present an extension to the standard formulation necessary for estimation with a factor analytic structure across multiple environments.
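    In the standard notation for such models (a sketch; the symbols are the usual ones, not necessarily the paper's, and the Kronecker ordering depends on how effects are stacked), the vector u of genetic effects for g genotypes in p environments has variance

```latex
\operatorname{var}(\mathbf{u}) = \mathbf{G}_e \otimes \mathbf{A},
\qquad
\mathbf{G}_e = \boldsymbol{\Lambda}\boldsymbol{\Lambda}^{\top} + \boldsymbol{\Psi},
```

    where A is the g x g numerator relationship matrix, Λ is a p x k matrix of environment loadings (k < p gives the reduced-rank FA(k) structure), and Ψ is a diagonal matrix of environment-specific variances.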

  8. Spatial occupancy models for large data sets

    Science.gov (United States)

    Johnson, Devin S.; Conn, Paul B.; Hooten, Mevin B.; Ray, Justina C.; Pond, Bruce A.

    2013-01-01

    Since its development, occupancy modeling has become a popular and useful tool for ecologists wishing to learn about the dynamics of species occurrence over time and space. Such models require presence–absence data to be collected at spatially indexed survey units. However, only recently have researchers recognized the need to correct for spatially induced overdispersion by explicitly accounting for spatial autocorrelation in occupancy probability. Previous efforts to incorporate such autocorrelation have largely focused on logit-normal formulations for occupancy, with spatial autocorrelation induced by a random effect within a hierarchical modeling framework. Although useful, computational time generally limits such an approach to relatively small data sets, and there are often problems with algorithm instability, yielding unsatisfactory results. Further, recent research has revealed a hidden form of multicollinearity in such applications, which may lead to parameter bias if not explicitly addressed. Combining several techniques, we present a unifying hierarchical spatial occupancy model specification that is particularly effective over large spatial extents. This approach employs a probit mixture framework for occupancy and can easily accommodate a reduced-dimensional spatial process to resolve issues with multicollinearity and spatial confounding while improving algorithm convergence. Using open-source software, we demonstrate this new model specification using a case study involving occupancy of caribou (Rangifer tarandus) over a set of 1080 survey units spanning a large contiguous region (108 000 km2) in northern Ontario, Canada. Overall, the combination of a more efficient specification and open-source software allows for a facile and stable implementation of spatial occupancy models for large data sets.
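    A generic form of the probit occupancy specification described here, with a reduced-dimensional spatial term (notation is mine, not taken from the paper):

```latex
z_i \sim \operatorname{Bernoulli}(\psi_i),
\qquad
\Phi^{-1}(\psi_i) = \mathbf{x}_i^{\top}\boldsymbol{\beta} + \mathbf{k}_i^{\top}\boldsymbol{\alpha},
\qquad
y_{ij} \mid z_i \sim \operatorname{Bernoulli}(z_i\, p_{ij}),
```

    where z_i is the true occupancy state of unit i, Φ is the standard normal CDF, x_i are occupancy covariates, k_i collects a small number of spatial basis functions (the reduced-dimensional process), and p_ij is the detection probability on visit j.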

  9. Metal-Oxide Film Conversions Involving Large Anions

    Energy Technology Data Exchange (ETDEWEB)

    Pretty, S.; Zhang, X.; Shoesmith, D.W.; Wren, J.C. [The University of Western Ontario, Chemistry Department, 1151 Richmond St., N6A 5B7, London, Ontario (Canada)

    2008-07-01

    The main objective of my research is to establish the mechanism and kinetics of metal-oxide film conversions involving large anions (I⁻, Br⁻, S²⁻). Within a given group, the anions will provide insight on the effect of anion size on the film conversion, while comparison of Group 6 and Group 7 anions will provide insight on the effect of anion charge. This research has a range of industrial applications; for example, hazardous radioiodine can be immobilized by reaction with Ag to yield AgI. From the perspective of public safety, radioiodine is one of the most important fission products from the uranium fuel because of its large fuel inventory, high volatility, and radiological hazard. Additionally, because of its mobility, the gaseous iodine concentration is a critical parameter for safety assessment and post-accident management. A full kinetic analysis using electrochemical techniques has been performed on the conversion of Ag₂O to (1) AgI and (2) AgBr. (authors)

  10. Metal-Oxide Film Conversions Involving Large Anions

    International Nuclear Information System (INIS)

    Pretty, S.; Zhang, X.; Shoesmith, D.W.; Wren, J.C.

    2008-01-01

    The main objective of my research is to establish the mechanism and kinetics of metal-oxide film conversions involving large anions (I⁻, Br⁻, S²⁻). Within a given group, the anions will provide insight on the effect of anion size on the film conversion, while comparison of Group 6 and Group 7 anions will provide insight on the effect of anion charge. This research has a range of industrial applications; for example, hazardous radioiodine can be immobilized by reaction with Ag to yield AgI. From the perspective of public safety, radioiodine is one of the most important fission products from the uranium fuel because of its large fuel inventory, high volatility, and radiological hazard. Additionally, because of its mobility, the gaseous iodine concentration is a critical parameter for safety assessment and post-accident management. A full kinetic analysis using electrochemical techniques has been performed on the conversion of Ag₂O to (1) AgI and (2) AgBr. (authors)

  11. Research on Francis Turbine Modeling for Large Disturbance Hydropower Station Transient Process Simulation

    Directory of Open Access Journals (Sweden)

    Guangtao Zhang

    2015-01-01

    In the field of hydropower station transient process simulation (HSTPS), the characteristic graph-based iterative hydroturbine model (CGIHM) has been widely used when large-disturbance hydroturbine modeling is involved. However, with this model, iteration must be used to calculate speed and pressure, and slow convergence or non-convergence may be encountered for reasons such as a special characteristic graph profile, an inappropriate iterative algorithm, or an inappropriate interpolation algorithm. Other conventional large-disturbance hydroturbine models also have disadvantages and are difficult to use widely in HSTPS. Therefore, to obtain an accurate simulation result, a simple method for hydroturbine modeling is proposed. With this method, both the initial operating point and the transfer coefficients of the linear hydroturbine model keep changing during simulation. Hence, it can reflect the nonlinearity of the hydroturbine and be used for Francis turbine simulation under large-disturbance conditions. To validate the proposed method, both large-disturbance and small-disturbance simulations of a single hydrounit supplying a resistive, isolated load were conducted. The simulation result is shown to be consistent with that of a field test. Consequently, the proposed method is an attractive option for HSTPS involving Francis turbine modeling under large-disturbance conditions.
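    The "changing transfer coefficients" idea can be read against the usual linearized hydroturbine model (a sketch in the standard notation, which may differ from the paper's):

```latex
\Delta m_t = e_{mx}\,\Delta x + e_{my}\,\Delta y + e_{mh}\,\Delta h,
\qquad
\Delta q = e_{qx}\,\Delta x + e_{qy}\,\Delta y + e_{qh}\,\Delta h,
```

    where m_t is torque, q is flow, x is speed, y is gate opening and h is head; the six transfer coefficients are partial derivatives of the turbine characteristics, re-evaluated at the current operating point at every simulation step rather than frozen at the initial one.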

  12. A Reasoned Action Model of Male Client Involvement in Commercial Sex Work in Kibera, A Large Informal Settlement in Nairobi, Kenya.

    Science.gov (United States)

    Roth, Eric Abella; Ngugi, Elizabeth; Benoit, Cecilia; Jansson, Mikael; Hallgrimsdottir, Helga

    2014-01-01

    Male clients of female sex workers (FSWs) are epidemiologically important because they can form bridge groups linking high- and low-risk subpopulations. However, because male clients are hard to locate, they are not frequently studied. Recent research emphasizes searching for high-risk behavior groups in locales where new sexual partnerships form and the threat of HIV transmission is high; public drinking venues in sub-Saharan Africa satisfy these criteria. Accordingly, this study developed and implemented a rapid assessment methodology to survey men in bars throughout the large informal settlement of Kibera, Nairobi, Kenya, with the goal of delineating cultural and economic rationales associated with male participation in commercial sex. The study sample consisted of 220 male patrons of 110 bars located throughout Kibera's 11 communities. Logistic regression analysis incorporating a modified Reasoned Action Model indicated that a social norm condoning commercial sex among male peers and the cultural belief that men should practice sex before marriage support commercial sex involvement. Conversely, lacking money to drink and/or pay for sexual services was a barrier to male commercial sex involvement. Results are interpreted in light of possible harm reduction programs focusing on FSWs' male clients.

  13. Severities of transportation accidents involving large packages

    Energy Technology Data Exchange (ETDEWEB)

    Dennis, A.W.; Foley, J.T. Jr.; Hartman, W.F.; Larson, D.W.

    1978-05-01

    The study was undertaken to define in a quantitative nonjudgmental technical manner the abnormal environments to which a large package (total weight over 2 tons) would be subjected as the result of a transportation accident. Because of this package weight, air shipment was not considered as a normal transportation mode and was not included in the study. The abnormal transportation environments for shipment by motor carrier and train were determined and quantified. In all cases the package was assumed to be transported on an open flat-bed truck or an open flat-bed railcar. In an earlier study, SLA-74-0001, the small-package environments were investigated. A third transportation study, related to the abnormal environment involving waterways transportation, is now under way at Sandia Laboratories and should complete the description of abnormal transportation environments. Five abnormal environments were defined and investigated, i.e., fire, impact, crush, immersion, and puncture. The primary interest of the study was directed toward the type of large package used to transport radioactive materials; however, the findings are not limited to this type of package but can be applied to a much larger class of material shipping containers.

  14. Severities of transportation accidents involving large packages

    International Nuclear Information System (INIS)

    Dennis, A.W.; Foley, J.T. Jr.; Hartman, W.F.; Larson, D.W.

    1978-05-01

    The study was undertaken to define in a quantitative nonjudgmental technical manner the abnormal environments to which a large package (total weight over 2 tons) would be subjected as the result of a transportation accident. Because of this package weight, air shipment was not considered as a normal transportation mode and was not included in the study. The abnormal transportation environments for shipment by motor carrier and train were determined and quantified. In all cases the package was assumed to be transported on an open flat-bed truck or an open flat-bed railcar. In an earlier study, SLA-74-0001, the small-package environments were investigated. A third transportation study, related to the abnormal environment involving waterways transportation, is now under way at Sandia Laboratories and should complete the description of abnormal transportation environments. Five abnormal environments were defined and investigated, i.e., fire, impact, crush, immersion, and puncture. The primary interest of the study was directed toward the type of large package used to transport radioactive materials; however, the findings are not limited to this type of package but can be applied to a much larger class of material shipping containers.

  15. Large-scale groundwater modeling using global datasets: a test case for the Rhine-Meuse basin

    NARCIS (Netherlands)

    Sutanudjaja, E.H.; Beek, L.P.H. van; Jong, S.M. de; Geer, F.C. van; Bierkens, M.F.P.

    2011-01-01

    The current generation of large-scale hydrological models does not include a groundwater flow component. Large-scale groundwater models, involving aquifers and basins of multiple countries, are still rare, mainly due to a lack of hydro-geological data which are usually only available in developed countries.

  16. Large-scale groundwater modeling using global datasets: A test case for the Rhine-Meuse basin

    NARCIS (Netherlands)

    Sutanudjaja, E.H.; Beek, L.P.H. van; Jong, S.M. de; Geer, F.C. van; Bierkens, M.F.P.

    2011-01-01

    The current generation of large-scale hydrological models does not include a groundwater flow component. Large-scale groundwater models, involving aquifers and basins of multiple countries, are still rare, mainly due to a lack of hydro-geological data which are usually only available in developed countries.

  17. Large-scale groundwater modeling using global datasets: A test case for the Rhine-Meuse basin

    NARCIS (Netherlands)

    Sutanudjaja, E.H.; Beek, L.P.H. van; Jong, S.M. de; Geer, F.C. van; Bierkens, M.F.P.

    2011-01-01

    Large-scale groundwater models involving aquifers and basins of multiple countries are still rare due to a lack of hydrogeological data, which are usually only available in developed countries. In this study, we propose a novel approach to construct large-scale groundwater models by using global datasets.

  18. Detonation and fragmentation modeling for the description of large scale vapor explosions

    International Nuclear Information System (INIS)

    Buerger, M.; Carachalios, C.; Unger, H.

    1985-01-01

    The thermal detonation modeling of large-scale vapor explosions is shown to be indispensable for realistic safety evaluations. A steady-state as well as a transient detonation model have been developed, including detailed descriptions of the dynamics as well as the fragmentation processes inside a detonation wave. Strong restrictions for large-scale vapor explosions are obtained from this modeling, and they indicate that the reactor pressure vessel would withstand even explosions with unrealistically high masses of corium involved. The modeling is supported by comparisons with a detonation experiment and, concerning its key part, hydrodynamic fragmentation experiments. (orig.)

  19. Mathematical modeling of large floating roof reservoir temperature arena

    Directory of Open Access Journals (Sweden)

    Liu Yang

    2018-03-01

    The current study simplifies the relevant components of a large floating roof tank and models the three-dimensional temperature field of the tank. The heat transfer involved takes place within the hot fluid in the oil tank, between the hot fluid and the tank wall, and between the tank wall and the external environment. A mathematical model of the heat transfer and flow of the oil simulates the temperature field of the oil in the tank. The oil temperature field of the large floating roof tank is obtained by numerical simulation; the dynamics of the central temperature are plotted over time, and the axial and radial temperature distributions of the storage tank are analysed. The locations of low-temperature regions in the storage tank are determined from the thickness of the low-temperature layer. Finally, the calculated results are compared with field test data, and the comparison validates the calculation.
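    The heat transfer described is, in generic form, a convection-diffusion balance in the oil coupled to a convective boundary condition at the wall (a sketch, not the paper's exact formulation):

```latex
\rho c_p \left( \frac{\partial T}{\partial t} + \mathbf{u} \cdot \nabla T \right)
= \nabla \cdot ( \lambda \nabla T ),
\qquad
-\lambda \left. \frac{\partial T}{\partial n} \right|_{\mathrm{wall}}
= h_{\mathrm{ext}} \,( T_{\mathrm{wall}} - T_{\mathrm{env}} ),
```

    where λ is the thermal conductivity of the oil and h_ext lumps the wall-to-environment heat transfer.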

  20. Fast sampling from a Hidden Markov Model posterior for large data

    DEFF Research Database (Denmark)

    Bonnevie, Rasmus; Hansen, Lars Kai

    2014-01-01

    Hidden Markov Models are of interest in a broad set of applications, including modern data-driven systems involving very large data sets. However, approximate inference methods based on Bayesian averaging are precluded in such applications, as each sampling step requires a full sweep over the data...
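    The "full sweep" in question is typically forward filtering followed by backward sampling (FFBS) of the state path. A minimal numpy sketch (all names mine):

```python
import numpy as np

def ffbs(obs_lik, T, pi0, rng):
    """Draw one hidden-state path from an HMM posterior.
    obs_lik: (N, K) likelihoods p(y_t | z_t = k); T: (K, K) transition
    matrix; pi0: (K,) initial state distribution."""
    N, K = obs_lik.shape
    alpha = np.empty((N, K))
    alpha[0] = pi0 * obs_lik[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, N):                      # forward filtering: full sweep
        alpha[t] = (alpha[t - 1] @ T) * obs_lik[t]
        alpha[t] /= alpha[t].sum()
    z = np.empty(N, dtype=int)
    z[-1] = rng.choice(K, p=alpha[-1])
    for t in range(N - 2, -1, -1):             # backward sampling
        w = alpha[t] * T[:, z[t + 1]]
        z[t] = rng.choice(K, p=w / w.sum())
    return z
```

    For very large N, this O(NK²) sweep per posterior sample is exactly the cost the abstract points to.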

  1. Extending SME to Handle Large-Scale Cognitive Modeling.

    Science.gov (United States)

    Forbus, Kenneth D; Ferguson, Ronald W; Lovett, Andrew; Gentner, Dedre

    2017-07-01

    Analogy and similarity are central phenomena in human cognition, involved in processes ranging from visual perception to conceptual change. To capture this centrality requires that a model of comparison must be able to integrate with other processes and handle the size and complexity of the representations required by the tasks being modeled. This paper describes extensions to the Structure-Mapping Engine (SME) since its inception in 1986 that have increased its scope of operation. We first review the basic SME algorithm, describe psychological evidence for SME as a process model, and summarize its role in simulating similarity-based retrieval and generalization. Then we describe five techniques now incorporated into SME that have enabled it to tackle large-scale modeling tasks: (a) greedy merging rapidly constructs one or more best interpretations of a match in polynomial time, O(n² log n); (b) incremental operation enables mappings to be extended as new information is retrieved or derived about the base or target, to model situations where information in a task is updated over time; (c) ubiquitous predicates model the varying degrees to which items may suggest alignment; (d) structural evaluation of analogical inferences models aspects of plausibility judgments; (e) match filters enable large-scale task models to communicate constraints to SME to influence the mapping process. We illustrate via examples from published studies how these extensions enable SME to capture a broader range of psychological phenomena than before.
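    As a sketch of the greedy-merging idea only (not the actual SME code): sort kernel mappings by score and add each one that does not conflict with what has been chosen so far.

```python
def greedy_merge(kernels, conflict):
    """kernels: objects with a .score; conflict(a, b) -> True if the two
    kernel mappings cannot coexist in one structurally consistent match.
    This sketch sorts (O(n log n)) and does pairwise checks (O(n^2));
    the published bound for the full procedure is O(n^2 log n)."""
    chosen, total = [], 0.0
    for k in sorted(kernels, key=lambda k: k.score, reverse=True):
        if all(not conflict(k, c) for c in chosen):
            chosen.append(k)
            total += k.score
    return chosen, total
```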

  2. Isolated cutaneous involvement in a child with nodal anaplastic large cell lymphoma

    Directory of Open Access Journals (Sweden)

    Vibhu Mendiratta

    2016-01-01

    Non-Hodgkin lymphoma is a common childhood T-cell and B-cell neoplasm that originates primarily from lymphoid tissue. Cutaneous involvement can be in the form of a primary extranodal lymphoma, or secondary to metastasis from a non-cutaneous location. The latter is uncommon, and isolated cutaneous involvement is rarely reported. We report a case of isolated secondary cutaneous involvement from nodal anaplastic large cell lymphoma (CD30+ and ALK+) in a 7-year-old boy who was on chemotherapy. This case is reported for its unusual clinical presentation as an acute febrile, generalized papulonodular eruption that mimicked deep fungal infection, with the absence of other foci of systemic metastasis.

  3. Pituitary and adrenal involvement in diffuse large B-cell lymphoma, with recovery of their function after chemotherapy

    OpenAIRE

    Nakashima, Yasuhiro; Shiratsuchi, Motoaki; Abe, Ichiro; Matsuda, Yayoi; Miyata, Noriyuki; Ohno, Hirofumi; Ikeda, Motohiko; Matsushima, Takamitsu; Nomura, Masatoshi; Takayanagi, Ryoichi

    2013-01-01

    Background: Diffuse large B-cell lymphoma sometimes involves the endocrine organs, but involvement of both the pituitary and adrenal glands is extremely rare. Involvement of these structures can lead to hypopituitarism and adrenal insufficiency, and subsequent recovery of their function is rarely seen. The present report describes an extremely rare case of pituitary and adrenal diffuse large B-cell lymphoma presenting with hypopituitarism and adrenal insufficiency with subsequent recovery of p...

  4. Multifocal Extranodal Involvement of Diffuse Large B-Cell Lymphoma

    Directory of Open Access Journals (Sweden)

    Devrim Cabuk

    2013-01-01

    Endobronchial involvement of extrapulmonary malignant tumors is uncommon and mostly associated with breast, kidney, colon, and rectum carcinomas. A 68-year-old male with a prior diagnosis of colon non-Hodgkin lymphoma (NHL) was admitted to the hospital with complaints of cough, sputum, and dyspnea. The chest radiograph showed right hilar enlargement and opacity at the right middle zone suggestive of a mass lesion. Computed tomography of the thorax revealed a right-sided mass lesion extending to the thoracic wall with destruction of the third and fourth ribs, and a right hilar mass lesion. Fiberoptic bronchoscopy was performed to evaluate endobronchial involvement and showed stenosis with mucosal tumor infiltration in the right upper lobe bronchus. Pathological examination of the bronchoscopic biopsy specimen reported diffuse large B-cell lymphoma, and the patient was accepted as having an endobronchial recurrence of sigmoid colon NHL. The patient is still under treatment with R-ICE (rituximab-ifosfamide-carboplatin-etoposide) chemotherapy, and partial regression of the pulmonary lesions was noted after 3 courses of treatment.

  5. Application of simplified models to CO2 migration and immobilization in large-scale geological systems

    KAUST Repository

    Gasda, Sarah E.

    2012-07-01

    Long-term stabilization of injected carbon dioxide (CO₂) is an essential component of risk management for geological carbon sequestration operations. However, migration and trapping phenomena are inherently complex, involving processes that act over multiple spatial and temporal scales. One example involves centimeter-scale density instabilities in the dissolved CO₂ region leading to large-scale convective mixing that can be a significant driver for CO₂ dissolution. Another example is the potentially important effect of capillary forces, in addition to buoyancy and viscous forces, on the evolution of mobile CO₂. Local capillary effects lead to a capillary transition zone, or capillary fringe, where both fluids are present in the mobile state. This small-scale effect may have a significant impact on large-scale plume migration as well as long-term residual and dissolution trapping. Computational models that can capture both large and small-scale effects are essential to predict the role of these processes on the long-term storage security of CO₂ sequestration operations. Conventional modeling tools are unable to resolve sufficiently all of these relevant processes when modeling CO₂ migration in large-scale geological systems. Herein, we present a vertically-integrated approach to CO₂ modeling that employs upscaled representations of these subgrid processes. We apply the model to the Johansen formation, a prospective site for sequestration of Norwegian CO₂ emissions, and explore the sensitivity of CO₂ migration and trapping to subscale physics. Model results show the relative importance of different physical processes in large-scale simulations. The ability of models such as this to capture the relevant physical processes at large spatial and temporal scales is important for prediction and analysis of CO₂ storage sites.
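    For orientation, a generic sharp-interface vertical-equilibrium balance for the plume thickness h(x, y, t) in a formation of thickness H (notation mine; the paper's model adds upscaled capillary-fringe and convective-dissolution terms):

```latex
\phi\,(1 - s_{wr})\,\frac{\partial h}{\partial t}
+ \nabla_{\parallel} \cdot \Big[ f(h)\,\mathbf{U}
+ f(h)\,\lambda_w (H - h)\, k\, \Delta\rho\, g\, \nabla_{\parallel} (z_T + h) \Big] = q,
\qquad
f(h) = \frac{\lambda_c h}{\lambda_c h + \lambda_w (H - h)},
```

    where λ_c and λ_w are the CO₂ and brine mobilities, U is the vertically integrated total flux, and z_T is the caprock elevation.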

  6. Does Business Model Affect CSR Involvement? A Survey of Polish Manufacturing and Service Companies

    Directory of Open Access Journals (Sweden)

    Marzanna Katarzyna Witek-Hajduk

    2016-02-01

    The study explores links between the types of business models used by companies and their involvement in CSR. As the main part of our conceptual framework we used a business model taxonomy developed by Dudzik and Witek-Hajduk, which identifies five types of models: traditionalists, market players, contractors, distributors, and integrators. From shared characteristics of the business model profiles, we proposed that market players and integrators would show significantly higher levels of involvement in CSR than the three other classes of companies. Among other things, both market players and integrators relied strongly on building their own brand value and fostering harmonious supply channel relations, which served as a rationale for our hypothesis. The data for the study were obtained through a combined CATI and CAWI survey of a group of 385 managers of medium and large enterprises. The sample was representative of the three Polish industries of chemical manufacturing, food production, and retailing. Statistical methods included confirmatory factor analysis and one-way ANOVA with contrasts and post hoc tests. The findings supported our hypothesis, showing that market players and integrators were indeed more engaged in CSR than the other groups of firms. This may suggest that managers in control of these companies could bolster the integrity of their business models by increasing CSR involvement. Another important contribution of the study was to propose and validate a versatile scale for assessing CSR involvement, which showed measurement invariance for all involved industries.

  7. Secondary pancreatic involvement by a diffuse large B-cell lymphoma presenting as acute pancreatitis

    Institute of Scientific and Technical Information of China (English)

    M Wasif Saif; Sapna Khubchandani; Marek Walczak

    2007-01-01

    Diffuse large B-cell lymphoma is the most common type of non-Hodgkin's lymphoma. More than 50% of patients have some site of extra-nodal involvement at diagnosis, including the gastrointestinal tract and bone marrow. However, a diffuse large B-cell lymphoma presenting as acute pancreatitis is rare. A 57-year-old female presented with abdominal pain and matted lymph nodes in her axilla. She was admitted with a diagnosis of acute pancreatitis. Abdominal computed tomography (CT) scan showed a diffusely enlarged pancreas due to an infiltrative neoplasm, and peripancreatic lymphadenopathy. Biopsy of the axillary mass revealed a large B-cell lymphoma. The patient was classified as stage IV, based on the Ann Arbor classification, and as having a high-risk lymphoma, based on the International Prognostic Index. She was started on chemotherapy with CHOP (cyclophosphamide, doxorubicin, vincristine and prednisone). Within a week after chemotherapy, the patient's abdominal pain resolved. Follow-up CT scan of the abdomen revealed a marked decrease in the size of the pancreas and the peripancreatic lymphadenopathy. A literature search revealed only seven cases of primary involvement of the pancreas in B-cell lymphoma presenting as acute pancreatitis. However, only one case of secondary pancreatic involvement by B-cell lymphoma presenting as acute pancreatitis has been published. Our case appears to be the second report of such a manifestation. Both cases responded well to chemotherapy.

  8. ALK-positive anaplastic large cell lymphoma with soft tissue involvement in a young woman

    Directory of Open Access Journals (Sweden)

    Gao KH

    2016-07-01

    Introduction: Anaplastic large cell lymphoma (ALCL) is a type of non-Hodgkin lymphoma that has strong expression of CD30. ALCL can sometimes involve the bone marrow, and in advanced stages it can produce destructive extranodal lesions. However, anaplastic lymphoma kinase-positive (ALK+) ALCL with soft tissue involvement is very rare. Case report: A 35-year-old woman presented with waist pain for over 1 month. Biopsy of the soft tissue lesions showed that the cells were positive for ALK-1, CD30, TIA-1, Granzyme B, CD4, CD8, and Ki67 (90%+) and negative for CD3, CD5, CD20, CD10, cytokeratin (CK), TdT, HMB-45, epithelial membrane antigen (EMA), and pan-CK, which identified ALCL. After six cycles of the Hyper-CVAD/MA regimen, she achieved partial remission. Three months later, she died due to disease progression. Conclusion: This case illustrates an unusual presentation of ALCL in soft tissue with a poor response to chemotherapy. Because of the tendency for rapid progression, ALCL in young adults with extranodal lesions is often treated with high-grade chemotherapy, such as Hyper-CVAD/MA. Keywords: anaplastic large cell lymphoma, ALK+, soft tissue involvement, Hyper-CVAD/MA

  9. Bullying Prevention and the Parent Involvement Model

    Science.gov (United States)

    Kolbert, Jered B.; Schultz, Danielle; Crothers, Laura M.

    2014-01-01

    A recent meta-analysis of bullying prevention programs provides support for social-ecological theory, in which parent involvement addressing child bullying behaviors is seen as important in preventing school-based bullying. The purpose of this manuscript is to suggest how Epstein and colleagues' parent involvement model can be used as a…

  10. Modeling interdisciplinary activities involving Mathematics

    DEFF Research Database (Denmark)

    Iversen, Steffen Møllegaard

    2006-01-01

    In this paper a didactical model is presented. The goal of the model is to work as a didactical tool, or conceptual frame, for developing, carrying through and evaluating interdisciplinary activities involving the subject of mathematics and philosophy in the high schools. Through the terms of Horizontal Intertwining, Vertical Structuring and Horizontal Propagation, the model consists of three phases, each considering different aspects of the nature of interdisciplinary activities. The theoretical modelling is inspired by work which focuses on the students' abilities to concept formation in expanded domains (Michelsen, 2001, 2005a, 2005b). Furthermore, the theoretical description rests on a series of qualitative interviews with teachers from the Danish high school (grades 9-11) conducted recently. The special case of concrete interdisciplinary activities between mathematics and philosophy is also...

  11. Learning models of activities involving interacting objects

    DEFF Research Database (Denmark)

    Manfredotti, Cristina; Pedersen, Kim Steenstrup; Hamilton, Howard J.

    2013-01-01

    We propose the LEMAIO multi-layer framework, which makes use of hierarchical abstraction to learn models for activities involving multiple interacting objects from time sequences of data concerning the individual objects. Experiments in the sea navigation domain yielded learned models that were t...

  12. Modelling hydrologic and hydrodynamic processes in basins with large semi-arid wetlands

    Science.gov (United States)

    Fleischmann, Ayan; Siqueira, Vinícius; Paris, Adrien; Collischonn, Walter; Paiva, Rodrigo; Pontes, Paulo; Crétaux, Jean-François; Bergé-Nguyen, Muriel; Biancamaria, Sylvain; Gosset, Marielle; Calmant, Stephane; Tanimoun, Bachir

    2018-06-01

    Hydrologic and hydrodynamic modelling proves to be an important tool for integrated evaluation of hydrological processes in such poorly gauged, large-scale basins. We hope that this model application provides new ways forward for large-scale model development in such systems, involving semi-arid regions and complex floodplains.

  13. Iron Malabsorption in a Patient With Large Cell Lymphoma Involving the Duodenum

    Science.gov (United States)

    1992-01-01

    …hemoglobin. …lymphomas (5-7). The presenting symptoms mimic those of celiac disease and include … compounded the anemia in a patient with diffuse large cell lymphoma involving the … The chest radiograph in May demonstrated an anterior me… …tion in celiac disease were reversible by the institution of a gluten… etiologies (e.g., celiac disease, pancreatic insufficiency) … (usually 2-3 h) is expected in patients who are iron deficient and have normal …

  14. Managing large-scale models: DBS

    International Nuclear Information System (INIS)

    1981-05-01

    A set of fundamental management tools for developing and operating a large-scale model and data base system is presented. Based on experience in operating and developing a large-scale computerized system, the only reasonable way to gain strong management control of such a system is to implement appropriate controls and procedures. Chapter I discusses the purpose of the book. Chapter II classifies a broad range of generic management problems into three groups: documentation, operations, and maintenance. First, system problems are identified, then solutions for gaining management control are discussed. Chapters III, IV, and V present practical methods for dealing with these problems. These methods were developed for managing SEAS but have general application for large-scale models and data bases.

  15. Investigation of large α production in reactions involving weakly bound 7Li

    Science.gov (United States)

    Pandit, S. K.; Shrivastava, A.; Mahata, K.; Parkar, V. V.; Palit, R.; Keeley, N.; Rout, P. C.; Kumar, A.; Ramachandran, K.; Bhattacharyya, S.; Nanal, V.; Palshetkar, C. S.; Nag, T. N.; Gupta, Shilpi; Biswas, S.; Saha, S.; Sethi, J.; Singh, P.; Chatterjee, A.; Kailas, S.

    2017-10-01

    The origin of the large α-particle production cross sections in systems involving weakly bound 7Li projectiles has been investigated by measuring the cross sections of all possible fragment capture as well as complete fusion, using the particle-γ coincidence, in-beam, and off-beam γ-ray counting techniques for the 7Li+93Nb system at near Coulomb barrier energies. Almost all of the inclusive α-particle yield has been accounted for. While the t-capture mechanism is found to be dominant (∼70%), compound nuclear evaporation and breakup processes contribute ∼15% each to the inclusive α-particle production in the measured energy range. Systematic behavior of the t-capture and inclusive α cross sections for reactions involving 7Li over a wide mass range is also reported.

  16. Exploiting multi-scale parallelism for large scale numerical modelling of laser wakefield accelerators

    International Nuclear Information System (INIS)

    Fonseca, R A; Vieira, J; Silva, L O; Fiuza, F; Davidson, A; Tsung, F S; Mori, W B

    2013-01-01

    A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ∼10⁶ cores and sustained performance over ∼2 PFlops are demonstrated, opening the way for large scale modelling of LWFA scenarios. (paper)

  17. A turbulence model for large interfaces in high Reynolds two-phase CFD

    International Nuclear Information System (INIS)

    Coste, P.; Laviéville, J.

    2015-01-01

    Highlights: • Two-phase CFD commonly involves interfaces much larger than the computational cells. • A two-phase turbulence model is developed to better take them into account. • It solves k–ε transport equations in each phase. • The special treatments and transfer terms at large interfaces are described. • Validation cases are presented. - Abstract: A model for two-phase (six-equation) CFD modelling of turbulence is presented, for the regions of the flow where the liquid–gas interface takes place on length scales which are much larger than the typical computational cell size. In the other regions of the flow, the liquid or gas volume fractions range from 0 to 1. Heat and mass transfer and compressibility of the fluids are included in the system, which is used at high Reynolds numbers in large-scale industrial calculations. In this context, a model based on k and ε transport equations in each phase was chosen. The paper describes the model, with a focus on the large interfaces, which require special treatments and transfer terms between the phases, including some approaches inspired from wall functions. The validation of the model is based on high Reynolds number experiments with turbulent quantity measurements of a liquid jet impinging on a free surface and an air–water stratified flow. A steam–water stratified condensing flow experiment is also used for an indirect validation in the case of heat and mass transfer.
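    Schematically, the per-phase transport equations have the standard single-phase form weighted by the phase fraction α_k, plus interface terms; a generic sketch (the large-interface source Π_k^LI is the model-specific part):

```latex
\frac{\partial}{\partial t}(\alpha_k \rho_k k_k)
+ \nabla \cdot (\alpha_k \rho_k \mathbf{u}_k k_k)
= \nabla \cdot \Big[ \alpha_k \Big( \mu_k + \frac{\mu_{t,k}}{\sigma_k} \Big) \nabla k_k \Big]
+ \alpha_k (P_k - \rho_k \varepsilon_k) + \Pi_k^{LI},
```

    with an analogous equation for ε_k; near a large interface, the wall-function-like treatments mentioned above replace the standard boundary conditions.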

  18. Abnormal binding and disruption in large scale networks involved in human partial seizures

    Directory of Open Access Journals (Sweden)

    Bartolomei Fabrice

    2013-12-01

    There is a marked increase in the amount of electrophysiological and neuroimaging work dealing with the study of large scale brain connectivity in the epileptic brain. Our view of the epileptogenic process in the brain has largely evolved over the last twenty years, from the historical concept of an "epileptic focus" to a more complex description of "epileptogenic networks" involved in the genesis and "propagation" of epileptic activities. In particular, a large number of studies have been dedicated to the analysis of intracerebral EEG signals to characterize the dynamics of interactions between brain areas during temporal lobe seizures. These studies have reported that large scale functional connectivity is dramatically altered during seizures, particularly during temporal lobe seizure genesis and development. Dramatic changes in neural synchrony provoked by epileptic rhythms are also responsible for the production of ictal symptoms or changes in the patient's behaviour, such as automatisms, emotional changes or consciousness alteration. Besides these studies dedicated to seizures, large-scale network connectivity during the interictal state has also been investigated, not only to define biomarkers of epileptogenicity but also to better understand the cognitive impairments observed between seizures.

  19. Compilation of information on uncertainties involved in deposition modeling

    International Nuclear Information System (INIS)

    Lewellen, W.S.; Varma, A.K.; Sheng, Y.P.

    1985-04-01

    The current generation of dispersion models contains very simple parameterizations of deposition processes. The analysis here looks at the physical mechanisms governing these processes in an attempt to see whether more valid parameterizations are available and what level of uncertainty is involved in either these simple parameterizations or any more advanced parameterization. The report is composed of three parts. The first, on dry deposition model sensitivity, provides an estimate of the uncertainty existing in current estimates of the deposition velocity due to uncertainties in independent variables such as meteorological stability, particle size, surface chemical reactivity and canopy structure. The range of uncertainty estimated for an appropriate dry deposition velocity for a plume generated by a nuclear power plant accident is three orders of magnitude. The second part discusses the uncertainties involved in precipitation scavenging rates for effluents resulting from a nuclear reactor accident. The conclusion is that major uncertainties are involved, both as a result of the natural variability of the atmospheric precipitation process and due to our incomplete understanding of the underlying process. The third part reviews the important problems associated with modeling the interaction between the atmosphere and a forest, and gives an indication of the magnitude of the problem involved in modeling dry deposition in such environments. Separate analyses have been performed for each part and are contained in the EDB.
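    For reference, the simple parameterization in question is usually the resistance analogy for the dry deposition velocity:

```latex
v_d = \frac{1}{R_a + R_b + R_c},
```

    where R_a is the aerodynamic resistance (stability-dependent), R_b the quasi-laminar boundary-layer resistance (particle-size-dependent), and R_c the surface or canopy resistance (chemistry- and canopy-dependent); the three orders of magnitude of uncertainty quoted above enter through these resistances.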

  20. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  1. Efficient querying of large process model repositories

    NARCIS (Netherlands)

    Jin, Tao; Wang, Jianmin; La Rosa, M.; Hofstede, ter A.H.M.; Wen, Lijie

    2013-01-01

    Recent years have seen an increased uptake of business process management technology in industries. This has resulted in organizations trying to manage large collections of business process models. One of the challenges facing these organizations concerns the retrieval of models from large business process model repositories.

  2. Towards agile large-scale predictive modelling in drug discovery with flow-based programming design principles.

    Science.gov (United States)

    Lampa, Samuel; Alvarsson, Jonathan; Spjuth, Ola

    2016-01-01

    Predictive modelling in drug discovery is challenging to automate, as it often contains multiple analysis steps and might involve cross-validation and parameter tuning that create complex dependencies between tasks. With large-scale data or when using computationally demanding modelling methods, e-infrastructures such as high-performance or cloud computing are required, adding to the existing challenges of fault-tolerant automation. Workflow management systems can aid in many of these challenges, but the currently available systems lack the functionality needed to enable agile and flexible predictive modelling. We here present an approach inspired by elements of the flow-based programming paradigm, implemented as an extension of the Luigi system, which we name SciLuigi. We also discuss the experiences from using the approach when modelling a large set of biochemical interactions using a shared computer cluster.
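    For readers unfamiliar with the underlying system: Luigi expresses a workflow as tasks whose dependencies are declared in requires() and whose outputs are file targets, which makes the dependency graph explicit and restartable. A minimal sketch using only the base Luigi API (SciLuigi layers a separate network-definition step on top; all task and file names below are invented):

```python
import luigi

def train(text: str) -> str:
    """Placeholder for a real modelling step."""
    return f"model trained on {len(text)} bytes"

class Preprocess(luigi.Task):
    dataset = luigi.Parameter()
    def output(self):
        return luigi.LocalTarget(f"{self.dataset}.clean")
    def run(self):
        with self.output().open("w") as f:
            f.write("cleaned records")

class TrainModel(luigi.Task):
    dataset = luigi.Parameter()
    def requires(self):
        return Preprocess(dataset=self.dataset)
    def output(self):
        return luigi.LocalTarget(f"{self.dataset}.model")
    def run(self):
        with self.input().open() as fin, self.output().open("w") as fout:
            fout.write(train(fin.read()))

if __name__ == "__main__":
    luigi.build([TrainModel(dataset="interactions")], local_scheduler=True)
```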

  3. Five challenges for stochastic epidemic models involving global transmission

    Directory of Open Access Journals (Sweden)

    Tom Britton

    2015-03-01

    The most basic stochastic epidemic models are those involving global transmission, meaning that infection rates depend only on the type and state of the individuals involved, and not on their location in the population. Simple as they are, there are still several open problems for such models. For example, when will such an epidemic go extinct, and with what probability (questions whose answers depend on whether the population is fixed, changing or growing)? How can a model be defined that explains the sometimes observed scenario of frequent mid-sized epidemic outbreaks? How can evolution of the infectious agent's transmission rates be modelled and fitted to data in a robust way?
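    The extinction question has a classical answer in the branching-process approximation: starting from one infective, the epidemic dies out with probability min(1, 1/R0), where R0 = β/γ. A small simulation sketch (all names mine) that reproduces this:

```python
import random

def p_extinction(beta: float, gamma: float, trials: int = 10_000) -> float:
    """Branching-process approximation to early SIR dynamics: each event is
    a transmission with probability beta/(beta+gamma), else a recovery.
    Theory: P(extinction) = min(1, gamma/beta) starting from one infective."""
    p_birth = beta / (beta + gamma)
    extinct = 0
    for _ in range(trials):
        i = 1
        while 0 < i < 1_000:          # 1,000 infectives ~ "a large outbreak"
            i += 1 if random.random() < p_birth else -1
        extinct += (i == 0)
    return extinct / trials

print(p_extinction(1.5, 1.0))  # ~0.667, i.e. 1/R0 for R0 = 1.5
```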

  4. Constituent models and large transverse momentum reactions

    International Nuclear Information System (INIS)

    Brodsky, S.J.

    1975-01-01

    The discussion of constituent models and large transverse momentum reactions includes the structure of hard scattering models, dimensional counting rules for large transverse momentum reactions, dimensional counting and exclusive processes, the deuteron form factor, applications to inclusive reactions, predictions for meson and photon beams, the charge-cubed test for the e±p → e±γX asymmetry, the quasi-elastic peak in inclusive hadronic reactions, correlations, and the multiplicity bump at large transverse momentum. Also covered are the partition method for bound state calculations, proofs of dimensional counting, minimal neutralization and quark–quark scattering, the development of the constituent interchange model, and the A dependence of high transverse momentum reactions.
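    The dimensional counting rules referred to take the form (in the standard Brodsky-Farrar notation):

```latex
\frac{d\sigma}{dt}(AB \to CD) \;\sim\; \frac{f(\theta_{\mathrm{cm}})}{s^{\,n-2}},
\qquad n = n_A + n_B + n_C + n_D,
```

    where n counts the elementary constituents in the exclusive reaction; the same counting gives form factors F(Q²) ~ (Q²)^{1-n}, so the deuteron (n = 6 quarks) falls as (Q²)^{-5}.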

  5. Virtualizing ancient Rome: 3D acquisition and modeling of a large plaster-of-Paris model of imperial Rome

    Science.gov (United States)

    Guidi, Gabriele; Frischer, Bernard; De Simone, Monica; Cioci, Andrea; Spinetti, Alessandro; Carosso, Luca; Micoli, Laura L.; Russo, Michele; Grasso, Tommaso

    2005-01-01

    Computer modeling through digital range images has been used for many applications, including 3D modeling of objects belonging to our cultural heritage. The scales involved range from small objects (e.g. pottery), to middle-sized works of art (statues, architectural decorations), up to very large structures (architectural and archaeological monuments). For any of these applications, suitable sensors and methodologies have been explored by different authors. The object to be modeled within this project is the "Plastico di Roma antica," a large plaster-of-Paris model of imperial Rome (16 x 17 meters) created in the last century. Its overall size therefore demands an acquisition approach typical of large structures, but it is also characterized by extremely tiny details typical of small objects (houses are a few centimeters high; their doors, windows, etc. are smaller than 1 centimeter). This paper gives an account of the procedures followed for solving this "contradiction" and describes how a huge 3D model was acquired and generated by using a special metrology laser radar. The procedures for reorienting in a single reference system the huge point clouds obtained after each acquisition phase, thanks to the measurement of fixed redundant references, are described. The data set was split into smaller sub-areas of 2 x 2 meters each for purposes of mesh editing. This subdivision was necessary owing to the huge number of points in each individual scan (50-60 million). The final merge of the edited parts made it possible to create a single mesh. All these processes were carried out with software specifically designed for this project, since no commercial package could be found that was suitable for managing such a large number of points. Preliminary models are presented. Finally, the significance of the project is discussed in terms of the overall project known as "Rome Reborn," of which the present acquisition is an important component.

  6. Modeling and analysis of large-eddy simulations of particle-laden turbulent boundary layer flows

    KAUST Repository

    Rahman, Mustafa M.

    2017-01-05

    We describe a framework for the large-eddy simulation of solid particles suspended and transported within an incompressible turbulent boundary layer (TBL). For the fluid phase, the large-eddy simulation (LES) of the incompressible turbulent boundary layer employs the stretched spiral vortex subgrid-scale model and a virtual wall model similar to the work of Cheng, Pullin & Samtaney (J. Fluid Mech., 2015). This LES model is virtually parameter-free and involves no active filtering of the computed velocity field. Furthermore, a recycling method to generate turbulent inflow is implemented. For the particle phase, the direct quadrature method of moments (DQMOM) is chosen, in which the weights and abscissas of the quadrature approximation are tracked directly rather than the moments themselves. The numerical method in this framework is based on a fractional-step method with an energy-conservative fourth-order finite difference scheme on a staggered mesh. The code is parallelized based on the standard message passing interface (MPI) protocol and is designed for distributed-memory machines. It is proposed to utilize this framework to examine the transport of particles in very large-scale simulations. The solver is validated using the well-known Taylor-Green vortex case. A large-scale sandstorm case is simulated, and the altitude variations of number density along with its fluctuations are quantified.

  7. Economic Model Predictive Control for Large-Scale and Distributed Energy Systems

    DEFF Research Database (Denmark)

    Standardi, Laura

    In this thesis, we consider control strategies for large and distributed energy systems that are important for the implementation of smart grid technologies. An electrical grid has to ensure reliability and avoid long-term interruptions in the power supply. Moreover, the share of Renewable Energy Sources (RESs) in the smart grids is increasing. These energy sources bring uncertainty to the production due to their fluctuations. Hence, smart grids need suitable control systems that are able to continuously balance power production and consumption. We apply the Economic Model Predictive Control (EMPC) strategy to optimise the economic performances of the energy systems and to balance the power production and consumption. In the case of large-scale energy systems, the electrical grid connects a high number of power units. Because of this, the related control problem involves a high number of variables.
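    In generic form (a sketch; not the thesis's exact formulation), the EMPC problem solved at each sampling instant is a linearly constrained optimization of operating cost over a horizon N:

```latex
\min_{u_0, \dots, u_{N-1}} \; \sum_{t=0}^{N-1} c_t^{\top} u_t
\quad \text{s.t.} \quad
x_{t+1} = A x_t + B u_t,
\qquad
u_{\min} \le u_t \le u_{\max},
\qquad
\mathbf{1}^{\top} u_t = d_t,
```

    where u_t stacks the set-points of the power units, c_t their time-varying economic costs, and the last constraint enforces the production-consumption balance against the forecast demand d_t. The number of decision variables grows with both the horizon and the number of units, which is the scale problem addressed here.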

  8. Shell model in large spaces and statistical spectroscopy

    International Nuclear Information System (INIS)

    Kota, V.K.B.

    1996-01-01

    For many nuclear structure problems of current interest it is essential to deal with the shell model in large spaces. For this, three different approaches are now in use, two of which are: (i) the conventional shell model diagonalization approach, taking into account recent advances in computer technology; (ii) the shell model Monte Carlo method. A brief overview of these two methods is given. Large space shell model studies raise fundamental questions regarding the information content of the shell model spectrum of complex nuclei. This led to the third approach: statistical spectroscopy methods. The principles of statistical spectroscopy have their basis in nuclear quantum chaos, and they are described in some detail (substantiated by large scale shell model calculations). (author)

  9. The Missing Stakeholder Group: Why Patients Should be Involved in Health Economic Modelling.

    Science.gov (United States)

    van Voorn, George A K; Vemer, Pepijn; Hamerlijnck, Dominique; Ramos, Isaac Corro; Teunissen, Geertruida J; Al, Maiwenn; Feenstra, Talitha L

    2016-04-01

    Evaluations of healthcare interventions, e.g. new drugs or other new treatment strategies, commonly include a cost-effectiveness analysis (CEA) that is based on the application of health economic (HE) models. As end users, patients are important stakeholders regarding the outcomes of CEAs, yet their knowledge of HE model development and application, or their involvement therein, is absent. This paper considers possible benefits and risks of patient involvement in HE model development and application for modellers and patients. An exploratory review of the literature has been performed on stakeholder-involved modelling in various disciplines. In addition, Dutch patient experts have been interviewed about their experience in, and opinion about, the application of HE models. Patients have little to no knowledge of HE models and are seldom involved in HE model development and application. Benefits of becoming involved would include a greater understanding and possible acceptance by patients of HE model application, improved model validation, and a more direct infusion of patient expertise. Risks would include patient bias and increased costs of modelling. Patient involvement in HE modelling seems to carry several benefits as well as risks. We claim that the benefits may outweigh the risks and that patients should become involved.

  10. Water and salt balance modelling to predict the effects of land-use changes in forested catchments. 3. The large catchment model

    Science.gov (United States)

    Sivapalan, Murugesu; Viney, Neil R.; Jeevaraj, Charles G.

    1996-03-01

    This paper presents an application of a long-term, large catchment-scale water balance model developed to predict the effects of forest clearing in the south-west of Western Australia. The conceptual model simulates the basic daily water balance fluxes in forested catchments before and after clearing. The large catchment is divided into a number of sub-catchments (1-5 km2 in area), which are taken as the fundamental building blocks of the large catchment model. The responses of the individual subcatchments to rainfall and pan evaporation are conceptualized in terms of three inter-dependent subsurface stores A, B and F, which are considered to represent the moisture states of the subcatchments. Details of the subcatchment-scale water balance model have been presented earlier in Part 1 of this series of papers. The response of any subcatchment is a function of its local moisture state, as measured by the local values of the stores. The variations of the initial values of the stores among the subcatchments are described in the large catchment model through simple, linear equations involving a number of similarity indices representing topography, mean annual rainfall and level of forest clearing. The model is applied to the Conjurunup catchment, a medium-sized (39.6 km2) catchment in the south-west of Western Australia. The catchment has been heterogeneously (in space and time) cleared for bauxite mining and subsequently rehabilitated. For this application, the catchment is divided into 11 subcatchments. The model parameters are estimated by calibration, comparing observed and predicted runoff values over an 18-year period for the large catchment and two of the subcatchments. Excellent fits are obtained.
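
    The store-based daily water balance described above can be illustrated with a single-bucket sketch; the update rule below is a generic conceptual-store formulation with invented parameters, not the actual A/B/F equations of the paper:

```python
# Hedged sketch of a daily bucket-style water balance, in the spirit of
# the subcatchment stores described above (all parameter names and
# values are illustrative, not those of the paper).
def daily_step(store, rain, pan_evap, k_runoff=0.05, k_et=0.7,
               capacity=120.0):
    store += rain                           # rainfall input (mm)
    et = min(k_et * pan_evap, store)        # moisture-limited ET
    store -= et
    runoff = k_runoff * max(store - capacity, 0.0)  # saturation excess
    store -= runoff
    return store, runoff

store, flows = 150.0, []
for rain, pe in [(20.0, 4.0), (0.0, 5.0), (35.0, 3.0)]:
    store, q = daily_step(store, rain, pe)
    flows.append(round(q, 2))
print(round(store, 1), flows)
```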

  11. Large-scale modelling of neuronal systems

    International Nuclear Information System (INIS)

    Castellani, G.; Verondini, E.; Giampieri, E.; Bersani, F.; Remondini, D.; Milanesi, L.; Zironi, I.

    2009-01-01

    The brain is, without any doubt, the most complex system of the human body. Its complexity is also due to the extremely high number of neurons, as well as the huge number of synapses connecting them. Each neuron is capable of performing complex tasks, like learning and memorizing a large class of patterns. The simulation of large neuronal systems is challenging for both technological and computational reasons, and can open new perspectives for the comprehension of brain functioning. A well-known and widely accepted model of bidirectional synaptic plasticity, the BCM model, is formulated as a differential equation approach based on bistability and selectivity properties. We have modified the BCM model, extending it from a single-neuron to a whole-network model. This new model is capable of generating interesting network topologies starting from a small number of local parameters describing the interaction between incoming and outgoing links of each neuron. We have characterized this model in terms of complex network theory, showing how this learning rule can support network generation.
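
    For readers unfamiliar with the BCM rule the abstract refers to, a single-neuron sketch follows; the learning rate, threshold time constant and input statistics are illustrative choices, not the network extension developed in the paper:

```python
import numpy as np

# Minimal single-neuron BCM sketch (illustrative constants): the
# synaptic change depends on postsynaptic activity y relative to a
# sliding threshold theta, which tracks the recent average of y**2.
rng = np.random.default_rng(1)
w = rng.uniform(0.1, 0.5, size=10)       # synaptic weights
theta, tau_theta, eta = 1.0, 100.0, 0.01

for step in range(1000):
    x = rng.random(10)                   # presynaptic input pattern
    y = float(w @ x)                     # postsynaptic activity
    w += eta * x * y * (y - theta)       # BCM: bidirectional plasticity
    w = np.clip(w, 0.0, None)            # keep weights non-negative
    theta += (y**2 - theta) / tau_theta  # sliding modification threshold
print(np.round(w, 3), round(theta, 3))
```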

  12. Large-scale groundwater modeling using global datasets: a test case for the Rhine-Meuse basin

    Directory of Open Access Journals (Sweden)

    E. H. Sutanudjaja

    2011-09-01

    The current generation of large-scale hydrological models does not include a groundwater flow component. Large-scale groundwater models, involving aquifers and basins of multiple countries, are still rare, mainly due to a lack of hydro-geological data, which are usually only available in developed countries. In this study, we propose a novel approach to construct large-scale groundwater models by using global datasets that are readily available. As the test-bed, we use the combined Rhine-Meuse basin, which contains groundwater head data used to verify the model output. We start by building a distributed land surface model (30 arc-second resolution) to estimate groundwater recharge and river discharge. Subsequently, a MODFLOW transient groundwater model is built and forced by the recharge and surface water levels calculated by the land surface model. Results are promising despite the fact that we still use an offline procedure to couple the land surface and MODFLOW groundwater models (i.e. the simulations of both models are performed separately). The simulated river discharges compare well to the observations. Moreover, based on our sensitivity analysis, in which we run several groundwater model scenarios with various hydro-geological parameter settings, we observe that the model can reasonably well reproduce the observed groundwater head time series. However, we note that there are still some limitations in the current approach, specifically because the offline-coupling technique simplifies the dynamic feedbacks between surface water levels and groundwater heads, and between soil moisture states and groundwater heads. Also, the current sensitivity analysis ignores the uncertainty of the land surface model output. Despite these limitations, we argue that the results of the current model show promise for large-scale groundwater modeling practices, including for data-poor environments and at the global scale.
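
    The offline (one-way) coupling described above reduces to a simple control flow; the sketch below is schematic, with placeholder functions rather than real MODFLOW or land-surface interfaces:

```python
# Schematic of the offline (one-way) coupling described above; both
# model-call functions are hypothetical placeholders, not a real
# MODFLOW or land-surface API.
def run_land_surface_model(forcing_series):
    """Return daily recharge and surface-water levels (stub)."""
    ...

def run_groundwater_model(recharge, river_stage):
    """Return simulated groundwater heads (stub)."""
    ...

def offline_coupling(forcing_series):
    # Step 1: the land surface model is run for the whole period first.
    recharge, river_stage = run_land_surface_model(forcing_series)
    # Step 2: its outputs force the groundwater model. No feedback of
    # heads onto soil moisture or surface-water levels is possible,
    # which is exactly the limitation the authors note.
    heads = run_groundwater_model(recharge, river_stage)
    return heads
```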

  13. Using Agent Base Models to Optimize Large Scale Network for Large System Inventories

    Science.gov (United States)

    Shameldin, Ramez Ahmed; Bowling, Shannon R.

    2010-01-01

    The aim of this paper is to use Agent-Based Models (ABM) to optimize large-scale network handling capabilities for large system inventories and to implement strategies for reducing capital expenses. The models used in this paper either use computational algorithms or procedure implementations developed in Matlab to simulate agent-based models in a principal programming language and mathematical theory using clusters; these clusters provide high-performance computation to run the program in parallel. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.

  14. Disinformative data in large-scale hydrological modelling

    Directory of Open Access Journals (Sweden)

    A. Kauffeldt

    2013-07-01

    Large-scale hydrological modelling has become an important tool for the study of global and regional water resources, climate impacts, and water-resources management. However, modelling efforts over large spatial domains are fraught with problems of data scarcity, uncertainties and inconsistencies between model forcing and evaluation data. Model-independent methods to screen and analyse data for such problems are needed. This study aimed at identifying data inconsistencies in global datasets using a pre-modelling analysis, inconsistencies that can be disinformative for subsequent modelling. The consistency between (i) basin areas for different hydrographic datasets, and (ii) climate data (precipitation and potential evaporation) and discharge data, was examined in terms of how well basin areas were represented in the flow networks and the possibility of water-balance closure. It was found that (i) most basins could be well represented in both gridded basin delineations and polygon-based ones, but some basins exhibited large area discrepancies between flow-network datasets and archived basin areas, (ii) basins exhibiting too-high runoff coefficients were abundant in areas where precipitation data were likely affected by snow undercatch, and (iii) the occurrence of basins exhibiting losses exceeding the potential-evaporation limit was strongly dependent on the potential-evaporation data, both in terms of numbers and geographical distribution. Some inconsistencies may be resolved by considering sub-grid variability in climate data, surface-dependent potential-evaporation estimates, etc., but further studies are needed to determine the reasons for the inconsistencies found. Our results emphasise the need for pre-modelling data analysis to identify dataset inconsistencies as an important first step in any large-scale study. Applying data-screening methods before modelling should also increase our chances to draw robust conclusions from subsequent modelling.
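
    The water-balance screening the authors describe boils down to simple per-basin checks; a toy version with invented numbers:

```python
import pandas as pd

# Sketch of the pre-modelling screening described above: flag basins
# whose water balance cannot close (all data here are illustrative).
basins = pd.DataFrame({
    "basin": ["A", "B", "C"],
    "P":     [800.0, 400.0, 1200.0],   # precipitation, mm/yr
    "Q":     [300.0, 450.0, 500.0],    # discharge, mm/yr
    "PET":   [400.0, 700.0, 900.0],    # potential evaporation, mm/yr
})
basins["runoff_coeff"] = basins["Q"] / basins["P"]
# Q > P suggests snow undercatch or basin-area errors; losses above
# PET suggest inconsistent evaporation data.
basins["flag_high_rc"] = basins["runoff_coeff"] > 1.0
basins["flag_et_excess"] = (basins["P"] - basins["Q"]) > basins["PET"]
print(basins)
```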

  15. How and Why Fathers Are Involved in Their Children's Education: Gendered Model of Parent Involvement

    Science.gov (United States)

    Kim, Sung won

    2018-01-01

    Accumulating evidence points to the unique contributions fathers make to their children's academic outcomes. However, the large body of multi-disciplinary literature on fatherhood does not address how fathers engage in specific practices relevant to education, while the educational research in the United States focused on parent involvement often…

  16. Regularization modeling for large-eddy simulation

    NARCIS (Netherlands)

    Geurts, Bernardus J.; Holm, D.D.

    2003-01-01

    A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of

  17. Long-Term Calculations with Large Air Pollution Models

    DEFF Research Database (Denmark)

    Ambelas Skjøth, C.; Bastrup-Birk, A.; Brandt, J.

    1999-01-01

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  18. Automatic Generation of Connectivity for Large-Scale Neuronal Network Models through Structural Plasticity.

    Science.gov (United States)

    Diaz-Pier, Sandra; Naveau, Mikaël; Butz-Ostendorf, Markus; Morrison, Abigail

    2016-01-01

    With the emergence of new high performance computation technology in the last decade, the simulation of large scale neural networks which are able to reproduce the behavior and structure of the brain has finally become an achievable target of neuroscience. Due to the number of synaptic connections between neurons and the complexity of biological networks, most contemporary models have manually defined or static connectivity. However, it is expected that modeling the dynamic generation and deletion of the links among neurons, locally and between different regions of the brain, is crucial to unravel important mechanisms associated with learning, memory and healing. Moreover, for many neural circuits that could potentially be modeled, activity data is more readily and reliably available than connectivity data. Thus, a framework that enables networks to wire themselves on the basis of specified activity targets can be of great value in specifying network models where connectivity data is incomplete or has large error margins. To address these issues, we present an implementation of a model of structural plasticity in the neural network simulator NEST. In this model, synapses consist of two parts, a pre- and a post-synaptic element. Synapses are created and deleted during the execution of the simulation following local homeostatic rules until a mean level of electrical activity is reached in the network. We assess the scalability of the implementation in order to evaluate its potential usage in the self-generation of connectivity of large scale networks. We show and discuss the results of simulations on simple two-population networks and more complex models of the cortical microcircuit involving 8 populations and 4 layers using the new framework.
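
    A toy version of the homeostatic create/delete loop described above (plain Python, not the actual NEST implementation; all constants are invented):

```python
import numpy as np

# Toy sketch of a homeostatic structural-plasticity loop: neurons below
# a target activity grow synaptic elements, which are paired into new
# synapses; overly active neurons prune incoming synapses.
rng = np.random.default_rng(2)
n, target = 50, 5.0
rates = rng.uniform(0.0, 10.0, n)        # mean activity per neuron
synapses = np.zeros((n, n), dtype=int)   # synapse counts between pairs

for step in range(200):
    deficit = target - rates
    growers = np.flatnonzero(deficit > 0)        # need more input
    if growers.size >= 2:
        pre, post = rng.choice(growers, size=2, replace=False)
        synapses[pre, post] += 1                 # create one synapse
        rates[post] += 0.1                       # new input raises activity
    shrinkers = np.flatnonzero(deficit < -0.5)   # too active: prune
    for j in shrinkers:
        if synapses[:, j].sum() > 0:
            i = rng.choice(np.flatnonzero(synapses[:, j]))
            synapses[i, j] -= 1
            rates[j] -= 0.1
print(round(float(rates.mean()), 2), int(synapses.sum()))
```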

  19. Bone marrow involvement in diffuse large B-cell lymphoma: correlation between FDG-PET uptake and type of cellular infiltrate

    International Nuclear Information System (INIS)

    Paone, Gaetano; Itti, Emmanuel; Lin, Chieh; Meignan, Michel; Haioun, Corinne; Dupuis, Jehan; Gaulard, Philippe

    2009-01-01

    To assess, in patients with diffuse large B-cell lymphoma (DLBCL), whether the low sensitivity of 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) for bone marrow assessment may be explained by histological characteristics of the cellular infiltrate. From a prospective cohort of 110 patients with newly diagnosed aggressive lymphoma, 21 patients with DLBCL had bone marrow involvement. Pretherapeutic FDG-PET images were interpreted visually and semiquantitatively, then correlated with the type of cellular infiltrate and known prognostic factors. Of these 21 patients, 7 (33%) had lymphoid infiltrates with a prominent component of large transformed lymphoid cells (concordant bone marrow involvement, CBMI) and 14 (67%) had lymphoid infiltrates composed of small cells (discordant bone marrow involvement, DBMI). Only 10 patients (48%) had abnormal bone marrow FDG uptake: 6 of the 7 with CBMI and 4 of the 14 with DBMI. Therefore, FDG-PET positivity in the bone marrow was significantly associated with CBMI, while FDG-PET negativity was associated with DBMI (Fisher's exact test, p=0.024). There were no significant differences in gender, age and overall survival between patients with CBMI and DBMI, while the international prognostic index was significantly higher in patients with CBMI. Our study suggests that in patients with DLBCL and bone marrow involvement, bone marrow FDG uptake depends on the type of infiltrate, comprising small (DBMI) or large (CBMI) cells. This may explain the apparent low sensitivity of FDG-PET previously reported for detecting bone marrow involvement. (orig.)

  20. Giant-cell arteritis. Concordance study between aortic CT angiography and FDG-PET/CT in detection of large-vessel involvement

    International Nuclear Information System (INIS)

    Boysson, Hubert de; Dumont, Anael; Boutemy, Jonathan; Maigne, Gwenola; Martin Silva, Nicolas; Sultan, Audrey; Bienvenu, Boris; Aouba, Achille; Liozon, Eric; Ly, Kim Heang; Lambert, Marc; Aide, Nicolas; Manrique, Alain

    2017-01-01

    The purpose of our study was to assess the concordance of aortic CT angiography (CTA) and FDG-PET/CT in the detection of large-vessel involvement at diagnosis in patients with giant-cell arteritis (GCA). We created a multicenter cohort of patients with GCA diagnosed between 2010 and 2015, and who underwent both FDG-PET/CT and aortic CTA before or in the first ten days following treatment introduction. Eight vascular segments were studied on each procedure. We calculated concordance between both imaging techniques in a per-patient and a per-segment analysis, using Cohen's kappa concordance index. We included 28 patients (21/7 women/men, median age 67 [56-82]). Nineteen patients had large-vessel involvement on PET/CT and 18 of these patients also presented positive findings on CTA. In a per-segment analysis, a median of 5 [1-7] and 3 [1-6] vascular territories were involved on positive PET/CT and CTA, respectively (p = 0.03). In qualitative analysis, i.e., positivity of the procedure suggesting a large-vessel involvement, the concordance rate between both procedures was 0.85 [0.64-1]. In quantitative analysis, i.e., per-segment analysis in both procedures, the global concordance rate was 0.64 [0.54-0.75]. Using FDG-PET/CT as a reference, CTA showed excellent sensitivity (95%) and specificity (100%) in a per-patient analysis. In a per-segment analysis, sensitivity and specificity were 61% and 97.9%, respectively. CTA and FDG-PET/CT were both able to detect large-vessel involvement in GCA with comparable results in a per-patient analysis. However, PET/CT showed higher performance in a per-segment analysis, especially in the detection of inflammation of the aorta's branches. (orig.)

  1. Giant-cell arteritis. Concordance study between aortic CT angiography and FDG-PET/CT in detection of large-vessel involvement

    Energy Technology Data Exchange (ETDEWEB)

    Boysson, Hubert de; Dumont, Anael; Boutemy, Jonathan; Maigne, Gwenola; Martin Silva, Nicolas; Sultan, Audrey; Bienvenu, Boris; Aouba, Achille [Caen University Hospital, Department of Internal Medicine, Caen (France); Liozon, Eric; Ly, Kim Heang [Limoges University Hospital, Department of Internal Medicine, Limoges (France); Lambert, Marc [Lille University Hospital, Department of Internal Medicine, Lille (France); Aide, Nicolas [Caen University Hospital, Department of Nuclear Medicine, Caen (France); INSERM U1086 'ANTICIPE', Francois Baclesse Cancer Centre, Caen (France); Manrique, Alain [Caen University Hospital, Department of Nuclear Medicine, Caen (France); Normandy University, Caen (France)

    2017-12-15

    The purpose of our study was to assess the concordance of aortic CT angiography (CTA) and FDG-PET/CT in the detection of large-vessel involvement at diagnosis in patients with giant-cell arteritis (GCA). We created a multicenter cohort of patients with GCA diagnosed between 2010 and 2015, and who underwent both FDG-PET/CT and aortic CTA before or in the first ten days following treatment introduction. Eight vascular segments were studied on each procedure. We calculated concordance between both imaging techniques in a per-patient and a per-segment analysis, using Cohen's kappa concordance index. We included 28 patients (21/7 women/men, median age 67 [56-82]). Nineteen patients had large-vessel involvement on PET/CT and 18 of these patients also presented positive findings on CTA. In a per-segment analysis, a median of 5 [1-7] and 3 [1-6] vascular territories were involved on positive PET/CT and CTA, respectively (p = 0.03). In qualitative analysis, i.e., positivity of the procedure suggesting a large-vessel involvement, the concordance rate between both procedures was 0.85 [0.64-1]. In quantitative analysis, i.e., per-segment analysis in both procedures, the global concordance rate was 0.64 [0.54-0.75]. Using FDG-PET/CT as a reference, CTA showed excellent sensitivity (95%) and specificity (100%) in a per-patient analysis. In a per-segment analysis, sensitivity and specificity were 61% and 97.9%, respectively. CTA and FDG-PET/CT were both able to detect large-vessel involvement in GCA with comparable results in a per-patient analysis. However, PET/CT showed higher performance in a per-segment analysis, especially in the detection of inflammation of the aorta's branches. (orig.)

  2. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications

  3. Large Mammalian Animal Models of Heart Disease

    Directory of Open Access Journals (Sweden)

    Paula Camacho

    2016-10-01

    Due to the biological complexity of the cardiovascular system, animal models are an urgent pre-clinical need for advancing our knowledge of cardiovascular disease and for exploring new drugs to repair the damaged heart. Ideally, a model system should be inexpensive, easily manipulated, reproducible, a biological representative of human disease, and ethically sound. Although a larger animal model is more expensive and difficult to manipulate, its genetic, structural, functional, and even disease similarities to humans make it an ideal model to consider first. This review presents the commonly used large animals (dog, sheep, pig, and non-human primates); less-used large animals (cows, horses) are excluded. The review attempts to introduce unique points for each species regarding its biological properties, degree of susceptibility to developing certain types of heart disease, and methodology for inducing conditions. For example, dogs barely develop myocardial infarction, while dilated cardiomyopathy develops quite often. Based on the similarities of each species to the human, model selection may first consider non-human primates, then pig, sheep, and dog, but it also depends on other factors, for example, purpose, funding, ethics, and policy. We hope this review can serve as a basic outline of large animal models for cardiovascular researchers and clinicians.

  4. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

    Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large scale model testing performed using the large testing machine ZZ 8000 (maximum load 80 MN) at the SKODA WORKS. Results are described from testing the material resistance to non-ductile fracture. The testing included base materials and welded joints. The rated specimen thickness was 150 mm, with defects 15 to 100 mm deep. Results are also presented for nozzles of 850 mm inner diameter at a scale of 1:3; static, cyclic, and dynamic tests were performed with and without surface defects (15, 30 and 45 mm deep). During cyclic tests, the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  5. The Oskarshamn model for public involvement in the siting of nuclear facilities

    Energy Technology Data Exchange (ETDEWEB)

    Aahagen, H. [Ahagen and Co (Sweden); Carlsson, Torsten [Mayor, Oskarshamn (Sweden); Hallberg, K. [Local Competence Building, Oskarshamn (Sweden); Andersson, Kjell [Karinta-Konsult, Taeby (Sweden)

    1999-12-01

    The Oskarshamn model has so far worked extremely well as a tool to achieve openness and public participation. The municipality involvement has been successful in several aspects, e.g.: It has been possible to influence the program, to a large extent, to meet certain municipality conditions and to ensure the local perspective. The local competence has increased to a considerable degree. The activities generated by the six working groups with a total of 40 members have generated a large number of contacts with various organisations, schools, mass media, individuals in the general public and interest groups. For the future, clarification of the disposal method and site selection criteria as well as the site selection process as such is crucial. The municipality has also emphasised the importance of SKB having shown the integration between site selection criteria, the feasibility study and the safety assessment. Furthermore, the programs for the encapsulation facility and the repository must be co-ordinated. For Oskarshamn it will be of utmost importance that the repository is well under way to be realised before the encapsulation facility can be built.

  6. The Oskarshamn model for public involvement in the siting of nuclear facilities

    International Nuclear Information System (INIS)

    Aahagen, H.; Carlsson, Torsten; Hallberg, K.; Andersson, Kjell

    1999-01-01

    The Oskarshamn model has so far worked extremely well as a tool to achieve openness and public participation. The municipality involvement has been successful in several aspects, e.g.: It has been possible to influence the program, to a large extent, to meet certain municipality conditions and to ensure the local perspective. The local competence has increased to a considerable degree. The activities generated by the six working groups with a total of 40 members have generated a large number of contacts with various organisations, schools, mass media, individuals in the general public and interest groups. For the future, clarification of the disposal method and site selection criteria as well as the site selection process as such is crucial. The municipality has also emphasised the importance of SKB having shown the integration between site selection criteria, the feasibility study and the safety assessment. Furthermore, the programs for the encapsulation facility and the repository must be co-ordinated. For Oskarshamn it will be of utmost importance that the repository is well under way to be realised before the encapsulation facility can be built

  7. Models for large superconducting toroidal magnet systems

    International Nuclear Information System (INIS)

    Arendt, F.; Brechna, H.; Erb, J.; Komarek, P.; Krauth, H.; Maurer, W.

    1976-01-01

    Prior to the design of large GJ toroidal magnet systems it is appropriate to procure small scale models, which can simulate their pertinent properties and allow investigation of the relevant phenomena. The important feature of a model is to show under which circumstances the system performance can be extrapolated to large magnets. Based on parameters such as the maximum magnetic field and current density and the maximum tolerable magneto-mechanical stresses, a simple method of designing model magnets is presented. It is shown how pertinent design parameters change when the toroidal dimensions are altered. In addition, some conductor cost estimates are given based on reactor power output and wall loading.

  8. Multistability in Large Scale Models of Brain Activity.

    Directory of Open Access Journals (Sweden)

    Mathieu Golos

    2015-12-01

    Noise-driven exploration of a brain network's dynamic repertoire has been hypothesized to be causally involved in cognitive function, aging and neurodegeneration. The dynamic repertoire crucially depends on the network's capacity to store patterns, as well as their stability. Here we systematically explore the capacity of networks derived from human connectomes to store attractor states, as well as various network mechanisms to control the brain's dynamic repertoire. Using a deterministic graded response Hopfield model with connectome-based interactions, we reconstruct the system's attractor space through a uniform sampling of the initial conditions. Large fixed-point attractor sets are obtained in the low temperature condition, with a larger number of attractors than ever reported so far. Different variants of the initial model, including (i) a uniform activation threshold or (ii) a global negative feedback, produce a similarly robust multistability in a limited parameter range. A numerical analysis of the distribution of the attractors identifies spatially-segregated components, with a centro-medial core and several well-delineated regional patches. These different modes share similarity with the fMRI independent components observed in the "resting state" condition. We demonstrate non-stationary behavior in noise-driven generalizations of the models, with different meta-stable attractors visited along the same time course. Only the model with a global dynamic density control is found to display robust and long-lasting non-stationarity with no tendency toward either overactivity or extinction. The best fit with empirical signals is observed at the edge of multistability, a parameter region that also corresponds to the highest entropy of the attractors.
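
    The attractor-space reconstruction described above, running deterministic graded-response dynamics from uniformly sampled initial conditions and collecting distinct fixed points, can be sketched on a toy network with random symmetric couplings standing in for the connectome:

```python
import numpy as np

# Sketch of the attractor-sampling procedure on a toy graded-response
# Hopfield network (random weights stand in for the connectome-based
# coupling used in the paper; all constants are illustrative).
rng = np.random.default_rng(3)
n = 60
W = rng.normal(size=(n, n)) / np.sqrt(n)
W = (W + W.T) / 2.0                       # symmetric coupling
beta = 5.0                                # inverse "temperature"

def run_to_fixed_point(x, steps=2000, alpha=0.2, tol=1e-9):
    for _ in range(steps):
        # Damped graded-response update for reliable convergence.
        x_new = (1 - alpha) * x + alpha * np.tanh(beta * (W @ x))
        if np.max(np.abs(x_new - x)) < tol:
            break
        x = x_new
    return x

# Uniformly sample initial conditions and collect distinct attractors.
attractors = []
for _ in range(200):
    x = run_to_fixed_point(rng.uniform(-1, 1, n))
    if not any(np.allclose(x, a, atol=1e-4) for a in attractors):
        attractors.append(x)
print(len(attractors), "distinct fixed-point attractors found")
```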

  9. Exactly soluble models for surface partition of large clusters

    International Nuclear Information System (INIS)

    Bugaev, K.A.; Bugaev, K.A.; Elliott, J.B.

    2007-01-01

    The surface partition of large clusters is studied analytically within the framework of the 'Hills and Dales Model'. Three formulations are solved exactly by using the Laplace-Fourier transformation method. In the limit of small amplitude deformations, the 'Hills and Dales Model' gives the upper and lower bounds for the surface entropy coefficient of large clusters. The surface entropy coefficients found are compared with those of large clusters within the 2- and 3-dimensional Ising models.

  10. Comparison of void strengthening in fcc and bcc metals: Large-scale atomic-level modelling

    International Nuclear Information System (INIS)

    Osetsky, Yu.N.; Bacon, D.J.

    2005-01-01

    Strengthening due to voids can be a significant radiation effect in metals. Treatment of this by elasticity theory of dislocations is difficult when atomic structure of the obstacle and dislocation is influential. In this paper, we report results of large-scale atomic-level modelling of edge dislocation-void interaction in fcc (copper) and bcc (iron) metals. Voids of up to 5 nm diameter were studied over the temperature range from 0 to 600 K. We demonstrate that atomistic modelling is able to reveal important effects, which are beyond the continuum approach. Some arise from features of the dislocation core and crystal structure, others involve dislocation climb and temperature effects

  11. Computational Modeling of Large Wildfires: A Roadmap

    KAUST Repository

    Coen, Janice L.; Douglas, Craig C.

    2010-01-01

    Wildland fire behavior, particularly that of large, uncontrolled wildfires, has not been well understood or predicted. Our methodology to simulate this phenomenon uses high-resolution dynamic models made of numerical weather prediction (NWP) models

  12. Large-scale modeling of rain fields from a rain cell deterministic model

    Science.gov (United States)

    Féral, Laurent; Sauvageot, Henri; Castanet, Laurent; Lemorton, Joël; Cornet, Frédéric; Leconte, Katia

    2006-04-01

    A methodology to simulate two-dimensional rain rate fields at large scale (1000 × 1000 km2, the scale of a satellite telecommunication beam or a terrestrial fixed broadband wireless access network) is proposed. It relies on a rain rate field cellular decomposition. At small scale (˜20 × 20 km2), the rain field is split up into its macroscopic components, the rain cells, described by the Hybrid Cell (HYCELL) cellular model. At midscale (˜150 × 150 km2), the rain field results from the conglomeration of rain cells modeled by HYCELL. To account for the rain cell spatial distribution at midscale, the latter is modeled by a doubly aggregative isotropic random walk, the optimal parameterization of which is derived from radar observations at midscale. The extension of the simulation area from the midscale to the large scale (1000 × 1000 km2) requires the modeling of the weather frontal area. The latter is first modeled by a Gaussian field with anisotropic covariance function. The Gaussian field is then turned into a binary field, giving the large-scale locations over which it is raining. This transformation requires the definition of the rain occupation rate over large-scale areas. Its probability distribution is determined from observations by the French operational radar network ARAMIS. The coupling with the rain field modeling at midscale is immediate whenever the large-scale field is split up into midscale subareas. The rain field thus generated accounts for the local CDF at each point, defining a structure spatially correlated at small scale, midscale, and large scale. It is then suggested that this approach be used by system designers to evaluate diversity gain, terrestrial path attenuation, or slant path attenuation for different azimuth and elevation angle directions.
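
    The final step described above, turning a correlated Gaussian field into a binary raining/dry mask with a prescribed rain occupation rate, can be sketched directly (field size, correlation length and occupation rate are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import norm

# Sketch of the large-scale step described above: a correlated Gaussian
# field is thresholded into a binary raining/dry mask whose rain
# occupation rate approximates a prescribed value (0.3, illustrative).
rng = np.random.default_rng(4)
field = gaussian_filter(rng.normal(size=(200, 200)), sigma=15)
field /= field.std()                  # unit-variance Gaussian field

occupation_rate = 0.3                 # fraction of area where it rains
threshold = norm.ppf(1.0 - occupation_rate)
raining = field > threshold           # binary large-scale rain mask
print(f"simulated occupation rate: {raining.mean():.3f}")
```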

  13. Active Exploration of Large 3D Model Repositories.

    Science.gov (United States)

    Gao, Lin; Cao, Yan-Pei; Lai, Yu-Kun; Huang, Hao-Zhi; Kobbelt, Leif; Hu, Shi-Min

    2015-12-01

    With broader availability of large-scale 3D model repositories, the need for efficient and effective exploration becomes more and more urgent. Existing model retrieval techniques do not scale well with the size of the database since often a large number of very similar objects are returned for a query, and the possibilities to refine the search are quite limited. We propose an interactive approach where the user feeds an active learning procedure by labeling either entire models or parts of them as "like" or "dislike" such that the system can automatically update an active set of recommended models. To provide an intuitive user interface, candidate models are presented based on their estimated relevance for the current query. From the methodological point of view, our main contribution is to exploit not only the similarity between a query and the database models but also the similarities among the database models themselves. We achieve this by an offline pre-processing stage, where global and local shape descriptors are computed for each model and a sparse distance metric is derived that can be evaluated efficiently even for very large databases. We demonstrate the effectiveness of our method by interactively exploring a repository containing over 100 K models.

  14. On spinfoam models in large spin regime

    International Nuclear Information System (INIS)

    Han, Muxin

    2014-01-01

    We study the semiclassical behavior of the Lorentzian Engle–Pereira–Rovelli–Livine (EPRL) spinfoam model, by taking into account the sum over spins in the large spin regime. We also employ the method of stationary phase analysis with parameters and the so-called almost-analytic machinery, in order to find the asymptotic behavior of the contributions from all possible large spin configurations in the spinfoam model. The spins contributing to the sum are written as J_f = λ j_f, where λ is a large parameter, resulting in an asymptotic expansion via stationary phase approximation. The analysis shows that, at least for simplicial Lorentzian geometries (as spinfoam critical configurations), they contribute the leading-order approximation of the spinfoam amplitude only when their deficit angles satisfy γΘ̊_f ≤ λ^(-1/2) mod 4πZ. Our analysis results in a curvature expansion of the semiclassical low energy effective action from the spinfoam model, where the UV modifications of Einstein gravity appear as subleading high-curvature corrections. (paper)

  15. Dynamics Modeling and Simulation of Large Transport Airplanes in Upset Conditions

    Science.gov (United States)

    Foster, John V.; Cunningham, Kevin; Fremaux, Charles M.; Shah, Gautam H.; Stewart, Eric C.; Rivers, Robert A.; Wilborn, James E.; Gato, William

    2005-01-01

    As part of NASA's Aviation Safety and Security Program, research has been in progress to develop aerodynamic modeling methods for simulations that accurately predict the flight dynamics characteristics of large transport airplanes in upset conditions. The motivation for this research stems from the recognition that simulation is a vital tool for addressing loss-of-control accidents, including applications to pilot training, accident reconstruction, and advanced control system analysis. The ultimate goal of this effort is to contribute to the reduction of the fatal accident rate due to loss-of-control. Research activities have involved accident analyses, wind tunnel testing, and piloted simulation. Results have shown that significant improvements in simulation fidelity for upset conditions, compared to current training simulations, can be achieved using state-of-the-art wind tunnel testing and aerodynamic modeling methods. This paper provides a summary of research completed to date and includes discussion on key technical results, lessons learned, and future research needs.

  16. Research on large-scale wind farm modeling

    Science.gov (United States)

    Ma, Longfei; Zhang, Baoqun; Gong, Cheng; Jiao, Ran; Shi, Rui; Chi, Zhongjun; Ding, Yifeng

    2017-01-01

    Due to the intermittent and fluctuating properties of wind energy, a large-scale wind farm connected to the grid has a much greater impact on the power system than a traditional power plant. It is therefore necessary to establish an effective wind farm model to simulate and analyze the influence wind farms have on the grid, as well as the transient characteristics of the wind turbines when the grid is at fault. However, we must first establish an effective model of the wind turbine generators (WTGs). As the doubly-fed VSCF wind turbine is currently the mainstream turbine type, this article first reviews the research progress on doubly-fed VSCF wind turbines and then describes the detailed model-building process. It then surveys common wind farm modeling methods and points out the problems encountered. As WAMS is widely used in the power system, online parameter identification of the wind farm model based on the output characteristics of the wind farm becomes possible; the article concludes with an interpretation of this new idea of identification-based modeling of large wind farms, which can be realized by two concrete methods.

  17. Modelling and control of large cryogenic refrigerator

    International Nuclear Information System (INIS)

    Bonne, Francois

    2014-01-01

    This manuscript is concerned with both the modeling and the derivation of control schemes for large cryogenic refrigerators. The particular case of refrigerators subjected to highly variable pulsed heat loads is studied. A model of each object that normally composes a large cryo-refrigerator is proposed. The methodology to gather object models into the model of a subsystem is presented. The manuscript also shows how to obtain a linear equivalent model of the subsystem. Based on the derived models, advanced control schemes are proposed. Precisely, a linear quadratic controller for the warm compression station working with either two or three pressure states is derived, and a predictive constrained controller for the cold-box is obtained. The particularity of these control schemes is that they fit the computing and data-storage capabilities of Programmable Logic Controllers (PLCs), which are widely used in industry. The open-loop model prediction capability is assessed using experimental data. The developed control schemes are validated in simulation and experimentally on the 400 W at 1.8 K SBT cryogenic test facility and on CERN's LHC warm compression station. (author) [fr]

  18. Wall modeled large eddy simulations of complex high Reynolds number flows with synthetic inlet turbulence

    International Nuclear Information System (INIS)

    Patil, Sunil; Tafti, Danesh

    2012-01-01

    Highlights: Large eddy simulation; wall layer modeling; synthetic inlet turbulence; swirl flows. Abstract: Large eddy simulations of complex high Reynolds number flows are carried out with the near wall region being modeled with a zonal two layer model. A novel formulation for solving the turbulent boundary layer equation for the effective tangential velocity in a generalized co-ordinate system is presented and applied in the near wall zonal treatment. This formulation reduces the computational time in the inner layer significantly compared to the conventional two layer formulations present in the literature and is most suitable for complex geometries involving body fitted structured and unstructured meshes. The cost effectiveness and accuracy of the proposed wall model, used with the synthetic eddy method (SEM) to generate inlet turbulence, is investigated in turbulent channel flow, flow over a backward facing step, and confined swirling flows at moderately high Reynolds numbers. Predictions are compared with available DNS, experimental LDV data, as well as wall resolved LES. In all cases, there is at least an order of magnitude reduction in computational cost with no significant loss in prediction accuracy.
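
    Wall models of this family ultimately return a wall shear stress from the outer-layer LES velocity. The simplest equilibrium special case, solving the log law for the friction velocity, can be sketched as follows (constants and flow values are illustrative; the paper's zonal model solves full boundary-layer equations instead):

```python
import numpy as np

# Equilibrium wall-model sketch: given the LES velocity U at the first
# off-wall point y, solve the log law
#   U / u_tau = (1/kappa) * ln(y * u_tau / nu) + B
# for the friction velocity u_tau by fixed-point iteration.
kappa, B, nu = 0.41, 5.2, 1.5e-5
U, y = 10.0, 1.0e-3                   # LES velocity (m/s), wall distance (m)

u_tau = 0.05 * U                      # initial guess
for _ in range(50):
    u_tau = kappa * U / np.log(np.e**(kappa * B) * y * u_tau / nu)
print(f"u_tau = {u_tau:.4f} m/s, y+ = {y * u_tau / nu:.1f}")
```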

  19. Numerical Modeling of Large-Scale Rocky Coastline Evolution

    Science.gov (United States)

    Limber, P.; Murray, A. B.; Littlewood, R.; Valvo, L.

    2008-12-01

    Seventy-five percent of the world's ocean coastline is rocky. On large scales (i.e. greater than a kilometer), many intertwined processes drive rocky coastline evolution, including coastal erosion and sediment transport, tectonics, antecedent topography, and variations in sea cliff lithology. In areas such as California, an additional aspect of rocky coastline evolution involves submarine canyons that cut across the continental shelf and extend into the nearshore zone. These types of canyons intercept alongshore sediment transport and flush sand to abyssal depths during periodic turbidity currents, thereby delineating coastal sediment transport pathways and affecting shoreline evolution over large spatial and time scales. How tectonic, sediment transport, and canyon processes interact with inherited topographic and lithologic settings to shape rocky coastlines remains an unanswered, and largely unexplored, question. We will present numerical model results of rocky coastline evolution that starts with an immature fractal coastline. The initial shape is modified by headland erosion, wave-driven alongshore sediment transport, and submarine canyon placement. Our previous model results have shown that, as expected, an initial sediment-free irregularly shaped rocky coastline with homogeneous lithology will undergo smoothing in response to wave attack; headlands erode and mobile sediment is swept into bays, forming isolated pocket beaches. As this diffusive process continues, pocket beaches coalesce, and a continuous sediment transport pathway results. However, when a randomly placed submarine canyon is introduced to the system as a sediment sink, the end results are wholly different: sediment cover is reduced, which in turn increases weathering and erosion rates and causes the entire shoreline to move landward more rapidly. The canyon's alongshore position also affects coastline morphology. When placed offshore of a headland, the submarine canyon captures local sediment

  20. Modeling containment of large wildfires using generalized linear mixed-model analysis

    Science.gov (United States)

    Mark Finney; Isaac C. Grenfell; Charles W. McHugh

    2009-01-01

    Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...
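
    As a rough illustration of modeling containment probability, the sketch below fits a plain logistic regression to synthetic data; the paper's analysis additionally treats containment as a repeated-measures problem with mixed effects, which this sketch omits:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simplified sketch of a containment-probability model (synthetic data;
# covariate names are invented, and the random-effect structure of the
# paper's generalized linear mixed model is omitted).
rng = np.random.default_rng(5)
n = 300
df = pd.DataFrame({
    "effort":    rng.uniform(0, 1, n),        # suppression effort index
    "fire_size": rng.lognormal(1.0, 1.0, n),  # illustrative size measure
})
logit_p = -0.5 + 3.0 * df.effort - 0.02 * df.fire_size
df["contained"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("contained ~ effort + fire_size", data=df).fit(disp=0)
print(model.params)   # recovered effect of effort and fire size
```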

  1. Organizational Commitment and Nurses' Characteristics as Predictors of Job Involvement.

    Science.gov (United States)

    Alammar, Kamila; Alamrani, Mashael; Alqahtani, Sara; Ahmad, Muayyad

    2016-01-01

    To predict nurses' job involvement on the basis of their organizational commitment and personal characteristics at a large tertiary hospital in Saudi Arabia. Data were collected in 2015 from a convenience sample of 558 nurses working at a large tertiary hospital in Riyadh, Saudi Arabia. A cross-sectional correlational design was used in this study. Data were collected using a structured questionnaire. All commitment scales had significant relationships. Multiple linear regression analysis revealed that the model predicted a sizeable proportion of the variance in nurses' job involvement (p < …). Organizational commitment enhances job involvement, which may lead to more organizational stability and effectiveness.

  2. Model of large pool fires

    Energy Technology Data Exchange (ETDEWEB)

    Fay, J.A. [Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)]. E-mail: jfay@mit.edu

    2006-08-21

    A two zone entrainment model of pool fires is proposed to depict the fluid flow and flame properties of the fire. Consisting of combustion and plume zones, it provides a consistent scheme for developing non-dimensional scaling parameters for correlating and extrapolating pool fire visible flame length, flame tilt, surface emissive power, and fuel evaporation rate. The model is extended to include grey gas thermal radiation from soot particles in the flame zone, accounting for emission and absorption in both optically thin and thick regions. A model of convective heat transfer from the combustion zone to the liquid fuel pool, and from a water substrate to cryogenic fuel pools spreading on water, provides evaporation rates for both adiabatic and non-adiabatic fires. The model is tested against field measurements of large scale pool fires, principally of LNG, and is generally in agreement with experimental values of all variables.

  3. Model of large pool fires

    International Nuclear Information System (INIS)

    Fay, J.A.

    2006-01-01

    A two zone entrainment model of pool fires is proposed to depict the fluid flow and flame properties of the fire. Consisting of combustion and plume zones, it provides a consistent scheme for developing non-dimensional scaling parameters for correlating and extrapolating pool fire visible flame length, flame tilt, surface emissive power, and fuel evaporation rate. The model is extended to include grey gas thermal radiation from soot particles in the flame zone, accounting for emission and absorption in both optically thin and thick regions. A model of convective heat transfer from the combustion zone to the liquid fuel pool, and from a water substrate to cryogenic fuel pools spreading on water, provides evaporation rates for both adiabatic and non-adiabatic fires. The model is tested against field measurements of large scale pool fires, principally of LNG, and is generally in agreement with experimental values of all variables
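
    The paper's non-dimensional scaling parameters are developed in the text; as a simple numerical illustration of a flame-length correlation of this general kind, the sketch below uses the well-known Heskestad relation (a standard empirical result, not the correlation derived in this model):

```python
# Illustrative flame-length estimate using the standard Heskestad
# correlation L = 0.235 * Q**(2/5) - 1.02 * D, with Q the heat release
# rate in kW and lengths in metres. This is a widely used empirical
# relation, not the paper's own scaling.
def flame_length(Q_kw: float, pool_diameter_m: float) -> float:
    return 0.235 * Q_kw**0.4 - 1.02 * pool_diameter_m

# A hypothetical 10 m pool fire releasing about 100 MW:
print(f"visible flame length: {flame_length(100_000.0, 10.0):.1f} m")
```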

  4. Comparison Between Overtopping Discharge in Small and Large Scale Models

    DEFF Research Database (Denmark)

    Helgason, Einar; Burcharth, Hans F.

    2006-01-01

    The present paper presents overtopping measurements from small scale model tests performed at the Hydraulic & Coastal Engineering Laboratory, Aalborg University, Denmark, and large scale model tests performed at the Large Wave Channel, Hannover, Germany. Comparison between results obtained from small and large scale model tests shows no clear evidence of scale effects for overtopping above a threshold value. In the large scale model, no overtopping was measured for wave heights below Hs = 0.5 m, as the water sank into the voids between the stones on the crest. For low overtopping, scale effects...

  5. Large Animal Stroke Models vs. Rodent Stroke Models, Pros and Cons, and Combination?

    Science.gov (United States)

    Cai, Bin; Wang, Ning

    2016-01-01

    Stroke is a leading cause of serious long-term disability worldwide and the second leading cause of death in many countries. Long-standing attempts to salvage dying neurons via various neuroprotective agents have failed in translational stroke research, owing in part to the huge gap between animal stroke models and stroke patients, which also suggests that rodent models have limited predictive value and that alternative large animal models are likely to become important in future translational research. The genetic background, physiological characteristics, behavioral characteristics, and brain structure of large animals, especially nonhuman primates, are analogous to humans, and resemble humans in stroke. Moreover, relatively new regional imaging techniques, measurements of regional cerebral blood flow, and sophisticated physiological monitoring can be more easily performed on the same animal at multiple time points. As a result, large animal stroke models can be used to narrow the gap and promote translation of basic stroke research. At the same time, we should not neglect the disadvantages of large animal stroke models, such as the significant expense and ethical considerations, which can be overcome by rodent models. Rodents should be selected as stroke models for initial testing, and primates or cats are desirable as a second species, as recommended by the Stroke Therapy Academic Industry Roundtable (STAIR) group in 2009.

  6. Constituent rearrangement model and large transverse momentum reactions

    International Nuclear Information System (INIS)

    Igarashi, Yuji; Imachi, Masahiro; Matsuoka, Takeo; Otsuki, Shoichiro; Sawada, Shoji.

    1978-01-01

    In this chapter, two models based on the constituent rearrangement picture for large p_t phenomena are summarized. One is the quark-junction model, and the other is the correlating quark rearrangement model. Counting rules of the models apply to both two-body reactions and hadron production. (author)

  7. Large-signal modeling method for power FETs and diodes

    Energy Technology Data Exchange (ETDEWEB)

    Sun Lu; Wang Jiali; Wang Shan; Li Xuezheng; Shi Hui; Wang Na; Guo Shengping, E-mail: sunlu_1019@126.co [School of Electromechanical Engineering, Xidian University, Xi'an 710071 (China)

    2009-06-01

    Under a large signal drive level, a frequency domain black box model of the nonlinear scattering function is introduced into power FETs and diodes. A time domain measurement system and a calibration method based on a digital oscilloscope are designed to extract the nonlinear scattering function of semiconductor devices. The extracted models can reflect the real electrical performance of semiconductor devices and propose a new large-signal model to the design of microwave semiconductor circuits.

  8. Large-signal modeling method for power FETs and diodes

    International Nuclear Information System (INIS)

    Sun Lu; Wang Jiali; Wang Shan; Li Xuezheng; Shi Hui; Wang Na; Guo Shengping

    2009-01-01

    Under a large signal drive level, a frequency domain black box model of the nonlinear scattering function is introduced into power FETs and diodes. A time domain measurement system and a calibration method based on a digital oscilloscope are designed to extract the nonlinear scattering function of semiconductor devices. The extracted models can reflect the real electrical performance of semiconductor devices and propose a new large-signal model to the design of microwave semiconductor circuits.

  9. Environmental Management Model for Road Maintenance Operation Involving Community Participation

    Science.gov (United States)

    Triyono, A. R. H.; Setyawan, A.; Sobriyah; Setiono, P.

    2017-07-01

    Public expectations in Central Java are very high regarding demand fulfillment, especially for road infrastructure, as reflected in the number of complaints and community expectations submitted via Twitter, Short Message Service (SMS), e-mail and public reports in various media. The Highways Department of Central Java province therefore requires a model of environmental management for routine road maintenance that involves the community, in order to keep roads in representative condition and to serve road users safely and comfortably. This study used a survey method with SEM and SWOT analysis, with the latent independent variables (X): community participation in road regulation, development, construction and supervision (PSM); public behavior in road utilization (PMJ); provincial road service (PJP); safety on provincial roads (KJP); and integrated management system (SMT); and the latent dependent variable (Y): routine maintenance of provincial roads integrated with an environmental management system involving community participation (MML). The results showed that routine road maintenance in Central Java province has yet to implement environmental management involving the community. An environmental management model was therefore developed, with the results H1: community participation (PSM) has a positive influence on the environmental management model (MML); H2: public behavior in road utilization (PMJ) has a positive effect on MML; H3: provincial road service (PJP) has a positive effect on MML; H4: safety on provincial roads (KJP) has a positive effect on MML; H5: the integrated management system (SMT) has a positive influence on MML. From the analysis, a formulation model was obtained describing the relationship/influence of the independent variables PSM, PMJ, PJP, KJP, and SMT on the dependent variable

  10. Large scale stochastic spatio-temporal modelling with PCRaster

    NARCIS (Netherlands)

    Karssenberg, D.J.; Drost, N.; Schmitz, O.; Jong, K. de; Bierkens, M.F.P.

    2013-01-01

    PCRaster is a software framework for building spatio-temporal models of land surface processes (http://www.pcraster.eu). Building blocks of models are spatial operations on raster maps, including a large suite of operations for water and sediment routing. These operations are available to model

  11. Towards large scale stochastic rainfall models for flood risk assessment in trans-national basins

    Science.gov (United States)

    Serinaldi, F.; Kilsby, C. G.

    2012-04-01

    While extensive research has been devoted to rainfall-runoff modelling for risk assessment in small and medium size watersheds, less attention has been paid, so far, to large-scale trans-national basins, where flood events have severe societal and economic impacts with magnitudes quantified in billions of Euros. As an example, in the April 2006 flood events along the Danube basin at least 10 people lost their lives and up to 30,000 people were displaced, with overall damages estimated at more than half a billion Euros. In this context, refined analytical methods are fundamental to improve the risk assessment and, in turn, the design of structural and non-structural measures of protection, such as hydraulic works and insurance/reinsurance policies. Since flood events are mainly driven by exceptional rainfall events, suitable characterization and modelling of the space-time properties of rainfall fields is a key issue for performing a reliable flood risk analysis based on alternative precipitation scenarios to be fed into a new generation of large-scale rainfall-runoff models. Ultimately, this approach should be extended to a global flood risk model. However, as the need for rainfall models able to account for and simulate the spatio-temporal properties of rainfall fields over large areas is rather new, the development of new rainfall simulation frameworks is a challenging task that faces the problem of overcoming the drawbacks of the existing modelling schemes (devised for smaller spatial scales) while keeping their desirable properties. In this study, we critically summarize the most widely used approaches for rainfall simulation. Focusing on stochastic approaches, we stress the importance of introducing suitable climate forcings in these simulation schemes in order to account for the physical coherence of rainfall fields over wide areas. Based on preliminary considerations, we suggest a modelling framework relying on the Generalized Additive Models for Location, Scale and Shape (GAMLSS).

  12. Application of Pareto-efficient combustion modeling framework to large eddy simulations of turbulent reacting flows

    Science.gov (United States)

    Wu, Hao; Ihme, Matthias

    2017-11-01

    The modeling of turbulent combustion requires the consideration of different physico-chemical processes, involving a vast range of time and length scales as well as a large number of scalar quantities. To reduce the computational complexity, various combustion models have been developed, many of which can be abstracted using a lower-dimensional manifold representation. A key issue in using such lower-dimensional combustion models is assessing whether a particular combustion model is adequate for representing a certain flame configuration. The Pareto-efficient combustion (PEC) modeling framework was developed to perform dynamic combustion model adaptation based on various existing manifold models. In this work, the PEC model is applied to a turbulent flame simulation, in which a computationally efficient flamelet-based combustion model is used together with a high-fidelity finite-rate chemistry model. The combination of these two models achieves high accuracy in predicting pollutant species at a relatively low computational cost. The relevant numerical methods and parallelization techniques are also discussed in this work.
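
    The adaptation logic can be sketched schematically: each cell uses the cheap manifold model unless a local error indicator exceeds a tolerance, in which case finite-rate chemistry is used. All functions, the error indicator and the tolerance below are illustrative stand-ins for the actual PEC machinery.

    ```python
    # Schematic of dynamic combustion-model adaptation (illustrative only):
    # cheap manifold model where adequate, finite-rate chemistry elsewhere.
    import numpy as np

    def flamelet_lookup(z):
        """Stand-in for a tabulated flamelet/manifold source term."""
        return 4.0 * z * (1.0 - z)

    def finite_rate_chemistry(z):
        """Stand-in for detailed finite-rate chemistry (far more expensive)."""
        return 4.0 * z * (1.0 - z) * (1.0 + 0.3 * np.sin(8.0 * np.pi * z))

    def error_indicator(z):
        """Stand-in adequacy estimate; in practice this would be a cheap
        drift/compliance measure, not an evaluation of the detailed model."""
        return np.abs(finite_rate_chemistry(z) - flamelet_lookup(z))

    mixture_fraction = np.random.default_rng(1).uniform(0.0, 1.0, size=10_000)
    tol = 0.05

    use_detailed = error_indicator(mixture_fraction) > tol
    source = np.where(use_detailed,
                      finite_rate_chemistry(mixture_fraction),
                      flamelet_lookup(mixture_fraction))
    print(f"detailed chemistry used in {use_detailed.mean():.1%} of cells")
    ```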

  13. Setting up recovery clinics and promoting service user involvement.

    Science.gov (United States)

    John, Thomas

    2017-06-22

    Service user involvement in mental health has gained considerable momentum. Evidence from the literature suggests that it remains largely theoretical rather than being put into practice. The current nature of acute inpatient mental health units creates various challenges for nurses to put this concept into practice. Recovery clinics were introduced to bridge this gap and to promote service user involvement practice within the current care delivery model at Kent and Medway NHS and Social Care Partnership Trust. It has shaped new ways of working for nurses with a person-centred approach as its philosophy. Service users and nurses were involved in implementing a needs-led and bottom-up initiative using Kotter's change model. Initial results suggest that it has been successful in meeting its objectives evidenced through increased meaningful interactions and involvement in care by service users and carers. The clinics have gained wide recognition and have highlighted a need for further research into care delivery models to promote service user involvement in these units.

  14. Simulating large-scale pedestrian movement using CA and event driven model: Methodology and case study

    Science.gov (United States)

    Li, Jun; Fu, Siyao; He, Haibo; Jia, Hongfei; Li, Yanzhong; Guo, Yi

    2015-11-01

    Large-scale regional evacuation is an important part of national security emergency response planning, and the emergency evacuation of large commercial shopping areas, as typical service systems, is a topical research problem. A systematic methodology based on Cellular Automata with a Dynamic Floor Field and an event-driven model is proposed, and the methodology is examined in a case study of evacuation from a commercial shopping mall. Pedestrian movement is modelled with Cellular Automata and the event-driven model; the simulation process is divided into a normal situation and emergency evacuation. The model is composed of four layers: an environment layer, a customer layer, a clerk layer and a trajectory layer. In simulating pedestrians' movement routes, the model takes into account customers' purchase intentions and pedestrian density. Based on the evacuation model combining Cellular Automata with a Dynamic Floor Field and the event-driven model, the behavioral characteristics of customers and clerks can be reflected in both normal and emergency situations. The distribution of individual evacuation times as a function of initial position, and the dynamics of the evacuation process, are studied. Our results indicate that the evacuation model using the combination of Cellular Automata with a Dynamic Floor Field and event-driven scheduling can be used to simulate pedestrian flows in indoor areas with complicated surroundings and to investigate the layout of shopping malls.
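
    The floor-field ingredient can be illustrated with a minimal toy: a static field stores each cell's distance to the exit, and at every update each pedestrian moves to the free neighboring cell with the lowest field value. Grid size, geometry and the update rule are deliberate simplifications of the paper's model.

    ```python
    # Minimal static floor-field CA (illustrative toy): pedestrians step toward
    # the exit along decreasing field values, at most one pedestrian per cell.
    import numpy as np

    H, W = 10, 10
    exit_cell = (0, 5)

    yy, xx = np.mgrid[0:H, 0:W]
    field = np.abs(yy - exit_cell[0]) + np.abs(xx - exit_cell[1])  # static field

    rng = np.random.default_rng(2)
    cells = [(r, c) for r in range(H) for c in range(W)]
    peds = set(map(tuple, rng.permutation(cells)[:15]))  # 15 random pedestrians

    step = 0
    while peds:
        step += 1
        for p in sorted(peds, key=lambda q: field[q]):  # closest to exit move first
            if p == exit_cell:                          # standing on the exit: leave
                peds.remove(p)
                continue
            r, c = p
            nbrs = [(r + dr, c + dc) for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))]
            nbrs = [n for n in nbrs
                    if 0 <= n[0] < H and 0 <= n[1] < W and n not in peds]
            if nbrs:
                target = min(nbrs, key=lambda n: field[n])
                if field[target] < field[p]:            # move only down the field
                    peds.remove(p)
                    peds.add(target)
    print("all pedestrians evacuated after", step, "updates")
    ```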

  15. Parents as Role Models: Parental Behavior Affects Adolescents' Plans for Work Involvement

    Science.gov (United States)

    Wiese, Bettina S.; Freund, Alexandra M.

    2011-01-01

    This study (N = 520 high-school students) investigates the influence of parental work involvement on adolescents' own plans regarding their future work involvement. As expected, adolescents' perceptions of parental work behavior affected their plans for own work involvement. Same-sex parents served as main role models for the adolescents' own…

  16. Nuclear spectroscopy in large shell model spaces: recent advances

    International Nuclear Information System (INIS)

    Kota, V.K.B.

    1995-01-01

    Three different approaches are now available for carrying out nuclear spectroscopy studies in large shell model spaces and they are: (i) the conventional shell model diagonalization approach but taking into account new advances in computer technology; (ii) the recently introduced Monte Carlo method for the shell model; (iii) the spectral averaging theory, based on central limit theorems, in indefinitely large shell model spaces. The various principles, recent applications and possibilities of these three methods are described and the similarity between the Monte Carlo method and the spectral averaging theory is emphasized. (author). 28 refs., 1 fig., 5 tabs

  17. On the modelling of microsegregation in steels involving thermodynamic databases

    International Nuclear Information System (INIS)

    You, D; Bernhard, C; Michelic, S; Wieser, G; Presoly, P

    2016-01-01

    A microsegregation model based on Ohnaka's model and incorporating a thermodynamic database is proposed. In the model, the thermodynamic database is used for the equilibrium calculations, and multicomponent alloy effects on partition coefficients and equilibrium temperatures are accounted for. Microsegregation and partition coefficients calculated using different databases exhibit significant differences. The segregated concentrations predicted using the optimized database are in good agreement with the measured inter-dendritic concentrations. (paper)
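
    For orientation, microsegregation models of this family build on closed-form relations such as the back-diffusion-corrected Scheil expression sketched below; the parameter values and the correction factor are illustrative assumptions, whereas the paper's model additionally obtains partition coefficients and equilibrium temperatures from a thermodynamic database.

    ```python
    # Toy back-diffusion-corrected Scheil relation for interdendritic liquid
    # concentration vs. solid fraction (illustrative parameter values).
    import numpy as np

    C0 = 0.8      # nominal solute content, wt% (assumed)
    k = 0.3       # equilibrium partition coefficient (assumed constant here)
    gamma = 0.2   # back-diffusion parameter: 0 -> Scheil, 1 -> lever rule (assumed)

    fs = np.linspace(0.0, 0.99, 100)   # solid fraction
    CL = C0 * (1.0 - (1.0 - gamma * k) * fs) ** ((k - 1.0) / (1.0 - gamma * k))
    Cs = k * CL                        # solid concentration at the interface

    print(f"liquid enrichment at fs=0.99: {CL[-1]:.2f} wt%")
    ```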

  18. Modeling Temporal Behavior in Large Networks: A Dynamic Mixed-Membership Model

    Energy Technology Data Exchange (ETDEWEB)

    Rossi, R; Gallagher, B; Neville, J; Henderson, K

    2011-11-11

    Given a large time-evolving network, how can we model and characterize the temporal behaviors of individual nodes (and network states)? How can we model the behavioral transition patterns of nodes? We propose a temporal behavior model that captures the 'roles' of nodes in the graph and how they evolve over time. The proposed dynamic behavioral mixed-membership model (DBMM) is scalable, fully automatic (no user-defined parameters), non-parametric/data-driven (no specific functional form or parameterization), interpretable (identifies explainable patterns), and flexible (applicable to dynamic and streaming networks). Moreover, the interpretable behavioral roles are generalizable, computationally efficient, and natively support attributes. We applied our model for (a) identifying patterns and trends of nodes and network states based on the temporal behavior, (b) predicting future structural changes, and (c) detecting unusual temporal behavior transitions. We use eight large real-world datasets from different time-evolving settings (dynamic and streaming). In particular, we model the evolving mixed-memberships and the corresponding behavioral transitions of Twitter, Facebook, IP-Traces, Email (University), Internet AS, Enron, Reality, and IMDB. The experiments demonstrate the scalability, flexibility, and effectiveness of our model for identifying interesting patterns, detecting unusual structural transitions, and predicting the future structural changes of the network and individual nodes.
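
    A simplified sketch in the spirit of the abstract (not the DBMM implementation): extract mixed-membership roles from per-snapshot node feature matrices with non-negative matrix factorization, then estimate a role-transition matrix between consecutive snapshots.

    ```python
    # Sketch: NMF node roles per snapshot plus a role-transition estimate
    # (illustrative simplification, not the paper's DBMM implementation).
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(3)
    n_nodes, n_feats, n_roles = 200, 6, 3

    # Stand-ins for node-by-feature matrices at two consecutive snapshots
    # (in practice: degree, wedges, triangles, egonet statistics, ...).
    F_t = rng.random((n_nodes, n_feats))
    F_t1 = F_t + 0.1 * rng.random((n_nodes, n_feats))

    nmf = NMF(n_components=n_roles, init="nndsvda", random_state=0, max_iter=500)
    G_t = nmf.fit_transform(F_t)        # node-role memberships at time t
    G_t1 = nmf.transform(F_t1)          # memberships at t+1 under the same roles

    # Row-normalize memberships, then fit T so that G_t @ T ~= G_t1.
    G_t /= G_t.sum(axis=1, keepdims=True) + 1e-12
    G_t1 /= G_t1.sum(axis=1, keepdims=True) + 1e-12
    T, *_ = np.linalg.lstsq(G_t, G_t1, rcond=None)
    print(np.round(T, 2))               # role-transition pattern between snapshots
    ```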

  19. Fast Kalman-like filtering for large-dimensional linear and Gaussian state-space models

    KAUST Repository

    Ait-El-Fquih, Boujemaa; Hoteit, Ibrahim

    2015-01-01

    This paper considers the filtering problem for linear and Gaussian state-space models with large dimensions, a setup in which the optimal Kalman Filter (KF) might not be applicable owing to the excessive cost of manipulating huge covariance matrices. Among the most popular alternatives that enable cheaper and reasonable computation is the Ensemble KF (EnKF), a Monte Carlo-based approximation. In this paper, we consider a class of a posteriori distributions with diagonal covariance matrices and propose fast approximate deterministic-based algorithms based on the Variational Bayesian (VB) approach. More specifically, we derive two iterative KF-like algorithms that differ in the way they operate between two successive filtering estimates; one involves a smoothing estimate and the other involves a prediction estimate. Despite its iterative nature, the prediction-based algorithm provides a computational cost that is, on the one hand, independent of the number of iterations in the limit of very large state dimensions, and on the other hand, always much smaller than the cost of the EnKF. The cost of the smoothing-based algorithm depends on the number of iterations that may, in some situations, make this algorithm slower than the EnKF. The performances of the proposed filters are studied and compared to those of the KF and EnKF through a numerical example.
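
    To see why diagonal a posteriori covariances make large dimensions tractable, consider the naive sketch below, in which only the variance vector is stored and a selection-type observation operator makes the gain component-wise. This illustrates the cost argument only; it is not the paper's VB-based algorithms.

    ```python
    # Illustrative Kalman-like step that stores only a diagonal covariance,
    # the kind of approximation that makes large state dimensions tractable.
    import numpy as np

    rng = np.random.default_rng(4)
    n, m = 1_000, 200                       # state and observation dimensions

    x = rng.normal(size=n)                  # prior mean
    v = np.ones(n)                          # prior variance (diagonal covariance)
    H_idx = rng.choice(n, size=m, replace=False)   # observe m random components
    r = 0.5 * np.ones(m)                    # observation noise variances
    y = x[H_idx] + rng.normal(scale=np.sqrt(r))    # synthetic observations

    # Component-wise update: with a selection operator H, the gain is diagonal,
    # so no n-by-n covariance matrix is ever formed or inverted.
    gain = v[H_idx] / (v[H_idx] + r)
    x[H_idx] += gain * (y - x[H_idx])
    v[H_idx] *= (1.0 - gain)
    print("posterior variance of first observed components:", v[H_idx][:3])
    ```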

  1. Traffic assignment models in large-scale applications

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær

    the potential of the method proposed and the possibility to use individual-based GPS units for travel surveys in real-life large-scale multi-modal networks. Congestion is known to highly influence the way we act in the transportation network (and organise our lives), because of longer travel times... of observations of actual behaviour to obtain estimates of the (monetary) value of different travel time components, thereby increasing the behavioural realism of large-scale models. The generation of choice sets is a vital component in route choice models. This is, however, not a straightforward task in real..., but the reliability of the travel time also has a large impact on our travel choices. Consequently, in order to improve the realism of transport models, correct understanding and representation of two values that are related to the value of time (VoT) are essential: (i) the value of congestion (VoC), as the Vo...

  2. Large-scale hydrology in Europe : observed patterns and model performance

    Energy Technology Data Exchange (ETDEWEB)

    Gudmundsson, Lukas

    2011-06-15

    In a changing climate, terrestrial water storages are of great interest as water availability impacts key aspects of ecosystem functioning. Thus, a better understanding of the variations of wet and dry periods will contribute to fully grasping processes of the earth system such as nutrient cycling and vegetation dynamics. Currently, river runoff from small, nearly natural catchments is one of the few variables of the terrestrial water balance that is regularly monitored with detailed spatial and temporal coverage on large scales. River runoff therefore provides a foundation to approach European hydrology with respect to observed patterns on large scales, and with regard to the ability of models to capture these. The analysis of observed river flow from small catchments focused on the identification and description of spatial patterns of simultaneous temporal variations of runoff. These are dominated by large-scale variations of climatic variables but are also altered by catchment processes. It was shown that time series of annual low, mean and high flows follow the same atmospheric drivers. The observation that high flows are more closely coupled to large-scale atmospheric drivers than low flows indicates the increasing influence of catchment properties on runoff under dry conditions. Further, it was shown that the low-frequency variability of European runoff is dominated by two opposing centres of simultaneous variations, such that dry years in the north are accompanied by wet years in the south. Large-scale hydrological models are simplified representations of our current perception of the terrestrial water balance on large scales. Quantification of the models' strengths and weaknesses is the prerequisite for a reliable interpretation of simulation results. Model evaluations may also enable the detection of shortcomings in model assumptions and thus a refinement of the current perception of hydrological systems. The ability of a multi-model ensemble of nine large

  3. Homogenization of Large-Scale Movement Models in Ecology

    Science.gov (United States)

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
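
    For readers new to the distinction, the two diffusion operators can be written side by side; the form of the homogenized coefficient is only indicated schematically here, since its precise average is the paper's result.

    ```latex
    % Fickian diffusion: movement organized along gradients.
    u_t = \nabla \cdot \left( \mu(x)\, \nabla u \right)

    % Ecological diffusion: motility \mu(x) sits inside both derivatives, so
    % residence time in each habitat (not flux balance) drives aggregation.
    u_t = \Delta \left( \mu(x)\, u \right)

    % Homogenization replaces \mu(x) by an effective constant \bar{\mu},
    % an average over the small-scale (10-100 m) variation, yielding a
    % large-scale (10-100 km) equation of the form  c_t = \bar{\mu}\, \Delta c.
    ```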

  4. ABOUT MODELING COMPLEX ASSEMBLIES IN SOLIDWORKS – LARGE AXIAL BEARING

    Directory of Open Access Journals (Sweden)

    Cătălin IANCU

    2017-12-01

    Full Text Available This paper presents the modeling strategy used in SOLIDWORKS for modeling special items such as a large axial bearing, and the steps to be taken in order to obtain a better design. The paper presents the features used for modeling the parts, and then the steps that must be taken to obtain the 3D model of a large axial bearing used in bucket-wheel equipment for charcoal moving.

  5. Feasibility of an energy conversion system in Canada involving large-scale integrated hydrogen production using solid fuels

    International Nuclear Information System (INIS)

    Gnanapragasam, Nirmal V.; Reddy, Bale V.; Rosen, Marc A.

    2010-01-01

    A large-scale hydrogen production system is proposed using solid fuels and designed to increase the sustainability of alternative energy forms in Canada, and the technical and economic aspects of the system within the Canadian energy market are examined. The work investigates the feasibility and constraints in implementing such a system within the energy infrastructure of Canada. The proposed multi-conversion and single-function system produces hydrogen in large quantities using energy from solid fuels such as coal, tar sands, biomass, municipal solid waste (MSW) and agricultural/forest/industrial residue. The proposed system involves significant technology integration, with various energy conversion processes (such as gasification, chemical looping combustion, anaerobic digestion, combustion power cycles-electrolysis and solar-thermal converters) interconnected to increase the utilization of solid fuels as much as feasible within cost, environmental and other constraints. The analysis involves quantitative and qualitative assessments based on (i) energy resources availability and demand for hydrogen, (ii) commercial viability of primary energy conversion technologies, (iii) academia, industry and government participation, (iv) sustainability and (v) economics. An illustrative example provides an initial road map for implementing such a system. (author)

  6. Involving Corporate Functions: Who Contributes to Sustainable Development?

    Directory of Open Access Journals (Sweden)

    Stefan Schaltegger

    2014-05-01

    Full Text Available A large body of literature claims that corporate sustainable development is a cross-functional challenge, which requires all functional units to be involved. However, it remains uncertain to what extent and in which way different corporate functions are actually involved in corporate sustainability management. To bridge this research gap, our paper draws on a concept of involvement introduced in the field of consumer behavior. Based on this previous research, our paper distinguishes two components of involvement: first, a cognitive-affective component, incorporating being affected by sustainability issues and being supportive of corporate sustainability; and second, a behavioral component, represented by the application of sustainability management tools. We use this concept to empirically analyze the involvement of corporate functions in sustainability management and find considerable differences in large German companies. Whereas public relations and strategic management are heavily involved, finance, accounting and management control appear not to be involved. A multinomial logistic regression shows that the cognitive-affective component significantly influences the behavioral component, with a functional unit being affected influencing the application of tools the most. Building on the model proposed, the paper provides implications on how to increase a functional unit’s involvement in sustainability management.

  7. Large scale debris-flow hazard assessment: a geotechnical approach and GIS modelling

    Directory of Open Access Journals (Sweden)

    G. Delmonaco

    2003-01-01

    Full Text Available A deterministic distributed model has been developed for large-scale debris-flow hazard analysis in the basin of the River Vezza (Tuscany Region, Italy). This area (51.6 km²) was affected by over 250 landslides, classified as debris/earth flows mainly involving the metamorphic geological formations outcropping in the area and triggered by the pluviometric event of 19 June 1996. In recent decades, landslide hazard and risk analysis has been favoured by the development of GIS techniques permitting the generalisation, synthesis and modelling of stability conditions in large-scale investigations (>1:10 000). In this work, the main results derived from the application of a geotechnical model coupled with a hydrological model for debris-flow hazard assessment are reported. The analysis was developed through the following steps: a landslide inventory map derived from aerial photo interpretation and direct field survey; generation of a database and digital maps; elaboration of a DTM and derived themes (i.e. a slope angle map); definition of a superficial soil thickness map; geotechnical soil characterisation through back-analysis on test slopes and laboratory tests; inference of the influence of precipitation, for distinct return times, on ponding time and pore pressure generation; implementation of a slope stability model (infinite slope model); and generalisation of the safety factor for estimated rainfall events with different return times. This approach has allowed the identification of potential source areas of debris-flow triggering for precipitation events with estimated return times of 10, 50, 75 and 100 years. The model shows a dramatic decrease in safety conditions for the simulation related to a 75-year return time rainfall event, corresponding to an estimated cumulative daily intensity of 280–330 mm. This value can be considered the hydrological triggering
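
    The slope stability step admits a compact illustration: the infinite slope factor of safety evaluated per raster cell under an assumed water table. Parameter values are illustrative, and the exact formulation coupled to the hydrological model in the paper may differ.

    ```python
    # Infinite slope factor of safety per raster cell (illustrative parameters).
    # FS = [c' + (gamma*z*cos^2(b) - u) * tan(phi')] / [gamma*z*sin(b)*cos(b)]
    import numpy as np

    rng = np.random.default_rng(5)
    slope_deg = rng.uniform(10, 45, size=(100, 100))  # slope angle map (deg)
    z = rng.uniform(0.5, 3.0, size=(100, 100))        # soil thickness map (m)

    c_eff = 5_000.0              # effective cohesion, Pa (assumed)
    phi_eff = np.radians(30.0)   # effective friction angle (assumed)
    gamma = 18_000.0             # soil unit weight, N/m^3 (assumed)
    gamma_w = 9_810.0            # water unit weight, N/m^3
    m = 0.8                      # water table fraction of soil depth (from rainfall)

    b = np.radians(slope_deg)
    u = m * gamma_w * z * np.cos(b) ** 2              # pore pressure on slip plane
    FS = (c_eff + (gamma * z * np.cos(b) ** 2 - u) * np.tan(phi_eff)) \
         / (gamma * z * np.sin(b) * np.cos(b))

    print(f"unstable cells (FS < 1): {(FS < 1).mean():.1%}")
    ```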

  8. Use of a statistical model of the whole femur in a large scale, multi-model study of femoral neck fracture risk.

    Science.gov (United States)

    Bryan, Rebecca; Nair, Prasanth B; Taylor, Mark

    2009-09-18

    Interpatient variability is often overlooked in orthopaedic computational studies due to the substantial challenges involved in sourcing and generating large numbers of bone models. A statistical model of the whole femur incorporating both geometric and material property variation was developed as a potential solution to this problem. The statistical model was constructed using principal component analysis, applied to 21 individual computed tomography scans. To test the ability of the statistical model to generate realistic, unique, finite element (FE) femur models, it was used as a source of 1000 femurs to drive a study on femoral neck fracture risk. The study simulated the impact of an oblique fall to the side, a scenario known to account for a large proportion of hip fractures in the elderly and to have a lower fracture load than alternative loading approaches. FE model generation, application of subject-specific loading and boundary conditions, FE processing and post-processing of the solutions were completed automatically. The generated models were within the bounds of the training data used to create the statistical model, with a mesh quality high enough to be used directly by the FE solver without remeshing. The results indicated that 28 of the 1000 femurs were at highest risk of fracture. Closer analysis revealed the percentage of cortical bone in the proximal femur to be a crucial differentiator between the failed and non-failed groups. The likely fracture location was indicated to be intertrochanteric. Comparison to previous computational, clinical and experimental work revealed support for these findings.
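
    The statistical-model step can be illustrated compactly: stack the training femur descriptions as vectors, extract principal components, and sample new instances as the mean plus weighted modes. The random data below stand in for the 21 CT-derived geometry and material vectors.

    ```python
    # PCA-based statistical shape model: sample plausible new instances as
    # mean + sum_i b_i * sqrt(lambda_i) * mode_i (random stand-in data).
    import numpy as np

    rng = np.random.default_rng(6)
    n_train, n_dof = 21, 5_000        # 21 training femurs, stacked geometry+material
    X = rng.normal(size=(n_train, n_dof))

    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data: rows of Vt are the principal modes.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    eigvals = s ** 2 / (n_train - 1)  # mode variances (lambda_i)

    n_modes = 10
    b = rng.normal(size=n_modes)      # standard-normal mode weights
    b = np.clip(b, -3.0, 3.0)         # stay within the training bounds
    new_instance = mean + (b * np.sqrt(eigvals[:n_modes])) @ Vt[:n_modes]
    print(new_instance.shape)         # one synthetic femur description
    ```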

  9. Simulation of a Large Wildfire in a Coupled Fire-Atmosphere Model

    Directory of Open Access Journals (Sweden)

    Jean-Baptiste Filippi

    2018-06-01

    Full Text Available The Aullene fire devastated more than 3000 ha of Mediterranean maquis and pine forest in July 2009. The simulation of combustion processes, as well as atmospheric dynamics, represents a challenge for such scenarios because of the various involved scales, from the scale of the individual flames to the larger regional scale. A coupled approach between the Meso-NH (Meso-scale Non-Hydrostatic) atmospheric model running in LES (Large Eddy Simulation) mode and the ForeFire fire spread model is proposed for predicting fine- to large-scale effects of this extreme wildfire, showing that such simulation is possible in a reasonable time using current supercomputers. The coupling involves the surface wind to drive the fire, while heat from combustion and water vapor fluxes are injected into the atmosphere at each atmospheric time step. To be representative of the phenomenon, a sub-meter resolution was used for the simulation of the fire front, while atmospheric simulations were performed with nested grids from 2400-m to 50-m resolution. Simulations were run with or without feedback from the fire to the atmospheric model, or without coupling from the atmosphere to the fire. In the two-way mode, the burnt area was reproduced with a good degree of realism at the local scale, where an acceleration in the valley wind and over sloping terrain pushed the fire line to locations in accordance with fire passing point observations. At the regional scale, the simulated fire plume compares well with the satellite image. The study explores the strong fire-atmosphere interactions leading to intense convective updrafts extending above the boundary layer, significant downdrafts behind the fire line in the upper plume, and horizontal wind speeds feeding strong inflow into the base of the convective updrafts. The fire-induced dynamics is induced by strong near-surface sensible heat fluxes reaching maximum values of 240 kW m⁻². The dynamical production of turbulent kinetic

  10. Optimization of large-scale heterogeneous system-of-systems models.

    Energy Technology Data Exchange (ETDEWEB)

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Lee, Herbert K. H. (University of California, Santa Cruz, Santa Cruz, CA); Hart, William Eugene; Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Woodruff, David L. (University of California, Davis, Davis, CA)

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  11. Modeling, Analysis, and Optimization Issues for Large Space Structures

    Science.gov (United States)

    Pinson, L. D. (Compiler); Amos, A. K. (Compiler); Venkayya, V. B. (Compiler)

    1983-01-01

    Topics concerning the modeling, analysis, and optimization of large space structures are discussed including structure-control interaction, structural and structural dynamics modeling, thermal analysis, testing, and design.

  12. An interactive display system for large-scale 3D models

    Science.gov (United States)

    Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman

    2018-04-01

    With the improvement of 3D reconstruction theory and the rapid development of computer hardware technology, reconstructed 3D models are growing in scale and complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing power limitations, it is difficult for common 3D display software, such as MeshLab, to achieve real-time display of and interaction with large-scale 3D models. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance, and then use an out-of-core, view-dependent multi-resolution rendering scheme to realize real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming in the reconstructed scene, and 3D camera poses can also be displayed. Furthermore, memory consumption can be significantly decreased via an internal and external memory exchange mechanism, so that it is possible to display a large-scale reconstructed scene with millions of 3D points or triangular meshes on a regular PC with only 4 GB RAM.

  13. Metallogenic model for continental volcanic-type rich and large uranium deposits

    International Nuclear Information System (INIS)

    Chen Guihua

    1998-01-01

    A metallogenic model for continental volcanic-type rich and large/super-large uranium deposits has been established on the basis of an analysis of the occurrence features and ore-forming mechanisms of some continental volcanic-type rich and large/super-large uranium deposits in the world. The model proposes that uranium-enriched granite or a granitic basement is the foundation, pre-metallogenic polycyclic and multistage volcanic eruptions are prerequisites, an intense tectonic-extensional environment is the key to ore formation, and a relatively enclosed geologic setting is the reliable protection condition of the deposit. Using the model, the author explains the occurrence regularities of some rich and large/super-large uranium deposits, such as the Strelichof uranium deposit in Russia, the Dornot uranium deposit in Mongolia, the Olympic Dam Cu-U-Au-REE deposit in Australia, and uranium deposit No. 460 and the Zhoujiashan uranium deposit in China, and then compares the above deposits with the large low-grade uranium deposit No. 661 as well

  14. Models of user involvement in the mental health context: intentions and implementation challenges.

    Science.gov (United States)

    Storm, Marianne; Edwards, Adrian

    2013-09-01

    Patient-centered care, shared decision-making, patient participation and the recovery model are models of care which incorporate user involvement and patients' perspectives on their treatment and care. The aims of this paper are to examine these different care models and their association with user involvement in the mental health context and discuss some of the challenges associated with their implementation. The sources used are health policy documents and published literature and research on patient-centered care, shared decision-making, patient participation and recovery. The policy documents advocate that mental health services should be oriented towards patients' or users' needs, participation and involvement. These policies also emphasize recovery and integration of people with mental disorders in the community. However, these collaborative care models have generally been subject to limited empirical research about effectiveness. There are also challenges to implementation of the models in inpatient care. What evidence there is indicates tensions between patients' and providers' perspectives on treatment and care. There are issues related to risk and the person's capacity for user involvement, and concerns about what role patients themselves wish to play in decision-making. Lack of competence and awareness among providers are further issues. Further work on training, evaluation and implementation is needed to ensure that inpatient mental health services are adapting user oriented care models at all levels of services.

  15. Fires involving radioactive materials : transference model; operative recommendations

    International Nuclear Information System (INIS)

    Rodriguez, C.E.; Puntarulo, L.J.; Canibano, J.A.

    1988-01-01

    In all aspects related to the nuclear activity, the occurrence of an explosion, fire or burst type accident, with or without victims, is directly related to the characteristics of the site. The present work analyses the different parameters involved, describing a transference model and recommendations for evaluation and control of the radiological risk for firemen. Special emphasis is placed on the measurement of the variables existing in this kind of operations

  16. Contribution of large scale coherence to wind turbine power: A large eddy simulation study in periodic wind farms

    Science.gov (United States)

    Chatterjee, Tanmoy; Peet, Yulia T.

    2018-03-01

    Length scales of eddies involved in the power generation of infinite wind farms are studied by analyzing the spectra of the turbulent flux of mean kinetic energy (MKE) from large eddy simulations (LES). Large-scale structures an order of magnitude bigger than the turbine rotor diameter (D) are shown to make a substantial contribution to wind power. Varying dynamics in the intermediate scales (D–10D) are also observed in a parametric study involving inter-turbine distances and turbine hub heights. Further insight into the eddies responsible for power generation is provided by a scaling analysis of the two-dimensional premultiplied spectra of the MKE flux. The LES code is developed in a high-Reynolds-number near-wall modeling framework, using the open-source spectral element code Nek5000, and the wind turbines are modelled using a state-of-the-art actuator line model. The LES of infinite wind farms has been validated against statistical results from the previous literature. The study is expected to improve our understanding of the complex multiscale dynamics in large wind farms and to identify the length scales that contribute to power. This information can be useful for the design of wind farm layouts and turbine placement that take advantage of the large-scale structures contributing to wind turbine power.
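
    The spectral diagnostic can be sketched in a few lines: Fourier-transform a streamwise signal, form the one-dimensional spectrum, and premultiply by wavenumber so that the curve's peaks indicate the energetic length scales. The synthetic signal below stands in for LES data.

    ```python
    # Premultiplied 1-D spectrum k*E(k) of a synthetic streamwise signal,
    # the diagnostic used to attribute flux contributions to length scales.
    import numpy as np

    L, n = 10_000.0, 8_192                       # domain length (m), samples
    x = np.linspace(0.0, L, n, endpoint=False)
    # Synthetic stand-in for an LES signal: two embedded length scales plus noise.
    u = np.sin(2 * np.pi * x / 1_000.0) + 0.5 * np.sin(2 * np.pi * x / 120.0) \
        + 0.1 * np.random.default_rng(7).normal(size=n)

    u_hat = np.fft.rfft(u - u.mean())
    k = 2 * np.pi * np.fft.rfftfreq(n, d=L / n)  # angular wavenumber
    E = (np.abs(u_hat) ** 2) / n                 # 1-D spectrum (arbitrary norm)

    premult = k * E                              # premultiplied spectrum k*E(k)
    k_peak = k[np.argmax(premult[1:]) + 1]
    print(f"dominant length scale ~ {2 * np.pi / k_peak:.0f} m")
    ```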

  17. Introducing an Intervention Model for Fostering Affective Involvement with Persons Who Are Congenitally Deafblind

    Science.gov (United States)

    Martens, Marga A. W.; Janssen, Marleen J.; Ruijssenaars, Wied A. J. J. M.; Riksen-Walraven, J. Marianne

    2014-01-01

    The article presented here introduces the Intervention Model for Affective Involvement (IMAI), which was designed to train staff members (for example, teachers, caregivers, support workers) to foster affective involvement during interaction and communication with persons who have congenital deaf-blindness. The model is theoretically underpinned,…

  18. Hydrogen combustion modelling in large-scale geometries

    International Nuclear Information System (INIS)

    Studer, E.; Beccantini, A.; Kudriakov, S.; Velikorodny, A.

    2014-01-01

    Hydrogen risk mitigation based on catalytic recombiners cannot exclude the formation of flammable clouds during the course of a severe accident in a Nuclear Power Plant. The consequences of combustion processes have to be assessed based on existing knowledge and the state of the art in CFD combustion modelling. The Fukushima accidents have also revealed the need to take hydrogen explosion phenomena into account in risk management. Combustion modelling in large-scale geometries is thus one of the remaining severe accident safety issues. At present, no combustion model exists that can accurately describe a combustion process inside a geometrical configuration typical of the Nuclear Power Plant (NPP) environment. The major attention in model development therefore has to be paid to the adaptation of existing approaches, or the creation of new ones, capable of reliably predicting the possibility of flame acceleration in geometries of that type. A set of experiments performed previously in the RUT facility and the Heiss Dampf Reactor (HDR) facility is used as a validation database for the development of a three-dimensional gas dynamic model for the simulation of hydrogen-air-steam combustion in large-scale geometries. The combustion regimes include slow deflagration, fast deflagration, and detonation. Modelling is based on the Reactive Discrete Equation Method (RDEM), where the flame is represented as an interface separating reactants and combustion products. The transport of the progress variable is governed by different flame surface wrinkling factors. The results of the numerical simulations are presented together with comparisons, critical discussion and conclusions. (authors)

  19. Modeling of 3D Aluminum Polycrystals during Large Deformations

    International Nuclear Information System (INIS)

    Maniatty, Antoinette M.; Littlewood, David J.; Lu Jing; Pyle, Devin

    2007-01-01

    An approach for generating, meshing, and modeling 3D polycrystals, with a focus on aluminum alloys, subjected to large deformation processes is presented. A Potts-type model is used to generate statistically representative grain structures with periodicity to allow scale-linking. The grain structures are compared to experimentally observed grain structures to validate that they are representative. A procedure for generating a geometric model from the voxel data is developed, allowing for adaptive meshing of the generated grain structure. Material behavior is governed by an appropriate crystal elasto-viscoplastic constitutive model. The elasto-viscoplastic model is implemented in a three-dimensional, finite deformation, mixed finite element program. In order to handle the large-scale problems of interest, a parallel implementation is utilized. A multiscale procedure is used to link larger-scale models of deformation processes to the polycrystal model, where periodic boundary conditions on the fluctuation field are enforced. Finite element models of 3D polycrystal grain structures are presented along with observations made from these simulations
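
    The grain-structure generation step can be illustrated with a minimal Potts-type Monte Carlo on a periodic lattice: sites carry grain labels, and label flips are accepted with a Metropolis rule, coarsening the structure into grains. Lattice size, temperature and sweep count are illustrative.

    ```python
    # Minimal Potts-model grain coarsening on a periodic lattice
    # (illustrative size, temperature and sweep count).
    import numpy as np

    rng = np.random.default_rng(8)
    N, Q, T = 64, 20, 0.3                    # lattice size, labels, temperature
    spins = rng.integers(0, Q, size=(N, N))  # random initial grain labels

    def unlike_neighbors(s, i, j, q):
        """Unlike nearest neighbors (periodic) if site (i, j) carried label q."""
        nbrs = (s[(i - 1) % N, j], s[(i + 1) % N, j],
                s[i, (j - 1) % N], s[i, (j + 1) % N])
        return sum(1 for n in nbrs if n != q)

    for _ in range(50 * N * N):              # Metropolis label-flip attempts
        i, j = rng.integers(0, N, size=2)
        new = int(rng.integers(0, Q))
        dE = (unlike_neighbors(spins, i, j, new)
              - unlike_neighbors(spins, i, j, spins[i, j]))
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] = new

    print("grain labels remaining:", np.unique(spins).size)
    ```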

  20. Modeling of modification experiments involving neutral-gas release

    International Nuclear Information System (INIS)

    Bernhardt, P.A.

    1983-01-01

    Many experiments involve the injection of neutral gases into the upper atmosphere. Examples are critical velocity experiments, MHD wave generation, ionospheric hole production, plasma striation formation, and ion tracing. Many of these experiments are discussed in other sessions of the Active Experiments Conference. This paper limits its discussion to: (1) the modeling of the neutral gas dynamics after injection, (2) subsequent formation of ionosphere holes, and (3) use of such holes as experimental tools

  1. Customer involvement in greening the supply chain: an interpretive structural modeling methodology

    Science.gov (United States)

    Kumar, Sanjay; Luthra, Sunil; Haleem, Abid

    2013-04-01

    The role of customers in green supply chain management needs to be identified and recognized as an important research area. This paper is an attempt to explore the involvement aspect of customers in the greening of the supply chain (SC). An empirical research approach has been used to collect primary data to rank different variables for effective customer involvement in green concept implementation in the SC. An interpretive structural model has been presented, and the variables have been classified using MICMAC (matrice d'impacts croisés multiplication appliquée à un classement, i.e. cross-impact matrix multiplication applied to classification) analysis. Contextual relationships among the variables have been established using experts' opinions. The research may help practicing managers to understand the interaction among the variables affecting customer involvement. Further, this understanding may be helpful in framing the policies and strategies to green the SC. Analyzing the interactions among variables for effective customer involvement in greening the SC, in order to develop the structural model in the Indian perspective, is an effort towards promoting environmental consciousness.
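
    The ISM step rests on simple Boolean matrix algebra: build a reachability matrix from expert-judged direct influences, close it transitively, and partition variables into levels. The 4-variable adjacency below is a hypothetical example, not the paper's data.

    ```python
    # ISM core: transitive closure (Warshall) of a reachability matrix and
    # level partitioning. The 4-variable adjacency is a hypothetical example.
    import numpy as np

    A = np.array([[1, 1, 0, 0],   # expert-judged direct influences (toy data)
                  [0, 1, 1, 0],
                  [0, 0, 1, 1],
                  [0, 0, 0, 1]], dtype=bool)

    R = A.copy()
    for k in range(len(R)):       # Warshall: R[i,j] |= R[i,k] & R[k,j]
        R |= R[:, [k]] & R[[k], :]

    # A variable is top-level when its reachability set equals the
    # intersection of its reachability and antecedent sets.
    remaining, level = set(range(len(R))), 1
    while remaining:
        reach = {i: {j for j in remaining if R[i, j]} for i in remaining}
        ante = {i: {j for j in remaining if R[j, i]} for i in remaining}
        top = [i for i in remaining if reach[i] == reach[i] & ante[i]]
        print(f"level {level}: variables {sorted(top)}")
        remaining -= set(top)
        level += 1
    ```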

  2. An accurate and simple large signal model of HEMT

    DEFF Research Database (Denmark)

    Liu, Qing

    1989-01-01

    A large-signal model of discrete HEMTs (high-electron-mobility transistors) has been developed. It is simple and suitable for SPICE simulation of hybrid digital ICs. The model parameters are extracted by using computer programs and data provided by the manufacturer. Based on this model, a hybrid...

  3. Aero-Acoustic Modelling using Large Eddy Simulation

    International Nuclear Information System (INIS)

    Shen, W Z; Soerensen, J N

    2007-01-01

    The splitting technique for aero-acoustic computations is extended to simulate three-dimensional flow and acoustic waves from airfoils. The aero-acoustic model is coupled to a sub-grid-scale turbulence model for Large-Eddy Simulations. In the first test case, the model is applied to compute laminar flow past a NACA 0015 airfoil at a Reynolds number of 800, a Mach number of 0.2 and an angle of attack of 20 deg. The model is then applied to compute turbulent flow past a NACA 0015 airfoil at a Reynolds number of 100 000, a Mach number of 0.2 and an angle of attack of 20 deg. The predicted noise spectrum is compared to experimental data

  4. Effects of deceptive packaging and product involvement on purchase intention: an elaboration likelihood model perspective.

    Science.gov (United States)

    Lammers, H B

    2000-04-01

    From an Elaboration Likelihood Model perspective, it was hypothesized that postexposure awareness of deceptive packaging claims would have a greater negative effect on scores for purchase intention by consumers lowly involved rather than highly involved with a product (n = 40). Undergraduates who were classified as either highly or lowly (ns = 20 and 20) involved with M&Ms examined either a deceptive or non-deceptive package design for M&Ms candy and were subsequently informed of the deception employed in the packaging before finally rating their intention to purchase. As anticipated, highly deceived subjects who were low in involvement rated intention to purchase lower than their highly involved peers. Overall, the results attest to the robustness of the model and suggest that the model has implications beyond advertising effects and into packaging effects.

  5. The Dynamics of Large-Amplitude Motion in Energized Molecules

    Energy Technology Data Exchange (ETDEWEB)

    Perry, David S. [Univ. of Akron, OH (United States). Dept. of Chemistry

    2016-05-27

    Chemical reactions involve large-amplitude nuclear motion along the reaction coordinate that serves to distinguish reactants from products. Some reactions, such as roaming reactions and reactions proceeding through a loose transition state, involve more than one large-amplitude degree of freedom. Because of the limitation of exact quantum nuclear dynamics to small systems, one must, in general, define the active degrees of freedom and separate them in some way from the other degrees of freedom. In this project, we use large-amplitude motion in bound model systems to investigate the coupling of large-amplitude degrees of freedom to other nuclear degrees of freedom. This approach allows us to use the precision and power of high-resolution molecular spectroscopy to probe the specific coupling mechanisms involved, and to apply the associated theoretical tools. In addition to slit-jet spectra at the University of Akron, the current project period has involved collaboration with Michel Herman and Nathalie Vaeck of the Université Libre de Bruxelles, and with Brant Billinghurst at the Canadian Light Source (CLS).

  6. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    NARCIS (Netherlands)

    Loon, van A.F.; Huijgevoort, van M.H.J.; Lanen, van H.A.J.

    2012-01-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, not sure whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is how well do large-scale models simulate the propagation from meteorological to hydrological

  7. Verifying large SDL-specifications using model checking

    NARCIS (Netherlands)

    Sidorova, N.; Steffen, M.; Reed, R.; Reed, J.

    2001-01-01

    In this paper we propose a methodology for model-checking based verification of large SDL specifications. The methodology is illustrated by a case study of an industrial medium-access protocol for wireless ATM. To cope with the state space explosion, the verification exploits the layered and modular

  8. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    Directory of Open Access Journals (Sweden)

    A. F. Van Loon

    2012-11-01

    Full Text Available Hydrological drought is increasingly studied using large-scale models. It is, however, not sure whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is how well do large-scale models simulate the propagation from meteorological to hydrological drought? To answer this question, we evaluated the simulation of drought propagation in an ensemble mean of ten large-scale models, both land-surface models and global hydrological models, that participated in the model intercomparison project of WATCH (WaterMIP. For a selection of case study areas, we studied drought characteristics (number of droughts, duration, severity, drought propagation features (pooling, attenuation, lag, lengthening, and hydrological drought typology (classical rainfall deficit drought, rain-to-snow-season drought, wet-to-dry-season drought, cold snow season drought, warm snow season drought, composite drought.

    Drought characteristics simulated by large-scale models clearly reflected drought propagation; i.e. drought events became fewer and longer when moving through the hydrological cycle. However, more differentiation was expected between fast and slowly responding systems, with slowly responding systems having fewer and longer droughts in runoff than fast responding systems. This was not found using large-scale models. Drought propagation features were poorly reproduced by the large-scale models, because runoff reacted immediately to precipitation, in all case study areas. This fast reaction to precipitation, even in cold climates in winter and in semi-arid climates in summer, also greatly influenced the hydrological drought typology as identified by the large-scale models. In general, the large-scale models had the correct representation of drought types, but the percentages of occurrence had some important mismatches, e.g. an overestimation of classical rainfall deficit droughts, and an

  9. A Research Framework for Understanding the Practical Impact of Family Involvement in the Juvenile Justice System: The Juvenile Justice Family Involvement Model.

    Science.gov (United States)

    Walker, Sarah Cusworth; Bishop, Asia S; Pullmann, Michael D; Bauer, Grace

    2015-12-01

    Family involvement is recognized as a critical element of service planning for children's mental health, welfare and education. For the juvenile justice system, however, parents' roles in this system are complex due to youths' legal rights, public safety, a process which can legally position parents as plaintiffs, and a historical legacy of blaming parents for youth indiscretions. Three recent national surveys of juvenile justice-involved parents reveal that the current paradigm elicits feelings of stress, shame and distrust among parents and is likely leading to worse outcomes for youth, families and communities. While research on the impact of family involvement in the justice system is starting to emerge, the field currently has no organizing framework to guide a research agenda, interpret outcomes or translate findings for practitioners. We propose a research framework for family involvement that is informed by a comprehensive review and content analysis of current, published arguments for family involvement in juvenile justice along with a synthesis of family involvement efforts in other child-serving systems. In this model, family involvement is presented as an ascending, ordinal concept beginning with (1) exclusion, and moving toward climates characterized by (2) information-giving, (3) information-eliciting and (4) full, decision-making partnerships. Specific examples of how courts and facilities might align with these levels are described. Further, the model makes predictions for how involvement will impact outcomes at multiple levels with applications for other child-serving systems.

  10. Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam's Window.

    Science.gov (United States)

    Onorante, Luca; Raftery, Adrian E

    2016-01-01

    Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam's window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods.
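
    The mechanics can be sketched compactly: each candidate model's probability is flattened with a forgetting factor and then updated by its one-step predictive likelihood; a dynamic Occam's window would additionally restrict this update to the currently most probable subset. The data and model set below are toy stand-ins.

    ```python
    # Toy dynamic model averaging: forgetting-factor prior + predictive-likelihood
    # update over a small model set (stand-in for the paper's nowcasting setup).
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(9)
    T = 200
    y = np.concatenate([rng.normal(0.0, 1.0, T // 2),   # regime change halfway
                        rng.normal(2.0, 1.0, T // 2)])

    model_means = np.array([0.0, 1.0, 2.0])  # three candidate "models"
    alpha = 0.95                             # forgetting factor
    probs = np.full(3, 1 / 3)

    for t in range(T):
        prior = probs ** alpha
        prior /= prior.sum()                 # flattened (forgetting) prior
        lik = norm.pdf(y[t], loc=model_means, scale=1.0)
        probs = prior * lik
        probs /= probs.sum()                 # posterior model probabilities

    print(np.round(probs, 3))                # should favor the mean-2 model
    ```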

  11. A dynamic performance model for redox-flow batteries involving soluble species

    International Nuclear Information System (INIS)

    Shah, A.A.; Watt-Smith, M.J.; Walsh, F.C.

    2008-01-01

    A transient modelling framework for a vanadium redox-flow battery (RFB) is developed and experiments covering a range of vanadium concentration and electrolyte flow rate are conducted. The two-dimensional model is based on a comprehensive description of mass, charge and momentum transport and conservation, and is combined with a global kinetic model for reactions involving vanadium species. The model is validated against the experimental data and is used to study the effects of variations in concentration, electrolyte flow rate and electrode porosity. Extensions to the model and future work are suggested

  12. Questionnaire: involved actors in large disused components management - Summary Of Responses To The Questionnaire

    International Nuclear Information System (INIS)

    2012-01-01

    The aim of the Questionnaire is to establish an overview of the various bodies [Actors] that have responsibilities for, or input to, the issue of large component decommissioning. In answering, the intent is to cover the overall organisation and those parts that have most relevance to large components. The answers should reflect the areas from site operations to decommissioning, as well as the wider issue of disposal at another location. The Questionnaire covers the following points: 1 - What is the country's (institutional) structure for decommissioning? 2 - Who does what, and where do the responsibilities lie? 3 - Which bodies have responsibility for on-site safety regulation, discharges and disposal? 4 - Which body(s) owns the facilities? 5 - Describe the responsibilities for funding of the decommissioning plan and the disposal plan; are they one and the same body? Whilst there are differences between countries, there are some common threads. Regulation is through the state, though the number of regulators involved may vary. In summary, the IAEA principles concerning the independence of the regulatory body are followed. Funding arrangements vary, but plans exist. Similarly, ownership of facilities is a mix of state and private. Some systems require a separate decommissioning licence, with Spain having the clearest demarcation of responsibilities for the decommissioning phase and waste management responsibilities

  13. Challenges of Modeling Flood Risk at Large Scales

    Science.gov (United States)

    Guin, J.; Simic, M.; Rowe, J.

    2009-04-01

    Flood risk management is a major concern for many nations and for the insurance sector in places where this peril is insured. A prerequisite for risk management, whether in the public sector or in the private sector, is an accurate estimation of the risk. Mitigation measures and traditional flood management techniques are most successful when the problem is viewed at a large regional scale, such that all inter-dependencies in a river network are well understood. From an insurance perspective, the jury is still out on whether flood is an insurable peril. However, with advances in modeling techniques and computer power it is possible to develop models that allow proper risk quantification at the scale suitable for a viable insurance market for the flood peril. In order to serve the insurance market, a model has to be event-simulation based and has to provide financial risk estimates that form the basis for risk pricing, risk transfer and risk management at all levels of the insurance industry at large. In short, for a collection of properties, henceforth referred to as a portfolio, the critical output of the model is an annual probability distribution of economic losses from a single flood occurrence (flood event) or from an aggregation of all events in any given year. In this paper, the challenges of developing such a model are discussed in the context of Great Britain, for which a model has been developed. The model comprises several physically motivated components, so that the primary attributes of the phenomenon are accounted for. The first component, the rainfall generator, simulates a continuous series of rainfall events in space and time over thousands of years, which are physically realistic while maintaining the statistical properties of rainfall at all locations over the model domain. A physically based runoff generation module feeds all the rivers in Great Britain, whose total length of stream links amounts to about 60,000 km. A dynamical flow routing

  14. Drought forecasting in Luanhe River basin involving climatic indices

    Science.gov (United States)

    Ren, Weinan; Wang, Yixuan; Li, Jianzhu; Feng, Ping; Smith, Ronald J.

    2017-11-01

    Drought is regarded as one of the most severe natural disasters globally. This is especially the case in Tianjin City, Northern China, where drought can affect economic development and people's livelihoods. Drought forecasting, the basis of drought management, is an important mitigation strategy. In this paper, we develop a probabilistic forecasting model, which forecasts transition probabilities from a current Standardized Precipitation Index (SPI) value to a future SPI class, based on the conditional distribution of a multivariate normal distribution so as to involve two large-scale climatic indices at the same time, and apply the forecasting model to 26 rain gauges in the Luanhe River basin in North China. The establishment of the model and the derivation of the SPI rest on the hypothesis that aggregated monthly precipitation is normally distributed. Pearson correlation and Shapiro-Wilk normality tests are used to select an appropriate SPI time scale and large-scale climatic indices. Findings indicated that longer-term aggregated monthly precipitation was, in general, more likely to be normally distributed, and that forecasting models should be applied to each gauge individually rather than to the whole basin. Taking Liying Gauge as an example, we illustrate the impact of the SPI time scale and lead time on transition probabilities. The controlling climatic indices for each gauge are then selected by the Pearson correlation test, and the multivariate normality of the SPI, the corresponding climatic indices for the current month, and the SPI 1, 2 and 3 months later is demonstrated using the Shapiro-Wilk normality test. Subsequently, we illustrate the impact of large-scale oceanic-atmospheric circulation patterns on transition probabilities. Finally, we use a score method to evaluate and compare the performance of the three forecasting models and compare them with two traditional models which forecast transition probabilities from a current to a future SPI class. The results show that the
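
    The conditional-distribution step can be sketched directly: given a joint Gaussian over the current SPI, the climatic indices and a future SPI, condition on the observed values and integrate the resulting univariate normal over an SPI class interval. The covariance entries and class bounds below are assumptions for illustration.

    ```python
    # Transition probability to a future SPI class from the conditional normal
    # of a multivariate Gaussian (illustrative covariance, hypothetical values).
    import numpy as np
    from scipy.stats import norm

    # Joint over (SPI_now, index1, index2, SPI_future); SPI margins are N(0,1).
    Sigma = np.array([[1.0, 0.3, 0.2, 0.6],
                      [0.3, 1.0, 0.1, 0.4],
                      [0.2, 0.1, 1.0, 0.3],
                      [0.6, 0.4, 0.3, 1.0]])
    mu = np.zeros(4)

    obs = np.array([-1.2, 0.5, -0.3])  # current SPI and two climate indices

    # Partition and condition: x_future | x_obs ~ N(mu_c, s2_c).
    S_oo, S_fo = Sigma[:3, :3], Sigma[3, :3]
    w = np.linalg.solve(S_oo, S_fo)
    mu_c = mu[3] + w @ (obs - mu[:3])
    s2_c = Sigma[3, 3] - w @ S_fo

    # Probability of a "moderate drought" class (-1.5 <= SPI < -1.0);
    # class bounds assumed per the usual SPI classification.
    p = norm.cdf(-1.0, mu_c, np.sqrt(s2_c)) - norm.cdf(-1.5, mu_c, np.sqrt(s2_c))
    print(f"P(moderate drought next month) = {p:.3f}")
    ```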

  15. The use of public participation and economic appraisal for public involvement in large-scale hydropower projects: Case study of the Nam Theun 2 Hydropower Project

    International Nuclear Information System (INIS)

    Mirumachi, Naho; Torriti, Jacopo

    2012-01-01

    Gaining public acceptance is one of the main issues with large-scale low-carbon projects such as hydropower development. The World Commission on Dams has recommended that, to gain public acceptance, public involvement is necessary in the decision-making process. Because international financial institutions are financially significant actors in the planning and implementation of large-scale hydropower projects in developing-country contexts, the paper examines the ways in which they may influence public involvement. Using the case study of the Nam Theun 2 Hydropower Project in Laos, the paper analyses how public involvement facilitated by the Asian Development Bank had a bearing on procedural and distributional justice. The paper analyses the extent of public participation and the assessment of the full social and environmental costs of the project in the Cost-Benefit Analysis conducted during the project appraisal stage. It is argued that while efforts were made to involve the public, several factors influenced procedural and distributional justice: the late contribution of the Asian Development Bank in the project appraisal stage, and the issues of non-market values and of the discount rate used to calculate the full social and environmental costs. - Highlights: ► Public acceptance in large-scale hydropower projects is examined. ► Both procedural and distributional justice are important for public acceptance. ► International Financial Institutions can influence the level of public involvement. ► Public involvement benefits consideration of non-market values and discount rates.

  16. Large-scale linear programs in planning and prediction.

    Science.gov (United States)

    2017-06-01

    Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints or robust optim...
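
    The abstract is truncated, but its core object, a linear program, is easy to illustrate at toy scale; a minimal sketch with scipy.optimize.linprog on a hypothetical minimum-cost routing problem (all data are illustrative):

    import numpy as np
    from scipy.optimize import linprog

    # Toy LP: route 10 units of demand over two links at minimum cost,
    # subject to a capacity limit on link 1.
    c = np.array([1.0, 3.0])          # per-unit link costs
    A_eq = np.array([[1.0, 1.0]])     # flow conservation: x1 + x2 = demand
    b_eq = np.array([10.0])
    bounds = [(0, 6), (0, None)]      # link-1 capacity of 6 units

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    print(res.x, res.fun)             # expected: [6, 4], cost 18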

  17. Large deflection of viscoelastic beams using fractional derivative model

    International Nuclear Information System (INIS)

    Bahranini, Seyed Masoud Sotoodeh; Eghtesad, Mohammad; Ghavanloo, Esmaeal; Farid, Mehrdad

    2013-01-01

    This paper deals with large deflection of viscoelastic beams using a fractional derivative model. For this purpose, a nonlinear finite element formulation of viscoelastic beams in conjunction with the fractional derivative constitutive equations has been developed. The four-parameter fractional derivative model has been used to describe the constitutive equations. The deflected configuration for a uniform beam with different boundary conditions and loads is presented. The effect of the order of the fractional derivative on the large deflection of the cantilever viscoelastic beam is investigated after 10, 100, and 1000 hours. The main contribution of this paper is a finite element implementation for nonlinear analysis of the fractional viscoelastic model that stores both strain and stress histories. The validity of the present analysis is confirmed by comparing the results with those found in the literature.
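
    The four-parameter constitutive model itself is not reproduced in this record, but the history storage it motivates can be illustrated with the Grunwald-Letnikov approximation of a fractional derivative, whose evaluation at each step sums over the entire stored signal history; a minimal sketch (order, step, and test signal are illustrative):

    import numpy as np

    def gl_fractional_derivative(f, alpha, h):
        """Grunwald-Letnikov approximation of the order-alpha derivative of
        the sampled signal f (step h). Each output point sums over the whole
        stored history, which is why fractional viscoelastic finite element
        codes must keep past strains and stresses."""
        n = len(f)
        w = np.empty(n)
        w[0] = 1.0
        for k in range(1, n):                 # recursive GL weights
            w[k] = w[k - 1] * (k - 1 - alpha) / k
        out = np.empty(n)
        for i in range(n):
            out[i] = np.dot(w[: i + 1], f[i::-1]) / h**alpha
        return out

    t = np.linspace(0.0, 1.0, 201)
    print(gl_fractional_derivative(t, alpha=0.5, h=t[1] - t[0])[-1])
    # D^0.5 of f(t)=t at t=1 is 2/sqrt(pi) ~ 1.128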

  18. Modelling the fathering role: Experience in the family of origin and father involvement

    Directory of Open Access Journals (Sweden)

    Mihić Ivana

    2012-01-01

    The study presented in this paper deals with the effects of experiences with the father in the family of origin on the fathering role in the family of procreation. The results of studies so far point to the great importance of such experiences in parental role modelling, while recent approaches have suggested the concept of an introjected notion, or an internal working model, of the fathering role as a way to operationalise the transgenerational transfer. The study included 247 two-parent families whose oldest child attended preschool education. Fathers provided information on self-assessed involvement via the Inventory of Father Involvement, while both fathers and mothers gave information on introjected experiences from the family of origin via the inventory Presence of the Father in the Family of Origin. It was shown that the father's experiences from the family of origin had significant direct effects on his involvement in child-care. Particularly important were experiences of negative emotional exchange, physical closeness and availability of the father, as well as beliefs about the importance of the father as a parent. Although maternal experiences from the family of origin did not contribute significantly to father involvement, shared beliefs about the father's importance as a parent in the parenting alliance had an effect on greater involvement in child-care. The data confirm the hypotheses on modelling of the fathering role, but also open the issue of the intergenerational maintenance of traditional forms of father involvement in families in Serbia.

  19. Misspecified poisson regression models for large-scale registry data: inference for 'large n and small p'.

    Science.gov (United States)

    Grøn, Randi; Gerds, Thomas A; Andersen, Per K

    2016-03-30

    Poisson regression is an important tool in register-based epidemiology where it is used to study the association between exposure variables and event rates. In this paper, we will discuss the situation with 'large n and small p', where n is the sample size and p is the number of available covariates. Specifically, we are concerned with modeling options when there are time-varying covariates that can have time-varying effects. One problem is that tests of the proportional hazards assumption, of no interactions between exposure and other observed variables, or of other modeling assumptions have large power due to the large sample size and will often indicate statistical significance even for numerically small deviations that are unimportant for the subject matter. Another problem is that information on important confounders may be unavailable. In practice, this situation may lead to simple working models that are then likely misspecified. To support and improve conclusions drawn from such models, we discuss methods for sensitivity analysis, for estimation of average exposure effects using aggregated data, and a semi-parametric bootstrap method to obtain robust standard errors. The methods are illustrated using data from the Danish national registries investigating the diabetes incidence for individuals treated with antipsychotics compared with the general unexposed population. Copyright © 2015 John Wiley & Sons, Ltd.
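
    The paper's aggregated-data estimators and semi-parametric bootstrap are not detailed in this record; a common, related safeguard for a misspecified Poisson working model, robust sandwich standard errors, can be sketched on simulated overdispersed data (all names and values are illustrative):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 10_000                                  # 'large n, small p'
    x = rng.integers(0, 2, n).astype(float)     # exposure indicator
    # Overdispersed counts: the Poisson working model is misspecified on purpose.
    frailty = rng.gamma(shape=2.0, scale=0.5, size=n)
    y = rng.poisson(np.exp(0.2 + 0.5 * x) * frailty)

    X = sm.add_constant(x)
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")
    print(fit.params)   # exp(coef on x) estimates the rate ratio
    print(fit.bse)      # HC0 = robust (sandwich) standard errors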

  20. Differences in passenger car and large truck involved crash frequencies at urban signalized intersections: an exploratory analysis.

    Science.gov (United States)

    Dong, Chunjiao; Clarke, David B; Richards, Stephen H; Huang, Baoshan

    2014-01-01

    The influence of intersection features on safety has been examined extensively because intersections experience a relatively large proportion of motor vehicle conflicts and crashes. Although there are distinct differences between passenger cars and large trucks (size, operating characteristics, dimensions, and weight), modeling crash counts across vehicle types is rarely addressed. This paper develops and presents a multivariate regression model of crash frequencies by collision vehicle type using crash data for urban signalized intersections in Tennessee. In addition, the performance of univariate Poisson-lognormal (UVPLN), multivariate Poisson (MVP), and multivariate Poisson-lognormal (MVPLN) regression models in establishing the relationship between crashes, traffic factors, and geometric design of roadway intersections is investigated. Bayesian methods are used to estimate the unknown parameters of these models. The evaluation results suggest that the MVPLN model possesses most of the desirable statistical properties in developing the relationships. Compared to the UVPLN and MVP models, the MVPLN model better identifies significant factors and predicts crash frequencies. The findings suggest that traffic volume, truck percentage, lighting condition, and intersection angle significantly affect intersection safety. Important differences in car, car-truck, and truck crash frequencies with respect to various risk factors were found to exist between models. The paper provides some new or more comprehensive observations that have not been covered in previous studies. Copyright © 2013 Elsevier Ltd. All rights reserved.
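
    The MVPLN likelihood is not reproduced here, but its generative structure is easy to sketch: Poisson counts whose log-rates share correlated normal (hence lognormal) heterogeneity across crash types; a minimal simulation with illustrative parameters:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 5_000                                   # intersections
    # Latent log-rates for (car, car-truck, truck) crashes: correlated normals.
    mu = np.log([4.0, 0.8, 0.3])
    cov = np.array([[0.30, 0.12, 0.08],
                    [0.12, 0.25, 0.10],
                    [0.08, 0.10, 0.40]])
    eps = rng.multivariate_normal(np.zeros(3), cov, size=n)
    rates = np.exp(mu + eps)                    # lognormal heterogeneity
    counts = rng.poisson(rates)                 # multivariate Poisson-lognormal
    print(counts.mean(axis=0))                  # marginal means exceed exp(mu):
    # E[count_j] = exp(mu_j + cov[j, j] / 2), the lognormal mean inflation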

  1. Topic modeling for cluster analysis of large biological and medical datasets.

    Science.gov (United States)

    Zhao, Weizhong; Zou, Wen; Chen, James J

    2014-01-01

    The "big data" moniker is nowhere better deserved than in describing the ever-increasing prodigiousness and complexity of biological and medical datasets. New methods are needed to generate and test hypotheses, foster biological interpretation, and build validated predictors. Although multivariate techniques such as cluster analysis may allow researchers to identify groups, or clusters, of related variables, the accuracy and effectiveness of traditional clustering methods diminish for large, high-dimensional datasets. Topic modeling is an active research field in machine learning and has mainly been used as an analytical tool to structure large textual corpora for data mining. Its ability to reduce high dimensionality to a small number of latent variables makes it suitable as a means for clustering or for overcoming clustering difficulties in large biological and medical datasets. In this study, three topic model-derived clustering methods (highest probable topic assignment, feature selection and feature extraction) are proposed and tested on the cluster analysis of three large datasets: a Salmonella pulsed-field gel electrophoresis (PFGE) dataset, a lung cancer dataset, and a breast cancer dataset, which represent various types of large biological or medical datasets. All three methods are shown to improve the efficacy/effectiveness of clustering results on the three datasets in comparison to traditional methods. A preferable cluster analysis method emerged for each of the three datasets on the basis of replicating known biological truths. Topic modeling could be advantageously applied to the large datasets of biological or medical research. The three proposed topic model-derived clustering methods (highest probable topic assignment, feature selection and feature extraction) yield clustering improvements for the three different data types. Clusters more efficaciously represent truthful groupings and subgroupings in the data than traditional methods, suggesting
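
    The first of the three proposed methods, highest probable topic assignment, can be sketched: fit a topic model to the count data and assign each sample to its most probable latent topic; a minimal scikit-learn example on synthetic counts (not the study's datasets):

    import numpy as np
    from sklearn.decomposition import LatentDirichletAllocation

    rng = np.random.default_rng(2)
    # Synthetic count matrix: 200 samples x 50 features (e.g. PFGE band counts).
    X = rng.poisson(lam=rng.uniform(0.5, 3.0, size=(200, 50)))

    lda = LatentDirichletAllocation(n_components=4, random_state=0)
    theta = lda.fit_transform(X)        # per-sample topic proportions
    clusters = theta.argmax(axis=1)     # highest probable topic assignment
    print(np.bincount(clusters))        # cluster sizes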

  2. Statistical process control charts for attribute data involving very large sample sizes: a review of problems and solutions.

    Science.gov (United States)

    Mohammed, Mohammed A; Panesar, Jagdeep S; Laney, David B; Wilson, Richard

    2013-04-01

    The use of statistical process control (SPC) charts in healthcare is increasing. The primary purpose of SPC is to distinguish between common-cause variation which is attributable to the underlying process, and special-cause variation which is extrinsic to the underlying process. This is important because improvement under common-cause variation requires action on the process, whereas special-cause variation merits an investigation to first find the cause. Nonetheless, when dealing with attribute or count data (eg, number of emergency admissions) involving very large sample sizes, traditional SPC charts often produce tight control limits with most of the data points appearing outside the control limits. This can give a false impression of common and special-cause variation, and potentially misguide the user into taking the wrong actions. Given the growing availability of large datasets from routinely collected databases in healthcare, there is a need to present a review of this problem (which arises because traditional attribute charts only consider within-subgroup variation) and its solutions (which consider within and between-subgroup variation), which involve the use of the well-established measurements chart and the more recently developed attribute charts based on Laney's innovative approach. We close by making some suggestions for practice.
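
    Laney's approach mentioned above can be sketched: convert each subgroup proportion to a z-score, estimate between-subgroup variation from the moving range of those z-scores, and widen the classical p-chart limits by that factor. A minimal sketch with made-up monthly counts (all numbers are illustrative):

    import numpy as np

    # Monthly events and very large denominators (illustrative data).
    events = np.array([480, 522, 497, 610, 545, 501, 533, 588, 512, 560])
    n = np.array([52_000, 55_000, 51_000, 60_000, 56_000,
                  50_000, 54_000, 59_000, 52_000, 57_000])

    p = events / n
    pbar = events.sum() / n.sum()
    sigma_p = np.sqrt(pbar * (1 - pbar) / n)     # within-subgroup (binomial) sigma
    z = (p - pbar) / sigma_p
    sigma_z = np.abs(np.diff(z)).mean() / 1.128  # between-subgroup factor (Laney)

    ucl = pbar + 3 * sigma_p * sigma_z           # p' chart limits
    lcl = pbar - 3 * sigma_p * sigma_z
    print(pbar, sigma_z)                         # sigma_z >> 1 flags overdispersion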

  3. Protein homology model refinement by large-scale energy optimization.

    Science.gov (United States)

    Park, Hahnbeom; Ovchinnikov, Sergey; Kim, David E; DiMaio, Frank; Baker, David

    2018-03-20

    Proteins fold to their lowest free-energy structures, and hence the most straightforward way to increase the accuracy of a partially incorrect protein structure model is to search for the lowest-energy nearby structure. This direct approach has met with little success for two reasons: first, energy function inaccuracies can lead to false energy minima, resulting in model degradation rather than improvement; and second, even with an accurate energy function, the search problem is formidable because the energy only drops considerably in the immediate vicinity of the global minimum, and there are a very large number of degrees of freedom. Here we describe a large-scale energy optimization-based refinement method that incorporates advances in both search and energy function accuracy that can substantially improve the accuracy of low-resolution homology models. The method refined low-resolution homology models into correct folds for 50 of 84 diverse protein families and generated improved models in recent blind structure prediction experiments. Analyses of the basis for these improvements reveal contributions from both the improvements in conformational sampling techniques and the energy function.

  4. Approximate Model Checking of PCTL Involving Unbounded Path Properties

    Science.gov (United States)

    Basu, Samik; Ghosh, Arka P.; He, Ru

    We study the problem of applying statistical methods for approximate model checking of probabilistic systems against properties encoded as PCTL formulas. Such approximate methods have been proposed primarily to deal with the state-space explosion that makes exact model checking by numerical methods practically infeasible for large systems. However, the existing statistical methods either consider a restricted subset of PCTL, specifically the subset that can only express bounded until properties, or rely on a user-specified finite bound on the sample path length. We propose a new method that does not have such restrictions and can be effectively used to reason about unbounded until properties. We approximate probabilistic characteristics of an unbounded until property by those of a bounded until property for a suitably chosen value of the bound. In essence, our method is a two-phase process: (a) the first phase is concerned with identifying the bound k0; (b) the second phase computes the probability of satisfying the k0-bounded until property as an estimate for the probability of satisfying the corresponding unbounded until property. In both phases, it is sufficient to verify bounded until properties, which can be effectively done using existing statistical techniques. We prove the correctness of our technique and present its prototype implementations. We empirically show the practical applicability of our method by considering different case studies including a simple infinite-state model, and large finite-state models such as the IPv4 zeroconf protocol and the dining philosophers protocol modeled as Discrete Time Markov Chains.
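
    The two-phase scheme can be illustrated on a toy discrete-time Markov chain: estimate the k-bounded until probability by sampling finite path prefixes, growing k until the estimate stabilises. A minimal sketch (the chain, labels, and sample sizes are illustrative):

    import numpy as np

    rng = np.random.default_rng(3)
    # Toy DTMC: states 0 (safe), 1 (safe), 2 (goal, absorbing).
    P = np.array([[0.90, 0.08, 0.02],
                  [0.20, 0.70, 0.10],
                  [0.00, 0.00, 1.00]])

    def estimate_bounded_until(k, n_samples=20_000, start=0, goal=2):
        """Monte Carlo estimate of P(reach goal within k steps), i.e. a
        k-bounded until property; it serves as the estimate for the
        unbounded property once k is large enough to stabilise."""
        hits = 0
        for _ in range(n_samples):
            s = start
            for _ in range(k):
                s = rng.choice(3, p=P[s])
                if s == goal:
                    hits += 1
                    break
        return hits / n_samples

    for k in (5, 20, 80):    # phase (a): grow k until the estimate stabilises
        print(k, estimate_bounded_until(k))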

  5. Using Quality Circles to Enhance Student Involvement and Course Quality in a Large Undergraduate Food Science and Human Nutrition Course

    Science.gov (United States)

    Schmidt, S. J.; Parmer, M. S.; Bohn, D. M.

    2005-01-01

    Large undergraduate classes are a challenge to manage, to engage, and to assess, yet such formidable classes can flourish when student participation is facilitated. One method of generating authentic student involvement is implementation of quality circles by means of a Student Feedback Committee (SFC), which is a volunteer problem-solving and…

  6. Coarse-Grained Model for Water Involving a Virtual Site.

    Science.gov (United States)

    Deng, Mingsen; Shen, Hujun

    2016-02-04

    In this work, we propose a new coarse-grained (CG) model for water by combining the features of two popular CG water models (the BMW and MARTINI models) and by adopting a topology similar to that of the TIP4P water model. In this CG model, a CG unit, representing four real water molecules, consists of a virtual site, two positively charged particles, and a van der Waals (vdW) interaction center. A distance constraint is applied to the bonds formed between the vdW interaction center and the positively charged particles. The virtual site, which carries a negative charge, is determined by the locations of the two positively charged particles and the vdW interaction center. For the new CG model of water, we coined the name "CAVS" (charge is attached to a virtual site) due to the involvement of the virtual site. After being tested in molecular dynamics (MD) simulations of bulk water at various time steps, under different temperatures and at different salt (NaCl) concentrations, the CAVS model offers encouraging predictions for some bulk properties of water (such as density, dielectric constant, etc.) when compared to experimental ones.

  7. The Hamburg large scale geostrophic ocean general circulation model. Cycle 1

    International Nuclear Information System (INIS)

    Maier-Reimer, E.; Mikolajewicz, U.

    1992-02-01

    The rationale for the Large Scale Geostrophic ocean circulation model (LSG-OGCM) is based on the observations that for a large scale ocean circulation model designed for climate studies, the relevant characteristic spatial scales are large compared with the internal Rossby radius throughout most of the ocean, while the characteristic time scales are large compared with the periods of gravity modes and barotropic Rossby wave modes. In the present version of the model, the fast modes have been filtered out by a conventional technique of integrating the full primitive equations, including all terms except the nonlinear advection of momentum, by an implicit time integration method. The free surface is also treated prognostically, without invoking a rigid lid approximation. The numerical scheme is unconditionally stable and has the additional advantage that it can be applied uniformly to the entire globe, including the equatorial and coastal current regions. (orig.)
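
    The unconditional stability claimed for the implicit integration can be illustrated on a single stiff mode standing in for a fast gravity-wave mode; a minimal sketch with illustrative coefficients:

    import numpy as np

    # Stiff linear test problem y' = -lam * y, mimicking a fast filtered mode.
    lam, dt, n = 50.0, 0.1, 20        # explicit stability needs dt < 2/lam = 0.04
    y_exp, y_imp = 1.0, 1.0
    for _ in range(n):
        y_exp = y_exp * (1.0 - lam * dt)   # explicit Euler: diverges at this dt
        y_imp = y_imp / (1.0 + lam * dt)   # implicit Euler: unconditionally stable
    print(y_exp, y_imp)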

  8. Modelling and measurements of wakes in large wind farms

    DEFF Research Database (Denmark)

    Barthelmie, Rebecca Jane; Rathmann, Ole; Frandsen, Sten Tronæs

    2007-01-01

    The paper presents research conducted in the Flow workpackage of the EU funded UPWIND project which focuses on improving models of flow within and downwind of large wind farms in complex terrain and offshore. The main activity is modelling the behaviour of wind turbine wakes in order to improve...

  9. Modeling of the Global Water Cycle - Analytical Models

    Science.gov (United States)

    Yongqiang Liu; Roni Avissar

    2005-01-01

    Both numerical and analytical models of coupled atmosphere and its underlying ground components (land, ocean, ice) are useful tools for modeling the global and regional water cycle. Unlike complex three-dimensional climate models, which need very large computing resources and involve a large number of complicated interactions often difficult to interpret, analytical...

  10. Subgrid-scale models for large-eddy simulation of rotating turbulent channel flows

    Science.gov (United States)

    Silvis, Maurits H.; Bae, Hyunji Jane; Trias, F. Xavier; Abkar, Mahdi; Moin, Parviz; Verstappen, Roel

    2017-11-01

    We aim to design subgrid-scale models for large-eddy simulation of rotating turbulent flows. Rotating turbulent flows form a challenging test case for large-eddy simulation due to the presence of the Coriolis force. The Coriolis force conserves the total kinetic energy while transporting it from small to large scales of motion, leading to the formation of large-scale anisotropic flow structures. The Coriolis force may also cause partial flow laminarization and the occurrence of turbulent bursts. Many subgrid-scale models for large-eddy simulation are, however, primarily designed to parametrize the dissipative nature of turbulent flows, ignoring the specific characteristics of transport processes. We, therefore, propose a new subgrid-scale model that, in addition to the usual dissipative eddy viscosity term, contains a nondissipative nonlinear model term designed to capture transport processes, such as those due to rotation. We show that the addition of this nonlinear model term leads to improved predictions of the energy spectra of rotating homogeneous isotropic turbulence as well as of the Reynolds stress anisotropy in spanwise-rotating plane-channel flows. This work is financed by the Netherlands Organisation for Scientific Research (NWO) under Project Number 613.001.212.
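
    The record's new nondissipative term is not reproduced here, but the baseline it augments, the dissipative eddy-viscosity term, can be sketched in its common Smagorinsky form; the constant, filter width, and velocity gradient below are illustrative:

    import numpy as np

    def smagorinsky_nu_t(grad_u, delta, cs=0.17):
        """Eddy viscosity nu_t = (cs*delta)^2 * |S| with |S| = sqrt(2 S_ij S_ij),
        the usual dissipative part of a subgrid-scale model; grad_u is the 3x3
        resolved velocity-gradient tensor at a point."""
        S = 0.5 * (grad_u + grad_u.T)          # resolved strain-rate tensor
        return (cs * delta) ** 2 * np.sqrt(2.0 * np.sum(S * S))

    grad_u = np.array([[0.0, 1.0, 0.0],        # illustrative shear, du/dy = 1
                       [0.0, 0.0, 0.0],
                       [0.0, 0.0, 0.0]])
    print(smagorinsky_nu_t(grad_u, delta=0.01))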

  11. Large transverse momentum processes in a non-scaling parton model

    International Nuclear Information System (INIS)

    Stirling, W.J.

    1977-01-01

    The production of large transverse momentum mesons in hadronic collisions by the quark fusion mechanism is discussed in a parton model which gives logarithmic corrections to Bjorken scaling. It is found that the moments of the large transverse momentum structure function exhibit a simple scale breaking behaviour similar to the behaviour of the Drell-Yan and deep inelastic structure functions of the model. An estimate of corresponding experimental consequences is made and the extent to which analogous results can be expected in an asymptotically free gauge theory is discussed. A simple set of rules is presented for incorporating the logarithmic corrections to scaling into all covariant parton model calculations. (Auth.)

  12. An International Collaborative Study of Outcome and Prognostic Factors in Patients with Secondary CNS Involvement By Diffuse Large B-Cell Lymphoma

    DEFF Research Database (Denmark)

    El-Galaly, Tarec Christoffer; Cheah, Chan Yoon; Bendtsen, Mette Dahl

    2016-01-01

    Background: Secondary CNS involvement (SCNS) is a detrimental complication seen in ~5% of patients with diffuse large B-cell lymphoma (DLBCL) treated with modern immunochemotherapy. Data from older series report short survival following SCNS, typically <6 months. However, data in patients ... ) determine prognostic factors after SCNS. Patients and methods: We performed a retrospective study of patients diagnosed with SCNS during or after frontline immunochemotherapy (R-CHOP or equivalently effective regimens). SCNS was defined as new involvement of the CNS (parenchymal, leptomeningeal, and/or eye ...

  13. Nonlinear continuum mechanics and large inelastic deformations

    CERN Document Server

    Dimitrienko, Yuriy I

    2010-01-01

    This book provides a rigorous axiomatic approach to continuum mechanics under large deformation. In addition to the classical nonlinear continuum mechanics - kinematics, fundamental laws, the theory of functions having jump discontinuities across singular surfaces, etc. - the book presents the theory of co-rotational derivatives, dynamic deformation compatibility equations, and the principles of material indifference and symmetry, all in systematized form. The focus of the book is a new approach to the formulation of the constitutive equations for elastic and inelastic continua under large deformation. This new approach is based on using energetic and quasi-energetic couples of stress and deformation tensors. This approach leads to a unified treatment of large, anisotropic elastic, viscoelastic, and plastic deformations. The author analyses classical problems, including some involving nonlinear wave propagation, using different models for continua under large deformation, and shows how different models lead t...

  14. A stochastic large deformation model for computational anatomy

    DEFF Research Database (Denmark)

    Arnaudon, Alexis; Holm, Darryl D.; Pai, Akshay Sadananda Uppinakudru

    2017-01-01

    In the study of shapes of human organs using computational anatomy, variations are found to arise from inter-subject anatomical differences, disease-specific effects, and measurement noise. This paper introduces a stochastic model for incorporating random variations into the Large Deformation...

  15. Modeling the impact of large-scale energy conversion systems on global climate

    International Nuclear Information System (INIS)

    Williams, J.

    There are three energy options which could satisfy a projected energy requirement of about 30 TW: the solar, nuclear and (to a lesser extent) coal options. Climate models can be used to assess the impact of large-scale deployment of these options. The impact of waste heat has been assessed using energy balance models and general circulation models (GCMs). Results suggest that the impacts are significant when the heat input is very high, and studies of more realistic scenarios are required. Energy balance models, radiative-convective models and a GCM have been used to study the impact of doubling the atmospheric CO2 concentration. State-of-the-art models estimate a surface temperature increase of 1.5-3.0 °C with large amplification near the poles, but much uncertainty remains. Very few model studies have been made of the impact of particles on global climate; more information on the characteristics of particle input is required. The impact of large-scale deployment of solar energy conversion systems has received little attention, but model studies suggest that large-scale changes in surface characteristics associated with such systems (surface heat balance, roughness and hydrological characteristics, and ocean surface temperature) could have significant global climatic effects. (Auth.)

  16. Bayesian hierarchical model for large-scale covariance matrix estimation.

    Science.gov (United States)

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. Traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
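
    The record does not spell out the hierarchical prior, so the sketch below uses a well-known non-Bayesian relative, Ledoit-Wolf shrinkage, to illustrate the same remedy for overfitting a large covariance matrix estimated from few samples; sizes and data are illustrative:

    import numpy as np
    from sklearn.covariance import LedoitWolf

    rng = np.random.default_rng(4)
    n, p = 50, 200                      # few samples, many variables (p >> n)
    X = rng.standard_normal((n, p))     # illustrative OMICS-like data matrix

    sample_cov = np.cov(X, rowvar=False)    # rank-deficient, overfit estimate
    lw = LedoitWolf().fit(X)                # shrinks toward a scaled identity
    print(np.linalg.matrix_rank(sample_cov), lw.shrinkage_)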

  17. Modelling and measurements of wakes in large wind farms

    International Nuclear Information System (INIS)

    Barthelmie, R J; Rathmann, O; Frandsen, S T; Hansen, K S; Politis, E; Prospathopoulos, J; Rados, K; Cabezon, D; Schlez, W; Phillips, J; Neubert, A; Schepers, J G; Pijl, S P van der

    2007-01-01

    The paper presents research conducted in the Flow workpackage of the EU funded UPWIND project which focuses on improving models of flow within and downwind of large wind farms in complex terrain and offshore. The main activity is modelling the behaviour of wind turbine wakes in order to improve power output predictions

  18. Rotation sequence to report humerothoracic kinematics during 3D motion involving large horizontal component: application to the tennis forehand drive.

    Science.gov (United States)

    Creveaux, Thomas; Sevrez, Violaine; Dumas, Raphaël; Chèze, Laurence; Rogowski, Isabelle

    2018-03-01

    The aim of this study was to examine the respective aptitudes of three rotation sequences (Yt-Xf'-Yh'', Zt-Xf'-Yh'', and Xt-Zf'-Yh'') to effectively describe the orientation of the humerus relative to the thorax during a movement involving a large horizontal abduction/adduction component: the tennis forehand drive. An optoelectronic system was used to record the movements of eight elite male players, each performing ten forehand drives. The occurrences of gimbal lock, phase-angle discontinuity and incoherency in the time course of the three angles defining humerothoracic rotation were examined for each rotation sequence. Our results demonstrated that no single sequence effectively describes humerothoracic motion without discontinuities throughout the forehand motion. The humerothoracic joint angles can nevertheless be described without singularities when considering the backswing/forward-swing and the follow-through phases separately. Our findings stress that the choice of sequence may have implications for the reporting and interpretation of 3D joint kinematics during a large shoulder range of motion. Consequently, the use of Euler/Cardan angles to represent the 3D orientation of the humerothoracic joint in sport tasks requires evaluating the rotation sequence for singularity occurrence before analysing the kinematic data, especially when the task involves a large shoulder range of motion in the horizontal plane.
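
    A minimal SciPy sketch of the comparison (axis letters only; the thorax/floating/humerus suffixes of the paper are notational, and the test orientations are illustrative, not the study's data), showing that the same orientation yields different angle triplets under the three sequences and where a symmetric sequence such as YXY degenerates:

    import numpy as np
    from scipy.spatial.transform import Rotation as R

    # One humerothoracic-like orientation, decomposed with three candidate
    # intrinsic sequences.
    rot = R.from_euler("ZXY", [80.0, 10.0, 30.0], degrees=True)
    for seq in ("YXY", "ZXY", "XZY"):
        print(seq, np.round(rot.as_euler(seq, degrees=True), 1))

    # Gimbal-lock check: a YXY sequence degenerates when its middle angle
    # approaches 0 (arm near the thorax Y axis); the Cardan sequences
    # degenerate at +/-90 deg of their middle angle instead.
    near_lock = R.from_euler("YXY", [40.0, 0.5, 25.0], degrees=True)
    print(near_lock.as_euler("YXY", degrees=True))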

  19. Large-eddy simulation of the temporal mixing layer using the Clark model

    NARCIS (Netherlands)

    Vreman, A.W.; Geurts, B.J.; Kuerten, J.G.M.

    1996-01-01

    The Clark model for the turbulent stress tensor in large-eddy simulation is investigated from a theoretical and computational point of view. In order to be applicable to compressible turbulent flows, the Clark model has been reformulated. Actual large-eddy simulation of a weakly compressible,

  20. Global Bedload Flux Modeling and Analysis in Large Rivers

    Science.gov (United States)

    Islam, M. T.; Cohen, S.; Syvitski, J. P.

    2017-12-01

    Proper sediment transport quantification has long been an area of interest for both scientists and engineers in the fields of geomorphology and the management of rivers and coastal waters. Bedload flux is important for monitoring water quality and for the sustainable development of coastal and marine bioservices. Bedload measurements, especially for large rivers, are extremely scarce across time, and many rivers have never been monitored. The paucity of bedload measurements is particularly acute in developing countries, where changes in sediment yields are high. It is the result of (1) the nature of the problem (large spatial and temporal uncertainties) and (2) field costs, including the time-consuming nature of the measurement procedures (repeated bedform migration tracking, bedload samplers). Here we present a first-of-its-kind methodology for calculating bedload in large global rivers (basins >1,000 km²). Evaluation of model skill is based on 113 bedload measurements. The model predictions are compared with an empirical model developed from the observational dataset in an attempt to evaluate the differences between a physically based numerical model and a lumped relationship between bedload flux and fluvial and basin parameters (e.g., discharge, drainage area, lithology). The initial success of the study opens up various applications in global fluvial geomorphology (e.g., the relationship between suspended sediment (wash load) and bedload). Simulated results with known uncertainties offer a new research product and a valuable resource for the whole scientific community.

  1. A large deformation viscoelastic model for double-network hydrogels

    Science.gov (United States)

    Mao, Yunwei; Lin, Shaoting; Zhao, Xuanhe; Anand, Lallit

    2017-03-01

    We present a large deformation viscoelasticity model for recently synthesized double network hydrogels which consist of a covalently-crosslinked polyacrylamide network with long chains, and an ionically-crosslinked alginate network with short chains. Such double-network gels are highly stretchable and at the same time tough, because when stretched the crosslinks in the ionically-crosslinked alginate network rupture which results in distributed internal microdamage which dissipates a substantial amount of energy, while the configurational entropy of the covalently-crosslinked polyacrylamide network allows the gel to return to its original configuration after deformation. In addition to the large hysteresis during loading and unloading, these double network hydrogels also exhibit a substantial rate-sensitive response during loading, but exhibit almost no rate-sensitivity during unloading. These features of large hysteresis and asymmetric rate-sensitivity are quite different from the response of conventional hydrogels. We limit our attention to modeling the complex viscoelastic response of such hydrogels under isothermal conditions. Our model is restricted in the sense that we have limited our attention to conditions under which one might neglect any diffusion of the water in the hydrogel - as might occur when the gel has a uniform initial value of the concentration of water, and the mobility of the water molecules in the gel is low relative to the time scale of the mechanical deformation. We also do not attempt to model the final fracture of such double-network hydrogels.

  2. Multiresolution comparison of precipitation datasets for large-scale models

    Science.gov (United States)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparisons between gridded precipitation products, along with ground observations, provide another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin-plate spline smoothing algorithm (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing spatial coherence. In addition to the product comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.

  3. Statistical aspects of carbon fiber risk assessment modeling. [fire accidents involving aircraft

    Science.gov (United States)

    Gross, D.; Miller, D. R.; Soland, R. M.

    1980-01-01

    The probabilistic and statistical aspects of the carbon fiber risk assessment modeling of fire accidents involving commercial aircraft are examined. Three major sources of uncertainty in the modeling effort are identified. These are: (1) imprecise knowledge in establishing the model; (2) parameter estimation; and (3) Monte Carlo sampling error. All three sources of uncertainty are treated and statistical procedures are utilized and/or developed to control them wherever possible.
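
    The third source of uncertainty, Monte Carlo sampling error, is the most directly quantifiable; a minimal sketch on a made-up loss model, reporting the estimate together with its sampling-error confidence interval:

    import numpy as np

    rng = np.random.default_rng(5)

    # Illustrative risk model: loss occurs rarely, severity is lognormal.
    def simulate_losses(n):
        hit = rng.random(n) < 0.02
        return np.where(hit, rng.lognormal(mean=1.0, sigma=0.8, size=n), 0.0)

    x = simulate_losses(200_000)
    mean = x.mean()
    se = x.std(ddof=1) / np.sqrt(len(x))      # Monte Carlo sampling error
    print(f"E[loss] = {mean:.4f} +/- {1.96 * se:.4f} (95% CI)")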

  4. Use of models in large-area forest surveys: comparing model-assisted, model-based and hybrid estimation

    Science.gov (United States)

    Goran Stahl; Svetlana Saarela; Sebastian Schnell; Soren Holm; Johannes Breidenbach; Sean P. Healey; Paul L. Patterson; Steen Magnussen; Erik Naesset; Ronald E. McRoberts; Timothy G. Gregoire

    2016-01-01

    This paper focuses on the use of models for increasing the precision of estimators in large-area forest surveys. It is motivated by the increasing availability of remotely sensed data, which facilitates the development of models predicting the variables of interest in forest surveys. We present, review and compare three different estimation frameworks where...
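
    Of the compared frameworks, the model-assisted one has a particularly compact core: predict the variable of interest for every population unit from remotely sensed auxiliaries, then correct the prediction mean with the mean residual on the field sample. A minimal simulated sketch under simple random sampling (all data and names are illustrative):

    import numpy as np

    rng = np.random.default_rng(6)
    N, n = 100_000, 500                       # population (pixels), field sample
    aux = rng.gamma(2.0, 10.0, N)             # remotely sensed predictor, all units
    y_pop = 3.0 + 1.5 * aux + rng.normal(0, 8, N)  # true values (unknown off-sample)

    s = rng.choice(N, n, replace=False)       # simple random sample of field plots
    coef = np.polyfit(aux[s], y_pop[s], 1)    # model fitted to the sample
    pred = np.polyval(coef, aux)              # predictions for every unit

    # Model-assisted (difference/GREG-type) estimator of the population mean:
    # mean of model predictions, corrected by the mean sampled residual.
    mu_ma = pred.mean() + (y_pop[s] - pred[s]).mean()
    print(mu_ma, y_pop.mean())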

  5. Introducing an Intervention Model for Fostering Affective Involvement with Persons Who Are Congenitally Deafblind

    NARCIS (Netherlands)

    Martens, M.A.W.; Janssen, M.J.; Ruijssenaars, A.J.J.M.; Riksen-Walraven, J.M.A.

    2014-01-01

    The article presented here introduces the Intervention Model for Affective Involvement (IMAI), which was designed to train staff members (for example, teachers, caregivers, support workers) to foster affective involvement during interaction and communication with persons who have congenital deafblindness.

  6. Compensatory hypertrophy of the teres minor muscle after large rotator cuff tear model in adult male rat.

    Science.gov (United States)

    Ichinose, Tsuyoshi; Yamamoto, Atsushi; Kobayashi, Tsutomu; Shitara, Hitoshi; Shimoyama, Daisuke; Iizuka, Haku; Koibuchi, Noriyuki; Takagishi, Kenji

    2016-02-01

    Rotator cuff tear (RCT) is a common musculoskeletal disorder in the elderly. The large RCT is often irreparable due to the retraction and degeneration of the rotator cuff muscle. The integrity of the teres minor (TM) muscle is thought to affect postoperative functional recovery in some surgical treatments. Hypertrophy of the TM is found in some patients with large RCTs; however, the process underlying this hypertrophy is still unclear. The objective of this study was to determine if compensatory hypertrophy of the TM muscle occurs in a large RCT rat model. Twelve Wistar rats underwent transection of the suprascapular nerve and the supraspinatus and infraspinatus tendons in the left shoulder. The rats were euthanized 4 weeks after the surgery, and the cuff muscles were collected and weighed. The cross-sectional area and the involvement of Akt/mammalian target of rapamycin (mTOR) signaling were examined in the remaining TM muscle. The weight and cross-sectional area of the TM muscle was higher in the operated-on side than in the control side. The phosphorylated Akt/Akt protein ratio was not significantly different between these sides. The phosphorylated-mTOR/mTOR protein ratio was significantly higher on the operated-on side. Transection of the suprascapular nerve and the supraspinatus and infraspinatus tendons activates mTOR signaling in the TM muscle, which results in muscle hypertrophy. The Akt-signaling pathway may not be involved in this process. Nevertheless, activation of mTOR signaling in the TM muscle after RCT may be an effective therapeutic target of a large RCT. Copyright © 2016 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  7. CP violation in beauty decays the standard model paradigm of large effects

    CERN Document Server

    Bigi, Ikaros I.Y.

    1994-01-01

    The Standard Model contains a natural source for CP asymmetries in weak decays, which is described by the KM mechanism. Beyond \epsilon_K it generates only elusive manifestations of CP violation in light-quark systems. On the other hand, it naturally leads to large asymmetries in certain non-leptonic beauty decays. In particular, when B^0-\bar B^0 oscillations are involved, theoretical uncertainties in the hadronic matrix elements either drop out or can be controlled, and one predicts asymmetries well in excess of 10% with high parametric reliability. It is briefly described how the KM triangle can be determined experimentally and then subjected to sensitive consistency tests. Any failure would constitute indirect, but unequivocal, evidence for the intervention of New Physics; some examples are sketched. Any outcome of a comprehensive program of CP studies in B decays -- short of technical failure -- will provide us with fundamental and unique insights into nature's design.

  8. Large-scale, high-performance and cloud-enabled multi-model analytics experiments in the context of the Earth System Grid Federation

    Science.gov (United States)

    Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.

    2017-12-01

    Increasing model resolution in the development of comprehensive Earth System Models is rapidly leading to very large volumes of climate simulation output that pose significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2 PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data from multiple climate model simulations as well as scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large-scale international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one more hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall it provides a new "tool" for climate scientists to run multi-model experiments. At the

  9. Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam’s Window*

    Science.gov (United States)

    Onorante, Luca; Raftery, Adrian E.

    2015-01-01

    Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam’s window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods. PMID:26917859
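
    The core DMA recursion can be sketched: a forgetting-factor prior update followed by a Bayesian posterior update with each model's predictive density; dynamic Occam's window would additionally drop low-weight models from the candidate set at each step. A toy sketch (models, data, and the forgetting factor are illustrative):

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(7)
    T, K, alpha = 200, 3, 0.95            # time steps, models, forgetting factor
    y = rng.standard_normal(T)
    # Three fixed candidate 'models' for illustration: different forecast means.
    means = np.array([-0.5, 0.0, 0.5])

    w = np.full(K, 1.0 / K)               # initial model probabilities
    for t in range(T):
        w = w**alpha                      # forgetting: flatten toward uniform
        w /= w.sum()
        like = norm.pdf(y[t], loc=means, scale=1.0)
        w = w * like                      # update with predictive density
        w /= w.sum()
    print(np.round(w, 3))                 # model 2 (mean 0) should dominate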

  10. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    Science.gov (United States)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large-scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large-scale data for visibility analyses at the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on the classic Boolean visibility that is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. The case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and the global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large-scale models are an appropriate data source for visibility analyses at the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
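
    Boolean visibility on a surface model reduces to a running-horizon test along each ray from the observer; a minimal one-dimensional profile sketch (spacing, elevations, and observer height are illustrative):

    import numpy as np

    def visible_along_profile(elev, observer_height=1.7):
        """Boolean line-of-sight along a terrain profile sampled at unit
        spacing: a cell is visible if its elevation angle from the observer
        (at index 0) is not below the running maximum of all nearer cells."""
        eye = elev[0] + observer_height
        dist = np.arange(1, len(elev))
        angles = np.arctan2(elev[1:] - eye, dist)
        horizon = np.maximum.accumulate(angles)
        vis = np.empty(len(elev), dtype=bool)
        vis[0] = True
        vis[1:] = angles >= horizon       # ties: the cell sets the horizon
        return vis

    profile = np.array([100.0, 99, 98, 103, 101, 99, 97, 105, 102])
    print(visible_along_profile(profile))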

  11. Modeling and Forecasting Large Realized Covariance Matrices and Portfolio Choice

    NARCIS (Netherlands)

    Callot, Laurent A.F.; Kock, Anders B.; Medeiros, Marcelo C.

    2017-01-01

    We consider modeling and forecasting large realized covariance matrices by penalized vector autoregressive models. We consider Lasso-type estimators to reduce the dimensionality and provide strong theoretical guarantees on the forecast capability of our procedure. We show that we can forecast
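
    The penalized VAR idea can be sketched at toy scale: one Lasso regression per equation of the vectorized realized covariance on its own lag; a minimal example on a simulated persistent series (dimensions, penalty, and data are illustrative):

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(8)
    T, p = 300, 10                     # days, entries of vech(realized covariance)
    # Illustrative stand-in for vech(RC_t): a persistent multivariate series.
    X = np.zeros((T, p))
    for t in range(1, T):
        X[t] = 0.6 * X[t - 1] + 0.1 * rng.standard_normal(p)

    # Penalized VAR(1): one Lasso regression per equation on lagged values.
    Y, Z = X[1:], X[:-1]
    B = np.vstack([Lasso(alpha=0.01).fit(Z, Y[:, j]).coef_ for j in range(p)])
    forecast = B @ X[-1]               # one-step-ahead forecast of vech(RC)
    print((np.abs(B) > 1e-8).mean())   # fraction of nonzero coefficients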

  12. Large animal models for vaccine development and testing.

    Science.gov (United States)

    Gerdts, Volker; Wilson, Heather L; Meurens, Francois; van Drunen Littel-van den Hurk, Sylvia; Wilson, Don; Walker, Stewart; Wheler, Colette; Townsend, Hugh; Potter, Andrew A

    2015-01-01

    The development of human vaccines continues to rely on the use of animals for research. Regulatory authorities require novel vaccine candidates to undergo preclinical assessment in animal models before being permitted to enter the clinical phase in human subjects. Substantial progress has been made in recent years in reducing and replacing the number of animals used for preclinical vaccine research through the use of bioinformatics and computational biology to design new vaccine candidates. However, the ultimate goal of a new vaccine is to instruct the immune system to elicit an effective immune response against the pathogen of interest, and no alternatives to live animal use currently exist for evaluation of this response. Studies identifying the mechanisms of immune protection; determining the optimal route and formulation of vaccines; establishing the duration and onset of immunity, as well as the safety and efficacy of new vaccines, must be performed in a living system. Importantly, no single animal model provides all the information required for advancing a new vaccine through the preclinical stage, and research over the last two decades has highlighted that large animals more accurately predict vaccine outcome in humans than do other models. Here we review the advantages and disadvantages of large animal models for human vaccine development and demonstrate that much of the success in bringing a new vaccine to market depends on choosing the most appropriate animal model for preclinical testing. © The Author 2015. Published by Oxford University Press on behalf of the Institute for Laboratory Animal Research. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  13. Concordant bone marrow involvement of diffuse large B-cell lymphoma represents a distinct clinical and biological entity in the era of immunotherapy

    DEFF Research Database (Denmark)

    Yao, Zhilei; Deng, Lijuan; Xu-Monette, Z Y

    2018-01-01

    In diffuse large B-cell lymphoma (DLBCL), the clinical and biological significance of concordant and discordant bone marrow (BM) involvement has not been well investigated. We evaluated 712 de novo DLBCL patients with front-line rituximab-containing treatment, including 263 patients with positiv...

  14. Wind and Photovoltaic Large-Scale Regional Models for hourly production evaluation

    DEFF Research Database (Denmark)

    Marinelli, Mattia; Maule, Petr; Hahmann, Andrea N.

    2015-01-01

    This work presents two large-scale regional models used for the evaluation of normalized power output from wind turbines and photovoltaic power plants on a European regional scale. The models give an estimate of renewable production on a regional scale with 1 h resolution, starting from a mesoscale ... of the transmission system, especially regarding the cross-border power flows. The tuning of these regional models is done using historical meteorological data acquired on a per-country basis and using publicly available data on installed capacity.

  15. Modeling and control of a large nuclear reactor. A three-time-scale approach

    Energy Technology Data Exchange (ETDEWEB)

    Shimjith, S.R. [Indian Institute of Technology Bombay, Mumbai (India); Bhabha Atomic Research Centre, Mumbai (India); Tiwari, A.P. [Bhabha Atomic Research Centre, Mumbai (India); Bandyopadhyay, B. [Indian Institute of Technology Bombay, Mumbai (India). IDP in Systems and Control Engineering

    2013-07-01

    Recent research on modeling and control of a large nuclear reactor, presenting a three-time-scale approach, written by leading experts in the field. Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady-state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of several complex dynamic phenomena existing in a reactor. Quite often, the models developed are of prohibitively large order, non-linear, and of complex structure not readily amenable to control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting the direct application of standard control techniques. This monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller-order model in standard state-space form, thus overcoming these difficulties. It further brings in innovative methods of controller design for systems exhibiting the multi-time-scale property, with emphasis on three-time-scale systems.

  16. Medical staff involvement in nursing homes: development of a conceptual model and research agenda.

    Science.gov (United States)

    Shield, Renée; Rosenthal, Marsha; Wetle, Terrie; Tyler, Denise; Clark, Melissa; Intrator, Orna

    2014-02-01

    Medical staff (physicians, nurse practitioners, physicians' assistants) involvement in nursing homes (NHs) is limited by professional guidelines, government policies, regulations, and reimbursements, creating bureaucratic burden. The conceptual NH Medical Staff Involvement Model, based on our mixed-methods research, applies the Donabedian structure-process-outcomes framework to the NH, identifying measures for a coordinated research agenda. Quantitative surveys and qualitative interviews were conducted with medical directors, administrators, directors of nursing, other experts, residents, and family members, and Minimum Data Set, Online Certification and Reporting System, and Medicare Part B claims data related to NH structure, process, and outcomes were analyzed. NH control of medical staff, or structure, affects medical staff involvement in care processes and is associated with better outcomes (e.g., symptom management, appropriate transitions, satisfaction). The model identifies measures clarifying the impact of NH medical staff involvement on care processes and resident outcomes and has strong potential to inform regulatory policies.

  17. Precise MRI-based stereotaxic surgery in large animal models

    DEFF Research Database (Denmark)

    Glud, Andreas Nørgaard; Bech, Johannes; Tvilling, Laura

    BACKGROUND: Stereotaxic neurosurgery in large animals is used widely in different sophisticated models, where precision is becoming more crucial as the desired anatomical target regions become smaller. Individually calculated coordinates are necessary in large animal models with cortical and subcortical anatomical differences. NEW METHOD: We present a convenient method for making an MRI-visible skull fiducial for 3D MRI-based stereotaxic procedures in larger experimental animals. Plastic screws were filled with either copper-sulphate solution or MRI-visible paste from a commercially available cranial head marker. The screw fiducials were inserted in the animal skulls and T1-weighted MRI was performed, allowing identification of the inserted skull marker. RESULTS: Both types of fiducial markers were clearly visible on the MRIs. This allows high precision in the stereotaxic space. COMPARISON

  18. Large degeneracy of excited hadrons and quark models

    International Nuclear Information System (INIS)

    Bicudo, P.

    2007-01-01

    The pattern of a large approximate degeneracy of the excited hadron spectra (larger than the chiral restoration degeneracy) is present in the recent experimental report of Bugg. Here we try to model this degeneracy with state of the art quark models. We review how the Coulomb Gauge chiral invariant and confining Bethe-Salpeter equation simplifies in the case of very excited quark-antiquark mesons, including angular or radial excitations, to a Salpeter equation with an ultrarelativistic kinetic energy with the spin-independent part of the potential. The resulting meson spectrum is solved, and the excited chiral restoration is recovered, for all mesons with J>0. Applying the ultrarelativistic simplification to a linear equal-time potential, linear Regge trajectories are obtained, for both angular and radial excitations. The spectrum is also compared with the semiclassical Bohr-Sommerfeld quantization relation. However, the excited angular and radial spectra do not coincide exactly. We then search, with the classical Bertrand theorem, for central potentials producing always classical closed orbits with the ultrarelativistic kinetic energy. We find that no such potential exists, and this implies that no exact larger degeneracy can be obtained in our equal-time framework, with a single principal quantum number comparable to the nonrelativistic Coulomb or harmonic oscillator potentials. Nevertheless we find it plausible that the large experimental approximate degeneracy will be modeled in the future by quark models beyond the present state of the art

  19. Large-scale building energy efficiency retrofit: Concept, model and control

    International Nuclear Information System (INIS)

    Wu, Zhou; Wang, Bo; Xia, Xiaohua

    2016-01-01

    BEER (Building energy efficiency retrofit) projects are initiated in many nations and regions over the world. Existing studies of BEER focus on modeling and planning based on one building and a one-year retrofitting period, which cannot be applied to certain large BEER projects with multiple buildings and multi-year retrofits. In this paper, the large-scale BEER problem is defined in a general TBT (time-building-technology) framework, which fits the essential requirements of real-world projects. The large-scale BEER problem is newly studied with a control approach rather than the optimization approach commonly used before. Optimal control is proposed to design the optimal retrofitting strategy in terms of maximal energy savings and maximal NPV (net present value). The designed strategy changes dynamically along the dimensions of time, building and technology. The TBT framework and the optimal control approach are verified in a large BEER project, and results indicate that promising energy and cost savings can be achieved in the general TBT framework. - Highlights: • Energy efficiency retrofit of many buildings is studied. • A TBT (time-building-technology) framework is proposed. • The control system of the large-scale BEER is modeled. • The optimal retrofitting strategy is obtained.

  20. MODELLING OF CARBON MONOXIDE AIR POLLUTION IN LARGE CITIES BY EVALUATION OF SPECTRAL LANDSAT8 IMAGES

    Directory of Open Access Journals (Sweden)

    M. Hamzelo

    2015-12-01

    Full Text Available Air pollution in large cities is a major problem whose resolution and reduction require multiple applications and environmental management. The main sources of this pollution are industrial, urban, and transport activities that release large amounts of contaminants into the air and reduce its quality. Given the variety of pollutants, the high volume of manufacturing, and the local distribution of manufacturing centers, testing and measuring emissions is difficult. Substances such as carbon monoxide, sulfur dioxide, unburned hydrocarbons, and lead compounds cause air pollution, with carbon monoxide the most important. Today, systems for data exchange, processing, analysis, and modeling are important pillars of air quality management and control. In this study, the spatial distribution of carbon monoxide, as the most significant gaseous pollutant, was modeled for Tehran in 11 maps covering a one-year period from the beginning of 2014 until the beginning of 2015, using the spectral signature of carbon monoxide in LANDSAT8 images (chosen for their better spatial resolution in the appropriate spectral bands than weather-satellite sensors), the SAM classification algorithm, and a Geographic Information System (GIS). To evaluate the model, the created maps were compared with the map provided by the Tehran air quality control company. The comparison was made with an error matrix, and the results were examined with four accuracy measures: overall accuracy, producer's accuracy, user's accuracy, and the kappa coefficient. The average accuracy was about 80%, which indicates the suitability of the method and data used for the modeling.

  1. Dynamic Modeling, Optimization, and Advanced Control for Large Scale Biorefineries

    DEFF Research Database (Denmark)

    Prunescu, Remus Mihail

    ...with a complex conversion route. Computational fluid dynamics is used to model transport phenomena in large reactors, capturing tank profiles and delays due to plug flow. This work publishes for the first time demonstration-scale real data for validation, showing that the model library is suitable...

  2. Material-Point Analysis of Large-Strain Problems

    DEFF Research Database (Denmark)

    Andersen, Søren

    The aim of this thesis is to apply and improve the material-point method for modelling of geotechnical problems. One of the geotechnical phenomena that is a subject of active research is the study of landslides. A large amount of research is focused on determining when slopes become unstable. Hence, it is possible to predict if a certain slope is stable using commercial finite element or finite difference software such as PLAXIS, ABAQUS or FLAC. However, the dynamics during a landslide are less explored. The material-point method (MPM) is a novel numerical method aimed at analysing problems involving materials subjected to large strains in a dynamical time–space domain. This thesis explores the material-point method with the specific aim of improving the performance for geotechnical problems. Large-strain geotechnical problems such as landslides pose a major challenge to model numerically. Employing...

  3. A hierarchical causal modeling for large industrial plants supervision

    International Nuclear Information System (INIS)

    Dziopa, P.; Leyval, L.

    1994-01-01

    A supervision system has to analyse the current state of the process and the way it will evolve after a modification of the inputs or a disturbance. It is proposed to base this analysis on a hierarchy of models, which differ in the number of involved variables and the abstraction level used to describe their temporal evolution. In a first step, special attention is paid to building the causal models, starting from the most abstract one. Once the hierarchy of models has been built, the parameters of the most detailed model are estimated. Several models of different abstraction levels can be used for on-line prediction. These methods have been applied to a nuclear reprocessing plant. The abstraction level can be chosen on line by the operator. Moreover, when an abnormal process behaviour is detected, a more detailed model is automatically triggered in order to focus the operator's attention on the suspected subsystem. (authors). 11 refs., 11 figs

  4. Modeling of nonlinear responses for reciprocal transducers involving polarization switching

    DEFF Research Database (Denmark)

    Willatzen, Morten; Wang, Linxiang

    2007-01-01

    Nonlinearities and hysteresis effects in a reciprocal PZT transducer are examined by use of a dynamical mathematical model on the basis of phase-transition theory. In particular, we consider the perovskite piezoelectric ceramic in which the polarization process in the material can be modeled by Landau theory for the first-order phase transformation, in which each polarization state is associated with a minimum of the Landau free-energy function. Nonlinear constitutive laws are obtained by using thermodynamical equilibrium conditions, and hysteretic behavior of the material can be modeled intrinsically. The time-dependent Ginzburg-Landau theory is used in the parameter identification involving hysteresis effects. We use the Chebyshev collocation method in the numerical simulations. The elastic field is assumed to be coupled linearly with other fields, and the nonlinearity is in the E-D coupling...

  5. Comparison of hard scattering models for particle production at large transverse momentum. 2

    International Nuclear Information System (INIS)

    Schiller, A.; Ilgenfritz, E.M.; Kripfganz, J.; Moehring, H.J.; Ranft, G.; Ranft, J.

    1977-01-01

    Single particle distributions of π⁺ and π⁻ at large transverse momentum are analysed using various hard collision models: qq → qq, qq̄ → MM̄, qM → qM. The transverse momentum dependence at θ_cm = 90° is well described in all models except qq̄ → MM̄. This model has problems with the ratios (pp → π⁺+X)/(π±p → π⁰+X). Presently available data on rapidity distributions of pions in π⁻p and p̄p collisions are at rather low transverse momentum (however large x⊥ = 2p⊥/√s), where it is not obvious that hard collision models should dominate. The data, in particular the π⁻/π⁺ asymmetry, are well described by all models except qM → Mq (CIM). At large values of transverse momentum significant differences between the models are predicted. (author)

  6. An improved large signal model of InP HEMTs

    Science.gov (United States)

    Li, Tianhao; Li, Wenjun; Liu, Jun

    2018-05-01

    An improved large-signal model for InP HEMTs is proposed in this paper. The channel current and charge model equations are constructed based on the Angelov model equations. Both the channel current and gate charge model equations are continuous and high-order differentiable, and the proposed gate charge model satisfies charge conservation. To account for the strong leakage-induced barrier reduction effect of InP HEMTs, the Angelov current model equations are improved. The channel current model fits the DC performance of the devices well. A 2 × 25 μm × 70 nm InP HEMT device is used to demonstrate the extraction and validation of the model, and the model predicts the DC I–V, C–V and bias-dependent S-parameters accurately. Project supported by the National Natural Science Foundation of China (No. 61331006).
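
    For orientation, here is a minimal sketch of the basic Angelov (Chalmers) drain-current form that the paper's improved equations build on; all coefficient values are illustrative placeholders, not parameters extracted for this InP device.

```python
# Minimal sketch of the basic Angelov drain-current model:
#   Ids = Ipk*(1 + tanh(psi))*(1 + lam*Vds)*tanh(alpha*Vds),
#   psi = P1*(Vgs - Vpk) + P2*(Vgs - Vpk)^2 + P3*(Vgs - Vpk)^3.
import numpy as np

def angelov_ids(vgs, vds, ipk=0.05, vpk=-0.2, p1=2.0, p2=0.1, p3=0.05,
                alpha=2.5, lam=0.05):
    dv = vgs - vpk
    psi = p1 * dv + p2 * dv**2 + p3 * dv**3      # gate-voltage power series
    return ipk * (1 + np.tanh(psi)) * (1 + lam * vds) * np.tanh(alpha * vds)

vds = np.linspace(0.0, 1.2, 100)
# family of output I-V curves for a few gate voltages
family = [angelov_ids(vgs, vds) for vgs in np.linspace(-0.6, 0.2, 5)]
```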

  7. Large deviations in the presence of cooperativity and slow dynamics

    Science.gov (United States)

    Whitelam, Stephen

    2018-06-01

    We study simple models of intermittency, involving switching between two states, within the dynamical large-deviation formalism. Singularities appear in the formalism when switching is cooperative or when its basic time scale diverges. In the first case the unbiased trajectory distribution undergoes a symmetry breaking, leading to a change in shape of the large-deviation rate function for a particular dynamical observable. In the second case the symmetry of the unbiased trajectory distribution remains unbroken. Comparison of these models suggests that singularities of the dynamical large-deviation formalism can signal the dynamical equivalent of an equilibrium phase transition but do not necessarily do so.
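
    A minimal sketch of the standard machinery behind the dynamical large-deviation formalism, for a plain two-state switching process with assumed rates: the scaled cumulant generating function of the switching activity is the largest eigenvalue of an s-tilted rate matrix, and the rate function follows by numerical Legendre transform. The cooperative and slow-switching variants studied in the paper modify this basic setup.

```python
# Minimal sketch: tilted-generator calculation for a two-state process.
# k01, k10 are illustrative switching rates (assumptions).
import numpy as np

def scgf(s, k01=1.0, k10=0.5):
    """theta(s) for the switching activity: largest eigenvalue of the
    tilted generator (off-diagonal jump rates weighted by exp(-s))."""
    W = np.array([[-k01, k10 * np.exp(-s)],
                  [k01 * np.exp(-s), -k10]])
    return np.linalg.eigvals(W).real.max()

# Rate function I(k) by numerical Legendre transform of theta(s)
s_grid = np.linspace(-2.0, 2.0, 401)
theta = np.array([scgf(s) for s in s_grid])
k_grid = np.linspace(0.1, 1.5, 50)
rate = [max(-s * k - th for s, th in zip(s_grid, theta)) for k in k_grid]
```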

  8. The Cauchy problem for a model of immiscible gas flow with large data

    Energy Technology Data Exchange (ETDEWEB)

    Sande, Hilde

    2008-12-15

    The thesis consists of an introduction and two papers; 1. The solution of the Cauchy problem with large data for a model of a mixture of gases. 2. Front tracking for a model of immiscible gas flow with large data. (AG) refs, figs

  9. A Hybrid Neuro-Fuzzy Model For Integrating Large Earth-Science Datasets

    Science.gov (United States)

    Porwal, A.; Carranza, J.; Hale, M.

    2004-12-01

    A GIS-based hybrid neuro-fuzzy approach to integration of large earth-science datasets for mineral prospectivity mapping is described. It implements a Takagi-Sugeno type fuzzy inference system in the framework of a four-layered feed-forward adaptive neural network. Each unique combination of the datasets is considered a feature vector whose components are derived by knowledge-based ordinal encoding of the constituent datasets. A subset of feature vectors with a known output target vector (i.e., unique conditions known to be associated with either a mineralized or a barren location) is used for the training of an adaptive neuro-fuzzy inference system. Training involves iterative adjustment of parameters of the adaptive neuro-fuzzy inference system using a hybrid learning procedure for mapping each training vector to its output target vector with minimum sum of squared error. The trained adaptive neuro-fuzzy inference system is used to process all feature vectors. The output for each feature vector is a value that indicates the extent to which a feature vector belongs to the mineralized class or the barren class. These values are used to generate a prospectivity map. The procedure is demonstrated by an application to regional-scale base metal prospectivity mapping in a study area located in the Aravalli metallogenic province (western India). A comparison of the hybrid neuro-fuzzy approach with pure knowledge-driven fuzzy and pure data-driven neural network approaches indicates that the former offers a superior method for integrating large earth-science datasets for predictive spatial mathematical modelling.
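
    As a rough illustration of the inference step described above (not the authors' trained system), the following sketch evaluates a first-order Takagi-Sugeno model with Gaussian memberships over an encoded feature vector; all membership parameters and rule consequents are random placeholders that training would normally adjust.

```python
# Minimal sketch of first-order Takagi-Sugeno fuzzy inference.
import numpy as np

def tsk_predict(x, centers, sigmas, consequents):
    """x: (n_features,) encoded evidence vector; one Gaussian MF per rule/input."""
    # rule firing strengths: product of Gaussian memberships per rule
    w = np.prod(np.exp(-0.5 * ((x - centers) / sigmas) ** 2), axis=1)
    w = w / w.sum()                              # normalised firing strengths
    # linear consequents a.x + b, combined by weighted average
    rule_out = consequents[:, :-1] @ x + consequents[:, -1]
    return float(w @ rule_out)                   # prospectivity-like score

rng = np.random.default_rng(0)
centers = rng.uniform(0, 1, (4, 3))              # 4 rules, 3 input layers
sigmas = np.full((4, 3), 0.3)
consequents = rng.normal(size=(4, 4))            # [a1 a2 a3 b] per rule
score = tsk_predict(np.array([0.2, 0.7, 0.5]), centers, sigmas, consequents)
```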

  10. Including investment risk in large-scale power market models

    DEFF Research Database (Denmark)

    Lemming, Jørgen Kjærgaard; Meibom, P.

    2003-01-01

    Long-term energy market models can be used to examine investments in production technologies, however, with market liberalisation it is crucial that such models include investment risks and investor behaviour. This paper analyses how the effect of investment risk on production technology selection can be included in large-scale partial equilibrium models of the power market. The analyses are divided into a part about risk measures appropriate for power market investors and a more technical part about the combination of a risk-adjustment model and a partial-equilibrium model. To illustrate the analyses quantitatively, a framework based on an iterative interaction between the equilibrium model and a separate risk-adjustment module was constructed. To illustrate the features of the proposed modelling approach we examined how uncertainty in demand and variable costs affects the optimal choice...

  11. Air quality models and unusually large ozone increases: Identifying model failures, understanding environmental causes, and improving modeled chemistry

    Science.gov (United States)

    Couzo, Evan A.

    Several factors combine to make ozone (O3) pollution in Houston, Texas, unique when compared to other metropolitan areas. These include complex meteorology, intense clustering of industrial activity, and significant precursor emissions from the heavily urbanized eight-county area. Decades of air pollution research have borne out two different causes, or conceptual models, of O3 formation. One conceptual model describes a gradual region-wide increase in O3 concentrations "typical" of many large U.S. cities. The other conceptual model links episodic emissions of volatile organic compounds to spatially limited plumes of high O3, which lead to large hourly increases that have exceeded 100 parts per billion (ppb) per hour. These large hourly increases are known to lead to violations of the federal O3 standard and impact Houston's status as a non-attainment area. There is a need to further understand and characterize the causes of peak O3 levels in Houston and simulate them correctly so that environmental regulators can find the most cost-effective pollution controls. This work provides a detailed understanding of unusually large O3 increases in the natural and modeled environments. First, we probe regulatory model simulations and assess their ability to reproduce the observed phenomenon. As configured for the purpose of demonstrating future attainment of the O3 standard, the model fails to predict the spatially limited O3 plumes observed in Houston. Second, we combine ambient meteorological and pollutant measurement data to identify the most likely geographic origins and preconditions of the concentrated O3 plumes. We find evidence that the O3 plumes are the result of photochemical activity accelerated by industrial emissions. And, third, we implement changes to the modeled chemistry to add missing formation mechanisms of nitrous acid, which is an important radical precursor. Radicals control the chemical reactivity of atmospheric systems, and perturbations to...

  12. Helpful Components Involved in the Cognitive-Experiential Model of Dream Work

    Science.gov (United States)

    Tien, Hsiu-Lan Shelley; Chen, Shuh-Chi; Lin, Chia-Huei

    2009-01-01

    The purpose of the study was to examine the helpful components involved in the Hill's cognitive-experiential dream work model. Participants were 27 volunteer clients from colleges and universities in northern and central parts of Taiwan. Each of the clients received 1-2 sessions of dream interpretations. The cognitive-experiential dream work model…

  13. The sheep as a large osteoporotic model for orthopaedic research in humans

    DEFF Research Database (Denmark)

    Cheng, L.; Ding, Ming; Li, Z.

    2008-01-01

    Although small animals such as rodents are very popular for osteoporosis models, large animal models are necessary for research on human osteoporotic diseases. Sheep osteoporosis models are becoming more important because of their unique advantages for osteoporosis research. Sheep are docile in nature and large in size, which facilitates obtaining blood samples, urine samples and bone tissue samples for different biochemical and histological tests, as well as surgical manipulation and instrument examinations. Their physiology is similar to that of humans. To induce osteoporosis, OVX combined with calcium intake restriction and glucocorticoid application is the most effective method for a sheep osteoporosis model. The sheep osteoporosis model is an ideal animal model for studying various medicines against osteoporosis and other treatment methods, such as prosthetic replacement, for osteoporotic...

  14. On the Phenomenology of an Accelerated Large-Scale Universe

    Directory of Open Access Journals (Sweden)

    Martiros Khurshudyan

    2016-10-01

    Full Text Available In this review paper, several new results towards the explanation of the accelerated expansion of the large-scale universe are discussed. Inflation, on the other hand, is the early-time accelerated era, and the universe is symmetric in the sense of accelerated expansion. The accelerated expansion of the universe is one of the long-standing problems in modern cosmology, and in physics in general. There are several well-defined approaches to solving this problem. One of them is an assumption concerning the existence of dark energy in the recent universe. It is believed that dark energy is responsible for antigravity, while dark matter has a gravitational nature and is responsible, in general, for structure formation. A different approach is an appropriate modification of general relativity including, for instance, f(R) and f(T) theories of gravity. On the other hand, attempts to build theories of quantum gravity and assumptions about the existence of extra dimensions, possible variability of the gravitational constant and of the speed of light (among others), provide interesting modifications of general relativity applicable to problems of modern cosmology, too. In particular, two groups of cosmological models are discussed here. In the first group, the problem of the accelerated expansion of the large-scale universe is addressed involving a new idea, named the varying ghost dark energy. The second group contains cosmological models addressed to the same problem involving either new parameterizations of the equation-of-state parameter of dark energy (like the varying polytropic gas), or nonlinear interactions between dark energy and dark matter. Moreover, for cosmological models involving varying ghost dark energy, massless particle creation in an appropriate radiation-dominated universe (when the background dynamics is due to general relativity) is demonstrated as well. Exploring the nature of the accelerated expansion of the large-scale universe involving generalized...

  15. Towards a 'standard model' of large scale structure formation

    International Nuclear Information System (INIS)

    Shafi, Q.

    1994-01-01

    We explore constraints on inflationary models employing data on large scale structure, mainly from COBE temperature anisotropies and IRAS-selected galaxy surveys. In models where the tensor contribution to the COBE signal is negligible, we find that the spectral index of density fluctuations n must exceed 0.7. Furthermore, the COBE signal cannot be dominated by the tensor component, implying n > 0.85 in such models. The data favor cold plus hot dark matter models with n equal or close to unity and Ω_HDM ∼ 0.2–0.35. Realistic grand unified theories, including supersymmetric versions, which produce inflation with these properties are presented. (author). 46 refs, 8 figs

  16. A dynamic globalization model for large eddy simulation of complex turbulent flow

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Hae Cheon; Park, No Ma; Kim, Jin Seok [Seoul National Univ., Seoul (Korea, Republic of)

    2005-07-01

    A dynamic subgrid-scale model is proposed for large eddy simulation of turbulent flows in complex geometry. The eddy viscosity model of Vreman [Phys. Fluids, 16, 3670 (2004)] is considered as a base model. A priori tests with the original Vreman model show that it predicts the correct profile of subgrid-scale dissipation in turbulent channel flow, but the optimal model coefficient is far from universal. Dynamic procedures for determining the model coefficient are proposed based on the 'global equilibrium' between the subgrid-scale dissipation and viscous dissipation. An important feature of the proposed procedures is that the model coefficient determined is globally constant in space but varies only in time. Large eddy simulations with the present dynamic model are conducted for forced isotropic turbulence, turbulent channel flow and flow over a sphere, showing excellent agreement with previous results.

  17. Large gauge invariant nonstandard neutrino interactions

    International Nuclear Information System (INIS)

    Gavela, M. B.; Hernandez, D.; Ota, T.; Winter, W.

    2009-01-01

    Theories beyond the standard model must necessarily respect its gauge symmetry. This implies strict constraints on the possible models of nonstandard neutrino interactions, which we analyze. The focus is set on the effective low-energy dimension six and eight operators involving four leptons, decomposing them according to all possible tree-level mediators, as a guide for model building. The new couplings are required to have sizable strength, while processes involving four charged leptons are required to be suppressed. For nonstandard interactions in matter, only diagonal tau-neutrino interactions can escape these requirements and can be allowed to result from dimension six operators. Large nonstandard neutrino interactions from dimension eight operators alone are phenomenologically allowed in all flavor channels and are shown to require at least two new mediator particles. The new couplings must obey general cancellation conditions both at the dimension six and dimension eight levels, which result from expressing the operators obtained from the mediator analysis in terms of a complete basis of operators. We illustrate with one example how to apply this information to model building.

  18. Neuroprotective effect of lurasidone via antagonist activities on histamine in a rat model of cranial nerve involvement.

    Science.gov (United States)

    He, Baoming; Yu, Liang; Li, Suping; Xu, Fei; Yang, Lili; Ma, Shuai; Guo, Yi

    2018-04-01

    Cranial nerve involvement frequently involves neuron damage and often leads to psychiatric disorders caused by multiple inducements. Lurasidone is a novel antipsychotic agent approved for the treatment of cranial nerve involvement and a number of mental health conditions in several countries. In the present study, the neuroprotective effect of lurasidone via antagonist activities on histamine was investigated in a rat model of cranial nerve involvement. The antagonist activities of lurasidone on serotonin 5‑HT7, serotonin 5‑HT2A, serotonin 5‑HT1A and serotonin 5‑HT6 were analyzed, and the preclinical therapeutic effects of lurasidone were examined in a rat model of cranial nerve involvement. The safety, maximum tolerated dose (MTD) and preliminary antitumor activity of lurasidone were also assessed in the cranial nerve involvement model. The therapeutic dose of lurasidone was 0.32 mg once daily, administered continuously in 14‑day cycles. The present study found that the preclinical prescriptions induced positive behavioral responses following treatment with lurasidone. The MTD was identified as a once daily administration of 0.32 mg lurasidone. Long‑term treatment with lurasidone for cranial nerve involvement was shown to improve the therapeutic effects and reduce anxiety in the experimental rats. In addition, treatment with lurasidone did not affect body weight. The expression of the language competence protein, Forkhead‑BOX P2, was increased, and the levels of the neuroprotective SxIP motif and microtubule end‑binding protein were increased in the hippocampal cells of rats with cranial nerve involvement treated with lurasidone. Lurasidone therapy reinforced memory capability and decreased anxiety. Taken together, lurasidone treatment appeared to protect against language disturbances associated with negative and cognitive impairment in the rat model of cranial nerve involvement, providing a basis for its use in the clinical treatment of...

  19. Characteristics of the large corporation-based, bureaucratic model among oecd countries - an foi model analysis

    Directory of Open Access Journals (Sweden)

    Bartha Zoltán

    2014-03-01

    Full Text Available Deciding on the development path of the economy has been a delicate question in economic policy, not least because of the trade-off effects which immediately worsen certain economic indicators as steps are taken to improve others. The aim of the paper is to present a framework that helps decide on such policy dilemmas. This framework is based on an analysis conducted among OECD countries with the FOI model (focusing on future, outside and inside potentials). Several development models can be deduced by this method, of which only the large corporation-based, bureaucratic model is discussed in detail. The large corporation-based, bureaucratic model implies a development strategy focused on the creation of domestic safe havens. Based on country studies, it is concluded that well-performing safe havens require the active participation of the state. We find that, in countries adhering to this model, business competitiveness is sustained through intensive public support, and an active role taken by the government in education, research and development, in detecting and exploiting special market niches, and in encouraging sectorial cooperation.

  20. Computational Modeling of Large Wildfires: A Roadmap

    KAUST Repository

    Coen, Janice L.

    2010-08-01

    Wildland fire behavior, particularly that of large, uncontrolled wildfires, has not been well understood or predicted. Our methodology to simulate this phenomenon uses high-resolution dynamic models made of numerical weather prediction (NWP) models coupled to fire behavior models to simulate fire behavior. NWP models are capable of modeling very high resolution (< 100 m) atmospheric flows. The wildland fire component is based upon semi-empirical formulas for fireline rate of spread, post-frontal heat release, and a canopy fire. The fire behavior is coupled to the atmospheric model such that low level winds drive the spread of the surface fire, which in turn releases sensible heat, latent heat, and smoke fluxes into the lower atmosphere, feeding back to affect the winds directing the fire. These coupled dynamic models capture the rapid spread downwind, flank runs up canyons, bifurcations of the fire into two heads, and rough agreement in area, shape, and direction of spread at periods for which fire location data is available. Yet, intriguing computational science questions arise in applying such models in a predictive manner, including physical processes that span a vast range of scales, processes such as spotting that cannot be modeled deterministically, estimating the consequences of uncertainty, the efforts to steer simulations with field data ("data assimilation"), lingering issues with short term forecasting of weather that may show skill only on the order of a few hours, and the difficulty of gathering pertinent data for verification and initialization in a dangerous environment. © 2010 IEEE.

  1. Modeling and Control of a Large Nuclear Reactor A Three-Time-Scale Approach

    CERN Document Server

    Shimjith, S R; Bandyopadhyay, B

    2013-01-01

    Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of several complex dynamic phenomena existing in a reactor. Quite often, the models developed would be of prohibitively large order, non-linear and of complex structure not readily amenable for control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller order model in standard state space form thus overcoming these difficulties. It further brings in innovative methods for controller design for systems exhibiting multi-time-scale property,...

  2. Software engineering the mixed model for genome-wide association studies on large samples.

    Science.gov (United States)

    Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J

    2009-11-01

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.
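
    A minimal sketch of the core mixed-model association step reviewed above: once the variance-component ratio (here a fixed delta, assumed already estimated, e.g. by REML) is known, an eigendecomposition of the kinship matrix lets every marker be tested by a cheap weighted regression in the rotated space. The data below are simulated placeholders.

```python
# Minimal sketch: kinship-rotated per-marker regression (EMMA-style).
import numpy as np

def mixed_model_scan(y, G, K, delta=1.0):
    """y: (n,) phenotype; G: (n, m) markers; K: (n, n) kinship; delta = Ve/Vg."""
    vals, U = np.linalg.eigh(K)
    w = 1.0 / (vals + delta)               # inverse variances in rotated space
    yt, Gt, ones_t = U.T @ y, U.T @ G, U.T @ np.ones_like(y)
    betas = np.empty(G.shape[1])
    for j in range(G.shape[1]):            # weighted regression per marker
        X = np.column_stack([ones_t, Gt[:, j]])
        betas[j] = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * yt))[1]
    return betas

n, m = 200, 50
rng = np.random.default_rng(1)
G = rng.integers(0, 3, (n, m)).astype(float)   # simulated genotypes (0/1/2)
K = np.corrcoef(G)                             # crude marker-based kinship
y = 0.5 * G[:, 0] + rng.normal(size=n)         # phenotype with one true effect
effects = mixed_model_scan(y, G, K)
```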

  3. A refined regional modeling approach for the Corn Belt - Experiences and recommendations for large-scale integrated modeling

    Science.gov (United States)

    Panagopoulos, Yiannis; Gassman, Philip W.; Jha, Manoj K.; Kling, Catherine L.; Campbell, Todd; Srinivasan, Raghavan; White, Michael; Arnold, Jeffrey G.

    2015-05-01

    Nonpoint source pollution from agriculture is the main source of nitrogen and phosphorus in the stream systems of the Corn Belt region in the Midwestern US. This region is comprised of two large river basins, the intensely row-cropped Upper Mississippi River Basin (UMRB) and Ohio-Tennessee River Basin (OTRB), which are considered the key contributing areas for the Northern Gulf of Mexico hypoxic zone according to the US Environmental Protection Agency. Thus, in this area it is of utmost importance to ensure that intensive agriculture for food, feed and biofuel production can coexist with a healthy water environment. To address these objectives within a river basin management context, an integrated modeling system has been constructed with the hydrologic Soil and Water Assessment Tool (SWAT) model, capable of estimating river basin responses to alternative cropping and/or management strategies. To improve modeling performance compared to previous studies and provide a spatially detailed basis for scenario development, this SWAT Corn Belt application incorporates a greatly refined subwatershed structure based on 12-digit hydrologic units or 'subwatersheds' as defined by the US Geological Survey. The model setup, calibration and validation are time-demanding and challenging tasks for these large systems, given the scale-intensive data requirements, and the need to ensure the reliability of flow and pollutant load predictions at multiple locations. Thus, the objectives of this study are both to comprehensively describe this large-scale modeling approach, providing estimates of pollution and crop production in the region, and to present strengths and weaknesses of integrated modeling at such a large scale, along with how it can be improved on the basis of the current modeling structure and results. The predictions were based on a semi-automatic hydrologic calibration approach for large-scale and spatially detailed modeling studies, with the use of the Sequential...

  4. Large p_T pion production and clustered parton model

    Energy Technology Data Exchange (ETDEWEB)

    Kanki, T [Osaka Univ., Toyonaka (Japan). Coll. of General Education

    1977-05-01

    Recent experimental results on large p_T inclusive π⁰ production by pp and πp collisions are interpreted by the parton model in which the constituent quarks are defined to be the clusters of the quark-partons and gluons.

  5. Large-scale inverse model analyses employing fast randomized data reduction

    Science.gov (United States)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10⁷ or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
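
    The paper's implementation is in Julia within MADS; the standalone Python sketch below (under simplified, linear assumptions) only illustrates the dimension-reduction idea: compress the observations with a random sketching matrix so the solve scales with the sketch size k rather than the number of observations.

```python
# Minimal sketch of observation "sketching" on a linear inverse problem.
import numpy as np

rng = np.random.default_rng(42)
n_obs, n_par, k = 100_000, 200, 500        # many observations, small sketch
A = rng.normal(size=(n_obs, n_par))        # linearized forward operator
m_true = rng.normal(size=n_par)
d = A @ m_true + 0.01 * rng.normal(size=n_obs)   # noisy observations

S = rng.normal(size=(k, n_obs)) / np.sqrt(k)     # Gaussian sketching matrix
m_est, *_ = np.linalg.lstsq(S @ A, S @ d, rcond=None)   # solve in sketched space
print(np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true))
```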

  6. Impact of resilience and job involvement on turnover intention of new graduate nurses using structural equation modeling.

    Science.gov (United States)

    Yu, Mi; Lee, Haeyoung

    2018-03-06

    Nurses' turnover intention is not just a result of their maladjustment to the field; it is an organizational issue. This study aimed to construct a structural model to verify the effects of new graduate nurses' work environment satisfaction, emotional labor, and burnout on their turnover intention, with consideration of resilience and job involvement, and to test the adequacy of the developed model. A cross-sectional study and a structural equation modelling approach were used. A nationwide survey was conducted of 371 new nurses who were working in hospitals for ≤18 months between July and October, 2014. The final model accounted for 40% of the variance in turnover intention. Emotional labor and burnout had a significant positive direct effect and an indirect effect on nurses' turnover intention. Resilience had a positive direct effect on job involvement. Job involvement had a negative direct effect on turnover intention. Resilience and job involvement mediated the effect of work environment satisfaction, emotional labor, and burnout on turnover intention. It is important to strengthen new graduate nurses' resilience in order to increase their job involvement and to reduce their turnover intention. © 2018 Japan Academy of Nursing Science.

  7. Large interface simulation in an averaged two-fluid code

    International Nuclear Information System (INIS)

    Henriques, A.

    2006-01-01

    Different ranges of sizes of interfaces and eddies are involved in multiphase flow phenomena. Classical formalisms focus on a specific range of sizes. This study presents a Large Interface Simulation (LIS) two-fluid compressible formalism that takes into account different sizes of interfaces. As in single-phase Large Eddy Simulation, a filtering process is used to separate Large Interface (LI) simulation from Small Interface (SI) modelization. The LI surface tension force is modelled by adapting the well-known CSF method. The SI transfer terms are modelled using classical closure laws of the averaged approach. To simulate LI transfer terms accurately, we develop a LI recognition algorithm based on a dimensionless criterion. The LIS model is applied in a classical averaged two-fluid code. The LI transfer term modelling and the LI recognition are validated on analytical and experimental tests. A square-base basin excited by a horizontal periodic movement is studied with the LIS model. The capability of the model is also shown on the case of the break-up of a bubble in a turbulent liquid flow. The break-up of a large bubble at a grid impact exhibits regime transitions between two different scales of interface, from LI to SI and from PI to LI. (author) [fr

  8. Photorealistic large-scale urban city model reconstruction.

    Science.gov (United States)

    Poullis, Charalambos; You, Suya

    2009-01-01

    The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games, and films. A realistic representation of the large-scale environments is therefore imperative for the success of such applications since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, the task of creating such large-scale virtual environments still remains time-consuming and manual work. In this work, we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel, extendible, parameterized geometric primitive is presented for the automatic building identification and reconstruction of building structures. In addition, buildings with complex roofs containing complex linear and nonlinear surfaces are reconstructed interactively using a linear polygonal and a nonlinear primitive, respectively. Second, we present a rendering pipeline for the composition of photorealistic textures, which unlike existing techniques, can recover missing or occluded texture information by integrating multiple information captured from different optical sensors (ground, aerial, and satellite).

  9. Finite element modelling for fatigue stress analysis of large suspension bridges

    Science.gov (United States)

    Chan, Tommy H. T.; Guo, L.; Li, Z. X.

    2003-03-01

    Fatigue is an important failure mode for large suspension bridges under traffic loadings. However, large suspension bridges have so many attributes that it is difficult to analyze their fatigue damage using experimental measurement methods. Numerical simulation is a feasible method of studying such fatigue damage. In British standards, the finite element method is recommended as a rigorous method for steel bridge fatigue analysis. This paper aims at developing a finite element (FE) model of a large suspension steel bridge for fatigue stress analysis. As a case study, a FE model of the Tsing Ma Bridge is presented. The verification of the model is carried out with the help of the measured bridge modal characteristics and the online data measured by the structural health monitoring system installed on the bridge. The results show that the constructed FE model is efficient for bridge dynamic analysis. Global structural analyses using the developed FE model are presented to determine the components of the nominal stress generated by railway loadings and some typical highway loadings. The critical locations in the bridge main span are also identified with the numerical results of the global FE stress analysis. Local stress analysis of a typical weld connection is carried out to obtain the hot-spot stresses in the region. These results provide a basis for evaluating fatigue damage and predicting the remaining life of the bridge.

  10. A devolved model for public involvement in the field of mental health research: case study learning.

    Science.gov (United States)

    Moule, Pam; Davies, Rosie

    2016-12-01

    Patient and public involvement in all aspects of research is espoused and there is a continued interest in understanding its wider impact. Existing investigations have identified both beneficial outcomes and remaining issues. This paper presents the impact of public involvement in one case study led by a mental health charity conducted as part of a larger research project. The case study used a devolved model of working, contracting with service user-led organizations to maximize the benefits of local knowledge on the implementation of personalized budgets, support recruitment and local user-led organizations. To understand the processes and impact of public involvement in a devolved model of working with user-led organizations. Multiple data collection methods were employed throughout 2012. These included interviews with the researchers (n = 10) and research partners (n = 5), observation of two case study meetings and the review of key case study documentation. Analysis was conducted in NVivo10 using a coding framework developed following a literature review. Five key themes emerged from the data; Devolved model, Nature of involvement, Enabling factors, Implementation challenges and Impact. While there were some challenges of implementing the devolved model it is clear that our findings add to the growing understanding of the positive benefits research partners can bring to complex research. A devolved model can support the involvement of user-led organizations in research if there is a clear understanding of the underpinning philosophy and support mechanisms are in place. © 2015 The Authors. Health Expectations Published by John Wiley & Sons Ltd.

  11. Penalized Estimation in Large-Scale Generalized Linear Array Models

    DEFF Research Database (Denmark)

    Lund, Adam; Vincent, Martin; Hansen, Niels Richard

    2017-01-01

    Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension...

  12. Investigation on the integral output power model of a large-scale wind farm

    Institute of Scientific and Technical Information of China (English)

    BAO Nengsheng; MA Xiuqian; NI Weidou

    2007-01-01

    The integral output power model of a large-scale wind farm is needed when estimating the wind farm's output over a period of time in the future. The actual wind speed power model and calculation method of a wind farm made up of many wind turbine units are discussed. After analyzing the incoming wind flow characteristics and their energy distributions, and after considering the multi-effects among the wind turbine units and certain assumptions, the incoming wind flow model of multi-units is built. The calculation algorithms and steps of the integral output power model of a large-scale wind farm are provided. Finally, an actual power output of the wind farm is calculated and analyzed by using the practical measurement wind speed data. The characteristics of a large-scale wind farm are also discussed.
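
    A minimal sketch of this kind of aggregation: a turbine power curve applied to measured wind speeds, with a single array-efficiency factor standing in for the multi-unit effects the authors model in more detail. All numbers are illustrative assumptions.

```python
# Minimal sketch: farm output = n_turbines * efficiency * power_curve(v).
import numpy as np

def turbine_power(v, v_cut_in=3.0, v_rated=12.0, v_cut_out=25.0, p_rated=2.0):
    """Piecewise power curve (MW) with a cubic ramp between cut-in and rated."""
    v = np.asarray(v, dtype=float)
    p = np.where((v >= v_cut_in) & (v < v_rated),
                 p_rated * (v**3 - v_cut_in**3) / (v_rated**3 - v_cut_in**3),
                 0.0)
    return np.where((v >= v_rated) & (v < v_cut_out), p_rated, p)

def farm_output(v_series, n_turbines=50, array_efficiency=0.9):
    # array_efficiency is a crude stand-in for inter-unit (wake) effects
    return array_efficiency * n_turbines * turbine_power(v_series)

v_measured = np.array([4.2, 7.8, 11.5, 13.0, 9.1])   # hourly wind speeds, m/s
print(farm_output(v_measured))                        # farm power, MW
```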

  13. Non sentinel node involvement prediction for sentinel node micrometastases in breast cancer: nomogram validation and comparison with other models.

    Science.gov (United States)

    Houvenaeghel, Gilles; Bannier, Marie; Nos, Claude; Giard, Sylvia; Mignotte, Herve; Jacquemier, Jocelyne; Martino, Marc; Esterni, Benjamin; Belichard, Catherine; Classe, Jean-Marc; Tunon de Lara, Christine; Cohen, Monique; Payan, Raoul; Blanchot, Jerome; Rouanet, Philippe; Penault-Llorca, Frederique; Bonnier, Pascal; Fournet, Sandrine; Agostini, Aubert; Marchal, Frederique; Garbay, Jean-Remi

    2012-04-01

    The risk of non sentinel node (NSN) involvement varies as a function of the characteristics of the sentinel nodes (SN) and the primary tumor. Our aim was to determine and validate a statistical tool (a nomogram) able to predict the risk of NSN involvement in case of SN micro or sub-micrometastasis of breast cancer, and to compare this nomogram with other models described in the literature. We collected data on 905 patients, then on 484 further patients, to build and validate the nomogram and compare it with other published scores and nomograms. Multivariate analysis conducted on the data of the first cohort allowed us to define a nomogram based on 5 criteria: the method of SN detection (immunohistochemistry or standard HES staining); the ratio of positive SN to total removed SN; the pathologic size of the tumor; the histological type; and the presence (or not) of lympho-vascular invasion. The nomogram developed here is the only one dedicated to micrometastases and developed on the basis of two large cohorts. The performance of this statistical tool in calculating the risk of NSN involvement is similar to that of the MSKCC nomogram (reportedly the most effective nomogram in the literature), with a lower rate of false negatives. This nomogram is dedicated specifically to cases of SN involvement by metastases of 2 mm or less. It could be used in clinical practice to omit ALND when the risk of NSN involvement is low. Copyright © 2011 Elsevier Ltd. All rights reserved.
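
    To make the structure concrete, the sketch below evaluates a logistic-style score over the five listed criteria. The coefficients are invented placeholders, not the published nomogram weights; it only shows how such a tool maps patient characteristics to a predicted risk of NSN involvement.

```python
# Minimal sketch of a logistic-style nomogram score; all coefficients are
# hypothetical placeholders, NOT the published values.
import math

def nsn_risk(ihc_only, positive_sn_ratio, tumor_size_mm, histo_type, lvi):
    z = (-3.0
         - 0.8 * ihc_only           # SN metastasis found by IHC only (0/1)
         + 1.5 * positive_sn_ratio  # positive SN / total removed SN
         + 0.04 * tumor_size_mm     # pathologic tumor size, mm
         + 0.6 * histo_type         # histological type indicator (0/1)
         + 0.9 * lvi)               # lympho-vascular invasion (0/1)
    return 1.0 / (1.0 + math.exp(-z))   # logistic link -> probability

print(f"predicted NSN involvement risk: {nsn_risk(0, 0.5, 18, 0, 1):.1%}")
```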

  14. Validating a Model of Motivational Factors Influencing Involvement for Parents of Transition-Age Youth with Disabilities

    Science.gov (United States)

    Hirano, Kara A.; Shanley, Lina; Garbacz, S. Andrew; Rowe, Dawn A.; Lindstrom, Lauren; Leve, Leslie D.

    2018-01-01

    Parent involvement is a predictor of postsecondary education and employment outcomes, but rigorous measures of parent involvement for youth with disabilities are lacking. Hirano, Garbacz, Shanley, and Rowe adapted scales based on Hoover-Dempsey and Sandler model of parent involvement for use with parents of youth with disabilities aged 14 to 23.…

  15. The Large Office Environment - Measurement and Modeling of the Wideband Radio Channel

    DEFF Research Database (Denmark)

    Andersen, Jørgen Bach; Nielsen, Jesper Ødum; Bauch, Gerhard

    2006-01-01

    In a future 4G or WLAN wideband application we can imagine multiple users in a large office environment consisting of a single room with partitions. Up to now, indoor radio channel measurement and modelling has mainly concentrated on scenarios with several office rooms and corridors. We present here measurements at 5.8 GHz for 100 MHz bandwidth and a novel modelling approach for the wideband radio channel in a large office room environment. An acoustic-like reverberation theory is proposed that allows to specify a tapped delay line model just from the room dimensions and an average ... calculated from the measurements. The proposed model can likely also be applied to indoor hot spot scenarios.
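
    A minimal sketch in the spirit of the acoustic analogy: derive an exponentially decaying power delay profile from the room dimensions and an average absorption (here an assumed value, whereas the paper calculates it from measurements), then populate a tapped delay line with Rayleigh-fading taps. The decay-time formula used is one common room-electromagnetics form, not necessarily the paper's exact expression.

```python
# Minimal sketch: reverberation-style tapped delay line from room dimensions.
import numpy as np

c = 3e8                                    # speed of light, m/s
L, W, H = 40.0, 25.0, 3.0                  # assumed large-office dimensions, m
V = L * W * H                              # room volume
S = 2 * (L * W + L * H + W * H)            # total surface area
eta = 0.7                                  # assumed average absorption

tau = 4.0 * V / (c * S * eta)              # exponential decay time constant, s
fs = 100e6                                 # 100 MHz bandwidth -> 10 ns taps
t = np.arange(64) / fs
pdp = np.exp(-t / tau)                     # exponential power delay profile
pdp /= pdp.sum()

rng = np.random.default_rng(7)             # Rayleigh-fading complex tap gains
taps = np.sqrt(pdp / 2) * (rng.normal(size=64) + 1j * rng.normal(size=64))
rms_ds = np.sqrt((pdp * t**2).sum() - (pdp * t).sum()**2)
print(f"tau = {tau*1e9:.1f} ns, rms delay spread = {rms_ds*1e9:.1f} ns")
```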

  16. A Model for Teaching Large Classes: Facilitating a "Small Class Feel"

    Science.gov (United States)

    Lynch, Rosealie P.; Pappas, Eric

    2017-01-01

    This paper presents a model for teaching large classes that facilitates a "small class feel" to counteract the distance, anonymity, and formality that often characterize large lecture-style courses in higher education. One author (E. P.) has been teaching a 300-student general education critical thinking course for ten years, and the…

  17. Mechanism and models for collisional energy transfer in highly excited large polyatomic molecules

    International Nuclear Information System (INIS)

    Gilbert, R. G.

    1995-01-01

    Collisional energy transfer in highly excited molecules (say, 200-500 kJ mol⁻¹ above the zero-point energy of reactant, or of product, for a recombination reaction) is reviewed. An understanding of this energy transfer is important in predicting and interpreting the pressure dependence of gas-phase rate coefficients for unimolecular and recombination reactions. For many years it was thought that this pressure dependence could be calculated from a single energy-transfer quantity, such as the average energy transferred per collision. However, the discovery of 'super collisions' (a small but significant fraction of collisions which transfer abnormally large amounts of energy) means that this simplistic approach needs some revision. The 'ordinary' (non-super) component of the distribution function for collisional energy transfer can be quantified either by empirical models (e.g., an exponential-down functional form) or by models with a physical basis, such as biased random walk (applicable to monatomic or diatomic collision partners) or ergodic (for polyatomic collision partners) treatments. The latter two models enable approximate expressions for the average energy transfer to be estimated from readily available molecular parameters. Rotational energy transfer, important for finding the pressure dependence for recombination reactions, can for these purposes usually be taken as transferring sufficient energy so that the explicit functional form is not required to predict the pressure dependence. The mechanism of 'ordinary' energy transfer seems to be dominated by low-frequency modes of the substrate, whereby there is sufficient time during a vibrational period for significant energy flow between the collision partners. Super collisions may involve sudden energy flow as an outer atom of the substrate is squashed between the substrate and the bath gas, and then is moved away from the interaction by large-amplitude motion such as a ring vibration or a rotation; improved...
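
    As a tiny illustration of the empirical exponential-down model mentioned above, the Monte Carlo below samples deactivating collisions with P(ΔE) ∝ exp(−ΔE/α) and recovers ⟨ΔE⟩_down ≈ α; the value of α is an assumption for illustration only.

```python
# Minimal sketch: sampling the exponential-down energy-transfer model.
import numpy as np

alpha = 250.0                                    # cm^-1, illustrative parameter
rng = np.random.default_rng(11)
dE_down = rng.exponential(alpha, size=100_000)   # energy lost per deactivating collision
print(f"<dE>_down = {dE_down.mean():.0f} cm^-1 (approaches alpha = {alpha:.0f})")
```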

  18. Large-Signal DG-MOSFET Modelling for RFID Rectification

    Directory of Open Access Journals (Sweden)

    R. Rodríguez

    2016-01-01

    Full Text Available This paper analyses the capability of undoped DG-MOSFETs for the operation of rectifiers for RFIDs and Wireless Power Transmission (WPT) at microwave frequencies. For this purpose, a large-signal compact model has been developed and implemented in Verilog-A. The model has been numerically validated with a device simulator (Sentaurus). It is found that the number of stages required to achieve optimal rectifier performance is lower than that required with conventional MOSFETs. In addition, the DC output voltage could be increased with the use of appropriate mid-gap metals for the gate, such as TiN. The minor impact of short channel effects (SCEs) on rectification is also pointed out.

  19. Bilevel Traffic Evacuation Model and Algorithm Design for Large-Scale Activities

    Directory of Open Access Journals (Sweden)

    Danwen Bao

    2017-01-01

    Full Text Available This paper establishes a bilevel planning model with one master and multiple slaves to solve traffic evacuation problems. The minimum evacuation network saturation and the shortest evacuation time are used as the objective functions for the upper- and lower-level models, respectively. The optimality conditions of this model are also analyzed. An improved particle swarm optimization (PSO) method is proposed by introducing an electromagnetism-like mechanism to solve the bilevel model and enhance its convergence efficiency. A case study is carried out using the Nanjing Olympic Sports Center. The results indicate that, for large-scale activities, the average evacuation time of the classic model is shorter but the road saturation distribution is more uneven; thus, the overall evacuation efficiency of the network is not high. For induced emergencies, the evacuation time of the bilevel planning model is shortened. When the audience arrival rate is increased from 50% to 100%, the evacuation time is shortened by 22% to 35%, indicating that the bilevel planning model optimizes evacuation more effectively than the classic model. Therefore, the model and algorithm presented in this paper can provide a theoretical basis for the traffic-induced evacuation decision making of large-scale activities.
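
    For reference, a plain particle swarm optimizer of the kind the paper augments with an electromagnetism-like mechanism; the toy objective stands in for the lower-level evacuation-time model, and all hyperparameters are illustrative.

```python
# Minimal sketch: standard global-best PSO on a stand-in objective.
import numpy as np

def pso(f, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(3)
    x = rng.uniform(-5, 5, (n_particles, dim))     # particle positions
    v = np.zeros_like(x)                           # particle velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val                  # update personal bests
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()   # update global best
    return gbest, pbest_val.min()

best_x, best_val = pso(lambda p: (p**2).sum())     # minimize a toy objective
```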

  20. Parameterization of a Hydrological Model for a Large, Ungauged Urban Catchment

    Directory of Open Access Journals (Sweden)

    Gerald Krebs

    2016-10-01

    Full Text Available Urbanization leads to the replacement of natural areas by impervious surfaces and affects the catchment hydrological cycle with adverse environmental impacts. Low impact development (LID) tools that mimic the hydrological processes of natural areas have been developed and applied to mitigate these impacts. Hydrological simulations are one possibility to evaluate LID performance, but the associated small-scale processes require a highly spatially distributed and explicit modeling approach. However, detailed data for model development are often not available for large urban areas, hampering the model parameterization. In this paper we propose a methodology to parameterize a hydrological model for a large, ungauged urban area while maintaining at the same time a detailed surface discretization for direct parameter manipulation for LID simulation and a firm reliance on available data for model conceptualization. Catchment delineation was based on a high-resolution digital elevation model (DEM) and model parameterization relied on a novel model regionalization approach. The impact of automated delineation and model regionalization on simulation results was evaluated for three monitored study catchments (5.87–12.59 ha). The simulated runoff peak was most sensitive to accurate catchment discretization and calibration, while both the runoff volume and the fit of the hydrograph were less affected.

  1. Developing a conceptual model for the application of patient and public involvement in the healthcare system in Iran.

    Science.gov (United States)

    Azmal, Mohammad; Sari, Ali Akbari; Foroushani, Abbas Rahimi; Ahmadi, Batoul

    2016-06-01

    Patient and public involvement means engaging patients, providers, community representatives, and the public in healthcare planning and decision making. The purpose of this study was to develop a model for the application of patient and public involvement in decision making in the Iranian healthcare system. A mixed qualitative-quantitative approach was used to develop a conceptual model. Thirty-three key informants were purposely recruited in the qualitative stage, and 420 people (patients and their companions) were included in a protocol study that was implemented in five steps: 1) Identifying antecedents, consequences, and variables associated with patient and public involvement in healthcare decision making through a comprehensive literature review; 2) Determining the main variables in the context of Iran's health system using conceptual framework analysis; 3) Prioritizing and weighting the variables by Shannon entropy; 4) Designing and validating a tool for patient and public involvement in healthcare decision making; and 5) Providing a conceptual model of patient and public involvement in planning and developing healthcare using structural equation modeling. We used various software programs, including SPSS (17), MAXQDA (10), Excel, and LISREL. Content analysis, Shannon entropy, and descriptive and analytic statistics were used to analyze the data. In this study, seven antecedent variables, five dimensions of involvement, and six consequences were identified. These variables were used to design a valid tool. A logical model was derived that explained the logical relationships between the antecedent and consequent variables and the dimensions of patient and public involvement as well. Given the specific context of the political, social, and innovative environments in Iran, it was necessary to design a model that would be compatible with these features. It can improve the quality of care and promote patient and public satisfaction with healthcare and...
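
    Step 3 (weighting by Shannon entropy) admits a compact sketch: variables whose scores are spread more unevenly across respondents carry more information and receive larger weights. The survey matrix below is a random placeholder for the study's 420 responses.

```python
# Minimal sketch of the entropy weight method.
import numpy as np

def entropy_weights(X):
    """X: (n_respondents, n_variables) matrix of positive scores."""
    P = X / X.sum(axis=0)                        # column-wise proportions
    n = X.shape[0]
    E = -(P * np.log(P)).sum(axis=0) / np.log(n) # normalized entropies in [0, 1]
    d = 1.0 - E                                  # degree of diversification
    return d / d.sum()                           # weights summing to 1

rng = np.random.default_rng(5)
X = rng.integers(1, 6, size=(420, 7)).astype(float)  # 420 respondents, 7 variables
print(entropy_weights(X))
```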

  2. A Hybrid Artificial Reputation Model Involving Interaction Trust, Witness Information and the Trust Model to Calculate the Trust Value of Service Providers

    Directory of Open Access Journals (Sweden)

    Gurdeep Singh Ransi

    2014-02-01

    Full Text Available Agent interaction in a community, such as the online buyer-seller scenario, is often uncertain, as agents that come in contact initially know nothing about each other. Currently, many reputation models have been developed that help service consumers select better service providers. Reputation models also help agents decide who they should trust and transact with in the future. These reputation models are built either on interaction trust, which uses direct experience as the source of information, or on witness information, also known as word-of-mouth, which uses reports provided by others. Neither interaction trust nor witness information models alone succeed in such uncertain interactions. In this paper we propose a hybrid reputation model involving both interaction trust and witness information to address the shortcomings of existing reputation models when taken separately. A sample simulation is built to set up buyer-seller services and uncertain interactions. Experiments reveal that the hybrid approach leads to better selection of trustworthy agents, where consumers select more reputable service providers, eventually helping consumers obtain more gains. Furthermore, the trust model developed is used in calculating trust values of service providers.
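
    A minimal sketch of the general idea (the weighting rule is an illustrative choice, not the authors' exact formula): blend an agent's own interaction trust with witness reports, shifting weight toward direct experience as interactions accumulate.

```python
# Minimal sketch: hybrid trust = weighted blend of direct and witness ratings.
from dataclasses import dataclass, field

@dataclass
class HybridTrust:
    outcomes: list = field(default_factory=list)   # own ratings in [0, 1]

    def record(self, rating: float) -> None:
        self.outcomes.append(rating)

    def trust(self, witness_ratings: list, n_confident: int = 10) -> float:
        direct = sum(self.outcomes) / len(self.outcomes) if self.outcomes else 0.5
        witness = (sum(witness_ratings) / len(witness_ratings)
                   if witness_ratings else 0.5)
        # weight on direct experience grows with the number of interactions
        w = min(len(self.outcomes) / n_confident, 1.0)
        return w * direct + (1.0 - w) * witness

tm = HybridTrust()
for r in [0.9, 0.8, 1.0]:
    tm.record(r)
print(tm.trust(witness_ratings=[0.4, 0.6, 0.5]))   # still leans on witnesses
```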

  3. Parallel runs of a large air pollution model on a grid of Sun computers

    DEFF Research Database (Denmark)

    Alexandrov, V.N.; Owczarz, W.; Thomsen, Per Grove

    2004-01-01

    Large-scale air pollution models can successfully be used in different environmental studies. These models are described mathematically by systems of partial differential equations. Splitting procedures followed by discretization of the spatial derivatives lead to several large systems...

  4. Groundwater Flow and Thermal Modeling to Support a Preferred Conceptual Model for the Large Hydraulic Gradient North of Yucca Mountain

    International Nuclear Information System (INIS)

    McGraw, D.; Oberlander, P.

    2007-01-01

    The purpose of this study is to report the results of a preliminary modeling framework to investigate the causes of the large hydraulic gradient north of Yucca Mountain. This study builds on the Saturated Zone Site-Scale Flow and Transport Model (referenced herein as the Site-scale model (Zyvoloski, 2004a)), which is a three-dimensional saturated zone model of the Yucca Mountain area. Groundwater flow was simulated under natural conditions. The model framework and grid design describe the geologic layering, and the calibration parameters describe the hydrogeology. The Site-scale model is calibrated to hydraulic heads, fluid temperature, and groundwater flowpaths. One area of interest in the Site-scale model represents the large hydraulic gradient north of Yucca Mountain. Nearby water levels suggest over 200 meters of hydraulic head difference in less than 1,000 meters horizontal distance. Given the geologic conceptual models defined by various hydrogeologic reports (Faunt, 2000, 2001; Zyvoloski, 2004b), no definitive explanation has been found for the cause of the large hydraulic gradient. Luckey et al. (1996) present several possible explanations for the large hydraulic gradient: (1) the gradient is simply the result of flow through the upper volcanic confining unit, which is nearly 300 meters thick near the large gradient; (2) the gradient represents a semi-perched system in which flow in the upper and lower aquifers is predominantly horizontal, whereas flow in the upper confining unit would be predominantly vertical; (3) the gradient represents a drain down a buried fault from the volcanic aquifers to the lower Carbonate Aquifer; (4) the gradient represents a spillway in which a fault marks the effective northern limit of the lower volcanic aquifer; (5) the large gradient results from the presence at depth of the Eleana Formation, a part of the Paleozoic upper confining unit, which overlies the lower Carbonate Aquifer in much of the Death Valley region.

  5. Comprehensive personal witness: a model to enlarge missional involvement of the local church

    Directory of Open Access Journals (Sweden)

    Hancke, Frans

    2013-06-01

    Full Text Available In The Split-Level Fellowship, Wesley Baker analysed the role of individual members in the Church, giving a name to a tragic phenomenon with which Church leaders are familiar. Although true of society in general, it is especially true of the church. Baker called the difference between the committed few and the uninvolved many Factor Beta. This reality triggers the question: why are the majority of Christians in the world not missionally involved through personal witness, and which factors consequently influence personal witness and missional involvement? This article explains how the range of personal witness and missional involvement found in local churches is rooted in certain fundamental factors and conditions which mutually influence each other and ultimately contribute towards forming a certain paradigm. This paradigm acts as the basis from which certain behavioural patterns (witness) will manifest. The factors influencing witness are described as either accelerators or decelerators, and their relativity and mutual relationships are considered. Factors acting as decelerators can severely hamper or even annul witness, while accelerators can have an immensely positive effect, enlarging the transformational influence of witness. In conclusion, a transformational model is developed through which paradigms can be influenced and eventually changed. This model fulfils a diagnostic and remedial function and will support local churches in enlarging the individual and corporate missional involvement of believers.

  6. Time simulation of flutter with large stiffness changes

    Science.gov (United States)

    Karpel, Mordechay; Wieseman, Carol D.

    1992-01-01

    Time simulation of flutter, involving large local structural changes, is formulated with a state-space model that is based on a relatively small number of generalized coordinates. Free-free vibration modes are first calculated for a nominal finite-element model with relatively large fictitious masses located at the area of structural changes. A low-frequency subset of these modes is then transformed into a set of structural modal coordinates with which the entire simulation is performed. These generalized coordinates and the associated oscillatory aerodynamic force coefficient matrices are used to construct an efficient time-domain, state-space model for a basic aeroelastic case. The time simulation can then be performed by simply changing the mass, stiffness, and damping coupling terms when structural changes occur. It is shown that the size of the aeroelastic model required for time simulation with large structural changes at a few a priori known locations is similar to that required for direct analysis of a single structural case. The method is applied to the simulation of an aeroelastic wind-tunnel model. The diverging oscillations are followed by the activation of a tip-ballast decoupling mechanism that stabilizes the system but may cause significant transient overshoots.
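
    The key idea is that structural changes enter the state-space model only through a few coupling terms, so a simulation can integrate x' = Ax and simply swap the affected entries of A when a change (such as the tip-ballast decoupling) occurs. The sketch below illustrates that mechanism on a toy two-mode system; the matrices and the switching time are invented for illustration and are unrelated to the actual wind-tunnel model.

    ```python
    import numpy as np

    def simulate(A_before, A_after, x0, t_switch, t_end, dt=1e-3):
        """Integrate x' = A x, swapping the system matrix at t_switch
        to mimic a sudden structural change (e.g. a decoupling mechanism)."""
        x, t, out = np.array(x0, float), 0.0, []
        while t < t_end:
            A = A_before if t < t_switch else A_after
            x = x + dt * (A @ x)     # forward Euler, adequate for a sketch
            out.append((t, x.copy()))
            t += dt
        return out

    # Toy oscillator: negative damping (diverging) before the switch,
    # positive damping (stabilizing) after it.
    A_unstable = np.array([[0.0, 1.0], [-25.0, +0.1]])
    A_stable   = np.array([[0.0, 1.0], [-25.0, -1.0]])
    traj = simulate(A_unstable, A_stable, [1.0, 0.0], t_switch=5.0, t_end=10.0)
    ```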

  7. Modeling the Relations among Parental Involvement, School Engagement and Academic Performance of High School Students

    Science.gov (United States)

    Al-Alwan, Ahmed F.

    2014-01-01

    The author proposed a model to explain how parental involvement and school engagement relate to academic performance. Participants were 671 ninth- and tenth-grade students who completed two scales, "parental involvement" and "school engagement," in their regular classrooms. Results of the path analysis suggested that the…

  8. Degree of multicollinearity and variables involved in linear dependence in additive-dominant models

    Directory of Open Access Journals (Sweden)

    Juliana Petrini

    2012-12-01

    Full Text Available The objective of this work was to assess the degree of multicollinearity and to identify the variables involved in linear dependence relations in additive-dominant models. Data of birth weight (n = 141,567), yearling weight (n = 58,124), and scrotal circumference (n = 20,371) of Montana Tropical composite cattle were used. Diagnosis of multicollinearity was based on the variance inflation factor (VIF) and on the evaluation of the condition indexes and eigenvalues from the correlation matrix among explanatory variables. The first model studied (RM) included the fixed effect of dam age class at calving and the covariates associated with the direct and maternal additive and non-additive effects. The second model (R) included all the effects of the RM model except the maternal additive effects. Multicollinearity was detected in both models for all traits considered, with VIF values of 1.03-70.20 for RM and 1.03-60.70 for R. Collinearity increased with the increase of variables in the model and the decrease in the number of observations, and it was classified as weak, with condition index values between 10.00 and 26.77. In general, the variables associated with additive and non-additive effects were involved in multicollinearity, partially due to the natural connection between these covariables as fractions of the biological types in breed composition.
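
    The variance inflation factor used for this diagnosis has a simple definition: regress each explanatory variable on all the others and set VIF = 1/(1 - R²). The snippet below shows that computation on a synthetic design matrix; the data are made up purely to illustrate the mechanics.

    ```python
    import numpy as np

    def vif(X: np.ndarray) -> np.ndarray:
        """VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing
        column j of X on the remaining columns (with an intercept)."""
        n, k = X.shape
        out = np.empty(k)
        for j in range(k):
            y = X[:, j]
            Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            resid = y - Z @ beta
            r2 = 1.0 - resid.var() / y.var()
            out[j] = 1.0 / (1.0 - r2)
        return out

    rng = np.random.default_rng(0)
    x1 = rng.normal(size=200)
    x2 = x1 + 0.1 * rng.normal(size=200)   # nearly collinear with x1 -> large VIF
    x3 = rng.normal(size=200)
    print(vif(np.column_stack([x1, x2, x3])))
    ```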

  9. The three-point function as a probe of models for large-scale structure

    International Nuclear Information System (INIS)

    Frieman, J.A.; Gaztanaga, E.

    1993-01-01

    The authors analyze the consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime. Several variations of the standard $\Omega = 1$ cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, $R_p \sim 20\,h^{-1}$ Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower, et al. The authors show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes $Q_J$ at large scales, $r \gtrsim R_p$. Current observational constraints on the three-point amplitudes $Q_3$ and $S_3$ can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales
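
    For reference, the hierarchical three-point amplitude referred to here is conventionally defined by normalizing the three-point function $\zeta$ by products of two-point functions $\xi$ (the standard definition, stated for orientation rather than taken from the paper itself):

    $$Q_3(r_{12}, r_{23}, r_{31}) = \frac{\zeta(r_{12}, r_{23}, r_{31})}{\xi(r_{12})\,\xi(r_{23}) + \xi(r_{23})\,\xi(r_{31}) + \xi(r_{31})\,\xi(r_{12})}$$

    In hierarchical clustering models $Q_3$ is approximately constant with scale, which is why a strong scale-dependence of the $Q_J$ at $r \gtrsim R_p$ would signal scale-dependent bias rather than true large-scale power.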

  10. Lumped hydrological models as an Occam's razor for runoff modeling in large Russian Arctic basins

    OpenAIRE

    Ayzel Georgy

    2018-01-01

    This study investigates the ability of three lumped hydrological models to predict the daily runoff of large-scale Arctic basins for the modern period (1979-2014) under substantial data scarcity. All models were driven only by a meteorological forcing reanalysis dataset, without any additional information about the landscape, soil, or vegetation cover properties of the studied basins. We found limitations of model parameter calibration in ungauged basins using global optimization alg...

  11. Estimation and Inference for Very Large Linear Mixed Effects Models

    OpenAIRE

    Gao, K.; Owen, A. B.

    2016-01-01

    Linear mixed models with large imbalanced crossed random effects structures pose severe computational problems for maximum likelihood estimation and for Bayesian analysis. The costs can grow as fast as $N^{3/2}$ when there are $N$ observations. Such problems arise in any setting where the underlying factors satisfy a many-to-many relationship (instead of a nested one); in electronic commerce applications, $N$ can be quite large. Methods that do not account for the correlation structure can...

  12. Large deformation analysis of adhesive by Eulerian method with new material model

    International Nuclear Information System (INIS)

    Maeda, K; Nishiguchi, K; Iwamoto, T; Okazawa, S

    2010-01-01

    A material model to describe the large deformation of a pressure sensitive adhesive (PSA) is presented. The stress-strain relationship of a PSA includes both viscoelasticity and rubber-elasticity. Therefore, we propose a material model describing viscoelasticity and rubber-elasticity, and extend it to rate form for three-dimensional finite element analysis. After proposing the material model for the PSA, we formulate an Eulerian method to simulate large-deformation behavior. In the Eulerian calculation, the Piecewise Linear Interface Calculation (PLIC) method is employed to capture the material surface. Using the PLIC method, we can impose dynamic and kinematic boundary conditions on the captured material surface. Two representative computational examples are calculated to check the validity of the presented methods.

  13. The large-scale peculiar velocity field in flat models of the universe

    International Nuclear Information System (INIS)

    Vittorio, N.; Turner, M.S.

    1986-10-01

    The inflationary Universe scenario predicts a flat Universe and both adiabatic and isocurvature primordial density perturbations with the Zel'dovich spectrum. The two simplest realizations, models dominated by hot or cold dark matter, seem to be in conflict with observations. Here, flat models with two components of mass density are examined, where one component is smoothly distributed, and the large-scale ($\gtrsim 10\,h^{-1}$ Mpc) peculiar velocity field for these models is considered. For the smooth component, relativistic particles, a relic cosmological term, and light strings are considered. At present the observational situation is unsettled; but, in principle, the large-scale peculiar velocity field is a very powerful discriminator between these different models. 61 refs

  14. Particle production at large transverse momentum and hard collision models

    International Nuclear Information System (INIS)

    Ranft, G.; Ranft, J.

    1977-04-01

    The majority of the presently available experimental data is consistent with hard scattering models. Therefore the hard scattering model seems to be well established. There is good evidence for jets in large transverse momentum reactions as predicted by these models. The overall picture is however not yet well enough understood. We mention only the empirical hard scattering cross section introduced in most of the models, the lack of a deep theoretical understanding of the interplay between quark confinement and jet production, and the fact that we are not yet able to discriminate conclusively between the many proposed hard scattering models. The status of different hard collision models discussed in this paper is summarized. (author)

  15. Optimizing Prediction Using Bayesian Model Averaging: Examples Using Large-Scale Educational Assessments.

    Science.gov (United States)

    Kaplan, David; Lee, Chansoon

    2018-01-01

    This article provides a review of Bayesian model averaging as a means of optimizing the predictive performance of common statistical models applied to large-scale educational assessments. The Bayesian framework recognizes that in addition to parameter uncertainty, there is uncertainty in the choice of models themselves. A Bayesian approach to addressing the problem of model uncertainty is the method of Bayesian model averaging. Bayesian model averaging searches the space of possible models for a set of submodels that satisfy certain scientific principles and then averages the coefficients across these submodels weighted by each model's posterior model probability (PMP). Using the weighted coefficients for prediction has been shown to yield optimal predictive performance according to certain scoring rules. We demonstrate the utility of Bayesian model averaging for prediction in education research with three examples: Bayesian regression analysis, Bayesian logistic regression, and a recently developed approach for Bayesian structural equation modeling. In each case, the model-averaged estimates are shown to yield better prediction of the outcome of interest than any submodel based on predictive coverage and the log-score rule. Implications for the design of large-scale assessments when the goal is optimal prediction in a policy context are discussed.
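
    As a concrete (and deliberately simplified) illustration of the averaging step, the sketch below approximates posterior model probabilities with BIC weights and averages each submodel's prediction accordingly. Real BMA implementations search the model space and use proper priors, so treat the helper names and the BIC approximation as assumptions rather than the article's procedure.

    ```python
    import numpy as np
    from itertools import combinations

    def bic_weights(bics):
        """Posterior model probabilities approximated by BIC weights."""
        b = np.asarray(bics)
        w = np.exp(-0.5 * (b - b.min()))
        return w / w.sum()

    def ols_fit_bic(X, y):
        """OLS fit plus the BIC of the fitted submodel."""
        n = len(y)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ beta) ** 2))
        return beta, n * np.log(rss / n) + X.shape[1] * np.log(n)

    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 3))
    y = 2.0 * X[:, 0] - X[:, 1] + rng.normal(size=300)
    x_new = np.array([0.5, -1.0, 0.2])

    bics, preds = [], []
    for k in range(1, 4):
        for cols in combinations(range(3), k):    # enumerate every submodel
            beta, bic = ols_fit_bic(X[:, cols], y)
            bics.append(bic)
            preds.append(float(x_new[list(cols)] @ beta))

    weights = bic_weights(bics)
    print(np.dot(weights, preds))                 # model-averaged prediction
    ```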

  16. Large urban fire environment: trends and model city predictions

    International Nuclear Information System (INIS)

    Larson, D.A.; Small, R.D.

    1983-01-01

    The urban fire environment that would result from a megaton-yield nuclear weapon burst is considered. The dependence of temperatures and velocities on fire size, burning intensity, turbulence, and radiation is explored, and specific calculations for three model urban areas are presented. In all cases, high velocity fire winds are predicted. The model-city results show the influence of building density and urban sprawl on the fire environment. Additional calculations consider large-area fires with the burning intensity reduced in a blast-damaged urban center

  17. Relationships among Adolescents' Leisure Motivation, Leisure Involvement, and Leisure Satisfaction: A Structural Equation Model

    Science.gov (United States)

    Chen, Ying-Chieh; Li, Ren-Hau; Chen, Sheng-Hwang

    2013-01-01

    The purpose of this cross-sectional study was to test a cause-and-effect model of factors affecting leisure satisfaction among Taiwanese adolescents. A structural equation model was proposed in which the relationships among leisure motivation, leisure involvement, and leisure satisfaction were explored. The study collected data from 701 adolescent…

  18. Adolescents and Music Media: Toward an Involvement-Mediational Model of Consumption and Self-Concept

    Science.gov (United States)

    Kistler, Michelle; Rodgers, Kathleen Boyce; Power, Thomas; Austin, Erica Weintraub; Hill, Laura Griner

    2010-01-01

    Using social cognitive theory and structural regression modeling, we examined pathways between early adolescents' music media consumption, involvement with music media, and 3 domains of self-concept (physical appearance, romantic appeal, and global self-worth; N=124). A mediational model was supported for 2 domains of self-concept. Music media…

  19. REQUIREMENTS FOR SYSTEMS DEVELOPMENT LIFE CYCLE MODELS FOR LARGE-SCALE DEFENSE SYSTEMS

    Directory of Open Access Journals (Sweden)

    Kadir Alpaslan DEMIR

    2015-10-01

    Full Text Available Large-scale defense system projects are strategic for maintaining and increasing the national defense capability. Therefore, governments spend billions of dollars in the acquisition and development of large-scale defense systems. The scale of defense systems is always increasing and the costs to build them are skyrocketing. Today, defense systems are software intensive and they are either a system of systems or a part of one. Historically, the project performances observed in the development of these systems have been significantly poor when compared to other types of projects. It is obvious that the currently used systems development life cycle models are insufficient to address today's challenges of building these systems. Using a systems development life cycle model that is specifically designed for large-scale defense system developments and is effective in dealing with today's and near-future challenges will help to improve project performances. The first step in the development of a large-scale defense systems development life cycle model is the identification of requirements for such a model. This paper contributes to the body of literature in the field by providing a set of requirements for system development life cycle models for large-scale defense systems. Furthermore, a research agenda is proposed.

  20. Numerically modelling the large scale coronal magnetic field

    Science.gov (United States)

    Panja, Mayukh; Nandi, Dibyendu

    2016-07-01

    The solar corona spews out vast amounts of magnetized plasma into the heliosphere, which has a direct impact on the Earth's magnetosphere. Thus it is important that we develop an understanding of the dynamics of the solar corona. With present technology it has not been possible to generate 3D magnetic maps of the solar corona; this warrants the use of numerical simulations to study the coronal magnetic field. A very popular method of doing this is to extrapolate the photospheric magnetic field using NLFF or PFSS codes. However, the extrapolations at different time intervals are completely independent of each other and do not capture the temporal evolution of magnetic fields. On the other hand, full MHD simulations of the global coronal field, apart from being computationally very expensive, would be physically less transparent, owing to the large number of free parameters that are typically used in such codes. This brings us to the magneto-frictional model, which is relatively simpler and computationally more economic. We have developed a magneto-frictional model in 3D spherical polar coordinates to study the large-scale global coronal field. Here we present studies of changing connectivities between active regions, in response to photospheric motions.

  1. Burn mouse models

    DEFF Research Database (Denmark)

    Calum, Henrik; Høiby, Niels; Moser, Claus

    2014-01-01

    Severe thermal injury induces immunosuppression, involving all parts of the immune system, especially when large fractions of the total body surface area are affected. An animal model was established to characterize the burn-induced immunosuppression. In our novel mouse model a 6 % third-degree burn ... with infected burn wound compared with the burn wound only group. The burn mouse model resembles the clinical situation and provides an opportunity to examine or develop new strategies, like new antibiotics and immune therapy, in handling burn wound victims.

  2. Cardiac regeneration using pluripotent stem cells—Progression to large animal models

    Directory of Open Access Journals (Sweden)

    James J.H. Chong

    2014-11-01

    Full Text Available Pluripotent stem cells (PSCs) have indisputable cardiomyogenic potential and therefore have been intensively investigated as a potential cardiac regenerative therapy. Current directed differentiation protocols are able to produce high yields of cardiomyocytes from PSCs, and studies in small animal models of cardiovascular disease have proven sustained engraftment and functional efficacy. Therefore, the time is ripe for cardiac regenerative therapies using PSC derivatives to be tested in large animal models that more closely resemble the hearts of humans. In this review, we discuss the results of our recent study using human embryonic stem cell derived cardiomyocytes (hESC-CM) in a non-human primate model of ischemic cardiac injury. Large-scale remuscularization, electromechanical coupling, and short-term arrhythmias demonstrated by our hESC-CM grafts are discussed in the context of other studies using adult stem cells for cardiac regeneration.

  3. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    The fields of sensitivity and uncertainty analysis have traditionally been dominated by statistical techniques when large-scale modeling codes are being analyzed. These methods are able to estimate sensitivities, generate response surfaces, and estimate response probability distributions given the input parameter probability distributions. Because the statistical methods are computationally costly, they are usually applied only to problems with relatively small parameter sets. Deterministic methods, on the other hand, are very efficient and can handle large data sets, but generally require simpler models because of the considerable programming effort required for their implementation. The first part of this paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The second part of the paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. These methods are applicable to low-level radioactive waste disposal system performance assessment
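
    The core of derivative-based uncertainty propagation, of which DUA is an elaborated form, is the first-order relation between parameter variances and result variance. For independent parameters $p_i$ with standard deviations $\sigma_i$ and a response $R(p_1,\dots,p_k)$, the standard first-order formula (stated here for orientation, not taken from the paper) is

    $$\mathrm{Var}(R) \approx \sum_{i=1}^{k} \left(\frac{\partial R}{\partial p_i}\right)^{2} \sigma_i^{2}$$

    The derivatives are exactly what systems like GRESS and ADGEN generate automatically, which is why the deterministic approach scales to large parameter sets where sampling-based methods become costly.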

  4. Hybrid Reynolds-Averaged/Large Eddy Simulation of a Cavity Flameholder; Assessment of Modeling Sensitivities

    Science.gov (United States)

    Baurle, R. A.

    2015-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. The cases simulated corresponded to those used to examine this flowfield experimentally using particle image velocimetry. A variety of turbulence models were used for the steady-state Reynolds-averaged simulations, including both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged / large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to formally assess the performance of the hybrid Reynolds-averaged / large eddy simulation modeling approach in a flowfield of interest to the scramjet research community. The numerical errors were quantified for both the steady-state and scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results showed a high degree of variability when comparing the predictions obtained from each turbulence model, with the non-linear eddy viscosity model (an explicit algebraic stress model) providing the most accurate prediction of the measured values. The hybrid Reynolds-averaged/large eddy simulation results were carefully scrutinized to ensure that even the coarsest grid had an acceptable level of resolution for large eddy simulation, and that the time-averaged statistics were acceptably accurate. The autocorrelation and its Fourier transform were the primary tools used for this assessment. The statistics extracted from the hybrid simulation strategy proved to be more accurate than the Reynolds-averaged results obtained using the linear eddy viscosity models. However, there was no predictive improvement noted over the results obtained from the explicit algebraic stress model.
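
    The autocorrelation mentioned as the primary assessment tool serves a concrete purpose: its integral timescale indicates how many effectively independent samples a time series contains, and hence how trustworthy the time-averaged statistics are. A minimal version of that check follows; the signal and sample data are invented for illustration.

    ```python
    import numpy as np

    def autocorrelation(x):
        """Sample autocorrelation of a 1-D signal, normalized so rho(0) = 1."""
        x = np.asarray(x, float) - np.mean(x)
        acf = np.correlate(x, x, mode="full")[len(x) - 1:]
        return acf / acf[0]

    def integral_timescale(x, dt):
        """Integrate rho(tau) up to its first zero crossing."""
        rho = autocorrelation(x)
        cut = np.argmax(rho <= 0.0)
        cut = cut if cut > 0 else len(rho)
        return float(np.sum(rho[:cut]) * dt)

    rng = np.random.default_rng(2)
    u = np.cumsum(rng.normal(size=10_000)) * 0.02 + rng.normal(size=10_000)
    T = integral_timescale(u, dt=1e-3)
    n_eff = 10_000 * 1e-3 / (2.0 * T)   # effective number of independent samples
    print(T, n_eff)
    ```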

  5. Large signal S-parameters: modeling and radiation effects in microwave power transistors

    International Nuclear Information System (INIS)

    Graham, E.D. Jr.; Chaffin, R.J.; Gwyn, C.W.

    1973-01-01

    Microwave power transistors are usually characterized by measuring the source and load impedances, efficiency, and power output at a specified frequency and bias condition in a tuned circuit. These measurements provide limited data for circuit design and yield essentially no information concerning broadbanding possibilities. Recently, a method using large signal S-parameters has been developed which provides a rapid and repeatable means for measuring microwave power transistor parameters. These large signal S-parameters have been successfully used to design rf power amplifiers. Attempts at modeling rf power transistors have in the past been restricted to a modified Ebers-Moll procedure with numerous adjustable model parameters. The modified Ebers-Moll model is further complicated by inclusion of package parasitics. In the present paper an exact one-dimensional device analysis code has been used to model the performance of the transistor chip. This code has been integrated into the SCEPTRE circuit analysis code such that chip, package and circuit performance can be coupled together in the analysis. Using this computational tool, rf transistor performance has been examined with particular attention given to the theoretical validity of large-signal S-parameters and the effects of nuclear radiation on device parameters. (auth)

  6. Critical behavior in some D = 1 large-N matrix models

    International Nuclear Information System (INIS)

    Das, S.R.; Dhar, A.; Sengupta, A.M.; Wadia, D.R.

    1990-01-01

    The authors study the critical behavior in D = 1 large-N matrix models. The authors also look at the subleading terms in susceptibility in order to find out the dimensions of some of the operators in the theory

  7. Large-n limit of the Heisenberg model: The decorated lattice and the disordered chain

    International Nuclear Information System (INIS)

    Khoruzhenko, B.A.; Pastur, L.A.; Shcherbina, M.V.

    1989-01-01

    The critical temperature of the generalized spherical model (large-component limit of the classical Heisenberg model) on a cubic lattice, whose every bond is decorated by L spins, is found. When $L \to \infty$, the asymptotics of the temperature is $T_c \sim aL^{-1}$. The reduction of the number of spherical constraints for the model is found to be fairly large. The free energy of the one-dimensional generalized spherical model with random nearest-neighbor interaction is calculated

  8. Large-N limit of the two-Hermitian-matrix model by the hidden BRST method

    International Nuclear Information System (INIS)

    Alfaro, J.

    1993-01-01

    This paper discusses the large-N limit of the two-Hermitian-matrix model in zero dimensions, using the hidden Becchi-Rouet-Stora-Tyutin method. A system of integral equations previously found is solved, showing that it contained the exact solution of the model in leading order of large N

  9. A semiparametric graphical modelling approach for large-scale equity selection.

    Science.gov (United States)

    Liu, Han; Mulvey, John; Zhao, Tianqi

    2016-01-01

    We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large datasets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock dataset and a large 34-year stock dataset. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption.
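
    A key computational trick behind rank-based estimators for latent-Gaussian (elliptical-copula) models is that the latent correlation can be recovered from Kendall's tau without estimating the marginals, via the classical relation $\Sigma_{jk} = \sin(\frac{\pi}{2}\hat{\tau}_{jk})$. The sketch below demonstrates that estimator on synthetic data; it is a generic illustration, not the authors' code.

    ```python
    import numpy as np
    from scipy.stats import kendalltau

    def latent_correlation(X):
        """Rank-based estimate of the latent Gaussian correlation matrix:
        Sigma_jk = sin(pi/2 * tau_jk), robust to monotone marginal
        transformations (the elliptical-copula setting)."""
        _, p = X.shape
        S = np.eye(p)
        for j in range(p):
            for k in range(j + 1, p):
                tau, _ = kendalltau(X[:, j], X[:, k])
                S[j, k] = S[k, j] = np.sin(0.5 * np.pi * tau)
        return S

    rng = np.random.default_rng(3)
    z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=2000)
    X = np.column_stack([np.exp(z[:, 0]), z[:, 1] ** 3])  # monotone-transformed margins
    print(latent_correlation(X))   # recovers ~0.6 despite the transformations
    ```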

  10. Modeling and analysis of large-eddy simulations of particle-laden turbulent boundary layer flows

    KAUST Repository

    Rahman, Mustafa M.; Samtaney, Ravi

    2017-01-01

    The large-eddy simulation (LES) of the particle-laden turbulent boundary layer employs a stretched spiral vortex subgrid-scale model and a virtual wall model similar to the work of Cheng, Pullin & Samtaney (J. Fluid Mech., 2015). This LES model is virtually parameter-free and involves no active filtering of the computed velocity field.

  11. Findings and Challenges in Fine-Resolution Large-Scale Hydrological Modeling

    Science.gov (United States)

    Her, Y. G.

    2017-12-01

    Fine-resolution large-scale (FL) modeling can provide the overall picture of the hydrological cycle and transport while taking into account unique local conditions in the simulation. It can also help develop water resources management plans consistent across spatial scales by describing the spatial consequences of decisions and hydrological events extensively. FL modeling is expected to be common in the near future, as global-scale remotely sensed data are emerging and computing resources have advanced rapidly. Several spatially distributed models are available for hydrological analyses. Some of them rely on numerical methods such as finite difference/element methods (FDM/FEM), which require excessive computing resources (implicit scheme) to manipulate large matrices or small simulation time intervals (explicit scheme) to maintain the stability of the solution, to describe two-dimensional overland processes. Others make unrealistic assumptions such as constant overland flow velocity to reduce the computational loads of the simulation. Thus, simulation efficiency often comes at the expense of precision and reliability in FL modeling. Here, we introduce a new FL continuous hydrological model and its application to four watersheds in different landscapes and sizes, from 3.5 km2 to 2,800 km2, at a spatial resolution of 30 m on an hourly basis. The model provided acceptable accuracy statistics in reproducing hydrological observations made in the watersheds. The modeling outputs, including maps of simulated travel time, runoff depth, soil water content, and groundwater recharge, were animated, visualizing the dynamics of hydrological processes occurring in the watersheds during and between storm events. Findings and challenges were discussed in the context of modeling efficiency, accuracy, and reproducibility, which we found can be improved by employing advanced computing techniques and hydrological understanding, and by using remotely sensed hydrological data.

  12. Large-scale tropospheric transport in the Chemistry-Climate Model Initiative (CCMI) simulations

    Science.gov (United States)

    Orbe, Clara; Yang, Huang; Waugh, Darryn W.; Zeng, Guang; Morgenstern, Olaf; Kinnison, Douglas E.; Lamarque, Jean-Francois; Tilmes, Simone; Plummer, David A.; Scinocca, John F.; Josse, Beatrice; Marecal, Virginie; Jöckel, Patrick; Oman, Luke D.; Strahan, Susan E.; Deushi, Makoto; Tanaka, Taichu Y.; Yoshida, Kohei; Akiyoshi, Hideharu; Yamashita, Yousuke; Stenke, Andreas; Revell, Laura; Sukhodolov, Timofei; Rozanov, Eugene; Pitari, Giovanni; Visioni, Daniele; Stone, Kane A.; Schofield, Robyn; Banerjee, Antara

    2018-05-01

    Understanding and modeling the large-scale transport of trace gases and aerosols is important for interpreting past (and projecting future) changes in atmospheric composition. Here we show that there are large differences in the global-scale atmospheric transport properties among the models participating in the IGAC SPARC Chemistry-Climate Model Initiative (CCMI). Specifically, we find up to 40 % differences in the transport timescales connecting the Northern Hemisphere (NH) midlatitude surface to the Arctic and to Southern Hemisphere high latitudes, where the mean age ranges between 1.7 and 2.6 years. We show that these differences are related to large differences in vertical transport among the simulations, in particular to differences in parameterized convection over the oceans. While stronger convection over NH midlatitudes is associated with slower transport to the Arctic, stronger convection in the tropics and subtropics is associated with faster interhemispheric transport. We also show that the differences among simulations constrained with fields derived from the same reanalysis products are as large as (and in some cases larger than) the differences among free-running simulations, most likely due to larger differences in parameterized convection. Our results indicate that care must be taken when using simulations constrained with analyzed winds to interpret the influence of meteorology on tropospheric composition.

  13. Large-scale tropospheric transport in the Chemistry–Climate Model Initiative (CCMI) simulations

    Directory of Open Access Journals (Sweden)

    C. Orbe

    2018-05-01

    Full Text Available Understanding and modeling the large-scale transport of trace gases and aerosols is important for interpreting past (and projecting future) changes in atmospheric composition. Here we show that there are large differences in the global-scale atmospheric transport properties among the models participating in the IGAC SPARC Chemistry–Climate Model Initiative (CCMI). Specifically, we find up to 40 % differences in the transport timescales connecting the Northern Hemisphere (NH) midlatitude surface to the Arctic and to Southern Hemisphere high latitudes, where the mean age ranges between 1.7 and 2.6 years. We show that these differences are related to large differences in vertical transport among the simulations, in particular to differences in parameterized convection over the oceans. While stronger convection over NH midlatitudes is associated with slower transport to the Arctic, stronger convection in the tropics and subtropics is associated with faster interhemispheric transport. We also show that the differences among simulations constrained with fields derived from the same reanalysis products are as large as (and in some cases larger than) the differences among free-running simulations, most likely due to larger differences in parameterized convection. Our results indicate that care must be taken when using simulations constrained with analyzed winds to interpret the influence of meteorology on tropospheric composition.

  14. An explanatory model of maths achievement: Perceived parental involvement and academic motivation.

    Science.gov (United States)

    Rodríguez, Susana; Piñeiro, Isabel; Gómez-Taibo, Mª L; Regueiro, Bibiana; Estévez, Iris; Valle, Antonio

    2017-05-01

    Although numerous studies have tried to explain performance in maths, very few have deeply explored the relationship between different variables and how they jointly explain mathematical performance. With a sample of 897 students in 5th and 6th grade of Primary Education and using structural equation modeling (SEM), this study analyzes how the perception of parents' beliefs is related to children's beliefs, their involvement in mathematical tasks, and their performance. Perceived parental involvement contributes to children's motivation in mathematics. Direct supervision of students' academic work by parents may increase concern about children's image and ratings, but not their academic performance. In fact, maths achievement depends directly and positively on parents' expectations and children's maths self-efficacy, and negatively on parents' help with tasks and performance goal orientation. Perceived parental involvement contributes to children's motivation in maths essentially by conveying confidence in their abilities and showing interest in their progress and schoolwork.

  15. Involving parents from the start: formative evaluation for a large ...

    African Journals Online (AJOL)

    While HIV prevention research conducted among adolescent populations may encounter parental resistance, the active engagement of parents from inception to trial completion may alleviate opposition. In preparation for implementing a large randomised controlled trial (RCT) examining the efficacy of a behavioural ...

  16. Hydrological models are mediating models

    Science.gov (United States)

    Babel, L. V.; Karssenberg, D.

    2013-08-01

    Despite the increasing role of models in hydrological research and decision-making processes, only a few accounts of the nature and function of models exist in hydrology. Earlier considerations have traditionally been conducted while making a clear distinction between physically-based and conceptual models. A new philosophical account, primarily based on the fields of physics and economics, transcends classes of models and scientific disciplines by considering models as "mediators" between theory and observations. The core of this approach lies in identifying models as (1) being only partially dependent on theory and observations, (2) integrating non-deductive elements in their construction, and (3) carrying the role of instruments of scientific enquiry about both theory and the world. The applicability of this approach to hydrology is evaluated in the present article. Three widely used hydrological models, each showing a different degree of apparent physicality, are confronted with the main characteristics of the "mediating models" concept. We argue that irrespective of their kind, hydrological models depend on both theory and observations, rather than merely on one of these two domains. Their construction additionally involves a large number of miscellaneous, external ingredients, such as past experiences, model objectives, knowledge and preferences of the modeller, as well as hardware and software resources. We show that hydrological models convey the role of instruments in scientific practice by mediating between theory and the world. It follows from these considerations that the traditional distinction between physically-based and conceptual models is necessarily too simplistic and refers at best to the stage at which theory and observations steer model construction. The large variety of ingredients involved in model construction deserves closer attention, as it is rarely explicitly presented in peer-reviewed literature. We believe that devoting…

  17. Simplest simulation model for three-dimensional xenon oscillations in large PWRs

    International Nuclear Information System (INIS)

    Shimazu, Yoichiro

    2004-01-01

    Xenon oscillations in large PWRs are well understood, and no operational problems remain. However, in order to suppress the oscillations effectively, an optimal control strategy is preferable. Generally speaking, such optimality searches based on modern control theory require a large volume of transient core analyses. For example, three-dimensional core calculations are inevitable for the analysis of radial oscillations. From this point of view, a very simple 3-D model is proposed, based on a reactor model of only four points. As the magnitude of xenon oscillations in actual reactor operation should be limited from the viewpoint of safety, the model further assumes that the neutron leakage is small or even constant. It can explicitly use reactor parameters such as reactivity coefficients and control rod worth. Although the model is highly simplified, it predicts oscillation behavior well, in a very short calculation time, even on a PC. The validity of the model in comparison with measured data and its applications are discussed. (author)
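
    For orientation, point models of xenon oscillations are built on the standard iodine-xenon balance equations at each representative point (the textbook form, not necessarily the paper's exact notation), with $\phi$ the neutron flux, $\Sigma_f$ the fission cross section, $\gamma_I, \gamma_X$ the fission yields, $\lambda_I, \lambda_X$ the decay constants, and $\sigma_X$ the xenon absorption cross section:

    $$\frac{dI}{dt} = \gamma_I \Sigma_f \phi - \lambda_I I, \qquad \frac{dX}{dt} = \gamma_X \Sigma_f \phi + \lambda_I I - \lambda_X X - \sigma_X \phi X$$

    Coupling a handful of such points through reactivity feedback is what keeps a four-point model small enough to run optimal-control searches quickly on a PC.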

  18. Simplified Model for the Population Dynamics Involved in a Malaria Crisis

    International Nuclear Information System (INIS)

    Kenfack-Jiotsa, A.; Fotsa-Ngaffo, F.

    2009-12-01

    We adapt a simple predator-prey model to the populations involved in a malaria crisis. The study is restricted to the bloodstream inside the human body, excluding the liver. In particular, we look at the dynamics of the malaria parasites ('merozoites') and their interaction with the blood components, more specifically the red blood cells (RBC) and the immune response grouped under the white blood cells (WBC). The stability analysis of the system reveals the ratio of WBC to RBC as an important practical direction to investigate, since it is a fundamental parameter characterizing the stable regions. The model numerically reproduces a wide range of possible features of the disease. Even in its simplified form, the model not only recovers well-known results but also predicts possible hidden phenomena and an interesting clinical feature of a malaria crisis. (author)
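
    A predator-prey adaptation of the kind described can be sketched as a small ODE system in which merozoites "prey" on RBCs and are removed by WBCs. The equations and rate constants below are a generic Lotka-Volterra-style illustration chosen for readability, not the authors' actual model:

    ```python
    import numpy as np

    def step(state, dt, a=0.2, b=0.01, c=0.05, d=0.1, e=0.02):
        """One Euler step of a toy merozoite (M) / red blood cell (R) /
        white blood cell (W) system, Lotka-Volterra style."""
        R, M, W = state
        dR = a * R - b * R * M                   # RBC production minus invasion losses
        dM = c * b * R * M - d * M - e * M * W   # replication minus death/clearance
        dW = 0.0                                 # immune level held constant here
        return np.array([R + dt * dR, M + dt * dM, W + dt * dW])

    state = np.array([100.0, 1.0, 5.0])
    for _ in range(int(50 / 0.01)):
        state = step(state, 0.01)
    print(state)   # long-run levels depend strongly on the W/R balance
    ```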

  19. Two-group modeling of interfacial area transport in large diameter channels

    Energy Technology Data Exchange (ETDEWEB)

    Schlegel, J.P., E-mail: schlegelj@mst.edu [Department of Mining and Nuclear Engineering, Missouri University of Science and Technology, 301 W 14th St., Rolla, MO 65409 (United States); Hibiki, T.; Ishii, M. [School of Nuclear Engineering, Purdue University, 400 Central Dr., West Lafayette, IN 47907 (United States)

    2015-11-15

    Highlights: • Implemented updated constitutive models and benchmarking method for IATE in large pipes. • New model and method with new data improved the overall IATE prediction for large pipes. • Not all conditions were well predicted, showing that further development is still required. - Abstract: A comparison of the existing two-group interfacial area transport equation source and sink terms for large diameter channels with recently collected interfacial area concentration measurements (Schlegel et al., 2012, 2014, Int. J. Heat Fluid Flow 47, 42) has indicated that the model does not perform well in predicting interfacial area transport outside of the range of flow conditions used in the original benchmarking effort. In order to reduce the error in the prediction of interfacial area concentration by the interfacial area transport equation, several constitutive relations have been updated, including the turbulence model and relative velocity correlation. The transport equation utilizing these updated models has been modified by updating the inter-group transfer and Group 2 coalescence and disintegration kernels using an expanded range of experimental conditions extending to pipe sizes of 0.304 m [12 in.], gas velocities of up to nearly 11 m/s [36.1 ft/s] and liquid velocities of up to 2 m/s [6.56 ft/s], as well as conditions with both bubbly flow and cap-bubbly flow injection (Schlegel et al., 2012, 2014). The modifications to the transport equation have resulted in a decrease in the RMS error for void fraction and interfacial area concentration from 17.32% to 12.3% and from 21.26% to 19.6%, respectively. The combined RMS error, for both void fraction and interfacial area concentration, is below 15% for most of the experiments used in the comparison, a distinct improvement over the previous version of the model.

  20. Enhanced ICP for the Registration of Large-Scale 3D Environment Models: An Experimental Study

    Directory of Open Access Journals (Sweden)

    Jianda Han

    2016-02-01

    Full Text Available One of the main applications of mobile robots is the large-scale perception of the outdoor environment, and one of its main challenges is fusing environmental data obtained by multiple robots, especially heterogeneous robots. This paper proposes an enhanced iterative closest point (ICP) method for the fast and accurate registration of 3D environmental models. First, a hierarchical searching scheme is combined with the octree-based ICP algorithm. Second, an early-warning mechanism is used to detect the local-minimum problem. Third, a heuristic escape scheme based on sampled potential transformation vectors is used to avoid local minima and achieve optimal registration. Experiments involving one unmanned aerial vehicle and one unmanned surface vehicle were conducted to verify the proposed technique. The experimental results were compared with those of normal ICP registration algorithms to demonstrate the superior performance of the proposed method.
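
    The core ICP loop that such enhancements build on alternates nearest-neighbor matching with a closed-form rigid alignment (the Kabsch/SVD solution). The sketch below shows that baseline loop only; the paper's octree hierarchy, early-warning mechanism, and heuristic escape are omitted, and the helper names are our own.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid_transform(P, Q):
        """Least-squares rotation R and translation t mapping P onto Q (Kabsch)."""
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        return R, cq - R @ cp

    def icp(source, target, iters=30):
        """Baseline ICP: match each source point to its nearest target point,
        then align; repeat for a fixed iteration budget."""
        tree = cKDTree(target)
        src = source.copy()
        for _ in range(iters):
            _, idx = tree.query(src)
            R, t = best_rigid_transform(src, target[idx])
            src = src @ R.T + t
        return src

    rng = np.random.default_rng(4)
    cloud = rng.normal(size=(500, 3))
    shifted = cloud + np.array([0.3, -0.2, 0.1])       # translated copy
    print(np.abs(icp(shifted, cloud) - cloud).max())   # ~0 after registration
    ```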

  1. Using radar altimetry to update a large-scale hydrological model of the Brahmaputra river basin

    DEFF Research Database (Denmark)

    Finsen, F.; Milzow, Christian; Smith, R.

    2014-01-01

    Measurements of river and lake water levels from space-borne radar altimeters (past missions include ERS, Envisat, Jason, Topex) are useful for calibration and validation of large-scale hydrological models in poorly gauged river basins. Altimetry data availability over the downstream reaches...... of the Brahmaputra is excellent (17 high-quality virtual stations from ERS-2, 6 from Topex and 10 from Envisat are available for the Brahmaputra). In this study, altimetry data are used to update a large-scale Budyko-type hydrological model of the Brahmaputra river basin in real time. Altimetry measurements...... improved model performance considerably. The Nash-Sutcliffe model efficiency increased from 0.77 to 0.83. Real-time river basin modelling using radar altimetry has the potential to improve the predictive capability of large-scale hydrological models elsewhere on the planet....
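
    The Nash-Sutcliffe efficiency quoted (0.77 rising to 0.83) compares model error against the variance of the observations; 1 is a perfect fit and 0 means the model is no better than the observed mean. A minimal implementation with made-up discharge numbers:

    ```python
    import numpy as np

    def nash_sutcliffe(observed, simulated):
        """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
        observed, simulated = np.asarray(observed), np.asarray(simulated)
        return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
            (observed - observed.mean()) ** 2)

    obs = np.array([120.0, 150.0, 310.0, 480.0, 260.0])   # hypothetical discharges
    sim = np.array([110.0, 170.0, 290.0, 500.0, 240.0])
    print(nash_sutcliffe(obs, sim))   # close to 1 for a good fit
    ```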

  2. RELAP5/SCDAPSIM model development for AP1000 and verification for large break LOCA

    Energy Technology Data Exchange (ETDEWEB)

    Trivedi, A.K. [Nuclear Engineering and Technology Program, Indian Institute of Technology, Kanpur 208016 (India); Allison, C. [Innovative Systems Software, Idaho Falls, ID 83406 (United States); Khanna, A., E-mail: akhanna@iitk.ac.in [Nuclear Engineering and Technology Program, Indian Institute of Technology, Kanpur 208016 (India); Munshi, P. [Nuclear Engineering and Technology Program, Indian Institute of Technology, Kanpur 208016 (India)

    2016-08-15

    Highlights: • RELAP5/SCDAPSIM model of AP1000 has been developed. • Analysis involves an LBLOCA (double-ended guillotine break) study in the cold leg. • Results are compared with those of WCOBRA–TRAC and TRACE. • Concluded that PCT does not violate the safety criterion of 1477 K. - Abstract: The AP1000 is a Westinghouse 2-loop pressurized water reactor (PWR) with all emergency core cooling systems based on natural circulation. Its core design is very similar to a 3-loop PWR with 157 fuel assemblies. Westinghouse has reported the results of its safety analysis in its design control document (DCD) for a large break loss of coolant accident (LOCA) using WCOBRA/TRAC and for a small break LOCA using NOTRUMP. The current study involves the development of a representative RELAP5/SCDAPSIM model for AP1000 based on publicly available data and its verification for a double-ended cold leg (DECL) break in one of the cold legs in the loop containing core makeup tanks (CMT). The calculated RELAP5/SCDAPSIM results have been compared to publicly available WCOBRA–TRAC and TRACE results of a DECL break in AP1000. The objective of this study is to benchmark the thermal hydraulic model for later severe accident analyses using the 2D SCDAP fuel rod component in place of the RELAP5 heat structures which currently represent the fuel rods. Results from this comparison provide sufficient confidence in the model, which will be used for further studies such as a station blackout. The primary circuit pumps, pressurizer, and steam generators (including the necessary secondary side) are modeled using RELAP5 components following all the necessary recommendations for nodalization. The core has been divided into 6 radial rings and 10 axial nodes. For the RELAP5 thermal hydraulic calculation, the six groups of fuel assemblies have been modeled as pipe components with equivalent flow areas. The fuel, including the gap and cladding, is modeled as a 1-D heat structure. The final input deck achieved…

  3. Mixed-signal instrumentation for large-signal device characterization and modelling

    NARCIS (Netherlands)

    Marchetti, M.

    2013-01-01

    This thesis concentrates on the development of advanced large-signal measurement and characterization tools to support technology development, model extraction and validation, and power amplifier (PA) designs that address the newly introduced third and fourth generation (3G and 4G) wideband

  4. An empirical velocity scale relation for modelling a design of large mesh pelagic trawl

    NARCIS (Netherlands)

    Ferro, R.S.T.; Marlen, van B.; Hansen, K.E.

    1996-01-01

    Physical models of fishing nets are used in fishing technology research at scales of 1:40 or smaller. As with all modelling involving fluid flow, a set of rules is required to determine the geometry of the model and its velocity relative to the water. Appropriate rules ensure that the model is
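
    One classical similarity rule used in free-surface hydraulic model testing is Froude scaling, under which gravity-dominated flow behaviour is preserved when the model velocity scales with the square root of the geometric scale $\lambda$. It is stated here only as background; the paper derives an empirical relation specific to large-mesh pelagic trawls, which need not coincide with it.

    $$\frac{V_{\mathrm{model}}}{V_{\mathrm{full}}} = \sqrt{\lambda}, \qquad \lambda = \frac{L_{\mathrm{model}}}{L_{\mathrm{full}}}$$

    At a 1:40 scale, $\lambda = 1/40$, giving model speeds of roughly 16 % of full-scale speeds under pure Froude scaling.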

  5. The Effects of Group Relaxation Training/Large Muscle Exercise, and Parental Involvement on Attention to Task, Impulsivity, and Locus of Control among Hyperactive Boys.

    Science.gov (United States)

    Porter, Sally S.; Omizo, Michael M.

    1984-01-01

    The study examined the effects of group relaxation training/large muscle exercise and parental involvement on attention to task, impulsivity, and locus of control among 34 hyperactive boys. Following treatment both experimental groups recorded significantly higher attention to task, lower impulsivity, and lower locus of control scores. (Author/CL)

  6. Large-deflection statics analysis of active cardiac catheters through co-rotational modelling.

    Science.gov (United States)

    Peng Qi; Chen Qiu; Mehndiratta, Aadarsh; I-Ming Chen; Haoyong Yu

    2016-08-01

    This paper presents a co-rotational concept for the large-deflection formulation of cardiac catheters. Using this approach, the catheter is first discretized with a number of equal-length beam elements and nodes, and the rigid-body motions of an individual beam element are separated from its deformations. It is therefore adequate to model arbitrarily large deflections of a catheter with linear elastic analysis at the local element level. A novel design of an active cardiac catheter of 9 Fr in diameter is first proposed, based on contra-rotating double-helix patterns and improved from previous prototypes. The modelling section is followed by MATLAB simulations of various deflections when different types of loads are exerted on the catheter, which proves the feasibility of the presented modelling approach. To the best knowledge of the authors, this is the first use of this methodology for large-deflection static analysis of catheters, which will enable more accurate control of robot-assisted cardiac catheterization procedures. Future work will include further experimental validation.

  7. A model-based eco-routing strategy for electric vehicles in large urban networks

    OpenAIRE

    De Nunzio, Giovanni; Thibault, Laurent; Sciarretta, Antonio

    2016-01-01

    A novel eco-routing navigation strategy and energy consumption modeling approach for electric vehicles are presented in this work. Speed fluctuations and road network infrastructure have a large impact on vehicular energy consumption. Neglecting these effects may lead to large errors in eco-routing navigation, which could trivially select the route with the lowest average speed. We propose an energy consumption model that considers both accelerations and impact of the ...

  8. Effects of uncertainty in model predictions of individual tree volume on large area volume estimates

    Science.gov (United States)

    Ronald E. McRoberts; James A. Westfall

    2014-01-01

    Forest inventory estimates of tree volume for large areas are typically calculated by adding model predictions of volumes for individual trees. However, the uncertainty in the model predictions is generally ignored with the result that the precision of the large area volume estimates is overestimated. The primary study objective was to estimate the effects of model...

  9. Perturbation theory instead of large scale shell model calculations

    International Nuclear Information System (INIS)

    Feldmeier, H.; Mankos, P.

    1977-01-01

    Results of large-scale shell model calculations for (sd)-shell nuclei are compared with perturbation theory, which provides an excellent approximation when the SU(3) basis is used as a starting point. The results indicate that a perturbation theory treatment in an SU(3) basis including $2\hbar\omega$ excitations should be preferable to a full diagonalization within the (sd)-shell. (orig.)

  10. Acetylome analysis reveals the involvement of lysine acetylation in photosynthesis and carbon metabolism in the model cyanobacterium Synechocystis sp. PCC 6803.

    Science.gov (United States)

    Mo, Ran; Yang, Mingkun; Chen, Zhuo; Cheng, Zhongyi; Yi, Xingling; Li, Chongyang; He, Chenliu; Xiong, Qian; Chen, Hui; Wang, Qiang; Ge, Feng

    2015-02-06

    Cyanobacteria are the oldest known life form inhabiting Earth and the only prokaryotes capable of performing oxygenic photosynthesis. Synechocystis sp. PCC 6803 (Synechocystis) is a model cyanobacterium used extensively in research on photosynthesis and environmental adaptation. Posttranslational protein modification by lysine acetylation plays a critical regulatory role in both eukaryotes and prokaryotes; however, its extent and function in cyanobacteria remain unexplored. Herein, we performed a global acetylome analysis on Synechocystis through peptide prefractionation, antibody enrichment, and high accuracy LC-MS/MS analysis; identified 776 acetylation sites on 513 acetylated proteins; and functionally categorized them into an interaction map showing their involvement in various biological processes. Consistent with previous reports, a large fraction of the acetylation sites are present on proteins involved in cellular metabolism. Interestingly, for the first time, many proteins involved in photosynthesis, including the subunits of phycocyanin (CpcA, CpcB, CpcC, and CpcG) and allophycocyanin (ApcA, ApcB, ApcD, ApcE, and ApcF), were found to be lysine acetylated, suggesting that lysine acetylation may play regulatory roles in the photosynthesis process. Six identified acetylated proteins associated with photosynthesis and carbon metabolism were further validated by immunoprecipitation and Western blotting. Our data provide the first global survey of lysine acetylation in cyanobacteria and reveal previously unappreciated roles of lysine acetylation in the regulation of photosynthesis. The provided data set may serve as an important resource for the functional analysis of lysine acetylation in cyanobacteria and facilitate the elucidation of the entire metabolic networks and photosynthesis process in this model cyanobacterium.

  11. Expected Utility and Entropy-Based Decision-Making Model for Large Consumers in the Smart Grid

    Directory of Open Access Journals (Sweden)

    Bingtuan Gao

    2015-09-01

    Full Text Available In the smart grid, large consumers can procure electricity energy from various power sources to meet their load demands. To maximize its profit, each large consumer needs to decide its energy procurement strategy under risks such as price fluctuations in the spot market and power quality issues. In this paper, an electric energy procurement decision-making model is studied for large consumers who can obtain their electric energy from the spot market, generation companies under bilateral contracts, the options market and self-production facilities in the smart grid. Considering the effect of unqualified electric energy, the profit model of large consumers is formulated. In order to measure the risks from price fluctuations and power quality, expected utility and entropy are employed. Consequently, the expected utility and entropy decision-making model is presented, which helps large consumers to minimize their expected cost of electricity procurement while properly limiting the volatility of this cost. Finally, a case study verifies the feasibility and effectiveness of the proposed model.
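
    The record describes the model only qualitatively; the sketch below is a minimal, hypothetical rendering of the bi-criteria idea, scoring a procurement strategy by expected (exponential, CARA) utility of profit while penalizing uncertainty through the Shannon entropy of the scenario distribution. Scenario values, probabilities, the risk-aversion coefficient and the weight are all invented.

```python
import numpy as np

# Hypothetical profit scenarios for one procurement mix, e.g. sampled
# over spot-price and power-quality outcomes.
profits = np.array([120.0, 95.0, 140.0, 60.0, 110.0])
probs = np.array([0.3, 0.2, 0.2, 0.1, 0.2])

def expected_utility(x, p, risk_aversion=0.01):
    """Expected exponential (CARA) utility of profit."""
    return np.sum(p * (1.0 - np.exp(-risk_aversion * x)))

def entropy(p):
    """Shannon entropy of the scenario distribution, a proxy for risk."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Simple bi-criteria score: prefer high expected utility, low entropy;
# a real model would optimize the procurement mix against this score.
weight = 0.5
print(expected_utility(profits, probs) - weight * entropy(probs))
```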

  12. Material model for non-linear finite element analyses of large concrete structures

    NARCIS (Netherlands)

    Engen, Morten; Hendriks, M.A.N.; Øverli, Jan Arve; Åldstedt, Erik; Beushausen, H.

    2016-01-01

    A fully triaxial material model for concrete was implemented in a commercial finite element code. The only required input parameter was the cylinder compressive strength. The material model was suitable for non-linear finite element analyses of large concrete structures. The importance of including

  13. The pig as a large animal model for influenza a virus infection

    DEFF Research Database (Denmark)

    Skovgaard, Kerstin; Brogaard, Louise; Larsen, Lars Erik

    It is increasingly realized that large animal models like the pig are exceptionally human-like and serve as excellent models for disease and inflammation. Pigs are fully susceptible to human influenza and share many similarities with humans regarding lung physiology and innate immune cell...

  14. Large scale solar district heating. Evaluation, modelling and designing - Appendices

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

    The appendices present the following: A) Cad-drawing of the Marstal CSHP design. B) Key values - large-scale solar heating in Denmark. C) Monitoring - a system description. D) WMO-classification of pyranometers (solarimeters). E) The computer simulation model in TRNSYS. F) Selected papers from the author. (EHS)

  15. Large Scale Skill in Regional Climate Modeling and the Lateral Boundary Condition Scheme

    Science.gov (United States)

    Veljović, K.; Rajković, B.; Mesinger, F.

    2009-04-01

    Several points are made concerning the somewhat controversial issue of regional climate modeling: should a regional climate model (RCM) be expected to maintain the large-scale skill of the driver global model that is supplying its lateral boundary condition (LBC)? Given that this is normally desired, is it able to do so without help via the fairly popular large-scale nudging? Specifically, without such nudging, will the RCM kinetic energy necessarily decrease with time compared to that of the driver model or analysis data, as suggested by a study using the Regional Atmospheric Modeling System (RAMS)? Finally, can the lateral boundary condition scheme make a difference: is the almost universally used but somewhat costly relaxation scheme necessary for desirable RCM performance? Experiments are made to explore these questions by running the Eta model in two versions differing in the lateral boundary scheme used. One of these schemes is the traditional relaxation scheme; the other is the Eta model scheme, in which information is used at the outermost boundary only, and not all variables are prescribed at the outflow boundary. Forecast lateral boundary conditions are used, and results are verified against the analyses. Thus, the skill of the two RCM forecasts can be and is compared not only against each other but also against that of the driver global forecast. A novel verification method is used in the manner of customary precipitation verification: forecast spatial wind speed distributions are verified against analyses by calculating bias-adjusted equitable threat scores and bias scores for wind speeds greater than chosen thresholds. In this way, focusing on a high wind speed value in the upper troposphere, we suggest that verification of large-scale features can be done in a manner that may be more physically meaningful than verification via spectral decomposition, the standard RCM verification method. The results we have at this point are somewhat
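
    The scores named in the abstract come from a standard 2x2 contingency table of threshold exceedances. The helper below computes the conventional bias score and equitable threat score; the bias adjustment the authors apply is a further step that is not reproduced here, and the counts in the example are invented.

```python
def bias_and_ets(hits, false_alarms, misses, correct_negatives):
    """Categorical verification scores for events such as
    'wind speed > threshold' (forecast vs. verifying analysis)."""
    total = hits + false_alarms + misses + correct_negatives
    forecast_yes = hits + false_alarms
    observed_yes = hits + misses
    bias = forecast_yes / observed_yes                 # >1: over-forecast
    hits_random = forecast_yes * observed_yes / total  # chance-level hits
    ets = (hits - hits_random) / (hits + false_alarms + misses - hits_random)
    return bias, ets

# Hypothetical counts of upper-tropospheric grid points above a threshold.
print(bias_and_ets(hits=420, false_alarms=130, misses=90,
                   correct_negatives=9360))
```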

  16. Model Experiments for the Determination of Airflow in Large Spaces

    DEFF Research Database (Denmark)

    Nielsen, Peter V.

    Model experiments are one of the methods used for the determination of airflow in large spaces. This paper will discuss the formation of the governing dimensionless numbers. It is shown that experiments with a reduced scale often will necessitate a fully developed turbulence level of the flow. Details of the flow from supply openings are very important for the determination of room air distribution. It is in some cases possible to make a simplified supply opening for the model experiment.

  17. Climateprediction.com: Public Involvement, Multi-Million Member Ensembles and Systematic Uncertainty Analysis

    Science.gov (United States)

    Stainforth, D. A.; Allen, M.; Kettleborough, J.; Collins, M.; Heaps, A.; Stott, P.; Wehner, M.

    2001-12-01

    The climateprediction.com project is preparing to carry out the first systematic uncertainty analysis of climate forecasts using large ensembles of GCM climate simulations. This will be done by involving schools, businesses and members of the public, and utilizing the novel technology of distributed computing. Each participant will be asked to run one member of the ensemble on their PC. The model used will initially be the UK Met Office's Unified Model (UM). It will be run under Windows and software will be provided to enable those involved to view their model output as it develops. The project will use this method to carry out large perturbed physics GCM ensembles and thereby analyse the uncertainty in the forecasts from such models. Each participant/ensemble member will therefore have a version of the UM in which certain aspects of the model physics have been perturbed from their default values. Of course the non-linear nature of the system means that it will be necessary to look not just at perturbations to individual parameters in specific schemes, such as the cloud parameterization, but also to the many combinations of perturbations. This rapidly leads to the need for very large, perhaps multi-million member ensembles, which could only be undertaken using the distributed computing methodology. The status of the project will be presented and the Windows client will be demonstrated. In addition, initial results will be presented from beta test runs using a demo release for Linux PCs and Alpha workstations. Although small by comparison to the whole project, these pilot results constitute a 20-50 member perturbed physics climate ensemble with results indicating how climate sensitivity can be substantially affected by individual parameter values in the cloud scheme.

  18. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    This paper presents a comprehensive approach to sensitivity and uncertainty analysis of large-scale computer models that is analytic (deterministic) in principle and that is firmly based on the model equations. The theory and application of two systems based upon computer calculus, GRESS and ADGEN, are discussed relative to their role in calculating model derivatives and sensitivities without a prohibitive initial manpower investment. Storage and computational requirements for these two systems are compared for a gradient-enhanced version of the PRESTO-II computer model. A Deterministic Uncertainty Analysis (DUA) method that retains the characteristics of analytically computing result uncertainties based upon parameter probability distributions is then introduced and results from recent studies are shown. 29 refs., 4 figs., 1 tab
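
    GRESS and ADGEN obtain derivatives by differentiating the model source code itself; as a rough, hypothetical stand-in, the sketch below estimates the derivatives of a toy response by central differences and then propagates parameter variances to the result variance to first order, which is the essence of the deterministic uncertainty analysis described above.

```python
import numpy as np

def model(p):
    """Toy stand-in response; computer calculus systems differentiate
    the actual application code instead."""
    return p[0] ** 2 * np.exp(-p[1]) + 3.0 * p[2]

def sensitivities(f, p, rel_step=1e-6):
    """Central-difference estimates of dR/dp_i."""
    p = np.asarray(p, dtype=float)
    grads = np.empty_like(p)
    for i in range(p.size):
        h = rel_step * max(abs(p[i]), 1.0)
        up, dn = p.copy(), p.copy()
        up[i] += h
        dn[i] -= h
        grads[i] = (f(up) - f(dn)) / (2.0 * h)
    return grads

p0 = np.array([2.0, 0.5, 1.0])
var_p = np.array([0.01, 0.04, 0.09])   # assumed parameter variances
g = sensitivities(model, p0)
var_R = np.sum(g ** 2 * var_p)         # first-order variance propagation
print(g, var_R)
```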

  19. Simulation of hydrogen release and combustion in large scale geometries: models and methods

    International Nuclear Information System (INIS)

    Beccantini, A.; Dabbene, F.; Kudriakov, S.; Magnaud, J.P.; Paillere, H.; Studer, E.

    2003-01-01

    The simulation of H2 distribution and combustion in confined geometries such as nuclear reactor containments is a challenging task from the point of view of numerical simulation, as it involves quite disparate length and time scales which need to be resolved appropriately and efficiently. CEA is involved in the development and validation of codes to model such problems, for external clients such as IRSN (TONUS code) and Technicatome (NAUTILUS code), and for its own safety studies. This paper provides an overview of the physical and numerical models developed for such applications, as well as some insight into the current research topics being pursued. Examples of H2 mixing and combustion simulations are given. (authors)

  20. Modeling Resource Utilization of a Large Data Acquisition System

    CERN Document Server

    AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger

    2017-01-01

    The ATLAS 'Phase-II' upgrade, scheduled to start in 2024, will significantly change the requirements under which the data-acquisition system operates. The input data rate, currently fixed around 150 GB/s, is anticipated to reach 5 TB/s. In order to deal with the challenging conditions, and exploit the capabilities of newer technologies, a number of architectural changes are under consideration. Of particular interest is a new component, known as the Storage Handler, which will provide a large buffer area decoupling real-time data taking from event filtering. Dynamic operational models of the upgraded system can be used to identify the required resources and to select optimal techniques. In order to achieve a robust and dependable model, the current data-acquisition architecture has been used as a test case. This makes it possible to verify and calibrate the model against real operation data. Such a model can then be evolved toward the future ATLAS Phase-II architecture. In this paper we introduce the current ...

  1. Modelling Resource Utilization of a Large Data Acquisition System

    CERN Document Server

    Santos, Alejandro; The ATLAS collaboration

    2017-01-01

    The ATLAS 'Phase-II' upgrade, scheduled to start in 2024, will significantly change the requirements under which the data-acquisition system operates. The input data rate, currently fixed around 150 GB/s, is anticipated to reach 5 TB/s. In order to deal with the challenging conditions, and exploit the capabilities of newer technologies, a number of architectural changes are under consideration. Of particular interest is a new component, known as the Storage Handler, which will provide a large buffer area decoupling real-time data taking from event filtering. Dynamic operational models of the upgraded system can be used to identify the required resources and to select optimal techniques. In order to achieve a robust and dependable model, the current data-acquisition architecture has been used as a test case. This makes it possible to verify and calibrate the model against real operation data. Such a model can then be evolved toward the future ATLAS Phase-II architecture. In this paper we introduce the current ...

  2. Large animal and primate models of spinal cord injury for the testing of novel therapies.

    Science.gov (United States)

    Kwon, Brian K; Streijger, Femke; Hill, Caitlin E; Anderson, Aileen J; Bacon, Mark; Beattie, Michael S; Blesch, Armin; Bradbury, Elizabeth J; Brown, Arthur; Bresnahan, Jacqueline C; Case, Casey C; Colburn, Raymond W; David, Samuel; Fawcett, James W; Ferguson, Adam R; Fischer, Itzhak; Floyd, Candace L; Gensel, John C; Houle, John D; Jakeman, Lyn B; Jeffery, Nick D; Jones, Linda Ann Truett; Kleitman, Naomi; Kocsis, Jeffery; Lu, Paul; Magnuson, David S K; Marsala, Martin; Moore, Simon W; Mothe, Andrea J; Oudega, Martin; Plant, Giles W; Rabchevsky, Alexander Sasha; Schwab, Jan M; Silver, Jerry; Steward, Oswald; Xu, Xiao-Ming; Guest, James D; Tetzlaff, Wolfram

    2015-07-01

    Large animal and primate models of spinal cord injury (SCI) are being increasingly utilized for the testing of novel therapies. While these represent intermediary animal species between rodents and humans and offer the opportunity to pose unique research questions prior to clinical trials, the role that such large animal and primate models should play in the translational pipeline is unclear. In this initiative we engaged members of the SCI research community in a questionnaire and round-table focus group discussion around the use of such models. Forty-one SCI researchers from academia, industry, and granting agencies were asked to complete a questionnaire about their opinion regarding the use of large animal and primate models in the context of testing novel therapeutics. The questions centered around how large animal and primate models of SCI would be best utilized in the spectrum of preclinical testing, and how much testing in rodent models was warranted before employing these models. Further questions were posed at a focus group meeting attended by the respondents. The group generally felt that large animal and primate models of SCI serve a potentially useful role in the translational pipeline for novel therapies, and that the rational use of these models would depend on the type of therapy and specific research question being addressed. While testing within these models should not be mandatory, the detection of beneficial effects using these models lends additional support for translating a therapy to humans. These models provide an opportunity to evaluate and refine surgical procedures prior to use in humans, and to assess safety and bio-distribution in a spinal cord more similar in size and anatomy to that of humans. Our results reveal that while many feel that these models are valuable in the testing of novel therapies, important questions remain unanswered about how they should be used and how data derived from them should be interpreted. Copyright © 2015 Elsevier

  3. Assessing the economic impact of paternal involvement: a comparison of the generalized linear model versus decision analysis trees.

    Science.gov (United States)

    Salihu, Hamisu M; Salemi, Jason L; Nash, Michelle C; Chandler, Kristen; Mbah, Alfred K; Alio, Amina P

    2014-08-01

    Lack of paternal involvement has been shown to be associated with adverse pregnancy outcomes, including infant morbidity and mortality, but the impact on health care costs is unknown. Various methodological approaches have been used in cost minimization and cost-effectiveness analyses, and it remains unclear how cost estimates vary according to the analytic strategy adopted. We illustrate a methodological comparison of decision analysis modeling and generalized linear modeling (GLM) techniques using a case study that assesses the cost-effectiveness of potential father involvement interventions. We conducted a 12-year retrospective cohort study using a statewide enhanced maternal-infant database that contains both clinical and nonclinical information. A missing name for the father on the infant's birth certificate was used as a proxy for lack of paternal involvement, the main exposure of this study. Using decision analysis modeling and GLM, we compared all infant inpatient hospitalization costs over the first year of life. Costs were calculated from hospital charges using department-level cost-to-charge ratios and were adjusted for inflation. In our cohort of 2,243,891 infants, 9.2% had a father uninvolved during pregnancy. Lack of paternal involvement was associated with higher rates of preterm birth, small-for-gestational-age birth, and infant morbidity and mortality. Both analytic approaches estimate significantly higher per-infant costs for father-uninvolved pregnancies (decision analysis model: $1,827, GLM: $1,139). This paper provides sufficient evidence that healthcare costs could be significantly reduced through enhanced father involvement during pregnancy, and buttresses the call for a national program to involve fathers in antenatal care.
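
    For the GLM arm of such a comparison, a common concrete choice for right-skewed hospitalization costs is a Gamma family with a log link. The sketch below fits that model with statsmodels on fabricated data; the variable names, effect sizes and sample are invented for illustration and are not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
uninvolved = rng.binomial(1, 0.09, n)                 # proxy exposure
preterm = rng.binomial(1, 0.08 + 0.04 * uninvolved)   # mediating outcome
mu = np.exp(7.5 + 0.25 * uninvolved + 1.2 * preterm)  # mean cost
cost = rng.gamma(shape=2.0, scale=mu / 2.0)           # skewed costs
df = pd.DataFrame(dict(cost=cost, uninvolved=uninvolved, preterm=preterm))

# Gamma GLM with log link, the usual specification for cost data.
fit = smf.glm("cost ~ uninvolved + preterm", data=df,
              family=sm.families.Gamma(link=sm.families.links.Log())).fit()
print(fit.summary())
```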

  4. REIONIZATION ON LARGE SCALES. I. A PARAMETRIC MODEL CONSTRUCTED FROM RADIATION-HYDRODYNAMIC SIMULATIONS

    International Nuclear Information System (INIS)

    Battaglia, N.; Trac, H.; Cen, R.; Loeb, A.

    2013-01-01

    We present a new method for modeling inhomogeneous cosmic reionization on large scales. Utilizing high-resolution radiation-hydrodynamic simulations with 2048³ dark matter particles, 2048³ gas cells, and 17 billion adaptive rays in a L = 100 Mpc h⁻¹ box, we show that the density and reionization redshift fields are highly correlated on large scales (≳ 1 Mpc h⁻¹). This correlation can be statistically represented by a scale-dependent linear bias. We construct a parametric function for the bias, which is then used to filter any large-scale density field to derive the corresponding spatially varying reionization redshift field. The parametric model has three free parameters that can be reduced to one free parameter when we fit the two bias parameters to simulation results. We can differentiate degenerate combinations of the bias parameters by combining results for the global ionization histories and the correlation length between ionized regions. Unlike previous semi-analytic models, the evolution of the reionization redshift field in our model is directly compared cell by cell against simulations and performs well in all tests. Our model maps the high-resolution, intermediate-volume radiation-hydrodynamic simulations onto lower-resolution, larger-volume N-body simulations (≳ 2 Gpc h⁻¹) in order to make mock observations and theoretical predictions
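
    A hedged sketch of the filtering step: a density cube is multiplied in Fourier space by a scale-dependent linear bias and transformed back into a reionization-redshift field. The functional form below is one reading of the paper's parametric bias, and the parameter values, box size and mean redshift are illustrative, not the fitted ones.

```python
import numpy as np

def bias(k, b0=0.6, k0=0.2, alpha=0.6):
    """Scale-dependent linear bias, b(k) = b0 / (1 + k/k0)^alpha."""
    return b0 / (1.0 + k / k0) ** alpha

def zreion_field(delta, box_size, z_mean=8.0):
    """z_re(x) = z_mean + (1 + z_mean) * F^-1[ b(k) * delta(k) ]."""
    n = delta.shape[0]
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx ** 2 + ky ** 2 + kz ** 2)
    kmag[0, 0, 0] = 1.0                    # placeholder; mode zeroed below
    dk = np.fft.fftn(delta) * bias(kmag)
    dk[0, 0, 0] = 0.0                      # do not shift the mean
    return z_mean + (1.0 + z_mean) * np.fft.ifftn(dk).real

# Toy Gaussian density cube standing in for an N-body density field.
delta = np.random.default_rng(0).normal(0.0, 0.1, (64, 64, 64))
print(zreion_field(delta, box_size=100.0).mean())
```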

  5. Involving regional expertise in nationwide modeling for adequate prediction of climate change effects on different demands for fresh water

    Science.gov (United States)

    de Lange, W. J.

    2014-05-01

    Wim J. de Lange, Geert F. Prinsen, Jacco H. Hoogewoud, Ab A. Veldhuizen, Joachim Hunink, Erik F.W. Ruijgh, Timo Kroon. Nationwide modeling aims to produce a balanced distribution of climate change effects (e.g. harm to crops) and possible compensation (e.g. volumes of fresh water) based on consistent calculation. The present work is based on the Netherlands Hydrological Instrument (NHI, www.nhi.nu), a national, integrated, hydrological model that simulates the distribution, flow and storage of all water in the surface water and groundwater systems. The instrument is developed to assess the impact of water use on the land surface (sprinkling crops, drinking water) and in surface water (navigation, cooling). The regional expertise involved in the development of the NHI comes from all parties involved in the use, production and management of water, such as waterboards, drinking water supply companies, provinces, NGOs, and so on. Adequate prediction implies that the model computes changes of the order of magnitude that is relevant to the effects. In scenarios related to drought, adequate prediction applies to the water demand and the hydrological effects during average, dry, very dry and extremely dry periods. The NHI acts as a part of the so-called Deltamodel (www.deltamodel.nl), which aims to predict the effects of climate change, and compensating measures, both on safety against flooding and on water shortage during drought. To assess the effects, a limited number of well-defined scenarios is used within the Deltamodel. The effects on the demand for fresh water consist of an increase of the demand, e.g. for surface water level control to prevent dike bursts, for flushing salt in ditches, for sprinkling of crops, for preserving wet nature, and so on. Many of the effects are dealt with by regional and local parties. Therefore, these parties have a large interest in the outcome of the scenario analyses. They are participating in the assessment of the NHI previous to the start of the analyses

  6. Consumer input into health care: Time for a new active and comprehensive model of consumer involvement.

    Science.gov (United States)

    Hall, Alix E; Bryant, Jamie; Sanson-Fisher, Rob W; Fradgley, Elizabeth A; Proietto, Anthony M; Roos, Ian

    2018-03-07

    To ensure the provision of patient-centred health care, it is essential that consumers are actively involved in the process of determining and implementing health-care quality improvements. However, common strategies used to involve consumers in quality improvements, such as consumer membership on committees and collection of patient feedback via surveys, are ineffective and have a number of limitations, including: limited representativeness; tokenism; a lack of reliable and valid patient feedback data; infrequent assessment of patient feedback; delays in acquiring feedback; and how collected feedback is used to drive health-care improvements. We propose a new active model of consumer engagement that aims to overcome these limitations. This model involves the following: (i) the development of a new measure of consumer perceptions; (ii) low cost and frequent electronic data collection of patient views of quality improvements; (iii) efficient feedback to the health-care decision makers; and (iv) active involvement of consumers that fosters power to influence health system changes. © 2018 The Authors Health Expectations published by John Wiley & Sons Ltd.

  7. Development of a transverse mixing model for large scale impulsion phenomenon in tight lattice

    International Nuclear Information System (INIS)

    Liu, Xiaojing; Ren, Shuo; Cheng, Xu

    2017-01-01

    Highlights: • Experimental data of Krauss are used to validate the feasibility of the CFD simulation method. • CFD simulation is performed to simulate the large scale impulsion phenomenon for a tight-lattice bundle. • A mixing model to simulate the large scale impulsion phenomenon is proposed based on fitting of CFD results. • The newly developed mixing model has been added to the subchannel code. - Abstract: The tight lattice is widely adopted in innovative reactor fuel bundle designs since it can increase the conversion ratio and improve the heat transfer between fuel bundles and coolant. It has been noticed that a large scale impulsion of the cross-velocity exists in the gap region, which plays an important role in the transverse mixing flow and heat transfer. Although many experiments and numerical simulations have been carried out to study the impulsion of the velocity, a model describing the wavelength, amplitude and frequency of the mixing coefficient is still missing. This work takes advantage of the CFD method to simulate the experiment of Krauss and to compare experimental data with simulation results in order to demonstrate the feasibility of the simulation method and turbulence model. Then, based on this verified method and model, several simulations are performed with different Reynolds numbers and different pitch-to-diameter ratios. By fitting the CFD results, a mixing model to simulate the large scale impulsion phenomenon is proposed and adopted in the current subchannel code. The new mixing model is applied to fuel assembly analysis by subchannel calculation; it can be noticed that the new mixing model reduces the hot channel factor and contributes to a uniform distribution of outlet temperature.
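
    The fitted model itself is not given in the record, but its basic ingredient, extracting the amplitude and frequency of the gap cross-velocity pulsation from a CFD time series, can be sketched as a sinusoidal least-squares fit. The 'CFD' series below is synthetic and every number is invented; repeating such fits across Reynolds numbers and pitch-to-diameter ratios would yield the correlations a subchannel code needs.

```python
import numpy as np
from scipy.optimize import curve_fit

def pulsation(t, amplitude, frequency, phase, offset):
    """Periodic gap cross-velocity w(t) = offset + A*sin(2*pi*f*t + phi)."""
    return offset + amplitude * np.sin(2 * np.pi * frequency * t + phase)

# Synthetic stand-in for a CFD cross-velocity history (m/s) at 100 Hz.
t = np.arange(0.0, 2.0, 0.01)
rng = np.random.default_rng(1)
w_cfd = pulsation(t, 0.12, 6.5, 0.3, 0.01) + 0.01 * rng.normal(size=t.size)

popt, _ = curve_fit(pulsation, t, w_cfd, p0=[0.1, 5.0, 0.0, 0.0])
amplitude, frequency, phase, offset = popt
print(f"A = {amplitude:.3f} m/s, f = {frequency:.2f} Hz")
```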

  8. Solving large linear systems in an implicit thermohaline ocean model

    NARCIS (Netherlands)

    de Niet, Arie Christiaan

    2007-01-01

    The climate on earth is largely determined by the global ocean circulation. Hence it is important to predict how the flow will react to perturbations caused by, for example, melting ice caps. To answer questions about the stability of the global ocean flow, a computer model has been developed that is able to

  9. Monte Carlo technique for very large ising models

    Science.gov (United States)

    Kalle, C.; Winkelmann, V.

    1982-08-01

    Rebbi's multispin coding technique is improved and applied to the kinetic Ising model with size 600×600×600. We give the central part of our computer program (for a CDC Cyber 76), which will be helpful also in a simulation of smaller systems, and describe the other tricks necessary to go to large lattices. The magnetization M at T = 1.4 T_c is found to decay asymptotically as exp(−t/2.90) if t is measured in Monte Carlo steps per spin, and M(t = 0) = 1 initially.
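
    Multispin coding packs many spins into one machine word so they update in parallel; the hedged sketch below drops that optimization and shows only the underlying single-spin-flip Metropolis dynamics on a much smaller 3D lattice, printing the decaying magnetization, which could then be fit to an exponential in Monte Carlo steps per spin. Lattice size, sweep count and the T_c value are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 16                          # tiny compared to 600^3, for speed
T = 1.4 * 4.5115                # approximate 3D Ising T_c in units of J/k_B
spins = np.ones((L, L, L), dtype=np.int8)   # M(t=0) = 1

def sweep(s, T):
    """One Monte Carlo step per spin (random-site Metropolis updates)."""
    for _ in range(s.size):
        i, j, k = rng.integers(0, L, 3)
        nn = (s[(i + 1) % L, j, k] + s[(i - 1) % L, j, k] +
              s[i, (j + 1) % L, k] + s[i, (j - 1) % L, k] +
              s[i, j, (k + 1) % L] + s[i, j, (k - 1) % L])
        dE = 2.0 * s[i, j, k] * nn          # energy change of a flip (J=1)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j, k] *= -1

for t in range(20):
    sweep(spins, T)
    print(t + 1, spins.mean())   # magnetization decays with time
```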

  10. A coordination model for ultra-large scale systems of systems

    Directory of Open Access Journals (Sweden)

    Manuela L. Bujorianu

    2013-11-01

    Full Text Available Ultra-large multi-agent systems are becoming increasingly popular due to the quick decay of individual production costs and the potential for speeding up the solving of complex problems. Examples include nano-robots, systems of nano-satellites for dangerous meteorite detection, and cultures of stem cells for organ regeneration or nerve repair. The topics associated with these systems are usually dealt with within the theories of intelligent swarms or biologically inspired computation systems. Stochastic models play an important role and are based on various formulations of statistical mechanics. In these cases, the main assumption is that the swarm elements have a simple behaviour and that some average properties can be deduced for the entire swarm. In contrast, complex systems in areas like aeronautics are formed by elements with sophisticated, even autonomous, behaviour. In situations like this, a new approach to swarm coordination is necessary. We present a stochastic model where the swarm elements are communicating autonomous systems, the coordination is separated from the component autonomous activity, and the entire swarm can be abstracted away as a piecewise deterministic Markov process, which constitutes one of the most popular models in stochastic control. Keywords: ultra large multi-agent systems, system of systems, autonomous systems, stochastic hybrid systems.

  11. Shallow to Deep Convection Transition over a Heterogeneous Land Surface Using the Land Model Coupled Large-Eddy Simulation

    Science.gov (United States)

    Lee, J.; Zhang, Y.; Klein, S. A.

    2017-12-01

    The triggering of the land breeze, and hence the development of deep convection over heterogeneous land, should be understood as a consequence of complex processes involving various factors from the land surface and the atmosphere simultaneously. It is a sub-grid scale process that many large-scale models have difficulty incorporating into their parameterization schemes, partly due to a lack of understanding. Thus, it is imperative to approach the problem using a high-resolution modeling framework. In this study, we use SAM-SLM (Lee and Khairoutdinov, 2015), a large-eddy simulation model coupled to a land model, to explore cloud effects such as the cold pool, cloud shading and soil moisture memory on the land breeze structure and the further development of cloud and precipitation over a heterogeneous land surface. The atmospheric large-scale forcing and the initial sounding are taken from the new composite case study of fair-weather, non-precipitating shallow cumuli at ARM SGP (Zhang et al., 2017). We model the land surface as a chessboard pattern with alternating leaf area index (LAI). The patch contrast of the LAI is adjusted to span weak to strong heterogeneity amplitudes. The surface sensible and latent heat fluxes are computed according to the given LAI, representing the differential surface heating over a heterogeneous land surface. Beyond the surface forcing imposed by the originally modeled surface, cases that transition into moist convection can induce another layer of surface heterogeneity through (1) radiation shading by clouds, (2) soil moisture patterns adjusted by the rain, and (3) spreading cold pools. First, we assess and quantify the individual cloud effects on the land breeze and the moist convection under weak wind to simplify the feedback processes. Then, the same set of experiments is repeated under sheared background wind with a low-level jet, a typical summertime wind pattern at the ARM SGP site, to

  12. A large duplication involving the IHH locus mimics acrocallosal syndrome.

    Science.gov (United States)

    Yuksel-Apak, Memnune; Bögershausen, Nina; Pawlik, Barbara; Li, Yun; Apak, Selcuk; Uyguner, Oya; Milz, Esther; Nürnberg, Gudrun; Karaman, Birsen; Gülgören, Ayan; Grzeschik, Karl-Heinz; Nürnberg, Peter; Kayserili, Hülya; Wollnik, Bernd

    2012-06-01

    Indian hedgehog (Ihh) signaling is a major determinant of various processes during embryonic development and has a pivotal role in embryonic skeletal development. A specific spatial and temporal expression of Ihh within the developing limb buds is essential for accurate digit outgrowth and correct digit number. Although missense mutations in IHH cause brachydactyly type A1, small tandem duplications involving the IHH locus have recently been described in patients with mild syndactyly and craniosynostosis. In contrast, a ∼600-kb deletion 5' of IHH in the doublefoot mouse mutant (Dbf) leads to severe polydactyly without craniosynostosis, but with craniofacial dysmorphism. We now present a patient resembling acrocallosal syndrome (ACS) with extensive polysyndactyly of the hands and feet, craniofacial abnormalities including macrocephaly, agenesis of the corpus callosum, dysplastic and low-set ears, severe hypertelorism and profound psychomotor delay. Single-nucleotide polymorphism (SNP) array copy number analysis identified a ∼900-kb duplication of the IHH locus, which was confirmed by an independent quantitative method. A fetus from a second pregnancy of the mother by a different spouse showed similar craniofacial and limb malformations and the same duplication of the IHH-locus. We defined the exact breakpoints and showed that the duplications are identical tandem duplications in both sibs. No copy number changes were observed in the healthy mother. To our knowledge, this is the first report of a human phenotype similar to the Dbf mutant and strikingly overlapping with ACS that is caused by a copy number variation involving the IHH locus on chromosome 2q35.

  13. A Multi-Resolution Spatial Model for Large Datasets Based on the Skew-t Distribution

    KAUST Repository

    Tagle, Felipe

    2017-12-06

    Large, non-Gaussian spatial datasets pose a considerable modeling challenge, as the dependence structure implied by the model needs to be captured at different scales while retaining feasible inference. Skew-normal and skew-t distributions have only recently begun to appear in the spatial statistics literature, without much consideration, however, for the ability to capture dependence at multiple resolutions and simultaneously achieve feasible inference for increasingly large datasets. This article presents the first multi-resolution spatial model inspired by the skew-t distribution, where a large-scale effect follows a multivariate normal distribution and the fine-scale effects follow multivariate skew-normal distributions. The resulting marginal distribution for each region is skew-t, thereby allowing for greater flexibility in capturing the skewness and heavy tails characterizing many environmental datasets. Likelihood-based inference is performed using a Monte Carlo EM algorithm. The model is applied as a stochastic generator of daily wind speeds over Saudi Arabia.
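
    A hedged, non-spatial sketch of the distributional construction: adding a Gaussian large-scale effect to a skew-normal fine-scale effect and dividing by the square root of a Gamma mixing variable produces skewed, heavy-tailed (skew-t-like) marginals. All parameters are invented and the spatial dependence of the actual model is ignored.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, nu, alpha = 100_000, 4.0, 5.0   # draws, degrees of freedom, skewness

large_scale = rng.normal(0.0, 1.0, n)                       # Gaussian
fine_scale = stats.skewnorm.rvs(a=alpha, size=n, random_state=rng)

# Gamma(nu/2, rate nu/2) mixing variable induces t-like tails.
w = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=n)
samples = (large_scale + fine_scale) / np.sqrt(w)

print(stats.skew(samples), stats.kurtosis(samples))  # skewed, heavy-tailed
```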

  14. Fast and accurate focusing analysis of large photon sieve using pinhole ring diffraction model.

    Science.gov (United States)

    Liu, Tao; Zhang, Xin; Wang, Lingjie; Wu, Yanxiong; Zhang, Jizhen; Qu, Hemeng

    2015-06-10

    In this paper, we developed a pinhole ring diffraction model for the focusing analysis of a large photon sieve. Instead of analyzing individual pinholes, we discuss the focusing of all the pinholes in a single ring. An explicit equation for the diffracted field of an individual pinhole ring is proposed. We investigated the validity range of this generalized model and analytically described the sufficient conditions for its validity. A practical example and investigation reveal the high accuracy of the pinhole ring diffraction model. This simulation method can be used for fast and accurate focusing analysis of a large photon sieve.

  15. 5D Modelling: An Efficient Approach for Creating Spatiotemporal Predictive 3D Maps of Large-Scale Cultural Resources

    Science.gov (United States)

    Doulamis, A.; Doulamis, N.; Ioannidis, C.; Chrysouli, C.; Grammalidis, N.; Dimitropoulos, K.; Potsiou, C.; Stathopoulou, E.-K.; Ioannides, M.

    2015-08-01

    Outdoor large-scale cultural sites are highly sensitive to environmental, natural and human-made factors, implying an imminent need for spatio-temporal assessment to identify regions of potential cultural interest (material degradation, structuring, conservation). On the other hand, quite different actors are involved in Cultural Heritage research (archaeologists, curators, conservators, simple users), each with diverse needs. All these statements advocate that 5D modelling (3D geometry plus time plus levels of detail) is ideally required for the preservation and assessment of outdoor large-scale cultural sites; it is currently implemented as a simple aggregation of 3D digital models at different times and levels of detail. The main bottleneck of such an approach is its complexity, making 5D modelling impossible to validate in real-life conditions. In this paper, a cost-effective and affordable framework for 5D modelling is proposed, based on a spatial-temporal dependent aggregation of 3D digital models, incorporating a predictive assessment procedure to indicate which regions (surfaces) of an object should be reconstructed at higher levels of detail at the next time instances and which at lower ones. In this way, dynamic change history maps are created, indicating spatial probabilities of regions needing further 3D modelling at forthcoming instances. Using these maps, a predictive assessment can be made, that is, surfaces can be localized within the objects where a high-accuracy reconstruction process needs to be activated at forthcoming time instances. The proposed 5D Digital Cultural Heritage Model (5D-DCHM) is implemented using open interoperable standards based on the CityGML framework, which also allows the description of additional semantic metadata information. Visualization aspects are also supported to allow easy manipulation, interaction and representation of the 5D-DCHM geometry and the respective semantic information. The open source 3DCity

  16. Job involvement of primary healthcare employees: does a service provision model play a role?

    Science.gov (United States)

    Koponen, Anne M; Laamanen, Ritva; Simonsen-Rehn, Nina; Sundell, Jari; Brommels, Mats; Suominen, Sakari

    2010-05-01

    To investigate whether the development of job involvement of primary healthcare (PHC) employees in the Southern Municipality (SM), where PHC services were outsourced to an independent non-profit organisation, differed from that in the three comparison municipalities (M1, M2, M3) with municipal service providers; also, to investigate the associations of job involvement with factors describing the psychosocial work environment. A panel mail survey was conducted in 2000-02 in Finland (n=369, response rates 73% and 60%). The data were analysed by descriptive statistics and multivariate linear regression analysis. Despite the favourable development in the psychosocial work environment, job involvement decreased most in SM, which faced the biggest organisational changes. Job involvement also decreased in M3, where the psychosocial work environment deteriorated most. Job involvement in 2002 was best predicted by a high baseline level of interactional justice and work control, a positive change in interactional justice, and higher age. Other factors, such as organisational stability, also seemed to play a role; after controlling for the effect of the psychosocial work characteristics, job involvement was higher in M3 than in SM. Outsourcing of PHC services may decrease job involvement, at least during the first years. A particular service provision model is better than the others only if it is superior in providing a favourable and stable psychosocial work environment.

  17. Effective models of new physics at the Large Hadron Collider

    International Nuclear Information System (INIS)

    Llodra-Perez, J.

    2011-07-01

    With the start of the Large Hadron Collider runs in 2010, particle physicists will soon be able to gain a better understanding of electroweak symmetry breaking. They may also answer many experimental and theoretical open questions raised by the Standard Model. Building on this favorable situation, we first present in this thesis a highly model-independent parametrization to characterize the effects of new physics on the production and decay mechanisms of the Higgs boson. This original tool will be easily and directly usable in the data analyses of CMS and ATLAS, the large general-purpose experiments at the LHC. It will indeed help to significantly exclude or validate new theories beyond the Standard Model. In another, model-building approach, we consider a scenario of new physics in which the Standard Model fields can propagate in a flat six-dimensional space. The new spatial extra dimensions are compactified on a Real Projective Plane. This orbifold is the unique six-dimensional geometry which possesses chiral fermions and a natural Dark Matter candidate. The scalar photon, which is the lightest particle of the first Kaluza-Klein tier, is stabilized by a symmetry relic of the six-dimensional Lorentz invariance. Using the current constraints from cosmological observations and our first analytical calculation, we derive a characteristic mass range around a few hundred GeV for the Kaluza-Klein scalar photon. The new states of our Universal Extra-Dimension model are therefore light enough to be produced through clear signatures at the Large Hadron Collider. We thus used a more sophisticated analysis of the particle mass spectrum and couplings, including radiative corrections at one loop, to establish our first predictions and constraints on the expected LHC phenomenology. (author)

  18. Application of Logic Models in a Large Scientific Research Program

    Science.gov (United States)

    O'Keefe, Christine M.; Head, Richard J.

    2011-01-01

    It is the purpose of this article to discuss the development and application of a logic model in the context of a large scientific research program within the Commonwealth Scientific and Industrial Research Organisation (CSIRO). CSIRO is Australia's national science agency and is a publicly funded part of Australia's innovation system. It conducts…

  19. Large-scale ligand-based predictive modelling using support vector machines.

    Science.gov (United States)

    Alvarsson, Jonathan; Lampa, Samuel; Schaal, Wesley; Andersson, Claes; Wikberg, Jarl E S; Spjuth, Ola

    2016-01-01

    The increasing size of datasets in drug discovery makes it challenging to build robust and accurate predictive models within a reasonable amount of time. In order to investigate the effect of dataset sizes on predictive performance and modelling time, ligand-based regression models were trained on open datasets of varying sizes of up to 1.2 million chemical structures. For modelling, two implementations of support vector machines (SVM) were used. Chemical structures were described by the signatures molecular descriptor. Results showed that for the larger datasets, the LIBLINEAR SVM implementation performed on par with the well-established libsvm with a radial basis function kernel, but with dramatically less time for model building even on modest computer resources. Using a non-linear kernel proved to be infeasible for large data sizes, even with substantial computational resources on a computer cluster. To deploy the resulting models, we extended the Bioclipse decision support framework to support models from LIBLINEAR and made our models of logD and solubility available from within Bioclipse.
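
    A hedged timing sketch of the comparison using scikit-learn, whose LinearSVR is backed by LIBLINEAR while SVR uses libsvm. The synthetic regression data merely stands in for the signature-descriptor datasets, and the sizes are kept small enough to run on a laptop.

```python
import time
import numpy as np
from sklearn.datasets import make_regression
from sklearn.svm import LinearSVR, SVR

# Synthetic stand-in for a ligand-descriptor regression problem.
X, y = make_regression(n_samples=5000, n_features=200,
                       noise=0.1, random_state=0)

for name, est in [("LIBLINEAR (linear)", LinearSVR(C=1.0, max_iter=10000)),
                  ("libsvm (RBF kernel)", SVR(kernel="rbf", C=1.0))]:
    t0 = time.perf_counter()
    est.fit(X, y)
    print(f"{name}: fitted in {time.perf_counter() - t0:.1f} s")
```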

  20. ARMA modelling of neutron stochastic processes with large measurement noise

    International Nuclear Information System (INIS)

    Zavaljevski, N.; Kostic, Lj.; Pesic, M.

    1994-01-01

    An autoregressive moving average (ARMA) model of the neutron fluctuations with large measurement noise is derived from Langevin stochastic equations and validated using time series data obtained during prompt neutron decay constant measurements at the zero power reactor RB in Vinca. Model parameters are estimated using the maximum likelihood (ML) off-line algorithm and an adaptive pole estimation algorithm based on the recursive prediction error method (RPE). The results show that subcriticality can be determined from real data with high measurement noise using a much shorter statistical sample than in standard methods. (author)
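
    The structure being exploited is that a one-pole (prompt-decay) kinetics signal observed through additive white measurement noise is exactly an ARMA(1,1) process, so the decay constant follows from the fitted AR pole. The sketch below does the off-line (ML) estimation with statsmodels on synthetic data; all constants are invented.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
n, dt, alpha = 20_000, 1e-3, 50.0   # samples, step (s), decay constant (1/s)
pole = np.exp(-alpha * dt)          # AR(1) pole of the underlying kinetics

x = np.zeros(n)                     # latent neutron-fluctuation signal
for t in range(1, n):
    x[t] = pole * x[t - 1] + rng.normal()
y = x + 5.0 * rng.normal(size=n)    # AR(1) + white noise is ARMA(1,1)

fit = ARIMA(y, order=(1, 0, 1)).fit()
phi = fit.arparams[0]
print("estimated decay constant:", -np.log(phi) / dt)
```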

  1. Large deviations for noninteracting infinite-particle systems

    International Nuclear Information System (INIS)

    Donsker, M.D.; Varadhan, S.R.S.

    1987-01-01

    A large deviation property is established for noninteracting infinite particle systems. Previous large deviation results obtained by the authors involved a single I-function because the cases treated always involved a unique invariant measure for the process. In the context of this paper there is an infinite family of invariant measures and a corresponding infinite family of I-functions governing the large deviations

  2. Large Deviations for Stochastic Models of Two-Dimensional Second Grade Fluids

    International Nuclear Information System (INIS)

    Zhai, Jianliang; Zhang, Tusheng

    2017-01-01

    In this paper, we establish a large deviation principle for stochastic models of incompressible second grade fluids. The weak convergence method introduced by Budhiraja and Dupuis (Probab Math Statist 20:39–61, 2000) plays an important role.

  3. Large Deviations for Stochastic Models of Two-Dimensional Second Grade Fluids

    Energy Technology Data Exchange (ETDEWEB)

    Zhai, Jianliang, E-mail: zhaijl@ustc.edu.cn [University of Science and Technology of China, School of Mathematical Sciences (China); Zhang, Tusheng, E-mail: Tusheng.Zhang@manchester.ac.uk [University of Manchester, School of Mathematics (United Kingdom)

    2017-06-15

    In this paper, we establish a large deviation principle for stochastic models of incompressible second grade fluids. The weak convergence method introduced by Budhiraja and Dupuis (Probab Math Statist 20:39–61, 2000) plays an important role.

  4. The Role of Student Involvement and Perceptions of Integration in a Causal Model of Student Persistence.

    Science.gov (United States)

    Berger, Joseph B.; Milem, Jeffrey F.

    1999-01-01

    This study refined and applied an integrated model of undergraduate persistence (accounting for both behavioral and perceptual components) to examine first-year retention at a private, highly selective research university. Results suggest that including behaviorally based measures of involvement improves the model's explanatory power concerning…

  5. Regional modeling of large wildfires under current and potential future climates in Colorado and Wyoming, USA

    Science.gov (United States)

    West, Amanda; Kumar, Sunil; Jarnevich, Catherine S.

    2016-01-01

    Regional analysis of large wildfire potential given climate change scenarios is crucial to understanding the areas most at risk in the future, yet wildfire models are not often developed and tested at this spatial scale. We fit three historical climate suitability models for large wildfires (i.e. ≥ 400 ha) in Colorado and Wyoming using topography and decadal climate averages corresponding to wildfire occurrence at the same temporal scale. The historical models classified points of known large wildfire occurrence with high accuracy. Using a novel approach in wildfire modeling, we applied the historical models to independent climate and wildfire datasets, and the resulting sensitivities were 0.75, 0.81, and 0.83 for Maxent, Generalized Linear, and Multivariate Adaptive Regression Splines models, respectively. We projected the historical models into future climate space using data from 15 global circulation models and two representative concentration pathway scenarios. Maps from these geospatial analyses can be used to evaluate the changing spatial distribution of the climate suitability of large wildfires in these states. April relative humidity was the most important covariate in all models, providing insight into the climate space of large wildfires in this region. These methods incorporate monthly and seasonal climate averages at a spatial resolution relevant to land management (i.e. 1 km²) and provide a tool that can be modified for other regions of North America, or adapted for other parts of the world.
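
    Of the three algorithms, the generalized linear model is the simplest to sketch: a logistic regression of large-fire presence on climate covariates, with sensitivity computed as the fraction of known fires correctly classified. The data, covariates and coefficients below are fabricated; the actual models used decadal climate averages and topography at 1 km² resolution.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "april_rh": rng.uniform(20.0, 70.0, n),       # April relative humidity (%)
    "elevation": rng.uniform(1000.0, 3500.0, n),  # metres
})
logit_true = 2.0 - 0.08 * df["april_rh"] + 0.0003 * df["elevation"]
df["fire"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_true)))

fit = smf.logit("fire ~ april_rh + elevation", data=df).fit()

# Sensitivity at a 0.5 threshold: share of observed fires predicted as fires.
pred = (fit.predict(df) >= 0.5).astype(int)
sens = ((pred == 1) & (df["fire"] == 1)).sum() / (df["fire"] == 1).sum()
print("sensitivity:", round(sens, 2))
```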

  6. University Physics Students' Use of Models in Explanations of Phenomena Involving Interaction between Metals and Electromagnetic Radiation.

    Science.gov (United States)

    Redfors, Andreas; Ryder, Jim

    2001-01-01

    Examines third year university physics students' use of models when explaining familiar phenomena involving interaction between metals and electromagnetic radiation. Concludes that few students use a single model consistently. (Contains 27 references.) (DDR)

  7. A Low-involvement Choice Model for Consumer Panel Data

    OpenAIRE

    Brugha, Cathal; Turley, Darach

    1987-01-01

    The long overdue surge of interest in consumer behaviour texts in low-involvement purchasing has only begun to gather momentum. It often takes the form of asking whether concepts usually associated with high-involvement purchasing can be applied, albeit in a modified form, to low-involvement purchasing. One such concept is the evoked set, that is, the range of brands deemed acceptable by a consumer in a particular product area. This has characteristically been associated with consumption involving...

  8. A phase transition between small- and large-field models of inflation

    International Nuclear Information System (INIS)

    Itzhaki, Nissan; Kovetz, Ely D

    2009-01-01

    We show that models of inflection point inflation exhibit a phase transition from a region in parameter space where they are of large-field type to a region where they are of small-field type. The phase transition is between a universal behavior, with respect to the initial condition, at the large-field region and non-universal behavior at the small-field region. The order parameter is the number of e-foldings. We find integer critical exponents at the transition between the two phases.

  9. Large tau and tau neutrino electric dipole moments in models with vectorlike multiplets

    International Nuclear Information System (INIS)

    Ibrahim, Tarek; Nath, Pran

    2010-01-01

    It is shown that an electric dipole moment of the τ lepton several orders of magnitude larger than predicted by the standard model can be generated from mixings in models with vectorlike multiplets. The electric dipole moment (EDM) of the τ lepton arises from loops involving the exchange of the W, the charginos, the neutralinos, the sleptons, the mirror leptons, and the mirror sleptons. The EDM of the Dirac τ neutrino is also computed from loops involving the exchange of the W, the charginos, the mirror leptons, and the mirror sleptons. A numerical analysis is presented, and it is shown that EDMs of the τ lepton and the τ neutrino lying just a couple of orders of magnitude below the sensitivity of current experiments can be achieved. Thus the predictions of the model are testable in improved experiments on the EDMs of the τ and the τ neutrino.

  10. Searches for phenomena beyond the Standard Model at the Large ...

    Indian Academy of Sciences (India)

    The classic signature for supersymmetry searches at the LHC is thus the channel with large missing transverse momentum and jets of high transverse momentum. No excess above the expected SM background is observed, and limits are set on supersymmetric models. Figures 1 and 2 show the limits from ATLAS [11] and CMS [12]. In addition to setting limits ...

  11. Review of Dynamic Modeling and Simulation of Large Scale Belt Conveyor System

    Science.gov (United States)

    He, Qing; Li, Hong

    The belt conveyor is one of the most important devices for transporting bulk solid material over long distances. Dynamic analysis is the key to deciding whether a design is technically rational, safe and reliable in operation, and economically feasible. It is very important to study dynamic properties to improve efficiency and productivity and to guarantee safe, reliable and stable conveyor running. The dynamic research on, and applications of, large-scale belt conveyors are discussed. The main research topics and the state of the art of dynamic research on belt conveyors are analyzed. The main future work focuses on dynamic analysis, modeling and simulation of the main components and the whole system, and on nonlinear modeling, simulation and vibration analysis of large-scale conveyor systems.

  12. DMPy: a Python package for automated mathematical model construction of large-scale metabolic systems.

    Science.gov (United States)

    Smith, Robert W; van Rosmalen, Rik P; Martins Dos Santos, Vitor A P; Fleck, Christian

    2018-06-19

    Models of metabolism are often used in biotechnology and pharmaceutical research to identify drug targets or increase the direct production of valuable compounds. Due to the complexity of large metabolic systems, a number of conclusions have been drawn using mathematical methods with simplifying assumptions. For example, constraint-based models assume that changes of internal concentrations occur much more quickly than alterations in cell physiology; thus, metabolite concentrations and reaction fluxes are fixed to constant values. This greatly reduces the mathematical complexity, while providing a reasonably good description of the system in steady state. However, without a large number of constraints, many different flux sets can describe the optimal model, and we obtain no information on how metabolite levels dynamically change. Thus, to accurately determine what is taking place within the cell, finer-quality data and more detailed models need to be constructed. In this paper we present a computational framework, DMPy, that uses a network scheme as input to automatically search for kinetic rates and produce a mathematical model that describes temporal changes of metabolite fluxes. The parameter search utilises several online databases to find measured reaction parameters. From this, we take advantage of previous modelling efforts, such as Parameter Balancing, to produce an initial mathematical model of a metabolic pathway. We analyse the effect of parameter uncertainty on model dynamics and test how recent flux-based model reduction techniques alter system properties. To our knowledge this is the first time such an analysis has been performed on large models of metabolism. Our results highlight that good estimates of at least 80% of the reaction rates are required to accurately model metabolic systems. Furthermore, reducing the size of the model by grouping reactions together based on fluxes alters the resulting system dynamics. The presented pipeline automates the

  13. Applying the Intervention Model for Fostering Affective Involvement with Persons Who Are Congenitally Deafblind: An Effect Study

    Science.gov (United States)

    Martens, Marga A. W.; Janssen, Marleen J.; Ruijssenaars, Wied A. J. J. M.; Huisman, Mark; Riksen-Walraven, J. Marianne

    2014-01-01

    Introduction: In this study, we applied the Intervention Model for Affective Involvement (IMAI) to four participants who are congenitally deafblind and their 16 communication partners in 3 different settings (school, a daytime activities center, and a group home). We examined whether the intervention increased affective involvement between the…

  14. Full-Scale Approximations of Spatio-Temporal Covariance Models for Large Datasets

    KAUST Repository

    Zhang, Bohai; Sang, Huiyan; Huang, Jianhua Z.

    2014-01-01

    Fitting spatio-temporal covariance models is computationally demanding: the cost grows quickly with the size of the dataset, and application of such models is not feasible for large datasets. This article extends the full-scale approximation (FSA) approach by Sang and Huang (2012) to the spatio-temporal context to reduce computational complexity. A reversible jump Markov

  15. Modeling the Hydrologic Effects of Large-Scale Green Infrastructure Projects with GIS

    Science.gov (United States)

    Bado, R. A.; Fekete, B. M.; Khanbilvardi, R.

    2015-12-01

    Impervious surfaces in urban areas generate excess runoff, which in turn causes flooding, combined sewer overflows, and degradation of adjacent surface waters. Municipal environmental protection agencies have shown a growing interest in mitigating these effects with 'green' infrastructure practices that partially restore the perviousness and water holding capacity of urban centers. Assessment of the performance of current and future green infrastructure projects is hindered by the lack of adequate hydrological modeling tools; conventional techniques fail to account for the complex flow pathways of urban environments, and detailed analyses are difficult to prepare for the very large domains in which green infrastructure projects are implemented. Currently, no standard toolset exists that can rapidly and conveniently predict runoff, consequent inundations, and sewer overflows at a city-wide scale. We demonstrate how streamlined modeling techniques can be used with open-source GIS software to efficiently model runoff in large urban catchments. Hydraulic parameters and flow paths through city blocks, roadways, and sewer drains are automatically generated from GIS layers, and ultimately urban flow simulations can be executed for a variety of rainfall conditions. With this methodology, users can understand the implications of large-scale land use changes and green/gray storm water retention systems on hydraulic loading, peak flow rates, and runoff volumes.
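
    The paper's GIS model routes flow along explicit paths, which is not reproduced here; as a hedged, lumped-scale illustration of why restoring perviousness matters, the standard SCS curve-number method shows how runoff depth responds to a land-cover change. Curve numbers and the storm depth are illustrative.

```python
def scs_runoff_mm(rain_mm, curve_number):
    """SCS curve-number runoff depth (mm), a standard lumped method."""
    s = 25400.0 / curve_number - 254.0   # potential retention (mm)
    ia = 0.2 * s                         # initial abstraction
    if rain_mm <= ia:
        return 0.0
    return (rain_mm - ia) ** 2 / (rain_mm + 0.8 * s)

# Impervious urban block (CN ~ 98) vs. green-infrastructure retrofit (CN ~ 80).
for cn in (98, 80):
    print(cn, round(scs_runoff_mm(50.0, cn), 1), "mm runoff from a 50 mm storm")
```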

  16. Inviscid Wall-Modeled Large Eddy Simulations for Improved Efficiency

    Science.gov (United States)

    Aikens, Kurt; Craft, Kyle; Redman, Andrew

    2015-11-01

    The accuracy of an inviscid flow assumption for wall-modeled large eddy simulations (LES) is examined because of its ability to reduce simulation costs. This assumption is not generally applicable to wall-bounded flows due to the high velocity gradients found near walls. In wall-modeled LES, however, neither the viscous near-wall region nor the viscous length scales in the outer flow are resolved. Therefore, the viscous terms in the Navier-Stokes equations have little impact on the resolved flowfield. Zero-pressure-gradient flat plate boundary layer results are presented for both viscous and inviscid simulations using a wall model developed previously. The results are very similar and compare favorably to those from another wall model methodology and to experimental data. Furthermore, the inviscid assumption reduces simulation costs by about 25% and 39% for supersonic and subsonic flows, respectively. Future research directions are discussed, as are preliminary efforts to extend the wall model to include the effects of unresolved wall roughness. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.

  17. Modelling of decay heat removal using large water pools

    International Nuclear Information System (INIS)

    Munther, R.; Raussi, P.; Kalli, H.

    1992-01-01

    The main task in investigating passive safety systems typical of ALWRs (Advanced Light Water Reactors) has been a review of decay heat removal systems. The reference system for the calculations was Hitachi's SBWR concept. The calculations for energy transfer to the suppression pool were made using two different fluid mechanics codes, FIDAP and PHOENICS. FIDAP is based on finite element methodology and PHOENICS uses finite differences. These codes were chosen in order to compare their modelling and calculating abilities. The thermal stratification behaviour and the natural circulation were modelled with several turbulent flow models. Energy transport to the suppression pool was also calculated for laminar flow conditions. These calculations required a large amount of computer resources, so the CRAY supercomputer of the state computing centre was used. The results of the calculations indicated that the capabilities of these codes for modelling the turbulent flow regime are limited. Output from these codes should be considered carefully, and whenever possible, experimentally determined parameters should be used as input to enhance code reliability. (orig.). (31 refs., 21 figs., 3 tabs.)

  18. Large tan β in gauge-mediated SUSY-breaking models

    International Nuclear Information System (INIS)

    Rattazzi, R.

    1997-01-01

    We explore some topics in the phenomenology of gauge-mediated SUSY-breaking scenarios having a large hierarchy of Higgs VEVs, v_U/v_D = tan β ≫ 1. Some motivation for this scenario is first presented. We then use a systematic, analytic expansion (including some threshold corrections) to calculate the μ-parameter needed for proper electroweak breaking and the radiative corrections to the B-parameter, which fortuitously cancel at leading order. If B = 0 at the messenger scale then tan β is naturally large and calculable; we calculate it. We then confront this prediction with classical and quantum vacuum stability constraints arising from the Higgs-slepton potential, and indicate the preferred values of the top quark mass and messenger scale(s). The possibility of vacuum instability in a different direction yields an upper bound on the messenger mass scale complementary to the familiar bound from gravitino relic abundance. Next, we calculate the rate for b→sγ and show the possibility of large deviations (in the direction currently favored by experiment) from standard-model and small tan β predictions. Finally, we discuss the implications of these findings and their applicability to future, broader and more detailed investigations. (orig.)

  19. How uncertainty in socio-economic variables affects large-scale transport model forecasts

    DEFF Research Database (Denmark)

    Manzo, Stefano; Nielsen, Otto Anker; Prato, Carlo Giacomo

    2015-01-01

    A strategic task assigned to large-scale transport models is to forecast the demand for transport over long periods of time to assess transport projects. However, by modelling complex systems transport models have an inherent uncertainty which increases over time. As a consequence, the longer...... the period forecasted the less reliable is the forecasted model output. Describing uncertainty propagation patterns over time is therefore important in order to provide complete information to the decision makers. Among the existing literature only few studies analyze uncertainty propagation patterns over...
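
    The abstract's point that model uncertainty grows with the forecast horizon can be illustrated with a minimal Monte Carlo sketch; the demand-growth model and all numbers below are invented for illustration, not taken from the study.

    ```python
    # Toy Monte Carlo propagation of an uncertain socio-economic input
    # (annual demand growth) through a compound-growth forecast.
    import numpy as np

    rng = np.random.default_rng(42)
    n_draws, horizon = 10_000, 30              # draws, forecast years
    base_demand = 1.0e6                        # trips/year (illustrative)
    growth = rng.normal(0.02, 0.01, n_draws)   # uncertain growth rate

    years = np.arange(1, horizon + 1)
    demand = base_demand * (1.0 + growth[:, None]) ** years[None, :]

    spread = demand.std(axis=0) / demand.mean(axis=0)
    # The relative spread widens with the horizon, i.e. later forecast
    # years are less reliable.
    print(f"year 1: {spread[0]:.3f}, year {horizon}: {spread[-1]:.3f}")
    ```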

  20. Model correction factor method for reliability problems involving integrals of non-Gaussian random fields

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  1. Principal considerations in large energy-storage capacitor banks

    International Nuclear Information System (INIS)

    Kemp, E.L.

    1976-01-01

    Capacitor banks storing one or more megajoules and costing more than one million dollars have unique problems not often found in smaller systems. Two large banks, Scyllac at Los Alamos and Shiva at Livermore, are used as models of large, complex systems. Scyllac is a 10-MJ, 60-kV theta-pinch system while Shiva is a 20-MJ, 20-kV energy system for laser flash lamps. A number of design principles are emphasized for expediting the design and construction of large banks. The sensitive features of the charge system, the storage system layout, the switching system, the transmission system, and the design of the principal bank components are presented. Project management and planning must involve a PERT chart with certain common features for all the activities. The importance of the budget is emphasized

  2. Analogue scale modelling of extensional tectonic processes using a large state-of-the-art centrifuge

    Science.gov (United States)

    Park, Heon-Joon; Lee, Changyeol

    2017-04-01

    Analogue scale modelling of extensional tectonic processes such as rifting and basin opening has been conducted numerous times. Among the controlling factors, gravitational acceleration (g) on the scale models was treated as a constant (Earth's gravity) in most analogue model studies, and only a few studies considered larger gravitational accelerations by using a centrifuge (an apparatus generating a large centrifugal force by rotating the model at high speed). Although analogue models using a centrifuge allow large scale-down factors and accelerated deformation driven by density differences, such as salt diapirs, the possible model size is mostly limited to about 10 cm. A state-of-the-art centrifuge installed at the KOCED Geotechnical Centrifuge Testing Center, Korea Advanced Institute of Science and Technology (KAIST), allows a large scale-model surface area of up to 70 by 70 cm under a maximum capacity of 240 g-tons. Using this centrifuge, we will conduct analogue scale modelling of extensional tectonic processes such as the opening of back-arc basins. Acknowledgement: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (grant number 2014R1A6A3A04056405).
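
    For background, the similarity argument behind centrifuge scale modelling is that a model scaled down geometrically by a factor N, spun at N times Earth's gravity, recovers prototype stress levels at homologous points. This is the standard geotechnical scaling relation, not a statement from the abstract:

    ```latex
    \sigma_{\mathrm{model}}
      = \rho \,(N g)\,\frac{h}{N}
      = \rho\, g\, h
      = \sigma_{\mathrm{prototype}}
    ```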

  3. Improving CASINO performance for models with large number of electrons

    International Nuclear Information System (INIS)

    Anton, L.; Alfe, D.; Hood, R.Q.; Tanqueray, D.

    2009-01-01

    Quantum Monte Carlo calculations have at their core algorithms based on statistical ensembles of multidimensional random walkers which are straightforward to use on parallel computers. Nevertheless, some computations have reached the limit of the memory resources for models with more than 1000 electrons because of the need to store a large amount of electronic-orbital-related data. Besides that, for systems with a large number of electrons, it is interesting to study whether the evolution of one configuration of random walkers can be done faster in parallel. We present a comparative study of two ways to solve these problems: (1) distributed orbital data, implemented with MPI or Unix inter-process communication tools; (2) second-level parallelism for the configuration computation

  4. Linear velocity fields in non-Gaussian models for large-scale structure

    Science.gov (United States)

    Scherrer, Robert J.

    1992-01-01

    Linear velocity fields in two types of physically motivated non-Gaussian models are examined for large-scale structure: seed models, in which the density field is a convolution of a density profile with a distribution of points, and local non-Gaussian fields, derived from a local nonlinear transformation on a Gaussian field. The distribution of a single component of the velocity is derived for seed models with randomly distributed seeds, and these results are applied to the seeded hot dark matter model and the global texture model with cold dark matter. An expression for the distribution of a single component of the velocity in arbitrary local non-Gaussian models is given, and these results are applied to such fields with chi-squared and lognormal distributions. It is shown that all seed models with randomly distributed seeds and all local non-Gaussian models have single-component velocity distributions with positive kurtosis.

  5. Patient involvement in research programming and implementation: a responsive evaluation of the Dialogue Model for research agenda setting

    NARCIS (Netherlands)

    Abma, T.A.; Pittens, C.A.C.M.; Visse, M.; Elberse, J.E.; Broerse, J.E.W.

    2015-01-01

    Background: The Dialogue Model for research agenda-setting, involving multiple stakeholders including patients, was developed and validated in the Netherlands. However, there is little insight into whether and how patient involvement is sustained during the programming and implementation of research

  6. Evaluation of sub grid scale and local wall models in Large-eddy simulations of separated flow

    Directory of Open Access Journals (Sweden)

    Sam Ali Al

    2015-01-01

    The performance of sub-grid-scale models is studied by simulating a separated flow over a wavy channel. The first- and second-order statistical moments of the resolved velocities obtained by large-eddy simulations at different mesh resolutions are compared with direct numerical simulation data. The effectiveness of modeling the wall stresses by using a local log-law is then tested on a relatively coarse grid. The results exhibit good agreement between highly resolved large-eddy simulations and direct numerical simulation data regardless of the sub-grid-scale model. However, the agreement is less satisfactory on the relatively coarse grid without any wall model, and the differences between sub-grid-scale models become distinguishable. Using the local wall model recovered the basic flow topology and significantly reduced the differences between the coarse-mesh large-eddy simulations and the direct numerical simulation data. The results show that the ability of the local wall model to predict the separation zone depends strongly on how it is implemented.
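
    A local log-law wall model of the kind tested above can be sketched as follows: given the resolved velocity at the first off-wall grid point, iterate the log law for the friction velocity and return the wall shear stress. This is a generic sketch under assumed constants (kappa = 0.41, B = 5.2), not the paper's implementation.

    ```python
    # Solve U/u_tau = (1/kappa) ln(y u_tau / nu) + B for u_tau, then
    # return the modeled wall shear stress tau_w = rho u_tau^2.
    import math

    def wall_stress(U, y, nu=1.5e-5, rho=1.2, kappa=0.41, B=5.2):
        u_tau = math.sqrt(nu * U / y)      # initial guess (linear profile)
        for _ in range(50):                # fixed-point iteration
            u_tau_new = U / (math.log(y * u_tau / nu) / kappa + B)
            if abs(u_tau_new - u_tau) < 1e-10:
                break
            u_tau = u_tau_new
        return rho * u_tau ** 2

    print(wall_stress(U=10.0, y=1e-3))     # tau_w in Pa (toy numbers)
    ```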

  7. Observations involving broadband impedance modelling

    Energy Technology Data Exchange (ETDEWEB)

    Berg, J S [Stanford Linear Accelerator Center, Menlo Park, CA (United States)

    1996-08-01

    Results for single- and multi-bunch instabilities can be significantly affected by the precise model that is used for the broadband impedance. This paper discusses three aspects of broadband impedance modelling. The first is an observation of the effect that a seemingly minor change in an impedance model has on the single-bunch mode coupling threshold. The second is a successful attempt to construct a model for the high-frequency tails of an r.f. cavity. The last is a discussion of requirements for the mathematical form of an impedance which follow from the general properties of impedances. (author)

  8. Observations involving broadband impedance modelling

    International Nuclear Information System (INIS)

    Berg, J.S.

    1995-08-01

    Results for single- and multi-bunch instabilities can be significantly affected by the precise model that is used for the broadband impedance. This paper discusses three aspects of broadband impedance modeling. The first is an observation of the effect that a seemingly minor change in an impedance model has on the single-bunch mode coupling threshold. The second is a successful attempt to construct a model for the high-frequency tails of an r.f. cavity. The last is a discussion of requirements for the mathematical form of an impedance which follow from the general properties of impedances

  9. Childhood craniopharyngioma: greater hypothalamic involvement before surgery is associated with higher homeostasis model insulin resistance index

    Science.gov (United States)

    Trivin, Christine; Busiah, Kanetee; Mahlaoui, Nizar; Recasens, Christophe; Souberbielle, Jean-Claude; Zerah, Michel; Sainte-Rose, Christian; Brauner, Raja

    2009-01-01

    Background: Obesity seems to be linked to the hypothalamic involvement in craniopharyngioma. We evaluated the pre-surgery relationship between the degree of this involvement on magnetic resonance imaging and insulin resistance, as evaluated by the homeostasis model insulin resistance index (HOMA). As insulin-like growth factor 1, leptin, soluble leptin receptor (sOB-R) and ghrelin may also be involved, we compared their plasma concentrations and their link to weight change. Methods: 27 children with craniopharyngioma were classified as either grade 0 (n = 7, no hypothalamic involvement), grade 1 (n = 8, compression without involvement), or grade 2 (n = 12, severe involvement). Results: Despite having similar body mass indexes (BMI), the grade 2 patients had higher glucose, insulin and HOMA before surgery than the grade 0 patients (P = 0.02). The degree of hypothalamic involvement of the craniopharyngioma before surgery thus seems to determine the degree of insulin resistance, regardless of the BMI. The pre-surgery HOMA values were correlated with the post-surgery weight gain. This suggests that obesity should be prevented by reducing insulin secretion in those cases with hypothalamic involvement. PMID:19341477
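
    The HOMA index used in the study follows the standard homeostasis-model formula of Matthews et al.; a one-line implementation, assuming the usual units (fasting glucose in mmol/L, fasting insulin in µU/mL):

    ```python
    def homa_ir(glucose_mmol_l: float, insulin_uU_ml: float) -> float:
        """Homeostasis model assessment of insulin resistance (HOMA-IR)."""
        return glucose_mmol_l * insulin_uU_ml / 22.5

    print(homa_ir(5.0, 10.0))  # ~2.22, near common insulin-resistance cutoffs
    ```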

  10. A Reformulated Model of Barriers to Parental Involvement in Education: Comment on Hornby and Lafaele (2011)

    Science.gov (United States)

    Fan, Weihua; Li, Nan; Sandoval, Jaime Robert

    2018-01-01

    In a 2011 article in this journal, Hornby and Lafaele provided a comprehensive model to understand barriers that may adversely impact effectiveness of parental involvement (PI) in education. The proposed explanatory model provides researchers with a new comprehensive and systematic perspective of the phenomenon in question with references from an…

  11. Large animals as potential models of human mental and behavioral disorders.

    Science.gov (United States)

    Danek, Michał; Danek, Janusz; Araszkiewicz, Aleksander

    2017-12-30

    Many animal models in different species have been developed for mental and behavioral disorders. This review presents large animals (dog, ovine, swine, horse) as potential models of these disorders. The article was based on research published in peer-reviewed journals. A literature search was carried out using the PubMed database. The above issues were discussed in several problem groups in accordance with the WHO International Statistical Classification of Diseases and Related Health Problems, 10th Revision (ICD-10), in particular regarding: organic, including symptomatic, mental disorders (Alzheimer's disease and Huntington's disease, pernicious anemia and hepatic encephalopathy, epilepsy, Parkinson's disease, Creutzfeldt-Jakob disease); behavioral disorders due to psychoactive substance use (alcoholic intoxication, abuse of morphine); schizophrenia and other schizotypal disorders (puerperal psychosis); mood (affective) disorders (depressive episode); neurotic, stress-related and somatoform disorders (posttraumatic stress disorder, obsessive-compulsive disorder); behavioral syndromes associated with physiological disturbances and physical factors (anxiety disorders, anorexia nervosa, narcolepsy); mental retardation (Cohen syndrome, Down syndrome, Hunter syndrome); and behavioral and emotional disorders (attention deficit hyperactivity disorder). These data indicate many large animal disorders which can serve as models for examining the above human mental and behavioral disorders.

  12. Involvement of herbal medicine as a cause of mesenteric phlebosclerosis: results from a large-scale nationwide survey.

    Science.gov (United States)

    Shimizu, Seiji; Kobayashi, Taku; Tomioka, Hideo; Ohtsu, Kensei; Matsui, Toshiyuki; Hibi, Toshifumi

    2017-03-01

    Mesenteric phlebosclerosis (MP) is a rare disease characterized by venous calcification extending from the colonic wall to the mesentery, with chronic ischemic changes from venous return impairment in the intestine. It is an idiopathic disease, but increasing attention has been paid to the potential involvement of herbal medicine, or Kampo, in its etiology. Until now, there have been scattered case reports, but no large-scale studies had been conducted to unravel the clinical characteristics and etiology of the disease. A nationwide survey was conducted using questionnaires to assess the possible etiology (particularly the involvement of herbal medicine), clinical manifestations, disease course, and treatment of MP. Data from 222 patients were collected. Among the 169 patients (76.1%) whose history of herbal medicine use was obtained, 147 (87.0%) used herbal medicines. The use of herbal medicines containing sanshishi (gardenia fruit, Gardenia jasminoides Ellis) was reported in 119 of 147 patients (81.0%). Therefore, the use of herbal medicine containing sanshishi was confirmed in 70.4% of the 169 patients whose history of herbal medicine was obtained. The duration of sanshishi use ranged from 3 to 51 years (mean 13.6 years). Patients who discontinued sanshishi showed a better outcome than those who continued it. The use of herbal medicine containing sanshishi is associated with the etiology of MP. Although it may not be the causative factor, gastroenterologists need to be aware of the potential risk of herbal medicine containing sanshishi for the development of MP.

  13. Foreshock occurrence before large earthquakes

    Science.gov (United States)

    Reasenberg, P.A.

    1999-01-01

    Rates of foreshock occurrence involving shallow M ≥ 6 and M ≥ 7 mainshocks and M ≥ 5 foreshocks were measured in two worldwide catalogs over ~20-year intervals. The overall rates observed are similar to ones measured in previous worldwide and regional studies when they are normalized for the ranges of magnitude difference they each span. The observed worldwide rates were compared to a generic model of earthquake clustering based on patterns of small and moderate aftershocks in California. The aftershock model was extended to the case of moderate foreshocks preceding large mainshocks. Overall, the observed worldwide foreshock rates exceed the extended California generic model by a factor of ~2. Significant differences in foreshock rate were found among subsets of earthquakes defined by their focal mechanism and tectonic region, with the rate before thrust events higher and the rate before strike-slip events lower than the worldwide average. Among the thrust events, a large majority, composed of events located in shallow subduction zones, had a high foreshock rate, while a minority, located in continental thrust belts, had a low rate. These differences may explain why previous surveys have found low foreshock rates among thrust events in California (especially southern California), while the worldwide observations suggest the opposite: California, lacking an active subduction zone in most of its territory, and including a region of mountain-building thrusts in the south, reflects the low rate apparently typical for continental thrusts, while the worldwide observations, dominated by shallow subduction zone events, are foreshock-rich. If this is so, then the California generic model may significantly underestimate the conditional probability for a very large (M ≥ 8) earthquake following a potential (M ≥ 7) foreshock in Cascadia. The magnitude differences among the identified foreshock-mainshock pairs in the Harvard catalog are consistent with a uniform

  14. Designing a Model for Predicting Medical Errors in Outpatient Visits According to Organizational Commitment and Job Involvement

    Directory of Open Access Journals (Sweden)

    SM Mirhosseini

    2015-09-01

    Introduction: A wide range of variables affects medical errors, including job involvement and organizational commitment. The joint relationship of these two variables with medical errors during outpatient visits was investigated in order to design a model. Methods: A field study with 114 physicians during outpatient visits established the mean number of medical errors. The Azimi and Allen-Meyer questionnaires were used to measure job involvement and organizational commitment. Physicians were divided into four groups according to job involvement and organizational commitment in two dimensions (Zone 1: high job involvement and high organizational commitment; Zone 2: high job involvement and low organizational commitment; Zone 3: low job involvement and high organizational commitment; Zone 4: low job involvement and low organizational commitment). ANOVA and Scheffé tests were conducted in SPSS 22 to analyse the medical errors in the four zones. A guideline was developed according to the relationship between errors and the two other variables. Results: The mean organizational commitment was 79.50 ± 12.30 and job involvement 12.72 ± 3.66; the mean medical errors were 0.32 in the first group, 0.51 in the second, 0.41 in the third, and 0.50 in the fourth. The ANOVA (F = 22.20, sig. = 0.00) and Scheffé tests were significant, except between the second and fourth groups. The validity of the model was 73.60%. Conclusion: Applying strategies to boost organizational commitment and job involvement can help diminish medical errors during outpatient visits. An investigation to understand the factors contributing to organizational commitment and job involvement could therefore be helpful.
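
    The reported group comparison is a one-way ANOVA across the four zones followed by Scheffé post-hoc tests; a minimal sketch of the ANOVA step with invented error values (not the study's data):

    ```python
    from scipy import stats

    zone1 = [0.30, 0.34, 0.32]  # high involvement, high commitment
    zone2 = [0.50, 0.52, 0.51]  # high involvement, low commitment
    zone3 = [0.40, 0.42, 0.41]  # low involvement, high commitment
    zone4 = [0.49, 0.51, 0.50]  # low involvement, low commitment

    f_stat, p_value = stats.f_oneway(zone1, zone2, zone3, zone4)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
    ```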

  15. Patterns of failure of diffuse large B-cell lymphoma patients after involved-site radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Holzhaeuser, Eva; Berlin, Maximilian; Bezold, Thomas; Mayer, Arnulf; Schmidberger, Heinz [University Medical Center Mainz, Department of Radiation Oncology and Radiotherapy, Mainz (Germany); Wollschlaeger, Daniel [University Medical Center Mainz, Institute for Medical Biostatistics, Epidemiology and Informatics, Mainz (Germany); Hess, Georg [University Medical Center Mainz, Department of Internal Medicine, Mainz (Germany)

    2017-12-15

    Radiotherapy (RT) in combination with chemoimmunotherapy is highly efficient in the treatment of diffuse large B-cell lymphoma (DLBCL). This retrospective analysis evaluated the efficacy of the treatment volume and the dose concept of involved-site RT (ISRT). We identified 60 histologically confirmed stage I-IV DLBCL patients treated with multimodal cytotoxic chemoimmunotherapy and followed by consolidative ISRT from 2005-2015. Progression-free survival (PFS) and overall survival (OS) were estimated by Kaplan-Meier method. Univariate analyses were performed by log-rank test and Mann-Whitney U-test. After initial chemoimmunotherapy (mostly R-CHOP; rituximab, cyclophosphamide, doxorubicin, vincristine, and prednisolone), 19 (36%) patients achieved complete response (CR), 34 (64%) partial response (PR) or less. Excluded were 7 (12%) patients with progressive disease after chemoimmunotherapy. All patients underwent ISRT with a dose of 40 Gy. After a median follow-up of 44 months, 79% of the patients remained disease free, while 21% presented with failure, progressive systemic disease, or death. All patients who achieved CR after chemoimmunotherapy remained in CR. Of the patients achieving PR after chemotherapy only 2 failed at the initial site within the ISRT volume. No marginal relapse was observed. Ann Arbor clinical stage I/II showed significantly improved PFS compared to stage III/IV (93% vs 65%; p ≤ 0.021). International Prognostic Index (IPI) score of 0 or 1 compared to 2-5 has been associated with significantly increased PFS (100% vs 70%; p ≤ 0.031). Postchemoimmunotherapy status of CR compared to PR was associated with significantly increased PFS (100% vs 68%; p ≤ 0.004) and OS (100% vs 82%; p ≤ 0.026). Only 3 of 53 patients developed grade II late side effects, whereas grade III or IV side effects have not been observed. These data suggest that a reduction of the RT treatment volume from involved-field (IF) to involved-site (IS) is sufficient because

  16. Modelling of large sodium fires: A coupled experimental and calculational approach

    International Nuclear Information System (INIS)

    Astegiano, J.C.; Balard, F.; Cartier, L.; De Pascale, C.; Forestier, A.; Merigot, C.; Roubin, P.; Tenchine, D.; Bakouta, N.

    1996-01-01

    The consequences of large sodium leaks in the secondary circuit of Super-Phenix have been studied mainly with the FEUMIX code, on the basis of sodium fire experiments. This paper presents the status of the coupled AIRBUS (water experiment)-FEUMIX approach under development, intended to strengthen the extrapolation made in the Super-Phenix secondary circuit calculations for large leakage flows. FEUMIX is a point code based on the concept of a global interfacial area between sodium and air. Mass and heat transfers through this global area are assumed to be similar, so the global interfacial transfer coefficient Sih is an important parameter of the model. Correlations for the interfacial area are extracted from a large number of sodium tests. For the studies of hypothetical large sodium leaks in the secondary circuit of Super-Phenix, flow rates of more than 1 t/s have been considered, and extrapolation was made from the existing results (maximum flow rate 225 kg/s). In order to strengthen this extrapolation, a water test was designed on the basis of a thermal-hydraulic similarity. The principle is to measure the interfacial area of a hot water jet in air, to transpose the Sih to sodium without combustion, and to use this value in FEUMIX with combustion modelling. The AIRBUS test section is a parallelepipedic gas-tight tank of 106 m³ (5.7 x 3.7 x 5 m), internally insulated. The water jet is injected into the cell from a heated external auxiliary tank, using a pressurized air tank and a specific valve. The main measurements performed during each test are the injected flow rate, air pressure, water temperature, and gas temperature. A first series of tests was performed in order to qualify the methodology: typical FCA and IGNA sodium fire tests were reproduced in AIRBUS, and a comparison with FEUMIX calculations using Sih values deduced from the water experiments shows satisfactory agreement. A second series of tests at large flow rates, corresponding to a large sodium leak in the secondary circuit of Super

  17. Predicted occurrence rate of severe transportation accidents involving large casks

    International Nuclear Information System (INIS)

    Dennis, A.W.

    1978-01-01

    A summary of the results of an investigation of the severities of highway and railroad accidents as they relate to the shipment of large radioactive materials casks is discussed. The accident environments considered are fire, impact, crash, immersion, and puncture. For each of these environments, the accident severities and their predicted frequencies of occurrence are presented. These accident environments are presented in tabular and graphic form to allow the reader to evaluate the probabilities of occurrence of the accident parameter severities he selects

  18. Social Work Involvement in Advance Care Planning: Findings from a Large Survey of Social Workers in Hospice and Palliative Care Settings.

    Science.gov (United States)

    Stein, Gary L; Cagle, John G; Christ, Grace H

    2017-03-01

    Few data are available describing the involvement and activities of social workers in advance care planning (ACP). We sought to provide data about (1) social worker involvement and leadership in ACP conversations with patients and families; and (2) the extent of functions and activities when these discussions occur. We conducted a large web-based survey of social workers employed in hospice, palliative care, and related settings to explore their role, participation, and self-rated competency in facilitating ACP discussions. Respondents were recruited through the Social Work Hospice and Palliative Care Network and the National Hospice and Palliative Care Organization. Descriptive analyses were conducted on the full sample of respondents (N = 641) and a subsample of clinical social workers (N = 456). Responses were analyzed to explore differences in ACP involvement by practice setting. Most clinical social workers (96%) reported that social workers in their department are conducting ACP discussions with patients/families. Majorities also participate in, and lead, ACP discussions (69% and 60%, respectively). Most respondents report that social workers are responsible for educating patients/families about ACP options (80%) and are the team members responsible for documenting ACP (68%). Compared with other settings, oncology and inpatient palliative care social workers were less likely to be responsible for ensuring that patients/families are informed of ACP options and documenting ACP preferences. Social workers are prominently involved in facilitating, leading, and documenting ACP discussions. Policy-makers, administrators, and providers should incorporate the vital contributions of social work professionals in policies and programs supporting ACP.

  19. An Axiomatic Analysis Approach for Large-Scale Disaster-Tolerant Systems Modeling

    Directory of Open Access Journals (Sweden)

    Theodore W. Manikas

    2011-02-01

    Disaster tolerance in computing and communications systems refers to the ability to maintain a degree of functionality throughout the occurrence of a disaster. We accomplish the incorporation of disaster tolerance within a system by simulating various threats to the system operation and identifying areas for system redesign. Unfortunately, extremely large systems are not amenable to comprehensive simulation studies due to the large computational complexity requirements. To address this limitation, an axiomatic approach that decomposes a large-scale system into smaller subsystems is developed that allows the subsystems to be independently modeled. This approach is implemented using a data communications network system example. The results indicate that the decomposition approach produces simulation responses that are similar to the full system approach, but with greatly reduced simulation time.

  20. Flexible non-linear predictive models for large-scale wind turbine diagnostics

    DEFF Research Database (Denmark)

    Bach-Andersen, Martin; Rømer-Odgaard, Bo; Winther, Ole

    2017-01-01

    We demonstrate how flexible non-linear models can provide accurate and robust predictions on turbine component temperature sensor data using data-driven principles and only a minimum of system modeling. The merits of different model architectures are evaluated using data from a large set...... of turbines operating under diverse conditions. We then go on to test the predictive models in a diagnostic setting, where the output of the models are used to detect mechanical faults in rotor bearings. Using retrospective data from 22 actual rotor bearing failures, the fault detection performance...... of the models are quantified using a structured framework that provides the metrics required for evaluating the performance in a fleet wide monitoring setup. It is demonstrated that faults are identified with high accuracy up to 45 days before a warning from the hard-threshold warning system....
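
    The diagnostic step described, using model outputs to detect faults, typically amounts to thresholding the prediction residual; a hedged sketch in which the signal, window length, and threshold are all invented for illustration:

    ```python
    # Flag times where the rolling mean of (measured - predicted) bearing
    # temperature exceeds a threshold, indicating a developing fault.
    import numpy as np

    def detect_fault(measured, predicted, threshold=2.0, window=144):
        residual = np.asarray(measured) - np.asarray(predicted)
        rolling = np.convolve(residual, np.ones(window) / window, mode="valid")
        return np.flatnonzero(rolling > threshold) + window - 1

    rng = np.random.default_rng(0)
    predicted = 60 + rng.normal(0, 0.5, 1000)           # model output, deg C
    fault = np.where(np.arange(1000) > 700, 3.0, 0.0)   # 3 K offset at t=700
    alarms = detect_fault(predicted + fault, predicted)
    print(alarms[0] if alarms.size else "no alarm")     # alarm soon after onset
    ```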

  1. Model checking methodology for large systems, faults and asynchronous behaviour. SARANA 2011 work report

    International Nuclear Information System (INIS)

    Lahtinen, J.; Launiainen, T.; Heljanko, K.; Ropponen, J.

    2012-01-01

    Digital instrumentation and control (I and C) systems are challenging to verify. They enable complicated control functions, and the state spaces of the models easily become too large for comprehensive verification through traditional methods. Model checking is a formal method that can be used for system verification. A number of efficient model checking systems are available that provide analysis tools to determine automatically whether a given state machine model satisfies the desired safety properties. This report reviews the work performed in the Safety Evaluation and Reliability Analysis of Nuclear Automation (SARANA) project in 2011 regarding model checking. We have developed new, more exact modelling methods that are able to capture the behaviour of a system more realistically. In particular, we have developed more detailed fault models depicting the hardware configuration of a system, and methodology to model function-block-based systems asynchronously. In order to improve the usability of our model checking methods, we have developed an algorithm for model checking large modular systems. The algorithm can be used to verify properties of a model that could otherwise not be verified in a straightforward manner. (orig.)

  2. Model checking methodology for large systems, faults and asynchronous behaviour. SARANA 2011 work report

    Energy Technology Data Exchange (ETDEWEB)

    Lahtinen, J. [VTT Technical Research Centre of Finland, Espoo (Finland); Launiainen, T.; Heljanko, K.; Ropponen, J. [Aalto Univ., Espoo (Finland). Dept. of Information and Computer Science

    2012-07-01

    Digital instrumentation and control (I and C) systems are challenging to verify. They enable complicated control functions, and the state spaces of the models easily become too large for comprehensive verification through traditional methods. Model checking is a formal method that can be used for system verification. A number of efficient model checking systems are available that provide analysis tools to determine automatically whether a given state machine model satisfies the desired safety properties. This report reviews the work performed in the Safety Evaluation and Reliability Analysis of Nuclear Automation (SARANA) project in 2011 regarding model checking. We have developed new, more exact modelling methods that are able to capture the behaviour of a system more realistically. In particular, we have developed more detailed fault models depicting the hardware configuration of a system, and methodology to model function-block-based systems asynchronously. In order to improve the usability of our model checking methods, we have developed an algorithm for model checking large modular systems. The algorithm can be used to verify properties of a model that could otherwise not be verified in a straightforward manner. (orig.)

  3. Absorption and scattering coefficient dependence of laser-Doppler flowmetry models for large tissue volumes

    International Nuclear Information System (INIS)

    Binzoni, T; Leung, T S; Ruefenacht, D; Delpy, D T

    2006-01-01

    Based on quasi-elastic scattering theory (and random walk on a lattice approach), a model of laser-Doppler flowmetry (LDF) has been derived which can be applied to measurements in large tissue volumes (e.g. when the interoptode distance is >30 mm). The model holds for a semi-infinite medium and takes into account the transport-corrected scattering coefficient and the absorption coefficient of the tissue, and the scattering coefficient of the red blood cells. The model holds for anisotropic scattering and for multiple scattering of the photons by the moving scatterers of finite size. In particular, it has also been possible to take into account the simultaneous presence of both Brownian and pure translational movements. An analytical and simplified version of the model has also been derived and its validity investigated, for the case of measurements in human skeletal muscle tissue. It is shown that at large optode spacing it is possible to use the simplified model, taking into account only a 'mean' light pathlength, to predict the blood flow related parameters. It is also demonstrated that the 'classical' blood volume parameter, derived from LDF instruments, may not represent the actual blood volume variations when the investigated tissue volume is large. The simplified model does not need knowledge of the tissue optical parameters and thus should allow the development of very simple and cost-effective LDF hardware

  4. Oligopolistic competition in wholesale electricity markets: Large-scale simulation and policy analysis using complementarity models

    Science.gov (United States)

    Helman, E. Udi

    This dissertation conducts research into the large-scale simulation of oligopolistic competition in wholesale electricity markets. The dissertation has two parts. Part I is an examination of the structure and properties of several spatial, or network, equilibrium models of oligopolistic electricity markets formulated as mixed linear complementarity problems (LCP). Part II is a large-scale application of such models to the electricity system that encompasses most of the United States east of the Rocky Mountains, the Eastern Interconnection. Part I consists of Chapters 1 to 6. The models developed in this part continue research into mixed LCP models of oligopolistic electricity markets initiated by Hobbs [67] and subsequently developed by Metzler [87] and Metzler, Hobbs and Pang [88]. Hobbs' central contribution is a network market model with Cournot competition in generation and a price-taking spatial arbitrage firm that eliminates spatial price discrimination by the Cournot firms. In one variant, the solution to this model is shown to be equivalent to the "no arbitrage" condition in a "pool" market, in which a Regional Transmission Operator optimizes spot sales such that the congestion price between two locations is exactly equivalent to the difference in the energy prices at those locations (commonly known as locational marginal pricing). Extensions to this model are presented in Chapters 5 and 6. One of these is a market model with a profit-maximizing arbitrage firm. This model is structured as a mathematical program with equilibrium constraints (MPEC), but due to the linearity of its constraints, can be solved as a mixed LCP. Part II consists of Chapters 7 to 12. The core of these chapters is a large-scale simulation of the U.S. Eastern Interconnection applying one of the Cournot competition with arbitrage models. This is the first oligopolistic equilibrium market model to encompass the full Eastern Interconnection with a realistic network representation (using
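
    The complementarity format referred to throughout is standard; in the mixed variant some variables are free and are paired with equations, but the core LCP conditions read as below (a generic statement of the structure, with x collecting quantities and prices and the complementarity pairs encoding each firm's optimality conditions plus market clearing; not the dissertation's exact formulation):

    ```latex
    0 \le x \;\perp\; Mx + q \ge 0
    \quad\Longleftrightarrow\quad
    x \ge 0,\;\; Mx + q \ge 0,\;\; x^{\mathsf{T}}(Mx + q) = 0
    ```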

  5. Induction of continuous expanding infrarenal aortic aneurysms in a large porcine animal model

    DEFF Research Database (Denmark)

    Kloster, Brian Ozeraitis; Lund, Lars; Lindholt, Jes S.

    2015-01-01

    Background: A large animal model with a continuously expanding infrarenal aortic aneurysm gives access to a more realistic AAA model with anatomy and physiology similar to humans, and thus allows for new experimental research in the natural history and treatment options of the disease. Methods: 10 pigs

  6. A large-scale multi-species spatial depletion model for overwintering waterfowl

    NARCIS (Netherlands)

    Baveco, J.M.; Kuipers, H.; Nolet, B.A.

    2011-01-01

    In this paper, we develop a model to evaluate the capacity of accommodation areas for overwintering waterfowl, at a large spatial scale. Each day geese are distributed over roosting sites. Based on the energy minimization principle, the birds daily decide which surrounding fields to exploit within

  7. Large-Signal Code TESLA: Improvements in the Implementation and in the Model

    National Research Council Canada - National Science Library

    Chernyavskiy, Igor A; Vlasov, Alexander N; Anderson, Jr., Thomas M; Cooke, Simon J; Levush, Baruch; Nguyen, Khanh T

    2006-01-01

    We describe the latest improvements made in the large-signal code TESLA, which include transformation of the code to a Fortran-90/95 version with dynamical memory allocation and extension of the model...

  8. Simple Model for Simulating Characteristics of River Flow Velocity in Large Scale

    Directory of Open Access Journals (Sweden)

    Husin Alatas

    2015-01-01

    We propose a simple computer-based phenomenological model to simulate the characteristics of river flow velocity on a large scale. We use a Shuttle Radar Topography Mission based digital elevation model in grid form to define the terrain of the catchment area. The model relies on the mass-momentum conservation law and a modified equation of motion of a body falling along an inclined plane. We assume an inelastic collision occurs at every junction of two river branches to describe the dynamics of the merged flow velocity.
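
    The junction rule in the last sentence reduces to momentum conservation over the merging mass fluxes; a minimal sketch with assumed variable names:

    ```python
    def merged_velocity(q1, v1, q2, v2):
        """Inelastic merge of two branches: q1, q2 are mass fluxes (kg/s),
        v1, v2 the branch velocities (m/s)."""
        return (q1 * v1 + q2 * v2) / (q1 + q2)

    print(merged_velocity(100.0, 2.0, 50.0, 1.0))  # ~1.67 m/s
    ```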

  9. Large Scale Computing for the Modelling of Whole Brain Connectivity

    DEFF Research Database (Denmark)

    Albers, Kristoffer Jon

    organization of the brain in continuously increasing resolution. From these images, networks of structural and functional connectivity can be constructed. Bayesian stochastic block modelling provides a prominent data-driven approach for uncovering the latent organization, by clustering the networks into groups...... of neurons. Relying on Markov Chain Monte Carlo (MCMC) simulations as the workhorse in Bayesian inference however poses significant computational challenges, especially when modelling networks at the scale and complexity supported by high-resolution whole-brain MRI. In this thesis, we present how to overcome...... these computational limitations and apply Bayesian stochastic block models for un-supervised data-driven clustering of whole-brain connectivity in full image resolution. We implement high-performance software that allows us to efficiently apply stochastic blockmodelling with MCMC sampling on large complex networks...

  10. Phase-field-based lattice Boltzmann modeling of large-density-ratio two-phase flows

    Science.gov (United States)

    Liang, Hong; Xu, Jiangrong; Chen, Jiangxing; Wang, Huili; Chai, Zhenhua; Shi, Baochang

    2018-03-01

    In this paper, we present a simple and accurate lattice Boltzmann (LB) model for immiscible two-phase flows, which is able to deal with large density contrasts. This model utilizes two LB equations, one of which is used to solve the conservative Allen-Cahn equation, and the other is adopted to solve the incompressible Navier-Stokes equations. A forcing distribution function is elaborately designed in the LB equation for the Navier-Stokes equations, which makes it much simpler than the existing LB models. In addition, the proposed model can achieve superior numerical accuracy compared with previous Allen-Cahn type of LB models. Several benchmark two-phase problems, including static droplet, layered Poiseuille flow, and spinodal decomposition are simulated to validate the present LB model. It is found that the present model can achieve relatively small spurious velocity in the LB community, and the obtained numerical results also show good agreement with the analytical solutions or some available results. Lastly, we use the present model to investigate the droplet impact on a thin liquid film with a large density ratio of 1000 and the Reynolds number ranging from 20 to 500. The fascinating phenomenon of droplet splashing is successfully reproduced by the present model and the numerically predicted spreading radius obeys the power law reported in the literature.
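
    The conservative Allen-Cahn equation targeted by the first LB equation is commonly written in the following form (a standard statement with assumed notation: φ the order parameter, u the velocity, M the mobility, W the interface width, and n̂ = ∇φ/|∇φ|; the paper's exact convention may differ):

    ```latex
    \frac{\partial \phi}{\partial t} + \nabla \cdot (\phi\, \mathbf{u})
      = \nabla \cdot \left[ M \left( \nabla \phi
          - \frac{4\phi(1-\phi)}{W}\, \hat{\mathbf{n}} \right) \right]
    ```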

  11. Finite Mixture Multilevel Multidimensional Ordinal IRT Models for Large Scale Cross-Cultural Research

    Science.gov (United States)

    de Jong, Martijn G.; Steenkamp, Jan-Benedict E. M.

    2010-01-01

    We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups of countries have different measurement operations, while…

  12. Full-Scale Approximations of Spatio-Temporal Covariance Models for Large Datasets

    KAUST Repository

    Zhang, Bohai

    2014-01-01

    Various continuously-indexed spatio-temporal process models have been constructed to characterize spatio-temporal dependence structures, but the computational complexity for model fitting and predictions grows in a cubic order with the size of dataset and application of such models is not feasible for large datasets. This article extends the full-scale approximation (FSA) approach by Sang and Huang (2012) to the spatio-temporal context to reduce computational complexity. A reversible jump Markov chain Monte Carlo (RJMCMC) algorithm is proposed to select knots automatically from a discrete set of spatio-temporal points. Our approach is applicable to nonseparable and nonstationary spatio-temporal covariance models. We illustrate the effectiveness of our method through simulation experiments and application to an ozone measurement dataset.
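
    The FSA decomposition can be illustrated numerically: approximate a dense covariance matrix by a low-rank part induced by a small set of knots plus a tapered short-range residual. The kernel, knot placement, and taper below are illustrative assumptions, not the paper's choices.

    ```python
    # Full-scale-approximation sketch in one spatial dimension.
    import numpy as np

    def exp_cov(x, y, scale=0.3):
        return np.exp(-np.abs(x[:, None] - y[None, :]) / scale)

    n, m = 500, 20
    s = np.linspace(0, 1, n)            # observation locations
    knots = np.linspace(0, 1, m)        # knot locations

    C_sk = exp_cov(s, knots)
    C_kk = exp_cov(knots, knots)
    C_lowrank = C_sk @ np.linalg.solve(C_kk, C_sk.T)   # low-rank part

    # Taper the residual so the correction is sparse (short-range only).
    residual = exp_cov(s, s) - C_lowrank
    taper = np.clip(1 - np.abs(s[:, None] - s[None, :]) / 0.05, 0, 1)
    C_fsa = C_lowrank + residual * taper

    print(f"max error: {np.abs(C_fsa - exp_cov(s, s)).max():.3f}")
    ```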

  13. RELAP5 choked flow model and application to a large scale flow test

    International Nuclear Information System (INIS)

    Ransom, V.H.; Trapp, J.A.

    1980-01-01

    The RELAP5 code was used to simulate a large scale choked flow test. The fluid system used in the test was modeled in RELAP5 using a uniform, but coarse, nodalization. The choked mass discharge rate was calculated using the RELAP5 choked flow model. The calculations were in good agreement with the test data, and the flow was calculated to be near thermal equilibrium

  14. Cloud-enabled large-scale land surface model simulations with the NASA Land Information System

    Science.gov (United States)

    Duffy, D.; Vaughan, G.; Clark, M. P.; Peters-Lidard, C. D.; Nijssen, B.; Nearing, G. S.; Rheingrover, S.; Kumar, S.; Geiger, J. V.

    2017-12-01

    Developed by the Hydrological Sciences Laboratory at NASA Goddard Space Flight Center (GSFC), the Land Information System (LIS) is a high-performance software framework for terrestrial hydrology modeling and data assimilation. LIS provides the ability to integrate satellite and ground-based observational products and advanced modeling algorithms to extract land surface states and fluxes. Through a partnership with the National Center for Atmospheric Research (NCAR) and the University of Washington, the LIS model is currently being extended to include the Structure for Unifying Multiple Modeling Alternatives (SUMMA). With the addition of SUMMA in LIS, meaningful simulations containing a large multi-model ensemble will be enabled and can provide advanced probabilistic continental-domain modeling capabilities at spatial scales relevant for water managers. The resulting LIS/SUMMA application framework is difficult for non-experts to install due to the large number of dependencies on specific versions of operating systems, libraries, and compilers. This has created a significant barrier to entry for domain scientists who are interested in using the software on their own systems or in the cloud. In addition, the requirement to support multiple runtime environments across the LIS community has created a significant burden on the NASA team. To overcome these challenges, LIS/SUMMA has been deployed using Linux containers, which allow an entire software package along with all dependencies to be installed within a working runtime environment, and Kubernetes, which orchestrates the deployment of a cluster of containers. Within a cloud environment, users can now easily create a cluster of virtual machines and run large-scale LIS/SUMMA simulations. Installations that previously took weeks or months can now be performed in minutes. This presentation will discuss the steps required to create a cloud-enabled large-scale simulation, present examples of its use, and

  15. Field theory of large amplitude collective motion. A schematic model

    International Nuclear Information System (INIS)

    Reinhardt, H.

    1978-01-01

    By using path integral methods the equation for large amplitude collective motion for a schematic two-level model is derived. The original fermion theory is reformulated in terms of a collective (Bose) field. The classical equation of motion for the collective field coincides with the time-dependent Hartree-Fock equation. Its classical solution is quantized by means of the field-theoretical generalization of the WKB method. (author)

  16. Laboratory astrophysics. Model experiments of astrophysics with large-scale lasers

    International Nuclear Information System (INIS)

    Takabe, Hideaki

    2012-01-01

    I would like to review the model experiment of astrophysics with high-power, large-scale lasers constructed mainly for laser nuclear fusion research. The four research directions of this new field named 'Laser Astrophysics' are described with four examples mainly promoted in our institute. The description is of magazine style so as to be easily understood by non-specialists. A new theory and its model experiment on the collisionless shock and particle acceleration observed in supernova remnants (SNRs) are explained in detail and its result and coming research direction are clarified. In addition, the vacuum breakdown experiment to be realized with the near future ultra-intense laser is also introduced. (author)

  17. Systematic methods for defining coarse-grained maps in large biomolecules.

    Science.gov (United States)

    Zhang, Zhiyong

    2015-01-01

    Large biomolecules are involved in many important biological processes. It would be difficult to use large-scale atomistic molecular dynamics (MD) simulations to study the functional motions of these systems because of the computational expense. Therefore various coarse-grained (CG) approaches have attracted rapidly growing interest, which enable simulations of large biomolecules over longer effective timescales than all-atom MD simulations. The first issue in CG modeling is to construct CG maps from atomic structures. In this chapter, we review the recent development of a novel and systematic method for constructing CG representations of arbitrarily complex biomolecules, in order to preserve large-scale and functionally relevant essential dynamics (ED) at the CG level. In this ED-CG scheme, the essential dynamics can be characterized by principal component analysis (PCA) on a structural ensemble, or elastic network model (ENM) of a single atomic structure. Validation and applications of the method cover various biological systems, such as multi-domain proteins, protein complexes, and even biomolecular machines. The results demonstrate that the ED-CG method may serve as a very useful tool for identifying functional dynamics of large biomolecules at the CG level.
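
    The PCA route to defining essential dynamics can be sketched in a few lines: diagonalize the covariance of an aligned structural ensemble and keep the dominant modes that a CG map should preserve. The ensemble below is a toy stand-in for the ED-CG inputs.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_frames, n_atoms = 200, 50
    xyz = rng.normal(size=(n_frames, 3 * n_atoms))        # toy aligned ensemble
    xyz[:, 0] += 5 * np.sin(np.linspace(0, 6, n_frames))  # one dominant slow mode

    X = xyz - xyz.mean(axis=0)              # fluctuations about the mean
    cov = X.T @ X / (n_frames - 1)          # covariance of atomic coordinates
    eigvals, eigvecs = np.linalg.eigh(cov)  # essential modes = top eigenvectors
    frac = eigvals[-1] / eigvals.sum()      # eigh sorts eigenvalues ascending
    print(f"variance in the first essential mode: {frac:.2f}")
    ```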

  18. Large-eddy simulation of ethanol spray combustion using a finite-rate combustion model

    Energy Technology Data Exchange (ETDEWEB)

    Li, K.; Zhou, L.X. [Tsinghua Univ., Beijing (China). Dept. of Engineering Mechanics; Chan, C.K. [Hong Kong Polytechnic Univ. (China). Dept. of Applied Mathematics

    2013-07-01

    Large-eddy simulation of spray combustion is developing rapidly, but its combustion models are not well validated against detailed experimental data. In this paper, large-eddy simulation of ethanol-air spray combustion was made using an Eulerian-Lagrangian approach, a subgrid-scale kinetic energy stress model, and a finite-rate combustion model. The simulation results are validated in detail against experiments. The statistically averaged temperature obtained by LES is in agreement with the experimental results in most regions. The instantaneous LES results show the coherent structures of the shear region near the high-temperature flame zone and the fuel vapor concentration map, indicating that the droplets are concentrated in this shear region. The droplet sizes are found to be in the range of 20-100 μm. The instantaneous temperature map shows the close interaction between the coherent structures and the combustion reaction.

  19. Large-x dependence of νW2 in the generalized vector-dominance model

    International Nuclear Information System (INIS)

    Argyres, E.N.; Lam, C.S.

    1977-01-01

    It is well known that the usual generalized vector-meson-dominance (GVMD) model gives too large a contribution to νW2 for large x. Various heuristic modifications, for example making use of the t_min effect, have been proposed in order to achieve a reduction of this contribution. In this paper we examine within the GVMD context whether such reductions can rigorously be achieved. This is done utilizing a potential as well as a relativistic eikonal model. We find that whereas a reduction equivalent to that of t_min can be arranged in vector-meson photoproduction, the same is not true for virtual-photon Compton scattering in such diagonal models. The reason for this difference is discussed in detail. Finally we show that the desired reduction can be obtained if nondiagonal vector-meson scattering terms are properly taken into account

  20. Electoral Proximity and the Political Involvement of Bureaucrats: A Natural Experiment in Argentina, 1904

    Directory of Open Access Journals (Sweden)

    Valentín Figueroa

    2016-01-01

    In this paper, I use a slightly modified version of the Becker–Stigler model of corrupt behavior to explain bureaucratic political involvement. Since bureaucrats prefer higher rewards and prefer not to support losing candidates, we expect them to become politically involved near elections – when rewards are expected to be higher and information more abundant. Taking advantage of a natural experiment, I employ differences-in-means and differences-in-differences techniques to estimate the effect of electoral proximity on the political involvement of justices of the peace in the city of Buenos Aires in 1904. I find a large, positive, and highly local effect of electoral proximity on their political involvement, with no appreciable impact in the months before or after elections.
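
    The differences-in-differences estimator mentioned compares the change in involvement across the election window between a treated and a control group of justices; a toy sketch with invented numbers:

    ```python
    import numpy as np

    # Mean involvement rates: columns are (away from election, near election).
    control = np.array([0.10, 0.12])   # comparison group of justices
    treated = np.array([0.11, 0.25])   # group exposed to the treatment

    did = (treated[1] - treated[0]) - (control[1] - control[0])
    print(f"difference-in-differences estimate: {did:.2f}")  # 0.12
    ```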

  1. Modelling large scale human activity in San Francisco

    Science.gov (United States)

    Gonzalez, Marta

    2010-03-01

    A diverse group of people with a wide variety of schedules, activities and travel needs compose our cities nowadays. This represents a big challenge for modeling travel behavior in urban environments; such models are of crucial interest for a wide variety of applications such as traffic forecasting, the spreading of viruses, or measuring human exposure to air pollutants. The traditional means of obtaining knowledge about travel behavior is limited to surveys on travel journeys. The information obtained is based on questionnaires that are usually costly to implement, with intrinsic limitations in covering large numbers of individuals and some problems of reliability. Using mobile phone data, we explore the basic characteristics of a model of human travel: the distribution of agents is proportional to the population density of a given region, and each agent has a characteristic trajectory size containing information on the frequency of visits to different locations. Additionally, we use a complementary data set given by smart subway fare cards, offering us information about the exact time each passenger enters or exits a subway station and the coordinates of that station. This allows us to uncover the temporal aspects of mobility. Since we have the actual time and place of each individual's origin and destination, we can understand the temporal patterns of each visited location in further detail. Integrating the two described data sets, we provide a dynamical model of human travel that incorporates different aspects observed empirically.

  2. Mechanical test of the model coil wound with large conductor

    International Nuclear Information System (INIS)

    Hiue, Hisaaki; Sugimoto, Makoto; Nakajima, Hideo; Yasukawa, Yukio; Yoshida, Kiyoshi; Hasegawa, Mitsuru; Ito, Ikuo; Konno, Masayuki.

    1992-09-01

    The high rigidity and strength of the winding pack are required to realize large superconducting magnets for fusion reactors. This paper describes mechanical tests concerning the rigidity of the winding pack. Samples were prepared to evaluate the adhesive strength between conductors and insulators. Epoxy and bismaleimide-triazine resin (BT resin) were used as the conductor insulation. Stainless steel (SS) 304 bars, whose surfaces were treated mechanically and chemically, were used as the model conductors. The model coil was wound with the model conductors covered with the insulation, finished with a ground insulation. A winding model combining 3 x 3 conductors was produced for measuring the shearing rigidity. The sample was loaded with a pure shearing force at LN2 temperature. The bending rigidity was measured on a bar winding sample of 8 x 6 conductors. These three-point bending tests were carried out at room temperature. The pancake winding sample was loaded with compressive forces to measure the compressive rigidity of the winding. (author)

  3. Portraiture of constructivist parental involvement: A model to develop a community of practice

    Science.gov (United States)

    Dignam, Christopher Anthony

    This qualitative research study addressed the problem of the lack of parental involvement in secondary school science. Increasing parental involvement is vital in supporting student academic achievement and social growth. The purpose of this emergent phenomenological study was to identify conditions required to successfully construct a supportive learning environment to form partnerships between students, parents, and educators. The overall research question in this study investigated the conditions necessary to successfully enlist parental participation with students during science inquiry investigations at the secondary school level. One hundred thirteen pairs of parents and students engaged in a 6-week scientific inquiry activity and recorded attitudinal data in dialogue journals, questionnaires, open-ended surveys, and during one-on-one interviews conducted by the researcher between individual parents and students. Comparisons and cross-interpretations of inter-rater, codified, triangulated data were utilized for identifying emergent themes. Data analysis revealed that the active involvement of parents in researching with their child during inquiry investigations, engaging in journaling, and assessing student performance fostered partnerships among students, parents, and educators and supported students' social skills development. The resulting model, employing constructivist leadership and enlisting parent involvement, provides conditions and strategies required to develop a community of practice that can help effect social change. The active involvement of parents fostered improved efficacy and a holistic mindset in parents, students, and teachers. Based on these findings, the interactive collaboration of parents in science learning activities can proactively facilitate a community of practice that will assist educators in facilitating social change.

  4. District nurses' involvement in mental health: an exploratory survey.

    Science.gov (United States)

    Lee, Soo; Knight, Denise

    2006-04-01

    This article reports on a survey of district nurses' (DNs') involvement in mental health interventions in one county. Seventy-nine questionnaires were sent and 46 were returned. Descriptive analysis was carried out using statistical software. The DNs reported encountering a wide range of mental health issues and interventions in practice: dementia, anxiety and depression featured highly. Over half (55%) of the respondents reported involvement in bereavement counselling, while 28% and 23% of respondents reported encountering anxiety management, and problem solving and alcohol advice, respectively. A large proportion, however, reported no involvement in mental health interventions. Among the psychiatric professionals, district nurses tended to have most frequent contact with social workers. GPs were the professionals to whom DNs most often made referrals, followed by community psychiatric nurses. Despite an apparent awareness of the value of psychosocial interventions, DNs were equally influenced by the medical model of treatment. In order to realize the potential contribution of district nurses in mental health interventions, there is a need for primary care teams to foster a closer working relationship with mental health specialist services.

  5. Instantons and Large N

    Science.gov (United States)

    Mariño, Marcos

    2015-09-01

    Preface; Part I. Instantons: 1. Instantons in quantum mechanics; 2. Unstable vacua in quantum field theory; 3. Large order behavior and Borel summability; 4. Non-perturbative aspects of Yang-Mills theories; 5. Instantons and fermions; Part II. Large N: 6. Sigma models at large N; 7. The 1/N expansion in QCD; 8. Matrix models and matrix quantum mechanics at large N; 9. Large N QCD in two dimensions; 10. Instantons at large N; Appendix A. Harmonic analysis on S3; Appendix B. Heat kernel and zeta functions; Appendix C. Effective action for large N sigma models; References; Author index; Subject index.

  6. Prospectus: towards the development of high-fidelity models of wall turbulence at large Reynolds number.

    Science.gov (United States)

    Klewicki, J C; Chini, G P; Gibson, J F

    2017-03-13

    Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier-Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  7. Prospectus: towards the development of high-fidelity models of wall turbulence at large Reynolds number

    Science.gov (United States)

    Klewicki, J. C.; Chini, G. P.; Gibson, J. F.

    2017-01-01

    Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier–Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted. This article is part of the themed issue ‘Toward the development of high-fidelity models of wall turbulence at large Reynolds number’. PMID:28167585

  8. Lessons from the Large Hadron Collider for model-based experimentation : the concept of a model of data acquisition and the scope of the hierarchy of models

    NARCIS (Netherlands)

    Karaca, Koray

    2017-01-01

    According to the hierarchy of models (HoM) account of scientific experimentation developed by Patrick Suppes and elaborated by Deborah Mayo, theoretical considerations about the phenomena of interest are involved in an experiment through theoretical models that in turn relate to experimental data.

  9. The Oncopig Cancer Model: An Innovative Large Animal Translational Oncology Platform

    DEFF Research Database (Denmark)

    Schachtschneider, Kyle M.; Schwind, Regina M.; Newson, Jordan

    2017-01-01

    -the Oncopig Cancer Model (OCM)-as a next-generation large animal platform for the study of hematologic and solid tumor oncology. With mutations in key tumor suppressor and oncogenes, TP53R167H and KRASG12D , the OCM recapitulates transcriptional hallmarks of human disease while also exhibiting clinically...

  10. Childhood craniopharyngioma: greater hypothalamic involvement before surgery is associated with higher homeostasis model insulin resistance index

    Directory of Open Access Journals (Sweden)

    Sainte-Rose Christian

    2009-04-01

    Full Text Available Abstract Background: Obesity seems to be linked to the hypothalamic involvement in craniopharyngioma. We evaluated the pre-surgery relationship between the degree of this involvement on magnetic resonance imaging and insulin resistance, as evaluated by the homeostasis model insulin resistance index (HOMA). As insulin-like growth factor 1, leptin, soluble leptin receptor (sOB-R) and ghrelin may also be involved, we compared their plasma concentrations and their link to weight change. Methods: 27 children with craniopharyngioma were classified as either grade 0 (n = 7, no hypothalamic involvement), grade 1 (n = 8, compression without involvement), or grade 2 (n = 12, severe involvement). Results: Despite having similar body mass indexes (BMI), the grade 2 patients had higher glucose, insulin and HOMA before surgery than the grade 0 patients (P = 0.02). The data for the whole population before and 6–18 months after surgery showed increases in BMI. Conclusion: The hypothalamic involvement by the craniopharyngioma before surgery seems to determine the degree of insulin resistance, regardless of the BMI. The pre-surgery HOMA values were correlated with the post-surgery weight gain. This suggests that obesity should be prevented by reducing insulin secretion in those cases with hypothalamic involvement.
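
    The HOMA index used in this record is conventionally computed from fasting glucose and insulin. A minimal sketch of the standard formula; the example values below are illustrative and not taken from the study:

    ```python
    def homa_ir(glucose_mmol_l: float, insulin_uU_ml: float) -> float:
        """Homeostasis model assessment of insulin resistance (HOMA-IR):
        fasting glucose (mmol/L) x fasting insulin (microU/mL) / 22.5."""
        return glucose_mmol_l * insulin_uU_ml / 22.5

    # Illustrative values only (not data from the study)
    print(f"HOMA-IR = {homa_ir(glucose_mmol_l=5.0, insulin_uU_ml=12.0):.2f}")  # ~2.67
    ```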

  11. One Patient, Two Uncommon B-Cell Neoplasms: Solitary Plasmacytoma following Complete Remission from Intravascular Large B-Cell Lymphoma Involving Central Nervous System

    Directory of Open Access Journals (Sweden)

    Joycelyn Lee

    2014-01-01

    Full Text Available Second lymphoid neoplasms are an uncommon but recognized feature of non-Hodgkin's lymphomas, putatively arising secondary to common genetic or environmental risk factors. Previous limited evaluations of clonal relatedness between successive mature B-cell malignancies have yielded mixed results. We describe the case of a man with intravascular large B-cell lymphoma involving the central nervous system who went into clinical remission following immunochemotherapy and brain radiation, only to relapse 2 years later with a plasmacytoma of bone causing cauda equina syndrome. The plasmacytoma stained strongly for the cell cycle regulator cyclin D1 on immunohistochemistry, while the original intravascular large cell lymphoma was negative, a disparity providing no support for clonal identity between the 2 neoplasms. Continued efforts at cataloging and evaluating unique associations of B-cell malignancies are critical to improving understanding of overarching disease biology in B-cell malignancies.

  12. A 2D nonlinear multiring model for blood flow in large elastic arteries

    Science.gov (United States)

    Ghigo, Arthur R.; Fullana, Jose-Maria; Lagrée, Pierre-Yves

    2017-12-01

    In this paper, we propose a two-dimensional nonlinear "multiring" model to compute blood flow in axisymmetric elastic arteries. This model is designed to overcome the numerical difficulties of three-dimensional fluid-structure interaction simulations of blood flow without using the over-simplifications necessary to obtain one-dimensional blood flow models. This multiring model is derived by integrating over concentric rings of fluid the simplified long-wave Navier-Stokes equations coupled to an elastic model of the arterial wall. The resulting system of balance laws provides a unified framework in which both the motion of the fluid and the displacement of the wall are dealt with simultaneously. The mathematical structure of the multiring model allows us to use a finite volume method that guarantees the conservation of mass and the positivity of the numerical solution and can deal with nonlinear flows and large deformations of the arterial wall. We show that the finite volume numerical solution of the multiring model provides at a reasonable computational cost an asymptotically valid description of blood flow velocity profiles and other averaged quantities (wall shear stress, flow rate, ...) in large elastic and quasi-rigid arteries. In particular, we validate the multiring model against well-known solutions such as the Womersley or the Poiseuille solutions as well as against steady boundary layer solutions in quasi-rigid constricted and expanded tubes.
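
    The Poiseuille solution used above as a validation target has a simple closed form, which makes it convenient for cross-checking a numerical velocity profile. A minimal sketch with illustrative parameter values (not those of the paper):

    ```python
    import numpy as np

    def poiseuille_profile(r, R, dpdx, mu):
        """Steady axisymmetric Poiseuille velocity profile in a rigid tube:
        u(r) = (-dp/dx) * (R**2 - r**2) / (4 * mu)."""
        return (-dpdx) * (R**2 - r**2) / (4.0 * mu)

    # Illustrative parameters (SI units), not taken from the paper
    R, mu, dpdx = 0.01, 4e-3, -50.0   # radius [m], viscosity [Pa s], pressure gradient [Pa/m]
    r = np.linspace(0.0, R, 101)
    u = poiseuille_profile(r, R, dpdx, mu)

    # Analytical flow rate, useful as a mass-conservation cross-check
    Q = np.pi * (-dpdx) * R**4 / (8.0 * mu)
    print(f"centerline velocity = {u[0]:.4f} m/s, flow rate = {Q:.6e} m^3/s")
    ```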

  13. A simple transferable adaptive potential to study phase separation in large-scale xMgO-(1-x)SiO2 binary glasses.

    Science.gov (United States)

    Bidault, Xavier; Chaussedent, Stéphane; Blanc, Wilfried

    2015-10-21

    A simple transferable adaptive model is developed that allows, for the first time, molecular dynamics simulation of the separation of large phases in the MgO-SiO2 binary system, as experimentally observed and as predicted by the phase diagram, meaning that the separated phases have various compositions. This is a real improvement over fixed-charge models, which are often limited to an interpretation involving the formation of pure clusters, or involving the modified random network model. Our adaptive model, efficient at reproducing known crystalline and glassy structures, allows us to track the formation of large amorphous Mg-rich Si-poor nanoparticles in an Mg-poor Si-rich matrix from a 0.1MgO-0.9SiO2 melt.

  14. Dynamic subgrid scale model of large eddy simulation of cross bundle flows

    International Nuclear Information System (INIS)

    Hassan, Y.A.; Barsamian, H.R.

    1996-01-01

    The dynamic subgrid scale closure model of Germano et al. (1991) is used in the large eddy simulation code GUST for incompressible isothermal flows. Tube bundle geometries of staggered and non-staggered arrays are considered in deep bundle simulations. The advantage of the dynamic subgrid scale model is that it requires no input model coefficient: the model coefficient is evaluated dynamically for each nodal location in the flow domain. Dynamic subgrid scale results are obtained in the form of power spectral densities and flow visualization of turbulent characteristics. Comparisons are performed among the dynamic subgrid scale model, the Smagorinsky eddy viscosity model (which is used as the base model for the dynamic subgrid scale model) and available experimental data. Spectral results of the dynamic subgrid scale model correlate better with the experimental data. Satisfactory turbulence characteristics are observed through flow visualization.
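
    The dynamic evaluation of the coefficient referred to here is commonly done with the Germano identity closed by Lilly's least-squares minimization. One standard form is sketched below; sign conventions vary between references, and the textbook notation here is not necessarily that of the paper:

    ```latex
    % Germano identity (resolved stresses) and Lilly's least-squares coefficient.
    % Bars denote the grid filter, hats the test filter; angle brackets denote
    % averaging over homogeneous directions or a local neighborhood.
    \begin{align*}
      L_{ij} &= \widehat{\bar{u}_i \bar{u}_j} - \hat{\bar{u}}_i \hat{\bar{u}}_j \\
      M_{ij} &= 2\Delta^2 \left( \widehat{|\bar{S}|\,\bar{S}_{ij}}
                - \alpha^2 |\hat{\bar{S}}|\,\hat{\bar{S}}_{ij} \right),
                \qquad \alpha = \hat{\Delta}/\Delta \\
      C_s^2 &= \frac{\langle L_{ij} M_{ij} \rangle}{\langle M_{ij} M_{ij} \rangle}
    \end{align*}
    ```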

  15. QUAL-NET, a high temporal-resolution eutrophication model for large hydrographic networks

    Science.gov (United States)

    Minaudo, Camille; Curie, Florence; Jullian, Yann; Gassama, Nathalie; Moatar, Florentina

    2018-04-01

    To allow climate change impact assessment of water quality in river systems, the scientific community lacks efficient deterministic models able to simulate hydrological and biogeochemical processes in drainage networks at the regional scale, with high temporal resolution and water temperature explicitly determined. The model QUALity-NETwork (QUAL-NET) was developed and tested on the Middle Loire River Corridor, a sub-catchment of the Loire River in France, prone to eutrophication. Hourly variations computed efficiently by the model helped disentangle the complex interactions existing between hydrological and biological processes across different timescales. Phosphorus (P) availability was the most constraining factor for phytoplankton development in the Loire River, but simulating bacterial dynamics in QUAL-NET surprisingly evidenced large amounts of organic matter recycled within the water column through the microbial loop, which delivered significant fluxes of available P and enhanced phytoplankton growth. This explained why severe blooms still occur in the Loire River despite large P input reductions since 1990. QUAL-NET could be used to study past evolutions or predict future trajectories under climate change and land use scenarios.

  16. QUAL-NET, a high temporal-resolution eutrophication model for large hydrographic networks

    Directory of Open Access Journals (Sweden)

    C. Minaudo

    2018-04-01

    Full Text Available To allow climate change impact assessment of water quality in river systems, the scientific community lacks efficient deterministic models able to simulate hydrological and biogeochemical processes in drainage networks at the regional scale, with high temporal resolution and water temperature explicitly determined. The model QUALity-NETwork (QUAL-NET) was developed and tested on the Middle Loire River Corridor, a sub-catchment of the Loire River in France, prone to eutrophication. Hourly variations computed efficiently by the model helped disentangle the complex interactions existing between hydrological and biological processes across different timescales. Phosphorus (P) availability was the most constraining factor for phytoplankton development in the Loire River, but simulating bacterial dynamics in QUAL-NET surprisingly evidenced large amounts of organic matter recycled within the water column through the microbial loop, which delivered significant fluxes of available P and enhanced phytoplankton growth. This explained why severe blooms still occur in the Loire River despite large P input reductions since 1990. QUAL-NET could be used to study past evolutions or predict future trajectories under climate change and land use scenarios.

  17. Modifying a dynamic global vegetation model for simulating large spatial scale land surface water balance

    Science.gov (United States)

    Tang, G.; Bartlein, P. J.

    2012-01-01

    Water balance models of simple structure are easier to grasp and more clearly connect cause and effect than models of complex structure. Such models are essential for studying large spatial scale land surface water balance in the context of climate and land cover change, both natural and anthropogenic. This study aims to (i) develop a large spatial scale water balance model by modifying a dynamic global vegetation model (DGVM), and (ii) test the model's performance in simulating actual evapotranspiration (ET), soil moisture and surface runoff for the coterminous United States (US). Toward these ends, we first introduce the development of the "LPJ-Hydrology" (LH) model, which incorporates satellite-based land covers into the Lund-Potsdam-Jena (LPJ) DGVM instead of simulating them dynamically. We then ran LH using historical (1982-2006) climate data and satellite-based land covers at 2.5 arc-min grid cells. The simulated ET, soil moisture and surface runoff were compared to existing sets of observed or simulated data for the US. The results indicated that LH captures well the variation of monthly actual ET (R2 = 0.61), soil moisture (R2 = 0.46) and surface runoff (R2 = 0.52) against observed values over the years 1982-2006. The modeled spatial patterns of annual ET and surface runoff are in accordance with previously published data. Compared to its predecessor, LH simulates monthly stream flow in winter and early spring better by incorporating the effects of solar radiation on snowmelt. Overall, this study proves the feasibility of incorporating satellite-based land covers into a DGVM for simulating large spatial scale land surface water balance. LH, as developed in this study, should be a useful tool for studying the effects of climate and land cover change on land surface hydrology at large spatial scales.
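
    As a flavor of what a water balance model "of simple structure" looks like, a minimal single-bucket sketch is shown below. The capacity, initial storage, and forcing series are illustrative assumptions and are unrelated to LH or the US data sets:

    ```python
    import numpy as np

    def bucket_water_balance(precip, pet, capacity=150.0, s0=75.0):
        """Minimal monthly bucket model: soil store S is filled by precipitation,
        depleted by evapotranspiration (capped by available water), and spills
        the excess over `capacity` as runoff. All fluxes in mm/month."""
        s, et_out, runoff_out = s0, [], []
        for p, e in zip(precip, pet):
            et = min(e, s + p)          # actual ET limited by available water
            s = s + p - et
            q = max(0.0, s - capacity)  # excess over capacity becomes runoff
            s -= q
            et_out.append(et)
            runoff_out.append(q)
        return np.array(et_out), np.array(runoff_out)

    # Illustrative forcing (mm/month), not actual US data
    precip = np.array([80, 70, 90, 60, 50, 30, 20, 25, 40, 70, 90, 100])
    pet    = np.array([20, 25, 40, 60, 90, 120, 140, 130, 90, 50, 30, 20])
    et, runoff = bucket_water_balance(precip, pet)
    print("annual ET =", et.sum(), "mm; annual runoff =", runoff.sum(), "mm")
    ```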

  18. The implementation of clay modeling and rat dissection into the human anatomy and physiology curriculum of a large urban community college.

    Science.gov (United States)

    Haspel, Carol; Motoike, Howard K; Lenchner, Erez

    2014-01-01

    After a considerable amount of research and experimentation, cat dissection was replaced with rat dissection and clay modeling in the human anatomy and physiology laboratory curricula at La Guardia Community College (LAGCC), a large urban community college of the City University of New York (CUNY). This article describes the challenges faculty overcame and the techniques used to solve them. Methods involved were: developing a laboratory manual in conjunction with the publisher, holding training sessions for faculty and staff, the development of instructional outlines for students and lesson plans for faculty, the installation of storage facilities to hold mannequins instead of cat specimens, and designing mannequin clean-up techniques that could be used by more than one thousand students each semester. The effectiveness of these curricular changes was assessed by examining student muscle practical examination grades and the responses of faculty and students to questionnaires. The results demonstrated that the majority of faculty felt prepared to teach using clay modeling and believed the activity was effective in presenting lesson content. Students undertaking clay modeling had significantly higher muscle practical examination grades than students undertaking cat dissection, and the majority of students believed that clay modeling was an effective technique to learn human skeletal, respiratory, and cardiovascular anatomy, which included the names and locations of blood vessels. Furthermore, the majority of students felt that rat dissection helped them learn nervous, digestive, urinary, and reproductive system anatomy. Faculty experience at LAGCC may serve as a resource to other academic institutions developing new curricula for large, on-going courses. © 2013 American Association of Anatomists.

  19. Nonlinear model and attitude dynamics of flexible spacecraft with large amplitude slosh

    Science.gov (United States)

    Deng, Mingle; Yue, Baozeng

    2017-04-01

    This paper focuses on the nonlinear modelling and attitude dynamics of a flexible spacecraft coupled with large amplitude liquid sloshing dynamics and flexible appendage vibration. The large amplitude fuel slosh dynamics is included by using an improved moving pulsating ball model. The moving pulsating ball model is an equivalent mechanical model that is capable of imitating the whole liquid reorientation process. A modification is introduced in the capillary force computation in order to more precisely estimate the settling location of liquid in a microgravity or zero-g environment. The flexible appendage is modelled as a three dimensional Bernoulli-Euler beam, and the assumed modes method is employed to derive the nonlinear mechanical model for the overall coupled system of liquid filled spacecraft with appendage. The attitude maneuver is implemented by the momentum transfer technique, and a feedback controller is designed. The simulation results show that the liquid sloshing can always result in nutation behavior, but the effect of flexible deformation of the appendage depends on the amplitude and direction of the attitude maneuver performed by the spacecraft. Moreover, it is found that the liquid sloshing and the vibration of the flexible appendage are coupled with each other, and the coupling becomes more significant with more rapid motion of the spacecraft. This study reveals that the appendage's flexibility has influence on the liquid's location and settling time in microgravity. The presented nonlinear system model can provide an important reference for the overall design of modern spacecraft composed of a rigid platform, liquid filled tanks and flexible appendages.

  20. Design and modelling of innovative machinery systems for large ships

    DEFF Research Database (Denmark)

    Larsen, Ulrik

    Eighty percent of the growing global merchandise trade is transported by sea. The shipping industry is required to reduce the pollution and increase the energy efficiency of ships in the near future. There is a relatively large potential for approaching these requirements by implementing waste heat...... consisting of a two-zone combustion and NOx emission model, a double Wiebe heat release model, the Redlich-Kwong equation of state and the Woschni heat loss correlation. A novel methodology is presented and used to determine the optimum organic Rankine cycle process layout, working fluid and process......, are evaluated with regards to the fuel consumption and NOx emissions trade-off. The results of the calibration and validation of the engine model suggest that the main performance parameters can be predicted with adequate accuracies for the overall purpose. The results of the ORC and the Kalina cycle...
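
    The double Wiebe heat release model named in this record superimposes two Wiebe burned-mass-fraction curves, one for the premixed and one for the diffusion burn phase. A minimal sketch; all parameter values (including the premixed fraction beta and the conventional efficiency factor a = 6.908) are illustrative, not calibrated engine values:

    ```python
    import numpy as np

    def wiebe(theta, theta0, dtheta, m, a=6.908):
        """Single Wiebe burned-mass fraction; a = 6.908 gives ~99.9% burn
        completion over the duration dtheta (crank-angle degrees)."""
        x = np.clip((theta - theta0) / dtheta, 0.0, None)
        return 1.0 - np.exp(-a * x**(m + 1.0))

    def double_wiebe(theta, beta, premixed, diffusion):
        """Double Wiebe: weighted sum of a premixed and a diffusion burn phase.
        beta is the premixed fraction; each phase is a (theta0, dtheta, m) tuple."""
        return beta * wiebe(theta, *premixed) + (1.0 - beta) * wiebe(theta, *diffusion)

    # Illustrative parameters (crank-angle degrees relative to TDC)
    theta = np.linspace(-20.0, 120.0, 281)
    xb = double_wiebe(theta, beta=0.3, premixed=(-5.0, 15.0, 2.0), diffusion=(0.0, 60.0, 1.0))
    dxb = np.gradient(xb, theta)   # normalized heat release rate [1/deg]
    print(f"burned fraction at 60 deg ATDC: {xb[np.searchsorted(theta, 60.0)]:.3f}")
    ```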

  1. Endolymphatic sac involvement in bacterial meningitis

    DEFF Research Database (Denmark)

    Møller, Martin Nue; Brandt, Christian; Østergaard, Christian

    2015-01-01

    The commonest sequelae of bacterial meningitis are related to the inner ear. Little is known about the inner ear immune defense. Evidence suggests that the endolymphatic sac provides some protection against infection. A potential involvement of the endolymphatic sac in bacterial meningitis...... is largely unaccounted for, and thus the object of the present study. A well-established adult rat model of Streptococcus pneumoniae meningitis was employed. Thirty adult rats were inoculated intrathecally with Streptococcus pneumoniae and received no additional treatment. Six rats were sham...... days. Bacteria invaded the inner ear through the cochlear aquaduct. On days 5-6, the bacteria invaded the endolymphatic sac through the endolymphatic duct subsequent to invasion of the vestibular endolymphatic compartment. No evidence of direct bacterial invasion of the sac through the meninges...

  2. Emergency preparedness: medical management of nuclear accidents involving large groups of victims

    International Nuclear Information System (INIS)

    Parmentier, N.; Nenot, J.C.

    1988-01-01

    The treatment of overexposed individuals implies hospitalisation in a specialized unit applying intensive haematological care. If the accident results in a small number of casualties, the medical management does not raise major problems in most countries where specialized units exist, as roughly 7% of the beds are available at any time. But an accident involving tens or hundreds of people raises many more problems for hospitalization. Such problems are also completely different and will involve staged medical handling, mainly triage (combined injuries), determination of whole body dose levels, and transient hospitalization. In this case, preplanning is necessary, adapted to the country's system of medical care in case of a catastrophic event, with the following main basic principles: emergency care concerns essentially the classical injuries (burns and trauma), and contamination problems in some cases; treatment of the radiation syndrome is not an emergency during the first days, but some essential actions have to be taken, such as early blood sampling for biological dosimetry and for HLA typing.

  3. Attenuation Model Using the Large-N Array from the Source Physics Experiment

    Science.gov (United States)

    Atterholt, J.; Chen, T.; Snelson, C. M.; Mellors, R. J.

    2017-12-01

    The Source Physics Experiment (SPE) consists of a series of chemical explosions at the Nevada National Security Site. SPE seeks to better characterize the influence of subsurface heterogeneities on seismic wave propagation and energy dissipation from explosions. As a part of this experiment, SPE-5, a 5000 kg TNT equivalent chemical explosion, was detonated in 2016. During the SPE-5 experiment, a Large-N array of 996 geophones (half 3-component and half z-component) was deployed. This array covered an area that includes loosely consolidated alluvium (weak rock) and weathered granite (hard rock), and recorded the SPE-5 explosion as well as 53 weight drops. We use these Large-N recordings to develop an attenuation model of the area to better characterize how geologic structures influence source energy partitioning. We found a clear variation in seismic attenuation for different rock types: high attenuation (low Q) for alluvium and low attenuation (high Q) for granite. The attenuation structure correlates well with local geology, and will be incorporated into the large simulation effort of the SPE program to validate predictive models. (LA-UR-17-26382)

  4. Response matrix method for large LMFBR analysis

    International Nuclear Information System (INIS)

    King, M.J.

    1977-06-01

    The feasibility of using response matrix techniques for computational models of large LMFBRs is examined. Since finite-difference methods based on diffusion theory have generally found a place in fast-reactor codes, a brief review of their general matrix foundation is given first in order to contrast it to the general strategy of response matrix methods. Then, in order to present the general method of response matrix technique, two illustrative examples are given. Matrix algorithms arising in the application to large LMFBRs are discussed, and the potential of the response matrix method is explored for a variety of computational problems. Principal properties of the matrices involved are derived with a view to application of numerical methods of solution. The Jacobi iterative method as applied to the current-balance eigenvalue problem is discussed
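
    The Jacobi iterative method discussed in this report is, in its basic linear-system form, a diagonal-sweep fixed-point update. A minimal sketch on a small diagonally dominant test matrix (an illustrative system, not an LMFBR response matrix):

    ```python
    import numpy as np

    def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
        """Jacobi iteration for A x = b: x_{k+1} = D^{-1} (b - (A - D) x_k).
        Convergence is guaranteed when A is strictly diagonally dominant."""
        n = len(b)
        x = np.zeros(n) if x0 is None else x0.copy()
        d = np.diag(A)                 # diagonal of A
        R = A - np.diagflat(d)         # off-diagonal remainder
        for k in range(max_iter):
            x_new = (b - R @ x) / d
            if np.linalg.norm(x_new - x, ord=np.inf) < tol:
                return x_new, k + 1
            x = x_new
        return x, max_iter

    # Small diagonally dominant test system (illustrative only)
    A = np.array([[ 4.0, -1.0,  0.0],
                  [-1.0,  4.0, -1.0],
                  [ 0.0, -1.0,  4.0]])
    b = np.array([2.0, 4.0, 10.0])
    x, iters = jacobi(A, b)
    print(x, "in", iters, "iterations; residual:", np.linalg.norm(A @ x - b))
    ```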

  5. Modeling and experiments of biomass combustion in a large-scale grate boiler

    DEFF Research Database (Denmark)

    Yin, Chungen; Rosendahl, Lasse; Kær, Søren Knudsen

    2007-01-01

    is inherently more difficult due to the complexity of the solid biomass fuel bed on the grate, the turbulent reacting flow in the combustion chamber and the intensive interaction between them. This paper presents the CFD validation efforts for a modern large-scale biomass-fired grate boiler. Modeling... and experiments are both done for the grate boiler. The comparison between them shows an overall acceptable agreement in trends. However, at some measuring ports large discrepancies between the modeling and the experiments are observed, mainly because the modeling-based boundary conditions (BCs) could differ...

  6. Structure of exotic nuclei by large-scale shell model calculations

    International Nuclear Information System (INIS)

    Utsuno, Yutaka; Otsuka, Takaharu; Mizusaki, Takahiro; Honma, Michio

    2006-01-01

    An extensive large-scale shell-model study is conducted for unstable nuclei around N = 20 and N = 28, aiming to investigate how the shell structure evolves from stable to unstable nuclei and affects the nuclear structure. The structure around N = 20 including the disappearance of the magic number is reproduced systematically, exemplified in the systematics of the electromagnetic moments in the Na isotope chain. As a key ingredient dominating the structure/shell evolution in the exotic nuclei from a general viewpoint, we pay attention to the tensor force. Including a proper strength of the tensor force in the effective interaction, we successfully reproduce the proton shell evolution ranging from N = 20 to 28 without any arbitrary modifications in the interaction and predict the ground state of 42Si to contain a large deformed component

  7. An overview of modeling methods for thermal mixing and stratification in large enclosures for reactor safety analysis

    Energy Technology Data Exchange (ETDEWEB)

    Haihua Zhao; Per F. Peterson

    2010-10-01

    Thermal mixing and stratification phenomena play major roles in the safety of reactor systems with large enclosures, such as containment safety in the current fleet of LWRs, long-term passive containment cooling in Gen III+ plants including the AP-1000 and ESBWR, cold and hot pool mixing in pool type sodium cooled fast reactor systems (SFR), and reactor cavity cooling system behavior in high temperature gas cooled reactors (HTGR). Depending on the fidelity requirement and computational resources, 0-D steady state models (heat transfer correlations), 0-D lumped parameter based transient models, 1-D physics-based coarse grain models, and 3-D CFD models are available. Current major system analysis codes either have no models or only 0-D models for thermal stratification and mixing, which can only give highly approximate results for simple cases. While 3-D CFD methods can be used to analyze simple configurations, these methods require very fine grid resolution to resolve thin substructures such as jets and wall boundaries. Due to prohibitive computational expenses for long transients in very large volumes, 3-D CFD simulations remain impractical for system analyses. For mixing in stably stratified large enclosures, UC Berkeley developed 1-D models based on Zuber's hierarchical two-tiered scaling analysis (HTTSA) method, in which the ambient fluid volume is represented by 1-D transient partial differential equations and substructures such as free or wall jets are modeled with 1-D integral models. This allows very large reductions in computational effort compared to 3-D CFD modeling. This paper presents an overview of important thermal mixing and stratification phenomena in large enclosures for different reactors, the major modeling methods and their advantages and limits, and potential paths to improve simulation capability and reduce analysis uncertainty in this area for advanced reactor system analysis tools.

  8. Validating modeled turbulent heat fluxes across large freshwater surfaces

    Science.gov (United States)

    Lofgren, B. M.; Fujisaki-Manome, A.; Gronewold, A.; Anderson, E. J.; Fitzpatrick, L.; Blanken, P.; Spence, C.; Lenters, J. D.; Xiao, C.; Charusambot, U.

    2017-12-01

    Turbulent fluxes of latent and sensible heat are important physical processes that influence the energy and water budgets of the Great Lakes. Validation and improvement of bulk flux algorithms to simulate these turbulent heat fluxes are critical for accurate prediction of hydrodynamics, water levels, weather, and climate over the region. Here we consider five heat flux algorithms from several model systems: the Finite-Volume Community Ocean Model, the Weather Research and Forecasting model, and the Large Lake Thermodynamics Model, which are used in research and operational environments and concentrate on different aspects of the Great Lakes' physical system, but interface at the lake surface. The heat flux algorithms were isolated from each model and driven by meteorological data from over-lake stations in the Great Lakes Evaporation Network. The simulation results were compared with eddy covariance flux measurements at the same stations. All models show the capacity to capture the seasonal cycle of the turbulent heat fluxes. Overall, the Coupled Ocean Atmosphere Response Experiment algorithm in FVCOM has the best agreement with eddy covariance measurements. Simulations with the other four algorithms are overall improved by updating the parameterization of the roughness length scales of temperature and humidity. Agreement between modelled and observed fluxes varied notably with the geographical locations of the stations. For example, at the Long Point station in Lake Erie, observed fluxes are likely influenced by the upwind land surface while the simulations do not take account of the land surface influence, and therefore the agreement is worse in general.
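
    The bulk flux algorithms compared here all build on the standard bulk aerodynamic formulas. A minimal sketch with constant transfer coefficients; real algorithms such as COARE make C_H and C_E stability- and roughness-dependent, which is exactly what the roughness-length updates above address. The numeric inputs are illustrative, not GLEN observations:

    ```python
    RHO_AIR = 1.2      # air density [kg m-3]
    CP_AIR  = 1004.0   # specific heat of air [J kg-1 K-1]
    LV      = 2.5e6    # latent heat of vaporization [J kg-1]

    def bulk_fluxes(wind, t_sfc, t_air, q_sfc, q_air, ch=1.3e-3, ce=1.3e-3):
        """Bulk aerodynamic sensible (H) and latent (LE) heat fluxes [W m-2]:
        H  = rho * cp * C_H * U * (Ts - Ta)
        LE = rho * Lv * C_E * U * (qs - qa)
        C_H, C_E = 1.3e-3 are typical neutral values, used here for illustration."""
        h  = RHO_AIR * CP_AIR * ch * wind * (t_sfc - t_air)
        le = RHO_AIR * LV     * ce * wind * (q_sfc - q_air)
        return h, le

    # Illustrative over-lake conditions: wind [m/s], temps [C], specific humidities [kg/kg]
    h, le = bulk_fluxes(wind=8.0, t_sfc=12.0, t_air=8.0, q_sfc=0.0087, q_air=0.0055)
    print(f"H = {h:.1f} W/m2, LE = {le:.1f} W/m2")
    ```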

  9. An improved mounting device for attaching intracranial probes in large animal models.

    Science.gov (United States)

    Dunster, Kimble R

    2015-12-01

    The rigid support of intracranial probes can be difficult when using animal models, as mounting devices suitable for the probes are either not available, or designed for human use and not suitable in animal skulls. A cheap and reliable mounting device for securing intracranial probes in large animal models is described. Using commonly available clinical consumables, a universal mounting device for securing intracranial probes to the skull of large animals was developed and tested. A simply made mounting device to hold a variety of probes from 500 μm to 1.3 mm in diameter to the skull was developed. The device was used to hold probes to the skulls of sheep for up to 18 h. No adhesives or cements were used. The described device provides a reliable method of securing probes to the skull of animals.

  10. Two-Dimensional Physical and CFD Modelling of Large Gas Bubble Behaviour in Bath Smelting Furnaces

    Directory of Open Access Journals (Sweden)

    Yuhua Pan

    2010-09-01

    Full Text Available The behaviour of large gas bubbles in a liquid bath and the mechanisms of splash generation due to gas bubble rupture in high-intensity bath smelting furnaces were investigated by means of physical and mathematical (CFD modelling techniques. In the physical modelling work, a two-dimensional Perspex model of the pilot plant furnace at CSIRO Process Science and Engineering was established in the laboratory. An aqueous glycerol solution was used to simulate liquid slag. Air was injected via a submerged lance into the liquid bath and the bubble behaviour and the resultant splashing phenomena were observed and recorded with a high-speed video camera. In the mathematical modelling work, a two-dimensional CFD model was developed to simulate the free surface flows due to motion and deformation of large gas bubbles in the liquid bath and rupture of the bubbles at the bath free surface. It was concluded from these modelling investigations that the splashes generated in high-intensity bath smelting furnaces are mainly caused by the rupture of fast rising large gas bubbles. The acceleration of the bubbles into the preceding bubbles and the rupture of the coalescent bubbles at the bath surface contribute significantly to splash generation.

  11. Forcings and feedbacks on convection in the 2010 Pakistan flood: Modeling extreme precipitation with interactive large-scale ascent

    Science.gov (United States)

    Nie, Ji; Shaevitz, Daniel A.; Sobel, Adam H.

    2016-09-01

    Extratropical extreme precipitation events are usually associated with large-scale flow disturbances, strong ascent, and large latent heat release. The causal relationships between these factors are often not obvious, however, and the roles of different physical processes in producing the extreme precipitation event can be difficult to disentangle. Here we examine, within the Column Quasi-Geostrophic framework, the large-scale forcings and convective heating feedback in the precipitation events that caused the 2010 Pakistan flood. A cloud-resolving model (CRM) is forced with large-scale forcings (other than large-scale vertical motion) computed from the quasi-geostrophic omega equation using input data from a reanalysis data set, and the large-scale vertical motion is diagnosed interactively with the simulated convection. Numerical results show that the positive feedback of convective heating to large-scale dynamics is essential in amplifying the precipitation intensity to the observed values. Orographic lifting is the most important dynamic forcing in both events, while differential potential vorticity advection also contributes to the triggering of the first event. Horizontal moisture advection modulates the extreme events mainly by setting the environmental humidity, which modulates the amplitude of the convection's response to the dynamic forcings. When the CRM is replaced by either a single-column model (SCM) with parameterized convection or a dry model with a reduced effective static stability, the model results show substantial discrepancies compared with reanalysis data. The reasons for these discrepancies are examined, and the implications for global models and theoretical models are discussed.
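
    For reference, one standard textbook form of the quasi-geostrophic omega equation used to compute such large-scale forcings (adiabatic, frictionless; the notation follows common treatments and is not necessarily the paper's):

    ```latex
    % QG omega equation: static-stability-weighted elliptic operator on omega,
    % forced by differential vorticity advection and thermal advection.
    \left( \sigma \nabla^2 + f_0^2 \frac{\partial^2}{\partial p^2} \right) \omega
      = f_0 \frac{\partial}{\partial p}
        \left[ \mathbf{V}_g \cdot \nabla \left( \zeta_g + f \right) \right]
      + \nabla^2
        \left[ \mathbf{V}_g \cdot \nabla \left( -\frac{\partial \Phi}{\partial p} \right) \right]
    ```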

  12. Validation of CALMET/CALPUFF models simulations around a large power plant stack

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez-Garces, A.; Souto, J. A.; Rodriguez, A.; Saavedra, S.; Casares, J. J.

    2015-07-01

    CALMET/CALPUFF modeling system is frequently used in the study of atmospheric processes and pollution, and several validation tests have been performed to date; nevertheless, most of them were based on experiments with a large compilation of surface and aloft meteorological measurements, rarely available. At the same time, the use of a large operational smokestack as a tracer/pollutant source is not usual. In this work, the CALMET meteorological diagnostic model is first nested to WRF meteorological prognostic model simulations (3x3 km{sup 2} horizontal resolution) over a complex terrain and coastal domain at NW Spain, covering 100x100 km{sup 2}, with a coal-fired power plant emitting SO{sub 2}. Simulations were performed during three different periods when SO{sub 2} hourly ground level concentration (glc) peaks were observed. NCEP reanalyses were applied as initial and boundary conditions. The Yonsei University-Pleim-Chang (YSU) PBL scheme was selected in the WRF model to provide the best input to three different CALMET horizontal resolutions, 1x1 km{sup 2}, 0.5x0.5 km{sup 2}, and 0.2x0.2 km{sup 2}. The best results, very similar to each other, were achieved using the last two resolutions; therefore, the 0.5x0.5 km{sup 2} resolution was selected to test different CALMET meteorological inputs, using several combinations of WRF outputs and/or surface and upper-air measurements available in the simulation domain. With respect to the models' aloft output, CALMET PBL depth estimations are very similar to PBL depth estimations using upper-air measurements (rawinsondes), and significantly better than WRF PBL depth results. Regarding the models' surface output, the available meteorological sites were divided into two groups, one to provide meteorological input to CALMET (when applied), and another for model validation. Comparing WRF and CALMET outputs against surface measurements (from the sites for model validation) the lowest RMSE was achieved using as CALMET input dataset WRF output combined with

  13. Validation of CALMET/CALPUFF models simulations around a large power plant stack

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez-Garces, A.; Souto Rodriguez, J.A.; Saavedra, S.; Casares, J.J.

    2015-07-01

    CALMET/CALPUFF modeling system is frequently used in the study of atmospheric processes and pollution, and several validation tests have been performed to date; nevertheless, most of them were based on experiments with a large compilation of surface and aloft meteorological measurements, rarely available. At the same time, the use of a large operational smokestack as a tracer/pollutant source is not usual. In this work, the CALMET meteorological diagnostic model is first nested to WRF meteorological prognostic model simulations (3x3 km2 horizontal resolution) over a complex terrain and coastal domain at NW Spain, covering 100x100 km2, with a coal-fired power plant emitting SO2. Simulations were performed during three different periods when SO2 hourly ground level concentration (glc) peaks were observed. NCEP reanalyses were applied as initial and boundary conditions. The Yonsei University-Pleim-Chang (YSU) PBL scheme was selected in the WRF model to provide the best input to three different CALMET horizontal resolutions, 1x1 km2, 0.5x0.5 km2, and 0.2x0.2 km2. The best results, very similar to each other, were achieved using the last two resolutions; therefore, the 0.5x0.5 km2 resolution was selected to test different CALMET meteorological inputs, using several combinations of WRF outputs and/or surface and upper-air measurements available in the simulation domain. With respect to the models' aloft output, CALMET PBL depth estimations are very similar to PBL depth estimations using upper-air measurements (rawinsondes), and significantly better than WRF PBL depth results. Regarding the models' surface output, the available meteorological sites were divided into two groups, one to provide meteorological input to CALMET (when applied), and another for model validation. Comparing WRF and CALMET outputs against surface measurements (from the sites for model validation) the lowest RMSE was achieved using as CALMET input dataset WRF output combined with surface measurements (from sites for CALMET model

  14. Validation of CALMET/CALPUFF models simulations around a large power plant stack

    International Nuclear Information System (INIS)

    Hernandez-Garces, A.; Souto, J. A.; Rodriguez, A.; Saavedra, S.; Casares, J. J.

    2015-01-01

    CALMET/CALPUFF modeling system is frequently used in the study of atmospheric processes and pollution, and several validation tests have been performed to date; nevertheless, most of them were based on experiments with a large compilation of surface and aloft meteorological measurements, rarely available. At the same time, the use of a large operational smokestack as a tracer/pollutant source is not usual. In this work, the CALMET meteorological diagnostic model is first nested to WRF meteorological prognostic model simulations (3x3 km2 horizontal resolution) over a complex terrain and coastal domain at NW Spain, covering 100x100 km2, with a coal-fired power plant emitting SO2. Simulations were performed during three different periods when SO2 hourly ground level concentration (glc) peaks were observed. NCEP reanalyses were applied as initial and boundary conditions. The Yonsei University-Pleim-Chang (YSU) PBL scheme was selected in the WRF model to provide the best input to three different CALMET horizontal resolutions, 1x1 km2, 0.5x0.5 km2, and 0.2x0.2 km2. The best results, very similar to each other, were achieved using the last two resolutions; therefore, the 0.5x0.5 km2 resolution was selected to test different CALMET meteorological inputs, using several combinations of WRF outputs and/or surface and upper-air measurements available in the simulation domain. With respect to the models' aloft output, CALMET PBL depth estimations are very similar to PBL depth estimations using upper-air measurements (rawinsondes), and significantly better than WRF PBL depth results. Regarding the models' surface output, the available meteorological sites were divided into two groups, one to provide meteorological input to CALMET (when applied), and another for model validation. Comparing WRF and CALMET outputs against surface measurements (from the sites for model validation) the lowest RMSE was achieved using as CALMET input dataset WRF output combined with surface measurements (from sites for

  15. Coupled climate model simulations of Mediterranean winter cyclones and large-scale flow patterns

    Directory of Open Access Journals (Sweden)

    B. Ziv

    2013-03-01

    Full Text Available The study aims to evaluate the ability of global, coupled climate models to reproduce the synoptic regime of the Mediterranean Basin. The output of simulations of the 9 models included in the IPCC CMIP3 effort is compared to the NCEP-NCAR reanalysis data for the period 1961–1990. The study examined the spatial distribution of cyclone occurrence, the mean Mediterranean upper- and lower-level troughs, the inter-annual variation and trend in the occurrence of Mediterranean cyclones, and the main large-scale circulation patterns, represented by rotated EOFs of 500 hPa and sea level pressure. The models successfully reproduce the two maxima in cyclone density in the Mediterranean and their locations, the location of the average upper- and lower-level troughs, the relative inter-annual variation in cyclone occurrences and the structure of the four leading large-scale EOFs. The main discrepancy is the models' underestimation of the cyclone density in the Mediterranean, especially in its western part. The models' skill in reproducing the cyclone distribution is found to be correlated with their spatial resolution, especially in the vertical. The current improvement in model spatial resolution suggests that their ability to reproduce Mediterranean cyclones will improve as well.
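
    EOF analysis of the kind used here reduces to an SVD of the space-time anomaly matrix. A minimal unrotated sketch on synthetic data; the study uses rotated EOFs, so a rotation step (e.g., varimax), omitted here, would follow:

    ```python
    import numpy as np

    def eofs(field, n_modes=4):
        """Unrotated EOF analysis of a (time, space) data matrix via SVD.
        Returns spatial patterns, principal-component time series, and the
        fraction of variance explained by each retained mode."""
        anom = field - field.mean(axis=0)       # remove the time mean
        u, s, vt = np.linalg.svd(anom, full_matrices=False)
        var_frac = s**2 / np.sum(s**2)          # explained variance fractions
        pcs = u[:, :n_modes] * s[:n_modes]      # PC time series
        patterns = vt[:n_modes]                 # spatial EOF patterns
        return patterns, pcs, var_frac[:n_modes]

    # Synthetic example: 360 monthly fields on a flattened 10x20 grid (illustrative)
    rng = np.random.default_rng(1)
    data = rng.standard_normal((360, 200))
    patterns, pcs, var = eofs(data)
    print("explained variance of leading modes:", np.round(var, 3))
    ```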

  16. Large scale air pollution estimation method combining land use regression and chemical transport modeling in a geostatistical framework.

    Science.gov (United States)

    Akita, Yasuyuki; Baldasano, Jose M; Beelen, Rob; Cirach, Marta; de Hoogh, Kees; Hoek, Gerard; Nieuwenhuijsen, Mark; Serre, Marc L; de Nazelle, Audrey

    2014-04-15

    In recognition that intraurban exposure gradients may be as large as between-city variations, recent air pollution epidemiologic studies have become increasingly interested in capturing within-city exposure gradients. In addition, because of the rapidly accumulating health data, recent studies also need to handle large study populations distributed over large geographic domains. Even though several modeling approaches have been introduced, a consistent modeling framework capturing within-city exposure variability and applicable to large geographic domains is still missing. To address these needs, we proposed a modeling framework based on the Bayesian Maximum Entropy method that integrates monitoring data and outputs from existing air quality models based on Land Use Regression (LUR) and Chemical Transport Models (CTM). The framework was applied to estimate the yearly average NO2 concentrations over the region of Catalunya in Spain. By jointly accounting for the global scale variability in the concentration from the output of CTM and the intraurban scale variability through LUR model output, the proposed framework outperformed more conventional approaches.

  17. Large scale injection test (LASGIT) modelling

    International Nuclear Information System (INIS)

    Arnedo, D.; Olivella, S.; Alonso, E.E.

    2010-01-01

    Document available in extended abstract form only. With the objective of understanding the gas flow processes through clay barriers in schemes of radioactive waste disposal, the Lasgit in situ experiment was planned and is currently in progress. The modelling of the experiment will permit a better understanding of the responses, confirm hypotheses about mechanisms and processes, and provide lessons for the design of future experiments. The experiment and modelling activities are included in the project FORGE (FP7). The in situ large scale injection test Lasgit is currently being performed at the Aespoe Hard Rock Laboratory by SKB and BGS. A schematic layout of the test is shown. The deposition hole follows the KBS3 scheme. A copper canister is installed on the axis of the deposition hole, surrounded by blocks of highly compacted MX-80 bentonite. A concrete plug is placed at the top of the buffer. A metallic lid anchored to the surrounding host rock is included in order to prevent vertical movements of the whole system during gas injection stages (high gas injection pressures are expected to be reached). Hydration of the buffer material is achieved by injecting water through filter mats, two placed at the rock walls and two at the interfaces between bentonite blocks. Water is also injected through the 12 canister filters. Gas injection stages are performed by injecting gas into some of the canister injection filters. Since the water pressure and the stresses (swelling pressure development) will be high during gas injection, it is necessary to inject at high gas pressures. This implies mechanical couplings, as gas penetrates after the gas entry pressure is reached and may produce deformations which in turn lead to permeability increments. A 3D hydro-mechanical numerical model of the test using CODE-BRIGHT is presented. The domain considered for the modelling is shown. The materials considered in the simulation are the MX-80 bentonite blocks (cylinders and rings), the concrete plug

  18. Modeling of dengue occurrences early warning involving temperature and rainfall factors

    Directory of Open Access Journals (Sweden)

    Prama Setia Putra

    2017-07-01

    Full Text Available Objective: To understand the dengue transmission process and its vector dynamics, and to develop an early warning model of dengue occurrences based on mosquito population and host-vector threshold values, considering temperature and rainfall. Methods: To obtain the early warning model, mosquito population and host-vector models are developed initially. Both are developed using differential equations. The basic offspring number (R0m) and basic reproductive ratio (R0d), which are the threshold values, are derived from the models under the assumption of constant parameters. Temperature and rainfall effects on the mosquito and on dengue enter through the entomological and disease transmission parameters. Some parameters are set as functions of temperature or rainfall, while other parameters are set to be constant. Thereafter, both threshold values are computed using those parameters. Monthly dengue occurrence data are categorized as zero or one, where one means that an outbreak does occur in that month. Logistic regression is chosen to bridge the threshold values and the categorized data. The threshold values are considered as the input of the early warning model. Semarang city is selected as the sample to develop this early warning model. Results: The derived threshold values R0m and R0d show that the mosquito, as the dengue vector, affects transmission of the disease. The output of the early warning model is a value between zero and one; a month is categorized as an outbreak month when the value is larger than 0.5, and as a non-outbreak month otherwise. Using a single predictor, the model achieves approximately 68% accuracy. Conclusions: The extinction of mosquitoes will be followed by disease disappearance, while mosquito existence can lead to disease-free or endemic states. Model simulations show that the mosquito population is more affected by weather factors than humans are. Involving weather factors implicitly in the threshold value and linking them
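
    A minimal sketch of the logistic-regression bridge described above, using a synthetic single-predictor series in place of the Semarang data; the predictor name follows the abstract's R0d, and all numbers are illustrative:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical training data: monthly threshold value R0d (single predictor)
    # and 0/1 outbreak labels, standing in for the Semarang series.
    rng = np.random.default_rng(7)
    r0d = rng.uniform(0.3, 2.0, size=120).reshape(-1, 1)
    outbreak = (r0d.ravel() + rng.normal(0.0, 0.4, 120) > 1.0).astype(int)

    model = LogisticRegression().fit(r0d, outbreak)

    # The model output is a probability in (0, 1); months above 0.5 are flagged.
    prob = model.predict_proba(np.array([[1.4]]))[0, 1]
    print(f"P(outbreak | R0d = 1.4) = {prob:.2f} ->",
          "outbreak" if prob > 0.5 else "no outbreak")
    print(f"training accuracy: {model.score(r0d, outbreak):.2f}")
    ```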

  19. Reference Management Methodologies for Large Structural Models at Kennedy Space Center

    Science.gov (United States)

    Jones, Corey; Bingham, Ryan; Schmidt, Rick

    2011-01-01

    There have been many challenges associated with modeling some of NASA KSC's largest structures. Given the size of the welded structures here at KSC, it was critically important to properly organize model structure and carefully manage references. Additionally, because of the amount of hardware to be installed on these structures, it was very important to have a means to coordinate between different design teams and organizations, check for interferences, produce consistent drawings, and allow for simple release processes. Facing these challenges, the modeling team developed a unique reference management methodology and model fidelity methodology. This presentation will describe the techniques and methodologies that were developed for these projects. The attendees will learn about KSC's reference management and model fidelity methodologies for large structures. The attendees will understand the goals of these methodologies. The attendees will appreciate the advantages of developing a reference management methodology.

  20. Zone modelling of the thermal performances of a large-scale bloom reheating furnace

    International Nuclear Information System (INIS)

    Tan, Chee-Keong; Jenkins, Joana; Ward, John; Broughton, Jonathan; Heeley, Andy

    2013-01-01

    This paper describes the development and comparison of two- (2D) and three-dimensional (3D) mathematical models, based on the zone method of radiation analysis, to simulate the thermal performance of a large bloom reheating furnace. The modelling approach adopted in the current paper differs from previous work since it takes into account the net radiation interchanges between the top and bottom firing sections of the furnace and also allows for enthalpy exchange due to the flows of combustion products between these sections. The models were initially validated at two different furnace throughput rates using experimental and plant model data supplied by Tata Steel. The results to date demonstrate that the model predictions are in good agreement with the measured heating profiles of the blooms encountered in the actual furnace. It was also found that there were no significant differences between the predictions of the 2D and 3D models. Following the validation, the 2D model was then used to assess the furnace response to changing throughput rate. It was found that the furnace response to a changing throughput rate influences the settling time of the furnace to the next steady state operation. Overall the current work demonstrates the feasibility and practicality of zone modelling and its potential for incorporation into a model based furnace control system. - Highlights: ► 2D and 3D zone models of a large-scale bloom reheating furnace. ► The models were validated with experimental and plant model data. ► Examined the transient furnace response to changing furnace throughput rates. ► No significant differences were found between the predictions of the 2D and 3D models.

  1. A comparative modeling and molecular docking study on Mycobacterium tuberculosis targets involved in peptidoglycan biosynthesis.

    Science.gov (United States)

    Fakhar, Zeynab; Naiker, Suhashni; Alves, Claudio N; Govender, Thavendran; Maguire, Glenn E M; Lameira, Jeronimo; Lamichhane, Gyanu; Kruger, Hendrik G; Honarparvar, Bahareh

    2016-11-01

    An alarming rise of multidrug-resistant Mycobacterium tuberculosis strains and the continuing high global morbidity of tuberculosis have reinvigorated the need to identify novel targets to combat the disease. The enzymes that catalyze the biosynthesis of peptidoglycan in M. tuberculosis are essential and noteworthy therapeutic targets. In this study, the biochemical function and homology modeling of the MurI, MurG, MraY, DapE, DapA, Alr, and Ddl enzymes of the CDC1551 M. tuberculosis strain involved in the biosynthesis of the peptidoglycan cell wall are reported. Generation of the 3D structures was achieved with Modeller 9.13. To assess the structural quality of the homology-modeled targets, the models were validated using PROCHECK, PDBsum, QMEAN, and ERRAT scores. Molecular dynamics simulations were performed to calculate the root mean square deviation (RMSD) and radius of gyration (Rg) of the MurI and MurG target proteins and their corresponding templates. For further model validation, the RMSD and Rg of selected targets/templates were compared to confirm the close correspondence of their dynamic behavior in terms of protein stability and average distances. To identify the potential binding mode required for molecular docking, binding site information for all modeled targets was obtained using two prediction algorithms. A docking study was performed for MurI to determine the potential mode of interaction between the inhibitor and the active site residues. This study presents the first account of 3D structural information for the selected M. tuberculosis targets involved in peptidoglycan biosynthesis.
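
    The two trajectory metrics used for validation are straightforward to compute from coordinate arrays. The sketch below shows textbook definitions of RMSD (for pre-superimposed frames) and the mass-weighted radius of gyration, with random coordinates standing in for trajectory frames; it is illustrative only and is not the authors' analysis pipeline.

    ```python
    import numpy as np

    def rmsd(coords_a, coords_b):
        """Root mean square deviation between two (N, 3) coordinate sets
        (assumed already superimposed)."""
        diff = coords_a - coords_b
        return np.sqrt((diff ** 2).sum(axis=1).mean())

    def radius_of_gyration(coords, masses):
        """Mass-weighted radius of gyration of an (N, 3) coordinate set."""
        center = np.average(coords, axis=0, weights=masses)
        sq_dist = ((coords - center) ** 2).sum(axis=1)
        return np.sqrt(np.average(sq_dist, weights=masses))

    # Toy example: two nearby "frames" of a 100-atom structure.
    rng = np.random.default_rng(0)
    frame0 = rng.normal(size=(100, 3))
    frame1 = frame0 + rng.normal(scale=0.1, size=(100, 3))
    masses = np.full(100, 12.0)
    print(rmsd(frame0, frame1), radius_of_gyration(frame0, masses))
    ```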

  2. A mass-flux cumulus parameterization scheme for large-scale models: description and test with observations

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Tongwen [China Meteorological Administration (CMA), National Climate Center (Beijing Climate Center), Beijing (China)

    2012-02-15

    A simple mass-flux cumulus parameterization scheme suitable for large-scale atmospheric models is presented. The scheme is based on a bulk-cloud approach and has the following properties: (1) Deep convection is launched at the level of maximum moist static energy above the top of the boundary layer. It is triggered if there is positive convective available potential energy (CAPE) and the relative humidity of the air at the lifting level of the convective cloud is greater than 75%; (2) Convective updrafts for mass, dry static energy, moisture, cloud liquid water and momentum are parameterized by a one-dimensional entrainment/detrainment bulk-cloud model. The lateral entrainment of environmental air into the unstable ascending parcel before it rises to the lifting condensation level is considered. The entrainment/detrainment amount for the updraft cloud parcel is determined separately according to the increase/decrease of updraft parcel mass with altitude, and the mass change of the adiabatically ascending cloud parcel with altitude is derived from a total energy conservation equation of the whole adiabatic system, which involves the updraft cloud parcel and the environment; (3) The convective downdraft is assumed saturated and originates from the level of minimum environmental saturated equivalent potential temperature within the updraft cloud; (4) The mass flux at the base of the convective cloud is determined by a closure scheme suggested by Zhang (J Geophys Res 107(D14)), in which the increase/decrease of CAPE due to changes of the thermodynamic states in the free troposphere resulting from convection approximately balances the decrease/increase resulting from large-scale processes. Evaluation of the proposed convection scheme is performed using a single column model (SCM) forced by the Atmospheric Radiation Measurement Program's (ARM) summer 1995 and 1997 Intensive Observing Period (IOP) observations, and field observations from the Global Atmospheric Research
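
    Property (1) above reduces to a two-condition test. A minimal sketch of that trigger logic (the function name and units are illustrative, not from the scheme's code):

    ```python
    def deep_convection_triggered(cape_j_per_kg, rel_humidity_lifting_pct):
        """Trigger condition described in the scheme: positive CAPE and
        relative humidity above 75% at the lifting level of the cloud."""
        return cape_j_per_kg > 0.0 and rel_humidity_lifting_pct > 75.0

    print(deep_convection_triggered(350.0, 82.0))  # True
    print(deep_convection_triggered(350.0, 60.0))  # False
    ```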

  3. Power suppression at large scales in string inflation

    Energy Technology Data Exchange (ETDEWEB)

    Cicoli, Michele [Dipartimento di Fisica ed Astronomia, Università di Bologna, via Irnerio 46, Bologna, 40126 (Italy); Downes, Sean; Dutta, Bhaskar, E-mail: mcicoli@ictp.it, E-mail: sddownes@physics.tamu.edu, E-mail: dutta@physics.tamu.edu [Mitchell Institute for Fundamental Physics and Astronomy, Department of Physics and Astronomy, Texas A and M University, College Station, TX, 77843-4242 (United States)

    2013-12-01

    We study a possible origin of the anomalous suppression of the power spectrum at large angular scales in the cosmic microwave background within the framework of explicit string inflationary models where inflation is driven by a closed string modulus parameterizing the size of the extra dimensions. In this class of models the apparent power loss at large scales is caused by the background dynamics which involves a sharp transition from a fast-roll power law phase to a period of Starobinsky-like slow-roll inflation. An interesting feature of this class of string inflationary models is that the number of e-foldings of inflation is inversely proportional to the string coupling to a positive power. Therefore once the string coupling is tuned to small values in order to trust string perturbation theory, enough e-foldings of inflation are automatically obtained without the need of extra tuning. Moreover, in the less tuned cases the sharp transition responsible for the power loss takes place just before the last 50-60 e-foldings of inflation. We illustrate these general claims in the case of Fibre Inflation where we study the strength of this transition in terms of the attractor dynamics, finding that it induces a pivot from a blue to a redshifted power spectrum which can explain the apparent large scale power loss. We compute the effects of this pivot for example cases and demonstrate how magnitude and duration of this effect depend on model parameters.
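
    The scaling claim in this abstract, that the number of e-foldings is inversely proportional to a positive power of the string coupling, can be written schematically as (the exponent p is a placeholder, not a value taken from the paper):

    ```latex
    N_e \propto g_s^{-p}, \qquad p > 0 ,
    ```

    so tuning g_s to small values, as required for the validity of string perturbation theory, automatically yields a large number of e-foldings without extra tuning.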

  4. Simulated pre-industrial climate in Bergen Climate Model (version 2: model description and large-scale circulation features

    Directory of Open Access Journals (Sweden)

    O. H. Otterå

    2009-11-01

    Full Text Available The Bergen Climate Model (BCM) is a fully coupled atmosphere-ocean-sea-ice model that provides state-of-the-art computer simulations of the Earth's past, present, and future climate. Here, a pre-industrial multi-century simulation with an updated version of BCM is described and compared to observational data. The model is run without any form of flux adjustments and is stable for several centuries. The simulated climate reproduces the general large-scale circulation in the atmosphere reasonably well, except for a positive bias in the high-latitude sea level pressure distribution. Also, by introducing an updated turbulence scheme in the atmosphere model a persistent cold bias has been eliminated. For the ocean part, the model drifts in sea surface temperatures and salinities are considerably reduced compared to earlier versions of BCM. Improved conservation properties in the ocean model have contributed to this. Furthermore, by choosing a reference pressure at 2000 m and including thermobaric effects in the ocean model, a more realistic meridional overturning circulation is simulated in the Atlantic Ocean. The simulated sea-ice extent in the Northern Hemisphere is in general agreement with observational data except for summer, when the extent is somewhat underestimated. In the Southern Hemisphere, large negative biases are found in the simulated sea-ice extent. This is partly related to problems with the mixed layer parametrization, causing the mixed layer in the Southern Ocean to be too deep, which in turn makes it hard to maintain a realistic sea-ice cover here. However, despite some problematic issues, the pre-industrial control simulation presented here should still be appropriate for climate change studies requiring multi-century simulations.

  5. Repurposing of open data through large scale hydrological modelling - hypeweb.smhi.se

    Science.gov (United States)

    Strömbäck, Lena; Andersson, Jafet; Donnelly, Chantal; Gustafsson, David; Isberg, Kristina; Pechlivanidis, Ilias; Strömqvist, Johan; Arheimer, Berit

    2015-04-01

    Hydrological modelling demands large amounts of spatial data, such as soil properties, land use, topography, lakes and reservoirs, ice and snow coverage, water management (e.g. irrigation patterns and regulations), meteorological data and observed water discharge in rivers. By using such data, the hydrological model will in turn provide new data that can be used for new purposes (i.e. re-purposing). This presentation gives an example of how readily available open data from public portals have been re-purposed by using the Hydrological Predictions for the Environment (HYPE) model in a number of large-scale model applications covering numerous subbasins and rivers. HYPE is a dynamic, semi-distributed, process-based, integrated catchment model. The model output is launched as new Open Data at the web site www.hypeweb.smhi.se to be used for: (i) climate change impact assessments on water resources and dynamics; (ii) the European Water Framework Directive (WFD), for characterization and development of measure programs to improve the ecological status of water bodies; (iii) design variables for infrastructure constructions; (iv) spatial water-resource mapping; (v) operational forecasts (1-10 days and seasonal) of floods and droughts; (vi) input to oceanographic models for operational forecasts and marine status assessments; (vii) research. The following regional domains have been modelled so far with different resolutions (number of subbasins within brackets): Sweden (37 000), Europe (35 000), Arctic basin (30 000), La Plata River (6 000), Niger River (800), Middle-East North-Africa (31 000), and the Indian subcontinent (6 000). The HYPE web site provides several interactive web applications for exploring results from the models. The user can explore an overview of various water variables for historical and future conditions. Moreover, the user can explore and download historical time series of discharge for each basin and explore the performance of the model

  6. Job Satisfaction, Organizational Commitment and Job Involvement: The Mediating Role of Job Involvement.

    Science.gov (United States)

    Ćulibrk, Jelena; Delić, Milan; Mitrović, Slavica; Ćulibrk, Dubravko

    2018-01-01

    We conducted an empirical study aimed at identifying and quantifying the relationship between work characteristics, organizational commitment, job satisfaction, job involvement and organizational policies and procedures in the transition economy of Serbia, South Eastern Europe. The study, which included 566 persons employed by 8 companies, revealed that existing models of work motivation need to be adapted to fit the empirical data, resulting in a revised research model elaborated in the paper. In the proposed model, job involvement partially mediates the effect of job satisfaction on organizational commitment. Job satisfaction in Serbia is affected by work characteristics but, contrary to many studies conducted in developed economies, organizational policies and procedures do not seem to significantly affect employee satisfaction.

  7. Job Satisfaction, Organizational Commitment and Job Involvement: The Mediating Role of Job Involvement

    Science.gov (United States)

    Ćulibrk, Jelena; Delić, Milan; Mitrović, Slavica; Ćulibrk, Dubravko

    2018-01-01

    We conducted an empirical study aimed at identifying and quantifying the relationship between work characteristics, organizational commitment, job satisfaction, job involvement and organizational policies and procedures in the transition economy of Serbia, South Eastern Europe. The study, which included 566 persons employed by 8 companies, revealed that existing models of work motivation need to be adapted to fit the empirical data, resulting in a revised research model elaborated in the paper. In the proposed model, job involvement partially mediates the effect of job satisfaction on organizational commitment. Job satisfaction in Serbia is affected by work characteristics but, contrary to many studies conducted in developed economies, organizational policies and procedures do not seem to significantly affect employee satisfaction. PMID:29503623

  8. Hybrid Reynolds-Averaged/Large Eddy Simulation of the Flow in a Model SCRamjet Cavity Flameholder

    Science.gov (United States)

    Baurle, R. A.

    2016-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. Experimental data available for this configuration include velocity statistics obtained from particle image velocimetry. Several turbulence models were used for the steady-state Reynolds-averaged simulations, including both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer; hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken not only to assess the performance of the hybrid Reynolds-averaged/large eddy simulation modeling approach in a flowfield of interest to the scramjet research community, but also to begin to understand how this capability can best be used to augment standard Reynolds-averaged simulations. The numerical errors were quantified for the steady-state simulations, and at least qualitatively assessed for the scale-resolving simulations, prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results displayed a high degree of variability when comparing the flameholder fuel distributions obtained from each turbulence model. This prompted the consideration of applying the higher-fidelity scale-resolving simulations as a surrogate "truth" model to calibrate the Reynolds-averaged closures in a non-reacting setting prior to their use for the combusting simulations. In general, the Reynolds-averaged velocity profile predictions at the lowest fueling level matched the particle imaging measurements almost as well as was observed for the non-reacting condition. However, the velocity field predictions proved to be more sensitive to the flameholder fueling rate than was indicated in the measurements.

  9. Large-watershed flood simulation and forecasting based on different-resolution distributed hydrological model

    Science.gov (United States)

    Li, J.

    2017-12-01

    Large-watershed flood simulation and forecasting is an important application of distributed hydrological models, and it poses several challenges, including the effect of the model's spatial resolution on performance and accuracy. To investigate this resolution effect, grid resolutions of 1000 m × 1000 m, 600 m × 600 m, 500 m × 500 m, 400 m × 400 m and 200 m × 200 m were each used to build the distributed hydrological model, the Liuxihe model, with the aim of finding the best resolution for large-watershed flood simulation and forecasting. This study sets up a physically based distributed hydrological model for flood forecasting of the Liujiang River basin in south China. The terrain data (digital elevation model, DEM), soil type and land use type were downloaded from freely available websites. The model parameters are optimized using an improved Particle Swarm Optimization (PSO) algorithm; parameter optimization reduces the uncertainty that arises when model parameters are derived physically. The best spatial resolution for flood simulation and forecasting was found to be 200 m × 200 m, and model performance and accuracy degrade as the resolution coarsens. At a resolution of 1000 m × 1000 m the flood simulation and forecasting results are the worst, and the river channel network derived at this resolution differs from the actual one. To keep model performance acceptable, a minimum spatial resolution is needed: the suggested threshold resolution for modeling Liujiang River basin floods is a 500 m × 500 m grid cell, but a 200 m × 200 m grid cell is recommended in this study to keep the model at its best performance.
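
    For readers unfamiliar with PSO, the sketch below is a plain, generic particle swarm optimizer minimizing a toy two-parameter error function; it is not the improved PSO variant used in the study, and all names and settings are illustrative.

    ```python
    import numpy as np

    def pso(objective, bounds, n_particles=30, n_iters=100, w=0.7, c1=1.5, c2=1.5):
        """Plain particle swarm optimization over box bounds of shape (dim, 2)."""
        rng = np.random.default_rng(42)
        dim = len(bounds)
        lo, hi = bounds[:, 0], bounds[:, 1]
        x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions
        v = np.zeros_like(x)                               # velocities
        pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
        gbest = pbest[pbest_f.argmin()].copy()
        for _ in range(n_iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            f = np.array([objective(p) for p in x])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            gbest = pbest[pbest_f.argmin()].copy()
        return gbest, pbest_f.min()

    # Toy objective standing in for a model-error function of two parameters.
    best, err = pso(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                    np.array([[-5.0, 5.0], [-5.0, 5.0]]))
    print(best, err)
    ```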

  10. Large scale solar district heating. Evaluation, modelling and designing

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build a simulation tool and to demonstrate the application of the tool in design studies and in a local energy planning case. The evaluation of the central solar heating technology is based on measurements from the case plant in Marstal, Denmark, and on published and unpublished data for other, mainly Danish, CSDHP plants. Evaluations of the thermal, economic and environmental performance are reported, based on the experience of the last decade. The measurements from the Marstal case are analysed, experience extracted and minor improvements to the plant design proposed. For the detailed design and energy planning of CSDHPs, a computer simulation model is developed and validated against the measurements from the Marstal case. The final model is then generalised to a 'generic' model for CSDHPs in general. The meteorological reference data, the Danish Reference Year, is applied to find the mean performance of the plant designs. To find the expected variability of the thermal performance of such plants, a method is proposed in which data from a year with poor solar irradiation and a year with strong solar irradiation are applied. Equipped with this simulation tool, design studies are carried out, ranging from parameter analysis, through energy planning for a new settlement, to a proposal for combining flat-plate solar collectors with high-performance solar collectors, exemplified by a trough collector. The methodology of utilising computer simulation proved to be a cheap and relevant tool for the design of future solar heating plants. The thesis also exposed the need to develop computer models for the more advanced solar collector designs and especially for the control operation of CSDHPs. In the final chapter the CSDHP technology is put into perspective with respect to other possible technologies to find the relevance of the application

  11. Model Predictive Control for Flexible Power Consumption of Large-Scale Refrigeration Systems

    DEFF Research Database (Denmark)

    Shafiei, Seyed Ehsan; Stoustrup, Jakob; Rasmussen, Henrik

    2014-01-01

    A model predictive control (MPC) scheme is introduced to directly control the electrical power consumption of large-scale refrigeration systems. Deviations from the consumption baseline correspond to the storing and delivering of thermal energy. By virtue of such correspondence...
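
    A minimal sketch of the idea, assuming a toy first-order thermal storage model and the cvxpy modeling library (the dynamics, bounds and cost weights are invented for illustration and are not the scheme from the paper): consuming more power than the demand baseline charges the thermal store, consuming less discharges it.

    ```python
    import cvxpy as cp
    import numpy as np

    T, dt = 24, 1.0                       # horizon (h) and step (h)
    load = np.full(T, 2.0)                # cooling demand
    baseline = np.full(T, 2.0)            # baseline electrical consumption
    price = np.concatenate([np.ones(12), 3.0 * np.ones(12)])  # hourly price

    p = cp.Variable(T)                    # electrical power trajectory (decision)
    e = cp.Variable(T + 1)                # stored thermal energy

    constraints = [e[0] == 5.0, e[T] >= 5.0, e >= 0.0, e <= 10.0,
                   p >= 0.0, p <= 5.0]
    for t in range(T):
        # Power above the load charges the store, below it discharges.
        constraints.append(e[t + 1] == e[t] + dt * (p[t] - load[t]))

    # Exploit cheap hours while staying close to the baseline profile.
    cost = price @ p + 0.1 * cp.sum_squares(p - baseline)
    cp.Problem(cp.Minimize(cost), constraints).solve()
    print(np.round(p.value, 2))           # pre-cools during cheap hours
    ```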

  12. Characterize kinematic rupture history of large earthquakes with Multiple Haskell sources

    Science.gov (United States)

    Jia, Z.; Zhan, Z.

    2017-12-01

    Earthquakes are often regarded as continuous rupture along a single fault, but the occurrence of complex large events involving multiple faults and dynamic triggering challenges this view. Such rupture complexities cause difficulties for existing finite-fault inversion algorithms, because they rely on specific parameterizations and regularizations to obtain physically meaningful solutions. Furthermore, it is difficult to assess the reliability and uncertainty of the obtained rupture models. Here we develop a Multi-Haskell Source (MHS) method to estimate the rupture process of large earthquakes as a series of sub-events of varying location, timing and directivity. Each sub-event is characterized by a Haskell rupture model with uniform dislocation and constant unilateral rupture velocity. This flexible yet simple source parameterization allows us to constrain the first-order rupture complexity of large earthquakes robustly. Additionally, the relatively small number of parameters in the inverse problem yields improved uncertainty analysis based on Markov chain Monte Carlo sampling in a Bayesian framework. Synthetic tests and applications of the MHS method to real earthquakes show that our method can capture the major features of large-earthquake rupture processes and provide information for more detailed rupture history analysis.
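
    The Bayesian sampling step can be illustrated with a generic random-walk Metropolis sampler over a toy two-parameter posterior (say, the location and timing of one sub-event); this is a self-contained illustration of MCMC, not the authors' inversion code.

    ```python
    import numpy as np

    def metropolis(log_post, x0, n_samples=5000, step=0.1, seed=0):
        """Random-walk Metropolis sampler over a parameter vector."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        lp = log_post(x)
        samples = []
        for _ in range(n_samples):
            prop = x + step * rng.normal(size=x.shape)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:   # accept/reject
                x, lp = prop, lp_prop
            samples.append(x.copy())
        return np.array(samples)

    # Toy Gaussian posterior over (location, timing) of a single sub-event.
    data = np.array([1.2, 0.8])
    log_post = lambda x: -0.5 * np.sum((x - data) ** 2 / 0.05)
    chain = metropolis(log_post, x0=[0.0, 0.0])
    print(chain.mean(axis=0))  # posterior mean, close to the data
    ```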

  13. Modelling bark beetle disturbances in a large scale forest scenario model to assess climate change impacts and evaluate adaptive management strategies

    NARCIS (Netherlands)

    Seidl, R.; Schelhaas, M.J.; Lindner, M.; Lexer, M.J.

    2009-01-01

    To study the potential consequences of climate-induced changes in the biotic disturbance regime at regional to national scale, we integrated a model of Ips typographus (L. Scol. Col.) damage into the large-scale forest scenario model EFISCEN. A two-stage multivariate statistical meta-model was used to

  14. Long-term modelling of Carbon Capture and Storage, Nuclear Fusion, and large-scale District Heating

    DEFF Research Database (Denmark)

    Grohnheit, Poul Erik; Korsholm, Søren Bang; Lüthje, Mikael

    2011-01-01

    before 2050. The modelling tools developed by the International Energy Agency (IEA) Implementing Agreement ETSAP include both multi-regional global and long-term energy models till 2100, as well as national or regional models with shorter time horizons. Examples are the EFDA-TIMES model, focusing...... on nuclear fusion and the Pan European TIMES model, respectively. In the next decades CCS can be a driver for the development and expansion of large-scale district heating systems, which are currently widespread in Europe, Korea and China, and with large potentials in North America. If fusion will replace...... fossil fuel power plants with CCS in the second half of the century, the same infrastructure for heat distribution can be used which will support the penetration of both technologies. This paper will address the issue of infrastructure development and the use of CCS and fusion technologies using...

  15. Studies of supersymmetry models for the ATLAS experiment at the Large Hadron Collider

    CERN Document Server

    Barr, A J

    2002-01-01

    This thesis demonstrates that supersymmetry can be discovered with the ATLAS experiment even if nature conspires to choose one of two rather difficult cases. In the first case, where baryon number is weakly violated, the lightest supersymmetric particle decays into three quarks. This leads to events with a very large multiplicity of jets, which presents a difficult combinatorial problem at a hadronic collider. The distinctive property of the second class of model -- anomaly-mediation -- is the near degeneracy of the super-partners of the SU(2) weak bosons. The heavier charged wino decays producing its invisible neutral partner, the presence of which must be inferred from the apparent non-conservation of transverse momentum, as well as secondary particle(s) with low transverse momentum which must be extracted from a large background. Monte Carlo simulations are employed to show that for the models examined not only can the distinctive signature of the model be extracted, but also that a variety of measurements (...

  16. Dual Headquarters Involvement in Multibusiness Firms

    DEFF Research Database (Denmark)

    Nell, Phillip Christopher; Kappen, Philip; Dellestrand, Henrik

    The strategy literature has shown that headquarters involve themselves in subsidiary operations to add value. Yet, little is known about the extent to which multiple headquarters do so. Therefore, we investigate antecedents of corporate and divisional headquarters' involvement in innovation...... development projects of subsidiaries. Analyses of 85 innovation development projects reveal that dual innovation importance (innovation that is important for the division and the rest of the firm) and dual embeddedness (the innovating subsidiary is embedded both within the division and in the rest...... of the firm) lead to greater dual headquarters involvement, especially when the innovation development network is large. The results contribute to the literature on complex parenting and the theory of selective headquarters involvement....

  17. A dynamic programming approach for quickly estimating large network-based MEV models

    DEFF Research Database (Denmark)

    Mai, Tien; Frejinger, Emma; Fosgerau, Mogens

    2017-01-01

    We propose a way to estimate a family of static Multivariate Extreme Value (MEV) models with large choice sets in short computational time. The resulting model is also straightforward and fast to use for prediction. Following Daly and Bierlaire (2006), the correlation structure is defined by a ro...... to converge (4.3 h on an Intel(R) 3.2 GHz machine using a non-parallelized code). We also show that our approach allows us to estimate a cross-nested logit model of 111 nests with a real data set of more than 100,000 observations in 14 h.

  18. Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems

    Science.gov (United States)

    Koch, Patrick Nathan

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of the individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, which facilitates concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis; (2) statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and (3) noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method and associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system; the turbofan system-level problem is partitioned into engine cycle and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.

  19. Modal Measurements and Model Corrections of A Large Stroke Compliant Mechanism

    Directory of Open Access Journals (Sweden)

    Wijma W.

    2014-08-01

    Full Text Available In modelling flexure-based mechanisms, flexures are generally modelled as perfectly aligned and nominal values are assumed for their dimensions. To test the validity of these assumptions for a two-degrees-of-freedom (DOF) large stroke compliant mechanism, eigenfrequency and mode shape measurements are compared to results obtained with a flexible multibody model. The mechanism consists of eleven cross flexures and seven interconnecting bodies. The measured eigenfrequencies are 30% lower than those obtained with the model. With a simplified model, it is demonstrated that these differences can be attributed to incorrectly assumed leaf-spring thickness and misalignment of the leaf springs in the cross flexures. These manufacturing tolerances thus significantly affect the behaviour of the two-DOF mechanism, even though it was designed using the exact constraint design principle. This design principle avoids overconstraints to limit internal stresses due to manufacturing tolerances, yet this paper shows clearly that manufacturing imperfections can still result in significantly different dynamic behaviour.

  20. Development of Large Concrete Object Geometrical Model Based on Terrestrial Laser Scanning

    Directory of Open Access Journals (Sweden)

    Zaczek-Peplinska Janina

    2015-02-01

    Full Text Available The paper presents periodic control measurements of movements and a survey of a concrete dam on the Dunajec River in Rożnów, Poland. The topographic survey was conducted using the terrestrial laser scanning technique. The goal of the survey was data collection and the creation of a geometrical model. The acquired cross-sections and horizontal sections were utilised to create a numerical model of the object's behaviour under various loads, depending on the changing water level in the reservoir. The modelling was accomplished using the finite element technique. During the project, the suitability of terrestrial laser scanning techniques for this type of study of large hydrotechnical structures, such as gravity water dams, was assessed. The developed model can be used for deformation and displacement prognosis.

  1. Large deviations of a long-time average in the Ehrenfest urn model

    Science.gov (United States)

    Meerson, Baruch; Zilber, Pini

    2018-05-01

    Since its inception in 1907, the Ehrenfest urn model (EUM) has served as a test bed for key concepts of statistical mechanics. Here we employ this model to study large deviations of a time-additive quantity. We consider two continuous-time versions of the EUM with K urns and N balls: with and without interactions between the balls in the same urn. We evaluate the probability distribution that the average number of balls in one urn over time T takes any specified value aN, where 0 < a < 1. For a long observation time, T → ∞, a Donsker–Varadhan large deviation principle holds: -ln P ≃ T I(a, ...), where the ellipsis denotes additional parameters of the model. We calculate the rate function I exactly by two different methods due to Donsker and Varadhan and compare the exact results with those obtained with a variant of WKB approximation (after Wentzel, Kramers and Brillouin). In the absence of interactions the WKB prediction for the rate function is exact for any N. In the presence of interactions the WKB method gives asymptotically exact results for large N. The WKB method also uncovers the (very simple) time history of the system which dominates the contribution of different time histories to the probability distribution.
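
    A quick way to build intuition for the quantity being studied is to simulate the K = 2 urn model and record the time-averaged occupancy of one urn. The sketch below uses a discrete-time version (one randomly chosen ball moves per step) rather than the paper's continuous-time dynamics, so it is illustrative only.

    ```python
    import numpy as np

    def ehrenfest_time_average(N=50, steps=20000, seed=1):
        """Discrete-time Ehrenfest urn with K=2 urns: at each step a randomly
        chosen ball moves to the other urn. Returns the time-averaged
        fraction of balls in urn 1."""
        rng = np.random.default_rng(seed)
        n1 = N  # start with all balls in urn 1
        total = 0
        for _ in range(steps):
            if rng.random() < n1 / N:   # the chosen ball sits in urn 1
                n1 -= 1
            else:
                n1 += 1
            total += n1
        return total / (steps * N)

    # Typical runs concentrate near a = 1/2; a large deviation such as
    # a = 0.7 over a long time is exponentially rare, which is exactly
    # what the rate function quantifies.
    print(ehrenfest_time_average())
    ```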

  2. Ship detection using STFT sea background statistical modeling for large-scale oceansat remote sensing image

    Science.gov (United States)

    Wang, Lixia; Pei, Jihong; Xie, Weixin; Liu, Jinyuan

    2018-03-01

    Large-scale oceansat remote sensing images cover a large area of sea surface, whose fluctuation can be considered a non-stationary process. The Short-Time Fourier Transform (STFT) is a suitable analysis tool for such time-varying non-stationary signals. In this paper, a novel ship detection method using 2-D STFT sea-background statistical modeling for large-scale oceansat remote sensing images is proposed. First, the paper divides the large-scale oceansat remote sensing image into small sub-blocks, and 2-D STFT is applied to each sub-block individually. Second, the 2-D STFT spectra of the sub-blocks are studied, and a clear difference in characteristics between sea background and non-sea background is found. Finally, a statistical model for all valid frequency points in the STFT spectrum of the sea background is given, and a ship detection method based on this 2-D STFT spectrum modeling is proposed. The experimental results show that the proposed algorithm can detect ship targets with a high recall rate and a low miss rate.
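
    The sub-block spectral analysis can be approximated with a windowed 2-D FFT, which is what a 2-D STFT with a rectangular window reduces to. The sketch below, on an invented toy image, shows only the block-splitting and per-block spectrum step; the statistical modeling and detection thresholding of the paper are not reproduced.

    ```python
    import numpy as np

    def block_spectra(image, block=32):
        """Split an image into non-overlapping blocks and return the magnitude
        of each block's 2-D FFT, a simple stand-in for a 2-D STFT with a
        rectangular window."""
        h, w = image.shape
        blocks = []
        for i in range(0, h - block + 1, block):
            for j in range(0, w - block + 1, block):
                sub = image[i:i + block, j:j + block]
                blocks.append(np.abs(np.fft.fft2(sub)))
        return np.array(blocks)

    # Toy "sea" image: weak noise background plus one bright target.
    rng = np.random.default_rng(0)
    img = rng.normal(size=(128, 128)) * 0.1
    img[60:64, 60:64] += 5.0           # ship-like anomaly
    spectra = block_spectra(img)
    print(spectra.shape)               # (16, 32, 32) spectra to model statistically
    ```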

  3. Modelling Morphological Response of Large Tidal Inlet Systems to Sea Level Rise

    NARCIS (Netherlands)

    Dissanayake, P.K.

    2011-01-01

    This dissertation qualitatively investigates the morphodynamic response of a large inlet system to the IPCC-projected relative sea level rise (RSLR). The adopted numerical approach (Delft3D) used a highly schematised model domain analogous to the Ameland inlet in the Dutch Wadden Sea. Predicted inlet

  4. Context-dependent encoding of fear and extinction memories in a large-scale network model of the basal amygdala.

    Science.gov (United States)

    Vlachos, Ioannis; Herry, Cyril; Lüthi, Andreas; Aertsen, Ad; Kumar, Arvind

    2011-03-01

    The basal nucleus of the amygdala (BA) is involved in the formation of context-dependent conditioned fear and extinction memories. To understand the underlying neural mechanisms we developed a large-scale neuron network model of the BA, composed of excitatory and inhibitory leaky-integrate-and-fire neurons. Excitatory BA neurons received conditioned stimulus (CS)-related input from the adjacent lateral nucleus (LA) and contextual input from the hippocampus or medial prefrontal cortex (mPFC). We implemented a plasticity mechanism according to which CS and contextual synapses were potentiated if CS and contextual inputs temporally coincided on the afferents of the excitatory neurons. Our simulations revealed a differential recruitment of two distinct subpopulations of BA neurons during conditioning and extinction, mimicking the activation of experimentally observed cell populations. We propose that these two subgroups encode contextual specificity of fear and extinction memories, respectively. Mutual competition between them, mediated by feedback inhibition and driven by contextual inputs, regulates the activity in the central amygdala (CEA) thereby controlling amygdala output and fear behavior. The model makes multiple testable predictions that may advance our understanding of fear and extinction memories.
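
    The building block of such a network is easy to sketch. Below is a plain Euler integration of a single leaky integrate-and-fire neuron with threshold-and-reset spiking; the parameter values are generic textbook numbers, not those of the BA network model.

    ```python
    import numpy as np

    def lif_spike_times(i_ext, dt=1e-4, tau=0.02, v_rest=-0.07,
                        v_thresh=-0.05, v_reset=-0.07, r_m=1e8):
        """Euler integration of tau * dV/dt = -(V - v_rest) + R_m * I_ext(t),
        with a spike and reset whenever V crosses threshold."""
        v, spikes = v_rest, []
        for k, i in enumerate(i_ext):
            v += (dt / tau) * (-(v - v_rest) + r_m * i)
            if v >= v_thresh:
                spikes.append(k * dt)
                v = v_reset
        return spikes

    # One second of constant 0.3 nA input produces regular spiking.
    spikes = lif_spike_times(np.full(10000, 3e-10))
    print(len(spikes), spikes[:3])
    ```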

  5. Context-dependent encoding of fear and extinction memories in a large-scale network model of the basal amygdala.

    Directory of Open Access Journals (Sweden)

    Ioannis Vlachos

    2011-03-01

    Full Text Available The basal nucleus of the amygdala (BA) is involved in the formation of context-dependent conditioned fear and extinction memories. To understand the underlying neural mechanisms we developed a large-scale neuron network model of the BA, composed of excitatory and inhibitory leaky-integrate-and-fire neurons. Excitatory BA neurons received conditioned stimulus (CS)-related input from the adjacent lateral nucleus (LA) and contextual input from the hippocampus or medial prefrontal cortex (mPFC). We implemented a plasticity mechanism according to which CS and contextual synapses were potentiated if CS and contextual inputs temporally coincided on the afferents of the excitatory neurons. Our simulations revealed a differential recruitment of two distinct subpopulations of BA neurons during conditioning and extinction, mimicking the activation of experimentally observed cell populations. We propose that these two subgroups encode contextual specificity of fear and extinction memories, respectively. Mutual competition between them, mediated by feedback inhibition and driven by contextual inputs, regulates the activity in the central amygdala (CEA), thereby controlling amygdala output and fear behavior. The model makes multiple testable predictions that may advance our understanding of fear and extinction memories.

  6. Assessing Human Modifications to Floodplains using Large-Scale Hydrogeomorphic Floodplain Modeling

    Science.gov (United States)

    Morrison, R. R.; Scheel, K.; Nardi, F.; Annis, A.

    2017-12-01

    Human modifications to floodplains for water resource and flood management purposes have significantly transformed river-floodplain connectivity dynamics in many watersheds. Bridges, levees, reservoirs, shifts in land use, and other hydraulic engineering works have altered flow patterns and caused changes in the timing and extent of floodplain inundation processes. These hydrogeomorphic changes have likely resulted in negative impacts on aquatic habitat and ecological processes. The availability of large-scale, high-resolution topographic datasets provides an opportunity for detecting anthropogenic impacts by means of geomorphic mapping. We have developed and are implementing a methodology that compares a hydrogeomorphic floodplain mapping technique to hydraulically modeled floodplain boundaries in order to estimate floodplain loss due to human activities. Our hydrogeomorphic mapping methodology assumes that river valley morphology intrinsically includes information on flood-driven erosion and depositional phenomena. We use a digital elevation model-based algorithm that identifies the floodplain as the area of the fluvial corridor lying below water reference levels, which are estimated using a simplified hydrologic model. Results from our hydrogeomorphic method are compared to hydraulically derived flood zone maps and spatial datasets of levee-protected areas to explore where water management features, such as levees, have changed floodplain dynamics and landscape features. Parameters associated with commonly used F-index functions are quantified and analyzed to better understand how floodplain areas have been reduced within a basin. Preliminary results indicate that the hydrogeomorphic floodplain model is useful for quickly delineating floodplains at large watershed scales, but further analyses are needed to understand the caveats of using the model to determine floodplain loss due to levees. We plan to continue this work by exploring the spatial dependencies of the F
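
    The core of the DEM-based delineation step is a simple comparison of terrain elevation against a water reference surface. In the sketch below the reference level is a single uniform value for illustration, whereas the method described above estimates spatially varying levels with a simplified hydrologic model.

    ```python
    import numpy as np

    def hydrogeomorphic_floodplain(dem, water_level):
        """Flag DEM cells lying at or below a water reference surface,
        the core idea of the DEM-based floodplain delineation above."""
        return dem <= water_level

    # Toy valley cross-sections and a uniform reference level of 102 m.
    dem = np.array([[105.0, 103.0, 101.0, 100.0, 101.0, 104.0],
                    [106.0, 102.0, 100.0,  99.0, 101.0, 105.0]])
    mask = hydrogeomorphic_floodplain(dem, water_level=102.0)
    print(mask.astype(int))  # 1 = floodplain cell
    ```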

  7. Can limited area NWP and/or RCM models improve on large scales inside their domain?

    Science.gov (United States)

    Mesinger, Fedor; Veljovic, Katarina

    2017-04-01

    In a paper in press in Meteorology and Atmospheric Physics at the time this abstract is being written, Mesinger and Veljovic point out four requirements that need to be fulfilled by a limited area model (LAM), be it in an NWP or RCM environment, to improve on large scales inside its domain. First, the NWP/RCM model needs to be run on a relatively large domain; note that domain size is quite inexpensive compared to resolution. Second, the NWP/RCM model should not use more forcing at its boundaries than required by the mathematics of the problem. That means prescribing lateral boundary conditions only at its outside boundary, with one less prognostic variable prescribed at the outflow than at the inflow parts of the boundary. Next, nudging towards the large scales of the driver model must not be used, as it would obviously be nudging in the wrong direction if the nested model can improve on large scales inside its domain. Finally, the NWP/RCM model must have features that enable the development of large scales improved compared to those of the driver model; this would typically include higher resolution, but does not have to. Integrations showing improvements in large scales by LAM ensemble members are summarized in the paper in press mentioned above. The ensemble members referred to are run using the Eta model and are driven by ECMWF 32-day ensemble members initialized at 0000 UTC 4 October 2012. The Eta model used is the so-called "upgraded Eta," or "sloping steps Eta," which is free of the Gallus-Klemp problem of weak flow in the lee of bell-shaped topography, a problem that seemed to many to suggest the eta coordinate is ill-suited for high-resolution models. The "sloping steps" in fact represent a simple version of the cut-cell scheme. The accuracy of forecasting the position of jet stream winds, chosen to be those with speeds greater than 45 m/s at 250 hPa, expressed by Equitable Threat (or Gilbert) skill scores adjusted to unit bias (ETSa), was taken to show the skill at large scales

  8. Black holes from large N singlet models

    Science.gov (United States)

    Amado, Irene; Sundborg, Bo; Thorlacius, Larus; Wintergerst, Nico

    2018-03-01

    The emergent nature of spacetime geometry and black holes can be directly probed in simple holographic duals of higher spin gravity and tensionless string theory. To this end, we study time dependent thermal correlation functions of gauge invariant observables in suitably chosen free large N gauge theories. At low temperature and on short time scales the correlation functions encode propagation through an approximate AdS spacetime while interesting departures emerge at high temperature and on longer time scales. This includes the existence of evanescent modes and the exponential decay of time dependent boundary correlations, both of which are well known indicators of bulk black holes in AdS/CFT. In addition, a new time scale emerges after which the correlation functions return to a bulk thermal AdS form up to an overall temperature dependent normalization. A corresponding length scale was seen in equal time correlation functions in the same models in our earlier work.

  9. Large scale and cloud-based multi-model analytics experiments on climate change data in the Earth System Grid Federation

    Science.gov (United States)

    Fiore, Sandro; Płóciennik, Marcin; Doutriaux, Charles; Blanquer, Ignacio; Barbera, Roberto; Donvito, Giacinto; Williams, Dean N.; Anantharaj, Valentine; Salomoni, Davide D.; Aloisio, Giovanni

    2017-04-01

    In many scientific domains such as climate, data is often n-dimensional and requires tools that support specialized data types and primitives to be properly stored, accessed, analysed and visualized. Moreover, new challenges arise in large-scale scenarios and eco-systems where petabytes (PB) of data can be available and data can be distributed and/or replicated, such as the Earth System Grid Federation (ESGF) serving the Coupled Model Intercomparison Project, Phase 5 (CMIP5) experiment, providing access to 2.5PB of data for the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5). A case study on climate model intercomparison data analysis addressing several classes of multi-model experiments is being implemented in the context of the EU H2020 INDIGO-DataCloud project. Such experiments require the availability of large amounts of data (multi-terabyte order) related to the output of several climate model simulations as well as the exploitation of scientific data management tools for large-scale data analytics. More specifically, the talk discusses in detail a use case on precipitation trend analysis in terms of requirements, architectural design solution, and infrastructural implementation. The experiment has been tested and validated on CMIP5 datasets, in the context of a large-scale distributed testbed across the EU and US involving three ESGF sites (LLNL, ORNL, and CMCC) and one central orchestrator site (PSNC). The general "environment" of the case study relates to: (i) multi-model data analysis inter-comparison challenges; (ii) addressed on CMIP5 data; and (iii) which are made available through the IS-ENES/ESGF infrastructure. The added value of the solution proposed in the INDIGO-DataCloud project is summarized in the following: (i) it implements a different paradigm (from client- to server-side); (ii) it intrinsically reduces data movement; (iii) it makes the end-user setup lightweight; (iv) it fosters re-usability (of data, final

  10. [Large vessels vasculopathy in systemic sclerosis].

    Science.gov (United States)

    Tejera Segura, Beatriz; Ferraz-Amaro, Iván

    2015-12-07

    Vasculopathy in systemic sclerosis is a severe, in many cases irreversible, manifestation that can lead to amputation. While the classical clinical manifestations of the disease involve the microcirculation, the proximal vessels of the upper and lower limbs can also be affected. This involvement of large vessels may be related to systemic sclerosis, vasculitis or atherosclerosis, and the differential diagnosis is not easy. A proper and early diagnosis is essential for starting prompt, appropriate treatment. In this review, we examine the involvement of large vessels in scleroderma, an understudied manifestation with important prognostic and therapeutic implications. Copyright © 2015 Elsevier España, S.L.U. All rights reserved.

  11. The necessary burden of involving stakeholders in agent-based modelling for education and decision-making

    Science.gov (United States)

    Bommel, P.; Bautista Solís, P.; Leclerc, G.

    2016-12-01

    We implemented a participatory process with water stakeholders to improve resilience to drought at the watershed scale and to reduce water pollution disputes in drought-prone Northwestern Costa Rica. The purpose is to facilitate co-management in a rural watershed impacted by recurrent droughts related to ENSO. The process involved designing "ContaMiCuenca", a hybrid agent-based model in which users can specify the decisions of their agents. We followed a Companion Modeling approach (www.commod.org) and organized 10 workshops that included research techniques such as participatory diagnostics, actor-resources-interaction and UML diagrams, multi-agent model design, and interactive simulation sessions. We collectively assessed the main water issues in the watershed, prioritized their importance, defined the objectives of the process, and pilot-tested ContaMiCuenca for environmental education with adults and children. Simulation sessions resulted in debates about the need to improve the model's accuracy, arguably more relevant for decision-making. This helped identify significant knowledge gaps in groundwater pollution and aquifer dynamics that need to be addressed in order to improve our collective learning. Significant mismatches among participants' expectations, objectives, and agendas considerably slowed down the participatory process. The main issue may originate in participants expecting technical solutions from a positivist science, as constantly promoted in the region by dole-out initiatives, which is incompatible with the constructivist stance of participatory modellers. This requires much closer interaction of community members with modellers, which may be hard to attain in current research practice and institutional contexts. Nevertheless, overcoming these constraints is necessary for a true involvement of water stakeholders to achieve community-based decisions that facilitate integrated water management. Our findings provide significant guidance for

  12. Monte Carlo modelling of large scale NORM sources using MCNP.

    Science.gov (United States)

    Wallace, J D

    2013-12-01

    Representative Monte Carlo modelling of large-scale planar sources (for comparison to external environmental radiation fields) is undertaken using large-diameter, thin-profile planar cylindrical sources. The relative impacts of source extent, soil thickness and sky-shine are investigated to guide decisions relating to representative geometries. In addition, the impact of source-to-detector distance on the nature of the detector response, for a range of source sizes, has been investigated. These investigations, using an MCNP-based model, indicate that a soil cylinder of greater than 20 m diameter and no less than 50 cm depth/height, combined with a 20 m deep sky section above the soil cylinder, is needed to representatively model the semi-infinite plane of uniformly distributed NORM sources. An initial investigation of the effect of detector placement indicates that smaller source sizes may be used to achieve a representative response at shorter source-to-detector distances. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  13. Predicting Preschoolers' Attachment Security from Fathers' Involvement, Internal Working Models, and Use of Social Support

    Science.gov (United States)

    Newland, Lisa A.; Coyl, Diana D.; Freeman, Harry

    2008-01-01

    Associations between preschoolers' attachment security, fathers' involvement (i.e. parenting behaviors and consistency) and fathering context (i.e. fathers' internal working models (IWMs) and use of social support) were examined in a subsample of 102 fathers, taken from a larger sample of 235 culturally diverse US families. The authors predicted…

  14. Large-signal modeling of multi-finger InP DHBT devices at millimeter-wave frequencies

    DEFF Research Database (Denmark)

    Johansen, Tom Keinicke; Midili, Virginio; Squartecchia, Michele

    2017-01-01

    A large-signal modeling approach has been developed for multi-finger devices fabricated in an Indium Phosphide (InP) Double Heterojunction Bipolar Transistor (DHBT) process. The approach utilizes unit-finger device models embedded in a multi-port parasitic network. The unit-finger model is based...... on an improved UCSD HBT model formulation avoiding an erroneous R_ci C_bci transit-time contribution from the intrinsic collector region as found in other III-V based HBT models. The mutual heating between fingers is modeled by a thermal coupling network with parameters extracted from electro-thermal simulations...

  15. On Modeling Large-Scale Multi-Agent Systems with Parallel, Sequential and Genuinely Asynchronous Cellular Automata

    International Nuclear Information System (INIS)

    Tosic, P.T.

    2011-01-01

    We study certain types of Cellular Automata (CA) viewed as an abstraction of large-scale Multi-Agent Systems (MAS). We argue that the classical CA model needs to be modified in several important respects in order to become a relevant and sufficiently general model for large-scale MAS, so that the generalized model can capture many important MAS properties at the level of agent ensembles and their long-term collective behavior patterns. We specifically focus on the issue of inter-agent communication in CA, and propose sequential cellular automata (SCA) as the first step, and genuinely Asynchronous Cellular Automata (ACA) as the ultimate deterministic CA-based abstract models for large-scale MAS made of simple reactive agents. We first formulate deterministic and nondeterministic versions of sequential CA, and then summarize some interesting configuration space properties (i.e., possible behaviors) of a restricted class of sequential CA. In particular, we compare and contrast those properties of sequential CA with the corresponding properties of the classical (that is, parallel and perfectly synchronous) CA with the same restricted class of update rules. We analytically demonstrate the failure of the studied sequential CA models to simulate all possible behaviors of perfectly synchronous parallel CA, even for a very restricted class of non-linear totalistic node update rules. The lesson learned is that the interleaving semantics of concurrency, when applied to sequential CA, is not refined enough to adequately capture the perfect synchrony of parallel CA updates. Last but not least, we outline what would be an appropriate CA-like abstraction for large-scale distributed computing insofar as the inter-agent communication model is concerned, and in that context we propose genuinely asynchronous CA. (author)
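
    The parallel-versus-sequential distinction is easy to see in code. The sketch below runs one step of a toy totalistic rule under both update disciplines on the same initial configuration (the rule and data are invented for illustration); the sequential sweep generally produces a different successor because each cell already sees its updated left neighbour.

    ```python
    import numpy as np

    RULE = lambda left, c, right: (left + c + right) % 2  # toy totalistic rule

    def step_parallel(state):
        """Classical CA: all cells update simultaneously."""
        left, right = np.roll(state, 1), np.roll(state, -1)
        return RULE(left, state, right)

    def step_sequential(state, order):
        """Sequential CA: cells update one at a time in a fixed order,
        each seeing its neighbours' already-updated values."""
        s = state.copy()
        n = len(s)
        for i in order:
            s[i] = RULE(s[(i - 1) % n], s[i], s[(i + 1) % n])
        return s

    init = np.array([0, 1, 0, 0, 1, 1, 0, 1])
    print(step_parallel(init))
    print(step_sequential(init, order=range(len(init))))  # generally differs
    ```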

  16. Application of Large-Scale Database-Based Online Modeling to Plant State Long-Term Estimation

    Science.gov (United States)

    Ogawa, Masatoshi; Ogai, Harutoshi

    Recently, attention has been drawn to local modeling techniques based on a new idea called "Just-In-Time (JIT) modeling". To apply "JIT modeling" to a large database online, "Large-scale database-based Online Modeling (LOM)" has been proposed. LOM is a technique that makes the retrieval of neighboring data more efficient by using both "stepwise selection" and quantization. In order to predict the long-term state of the plant without using future data of the manipulated variables, an Extended Sequential Prediction method of LOM (ESP-LOM) has been proposed. In this paper, the LOM and the ESP-LOM are introduced.
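
    The essence of JIT (lazy) modeling is to defer model fitting until a query arrives, then fit a small local model on the nearest stored samples. The sketch below uses a brute-force nearest-neighbour search and a local linear least-squares fit on synthetic data; LOM's "stepwise selection" and quantization speed-ups are not reproduced here.

    ```python
    import numpy as np

    def jit_predict(query, X, Y, k=20):
        """Just-In-Time local modeling: retrieve the k nearest stored
        samples and fit a local affine model on demand for each query."""
        d = np.linalg.norm(X - query, axis=1)
        idx = np.argsort(d)[:k]
        A = np.hstack([X[idx], np.ones((k, 1))])     # local affine regressors
        coef, *_ = np.linalg.lstsq(A, Y[idx], rcond=None)
        return np.append(query, 1.0) @ coef

    # Synthetic database standing in for historical plant records.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(5000, 2))
    Y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.01 * rng.normal(size=5000)
    print(jit_predict(np.array([1.0, -0.5]), X, Y))  # ~ sin(1) - 0.25 ≈ 0.59
    ```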

  17. Sample-path large deviations in credit risk

    NARCIS (Netherlands)

    Leijdekker, V.J.G.; Mandjes, M.R.H.; Spreij, P.J.C.

    2011-01-01

    The event of large losses plays an important role in credit risk. As these large losses are typically rare, and portfolios usually consist of a large number of positions, large deviation theory is the natural tool to analyze the tail asymptotics of the probabilities involved. We first derive a

  18. Influence of blocking on Northern European and Western Russian heatwaves in large climate model ensembles

    Science.gov (United States)

    Schaller, N.; Sillmann, J.; Anstey, J.; Fischer, E. M.; Grams, C. M.; Russo, S.

    2018-05-01

    Better preparedness for summer heatwaves could mitigate their adverse effects on society. This can potentially be attained through an increased understanding of the relationship between heatwaves and one of their main dynamical drivers, atmospheric blocking. In the 1979–2015 period, we find that there is a significant correlation between summer heatwave magnitudes and the number of days influenced by atmospheric blocking in Northern Europe and Western Russia. Using three large global climate model ensembles, we find similar correlations, indicating that these three models are able to represent the relationship between extreme temperature and atmospheric blocking, despite having biases in their simulation of individual climate variables such as temperature or geopotential height. Our results emphasize the need to use large ensembles of different global climate models as single realizations do not always capture this relationship. The three large ensembles further suggest that the relationship between summer heatwaves and atmospheric blocking will not change in the future. This could be used to statistically model heatwaves with atmospheric blocking as a covariate and aid decision-makers in planning disaster risk reduction and adaptation to climate change.

  19. Expression profiles of genes involved in xenobiotic metabolism and disposition in human renal tissues and renal cell models

    Energy Technology Data Exchange (ETDEWEB)

    Van der Hauwaert, Cynthia; Savary, Grégoire [EA4483, Université de Lille 2, Faculté de Médecine de Lille, Pôle Recherche, 59045 Lille (France); Buob, David [Institut de Pathologie, Centre de Biologie Pathologie Génétique, Centre Hospitalier Régional Universitaire de Lille, 59037 Lille (France); Leroy, Xavier; Aubert, Sébastien [Institut de Pathologie, Centre de Biologie Pathologie Génétique, Centre Hospitalier Régional Universitaire de Lille, 59037 Lille (France); Institut National de la Santé et de la Recherche Médicale, UMR837, Centre de Recherche Jean-Pierre Aubert, Equipe 5, 59045 Lille (France); Flamand, Vincent [Service d' Urologie, Hôpital Huriez, Centre Hospitalier Régional Universitaire de Lille, 59037 Lille (France); Hennino, Marie-Flore [EA4483, Université de Lille 2, Faculté de Médecine de Lille, Pôle Recherche, 59045 Lille (France); Service de Néphrologie, Hôpital Huriez, Centre Hospitalier Régional Universitaire de Lille, 59037 Lille (France); Perrais, Michaël [Institut National de la Santé et de la Recherche Médicale, UMR837, Centre de Recherche Jean-Pierre Aubert, Equipe 5, 59045 Lille (France); and others

    2014-09-15

    Numerous xenobiotics have been shown to be harmful to the kidney. Thus, to improve our knowledge of the cellular processing of these nephrotoxic compounds, we evaluated, by real-time PCR, the mRNA expression level of 377 genes encoding xenobiotic-metabolizing enzymes (XMEs), transporters, as well as nuclear receptors and transcription factors that coordinate their expression, in eight normal human renal cortical tissues. Additionally, since several renal in vitro models are commonly used in pharmacological and toxicological studies, we investigated their metabolic capacities and compared them with those of renal tissues. The same set of genes was thus investigated in HEK293 and HK2 immortalized cell lines, in commercial primary cultures of epithelial renal cells, and in proximal tubular cell primary cultures. Altogether, our data offer a comprehensive description of the kidney's ability to process xenobiotics. Moreover, by hierarchical clustering, we observed large variations in gene expression profiles between renal cell lines and renal tissues. Primary cultures of proximal tubular epithelial cells exhibited the highest similarity with renal tissue in terms of transcript profiling. Moreover, compared to other renal cell models, Tacrolimus dose-dependent toxic effects were lower in proximal tubular cell primary cultures, which display the highest metabolism and disposition capacity. Therefore, primary cultures appear to be the most relevant in vitro model for investigating the metabolism and bioactivation of nephrotoxic compounds and for toxicological and pharmacological studies. - Highlights: • Renal proximal tubular (PT) cells are highly sensitive to xenobiotics. • Expression of genes involved in xenobiotic disposition was measured. • PT cells exhibited the highest similarities with renal tissue.

  20. Centrifuge modelling of large diameter pile in sand subject to lateral loading

    DEFF Research Database (Denmark)

    Leth, Caspar Thrane

    …and cyclic behaviour of large-diameter rigid piles in dry sand by use of physical modelling. The physical modelling was carried out at the Department of Civil Engineering at the Technical University of Denmark (DTU.BYG) in the period from 2005 to 2009. The main centrifuge facilities, and especially … the equipment for lateral load tests, were outdated at the start of the research in 2005, and a major part of the work with the geotechnical centrifuge involved renovation and upgrading of the facilities. The research with respect to testing of large-diameter piles included: construction of equipment … with embedment lengths of 6, 8 and 10 times the diameter. The tests were carried out with a load eccentricity of 2.5 m to 6.5 m above the sand surface. The present report includes a description of the centrifuge facilities, the applied test procedure and equipment, along with a presentation of the obtained results…

  1. A simple orbit-attitude coupled modelling method for large solar power satellites

    Science.gov (United States)

    Li, Qingjun; Wang, Bo; Deng, Zichen; Ouyang, Huajiang; Wei, Yi

    2018-04-01

    A simple modelling method is proposed to study the orbit-attitude coupled dynamics of large solar power satellites based on natural coordinate formulation. The generalized coordinates are composed of Cartesian coordinates of two points and Cartesian components of two unitary vectors instead of Euler angles and angular velocities, which is the reason for its simplicity. Firstly, in order to develop natural coordinate formulation to take gravitational force and gravity gradient torque of a rigid body into account, Taylor series expansion is adopted to approximate the gravitational potential energy. The equations of motion are constructed through constrained Hamilton's equations. Then, an energy- and constraint-conserving algorithm is presented to solve the differential-algebraic equations. Finally, the proposed method is applied to simulate the orbit-attitude coupled dynamics and control of a large solar power satellite considering gravity gradient torque and solar radiation pressure. This method is also applicable to dynamic modelling of other rigid multibody aerospace systems.
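
    For context, the second-order Taylor expansion referred to here is the standard gravity-gradient approximation (our notation, not necessarily the paper's): for a rigid body of mass $m$ with inertia tensor $J$ about its mass centre, at geocentric distance $r_c$ along the unit vector $\hat{r}$,

    $$ U \approx -\frac{\mu m}{r_c} - \frac{\mu}{2 r_c^{3}}\left[\operatorname{tr}(J) - 3\,\hat{r}^{\mathsf{T}} J\,\hat{r}\right], $$

    where $\mu$ is the gravitational parameter. The first term drives the orbit; the second, attitude-dependent term is what produces the gravity gradient torque and hence the orbit-attitude coupling.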

  2. Distributed HUC-based modeling with SUMMA for ensemble streamflow forecasting over large regional domains.

    Science.gov (United States)

    Saharia, M.; Wood, A.; Clark, M. P.; Bennett, A.; Nijssen, B.; Clark, E.; Newman, A. J.

    2017-12-01

    Most operational streamflow forecasting systems rely on a forecaster-in-the-loop approach in which some parts of the forecast workflow require an experienced human forecaster. But this approach faces challenges surrounding process reproducibility, hindcasting capability, and extension to large domains. The operational hydrologic community is increasingly moving towards 'over-the-loop' (completely automated) large-domain simulations, yet recent developments indicate a widespread lack of community knowledge about the strengths and weaknesses of such systems for forecasting. A realistic representation of land surface hydrologic processes is a critical element for improving forecasts, but often comes at the substantial cost of forecast system agility and efficiency. While popular grid-based models support the distributed representation of land surface processes, intermediate-scale Hydrologic Unit Code (HUC)-based modeling could provide a more efficient and process-aligned spatial discretization, reducing the need for tradeoffs between model complexity and critical forecasting requirements such as ensemble methods and comprehensive model calibration. The National Center for Atmospheric Research is collaborating with the University of Washington, the Bureau of Reclamation and the USACE to implement, assess, and demonstrate real-time, over-the-loop distributed streamflow forecasting for several large western US river basins and regions. In this presentation, we present early results from short- to medium-range hydrologic and streamflow forecasts for the Pacific Northwest (PNW). We employ real-time 1/16th-degree daily ensemble model forcings as well as downscaled Global Ensemble Forecast System (GEFS) meteorological forecasts. These datasets drive an intermediate-scale configuration of the Structure for Unifying Multiple Modeling Alternatives (SUMMA) model, which represents the PNW using over 11,700 HUCs. The system produces not only streamflow forecasts (using the Mizu

  3. Current fluctuations and statistics during a large deviation event in an exactly solvable transport model

    International Nuclear Information System (INIS)

    Hurtado, Pablo I; Garrido, Pedro L

    2009-01-01

    We study the distribution of the time-integrated current in an exactly solvable toy model of heat conduction, both analytically and numerically. The simplicity of the model allows us to derive the full current large deviation function and the system statistics during a large deviation event. In this way we unveil a relation between system statistics at the end of a large deviation event and for intermediate times. The mid-time statistics is independent of the sign of the current, a reflection of the time-reversal symmetry of microscopic dynamics, while the end-time statistics does depend on the current sign, and also on its microscopic definition. We compare our exact results with simulations based on the direct evaluation of large deviation functions, analyzing the finite-size corrections of this simulation method and deriving detailed bounds for its applicability. We also show how the Gallavotti–Cohen fluctuation theorem can be used to determine the range of validity of simulation results
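
    Schematically, the Gallavotti-Cohen symmetry invoked here relates the probabilities of opposite current fluctuations: writing $P_t(q)$ for the probability of observing a time-averaged current $q$ over a long time $t$,

    $$ \lim_{t\to\infty}\frac{1}{t}\,\ln\frac{P_t(q)}{P_t(-q)} = E\,q, $$

    with $E$ a constant fixed by the driving. Because the relation is exactly linear in $q$, departures from this line in simulated large deviation functions flag the point where the simulation method stops being reliable, which is how the theorem serves as a validity check. (This is the generic statement of the theorem; see the paper for the model-specific form.)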

  4. Patient involvement in hospital architecture

    DEFF Research Database (Denmark)

    Herriott, Richard

    2017-01-01

    the structure of the design process, identification and ranking of stakeholders, the methods of user-involvement and approaches to accessibility. The paper makes recommendations for a change of approach to user-participation in large-scale, long-duration projects. The paper adds new insight on an under...

  5. An Effect of the Environmental Pollution via Mathematical Model Involving the Mittag-Leffler Function

    Directory of Open Access Journals (Sweden)

    Anjali Goswami

    2017-08-01

    Full Text Available Estimating the effect of pollution on the environment is a major challenge at present. In this study, we develop a new approach to estimating the effect of pollution on the environment via a mathematical model involving the generalized Mittag-Leffler function of one variable $E_{\alpha_{2},\delta_{1};\alpha_{3},\delta_{2}}^{\gamma_{1},\alpha_{1}}(z)$, which we introduce here.
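
    For orientation (our addition, not part of the record): the function used here generalizes the classical Mittag-Leffler series. The standard two-parameter and three-parameter (Prabhakar) forms are

    $$ E_{\alpha,\beta}(z)=\sum_{k=0}^{\infty}\frac{z^{k}}{\Gamma(\alpha k+\beta)}, \qquad E_{\alpha,\beta}^{\gamma}(z)=\sum_{k=0}^{\infty}\frac{(\gamma)_{k}}{k!\,\Gamma(\alpha k+\beta)}\,z^{k}, $$

    where $(\gamma)_{k}$ is the Pochhammer symbol; the six-parameter function $E_{\alpha_{2},\delta_{1};\alpha_{3},\delta_{2}}^{\gamma_{1},\alpha_{1}}(z)$ introduced in the paper is a further multi-parameter extension of this series.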

  6. Scale breaking effects in the quark-parton model for large P perpendicular phenomena

    International Nuclear Information System (INIS)

    Baier, R.; Petersson, B.

    1977-01-01

    We discuss how the scaling violations suggested by an asymptotically free parton model, i.e., the Q²-dependence of the transverse momentum of partons within hadrons, may affect the parton model description of large p⊥ phenomena. We show that such a mechanism can provide an explanation for the magnitude of the opposite-side correlations and their dependence on the trigger momentum. (author)

  7. Sizing and scaling requirements of a large-scale physical model for code validation

    International Nuclear Information System (INIS)

    Khaleel, R.; Legore, T.

    1990-01-01

    Model validation is an important consideration in application of a code for performance assessment and therefore in assessing the long-term behavior of the engineered and natural barriers of a geologic repository. Scaling considerations relevant to porous media flow are reviewed. An analysis approach is presented for determining the sizing requirements of a large-scale, hydrology physical model. The physical model will be used to validate performance assessment codes that evaluate the long-term behavior of the repository isolation system. Numerical simulation results for sizing requirements are presented for a porous medium model in which the media properties are spatially uncorrelated

  8. Use of personal computers in performing a linear modal analysis of a large finite-element model

    International Nuclear Information System (INIS)

    Wagenblast, G.R.

    1991-01-01

    This paper presents the use of personal computers in performing a dynamic frequency analysis of a large (2,801 degrees of freedom) finite-element model. Large model linear time history dynamic evaluations of safety related structures were previously restricted to mainframe computers using direct integration analysis methods. This restriction was a result of the limited memory and speed of personal computers. With the advances in memory capacity and speed of the personal computers, large finite-element problems now can be solved in the office in a timely and cost effective manner. Presented in three sections, this paper describes the procedure used to perform the dynamic frequency analysis of the large (2,801 degrees of freedom) finite-element model on a personal computer. Section 2.0 describes the structure and the finite-element model that was developed to represent the structure for use in the dynamic evaluation. Section 3.0 addresses the hardware and software used to perform the evaluation and the optimization of the hardware and software operating configuration to minimize the time required to perform the analysis. Section 4.0 explains the analysis techniques used to reduce the problem to a size compatible with the hardware and software memory capacity and configuration

  9. Large-N behaviour of string solutions in the Heisenberg model

    CERN Document Server

    Fujita, T; Takahashi, H

    2003-01-01

    We investigate the large-N behaviour of the complex solutions for the two-magnon system in the S = 1/2 Heisenberg XXZ model. The Bethe ansatz equations are numerically solved for the string solutions with a new iteration method. Clear evidence of the violation of the string configurations is found at N = 22, 62, 121, 200, 299, 417, but the broken states are still Bethe states. The number of Bethe states is consistent with the exact diagonalization, except for one singular state.

  10. A large deviations approach to the transient of the Erlang loss model

    NARCIS (Netherlands)

    Mandjes, M.R.H.; Ridder, Annemarie

    2001-01-01

    This paper deals with the transient behavior of the Erlang loss model. After scaling both the arrival rate and the number of trunks, an asymptotic analysis of the blocking probability is given. In addition, the most likely path to blocking is identified. Compared to Shwartz and Weiss [Large Deviations for
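
    For background, the stationary blocking probability in the Erlang loss model is the classical Erlang B formula, which is cheap to evaluate with the standard recursion; a small sketch of that computation (ours, for orientation - the paper itself concerns the transient regime):

    ```python
    # Erlang B blocking probability via the numerically stable recursion
    #   B(a, 0) = 1,  B(a, n) = a*B(a, n-1) / (n + a*B(a, n-1)),
    # where a is the offered load in Erlangs and n the number of trunks.
    def erlang_b(offered_load: float, trunks: int) -> float:
        b = 1.0
        for n in range(1, trunks + 1):
            b = offered_load * b / (n + offered_load * b)
        return b

    # Example: 100 Erlangs offered to 110 trunks (a few percent blocking)
    print(erlang_b(100.0, 110))
    ```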

  11. Earthquake cycles and physical modeling of the process leading up to a large earthquake

    Science.gov (United States)

    Ohnaka, Mitiyasu

    2004-08-01

    A thorough discussion is made of what the rational constitutive law for earthquake ruptures ought to be from the standpoint of the physics of rock friction and fracture, on the basis of solid facts observed in the laboratory. From this standpoint, it is concluded that the constitutive law should be a slip-dependent law with parameters that may depend on slip rate or time. With the long-term goal of establishing a rational methodology for forecasting large earthquakes, the entire process of one cycle of a typical large earthquake is modeled, and a comprehensive scenario that unifies individual models for intermediate- and short-term (immediate) forecasts is presented within the framework based on the slip-dependent constitutive law and the earthquake cycle model. The earthquake cycle includes the phase of accumulation of elastic strain energy with tectonic loading (phase II) and the phase of rupture nucleation at the critical stage where an adequate amount of elastic strain energy has been stored (phase III). Phase II plays a critical role in physical modeling of intermediate-term forecasting, and phase III in physical modeling of short-term (immediate) forecasting. The seismogenic layer and individual faults therein are inhomogeneous, and some of the physical quantities inherent in earthquake ruptures exhibit scale-dependence. It is therefore critically important to incorporate the properties of inhomogeneity and physical scaling in order to construct realistic, unified scenarios with predictive capability. The scenario presented may be significant and useful as a necessary first step in establishing the methodology for forecasting large earthquakes.

  12. Modeling the coupled return-spread high frequency dynamics of large tick assets

    Science.gov (United States)

    Curato, Gianbiagio; Lillo, Fabrizio

    2015-01-01

    Large tick assets, i.e. assets where one tick movement is a significant fraction of the price and the bid-ask spread is almost always equal to one tick, display dynamics in which price changes and spread are strongly coupled. We present an approach based on the hidden Markov model, also known in econometrics as the Markov switching model, for the dynamics of price changes, where the latent Markov process is described by the transitions between spreads. We then use a finite Markov mixture of logit regressions on past squared price changes to describe temporal dependencies in the dynamics of price changes. The model can thus be seen as a double chain Markov model. We show that the model describes the shape of the price change distribution at different time scales, volatility clustering, and the anomalous decrease of kurtosis. We calibrate the model on Nasdaq stock data and show that it reproduces remarkably well the statistical properties of real data.
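
    As a toy illustration of the switching idea (ours; the transition matrix and conditional distributions below are invented, not calibrated to Nasdaq data), one can simulate a latent spread state that modulates the distribution of tick-sized price changes:

    ```python
    # Toy Markov-switching simulation: a latent spread state follows a Markov
    # chain; price changes (in ticks) are drawn from a state-dependent
    # distribution. Purely illustrative parameters.
    import numpy as np

    rng = np.random.default_rng(0)
    P = np.array([[0.95, 0.05],   # persistence of the one-tick-spread state
                  [0.30, 0.70]])  # and of the wider-spread state
    ticks = np.array([-2, -1, 0, 1, 2])
    cond = [np.array([0.05, 0.20, 0.50, 0.20, 0.05]),  # narrow spread: small moves
            np.array([0.15, 0.20, 0.30, 0.20, 0.15])]  # wide spread: fatter tails

    state, changes = 0, []
    for _ in range(10_000):
        state = rng.choice(2, p=P[state])
        changes.append(rng.choice(ticks, p=cond[state]))

    print(np.var(changes))  # overall volatility reflects time spent in each state
    ```

    The double chain Markov model in the paper additionally makes the price-change distribution depend on past squared price changes; the sketch keeps only the regime-switching layer.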

  13. The Amateurs' Love Affair with Large Datasets

    Science.gov (United States)

    Price, Aaron; Jacoby, S. H.; Henden, A.

    2006-12-01

    Amateur astronomers are professionals in other areas. They bring expertise from such varied and technical careers as computer science, mathematics, engineering, and marketing. These skills, coupled with an enthusiasm for astronomy, can be used to help manage the large data sets coming online in the next decade. We will show specific examples where teams of amateurs have been involved in mining large, online data sets and have authored and published their own papers in peer-reviewed astronomical journals. Using the proposed LSST database as an example, we will outline a framework for involving amateurs in data analysis and education with large astronomical surveys.

  14. Parameter estimation in large-scale systems biology models: a parallel and self-adaptive cooperative strategy.

    Science.gov (United States)

    Penas, David R; González, Patricia; Egea, Jose A; Doallo, Ramón; Banga, Julio R

    2017-01-21

    The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve problems of this type, but the associated computational cost is very large. Moreover, many of these methods need the tuning of a number of adjustable search parameters, requiring a number of initial exploratory runs and therefore further increasing the computation times. Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse- and fine-grained parallelism, and (iii) self-tuning strategies. The performance and robustness of saCeSS is illustrated by solving a set of challenging parameter estimation problems, including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The results consistently show that saCeSS is a robust and efficient method, allowing a very significant reduction of computation times with respect to several previous state-of-the-art methods (from days to minutes, in several cases), even when only a small number of processors is used. The new parallel cooperative method presented here allows the solution of medium- and large-scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models.
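
    To make the problem class concrete (a generic illustration, not an implementation of saCeSS), parameter estimation in a nonlinear dynamic model amounts to globally minimizing a data-misfit over the ODE parameters:

    ```python
    # Fit (r, K) of a toy logistic-growth ODE to noisy observations with a
    # global optimizer. Stand-in for the problem class saCeSS targets; the
    # model, bounds and noise level are all illustrative choices.
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import differential_evolution

    def logistic(t, y, r, K):
        return r * y * (1.0 - y / K)

    t_obs = np.linspace(0.0, 10.0, 25)
    truth = (0.8, 50.0)
    y_true = solve_ivp(logistic, (0.0, 10.0), [1.0], t_eval=t_obs, args=truth).y[0]
    y_obs = y_true + np.random.default_rng(0).normal(0.0, 1.0, t_obs.size)

    def cost(theta):
        sim = solve_ivp(logistic, (0.0, 10.0), [1.0], t_eval=t_obs, args=tuple(theta))
        return float(np.sum((sim.y[0] - y_obs) ** 2))

    res = differential_evolution(cost, bounds=[(0.01, 2.0), (10.0, 100.0)], seed=1)
    print(res.x)  # estimated (r, K)
    ```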

  15. Deterministic Model for Rubber-Metal Contact Including the Interaction Between Asperities

    NARCIS (Netherlands)

    Deladi, E.L.; de Rooij, M.B.; Schipper, D.J.

    2005-01-01

    Rubber-metal contact involves relatively large deformations and large real contact areas compared to metal-metal contact. Here, a deterministic model is proposed for the contact between rubber and metal surfaces, which takes into account the interaction between neighboring asperities. In this model,

  16. Characteristics of joint involvement and relationships with systemic inflammation in systemic sclerosis

    DEFF Research Database (Denmark)

    Avouac, Jerome; Walker, Ulrich; Tyndall, Alan

    2010-01-01

    To determine the prevalence of and independent factors associated with joint involvement in a large population of patients with systemic sclerosis (SSc).

  17. A Bayesian spatio-temporal geostatistical model with an auxiliary lattice for large datasets

    KAUST Repository

    Xu, Ganggang

    2015-01-01

    When spatio-temporal datasets are large, the computational burden can lead to failures in the implementation of traditional geostatistical tools. In this paper, we propose a computationally efficient Bayesian hierarchical spatio-temporal model in which the spatial dependence is approximated by a Gaussian Markov random field (GMRF) while the temporal correlation is described using a vector autoregressive model. By introducing an auxiliary lattice on the spatial region of interest, the proposed method is not only able to handle irregularly spaced observations in the spatial domain, but it is also able to bypass the missing data problem in a spatio-temporal process. Because the computational complexity of the proposed Markov chain Monte Carlo algorithm is of the order O(n) with n the total number of observations in space and time, our method can be used to handle very large spatio-temporal datasets with reasonable CPU times. The performance of the proposed model is illustrated using simulation studies and a dataset of precipitation data from the coterminous United States.
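
    Schematically (our notation), the model class couples a vector-autoregressive temporal evolution on the auxiliary lattice with a sparse GMRF spatial precision:

    $$ \mathbf{y}_{t} = \mathbf{H}\,\mathbf{x}_{t} + \boldsymbol{\varepsilon}_{t}, \qquad \mathbf{x}_{t} = \mathbf{A}\,\mathbf{x}_{t-1} + \boldsymbol{\eta}_{t}, \qquad \boldsymbol{\eta}_{t} \sim \mathcal{N}\!\left(\mathbf{0}, \mathbf{Q}^{-1}\right), $$

    where $\mathbf{x}_{t}$ lives on the lattice, $\mathbf{H}$ maps lattice values to the irregular observation sites (so missing observations simply drop rows of $\mathbf{H}$), and the sparsity of the GMRF precision $\mathbf{Q}$ is what makes the linear-in-$n$ cost of each MCMC sweep plausible.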

  18. Large-order behavior of nondecoupling effects in the standard model and triviality

    International Nuclear Information System (INIS)

    Aoki, K.

    1994-01-01

    We compute some nondecoupling effects in the standard model, such as the ρ parameter, to all orders in the coupling constant expansion. We analyze their large order behavior and explicitly show how they are related to the nonperturbative cutoff dependence of these nondecoupling effects due to the triviality of the theory

  19. Multiscale virtual particle based elastic network model (MVP-ENM) for normal mode analysis of large-sized biomolecules.

    Science.gov (United States)

    Xia, Kelin

    2017-12-20

    In this paper, a multiscale virtual particle based elastic network model (MVP-ENM) is proposed for the normal mode analysis of large-sized biomolecules. The multiscale virtual particle (MVP) model is proposed for the discretization of biomolecular density data. With this model, large-sized biomolecular structures can be coarse-grained into virtual particles such that a balance between model accuracy and computational cost can be achieved. An elastic network is constructed by assuming "connections" between virtual particles. The connection is described by a special harmonic potential function, which considers the influence from both the mass distributions and distance relations of the virtual particles. Two independent models, i.e., the multiscale virtual particle based Gaussian network model (MVP-GNM) and the multiscale virtual particle based anisotropic network model (MVP-ANM), are proposed. It has been found that in the Debye-Waller factor (B-factor) prediction, the results from our MVP-GNM with a high resolution are as good as the ones from GNM. Even with low resolutions, our MVP-GNM can still capture the global behavior of the B-factor very well with mismatches predominantly from the regions with large B-factor values. Further, it has been demonstrated that the low-frequency eigenmodes from our MVP-ANM are highly consistent with the ones from ANM even with very low resolutions and a coarse grid. Finally, the great advantage of MVP-ANM model for large-sized biomolecules has been demonstrated by using two poliovirus virus structures. The paper ends with a conclusion.
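
    For reference, the plain single-scale GNM that serves as the baseline here computes B-factors from the pseudo-inverse of the Kirchhoff (connectivity) matrix. A minimal sketch (ours, not the MVP variant; coordinates and cutoff are placeholders):

    ```python
    # Standard Gaussian network model: B-factors are proportional to the
    # diagonal of the pseudo-inverse of the Kirchhoff matrix built from
    # pairwise contacts within a cutoff distance.
    import numpy as np

    def gnm_bfactors(coords: np.ndarray, cutoff: float = 7.0) -> np.ndarray:
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        kirchhoff = -(d < cutoff).astype(float)              # -1 per contact pair
        np.fill_diagonal(kirchhoff, 0.0)
        np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))  # degree on diagonal
        gamma_inv = np.linalg.pinv(kirchhoff)  # pseudo-inverse drops the zero mode
        return np.diag(gamma_inv)              # B_i up to a constant prefactor

    coords = np.random.default_rng(1).normal(size=(50, 3)) * 10.0  # fake C-alpha positions
    print(gnm_bfactors(coords)[:5])
    ```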

  20. Neuronal involvement in cisplatin neuropathy

    DEFF Research Database (Denmark)

    Krarup-Hansen, A; Helweg-Larsen, Susanne Elisabeth; Schmalbruch, H

    2007-01-01

    of large dorsal root ganglion cells. Motor conduction studies, autonomic function and warm and cold temperature sensation remained unchanged at all doses of cisplatin treatment. The results of these studies are consistent with degeneration of large sensory neurons whereas there was no evidence of distal......Although it is well known that cisplatin causes a sensory neuropathy, the primary site of involvement is not established. The clinical symptoms localized in a stocking-glove distribution may be explained by a length dependent neuronopathy or by a distal axonopathy. To study whether the whole neuron...

  1. Solving large mixed linear models using preconditioned conjugate gradient iteration.

    Science.gov (United States)

    Strandén, I; Lidauer, M

    1999-12-01

    Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique, feasible in Jacobi and conjugate gradient based iterative methods using iteration on data, is presented. In the new technique, the multiplication of a vector by a matrix is reorganized into three steps instead of the commonly used two. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program was assessed, in comparison with other general solving programs, via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third of that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison with other general software. Programs keeping the mixed model equations in random access memory required at least 20 and 435% more time to solve the univariate and multivariate animal models, respectively. The second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than the new program. The good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
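
    The underlying iteration is standard preconditioned conjugate gradient; a compact generic sketch with a Jacobi (diagonal) preconditioner is given below. Note this toy version stores the matrix explicitly, whereas the paper's point is precisely to avoid that by iterating on data:

    ```python
    # Generic preconditioned conjugate gradient for an SPD system A x = b,
    # with Jacobi preconditioner M = diag(A). Illustrative only.
    import numpy as np

    def pcg(A, b, tol=1e-10, max_iter=1000):
        x = np.zeros_like(b)
        m_inv = 1.0 / np.diag(A)
        r = b - A @ x
        z = m_inv * r
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                break
            z = m_inv * r
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])  # toy SPD mixed-model-equation stand-in
    print(pcg(A, np.array([1.0, 2.0])))
    ```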

  2. Planck limits on non-canonical generalizations of large-field inflation models

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Nina K.; Kinney, William H., E-mail: ninastei@buffalo.edu, E-mail: whkinney@buffalo.edu [Dept. of Physics, University at Buffalo, the State University of New York, Buffalo, NY 14260-1500 (United States)

    2017-04-01

    In this paper, we consider two case examples of Dirac-Born-Infeld (DBI) generalizations of canonical large-field inflation models, characterized by a reduced sound speed, c_S < 1. The reduced speed of sound lowers the tensor-scalar ratio, improving the fit of the models to the data, but increases the equilateral-mode non-Gaussianity, f_NL^equil, which the latest results from the Planck satellite constrain by a new upper bound. We examine constraints on these models in light of the most recent Planck and BICEP/Keck results, and find that they have a greatly decreased window of viability. The upper bound on f_NL^equil corresponds to a lower bound on the sound speed and a corresponding lower bound on the tensor-scalar ratio of r ∼ 0.01, so that near-future Cosmic Microwave Background observations may be capable of ruling out entire classes of DBI inflation models. The result is, however, not universal: infrared-type DBI inflation models, where the speed of sound increases with time, are not subject to the bound.

  3. A wave propagation model of blood flow in large vessels using an approximate velocity profile function

    NARCIS (Netherlands)

    Bessems, D.; Rutten, M.C.M.; Vosse, van de F.N.

    2007-01-01

    Lumped-parameter models (zero-dimensional) and wave-propagation models (one-dimensional) for pressure and flow in large vessels, as well as fully three-dimensional fluid–structure interaction models for pressure and velocity, can contribute valuably to answering physiological and patho-physiological

  4. Large-scale solar purchasing

    International Nuclear Information System (INIS)

    1999-01-01

    The principal objective of the project was to participate in the definition of a new IEA task concerning solar procurement ('the Task') and to assess whether involvement in the Task would be in the interest of the UK active solar heating industry. The project also aimed to assess the importance of large-scale solar purchasing to UK active solar heating market development and to evaluate the level of interest in large-scale solar purchasing amongst potential large-scale purchasers (in particular housing associations and housing developers). A further aim of the project was to consider means of stimulating large-scale active solar heating purchasing activity within the UK. (author)

  5. Global models underestimate large decadal declining and rising water storage trends relative to GRACE satellite data

    Science.gov (United States)

    Scanlon, Bridget R.; Zhang, Zizhan; Save, Himanshu; Sun, Alexander Y.; van Beek, Ludovicus P. H.; Wiese, David N.; Reedy, Robert C.; Longuevergne, Laurent; Döll, Petra; Bierkens, Marc F. P.

    2018-01-01

    Assessing reliability of global models is critical because of increasing reliance on these models to address past and projected future climate and human stresses on global water resources. Here, we evaluate model reliability based on a comprehensive comparison of decadal trends (2002–2014) in land water storage from seven global models (WGHM, PCR-GLOBWB, GLDAS NOAH, MOSAIC, VIC, CLM, and CLSM) to trends from three Gravity Recovery and Climate Experiment (GRACE) satellite solutions in 186 river basins (∼60% of global land area). Medians of modeled basin water storage trends greatly underestimate GRACE-derived large decreasing (≤−0.5 km^3/y) and increasing (≥0.5 km^3/y) trends. Decreasing trends from GRACE are mostly related to human use (irrigation) and climate variations, whereas increasing trends reflect climate variations. For example, in the Amazon, GRACE estimates a large increasing trend of ∼43 km^3/y, whereas most models estimate decreasing trends (−71 to 11 km^3/y). Land water storage trends, summed over all basins, are positive for GRACE (∼71–82 km^3/y) but negative for models (−450 to −12 km^3/y), contributing opposing trends to global mean sea level change. Impacts of climate forcing on decadal land water storage trends exceed those of modeled human intervention by about a factor of 2. The model-GRACE comparison highlights potential areas of future model development, particularly simulated water storage. The inability of models to capture large decadal water storage trends based on GRACE indicates that model projections of climate and human-induced water storage changes may be underestimated. PMID:29358394

  6. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.

  8. Modelling of large-scale structures arising under developed turbulent convection in a horizontal fluid layer (with application to the problem of tropical cyclone origination)

    Directory of Open Access Journals (Sweden)

    G. V. Levina

    2000-01-01

    Full Text Available The work is concerned with the results of theoretical and laboratory modelling of the processes of large-scale structure generation under turbulent convection in a rotating horizontal layer of an incompressible fluid with unstable stratification. The theoretical model describes three alternative ways of creating unstable stratification: heating of the layer from below, volumetric heating of a fluid with internal heat sources, and a combination of both factors. The analysis of the model equations shows that under conditions of high intensity of the small-scale convection and a low level of heat loss through the horizontal layer boundaries, a long-wave instability may arise. The condition for the existence of the instability and the criterion identifying the threshold of its initiation have been determined. The principle of action of the discovered instability mechanism is described. Theoretical predictions have been verified by a series of experiments on a laboratory model. The horizontal dimensions of the experimentally obtained long-lived vortices are 4-6 times larger than the thickness of the fluid layer. This work presents a description of the laboratory setup and experimental procedure. From the geophysical viewpoint, the examined mechanism of the long-wave instability is supposed to be adequate to allow a description of the initial step in the evolution of such large-scale vortices as tropical cyclones - a transition from the small-scale cumulus clouds to a state of the atmosphere involving cloud clusters (the stage of the initial tropical perturbation).

  9. Evaluation of sub grid scale and local wall models in Large-eddy simulations of separated flow

    OpenAIRE

    Sam Ali Al; Szasz Robert; Revstedt Johan

    2015-01-01

    The performance of sub-grid scale models is studied by simulating a separated flow over a wavy channel. The first- and second-order statistical moments of the resolved velocities obtained by using large-eddy simulations at different mesh resolutions are compared with direct numerical simulation data. The effectiveness of modeling the wall stresses by using a local log-law is then tested on a relatively coarse grid. The results exhibit a good agreement between highly-resolved Large Eddy Simu...

  10. Lepton number violation in theories with a large number of standard model copies

    International Nuclear Information System (INIS)

    Kovalenko, Sergey; Schmidt, Ivan; Paes, Heinrich

    2011-01-01

    We examine lepton number violation (LNV) in theories with a saturated black hole bound on a large number of species. Such theories have been advocated recently as a possible solution to the hierarchy problem and an explanation of the smallness of neutrino masses. On the other hand, the violation of lepton number can be a potential phenomenological problem of this N-copy extension of the standard model since, due to the low quantum gravity scale, black holes may induce TeV-scale LNV operators generating unacceptably large rates of LNV processes. We show, however, that this issue can be avoided by introducing a spontaneously broken U(1)_{B-L}. Then, due to the existence of a specific compensation mechanism between contributions of different Majorana neutrino states, LNV processes in the standard model copy become extremely suppressed, with rates far beyond experimental reach.

  11. Strong motion modeling at the Paducah Diffusion Facility for a large New Madrid earthquake

    International Nuclear Information System (INIS)

    Herrmann, R.B.

    1991-01-01

    The Paducah Diffusion Facility is within 80 kilometers of the location of the very large New Madrid earthquakes which occurred during the winter of 1811-1812. Because of their size, seismic moment of 2.0 × 10^27 dyne-cm or moment magnitude M_w = 7.5, the possible recurrence of these earthquakes is a major element in the assessment of seismic hazard at the facility. Probabilistic hazard analysis can provide uniform hazard response spectra estimates for structure evaluation, but deterministic modeling of such a large earthquake can provide strong constraints on the expected duration of motion. The large earthquake is modeled by specifying the earthquake fault and its orientation with respect to the site, and by specifying the rupture process. Synthetic time histories, based on forward modeling of the wavefield, from each subelement are combined to yield a three-component time history at the site. Various simulations are performed to sufficiently exercise possible spatial and temporal distributions of energy release on the fault. Preliminary results demonstrate the sensitivity of the method to various assumptions, and also indicate strongly that the total duration of ground motion at the site is controlled primarily by the length of the rupture process on the fault.

  13. Assessing internal exposure in the absence of an appropriate model: two cases involving an incidental inhalation of transuranic elements

    International Nuclear Information System (INIS)

    Blanchin, Nicolas; Fottorino, Robert; Grappin, Louise; Guillermin, Anne-Marie; Lafon, Philippe; Miele, Alain; Berard, Philippe; Blanchardon, Eric

    2008-01-01

    Two incidents involving internal exposure by inhalation of transuranic compounds are presented herein. The results of the measurements of urinary and faecal excretions of the two individuals involved do not concur with the values predicted by the ICRP models that should be applied by default, according to the circumstances of the incidents and the chemical form of the products involved: oxide in the first case and nitrate in the second. These cases are remarkable in the similarity of their biokinetic behaviour even though they occurred in different situations and involved different chemical compounds. Both situations provide an illustration of the management of internal contamination events. The precautions to be taken and the questions that the physician should ask himself in the estimation of the internal dose are listed as follows: a) What type of examinations should be prescribed, and at what frequency? b) What analysis results should be used in assessing the dose? c) How can the effect of the Ca-DTPA treatment be assessed? d) How long is it necessary to perform radiotoxicological exams before assessing the dose? e) What should be done if the ICRP model corresponding to the initial circumstances does not fit the measurement data? Finally, our selected hypotheses, used to explain specific biokinetic behaviour and to estimate its intake in both cases, are detailed. These incidental contaminations suggest that further studies should be carried out to develop a new model for inhalation of transuranic compounds that would follow neither the S nor the M absorption type of the respiratory tract model of ICRP publication 66. (author)

  14. The relationship between large-scale and convective states in the tropics - Towards an improved representation of convection in large-scale models

    Energy Technology Data Exchange (ETDEWEB)

    Jakob, Christian [Monash Univ., Melbourne, VIC (Australia)

    2015-02-26

    This report summarises an investigation into the relationship of tropical thunderstorms to the atmospheric conditions they are embedded in. The study is based on the use of radar observations at the Atmospheric Radiation Measurement site in Darwin run under the auspices of the DOE Atmospheric Systems Research program. Linking the larger scales of the atmosphere with the smaller scales of thunderstorms is crucial for the development of the representation of thunderstorms in weather and climate models, which is carried out by a process termed parametrisation. Through the analysis of radar and wind profiler observations the project made several fundamental discoveries about tropical storms and quantified the relationship of the occurrence and intensity of these storms to the large-scale atmosphere. We were able to show that the rainfall averaged over an area the size of a typical climate model grid-box is largely controlled by the number of storms in the area, and less so by the storm intensity. This allows us to completely rethink the way we represent such storms in climate models. We also found that storms occur in three distinct categories based on their depth and that the transition between these categories is strongly related to the larger scale dynamical features of the atmosphere more so than its thermodynamic state. Finally, we used our observational findings to test and refine a new approach to cumulus parametrisation which relies on the stochastic modelling of the area covered by different convective cloud types.

  15. Parental Involvement in Norwegian Schools

    Science.gov (United States)

    Paulsen, Jan Merok

    2012-01-01

    This article examines findings on key challenges of school-parent relations in Norway. The review is based on recent large-scale studies on several issues, including formalized school-parent cooperation, parental involvement in the pedagogical discourse, and teacher perspectives on the parents' role in the school community. Findings suggest a…

  16. Application of seeding and automatic differentiation in a large scale ocean circulation model

    Directory of Open Access Journals (Sweden)

    Frode Martinsen

    2005-07-01

    Full Text Available Computation of the Jacobian in a 3-dimensional general ocean circulation model is considered in this paper. The Jacobian matrix considered is square, large and sparse. When a large, sparse Jacobian is computed, proper seeding is essential to reduce computational times. This paper presents a manually designed seeding motivated by the Arakawa-C staggered grid, and compares results for this seeding with identity seeding and optimal seeding. Finite differences are computed for reference.
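
    The idea behind seeding is that structurally orthogonal Jacobian columns (columns whose nonzeros never share a row) can be probed together with a single combined perturbation. A generic finite-difference sketch of this compression (ours; the paper's seeding is hand-designed for the Arakawa-C grid and used with automatic differentiation rather than differencing):

    ```python
    # Compressed Jacobian estimation: greedily group structurally orthogonal
    # columns, then spend one extra function evaluation per group instead of
    # one per column.
    import numpy as np

    def compressed_jacobian(f, x, pattern, h=1e-7):
        """pattern: boolean (m, n) Jacobian sparsity pattern."""
        m, n = pattern.shape
        groups, assigned = [], [-1] * n
        for j in range(n):                       # greedy column grouping
            for g, rows in enumerate(groups):
                if not (pattern[:, j] & rows).any():
                    assigned[j], groups[g] = g, rows | pattern[:, j]
                    break
            else:
                assigned[j] = len(groups)
                groups.append(pattern[:, j].copy())
        J, f0 = np.zeros((m, n)), f(x)
        for g in range(len(groups)):
            seed = np.array([float(assigned[j] == g) for j in range(n)])
            df = (f(x + h * seed) - f0) / h      # one evaluation probes the group
            for j in np.flatnonzero(seed):
                J[pattern[:, j], j] = df[pattern[:, j]]
        return J

    f = lambda x: np.array([x[0]**2, x[1]**2, x[2]**2, x[0]*x[2]])
    pattern = np.array([[1,0,0],[0,1,0],[0,0,1],[1,0,1]], dtype=bool)
    print(compressed_jacobian(f, np.array([1.0, 2.0, 3.0]), pattern))
    ```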

  17. Model reduction for the dynamics and control of large structural systems via neural network processing direct numerical optimization

    Science.gov (United States)

    Becus, Georges A.; Chan, Alistair K.

    1993-01-01

    Three neural network processing approaches in a direct numerical optimization model reduction scheme are proposed and investigated. Large structural systems, such as large space structures, offer new challenges to both structural dynamicists and control engineers. One such challenge is that of dimensionality. Indeed these distributed parameter systems can be modeled either by infinite dimensional mathematical models (typically partial differential equations) or by high dimensional discrete models (typically finite element models) often exhibiting thousands of vibrational modes usually closely spaced and with little, if any, damping. Clearly, some form of model reduction is in order, especially for the control engineer who can actively control but a few of the modes using system identification based on a limited number of sensors. Inasmuch as the amount of 'control spillover' (in which the control inputs excite the neglected dynamics) and/or 'observation spillover' (where neglected dynamics affect system identification) is to a large extent determined by the choice of particular reduced model (RM), the way in which this model reduction is carried out is often critical.

  18. A large-scale forest landscape model incorporating multi-scale processes and utilizing forest inventory data

    Science.gov (United States)

    Wen J. Wang; Hong S. He; Martin A. Spetich; Stephen R. Shifley; Frank R. Thompson III; David R. Larsen; Jacob S. Fraser; Jian. Yang

    2013-01-01

    Two challenges confronting forest landscape models (FLMs) are how to simulate fine, stand-scale processes while making large-scale (i.e., >10^7 ha) simulation possible, and how to take advantage of extensive forest inventory data such as U.S. Forest Inventory and Analysis (FIA) data to initialize and constrain model parameters. We present the LANDIS PRO model that...

  19. Understanding dynamics of large-scale atmospheric vortices with moist-convective shallow water model

    International Nuclear Information System (INIS)

    Rostami, M.; Zeitlin, V.

    2016-01-01

    Atmospheric jets and vortices, which, together with inertia-gravity waves, constitute the principal dynamical entities of large-scale atmospheric motions, are well described in the framework of one- or multi-layer rotating shallow water models, which are obtained by vertical averaging of the full “primitive” equations. There is a simple and physically consistent way to include moist convection in these models by adding a relaxational parameterization of precipitation and coupling precipitation to convective fluxes with the help of moist enthalpy conservation. We recall the construction of the moist-convective rotating shallow water (mcRSW) model and give an example of application to upper-layer atmospheric vortices. (paper)
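
    Schematically (our sketch of the model class; exact coefficients and signs vary between formulations), a one-layer mcRSW augments rotating shallow water with a column moisture $Q$ relaxed towards saturation:

    $$ \partial_{t}\mathbf{v} + (\mathbf{v}\cdot\nabla)\mathbf{v} + f\,\hat{\mathbf{z}}\times\mathbf{v} = -g\nabla h, \qquad \partial_{t} h + \nabla\cdot(h\mathbf{v}) = -\beta P, \qquad \partial_{t} Q + \nabla\cdot(Q\mathbf{v}) = -P, $$

    with a relaxational precipitation term $P = \mathcal{H}(Q - Q_{s})\,(Q - Q_{s})/\tau$ ($\mathcal{H}$ the Heaviside function, $Q_{s}$ the saturation value, $\tau$ a relaxation time), and $\beta$ the coefficient coupling precipitation to the convective mass flux, fixed by the moist enthalpy conservation mentioned above.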

  20. Hierarchical and Matrix Structures in a Large Organizational Email Network: Visualization and Modeling Approaches

    OpenAIRE

    Sims, Benjamin H.; Sinitsyn, Nikolai; Eidenbenz, Stephan J.

    2014-01-01

    This paper presents findings from a study of the email network of a large scientific research organization, focusing on methods for visualizing and modeling organizational hierarchies within large, complex network datasets. In the first part of the paper, we find that visualization and interpretation of complex organizational network data is facilitated by integration of network data with information on formal organizational divisions and levels. By aggregating and visualizing email traffic b...

  1. Diagnosis of abdominal abscess: A large animal model

    International Nuclear Information System (INIS)

    Harper, R.A.; Meek, A.C.; Chidlow, A.D.; Galvin, D.A.J.; McCollum, C.N.

    1988-01-01

    In order to evaluate potential isotopic techniques for the diagnosis of occult sepsis, an experimental model in large animals is required. Sponges placed in the abdomen of pigs were injected with mixed colonic bacteria. In 4 animals, Kefzol (500 mg IV) and Metronidazole (1 g PR) were administered before the sponges were inserted, and compared to 4 given no antibiotics. Finally, in 12 pigs, 20 ml of autologous blood was injected into the sponge before antibiotic prophylaxis and bacterial inoculation. 111In-leucocyte scans and post mortem examinations were then performed 2 weeks later. Without antibiotic cover, purulent peritonitis developed in all 4 pigs. Prophylactic antibiotics prevented overwhelming sepsis, but at 2 weeks there was only brown fluid surrounding the sponge. Blood added to the sponge produced abscesses in every animal, confirmed by a leucocytosis of 25.35 × 10^9 cells/L, 111In-leucocyte scanning and post mortem. Culturing the thick yellow pus showed a mixed colony of aerobes and anaerobes, similar to those cultured in clinical practice. An intra-abdominal sponge containing blood and faecal organisms in a pig on prophylactic antibiotics reliably produced a chronic abscess. This model is ideal for studies on alternative methods of abscess diagnosis and radiation dosimetry. (orig.)

  2. Large-scale shell model calculations for the N=126 isotones Po-Pu

    International Nuclear Information System (INIS)

    Caurier, E.; Rejmund, M.; Grawe, H.

    2003-04-01

    Large-scale shell model calculations were performed in the full Z=82-126 proton model space π(0h9/2, 1f7/2, 0i13/2, 2p3/2, 1f5/2, 2p1/2) employing the code NATHAN. The modified Kuo-Herling interaction was used; no truncation was applied up to protactinium (Z=91) and seniority truncation beyond. The results are compared to experimental data including binding energies, level schemes and electromagnetic transition rates. An overall excellent agreement is obtained for states that can be described in this model space. Limitations of the approach with respect to excitations across the Z=82 and N=126 shells and deficiencies of the interaction are discussed. (orig.)

  3. An integrated model for assessing both crop productivity and agricultural water resources at a large scale

    Science.gov (United States)

    Okada, M.; Sakurai, G.; Iizumi, T.; Yokozawa, M.

    2012-12-01

    Agricultural production utilizes regional resources (e.g. river water and groundwater) as well as local resources (e.g. temperature, rainfall, solar energy). Future climate changes and increasing demand due to population growth and economic development will intensively affect the availability of water resources for agricultural production. While many studies have assessed the impacts of climate change on agriculture, few dynamically account for changes in both water resources and crop production. This study proposes an integrated model for assessing both crop productivity and agricultural water resources at a large scale. In addition, irrigation management in response to subseasonal variability in weather and crop response varies between regions and crops; to deal with such variations, we used the Markov chain Monte Carlo technique to quantify region-specific parameters associated with crop growth and irrigation water estimation. We coupled a large-scale crop model (Sakurai et al. 2012) with a global water resources model, H08 (Hanasaki et al. 2008). The integrated model consists of five sub-models for the following processes: land surface, crop growth, river routing, reservoir operation, and anthropogenic water withdrawal. The land surface sub-model is based on a watershed hydrology model, SWAT (Neitsch et al. 2009). Surface and subsurface runoffs simulated by the land surface sub-model are input to the river routing sub-model of the H08 model. A part of the regional water resources available for agriculture, simulated by the H08 model, is input as irrigation water to the land surface sub-model. The timing and amount of irrigation water are simulated at a daily step. The integrated model reproduced the observed streamflow in an individual watershed. Additionally, the model accurately reproduced the trends and interannual variations of crop yields. To demonstrate the usefulness of the integrated model, we compared two types of impact assessment of

  4. Do not Lose Your Students in Large Lectures: A Five-Step Paper-Based Model to Foster Students’ Participation

    Directory of Open Access Journals (Sweden)

    Mona Hassan Aburahma

    2015-07-01

    Full Text Available Like most pharmacy colleges in developing countries with high population growth, public pharmacy colleges in Egypt are experiencing a significant increase in student enrollment annually, due to the large youth population and the keenness of students to join pharmacy colleges as a step to a better future career. In this context, large lectures represent a popular approach for teaching the students, as economic and logistic constraints prevent splitting them into smaller groups. Nevertheless, the impact of large lectures on student learning has been widely questioned because of their educational limitations, which are related to the passive role the students maintain in lectures. Despite the reported weaknesses of large lectures and lecturing in general, large lectures will likely continue to be taught in the same format in these countries. Accordingly, to soften the negative impacts of large lectures, this article describes a simple and feasible five-step paper-based model to transform lectures from a passive information delivery space into an active learning environment. This model mainly suits educational establishments with financial constraints; nevertheless, it can be applied to lectures presented in any educational environment to improve the active participation of students. The components and the expected advantages of employing the five-step paper-based model in large lectures, as well as its limitations and ways to overcome them, are presented briefly. The impact of applying this model on students’ engagement and learning is currently being investigated.

  5. Model-based diagnosis of large diesel engines based on angular speed variations of the crankshaft

    Science.gov (United States)

    Desbazeille, M.; Randall, R. B.; Guillet, F.; El Badaoui, M.; Hoisnard, C.

    2010-07-01

    This work aims at monitoring large diesel engines by analyzing the crankshaft angular speed variations. It focuses on a powerful 20-cylinder diesel engine with crankshaft natural frequencies within the operating speed range. First, the angular speed variations are modeled at the crankshaft free end. This includes modeling both the crankshaft dynamical behavior and the excitation torques. As the engine is very large, the first crankshaft torsional modes are in the low frequency range. A model with the assumption of a flexible crankshaft is required. The excitation torques depend on the in-cylinder pressure curve. The latter is modeled with a phenomenological model. Mechanical and combustion parameters of the model are optimized with the help of actual data. Then, an automated diagnosis based on an artificially intelligent system is proposed. Neural networks are used for pattern recognition of the angular speed waveforms in normal and faulty conditions. Reference patterns required in the training phase are computed with the model, calibrated using a small number of actual measurements. Promising results are obtained. An experimental fuel leakage fault is successfully diagnosed, including detection and localization of the faulty cylinder, as well as the approximation of the fault severity.

  6. Mechanical strength model for plastic bonded granular materials at high strain rates and large strains

    International Nuclear Information System (INIS)

    Browning, R.V.; Scammon, R.J.

    1998-01-01

    Modeling impact events on systems containing plastic bonded explosive materials requires accurate models for stress evolution at high strain rates out to large strains. For example, in the Steven test geometry reactions occur after strains of 0.5 or more are reached for PBX-9501. The morphology of this class of materials and properties of the constituents are briefly described. We then review the viscoelastic behavior observed at small strains for this class of material, and evaluate large strain models used for granular materials such as cap models. Dilatation under shearing deformations of the PBX is experimentally observed and is one of the key features modeled in cap style plasticity theories, together with bulk plastic flow at high pressures. We propose a model that combines viscoelastic behavior at small strains but adds intergranular stresses at larger strains. A procedure using numerical simulations and comparisons with results from flyer plate tests and low rate uniaxial stress tests is used to develop a rough set of constants for PBX-9501. Comparisons with the high rate flyer plate tests demonstrate that the observed characteristic behavior is captured by this viscoelastic based model. copyright 1998 American Institute of Physics

  7. Exploratory studies into seasonal flow forecasting potential for large lakes

    Science.gov (United States)

    Sene, Kevin; Tych, Wlodek; Beven, Keith

    2018-01-01

    In seasonal flow forecasting applications, one factor that can help predictability is a significant hydrological response time between rainfall and flows. On account of their storage influences, large lakes therefore provide a useful test case, although, due to the spatial scales involved, there are a number of modelling challenges related to data availability and to understanding the individual components of the water balance. Here, some possible model structures are investigated using a range of stochastic regression and transfer function techniques, with additional insights gained from simple analytical approximations. The methods were evaluated using records for two of the largest lakes in the world, Lake Malawi and Lake Victoria, with forecast skill demonstrated several months ahead using water balance models formulated in terms of net inflows. In both cases, including climate indices in the data assimilation component gave slight improvements for lead times up to 4-5 months. The paper concludes with a discussion of the relevance of the results to operational flow forecasting systems for other large lakes.
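
    The net-inflow formulation can be sketched as a monthly water balance with a simple persistence model for the net inflow; all names and numbers below are illustrative assumptions, not the fitted transfer-function models of the study.

        # Monthly water balance for a large lake, in terms of the net inflow q
        # (rain + inflow - evaporation - outflow, as a level equivalent):
        #     level[t+1] = level[t] + q[t]
        # Net inflow follows an AR(1) process, so its persistence carries
        # forecast skill several months ahead.
        phi, mu = 0.8, 0.0   # AR(1) persistence and mean net inflow (m/month)

        def forecast_levels(level, q, n_months):
            out = []
            for _ in range(n_months):
                q = mu + phi * (q - mu)   # expected net inflow (noise-free)
                level += q
                out.append(round(level, 3))
            return out

        # Start from a level of 474.2 m and a recent net inflow of 0.12 m/month.
        print(forecast_levels(474.2, 0.12, n_months=5))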

  8. Anisotropic modeling and joint-MAP stitching for improved ultrasound model-based iterative reconstruction of large and thick specimens

    Energy Technology Data Exchange (ETDEWEB)

    Almansouri, Hani [Purdue University; Venkatakrishnan, Singanallur V. [ORNL; Clayton, Dwight A. [ORNL; Polsky, Yarom [ORNL; Bouman, Charles [Purdue University; Santos-Villalobos, Hector J. [ORNL

    2018-04-01

    One-sided non-destructive evaluation (NDE) is widely used to inspect materials, such as concrete structures in nuclear power plants (NPP). A widely used method for one-sided NDE is the synthetic aperture focusing technique (SAFT). The SAFT algorithm produces reasonable results when inspecting simple structures. However, for complex structures, such as heavily reinforced thick concrete structures, SAFT produces artifacts, and hence there is a need for a more sophisticated inversion technique. Model-based iterative reconstruction (MBIR) algorithms, which are typically equivalent to regularized inversion techniques, offer a powerful framework to incorporate complex models for the physics, detector miscalibrations, and the materials being imaged in order to obtain high-quality reconstructions. Previously, we proposed an ultrasonic MBIR method that significantly improves reconstruction quality compared to SAFT. However, that method made some simplifying assumptions in the propagation model and did not discuss ways to handle data obtained by raster scanning a system over a surface to inspect large regions. In this paper, we propose a novel MBIR algorithm that incorporates an anisotropic forward model and allows for the joint processing of data obtained from a system that raster scans a large surface. We demonstrate that the new MBIR method can produce dramatic improvements in reconstruction quality compared to SAFT and suppresses artifacts compared to the previously presented MBIR approach.
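
    In spirit, MBIR solves a regularized inverse problem. The toy sketch below applies the same idea to a generic linear forward model; the paper's actual forward model is an anisotropic ultrasound propagation model, which is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy linear forward model y = A x + noise standing in for the
        # ultrasound physics. MBIR-style reconstruction minimizes
        #     ||y - A x||^2 + lam * ||D x||^2    (data fit + smoothness prior)
        n = 50
        A = rng.standard_normal((40, n))     # underdetermined: 40 measurements
        x_true = np.zeros(n)
        x_true[20:30] = 1.0                  # a simple "defect" profile
        y = A @ x_true + 0.05 * rng.standard_normal(40)

        D = np.eye(n) - np.eye(n, k=1)       # first-difference operator
        lam = 1.0
        # Solve the normal equations of the regularized least-squares problem.
        x_hat = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)
        print(np.round(x_hat[18:32], 2))     # recovers the bump at 20..29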

  9. A Method to Quantify Plant Availability and Initiating Event Frequency Using a Large Event Tree, Small Fault Tree Model

    International Nuclear Information System (INIS)

    Kee, Ernest J.; Sun, Alice; Rodgers, Shawn; Popova, Elmira V.; Nelson, Paul; Moiseytseva, Vera; Wang, Eric

    2006-01-01

    South Texas Project uses a large fault tree to produce the scenarios (minimal cut sets) used in quantifying plant availability and event frequency predictions. On the other hand, the South Texas Project probabilistic risk assessment model uses a large event tree, small fault tree model for quantifying core damage and radioactive release frequency predictions. The South Texas Project is converting its availability and event frequency model to a large event tree, small fault tree model in an effort to streamline application support and to provide additional detail in results. The availability and event frequency model, as well as the applications it supports (maintenance and operational risk management, system engineering health assessment, preventive maintenance optimization, and RIAM), are briefly described. A methodology to perform availability modeling in a large event tree, small fault tree framework is described in detail, as is how the methodology can be used to support South Texas Project maintenance and operations risk management. Differences from other fault tree methods and other recently proposed methods are discussed in detail. While the methods described are novel to the South Texas Project Risk Management program and to large event tree, small fault tree models, the concepts in the areas of application support and availability modeling have wider applicability to the industry. (authors)
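
    The quantification idea can be illustrated on a toy example with invented probabilities: small fault trees supply branch-point failure probabilities via minimal cut sets, and an event-tree sequence combines them with the initiating event frequency.

        from math import prod

        def top_event_prob(cut_sets, p):
            # Rare-event approximation: P(top) ~ sum over minimal cut sets of
            # the product of the basic-event probabilities in each cut set.
            return sum(prod(p[e] for e in cs) for cs in cut_sets)

        p = {"pump_a": 1e-3, "pump_b": 1e-3, "valve": 5e-4}   # illustrative
        p_injection_fails = top_event_prob([{"pump_a", "pump_b"}], p)
        p_cooling_fails = top_event_prob([{"valve"}], p)

        # One event-tree sequence: initiator AND injection fails AND cooling
        # fails. (A real linked model must track basic events shared between
        # branch points; plain multiplication assumes disjoint fault trees.)
        initiating_event_freq = 0.1    # per year, illustrative
        seq_freq = initiating_event_freq * p_injection_fails * p_cooling_fails
        print(p_injection_fails, p_cooling_fails, seq_freq)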

  10. Large Eddy simulation of turbulence: A subgrid scale model including shear, vorticity, rotation, and buoyancy

    Science.gov (United States)

    Canuto, V. M.

    1994-01-01

    The Reynolds numbers that characterize geophysical and astrophysical turbulence (Re ≈ 10^8 for the planetary boundary layer and Re ≈ 10^14 for the Sun's interior) are too large to allow a direct numerical simulation (DNS) of the fundamental Navier-Stokes and temperature equations. In fact, the required number of spatial grid points, N ~ Re^(9/4), exceeds the computational capability of today's supercomputers. Alternative treatments are the ensemble-time average approach and/or the volume average approach. Since the first method (Reynolds stress approach) is largely analytical, the resulting turbulence equations entail manageable computational requirements and can thus be linked to a stellar evolutionary code or, in the geophysical case, to general circulation models. In the volume average approach, one carries out a large eddy simulation (LES) which resolves numerically the largest scales, while the unresolved scales must be treated theoretically with a subgrid scale model (SGS). Contrary to the ensemble average approach, the LES+SGS approach has considerable computational requirements. Even if this prevents (for the time being) a LES+SGS model from being linked to stellar or geophysical codes, it is still of the greatest relevance as an 'experimental tool' to be used, inter alia, to improve the parameterizations needed in the ensemble average approach. Such a methodology has been successfully adopted in studies of the convective planetary boundary layer. Experience with the LES+SGS approach from different fields has shown that its reliability depends on the healthiness of the SGS model for numerical stability as well as for physical completeness. At present, the most widely used SGS model, the Smagorinsky model, accounts for the effect of the shear induced by the large resolved scales on the unresolved scales but does not account for the effects of buoyancy, anisotropy, rotation, and stable stratification.
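
    For reference, the Smagorinsky model closes the subgrid stresses with an eddy viscosity built from the resolved strain rate, nu_t = (C_s * Delta)^2 * |S|. A minimal 2-D sketch on a uniform grid follows; the velocity field and constant are illustrative.

        import numpy as np

        def smagorinsky_viscosity(u, v, dx, cs=0.17):
            # nu_t = (cs * dx)^2 * |S|, with |S| = sqrt(2 S_ij S_ij) built
            # from the resolved 2-D velocity field (u, v) on a uniform grid.
            dudy, dudx = np.gradient(u, dx, dx)   # axis 0 = y, axis 1 = x
            dvdy, dvdx = np.gradient(v, dx, dx)
            s11, s22 = dudx, dvdy
            s12 = 0.5 * (dudy + dvdx)
            s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
            return (cs * dx) ** 2 * s_mag

        x = np.linspace(0.0, 2.0 * np.pi, 64)
        X, Y = np.meshgrid(x, x)
        u = np.sin(X) * np.cos(Y)     # Taylor-Green-like resolved field
        v = -np.cos(X) * np.sin(Y)
        nu_t = smagorinsky_viscosity(u, v, dx=x[1] - x[0])
        print(nu_t.mean())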

  11. Mathematical modelling and optimization of a large-scale combined cooling, heat, and power system that incorporates unit changeover and time-of-use electricity price

    International Nuclear Information System (INIS)

    Zhu, Qiannan; Luo, Xianglong; Zhang, Bingjian; Chen, Ying

    2017-01-01

    Highlights: • We propose a novel superstructure for the design and optimization of LSCCHP. • A multi-objective multi-period MINLP model is formulated. • The unit start-up cost and time-of-use electricity prices are involved. • A unit size discretization strategy is proposed to linearize the original MINLP model. • A case study is elaborated to demonstrate the effectiveness of the proposed method. - Abstract: Building energy systems, particularly large public ones, are major energy consumers and pollutant emission contributors. In this study, a superstructure of a large-scale combined cooling, heat, and power system is constructed, and the off-design unit, economic cost, and CO_2 emission models are formulated. Moreover, a multi-objective mixed integer nonlinear programming model is formulated for the simultaneous system synthesis, technology selection, unit sizing, and operation optimization of the large-scale combined cooling, heat, and power system. Time-of-use electricity prices and unit changeover costs are incorporated into the problem model. The economic objective is to minimize the total annual cost, which comprises the operation and investment costs of the system. The environmental objective is to minimize the annual global CO_2 emission of the system. The augmented ε-constraint method is applied to obtain the Pareto frontier of the design configuration, reflecting the set of solutions that represent optimal trade-offs between the economic and environmental objectives. A sensitivity analysis is conducted to assess the impact of the natural gas price on the combined cooling, heat, and power system. The synthesis and design of a combined cooling, heat, and power system for an airport in China is studied to test the proposed methodology. The Pareto curve of the multi-objective optimization shows that the total annual cost varies from 102.53 to 94.59 M
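
    The core of the ε-constraint method (shown here without the augmentation terms) can be sketched on a toy two-objective dispatch problem: minimize cost while sweeping an upper bound on emissions. The units, costs, and emission factors below are invented for illustration.

        import numpy as np
        from scipy.optimize import linprog

        # Toy dispatch: x = [cheap_dirty_unit, costly_clean_unit], serving a
        # load of 100 energy units.
        cost = np.array([40.0, 60.0])      # $/unit
        emis = np.array([0.6, 0.3])        # tCO2/unit

        pareto = []
        for eps in np.linspace(30.0, 60.0, 7):      # sweep the emission cap
            res = linprog(c=cost,
                          A_ub=[emis], b_ub=[eps],  # emissions <= eps
                          A_eq=[[1.0, 1.0]], b_eq=[100.0],
                          bounds=[(0, None), (0, None)])
            if res.success:
                pareto.append((round(res.fun, 1), round(float(emis @ res.x), 1)))
        print(pareto)   # (cost, emission) pairs tracing the trade-off frontier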

  12. Modeling of large-scale oxy-fuel combustion processes

    DEFF Research Database (Denmark)

    Yin, Chungen

    2012-01-01

    Numerous studies have been conducted in order to implement oxy-fuel combustion with flue gas recycle in conventional utility boilers as an effective effort toward carbon capture and storage. However, combustion under oxy-fuel conditions differs significantly from conventional air-fuel firing......, among which radiative heat transfer under oxy-fuel conditions is one of the fundamental issues. This paper demonstrates the nongray-gas effects in modeling of large-scale oxy-fuel combustion processes. Oxy-fuel combustion of natural gas in a 609 MW utility boiler is numerically studied, in which...... calculation of the oxy-fuel WSGGM remarkably over-predicts the radiative heat transfer to the furnace walls and under-predicts the gas temperature at the furnace exit plane, which also results in a higher degree of incomplete combustion in the gray calculation. Moreover, the gray and non-gray calculations of the same...
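
    A weighted-sum-of-gray-gases model (WSGGM) represents the gas by a few gray gases plus one transparent gas. The sketch below shows the generic total-emissivity sum; the coefficients are placeholders, not a fitted oxy-fuel set.

        import math

        def wsgg_emissivity(pL, a, k):
            # Total emissivity as a weighted sum of gray gases:
            #     eps = sum_i a_i * (1 - exp(-k_i * pL))
            # pL is the pressure path length (atm*m); the weights a_i sum to
            # less than 1, the remainder being the transparent gas.
            return sum(ai * (1.0 - math.exp(-ki * pL)) for ai, ki in zip(a, k))

        # Placeholder 3-gray-gas coefficients; a real oxy-fuel WSGGM fits a_i
        # and k_i (often temperature-dependent) to spectral data for the
        # high-CO2/H2O atmospheres of recycled flue gas.
        a = [0.4, 0.3, 0.2]
        k = [0.05, 0.5, 5.0]    # 1/(atm*m)
        print(wsgg_emissivity(pL=3.0, a=a, k=k))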

  13. Swirling flow in model of large two-stroke diesel engine

    DEFF Research Database (Denmark)

    Ingvorsen, Kristian Mark; Meyer, Knud Erik; Schnipper, Teis

    2012-01-01

    A scale model of a simplified cylinder in a uniflow scavenged large two-stroke marine diesel engine is constructed to investigate the scavenging process. Angled ports near the bottom of the cylinder liner are uncovered as the piston reaches the bottom dead center. Fresh air enters through the ports...... forcing the gas in the cylinder to leave through an exhaust valve located in the cylinder head. The scavenging flow is a transient (opening/closing ports) confined port-generated turbulent swirl flow, with complex phenomena such as central recirculation zones, vortex breakdown and vortex precession...

  14. Involvement of spinal orexin A in the electroacupuncture analgesia in a rat model of post-laparotomy pain

    Directory of Open Access Journals (Sweden)

    Feng Xiao-Ming

    2012-11-01

    Full Text Available Abstract Background Orexin A (OXA, hypocretin/hcrt 1) is a newly discovered potential analgesic substance. However, whether OXA is involved in acupuncture analgesia remains unknown. The present study was designed to investigate the involvement of spinal OXA in electroacupuncture (EA) analgesia. Methods A modified rat model of post-laparotomy pain was adopted and evaluated. Von Frey filaments were used to measure mechanical allodynia of the hind paw and abdomen. EA at 2/15 Hz or 2/100 Hz was performed once on the bilateral ST36 and SP6 for 30 min perioperatively. SB-334867, a selective orexin 1 receptor (OX1R) antagonist with a higher affinity for OXA than OXB, was intrathecally injected to observe its effect on EA analgesia. Results OXA at 0.3 nmol and EA at 2/15 Hz produced respective analgesic effects on the model (P<0.05). In addition, naloxone, a selective opioid receptor antagonist, failed to antagonize OXA-induced analgesia (P>0.05). Conclusions The results of the present study indicate the involvement of OXA in EA analgesia via OX1R in an opioid-independent way.

  15. Competition Between Two Large-Amplitude Motion Models: New Hybrid Hamiltonian Versus Old Pure-Tunneling Hamiltonian

    Science.gov (United States)

    Kleiner, Isabelle; Hougen, Jon T.

    2017-06-01

    In this talk we report on our progress in trying to make the hybrid Hamiltonian competitive with the pure-tunneling Hamiltonian for treating large-amplitude motions in methylamine. A treatment using the pure-tunneling model has the advantages of: (i) requiring relatively little computer time, (ii) working with relatively uncorrelated fitting parameters, and (iii) yielding, in the vast majority of cases, fits to experimental measurement accuracy. These advantages are all illustrated in the work published this past year on a gigantic v_{t} = 1 data set for the torsional fundamental band in methylamine. A treatment using the hybrid model has the advantages of: (i) being able to carry out a global fit involving both v_{t} = 0 and v_{t} = 1 energy levels and (ii) working with fitting parameters that have a clearer physical interpretation. Unfortunately, a treatment using the hybrid model has the great disadvantage of requiring a highly correlated set of fitting parameters to achieve reasonable fitting accuracy, which complicates the search for a good set of molecular fitting parameters and a fit to experimental accuracy. At the time of writing this abstract, we have been able to carry out a fit with J up to 15 that includes all available infrared data in the v_{t} = 1-0 torsional fundamental band, all ground-state microwave data with K up to 10 and J up to 15, and about a hundred microwave lines within the v_{t} = 1 torsional state, achieving weighted root-mean-square (rms) deviations of about 1.4, 2.8, and 4.2 for these three categories of data. We will give an update of this situation at the meeting. I. Gulaczyk, M. Kreglewski, V.-M. Horneman, J. Mol. Spectrosc., in press (2017).
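
    The weighted rms deviations quoted are dimensionless fit-quality measures (residuals scaled by measurement uncertainty, so a value near 1 means a fit to experimental accuracy). A minimal sketch of the computation, with placeholder data:

        import numpy as np

        def weighted_rms(obs, calc, unc):
            # rms of (observed - calculated) residuals, each expressed in
            # units of its measurement uncertainty.
            r = (np.asarray(obs) - np.asarray(calc)) / np.asarray(unc)
            return float(np.sqrt(np.mean(r**2)))

        # Placeholder line positions in MHz with assumed 50 kHz uncertainties.
        obs = [24959.1, 25123.7, 26011.4]
        calc = [24959.2, 25123.5, 26011.5]
        print(weighted_rms(obs, calc, unc=[0.05, 0.05, 0.05]))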

  16. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations.

    Science.gov (United States)

    Hu, Eric Y; Bouteiller, Jean-Marie C; Song, Dong; Baudry, Michel; Berger, Theodore W

    2015-01-01

    Chemical synapses comprise a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, such representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model, and it captures the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model successfully tracks the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model can efficiently replicate the complex nonlinear dynamics represented in the original mechanistic model, and they provide a method for replicating complex and diverse synaptic transmission within neuron network simulations.
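
    A discrete Volterra expansion truncated at second order conveys the IO idea; the kernels below are arbitrary placeholders, not the fitted synapse kernels of the paper.

        import numpy as np

        def volterra_response(x, k0, k1, k2):
            # y[n] = k0 + sum_i k1[i] x[n-i]
            #           + sum_{i,j} k2[i,j] x[n-i] x[n-j]   (2nd-order cutoff)
            m = len(k1)
            y = np.full(len(x), k0, dtype=float)
            xp = np.concatenate([np.zeros(m - 1), x])   # zero-pad the past
            for n in range(len(x)):
                window = xp[n:n + m][::-1]   # x[n], x[n-1], ..., x[n-m+1]
                y[n] += k1 @ window + window @ k2 @ window
            return y

        m = 8
        k1 = np.exp(-np.arange(m) / 2.0)          # decaying linear kernel
        k2 = -0.05 * np.outer(k1, k1)             # mild 2nd-order depression
        spikes = np.zeros(50)
        spikes[[5, 10, 12, 14]] = 1.0             # an input spike train
        print(volterra_response(spikes, k0=0.0, k1=k1, k2=k2)[:20].round(3))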

  17. SWAT meta-modeling as support of the management scenario analysis in large watersheds.

    Science.gov (United States)

    Azzellino, A; Çevirgen, S; Giupponi, C; Parati, P; Ragusa, F; Salvetti, R

    2015-01-01

    In the last two decades, numerous models and modeling techniques have been developed to simulate nonpoint source pollution effects. Most models simulate the hydrological, chemical, and physical processes involved in the entrainment and transport of sediment, nutrients, and pesticides. Very often these models require a distributed modeling approach and are limited in scope by the requirement of homogeneity and by the need to manipulate extensive data sets. Physically based models are extensively used in this field as decision support for managing nonpoint source emissions. A common characteristic of this type of model is the demanding input of several state variables, which makes calibration difficult and increases the effort and cost of implementing any simulation scenario. In this study, the USDA Soil and Water Assessment Tool (SWAT) was used to model the Venice Lagoon Watershed (VLW), Northern Italy. A Multi-Layer Perceptron (MLP) network was trained on SWAT simulations and used as a meta-model for scenario analysis. The MLP meta-model was successfully trained and showed an overall accuracy higher than 70% on both the training and the evaluation sets, allowing a significant simplification in conducting scenario analysis.
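
    The meta-modeling step, learning a cheap surrogate of an expensive simulator, can be sketched with scikit-learn; the analytic toy function below merely stands in for actual SWAT runs, and the input names are invented.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        # Stand-in for expensive SWAT runs: nutrient load as a function of
        # two scenario inputs (e.g. fertilizer rate, buffer-strip fraction).
        X = rng.uniform(0.0, 1.0, size=(500, 2))
        y = 3.0 * X[:, 0] - 2.0 * X[:, 0] * X[:, 1] \
            + 0.05 * rng.standard_normal(500)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        meta = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                            random_state=0)
        meta.fit(X_tr, y_tr)
        print("R^2 on held-out scenarios:", round(meta.score(X_te, y_te), 3))

        # Scenario analysis is then nearly free: query the surrogate, not SWAT.
        print(meta.predict([[0.8, 0.2], [0.8, 0.9]]))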

  18. Nonlinear behavior of stimulated scatter in large underdense plasmas

    International Nuclear Information System (INIS)

    Kruer, W.L.; Estabrook, K.G.

    1979-01-01

    Several nonlinear effects which limit Brillouin and Raman scatter of intense light in large underdense plasmas are examined. After briefly considering ion trapping and harmonic generation, we focus on the self-consistent ion heating which occurs as an integral part of the Brillouin scattering process. In the long-term nonlinear state, the ion wave amplitude is determined by damping on the heated ion tail which self-consistently forms. A simple model of the scatter is presented and compared with particle simulations. A similar model is also applied to Raman scatter and compared with simulations. Our calculations emphasize that modest tails on the electron distribution function can significantly limit instabilities involving electron plasma waves

  19. Large mixing of light and heavy neutrinos in seesaw models and the LHC

    International Nuclear Information System (INIS)

    He Xiaogang; Oh, Sechul; Tandean, Jusak; Wen, C.-C.

    2009-01-01

    In the type-I seesaw model with only one generation of neutrinos, the size of the mixing between the light and heavy neutrinos, ν and N respectively, is of order the square root of their mass ratio, (m_ν/m_N)^(1/2). Since the light-neutrino mass must be less than an eV or so, the mixing would be very small, even for a heavy-neutrino mass of order a few hundred GeV. This would make it unlikely to test the model directly at the LHC, as the amplitude for producing the heavy neutrino is proportional to the mixing size. However, it has been realized for some time that, with more than one generation of light and heavy neutrinos, the mixing can be significantly larger in certain situations. In this paper we explore this possibility further and consider specific examples in detail in the context of the type-I seesaw. We study its implications for the single production of the heavy neutrinos at the LHC via the main channel qq' → W* → lN involving an ordinary charged lepton l. We then extend the discussion to the type-III seesaw model, which has richer phenomenology due to the presence of the charged partners of the heavy neutrinos, and examine the implications for the single production of these heavy leptons at the LHC. In the latter model, the new kinds of solutions that we find also make it possible to have sizable flavor-changing neutral-current effects in processes involving ordinary charged leptons.
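
    The quoted one-generation scaling is easy to evaluate numerically; a back-of-the-envelope check with illustrative masses:

        # One-generation type-I seesaw: light-heavy mixing ~ sqrt(m_nu / m_N).
        m_nu = 0.1e-9    # light-neutrino mass in GeV (i.e. 0.1 eV)
        m_N = 200.0      # heavy-neutrino mass in GeV (illustrative)
        mixing = (m_nu / m_N) ** 0.5
        print(f"mixing ~ {mixing:.1e}")   # ~7e-7: far too small for direct
                                          # heavy-neutrino production at the LHC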

  20. Computing in Large-Scale Dynamic Systems

    NARCIS (Netherlands)

    Pruteanu, A.S.

    2013-01-01

    Software applications developed for large-scale systems have always been difficult to develop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topology discovery and data