WorldWideScience

Sample records for models large group

  1. Two-group modeling of interfacial area transport in large diameter channels

    Energy Technology Data Exchange (ETDEWEB)

    Schlegel, J.P., E-mail: schlegelj@mst.edu [Department of Mining and Nuclear Engineering, Missouri University of Science and Technology, 301 W 14th St., Rolla, MO 65409 (United States); Hibiki, T.; Ishii, M. [School of Nuclear Engineering, Purdue University, 400 Central Dr., West Lafayette, IN 47907 (United States)

    2015-11-15

Highlights: • Implemented updated constitutive models and benchmarking method for IATE in large pipes. • New model and method with new data improved the overall IATE prediction for large pipes. • The fact that not all conditions are well predicted shows that further development is still required. - Abstract: A comparison of the existing two-group interfacial area transport equation source and sink terms for large diameter channels with recently collected interfacial area concentration measurements (Schlegel et al., 2012, 2014. Int. J. Heat Fluid Flow 47, 42) has indicated that the model does not perform well in predicting interfacial area transport outside of the range of flow conditions used in the original benchmarking effort. In order to reduce the error in the prediction of interfacial area concentration by the interfacial area transport equation, several constitutive relations have been updated, including the turbulence model and the relative velocity correlation. The transport equation utilizing these updated models has been modified by updating the inter-group transfer and Group 2 coalescence and disintegration kernels using an expanded range of experimental conditions extending to pipe sizes of 0.304 m [12 in.], gas velocities of up to nearly 11 m/s [36.1 ft/s] and liquid velocities of up to 2 m/s [6.56 ft/s], as well as conditions with both bubbly flow and cap-bubbly flow injection (Schlegel et al., 2012, 2014). The modifications to the transport equation have resulted in a decrease in the RMS error for void fraction and interfacial area concentration from 17.32% to 12.3% and from 21.26% to 19.6%, respectively. The combined RMS error, for both void fraction and interfacial area concentration, is below 15% for most of the experiments used in the comparison, a distinct improvement over the previous version of the model.
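
The abstract reports RMS errors as percentages but does not spell out the error definition; a common choice in benchmarking studies of this kind is the RMS of pointwise relative errors, sketched below with invented void-fraction numbers (not the study's data).

```python
import math

def rms_relative_error(predicted, measured):
    """RMS of pointwise relative errors, in percent.

    An assumed (common) form of the benchmarking metric; the paper's
    exact definition is not given in the abstract.
    """
    terms = [((p - m) / m) ** 2 for p, m in zip(predicted, measured)]
    return 100.0 * math.sqrt(sum(terms) / len(terms))

# Hypothetical measured vs. predicted void fractions (illustrative only):
measured = [0.10, 0.25, 0.40, 0.55]
predicted = [0.11, 0.24, 0.43, 0.52]
print(round(rms_relative_error(predicted, measured), 2))
```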

  2. Memory Transmission in Small Groups and Large Networks: An Agent-Based Model.

    Science.gov (United States)

    Luhmann, Christian C; Rajaram, Suparna

    2015-12-01

    The spread of social influence in large social networks has long been an interest of social scientists. In the domain of memory, collaborative memory experiments have illuminated cognitive mechanisms that allow information to be transmitted between interacting individuals, but these experiments have focused on small-scale social contexts. In the current study, we took a computational approach, circumventing the practical constraints of laboratory paradigms and providing novel results at scales unreachable by laboratory methodologies. Our model embodied theoretical knowledge derived from small-group experiments and replicated foundational results regarding collaborative inhibition and memory convergence in small groups. Ultimately, we investigated large-scale, realistic social networks and found that agents are influenced by the agents with which they interact, but we also found that agents are influenced by nonneighbors (i.e., the neighbors of their neighbors). The similarity between these results and the reports of behavioral transmission in large networks offers a major theoretical insight by linking behavioral transmission to the spread of information. © The Author(s) 2015.
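
A minimal sketch of the kind of agent-based memory-transmission model the abstract describes: agents on a network repeatedly exchange memory items, and memories converge even between non-neighbors. The ring network, pool of items, and number of rounds are illustrative choices, not the authors' parameters.

```python
import random

random.seed(1)

# Illustrative parameters, not the authors':
N_AGENTS, ROUNDS = 12, 200
# Ring network: each agent's neighbors are the two adjacent agents.
neighbors = {i: [(i - 1) % N_AGENTS, (i + 1) % N_AGENTS] for i in range(N_AGENTS)}
# Each agent starts with three unique memory items (fully disjoint memories).
memories = [set(range(3 * i, 3 * i + 3)) for i in range(N_AGENTS)]

def mean_similarity(mems):
    """Mean pairwise Jaccard similarity of memories across all agent pairs."""
    sims = []
    for i in range(len(mems)):
        for j in range(i + 1, len(mems)):
            sims.append(len(mems[i] & mems[j]) / len(mems[i] | mems[j]))
    return sum(sims) / len(sims)

before = mean_similarity(memories)
for _ in range(ROUNDS):
    speaker = random.randrange(N_AGENTS)
    listener = random.choice(neighbors[speaker])
    # The listener encodes one item retrieved by the speaker.
    memories[listener].add(random.choice(sorted(memories[speaker])))
after = mean_similarity(memories)

# Repeated local interactions make memories converge; items also reach
# non-neighbors by travelling through intermediate agents.
print(before, after)
```

Even with purely local interactions, items propagate along the ring, which is the mechanism behind the non-neighbor influence the study reports.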

  3. Quantitative Modeling of Membrane Transport and Anisogamy by Small Groups Within a Large-Enrollment Organismal Biology Course

    Directory of Open Access Journals (Sweden)

    Eric S. Haag

    2016-12-01

Full Text Available Quantitative modeling is not a standard part of undergraduate biology education, yet is routine in the physical sciences. Because of the obvious biophysical aspects, classes in anatomy and physiology offer an opportunity to introduce modeling approaches to the introductory curriculum. Here, we describe two in-class exercises for small groups working within a large-enrollment introductory course in organismal biology. Both build and derive biological insights from quantitative models, implemented using spreadsheets. One exercise models the evolution of anisogamy (i.e., small sperm and large eggs) from an initial state of isogamy. Groups of four students work on Excel spreadsheets (from one to four laptops per group). The other exercise uses an online simulator to generate data related to membrane transport of a solute, and a cloud-based spreadsheet to analyze them. We provide tips for implementing these exercises gleaned from two years of experience.

  4. Effects of core models and neutron energy group structures on xenon oscillation in large graphite-moderated reactors

    International Nuclear Information System (INIS)

    Yamasita, Kiyonobu; Harada, Hiroo; Murata, Isao; Shindo, Ryuichi; Tsuruoka, Takuya.

    1993-01-01

    Xenon oscillations of large graphite-moderated reactors have been analyzed by a multi-group diffusion code with two- and three-dimensional core models to study the effects of the geometric core models and the neutron energy group structures on the evaluation of the Xe oscillation behavior. The study clarified the following. It is important for accurate Xe oscillation simulations to use the neutron energy group structure that describes well the large change in the absorption cross section of Xe in the thermal energy range of 0.1∼0.65 eV, because the energy structure in this energy range has significant influences on the amplitude and the period of oscillations in power distributions. Two-dimensional R-Z models can be used instead of three-dimensional R-θ-Z models for evaluation of the threshold power of Xe oscillation, but two-dimensional R-θ models cannot be used for evaluation of the threshold power. Although the threshold power evaluated with the R-θ-Z models coincides with that of the R-Z models, it does not coincide with that of the R-θ models. (author)

  5. LLNL Chemical Kinetics Modeling Group

    Energy Technology Data Exchange (ETDEWEB)

    Pitz, W J; Westbrook, C K; Mehl, M; Herbinet, O; Curran, H J; Silke, E J

    2008-09-24

    The LLNL chemical kinetics modeling group has been responsible for much progress in the development of chemical kinetic models for practical fuels. The group began its work in the early 1970s, developing chemical kinetic models for methane, ethane, ethanol and halogenated inhibitors. Most recently, it has been developing chemical kinetic models for large n-alkanes, cycloalkanes, hexenes, and large methyl esters. These component models are needed to represent gasoline, diesel, jet, and oil-sand-derived fuels.

  6. Large-group psychodynamics and massive violence

    Directory of Open Access Journals (Sweden)

    Vamik D. Volkan

    2006-06-01

Full Text Available Beginning with Freud, psychoanalytic theories concerning large groups have mainly focused on individuals' perceptions of what their large groups psychologically mean to them. This chapter examines some aspects of large-group psychology in its own right and studies the psychodynamics of ethnic, national, religious or ideological groups, membership of which originates in childhood. I will compare the mourning process in individuals with the mourning process in large groups to illustrate why we need to study large-group psychology as a subject in itself. As part of this discussion I will also describe signs and symptoms of large-group regression. When there is a threat against a large group's identity, massive violence may be initiated, and this violence in turn has an obvious impact on public health.

  7. The subjective experience of the self in the large group: two models for study.

    Science.gov (United States)

    Shields, W

    2001-04-01

    More and more opportunities now exist for group therapists to engage in the study of the self in the large group at local, national, and international conferences as well as in clinical and other organizational settings. This may be particularly important for the group therapist in the next century with potential benefit not only for individuals but also for groups and social systems of all kinds. In this article, I review my own subjective experiences in the large group context and in large study group experiences. Then, I contrast the group analytic and the group relations approaches to the large group with particular reference to Winnicott's theory about maturational processes in a facilitating environment.

  8. Distributed Model Predictive Control over Multiple Groups of Vehicles in Highway Intelligent Space for Large Scale System

    Directory of Open Access Journals (Sweden)

    Tang Xiaofeng

    2014-01-01

Full Text Available The paper presents three time-warning distances for the safe driving of multiple groups of vehicles, treated as a large-scale system, in a highway tunnel environment, based on a distributed model predictive control approach. Generally speaking, the system includes two parts. First, the vehicles are divided into multiple groups, and the distributed model predictive control approach is used to calculate the information framework of each group. The optimization of each group considers both local optimization and the optimization characteristics of the neighboring subgroups, which ensures global optimization performance. Second, the three time-warning distances are studied based on the basic principles used for highway intelligent space (HIS), and the information framework concept is proposed for the multiple groups of vehicles. A mathematical model is built to avoid chain collisions between vehicles. The results demonstrate that the proposed highway intelligent space method can effectively ensure the driving safety of multiple groups of vehicles under fog, rain, or snow.
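
The abstract does not give the authors' warning-distance formulas, so the sketch below uses the generic kinematic form (reaction distance plus braking distance, with friction reduced in bad weather) purely to illustrate why warning distances must grow under fog, rain, or snow. All coefficients are assumptions, not the paper's values.

```python
G = 9.81  # gravitational acceleration, m/s^2

def warning_distance(speed_mps, reaction_s, friction):
    """Generic kinematic warning distance (assumed form, not the paper's):
    distance covered during the reaction time plus braking distance
    v^2 / (2 * mu * g)."""
    reaction_dist = speed_mps * reaction_s
    braking_dist = speed_mps ** 2 / (2 * friction * G)
    return reaction_dist + braking_dist

speed = 25.0  # 25 m/s = 90 km/h (illustrative)
# Illustrative road-friction coefficients by weather, not the paper's values:
for weather, mu in [("dry", 0.7), ("rain", 0.4), ("snow", 0.2)]:
    print(weather, round(warning_distance(speed, 1.5, mu), 1))
```

Lower friction inflates the braking term, so rain and snow demand progressively larger warning distances at the same speed.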

  9. Secure Group Communications for Large Dynamic Multicast Group

    Institute of Scientific and Technical Information of China (English)

    Liu Jing; Zhou Mingtian

    2003-01-01

As the major problem in multicast security, group key management has been the focus of research, but few results are satisfactory. In this paper, the problems of group key management and access control for large dynamic multicast groups are studied, and a solution based on SubGroup Secure Controllers (SGSCs) is presented, which solves many problems in the IOLUS system and the WGL scheme.

  10. The large-Nc renormalization group

    International Nuclear Information System (INIS)

    Dorey, N.

    1995-01-01

In this talk, we review how effective theories of mesons and baryons become exactly soluble in the large-N_c limit. We start with a generic hadron Lagrangian constrained only by certain well-known large-N_c selection rules. The bare vertices of the theory are dressed by an infinite class of UV divergent Feynman diagrams at leading order in 1/N_c. We show how all these leading-order diagrams can be summed exactly using semiclassical techniques. The saddle-point field configuration is reminiscent of the chiral bag: hedgehog pions outside a sphere of radius Λ^-1 (Λ being the UV cutoff of the effective theory) matched onto nucleon degrees of freedom for r ≤ Λ^-1. The effect of this pion cloud is to renormalize the bare nucleon mass, the nucleon-Δ hyperfine mass splitting, and the Yukawa couplings of the theory. The corresponding large-N_c renormalization group equations for these parameters are presented, and solved explicitly in a series of simple models. We explain under what conditions the Skyrmion emerges as a UV fixed point of the RG flow as Λ → ∞

  11. Two-group interfacial area concentration correlations of two-phase flows in large diameter pipes

    International Nuclear Information System (INIS)

    Shen, Xiuzhong; Hibiki, Takashi

    2015-01-01

Reliable empirical correlations and models are an important way to predict the interfacial area concentration (IAC) in two-phase flows. However, up to now, no correlation or model has been available for the prediction of the IAC in two-phase flows in large diameter pipes. This study collected an IAC experimental database of two-phase flows taken under various flow conditions in large diameter pipes and presented a systematic way to predict the IAC for two-phase flows from bubbly and cap-bubbly to churn flow in large diameter pipes by categorizing bubbles into two groups (group 1: spherical and distorted bubbles; group 2: cap bubbles). Correlations were developed to predict the group-1 void fraction from the void fraction of all bubbles. The IAC contribution from group-1 bubbles was modeled by using the dominant parameters of group-1 bubble void fraction and Reynolds number, based on the parameter-dependent analysis of Hibiki and Ishii (2001, 2002) using one-dimensional bubble number density and interfacial area transport equations. A new drift velocity correlation for two-phase flow with large cap bubbles in large diameter pipes was derived in this study. By comparing the newly-derived drift velocity correlation with the existing drift velocity correlation of Kataoka and Ishii (1987) for large diameter pipes and using the characteristics of the representative bubbles among the group-2 bubbles, we developed models of IAC and bubble size for group-2 cap bubbles. The developed models for estimating the IAC were compared with the entire collected database. Reasonable agreement was obtained, with average relative errors of ±28.1%, ±54.4% and ±29.6% for group-1 bubbles, group-2 bubbles and all bubbles, respectively. (author)

  12. A Life-Cycle Model of Human Social Groups Produces a U-Shaped Distribution in Group Size.

    Directory of Open Access Journals (Sweden)

    Gul Deniz Salali

    Full Text Available One of the central puzzles in the study of sociocultural evolution is how and why transitions from small-scale human groups to large-scale, hierarchically more complex ones occurred. Here we develop a spatially explicit agent-based model as a first step towards understanding the ecological dynamics of small and large-scale human groups. By analogy with the interactions between single-celled and multicellular organisms, we build a theory of group lifecycles as an emergent property of single cell demographic and expansion behaviours. We find that once the transition from small-scale to large-scale groups occurs, a few large-scale groups continue expanding while small-scale groups gradually become scarcer, and large-scale groups become larger in size and fewer in number over time. Demographic and expansion behaviours of groups are largely influenced by the distribution and availability of resources. Our results conform to a pattern of human political change in which religions and nation states come to be represented by a few large units and many smaller ones. Future enhancements of the model should include decision-making rules and probabilities of fragmentation for large-scale societies. We suggest that the synthesis of population ecology and social evolution will generate increasingly plausible models of human group dynamics.

  13. A model for the use of blended learning in large group teaching sessions.

    Science.gov (United States)

    Herbert, Cristan; Velan, Gary M; Pryor, Wendy M; Kumar, Rakesh K

    2017-11-09

    appreciated for their flexibility, which enabled students to work at their own pace. In transforming this introductory Pathology course, we have demonstrated a model for the use of blended learning in large group teaching sessions, which achieved high levels of completion, satisfaction and value for learning.

  14. A model for the use of blended learning in large group teaching sessions

    Directory of Open Access Journals (Sweden)

    Cristan Herbert

    2017-11-01

    modules were described as enjoyable, motivating and were appreciated for their flexibility, which enabled students to work at their own pace. Conclusions In transforming this introductory Pathology course, we have demonstrated a model for the use of blended learning in large group teaching sessions, which achieved high levels of completion, satisfaction and value for learning.

  15. LARGE AND SMALL GROUP TYPEWRITING PROJECT.

    Science.gov (United States)

    JEFFS, GEORGE A.; AND OTHERS

    AN INVESTIGATION WAS CONDUCTED TO DETERMINE IF GROUPS OF HIGH SCHOOL STUDENTS NUMERICALLY IN EXCESS OF 50 COULD BE AS EFFECTIVELY INSTRUCTED IN TYPEWRITING SKILLS AS GROUPS OF LESS THAN 30. STUDENTS ENROLLED IN 1ST-YEAR TYPEWRITING WERE RANDOMLY ASSIGNED TO TWO LARGE GROUPS AND THREE SMALL GROUPS TAUGHT BY THE SAME INSTRUCTOR. TEACHER-MADE,…

  16. Managing large-scale models: DBS

    International Nuclear Information System (INIS)

    1981-05-01

A set of fundamental management tools for developing and operating a large-scale model and data base system is presented. Based on experience in operating and developing such a system, the only reasonable way to gain strong management control is to implement appropriate controls and procedures. Chapter I discusses the purpose of the book. Chapter II classifies a broad range of generic management problems into three groups: documentation, operations, and maintenance. First, system problems are identified, then solutions for gaining management control are discussed. Chapters III, IV, and V present practical methods for dealing with these problems. These methods were developed for managing SEAS but have general application for large-scale models and data bases.

  17. Large-scale effects of migration and conflict in pre-agricultural groups: Insights from a dynamic model.

    Directory of Open Access Journals (Sweden)

    Francesco Gargano

    Full Text Available The debate on the causes of conflict in human societies has deep roots. In particular, the extent of conflict in hunter-gatherer groups remains unclear. Some authors suggest that large-scale violence only arose with the spreading of agriculture and the building of complex societies. To shed light on this issue, we developed a model based on operatorial techniques simulating population-resource dynamics within a two-dimensional lattice, with humans and natural resources interacting in each cell of the lattice. The model outcomes under different conditions were compared with recently available demographic data for prehistoric South America. Only under conditions that include migration among cells and conflict was the model able to consistently reproduce the empirical data at a continental scale. We argue that the interplay between resource competition, migration, and conflict drove the population dynamics of South America after the colonization phase and before the introduction of agriculture. The relation between population and resources indeed emerged as a key factor leading to migration and conflict once the carrying capacity of the environment has been reached.

  18. The zero-dimensional O(N) vector model as a benchmark for perturbation theory, the large-N expansion and the functional renormalization group

    International Nuclear Information System (INIS)

    Keitel, Jan; Bartosch, Lorenz

    2012-01-01

    We consider the zero-dimensional O(N) vector model as a simple example to calculate n-point correlation functions using perturbation theory, the large-N expansion and the functional renormalization group (FRG). Comparing our findings with exact results, we show that perturbation theory breaks down for moderate interactions for all N, as one should expect. While the interaction-induced shift of the free energy and the self-energy are well described by the large-N expansion even for small N, this is not the case for higher order correlation functions. However, using the FRG in its one-particle irreducible formalism, we see that very few running couplings suffice to get accurate results for arbitrary N in the strong coupling regime, outperforming the large-N expansion for small N. We further remark on how the derivative expansion, a well-known approximation strategy for the FRG, reduces to an exact method for the zero-dimensional O(N) vector model. (paper)
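
For N = 1 the model reduces to an ordinary one-dimensional integral, so the breakdown of perturbation theory noted in the abstract can be seen directly: the partition-function ratio Z(g)/Z(0) = <exp(-g x^4)> over the unit Gaussian has the perturbation series sum_k (-g)^k (4k-1)!!/k! (using the Gaussian moment <x^{4k}> = (4k-1)!!), whose terms eventually grow without bound. The coupling g = 0.1 below is an illustrative choice.

```python
import math

g = 0.1  # illustrative moderate coupling

def exact(g, half_range=10.0, steps=200_000):
    """Trapezoidal evaluation of <exp(-g x^4)> over the unit Gaussian."""
    h = 2 * half_range / steps
    total = 0.0
    for i in range(steps + 1):
        x = -half_range + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * math.exp(-0.5 * x * x - g * x ** 4)
    return total * h / math.sqrt(2 * math.pi)

def double_factorial(n):
    out = 1
    while n > 1:
        out *= n
        n -= 2
    return out  # note: (-1)!! = 1, covering the k = 0 term

def partial_sum(g, order):
    """Partial sum of the perturbation series sum_k (-g)^k (4k-1)!! / k!."""
    return sum((-g) ** k * double_factorial(4 * k - 1) / math.factorial(k)
               for k in range(order + 1))

z = exact(g)
terms = [abs((-g) ** k * double_factorial(4 * k - 1) / math.factorial(k))
         for k in range(12)]
# The series is asymptotic: term magnitudes eventually grow, so low-order
# partial sums approximate z while high-order ones run away from it.
print(z, partial_sum(g, 1), partial_sum(g, 8))
```

This is exactly the behavior the abstract exploits: the model is cheap to solve exactly, making it a clean benchmark for perturbation theory and resummation schemes.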

  19. Resolving Microzooplankton Functional Groups In A Size-Structured Planktonic Model

    Science.gov (United States)

    Taniguchi, D.; Dutkiewicz, S.; Follows, M. J.; Jahn, O.; Menden-Deuer, S.

    2016-02-01

    Microzooplankton are important marine grazers, often consuming a large fraction of primary productivity. They consist of a great diversity of organisms with different behaviors, characteristics, and rates. This functional diversity, and its consequences, are not currently reflected in large-scale ocean ecological simulations. How should these organisms be represented, and what are the implications for their biogeography? We develop a size-structured, trait-based model to characterize a diversity of microzooplankton functional groups. We compile and examine size-based laboratory data on the traits, revealing some patterns with size and functional group that we interpret with mechanistic theory. Fitting the model to the data provides parameterizations of key rates and properties, which we employ in a numerical ocean model. The diversity of grazing preference, rates, and trophic strategies enables the coexistence of different functional groups of micro-grazers under various environmental conditions, and the model produces testable predictions of the biogeography.

  20. Group Centric Networking: Large Scale Over the Air Testing of Group Centric Networking

    Science.gov (United States)

    2016-11-01

Large Scale Over-the-Air Testing of Group Centric Networking. Logan Mercer, Greg Kuperman, Andrew Hunter, Brian Proulx, MIT Lincoln Laboratory. ...performance of Group Centric Networking (GCN), a networking protocol developed for robust and scalable communications in lossy networks where users are... devices, and the ad-hoc nature of the network. Group Centric Networking (GCN) is a proposed networking protocol that addresses challenges specific to

  1. A Large Group Decision Making Approach Based on TOPSIS Framework with Unknown Weights Information

    Directory of Open Access Journals (Sweden)

    Li Yupeng

    2017-01-01

Full Text Available Large group decision making considering multiple attributes is imperative in many decision areas. The weights of the decision makers (DMs) are difficult to obtain when the number of DMs is large. To cope with this issue, an integrated multiple-attribute large group decision making framework is proposed in this article. The fuzziness and hesitation of the linguistic decision variables are described by interval-valued intuitionistic fuzzy sets. The weights of the DMs are optimized by constructing a non-linear programming model, in which the original decision matrices are aggregated using the interval-valued intuitionistic fuzzy weighted average operator. By solving the non-linear programming model with MATLAB®, the weights of the DMs and the fuzzy comprehensive decision matrix are determined. Then the weights of the criteria are calculated based on information entropy theory. Finally, the TOPSIS framework is employed to establish the decision process. The divergence between interval-valued intuitionistic fuzzy numbers is calculated by interval-valued intuitionistic fuzzy cross entropy. A real-world case study is constructed to elaborate the feasibility and effectiveness of the proposed methodology.
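
The entropy-weighting and TOPSIS stages of the pipeline above can be sketched with crisp numbers; the interval-valued intuitionistic fuzzy aggregation is omitted here, and the decision matrix is invented for illustration (rows are alternatives, columns are benefit criteria).

```python
import math

# Invented crisp decision matrix (alternatives x benefit criteria):
X = [
    [7.0, 9.0, 9.0, 8.0],
    [8.0, 7.0, 8.0, 7.0],
    [9.0, 6.0, 8.0, 9.0],
    [6.0, 7.0, 8.0, 6.0],
]
m, n = len(X), len(X[0])

# 1. Entropy weights: criteria whose values vary more across alternatives
#    carry more discriminating information and receive larger weights.
col_sums = [sum(X[i][j] for i in range(m)) for j in range(n)]
P = [[X[i][j] / col_sums[j] for j in range(n)] for i in range(m)]
e = [-sum(P[i][j] * math.log(P[i][j]) for i in range(m)) / math.log(m)
     for j in range(n)]
d = [1 - ej for ej in e]
w = [dj / sum(d) for dj in d]

# 2. TOPSIS: vector-normalize, weight, then rank by relative closeness to
#    the ideal solution versus the anti-ideal solution.
norms = [math.sqrt(sum(X[i][j] ** 2 for i in range(m))) for j in range(n)]
V = [[w[j] * X[i][j] / norms[j] for j in range(n)] for i in range(m)]
ideal = [max(V[i][j] for i in range(m)) for j in range(n)]  # benefit criteria
anti = [min(V[i][j] for i in range(m)) for j in range(n)]

def dist(row, ref):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(row, ref)))

closeness = [dist(V[i], anti) / (dist(V[i], anti) + dist(V[i], ideal))
             for i in range(m)]
ranking = sorted(range(m), key=lambda i: closeness[i], reverse=True)
print(ranking, [round(c, 3) for c in closeness])
```

The paper replaces the Euclidean distances with interval-valued intuitionistic fuzzy cross entropy and derives the DM weights from a non-linear program; this sketch only shows the shared skeleton.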

  2. Structured approaches to large-scale systems: Variational integrators for interconnected Lagrange-Dirac systems and structured model reduction on Lie groups

    Science.gov (United States)

    Parks, Helen Frances

This dissertation presents two projects related to the structured integration of large-scale mechanical systems. Structured integration uses the considerable differential geometric structure inherent in mechanical motion to inform the design of numerical integration schemes. This process improves the qualitative properties of simulations and becomes especially valuable as a measure of accuracy over long time simulations in which traditional Gronwall accuracy estimates lose their meaning. Often, structured integration schemes replicate continuous symmetries and their associated conservation laws at the discrete level. Such is the case for variational integrators, which discretely replicate the process of deriving equations of motion from variational principles. This results in the conservation of momenta associated to symmetries in the discrete system and conservation of a symplectic form when applicable. In the case of Lagrange-Dirac systems, variational integrators preserve a discrete analogue of the Dirac structure preserved in the continuous flow. In the first project of this thesis, we extend Dirac variational integrators to accommodate interconnected systems. We hope this work will find use in the fields of control, where a controlled system can be thought of as a "plant" system joined to its controller, and in the modeling of very large systems, where modular modeling may prove easier than monolithically modeling the entire system. The second project of the thesis considers a different approach to large systems. Given a detailed model of the full system, can we reduce it to a more computationally efficient model without losing essential geometric structures in the system? Asked without the reference to structure, this is the essential question of the field of model reduction. The answer there has been a resounding yes, with Proper Orthogonal Decomposition (POD) with snapshots rising as one of the most successful methods. Our project builds on previous work

  3. Group music performance causes elevated pain thresholds and social bonding in small and large groups of singers

    Science.gov (United States)

    Weinstein, Daniel; Launay, Jacques; Pearce, Eiluned; Dunbar, Robin I. M.; Stewart, Lauren

    2016-01-01

Over our evolutionary history, humans have faced the problem of how to create and maintain social bonds in progressively larger groups compared to those of our primate ancestors. Evidence from historical and anthropological records suggests that group music-making might act as a mechanism by which this large-scale social bonding could occur. While previous research has shown effects of music making on social bonds in small group contexts, the question of whether this effect 'scales up' to larger groups is particularly important when considering the potential role of music for large-scale social bonding. The current study recruited individuals from a community choir that met in both small (n = 20–80) and large (a 'megachoir' combining individuals from the smaller subchoirs; n = 232) group contexts. Participants gave self-report measures (via a survey) of social bonding and had pain threshold measurements taken (as a proxy for endorphin release) before and after 90 minutes of singing. Results showed that feelings of inclusion, connectivity, positive affect, and measures of endorphin release all increased across singing rehearsals and that the influence of group singing was comparable for pain thresholds in the large versus small group context. Levels of social closeness were found to be greater at pre- and post-levels for the small choir condition. However, the large choir condition experienced a greater change in social closeness as compared to the small condition. The finding that singing together fosters social closeness – even in large contexts where individuals are not known to each other – is consistent with evolutionary accounts that emphasize the role of music in social bonding, particularly in the context of creating larger cohesive groups than other primates are able to manage. PMID:27158219

  4. Understanding Group/Party Affiliation Using Social Networks and Agent-Based Modeling

    Science.gov (United States)

    Campbell, Kenyth

    2012-01-01

The dynamics of group affiliation and group dispersion is a concept that is most often studied in order for political candidates to better understand the most efficient way to conduct their campaigns. While political campaigning in the United States is a very hot topic that most politicians analyze and study, the concept of group/party affiliation presents its own area of study that produces very interesting results. One tool for examining party affiliation on a large scale is agent-based modeling (ABM), a paradigm in the modeling and simulation (M&S) field perfectly suited for aggregating individual behaviors to observe large swaths of a population. For this study, agent-based modeling was used to look at a community of agents and determine what factors can affect the group/party affiliation patterns that are present. In the agent-based model that was used for this experiment many factors were present, but two main factors were used to determine the results. The results of this study show that it is possible to use agent-based modeling to explore group/party affiliation and construct a model that can mimic real world events. More importantly, the model in the study allows for the results found in a smaller community to be translated into larger experiments to determine if the results will remain present on a much larger scale.

  5. Highly Scalable Trip Grouping for Large Scale Collective Transportation Systems

    DEFF Research Database (Denmark)

    Gidofalvi, Gyozo; Pedersen, Torben Bach; Risch, Tore

    2008-01-01

Transportation-related problems, like road congestion, parking, and pollution, are increasing in most cities. In order to reduce traffic, recent work has proposed methods for vehicle sharing, for example for sharing cabs by grouping "closeby" cab requests and thus minimizing transportation cost and utilizing cab space. However, the methods published so far do not scale to large data volumes, which is necessary to facilitate large-scale collective transportation systems, e.g., ride-sharing systems for large cities. This paper presents highly scalable trip grouping algorithms, which generalize previous...
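
The abstract does not describe the scalable algorithms themselves, but the underlying idea of grouping "closeby" cab requests can be shown with a naive greedy baseline (the kind of method the paper improves on): seed a group with an unassigned request, then absorb nearby requests up to the cab's capacity. All parameters are illustrative.

```python
import math

def group_requests(requests, radius, capacity):
    """Greedy baseline for trip grouping (illustrative, not the paper's
    algorithm). requests: list of (x, y) pickup points. Returns groups of
    request indices whose pickups lie within `radius` of the group seed."""
    unassigned = list(range(len(requests)))
    groups = []
    while unassigned:
        seed = unassigned.pop(0)
        group = [seed]
        for idx in unassigned[:]:          # iterate over a copy while removing
            if len(group) >= capacity:
                break
            if math.dist(requests[seed], requests[idx]) <= radius:
                group.append(idx)
                unassigned.remove(idx)
        groups.append(group)
    return groups

# Illustrative pickup coordinates (e.g., km on a city grid):
pts = [(0.0, 0.0), (0.1, 0.1), (0.2, 0.0), (5.0, 5.0), (5.1, 5.0)]
print(group_requests(pts, radius=0.5, capacity=4))
```

This baseline is quadratic in the number of requests, which is precisely why scalable variants (e.g., using spatial indexing) are needed for city-scale data.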

  6. Large Animal Stroke Models vs. Rodent Stroke Models, Pros and Cons, and Combination?

    Science.gov (United States)

    Cai, Bin; Wang, Ning

    2016-01-01

Stroke is a leading cause of serious long-term disability worldwide and the second leading cause of death in many countries. Long-standing attempts to salvage dying neurons via various neuroprotective agents have failed in stroke translational research, owing in part to the huge gap between animal stroke models and stroke patients, which also suggests that rodent models have limited predictive value and that alternate large animal models are likely to become important in future translational research. The genetic background, physiological characteristics, behavioral characteristics, and brain structure of large animals, especially nonhuman primates, are analogous to humans, and resemble humans in stroke. Moreover, relatively new regional imaging techniques, measurements of regional cerebral blood flow, and sophisticated physiological monitoring can be more easily performed on the same animal at multiple time points. As a result, we can use large animal stroke models to decrease the gap and promote translation of basic science stroke research. At the same time, we should not neglect the disadvantages of large animal stroke models, such as the significant expense and ethical considerations, which can be overcome by rodent models. Rodents should be selected as stroke models for initial testing, with primates or cats desirable as a second species, as recommended by the Stroke Therapy Academic Industry Roundtable (STAIR) group in 2009.

  7. Topic modeling for cluster analysis of large biological and medical datasets.

    Science.gov (United States)

    Zhao, Weizhong; Zou, Wen; Chen, James J

    2014-01-01

    The big data moniker is nowhere better deserved than to describe the ever-increasing prodigiousness and complexity of biological and medical datasets. New methods are needed to generate and test hypotheses, foster biological interpretation, and build validated predictors. Although multivariate techniques such as cluster analysis may allow researchers to identify groups, or clusters, of related variables, the accuracy and effectiveness of traditional clustering methods diminish for large, high-dimensional datasets. Topic modeling is an active research field in machine learning and has mainly been used as an analytical tool to structure large textual corpora for data mining. Its ability to reduce high dimensionality to a small number of latent variables makes it suitable as a means for clustering or overcoming clustering difficulties in large biological and medical datasets. In this study, three topic model-derived clustering methods, highest probable topic assignment, feature selection and feature extraction, are proposed and tested on the cluster analysis of three large datasets: a Salmonella pulsed-field gel electrophoresis (PFGE) dataset, a lung cancer dataset, and a breast cancer dataset, which represent various types of large biological or medical datasets. All three methods are shown to improve the effectiveness of clustering results on the three datasets in comparison to traditional methods. A preferable cluster analysis method emerged for each of the three datasets on the basis of replicating known biological truths. Topic modeling could be advantageously applied to the large datasets of biological or medical research. The three proposed topic model-derived clustering methods, highest probable topic assignment, feature selection and feature extraction, yield clustering improvements for the three different data types. Clusters more efficaciously represent truthful groupings and subgroupings in the data than traditional methods, suggesting
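The "highest probable topic assignment" step described above can be sketched in pure Python: given a document-topic probability matrix (the hypothetical values below stand in for a fitted topic model's output), each sample is assigned to its most probable topic, and samples sharing a topic form a cluster.

```python
# Hypothetical document-topic probabilities, as a fitted topic model
# (e.g. LDA) would produce: rows are samples, columns are topics.
doc_topic = [
    [0.70, 0.20, 0.10],
    [0.15, 0.75, 0.10],
    [0.60, 0.25, 0.15],
    [0.05, 0.10, 0.85],
]

def highest_probable_topic(doc_topic):
    """Assign each sample to the topic with the largest probability."""
    return [max(range(len(row)), key=row.__getitem__) for row in doc_topic]

def clusters_from_assignment(labels):
    """Group sample indices by their assigned topic (cluster)."""
    clusters = {}
    for i, t in enumerate(labels):
        clusters.setdefault(t, []).append(i)
    return clusters

labels = highest_probable_topic(doc_topic)
print(labels)                            # [0, 1, 0, 2]
print(clusters_from_assignment(labels))  # {0: [0, 2], 1: [1], 2: [3]}
```

The feature-selection and feature-extraction variants mentioned in the abstract would instead feed the topic representation into a conventional clustering algorithm; only the assignment rule above is shown here.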

  8. GRIP LANGLEY AEROSOL RESEARCH GROUP EXPERIMENT (LARGE) V1

    Data.gov (United States)

    National Aeronautics and Space Administration — Langley Aerosol Research Group Experiment (LARGE) measures ultrafine aerosol number density, total and non-volatile aerosol number density, dry aerosol size...

  9. Five Large Generation Groups: Competing in Capital Operation

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Since the reform of the electric power industry in 2002, the newly established five large generation groups have been persisting in the development strategy of "taking electricity as the core and extending to up- and downstream businesses." Stringent measures were taken in capital operation, and their potential has been shown through electric power asset acquisitions, coal and financial resource investments, capital market financing, as well as power utility restructuring. The five groups are playing increasingly important roles in merger and acquisition (M&A) and capital markets.

  10. On renormalization group flow in matrix model

    International Nuclear Information System (INIS)

    Gao, H.B.

    1992-10-01

    The renormalization group flow recently found by Brezin and Zinn-Justin by integrating out redundant entries of the (N+1)×(N+1) Hermitian random matrix is studied. By introducing explicitly the RG flow parameter, and adding suitable counterterms to the matrix potential of the one-matrix model, we deduce some interesting properties of the RG trajectories. In particular, the string equation for the general massive model interpolating between the UV and IR fixed points turns out to be a consequence of RG flow. An ambiguity in the UV region of the RG trajectory is remarked to be related to the large-order behaviour of the one-matrix model. (author). 7 refs

  11. Modelling group dynamic animal movement

    DEFF Research Database (Denmark)

    Langrock, Roland; Hopcraft, J. Grant C.; Blackwell, Paul G.

    2014-01-01

    Group dynamic movement is a fundamental aspect of many species' movements. The need to adequately model individuals' interactions with other group members has been recognised, particularly in order to differentiate the role of social forces in individual movement from environmental factors. However, to date, practical statistical methods which can include group dynamics in animal movement models have been lacking. We consider a flexible modelling framework that distinguishes a group-level model, describing the movement of the group's centre, and an individual-level model, such that each individual makes its movement decisions relative to the group centroid. The basic idea is framed within the flexible class of hidden Markov models, extending previous work on modelling animal movement by means of multi-state random walks. While in simulation experiments parameter estimators exhibit some bias...

  12. Large Sets in Boolean and Non-Boolean Groups and Topology

    Directory of Open Access Journals (Sweden)

    Ol’ga V. Sipacheva

    2017-10-01

    Full Text Available Various notions of large sets in groups, including the classical notions of thick, syndetic, and piecewise syndetic sets and the new notion of vast sets in groups, are studied with emphasis on the interplay between such sets in Boolean groups. Natural topologies closely related to vast sets are considered; as a byproduct, interesting relations between vast sets and ultrafilters are revealed.
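For reference, the classical largeness notions named in the abstract admit short standard definitions (written multiplicatively for a group $G$; these are the textbook formulations, not quoted from the article):

```latex
\begin{itemize}
  \item $S \subseteq G$ is \emph{syndetic} if finitely many translates of $S$
        cover $G$: there is a finite $F \subseteq G$ with $FS = G$.
  \item $S$ is \emph{thick} if it contains a translate of every finite set:
        for every finite $F \subseteq G$ there is $g \in G$ with $Fg \subseteq S$.
  \item $S$ is \emph{piecewise syndetic} if $FS$ is thick for some finite
        $F \subseteq G$; equivalently, $S$ is the intersection of a thick set
        and a syndetic set.
\end{itemize}
```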

  13. Application of renormalization group theory to the large-eddy simulation of transitional boundary layers

    Science.gov (United States)

    Piomelli, Ugo; Zang, Thomas A.; Speziale, Charles G.; Lund, Thomas S.

    1990-01-01

    An eddy viscosity model based on the renormalization group theory of Yakhot and Orszag (1986) is applied to the large-eddy simulation of transition in a flat-plate boundary layer. The simulation predicts with satisfactory accuracy the mean velocity and Reynolds stress profiles, as well as the development of the important scales of motion. The evolution of the structures characteristic of the nonlinear stages of transition is also predicted reasonably well.

  14. Renormalisation group improved leptogenesis in family symmetry models

    International Nuclear Information System (INIS)

    Cooper, Iain K.; King, Stephen F.; Luhn, Christoph

    2012-01-01

    We study renormalisation group (RG) corrections relevant for leptogenesis in the case of family symmetry models such as the Altarelli-Feruglio A4 model of tri-bimaximal lepton mixing or its extension to tri-maximal mixing. Such corrections are particularly relevant since in large classes of family symmetry models, to leading order, the CP violating parameters of leptogenesis would be identically zero at the family symmetry breaking scale, due to the form dominance property. We find that RG corrections violate form dominance and enable such models to yield viable leptogenesis at the scale of right-handed neutrino masses. More generally, the results of this paper show that RG corrections to leptogenesis cannot be ignored for any family symmetry model involving sizeable neutrino and τ Yukawa couplings.

  15. Student decision making in large group discussion

    Science.gov (United States)

    Kustusch, Mary Bridget; Ptak, Corey; Sayre, Eleanor C.; Franklin, Scott V.

    2015-04-01

    It is increasingly common in physics classes for students to work together to solve problems and perform laboratory experiments. When students work together, they need to negotiate the roles and decision making within the group. We examine how a large group of students negotiates authority as part of their two-week summer College Readiness Program at Rochester Institute of Technology. The program is designed to develop metacognitive skills in first generation and Deaf and hard-of-hearing (DHH) STEM undergraduates through cooperative group work, laboratory experimentation, and explicit reflection exercises. On the first full day of the program, the students collaboratively developed a sign for the word ``metacognition,'' for which there is no sign in American Sign Language. This presentation will focus on three aspects of the ensuing discussion: (1) how the instructor communicated expectations about decision making; (2) how the instructor promoted student-driven decision making rather than instructor-driven policy; and (3) one student's shifts in decision making behavior. We conclude by discussing implications of this research for activity-based physics instruction.

  16. A Nationwide Overview of Sight-Singing Requirements of Large-Group Choral Festivals

    Science.gov (United States)

    Norris, Charles E.

    2004-01-01

    The purpose of this study was to examine sight-singing requirements at junior and senior high school large-group ratings-based choral festivals throughout the United States. Responses to the following questions were sought from each state: (1) Are there ratings-based large-group choral festivals? (2) Is sight-singing a requirement? (3) Are there…

  17. Memory Efficient PCA Methods for Large Group ICA.

    Science.gov (United States)

    Rachakonda, Srinivas; Silva, Rogers F; Liu, Jingyu; Calhoun, Vince D

    2016-01-01

    Principal component analysis (PCA) is widely used for data reduction in group independent component analysis (ICA) of fMRI data. Commonly, group-level PCA of temporally concatenated datasets is computed prior to ICA of the group principal components. This work focuses on reducing very high-dimensional temporally concatenated datasets into their group PCA space. Existing randomized PCA methods can determine the PCA subspace with minimal memory requirements and, thus, are ideal for solving large PCA problems. Since the number of dataloads is not typically optimized, we extend one of these methods to compute PCA of very large datasets with a minimal number of dataloads. This method is coined multi power iteration (MPOWIT). The key idea behind MPOWIT is to estimate a subspace larger than the desired one, while checking for convergence of only the smaller subset of interest. The number of iterations is reduced considerably (as well as the number of dataloads), accelerating convergence without loss of accuracy. More importantly, in the proposed implementation of MPOWIT, the memory required for successful recovery of the group principal components becomes independent of the number of subjects analyzed. Highly efficient subsampled eigenvalue decomposition techniques are also introduced, furnishing excellent PCA subspace approximations that can be used for intelligent initialization of randomized methods such as MPOWIT. Together, these developments enable efficient estimation of accurate principal components, as we illustrate by solving a 1600-subject group-level PCA of fMRI with standard acquisition parameters, on a regular desktop computer with only 4 GB RAM, in just a few hours. MPOWIT is also highly scalable and could realistically solve group-level PCA of fMRI on thousands of subjects, or more, using standard hardware, limited only by time, not memory. 
Also, the MPOWIT algorithm is highly parallelizable, which would enable fast, distributed implementations ideal for big
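The core operation underlying such methods — repeatedly applying the covariance matrix to a trial vector until it converges to the leading eigenvector — can be illustrated in pure Python. This is a toy sketch of classical power iteration only; MPOWIT itself iterates a whole block of vectors larger than the desired subspace and checks convergence of just the leading subset.

```python
import math

def power_iteration(A, iters=100):
    """Leading eigenvector/eigenvalue of a small symmetric matrix A
    (given as a list of rows) via classical power iteration."""
    n = len(A)
    v = [1.0] * n  # arbitrary non-zero start vector
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]  # w = A v
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]  # renormalize each step
    # Rayleigh quotient v' A v gives the corresponding eigenvalue estimate.
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam = sum(v[i] * Av[i] for i in range(n))
    return v, lam

# Toy covariance matrix: eigenvalues 3 and 1, leading eigenvector (1,1)/sqrt(2).
v, lam = power_iteration([[2.0, 1.0], [1.0, 2.0]])
print(round(lam, 6))  # 3.0
```

In the full method, each multiplication by the (data-derived) covariance is one pass over the data, which is why minimizing the number of iterations directly minimizes dataloads.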

  18. Memory efficient PCA methods for large group ICA

    Directory of Open Access Journals (Sweden)

    Srinivas eRachakonda

    2016-02-01

    Full Text Available Principal component analysis (PCA) is widely used for data reduction in group independent component analysis (ICA) of fMRI data. Commonly, group-level PCA of temporally concatenated datasets is computed prior to ICA of the group principal components. This work focuses on reducing very high-dimensional temporally concatenated datasets into their group PCA space. Existing randomized PCA methods can determine the PCA subspace with minimal memory requirements and, thus, are ideal for solving large PCA problems. Since the number of dataloads is not typically optimized, we extend one of these methods to compute PCA of very large datasets with a minimal number of dataloads. This method is coined multi power iteration (MPOWIT). The key idea behind MPOWIT is to estimate a subspace larger than the desired one, while checking for convergence of only the smaller subset of interest. The number of iterations is reduced considerably (as well as the number of dataloads), accelerating convergence without loss of accuracy. More importantly, in the proposed implementation of MPOWIT, the memory required for successful recovery of the group principal components becomes independent of the number of subjects analyzed. Highly efficient subsampled eigenvalue decomposition techniques are also introduced, furnishing excellent PCA subspace approximations that can be used for intelligent initialization of randomized methods such as MPOWIT. Together, these developments enable efficient estimation of accurate principal components, as we illustrate by solving a 1600-subject group-level PCA of fMRI with standard acquisition parameters, on a regular desktop computer with only 4 GB RAM, in just a few hours. MPOWIT is also highly scalable and could realistically solve group-level PCA of fMRI on thousands of subjects, or more, using standard hardware, limited only by time, not memory. 
Also, the MPOWIT algorithm is highly parallelizable, which would enable fast, distributed implementations

  19. Medical students perceive better group learning processes when large classes are made to seem small.

    Science.gov (United States)

    Hommes, Juliette; Arah, Onyebuchi A; de Grave, Willem; Schuwirth, Lambert W T; Scherpbier, Albert J J A; Bos, Gerard M J

    2014-01-01

    Medical schools struggle with large classes, which might interfere with the effectiveness of learning within small groups due to students being unfamiliar with fellow students. The aim of this study was to assess the effects of making a large class seem small on the students' collaborative learning processes. A randomised controlled intervention study was undertaken to make a large class seem small, without the need to reduce the number of students enrolling in the medical programme. The class was divided into subsets: two small subsets (n=50) as the intervention groups; a control group (n=102) was mixed with the remaining students (the non-randomised group, n∼100) to create one large subset. The setting was the undergraduate curriculum of the Maastricht Medical School, which applies Problem-Based Learning principles. In this learning context, students learn mainly in tutorial groups, composed randomly from a large class every 6-10 weeks. The formal group learning activities were organised within the subsets. Students from the intervention groups met frequently within the formal groups, in contrast to the students from the large subset, who hardly enrolled with the same students in formal activities. Three outcome measures assessed students' group learning processes over time: learning within formally organised small groups, learning with other students in the informal context, and perceptions of the intervention. Formal group learning processes were perceived more positively in the intervention groups from the second study year on, with a mean increase of β=0.48. Informal group learning activities occurred almost exclusively within the subsets as defined by the intervention, from the first week of the medical curriculum (E-I indexes>-0.69). Interviews revealed mainly positive effects and negligible negative side effects of the intervention. Better group learning processes can be achieved in large medical schools by making large classes seem small.

  20. TIME DISTRIBUTIONS OF LARGE AND SMALL SUNSPOT GROUPS OVER FOUR SOLAR CYCLES

    International Nuclear Information System (INIS)

    Kilcik, A.; Yurchyshyn, V. B.; Abramenko, V.; Goode, P. R.; Cao, W.; Ozguc, A.; Rozelot, J. P.

    2011-01-01

    Here we analyze solar activity by focusing on time variations of the number of sunspot groups (SGs) as a function of their modified Zurich class. We analyzed data for solar cycles 20-23 by using Rome (cycles 20 and 21) and Learmonth Solar Observatory (cycles 22 and 23) SG numbers. All SGs recorded during these time intervals were separated into two groups. The first group includes small SGs (A, B, C, H, and J classes by Zurich classification), and the second group consists of large SGs (D, E, F, and G classes). We then calculated small and large SG numbers from their daily mean numbers as observed on the solar disk during a given month. We report that the time variations of small and large SG numbers are asymmetric except for solar cycle 22. In general, large SG numbers appear to reach their maximum in the middle of the solar cycle (phases 0.45-0.5), while the international sunspot numbers and the small SG numbers generally peak much earlier (solar cycle phases 0.29-0.35). Moreover, the 10.7 cm solar radio flux, the facular area, and the maximum coronal mass ejection speed show better agreement with the large SG numbers than they do with the small SG numbers. Our results suggest that the large SG numbers are more likely to shed light on solar activity and its geophysical implications. Our findings may also influence our understanding of long-term variations of the total solar irradiance, which is thought to be an important factor in the Sun-Earth climate relationship.
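The small/large split used in the abstract maps directly onto the modified Zurich classes and is straightforward to encode (a sketch; the class letters follow the grouping stated above):

```python
SMALL_CLASSES = {"A", "B", "C", "H", "J"}   # small sunspot groups
LARGE_CLASSES = {"D", "E", "F", "G"}        # large sunspot groups

def classify_group(zurich_class):
    """Label a sunspot group 'small' or 'large' by its modified Zurich class."""
    c = zurich_class.upper()
    if c in SMALL_CLASSES:
        return "small"
    if c in LARGE_CLASSES:
        return "large"
    raise ValueError(f"unknown Zurich class: {zurich_class!r}")

def daily_counts(observed_classes):
    """Count small and large groups among one day's observed classes."""
    counts = {"small": 0, "large": 0}
    for c in observed_classes:
        counts[classify_group(c)] += 1
    return counts

print(daily_counts(["A", "D", "C", "F", "H"]))  # {'small': 3, 'large': 2}
```

The monthly series analyzed in the paper would then be the mean of such daily counts over each month.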

  1. Large-scale parallel configuration interaction. II. Two- and four-component double-group general active space implementation with application to BiH

    DEFF Research Database (Denmark)

    Knecht, Stefan; Jensen, Hans Jørgen Aagaard; Fleig, Timo

    2010-01-01

    We present a parallel implementation of a large-scale relativistic double-group configuration interaction (CI) program. It is applicable with a large variety of two- and four-component Hamiltonians. The parallel algorithm is based on a distributed data model in combination with a static load balanci...

  2. Medical Students Perceive Better Group Learning Processes when Large Classes Are Made to Seem Small

    Science.gov (United States)

    Hommes, Juliette; Arah, Onyebuchi A.; de Grave, Willem; Schuwirth, Lambert W. T.; Scherpbier, Albert J. J. A.; Bos, Gerard M. J.

    2014-01-01

    Objective Medical schools struggle with large classes, which might interfere with the effectiveness of learning within small groups due to students being unfamiliar with fellow students. The aim of this study was to assess the effects of making a large class seem small on the students' collaborative learning processes. Design A randomised controlled intervention study was undertaken to make a large class seem small, without the need to reduce the number of students enrolling in the medical programme. The class was divided into subsets: two small subsets (n = 50) as the intervention groups; a control group (n = 102) was mixed with the remaining students (the non-randomised group n∼100) to create one large subset. Setting The undergraduate curriculum of the Maastricht Medical School, applying the Problem-Based Learning principles. In this learning context, students learn mainly in tutorial groups, composed randomly from a large class every 6–10 weeks. Intervention The formal group learning activities were organised within the subsets. Students from the intervention groups met frequently within the formal groups, in contrast to the students from the large subset, who hardly enrolled with the same students in formal activities. Main Outcome Measures Three outcome measures assessed students' group learning processes over time: learning within formally organised small groups, learning with other students in the informal context, and perceptions of the intervention. Results Formal group learning processes were perceived more positively in the intervention groups from the second study year on, with a mean increase of β = 0.48. Informal group learning activities occurred almost exclusively within the subsets as defined by the intervention, from the first week of the medical curriculum (E-I indexes>−0.69). Interviews revealed mainly positive effects and negligible negative side effects of the intervention. Conclusion Better group learning processes can be

  3. Large animal and primate models of spinal cord injury for the testing of novel therapies.

    Science.gov (United States)

    Kwon, Brian K; Streijger, Femke; Hill, Caitlin E; Anderson, Aileen J; Bacon, Mark; Beattie, Michael S; Blesch, Armin; Bradbury, Elizabeth J; Brown, Arthur; Bresnahan, Jacqueline C; Case, Casey C; Colburn, Raymond W; David, Samuel; Fawcett, James W; Ferguson, Adam R; Fischer, Itzhak; Floyd, Candace L; Gensel, John C; Houle, John D; Jakeman, Lyn B; Jeffery, Nick D; Jones, Linda Ann Truett; Kleitman, Naomi; Kocsis, Jeffery; Lu, Paul; Magnuson, David S K; Marsala, Martin; Moore, Simon W; Mothe, Andrea J; Oudega, Martin; Plant, Giles W; Rabchevsky, Alexander Sasha; Schwab, Jan M; Silver, Jerry; Steward, Oswald; Xu, Xiao-Ming; Guest, James D; Tetzlaff, Wolfram

    2015-07-01

    Large animal and primate models of spinal cord injury (SCI) are being increasingly utilized for the testing of novel therapies. While these represent intermediary animal species between rodents and humans and offer the opportunity to pose unique research questions prior to clinical trials, the role that such large animal and primate models should play in the translational pipeline is unclear. In this initiative we engaged members of the SCI research community in a questionnaire and round-table focus group discussion around the use of such models. Forty-one SCI researchers from academia, industry, and granting agencies were asked to complete a questionnaire about their opinion regarding the use of large animal and primate models in the context of testing novel therapeutics. The questions centered around how large animal and primate models of SCI would be best utilized in the spectrum of preclinical testing, and how much testing in rodent models was warranted before employing these models. Further questions were posed at a focus group meeting attended by the respondents. The group generally felt that large animal and primate models of SCI serve a potentially useful role in the translational pipeline for novel therapies, and that the rational use of these models would depend on the type of therapy and specific research question being addressed. While testing within these models should not be mandatory, the detection of beneficial effects using these models lends additional support for translating a therapy to humans. These models provide an opportunity to evaluate and refine surgical procedures prior to use in humans, and to assess safety and bio-distribution in a spinal cord more similar in size and anatomy to that of humans. Our results reveal that while many feel that these models are valuable in the testing of novel therapies, important questions remain unanswered about how they should be used and how data derived from them should be interpreted. 
Copyright © 2015 Elsevier

  4. The effect of continuous grouping of pigs in large groups on stress response and haematological parameters

    DEFF Research Database (Denmark)

    Damgaard, Birthe Marie; Studnitz, Merete; Jensen, Karin Hjelholt

    2009-01-01

    The consequences of an 'all in-all out' static group of uniform age vs. a continuously dynamic group with litter introduction and exit every third week were examined with respect to stress response and haematological parameters in large groups of 60 pigs. The experiment included a total of 480 pigs from weaning at the age of 4 weeks to the age of 18 weeks after weaning. Limited differences were found in stress and haematological parameters between pigs in dynamic and static groups. The cortisol response to the stress test increased with the duration of the stress test in pigs from the dynamic group, while it decreased in the static group. The health condition and the growth performance were reduced in the dynamic groups compared with the static groups. In the dynamic groups the haematological parameters indicated an activation of the immune system characterised by an increased...

  5. Cooperative Coevolution with Formula-Based Variable Grouping for Large-Scale Global Optimization.

    Science.gov (United States)

    Wang, Yuping; Liu, Haiyan; Wei, Fei; Zong, Tingting; Li, Xiaodong

    2017-08-09

    For a large-scale global optimization (LSGO) problem, divide-and-conquer is usually considered an effective strategy to decompose the problem into smaller subproblems, each of which can then be solved individually. Among these decomposition methods, variable grouping has been shown to be promising in recent years. Existing variable grouping methods usually assume the problem to be black-box (i.e., assuming that an analytical model of the objective function is unknown), and they attempt to learn an appropriate variable grouping that would allow for a better decomposition of the problem. In such cases, these variable grouping methods do not make direct use of the formula of the objective function. However, it can be argued that many real-world problems are white-box problems, that is, the formulas of objective functions are often known a priori. These formulas of the objective functions provide rich information which can then be used to design an effective variable grouping method. In this article, a formula-based grouping strategy (FBG) for white-box problems is first proposed. It groups variables directly via the formula of an objective function, which usually consists of a finite number of operations (i.e., the four arithmetic operations "+", "−", "×", "÷" and composite operations of basic elementary functions). In FBG, the operations are classified into two classes: one resulting in nonseparable variables, and the other resulting in separable variables. Variables can thus be automatically grouped into a suitable number of non-interacting subcomponents, with variables in each subcomponent being interdependent. FBG can easily be applied to any white-box problem and can be integrated into a cooperative coevolution framework. 
Based on FBG, a novel cooperative coevolution algorithm with formula-based variable grouping (so-called CCF) is proposed in this article for decomposing a large-scale white-box problem
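The grouping idea can be illustrated on a white-box objective written as a sum of terms: variables that appear together in some non-separable term are merged into one subcomponent. This is a toy sketch over a hypothetical objective, using a union-find over additive terms; FBG itself parses the full formula, including composite elementary functions.

```python
def group_variables(terms, n_vars):
    """Union-find grouping: variables sharing a term become one subcomponent.

    `terms` lists, for each additive term of the objective, the indices of
    the variables it involves, e.g. f(x) = x0**2 + x1*x2 + x3 -> [[0], [1, 2], [3]].
    """
    parent = list(range(n_vars))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for term in terms:
        for v in term[1:]:
            parent[find(v)] = find(term[0])  # union with the term's first variable

    groups = {}
    for v in range(n_vars):
        groups.setdefault(find(v), []).append(v)
    return sorted(groups.values())

# f(x) = x0**2 + x1*x2 + x2*x3 + x4  ->  subcomponents {x0}, {x1, x2, x3}, {x4}
print(group_variables([[0], [1, 2], [2, 3], [4]], 5))  # [[0], [1, 2, 3], [4]]
```

Each resulting subcomponent could then be optimized by its own subpopulation in a cooperative coevolution framework, as the abstract describes.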

  6. Integrating Collaborative Learning Groups in the Large Enrollment Lecture

    Science.gov (United States)

    Adams, J. P.; Brissenden, G.; Lindell Adrian, R.; Slater, T. F.

    1998-12-01

    Recent reforms for undergraduate education propose that students should work in teams to solve problems that simulate problems that research scientists address. In the context of an innovative large-enrollment course at Montana State University, faculty have developed a series of 15 in-class, collaborative learning group activities that provide students with realistic scenarios to investigate. Focusing on a team approach, the four principal types of activities employed are historical, conceptual, process, and open-ended activities. Examples of these activities include classifying stellar spectra, characterizing galaxies, parallax measurements, estimating stellar radii, and correlating star colors with absolute magnitudes. Summative evaluation results from a combination of attitude surveys, astronomy concept examinations, and focus group interviews strongly suggest that, overall, students are learning more astronomy, believe that the group activities are valuable, enjoy the less-lecture course format, and have significantly higher attendance rates. In addition, class observations of 48 self-formed, collaborative learning groups reveal that female students are more engaged in single-gender learning groups than in mixed gender groups.

  7. Interacting star clusters in the Large Magellanic Cloud. Overmerging problem solved by cluster group formation

    Science.gov (United States)

    Leon, Stéphane; Bergond, Gilles; Vallenari, Antonella

    1999-04-01

    We present the tidal tail distributions of a sample of candidate binary clusters located in the bar of the Large Magellanic Cloud (LMC). One isolated cluster, SL 268, is presented in order to study the effect of the LMC tidal field. All the candidate binary clusters show tidal tails, confirming that the pairs are formed by physically linked objects. The stellar mass in the tails covers a large range, from 1.8×10^3 to 3×10^4 M_sun. We derive a total mass estimate for SL 268 and SL 356. At large radii, the projected density profiles of SL 268 and SL 356 fall off as r^(-γ), with γ=2.27 and γ=3.44, respectively. Out of 4 pairs or multiple systems, 2 are older than the theoretical survival time of binary clusters (from a few 10^6 years to 10^8 years). A pair shows too large an age difference between the components to be consistent with classical theoretical models of binary cluster formation (Fujimoto & Kumai 1997). We refer to this as the "overmerging" problem. A different scenario is proposed: the formation proceeds in large molecular complexes giving birth to groups of clusters over a few 10^7 years. In these groups the expected cluster encounter rate is larger, and tidal capture has a higher probability. Cluster pairs are not born together through the splitting of the parent cloud, but are formed later by tidal capture. For 3 pairs, we tentatively identify the star cluster group (SCG) memberships. SCG formation, through the recent cluster starburst triggered by the LMC-SMC encounter, in contrast with the quiescent open cluster formation in the Milky Way, may explain the paucity of binary clusters observed in our Galaxy. Based on observations collected at the European Southern Observatory, La Silla, Chile.

  8. Activity of CERN and LNF groups on large area GEM detectors

    Energy Technology Data Exchange (ETDEWEB)

    Alfonsi, M. [CERN, Geneva (Switzerland); Bencivenni, G. [Laboratori Nazionali di Frascati dell' INFN, Frascati (Italy); Brock, I. [Physikalisches Institute der Universitat Bonn, Bonn (Germany); Cerioni, S. [Laboratori Nazionali di Frascati dell' INFN, Frascati (Italy); Croci, G.; David, E. [CERN, Geneva (Switzerland); De Lucia, E. [Laboratori Nazionali di Frascati dell' INFN, Frascati (Italy); De Oliveira, R. [CERN, Geneva (Switzerland); De Robertis, G. [Sezione INFN di Bari, Bari (Italy); Domenici, D., E-mail: Danilo.Domenici@lnf.infn.i [Laboratori Nazionali di Frascati dell' INFN, Frascati (Italy); Duarte Pinto, S. [CERN, Geneva (Switzerland); Felici, G.; Gatta, M.; Jacewicz, M. [Laboratori Nazionali di Frascati dell' INFN, Frascati (Italy); Loddo, F. [Sezione INFN di Bari, Bari (Italy); Morello, G. [Dipeartimento di Fisica Universita della Calabria e INFN, Cosenza (Italy); Pistilli, M. [Laboratori Nazionali di Frascati dell' INFN, Frascati (Italy); Ranieri, A. [Sezione INFN di Bari, Bari (Italy); Ropelewski, L. [CERN, Geneva (Switzerland); Sauli, F. [TERA Foundation, Novara (Italy)

    2010-05-21

    We report on the activity of the CERN and INFN-LNF groups on the development of large-area GEM detectors. The two groups work together within the RD51 Collaboration, which aims at the development of micro-pattern gas detector technologies. The strong demand for large-area foils by the GEM community has driven a change in the manufacturing procedure at the TS-DEM-PMT laboratory, needed to overcome the previous size limitation of 450×450 mm². A single-mask technology is now used, allowing foils to be made as large as 450×2000 mm². The limitation on the short side, due to the finite width of the raw material, can be overcome by splicing several foils together. A 10×10 cm² GEM detector with the new single-mask foil has been tested with X-rays and the results are shown. Possible future applications for large-area GEMs are the TOTEM experiment upgrade at CERN, and the KLOE-2 experiment at the DAΦNE Φ-factory in Frascati.

  9. Activity of CERN and LNF groups on large area GEM detectors

    International Nuclear Information System (INIS)

    Alfonsi, M.; Bencivenni, G.; Brock, I.; Cerioni, S.; Croci, G.; David, E.; De Lucia, E.; De Oliveira, R.; De Robertis, G.; Domenici, D.; Duarte Pinto, S.; Felici, G.; Gatta, M.; Jacewicz, M.; Loddo, F.; Morello, G.; Pistilli, M.; Ranieri, A.; Ropelewski, L.; Sauli, F.

    2010-01-01

    We report on the activity of the CERN and INFN-LNF groups on the development of large-area GEM detectors. The two groups work together within the RD51 Collaboration, which aims at the development of micro-pattern gas detector technologies. The strong demand for large-area foils from the GEM community has driven a change in the manufacturing procedure at the TS-DEM-PMT laboratory, needed to overcome the previous size limitation of 450×450 mm². A single-mask technology is now used, allowing foils to be made as large as 450×2000 mm². The limitation on the short side, due to the finite width of the raw material, can be overcome by splicing several foils together. A 10×10 cm² GEM detector built with the new single-mask foil has been tested with X-rays, and the results are shown. Possible future applications for large-area GEMs are the TOTEM experiment upgrade at CERN and the KLOE-2 experiment at the DAΦNE Φ-factory in Frascati.

  10. Finite Mixture Multilevel Multidimensional Ordinal IRT Models for Large Scale Cross-Cultural Research

    Science.gov (United States)

    de Jong, Martijn G.; Steenkamp, Jan-Benedict E. M.

    2010-01-01

    We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups of countries have different measurement operations, while…
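
In generic notation, a finite-mixture prior of the kind the abstract describes lets the item parameters ξ_i come from one of K latent classes of countries. This is a sketch of the generic form only, not the paper's full multilevel ordinal specification:

```latex
% Finite-mixture prior on item parameters \xi_i: each latent class k of
% countries has its own parameter distribution with mixing weight \pi_k.
p(\xi_i) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}\!\left(\xi_i \mid \mu_k, \Sigma_k\right),
\qquad \sum_{k=1}^{K} \pi_k = 1, \quad \pi_k \ge 0
```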

  11. Qualitative Analysis of Collaborative Learning Groups in Large Enrollment Introductory Astronomy

    Science.gov (United States)

    Skala, Chija; Slater, Timothy F.; Adams, Jeffrey P.

    2000-08-01

    Large-lecture introductory astronomy courses for undergraduate, non-science majors present numerous problems for faculty. As part of a systematic effort to improve the course learning environment, a series of small-group, collaborative learning activities were implemented in an otherwise conventional lecture astronomy survey course. These activities were used once each week during the regularly scheduled lecture period. After eight weeks, ten focus group interviews were conducted to qualitatively assess the impact and dynamics of these small group learning activities. Overall, the data strongly suggest that students enjoy participating in the in-class learning activities in learning teams of three to four students. These students firmly believe that they are learning more than they would from lectures alone. Inductive analysis of the transcripts revealed five major themes prevalent among the students' perspectives: (1) self-formed, cooperative group composition and formation should be more regulated by the instructor; (2) team members' assigned roles should be less formally structured by the instructors; (3) cooperative groups helped in learning the course content; (4) time constraints on lectures and activities need to be more carefully aligned; and (5) gender issues can exist within the groups. These themes serve as a guide for instructors who are developing instructional interventions for large lecture courses.

  12. Computer-aided polymer design using group contribution plus property models

    DEFF Research Database (Denmark)

    Satyanarayana, Kavitha Chelakara; Abildskov, Jens; Gani, Rafiqul

    2009-01-01

    The preliminary step for polymer product design is to identify the basic repeat unit structure of the polymer that matches the target properties. Computer-aided molecular design (CAMD) approaches can be applied for generating the polymer repeat unit structures that match the required constraints. Polymer repeat unit property prediction models are required to calculate the properties of the generated repeat units. A systematic framework incorporating recently developed group contribution plus (GC(+)) models and an extended CAMD technique to include design of polymer repeat units is highlighted in this paper. The advantage of a GC(+) model in CAMD applications is that a very large number of polymer structures can be considered even though some of the group parameters may not be available. A number of case studies involving different polymer design problems have been solved through the developed…
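
The core arithmetic of a group-contribution property estimate can be sketched as below: a repeat-unit property is approximated as a sum of contributions from its functional groups. The group names and contribution values here are hypothetical placeholders, not the published GC(+) parameter tables.

```python
# Sketch of a group-contribution property estimate: a repeat-unit property is
# approximated as the sum of contributions from its functional groups.
# CONTRIB values are invented for illustration, not real published parameters.
CONTRIB = {"-CH2-": 2.7, "-CHCl-": 54.5, "-C6H4-": 29.0}

def estimate_property(group_counts, contributions):
    """Sum group contributions weighted by how often each group occurs."""
    missing = [g for g in group_counts if g not in contributions]
    if missing:   # GC(+) can handle missing parameters; this sketch just reports them
        raise KeyError(f"no contribution parameters for: {missing}")
    return sum(n * contributions[g] for g, n in group_counts.items())

# A PVC-like repeat unit: one -CH2- and one -CHCl- group.
estimate = estimate_property({"-CH2-": 1, "-CHCl-": 1}, CONTRIB)
```

The same additive scheme extends to any property for which a group-parameter table exists; the GC(+) idea in the abstract is precisely that such tables cover far more candidate structures than measured data would.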

  13. Will Large DSO-Managed Group Practices Be the Predominant Setting for Oral Health Care by 2025? Two Viewpoints: Viewpoint 1: Large DSO-Managed Group Practices Will Be the Setting in Which the Majority of Oral Health Care Is Delivered by 2025 and Viewpoint 2: Increases in DSO-Managed Group Practices Will Be Offset by Models Allowing Dentists to Retain the Independence and Freedom of a Traditional Practice.

    Science.gov (United States)

    Cole, James R; Dodge, William W; Findley, John S; Young, Stephen K; Horn, Bruce D; Kalkwarf, Kenneth L; Martin, Max M; Winder, Ronald L

    2015-05-01

    This Point/Counterpoint article discusses the transformation of dental practice from the traditional solo/small-group (partnership) model of the 1900s to large Dental Support Organizations (DSO) that support affiliated dental practices by providing nonclinical functions such as, but not limited to, accounting, human resources, marketing, and legal and practice management. Many feel that DSO-managed group practices (DMGPs) with employed providers will become the setting in which the majority of oral health care will be delivered in the future. Viewpoint 1 asserts that the traditional dental practice patterns of the past are shifting as many younger dentists gravitate toward employed positions in large group practices or the public sector. Although educational debt is relevant in predicting graduates' practice choices, other variables such as gender, race, and work-life balance play critical roles as well. Societal characteristics demonstrated by aging Gen Xers and those in the Millennial generation blend seamlessly with the opportunities DMGPs offer their employees. Viewpoint 2 contends the traditional model of dental care delivery-allowing entrepreneurial practitioners to make decisions in an autonomous setting-is changing but not to the degree nor as rapidly as Viewpoint 1 professes. Millennials entering the dental profession, with characteristics universally attributed to their generation, see value in the independence and flexibility that a traditional practice allows. Although DMGPs provide dentists one option for practice, several alternative delivery models offer current dentists and future dental school graduates many of the advantages of DMGPs while allowing them to maintain the independence and freedom a traditional practice provides.

  14. A Large Group Decision Making Approach Based on TOPSIS Framework with Unknown Weights Information

    OpenAIRE

    Li Yupeng; Lian Xiaozhen; Lu Cheng; Wang Zhaotong

    2017-01-01

    Large group decision making considering multiple attributes is imperative in many decision areas. The weights of the decision makers (DMs) are difficult to obtain when the number of DMs is large. To cope with this issue, an integrated multiple-attribute large group decision making framework is proposed in this article. The fuzziness and hesitation of the linguistic decision variables are described by interval-valued intuitionistic fuzzy sets. The weights of the DMs are optimized by constructing a...
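
As a point of reference, the crisp TOPSIS ranking that such frameworks build on can be sketched as follows. This uses plain numeric scores and known weights, whereas the article works with interval-valued intuitionistic fuzzy sets and optimizes the DM weights:

```python
import numpy as np

def topsis(matrix, weights):
    """Rank alternatives by relative closeness to the ideal solution.

    matrix: rows = alternatives, columns = benefit criteria (higher is better).
    weights: one non-negative weight per criterion.
    """
    m = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    v = (m / np.linalg.norm(m, axis=0)) * w      # weighted, vector-normalised scores
    ideal, anti = v.max(axis=0), v.min(axis=0)   # ideal best / ideal worst points
    d_best = np.linalg.norm(v - ideal, axis=1)
    d_worst = np.linalg.norm(v - anti, axis=1)
    return d_worst / (d_best + d_worst)          # closeness coefficient in [0, 1]

closeness = topsis([[7, 9, 9], [8, 7, 8], [9, 6, 8]], [0.5, 0.3, 0.2])
```

The alternative with the highest closeness coefficient is ranked first; an alternative that dominates on every criterion gets closeness 1.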

  15. Group Capability Model

    Science.gov (United States)

    Olejarski, Michael; Appleton, Amy; Deltorchio, Stephen

    2009-01-01

    The Group Capability Model (GCM) is a software tool that allows an organization, from first-line management to senior executives, to monitor and track the health (capability) of various groups in performing their contractual obligations. GCM calculates a Group Capability Index (GCI) by comparing actual head counts, certifications, and/or skills within a group against its requirements. The model can also be used to simulate the effects of employee usage, training, and attrition on the GCI. A universal tool and common method were required due to the high risk of losing skills necessary to complete the Space Shuttle Program and meet the needs of the Constellation Program. During this transition from one space vehicle to another, the uncertainty among the critical skilled workforce is high and attrition has the potential to be unmanageable. GCM allows managers to establish requirements for their group in the form of head counts, certification requirements, or skills requirements. GCM then calculates the GCI, where a score of 1 indicates that the group is at the appropriate level; anything less than 1 indicates a potential for improvement. This shows the health of a group, both currently and over time. GCM accepts as input head count, certification needs, critical needs, competency needs, and competency critical needs. In addition, team members are categorized by years of experience, percentage of contribution, ex-members and their skills, availability, function, and in-work requirements. Outputs are several reports, including actual vs. required head count, actual vs. required certificates, GCI change over time (by month), and more. The program stores historical data for summary and historical reporting, which is done via an Excel spreadsheet that is color-coded to show health statistics at a glance. GCM has provided the Shuttle Ground Processing team with a quantifiable, repeatable approach to assessing and managing the skills in their organization. They now have a common…
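
A minimal sketch of such a capability index is shown below: the per-requirement ratio of actual to required counts, capped at 1 so surplus in one skill cannot hide a shortfall in another, then averaged. The capping and averaging choices are assumptions for illustration, not the tool's documented method.

```python
# Minimal Group Capability Index sketch (capping and averaging are assumptions,
# not the documented GCM algorithm).
def group_capability_index(required, actual):
    """required/actual map each head-count, certification, or skill
    requirement to a count; a GCI of 1 means every requirement is met,
    anything below 1 flags a potential gap."""
    ratios = [min(actual.get(k, 0) / need, 1.0)
              for k, need in required.items() if need > 0]
    return sum(ratios) / len(ratios)

gci = group_capability_index({"engineers": 10, "certified_welders": 4},
                             {"engineers": 12, "certified_welders": 3})
```

Here the surplus of engineers does not compensate for the welder shortfall, so the index comes out below 1.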

  16. Sutherland models for complex reflection groups

    International Nuclear Information System (INIS)

    Crampe, N.; Young, C.A.S.

    2008-01-01

    There are known to be integrable Sutherland models associated to every real root system or, which is almost equivalent, to every real reflection group. Real reflection groups are special cases of complex reflection groups. In this paper we associate certain integrable Sutherland models to the classical family of complex reflection groups. Internal degrees of freedom are introduced, defining dynamical spin chains, and the freezing limit is taken to obtain static chains of Haldane-Shastry type. By considering the relation of these models to the usual BC_N case, we are led to systems with both real and complex reflection groups as symmetries. We demonstrate their integrability by means of new Dunkl operators, associated to wreath products of dihedral groups.

  17. Large and small sets with respect to homomorphisms and products of groups

    Directory of Open Access Journals (Sweden)

    Riccardo Gusso

    2002-10-01

    Full Text Available We study the behaviour of large, small and medium subsets with respect to homomorphisms and products of groups. We then introduce the definition of a P-small set in abelian groups and investigate the relations between this kind of smallness and the previous one, giving some examples that distinguish them.

  18. Integrable lattice models and quantum groups

    International Nuclear Information System (INIS)

    Saleur, H.; Zuber, J.B.

    1990-01-01

    These lectures aim at introducing some basic algebraic concepts on lattice integrable models, in particular quantum groups, and at discussing some connections with knot theory and conformal field theories. The list of contents is: Vertex models and Yang-Baxter equation; Quantum sl(2) algebra and the Yang-Baxter equation; U_q(sl(2)) as a symmetry of statistical mechanical models; Face models; Face models attached to graphs; Yang-Baxter equation, braid group and link polynomials
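
The Yang-Baxter equation that recurs throughout this contents list is, in its standard spectral-parameter form:

```latex
% Yang-Baxter equation on V \otimes V \otimes V, with R_{ij}(u) acting on
% tensor factors i and j; u and v are spectral parameters.
R_{12}(u)\, R_{13}(u+v)\, R_{23}(v) \;=\; R_{23}(v)\, R_{13}(u+v)\, R_{12}(u)
```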

  19. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  20. Efficient querying of large process model repositories

    NARCIS (Netherlands)

    Jin, Tao; Wang, Jianmin; La Rosa, M.; Hofstede, ter A.H.M.; Wen, Lijie

    2013-01-01

    Recent years have seen an increased uptake of business process management technology in industries. This has resulted in organizations trying to manage large collections of business process models. One of the challenges facing these organizations concerns the retrieval of models from large business

  1. Diversity in the representation of large-scale circulation associated with ENSO-Indian summer monsoon teleconnections in CMIP5 models

    Science.gov (United States)

    Ramu, Dandi A.; Chowdary, Jasti S.; Ramakrishna, S. S. V. S.; Kumar, O. S. R. U. B.

    2018-04-01

    Realistic simulation of large-scale circulation patterns associated with El Niño-Southern Oscillation (ENSO) is vital in coupled models in order to represent teleconnections to different regions of the globe. The diversity in representing large-scale circulation patterns associated with ENSO-Indian summer monsoon (ISM) teleconnections in 23 Coupled Model Intercomparison Project Phase 5 (CMIP5) models is examined. CMIP5 models have been classified into three groups based on the correlation between the Niño3.4 sea surface temperature (SST) index and ISM rainfall anomalies: models in group 1 (G1) overestimated El Niño-ISM teleconnections and group 3 (G3) models underestimated them, whereas these teleconnections are better represented in group 2 (G2) models. Results show that in G1 models, El Niño-induced Tropical Indian Ocean (TIO) SST anomalies are not well represented. Anomalous low-level anticyclonic circulation anomalies over the southeastern TIO and western subtropical northwest Pacific (WSNP) cyclonic circulation are shifted too far west, to 60° E and 120° E, respectively. This bias in circulation patterns implies dry wind advection from the extratropics/midlatitudes to the Indian subcontinent. In addition, large-scale upper-level convergence together with lower-level divergence over the ISM region corresponding to El Niño is stronger in G1 models than in observations. Thus, an unrealistic shift in low-level circulation centers, corroborated by upper-level circulation changes, is responsible for the overestimation of ENSO-ISM teleconnections in G1 models. Warm Pacific SST anomalies associated with El Niño are shifted too far west in many G3 models, unlike in the observations. Further, large-scale circulation anomalies over the Pacific and ISM region are misrepresented during El Niño years in G3 models. Too strong upper-level convergence away from the Indian subcontinent and too weak WSNP cyclonic circulation are prominent in most of the G3 models in which ENSO-ISM teleconnections are…
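
The grouping criterion described above can be sketched as a correlation comparison: classify each model by how its Niño3.4 SST vs ISM-rainfall correlation compares with an observed reference. The reference value (-0.55) and tolerance below are illustrative assumptions, not the paper's exact thresholds.

```python
import numpy as np

def classify_models(nino34, ismr, obs_corr=-0.55, tol=0.15):
    """Group models by their Nino3.4-ISM rainfall correlation relative to an
    observed reference (obs_corr and tol are illustrative, not from the paper)."""
    groups = {}
    for name in nino34:
        r = np.corrcoef(nino34[name], ismr[name])[0, 1]
        if r < obs_corr - tol:
            groups[name] = "G1"   # too strongly negative: overestimates the teleconnection
        elif r > obs_corr + tol:
            groups[name] = "G3"   # too weak: underestimates it
        else:
            groups[name] = "G2"   # close to observed
    return groups
```

In practice the indices would be area-averaged seasonal anomaly time series; here any pair of equal-length series works.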

  2. Modeling perceptual grouping and figure-ground segregation by means of active reentrant connections.

    OpenAIRE

    Sporns, O; Tononi, G; Edelman, G M

    1991-01-01

    The segmentation of visual scenes is a fundamental process of early vision, but the underlying neural mechanisms are still largely unknown. Theoretical considerations as well as neurophysiological findings point to the importance in such processes of temporal correlations in neuronal activity. In a previous model, we showed that reentrant signaling among rhythmically active neuronal groups can correlate responses along spatially extended contours. We now have modified and extended this model ...

  3. Constituent models and large transverse momentum reactions

    International Nuclear Information System (INIS)

    Brodsky, S.J.

    1975-01-01

    The discussion of constituent models and large transverse momentum reactions includes the structure of hard scattering models, dimensional counting rules for large transverse momentum reactions, dimensional counting and exclusive processes, the deuteron form factor, applications to inclusive reactions, predictions for meson and photon beams, the charge-cubed test for the e±p → e±γX asymmetry, the quasi-elastic peak in inclusive hadronic reactions, correlations, and the multiplicity bump at large transverse momentum. Also covered are the partition method for bound state calculations, proofs of dimensional counting, minimal neutralization and quark-quark scattering, the development of the constituent interchange model, and the A dependence of high transverse momentum reactions
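
In standard notation, the dimensional counting rules listed above read as follows for a fixed-angle exclusive process and for a hadron form factor:

```latex
% Fixed-angle exclusive scattering AB -> CD, where n is the total number of
% elementary constituent fields in A, B, C and D, and f depends only on the
% centre-of-mass scattering angle:
\frac{d\sigma}{dt}(AB \to CD) \;\sim\; \frac{f(\theta_{\mathrm{cm}})}{s^{\,n-2}}
% Form factor of a hadron containing n constituents:
F(Q^2) \;\sim\; \left(Q^2\right)^{1-n}
```

For the proton (n = 3) the second rule gives the familiar F(Q²) ~ 1/Q⁴ scaling.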

  4. Large Scale Computing for the Modelling of Whole Brain Connectivity

    DEFF Research Database (Denmark)

    Albers, Kristoffer Jon

    …organization of the brain in continuously increasing resolution. From these images, networks of structural and functional connectivity can be constructed. Bayesian stochastic block modelling provides a prominent data-driven approach for uncovering the latent organization, by clustering the networks into groups of neurons. Relying on Markov Chain Monte Carlo (MCMC) simulations as the workhorse in Bayesian inference however poses significant computational challenges, especially when modelling networks at the scale and complexity supported by high-resolution whole-brain MRI. In this thesis, we present how to overcome these computational limitations and apply Bayesian stochastic block models for unsupervised data-driven clustering of whole-brain connectivity in full image resolution. We implement high-performance software that allows us to efficiently apply stochastic blockmodelling with MCMC sampling on large complex networks…

  5. Shell model in large spaces and statistical spectroscopy

    International Nuclear Information System (INIS)

    Kota, V.K.B.

    1996-01-01

    For many nuclear structure problems of current interest it is essential to deal with shell model in large spaces. For this, three different approaches are now in use and two of them are: (i) the conventional shell model diagonalization approach but taking into account new advances in computer technology; (ii) the shell model Monte Carlo method. A brief overview of these two methods is given. Large space shell model studies raise fundamental questions regarding the information content of the shell model spectrum of complex nuclei. This led to the third approach- the statistical spectroscopy methods. The principles of statistical spectroscopy have their basis in nuclear quantum chaos and they are described (which are substantiated by large scale shell model calculations) in some detail. (author)

  6. Assessing Activity and Location of Individual Laying Hens in Large Groups Using Modern Technology.

    Science.gov (United States)

    Siegford, Janice M; Berezowski, John; Biswas, Subir K; Daigle, Courtney L; Gebhardt-Henrich, Sabine G; Hernandez, Carlos E; Thurner, Stefan; Toscano, Michael J

    2016-02-02

    Tracking individual animals within large groups is increasingly possible, offering an exciting opportunity to researchers. Whereas previously only relatively indistinguishable groups of individual animals could be observed and combined into pen level data, we can now focus on individual actors within these large groups and track their activities across time and space with minimal intervention and disturbance. The development is particularly relevant to the poultry industry as, due to a shift away from battery cages, flock sizes are increasingly becoming larger and environments more complex. Many efforts have been made to track individual bird behavior and activity in large groups using a variety of methodologies with variable success. Of the technologies in use, each has associated benefits and detriments, which can make the approach more or less suitable for certain environments and experiments. Within this article, we have divided several tracking systems that are currently available into two major categories (radio frequency identification and radio signal strength) and review the strengths and weaknesses of each, as well as environments or conditions for which they may be most suitable. We also describe related topics including types of analysis for the data and concerns with selecting focal birds.
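
For the radio-signal-strength category, the simplest location estimate assigns a bird to the zone whose receiver reports the strongest signal. This is a toy sketch with invented receiver names and RSSI values, not a description of any particular commercial system:

```python
# Toy zone assignment for radio-signal-strength tracking: the tagged hen is
# placed in the zone whose receiver reports the strongest (least negative)
# RSSI. Receiver names and dBm values are invented for illustration.
def locate(rssi_by_receiver):
    """Return the zone with the strongest signal."""
    return max(rssi_by_receiver, key=rssi_by_receiver.get)

zone = locate({"nest_box": -71, "litter_area": -54, "perch": -63})
```

Real systems smooth such estimates over time, since multipath and body attenuation make instantaneous RSSI noisy.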

  7. Assessing Activity and Location of Individual Laying Hens in Large Groups Using Modern Technology

    Directory of Open Access Journals (Sweden)

    Janice M. Siegford

    2016-02-01

    Full Text Available Tracking individual animals within large groups is increasingly possible, offering an exciting opportunity to researchers. Whereas previously only relatively indistinguishable groups of individual animals could be observed and combined into pen level data, we can now focus on individual actors within these large groups and track their activities across time and space with minimal intervention and disturbance. The development is particularly relevant to the poultry industry as, due to a shift away from battery cages, flock sizes are increasingly becoming larger and environments more complex. Many efforts have been made to track individual bird behavior and activity in large groups using a variety of methodologies with variable success. Of the technologies in use, each has associated benefits and detriments, which can make the approach more or less suitable for certain environments and experiments. Within this article, we have divided several tracking systems that are currently available into two major categories (radio frequency identification and radio signal strength) and review the strengths and weaknesses of each, as well as environments or conditions for which they may be most suitable. We also describe related topics including types of analysis for the data and concerns with selecting focal birds.

  8. Does the interpersonal model apply across eating disorder diagnostic groups? A structural equation modeling approach.

    Science.gov (United States)

    Ivanova, Iryna V; Tasca, Giorgio A; Proulx, Geneviève; Bissada, Hany

    2015-11-01

    The interpersonal model has been validated with binge-eating disorder (BED), but it is not yet known whether the model applies across a range of eating disorders (ED). The goal of this study was to investigate the validity of the interpersonal model in anorexia nervosa (restricting type, ANR, and binge-eating/purge type, ANBP), bulimia nervosa (BN), BED, and eating disorder not otherwise specified (EDNOS). Data from a cross-sectional sample of 1459 treatment-seeking women diagnosed with ANR, ANBP, BN, BED and EDNOS were examined for indirect effects of interpersonal problems on ED psychopathology mediated through negative affect. Findings from structural equation modeling demonstrated the mediating role of negative affect in four of the five diagnostic groups. There were significant, medium to large (.239 to .558) indirect effects in the ANR, BN, BED and EDNOS groups but not in the ANBP group. The results of the first reverse model, of interpersonal problems as a mediator between negative affect and ED psychopathology, were nonsignificant, suggesting the specificity of these hypothesized paths. However, in the second reverse model ED psychopathology was related to interpersonal problems indirectly through negative affect. This is the first study to find support for the interpersonal model of ED in a clinical sample of women with diverse ED diagnoses, though there may be a reciprocal relationship between ED psychopathology and relationship problems through negative affect. Negative affect partially explains the relationship between interpersonal problems and ED psychopathology in women diagnosed with ANR, BN, BED and EDNOS. Interpersonal psychotherapies for ED may be addressing the underlying interpersonal-affective difficulties, thereby reducing ED psychopathology. Copyright © 2015 Elsevier Inc. All rights reserved.
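
The indirect-effect logic tested here (interpersonal problems → negative affect → ED psychopathology) can be illustrated with a toy product-of-coefficients computation on simulated data. Effect sizes below are made up for the sketch; a real analysis would fit a full SEM with bootstrap confidence intervals.

```python
# Toy mediation sketch: the indirect effect is the product of the a-path
# (predictor -> mediator) and b-path (mediator -> outcome, predictor
# controlled). All effect sizes are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 500
interpersonal = rng.normal(size=n)
neg_affect = 0.6 * interpersonal + rng.normal(scale=0.8, size=n)                    # a-path
ed_sympt = 0.5 * neg_affect + 0.1 * interpersonal + rng.normal(scale=0.8, size=n)   # b-path + direct

a = np.polyfit(interpersonal, neg_affect, 1)[0]              # regress mediator on predictor
X = np.column_stack([neg_affect, interpersonal, np.ones(n)])
b = np.linalg.lstsq(X, ed_sympt, rcond=None)[0][0]           # outcome on mediator, predictor controlled
indirect = a * b   # recovers roughly 0.6 * 0.5 = 0.3, up to sampling noise
```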

  9. Establishing Peer Mentor-Led Writing Groups in Large First-Year Courses

    Science.gov (United States)

    Marcoux, Sarah; Marken, Liv; Yu, Stan

    2012-01-01

    This paper describes the results of a pilot project designed to improve students' academic writing in a large (200-student) first-year Agriculture class at the University of Saskatchewan. In collaboration with the course's professor, the Writing Centre coordinator and a summer student designed curriculum for four two-hour Writing Group sessions…

  10. Modeling Perceptual Grouping and Figure-Ground Segregation by Means of Active Reentrant Connections

    Science.gov (United States)

    Sporns, Olaf; Tononi, Giulio; Edelman, Gerald M.

    1991-01-01

    The segmentation of visual scenes is a fundamental process of early vision, but the underlying neural mechanisms are still largely unknown. Theoretical considerations as well as neurophysiological findings point to the importance in such processes of temporal correlations in neuronal activity. In a previous model, we showed that reentrant signaling among rhythmically active neuronal groups can correlate responses along spatially extended contours. We now have modified and extended this model to address the problems of perceptual grouping and figure-ground segregation in vision. A novel feature is that the efficacy of the connections is allowed to change on a fast time scale. This results in active reentrant connections that amplify the correlations among neuronal groups. The responses of the model are able to link the elements corresponding to a coherent figure and to segregate them from the background or from another figure in a way that is consistent with the so-called Gestalt laws.

  11. Modeling perceptual grouping and figure-ground segregation by means of active reentrant connections.

    Science.gov (United States)

    Sporns, O; Tononi, G; Edelman, G M

    1991-01-01

    The segmentation of visual scenes is a fundamental process of early vision, but the underlying neural mechanisms are still largely unknown. Theoretical considerations as well as neurophysiological findings point to the importance in such processes of temporal correlations in neuronal activity. In a previous model, we showed that reentrant signaling among rhythmically active neuronal groups can correlate responses along spatially extended contours. We now have modified and extended this model to address the problems of perceptual grouping and figure-ground segregation in vision. A novel feature is that the efficacy of the connections is allowed to change on a fast time scale. This results in active reentrant connections that amplify the correlations among neuronal groups. The responses of the model are able to link the elements corresponding to a coherent figure and to segregate them from the background or from another figure in a way that is consistent with the so-called Gestalt laws.

  12. Large-scale modelling of neuronal systems

    International Nuclear Information System (INIS)

    Castellani, G.; Verondini, E.; Giampieri, E.; Bersani, F.; Remondini, D.; Milanesi, L.; Zironi, I.

    2009-01-01

    The brain is, without any doubt, the most complex system of the human body. Its complexity is due in part to the extremely high number of neurons, as well as the huge number of synapses connecting them. Each neuron is capable of performing complex tasks, like learning and memorizing a large class of patterns. The simulation of large neuronal systems is challenging for both technological and computational reasons, and can open new perspectives for the comprehension of brain functioning. A well-known and widely accepted model of bidirectional synaptic plasticity, the BCM model, is stated as a differential equation approach based on bistability and selectivity properties. We have modified the BCM model, extending it from a single-neuron to a whole-network model. This new model is capable of generating interesting network topologies starting from a small number of local parameters describing the interaction between incoming and outgoing links from each neuron. We have characterized this model in terms of complex network theory, showing how this learning rule can support network generation.
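
A single-synapse sketch of the BCM rule referenced above: the weight change is proportional to u·v·(v − θ), and the modification threshold θ slides toward the running average of v². Step sizes and constants here are invented for illustration, not taken from the paper.

```python
# Single-synapse BCM sketch: LTP when postsynaptic activity v exceeds the
# sliding threshold theta, LTD when it falls below. eta, tau, dt are
# illustrative constants, not the paper's parameters.
def bcm_step(w, theta, u, eta=0.01, tau=50.0, dt=1.0):
    v = w * u                                      # postsynaptic activity (linear unit)
    w = w + dt * eta * u * v * (v - theta)         # LTP if v > theta, LTD if v < theta
    theta = theta + dt * (v * v - theta) / tau     # threshold slides toward <v^2>
    return w, theta

w, theta = 0.5, 0.1
for _ in range(10):        # constant input; activity starts above threshold
    w, theta = bcm_step(w, theta, u=1.0)
# expect potentiation: w has grown and theta has started sliding upward
```

The sliding threshold is what gives BCM its bistability and selectivity: sustained high activity eventually raises θ, stabilising the weight instead of letting it grow without bound.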

  13. Drivers Advancing Oral Health in a Large Group Dental Practice Organization.

    Science.gov (United States)

    Simmons, Kristen; Gibson, Stephanie; White, Joel M

    2016-06-01

    Three change drivers are being implemented to achieve high standards of patient-centric and evidence-based oral health care within the context of a large multispecialty dental group practice organization, based on the commitment of the dental hygienist chief operating officer and her team. A recent environmental scan elucidated 6 change drivers that can impact the provision of oral health care. Practitioners who can embrace and maximize aspects of these change drivers will move dentistry forward and create future opportunities. This article explains how 3 of these change drivers are being applied in a privately held, accountable risk-bearing entity that provides individualized treatment programs for more than 417,000 members. To facilitate integration of the conceptual changes related to the drivers, a multi-institutional, multidisciplinary, highly functioning collaborative work group was formed. The document Dental Hygiene at a Crossroads for Change(1) inspired the first author, a dental hygienist in a unique position as chief operating officer of a large group practice, to pursue evidence-based organizational change and to impact the quality of patient care. This was accomplished by implementing technological advances including dental diagnosis terminology in the electronic health record, clinical decision support, standardized treatment guidelines, quality metrics, and patient engagement to improve oral health outcomes at the patient and population levels. The systems and processes used to implement 3 change drivers in a large multi-practice dental setting are presented to inform and inspire others to implement change drivers with the potential for advancing oral health. Technology implementing best practices and improving patient engagement are excellent drivers to advance oral health and are an effective use of oral health care dollars. Improved oral health can be leveraged through technological advances to improve clinical practice. Copyright © 2016 Elsevier Inc

  14. Using Agent Base Models to Optimize Large Scale Network for Large System Inventories

    Science.gov (United States)

    Shameldin, Ramez Ahmed; Bowling, Shannon R.

    2010-01-01

    The aim of this paper is to use Agent-Based Models (ABM) to optimize large-scale network handling capabilities for large system inventories and to implement strategies for the purpose of reducing capital expenses. The models used in this paper rely on computational algorithms and procedures implemented in Matlab to simulate agent-based models in a principal programming language and mathematical theory using clusters; these clusters act as a high-performance computing platform to run the program in parallel. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.

  15. Disinformative data in large-scale hydrological modelling

    Directory of Open Access Journals (Sweden)

    A. Kauffeldt

    2013-07-01

    Full Text Available Large-scale hydrological modelling has become an important tool for the study of global and regional water resources, climate impacts, and water-resources management. However, modelling efforts over large spatial domains are fraught with problems of data scarcity, uncertainties and inconsistencies between model forcing and evaluation data. Model-independent methods to screen and analyse data for such problems are needed. This study aimed at identifying data inconsistencies in global datasets using a pre-modelling analysis, inconsistencies that can be disinformative for subsequent modelling. The consistency between (i basin areas for different hydrographic datasets, and (ii between climate data (precipitation and potential evaporation and discharge data, was examined in terms of how well basin areas were represented in the flow networks and the possibility of water-balance closure. It was found that (i most basins could be well represented in both gridded basin delineations and polygon-based ones, but some basins exhibited large area discrepancies between flow-network datasets and archived basin areas, (ii basins exhibiting too-high runoff coefficients were abundant in areas where precipitation data were likely affected by snow undercatch, and (iii the occurrence of basins exhibiting losses exceeding the potential-evaporation limit was strongly dependent on the potential-evaporation data, both in terms of numbers and geographical distribution. Some inconsistencies may be resolved by considering sub-grid variability in climate data, surface-dependent potential-evaporation estimates, etc., but further studies are needed to determine the reasons for the inconsistencies found. Our results emphasise the need for pre-modelling data analysis to identify dataset inconsistencies as an important first step in any large-scale study. 
Applying data-screening methods before modelling should also increase our chances to draw robust conclusions from subsequent modelling.
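The water-balance screens described in this record reduce to two simple checks per basin: a runoff coefficient (discharge over precipitation) above one, and losses exceeding the potential-evaporation limit. A minimal sketch, with invented basin values rather than data from the study:

```python
# Hypothetical pre-modelling screen in the spirit of the study: flag basins
# whose runoff coefficient (Q/P) exceeds 1, or whose losses (P - Q) exceed
# the potential-evaporation (PET) limit. All numbers are illustrative.

def screen_basin(p_mm, q_mm, pet_mm):
    """Return water-balance inconsistency flags for one basin (long-term mm/yr)."""
    flags = []
    if q_mm > p_mm:                # runoff coefficient > 1: more discharge than rain
        flags.append("runoff_coefficient_gt_1")
    if (p_mm - q_mm) > pet_mm:     # losses exceed what evaporation could remove
        flags.append("losses_exceed_pet")
    return flags

basins = {
    "A": (800.0, 900.0, 700.0),   # snow-undercatch suspect: Q > P
    "B": (1200.0, 300.0, 600.0),  # losses of 900 mm exceed PET of 600 mm
    "C": (1000.0, 400.0, 700.0),  # consistent
}

for name, (p, q, pet) in basins.items():
    print(name, screen_basin(p, q, pet))
```

Basins flagged this way would be treated as potentially disinformative before any model calibration.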

  16. Comparing linear probability model coefficients across groups

    DEFF Research Database (Denmark)

    Holm, Anders; Ejrnæs, Mette; Karlson, Kristian Bernt

    2015-01-01

This article offers a formal identification analysis of the problem in comparing coefficients from linear probability models between groups. We show that differences in coefficients from these models can result not only from genuine differences in effects, but also from differences in one or more of the following three components: outcome truncation, scale parameters and distributional shape of the predictor variable. These results point to limitations in using linear probability model coefficients for group comparisons. We also provide Monte Carlo simulations and real examples to illustrate these limitations, and we suggest a restricted approach to using linear probability model coefficients in group comparisons.
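The scale-parameter problem the abstract identifies can be reproduced in a few lines: two groups share the same latent effect, but a larger residual scale in one group flattens its linear probability model slope. A simulation sketch (my illustration, not the authors' Monte Carlo design):

```python
# Two groups with identical latent effect (beta = 1) but different residual
# scales; the fitted LPM slopes differ even though the "true" effect is equal.
import numpy as np

def lpm_slope(x, y):
    """OLS slope of a binary outcome y on a single regressor x."""
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[1]

rng = np.random.default_rng(0)
n = 50_000
x = rng.standard_normal(n)

y_g1 = (1.0 * x + 1.0 * rng.standard_normal(n) > 0).astype(float)  # scale 1
y_g2 = (1.0 * x + 2.0 * rng.standard_normal(n) > 0).astype(float)  # scale 2

b1, b2 = lpm_slope(x, y_g1), lpm_slope(x, y_g2)
print(f"group 1 slope: {b1:.3f}, group 2 slope: {b2:.3f}")  # b1 > b2
```

The slope gap here is purely an artifact of residual scale, which is exactly why the article cautions against naive cross-group comparisons.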

  17. Regularization modeling for large-eddy simulation

    NARCIS (Netherlands)

    Geurts, Bernardus J.; Holm, D.D.

    2003-01-01

    A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of

  18. Long-Term Calculations with Large Air Pollution Models

    DEFF Research Database (Denmark)

    Ambelas Skjøth, C.; Bastrup-Birk, A.; Brandt, J.

    1999-01-01

Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  19. Long-term resource variation and group size: A large-sample field test of the Resource Dispersion Hypothesis

    Directory of Open Access Journals (Sweden)

    Morecroft Michael D

    2001-07-01

Full Text Available Abstract Background The Resource Dispersion Hypothesis (RDH) proposes a mechanism for the passive formation of social groups where resources are dispersed, even in the absence of any benefits of group living per se. Despite supportive modelling, it lacks empirical testing. The RDH predicts that, rather than Territory Size (TS) increasing monotonically with Group Size (GS) to account for increasing metabolic needs, TS is constrained by the dispersion of resource patches, whereas GS is independently limited by their richness. We conducted multiple-year tests of these predictions using data from the long-term study of badgers (Meles meles) in Wytham Woods, England. The study has long failed to identify direct benefits from group living and, consequently, alternative explanations for their large group sizes have been sought. Results TS was not consistently related to resource dispersion, nor was GS consistently related to resource richness. Results differed according to data groupings and whether territories were mapped using minimum convex polygons or traditional methods. Habitats differed significantly in resource availability, but there was also evidence that food resources may be spatially aggregated within habitat types as well as between them. Conclusions This is, we believe, the largest ever test of the RDH and builds on the long-term project that initiated part of the thinking behind the hypothesis. Support for the predictions was mixed and depended on year and the method used to map territory borders. We suggest that within-habitat patchiness, as well as model assumptions, should be further investigated for improved tests of the RDH in the future.

  20. Hydrological-niche models predict water plant functional group distributions in diverse wetland types.

    Science.gov (United States)

    Deane, David C; Nicol, Jason M; Gehrig, Susan L; Harding, Claire; Aldridge, Kane T; Goodman, Abigail M; Brookes, Justin D

    2017-06-01

Human use of water resources threatens environmental water supplies. If resource managers are to develop policies that avoid unacceptable ecological impacts, some means to predict ecosystem response to changes in water availability is necessary. This is difficult to achieve at spatial scales relevant for water resource management because of the high natural variability in ecosystem hydrology and ecology. Water plant functional groups classify species with similar hydrological niche preferences together, allowing a qualitative means to generalize community responses to changes in hydrology. We tested the potential of functional groups for making quantitative predictions of water plant functional group distributions across diverse wetland types over a large geographical extent. We sampled wetlands covering a broad range of hydrogeomorphic and salinity conditions in South Australia, collecting both hydrological and floristic data from 687 quadrats across 28 wetland hydrological gradients. We built hydrological-niche models for eight water plant functional groups using a range of candidate models combining different surface inundation metrics. We then tested the predictive performance of top-ranked individual and averaged models for each functional group. Cross-validation showed that models achieved acceptable predictive performance, with correct classification rates in the range 0.68-0.95. Model predictions can be made at any spatial scale for which hydrological data are available and could be implemented in a geographical information system. We show that the response of water plant functional groups to inundation is consistent enough across diverse wetland types to quantify the probability of hydrological impacts over regional spatial scales. © 2017 by the Ecological Society of America.
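The evaluation metric quoted in this record, the correct classification rate (CCR), is straightforward to compute from predicted occurrence probabilities and observed presences. A minimal sketch with invented quadrat data (not the study's):

```python
# Correct classification rate for one functional group: the fraction of
# quadrats where the thresholded model prediction matches the observation.
# Probabilities and observations below are illustrative placeholders.

def correct_classification_rate(pred_prob, observed, threshold=0.5):
    """Fraction of quadrats where thresholded prediction matches observation."""
    hits = sum((p >= threshold) == bool(o) for p, o in zip(pred_prob, observed))
    return hits / len(observed)

# Predicted occurrence probabilities vs. observed presences for 8 quadrats.
probs = [0.9, 0.8, 0.2, 0.6, 0.1, 0.3, 0.7, 0.4]
obs   = [1,   1,   0,   1,   0,   1,   1,   0]

print(correct_classification_rate(probs, obs))  # 0.875 for these values
```

Cross-validated CCR in the 0.68-0.95 range, as reported, means roughly 7 to 19 of every 20 held-out quadrats were classified correctly.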

  1. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminant impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications.

  2. Spatial occupancy models for large data sets

    Science.gov (United States)

    Johnson, Devin S.; Conn, Paul B.; Hooten, Mevin B.; Ray, Justina C.; Pond, Bruce A.

    2013-01-01

    Since its development, occupancy modeling has become a popular and useful tool for ecologists wishing to learn about the dynamics of species occurrence over time and space. Such models require presence–absence data to be collected at spatially indexed survey units. However, only recently have researchers recognized the need to correct for spatially induced overdisperison by explicitly accounting for spatial autocorrelation in occupancy probability. Previous efforts to incorporate such autocorrelation have largely focused on logit-normal formulations for occupancy, with spatial autocorrelation induced by a random effect within a hierarchical modeling framework. Although useful, computational time generally limits such an approach to relatively small data sets, and there are often problems with algorithm instability, yielding unsatisfactory results. Further, recent research has revealed a hidden form of multicollinearity in such applications, which may lead to parameter bias if not explicitly addressed. Combining several techniques, we present a unifying hierarchical spatial occupancy model specification that is particularly effective over large spatial extents. This approach employs a probit mixture framework for occupancy and can easily accommodate a reduced-dimensional spatial process to resolve issues with multicollinearity and spatial confounding while improving algorithm convergence. Using open-source software, we demonstrate this new model specification using a case study involving occupancy of caribou (Rangifer tarandus) over a set of 1080 survey units spanning a large contiguous region (108 000 km2) in northern Ontario, Canada. Overall, the combination of a more efficient specification and open-source software allows for a facile and stable implementation of spatial occupancy models for large data sets.
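The probit formulation this record describes replaces the logit-normal random effect with a probit link plus a reduced-dimensional spatial process. A sketch of how an occupancy probability is assembled under that formulation (the covariates, basis values, and coefficients below are my hypothetical inputs, not the authors' code):

```python
# Probit occupancy sketch: occupancy probability is Phi(x'beta + w'alpha),
# where w is a small set of spatial basis-function values (the
# reduced-dimensional spatial process) rather than one effect per unit.
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def occupancy_prob(x, beta, basis, alpha):
    """Probit occupancy: covariate effect plus low-rank spatial effect."""
    linear = sum(b * xi for b, xi in zip(beta, x))
    spatial = sum(a * w for a, w in zip(alpha, basis))
    return phi(linear + spatial)

# One survey unit: intercept + one covariate, two spatial basis functions.
p = occupancy_prob(x=[1.0, 0.5], beta=[-0.2, 0.8],
                   basis=[0.3, -0.1], alpha=[1.0, 2.0])
print(round(p, 3))
```

The probit link is what makes the latent-variable (truncated normal) updates tractable at the 1080-unit scale of the caribou case study.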

  3. What determines area burned in large landscapes? Insights from a decade of comparative landscape-fire modelling

    Science.gov (United States)

    Geoffrey J. Cary; Robert E. Keane; Mike D. Flannigan; Ian D. Davies; Russ A. Parsons

    2015-01-01

    Understanding what determines area burned in large landscapes is critical for informing wildland fire management in fire-prone environments and for representing fire activity in Dynamic Global Vegetation Models. For the past ten years, a group of landscape-fire modellers have been exploring the relative influence of key determinants of area burned in temperate and...

  4. An Audit of the Effectiveness of Large Group Neurology Tutorials for Irish Undergraduate Medical Students

    LENUS (Irish Health Repository)

    Kearney, H

    2016-07-01

The aim of this audit was to determine the effectiveness of large group tutorials for teaching neurology to medical students. Students were asked to complete a questionnaire rating their confidence on a ten-point Likert scale in a number of domains from the undergraduate education guidelines of the Association of British Neurologists (ABN). We then arranged a series of interactive large group tutorials for the class and repeated the questionnaire one month after teaching. In the three core neurological domains of history taking, examination, and differential diagnosis, none of the students rated their confidence as nine or ten out of ten prior to teaching. This increased to 6% for history taking, 12% for examination and 25% for differential diagnosis after eight weeks of tutorials. This audit demonstrates that in our centre, large group tutorials were an effective means of teaching, as measured against the ABN guidelines in undergraduate neurology.

  5. Large Mammalian Animal Models of Heart Disease

    Directory of Open Access Journals (Sweden)

    Paula Camacho

    2016-10-01

Full Text Available Due to the biological complexity of the cardiovascular system, animal models are an urgent pre-clinical need for advancing our knowledge of cardiovascular disease and for exploring new drugs to repair the damaged heart. Ideally, a model system should be inexpensive, easily manipulated, reproducible, biologically representative of human disease, and ethically sound. Although a larger animal model is more expensive and difficult to manipulate, its genetic, structural, functional, and even disease similarities to humans make it an ideal model to first consider. This review presents the commonly used large animals (dog, sheep, pig, and non-human primates); the less used large animals (cows, horses) are excluded. The review attempts to introduce unique points for each species regarding its biological properties, degree of susceptibility to developing certain types of heart disease, and methodology for inducing conditions. For example, dogs rarely develop myocardial infarction, whereas dilated cardiomyopathy develops quite often. Based on the similarities of each species to the human, model selection may first consider non-human primates, then pig, sheep, and dog, but it also depends on other factors, for example, purpose, funding, ethics, and policy. We hope this review can serve as a basic outline of large animal models for cardiovascular researchers and clinicians.

  6. Large animals as potential models of human mental and behavioral disorders.

    Science.gov (United States)

    Danek, Michał; Danek, Janusz; Araszkiewicz, Aleksander

    2017-12-30

Many animal models in different species have been developed for mental and behavioral disorders. This review presents large animals (dog, sheep, pig, horse) as potential models of these disorders. The article is based on research published in peer-reviewed journals; a literature search was carried out using the PubMed database. The issues are discussed in several problem groups in accordance with the WHO International Statistical Classification of Diseases and Related Health Problems, 10th Revision (ICD-10), in particular regarding: organic, including symptomatic, mental disorders (Alzheimer's disease and Huntington's disease, pernicious anemia and hepatic encephalopathy, epilepsy, Parkinson's disease, Creutzfeldt-Jakob disease); behavioral disorders due to psychoactive substance use (alcoholic intoxication, abuse of morphine); schizophrenia and other schizotypal disorders (puerperal psychosis); mood (affective) disorders (depressive episode); neurotic, stress-related and somatoform disorders (posttraumatic stress disorder, obsessive-compulsive disorder); behavioral syndromes associated with physiological disturbances and physical factors (anxiety disorders, anorexia nervosa, narcolepsy); mental retardation (Cohen syndrome, Down syndrome, Hunter syndrome); and behavioral and emotional disorders (attention deficit hyperactivity disorder). These data indicate many large animal disorders that can serve as models for examining the above human mental and behavioral disorders.

  7. Multi-scale inference of interaction rules in animal groups using Bayesian model selection.

    Directory of Open Access Journals (Sweden)

    Richard P Mann

    2012-01-01

Full Text Available Inference of interaction rules of animals moving in groups usually relies on an analysis of large scale system behaviour. Models are tuned through repeated simulation until they match the observed behaviour. More recent work has used the fine scale motions of animals to validate and fit the rules of interaction of animals in groups. Here, we use a Bayesian methodology to compare a variety of models to the collective motion of glass prawns (Paratya australiensis). We show that these exhibit a stereotypical 'phase transition', whereby an increase in density leads to the onset of collective motion in one direction. We fit models to this data, which range from: a mean-field model where all prawns interact globally; to a spatial Markovian model where prawns are self-propelled particles influenced only by the current positions and directions of their neighbours; up to non-Markovian models where prawns have 'memory' of previous interactions, integrating their experiences over time when deciding to change behaviour. We show that the mean-field model fits the large scale behaviour of the system, but does not capture fine scale rules of interaction, which are primarily mediated by physical contact. Conversely, the Markovian self-propelled particle model captures the fine scale rules of interaction but fails to reproduce global dynamics. The most sophisticated model, the non-Markovian model, provides a good match to the data at both the fine scale and in terms of reproducing global dynamics. We conclude that prawns' movements are influenced by not just the current direction of nearby conspecifics, but also those encountered in the recent past. Given the simplicity of prawns as a study system our research suggests that self-propelled particle models of collective motion should, if they are to be realistic at multiple biological scales, include memory of previous interactions and other non-Markovian effects.
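The model comparison in this record weighs fit against complexity for three nested model classes. As a crude stand-in for the Bayesian machinery (the authors use marginal likelihoods; BIC is my simplification, and the log-likelihoods below are invented numbers), the bookkeeping looks like this:

```python
# BIC-style comparison of the three model classes from the abstract:
# lower BIC is better. Log-likelihoods and parameter counts are illustrative.
import math

def bic(log_lik, n_params, n_obs):
    """Bayesian information criterion: complexity penalty minus 2*log-likelihood."""
    return n_params * math.log(n_obs) - 2.0 * log_lik

models = {
    "mean-field":    bic(log_lik=-1200.0, n_params=2, n_obs=5000),
    "markovian":     bic(log_lik=-1150.0, n_params=4, n_obs=5000),
    "non-markovian": bic(log_lik=-1100.0, n_params=6, n_obs=5000),
}
best = min(models, key=models.get)
print(best)  # "non-markovian" with these illustrative numbers
```

The point of the penalty is that the non-Markovian model must buy its extra parameters with a genuinely better fit at both scales, which is what the study reports.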

  8. Multi-scale inference of interaction rules in animal groups using Bayesian model selection.

    Directory of Open Access Journals (Sweden)

    Richard P Mann

Full Text Available Inference of interaction rules of animals moving in groups usually relies on an analysis of large scale system behaviour. Models are tuned through repeated simulation until they match the observed behaviour. More recent work has used the fine scale motions of animals to validate and fit the rules of interaction of animals in groups. Here, we use a Bayesian methodology to compare a variety of models to the collective motion of glass prawns (Paratya australiensis). We show that these exhibit a stereotypical 'phase transition', whereby an increase in density leads to the onset of collective motion in one direction. We fit models to this data, which range from: a mean-field model where all prawns interact globally; to a spatial Markovian model where prawns are self-propelled particles influenced only by the current positions and directions of their neighbours; up to non-Markovian models where prawns have 'memory' of previous interactions, integrating their experiences over time when deciding to change behaviour. We show that the mean-field model fits the large scale behaviour of the system, but does not capture the observed locality of interactions. Traditional self-propelled particle models fail to capture the fine scale dynamics of the system. The most sophisticated model, the non-Markovian model, provides a good match to the data at both the fine scale and in terms of reproducing global dynamics, while maintaining a biologically plausible perceptual range. We conclude that prawns' movements are influenced by not just the current direction of nearby conspecifics, but also those encountered in the recent past. Given the simplicity of prawns as a study system our research suggests that self-propelled particle models of collective motion should, if they are to be realistic at multiple biological scales, include memory of previous interactions and other non-Markovian effects.

  9. Group spike-and-slab lasso generalized linear models for disease prediction and associated genes detection by incorporating pathway information.

    Science.gov (United States)

    Tang, Zaixiang; Shen, Yueping; Li, Yan; Zhang, Xinyan; Wen, Jia; Qian, Chen'ao; Zhuang, Wenzhuo; Shi, Xinghua; Yi, Nengjun

    2018-03-15

Large-scale molecular data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, standard approaches for omics data analysis ignore the group structure among genes encoded in functional relationships or pathway information. We propose new Bayesian hierarchical generalized linear models, called group spike-and-slab lasso GLMs, for predicting disease outcomes and detecting associated genes by incorporating large-scale molecular data and group structures. The proposed model employs a mixture double-exponential prior for coefficients that induces self-adaptive shrinkage amounts for different coefficients. The group information is incorporated into the model by setting group-specific parameters. We have developed a fast and stable deterministic algorithm to fit the proposed hierarchical GLMs, which can perform variable selection within groups. We assess the performance of the proposed method in several simulated scenarios, varying the overlap among groups, group size, number of non-null groups, and the correlation within groups. Compared with existing methods, the proposed method provides not only more accurate estimates of the parameters but also better prediction. We further demonstrate the application of the proposed procedure on three cancer datasets by utilizing pathway structures of genes. Our results show that the proposed method generates powerful models for predicting disease outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). nyi@uab.edu. Supplementary data are available at Bioinformatics online.
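The "self-adaptive shrinkage" from the mixture double-exponential (spike-and-slab lasso) prior can be illustrated directly: a coefficient's posterior weight between the spike and slab scales determines how hard it is shrunk. This is a simplified illustration of the prior's behaviour, not the BhGLM implementation; the scales and inclusion probability are hypothetical:

```python
# Mixture double-exponential (Laplace) prior: spike scale s0 (strong
# shrinkage) vs slab scale s1 (weak shrinkage). The conditional inclusion
# probability w gives each coefficient its own effective shrinkage strength.
import math

def laplace_pdf(b, scale):
    """Density of a double-exponential (Laplace) distribution at b."""
    return math.exp(-abs(b) / scale) / (2.0 * scale)

def shrinkage(b, s0=0.05, s1=1.0, theta=0.5):
    """Inverse effective scale: larger value means stronger shrinkage of b."""
    p_slab = theta * laplace_pdf(b, s1)
    p_spike = (1.0 - theta) * laplace_pdf(b, s0)
    w = p_slab / (p_slab + p_spike)   # conditional inclusion probability
    return w / s1 + (1.0 - w) / s0    # posterior-weighted inverse scale

# Large coefficients are assigned to the slab and shrunk only weakly;
# near-zero coefficients fall in the spike and are shrunk hard.
print(shrinkage(2.0) < shrinkage(0.01))  # True
```

In the group version described by the abstract, the inclusion probability itself gets group-specific parameters, so whole pathways can be switched toward the spike or the slab.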

  10. Seasonal patterns of mixed species groups in large East African mammals.

    Science.gov (United States)

    Kiffner, Christian; Kioko, John; Leweri, Cecilia; Krause, Stefan

    2014-01-01

    Mixed mammal species groups are common in East African savannah ecosystems. Yet, it is largely unknown if co-occurrences of large mammals result from random processes or social preferences and if interspecific associations are consistent across ecosystems and seasons. Because species may exchange important information and services, understanding patterns and drivers of heterospecific interactions is crucial for advancing animal and community ecology. We recorded 5403 single and multi-species clusters in the Serengeti-Ngorongoro and Tarangire-Manyara ecosystems during dry and wet seasons and used social network analyses to detect patterns of species associations. We found statistically significant associations between multiple species and association patterns differed spatially and seasonally. Consistently, wildebeest and zebras preferred being associated with other species, whereas carnivores, African elephants, Maasai giraffes and Kirk's dik-diks avoided being in mixed groups. During the dry season, we found that the betweenness (a measure of importance in the flow of information or disease) of species did not differ from a random expectation based on species abundance. In contrast, in the wet season, we found that these patterns were not simply explained by variations in abundances, suggesting that heterospecific associations were actively formed. These seasonal differences in observed patterns suggest that interspecific associations may be driven by resource overlap when resources are limited and by resource partitioning or anti-predator advantages when resources are abundant. We discuss potential mechanisms that could drive seasonal variation in the cost-benefit tradeoffs that underpin the formation of mixed-species groups.
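The association patterns in this record come from social network analysis of co-sighting data. One standard building block for such networks is a pairwise association index; the half-weight index below is a common choice in this literature (my illustration with invented counts, not the study's data or necessarily its exact index):

```python
# Half-weight association index (HWI) for a species pair, computed from
# counts of joint sightings and sightings of each species without the other.
# The counts are hypothetical.

def half_weight_index(x, ya, yb):
    """x: groups containing both species; ya/yb: groups with only one of them."""
    return x / (x + 0.5 * (ya + yb))

# Wildebeest-zebra vs. elephant-giraffe (invented dry-season counts).
print(half_weight_index(60, 20, 20))  # 0.75: frequently associated
print(half_weight_index(5, 40, 55))   # ~0.10: rarely in mixed groups
```

Comparing such indices against a null model based on species abundance is what lets the study distinguish actively formed associations (wet season) from patterns explained by abundance alone (dry season).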

  11. Group theory for unified model building

    International Nuclear Information System (INIS)

    Slansky, R.

    1981-01-01

The results gathered here on simple Lie algebras have been selected with attention to the needs of unified model builders who study Yang-Mills theories based on simple, local-symmetry groups that contain as a subgroup the SU(2)w x U(1)w x SU(3)c symmetry of the standard theory of electromagnetic, weak, and strong interactions. The major topics include, after a brief review of the standard model and its unification into a simple group, the use of Dynkin diagrams to analyze the structure of the group generators and to keep track of the weights (quantum numbers) of the representation vectors; an analysis of the subgroup structure of simple groups, including explicit coordinatizations of the projections in weight space; lists of representations, tensor products and branching rules for a number of simple groups; and other details about groups and their representations that are often helpful for surveying unified models, including vector-coupling coefficient calculations. Tabulations of representations, tensor products, and branching rules for E6, SO10, SU6, F4, SO9, SO5, SO8, SO7, SU4, E7, E8, SU8, SO14, SO18, SO22, and, for completeness, SU3 are included. (These tables may have other applications.) Group-theoretical techniques for analyzing symmetry breaking are described in detail and many examples are reviewed, including explicit parameterizations of mass matrices. (orig.)

  12. Description of the East Brazil Large Marine Ecosystem using a trophic model

    Directory of Open Access Journals (Sweden)

    Kátia M.F. Freire

    2008-09-01

    Full Text Available The objective of this study was to describe the marine ecosystem off northeastern Brazil. A trophic model was constructed for the 1970s using Ecopath with Ecosim. The impact of most of the forty-one functional groups was modest, probably due to the highly reticulated diet matrix. However, seagrass and macroalgae exerted a strong positive impact on manatee and herbivorous reef fishes, respectively. A high negative impact of omnivorous reef fishes on spiny lobsters and of sharks on swordfish was observed. Spiny lobsters and swordfish had the largest biomass changes for the simulation period (1978-2000; tunas, other large pelagics and sharks showed intermediate rates of biomass decline; and a slight increase in biomass was observed for toothed cetaceans, large carnivorous reef fishes, and dolphinfish. Recycling was an important feature of this ecosystem with low phytoplankton-originated primary production. The mean transfer efficiency between trophic levels was 11.4%. The gross efficiency of the fisheries was very low (0.00002, probably due to the low exploitation rate of most of the resources in the 1970s. Basic local information was missing for many groups. When information gaps are filled, this model may serve more credibly for the exploration of fishing policies for this area within an ecosystem approach.
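Two of the summary statistics quoted for this Ecopath model are simple ratios, and the arithmetic is worth making explicit. The input flows below are placeholders chosen to reproduce the quoted values, not output from the actual model:

```python
# Ecopath-style summary ratios. Gross efficiency of the fisheries is total
# catch over total primary production; transfer efficiency is the fraction
# of a trophic level's throughput passed on to the next level.

def gross_efficiency(total_catch, primary_production):
    """Catch and production in the same units, e.g. t/km^2/yr."""
    return total_catch / primary_production

def transfer_efficiency(flow_to_next_level, flow_into_level):
    return flow_to_next_level / flow_into_level

print(f"{gross_efficiency(0.02, 1000.0):.5f}")    # 0.00002, as in the abstract
print(f"{transfer_efficiency(11.4, 100.0):.3f}")  # 0.114, i.e. 11.4%
```

A gross efficiency of 0.00002 means only two parts in a hundred thousand of primary production ended up as catch, consistent with the low exploitation rates of the 1970s noted in the abstract.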

  13. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large-scale model testing performed on the large testing machine ZZ 8000 (with a maximum load of 80 MN) at the SKODA WORKS. The results are described from testing the material resistance to non-ductile fracture. The testing included the base materials and welded joints. The rated specimen thickness was 150 mm with defects of a depth between 15 and 100 mm. Results are also presented for nozzles of 850 mm inner diameter at a scale of 1:3; static, cyclic, and dynamic tests were performed without and with surface defects (15, 30 and 45 mm deep). During cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  14. Black women, work, stress, and perceived discrimination: the focused support group model as an intervention for stress reduction.

    Science.gov (United States)

    Mays, V M

    1995-01-01

This exploratory study examined the use of two components (small and large groups) of a community-based intervention, the Focused Support Group (FSG) model, to alleviate employment-related stressors in Black women. Participants were assigned to small groups based on occupational status. Groups met for five weekly 3-hr sessions in didactic or small- and large-group formats. Two evaluations following the didactic session and the small and large group sessions elicited information on satisfaction with each of the formats, self-reported change in stress, awareness of interpersonal and sociopolitical issues affecting Black women in the labor force, assessment of support networks, and usefulness of specific discussion topics for stress reduction. Results indicated the usefulness of the small- and large-group formats in reduction of self-reported stress and increases in personal and professional sources of support. Discussions on race and sex discrimination in the workplace were effective in overall stress reduction. The study highlights labor force participation as a potential source of stress for Black women, and supports the development of culture- and gender-appropriate community interventions as viable and cost-effective methods for stress reduction.

  15. Efficacy of formative evaluation using a focus group for a large classroom setting in an accelerated pharmacy program.

    Science.gov (United States)

    Nolette, Shaun; Nguyen, Alyssa; Kogan, David; Oswald, Catherine; Whittaker, Alana; Chakraborty, Arup

    2017-07-01

    Formative evaluation is a process utilized to improve communication between students and faculty. This evaluation method allows the ability to address pertinent issues in a timely manner; however, implementation of formative evaluation can be a challenge, especially in a large classroom setting. Using mediated formative evaluation, the purpose of this study is to determine if a student based focus group is a viable option to improve efficacy of communication between an instructor and students as well as time management in a large classroom setting. Out of 140 total students, six students were selected to form a focus group - one from each of six total sections of the classroom. Each focus group representative was responsible for collecting all the questions from students of their corresponding sections and submitting them to the instructor two to three times a day. Responses from the instructor were either passed back to pertinent students by the focus group representatives or addressed directly with students by the instructor. This study was conducted using a fifteen-question survey after the focus group model was utilized for one month. A printed copy of the survey was distributed in the class by student investigators. Questions were of varying types, including Likert scale, yes/no, and open-ended response. One hundred forty surveys were administered, and 90 complete responses were collected. Surveys showed that 93.3% of students found that use of the focus group made them more likely to ask questions for understanding. The surveys also showed 95.5% of students found utilizing the focus group for questions allowed for better understanding of difficult concepts. General open-ended answer portions of the survey showed that most students found the focus group allowed them to ask questions more easily since they did not feel intimidated by asking in front of the whole class. No correlation was found between demographic characteristics and survey responses. 
This may

  16. Models for large superconducting toroidal magnet systems

    International Nuclear Information System (INIS)

    Arendt, F.; Brechna, H.; Erb, J.; Komarek, P.; Krauth, H.; Maurer, W.

    1976-01-01

Prior to the design of large GJ toroidal magnet systems it is appropriate to procure small-scale models which can simulate the pertinent properties and allow the relevant phenomena to be investigated. The important feature of a model is to show under which circumstances the system performance can be extrapolated to large magnets. Based on parameters such as the maximum magnetic field, the current density, and the maximum tolerable magneto-mechanical stresses, a simple method of designing model magnets is presented. It is shown how pertinent design parameters change when the toroidal dimensions are altered. In addition, some conductor cost estimates are given based on reactor power output and wall loading.
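The design parameters the abstract names (maximum field and tolerable magneto-mechanical stress) are linked by a standard back-of-envelope relation: the magnetic pressure B^2/(2*mu0) sets a lower bound on the structural stress the winding must carry. A sketch with illustrative field values (not figures from the paper):

```python
# Magnetic pressure at the peak field, a first-pass check against the
# maximum tolerable stress when scaling a toroidal magnet design.
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability, H/m

def magnetic_pressure_pa(b_tesla):
    """Magnetic pressure (Pa) at field B: B^2 / (2 * mu0)."""
    return b_tesla**2 / (2.0 * MU0)

for b in (8.0, 12.0):
    p_mpa = magnetic_pressure_pa(b) / 1e6
    print(f"B = {b:>4} T -> magnetic pressure = {p_mpa:.1f} MPa")
```

Because the pressure scales as B squared, a modest increase in design field sharply raises the stress a small-scale model must reproduce if its results are to extrapolate.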

  17. Managing more than the mean: Using quantile regression to identify factors related to large elk groups

    Science.gov (United States)

    Brennan, Angela K.; Cross, Paul C.; Creely, Scott

    2015-01-01

    Summary Animal group size distributions are often right-skewed, whereby most groups are small, but most individuals occur in larger groups that may also disproportionately affect ecology and policy. In this case, examining covariates associated with upper quantiles of the group size distribution could facilitate better understanding and management of large animal groups.
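Quantile regression, as this record uses it, replaces the squared-error loss of ordinary regression with the pinball (check) loss, whose minimizer is the chosen quantile. A toy sketch with invented elk group sizes; a real analysis would use a proper quantile-regression solver with covariates rather than this grid search:

```python
# Fit the tau-th quantile of a right-skewed group-size sample by minimizing
# the pinball loss over a grid of candidate constants. Data are invented.

def pinball_loss(residual, tau):
    """Check loss: weights positive residuals by tau, negative by (1 - tau)."""
    return tau * residual if residual >= 0 else (tau - 1.0) * residual

def fit_quantile_intercept(group_sizes, tau, grid):
    """The constant minimizing total pinball loss is the tau-quantile."""
    return min(grid, key=lambda c: sum(pinball_loss(y - c, tau) for y in group_sizes))

# Right-skewed: most groups are small, a few are very large.
sizes = [2, 3, 3, 4, 5, 5, 6, 8, 12, 40, 85, 120]
grid = range(0, 130)

q50 = fit_quantile_intercept(sizes, 0.5, grid)  # median-like
q90 = fit_quantile_intercept(sizes, 0.9, grid)  # upper tail, much larger
print(q50, q90)
```

The gap between the two fitted quantiles is the abstract's point: covariates associated with the upper quantiles, not the mean, are what describe the large groups that matter most for management.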

  18. Theory and modeling group

    Science.gov (United States)

    Holman, Gordon D.

    1989-01-01

    The primary purpose of the Theory and Modeling Group meeting was to identify scientists engaged or interested in theoretical work pertinent to the Max '91 program, and to encourage theorists to pursue modeling which is directly relevant to data which can be expected to result from the program. A list of participants and their institutions is presented. Two solar flare paradigms were discussed during the meeting -- the importance of magnetic reconnection in flares and the applicability of numerical simulation results to solar flare studies.

  19. Exactly soluble models for surface partition of large clusters

    International Nuclear Information System (INIS)

    Bugaev, K.A.; Bugaev, K.A.; Elliott, J.B.

    2007-01-01

    The surface partition of large clusters is studied analytically within the framework of the 'Hills and Dales Model'. Three formulations are solved exactly using the Laplace-Fourier transformation method. In the limit of small-amplitude deformations, the 'Hills and Dales Model' gives upper and lower bounds for the surface entropy coefficient of large clusters. The surface entropy coefficients found are compared with those of large clusters within the 2- and 3-dimensional Ising models.

  20. Bayesian latent feature modeling for modeling bipartite networks with overlapping groups

    DEFF Research Database (Denmark)

    Jørgensen, Philip H.; Mørup, Morten; Schmidt, Mikkel Nørgaard

    2016-01-01

    Bi-partite networks are commonly modelled using latent class or latent feature models. Whereas the existing latent class models admit marginalization of parameters specifying the strength of interaction between groups, existing latent feature models do not admit analytical marginalization...... by the notion of community structure such that the edge density within groups is higher than between groups. Our model further assumes that entities can have different propensities of generating links in one of the modes. The proposed framework is contrasted on both synthetic and real bi-partite networks...... feature representations in bipartite networks provides a new framework for accounting for structure in bi-partite networks using binary latent feature representations providing interpretable representations that well characterize structure as quantified by link prediction....

  1. Computational Modeling of Large Wildfires: A Roadmap

    KAUST Repository

    Coen, Janice L.; Douglas, Craig C.

    2010-01-01

    Wildland fire behavior, particularly that of large, uncontrolled wildfires, has not been well understood or predicted. Our methodology to simulate this phenomenon uses high-resolution dynamic models made of numerical weather prediction (NWP) models

  2. Glucocorticoid induced osteopenia in cancellous bone of sheep: validation of large animal model for spine fusion and biomaterial research

    DEFF Research Database (Denmark)

    Ding, Ming; Cheng, Liming; Bollen, Peter

    2010-01-01

    STUDY DESIGN: Glucocorticoid with low calcium and phosphorus intake induces osteopenia in cancellous bone of sheep. OBJECTIVE: To validate a large animal model for spine fusion and biomaterial research. SUMMARY OF BACKGROUND DATA: A variety of ovariectomized animals has been used to study...... osteoporosis. Most experimental spine fusions were based on normal animals, and there is a great need for suitable large animal models with adequate bone size that closely resemble osteoporosis in humans. METHODS: Eighteen female skeletal mature sheep were randomly allocated into 3 groups, 6 each. Group 1 (GC......-1) received prednisolone (GC) treatment (0.60 mg/kg/day, 5 times weekly) for 7 months. Group 2 (GC-2) received the same treatment as GC-1 for 7 months followed by 3 months without treatment. Group 3 was left untreated and served as the controls. All sheep received restricted diet with low calcium...

  3. Seasonal patterns of mixed species groups in large East African mammals.

    Directory of Open Access Journals (Sweden)

    Christian Kiffner

    Mixed mammal species groups are common in East African savannah ecosystems. Yet, it is largely unknown if co-occurrences of large mammals result from random processes or social preferences and if interspecific associations are consistent across ecosystems and seasons. Because species may exchange important information and services, understanding patterns and drivers of heterospecific interactions is crucial for advancing animal and community ecology. We recorded 5403 single and multi-species clusters in the Serengeti-Ngorongoro and Tarangire-Manyara ecosystems during dry and wet seasons and used social network analyses to detect patterns of species associations. We found statistically significant associations between multiple species and association patterns differed spatially and seasonally. Consistently, wildebeest and zebras preferred being associated with other species, whereas carnivores, African elephants, Maasai giraffes and Kirk's dik-diks avoided being in mixed groups. During the dry season, we found that the betweenness (a measure of importance in the flow of information or disease) of species did not differ from a random expectation based on species abundance. In contrast, in the wet season, we found that these patterns were not simply explained by variations in abundances, suggesting that heterospecific associations were actively formed. These seasonal differences in observed patterns suggest that interspecific associations may be driven by resource overlap when resources are limited and by resource partitioning or anti-predator advantages when resources are abundant. We discuss potential mechanisms that could drive seasonal variation in the cost-benefit tradeoffs that underpin the formation of mixed-species groups.

  4. From evolution to revolution: understanding mutability in large and disruptive human groups

    Science.gov (United States)

    Whitaker, Roger M.; Felmlee, Diane; Verma, Dinesh C.; Preece, Alun; Williams, Grace-Rose

    2017-05-01

    Over the last 70 years there has been a major shift in the threats to global peace. While the 1950s and 1960s were characterised by the cold war and the arms race, many security threats are now characterised by group behaviours that are disruptive, subversive or extreme. In many cases such groups are loosely and chaotically organised, but their ideals are sociologically and psychologically embedded in group members to the extent that the group represents a major threat. As a result, insights into how human groups form, emerge and change are critical, yet surprisingly limited insight into the mutability of human groups exists. In this paper we argue that important clues to understanding the mutability of groups come from examining the evolutionary origins of human behaviour. In particular, groups have been instrumental in human evolution, used as a basis to derive survival advantage, leaving all humans with a basic disposition to navigate the world through social networking and managing their presence in a group. From this analysis we present five critical features of social groups that govern mutability, relating to social norms, individual standing, status rivalry, ingroup bias and cooperation. We argue that understanding how these five dimensions interact and evolve can provide new insights into group mutation and evolution. Importantly, these features lend themselves to digital modeling. Therefore, computational simulation can support generative exploration of groups and the discovery of latent factors, relevant to both internal and external group modelling. Finally, we consider the role of online social media in relation to understanding the mutability of groups. This can play an active role in supporting collective behaviour, and analysis of social media in the context of the five dimensions of group mutability provides a fresh basis to interpret the forces affecting groups.

  5. Establishing the pig as a large animal model for vaccine development against human cancer

    DEFF Research Database (Denmark)

    Overgaard, Nana Haahr; Frøsig, Thomas Mørch; Welner, Simon

    2015-01-01

    Immunotherapy has increased overall survival of metastatic cancer patients, and cancer antigens are promising vaccine targets. To fulfill the promise, appropriate tailoring of the vaccine formulations to mount in vivo cytotoxic T cell (CTL) responses toward co-delivered cancer antigens is essential...... and the porcine immunome is closer related to the human counterpart, we here introduce pigs as a supplementary large animal model for human cancer vaccine development. IDO and RhoC, both important in human cancer development and progression, were used as vaccine targets and 12 pigs were immunized with overlapping......C-derived peptides across all groups with no adjuvant being superior. These findings support the further use of pigs as a large animal model for vaccine development against human cancer....

  6. Large-scale modeling of rain fields from a rain cell deterministic model

    Science.gov (United States)

    FéRal, Laurent; Sauvageot, Henri; Castanet, Laurent; Lemorton, JoëL.; Cornet, FréDéRic; Leconte, Katia

    2006-04-01

    A methodology to simulate two-dimensional rain rate fields at large scale (1000 × 1000 km2, the scale of a satellite telecommunication beam or a terrestrial fixed broadband wireless access network) is proposed. It relies on a rain rate field cellular decomposition. At small scale (˜20 × 20 km2), the rain field is split up into its macroscopic components, the rain cells, described by the Hybrid Cell (HYCELL) cellular model. At midscale (˜150 × 150 km2), the rain field results from the conglomeration of rain cells modeled by HYCELL. To account for the rain cell spatial distribution at midscale, the latter is modeled by a doubly aggregative isotropic random walk, the optimal parameterization of which is derived from radar observations at midscale. The extension of the simulation area from the midscale to the large scale (1000 × 1000 km2) requires the modeling of the weather frontal area. The latter is first modeled by a Gaussian field with anisotropic covariance function. The Gaussian field is then turned into a binary field, giving the large-scale locations over which it is raining. This transformation requires the definition of the rain occupation rate over large-scale areas. Its probability distribution is determined from observations by the French operational radar network ARAMIS. The coupling with the rain field modeling at midscale is immediate whenever the large-scale field is split up into midscale subareas. The rain field thus generated accounts for the local CDF at each point, defining a structure spatially correlated at small scale, midscale, and large scale. It is then suggested that this approach be used by system designers to evaluate diversity gain, terrestrial path attenuation, or slant path attenuation for different azimuth and elevation angle directions.
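
The last modeling step above, turning a spatially correlated Gaussian field into a rain/no-rain indicator that honors a prescribed occupation rate, can be sketched in a few lines. The snippet below is an illustrative 1-D stand-in (an AR(1) sequence in place of the paper's anisotropic 2-D Gaussian field) and is not the HYCELL implementation; the threshold is simply chosen as the empirical quantile matching the target raining fraction:

```python
import math
import random

random.seed(7)

def ar1_gaussian(n, rho):
    """Correlated Gaussian sequence (AR(1)): a 1-D stand-in for a
    large-scale Gaussian field with spatial covariance."""
    x = [random.gauss(0.0, 1.0)]
    for _ in range(n - 1):
        x.append(rho * x[-1] + math.sqrt(1.0 - rho * rho) * random.gauss(0.0, 1.0))
    return x

def binarize(field, occupation_rate):
    """Turn the Gaussian field into a rain/no-rain indicator whose
    raining fraction matches the target occupation rate."""
    thresh = sorted(field)[int((1.0 - occupation_rate) * len(field))]
    return [1 if v >= thresh else 0 for v in field]

field = ar1_gaussian(10_000, rho=0.9)
mask = binarize(field, occupation_rate=0.3)
print(sum(mask) / len(mask))  # close to the target 0.3
```

Because the threshold is a quantile of the field itself, the raining fraction is matched by construction while the spatial correlation of the underlying Gaussian field is inherited by the binary mask.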

  7. Hydrodynamic model research in Waseda group

    International Nuclear Information System (INIS)

    Muroya, Shin

    2010-01-01

    Constructing 'High Energy Material Science' had been proposed by Namiki as the guiding principle for the scientists of the high energy physics group led by himself at Waseda University, when the author started to study multiple particle production in the 1980s toward a semi-phenomenological model for the quark gluon plasma (QGP). Their strategy was based on three stages, building an intermediate model between the fundamental theory of QCD and the phenomenological model. The quantum theoretical Langevin equation was taken up as the semi-phenomenological model at the intermediate stage, and the Landau hydrodynamic model was chosen as the phenomenological model to focus on the 'phase transition' of QGP. A review is given here of the quantum theoretical Langevin equation formalism developed there, followed by the further progress with the 1+1 dimensional viscous fluid model as well as the hydrodynamic model with cylindrical symmetry. The developments of the baryon fluid model and the Hanbury-Brown Twiss effect are also reviewed. After 1995, younger-generation physicists came to the group to develop those models further. Activities by Hirano, Nonaka and Morita beyond the past generation's hydrodynamic model are picked up briefly. (S. Funahashi)

  8. Active Exploration of Large 3D Model Repositories.

    Science.gov (United States)

    Gao, Lin; Cao, Yan-Pei; Lai, Yu-Kun; Huang, Hao-Zhi; Kobbelt, Leif; Hu, Shi-Min

    2015-12-01

    With broader availability of large-scale 3D model repositories, the need for efficient and effective exploration becomes more and more urgent. Existing model retrieval techniques do not scale well with the size of the database since often a large number of very similar objects are returned for a query, and the possibilities to refine the search are quite limited. We propose an interactive approach where the user feeds an active learning procedure by labeling either entire models or parts of them as "like" or "dislike" such that the system can automatically update an active set of recommended models. To provide an intuitive user interface, candidate models are presented based on their estimated relevance for the current query. From the methodological point of view, our main contribution is to exploit not only the similarity between a query and the database models but also the similarities among the database models themselves. We achieve this by an offline pre-processing stage, where global and local shape descriptors are computed for each model and a sparse distance metric is derived that can be evaluated efficiently even for very large databases. We demonstrate the effectiveness of our method by interactively exploring a repository containing over 100 K models.
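
The relevance-update idea above (scoring unlabeled models against both "like" and "dislike" labels through precomputed model-model similarities) can be sketched with a toy example. This is a crude illustration of the general approach, not the paper's actual active learning procedure or its sparse distance metric; the model names and similarity values are hypothetical:

```python
def recommend(sim, liked, disliked, k=3):
    """Rank unlabeled models by similarity to 'like' labels minus
    similarity to 'dislike' labels, using precomputed model-model
    similarities (an offline pre-processing product in the paper)."""
    labeled = liked | disliked
    scores = {
        m: sum(sim[m][l] for l in liked) - sum(sim[m][d] for d in disliked)
        for m in sim if m not in labeled
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Hypothetical precomputed symmetric similarities between repository models.
sim = {
    "chair_a": {"chair_a": 1.0, "chair_b": 0.9, "table_a": 0.2, "lamp_a": 0.1},
    "chair_b": {"chair_a": 0.9, "chair_b": 1.0, "table_a": 0.3, "lamp_a": 0.1},
    "table_a": {"chair_a": 0.2, "chair_b": 0.3, "table_a": 1.0, "lamp_a": 0.4},
    "lamp_a":  {"chair_a": 0.1, "chair_b": 0.1, "table_a": 0.4, "lamp_a": 1.0},
}

print(recommend(sim, liked={"chair_a"}, disliked={"lamp_a"}, k=2))
```

Because all similarities are looked up rather than recomputed per query, each feedback round costs only a pass over the candidate set, which is what makes interactive exploration of very large repositories feasible.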

  9. Renormalization-group theory for the eddy viscosity in subgrid modeling

    Science.gov (United States)

    Zhou, YE; Vahala, George; Hossain, Murshed

    1988-01-01

    Renormalization-group theory is applied to incompressible three-dimensional Navier-Stokes turbulence so as to eliminate unresolvable small scales. The renormalized Navier-Stokes equation now includes a triple nonlinearity with the eddy viscosity exhibiting a mild cusp behavior, in qualitative agreement with the test-field model results of Kraichnan. For the cusp behavior to arise, not only is the triple nonlinearity necessary but the effects of pressure must be incorporated in the triple term. The renormalized eddy viscosity will not exhibit a cusp behavior if it is assumed that a spectral gap exists between the large and small scales.

  10. Renormalization group study of the one-dimensional quantum Potts model

    International Nuclear Information System (INIS)

    Solyom, J.; Pfeuty, P.

    1981-01-01

    The phase transition of the classical two-dimensional Potts model, in particular the order of the transition as the number of components q increases, is studied by constructing renormalization group transformations on the equivalent one-dimensional quantum problem. It is shown that the block transformation with two sites per cell indicates the existence of a critical q_c separating the small-q and large-q regions with different critical behaviours. The physically accessible fixed point for q > q_c is a discontinuity fixed point where the specific heat exponent α = 1 and therefore the transition is of first order. (author)

  11. Assessing the reliability of predictive activity coefficient models for molecules consisting of several functional groups

    Directory of Open Access Journals (Sweden)

    R. P. Gerber

    2013-03-01

    Currently, the most successful predictive models for activity coefficients are those based on functional groups, such as UNIFAC. However, these models require a large amount of experimental data for the determination of their parameter matrix. A more recent alternative is the models based on COSMO, for which only a small set of universal parameters must be calibrated. In this work, a recalibrated COSMO-SAC model was compared with the UNIFAC (Do) model employing experimental infinite dilution activity coefficient data for 2236 non-hydrogen-bonding binary mixtures at different temperatures. As expected, UNIFAC (Do) presented better overall performance, with a mean absolute error of 0.12 ln-units against 0.22 for our COSMO-SAC implementation. However, in cases involving molecules with several functional groups or when functional groups appear in an unusual way, the deviation for UNIFAC was 0.44 as opposed to 0.20 for COSMO-SAC. These results show that COSMO-SAC provides more reliable predictions for multi-functional or more complex molecules, reaffirming its future prospects.

  12. Group Membership, Group Change, and Intergroup Attitudes: A Recategorization Model Based on Cognitive Consistency Principles

    Science.gov (United States)

    Roth, Jenny; Steffens, Melanie C.; Vignoles, Vivian L.

    2018-01-01

    The present article introduces a model based on cognitive consistency principles to predict how new identities become integrated into the self-concept, with consequences for intergroup attitudes. The model specifies four concepts (self-concept, stereotypes, identification, and group compatibility) as associative connections. The model builds on two cognitive principles, balance–congruity and imbalance–dissonance, to predict identification with social groups that people currently belong to, belonged to in the past, or newly belong to. More precisely, the model suggests that the relative strength of self-group associations (i.e., identification) depends in part on the (in)compatibility of the different social groups. Combining insights into cognitive representation of knowledge, intergroup bias, and explicit/implicit attitude change, we further derive predictions for intergroup attitudes. We suggest that intergroup attitudes alter depending on the relative associative strength between the social groups and the self, which in turn is determined by the (in)compatibility between social groups. This model unifies existing models on the integration of social identities into the self-concept by suggesting that basic cognitive mechanisms play an important role in facilitating or hindering identity integration and thus contribute to reducing or increasing intergroup bias. PMID:29681878

  13. Group Membership, Group Change, and Intergroup Attitudes: A Recategorization Model Based on Cognitive Consistency Principles

    Directory of Open Access Journals (Sweden)

    Jenny Roth

    2018-04-01

    The present article introduces a model based on cognitive consistency principles to predict how new identities become integrated into the self-concept, with consequences for intergroup attitudes. The model specifies four concepts (self-concept, stereotypes, identification, and group compatibility) as associative connections. The model builds on two cognitive principles, balance–congruity and imbalance–dissonance, to predict identification with social groups that people currently belong to, belonged to in the past, or newly belong to. More precisely, the model suggests that the relative strength of self-group associations (i.e., identification) depends in part on the (in)compatibility of the different social groups. Combining insights into cognitive representation of knowledge, intergroup bias, and explicit/implicit attitude change, we further derive predictions for intergroup attitudes. We suggest that intergroup attitudes alter depending on the relative associative strength between the social groups and the self, which in turn is determined by the (in)compatibility between social groups. This model unifies existing models on the integration of social identities into the self-concept by suggesting that basic cognitive mechanisms play an important role in facilitating or hindering identity integration and thus contribute to reducing or increasing intergroup bias.

  14. Group Membership, Group Change, and Intergroup Attitudes: A Recategorization Model Based on Cognitive Consistency Principles.

    Science.gov (United States)

    Roth, Jenny; Steffens, Melanie C; Vignoles, Vivian L

    2018-01-01

    The present article introduces a model based on cognitive consistency principles to predict how new identities become integrated into the self-concept, with consequences for intergroup attitudes. The model specifies four concepts (self-concept, stereotypes, identification, and group compatibility) as associative connections. The model builds on two cognitive principles, balance-congruity and imbalance-dissonance, to predict identification with social groups that people currently belong to, belonged to in the past, or newly belong to. More precisely, the model suggests that the relative strength of self-group associations (i.e., identification) depends in part on the (in)compatibility of the different social groups. Combining insights into cognitive representation of knowledge, intergroup bias, and explicit/implicit attitude change, we further derive predictions for intergroup attitudes. We suggest that intergroup attitudes alter depending on the relative associative strength between the social groups and the self, which in turn is determined by the (in)compatibility between social groups. This model unifies existing models on the integration of social identities into the self-concept by suggesting that basic cognitive mechanisms play an important role in facilitating or hindering identity integration and thus contribute to reducing or increasing intergroup bias.

  15. WORK GROUP DEVELOPMENT MODELS – THE EVOLUTION FROM SIMPLE GROUP TO EFFECTIVE TEAM

    Directory of Open Access Journals (Sweden)

    Raluca ZOLTAN

    2016-02-01

    Currently, work teams are increasingly studied by virtue of the advantages they have compared to work groups. But a true team does not appear overnight; it must complete several steps to overcome the initial stage of its existence as a group. The question that arises is at what point a simple group turns into an effective team. Even though the development of a group into a team is not a linear process, the models found in the literature provide a rich framework for analyzing and identifying the features which a group acquires over time until it becomes a team in the true sense of the word. Thus, in this article we propose an analysis of the main models of group development in order to point out, even in a relative manner, the stage at which the simple work group becomes an effective work team.

  16. On spinfoam models in large spin regime

    International Nuclear Information System (INIS)

    Han, Muxin

    2014-01-01

    We study the semiclassical behavior of the Lorentzian Engle–Pereira–Rovelli–Livine (EPRL) spinfoam model, taking into account the sum over spins in the large spin regime. We also employ the method of stationary phase analysis with parameters and the so-called almost-analytic machinery, in order to find the asymptotic behavior of the contributions from all possible large spin configurations in the spinfoam model. The spins contributing to the sum are written as J_f = λj_f, where λ is a large parameter, resulting in an asymptotic expansion via stationary phase approximation. The analysis shows that, at least for the simplicial Lorentzian geometries (as spinfoam critical configurations), they contribute the leading-order approximation of the spinfoam amplitude only when their deficit angles satisfy γΘ̊_f ≤ λ^(-1/2) mod 4πZ. Our analysis results in a curvature expansion of the semiclassical low-energy effective action from the spinfoam model, where the UV modifications of Einstein gravity appear as subleading high-curvature corrections. (paper)

  17. A Binaural Grouping Model for Predicting Speech Intelligibility in Multitalker Environments

    Directory of Open Access Journals (Sweden)

    Jing Mi

    2016-09-01

    Spatially separating speech maskers from target speech often leads to a large intelligibility improvement. Modeling this phenomenon has long been of interest to binaural-hearing researchers for uncovering brain mechanisms and for improving signal-processing algorithms in hearing-assistive devices. Much of the previous binaural modeling work focused on the unmasking enabled by binaural cues at the periphery, and little quantitative modeling has been directed toward the grouping or source-separation benefits of binaural processing. In this article, we propose a binaural model that focuses on grouping, specifically on the selection of time-frequency units that are dominated by signals from the direction of the target. The proposed model uses Equalization-Cancellation (EC) processing with a binary decision rule to estimate a time-frequency binary mask. EC processing is carried out to cancel the target signal and the energy change between the EC input and output is used as a feature that reflects target dominance in each time-frequency unit. The processing in the proposed model requires little computational resources and is straightforward to implement. In combination with the Coherence-based Speech Intelligibility Index, the model is applied to predict the speech intelligibility data measured by Marrone et al. The predicted speech reception threshold matches the pattern of the measured data well, even though the predicted intelligibility improvements relative to the colocated condition are larger than some of the measured data, which may reflect the lack of internal noise in this initial version of the model.

  18. A Binaural Grouping Model for Predicting Speech Intelligibility in Multitalker Environments.

    Science.gov (United States)

    Mi, Jing; Colburn, H Steven

    2016-10-03

    Spatially separating speech maskers from target speech often leads to a large intelligibility improvement. Modeling this phenomenon has long been of interest to binaural-hearing researchers for uncovering brain mechanisms and for improving signal-processing algorithms in hearing-assistive devices. Much of the previous binaural modeling work focused on the unmasking enabled by binaural cues at the periphery, and little quantitative modeling has been directed toward the grouping or source-separation benefits of binaural processing. In this article, we propose a binaural model that focuses on grouping, specifically on the selection of time-frequency units that are dominated by signals from the direction of the target. The proposed model uses Equalization-Cancellation (EC) processing with a binary decision rule to estimate a time-frequency binary mask. EC processing is carried out to cancel the target signal and the energy change between the EC input and output is used as a feature that reflects target dominance in each time-frequency unit. The processing in the proposed model requires little computational resources and is straightforward to implement. In combination with the Coherence-based Speech Intelligibility Index, the model is applied to predict the speech intelligibility data measured by Marrone et al. The predicted speech reception threshold matches the pattern of the measured data well, even though the predicted intelligibility improvements relative to the colocated condition are larger than some of the measured data, which may reflect the lack of internal noise in this initial version of the model. © The Author(s) 2016.
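
The core feature described in the two records above (the energy change between EC input and output as an indicator of target dominance) can be illustrated with a deliberately simplified sketch. The snippet below assumes the target is diotic (identical at both ears) so that EC reduces to subtraction, and uses decorrelated noise as the masker; the signals, mixing gains, and 6 dB threshold are all hypothetical, and real EC processing also equalizes interaural time and level differences:

```python
import math
import random

random.seed(3)

def energy(x):
    return sum(v * v for v in x)

def ec_feature(left, right):
    """Energy drop (dB) from Equalization-Cancellation: cancel a
    diotic target by subtracting the ear signals; target-dominated
    units lose most of their energy, masker-dominated units do not."""
    e_in = 0.5 * (energy(left) + energy(right))
    out = [l - r for l, r in zip(left, right)]
    return 10.0 * math.log10(e_in / max(energy(out), 1e-12))

def binary_mask(units, thresh_db=6.0):
    """One binary decision per time-frequency unit."""
    return [1 if ec_feature(l, r) > thresh_db else 0 for l, r in units]

# Hypothetical T-F units: diotic target, ear-decorrelated masker.
target = [random.gauss(0, 1) for _ in range(256)]
masker_l = [random.gauss(0, 1) for _ in range(256)]
masker_r = [random.gauss(0, 1) for _ in range(256)]

target_unit = ([t + 0.1 * m for t, m in zip(target, masker_l)],
               [t + 0.1 * m for t, m in zip(target, masker_r)])
masker_unit = ([0.1 * t + m for t, m in zip(target, masker_l)],
               [0.1 * t + m for t, m in zip(target, masker_r)])

mask = binary_mask([target_unit, masker_unit])
print(mask)  # the target-dominated unit is selected
```

Subtraction removes the common (target) component, so the target-dominated unit shows a large energy drop and is kept in the mask, while the masker-dominated unit shows little drop and is rejected.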

  19. Large Scale Management of Physicists Personal Analysis Data Without Employing User and Group Quotas

    International Nuclear Information System (INIS)

    Norman, A.; Diesbug, M.; Gheith, M.; Illingworth, R.; Lyon, A.; Mengel, M.

    2015-01-01

    The ability of modern HEP experiments to acquire and process unprecedented amounts of data and simulation has led to an explosion in the volume of information that individual scientists deal with on a daily basis. This explosion has resulted in a need for individuals to generate and keep large personal analysis data sets which represent the skimmed portions of official data collections pertaining to their specific analysis. While a significant reduction in size compared to the original data, these personal analysis and simulation sets can be many terabytes or tens of terabytes in size and consist of tens of thousands of files. When this personal data is aggregated across the many physicists in a single analysis group or experiment, it can represent data volumes on par with or exceeding the official production samples, which require special data handling techniques to deal with effectively. In this paper we explore the changes to the Fermilab computing infrastructure and computing models which have been developed to allow experimenters to effectively manage their personal analysis data and other data that falls outside of the typically centrally managed production chains. In particular we describe the models and tools that are being used to provide modern neutrino experiments like NOvA with storage resources that are sufficient to meet their analysis needs, without imposing specific quotas on users or groups of users. We discuss the storage mechanisms and caching algorithms that are being used, as well as the toolkits that have been developed to allow users to easily operate with terascale+ datasets. (paper)

  20. New Pathways between Group Theory and Model Theory

    CERN Document Server

    Fuchs, László; Goldsmith, Brendan; Strüngmann, Lutz

    2017-01-01

    This volume focuses on group theory and model theory with a particular emphasis on the interplay of the two areas. The survey papers provide an overview of the developments across group, module, and model theory while the research papers present the most recent study in those same areas. With introductory sections that make the topics easily accessible to students, the papers in this volume will appeal to beginning graduate students and experienced researchers alike. As a whole, this book offers a cross-section view of the areas in group, module, and model theory, covering topics such as DP-minimal groups, Abelian groups, countable 1-transitive trees, and module approximations. The papers in this book are the proceedings of the conference “New Pathways between Group Theory and Model Theory,” which took place February 1-4, 2016, in Mülheim an der Ruhr, Germany, in honor of the editors’ colleague Rüdiger Göbel. This publication is dedicated to Professor Göbel, who passed away in 2014. He was one of th...

  1. Research on large-scale wind farm modeling

    Science.gov (United States)

    Ma, Longfei; Zhang, Baoqun; Gong, Cheng; Jiao, Ran; Shi, Rui; Chi, Zhongjun; Ding, Yifeng

    2017-01-01

    Due to the intermittent and fluctuating properties of wind energy, a large-scale wind farm connected to the grid has a much greater impact on the power system than a traditional power plant. It is therefore necessary to establish an effective wind farm model to simulate and analyze the influence wind farms have on the grid, as well as the transient characteristics of the wind turbines when the grid is at fault. However, an effective WTG model must be established first. As the doubly-fed VSCF wind turbine is currently the mainstream wind turbine type, this article first reviews the research progress on doubly-fed VSCF wind turbines and then describes the detailed process of building the model. It then surveys common wind farm modeling methods and points out the problems encountered. As WAMS is widely used in the power system, online parameter identification of the wind farm model based on the output characteristics of the wind farm becomes possible; the article focuses on interpreting this new idea of identification-based modeling of large wind farms, which can be realized by two concrete methods.

  2. Modelling and control of large cryogenic refrigerator

    International Nuclear Information System (INIS)

    Bonne, Francois

    2014-01-01

    This manuscript is concerned with both the modeling and the derivation of control schemes for large cryogenic refrigerators, studying in particular those subjected to highly variable pulsed heat loads. A model of each object that normally composes a large cryo-refrigerator is proposed, together with a methodology for gathering object models into the model of a subsystem; the manuscript also shows how to obtain a linear equivalent model of the subsystem. Based on the derived models, advanced control schemes are proposed. Precisely, a linear quadratic controller for the warm compression station, working with either two or three pressure states, is derived, and a constrained predictive controller for the cold box is obtained. The particularity of these control schemes is that they fit the computing and data storage capabilities of the Programmable Logic Controllers (PLCs) widely used in industry. The open-loop model prediction capability is assessed using experimental data. The developed control schemes are validated in simulation and experimentally on the 400 W at 1.8 K SBT cryogenic test facility and on CERN's LHC warm compression station. (author) [fr]
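A linear quadratic controller of the kind mentioned above reduces, for a discrete-time linear model, to a fixed gain matrix computed offline, which is what makes it compatible with PLC resources. A minimal sketch with an invented 2-state plant (not the thesis's refrigerator model):

```python
import numpy as np

# Minimal LQR sketch (hypothetical 2-state plant, not the thesis's model):
# iterate the discrete Riccati recursion to a stationary solution, then form
# the fixed feedback gain K. A fixed gain suits PLC implementation.
A = np.array([[1.0, 0.1], [0.0, 0.9]])   # assumed linearised dynamics
B = np.array([[0.0], [0.1]])             # assumed input matrix
Q = np.eye(2)                            # state cost
R = np.array([[1.0]])                    # input cost

P = Q.copy()
for _ in range(500):                     # Riccati iteration
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# The closed loop A - B K should be stable (spectral radius < 1)
rho = max(abs(np.linalg.eigvals(A - B @ K)))
print(K, rho)
```

The control law applied online is then simply u = -K x, a single matrix-vector product per cycle.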

  3. Group navigation and the "many-wrongs principle" in models of animal movement.

    Science.gov (United States)

    Codling, E A; Pitchford, J W; Simpson, S D

    2007-07-01

    Traditional studies of animal navigation over both long and short distances have usually considered the orientation ability of the individual only, without reference to the implications of group membership. However, recent work has suggested that being in a group can significantly improve the ability of an individual to align toward and reach a target direction or point, even when all group members have limited navigational ability and there are no leaders. This effect is known as the "many-wrongs principle" since the large number of individual navigational errors across the group are suppressed by interactions and group cohesion. In this paper, we simulate the many-wrongs principle using a simple individual-based model of movement based on a biased random walk that includes group interactions. We study the ability of the group as a whole to reach a target given different levels of individual navigation error, group size, interaction radius, and environmental turbulence. In scenarios with low levels of environmental turbulence, simulation results demonstrate a navigational benefit from group membership, particularly for small group sizes. In contrast, when movement takes place in a highly turbulent environment, simulation results suggest that the best strategy is to navigate as individuals rather than as a group.
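The many-wrongs effect described above can be illustrated with a toy simulation. The update rule and parameters below are invented for illustration (the paper's model is a biased random walk with interaction radii and turbulence, which this sketch does not reproduce): each walker draws a noisy individual preference for the target direction and partially aligns with the group's mean heading, so individual errors average out.

```python
import math, random

# Toy many-wrongs sketch (simplified update rule and assumed parameters,
# not the paper's exact model): walkers aim at target angle 0 with large
# individual error; coupling to the group's mean heading suppresses it.
random.seed(1)

def mean_heading(angles):
    """Circular mean of a list of headings (radians)."""
    s = sum(math.sin(a) for a in angles)
    c = sum(math.cos(a) for a in angles)
    return math.atan2(s, c)

def simulate(n, error_sd, coupling, steps=200):
    headings = [random.gauss(0.0, error_sd) for _ in range(n)]
    for _ in range(steps):
        group = mean_heading(headings)
        # each walker blends a fresh noisy target estimate with the group mean
        headings = [
            (1 - coupling) * random.gauss(0.0, error_sd) + coupling * group
            for _ in headings
        ]
    # alignment with the target direction (1 = perfect alignment)
    return sum(math.cos(h) for h in headings) / n

solo = simulate(n=1, error_sd=1.0, coupling=0.0)
group = simulate(n=50, error_sd=1.0, coupling=0.5)
print(solo, group)
```

With these assumptions the coupled group ends up far better aligned with the target than a lone walker with the same individual error, which is the qualitative point of the many-wrongs principle.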

  4. Group-Based Active Learning of Classification Models.

    Science.gov (United States)

    Luo, Zhipeng; Hauskrecht, Milos

    2017-05-01

    Learning of classification models from real-world data often requires additional human expert effort to annotate the data. However, this process can be rather costly and finding ways of reducing the human annotation effort is critical for this task. The objective of this paper is to develop and study new ways of providing human feedback for efficient learning of classification models by labeling groups of examples. Briefly, unlike traditional active learning methods that seek feedback on individual examples, we develop a new group-based active learning framework that solicits label information on groups of multiple examples. In order to describe groups in a user-friendly way, conjunctive patterns are used to compactly represent groups. Our empirical study on 12 UCI data sets demonstrates the advantages and superiority of our approach over both classic instance-based active learning work, as well as existing group-based active-learning methods.
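The group-querying idea can be sketched as follows. The example is illustrative and not the authors' algorithm: groups are described by simple conjunctive patterns (feature thresholds), and a hypothetical oracle answers one query per group with the group's majority label, which is then propagated to all members.

```python
import numpy as np

# Illustrative group-based labelling sketch (not the paper's method):
# instead of labelling 400 individual points, we describe two groups by
# conjunctive patterns and ask a (simulated) oracle for each group's
# majority label, then propagate it to the members.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))
y_true = (X[:, 0] + 0.1 * rng.normal(size=400) > 0).astype(int)

def group_oracle(mask):
    """Simulated oracle: answers one query per *group* with its majority label."""
    return int(y_true[mask].mean() >= 0.5)

# Two conjunctive patterns over feature 0: (x0 <= 0) and (x0 > 0)
queries = 0
y_hat = np.zeros(400, dtype=int)
for mask in [X[:, 0] <= 0, X[:, 0] > 0]:
    y_hat[mask] = group_oracle(mask)
    queries += 1

acc = (y_hat == y_true).mean()
print(queries, acc)
```

Two oracle interactions stand in for 400 point-wise labels, at the cost of some label noise inside each group; the paper's framework chooses which groups to query and how to describe them compactly.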

  5. Modelling large scale human activity in San Francisco

    Science.gov (United States)

    Gonzalez, Marta

    2010-03-01

    Diverse group of people with a wide variety of schedules, activities and travel needs compose our cities nowadays. This represents a big challenge for modeling travel behaviors in urban environments; those models are of crucial interest for a wide variety of applications such as traffic forecasting, spreading of viruses, or measuring human exposure to air pollutants. The traditional means to obtain knowledge about travel behavior is limited to surveys on travel journeys. The obtained information is based in questionnaires that are usually costly to implement and with intrinsic limitations to cover large number of individuals and some problems of reliability. Using mobile phone data, we explore the basic characteristics of a model of human travel: The distribution of agents is proportional to the population density of a given region, and each agent has a characteristic trajectory size contain information on frequency of visits to different locations. Additionally we use a complementary data set given by smart subway fare cards offering us information about the exact time of each passenger getting in or getting out of the subway station and the coordinates of it. This allows us to uncover the temporal aspects of the mobility. Since we have the actual time and place of individual's origin and destination we can understand the temporal patterns in each visited location with further details. Integrating two described data set we provide a dynamical model of human travels that incorporates different aspects observed empirically.

  6. TOPICAL REVIEW: Nonlinear aspects of the renormalization group flows of Dyson's hierarchical model

    Science.gov (United States)

    Meurice, Y.

    2007-06-01

    We review recent results concerning the renormalization group (RG) transformation of Dyson's hierarchical model (HM). This model can be seen as an approximation of a scalar field theory on a lattice. We introduce the HM and show that its large symmetry group drastically simplifies the blockspinning procedure. Several equivalent forms of the recursion formula are presented with unified notations. Rigorous and numerical results concerning the recursion formula are summarized. It is pointed out that the recursion formula of the HM is inequivalent to both Wilson's approximate recursion formula and Polchinski's equation in the local potential approximation (despite the very small difference with the exponents of the latter). We draw a comparison between the RG of the HM and functional RG equations in the local potential approximation. The construction of the linear and nonlinear scaling variables is discussed in an operational way. We describe the calculation of non-universal critical amplitudes in terms of the scaling variables of two fixed points; this question appears as a problem of interpolation between these fixed points. Universal amplitude ratios are calculated. We discuss the large-N limit and the complex singularities of the critical potential calculable in this limit. The interpolation between the HM and more conventional lattice models is presented as a symmetry breaking problem. We briefly introduce models with an approximate supersymmetry. One important goal of this review is to present a configuration-space counterpart, suitable for lattice formulations, of functional RG equations formulated in momentum space (often called exact RG equations and abbreviated ERGE).
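For orientation, the HM recursion formula has the structure of a Gaussian weight multiplying a doubling convolution of the local measure. In schematic form (prefactors and normalisation are convention-dependent; this is a sketch rather than the review's exact equation):

```latex
W_{n+1}(\phi) \;\propto\; e^{\beta c \phi^{2}/4}
\int \mathrm{d}\phi'\,
W_{n}\!\left(\frac{\phi-\phi'}{2}\right)
W_{n}\!\left(\frac{\phi+\phi'}{2}\right)
```

Here $c$ is the parameter governing the decay of the hierarchical couplings, with the choice $c = 2^{1-2/D}$ commonly used to mimic a $D$-dimensional scalar theory.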

  7. On-line core monitoring system based on buckling corrected modified one group model

    International Nuclear Information System (INIS)

    Freire, Fernando S.

    2011-01-01

    Nuclear power reactors require core monitoring during plant operation: to provide safe, clean and reliable power, core conditions must be continuously evaluated. Currently, the reactor core monitoring process is carried out by nuclear code systems that, together with data from plant instrumentation such as thermocouples, ex-core detectors and fixed or movable in-core detectors, can readily predict and monitor a variety of plant conditions. Typically, standard nodal methods lie at the heart of such nuclear monitoring code systems. However, standard nodal methods require long computer running times compared with standard coarse-mesh finite difference schemes. Unfortunately, classic finite-difference models require a fine-mesh representation of the reactor core. To overcome this limitation, the classic modified one-group model can be used to account for the main neutronic behavior of the core; in this model a coarse-mesh core representation can be easily evaluated with a crude treatment of thermal neutron leakage. In this work, an improvement to the classic modified one-group model based on a buckling thermal correction was used to obtain a fast, accurate and reliable core monitoring methodology for future applications, providing a powerful tool for the core monitoring process. (author)
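As background, the classic modified one-group model (standard textbook form; the author's specific buckling thermal correction is not reproduced here) lumps fast and thermal leakage into the migration area:

```latex
k_{\mathrm{eff}} \;=\; \frac{k_{\infty}}{1 + M^{2}B^{2}},
\qquad M^{2} \;=\; L^{2} + \tau
```

where $k_{\infty}$ is the infinite-medium multiplication factor, $B^{2}$ the buckling, $L^{2}$ the thermal diffusion area and $\tau$ the Fermi age. A buckling correction refines how the thermal-leakage contribution enters this denominator on a coarse mesh.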

  8. Modeling containment of large wildfires using generalized linear mixed-model analysis

    Science.gov (United States)

    Mark Finney; Isaac C. Grenfell; Charles W. McHugh

    2009-01-01

    Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...

  9. Model of large pool fires

    Energy Technology Data Exchange (ETDEWEB)

    Fay, J.A. [Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)]. E-mail: jfay@mit.edu

    2006-08-21

    A two zone entrainment model of pool fires is proposed to depict the fluid flow and flame properties of the fire. Consisting of combustion and plume zones, it provides a consistent scheme for developing non-dimensional scaling parameters for correlating and extrapolating pool fire visible flame length, flame tilt, surface emissive power, and fuel evaporation rate. The model is extended to include grey gas thermal radiation from soot particles in the flame zone, accounting for emission and absorption in both optically thin and thick regions. A model of convective heat transfer from the combustion zone to the liquid fuel pool, and from a water substrate to cryogenic fuel pools spreading on water, provides evaporation rates for both adiabatic and non-adiabatic fires. The model is tested against field measurements of large scale pool fires, principally of LNG, and is generally in agreement with experimental values of all variables.

  10. Model of large pool fires

    International Nuclear Information System (INIS)

    Fay, J.A.

    2006-01-01

    A two zone entrainment model of pool fires is proposed to depict the fluid flow and flame properties of the fire. Consisting of combustion and plume zones, it provides a consistent scheme for developing non-dimensional scaling parameters for correlating and extrapolating pool fire visible flame length, flame tilt, surface emissive power, and fuel evaporation rate. The model is extended to include grey gas thermal radiation from soot particles in the flame zone, accounting for emission and absorption in both optically thin and thick regions. A model of convective heat transfer from the combustion zone to the liquid fuel pool, and from a water substrate to cryogenic fuel pools spreading on water, provides evaporation rates for both adiabatic and non-adiabatic fires. The model is tested against field measurements of large scale pool fires, principally of LNG, and is generally in agreement with experimental values of all variables

  11. Comparison Between Overtopping Discharge in Small and Large Scale Models

    DEFF Research Database (Denmark)

    Helgason, Einar; Burcharth, Hans F.

    2006-01-01

    The present paper presents overtopping measurements from small scale model tests performed at the Hydraulic & Coastal Engineering Laboratory, Aalborg University, Denmark and large scale model tests performed at the Large Wave Channel, Hannover, Germany. Comparison between results obtained from...... small and large scale model tests shows no clear evidence of scale effects for overtopping above a threshold value. In the large scale model no overtopping was measured for wave heights below Hs = 0.5 m as the water sank into the voids between the stones on the crest. For low overtopping, scale effects......

  12. Producing Distribution Maps for a Spatially-Explicit Ecosystem Model Using Large Monitoring and Environmental Databases and a Combination of Interpolation and Extrapolation

    Directory of Open Access Journals (Sweden)

    Arnaud Grüss

    2018-01-01

    To be able to simulate spatial patterns of predator-prey interactions, many spatially-explicit ecosystem modeling platforms, including Atlantis, need to be provided with distribution maps defining the annual or seasonal spatial distributions of functional groups and life stages. We developed a methodology combining extrapolation and interpolation of the predictions made by statistical habitat models to produce distribution maps for the fish and invertebrates represented in the Atlantis model of the Gulf of Mexico (GOM) Large Marine Ecosystem (LME) (“Atlantis-GOM”). This methodology consists of: (1) compiling a large monitoring database, gathering all the fisheries-independent and fisheries-dependent data collected in the northern (U.S.) GOM since 2000; (2) compiling a large environmental database, storing all the environmental parameters known to influence the spatial distribution patterns of fish and invertebrates of the GOM; (3) fitting binomial generalized additive models (GAMs) to the large monitoring and environmental databases, and geostatistical binomial generalized linear mixed models (GLMMs) to the large monitoring database; and (4) employing GAM predictions to infer spatial distributions in the southern GOM, and GLMM predictions to infer spatial distributions in the U.S. GOM. Thus, our methodology allows for reasonable extrapolation in the southern GOM based on a large amount of monitoring and environmental data, and for interpolation in the U.S. GOM accurately reflecting the probability of encountering fish and invertebrates in that region. We used an iterative cross-validation procedure to validate GAMs. When a GAM did not pass the validation test, we employed a GAM for a related functional group/life stage to generate distribution maps for the southern GOM. In addition, no geostatistical GLMMs were fit for the functional groups and life stages whose depth, longitudinal and latitudinal ranges within the U.S. GOM are not entirely covered by
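Step (3) above fits binomial models of encounter probability to monitoring and environmental data. As a minimal stand-in for the GAMs/GLMMs used in the paper (which need specialised libraries), the sketch below fits a plain binomial GLM (logistic regression) by iteratively reweighted least squares to synthetic presence/absence data; all covariate values and coefficients are invented.

```python
import numpy as np

# Minimal stand-in for fitting a binomial encounter-probability model
# (the paper used GAMs/GLMMs; here a plain logistic regression fit by
# IRLS/Newton on synthetic depth/temperature covariates, all hypothetical).
rng = np.random.default_rng(42)
n = 1000
depth = rng.uniform(0, 200, n)           # m
temp = rng.uniform(10, 30, n)            # deg C
X = np.column_stack([np.ones(n), depth, temp])
beta_true = np.array([-1.0, -0.02, 0.15])
p = 1 / (1 + np.exp(-X @ beta_true))
y = rng.binomial(1, p)                   # presence / absence observations

beta = np.zeros(3)
for _ in range(25):                      # IRLS / Newton iterations
    mu = 1 / (1 + np.exp(-(X @ beta)))
    W = mu * (1 - mu)                    # binomial variance weights
    grad = X.T @ (y - mu)
    H = X.T @ (X * W[:, None])           # observed information
    beta = beta + np.linalg.solve(H, grad)

print(beta)
```

The fitted coefficients land close to the generating values; a GAM replaces the linear terms with smooth functions of each covariate, and a geostatistical GLMM adds spatially correlated random effects.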

  13. Constituent rearrangement model and large transverse momentum reactions

    International Nuclear Information System (INIS)

    Igarashi, Yuji; Imachi, Masahiro; Matsuoka, Takeo; Otsuki, Shoichiro; Sawada, Shoji.

    1978-01-01

    In this chapter, two models based on the constituent rearrangement picture for large p_t phenomena are summarized. One is the quark-junction model, and the other is the correlating quark rearrangement model. The counting rules of the models apply to both two-body reactions and hadron production. (author)

  14. Large-signal modeling method for power FETs and diodes

    Energy Technology Data Exchange (ETDEWEB)

    Sun Lu; Wang Jiali; Wang Shan; Li Xuezheng; Shi Hui; Wang Na; Guo Shengping, E-mail: sunlu_1019@126.co [School of Electromechanical Engineering, Xidian University, Xi'an 710071 (China)

    2009-06-01

    Under a large-signal drive level, a frequency-domain black-box model based on the nonlinear scattering function is introduced for power FETs and diodes. A time-domain measurement system and a calibration method based on a digital oscilloscope are designed to extract the nonlinear scattering functions of semiconductor devices. The extracted models reflect the real electrical performance of the semiconductor devices and provide a new large-signal model for the design of microwave semiconductor circuits.

  15. Large-signal modeling method for power FETs and diodes

    International Nuclear Information System (INIS)

    Sun Lu; Wang Jiali; Wang Shan; Li Xuezheng; Shi Hui; Wang Na; Guo Shengping

    2009-01-01

    Under a large-signal drive level, a frequency-domain black-box model based on the nonlinear scattering function is introduced for power FETs and diodes. A time-domain measurement system and a calibration method based on a digital oscilloscope are designed to extract the nonlinear scattering functions of semiconductor devices. The extracted models reflect the real electrical performance of the semiconductor devices and provide a new large-signal model for the design of microwave semiconductor circuits.

  16. Dynamics of group knowledge production in facilitated modelling workshops

    DEFF Research Database (Denmark)

    Tavella, Elena; Franco, L. Alberto

    2015-01-01

    by which models are jointly developed with group members interacting face-to-face, with or without computer support. The models produced are used to inform negotiations about the nature of the issues faced by the group, and how to address them. While the facilitated modelling literature is impressive......, the workshop. Drawing on the knowledge-perspective of group communication, we conducted a micro-level analysis of a transcript of a facilitated modelling workshop held with the management team of an Alternative Food Network in the UK. Our analysis suggests that facilitated modelling interactions can take...

  17. Large scale stochastic spatio-temporal modelling with PCRaster

    NARCIS (Netherlands)

    Karssenberg, D.J.; Drost, N.; Schmitz, O.; Jong, K. de; Bierkens, M.F.P.

    2013-01-01

    PCRaster is a software framework for building spatio-temporal models of land surface processes (http://www.pcraster.eu). Building blocks of models are spatial operations on raster maps, including a large suite of operations for water and sediment routing. These operations are available to model

  18. Fast three-dimensional core optimization based on modified one-group model

    Energy Technology Data Exchange (ETDEWEB)

    Freire, Fernando S. [ELETROBRAS Termonuclear S.A. - ELETRONUCLEAR, Rio de Janeiro, RJ (Brazil). Dept. GCN-T], e-mail: freire@eletronuclear.gov.br; Martinez, Aquilino S.; Silva, Fernando C. da [Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear], e-mail: aquilino@con.ufrj.br, e-mail: fernando@con.ufrj.br

    2009-07-01

    The optimization of any nuclear reactor core is an extremely complex process that consumes a large amount of computer time. Fortunately, the nuclear designer can rely on a variety of methodologies able to approximate the analysis of each available core loading pattern. Two-dimensional codes are usually used to analyze the loading scheme. However, when particular axial effects are present in the core, two-dimensional analysis cannot produce good results and three-dimensional analysis may be required throughout. This paper presents the major advantages of using the modified one-group diffusion theory coupled with a buckling correction model in the optimization process. The results of the proposed model are very accurate when compared to benchmark results obtained from detailed calculations using three-dimensional nodal codes. (author)

  19. Fast three-dimensional core optimization based on modified one-group model

    International Nuclear Information System (INIS)

    Freire, Fernando S.; Martinez, Aquilino S.; Silva, Fernando C. da

    2009-01-01

    The optimization of any nuclear reactor core is an extremely complex process that consumes a large amount of computer time. Fortunately, the nuclear designer can rely on a variety of methodologies able to approximate the analysis of each available core loading pattern. Two-dimensional codes are usually used to analyze the loading scheme. However, when particular axial effects are present in the core, two-dimensional analysis cannot produce good results and three-dimensional analysis may be required throughout. This paper presents the major advantages of using the modified one-group diffusion theory coupled with a buckling correction model in the optimization process. The results of the proposed model are very accurate when compared to benchmark results obtained from detailed calculations using three-dimensional nodal codes. (author)

  20. Group-level self-definition and self-investment: a hierarchical (multicomponent) model of in-group identification.

    Science.gov (United States)

    Leach, Colin Wayne; van Zomeren, Martijn; Zebel, Sven; Vliek, Michael L W; Pennekamp, Sjoerd F; Doosje, Bertjan; Ouwerkerk, Jaap W; Spears, Russell

    2008-07-01

    Recent research shows individuals' identification with in-groups to be psychologically important and socially consequential. However, there is little agreement about how identification should be conceptualized or measured. On the basis of previous work, the authors identified 5 specific components of in-group identification and offered a hierarchical 2-dimensional model within which these components are organized. Studies 1 and 2 used confirmatory factor analysis to validate the proposed model of self-definition (individual self-stereotyping, in-group homogeneity) and self-investment (solidarity, satisfaction, and centrality) dimensions, across 3 different group identities. Studies 3 and 4 demonstrated the construct validity of the 5 components by examining their (concurrent) correlations with established measures of in-group identification. Studies 5-7 demonstrated the predictive and discriminant validity of the 5 components by examining their (prospective) prediction of individuals' orientation to, and emotions about, real intergroup relations. Together, these studies illustrate the conceptual and empirical value of a hierarchical multicomponent model of in-group identification.

  1. DMPy: a Python package for automated mathematical model construction of large-scale metabolic systems.

    Science.gov (United States)

    Smith, Robert W; van Rosmalen, Rik P; Martins Dos Santos, Vitor A P; Fleck, Christian

    2018-06-19

    Models of metabolism are often used in biotechnology and pharmaceutical research to identify drug targets or increase the direct production of valuable compounds. Due to the complexity of large metabolic systems, a number of conclusions have been drawn using mathematical methods with simplifying assumptions. For example, constraint-based models describe changes of internal concentrations that occur much quicker than alterations in cell physiology. Thus, metabolite concentrations and reaction fluxes are fixed to constant values. This greatly reduces the mathematical complexity, while providing a reasonably good description of the system in steady state. However, without a large number of constraints, many different flux sets can describe the optimal model and we obtain no information on how metabolite levels dynamically change. Thus, to accurately determine what is taking place within the cell, finer quality data and more detailed models need to be constructed. In this paper we present a computational framework, DMPy, that uses a network scheme as input to automatically search for kinetic rates and produce a mathematical model that describes temporal changes of metabolite fluxes. The parameter search utilises several online databases to find measured reaction parameters. From this, we take advantage of previous modelling efforts, such as Parameter Balancing, to produce an initial mathematical model of a metabolic pathway. We analyse the effect of parameter uncertainty on model dynamics and test how recent flux-based model reduction techniques alter system properties. To our knowledge this is the first time such analysis has been performed on large models of metabolism. Our results highlight that good estimates of at least 80% of the reaction rates are required to accurately model metabolic systems. Furthermore, reducing the size of the model by grouping reactions together based on fluxes alters the resulting system dynamics. The presented pipeline automates the
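The kind of kinetic model a DMPy-style pipeline produces can be sketched in miniature. Reaction names, rate laws and constants below are invented for illustration: a two-step pathway A -> B -> C with Michaelis-Menten kinetics, integrated by forward Euler to follow temporal changes of metabolite levels rather than fixing them at steady state.

```python
# Toy analogue of a pipeline-generated kinetic model (all names and
# constants hypothetical): pathway A -> B -> C with Michaelis-Menten
# kinetics, integrated with forward Euler to track metabolite dynamics.
def rates(a, b, vmax1=1.0, km1=0.5, vmax2=0.8, km2=0.3):
    v1 = vmax1 * a / (km1 + a)   # flux of A -> B
    v2 = vmax2 * b / (km2 + b)   # flux of B -> C
    return v1, v2

dt, steps = 0.01, 5000
a, b, c = 2.0, 0.0, 0.0          # initial concentrations (hypothetical mM)
for _ in range(steps):
    v1, v2 = rates(a, b)
    a += dt * (-v1)              # A consumed
    b += dt * (v1 - v2)          # B produced then consumed
    c += dt * v2                 # C accumulates

print(a, b, c)
```

By the end of the run nearly all mass has moved to C, and the total a + b + c is conserved by construction; parameter-uncertainty analysis as in the paper would rerun such a model over sampled rate constants.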

  2. Induction of continuous expanding infrarenal aortic aneurysms in a large porcine animal model

    DEFF Research Database (Denmark)

    Kloster, Brian Ozeraitis; Lund, Lars; Lindholt, Jes S.

    2015-01-01

    Background: A large animal model with a continuously expanding infrarenal aortic aneurysm gives access to a more realistic AAA model with anatomy and physiology similar to humans, and thus allows for new experimental research into the natural history and treatment options of the disease. Methods: 10 pigs......, hereafter the pigs were euthanized for inspection and AAA wall sampling for histological analysis. Results: In group A, all pigs developed continuously expanding AAA's with a mean increase in AP-diameter to 16.26 ± 0.93 mm, equivalent to a 57% increase. In group B the AP-diameters increased to 11.33 ± 0.13 mm... The most frequent complication was a neurological deficit in the lower limbs. Conclusion: In pigs it's possible to induce continuously expanding AAA's based upon proteolytic degradation and pathological flow, resembling the real-life dynamics of human aneurysms. Because the lumbars are preserved, it's also a potential......

  3. Nuclear spectroscopy in large shell model spaces: recent advances

    International Nuclear Information System (INIS)

    Kota, V.K.B.

    1995-01-01

    Three different approaches are now available for carrying out nuclear spectroscopy studies in large shell model spaces and they are: (i) the conventional shell model diagonalization approach but taking into account new advances in computer technology; (ii) the recently introduced Monte Carlo method for the shell model; (iii) the spectral averaging theory, based on central limit theorems, in indefinitely large shell model spaces. The various principles, recent applications and possibilities of these three methods are described and the similarity between the Monte Carlo method and the spectral averaging theory is emphasized. (author). 28 refs., 1 fig., 5 tabs

  4. Modeling Temporal Behavior in Large Networks: A Dynamic Mixed-Membership Model

    Energy Technology Data Exchange (ETDEWEB)

    Rossi, R; Gallagher, B; Neville, J; Henderson, K

    2011-11-11

    Given a large time-evolving network, how can we model and characterize the temporal behaviors of individual nodes (and network states)? How can we model the behavioral transition patterns of nodes? We propose a temporal behavior model that captures the 'roles' of nodes in the graph and how they evolve over time. The proposed dynamic behavioral mixed-membership model (DBMM) is scalable, fully automatic (no user-defined parameters), non-parametric/data-driven (no specific functional form or parameterization), interpretable (identifies explainable patterns), and flexible (applicable to dynamic and streaming networks). Moreover, the interpretable behavioral roles are generalizable, computationally efficient, and natively supports attributes. We applied our model for (a) identifying patterns and trends of nodes and network states based on the temporal behavior, (b) predicting future structural changes, and (c) detecting unusual temporal behavior transitions. We use eight large real-world datasets from different time-evolving settings (dynamic and streaming). In particular, we model the evolving mixed-memberships and the corresponding behavioral transitions of Twitter, Facebook, IP-Traces, Email (University), Internet AS, Enron, Reality, and IMDB. The experiments demonstrate the scalability, flexibility, and effectiveness of our model for identifying interesting patterns, detecting unusual structural transitions, and predicting the future structural changes of the network and individual nodes.
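The behavioral-transition idea above can be sketched with a toy example. The role labels and sequences are invented for illustration (the DBMM itself learns mixed memberships; this sketch only shows the transition-matrix step): given each node's role over time, estimate a Markov matrix T where T[i, j] is the probability of moving from role i to role j between snapshots.

```python
import numpy as np

# Toy sketch of role-transition estimation (roles and sequences invented;
# the actual DBMM learns mixed memberships, not hard labels).
sequences = [
    [0, 0, 1, 1, 2],   # node A's role at 5 consecutive snapshots
    [0, 1, 1, 2, 2],   # node B
    [1, 1, 1, 2, 2],   # node C
]
n_roles = 3
counts = np.zeros((n_roles, n_roles))
for seq in sequences:
    for i, j in zip(seq[:-1], seq[1:]):   # count observed transitions
        counts[i, j] += 1

# Row-normalise counts into transition probabilities
T = counts / counts.sum(axis=1, keepdims=True)
print(T)
```

Unusual behavior can then be flagged as transitions with low probability under T, and future role distributions predicted by multiplying a node's current role vector by T.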

  5. Group Buying Schemes : A Sustainable Business Model?

    OpenAIRE

    Köpp, Sebastian; Mukhachou, Aliaksei; Schwaninger, Markus

    2013-01-01

    The authors examine whether group buying schemes, such as those offered by the companies Groupon and Dein Deal, are a sustainable business model. Using the Groupon case study and a System Dynamics model, they find that the business model must be changed if the company is to remain viable in the long term.

  6. Traffic assignment models in large-scale applications

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær

    the potential of the method proposed and the possibility to use individual-based GPS units for travel surveys in real-life large-scale multi-modal networks. Congestion is known to highly influence the way we act in the transportation network (and organise our lives), because of longer travel times...... of observations of actual behaviour to obtain estimates of the (monetary) value of different travel time components, thereby increasing the behavioural realism of large-scale models. The generation of choice sets is a vital component in route choice models. This is, however, not a straightforward task in real......, but the reliability of the travel time also has a large impact on our travel choices. Consequently, in order to improve the realism of transport models, correct understanding and representation of two values that are related to the value of time (VoT) are essential: (i) the value of congestion (VoC), as the Vo...
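The choice-set generation task mentioned above can be illustrated with a generic link-penalty heuristic (one common approach in the route choice literature, not necessarily the thesis's method): repeatedly find the shortest path, then inflate the costs of its links so subsequent searches yield distinct alternatives. The network below is hypothetical.

```python
import heapq

# Illustrative link-penalty choice-set generation (generic heuristic, not
# necessarily the thesis's method) on a hypothetical network.
graph = {  # node -> {neighbour: travel time}
    "A": {"B": 2.0, "C": 4.0},
    "B": {"D": 3.0, "C": 2.0},
    "C": {"D": 2.0},
    "D": {},
}

def shortest_path(costs, src, dst):
    """Dijkstra's algorithm; returns the node list of the shortest path."""
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in costs[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

costs = {u: dict(vs) for u, vs in graph.items()}
choice_set = []
for _ in range(3):                        # up to 3 generation iterations
    path = shortest_path(costs, "A", "D")
    if path not in choice_set:
        choice_set.append(path)
    for u, v in zip(path[:-1], path[1:]): # penalise the links just used
        costs[u][v] *= 1.5
print(choice_set)
```

On this network the procedure returns the two physically distinct routes A-B-D and A-C-D; a route choice model would then be estimated over such a set.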

  7. Group Contribution Based Process Flowsheet Synthesis, Design and Modelling

    DEFF Research Database (Denmark)

    d'Anterroches, Loïc; Gani, Rafiqul

    2004-01-01

    This paper presents a process-group-contribution method to model, simulate and synthesize a flowsheet. The process-group based representation of a flowsheet together with a process "property" model are presented. The process-group based synthesis method is developed on the basis of the computer...... aided molecular design methods and gives the ability to screen numerous process alternatives without the need to use rigorous process simulation models. The process "property" model calculates the design targets for the generated flowsheet alternatives while a reverse modelling method (also...... developed) determines the design variables matching the target. A simple illustrative example highlighting the main features of the methodology is also presented....

  8. Investigating Facebook Groups through a Random Graph Model

    OpenAIRE

    Dinithi Pallegedara; Lei Pan

    2014-01-01

    Facebook disseminates messages for billions of users every day. Though log files are stored on central servers, law enforcement agencies outside of the U.S. cannot easily acquire server log files from Facebook. This work models Facebook user groups using a random graph model. Our aim is to help investigators quickly estimate the size of a Facebook group with which a suspect is involved. We estimate this group size according to the number of immediate friends and the number of ext...
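A first-order version of the estimation idea can be sketched under an assumed G(n, p) model (the paper's estimator also uses counts of extended friends, which this sketch omits): if friendships inside a group of size n form independently with known probability p, a member's expected number of in-group friends is p(n - 1), so n can be estimated from observed friend counts k as k/p + 1.

```python
import random

# First-order sketch of group-size estimation under an assumed G(n, p)
# model with known p (the paper also uses extended-friend counts, omitted):
# E[in-group friends] = p*(n-1), so n_hat = k/p + 1.
random.seed(7)
n, p = 200, 0.1                    # hypothetical true group size and density

def in_group_friends(n, p):
    """Simulate one member's number of friends inside a G(n, p) group."""
    return sum(1 for _ in range(n - 1) if random.random() < p)

# Average the estimate over several observed members of the same group
samples = [in_group_friends(n, p) for _ in range(100)]
n_hat = sum(samples) / len(samples) / p + 1
print(n_hat)
```

Averaging over several observed members keeps the estimate close to the true size of 200; with a single member the variance of the estimate is much larger.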

  9. Glucocorticoid induced osteopenia in cancellous bone of sheep: validation of large animal model for spine fusion and biomaterial research.

    Science.gov (United States)

    Ding, Ming; Cheng, Liming; Bollen, Peter; Schwarz, Peter; Overgaard, Søren

    2010-02-15

    Glucocorticoid with low calcium and phosphorus intake induces osteopenia in cancellous bone of sheep; the aim was to validate a large animal model for spine fusion and biomaterial research. A variety of ovariectomized animals have been used to study osteoporosis, but most experimental spine fusions were based on normal animals, and there is a great need for suitable large animal models with adequate bone size that closely resemble osteoporosis in humans. Eighteen skeletally mature female sheep were randomly allocated into 3 groups of 6 each. Group 1 (GC-1) received prednisolone (GC) treatment (0.60 mg/kg/day, 5 times weekly) for 7 months. Group 2 (GC-2) received the same treatment as GC-1 for 7 months, followed by 3 months without treatment. Group 3 was left untreated and served as the control. All sheep received a restricted diet low in calcium and phosphorus during the experiment. After the animals were killed, cancellous bone specimens from the vertebrae, femurs, and tibias were micro-CT scanned and tested mechanically, and serum biomarkers were determined. In the lumbar vertebrae, GC treatment resulted in a significant decrease of cancellous bone volume fraction, trabecular thickness, and bone strength. However, the microarchitecture and bone strength of GC-2 recovered to a level similar to the controls. A similar trend of microarchitectural changes was also observed in the distal femur and proximal tibia of both GC-treated groups. The bone formation marker serum osteocalcin was largely reduced in GC-1 compared to the controls, but recovered with a rebound increase at month 10 in GC-2. The current investigation demonstrates that the changes in microarchitecture and mechanical properties were comparable with those observed in humans after long-term GC treatment. Prolonged GC treatment is needed to maintain osteopenic bone for long-term observation. This model resembles a long-term glucocorticoid-treated osteoporotic model and is useful in preclinical studies.

  10. Large-scale hydrology in Europe : observed patterns and model performance

    Energy Technology Data Exchange (ETDEWEB)

    Gudmundsson, Lukas

    2011-06-15

    In a changing climate, terrestrial water storages are of great interest as water availability impacts key aspects of ecosystem functioning. Thus, a better understanding of the variations of wet and dry periods will contribute to fully grasping processes of the earth system such as nutrient cycling and vegetation dynamics. Currently, river runoff from small, nearly natural catchments is one of the few variables of the terrestrial water balance that is regularly monitored with detailed spatial and temporal coverage on large scales. River runoff therefore provides a foundation to approach European hydrology with respect to observed patterns on large scales, and with regard to the ability of models to capture these. The analysis of observed river flow from small catchments focused on the identification and description of spatial patterns of simultaneous temporal variations of runoff. These are dominated by large-scale variations of climatic variables but are also altered by catchment processes. It was shown that time series of annual low, mean and high flows follow the same atmospheric drivers. The observation that high flows are more closely coupled to large-scale atmospheric drivers than low flows indicates the increasing influence of catchment properties on runoff under dry conditions. Further, it was shown that the low-frequency variability of European runoff is dominated by two opposing centres of simultaneous variations, such that dry years in the north are accompanied by wet years in the south. Large-scale hydrological models are simplified representations of our current perception of the terrestrial water balance on large scales. Quantification of the models' strengths and weaknesses is the prerequisite for a reliable interpretation of simulation results. Model evaluations may also reveal shortcomings in model assumptions and thus enable a refinement of the current perception of hydrological systems. The ability of a multi-model ensemble of nine large

  11. Fuzzy classification of phantom parent groups in an animal model

    Directory of Open Access Journals (Sweden)

    Fikse Freddy

    2009-09-01

    Full Text Available Abstract Background Genetic evaluation models often include genetic groups to account for unequal genetic level of animals with unknown parentage. The definition of phantom parent groups usually includes a time component (e.g. years). Combining several time periods to ensure sufficiently large groups may create problems since all phantom parents in a group are considered contemporaries. Methods To avoid the downside of such a distinct classification, a fuzzy logic approach is suggested. A phantom parent can be assigned to several genetic groups, with proportions between zero and one that sum to one. Rules were presented for assigning coefficients to the inverse of the relationship matrix for fuzzy-classified genetic groups. This approach was illustrated with simulated data from ten generations of mass selection. Observations and pedigree records were randomly deleted. Phantom parent groups were defined on the basis of gender and generation number. In one scenario, uncertainty about generation of birth was simulated for some animals with unknown parents. In the distinct classification, one of the two possible generations of birth was randomly chosen to assign phantom parents to genetic groups for animals with simulated uncertainty, whereas the phantom parents were assigned to both possible genetic groups in the fuzzy classification. Results The empirical prediction error variance (PEV was somewhat lower for fuzzy-classified genetic groups. The ranking of animals with unknown parents was more correct and less variable across replicates in comparison with distinct genetic groups. In another scenario, each phantom parent was assigned to three groups, one pertaining to its gender, and two pertaining to the first and last generation, with proportion depending on the (true) generation of birth. Due to the lower number of groups, the empirical PEV of breeding values was smaller when genetic groups were fuzzy-classified. Conclusion Fuzzy
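The fuzzy assignment itself is simple to state: each phantom parent receives membership proportions between zero and one over the candidate groups, summing to one. A minimal sketch in Python; the linear interpolation rule and the group names are illustrative assumptions, not the exact rule used in the paper:

```python
def fuzzy_generation_weights(birth_gen, first_gen, last_gen):
    """Fractional membership of a phantom parent in a 'first' and a
    'last' generation group.  A linear rule between the two bounding
    generations is assumed here purely for illustration; the weights
    lie in [0, 1] and sum to one."""
    frac = (birth_gen - first_gen) / (last_gen - first_gen)
    return {"first_gen_group": 1.0 - frac, "last_gen_group": frac}
```

An animal whose generation of birth is uncertain between two adjacent generations would, in the same spirit, simply receive weight 0.5 on each candidate group instead of a random hard assignment.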

  12. A wide-range model of two-group cross sections in the dynamics code HEXTRAN

    International Nuclear Information System (INIS)

    Kaloinen, E.; Peltonen, J.

    2002-01-01

    In dynamic analyses the thermal hydraulic conditions within the reactor core may have a large variation, which sets a special requirement on the modeling of cross sections. The standard model in the dynamics code HEXTRAN is the same as in the static design code HEXBU-3D/MOD5. It is based on a linear and second order fitting of two-group cross sections on fuel and moderator temperature, moderator density and boron density. A new, wide-range model of cross sections, developed in Fortum Nuclear Services for HEXBU-3D/MOD6, has been included as an option in HEXTRAN. In this model the nodal cross sections are constructed from seven state variables in a polynomial of more than 40 terms. Coefficients of the polynomial are created by a least squares fitting to the results of a large number of fuel assembly calculations. Depending on the choice of state variables for the spectrum calculations, the new cross section model is capable of covering local conditions from cold zero power to boiling at full power. The 5th dynamic benchmark problem of AER is analyzed with the new option and the results are compared to calculations with the standard cross section model in HEXTRAN (Authors)
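The core mechanism, fitting nodal cross sections as a polynomial in state variables by least squares, can be sketched at toy scale. The snippet below fits a single cross section as a quadratic in one normalized state variable via the normal equations; the one-variable restriction and the synthetic data are simplifications for illustration (the HEXTRAN model uses seven state variables and more than 40 terms):

```python
def fit_poly_xs(states, sigma, degree=2):
    """Least-squares fit sigma(t) ~ c0 + c1*t + ... + c_d*t^d using
    the normal equations and Gaussian elimination (pure Python)."""
    n = degree + 1
    X = [[t ** k for k in range(n)] for t in states]        # design matrix
    A = [[sum(X[i][r] * X[i][c] for i in range(len(states))) for c in range(n)]
         for r in range(n)]                                  # A = X^T X
    b = [sum(X[i][r] * sigma[i] for i in range(len(states))) for r in range(n)]
    # forward elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # back substitution
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, n))) / A[r][r]
    return coef
```

In the real code the same least-squares machinery runs over many fuel assembly calculations spanning the whole state space, which is what lets one polynomial cover conditions from cold zero power to boiling at full power.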

  13. Modeling Group Interactions via Open Data Sources

    Science.gov (United States)

    2011-08-30

    data. The state-of-the-art search engines are designed for general query-specific search and are not suitable for finding disconnected online groups. The...groups, (2) developing innovative mathematical and statistical models and efficient algorithms that leverage existing search engines and employ

  14. Modelling the exposure of wildlife to radiation: key findings and activities of IAEA working groups

    Energy Technology Data Exchange (ETDEWEB)

    Beresford, Nicholas A. [NERC Centre for Ecology and Hydrology, Lancaster Environment Center, Library Av., Bailrigg, Lancaster, LA1 4AP (United Kingdom); School of Environment and Life Sciences, University of Salford, Manchester, M4 4WT (United Kingdom); Vives i Batlle, Jordi; Vandenhove, Hildegarde [Belgian Nuclear Research Centre, Belgian Nuclear Research Centre, Boeretang 200, 2400 Mol (Belgium); Beaugelin-Seiller, Karine [Institut de Radioprotection et de Surete Nucleaire (IRSN), PRP-ENV, SERIS, LM2E, Cadarache (France); Johansen, Mathew P. [ANSTO Australian Nuclear Science and Technology Organisation, New Illawarra Rd, Menai, NSW (Australia); Goulet, Richard [Canadian Nuclear Safety Commission, Environmental Risk Assessment Division, 280 Slater, Ottawa, K1A0H3 (Canada); Wood, Michael D. [School of Environment and Life Sciences, University of Salford, Manchester, M4 4WT (United Kingdom); Ruedig, Elizabeth [Department of Environmental and Radiological Health Sciences, Colorado State University, Fort Collins (United States); Stark, Karolina; Bradshaw, Clare [Department of Ecology, Environment and Plant Sciences, Stockholm University, SE-10691 (Sweden); Andersson, Pal [Swedish Radiation Safety Authority, SE-171 16, Stockholm (Sweden); Copplestone, David [Biological and Environmental Sciences, University of Stirling, Stirling, FK9 4LA (United Kingdom); Yankovich, Tamara L.; Fesenko, Sergey [International Atomic Energy Agency, Vienna International Centre, 1400, Vienna (Austria)

    2014-07-01

    In total, participants from 14 countries, representing 19 organisations, actively participated in the model application/inter-comparison activities of the IAEA's EMRAS II programme Biota Modelling Group. A range of models/approaches were used by participants (e.g. the ERICA Tool, RESRAD-BIOTA, the ICRP Framework). The agreed objectives of the group were: 'To improve Member States' capabilities for protection of the environment by comparing and validating models being used, or developed, for biota dose assessment (that may be used) as part of the regulatory process of licensing and compliance monitoring of authorised releases of radionuclides.' The activities of the group, the findings of which will be described, included: - An assessment of the predicted unweighted absorbed dose rates for 74 radionuclides estimated by 10 approaches for five of the ICRP's Reference Animal and Plant geometries assuming 1 Bq per unit organism or media. - Modelling the effect of heterogeneous distributions of radionuclides in sediment profiles on the estimated exposure of organisms. - Model prediction - field data comparisons for freshwater ecosystems in a uranium mining area and a number of wetland environments. - An evaluation of the application of available models to a scenario considering radioactive waste buried in shallow trenches. - Estimating the contribution of {sup 235}U to dose rates in freshwater environments. - Evaluation of the factors contributing to variation in modelling results. The work of the group continues within the framework of the IAEA's MODARIA programme, which was initiated in 2012. The work plan of the MODARIA working group has largely been defined by the findings of the previous EMRAS programme. On-going activities of the working group, which will be described, include the development of a database of dynamic parameters for wildlife dose assessment and exercises involving modelling the exposure of organisms in the marine coastal

  15. Homogenization of Large-Scale Movement Models in Ecology

    Science.gov (United States)

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
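The contrast the abstract draws can be made concrete. In Fickian diffusion the motility sits inside the divergence, while in ecological diffusion it sits inside the Laplacian, and homogenization replaces the rapidly varying motility with an effective constant. The harmonic-type averaging formula below is the form commonly stated for this setting in the homogenization literature; treat it as an assumption rather than a quotation from the paper:

```latex
% Fickian diffusion: movement organized along gradients of density
\partial_t u = \nabla \cdot \big( \mu(x)\, \nabla u \big)

% Ecological diffusion: motility \mu(x) acts inside the operator
\partial_t u = \Delta \big( \mu(x)\, u \big)

% Homogenized large-scale limit with effective motility \bar{\mu},
% a harmonic-type average over the small-scale habitat cell \Omega:
\partial_t \bar{u} = \bar{\mu}\, \Delta \bar{u},
\qquad
\bar{\mu} = \left( \frac{1}{|\Omega|} \int_{\Omega} \frac{dx}{\mu(x)} \right)^{-1}
```

The harmonic average is dominated by the slowest habitat types, which is why small patches of low-motility habitat can control movement at the 10-100 km scale.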

  16. ABOUT MODELING COMPLEX ASSEMBLIES IN SOLIDWORKS – LARGE AXIAL BEARING

    Directory of Open Access Journals (Sweden)

    Cătălin IANCU

    2017-12-01

    Full Text Available This paper presents the modeling strategy used in SOLIDWORKS for modeling special items such as a large axial bearing, and the steps to be taken in order to obtain a better design. The paper presents the features that are used for modeling the parts, and then the steps that must be taken in order to obtain the 3D model of a large axial bearing used in bucket-wheel equipment for moving charcoal.

  17. Leader-based and self-organized communication: modelling group-mass recruitment in ants.

    Science.gov (United States)

    Collignon, Bertrand; Deneubourg, Jean Louis; Detrain, Claire

    2012-11-21

    For collective decisions to be made, the information acquired by experienced individuals about resources' location has to be shared with naïve individuals through recruitment. Here, we investigate the properties of collective responses arising from a leader-based recruitment and a self-organized communication by chemical trails. We develop a generalized model based on biological data drawn from the Tetramorium caespitum ant species, whose collective foraging relies on the coupling of group leading and trail recruitment. We show that for leader-based recruitment, small groups of recruits have to be guided in a very efficient way to allow a collective exploitation of food, while large groups require less attention from their leader. In the case of self-organized recruitment through a chemical trail, a critical value of trail amount has to be laid per forager in order to launch collective food exploitation. Thereafter, ants can maintain collective foraging by emitting signal intensity below this threshold. Finally, we demonstrate how the coupling of both recruitment mechanisms may benefit collectively foraging species. These theoretical results are then compared with experimental data from recruitment by T. caespitum ant colonies performing group-mass recruitment towards a single food source. We highlight the key role of leaders as initiators and catalysts of recruitment before this leader-based process is overtaken by self-organized communication through trails. This model brings new insights as well as a theoretical background to empirical studies about cooperative foraging in group-living species. Copyright © 2012 Elsevier Ltd. All rights reserved.
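The threshold behaviour described above can be reproduced with a toy two-variable model: trail pheromone is laid at a rate q per engaged forager and evaporates, while trail strength recruits new foragers from a colony of fixed size. The equations and every parameter value here are illustrative assumptions, not the generalized model of the paper:

```python
def simulate_trail_recruitment(q, steps=20000, dt=0.01,
                               N=100, alpha=0.01, beta=0.5, k=1.0):
    """Euler integration of a minimal trail-recruitment model.
    c: pheromone amount on the trail, n: foragers engaged at the source.
    q: pheromone laid per forager (the control parameter);
    k: evaporation rate; alpha: trail-driven joining; beta: giving up.
    All values are hypothetical, chosen only to show the threshold."""
    c, n = 0.0, 1.0            # one informed forager seeds the trail
    for _ in range(steps):
        dc = q * n - k * c                      # laying minus evaporation
        dn = alpha * c * (N - n) - beta * n     # joining minus abandoning
        c += dt * dc
        n += dt * dn
    return n
```

With a generous trail deposit the positive feedback takes hold and most of the colony ends up foraging; below the critical deposit rate the trail decays faster than it recruits and the response dies out.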

  18. Towards a Pragmatic Model for Group-Based, Technology-Mediated, Project-Oriented Learning - An Overview of the B2C Model

    Science.gov (United States)

    Lawlor, John; Conneely, Claire; Tangney, Brendan

    The poor assimilation of ICT in formal education is firmly rooted in models of learning prevalent in the classroom which are largely teacher-led, individualistic and reproductive, with little connection between theory and practice and poor linkages across the curriculum. A new model of classroom practice is required to allow for creativity, peer-learning, thematic learning, collaboration and problem solving, i.e. the skills commonly deemed necessary for the knowledge-based society of the 21st century. This paper describes the B2C model for group-based, technology-mediated, project-oriented learning which, while being developed as part of an out-of-school programme, offers a pragmatic alternative to traditional classroom pedagogy.

  19. Rapid monitoring of large groups of internally contaminated people following a radiation accident

    International Nuclear Information System (INIS)

    1994-05-01

    In the management of an emergency, it is necessary to assess the radiation exposures of people in the affected areas. An essential component in the programme is the monitoring of internal contamination. Existing fixed installations for the assessment of incorporated radionuclides may be of limited value in these circumstances because they may be inconveniently sited, oversensitive for the purpose, or inadequately equipped and staffed to cope with the large numbers referred to them. The IAEA considered it important to produce guidance on rapid monitoring of large groups of internally contaminated people. The purpose of this document is to provide Member States with an overview on techniques that can be applied during abnormal or accidental situations. Refs and figs

  20. Do not Lose Your Students in Large Lectures: A Five-Step Paper-Based Model to Foster Students’ Participation

    Directory of Open Access Journals (Sweden)

    Mona Hassan Aburahma

    2015-07-01

    Full Text Available Like most of the pharmacy colleges in developing countries with high population growth, public pharmacy colleges in Egypt are experiencing a significant increase in student enrollment annually due to the large youth population, accompanied by the keenness of students to join pharmacy colleges as a step to a better future career. In this context, large lectures represent a popular approach for teaching the students, as economic and logistic constraints prevent splitting them into smaller groups. Nevertheless, the impact of large lectures on student learning has been widely questioned due to their educational limitations, which are related to the passive role the students maintain in lectures. Despite the reported weaknesses of large lectures and lecturing in general, large lectures will likely continue to be taught in the same format in these countries. Accordingly, to soften the negative impacts of large lectures, this article describes a simple and feasible 5-step paper-based model to transform lectures from a passive information delivery space into an active learning environment. This model mainly suits educational establishments with financial constraints; nevertheless, it can be applied to lectures presented in any educational environment to improve active participation of students. The components and the expected advantages of employing the 5-step paper-based model in large lectures, as well as its limitations and ways to overcome them, are presented briefly. The impact of applying this model on students' engagement and learning is currently being investigated.

  1. Optimization of large-scale heterogeneous system-of-systems models.

    Energy Technology Data Exchange (ETDEWEB)

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Lee, Herbert K. H. (University of California, Santa Cruz, Santa Cruz, CA); Hart, William Eugene; Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Woodruff, David L. (University of California, Davis, Davis, CA)

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  2. Synchrony and Physiological Arousal Increase Cohesion and Cooperation in Large Naturalistic Groups.

    Science.gov (United States)

    Jackson, Joshua Conrad; Jong, Jonathan; Bilkey, David; Whitehouse, Harvey; Zollmann, Stefanie; McNaughton, Craig; Halberstadt, Jamin

    2018-01-09

    Separate research streams have identified synchrony and arousal as two factors that might contribute to the effects of human rituals on social cohesion and cooperation. But no research has manipulated these variables in the field to investigate their causal, and potentially interactive, effects on prosocial behaviour. Across four experimental sessions involving large samples of strangers, we manipulated the synchronous and physiologically arousing affordances of a group marching task within a sports stadium. We observed participants' subsequent movement, grouping, and cooperation via a camera hidden in the stadium's roof. Synchrony and arousal both showed main effects, predicting larger groups, tighter clustering, and more cooperative behaviour in a free-rider dilemma. Synchrony and arousal also interacted on measures of clustering and cooperation, such that synchrony encouraged closer clustering, and greater cooperation, only when paired with physiological arousal. The research helps us understand why synchrony and arousal often co-occur in rituals around the world. It also represents the first use of real-time spatial tracking as a precise and naturalistic method of simulating collective rituals.

  3. Modeling, Analysis, and Optimization Issues for Large Space Structures

    Science.gov (United States)

    Pinson, L. D. (Compiler); Amos, A. K. (Compiler); Venkayya, V. B. (Compiler)

    1983-01-01

    Topics concerning the modeling, analysis, and optimization of large space structures are discussed including structure-control interaction, structural and structural dynamics modeling, thermal analysis, testing, and design.

  4. The EU model evaluation group

    International Nuclear Information System (INIS)

    Petersen, K.E.

    1999-01-01

    The model evaluation group (MEG) was launched in 1992, growing out of the Major Technological Hazards Programme with EU/DG XII. The goal of MEG was to improve the culture in which models were developed, particularly by encouraging voluntary model evaluation procedures based on a formalised and consensus protocol. The evaluation was intended to assess the fitness-for-purpose of the models being used as a measure of their quality. The approach adopted was focused on developing a generic model evaluation protocol and subsequently targeting it onto specific areas of application. Five such developments have been initiated, on heavy gas dispersion, liquid pool fires, gas explosions, human factors and momentum fires. The quality of models is an important element when complying with the 'Seveso Directive', which requires that the safety reports submitted to the authorities comprise an assessment of the extent and severity of the consequences of identified major accidents. Further, the quality of models becomes important in the land use planning process, where the proximity of industrial sites to vulnerable areas may be critical. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)

  5. An interactive display system for large-scale 3D models

    Science.gov (United States)

    Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman

    2018-04-01

    With the improvement of 3D reconstruction theory and the rapid development of computer hardware technology, reconstructed 3D models are growing in scale and complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing power limitations, it is difficult for common 3D display software, such as MeshLab, to achieve real-time display of and interaction with large-scale 3D models. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance, and then use an out-of-core, view-dependent multi-resolution rendering scheme to realize real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming the reconstructed scene, and the 3D camera poses can also be displayed. Furthermore, memory consumption can be significantly decreased via an internal/external memory exchange mechanism, so that it is possible to display a large-scale reconstructed scene with over millions of 3D points or triangular meshes on a regular PC with only 4 GB of RAM.
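A view-dependent multi-resolution scheme of this kind typically walks a precomputed LOD tree and refines a node only when its projected error at the current camera distance is too large. The sketch below shows just the selection step; the node layout and the error-over-distance metric are common conventions assumed for illustration, not the authors' actual implementation:

```python
import math
from dataclasses import dataclass, field

@dataclass
class LODNode:
    center: tuple                 # bounding-sphere centre (x, y, z)
    error: float                  # geometric error of this resolution level
    children: list = field(default_factory=list)  # finer-resolution nodes

def select_lod(node, camera, tau=0.005):
    """Return the set of nodes to render: refine a node while its
    view-dependent error (error / distance) exceeds the tolerance tau."""
    dist = max(math.dist(node.center, camera), 1e-9)
    if not node.children or node.error / dist <= tau:
        return [node]             # coarse enough at this distance
    selected = []
    for child in node.children:
        selected.extend(select_lod(child, camera, tau))
    return selected
```

In an out-of-core renderer the same traversal also drives paging: only the nodes that `select_lod` returns need their geometry resident in RAM, which is what keeps a multi-million-point scene within a 4 GB budget.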

  6. Modeling phytoplankton community in reservoirs. A comparison between taxonomic and functional groups-based models.

    Science.gov (United States)

    Di Maggio, Jimena; Fernández, Carolina; Parodi, Elisa R; Diaz, M Soledad; Estrada, Vanina

    2016-01-01

    In this paper we address the formulation of two mechanistic water quality models that differ in the way the phytoplankton community is described. We carry out parameter estimation subject to differential-algebraic constraints and validation for each model, and a comparison between the models' performance. The first approach aggregates phytoplankton species based on their phylogenetic characteristics (Taxonomic group model) and the second one on their morpho-functional properties following Reynolds' classification (Functional group model). The latter approach takes into account tolerance and sensitivity to environmental conditions. The constrained parameter estimation problems are formulated within an equation-oriented framework, with a maximum likelihood objective function. The study site is Paso de las Piedras Reservoir (Argentina), which supplies drinking water for a population of 450,000. Numerical results show that phytoplankton morpho-functional groups more closely represent each species' growth requirements within the group. Each model's performance is quantitatively assessed by three diagnostic measures. Parameter estimation results for seasonal dynamics of the phytoplankton community and main biogeochemical variables for a one-year time horizon are presented and compared for both models, showing the functional group model's enhanced performance. Finally, we explore increasing nutrient loading scenarios and predict their effect on phytoplankton dynamics throughout a one-year time horizon. Copyright © 2015 Elsevier Ltd. All rights reserved.
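The estimation strategy, maximizing likelihood subject to the model equations, can be illustrated at toy scale: with Gaussian observation error, maximum likelihood reduces to least squares between simulated and observed trajectories. The logistic growth stand-in, the grid search, and all values below are illustrative assumptions only, far simpler than the paper's equation-oriented DAE formulation:

```python
def simulate_logistic(r, K=10.0, p0=0.5, dt=0.1, steps=100):
    """Euler-integrate logistic phytoplankton growth dp/dt = r p (1 - p/K),
    a stand-in for the full biogeochemical model."""
    p, traj = p0, []
    for _ in range(steps):
        p += dt * r * p * (1 - p / K)
        traj.append(p)
    return traj

def fit_growth_rate(data, candidates):
    """Pick the growth rate maximizing the Gaussian likelihood, i.e.
    minimizing the sum of squared residuals over candidate values."""
    def sse(r):
        return sum((s - d) ** 2 for s, d in zip(simulate_logistic(r), data))
    return min(candidates, key=sse)
```

The real problem replaces the grid search with a constrained optimizer and the scalar ODE with the reservoir's differential-algebraic system, but the objective has the same shape.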

  7. Large-group psychodynamics and massive violence Psicodinâmica da violência de grandes grupos e da violência de massas

    Directory of Open Access Journals (Sweden)

    Vamik D. Volkan

    2006-06-01

    Full Text Available Beginning with Freud, psychoanalytic theories concerning large groups have mainly focused on individuals' perceptions of what their large groups psychologically mean to them. This chapter examines some aspects of large-group psychology in its own right and studies psychodynamics of ethnic, national, religious or ideological groups, the membership of which originates in childhood. I will compare the mourning process in individuals with the mourning process in large groups to illustrate why we need to study large-group psychology as a subject in itself. As part of this discussion I will also describe signs and symptoms of large-group regression. When there is a threat against a large group's identity, massive violence may be initiated and this violence, in turn, has an obvious impact on public health.

  8. Psicodinâmica da violência de grandes grupos e da violência de massas Large-group psychodynamics and massive violence

    Directory of Open Access Journals (Sweden)

    Vamik D. Volkan

    2006-01-01

    Full Text Available Beginning with Freud, psychoanalytic theories concerning large groups have mainly focused on individuals' perceptions of what their large groups psychologically mean to them. This text examines some aspects of large-group psychology in its own right and studies psychodynamics of ethnic, national, religious or ideological groups, the membership of which originates in childhood. I will compare the mourning process in individuals with the mourning process in large groups to illustrate why we need to study large-group psychology as a subject in itself. As part of this discussion I will also describe signs and symptoms of large-group regression. When there is a threat against a large group's identity, massive violence may be initiated and this violence, in turn, has an obvious impact on public health.

  9. A collision avoidance model for two-pedestrian groups: Considering random avoidance patterns

    Science.gov (United States)

    Zhou, Zhuping; Cai, Yifei; Ke, Ruimin; Yang, Jiwei

    2017-06-01

    Grouping is a common phenomenon in pedestrian crowds, and group modeling is still an open, challenging problem. When grouping pedestrians avoid each other, different patterns can be observed: pedestrians can keep close to group members and avoid other groups as a cluster, or they can avoid other groups separately. Considering this randomness in avoidance patterns, we propose a collision avoidance model for two-pedestrian groups. In our model, the avoidance model is first built on the velocity obstacle method. The grouping model is then established using a distance-constrained line (DCL); by transforming the DCL into the framework of the velocity obstacle, the avoidance model and grouping model are put into one unified calculation structure. Within this structure, an algorithm is developed to resolve cases where the solutions of the two models conflict with each other. Two groups of bidirectional pedestrian experiments are designed to verify the model. The accuracy of avoidance behavior and grouping behavior is validated at the microscopic level, while the lane formation phenomenon and fundamental diagrams are validated at the macroscopic level. The experimental results show that our model is convincing and extends readily to describing three or more pedestrian groups.
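The velocity obstacle test at the heart of such avoidance models is compact: a candidate velocity is inadmissible if, with both agents holding constant velocity, the relative motion brings them within the sum of their radii. A minimal 2D version follows; the disc-shaped agents and constant-velocity assumption are the usual textbook simplifications, not the paper's full formulation:

```python
def in_velocity_obstacle(p_a, p_b, v_a, v_b, r_sum):
    """Return True if agent A at p_a moving with v_a will pass within
    r_sum of agent B at p_b moving with v_b (constant velocities)."""
    px, py = p_b[0] - p_a[0], p_b[1] - p_a[1]   # relative position
    vx, vy = v_a[0] - v_b[0], v_a[1] - v_b[1]   # relative velocity
    vv = vx * vx + vy * vy
    if vv == 0.0:                                # no relative motion
        return px * px + py * py < r_sum * r_sum
    t = (px * vx + py * vy) / vv                 # time of closest approach
    if t < 0.0:
        return False                             # already moving apart
    cx, cy = px - vx * t, py - vy * t            # miss vector at time t
    return cx * cx + cy * cy < r_sum * r_sum
```

An avoidance step then searches for the velocity closest to the preferred one for which this test is False for every neighbour, with the DCL grouping constraint added on top in the paper's unified structure.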

  10. Metallogenic model for continental volcanic-type rich and large uranium deposits

    International Nuclear Information System (INIS)

    Chen Guihua

    1998-01-01

    A metallogenic model for continental volcanic-type rich and large/super large uranium deposits has been established on the basis of an analysis of the occurrence features and ore-forming mechanisms of some continental volcanic-type rich and large/super large uranium deposits in the world. The model proposes that uranium-enriched granite or granitic basement is the foundation, premetallogenic polycyclic and multistage volcanic eruptions are prerequisites, an intense tectonic-extensional environment is the key for ore formation, and a relatively enclosed geologic setting is the reliable protection condition of the deposit. Using the model, the author explains the occurrence regularities of some rich and large/super large uranium deposits such as the Strelichof uranium deposit in Russia, the Dornot uranium deposit in Mongolia, the Olympic Dam Cu-U-Au-REE deposit in Australia, and uranium deposit No. 460 and the Zhoujiashan uranium deposit in China, and then compares the above deposits with the large but poor uranium deposit No. 661.

  11. Group heterogeneity increases the risks of large group size: a longitudinal study of productivity in research groups.

    Science.gov (United States)

    Cummings, Jonathon N; Kiesler, Sara; Bosagh Zadeh, Reza; Balakrishnan, Aruna D

    2013-06-01

    Heterogeneous groups are valuable, but differences among members can weaken group identification. Weak group identification may be especially problematic in larger groups, which, in contrast with smaller groups, require more attention to motivating members and coordinating their tasks. We hypothesized that as groups increase in size, productivity would decrease with greater heterogeneity. We studied the longitudinal productivity of 549 research groups varying in disciplinary heterogeneity, institutional heterogeneity, and size. We examined their publication and citation productivity before their projects started and 5 to 9 years later. Larger groups were more productive than smaller groups, but their marginal productivity declined as their heterogeneity increased, either because their members belonged to more disciplines or to more institutions. These results provide evidence that group heterogeneity moderates the effects of group size, and they suggest that desirable diversity in groups may be better leveraged in smaller, more cohesive units.

  12. Group Clustering Mechanism for P2P Large Scale Data Sharing Collaboration

    Institute of Scientific and Technical Information of China (English)

    DENG Qianni; LU Xinda; CHEN Li

    2005-01-01

    Research shows that P2P scientific collaboration networks will exhibit small-world topology, as do a large number of social networks for which the same pattern has been documented. In this paper we propose a topology building protocol to benefit from the small-world feature. We find that the idea of Freenet resembles the dynamic pattern of social interactions in scientific data sharing, and that the small-world characteristic of Freenet helps improve file locating performance in scientific data sharing. However, the LRU (Least Recently Used) datastore cache replacement scheme of Freenet is not suitable for a scientific data sharing network. Based on the group locality of scientific collaboration, we propose an enhanced group clustering cache replacement scheme. Simulation shows that this scheme improves the request hit ratio dramatically while keeping the average hops per successful request comparable to LRU.
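
The contrast the abstract draws can be illustrated with a plain LRU datastore cache and a group-aware variant. This is our own minimal sketch of the idea of group locality, not the protocol's actual replacement scheme:

```python
from collections import OrderedDict

class LRUCache:
    """Freenet-style datastore cache baseline: evict the least recently
    used key."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()             # ordered oldest -> newest

    def _evict(self):
        self.store.popitem(last=False)         # drop the oldest entry

    def access(self, key, value=None):
        if key in self.store:
            self.store.move_to_end(key)        # mark as recently used
            return self.store[key]
        if value is not None:
            if len(self.store) >= self.capacity:
                self._evict()
            self.store[key] = value
        return value

class GroupLRUCache(LRUCache):
    """Group-clustering variant (hypothetical interface): evict a key from
    the least recently *active* group, so the working set of the group
    currently collaborating on the data stays resident."""
    def __init__(self, capacity, group_of):
        super().__init__(capacity)
        self.group_of = group_of               # maps key -> group id

    def _evict(self):
        order = list(self.store)               # oldest -> newest
        newest = {}
        for i, k in enumerate(order):
            newest[self.group_of(k)] = i       # later keys overwrite earlier
        victim_group = min(newest, key=newest.get)
        for k in order:                        # drop that group's oldest key
            if self.group_of(k) == victim_group:
                del self.store[k]
                return
```

With keys grouped by their first character, inserting a new key into a full `GroupLRUCache` evicts from the group that has been inactive longest, instead of the single globally oldest key.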

  13. Modeling and simulation of large HVDC systems

    Energy Technology Data Exchange (ETDEWEB)

    Jin, H.; Sood, V.K.

    1993-01-01

    This paper addresses the complexity and effort involved in preparing simulation data and implementing various converter control schemes, as well as the excessive simulation time, in the modelling and simulation of large HVDC systems. The Power Electronic Circuit Analysis program (PECAN) is used to address these problems, and a large HVDC system with two dc links is simulated using PECAN. A benchmark HVDC system is studied to compare the simulation results with those from other packages. The simulation times and results are provided in the paper.

  14. Introducing the fit-criteria assessment plot - A visualisation tool to assist class enumeration in group-based trajectory modelling.

    Science.gov (United States)

    Klijn, Sven L; Weijenberg, Matty P; Lemmens, Paul; van den Brandt, Piet A; Lima Passos, Valéria

    2017-10-01

    Background and objective: Group-based trajectory modelling is a model-based clustering technique applied for the identification of latent patterns of temporal changes. Despite its manifold applications in clinical and health sciences, potential problems of the model selection procedure are often overlooked. The choice of the number of latent trajectories (class enumeration), for instance, is to a large degree based on statistical criteria that are not fail-safe. Moreover, the process as a whole is not transparent. To facilitate class enumeration, we introduce a graphical summary display of several fit and model adequacy criteria, the fit-criteria assessment plot. Methods: An R-code that accepts universal data input is presented. The programme condenses relevant group-based trajectory modelling output information on model fit indices into automated graphical displays. Examples based on real and simulated data are provided to illustrate, assess and validate the fit-criteria assessment plot's utility. Results: The fit-criteria assessment plot provides an overview of fit criteria on a single page, placing users in an informed position to make a decision. It does not automatically select the most appropriate model but eases the model assessment procedure. Conclusions: The fit-criteria assessment plot is an exploratory visualisation tool that can be employed to assist decisions in the initial and decisive phases of group-based trajectory modelling analysis. Considering group-based trajectory modelling's widespread resonance in medical and epidemiological sciences, a more comprehensive, easily interpretable and transparent display of the iterative process of class enumeration may foster its adequate use.
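
The paper's tool is an R program; to illustrate the underlying idea only, one can condense fit criteria for candidate class counts into a single summary that flags the best model per criterion (a Python sketch with illustrative criterion names, not the fit-criteria assessment plot's actual output):

```python
def fit_criteria_summary(models):
    """Condense fit indices for candidate class counts into one text table,
    flagging (*) the best model per criterion. Lower is taken as better for
    every criterion, as for AIC/BIC-type indices. `models` maps class
    count -> {criterion name: value}."""
    criteria = sorted({c for m in models.values() for c in m})
    best = {c: min(models, key=lambda k: models[k].get(c, float("inf")))
            for c in criteria}
    lines = ["k    " + "  ".join(f"{c:>10}" for c in criteria)]
    for k in sorted(models):
        row = [f"{k:<4}"]
        for c in criteria:
            mark = "*" if best[c] == k else " "
            row.append(f"{models[k][c]:>9.1f}{mark}")
        lines.append("  ".join(row))
    return "\n".join(lines)
```

Seeing all criteria side by side on one "page" is the point: agreement across criteria supports a class count, while disagreement signals that the choice needs substantive justification.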

  15. Report of the large solenoid detector group

    International Nuclear Information System (INIS)

    Hanson, G.G.; Mori, S.; Pondrom, L.G.

    1987-09-01

    This report presents a conceptual design of a large solenoid for studying physics at the SSC. The parameters and nature of the detector have been chosen based on present estimates of what is required to allow the study of heavy quarks, supersymmetry, heavy Higgs particles, WW scattering at large invariant masses, new W and Z bosons, and very large momentum transfer parton-parton scattering. Simply stated, the goal is to obtain optimum detection and identification of electrons, muons, neutrinos, jets, W's and Z's over a large rapidity region. The primary region of interest extends over ±3 units of rapidity, although the calorimetry must extend to ±5.5 units if optimal missing energy resolution is to be obtained. A magnetic field was incorporated because of the importance of identifying the signs of the charges for both electrons and muons and because of the added possibility of identifying tau leptons and secondary vertices. In addition, the existence of a magnetic field may prove useful for studying new physics processes about which we currently have no knowledge. Since hermeticity of the calorimetry is extremely important, the entire central and endcap calorimeters were located inside the solenoid. This does not at the moment seem to produce significant problems (although many issues remain to be resolved) and in fact leads to a very effective muon detector in the central region.

  16. Wave propagation model of heat conduction and group speed

    Science.gov (United States)

    Zhang, Long; Zhang, Xiaomin; Peng, Song

    2018-03-01

    In view of the finite-relaxation model of non-Fourier heat conduction, the Cattaneo and Vernotte (CV) model and Fourier's law are presented in this work for comparing wave propagation modes. Independent variable translation is applied to solve the partial differential equation. Results show that the general form of the time-spatial distribution of temperature for the three media comprises two solutions: those corresponding to the positive and negative logarithmic heating rates. The former shows that a group of heat waves whose spatial distribution follows the exponential function law propagates at a group speed; the speed of propagation is related to the logarithmic heating rate. The total speed of all the possible heat waves can be combined to form the group speed of the wave propagation. The latter indicates that the spatial distribution of temperature, which follows the exponential function law, decays with time. These features show that propagation accelerates when heated and decelerates when cooled. For the model media that follow Fourier's law and correspond to the positive heating rate of heat conduction, the propagation mode is also considered the propagation of a group of heat waves because the group speed has no upper bound. For the finite-relaxation model with non-Fourier media, the interval of group speed is bounded and the maximum speed is obtained when the logarithmic heating rate is exactly the reciprocal of the relaxation time. For the CV model with a non-Fourier medium, the interval of group speed is also bounded and the maximum value is obtained when the logarithmic heating rate is infinite.
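
For reference, the CV constitutive law and the hyperbolic heat equation it implies can be written in their standard form (reconstructed here from the literature, not taken from the paper):

```latex
\tau \frac{\partial \mathbf{q}}{\partial t} + \mathbf{q} = -k\,\nabla T
\qquad\Longrightarrow\qquad
\tau \frac{\partial^{2} T}{\partial t^{2}} + \frac{\partial T}{\partial t} = \alpha\,\nabla^{2} T ,
```

where $\tau$ is the relaxation time and $\alpha$ the thermal diffusivity. The thermal signal speed $c=\sqrt{\alpha/\tau}$ is finite for $\tau>0$, and Fourier's law is recovered as $\tau \to 0$, consistent with the unbounded group speed the abstract reports for Fourier media and the bounded intervals it reports for the non-Fourier cases.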

  17. Hydrogen combustion modelling in large-scale geometries

    International Nuclear Information System (INIS)

    Studer, E.; Beccantini, A.; Kudriakov, S.; Velikorodny, A.

    2014-01-01

    Hydrogen risk mitigation based on catalytic recombiners cannot exclude the formation of flammable clouds during the course of a severe accident in a Nuclear Power Plant. The consequences of combustion processes therefore have to be assessed based on existing knowledge and the state of the art in CFD combustion modelling. The Fukushima accidents have also revealed the need to take hydrogen explosion phenomena into account in risk management. Combustion modelling in large-scale geometries is thus one of the remaining severe accident safety issues. At present, no combustion model exists that can accurately describe a combustion process inside a geometrical configuration typical of the Nuclear Power Plant (NPP) environment. Model development must therefore focus on adapting existing approaches, or creating new ones, capable of reliably predicting the possibility of flame acceleration in geometries of that type. A set of experiments performed previously in the RUT facility and the Heiss Dampf Reactor (HDR) facility is used as a validation database for the development of a three-dimensional gas dynamic model for the simulation of hydrogen-air-steam combustion in large-scale geometries. The combustion regimes include slow deflagration, fast deflagration, and detonation. Modelling is based on the Reactive Discrete Equation Method (RDEM), in which the flame is represented as an interface separating reactants and combustion products. The transport of the progress variable is governed by different flame surface wrinkling factors. The results of the numerical simulations are presented together with comparisons, critical discussions and conclusions. (authors)

  18. Modeling of 3D Aluminum Polycrystals during Large Deformations

    International Nuclear Information System (INIS)

    Maniatty, Antoinette M.; Littlewood, David J.; Lu Jing; Pyle, Devin

    2007-01-01

    An approach for generating, meshing, and modeling 3D polycrystals, with a focus on aluminum alloys, subjected to large deformation processes is presented. A Potts type model is used to generate statistically representative grain structures with periodicity to allow scale-linking. The grain structures are compared to experimentally observed grain structures to validate that they are representative. A procedure for generating a geometric model from the voxel data is developed allowing for adaptive meshing of the generated grain structure. Material behavior is governed by an appropriate crystal, elasto-viscoplastic constitutive model. The elastic-viscoplastic model is implemented in a three-dimensional, finite deformation, mixed, finite element program. In order to handle the large-scale problems of interest, a parallel implementation is utilized. A multiscale procedure is used to link larger scale models of deformation processes to the polycrystal model, where periodic boundary conditions on the fluctuation field are enforced. Finite-element models, of 3D polycrystal grain structures will be presented along with observations made from these simulations
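
The Potts-type grain-structure generation step can be sketched in two dimensions (the paper works in 3-D). Labels are grain identities; zero-temperature flips with periodic boundaries coarsen the structure, mimicking the periodic, statistically representative grains described above. This is a minimal illustrative sketch, not the authors' generator:

```python
import random

def potts_grain_growth(n, q, steps, seed=0):
    """Zero-temperature Potts dynamics on an n x n periodic grid with q
    grain labels: pick a site, try a neighbour's label, and accept the
    flip if it does not increase the local boundary energy (number of
    unlike neighbours)."""
    rng = random.Random(seed)
    grid = [[rng.randint(1, q) for _ in range(n)] for _ in range(n)]

    def neighbours(i, j):
        return [grid[(i - 1) % n][j], grid[(i + 1) % n][j],
                grid[i][(j - 1) % n], grid[i][(j + 1) % n]]

    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        nbrs = neighbours(i, j)
        new = rng.choice(nbrs)                  # try a neighbouring grain
        old_e = sum(1 for s in nbrs if s != grid[i][j])
        new_e = sum(1 for s in nbrs if s != new)
        if new_e <= old_e:                      # non-increasing energy only
            grid[i][j] = new
    return grid
```

Because each accepted flip only changes bonds incident to the flipped site, the total boundary energy is non-increasing, which is what drives the grain coarsening.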

  19. An accurate and simple large signal model of HEMT

    DEFF Research Database (Denmark)

    Liu, Qing

    1989-01-01

    A large-signal model of discrete HEMTs (high-electron-mobility transistors) has been developed. It is simple and suitable for SPICE simulation of hybrid digital ICs. The model parameters are extracted by using computer programs and data provided by the manufacturer. Based on this model, a hybrid...

  20. Linear mixed-effects modeling approach to FMRI group analysis.

    Science.gov (United States)

    Chen, Gang; Saad, Ziad S; Britton, Jennifer C; Pine, Daniel S; Cox, Robert W

    2013-06-01

    Conventional group analysis is usually performed with a Student-type t-test, regression, or standard AN(C)OVA, in which the variance-covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at the group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variable (covariate) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or a mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of the hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would otherwise be either difficult or infeasible under traditional frameworks such as AN(C)OVA and the general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance-covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects.
The simulations of one prototypical scenario indicate that the LME modeling keeps a balance between the control for false positives and the sensitivity
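
One ingredient mentioned above, the ICC, can be made concrete in its simplest special case: the one-way random-effects ICC(1), computable directly from the mean squares of a one-way ANOVA. The LME framework generalizes this; the sketch below is our own stdlib illustration, not the authors' implementation:

```python
import math
from statistics import mean

def icc_oneway(data):
    """One-way random-effects ICC(1) from per-subject measurement lists
    (all of equal length k): ICC = (MSB - MSW) / (MSB + (k - 1) * MSW),
    where MSB/MSW are the between- and within-subject mean squares."""
    k = len(data[0])
    n = len(data)
    grand = mean(v for row in data for v in row)
    subj_means = [mean(row) for row in data]
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((v - m) ** 2
              for row, m in zip(data, subj_means)
              for v in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfectly repeatable subjects give ICC = 1; when all of the variance is within subjects, the estimate can go negative, which is one reason the LME formulation with explicit random effects is preferable in practice.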

  1. Aero-Acoustic Modelling using Large Eddy Simulation

    International Nuclear Information System (INIS)

    Shen, W Z; Soerensen, J N

    2007-01-01

    The splitting technique for aero-acoustic computations is extended to simulate three-dimensional flow and acoustic waves from airfoils. The aero-acoustic model is coupled to a sub-grid-scale turbulence model for Large-Eddy Simulations. In the first test case, the model is applied to compute laminar flow past a NACA 0015 airfoil at a Reynolds number of 800, a Mach number of 0.2 and an angle of attack of 20 deg. The model is then applied to compute turbulent flow past a NACA 0015 airfoil at a Reynolds number of 100 000, a Mach number of 0.2 and an angle of attack of 20 deg. The predicted noise spectrum is compared to experimental data

  2. Group Elevator Peak Scheduling Based on Robust Optimization Model

    Directory of Open Access Journals (Sweden)

    ZHANG, J.

    2013-08-01

    Scheduling of an Elevator Group Control System (EGCS) is a typical combinatorial optimization problem. Uncertain group scheduling under peak traffic flows has recently become a research focus and difficulty. Robust Optimization (RO) is a novel and effective way to deal with uncertain scheduling problems. In this paper, a peak scheduling method based on an RO model for multi-elevator systems is proposed. The method is immune to the uncertainty of peak traffic flows: optimal scheduling is realized without knowing the exact number of waiting passengers on each calling floor. Specifically, an energy-saving oriented multi-objective scheduling cost is proposed, and an uncertain RO peak scheduling model is built to minimize this cost. Because the uncertain RO model cannot be solved directly, it is transformed into a deterministic RO model via robust counterparts of the elevator scheduling constraints. Because the solution space of elevator scheduling is enormous, an ant colony algorithm is proposed to solve the deterministic model in a short time. Based on this algorithm, optimal scheduling solutions are found quickly, and the group elevators are scheduled according to these solutions. Simulation results show that the method effectively improves scheduling performance in the peak pattern, and efficient operation of the elevator group is realized by the RO scheduling method.

  3. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    NARCIS (Netherlands)

    Loon, van A.F.; Huijgevoort, van M.H.J.; Lanen, van H.A.J.

    2012-01-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is how well do large-scale models simulate the propagation from meteorological to hydrological

  4. Balancing selfishness and norm conformity can explain human behavior in large-scale prisoner's dilemma games and can poise human groups near criticality

    Science.gov (United States)

    Realpe-Gómez, John; Andrighetto, Giulia; Nardin, Luis Gustavo; Montoya, Javier Antonio

    2018-04-01

    Cooperation is central to the success of human societies as it is crucial for overcoming some of the most pressing social challenges of our time; still, how human cooperation is achieved and may persist is a main puzzle in the social and biological sciences. Recently, scholars have recognized the importance of social norms as solutions to major local and large-scale collective action problems, from the management of water resources to the reduction of smoking in public places to the change in fertility practices. Yet a well-founded model of the effect of social norms on human cooperation is still lacking. Using statistical-physics techniques and integrating findings from cognitive and behavioral sciences, we present an analytically tractable model in which individuals base their decisions to cooperate both on the economic rewards they obtain and on the degree to which their action complies with social norms. Results from this parsimonious model are in agreement with observations in recent large-scale experiments with humans. We also find the phase diagram of the model and show that the experimental human group is poised near a critical point, a regime where recent work suggests living systems respond to changing external conditions in an efficient and coordinated manner.
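
The paper's decision rule can be caricatured as a logit choice over a utility that mixes economic payoff with norm compliance. Parameter names below are ours, not the paper's notation, and this is a sketch of the idea rather than the model itself:

```python
import math

def cooperate_probability(payoff_c, payoff_d, norm_drive, w, beta=1.0):
    """Probability of cooperating under a mixed utility:
    u = (1 - w) * economic payoff + w * norm compliance.
    `w` balances selfishness (w = 0) against norm conformity (w = 1);
    `norm_drive` in [-1, 1] is the net normative push towards cooperation;
    `beta` is the choice sensitivity of the logit rule."""
    u_c = (1 - w) * payoff_c + w * norm_drive   # utility of cooperating
    u_d = (1 - w) * payoff_d - w * norm_drive   # utility of defecting
    return 1.0 / (1.0 + math.exp(-beta * (u_c - u_d)))
```

With w = 0 the agent defects whenever defection pays more, as in a standard prisoner's dilemma; raising w can tip the choice towards cooperation even at an economic cost, which is the mechanism the abstract credits for matching the large-scale experiments.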

  5. Item response theory at subject- and group-level

    NARCIS (Netherlands)

    Tobi, Hilde

    1990-01-01

    This paper reviews the literature about item response models for the subject level and aggregated level (group level). Group-level item response models (IRMs) are used in the United States in large-scale assessment programs such as the National Assessment of Educational Progress and the California

  6. Verifying large SDL-specifications using model checking

    NARCIS (Netherlands)

    Sidorova, N.; Steffen, M.; Reed, R.; Reed, J.

    2001-01-01

    In this paper we propose a methodology for model-checking based verification of large SDL specifications. The methodology is illustrated by a case study of an industrial medium-access protocol for wireless ATM. To cope with the state space explosion, the verification exploits the layered and modular

  7. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    Directory of Open Access Journals (Sweden)

    A. F. Van Loon

    2012-11-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is how well do large-scale models simulate the propagation from meteorological to hydrological drought? To answer this question, we evaluated the simulation of drought propagation in an ensemble mean of ten large-scale models, both land-surface models and global hydrological models, that participated in the model intercomparison project of WATCH (WaterMIP). For a selection of case study areas, we studied drought characteristics (number of droughts, duration, severity), drought propagation features (pooling, attenuation, lag, lengthening), and hydrological drought typology (classical rainfall deficit drought, rain-to-snow-season drought, wet-to-dry-season drought, cold snow season drought, warm snow season drought, composite drought).

    Drought characteristics simulated by large-scale models clearly reflected drought propagation; i.e. drought events became fewer and longer when moving through the hydrological cycle. However, more differentiation was expected between fast and slowly responding systems, with slowly responding systems having fewer and longer droughts in runoff than fast responding systems. This was not found using large-scale models. Drought propagation features were poorly reproduced by the large-scale models, because runoff reacted immediately to precipitation, in all case study areas. This fast reaction to precipitation, even in cold climates in winter and in semi-arid climates in summer, also greatly influenced the hydrological drought typology as identified by the large-scale models. In general, the large-scale models had the correct representation of drought types, but the percentages of occurrence had some important mismatches, e.g. an overestimation of classical rainfall deficit droughts, and an

  8. Extending SME to Handle Large-Scale Cognitive Modeling.

    Science.gov (United States)

    Forbus, Kenneth D; Ferguson, Ronald W; Lovett, Andrew; Gentner, Dedre

    2017-07-01

    Analogy and similarity are central phenomena in human cognition, involved in processes ranging from visual perception to conceptual change. To capture this centrality requires that a model of comparison must be able to integrate with other processes and handle the size and complexity of the representations required by the tasks being modeled. This paper describes extensions to Structure-Mapping Engine (SME) since its inception in 1986 that have increased its scope of operation. We first review the basic SME algorithm, describe psychological evidence for SME as a process model, and summarize its role in simulating similarity-based retrieval and generalization. Then we describe five techniques now incorporated into SME that have enabled it to tackle large-scale modeling tasks: (a) Greedy merging rapidly constructs one or more best interpretations of a match in polynomial time: O(n² log n); (b) Incremental operation enables mappings to be extended as new information is retrieved or derived about the base or target, to model situations where information in a task is updated over time; (c) Ubiquitous predicates model the varying degrees to which items may suggest alignment; (d) Structural evaluation of analogical inferences models aspects of plausibility judgments; (e) Match filters enable large-scale task models to communicate constraints to SME to influence the mapping process. We illustrate via examples from published studies how these enable it to capture a broader range of psychological phenomena than before. Copyright © 2016 Cognitive Science Society, Inc.
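
The flavour of greedy merging can be conveyed with a generic sketch: rank candidate correspondences by score and add each one that remains consistent with the mapping built so far. The `consistent` predicate and dict layout are our own hypothetical interface, not SME's actual API:

```python
def greedy_merge(candidates, consistent):
    """Greedily build one interpretation: sort candidate correspondences
    by descending score, then add each candidate that stays consistent
    with the partial mapping. Sorting costs O(n log n) and each of the n
    consistency checks may scan the mapping, in the spirit of the
    O(n^2 log n) bound cited in the abstract."""
    mapping = []
    for cand in sorted(candidates, key=lambda c: c["score"], reverse=True):
        if consistent(mapping, cand):
            mapping.append(cand)
    return mapping
```

For example, with a one-to-one constraint over the classic solar-system/atom analogy, a lower-scoring correspondence that reuses an already-mapped item is skipped.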

  9. Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam's Window.

    Science.gov (United States)

    Onorante, Luca; Raftery, Adrian E

    2016-01-01

    Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam's window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods.
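
The pruning step at the heart of the dynamic Occam's window can be sketched as follows: keep only models whose probability is within a fixed factor of the best model's, then renormalize. The dynamic procedure reapplies this as probabilities are updated each period; the code is our own sketch, not the authors' implementation:

```python
def occams_window(model_probs, c=20.0):
    """One Occam's-window step: drop models whose probability falls below
    best / c, then renormalize the survivors so they sum to one.
    `model_probs` maps model identifier -> (posterior) probability."""
    best = max(model_probs.values())
    kept = {m: p for m, p in model_probs.items() if p >= best / c}
    total = sum(kept.values())
    return {m: p / total for m, p in kept.items()}
```

Repeating this step over time keeps the active model set small even when the full model space is far too large for standard DMA, which is exactly the situation the abstract describes for nowcasting with many candidate predictors.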

  10. Integrating an agent-based model into a large-scale hydrological model for evaluating drought management in California

    Science.gov (United States)

    Sheffield, J.; He, X.; Wada, Y.; Burek, P.; Kahil, M.; Wood, E. F.; Oppenheimer, M.

    2017-12-01

    California has endured record-breaking drought since winter 2011 and will likely experience more severe and persistent droughts in the coming decades under a changing climate. At the same time, human water management practices can also affect drought frequency and intensity, which underscores the importance of human behaviour in effective drought adaptation and mitigation. Currently, although a few large-scale hydrological and water resources models (e.g., PCR-GLOBWB) consider human water use and management practices (e.g., irrigation, reservoir operation, groundwater pumping), none of them includes the dynamic feedback between local human behaviors/decisions and the natural hydrological system. It is, therefore, vital to integrate social and behavioral dimensions into current hydrological modeling frameworks. This study applies the agent-based modeling (ABM) approach and couples it with a large-scale hydrological model (i.e., the Community Water Model, CWatM) in order to achieve a balanced representation of social, environmental and economic factors and a more realistic representation of the bi-directional interactions and feedbacks in coupled human and natural systems. We focus on drought management in California and consider two types of agents, (groups of) farmers and state management authorities, whose objectives are assumed to be maximizing net crop profit and maintaining sufficient water supply, respectively. Farmers' behaviors are linked with local agricultural practices such as cropping patterns and deficit irrigation. More precisely, farmers' decisions are incorporated into CWatM across different time scales: the daily irrigation amount, seasonal/annual decisions on crop types and irrigated area, and long-term investment in irrigation infrastructure.
This simulation-based optimization framework is further applied by performing different sets of scenarios to investigate and evaluate the effectiveness

  11. An evolutionary theory of large-scale human warfare: Group-structured cultural selection.

    Science.gov (United States)

    Zefferman, Matthew R; Mathew, Sarah

    2015-01-01

    When humans wage war, it is not unusual for battlefields to be strewn with dead warriors. These warriors typically were men in their reproductive prime who, had they not died in battle, might have gone on to father more children. Typically, they are also genetically unrelated to one another. We know of no other animal species in which reproductively capable, genetically unrelated individuals risk their lives in this manner. Because the immense private costs borne by individual warriors create benefits that are shared widely by others in their group, warfare is a stark evolutionary puzzle that is difficult to explain. Although several scholars have posited models of the evolution of human warfare, these models do not adequately explain how humans solve the problem of collective action in warfare at the evolutionarily novel scale of hundreds of genetically unrelated individuals. We propose that group-structured cultural selection explains this phenomenon. © 2015 Wiley Periodicals, Inc.

  12. Evaluation of receptivity of the medical students in a lecture of a large group

    Directory of Open Access Journals (Sweden)

    Vidyarthi SurendraK, Nayak RoopaP, GuptaSandeep K

    2014-04-01

    Background: Lecturing is a widely used teaching method in higher education. Instructors of large classes may have no option but to lecture in order to convey information to large groups of students. Aims and objectives: The present study evaluated the effectiveness/receptivity of interactive lecturing in a large group of second-year MBBS students. Material and methods: The study was conducted in the well-equipped lecture theatre of Dhanalakshmi Srinivasan Medical College and Hospital (DSMCH), Tamil Nadu. A fully prepared interactive lecture on a specific topic was delivered using a PowerPoint presentation to second-year MBBS students. Before starting the lecture, the instructor distributed a 10-item multiple-choice questionnaire to be attempted within 10 minutes. After 30 minutes of lecturing, the instructor distributed the same 10-item questionnaire, again to be attempted in 10 minutes. The topic was not disclosed to the students before the lecture. Statistics: We analysed each student's pre- and post-lecture scores with the paired t-test, using www.openepi.com version 3.01 online/offline software and Microsoft Excel (Windows 2010). Results: For the 111 students (31 male, 80 female; average age 18.58 years), the baseline (pre-lecture) mean receptivity was 30.99% ± 14.64, which increased to 53.51% ± 19.52 post-lecture. Only 12 of the 111 students had a post-lecture receptivity (mean 25.8% ± 10.84) below their baseline value (mean 45% ± 9.05), i.e. their receptivity declined rather than improved. Conclusion: In an interactive lecture session with a PowerPoint presentation, students can learn even in large-class environments, but the session should be active-learner centred.
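
The paired t-test applied above can be reproduced with a few lines of stdlib Python (a sketch of the statistic only; the p-value lookup, which the study obtained from OpenEpi, needs a t-distribution and is omitted):

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t-statistic and degrees of freedom for matched pre/post
    scores: t = mean(d) / (sd(d) / sqrt(n)) over the differences d."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    sd = stdev(diffs)                 # sample standard deviation (n - 1)
    t = mean(diffs) / (sd / math.sqrt(n))
    return t, n - 1
```

Each student serves as their own control, which is why the differences, not the raw scores, carry the test.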

  13. Challenges of Modeling Flood Risk at Large Scales

    Science.gov (United States)

    Guin, J.; Simic, M.; Rowe, J.

    2009-04-01

    Flood risk management is a major concern for many nations and for the insurance sector in places where this peril is insured. A prerequisite for risk management, whether in the public sector or in the private sector, is an accurate estimation of the risk. Mitigation measures and traditional flood management techniques are most successful when the problem is viewed at a large regional scale, such that all inter-dependencies in a river network are well understood. From an insurance perspective, the jury is still out on whether flood is an insurable peril. However, with advances in modeling techniques and computer power it is possible to develop models that allow proper risk quantification at the scale suitable for a viable insurance market for the flood peril. In order to serve the insurance market, a model has to be event-simulation based and has to provide financial risk estimation that forms the basis for risk pricing, risk transfer and risk management at all levels of the insurance industry at large. In short, for a collection of properties, henceforth referred to as a portfolio, the critical output of the model is an annual probability distribution of economic losses from a single flood occurrence (flood event) or from an aggregation of all events in any given year. In this paper, the challenges of developing such a model are discussed in the context of Great Britain, for which a model has been developed. The model comprises several physically motivated components so that the primary attributes of the phenomenon are accounted for. The first component, the rainfall generator, simulates a continuous series of rainfall events in space and time over thousands of years, which are physically realistic while maintaining the statistical properties of rainfall at all locations over the model domain. A physically based runoff generation module feeds all the rivers in Great Britain, whose total length of stream links amounts to about 60,000 km. A dynamical flow routing

  14. Achieving 90% Adoption of Clinical Practice Guidelines Using the Delphi Consensus Method in a Large Orthopedic Group.

    Science.gov (United States)

    Bini, Stefano A; Mahajan, John

    2016-11-01

    Little is known about the implementation rate of clinical practice guidelines (CPGs). Our purpose was to report on the adoption rate of CPGs created and implemented by a large orthopedic group using the Delphi consensus method. The draft CPGs were created before the group's annual meeting by 5 teams, each assigned a subset of topics. The draft guidelines included a statement and a summary of the available evidence. Each guideline was debated in both small-group and plenary sessions. Voting was anonymous and a 75% supermajority was required for passage. A Likert scale was used to survey the participants' experience with the process at 1 week, and the Kirkpatrick evaluation model was used to gauge the efficacy of the process over a 6-month time frame. Eighty-five orthopedic surgeons attended the meeting. Fifteen guidelines grouped into 5 topics were created. All passed. Eighty-six percent of attendees found the process effective and 84% felt that participating in the process made it more likely that they would adopt the guidelines. At 1 week, an average of 62% of attendees stated they were practicing the guidelines as written (range: 35%-72%), and at 6 months, 96% stated they were practicing them (range: 82%-100%). We have demonstrated that a modified Delphi method for reaching consensus can be very effective both in creating CPGs and in leading to their adoption. Further, we have shown that the process is well received by participants and that an inclusionary approach can be highly successful. Copyright © 2016 Elsevier Inc. All rights reserved.
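The passage rule described above is simple supermajority arithmetic; as a minimal illustration (not the group's actual tooling), a vote-tally check for the 75% threshold might look like:

```python
def guideline_passes(votes_for, votes_total, threshold=0.75):
    """Return True when a draft guideline reaches the supermajority
    required for passage (75% in the process described above)."""
    if votes_total == 0:
        raise ValueError("no votes cast")
    return votes_for / votes_total >= threshold
```

With 85 voters, 64 votes in favor clears the bar (75.3%) while 63 does not (74.1%), which is why anonymous full-attendance voting matters for a rule this tight.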

  15. Validation of CALMET/CALPUFF models simulations around a large power plant stack

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez-Garces, A.; Souto, J. A.; Rodriguez, A.; Saavedra, S.; Casares, J. J.

    2015-07-01

    The CALMET/CALPUFF modeling system is frequently used in the study of atmospheric processes and pollution, and several validation tests have been performed to date; nevertheless, most of them were based on experiments with a large compilation of surface and aloft meteorological measurements, which are rarely available. At the same time, the use of a large operational smokestack as a tracer/pollutant source is unusual. In this work, the CALMET meteorological diagnostic model is first nested within WRF meteorological prognostic model simulations (3x3 km{sup 2} horizontal resolution) over a complex-terrain coastal domain in NW Spain, covering 100x100 km{sup 2}, with a coal-fired power plant emitting SO{sub 2}. Simulations were performed during three different periods when SO{sub 2} hourly ground-level concentration (glc) peaks were observed. NCEP reanalyses were applied as initial and boundary conditions. The Yonsei University-Pleim-Chang (YSU) PBL scheme was selected in the WRF model to provide the best input to three different CALMET horizontal resolutions: 1x1 km{sup 2}, 0.5x0.5 km{sup 2}, and 0.2x0.2 km{sup 2}. The best results, very similar to each other, were achieved using the last two resolutions; therefore, the 0.5x0.5 km{sup 2} resolution was selected to test different CALMET meteorological inputs, using several combinations of WRF outputs and/or the surface and upper-air measurements available in the simulation domain. With respect to aloft model output, CALMET PBL depth estimations are very similar to PBL depth estimations from upper-air measurements (rawinsondes), and significantly better than the WRF PBL depth results. Regarding surface model output, the available meteorological sites were divided into two groups, one to provide meteorological input to CALMET (when applied) and the other for model validation. Comparing WRF and CALMET outputs against surface measurements (from the model validation sites), the lowest RMSE was achieved using as the CALMET input dataset WRF output combined with

  16. Validation of CALMET/CALPUFF models simulations around a large power plant stack

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez-Garces, A.; Souto Rodriguez, J.A.; Saavedra, S.; Casares, J.J.

    2015-07-01

    The CALMET/CALPUFF modeling system is frequently used in the study of atmospheric processes and pollution, and several validation tests have been performed to date; nevertheless, most of them were based on experiments with a large compilation of surface and aloft meteorological measurements, which are rarely available. At the same time, the use of a large operational smokestack as a tracer/pollutant source is unusual. In this work, the CALMET meteorological diagnostic model is first nested within WRF meteorological prognostic model simulations (3x3 km2 horizontal resolution) over a complex-terrain coastal domain in NW Spain, covering 100x100 km2, with a coal-fired power plant emitting SO2. Simulations were performed during three different periods when SO2 hourly ground-level concentration (glc) peaks were observed. NCEP reanalyses were applied as initial and boundary conditions. The Yonsei University-Pleim-Chang (YSU) PBL scheme was selected in the WRF model to provide the best input to three different CALMET horizontal resolutions: 1x1 km2, 0.5x0.5 km2, and 0.2x0.2 km2. The best results, very similar to each other, were achieved using the last two resolutions; therefore, the 0.5x0.5 km2 resolution was selected to test different CALMET meteorological inputs, using several combinations of WRF outputs and/or the surface and upper-air measurements available in the simulation domain. With respect to aloft model output, CALMET PBL depth estimations are very similar to PBL depth estimations from upper-air measurements (rawinsondes), and significantly better than the WRF PBL depth results. Regarding surface model output, the available meteorological sites were divided into two groups, one to provide meteorological input to CALMET (when applied) and the other for model validation. Comparing WRF and CALMET outputs against surface measurements (from the model validation sites), the lowest RMSE was achieved using as the CALMET input dataset WRF output combined with surface measurements (from sites for CALMET model

  17. Validation of CALMET/CALPUFF models simulations around a large power plant stack

    International Nuclear Information System (INIS)

    Hernandez-Garces, A.; Souto, J. A.; Rodriguez, A.; Saavedra, S.; Casares, J. J.

    2015-01-01

    The CALMET/CALPUFF modeling system is frequently used in the study of atmospheric processes and pollution, and several validation tests have been performed to date; nevertheless, most of them were based on experiments with a large compilation of surface and aloft meteorological measurements, which are rarely available. At the same time, the use of a large operational smokestack as a tracer/pollutant source is unusual. In this work, the CALMET meteorological diagnostic model is first nested within WRF meteorological prognostic model simulations (3x3 km2 horizontal resolution) over a complex-terrain coastal domain in NW Spain, covering 100x100 km2, with a coal-fired power plant emitting SO2. Simulations were performed during three different periods when SO2 hourly ground-level concentration (glc) peaks were observed. NCEP reanalyses were applied as initial and boundary conditions. The Yonsei University-Pleim-Chang (YSU) PBL scheme was selected in the WRF model to provide the best input to three different CALMET horizontal resolutions: 1x1 km2, 0.5x0.5 km2, and 0.2x0.2 km2. The best results, very similar to each other, were achieved using the last two resolutions; therefore, the 0.5x0.5 km2 resolution was selected to test different CALMET meteorological inputs, using several combinations of WRF outputs and/or the surface and upper-air measurements available in the simulation domain. With respect to aloft model output, CALMET PBL depth estimations are very similar to PBL depth estimations from upper-air measurements (rawinsondes), and significantly better than the WRF PBL depth results. Regarding surface model output, the available meteorological sites were divided into two groups, one to provide meteorological input to CALMET (when applied) and the other for model validation. 
Comparing WRF and CALMET outputs against surface measurements (from the model validation sites), the lowest RMSE was achieved using as the CALMET input dataset WRF output combined with surface measurements (from sites for

  18. Group-ICA model order highlights patterns of functional brain connectivity

    Directory of Open Access Journals (Sweden)

    Ahmed eAbou Elseoud

    2011-06-01

    Resting-state networks (RSNs) can be reliably and reproducibly detected using independent component analysis (ICA) at both individual subject and group levels. Altering ICA dimensionality (model order) estimation can have a significant impact on the spatial characteristics of the RSNs as well as their parcellation into sub-networks. Recent evidence from several neuroimaging studies suggests that the human brain has a modular hierarchical organization which resembles the hierarchy depicted by different ICA model orders. We hypothesized that functional connectivity between-group differences measured with ICA might be affected by model order selection. We investigated differences in functional connectivity using so-called dual regression as a function of ICA model order in a group of unmedicated seasonal affective disorder (SAD) patients compared to normal healthy controls. The results showed that the detected disease-related differences in functional connectivity alter as a function of ICA model order. The volume of between-group differences altered significantly as a function of ICA model order, reaching a maximum at model order 70 (which seems to be an optimal point that conveys the largest between-group difference) and stabilizing afterwards. Our results show that fine-grained RSNs enable better detection of detailed disease-related functional connectivity changes. However, high model orders show an increased risk of false positives that needs to be overcome. Our findings suggest that multilevel ICA exploration of functional connectivity enables optimization of sensitivity to brain disorders.
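The dual-regression step used above maps group-level ICA maps back to individual subjects in two least-squares stages. A minimal sketch of that two-stage formulation follows; it assumes the generic approach (the study itself used dedicated neuroimaging tooling), with array shapes chosen for illustration.

```python
import numpy as np

def dual_regression(subject_data, group_maps):
    """Two-stage dual-regression sketch.

    Stage 1 regresses the group spatial maps (one per ICA component at the
    chosen model order) onto a subject's data to get subject-specific time
    courses; stage 2 regresses those time courses back onto the data to
    get subject-specific spatial maps.

    subject_data : (time, voxels) array
    group_maps   : (components, voxels) array from the group ICA
    """
    # Stage 1: subject time courses, one column per component.
    tc, *_ = np.linalg.lstsq(group_maps.T, subject_data.T, rcond=None)
    tc = tc.T                      # (time, components)
    # Stage 2: subject-specific spatial maps.
    maps, *_ = np.linalg.lstsq(tc, subject_data, rcond=None)
    return tc, maps                # maps: (components, voxels)
```

Running this for every subject at several model orders (e.g. 20, 70, 100 components) and contrasting the resulting subject maps between groups is the kind of sweep the study performed.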

  19. Working group report: Flavor physics and model building

    Indian Academy of Sciences (India)

    This is the report of the flavor physics and model building working group at ... those in model building have been primarily devoted to neutrino physics.

  20. Diagrammatic group theory in quark models

    International Nuclear Information System (INIS)

    Canning, G.P.

    1977-05-01

    A simple and systematic diagrammatic method is presented for calculating the numerical factors arising from group theory in quark models: dimensions, Casimir invariants, vector coupling coefficients and especially recoupling coefficients. Some coefficients for the coupling of 3-quark objects are listed for SU(n) and SU(2n). (orig.) [de]
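The kinds of numerical factors such diagrammatic rules produce can be cross-checked against closed forms. A sketch for SU(n), assuming the common textbook normalization T(F) = 1/2 (an assumption, not something fixed by the abstract):

```python
from fractions import Fraction

def su_n_factors(n):
    """Standard SU(n) group-theory factors: dimensions of the fundamental
    and adjoint representations and their quadratic Casimir invariants,
    in the normalization T(F) = 1/2."""
    return {
        "dim_fund": n,
        "dim_adj": n * n - 1,
        "C2_fund": Fraction(n * n - 1, 2 * n),  # (n^2 - 1) / (2n)
        "C2_adj": Fraction(n, 1),               # equals n
    }
```

For SU(3) this gives the familiar quark-model values: an 8-dimensional adjoint, C2(fundamental) = 4/3, and C2(adjoint) = 3.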

  1. Safety assessment of dangerous goods transport enterprise based on the relative entropy aggregation in group decision making model.

    Science.gov (United States)

    Wu, Jun; Li, Chengbing; Huo, Yueying

    2014-01-01

    The safety of dangerous goods transport is directly related to the operational safety of dangerous goods transport enterprises. Aiming at the high accident rate and large potential harm in dangerous goods logistics, this paper formulates the group decision making problem, based on the idea of integration and coordination, as a multiagent multiobjective group decision making problem; a secondary decision model was established and applied to the safety assessment of dangerous goods transport enterprises. First, a first-level multiobjective decision model was built using a dynamic multivalue background and entropy theory. Second, experts were weighted according to the principle of clustering analysis, and relative entropy theory was used to establish a secondary aggregation optimization model based on relative entropy in group decision making; the solution of the model is discussed. Then, after investigation and analysis, a safety evaluation index system for dangerous goods transport enterprises was established. Finally, a case analysis of five dangerous goods transport enterprises in the Inner Mongolia Autonomous Region validates the feasibility and effectiveness of the model for assessing dangerous goods transport enterprises, providing a vital decision making basis for their recognition.
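The core aggregation idea, combining expert judgments by minimizing relative entropy, has a well-known closed form worth illustrating. The sketch below is a generic instance of that principle (the distribution minimizing the weighted sum of KL divergences to the experts is their normalized weighted geometric mean), not the paper's specific secondary decision model.

```python
import math

def aggregate_experts(distributions, weights=None):
    """Relative-entropy (KL) aggregation sketch.

    The distribution p minimizing sum_i w_i * KL(p || p_i), with the
    weights summing to one, is the normalized weighted geometric mean of
    the expert distributions p_i (each a list of positive probabilities).
    """
    k = len(distributions[0])
    if weights is None:
        weights = [1.0 / len(distributions)] * len(distributions)
    # Component-wise weighted geometric mean, then renormalize.
    raw = [
        math.exp(sum(w * math.log(p[j]) for w, p in zip(weights, distributions)))
        for j in range(k)
    ]
    total = sum(raw)
    return [r / total for r in raw]
```

In the paper's setting the expert weights themselves come from a clustering-based step; here they are simply passed in.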

  2. Study on dynamic multi-objective approach considering coal and water conflict in large scale coal group

    Science.gov (United States)

    Feng, Qing; Lu, Li

    2018-01-01

    In the process of coal mining, the destruction and pollution of groundwater has reached a critical point; groundwater is not only tied to the ecological environment but also affects human health. The conflict between coal and water remains one of the world's problems in large-scale coal mining regions. On this basis, this paper presents a dynamic multi-objective optimization model to deal with the coal-water conflict in a coal group with multiple subordinate collieries and to arrive at a comprehensive arrangement for an environmentally friendly coal mining strategy. Through calculation, the paper derives the output of each subordinate coal mine. On this basis, the environmental protection parameters are adjusted to compare coal production across collieries at different stages under different government attitudes. The paper concludes that, in either case, the first arrangement is to give priority to the production of low-drainage, high-yield coal mines.
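The paper's conclusion, schedule low-drainage, high-yield mines first, can be illustrated with a toy greedy allocation. This is a hypothetical sketch of the trade-off (mine names, capacities and drainage rates are invented), not the paper's dynamic multi-objective model.

```python
def plan_production(mines, total_demand):
    """Allocate group-level coal demand across subordinate collieries,
    filling capacity in order of increasing water drainage per tonne.

    mines : dict mapping name -> (capacity_tonnes, drainage_per_tonne)
    Returns a dict mapping name -> allocated tonnes.
    """
    # Lowest drainage-per-tonne mines produce first.
    order = sorted(mines, key=lambda m: mines[m][1])
    plan, remaining = {}, total_demand
    for name in order:
        capacity, _ = mines[name]
        take = min(capacity, remaining)
        plan[name] = take
        remaining -= take
    return plan
```

A stricter "government attitude" would correspond to tightening a drainage budget or raising the drainage penalty, pushing even more of the allocation toward the low-drainage mines.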

  3. Vibration tests on pile-group foundations using large-scale blast excitation

    International Nuclear Information System (INIS)

    Tanaka, Hideo; Hijikata, Katsuichirou; Hashimoto, Takayuki; Fujiwara, Kazushige; Kontani, Osamu; Miyamoto, Yuji; Suzuki, Atsushi

    2005-01-01

    Extensive vibration tests have been performed on pile-supported structures at a large-scale mining site. Ground motions induced by large-scale blasting operations were used as excitation forces for the vibration tests. The main objective of this research is to investigate the dynamic behavior of pile-supported structures, in particular, pile-group effects. Two test structures were constructed in an excavated 4 m deep pit. One structure had 25 steel tubular piles and the other had 4 piles. The super-structures were exactly the same. The test pit was backfilled with sand of appropriate grain size distribution in order to obtain good compaction, especially between the 25 piles. Accelerations were measured at the structures, in the test pit and in the adjacent free field, and pile strains were measured. The vibration tests were performed six times with different levels of input motion. The maximum horizontal acceleration recorded at the adjacent ground surface varied from 57 cm/s² to 1683 cm/s² according to the distances between the test site and the blast areas. Maximum strains of 13,400 micro-strains were recorded at the pile tops of the 4-pile structure, which means that these piles were subjected to yielding.

  4. Deciphering the crowd: modeling and identification of pedestrian group motion.

    Science.gov (United States)

    Yücel, Zeynep; Zanlungo, Francesco; Ikeda, Tetsushi; Miyashita, Takahiro; Hagita, Norihiro

    2013-01-14

    Associating attributes to pedestrians in a crowd is relevant for various areas like surveillance, customer profiling and service providing. The attributes of interest greatly depend on the application domain and might involve such social relations as friends or family as well as the hierarchy of the group including the leader or subordinates. Nevertheless, the complex social setting inherently complicates this task. We attack this problem by exploiting the small group structures in the crowd. The relations among individuals and their peers within a social group are reliable indicators of social attributes. To that end, this paper identifies social groups based on explicit motion models integrated through a hypothesis testing scheme. We develop two models relating positional and directional relations. A pair of pedestrians is identified as belonging to the same group or not by utilizing the two models in parallel, which defines a compound hypothesis testing scheme. By testing the proposed approach on three datasets with different environmental properties and group characteristics, it is demonstrated that we achieve an identification accuracy of 87% to 99%. The contribution of this study lies in its definition of positional and directional relation models, its description of compound evaluations, and the resolution of ambiguities with our proposed uncertainty measure based on the local and global indicators of group relation.
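The compound test described above, accepting a pair as one group only when both a positional and a directional model agree, can be sketched with toy relation measures. The thresholds and the trajectory format below are illustrative assumptions, not the paper's fitted models.

```python
import math

def same_group(traj_a, traj_b, dist_thresh=2.0, align_thresh=0.9):
    """Toy compound hypothesis test for pedestrian pair grouping.

    A pair is labeled as one group when BOTH the positional model (mean
    inter-pedestrian distance below `dist_thresh`, in metres) and the
    directional model (mean cosine similarity of step vectors above
    `align_thresh`) accept. Trajectories are equal-length lists of (x, y).
    """
    dists, aligns = [], []
    for t in range(1, len(traj_a)):
        ax, ay = traj_a[t]; bx, by = traj_b[t]
        dists.append(math.hypot(ax - bx, ay - by))
        # Step (velocity) vectors over the last interval.
        va = (ax - traj_a[t - 1][0], ay - traj_a[t - 1][1])
        vb = (bx - traj_b[t - 1][0], by - traj_b[t - 1][1])
        na, nb = math.hypot(*va), math.hypot(*vb)
        if na > 0 and nb > 0:
            aligns.append((va[0] * vb[0] + va[1] * vb[1]) / (na * nb))
    positional_ok = sum(dists) / len(dists) < dist_thresh
    directional_ok = sum(aligns) / len(aligns) > align_thresh
    return positional_ok and directional_ok
```

Two pedestrians walking side by side in the same direction pass both tests; a distant pedestrian moving perpendicular fails the positional one, so the compound decision is "not a group".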

  5. Does company size matter? Validation of an integrative model of safety behavior across small and large construction companies.

    Science.gov (United States)

    Guo, Brian H W; Yiu, Tak Wing; González, Vicente A

    2018-02-01

    Previous safety climate studies primarily focused on either large construction companies or the construction industry as a whole, while little is known about whether company size has significant effects on workers' understanding of safety climate measures and relationships between safety climate factors and safety behavior. Thus, this study aims to: (a) test the measurement equivalence (ME) of a safety climate measure across workers from small and large companies; (b) investigate if company size alters the causal structure of the integrative model developed by Guo, Yiu, and González (2016). Data were collected from 253 construction workers in New Zealand using a safety climate measure. This study used multi-group confirmatory factor analyses (MCFA) to test the measurement equivalence of the safety climate measure and structural invariance of the integrative model. Results indicate that workers from small and large companies understood the safety climate measure in a similar manner. In addition, it was suggested that company size does not change the causal structure and mediational processes of the integrative model. Both measurement equivalence of the safety climate measure and structural invariance of the integrative model were supported by this study. Practical applications: Findings of this study provided strong support for a meaningful use of the safety climate measure across construction companies of different sizes. Safety behavior promotion strategies designed based on the integrative model may be well suited for both large and small companies. Copyright © 2017 National Safety Council and Elsevier Ltd. All rights reserved.

  6. Topological Poisson Sigma models on Poisson-Lie groups

    International Nuclear Information System (INIS)

    Calvo, Ivan; Falceto, Fernando; Garcia-Alvarez, David

    2003-01-01

    We solve the topological Poisson Sigma model for a Poisson-Lie group G and its dual G*. We show that the gauge symmetry for each model is given by its dual group, which acts by dressing transformations on the target. The resolution of both models in the open geometry reveals that there exists a map from the reduced phase space of each model (P and P*) to the main symplectic leaf of the Heisenberg double (D_0) such that the symplectic forms on P and P* are obtained as the pull-back by those maps of the symplectic structure on D_0. This uncovers a duality between P and P* under the exchange of bulk degrees of freedom of one model with boundary degrees of freedom of the other. We finally solve the Poisson Sigma model for the Poisson structure on G given by a pair of r-matrices, which generalizes the Poisson-Lie case. The Hamiltonian analysis of the theory requires the introduction of a deformation of the Heisenberg double. (author)

  7. How the group affects the mind : A cognitive model of idea generation in groups

    NARCIS (Netherlands)

    Nijstad, Bernard A.; Stroebe, Wolfgang

    2006-01-01

    A model called search for ideas in associative memory (SIAM) is proposed to account for various research findings in the area of group idea generation. The model assumes that idea generation is a repeated search for ideas in associative memory, which proceeds in 2 stages (knowledge activation and

  8. Large deflection of viscoelastic beams using fractional derivative model

    International Nuclear Information System (INIS)

    Bahranini, Seyed Masoud Sotoodeh; Eghtesad, Mohammad; Ghavanloo, Esmaeal; Farid, Mehrdad

    2013-01-01

    This paper deals with large deflection of viscoelastic beams using a fractional derivative model. For this purpose, a nonlinear finite element formulation of viscoelastic beams in conjunction with fractional derivative constitutive equations has been developed. The four-parameter fractional derivative model has been used to describe the constitutive equations. The deflected configuration for a uniform beam with different boundary conditions and loads is presented. The effect of the order of the fractional derivative on the large deflection of the cantilever viscoelastic beam is investigated after 10, 100, and 1000 hours. The main contribution of this paper is the finite element implementation for nonlinear analysis of the viscoelastic fractional model using the storage of both strain and stress histories. The validity of the present analysis is confirmed by comparing the results with those found in the literature.
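Fractional-derivative constitutive models of this kind are typically discretized with the Grünwald-Letnikov approximation, which is also why the full strain and stress histories must be stored. The sketch below shows that standard discretization in isolation; it is not the paper's four-parameter model or its finite element formulation.

```python
def gl_fractional_derivative(samples, alpha, dt):
    """Grünwald-Letnikov approximation of the order-`alpha` fractional
    derivative of a uniformly sampled signal.

    D^alpha f(t_n) ~ dt**(-alpha) * sum_k w_k * f(t_{n-k}),
    with weights w_0 = 1 and w_k = w_{k-1} * (k - 1 - alpha) / k.
    Note the sum runs over the entire history, hence the need to store it.
    """
    n = len(samples)
    weights = [1.0]
    for k in range(1, n):
        weights.append(weights[-1] * (k - 1 - alpha) / k)
    out = []
    for i in range(n):
        acc = sum(weights[k] * samples[i - k] for k in range(i + 1))
        out.append(acc / dt ** alpha)
    return out
```

Setting alpha = 1 recovers the ordinary backward difference, and alpha = 0 the identity, which is a quick sanity check on the weight recurrence.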

  9. Misspecified poisson regression models for large-scale registry data: inference for 'large n and small p'.

    Science.gov (United States)

    Grøn, Randi; Gerds, Thomas A; Andersen, Per K

    2016-03-30

    Poisson regression is an important tool in register-based epidemiology where it is used to study the association between exposure variables and event rates. In this paper, we will discuss the situation with 'large n and small p', where n is the sample size and p is the number of available covariates. Specifically, we are concerned with modeling options when there are time-varying covariates that can have time-varying effects. One problem is that tests of the proportional hazards assumption, of no interactions between exposure and other observed variables, or of other modeling assumptions have high power due to the large sample size and will often indicate statistical significance even for numerically small deviations that are unimportant for the subject matter. Another problem is that information on important confounders may be unavailable. In practice, this situation may lead to simple working models that are then likely misspecified. To support and improve conclusions drawn from such models, we discuss methods for sensitivity analysis, for estimation of average exposure effects using aggregated data, and a semi-parametric bootstrap method to obtain robust standard errors. The methods are illustrated using data from the Danish national registries investigating the diabetes incidence for individuals treated with antipsychotics compared with the general unexposed population. Copyright © 2015 John Wiley & Sons, Ltd.
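For the simplest case, one binary exposure, the Poisson regression effect has a closed form (the rate ratio of aggregated events over person-time), and a nonparametric bootstrap gives model-robust standard errors. The sketch below illustrates that combination under simple assumptions; it is a generic stand-in for, not a reproduction of, the paper's semi-parametric bootstrap.

```python
import math
import random

def rate_ratio(data):
    """Closed-form Poisson regression with one binary exposure: the MLE
    of the rate ratio is (events1/time1) / (events0/time0), aggregated
    over individuals. `data` holds (exposed, n_events, person_time)."""
    events = {0: 0.0, 1: 0.0}
    time = {0: 0.0, 1: 0.0}
    for exposed, n_events, person_time in data:
        events[exposed] += n_events
        time[exposed] += person_time
    return (events[1] / time[1]) / (events[0] / time[0])

def bootstrap_se_log_rr(data, n_boot=200, seed=1):
    """Nonparametric bootstrap standard error of the log rate ratio,
    resampling individuals with replacement (robust to misspecification
    of the individual-level event distribution)."""
    rng = random.Random(seed)
    reps = []
    for _ in range(n_boot):
        sample = [rng.choice(data) for _ in data]
        try:
            reps.append(math.log(rate_ratio(sample)))
        except ZeroDivisionError:
            continue  # a resample missing one exposure group is skipped
    mean = sum(reps) / len(reps)
    var = sum((r - mean) ** 2 for r in reps) / (len(reps) - 1)
    return math.sqrt(var)
```

With registry-scale n the bootstrap is expensive but embarrassingly parallel, which is one reason aggregated-data formulations like the one above are attractive.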

  10. Protein homology model refinement by large-scale energy optimization.

    Science.gov (United States)

    Park, Hahnbeom; Ovchinnikov, Sergey; Kim, David E; DiMaio, Frank; Baker, David

    2018-03-20

    Proteins fold to their lowest free-energy structures, and hence the most straightforward way to increase the accuracy of a partially incorrect protein structure model is to search for the lowest-energy nearby structure. This direct approach has met with little success for two reasons: first, energy function inaccuracies can lead to false energy minima, resulting in model degradation rather than improvement; and second, even with an accurate energy function, the search problem is formidable because the energy only drops considerably in the immediate vicinity of the global minimum, and there are a very large number of degrees of freedom. Here we describe a large-scale energy optimization-based refinement method that incorporates advances in both search and energy function accuracy that can substantially improve the accuracy of low-resolution homology models. The method refined low-resolution homology models into correct folds for 50 of 84 diverse protein families and generated improved models in recent blind structure prediction experiments. Analyses of the basis for these improvements reveal contributions from both the improvements in conformational sampling techniques and the energy function.

  11. Use of a statistical model of the whole femur in a large scale, multi-model study of femoral neck fracture risk.

    Science.gov (United States)

    Bryan, Rebecca; Nair, Prasanth B; Taylor, Mark

    2009-09-18

    Interpatient variability is often overlooked in orthopaedic computational studies due to the substantial challenges involved in sourcing and generating large numbers of bone models. A statistical model of the whole femur incorporating both geometric and material property variation was developed as a potential solution to this problem. The statistical model was constructed using principal component analysis, applied to 21 individual computed tomography scans. To test the ability of the statistical model to generate realistic, unique, finite element (FE) femur models, it was used as a source of 1000 femurs to drive a study on femoral neck fracture risk. The study simulated the impact of an oblique fall to the side, a scenario known to account for a large proportion of hip fractures in the elderly and to have a lower fracture load than alternative loading approaches. FE model generation, application of subject-specific loading and boundary conditions, FE processing and post-processing of the solutions were completed automatically. The generated models were within the bounds of the training data used to create the statistical model, with a high mesh quality, and were able to be used directly by the FE solver without remeshing. The results indicated that 28 of the 1000 femurs were at highest risk of fracture. Closer analysis revealed the percentage of cortical bone in the proximal femur to be a crucial differentiator between the failed and non-failed groups. The likely fracture location was indicated to be intertrochanteric. Comparison to previous computational, clinical and experimental work revealed support for these findings.
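The PCA-based statistical model at the heart of this study has a standard generic form: center the training vectors, extract principal modes, then synthesize new instances as the mean plus scaled modes. The sketch below shows that generic form (shapes as flat vectors; toy dimensions), not the study's femur-specific pipeline.

```python
import numpy as np

def build_ssm(training_shapes):
    """Build a PCA statistical model from training vectors (rows =
    subjects, columns = stacked coordinates and, in the study, material
    properties). Returns the mean, the principal modes (rows), and the
    standard deviation of each mode across the training set."""
    X = np.asarray(training_shapes, dtype=float)
    mean = X.mean(axis=0)
    # SVD of the centered data yields the principal modes in Vt.
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    stdev = s / np.sqrt(len(X) - 1)
    return mean, Vt, stdev

def sample_instance(mean, modes, stdev, coeffs):
    """Generate a new instance: mean + sum_k b_k * sigma_k * mode_k.
    `coeffs` are in units of standard deviations; keeping them within
    roughly +/- 3 keeps instances inside the training population."""
    b = np.asarray(coeffs, dtype=float)
    return mean + (b * stdev[: len(b)]) @ modes[: len(b)]
```

Drawing the coefficients at random (e.g. standard normal, truncated) is how a 21-scan training set can drive a 1000-model virtual cohort.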

  12. Testing Group Mean Differences of Latent Variables in Multilevel Data Using Multiple-Group Multilevel CFA and Multilevel MIMIC Modeling.

    Science.gov (United States)

    Kim, Eun Sook; Cao, Chunhua

    2015-01-01

    Considering that group comparisons are common in social science, we examined two latent group mean testing methods when groups of interest were either at the between or within level of multilevel data: multiple-group multilevel confirmatory factor analysis (MG ML CFA) and multilevel multiple-indicators multiple-causes modeling (ML MIMIC). The performance of these methods was investigated through three Monte Carlo studies. In Studies 1 and 2, either factor variances or residual variances were manipulated to be heterogeneous between groups. In Study 3, which focused on within-level multiple-group analysis, six different model specifications were considered depending on how the intra-class group correlation (i.e., correlation between random effect factors for groups within a cluster) was modeled. The results of the simulations generally supported the adequacy of MG ML CFA and ML MIMIC for multiple-group analysis with multilevel data. The two methods did not show any notable difference in latent group mean testing across the three studies. Finally, a demonstration with real data and guidelines for selecting an appropriate approach to multilevel multiple-group analysis are provided.

  13. Mathematical modeling of large floating roof reservoir temperature arena

    Directory of Open Access Journals (Sweden)

    Liu Yang

    2018-03-01

    The current study simplifies the relevant components of a large floating roof tank and models its three-dimensional temperature field. Heat transfer occurs within the hot fluid in the oil tank, between the hot fluid and the tank wall, and between the tank wall and the external environment. A mathematical model of the heat transfer and flow of oil in the tank simulates the temperature field of the stored oil. The oil temperature field of the large floating roof tank is obtained by numerical simulation; the dynamics of the central temperature over time are plotted, and the axial and radial temperature distributions of the storage tank are analyzed. The distribution of the low-temperature region in the tank is determined based on the temperature across the reservoir thickness. Finally, the calculated results are compared with field test data, which validates the calculation.

  14. Conjugacy in relatively extra-large Artin groups

    Directory of Open Access Journals (Sweden)

    Arye Juhasz

    2015-09-01

    Let A be an Artin group with standard generators X = {x_1, …, x_n}, n ≥ 1, and defining graph Γ_A. A "standard parabolic subgroup" of A is a subgroup generated by a subset of X. For elements u and v of A we say (as usual) that u is conjugate to v by an element h of A if h⁻¹uh = v holds in A. Similarly, if K and L are subsets of A, then K is conjugate to L by an element h of A if h⁻¹Kh = L. In this work we consider the conjugacy of elements and standard parabolic subgroups of a certain type of Artin group. Results in this direction occur in papers by Duncan, Kazachkov, Remeslennikov, Fenn, Dale, Jun, Godelle, Gonzalez-Meneses, Wiest, Paris, and Rolfsen, for example. Of particular interest are centralisers of elements and of standard parabolic subgroups, normalisers of standard parabolic subgroups, and commensurators of parabolic subgroups. In this work we consider similar problems in a new class of Artin groups, introduced in the paper "On relatively extra-large Artin groups and their relative asphericity" by Juhasz, where, among other things, the word problem is solved. Intersections of parabolic subgroups and their conjugates are also considered.

  15. The Hamburg large scale geostrophic ocean general circulation model. Cycle 1

    International Nuclear Information System (INIS)

    Maier-Reimer, E.; Mikolajewicz, U.

    1992-02-01

    The rationale for the Large Scale Geostrophic ocean circulation model (LSG-OGCM) is based on the observations that for a large scale ocean circulation model designed for climate studies, the relevant characteristic spatial scales are large compared with the internal Rossby radius throughout most of the ocean, while the characteristic time scales are large compared with the periods of gravity modes and barotropic Rossby wave modes. In the present version of the model, the fast modes have been filtered out by a conventional technique of integrating the full primitive equations, including all terms except the nonlinear advection of momentum, by an implicit time integration method. The free surface is also treated prognostically, without invoking a rigid lid approximation. The numerical scheme is unconditionally stable and has the additional advantage that it can be applied uniformly to the entire globe, including the equatorial and coastal current regions. (orig.)

  16. Modelling and measurements of wakes in large wind farms

    DEFF Research Database (Denmark)

    Barthelmie, Rebecca Jane; Rathmann, Ole; Frandsen, Sten Tronæs

    2007-01-01

    The paper presents research conducted in the Flow workpackage of the EU funded UPWIND project which focuses on improving models of flow within and downwind of large wind farms in complex terrain and offshore. The main activity is modelling the behaviour of wind turbine wakes in order to improve...

  17. Subgrid-scale models for large-eddy simulation of rotating turbulent channel flows

    Science.gov (United States)

    Silvis, Maurits H.; Bae, Hyunji Jane; Trias, F. Xavier; Abkar, Mahdi; Moin, Parviz; Verstappen, Roel

    2017-11-01

    We aim to design subgrid-scale models for large-eddy simulation of rotating turbulent flows. Rotating turbulent flows form a challenging test case for large-eddy simulation due to the presence of the Coriolis force. The Coriolis force conserves the total kinetic energy while transporting it from small to large scales of motion, leading to the formation of large-scale anisotropic flow structures. The Coriolis force may also cause partial flow laminarization and the occurrence of turbulent bursts. Many subgrid-scale models for large-eddy simulation are, however, primarily designed to parametrize the dissipative nature of turbulent flows, ignoring the specific characteristics of transport processes. We, therefore, propose a new subgrid-scale model that, in addition to the usual dissipative eddy viscosity term, contains a nondissipative nonlinear model term designed to capture transport processes, such as those due to rotation. We show that the addition of this nonlinear model term leads to improved predictions of the energy spectra of rotating homogeneous isotropic turbulence as well as of the Reynolds stress anisotropy in spanwise-rotating plane-channel flows. This work is financed by the Netherlands Organisation for Scientific Research (NWO) under Project Number 613.001.212.
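Schematically, a mixed model of the kind described, combining a dissipative eddy-viscosity part with a nondissipative nonlinear part, can be written as follows; the specific commutator form and the coefficients ν_e and μ_e are illustrative assumptions, not taken verbatim from the paper:

```latex
\tau_{ij}^{\mathrm{mod}} \;=\; -2\,\nu_e\,\bar{S}_{ij}
\;+\; \mu_e\left(\bar{S}_{ik}\bar{\Omega}_{kj} - \bar{\Omega}_{ik}\bar{S}_{kj}\right),
```

where \bar{S} and \bar{\Omega} are the resolved strain-rate and rotation-rate tensors. The commutator term contracts to zero with \bar{S}_{ij} (since tr(S²Ω) = tr(SΩS)), so it redistributes energy among scales without adding net dissipation, which is precisely the property needed to capture rotation-driven transport on top of the usual dissipative eddy-viscosity term.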

  18. Deciphering the Crowd: Modeling and Identification of Pedestrian Group Motion

    Directory of Open Access Journals (Sweden)

    Norihiro Hagita

    2013-01-01

    Full Text Available Associating attributes to pedestrians in a crowd is relevant for various areas like surveillance, customer profiling and service providing. The attributes of interest greatly depend on the application domain and might involve such social relations as friends or family as well as the hierarchy of the group including the leader or subordinates. Nevertheless, the complex social setting inherently complicates this task. We attack this problem by exploiting the small group structures in the crowd. The relations among individuals and their peers within a social group are reliable indicators of social attributes. To that end, this paper identifies social groups based on explicit motion models integrated through a hypothesis testing scheme. We develop two models relating positional and directional relations. A pair of pedestrians is identified as belonging to the same group or not by utilizing the two models in parallel, which defines a compound hypothesis testing scheme. By testing the proposed approach on three datasets with different environmental properties and group characteristics, it is demonstrated that we achieve an identification accuracy of 87% to 99%. The contribution of this study lies in its definition of positional and directional relation models, its description of compound evaluations, and the resolution of ambiguities with our proposed uncertainty measure based on the local and global indicators of group relation.

  19. Large transverse momentum processes in a non-scaling parton model

    International Nuclear Information System (INIS)

    Stirling, W.J.

    1977-01-01

    The production of large transverse momentum mesons in hadronic collisions by the quark fusion mechanism is discussed in a parton model which gives logarithmic corrections to Bjorken scaling. It is found that the moments of the large transverse momentum structure function exhibit a simple scale breaking behaviour similar to the behaviour of the Drell-Yan and deep inelastic structure functions of the model. An estimate of corresponding experimental consequences is made and the extent to which analogous results can be expected in an asymptotically free gauge theory is discussed. A simple set of rules is presented for incorporating the logarithmic corrections to scaling into all covariant parton model calculations. (Auth.)

  20. Affine Poisson Groups and WZW Model

    Directory of Open Access Journals (Sweden)

    Ctirad Klimcík

    2008-01-01

    Full Text Available We give a detailed description of a dynamical system which enjoys a Poisson-Lie symmetry with two non-isomorphic dual groups. The system is obtained by taking the q → ∞ limit of the q-deformed WZW model and the understanding of its symmetry structure results in uncovering an interesting duality of its exchange relations.

  1. Key Informant Models for Measuring Group-Level Variables in Small Groups: Application to Plural Subject Theory

    Science.gov (United States)

    Algesheimer, René; Bagozzi, Richard P.; Dholakia, Utpal M.

    2018-01-01

    We offer a new conceptualization and measurement models for constructs at the group-level of analysis in small group research. The conceptualization starts with classical notions of group behavior proposed by Tönnies, Simmel, and Weber and then draws upon plural subject theory by philosophers Gilbert and Tuomela to frame a new perspective…

  2. Large order asymptotics and convergent perturbation theory for critical indices of the φ4 model in 4 - ε expansion

    International Nuclear Information System (INIS)

    Honkonen, J.; Komarova, M.; Nalimov, M.

    2002-01-01

    Large order asymptotic behaviour of renormalization constants in the minimal subtraction scheme for the φ⁴ theory in 4 − ε dimensions is discussed. Well-known results of the asymptotic 4 − ε expansion of critical indices are shown to be far from the large order asymptotic value. A convergent series for the φ⁴ (4 − ε) model is then considered. The radius of convergence of the series for Green functions and for renormalisation group functions is studied. The results of the convergent expansion of critical indices in the 4 − ε scheme are revalued using the knowledge of large order asymptotics. Specific features of this procedure are discussed (Authors)

  3. Estimation of group means when adjusting for covariates in generalized linear models.

    Science.gov (United States)

    Qu, Yongming; Luo, Junxiang

    2015-01-01

    Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for that treatment group in the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models can be seriously biased estimators of the true group means. We propose a new method to estimate the group mean consistently, together with the corresponding variance estimation. Simulations showed that the proposed method produces an unbiased estimator of the group means and provides the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
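The gap between "response at the mean covariate" and "mean response for the group" is easy to see numerically. The sketch below uses a logistic model with assumed (not estimated) coefficients; because the inverse link is nonlinear, the two quantities differ, which is exactly the bias the abstract describes:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(0.0, 1.5, size=n)      # a baseline covariate for one treatment group
beta0, beta1 = -1.0, 1.2              # assumed fitted logistic-regression coefficients

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

# "Response at the mean covariate": plug the mean of x into the model.
p_at_mean_x = expit(beta0 + beta1 * x.mean())

# Mean response for the group: average the predictions over all subjects.
p_marginal = expit(beta0 + beta1 * x).mean()

print(round(p_at_mean_x, 3), round(p_marginal, 3))
```

Averaging predictions over the observed covariate distribution (sometimes called marginal standardization) targets the population-level group mean; plugging in the covariate mean does not, because the expectation does not commute with the nonlinear inverse link.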

  4. A stochastic large deformation model for computational anatomy

    DEFF Research Database (Denmark)

    Arnaudon, Alexis; Holm, Darryl D.; Pai, Akshay Sadananda Uppinakudru

    2017-01-01

    In the study of shapes of human organs using computational anatomy, variations are found to arise from inter-subject anatomical differences, disease-specific effects, and measurement noise. This paper introduces a stochastic model for incorporating random variations into the Large Deformation...

  5. Modeling the impact of large-scale energy conversion systems on global climate

    International Nuclear Information System (INIS)

    Williams, J.

    There are three energy options which could satisfy a projected energy requirement of about 30 TW, namely the solar, nuclear and (to a lesser extent) coal options. Climate models can be used to assess the impact of large scale deployment of these options. The impact of waste heat has been assessed using energy balance models and general circulation models (GCMs). Results suggest that the impacts are significant when the heat input is very high, and studies of more realistic scenarios are required. Energy balance models, radiative-convective models and a GCM have been used to study the impact of doubling the atmospheric CO₂ concentration. State-of-the-art models estimate a surface temperature increase of 1.5-3.0 °C with large amplification near the poles, but much uncertainty remains. Very few model studies have been made of the impact of particles on global climate; more information on the characteristics of particle input is required. The impact of large-scale deployment of solar energy conversion systems has received little attention, but model studies suggest that large scale changes in surface characteristics associated with such systems (surface heat balance, roughness and hydrological characteristics and ocean surface temperature) could have significant global climatic effects. (Auth.)
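To give a flavor of the energy-balance reasoning mentioned above, here is a zero-dimensional toy model of the waste-heat question; the constants and the 30 TW scenario are illustrative assumptions, not the models used in the studies cited:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
ALBEDO = 0.30      # planetary albedo
EPS = 0.612        # effective emissivity, tuned so the baseline is ~288 K

def surface_temp(extra_forcing=0.0):
    """Equilibrium temperature of a zero-dimensional energy balance model:
    absorbed solar + extra forcing = EPS * SIGMA * T**4."""
    absorbed = S0 * (1.0 - ALBEDO) / 4.0 + extra_forcing
    return (absorbed / (EPS * SIGMA)) ** 0.25

t_base = surface_temp()
# 30 TW of waste heat spread over Earth's surface (~5.1e14 m^2) is ~0.06 W/m^2.
t_waste = surface_temp(extra_forcing=30e12 / 5.1e14)
print(round(t_base, 1), round(t_waste - t_base, 3))
```

Globally averaged, 30 TW of waste heat is a tiny forcing and yields only hundredths of a degree in this toy; the abstract's point is that impacts become significant for much higher or strongly localized heat inputs, which requires GCM-scale studies rather than global averages.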

  6. Comparison of the large muscle group widths of the pelvic limb in seven breeds of dogs.

    Science.gov (United States)

    Sabanci, Seyyid Said; Ocal, Mehmet Kamil

    2018-05-14

    Orthopaedic diseases are common in the pelvic limbs of dogs, and reference values for the large muscle groups of the pelvic limb may aid in the diagnosis of such diseases. The objective of this study was therefore to compare the large muscle groups of the pelvic limb in seven breeds of dogs. A total of 126 dogs from different breeds were included, and the widths of the quadriceps, hamstring and gastrocnemius muscles were measured on lateral radiographic images. The width of the quadriceps did not differ between the breeds, but the widths of the hamstring and gastrocnemius muscles differed significantly between the breeds. The widest hamstring and gastrocnemius muscles were seen in the Rottweilers and the Boxers, respectively. The narrowest hamstring and gastrocnemius muscles were seen in the Belgian Malinois and the Golden Retrievers, respectively. All ratios between the measured muscles differed significantly between the breeds. Doberman Pinschers and Belgian Malinois had the highest ratio of gastrocnemius width to hamstring width. Doberman Pinschers also had the highest ratio of quadriceps width to hamstring width. German Shepherds had the highest ratio of gastrocnemius width to quadriceps width. The lowest ratio of quadriceps width to hamstring width was found in the German Shepherds. The ratios of the muscle widths may be used as reference values to assess muscular atrophy or hypertrophy in cases of bilateral or unilateral orthopaedic diseases of the pelvic limbs. Further studies are required to determine the widths and ratios of the large muscle groups of the pelvic limbs in other dog breeds. © 2018 Blackwell Verlag GmbH.

  7. Recursive renormalization group theory based subgrid modeling

    Science.gov (United States)

    Zhou, YE

    1991-01-01

    Advancing the knowledge and understanding of turbulence theory is addressed. Specific problems include studies of subgrid models to understand the effects of unresolved small-scale dynamics on the large-scale motion which, if successful, might substantially reduce the number of degrees of freedom that need to be computed in turbulence simulation.

  8. Bayesian hierarchical model for large-scale covariance matrix estimation.

    Science.gov (United States)

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
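The "overfitting" problem the abstract refers to is concrete: with more variables than samples, the sample covariance matrix is singular. The sketch below shows the failure mode and a simple linear-shrinkage repair toward a scaled identity; this shrinkage trick is a well-known stand-in for regularized covariance estimation, not a reimplementation of the paper's Bayesian hierarchical estimator, and the weight `alpha` is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 50, 20                      # many variables, few samples: the overfitting regime
true_cov = np.diag(np.linspace(1.0, 3.0, p))
X = rng.multivariate_normal(np.zeros(p), true_cov, size=n)

S = np.cov(X, rowvar=False)        # sample covariance: rank-deficient when n <= p

# Shrink toward a scaled identity target (hypothetical fixed weight).
alpha = 0.3
target = np.trace(S) / p * np.eye(p)
S_shrunk = (1 - alpha) * S + alpha * target

print(np.linalg.matrix_rank(S), bool(np.all(np.linalg.eigvalsh(S_shrunk) > 0)))
```

The shrunken estimate is guaranteed positive definite (every eigenvalue is lifted by at least `alpha * trace(S) / p`), so it can be inverted, which is what downstream bioinformatics procedures typically require.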

  9. Modelling and measurements of wakes in large wind farms

    International Nuclear Information System (INIS)

    Barthelmie, R J; Rathmann, O; Frandsen, S T; Hansen, K S; Politis, E; Prospathopoulos, J; Rados, K; Cabezon, D; Schlez, W; Phillips, J; Neubert, A; Schepers, J G; Pijl, S P van der

    2007-01-01

    The paper presents research conducted in the Flow workpackage of the EU funded UPWIND project which focuses on improving models of flow within and downwind of large wind farms in complex terrain and offshore. The main activity is modelling the behaviour of wind turbine wakes in order to improve power output predictions

  10. Modelling animal group fission using social network dynamics.

    Directory of Open Access Journals (Sweden)

    Cédric Sueur

    Full Text Available Group life involves both advantages and disadvantages, meaning that individuals have to compromise between their nutritional needs and their social links. When a compromise is impossible, the group splits in order to reduce conflicts of interest and favour positive social interactions between its members. In this study we built a dynamic model of social networks to represent a succession of temporary fissions involving a change in social relations that could potentially lead to irreversible group fission (i.e. no more group fusion). This is the first study to assess how a social network changes according to group fission-fusion dynamics. We built a model based on different parameters: the group size, the influence of nutritional needs compared to social needs, and the changes in the social network after a temporary fission. The results obtained from these theoretical data indicate how the percentage of social relation transfer, the number of individuals and the relative importance of nutritional requirements and social links influence the average number of days before irreversible fission occurs. The greater the nutritional needs and the higher the transfer of social relations during temporary fission, the fewer days will be observed before an irreversible fission. It is crucial to bridge the gap between the individual and the population level if we hope to understand how simple, local interactions may drive ecological systems.

  11. Large Scale Community Detection Using a Small World Model

    Directory of Open Access Journals (Sweden)

    Ranjan Kumar Behera

    2017-11-01

    Full Text Available In a social network, small or large communities within the network play a major role in deciding the functionalities of the network. Despite diverse definitions, communities in a network may be defined as groups of nodes that are more densely connected to each other than to nodes outside the group. Revealing such hidden communities is a challenging research problem. Real-world social networks exhibit the small-world phenomenon, which indicates that any two social entities can be reached in a small number of steps. In this paper, nodes are mapped into communities based on random walks in the network. However, uncovering communities in large-scale networks is a challenging task due to the unprecedented growth in the size of social networks. A good number of community detection algorithms based on random walks exist in the literature, but when large-scale social networks are considered, these algorithms take considerably more time. In this work, with the objective of improving their efficiency, a parallel programming framework, Map-Reduce, has been used to uncover the hidden communities in a social network. The proposed approach has been compared with some standard existing community detection algorithms on both synthetic and real-world datasets in order to examine its performance, and it is observed that the proposed algorithm is more efficient than the existing ones.
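The random-walk intuition behind such methods can be shown on a toy graph. This pure-Python sketch is not the paper's Map-Reduce algorithm; it only illustrates the underlying idea that short random walks tend to stay inside the walker's own community, so walk endpoints reveal community membership:

```python
import random
from collections import Counter, defaultdict

# Toy graph: two triangles (cliques {0,1,2} and {3,4,5}) joined by one bridge edge.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
adj = defaultdict(list)
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

random.seed(42)

def walk_histogram(start, steps=4, walks=200):
    """Count where short random walks starting at `start` end up."""
    ends = Counter()
    for _ in range(walks):
        node = start
        for _ in range(steps):
            node = random.choice(adj[node])
        ends[node] += 1
    return ends

hist0 = walk_histogram(0)
same_side = hist0[1] + hist0[2]     # endpoints inside node 0's triangle
other_side = hist0[4] + hist0[5]    # endpoints across the bridge
print(same_side, other_side)
```

Because crossing the bridge requires a specific low-probability move, walks from node 0 end far more often among {1, 2} than among {4, 5}; clustering nodes by such co-visit statistics is the essence of random-walk community detection, and the per-node walks are what a Map-Reduce implementation can parallelize.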

  12. Large-eddy simulation of the temporal mixing layer using the Clark model

    NARCIS (Netherlands)

    Vreman, A.W.; Geurts, B.J.; Kuerten, J.G.M.

    1996-01-01

    The Clark model for the turbulent stress tensor in large-eddy simulation is investigated from a theoretical and computational point of view. In order to be applicable to compressible turbulent flows, the Clark model has been reformulated. Actual large-eddy simulation of a weakly compressible,

  13. Global Bedload Flux Modeling and Analysis in Large Rivers

    Science.gov (United States)

    Islam, M. T.; Cohen, S.; Syvitski, J. P.

    2017-12-01

    Proper sediment transport quantification has long been an area of interest for both scientists and engineers in the fields of geomorphology and the management of rivers and coastal waters. Bedload flux is important for monitoring water quality and for sustainable development of coastal and marine bioservices. Bedload measurements, especially for large rivers, are extremely scarce across time, and many rivers have never been monitored. The scarcity of bedload measurements is particularly acute in developing countries, where changes in sediment yields are high. The paucity of bedload measurements is the result of (1) the nature of the problem (large spatial and temporal uncertainties), and (2) field costs, including the time-consuming nature of the measurement procedures (repeated bedform migration tracking, bedload samplers). Here we present a first-of-its-kind methodology for calculating bedload in large global rivers (basins >1,000 km. Evaluation of model skill is based on 113 bedload measurements. The model predictions are compared with an empirical model developed from the observational dataset in an attempt to evaluate the differences between a physically based numerical model and a lumped relationship between bedload flux and fluvial and basin parameters (e.g., discharge, drainage area, lithology). The initial success of the study opens up various applications in global fluvial geomorphology (e.g., the relationship between suspended sediment (wash load) and bedload). Simulated results with known uncertainties offer a new research product and a valuable resource for the whole scientific community.
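For context on what a physically based bedload predictor computes, the classical Meyer-Peter and Mueller (1948) excess-shear-stress formula is a representative example; the sketch below is a generic illustration of that well-known relation, not the model developed in the study:

```python
RHO_W, RHO_S = 1000.0, 2650.0   # water / quartz sediment density, kg/m^3
G = 9.81                        # gravitational acceleration, m/s^2

def mpm_bedload(shear_stress, d50):
    """Volumetric bedload transport per unit width (m^2/s), Meyer-Peter & Mueller.
    shear_stress: bed shear stress in Pa; d50: median grain size in m."""
    r = RHO_S / RHO_W - 1.0                        # submerged specific gravity
    theta = shear_stress / (RHO_W * G * r * d50)   # Shields parameter
    excess = max(theta - 0.047, 0.0)               # transport only above critical stress
    qb_star = 8.0 * excess ** 1.5                  # dimensionless transport rate
    return qb_star * (G * r * d50 ** 3) ** 0.5     # back to dimensional units

print(mpm_bedload(2.0, 0.001), mpm_bedload(0.5, 0.001))
```

The threshold behavior (zero transport below the critical Shields stress, then a 3/2-power growth) is one reason bedload flux is so uncertain: small errors in estimated shear stress produce large errors in predicted flux near the threshold.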

  14. A large deformation viscoelastic model for double-network hydrogels

    Science.gov (United States)

    Mao, Yunwei; Lin, Shaoting; Zhao, Xuanhe; Anand, Lallit

    2017-03-01

    We present a large deformation viscoelasticity model for recently synthesized double network hydrogels which consist of a covalently-crosslinked polyacrylamide network with long chains, and an ionically-crosslinked alginate network with short chains. Such double-network gels are highly stretchable and at the same time tough, because when stretched the crosslinks in the ionically-crosslinked alginate network rupture which results in distributed internal microdamage which dissipates a substantial amount of energy, while the configurational entropy of the covalently-crosslinked polyacrylamide network allows the gel to return to its original configuration after deformation. In addition to the large hysteresis during loading and unloading, these double network hydrogels also exhibit a substantial rate-sensitive response during loading, but exhibit almost no rate-sensitivity during unloading. These features of large hysteresis and asymmetric rate-sensitivity are quite different from the response of conventional hydrogels. We limit our attention to modeling the complex viscoelastic response of such hydrogels under isothermal conditions. Our model is restricted in the sense that we have limited our attention to conditions under which one might neglect any diffusion of the water in the hydrogel - as might occur when the gel has a uniform initial value of the concentration of water, and the mobility of the water molecules in the gel is low relative to the time scale of the mechanical deformation. We also do not attempt to model the final fracture of such double-network hydrogels.

  15. Modified two-fluid model for the two-group interfacial area transport equation

    International Nuclear Information System (INIS)

    Sun Xiaodong; Ishii, Mamoru; Kelly, Joseph M.

    2003-01-01

    This paper presents a modified two-fluid model that is ready to be applied in the approach of the two-group interfacial area transport equation. The two-group interfacial area transport equation was developed to provide a mechanistic constitutive relation for the interfacial area concentration in the two-fluid model. In the two-group transport equation, bubbles are categorized into two groups: spherical/distorted bubbles as Group 1 while cap/slug/churn-turbulent bubbles as Group 2. Therefore, this transport equation can be employed in the flow regimes spanning from bubbly, cap bubbly, slug to churn-turbulent flows. However, the introduction of the two groups of bubbles requires two gas velocity fields. Yet it is not practical to solve two momentum equations for the gas phase alone. In the current modified two-fluid model, a simplified approach is proposed. The momentum equation for the averaged velocity of both Group-1 and Group-2 bubbles is retained. By doing so, the velocity difference between Group-1 and Group-2 bubbles needs to be determined. This may be made either based on simplified momentum equations for both Group-1 and Group-2 bubbles or by a modified drift-flux model

  16. Multiresolution comparison of precipitation datasets for large-scale models

    Science.gov (United States)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving the large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparing gridded precipitation products against each other and against ground observations provides another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin plate spline smoothing algorithm (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria at various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA shows appealing spatial coherence. In addition to the product comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.

  17. Use of models in large-area forest surveys: comparing model-assisted, model-based and hybrid estimation

    Science.gov (United States)

    Goran Stahl; Svetlana Saarela; Sebastian Schnell; Soren Holm; Johannes Breidenbach; Sean P. Healey; Paul L. Patterson; Steen Magnussen; Erik Naesset; Ronald E. McRoberts; Timothy G. Gregoire

    2016-01-01

    This paper focuses on the use of models for increasing the precision of estimators in large-area forest surveys. It is motivated by the increasing availability of remotely sensed data, which facilitates the development of models predicting the variables of interest in forest surveys. We present, review and compare three different estimation frameworks where...

  18. Very large fMRI study using the IMAGEN database: Sensitivity-specificity and population effect modeling in relation to the underlying anatomy

    International Nuclear Information System (INIS)

    Thyreau, Benjamin; Schwartz, Yannick; Thirion, Bertrand; Frouin, Vincent; Loth, Eva; Conrod, Patricia J.; Schumann, Gunter; Vollstadt-Klein, Sabine; Paus, Tomas; Artiges, Eric; Whelan, Robert; Poline, Jean-Baptiste

    2012-01-01

    In this paper we investigate the use of classical fMRI Random Effect (RFX) group statistics when analyzing a very large cohort, and the possible improvement brought by anatomical information. Using 1326 subjects from the IMAGEN study, we first give a global picture of the evolution of the group-effect t-value from a simple face-watching contrast with increasing cohort size. We obtain a wide activated pattern, far from being limited to the reasonably expected brain areas, illustrating the difference between statistical significance and practical significance. This motivates us to inject tissue-probability information into the group estimation: we model the BOLD contrast using a matter-weighted mixture of Gaussians and compare it to the common single-Gaussian model. In both cases, the model parameters are estimated per voxel for one subgroup, and the likelihood of both models is computed on a second, separate subgroup to reflect model generalization capacity. Various group sizes are tested, and significance is assessed using a 10-fold cross-validation scheme. We conclude that adding matter information consistently improves the quantitative analysis of BOLD responses in some areas of the brain, particularly those where accurate inter-subject registration remains challenging. (authors)

  19. Comparing the performance of SIMD computers by running large air pollution models

    DEFF Research Database (Denmark)

    Brown, J.; Hansen, Per Christian; Wasniewski, J.

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on these computers. Using a realistic large-scale model, we gained detailed insight into the performance of the computers involved when used to solve large-scale scientific problems that involve several types of numerical computations. The computers used in our study are the Connection Machines CM-200 and CM-5, and the MasPar MP-2216.

  20. Association of Stressful Life Events with Psychological Problems: A Large-Scale Community-Based Study Using Grouped Outcomes Latent Factor Regression with Latent Predictors

    Directory of Open Access Journals (Sweden)

    Akbar Hassanzadeh

    2017-01-01

    Full Text Available Objective. The current study is aimed at investigating the association between stressful life events and psychological problems in a large sample of Iranian adults. Method. In a cross-sectional large-scale community-based study, 4763 Iranian adults, living in Isfahan, Iran, were investigated. Grouped outcomes latent factor regression on latent predictors was used for modeling the association of psychological problems (depression, anxiety, and psychological distress), measured by the Hospital Anxiety and Depression Scale (HADS) and the General Health Questionnaire (GHQ-12), as the grouped outcomes, and stressful life events, measured by a self-administered stressful life events (SLEs) questionnaire, as the latent predictors. Results. The results showed that the personal stressors domain has a significant positive association with psychological distress (β=0.19), anxiety (β=0.25), depression (β=0.15), and their collective profile score (β=0.20), with greater associations in females (β=0.28) than in males (β=0.13) (all P<0.001). In addition, in the adjusted models, the regression coefficients for the association of the social stressors domain and the psychological problems profile score were 0.37, 0.35, and 0.46 in the total sample, males, and females, respectively (P<0.001). Conclusion. Results of our study indicated that different stressors, particularly those that are socioeconomically related, have an effective impact on psychological problems. It is important to consider the social and cultural background of a population when managing the stressors as an effective approach for preventing and reducing the destructive burden of psychological problems.

  1. Association of Stressful Life Events with Psychological Problems: A Large-Scale Community-Based Study Using Grouped Outcomes Latent Factor Regression with Latent Predictors

    Science.gov (United States)

    Hassanzadeh, Akbar; Heidari, Zahra; Hassanzadeh Keshteli, Ammar; Afshar, Hamid

    2017-01-01

    Objective The current study is aimed at investigating the association between stressful life events and psychological problems in a large sample of Iranian adults. Method In a cross-sectional large-scale community-based study, 4763 Iranian adults, living in Isfahan, Iran, were investigated. Grouped outcomes latent factor regression on latent predictors was used for modeling the association of psychological problems (depression, anxiety, and psychological distress), measured by Hospital Anxiety and Depression Scale (HADS) and General Health Questionnaire (GHQ-12), as the grouped outcomes, and stressful life events, measured by a self-administered stressful life events (SLEs) questionnaire, as the latent predictors. Results The results showed that the personal stressors domain has significant positive association with psychological distress (β = 0.19), anxiety (β = 0.25), depression (β = 0.15), and their collective profile score (β = 0.20), with greater associations in females (β = 0.28) than in males (β = 0.13) (all P < 0.001). In addition, in the adjusted models, the regression coefficients for the association of social stressors domain and psychological problems profile score were 0.37, 0.35, and 0.46 in total sample, males, and females, respectively (P < 0.001). Conclusion Results of our study indicated that different stressors, particularly those socioeconomic related, have an effective impact on psychological problems. It is important to consider the social and cultural background of a population for managing the stressors as an effective approach for preventing and reducing the destructive burden of psychological problems. PMID:29312459

  2. Modelling hydrologic and hydrodynamic processes in basins with large semi-arid wetlands

    Science.gov (United States)

    Fleischmann, Ayan; Siqueira, Vinícius; Paris, Adrien; Collischonn, Walter; Paiva, Rodrigo; Pontes, Paulo; Crétaux, Jean-François; Bergé-Nguyen, Muriel; Biancamaria, Sylvain; Gosset, Marielle; Calmant, Stephane; Tanimoun, Bachir

    2018-06-01

    Hydrological and hydrodynamic models are core tools for simulation of large basins and complex river systems associated with wetlands. Recent studies have pointed towards the importance of online coupling strategies, representing feedbacks between floodplain inundation and vertical hydrology. Especially across semi-arid regions, soil-floodplain interactions can be strong. In this study, we included a two-way coupling scheme in a large scale hydrological-hydrodynamic model (MGB) and tested different model structures, in order to assess which processes are important to simulate in large semi-arid wetlands and how these processes interact with water budget components. To demonstrate the benefits of this coupling in a validation case, the model was applied to the Upper Niger River basin encompassing the Niger Inner Delta, a vast semi-arid wetland in the Sahel. Simulation was carried out from 1999 to 2014 with daily TMPA 3B42 precipitation as forcing, using both in-situ and remotely sensed data for calibration and validation. Model outputs were in good agreement with discharge and water levels at stations both upstream and downstream of the Inner Delta (Nash-Sutcliffe Efficiency (NSE) >0.6 for most gauges), as well as for flooded areas within the Delta region (NSE = 0.6; r = 0.85). Model estimates of annual water losses across the Delta varied between 20.1 and 30.6 km³/yr, while annual evapotranspiration ranged between 760 mm/yr and 1130 mm/yr. Evaluation of model structure indicated that representation of both floodplain channel hydrodynamics (storage, bifurcations, lateral connections) and vertical hydrological processes (floodplain water infiltration into the soil column; evapotranspiration from soil and vegetation; evaporation of open water) is necessary to correctly simulate flood wave attenuation and evapotranspiration along the basin. Two-way coupled models are necessary to better understand processes in large semi-arid wetlands. Finally, such coupled…
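
    The Nash-Sutcliffe Efficiency used above to score discharge, water level and flooded-area predictions has a simple closed form; a minimal generic sketch (not the MGB code) is:

    ```python
    def nse(observed, simulated):
        """Nash-Sutcliffe Efficiency: 1 is a perfect fit; 0 means the model
        is no better than always predicting the observed mean."""
        mean_obs = sum(observed) / len(observed)
        sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
        var = sum((o - mean_obs) ** 2 for o in observed)
        return 1.0 - sse / var

    # A simulation matching the observations exactly scores 1.0;
    # predicting the observed mean everywhere scores 0.0.
    obs = [1.0, 2.0, 3.0, 4.0]
    ```

    Values above roughly 0.6, as reported for most gauges here, are commonly read as acceptable skill for daily hydrological simulation.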

  3. The Achieving Success Everyday Group Counseling Model: Implications for Professional School Counselors

    Science.gov (United States)

    Steen, Sam; Henfield, Malik S.; Booker, Beverly

    2014-01-01

    This article presents the Achieving Success Everyday (ASE) group counseling model, which is designed to help school counselors integrate students' academic and personal-social development into their group work. We first describe this group model in detail and then offer one case example of a middle school counselor using the ASE model to conduct a…

  4. Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam’s Window*

    Science.gov (United States)

    Onorante, Luca; Raftery, Adrian E.

    2015-01-01

    Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam’s window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods. PMID:26917859
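
    The core DMA recursion is compact: model weights are flattened by a forgetting factor (prediction step), updated by each model's predictive likelihood (update step), and, in the dynamic Occam's window variant, only models whose weight stays within a fraction of the best are carried forward. A toy sketch with two invented Gaussian "models" (the forgetting factor `alpha` and cutoff `c` below are illustrative, not the paper's values):

    ```python
    import math

    def dma_step(weights, likelihoods, alpha=0.99, c=0.001):
        """One Dynamic Model Averaging update with a dynamic Occam's window.

        weights     -- current model probabilities (sum to 1)
        likelihoods -- predictive likelihood of the new observation per model
        alpha       -- forgetting factor flattening the weights
        c           -- keep only models with weight >= c * max weight
        """
        # Prediction step: raise to power alpha and renormalize (forgetting).
        pred = [w ** alpha for w in weights]
        s = sum(pred)
        pred = [w / s for w in pred]
        # Update step: multiply by predictive likelihoods, renormalize.
        post = [w * l for w, l in zip(pred, likelihoods)]
        s = sum(post)
        post = [w / s for w in post]
        # Dynamic Occam's window: drop models far below the best one.
        cutoff = c * max(post)
        post = [w if w >= cutoff else 0.0 for w in post]
        s = sum(post)
        return [w / s for w in post]

    def gaussian_pdf(x, mu, sigma=1.0):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    # Two competing constant-mean "models"; the data strongly favor the second,
    # so its weight should dominate after a few observations.
    weights = [0.5, 0.5]
    for y in [4.9, 5.1, 5.0]:
        weights = dma_step(weights, [gaussian_pdf(y, 0.0), gaussian_pdf(y, 5.0)])
    ```

    The pruning step is what keeps the procedure tractable when the candidate model space is large: the averaging sum runs only over the surviving subset, which is re-selected at every time point.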

  5. Research on Francis Turbine Modeling for Large Disturbance Hydropower Station Transient Process Simulation

    Directory of Open Access Journals (Sweden)

    Guangtao Zhang

    2015-01-01

    In the field of hydropower station transient process simulation (HSTPS), the characteristic graph-based iterative hydroturbine model (CGIHM) has been widely used when large-disturbance hydroturbine modeling is involved. However, in this model, iteration is required to calculate speed and pressure, and slow convergence or non-convergence may be encountered for reasons such as a special characteristic graph profile, an inappropriate iterative algorithm, or an inappropriate interpolation algorithm. Other conventional large-disturbance hydroturbine models also have disadvantages that make them difficult to use widely in HSTPS. Therefore, to obtain an accurate simulation result, a simple method for hydroturbine modeling is proposed. In this method, both the initial operating point and the transfer coefficients of the linear hydroturbine model keep changing during simulation. Hence, it can reflect the nonlinearity of the hydroturbine and be used for Francis turbine simulation under large-disturbance conditions. To validate the proposed method, both large-disturbance and small-disturbance simulations of a single hydro unit supplying a resistive, isolated load were conducted. The simulation results were shown to be consistent with those of a field test. Consequently, the proposed method is an attractive option for HSTPS involving Francis turbine modeling under large-disturbance conditions.
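
    The underlying idea, re-evaluating the operating point and the linear model's transfer coefficients at every step instead of iterating on the characteristic graph, can be illustrated with a toy torque characteristic. The function `torque` below is invented for illustration; in the real method the coefficients would come from the turbine's characteristic curves:

    ```python
    def torque(gate, speed):
        """Toy nonlinear Francis-turbine torque characteristic (illustrative only)."""
        return gate ** 2 * (1.2 - 0.04 * speed)

    def transfer_coefficients(gate, speed, eps=1e-6):
        """Numerical partial derivatives of torque at the current operating point;
        these play the role of the linear model's transfer coefficients."""
        e_gate = (torque(gate + eps, speed) - torque(gate - eps, speed)) / (2 * eps)
        e_speed = (torque(gate, speed + eps) - torque(gate, speed - eps)) / (2 * eps)
        return e_gate, e_speed

    # Linearize around the current operating point, then predict the torque
    # after a small disturbance. Because the coefficients are recomputed at
    # every step, the linear model tracks the nonlinearity without iteration.
    gate0, speed0 = 0.8, 10.0
    m0 = torque(gate0, speed0)
    e_gate, e_speed = transfer_coefficients(gate0, speed0)
    d_gate, d_speed = 0.01, 0.2
    m_linear = m0 + e_gate * d_gate + e_speed * d_speed
    m_exact = torque(gate0 + d_gate, speed0 + d_speed)
    ```

    For large disturbances the same mechanism applies: the per-step error stays small because each step is linearized afresh around the state just reached.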

  6. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    Science.gov (United States)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large-scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large-scale data for visibility analyses at the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on classic Boolean visibility, as usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. A case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and the global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large-scale models are an appropriate data source for visibility analyses at the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
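
    The Boolean visibility test at the heart of any viewshed reduces to a running maximum of elevation angles along the line of sight. A minimal sketch on a 1-D terrain profile (real viewshed tools trace such profiles from the observer to every raster cell):

    ```python
    def visible(heights, observer, target, observer_height=1.0, cell_size=1.0):
        """Line-of-sight test on a 1-D terrain profile.

        The target cell is visible if its elevation angle from the observer's
        eye is not below the angle of any intermediate cell (the local horizon).
        """
        lo, hi = (observer, target) if observer < target else (target, observer)
        eye = heights[observer] + observer_height

        def slope(i):
            # Tangent of the elevation angle from the eye to cell i.
            return (heights[i] - eye) / (abs(i - observer) * cell_size)

        horizon = max((slope(i) for i in range(lo + 1, hi)), default=float("-inf"))
        return slope(target) >= horizon

    flat = [0.0, 0.0, 0.0, 0.0]   # open terrain: everything is visible
    hill = [0.0, 0.0, 5.0, 0.0]   # a hill at cell 2 hides what lies behind it
    ```

    The "angle difference above the local horizon" used in the extended viewshed is exactly `slope(target) - horizon` in this sketch: positive values mean the target rises above the blocking barrier, negative values quantify how far it is hidden.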

  7. Modeling and Forecasting Large Realized Covariance Matrices and Portfolio Choice

    NARCIS (Netherlands)

    Callot, Laurent A.F.; Kock, Anders B.; Medeiros, Marcelo C.

    2017-01-01

    We consider modeling and forecasting large realized covariance matrices by penalized vector autoregressive models. We consider Lasso-type estimators to reduce the dimensionality and provide strong theoretical guarantees on the forecast capability of our procedure. We show that we can forecast…
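
    Lasso-type shrinkage fits each equation of the vector autoregression with an ℓ1 penalty, zeroing out most coefficients. A self-contained coordinate-descent sketch on synthetic data (in the paper the estimator is applied equation by equation to the stacked elements of the realized covariance matrices; here a generic sparse regression stands in):

    ```python
    import random

    def soft_threshold(rho, alpha):
        if rho > alpha:
            return rho - alpha
        if rho < -alpha:
            return rho + alpha
        return 0.0

    def lasso(X, y, alpha, sweeps=200):
        """Coordinate descent for min_w 0.5*||y - Xw||^2 + alpha*||w||_1."""
        n, p = len(X), len(X[0])
        w = [0.0] * p
        for _ in range(sweeps):
            for j in range(p):
                # Correlation of column j with the partial residual.
                rho = sum(X[i][j] * (y[i] - sum(X[i][k] * w[k]
                                                for k in range(p) if k != j))
                          for i in range(n))
                z = sum(X[i][j] ** 2 for i in range(n))
                w[j] = soft_threshold(rho, alpha) / z
        return w

    # Sparse ground truth: only two of four predictors matter.
    random.seed(0)
    w_true = [1.5, 0.0, 0.8, 0.0]
    X = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(200)]
    y = [sum(xj * wj for xj, wj in zip(row, w_true)) for row in X]
    w_hat = lasso(X, y, alpha=0.01)
    ```

    With a small penalty the active coefficients are recovered almost unbiasedly, while a large enough penalty shrinks every coefficient to exactly zero; in between lies the sparsity pattern that makes very high-dimensional VARs estimable.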

  8. Forward Modeling of Large-scale Structure: An Open-source Approach with Halotools

    Science.gov (United States)

    Hearin, Andrew P.; Campbell, Duncan; Tollerud, Erik; Behroozi, Peter; Diemer, Benedikt; Goldbaum, Nathan J.; Jennings, Elise; Leauthaud, Alexie; Mao, Yao-Yuan; More, Surhud; Parejko, John; Sinha, Manodeep; Sipöcz, Brigitta; Zentner, Andrew

    2017-11-01

    We present the first stable release of Halotools (v0.2), a community-driven Python package designed to build and test models of the galaxy-halo connection. Halotools provides a modular platform for creating mock universes of galaxies starting from a catalog of dark matter halos obtained from a cosmological simulation. The package supports many of the common forms used to describe galaxy-halo models: the halo occupation distribution, the conditional luminosity function, abundance matching, and alternatives to these models that include effects such as environmental quenching or variable galaxy assembly bias. Satellite galaxies can be modeled to live in subhalos or to follow custom number density profiles within their halos, including spatial and/or velocity bias with respect to the dark matter profile. The package has an optimized toolkit to make mock observations on a synthetic galaxy population—including galaxy clustering, galaxy-galaxy lensing, galaxy group identification, RSD multipoles, void statistics, pairwise velocities and others—allowing direct comparison to observations. Halotools is object-oriented, enabling complex models to be built from a set of simple, interchangeable components, including those of your own creation. Halotools has an automated testing suite and is exhaustively documented on http://halotools.readthedocs.io, which includes quickstart guides, source code notes and a large collection of tutorials. The documentation is effectively an online textbook on how to build and study empirical models of galaxy formation with Python.

  9. Forward Modeling of Large-scale Structure: An Open-source Approach with Halotools

    Energy Technology Data Exchange (ETDEWEB)

    Hearin, Andrew P.; Campbell, Duncan; Tollerud, Erik; Behroozi, Peter; Diemer, Benedikt; Goldbaum, Nathan J.; Jennings, Elise; Leauthaud, Alexie; Mao, Yao-Yuan; More, Surhud; Parejko, John; Sinha, Manodeep; Sipöcz, Brigitta; Zentner, Andrew

    2017-10-18

    We present the first stable release of Halotools (v0.2), a community-driven Python package designed to build and test models of the galaxy-halo connection. Halotools provides a modular platform for creating mock universes of galaxies starting from a catalog of dark matter halos obtained from a cosmological simulation. The package supports many of the common forms used to describe galaxy-halo models: the halo occupation distribution, the conditional luminosity function, abundance matching, and alternatives to these models that include effects such as environmental quenching or variable galaxy assembly bias. Satellite galaxies can be modeled to live in subhalos or to follow custom number density profiles within their halos, including spatial and/or velocity bias with respect to the dark matter profile. The package has an optimized toolkit to make mock observations on a synthetic galaxy population—including galaxy clustering, galaxy–galaxy lensing, galaxy group identification, RSD multipoles, void statistics, pairwise velocities and others—allowing direct comparison to observations. Halotools is object-oriented, enabling complex models to be built from a set of simple, interchangeable components, including those of your own creation. Halotools has an automated testing suite and is exhaustively documented on http://halotools.readthedocs.io, which includes quickstart guides, source code notes and a large collection of tutorials. The documentation is effectively an online textbook on how to build and study empirical models of galaxy formation with Python.

  10. An Example of Large-group Drama and Cross-year Peer Assessment for Teaching Science in Higher Education

    Science.gov (United States)

    Sloman, Katherine; Thompson, Richard

    2010-09-01

    Undergraduate students pursuing a three-year marine biology degree programme (n = 86) experienced a large-group drama aimed at allowing them to explore how scientific research is funded and the associated links between science and society. In the drama, Year 1 students played the "general public" who decided which environmental research areas should be prioritised for funding, Year 2 students were the "scientists" who had to prepare research proposals which they hoped to get funded, and Year 3 students were the "research panel" who decided which proposals to fund with input from the priorities set by the "general public". The drama, therefore, included an element of cross-year peer assessment where Year 3 students evaluated the research proposals prepared by the Year 2 students. Questionnaires were distributed at the end of the activity to gather: (1) student perceptions on the cross-year nature of the exercise, (2) the use of peer assessment, and (3) their overall views on the drama. The students valued the opportunity to interact with their peers from other years of the degree programme and most were comfortable with the use of cross-year peer assessment. The majority of students felt that they had increased their knowledge of how research proposals are funded and the perceived benefits of the large-group drama included increased critical thinking ability, confidence in presenting work to others, and enhanced communication skills. Only one student did not strongly advocate the use of this large-group drama in subsequent years.

  11. Large animal models for vaccine development and testing.

    Science.gov (United States)

    Gerdts, Volker; Wilson, Heather L; Meurens, Francois; van Drunen Littel-van den Hurk, Sylvia; Wilson, Don; Walker, Stewart; Wheler, Colette; Townsend, Hugh; Potter, Andrew A

    2015-01-01

    The development of human vaccines continues to rely on the use of animals for research. Regulatory authorities require novel vaccine candidates to undergo preclinical assessment in animal models before being permitted to enter the clinical phase in human subjects. Substantial progress has been made in recent years in reducing and replacing the number of animals used for preclinical vaccine research through the use of bioinformatics and computational biology to design new vaccine candidates. However, the ultimate goal of a new vaccine is to instruct the immune system to elicit an effective immune response against the pathogen of interest, and no alternatives to live animal use currently exist for evaluation of this response. Studies identifying the mechanisms of immune protection; determining the optimal route and formulation of vaccines; establishing the duration and onset of immunity, as well as the safety and efficacy of new vaccines, must be performed in a living system. Importantly, no single animal model provides all the information required for advancing a new vaccine through the preclinical stage, and research over the last two decades has highlighted that large animals more accurately predict vaccine outcome in humans than do other models. Here we review the advantages and disadvantages of large animal models for human vaccine development and demonstrate that much of the success in bringing a new vaccine to market depends on choosing the most appropriate animal model for preclinical testing.

  12. Discriminative latent models for recognizing contextual group activities.

    Science.gov (United States)

    Lan, Tian; Wang, Yang; Yang, Weilong; Robinovitch, Stephen N; Mori, Greg

    2012-08-01

    In this paper, we go beyond recognizing the actions of individuals and focus on group activities. This is motivated by the observation that human actions are rarely performed in isolation; the contextual information of what other people in the scene are doing provides a useful cue for understanding high-level activities. We propose a novel framework for recognizing group activities which jointly captures the group activity, the individual person actions, and the interactions among them. Two types of contextual information, group-person interaction and person-person interaction, are explored in a latent variable framework. In particular, we propose three different approaches to model person-person interaction. The first approach explores the structure of person-person interaction. Unlike most previous latent structured models, which assume a predefined structure for the hidden layer, e.g., a tree structure, we treat the structure of the hidden layer as a latent variable and implicitly infer it during learning and inference. The second approach explores person-person interaction at the feature level. We introduce a new feature representation called the action context (AC) descriptor. The AC descriptor encodes information about not only the action of an individual person in the video, but also the behavior of other people nearby. The third approach combines the above two. Our experimental results demonstrate the benefit of using contextual information for disambiguating group activities.

  13. Trials of large group teaching in Malaysian private universities: a cross sectional study of teaching medicine and other disciplines

    Science.gov (United States)

    2011-01-01

    Background This is a pilot cross-sectional study using both quantitative and qualitative approaches to examine tutors teaching large classes in private universities in the Klang Valley (comprising Kuala Lumpur, its suburbs, and adjoining towns in the State of Selangor) and the State of Negeri Sembilan, Malaysia. The general aim of this study is to determine the difficulties tutors face when teaching large groups of students and to outline appropriate recommendations for overcoming them. Findings Thirty-two academics from six private universities, drawn from disciplines such as Medical Sciences, Business, Information Technology, and Engineering, participated in this study. SPSS software was used to analyse the data. The results generally indicate that the conventional instructor-student approach has its shortcomings and requires change. Interestingly, tutors from Medicine and IT faced difficulties less often and had positive experiences teaching large groups of students. Conclusion Several suggestions were proposed to overcome these difficulties, ranging from breaking classes into smaller groups to adopting innovative teaching and interactive learning methods that incorporate interactive assessment and creative technology to enhance student learning. Furthermore, the study provides insights into the trials of large-group teaching, clearly identified to help tutors realise their impact on teaching. The suggestions for overcoming these difficulties and maximising student learning can serve as a guideline for tutors who face these challenges. PMID:21902839

  14. Exploring the Impact of Students' Learning Approach on Collaborative Group Modeling of Blood Circulation

    Science.gov (United States)

    Lee, Shinyoung; Kang, Eunhee; Kim, Heui-Baik

    2015-01-01

    This study aimed to explore the effect on group dynamics of statements associated with deep learning approaches (DLA) and their contribution to cognitive collaboration and model development during group modeling of blood circulation. A group was selected for an in-depth analysis of collaborative group modeling. This group constructed a model in a…

  15. Ultradian activity rhythms in large groups of newly hatched chicks (Gallus gallus domesticus).

    Science.gov (United States)

    Nielsen, B L; Erhard, H W; Friggens, N C; McLeod, J E

    2008-07-01

    A clutch of young chicks housed with a mother hen exhibit ultradian (within day) rhythms of activity corresponding to the brooding cycle of the hen. In the present study clear evidence was found of ultradian activity rhythms in newly hatched domestic chicks housed in groups larger than natural clutch size without a mother hen or any other obvious external time-keeper. No consistent synchrony was found between groups housed in different pens within the same room. The ultradian rhythms disappeared with time and little evidence of group rhythmicity remained by the third night. This disappearance over time suggests that the presence of a mother hen may be pivotal for the long-term maintenance of these rhythms. The ultradian rhythm of the chicks may also play an important role in the initiation of brooding cycles during the behavioural transition of the mother hen from incubation to brooding. Computer simulations of individual activity rhythms were found to reproduce the observations made on a group basis. This was achievable even when individual chick rhythms were modelled as independent of each other, thus no assumptions of social facilitation are necessary to obtain ultradian activity rhythms on a group level.
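
    The modelling point, that a group-level ultradian rhythm needs no synchrony or social facilitation between individuals, can be illustrated with independent on/off activity cycles that share only a period. The parameters below (period, duty cycle, group size) are illustrative, not the study's fitted values:

    ```python
    import random

    def chick_activity(t, phase, period=60, active_fraction=0.5):
        """1 if the chick is in the active half of its own cycle at minute t."""
        return 1 if ((t + phase) % period) < active_fraction * period else 0

    random.seed(1)
    period = 60
    # Twenty chicks with independent, random phases and no interaction at all.
    phases = [random.randrange(period) for _ in range(20)]
    group = [sum(chick_activity(t, ph, period) for ph in phases)
             for t in range(12 * period)]

    # Any sum of period-60 signals is itself periodic with period 60, so the
    # group activity trace repeats every cycle even though the chicks are
    # modelled as fully independent of each other.
    ```

    This mirrors the paper's simulation result: group-level rhythmicity can emerge from independent individual rhythms, so no assumption of social facilitation is needed to explain the observations.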

  16. Wind and Photovoltaic Large-Scale Regional Models for hourly production evaluation

    DEFF Research Database (Denmark)

    Marinelli, Mattia; Maule, Petr; Hahmann, Andrea N.

    2015-01-01

    This work presents two large-scale regional models used for the evaluation of normalized power output from wind turbines and photovoltaic power plants on a European regional scale. The models give an estimate of renewable production on a regional scale with 1 h resolution, starting from a mesoscale … of the transmission system, especially regarding the cross-border power flows. The tuning of these regional models is done using historical meteorological data acquired on a per-country basis and using publicly available data of installed capacity.

  17. A model of interaction between anticorruption authority and corruption groups

    International Nuclear Information System (INIS)

    Neverova, Elena G.; Malafeyef, Oleg A.

    2015-01-01

    The paper provides a model of interaction between an anticorruption unit and corruption groups. The main policy functions of the anticorruption unit involve reducing corrupt practices in some entities through an optimal approach to resource allocation and an effective anticorruption policy. We develop a model based on a Markov decision process and use Howard's policy-improvement algorithm to solve for an optimal decision strategy. We examine the assumption that corruption groups retaliate against the anticorruption authority to protect themselves. This model was implemented through a stochastic game.

  18. A model of interaction between anticorruption authority and corruption groups

    Energy Technology Data Exchange (ETDEWEB)

    Neverova, Elena G.; Malafeyef, Oleg A. [Saint-Petersburg State University, Saint-Petersburg, Russia, 35, Universitetskii prospekt, Petrodvorets, 198504 Email:elenaneverowa@gmail.com, malafeyevoa@mail.ru (Russian Federation)

    2015-03-10

    The paper provides a model of interaction between an anticorruption unit and corruption groups. The main policy functions of the anticorruption unit involve reducing corrupt practices in some entities through an optimal approach to resource allocation and an effective anticorruption policy. We develop a model based on a Markov decision process and use Howard's policy-improvement algorithm to solve for an optimal decision strategy. We examine the assumption that corruption groups retaliate against the anticorruption authority to protect themselves. This model was implemented through a stochastic game.
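
    Howard's policy-improvement algorithm alternates exact evaluation of the current policy with a greedy improvement step until the policy stops changing. A minimal sketch on an invented two-state deterministic MDP (the actual corruption-model states, rewards and transition probabilities are not given in the abstract):

    ```python
    def policy_iteration(states, actions, P, R, gamma=0.9, tol=1e-9):
        """Howard's algorithm for a deterministic MDP.
        P[s][a] -> next state, R[s][a] -> immediate reward."""
        policy = {s: actions[0] for s in states}
        while True:
            # Policy evaluation: iterate the Bellman equation for the fixed policy.
            V = {s: 0.0 for s in states}
            while True:
                delta = 0.0
                for s in states:
                    v = R[s][policy[s]] + gamma * V[P[s][policy[s]]]
                    delta = max(delta, abs(v - V[s]))
                    V[s] = v
                if delta < tol:
                    break
            # Policy improvement: act greedily with respect to V.
            new_policy = {s: max(actions, key=lambda a: R[s][a] + gamma * V[P[s][a]])
                          for s in states}
            if new_policy == policy:
                return policy, V
            policy = new_policy

    # Toy problem: state 1 pays 2 per step if you stay; reaching it from
    # state 0 costs 1. The optimal policy moves to state 1 and stays there,
    # giving V(1) = 2/(1-0.9) = 20 and V(0) = -1 + 0.9*20 = 17.
    states, actions = [0, 1], ["stay", "go"]
    P = {0: {"stay": 0, "go": 1}, 1: {"stay": 1, "go": 0}}
    R = {0: {"stay": 0.0, "go": -1.0}, 1: {"stay": 2.0, "go": 0.0}}
    policy, V = policy_iteration(states, actions, P, R)
    ```

    In the stochastic-game setting of the paper, the retaliating corruption groups would supply the transition dynamics that the anticorruption unit optimizes against; the improvement loop itself is unchanged.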

  19. Psychotherapy with schizophrenics in team groups: a systems model.

    Science.gov (United States)

    Beeber, A R

    1991-01-01

    This paper focuses on the treatment of patients with schizophrenic disorders employing the Team Group model. The advantages and disadvantages of the Team Group are presented. Systems theory and principles of group development are applied as a basis for understanding the dynamics of the group in the context of the acute psychiatric unit. Particular problems encountered in treating patients with schizophrenic disorders in this setting are presented. These include: (1) issues of therapist style and technique, (2) basic psychopathology of the schizophrenic disorders, and (3) phase-specific problems associated with the dynamics of the group. Recommendations for therapist interventions are made that may better integrate these patients into the Team Group.

  20. Efficient stochastic approaches for sensitivity studies of an Eulerian large-scale air pollution model

    Science.gov (United States)

    Dimov, I.; Georgieva, R.; Todorov, V.; Ostromsky, Tz.

    2017-10-01

    Reliability of large-scale mathematical models is an important issue when such models are used to support decision makers. Sensitivity analysis of model outputs with respect to variation or natural uncertainties of model inputs is crucial for improving the reliability of mathematical models. A comprehensive experimental study of Monte Carlo algorithms based on Sobol sequences for multidimensional numerical integration has been done. A comparison with Latin hypercube sampling and a particular quasi-Monte Carlo lattice rule based on generalized Fibonacci numbers is presented. The algorithms have been successfully applied to compute global Sobol sensitivity measures corresponding to the influence of several input parameters (six chemical reaction rates and four different groups of pollutants) on the concentrations of important air pollutants. The concentration values were generated by the Unified Danish Eulerian Model. The sensitivity study was done for the areas of several European cities with different geographical locations. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration, and especially for computing sensitivity indices that are small in value. This is crucial, since even small indices may need to be estimated accurately in order to achieve a more precise distribution of input influences and a more reliable interpretation of the model results.
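
    The first-order Sobol index of input i is Var(E[Y|Xi])/Var(Y); with either Sobol' sequences or pseudo-random sampling it can be estimated from two input sample matrices plus "hybrid" copies of one of them. A plain Monte Carlo sketch (pseudo-random numbers stand in for the Sobol sequences used in the paper) on a linear test function whose indices are known analytically:

    ```python
    import random

    def sobol_first_order(f, dim, n=50000, seed=0):
        """Saltelli-style Monte Carlo estimator of first-order Sobol indices
        for a function of `dim` independent U(0,1) inputs."""
        rng = random.Random(seed)
        A = [[rng.random() for _ in range(dim)] for _ in range(n)]
        B = [[rng.random() for _ in range(dim)] for _ in range(n)]
        fA = [f(x) for x in A]
        fB = [f(x) for x in B]
        mean = sum(fA) / n
        var = sum(v * v for v in fA) / n - mean * mean
        indices = []
        for i in range(dim):
            # Rows of A with column i taken from B.
            fABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
            s = sum(fb * (fab - fa) for fb, fab, fa in zip(fB, fABi, fA)) / n
            indices.append(s / var)
        return indices

    # Y = X1 + 2*X2 with independent uniform inputs: analytically
    # S1 = 1/(1+4) = 0.2 and S2 = 4/(1+4) = 0.8.
    S = sobol_first_order(lambda x: x[0] + 2 * x[1], dim=2)
    ```

    The practical difficulty the paper addresses is visible even here: the estimator subtracts nearly equal quantities, so small indices demand low-variance sampling, which is where Sobol sequences and lattice rules outperform crude Monte Carlo.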

  1. Modeling and control of a large nuclear reactor. A three-time-scale approach

    Energy Technology Data Exchange (ETDEWEB)

    Shimjith, S.R. [Indian Institute of Technology Bombay, Mumbai (India); Bhabha Atomic Research Centre, Mumbai (India); Tiwari, A.P. [Bhabha Atomic Research Centre, Mumbai (India); Bandyopadhyay, B. [Indian Institute of Technology Bombay, Mumbai (India). IDP in Systems and Control Engineering

    2013-07-01

    Recent research on Modeling and Control of a Large Nuclear Reactor. Presents a three-time-scale approach. Written by leading experts in the field. Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of several complex dynamic phenomena existing in a reactor. Quite often, the models developed would be of prohibitively large order, non-linear and of complex structure not readily amenable for control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller order model in standard state space form thus overcoming these difficulties. It further brings in innovative methods for controller design for systems exhibiting multi-time-scale property, with emphasis on three-time-scale systems.

  2. Precise MRI-based stereotaxic surgery in large animal models

    DEFF Research Database (Denmark)

    Glud, Andreas Nørgaard; Bech, Johannes; Tvilling, Laura

    BACKGROUND: Stereotaxic neurosurgery in large animals is used widely in different sophisticated models, where precision is becoming more crucial as desired anatomical target regions become smaller. Individually calculated coordinates are necessary in large animal models with cortical and subcortical anatomical differences. NEW METHOD: We present a convenient method to make an MRI-visible skull fiducial for 3D MRI-based stereotaxic procedures in larger experimental animals. Plastic screws were filled with either copper-sulphate solution or MRI-visible paste from a commercially available cranial head marker. The screw fiducials were inserted in the animal skulls and T1-weighted MRI was performed, allowing identification of the inserted skull marker. RESULTS: Both types of fiducial markers were clearly visible on the MRIs. This allows high precision in the stereotaxic space. COMPARISON…

  3. Large degeneracy of excited hadrons and quark models

    International Nuclear Information System (INIS)

    Bicudo, P.

    2007-01-01

    The pattern of a large approximate degeneracy of the excited hadron spectra (larger than the chiral restoration degeneracy) is present in the recent experimental report of Bugg. Here we try to model this degeneracy with state of the art quark models. We review how the Coulomb Gauge chiral invariant and confining Bethe-Salpeter equation simplifies in the case of very excited quark-antiquark mesons, including angular or radial excitations, to a Salpeter equation with an ultrarelativistic kinetic energy with the spin-independent part of the potential. The resulting meson spectrum is solved, and the excited chiral restoration is recovered, for all mesons with J>0. Applying the ultrarelativistic simplification to a linear equal-time potential, linear Regge trajectories are obtained, for both angular and radial excitations. The spectrum is also compared with the semiclassical Bohr-Sommerfeld quantization relation. However, the excited angular and radial spectra do not coincide exactly. We then search, with the classical Bertrand theorem, for central potentials producing always classical closed orbits with the ultrarelativistic kinetic energy. We find that no such potential exists, and this implies that no exact larger degeneracy can be obtained in our equal-time framework, with a single principal quantum number comparable to the nonrelativistic Coulomb or harmonic oscillator potentials. Nevertheless we find it plausible that the large experimental approximate degeneracy will be modeled in the future by quark models beyond the present state of the art

  4. Large-scale building energy efficiency retrofit: Concept, model and control

    International Nuclear Information System (INIS)

    Wu, Zhou; Wang, Bo; Xia, Xiaohua

    2016-01-01

    BEER (Building energy efficiency retrofit) projects are initiated in many nations and regions over the world. Existing studies of BEER focus on modeling and planning based on one building and one year period of retrofitting, which cannot be applied to certain large BEER projects with multiple buildings and multi-year retrofit. In this paper, the large-scale BEER problem is defined in a general TBT (time-building-technology) framework, which fits essential requirements of real-world projects. The large-scale BEER is newly studied in the control approach rather than the optimization approach commonly used before. Optimal control is proposed to design optimal retrofitting strategy in terms of maximal energy savings and maximal NPV (net present value). The designed strategy is dynamically changing on dimensions of time, building and technology. The TBT framework and the optimal control approach are verified in a large BEER project, and results indicate that promising performance of energy and cost savings can be achieved in the general TBT framework. - Highlights: • Energy efficiency retrofit of many buildings is studied. • A TBT (time-building-technology) framework is proposed. • The control system of the large-scale BEER is modeled. • The optimal retrofitting strategy is obtained.

  5. Dynamic Modeling, Optimization, and Advanced Control for Large Scale Biorefineries

    DEFF Research Database (Denmark)

    Prunescu, Remus Mihail

    …with a complex conversion route. Computational fluid dynamics is used to model transport phenomena in large reactors, capturing tank profiles and delays due to plug flows. This work publishes for the first time demonstration-scale real data for validation, showing that the model library is suitable…

  6. The positive group affect spiral : a dynamic model of the emergence of positive affective similarity in work groups

    NARCIS (Netherlands)

    Walter, F.; Bruch, H.

    This conceptual paper seeks to clarify the process of the emergence of positive collective affect. Specifically, it develops a dynamic model of the emergence of positive affective similarity in work groups. It is suggested that positive group affective similarity and within-group relationship…

  7. Group size, grooming and fission in primates: a modeling approach based on group structure.

    Science.gov (United States)

    Sueur, Cédric; Deneubourg, Jean-Louis; Petit, Odile; Couzin, Iain D

    2011-03-21

    In social animals, fission is a common mode of group proliferation and dispersion and may be affected by genetic or other social factors. Sociality implies preserving relationships between group members. An increase in group size and/or in competition for food within the group can result in a decrease in certain social interactions between members, and the group may split irreversibly as a consequence. An individual may try to maintain bonds with a maximum number of group members in order to keep group cohesion, i.e. proximity and stable relationships. However, this strategy takes time, and time is often limited. In addition, previous studies have shown that, whatever the group size, an individual interacts only with certain grooming partners. Here, we develop a computational model to assess how the dynamics of group cohesion are related to group size and to the structure of grooming relationships. Group sizes after simulated fission are compared to the observed sizes of 40 groups of primates. Results showed that the relationship between grooming time and group size depends on how each individual allocates grooming time to its social partners, i.e. grooming a small number of preferred partners or grooming all partners equally or unequally. The number of partners seemed to be more important for group cohesion than the grooming time itself. This structural constraint has important consequences for group sociality, as it allows for competition for grooming partners and attraction to high-ranking individuals, as found in primate groups. It could, however, also have implications when considering the cognitive capacities of primates.
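
    The cohesion argument can be made concrete by treating grooming as a weighted network and letting the group split into whatever connected components remain once interactions fall below a maintenance threshold. A deterministic toy sketch (the weights and threshold are invented, not fitted to the 40 primate groups):

    ```python
    from collections import deque

    def components(n, edges):
        """Connected components of an undirected graph given as an edge list."""
        adj = {i: [] for i in range(n)}
        for a, b in edges:
            adj[a].append(b)
            adj[b].append(a)
        seen, comps = set(), []
        for start in range(n):
            if start in seen:
                continue
            comp, queue = [], deque([start])
            seen.add(start)
            while queue:
                v = queue.popleft()
                comp.append(v)
                for w in adj[v]:
                    if w not in seen:
                        seen.add(w)
                        queue.append(w)
            comps.append(sorted(comp))
        return comps

    def fission(n, grooming, threshold):
        """Keep only bonds groomed at least `threshold` units of time; the
        group splits into the resulting connected components."""
        edges = [(a, b) for (a, b), t in grooming.items() if t >= threshold]
        return components(n, edges)

    # Six individuals: strong grooming within two trios, one weak bridge bond.
    grooming = {(0, 1): 5, (1, 2): 5, (0, 2): 5,
                (3, 4): 5, (4, 5): 5, (3, 5): 5,
                (2, 3): 1}
    ```

    The sketch captures the paper's structural point: cohesion hinges on which bonds exist above the maintenance level, not on the total grooming time, since the single weak bridge decides whether the group holds together or splits.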

  8. Modelling comonotonic group-life under dependent decrement causes

    OpenAIRE

    Wang, Dabuxilatu

    2011-01-01

    Comonotonicity is an extreme case of dependency between random variables. This article considers an extension of the single-life model under multiple dependent decrement causes to the case of comonotonic group-life.

  9. Comparison of hard scattering models for particle production at large transverse momentum. 2

    International Nuclear Information System (INIS)

    Schiller, A.; Ilgenfritz, E.M.; Kripfganz, J.; Moehring, H.J.; Ranft, G.; Ranft, J.

    1977-01-01

    Single particle distributions of π⁺ and π⁻ at large transverse momentum are analysed using various hard collision models: qq → qq, qq̄ → MM̄, qM → qM. The transverse momentum dependence at θ_cm = 90° is well described in all models except qq̄ → MM̄. This model has problems with the ratios (pp → π⁺ + X)/(π± p → π⁰ + X). Presently available data on rapidity distributions of pions in π⁻p and pp̄ collisions are at rather low transverse momentum (however large x⊥ = 2p⊥/√s), where it is not obvious that hard collision models should dominate. The data, in particular the π⁻/π⁺ asymmetry, are well described by all models except qM → Mq (CIM). At large values of transverse momentum significant differences between the models are predicted. (author)

  10. An improved large signal model of InP HEMTs

    Science.gov (United States)

    Li, Tianhao; Li, Wenjun; Liu, Jun

    2018-05-01

    An improved large signal model for InP HEMTs is proposed in this paper. The channel current and charge model equations are constructed based on the Angelov model equations. The equations for both the channel current and gate charge models are continuous and differentiable to high order, and the proposed gate charge model satisfies charge conservation. The Angelov current model equations are improved to account for the strong leakage-induced barrier reduction effect of InP HEMTs, so that the channel current model can fit the DC performance of devices. A 2 × 25 μm × 70 nm InP HEMT device is used to demonstrate the extraction and validation of the model, which accurately predicts the DC I–V, C–V and bias-dependent S parameters. Project supported by the National Natural Science Foundation of China (No. 61331006).

  11. Group prenatal care.

    Science.gov (United States)

    Mazzoni, Sara E; Carter, Ebony B

    2017-06-01

    Patients participating in group prenatal care gather together with women of similar gestational ages and 2 providers who cofacilitate an educational session after a brief medical assessment. The model was first described in the 1990s by a midwife for low-risk patients and is now practiced by midwives and physicians for both low-risk patients and some high-risk patients, such as those with diabetes. The majority of literature on group prenatal care uses CenteringPregnancy, the most popular model. The first randomized controlled trial of CenteringPregnancy showed that it reduced the risk of preterm birth in low-risk women. However, recent meta-analyses have shown similar rates of preterm birth, low birthweight, and neonatal intensive care unit admission between women participating in group prenatal care and individual prenatal care. There may be subgroups, such as African Americans, who benefit from this type of prenatal care with significantly lower rates of preterm birth. Group prenatal care seems to result in increased patient satisfaction and knowledge and use of postpartum family planning as well as improved weight gain parameters. The literature is inconclusive regarding breast-feeding, stress, depression, and positive health behaviors, although it is theorized that group prenatal care positively affects these outcomes. It is unclear whether group prenatal care results in cost savings, although it may in large-volume practices if each group consists of approximately 8-10 women. Group prenatal care requires a significant paradigm shift. It can be difficult to implement and sustain. More randomized trials are needed to ascertain the true benefits of the model, best practices for implementation, and subgroups who may benefit most from this innovative way to provide prenatal care. 
In short, group prenatal care is an innovative and promising model with comparable pregnancy outcomes to individual prenatal care in the general population and improved outcomes in some

  12. A preference aggregation model and application in AHP-group decision making

    Science.gov (United States)

    Yang, Taiyi; Yang, De; Chao, Xiangrui

    2018-04-01

    Group decision making integrates individual preferences to obtain a group preference by applying aggregation rules and preference relations. The two most common approaches, the aggregation of individual judgements (AIJ) and the aggregation of individual priorities (AIP), are traditionally employed in the Analytic Hierarchy Process (AHP) to deal with group decision making problems. In both cases, it is assumed that the group preference is approximately the weighted mathematical expectation of the individual judgements or individual priorities. We propose new preference aggregation methods using optimization models in order to obtain a group preference that is close to all individual priorities. Illustrative examples are examined to demonstrate the application of the proposed models.
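    The two classical AHP aggregation baselines mentioned above can be sketched as follows (the matrices and equal decision-maker weights are illustrative; the paper's own optimization-based aggregation is not reproduced here):

```python
import numpy as np

def priorities(M):
    """Priority vector of a pairwise comparison matrix via the row
    geometric mean method, normalised to sum to one."""
    g = np.prod(M, axis=1) ** (1.0 / M.shape[0])
    return g / g.sum()

def aggregate_judgements(matrices, weights):
    """AIJ: element-wise weighted geometric mean of the individual
    comparison matrices, then a single priority computation."""
    logM = sum(w * np.log(M) for M, w in zip(matrices, weights))
    return priorities(np.exp(logM))

def aggregate_priorities(matrices, weights):
    """AIP: weighted arithmetic mean of each decision maker's own
    priority vector."""
    return sum(w * priorities(M) for M, w in zip(matrices, weights))

# two decision makers comparing three criteria (illustrative judgements)
A = np.array([[1.0, 3.0, 5.0], [1/3, 1.0, 3.0], [1/5, 1/3, 1.0]])
B = np.array([[1.0, 1.0, 3.0], [1.0, 1.0, 1.0], [1/3, 1.0, 1.0]])
w_aij = aggregate_judgements([A, B], [0.5, 0.5])
w_aip = aggregate_priorities([A, B], [0.5, 0.5])
```

    Both aggregates are normalised priority vectors; an optimization-based method of the kind the paper proposes would instead search for a group vector minimising its distance to all individual priority vectors.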

  13. Detonation and fragmentation modeling for the description of large scale vapor explosions

    International Nuclear Information System (INIS)

    Buerger, M.; Carachalios, C.; Unger, H.

    1985-01-01

    The thermal detonation modeling of large-scale vapor explosions is shown to be indispensable for realistic safety evaluations. A steady-state as well as a transient detonation model have been developed, including detailed descriptions of the dynamics and of the fragmentation processes inside a detonation wave. Strong restrictions on large-scale vapor explosions are obtained from this modeling, and they indicate that the reactor pressure vessel would withstand even explosions with unrealistically high masses of corium involved. The modeling is supported by comparisons with a detonation experiment and, concerning its key part, hydrodynamic fragmentation experiments. (orig.) [de

  14. The Cauchy problem for a model of immiscible gas flow with large data

    Energy Technology Data Exchange (ETDEWEB)

    Sande, Hilde

    2008-12-15

    The thesis consists of an introduction and two papers: (1) The solution of the Cauchy problem with large data for a model of a mixture of gases; (2) Front tracking for a model of immiscible gas flow with large data. (AG) refs, figs

  15. Standard model group: Survival of the fittest

    Science.gov (United States)

    Nielsen, H. B.; Brene, N.

    1983-09-01

    The essential content of this paper is related to random dynamics. We speculate that the world seen through a sub-Planck-scale microscope has a lattice structure and that the dynamics on this lattice is almost completely random, except for the requirement that the random (plaquette) action is invariant under some "world (gauge) group". We see that the randomness may lead to spontaneous symmetry breakdown in the vacuum (spontaneous collapse) without explicit appeal to any scalar field associated with the usual Higgs mechanism. We further argue that the subgroup which survives as the end product of a possible chain of collapses is likely to have certain properties; the most important is that it has a topologically connected center. The standard group, i.e. the group of the gauge theory which combines the Salam-Weinberg model with QCD, has this property.

  16. Standard model group: survival of the fittest

    International Nuclear Information System (INIS)

    Nielsen, H.B.; Brene, N.

    1983-01-01

    The essential content of this paper is related to random dynamics. We speculate that the world seen through a sub-Planck-scale microscope has a lattice structure and that the dynamics on this lattice is almost completely random, except for the requirement that the random (plaquette) action is invariant under some ''world (gauge) group''. We see that the randomness may lead to spontaneous symmetry breakdown in the vacuum (spontaneous collapse) without explicit appeal to any scalar field associated with the usual Higgs mechanism. We further argue that the subgroup which survives as the end product of a possible chain of collapses is likely to have certain properties; the most important is that it has a topologically connected center. The standard group, i.e. the group of the gauge theory which combines the Salam-Weinberg model with QCD, has this property. (orig.)

  17. Including investment risk in large-scale power market models

    DEFF Research Database (Denmark)

    Lemming, Jørgen Kjærgaard; Meibom, P.

    2003-01-01

    Long-term energy market models can be used to examine investments in production technologies, however, with market liberalisation it is crucial that such models include investment risks and investor behaviour. This paper analyses how the effect of investment risk on production technology selection...... can be included in large-scale partial equilibrium models of the power market. The analyses are divided into a part about risk measures appropriate for power market investors and a more technical part about the combination of a risk-adjustment model and a partial-equilibrium model. To illustrate...... the analyses quantitatively, a framework based on an iterative interaction between the equilibrium model and a separate risk-adjustment module was constructed. To illustrate the features of the proposed modelling approach we examined how uncertainty in demand and variable costs affects the optimal choice...

  18. Air quality models and unusually large ozone increases: Identifying model failures, understanding environmental causes, and improving modeled chemistry

    Science.gov (United States)

    Couzo, Evan A.

    Several factors combine to make ozone (O3) pollution in Houston, Texas, unique when compared to other metropolitan areas. These include complex meteorology, intense clustering of industrial activity, and significant precursor emissions from the heavily urbanized eight-county area. Decades of air pollution research have borne out two different causes, or conceptual models, of O3 formation. One conceptual model describes a gradual region-wide increase in O3 concentrations "typical" of many large U.S. cities. The other conceptual model links episodic emissions of volatile organic compounds to spatially limited plumes of high O3, which lead to large hourly increases that have exceeded 100 parts per billion (ppb) per hour. These large hourly increases are known to lead to violations of the federal O3 standard and impact Houston's status as a non-attainment area. There is a need to further understand and characterize the causes of peak O3 levels in Houston and simulate them correctly so that environmental regulators can find the most cost-effective pollution controls. This work provides a detailed understanding of unusually large O3 increases in the natural and modeled environments. First, we probe regulatory model simulations and assess their ability to reproduce the observed phenomenon. As configured for the purpose of demonstrating future attainment of the O3 standard, the model fails to predict the spatially limited O3 plumes observed in Houston. Second, we combine ambient meteorological and pollutant measurement data to identify the most likely geographic origins and preconditions of the concentrated O3 plumes. We find evidence that the O3 plumes are the result of photochemical activity accelerated by industrial emissions. And, third, we implement changes to the modeled chemistry to add missing formation mechanisms of nitrous acid, which is an important radical precursor. Radicals control the chemical reactivity of atmospheric systems, and perturbations to

  19. The sheep as a large osteoporotic model for orthopaedic research in humans

    DEFF Research Database (Denmark)

    Cheng, L.; Ding, Ming; Li, Z.

    2008-01-01

    Although small animals such as rodents are very popular for osteoporosis models, large animal models are necessary for research on human osteoporotic diseases. Sheep osteoporosis models are becoming more important because of their unique advantages for osteoporosis research. Sheep are docile...... in nature and large in size, which facilitates obtaining blood samples, urine samples and bone tissue samples for different biochemical and histological tests, as well as surgical manipulation and instrument examinations. Their physiology is similar to humans. To induce osteoporosis, OVX combined with calcium...... intake restriction and glucocorticoid application is the most effective method for the sheep osteoporosis model. The sheep osteoporosis model is an ideal animal model for studying various medicines against osteoporosis and other treatment methods, such as prosthetic replacement, for osteoporotic...

  20. Large transverse momentum hadronic processes

    International Nuclear Information System (INIS)

    Darriulat, P.

    1977-01-01

    The possible relations between deep inelastic leptoproduction and large transverse momentum (p_t) processes in hadronic collisions are usually considered in the framework of the quark-parton picture. Experiments observing the structure of the final state in proton-proton collisions producing at least one large transverse momentum particle have led to the following conclusions: a large fraction of produced particles are unaffected by the large p_t process. The other products are correlated with the large p_t particle; depending upon the sign of the scalar product they can be separated into two groups of ''towards-movers'' and ''away-movers''. The experimental evidence favouring such a picture is reviewed and the properties of each of the three groups (underlying normal event, towards-movers and away-movers) are discussed. Some phenomenological interpretations are presented. The exact nature of away- and towards-movers must be further investigated. Their apparent jet structure has to be confirmed. Angular correlations between leading away- and towards-movers are very informative. Quantum number flow, both within the set of away- and towards-movers, and between it and the underlying normal event, is predicted to behave very differently in different models

  1. Towards a 'standard model' of large scale structure formation

    International Nuclear Information System (INIS)

    Shafi, Q.

    1994-01-01

    We explore constraints on inflationary models employing data on large scale structure mainly from COBE temperature anisotropies and IRAS selected galaxy surveys. In models where the tensor contribution to the COBE signal is negligible, we find that the spectral index of density fluctuations n must exceed 0.7. Furthermore the COBE signal cannot be dominated by the tensor component, implying n > 0.85 in such models. The data favors cold plus hot dark matter models with n equal or close to unity and Ω_HDM ∼ 0.2–0.35. Realistic grand unified theories, including supersymmetric versions, which produce inflation with these properties are presented. (author). 46 refs, 8 figs

  2. A dynamic globalization model for large eddy simulation of complex turbulent flow

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Hae Cheon; Park, No Ma; Kim, Jin Seok [Seoul National Univ., Seoul (Korea, Republic of)

    2005-07-01

    A dynamic subgrid-scale model is proposed for large eddy simulation of turbulent flows in complex geometry. The eddy viscosity model by Vreman [Phys. Fluids, 16, 3670 (2004)] is considered as a base model. A priori tests with the original Vreman model show that it predicts the correct profile of subgrid-scale dissipation in turbulent channel flow but the optimal model coefficient is far from universal. Dynamic procedures of determining the model coefficient are proposed based on the 'global equilibrium' between the subgrid-scale dissipation and viscous dissipation. An important feature of the proposed procedures is that the model coefficient determined is globally constant in space but varies only in time. Large eddy simulations with the present dynamic model are conducted for forced isotropic turbulence, turbulent channel flow and flow over a sphere, showing excellent agreements with previous results.
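    The base model referred to above is the Vreman (2004) eddy-viscosity model; a sketch of its standard form follows (standard textbook notation, stated here as an assumption rather than quoted from this paper):

```latex
\nu_e = c\,\sqrt{\frac{B_\beta}{\alpha_{ij}\alpha_{ij}}},\qquad
\alpha_{ij} = \frac{\partial \bar{u}_j}{\partial x_i},\qquad
\beta_{ij} = \sum_m \Delta_m^2\,\alpha_{mi}\,\alpha_{mj},
```

```latex
B_\beta = \beta_{11}\beta_{22}-\beta_{12}^2
        + \beta_{11}\beta_{33}-\beta_{13}^2
        + \beta_{22}\beta_{33}-\beta_{23}^2 .
```

    The dynamic procedure described in the abstract then replaces the fixed coefficient c with a single, spatially constant value c(t), chosen so that the volume-integrated subgrid-scale dissipation balances the corresponding viscous dissipation ("global equilibrium").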

  3. A Grouping Particle Swarm Optimizer with Personal-Best-Position Guidance for Large Scale Optimization.

    Science.gov (United States)

    Guo, Weian; Si, Chengyong; Xue, Yu; Mao, Yanfen; Wang, Lei; Wu, Qidi

    2017-05-04

    Particle Swarm Optimization (PSO) is a popular algorithm which is widely investigated and well implemented in many areas. However, the canonical PSO does not maintain population diversity well, which usually leads to premature convergence or entrapment in local optima. To address this issue, we propose a variant of PSO named Grouping PSO with Personal-Best-Position (Pbest) Guidance (GPSO-PG), which maintains population diversity by preserving the diversity of exemplars. On one hand, we adopt a uniform random allocation strategy to assign particles into different groups, and in each group the losers learn from the winner. On the other hand, we employ the personal historical best position of each particle in social learning rather than the current global best particle. In this way, exemplar diversity increases and the influence of the global best particle is eliminated. We test the proposed algorithm on the benchmarks from CEC 2008 and CEC 2010, which concern large scale optimization problems (LSOPs). Compared with several current peer algorithms, GPSO-PG exhibits competitive ability to maintain population diversity and satisfactory performance on these problems.
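    The grouping-and-winner mechanism described above can be sketched as follows (the update constants and details are assumptions for illustration, not the authors' exact GPSO-PG formulation): each iteration the swarm is randomly partitioned into groups, and losers in each group learn from the group winner and from the swarm's mean personal-best position instead of a global best.

```python
import random

def gpso_pg(f, dim, n_particles=60, group_size=3, iters=200,
            lo=-5.0, hi=5.0, seed=1):
    """Sketch of a grouping PSO with personal-best guidance: random
    grouping each iteration; group losers move toward the group winner
    and the mean of all personal bests (no global-best term)."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_f = [f(x) for x in X]
    for _ in range(iters):
        mean_pb = [sum(p[d] for p in pbest) / n_particles for d in range(dim)]
        order = list(range(n_particles))
        rng.shuffle(order)                          # uniform random grouping
        for g in range(0, n_particles, group_size):
            group = order[g:g + group_size]
            winner = min(group, key=lambda i: f(X[i]))
            for i in group:
                if i == winner:
                    continue                        # winner keeps its position
                for d in range(dim):
                    r1, r2, r3 = rng.random(), rng.random(), rng.random()
                    V[i][d] = (r1 * V[i][d]
                               + r2 * (X[winner][d] - X[i][d])  # learn from winner
                               + r3 * (mean_pb[d] - X[i][d]))   # pbest guidance
                    X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
                fx = f(X[i])
                if fx < pbest_f[i]:
                    pbest_f[i], pbest[i] = fx, X[i][:]
    return min(pbest_f)

# sphere function as a smoke test of the sketch
best = gpso_pg(lambda x: sum(v * v for v in x), dim=10)
```

    Because losers imitate different winners in different random groups, the exemplars stay diverse, which is the diversity-preservation idea the abstract emphasises.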

  4. Fast and local non-linear evolution of steep wave-groups on deep water: A comparison of approximate models to fully non-linear simulations

    International Nuclear Information System (INIS)

    Adcock, T. A. A.; Taylor, P. H.

    2016-01-01

    The non-linear Schrödinger equation and its higher order extensions are routinely used for the analysis of extreme ocean waves. This paper compares the evolution of individual wave-packets modelled using non-linear Schrödinger type equations with packets modelled using fully non-linear potential flow models. The modified non-linear Schrödinger equation accurately models the relatively large scale non-linear changes to the shape of wave-groups, with a dramatic contraction of the group along the mean propagation direction and a corresponding extension of the width of the wave-crests. In addition, as the extreme wave forms, there is a local non-linear contraction of the wave-group around the crest, which leads to a localised broadening of the wave spectrum that the bandwidth-limited non-linear Schrödinger equations struggle to capture. This limitation occurs for waves of moderate steepness and a narrow underlying spectrum
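    For reference, one common deep-water form of the cubic non-linear Schrödinger equation for the complex envelope A(x, t) of a carrier wave with frequency ω₀ and wavenumber k₀ is (a standard textbook form, not necessarily the exact variant used in the paper):

```latex
i\left(\frac{\partial A}{\partial t} + \frac{\omega_0}{2k_0}\frac{\partial A}{\partial x}\right)
- \frac{\omega_0}{8k_0^2}\frac{\partial^2 A}{\partial x^2}
- \frac{\omega_0 k_0^2}{2}\,|A|^2 A = 0,
```

    where ω₀/2k₀ is the group velocity. The modified non-linear Schrödinger (Dysthe) equation extends this balance with higher-order terms in wave steepness, which is what allows it to capture the group-shape changes discussed above.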

  5. Transformation of renormalization groups in 2N-component fermion hierarchical model

    International Nuclear Information System (INIS)

    Stepanov, R.G.

    2006-01-01

    The 2N-component fermion model on the hierarchical lattice is studied. Explicit formulae are presented for the renormalization group transformation in the space of coefficients defining the Grassmann-valued density of the free measure. The inverse renormalization group transformation is calculated. The determination of fixed points of the renormalization group is reduced to solving a set of algebraic equations. An interesting connection between the renormalization group transformations in the boson and fermion hierarchical models is found: it is shown that one transformation is obtained from the other by the substitution of −N for N [ru

  6. Characteristics of the large corporation-based, bureaucratic model among oecd countries - an foi model analysis

    Directory of Open Access Journals (Sweden)

    Bartha Zoltán

    2014-03-01

    Full Text Available Deciding on the development path of the economy has been a delicate question in economic policy, not least because of trade-off effects which immediately worsen certain economic indicators as steps are taken to improve others. The aim of the paper is to present a framework that helps decide on such policy dilemmas. This framework is based on an analysis conducted among OECD countries with the FOI model (focusing on future, outside and inside potentials). Several development models can be deduced by this method, out of which only the large corporation-based, bureaucratic model is discussed in detail. The large corporation-based, bureaucratic model implies a development strategy focused on the creation of domestic safe havens. Based on country studies, it is concluded that well-performing safe havens require the active participation of the state. We find that, in countries adhering to this model, business competitiveness is sustained through intensive public support and an active role taken by the government in education, research and development, in detecting and exploiting special market niches, and in encouraging sectoral cooperation.

  7. Computational Modeling of Large Wildfires: A Roadmap

    KAUST Repository

    Coen, Janice L.

    2010-08-01

    Wildland fire behavior, particularly that of large, uncontrolled wildfires, has not been well understood or predicted. Our methodology to simulate this phenomenon uses high-resolution dynamic models made of numerical weather prediction (NWP) models coupled to fire behavior models to simulate fire behavior. NWP models are capable of modeling very high resolution (< 100 m) atmospheric flows. The wildland fire component is based upon semi-empirical formulas for fireline rate of spread, post-frontal heat release, and a canopy fire. The fire behavior is coupled to the atmospheric model such that low level winds drive the spread of the surface fire, which in turn releases sensible heat, latent heat, and smoke fluxes into the lower atmosphere, feeding back to affect the winds directing the fire. These coupled dynamic models capture the rapid spread downwind, flank runs up canyons, bifurcations of the fire into two heads, and rough agreement in area, shape, and direction of spread at periods for which fire location data is available. Yet, intriguing computational science questions arise in applying such models in a predictive manner, including physical processes that span a vast range of scales, processes such as spotting that cannot be modeled deterministically, estimating the consequences of uncertainty, the efforts to steer simulations with field data ("data assimilation"), lingering issues with short term forecasting of weather that may show skill only on the order of a few hours, and the difficulty of gathering pertinent data for verification and initialization in a dangerous environment. © 2010 IEEE.

  8. Modeling and Control of a Large Nuclear Reactor A Three-Time-Scale Approach

    CERN Document Server

    Shimjith, S R; Bandyopadhyay, B

    2013-01-01

    Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of several complex dynamic phenomena existing in a reactor. Quite often, the models developed would be of prohibitively large order, non-linear and of complex structure not readily amenable for control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller order model in standard state space form thus overcoming these difficulties. It further brings in innovative methods for controller design for systems exhibiting multi-time-scale property,...

  9. Software engineering the mixed model for genome-wide association studies on large samples.

    Science.gov (United States)

    Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J

    2009-11-01

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.
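    The mixed-model association framework reviewed above can be sketched as follows, assuming known variance components (the function name and toy data are illustrative; real GWAS software estimates the components, e.g. by REML, before solving for the fixed effects):

```python
import numpy as np

def mlm_gls(y, X, K, sigma_g2, sigma_e2):
    """Generalized least squares estimate of the fixed effects b in the
    mixed model y = X b + u + e, with u ~ N(0, sigma_g2 * K) for kinship
    matrix K and e ~ N(0, sigma_e2 * I). Variance components are taken
    as known here; in practice they are estimated by REML."""
    V = sigma_g2 * K + sigma_e2 * np.eye(len(y))
    Vi = np.linalg.inv(V)
    return np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)

# tiny illustration: intercept + one SNP, two equally related families
rng = np.random.default_rng(0)
n = 40
K = np.kron(np.eye(2), np.ones((n // 2, n // 2)))   # block kinship matrix
snp = rng.integers(0, 3, n).astype(float)           # genotypes coded 0/1/2
X = np.column_stack([np.ones(n), snp])
u = np.repeat(rng.normal(0.0, 0.7, 2), n // 2)      # shared family effects
y = 1.0 + 0.8 * snp + u + rng.normal(0.0, 0.3, n)
b = mlm_gls(y, X, K, 0.49, 0.09)                    # [intercept, SNP effect]
```

    Inverting V directly is the computational bottleneck the abstract alludes to: for large samples, efficient packages avoid it through spectral decompositions of K and reuse across markers.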

  10. PVA gel as a potential adhesion barrier: a safety study in a large animal model of intestinal surgery.

    Science.gov (United States)

    Renz, Bernhard W; Leitner, Kurt; Odermatt, Erich; Worthley, Daniel L; Angele, Martin K; Jauch, Karl-Walter; Lang, Reinhold A

    2014-03-01

    Intra-abdominal adhesions following surgery are a major source of morbidity and mortality including abdominal pain and small bowel obstruction. This study evaluated the safety of PVA gel (polyvinyl alcohol and carboxymethylated cellulose gel) on intestinal anastomoses and its potential effectiveness in preventing adhesions in a clinically relevant large animal model. Experiments were performed in a pig model with median laparotomy and intestinal anastomosis following small bowel resection. The primary endpoint was the safety of PVA on small intestinal anastomoses. We also measured the incidence of postoperative adhesions in PVA vs. control groups: group A (eight pigs): stapled anastomosis with PVA gel compared to group B (eight pigs), which had no PVA gel; group C (eight pigs): hand-sewn anastomosis with PVA gel compared to group B (eight pigs), which had no anti-adhesive barrier. Animals were sacrificed 14 days after surgery and analyzed. All anastomoses had a patent lumen without any stenosis. No anastomoses leaked at an intraluminal pressure of 40 cmH2O. Thus, anastomoses healed very well in both groups, regardless of whether PVA was administered. PVA-treated animals, however, had significantly fewer adhesions in the area of stapled anastomoses. The hand-sewn PVA group also had weaker adhesions and trended towards fewer adhesions to adjacent organs. These results suggest that PVA gel does not jeopardize the integrity of intestinal anastomoses. However, larger trials are needed to investigate the potential of PVA gel to prevent adhesions in gastrointestinal surgery.

  11. Standard model group: survival of the fittest

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen, H.B. (Niels Bohr Inst., Copenhagen (Denmark); Nordisk Inst. for Teoretisk Atomfysik, Copenhagen (Denmark)); Brene, N. (Niels Bohr Inst., Copenhagen (Denmark))

    1983-09-19

    The essential content of this paper is related to random dynamics. We speculate that the world seen through a sub-Planck-scale microscope has a lattice structure and that the dynamics on this lattice is almost completely random, except for the requirement that the random (plaquette) action is invariant under some ''world (gauge) group''. We see that the randomness may lead to spontaneous symmetry breakdown in the vacuum (spontaneous collapse) without explicit appeal to any scalar field associated with the usual Higgs mechanism. We further argue that the subgroup which survives as the end product of a possible chain of collapse is likely to have certain properties; the most important is that it has a topologically connected center. The standard group, i.e. the group of the gauge theory which combines the Salam-Weinberg model with QCD, has this property.

  12. Standard model group survival of the fittest

    International Nuclear Information System (INIS)

    Nielsen, H.B.; Brene, N.

    1983-02-01

    The essential content of this note is related to random dynamics. The authors speculate that the world seen through a sub Planck scale microscope has a lattice structure and that the dynamics on this lattice is almost completely random, except for the requirement that the random (plaquette) action is invariant under some ''world (gauge) group''. It is seen that the randomness may lead to spontaneous symmetry breakdown in the vacuum (spontaneous collapse) without explicit appeal to any scalar field associated with the usual Higgs mechanism. It is further argued that the subgroup which survives as the end product of a possible chain of collapses is likely to have certain properties; the most important is that it has a topologically connected center. The standard group, i.e. the group of the gauge theory which combines the Salam-Weinberg model with QCD, has this property. (Auth.)

  13. A refined regional modeling approach for the Corn Belt - Experiences and recommendations for large-scale integrated modeling

    Science.gov (United States)

    Panagopoulos, Yiannis; Gassman, Philip W.; Jha, Manoj K.; Kling, Catherine L.; Campbell, Todd; Srinivasan, Raghavan; White, Michael; Arnold, Jeffrey G.

    2015-05-01

    Nonpoint source pollution from agriculture is the main source of nitrogen and phosphorus in the stream systems of the Corn Belt region in the Midwestern US. This region comprises two large river basins, the intensely row-cropped Upper Mississippi River Basin (UMRB) and Ohio-Tennessee River Basin (OTRB), which are considered the key contributing areas for the Northern Gulf of Mexico hypoxic zone according to the US Environmental Protection Agency. Thus, in this area it is of utmost importance to ensure that intensive agriculture for food, feed and biofuel production can coexist with a healthy water environment. To address these objectives within a river basin management context, an integrated modeling system has been constructed with the hydrologic Soil and Water Assessment Tool (SWAT) model, capable of estimating river basin responses to alternative cropping and/or management strategies. To improve modeling performance compared to previous studies and provide a spatially detailed basis for scenario development, this SWAT Corn Belt application incorporates a greatly refined subwatershed structure based on 12-digit hydrologic units or 'subwatersheds' as defined by the US Geological Survey. Model setup, calibration and validation are time-demanding and challenging tasks for these large systems, given the scale-intensive data requirements and the need to ensure the reliability of flow and pollutant load predictions at multiple locations. Thus, the objectives of this study are both to describe this large-scale modeling approach comprehensively, providing estimates of pollution and crop production in the region, and to present strengths and weaknesses of integrated modeling at such a large scale, along with how it can be improved on the basis of the current modeling structure and results.
The predictions were based on a semi-automatic hydrologic calibration approach for large-scale and spatially detailed modeling studies, with the use of the Sequential

  14. A model parent group for enhancing aggressive children's social competence in Taiwan.

    Science.gov (United States)

    Li, Ming-Hui

    2009-07-01

    This paper presents a semi-structured psychoeducational model of group work for parents of aggressive children, based on the concepts of co-parenting and bidirectionality. The group was developed to enhance the social competence of five Taiwanese aggressive children by promoting positive interactions within the family. Topics covered in the group included identifying parenting styles, forming parental alliances, fostering mutual parent-child initiations and compliances, establishing parent-child co-regulation, and responding to aggressive children's negative emotions. Pre- and post-group comparisons suggested the effectiveness of the group model.

  15. Model parameters for representative wetland plant functional groups

    Science.gov (United States)

    Williams, Amber S.; Kiniry, James R.; Mushet, David M.; Smith, Loren M.; McMurry, Scott T.; Attebury, Kelly; Lang, Megan; McCarty, Gregory W.; Shaffer, Jill A.; Effland, William R.; Johnson, Mari-Vaughn V.

    2017-01-01

    Wetlands provide a wide variety of ecosystem services including water quality remediation, biodiversity refugia, groundwater recharge, and floodwater storage. Realistic estimation of ecosystem service benefits associated with wetlands requires reasonable simulation of the hydrology of each site and realistic simulation of the upland and wetland plant growth cycles. Objectives of this study were to quantify leaf area index (LAI), light extinction coefficient (k), and plant nitrogen (N), phosphorus (P), and potassium (K) concentrations in natural stands of representative plant species for some major plant functional groups in the United States. Functional groups in this study were based on these parameters and plant growth types to enable process-based modeling. We collected data at four locations representing some of the main wetland regions of the United States. At each site, we collected on-the-ground measurements of fraction of light intercepted, LAI, and dry matter within the 2013–2015 growing seasons. Maximum LAI and k variables showed noticeable variations among sites and years, while overall averages and functional group averages give useful estimates for multisite simulation modeling. Variation within each species gives an indication of what can be expected in such natural ecosystems. For P and K, the concentrations from highest to lowest were spikerush (Eleocharis macrostachya), reed canary grass (Phalaris arundinacea), smartweed (Polygonum spp.), cattail (Typha spp.), and hardstem bulrush (Schoenoplectus acutus). Spikerush had the highest N concentration, followed by smartweed, bulrush, reed canary grass, and then cattail. These parameters will be useful for the actual wetland species measured and for the wetland plant functional groups they represent. These parameters and the associated process-based models offer promise as valuable tools for evaluating environmental benefits of wetlands and for evaluating impacts of various agronomic practices in

  16. Large p_T pion production and the clustered parton model

    Energy Technology Data Exchange (ETDEWEB)

    Kanki, T [Osaka Univ., Toyonaka (Japan). Coll. of General Education

    1977-05-01

    Recent experimental results on large p_T inclusive π⁰ production in pp and πp collisions are interpreted with a parton model in which the constituent quarks are defined to be clusters of quark-partons and gluons.

  17. Large-scale inverse model analyses employing fast randomized data reduction

    Science.gov (United States)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
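
The core sketching idea can be illustrated with a minimal NumPy sketch (this is an illustration of randomized data reduction in general, not the authors' Julia/MADS implementation; the problem sizes and Gaussian sketch are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear inverse problem d = G m + noise with many observations.
n_obs, n_par = 5000, 20          # illustrative sizes; the paper targets ~1e7 observations
G = rng.standard_normal((n_obs, n_par))
m_true = rng.standard_normal(n_par)
d = G @ m_true + 0.01 * rng.standard_normal(n_obs)

# "Sketching": a short random matrix S compresses the observation space,
# so the reduced problem has k rows instead of n_obs.
k = 100                          # sketch dimension << n_obs
S = rng.standard_normal((k, n_obs)) / np.sqrt(k)

# Solve the reduced least-squares problem min_m || S d - S G m ||.
m_est, *_ = np.linalg.lstsq(S @ G, S @ d, rcond=None)
print(np.linalg.norm(m_est - m_true))
```

Because `S @ G` has only k rows, the cost of the solve scales with the sketch dimension rather than the raw number of observations, which is the point of the approach.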

  18. Photorealistic large-scale urban city model reconstruction.

    Science.gov (United States)

    Poullis, Charalambos; You, Suya

    2009-01-01

    The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games, and films. A realistic representation of the large-scale environments is therefore imperative for the success of such applications, since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, the task of creating such large-scale virtual environments remains time-consuming and largely manual. In this work, we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel, extendible, parameterized geometric primitive is presented for the automatic identification and reconstruction of building structures. In addition, buildings with complex roofs containing complex linear and nonlinear surfaces are reconstructed interactively using a linear polygonal and a nonlinear primitive, respectively. Second, we present a rendering pipeline for the composition of photorealistic textures, which, unlike existing techniques, can recover missing or occluded texture information by integrating information captured from multiple optical sensors (ground, aerial, and satellite).

  19. Finite element modelling for fatigue stress analysis of large suspension bridges

    Science.gov (United States)

    Chan, Tommy H. T.; Guo, L.; Li, Z. X.

    2003-03-01

    Fatigue is an important failure mode for large suspension bridges under traffic loadings. However, large suspension bridges have so many attributes that it is difficult to analyze their fatigue damage using experimental measurement methods. Numerical simulation is a feasible method of studying such fatigue damage. In British standards, the finite element method is recommended as a rigorous method for steel bridge fatigue analysis. This paper aims at developing a finite element (FE) model of a large suspension steel bridge for fatigue stress analysis. As a case study, a FE model of the Tsing Ma Bridge is presented. The verification of the model is carried out with the help of the measured bridge modal characteristics and the online data measured by the structural health monitoring system installed on the bridge. The results show that the constructed FE model is efficient for bridge dynamic analysis. Global structural analyses using the developed FE model are presented to determine the components of the nominal stress generated by railway loadings and some typical highway loadings. The critical locations in the bridge main span are also identified with the numerical results of the global FE stress analysis. Local stress analysis of a typical weld connection is carried out to obtain the hot-spot stresses in the region. These results provide a basis for evaluating fatigue damage and predicting the remaining life of the bridge.

  20. The Beyond the standard model working group: Summary report

    Energy Technology Data Exchange (ETDEWEB)

    G. Azuelos et al.

    2004-03-18

    In this working group we have investigated a number of aspects of searches for new physics beyond the Standard Model (SM) at the running or planned TeV-scale colliders. For the most part, we have considered hadron colliders, as they will define particle physics at the energy frontier for the next ten years at least. The variety of models for Beyond the Standard Model (BSM) physics has grown immensely. It is clear that only future experiments can provide the needed direction to clarify the correct theory. Thus, our focus has been on exploring the extent to which hadron colliders can discover and study BSM physics in various models. We have placed special emphasis on scenarios in which the new signal might be difficult to find or of a very unexpected nature. For example, in the context of supersymmetry (SUSY), we have considered: how to make fully precise predictions for the Higgs bosons as well as the superparticles of the Minimal Supersymmetric Standard Model (MSSM) (parts III and IV); MSSM scenarios in which most or all SUSY particles have rather large masses (parts V and VI); the ability to sort out the many parameters of the MSSM using a variety of signals and study channels (part VII); whether the no-lose theorem for MSSM Higgs discovery can be extended to the next-to-minimal Supersymmetric Standard Model (NMSSM), in which an additional singlet superfield is added to the minimal collection of superfields, potentially providing a natural explanation of the electroweak value of the parameter μ (part VIII); sorting out the effects of CP violation using Higgs plus squark associate production (part IX); the impact of lepton flavor violation of various kinds (part X); experimental possibilities for the gravitino and its sgoldstino partner (part XI); what the implications for SUSY would be if the NuTeV signal for di-muon events were interpreted as a sign of R-parity violation (part XII). Our other main focus was on the phenomenological implications of extra

  1. Longitudinal Trajectories of Metabolic Control From Childhood to Young Adulthood in Type 1 Diabetes From a Large German/Austrian Registry: A Group-Based Modeling Approach.

    Science.gov (United States)

    Schwandt, Anke; Hermann, Julia M; Rosenbauer, Joachim; Boettcher, Claudia; Dunstheimer, Désirée; Grulich-Henn, Jürgen; Kuss, Oliver; Rami-Merhar, Birgit; Vogel, Christian; Holl, Reinhard W

    2017-03-01

    Worsening of glycemic control in type 1 diabetes during puberty is a common observation. However, HbA1c remains stable or even improves for some youths. The aim is to identify distinct patterns of glycemic control in type 1 diabetes from childhood to young adulthood. A total of 6,433 patients with type 1 diabetes were selected from the prospective, multicenter diabetes patient registry Diabetes-Patienten-Verlaufsdokumentation (DPV) (follow-up from age 8 to 19 years, baseline diabetes duration ≥2 years, HbA1c aggregated per year of life). We used latent class growth modeling as the trajectory approach to determine distinct subgroups following a similar trajectory for HbA1c over time. Five distinct longitudinal trajectories of HbA1c were determined, comprising group 1 = 40%, group 2 = 27%, group 3 = 15%, group 4 = 13%, and group 5 = 5% of patients. Groups 1-3 indicated stable glycemic control at different HbA1c levels. At baseline, similar HbA1c was observed in group 1 and group 4, but HbA1c deteriorated in group 4 from age 8 to 19 years. Similar patterns were present in group 3 and group 5. We observed differences in self-monitoring of blood glucose, insulin therapy, daily insulin dose, physical activity, BMI SD score, body-height SD score, and migration background across all HbA1c trajectories (all P ≤ 0.001). No sex differences were present. Comparing groups with similar initial HbA1c but different patterns, groups with a steeper HbA1c increase were characterized by a lower frequency of self-monitoring of blood glucose, less physical activity, and reduced height (all P …). Demographics were related to different HbA1c courses. © 2017 by the American Diabetes Association.
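
The idea of grouping patients by trajectory shape can be sketched with a crude stand-in for latent class growth modeling: a minimal k-means on whole synthetic trajectories (LCGM proper additionally fits parametric growth curves per class; the data and group shapes below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-patient HbA1c trajectories, ages 8..19 (12 points each):
# one stable group and one worsening group.
ages = np.arange(8, 20)
stable = 7.5 + 0.1 * rng.standard_normal((40, ages.size))
worsening = 7.5 + 0.25 * (ages - 8) + 0.1 * rng.standard_normal((40, ages.size))
X = np.vstack([stable, worsening])

def kmeans2(X, iters=20):
    # Initialise with the first trajectory and the trajectory farthest from it.
    centers = np.stack([X[0], X[np.argmax(((X - X[0]) ** 2).sum(1))]])
    for _ in range(iters):
        # Assign each trajectory to the nearest class-mean trajectory.
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.stack([X[labels == j].mean(0) for j in range(2)])
    return labels, centers

labels, centers = kmeans2(X)
```

With well-separated shapes the two recovered classes coincide with the stable and worsening groups; real registry data would need a model-based criterion (e.g., BIC) to pick the number of classes.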

  2. Report on US-DOE/OHER Task Group on modelling and scaling

    International Nuclear Information System (INIS)

    Mewhinney, J.A.; Griffith, W.C.

    1989-01-01

    In early 1986, the DOE/OHER Task Group on Modeling and Scaling was formed. Membership on the Task Group is drawn from staff of several laboratories funded by the United States Department of Energy, Office of Health and Environmental Research. The primary goal of the Task Group is to promote cooperation among the laboratories in analysing mammalian radiobiology studies, with emphasis on studies that used beagle dogs in lifespan experiments. To assist in defining the status of modelling and scaling in animal data, the Task Group served as the programme committee for the 26th Hanford Life Sciences Symposium, entitled Modeling for Scaling to Man, held in October 1987. This symposium had over 60 oral presentations describing current research in dosimetric, pharmacokinetic, and dose-response modelling and scaling of results from animal studies to humans. A summary of the highlights of this symposium is presented. The Task Group is also developing recommendations for analyses of results obtained from dog lifespan studies. The goal is to provide as many comparisons as possible between these studies and to scale the results to humans to strengthen limited epidemiological data on exposures of humans to radiation. Several methods are discussed. (author)

  3. Cluster imaging of multi-brain networks (CIMBN): a general framework for hyperscanning and modeling a group of interacting brains

    Directory of Open Access Journals (Sweden)

    Lian Duan

    2015-07-01

    Studying the neural basis of human social interactions is a key topic in the field of social neuroscience. Brain imaging studies in this field usually focus on the neural correlates of social interactions between two participants. However, as the number of participants increases further, even by a small amount, great difficulties arise. One challenge is how to concurrently scan all the interacting brains with high ecological validity, especially for a large number of participants. The other challenge is how to effectively model the complex group interaction behaviors emerging from the intricate neural information exchange among a group of socially organized people. Confronting these challenges, we propose a new approach called Cluster Imaging of Multi-brain Networks (CIMBN). CIMBN consists of two parts. The first part is a cluster imaging technique with high ecological validity based on multiple functional near-infrared spectroscopy (fNIRS) systems. Using this technique, we can easily extend the simultaneous imaging capacity of social neuroscience studies up to dozens of participants. The second part of CIMBN is a multi-brain network (MBN) modeling method based on graph theory. By taking each brain as a network node and the relationship between any two brains as a network edge, one can construct a network model for a group of interacting brains. The emergent group social behaviors can then be studied using the network's properties, such as its topological structure and information exchange efficiency. Although there is still much work to do, as a general framework for hyperscanning and modeling a group of interacting brains, CIMBN can provide new insights into the neural correlates of group social interactions, and advance social neuroscience and social psychology.
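
The multi-brain network idea maps directly onto standard graph metrics. A minimal sketch (the edge list and the choice of global efficiency as the metric are illustrative assumptions, not the paper's data):

```python
from collections import deque
from itertools import combinations

# Hypothetical multi-brain network: nodes are participants, edges link
# pairs whose fNIRS signals are (say) strongly correlated.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
adj = {i: set() for i in range(n)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

def shortest_path_len(src, dst):
    # Breadth-first search on the unweighted graph.
    seen, q = {src}, deque([(src, 0)])
    while q:
        node, d = q.popleft()
        if node == dst:
            return d
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                q.append((nb, d + 1))
    return float("inf")

# Global efficiency: average inverse shortest-path length over node pairs,
# a simple proxy for information exchange efficiency in the group.
pairs = list(combinations(range(n), 2))
efficiency = sum(1 / shortest_path_len(a, b) for a, b in pairs) / len(pairs)
```

For larger groups the same computation is available off the shelf (e.g., `networkx.global_efficiency`), but the pure-Python version makes the definition explicit.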

  4. Penalized Estimation in Large-Scale Generalized Linear Array Models

    DEFF Research Database (Denmark)

    Lund, Adam; Vincent, Martin; Hansen, Niels Richard

    2017-01-01

    Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design-matrix-free algorithms do not scale well with the dimension

  5. Investigation on the integral output power model of a large-scale wind farm

    Institute of Scientific and Technical Information of China (English)

    BAO Nengsheng; MA Xiuqian; NI Weidou

    2007-01-01

    The integral output power model of a large-scale wind farm is needed when estimating the wind farm's output over a period of time in the future. The actual wind speed power model and calculation method of a wind farm made up of many wind turbine units are discussed. After analyzing the incoming wind flow characteristics and their energy distributions, and after considering the multi-effects among the wind turbine units and certain assumptions, the incoming wind flow model of multi-units is built. The calculation algorithms and steps of the integral output power model of a large-scale wind farm are provided. Finally, an actual power output of the wind farm is calculated and analyzed by using practical wind speed measurement data. The characteristics of a large-scale wind farm are also discussed.
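
The aggregation step can be sketched with a generic per-turbine power curve summed over units (the cut-in/rated/cut-out speeds, rated power, and single wake-loss multiplier below are textbook-style assumptions, not values from this paper):

```python
# Illustrative aggregate output of a wind farm from a per-turbine power curve.
CUT_IN, RATED_SPEED, CUT_OUT = 3.0, 12.0, 25.0   # m/s (assumed)
RATED_POWER = 2.0                                 # MW per unit (assumed)

def turbine_power(v):
    """Power (MW) of one unit at hub-height wind speed v (m/s)."""
    if v < CUT_IN or v >= CUT_OUT:
        return 0.0
    if v >= RATED_SPEED:
        return RATED_POWER
    # Cubic rise between cut-in and rated speed.
    return RATED_POWER * (v**3 - CUT_IN**3) / (RATED_SPEED**3 - CUT_IN**3)

def farm_power(speeds, wake_factor=0.95):
    # wake_factor crudely lumps the inter-unit wake effects into one multiplier;
    # the paper models the incoming flow of multiple units in more detail.
    return wake_factor * sum(turbine_power(v) for v in speeds)

print(farm_power([8.0, 10.0, 14.0, 2.0]))
```

The same structure extends to time series: evaluate `farm_power` per time step from measured speeds and integrate to get energy over the period.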

  6. Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling (Final Report)

    International Nuclear Information System (INIS)

    Schroeder, William J.

    2011-01-01

    This report contains the comprehensive summary of the work performed on the SBIR Phase II, Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling at Kitware Inc. in collaboration with Stanford Linear Accelerator Center (SLAC). The goal of the work was to develop collaborative visualization tools for large-scale data as illustrated in the figure below. The solutions we proposed address the typical problems faced by geographically- and organizationally-separated research and engineering teams, who produce large data (either through simulation or experimental measurement) and wish to work together to analyze and understand their data. Because the data is large, we expect that it cannot be easily transported to each team member's work site, and that the visualization server must reside near the data. Further, we also expect that each work site has heterogeneous resources: some with large computing clients, tiled (or large) displays and high bandwidth; other sites as simple as a team member on a laptop computer. Our solution is based on the open-source, widely used ParaView large-data visualization application. We extended this tool to support multiple collaborative clients who may locally visualize data, and then periodically rejoin and synchronize with the group to discuss their findings. Options for managing session control, adding annotation, and defining the visualization pipeline, among others, were incorporated. We also developed and deployed a Web visualization framework based on ParaView that enables the Web browser to act as a participating client in a collaborative session. The ParaView Web Visualization framework leverages various Web technologies including WebGL, JavaScript, Java and Flash to enable interactive 3D visualization over the web using ParaView as the visualization server. We steered the development of this technology by teaming with the SLAC National Accelerator Laboratory. SLAC has a computationally-intensive problem

  8. The Large Office Environment - Measurement and Modeling of the Wideband Radio Channel

    DEFF Research Database (Denmark)

    Andersen, Jørgen Bach; Nielsen, Jesper Ødum; Bauch, Gerhard

    2006-01-01

    In a future 4G or WLAN wideband application we can imagine multiple users in a large office environment consisting of a single room with partitions. Up to now, indoor radio channel measurement and modelling has mainly concentrated on scenarios with several office rooms and corridors. We present here measurements at 5.8 GHz over a 100 MHz bandwidth and a novel modelling approach for the wideband radio channel in a large office room environment. An acoustic-like reverberation theory is proposed that allows a tapped delay line model to be specified just from the room dimensions and an average … calculated from the measurements. The proposed model can likely also be applied to indoor hot spot scenarios.
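
The reverberation idea can be sketched as a tapped delay line whose tap powers decay exponentially with a time constant derived from the room geometry (the Sabine-like formula, the absorption value, and the tap spacing below are illustrative assumptions, not the paper's fitted parameters):

```python
import math

def reverberation_taps(volume_m3, surface_m2, absorption=0.3,
                       tap_spacing_ns=10.0, n_taps=20):
    """Normalised power delay profile for a room-reverberation channel model."""
    c = 3e8  # propagation speed of radio waves, m/s
    # Reverberation time constant analogue: tau = 4 V / (c * S * absorption),
    # from the mean free path 4V/S of rays bouncing in the room (assumed form).
    tau_s = 4.0 * volume_m3 / (c * surface_m2 * absorption)
    dt = tap_spacing_ns * 1e-9
    powers = [math.exp(-k * dt / tau_s) for k in range(n_taps)]
    total = sum(powers)
    return [p / total for p in powers]

# A 10 m x 20 m x 3 m office room.
taps = reverberation_taps(volume_m3=10 * 20 * 3,
                          surface_m2=2 * (10 * 20 + 10 * 3 + 20 * 3))
```

The attraction of the approach is exactly what the abstract states: the delay profile follows from the room dimensions plus one averaged measured quantity, rather than from a full ray-traced site survey.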

  9. A Model for Teaching Large Classes: Facilitating a "Small Class Feel"

    Science.gov (United States)

    Lynch, Rosealie P.; Pappas, Eric

    2017-01-01

    This paper presents a model for teaching large classes that facilitates a "small class feel" to counteract the distance, anonymity, and formality that often characterize large lecture-style courses in higher education. One author (E. P.) has been teaching a 300-student general education critical thinking course for ten years, and the…

  10. Three dimensional modeling of laterally loaded pile groups resting in sand

    Directory of Open Access Journals (Sweden)

    Amr Farouk Elhakim

    2016-04-01

    Many structures carry lateral loads due to earth pressure, wind, earthquakes, wave action, and ship impact. Accurate predictions of the load–displacement response of a pile group, as well as of the straining actions, are needed for a safe and economic design. Most research has focused on the behavior of laterally loaded single piles, even though piles are most frequently used in groups. Soil is modeled as an elastic-perfectly plastic material using the Mohr–Coulomb constitutive model. The three-dimensional Plaxis model is validated using load–displacement results from centrifuge tests of laterally loaded piles embedded in sand. This study utilizes three-dimensional finite element modeling to better understand the main parameters that affect the response of laterally loaded pile groups (2 × 2 and 3 × 3 pile configurations), including sand relative density, pile spacing (s = 2.5D, 5D, and 8D), and pile location within the group. The fixity of the pile head affects its load–displacement behavior under lateral loading. Typically, the pile head may be unrestrained (free head), where the pile head is allowed to rotate, or restrained (fixed head), where no pile head rotation is permitted. The analyses were performed for both free and fixed head conditions.

  11. Calculating the renormalisation group equations of a SUSY model with Susyno

    Science.gov (United States)

    Fonseca, Renato M.

    2012-10-01

    Susyno is a Mathematica package dedicated to the computation of the 2-loop renormalisation group equations of a supersymmetric model based on any gauge group (the only exception being multiple U(1) groups) and for any field content.
    Program summary:
    Program title: Susyno
    Catalogue identifier: AEMX_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMX_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 30829
    No. of bytes in distributed program, including test data, etc.: 650170
    Distribution format: tar.gz
    Programming language: Mathematica 7 or higher
    Computer: All systems that Mathematica 7+ is available for (PC, Mac)
    Operating system: Any platform supporting Mathematica 7+ (Windows, Linux, Mac OS)
    Classification: 4.2, 5, 11.1
    Nature of problem: Calculating the renormalisation group equations of a supersymmetric model involves using long and complicated general formulae [1, 2]. In addition, applying them requires knowing the Lagrangian in its full form. Building the complete Lagrangian of models with small representations of SU(2) and SU(3) might be easy, but in the general case of arbitrary representations of an arbitrary gauge group this task can be hard, lengthy and error prone.
    Solution method: The Susyno package uses group theoretical functions to calculate the superpotential and the soft-SUSY-breaking Lagrangian of a supersymmetric model, and calculates the two-loop RGEs of the model using the general equations of [1, 2]. Susyno works for models based on any representation(s) of any gauge group (the only exception being multiple U(1) groups).
    Restrictions: As the program is based on the formalism of [1, 2], it shares its limitations. Running time can also be a significant restriction, in particular for models with many fields.
    Unusual features

  12. Deciphering interactions in moving animal groups.

    Directory of Open Access Journals (Sweden)

    Jacques Gautrais

    Collective motion phenomena in large groups of social organisms have long fascinated observers, especially in cases, such as bird flocks or fish schools, where large-scale, highly coordinated actions emerge in the absence of obvious leaders. However, the mechanisms involved in this self-organized behavior are still poorly understood, because the individual-level interactions underlying them remain elusive. Here, we demonstrate the power of a bottom-up methodology for building models of animal group motion from data gathered at the individual scale. Using video tracks of a fish shoal in a tank, we show how a careful, incremental analysis at the local scale allows the stimulus/response function governing an individual's moving decisions to be determined. We find in particular that both positional and orientational effects are present, act upon the fish's turning speed, and depend on the swimming speed, yielding a novel schooling model whose parameters are all estimated from data. Our approach also identifies a density-dependent effect that results in a behavioral change for the largest groups considered. This suggests that, in a confined environment, the behavioral state of fish and their reaction patterns change with group size. We discuss the applicability, beyond the particular case studied here, of this novel framework for deciphering interactions in moving animal groups.
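
A toy version of such a stimulus/response rule makes the structure concrete: the focal fish's turning speed combines a positional (attraction) and an orientational (alignment) term and scales with swimming speed. The functional form and the gains are illustrative assumptions, not the fitted functions from the paper's data:

```python
import math

def turning_speed(focal_heading, neighbour_bearing, neighbour_heading,
                  swim_speed, k_pos=1.0, k_align=0.6):
    """Toy turning-speed response (rad/s) of a focal fish to one neighbour."""
    # Positional stimulus: turn towards the neighbour's bearing.
    attraction = k_pos * math.sin(neighbour_bearing - focal_heading)
    # Orientational stimulus: align with the neighbour's heading.
    alignment = k_align * math.sin(neighbour_heading - focal_heading)
    # Speed dependence (assumed linear here): faster fish respond more strongly.
    return (attraction + alignment) * swim_speed

# Neighbour directly to the left, heading the same way: the fish turns left.
omega = turning_speed(0.0, math.pi / 2, 0.0, swim_speed=1.0)
```

Integrating this rule over time for many individuals, and summing the responses over neighbours, yields a minimal schooling simulation of the kind the paper's bottom-up methodology produces from tracked data.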

  13. Large-Signal DG-MOSFET Modelling for RFID Rectification

    Directory of Open Access Journals (Sweden)

    R. Rodríguez

    2016-01-01

    This paper analyses the capability of undoped DG-MOSFETs for the operation of rectifiers for RFIDs and Wireless Power Transmission (WPT) at microwave frequencies. For this purpose, a large-signal compact model has been developed and implemented in Verilog-A. The model has been numerically validated against a device simulator (Sentaurus). It is found that the number of stages needed to achieve optimal rectifier performance is smaller than that required with conventional MOSFETs. In addition, the DC output voltage could be increased with the use of appropriate mid-gap metals, such as TiN, for the gate. The minor impact of short channel effects (SCEs) on rectification is also pointed out.

  14. Bilevel Traffic Evacuation Model and Algorithm Design for Large-Scale Activities

    Directory of Open Access Journals (Sweden)

    Danwen Bao

    2017-01-01

    This paper establishes a bilevel planning model with one master and multiple slaves to solve traffic evacuation problems. The minimum evacuation network saturation and the shortest evacuation time are used as the objective functions for the upper- and lower-level models, respectively. The optimality conditions of this model are also analyzed. An improved particle swarm optimization (PSO) method is proposed by introducing an electromagnetism-like mechanism to solve the bilevel model and enhance its convergence efficiency. A case study is carried out using the Nanjing Olympic Sports Center. The results indicate that, for large-scale activities, the average evacuation time of the classic model is shorter but the road saturation distribution is more uneven; thus, the overall evacuation efficiency of the network is not high. For induced emergencies, the evacuation time of the bilevel planning model is shortened. When the audience arrival rate is increased from 50% to 100%, the evacuation time is shortened by 22% to 35%, indicating that the optimization effect of the bilevel planning model is stronger than that of the classic model. Therefore, the model and algorithm presented in this paper can provide a theoretical basis for traffic evacuation decision making for large-scale activities.
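
The lower-level solver can be sketched with plain PSO on a toy objective (the paper's electromagnetism-like attraction step and the bilevel coupling are omitted; the objective, swarm size, and coefficients are illustrative assumptions):

```python
import random

def pso(objective, dim=2, n_particles=20, iters=100, seed=42):
    """Minimal particle swarm optimisation; returns (best position, best value)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia + pull towards personal best + pull towards global best.
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for an evacuation-time objective: a simple quadratic bowl.
best, best_val = pso(lambda x: sum(v * v for v in x))
```

In the bilevel setting, `objective` would itself solve the lower-level shortest-evacuation-time problem for a candidate upper-level plan, which is why convergence efficiency of the swarm matters.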

  15. Parameterization of a Hydrological Model for a Large, Ungauged Urban Catchment

    Directory of Open Access Journals (Sweden)

    Gerald Krebs

    2016-10-01

    Urbanization leads to the replacement of natural areas by impervious surfaces and affects the catchment hydrological cycle, with adverse environmental impacts. Low impact development (LID) tools that mimic the hydrological processes of natural areas have been developed and applied to mitigate these impacts. Hydrological simulation is one way to evaluate LID performance, but the associated small-scale processes require a highly spatially distributed and explicit modeling approach. However, detailed data for model development are often not available for large urban areas, hampering model parameterization. In this paper we propose a methodology for parameterizing a hydrological model for a large, ungauged urban area that maintains a detailed surface discretization, allowing direct parameter manipulation for LID simulation, while relying firmly on available data for model conceptualization. Catchment delineation was based on a high-resolution digital elevation model (DEM), and model parameterization relied on a novel model regionalization approach. The impact of automated delineation and model regionalization on simulation results was evaluated for three monitored study catchments (5.87–12.59 ha). The simulated runoff peak was most sensitive to accurate catchment discretization and calibration, while both the runoff volume and the fit of the hydrograph were less affected.

  16. Computational social dynamic modeling of group recruitment.

    Energy Technology Data Exchange (ETDEWEB)

    Berry, Nina M.; Lee, Marinna; Pickett, Marc; Turnley, Jessica Glicken (Sandia National Laboratories, Albuquerque, NM); Smrcka, Julianne D. (Sandia National Laboratories, Albuquerque, NM); Ko, Teresa H.; Moy, Timothy David (Sandia National Laboratories, Albuquerque, NM); Wu, Benjamin C.

    2004-01-01

    The Seldon software toolkit combines concepts from agent-based modeling and social science to create a computational social dynamic model of group recruitment. The underlying recruitment model is based on a unique three-level hybrid agent-based architecture that contains simple agents (level one), abstract agents (level two), and cognitive agents (level three). The uniqueness of this architecture begins with abstract agents, which permit the model to include social concepts (gang) or institutional concepts (school) in a typical software simulation environment. The future addition of cognitive agents to the recruitment model will provide a unique entity that does not exist in any agent-based modeling toolkit to date. We use social networks to provide an integrated mesh within and between the different levels. This Java-based toolkit is used to analyze different social concepts based on initialization input from the user. The input alters a set of parameters used to influence the values associated with the simple agents, abstract agents, and the interactions (simple agent-simple agent or simple agent-abstract agent) between these entities. The results of the phase-1 Seldon toolkit provide insight into how certain social concepts apply to scenario development for inner-city gang recruitment.

  17. Parallel runs of a large air pollution model on a grid of Sun computers

    DEFF Research Database (Denmark)

    Alexandrov, V.N.; Owczarz, W.; Thomsen, Per Grove

    2004-01-01

    Large-scale air pollution models can successfully be used in different environmental studies. These models are described mathematically by systems of partial differential equations. Splitting procedures followed by discretization of the spatial derivatives lead to several large systems...

  18. Effect of using an audience response system on learning environment, motivation and long-term retention, during case-discussions in a large group of undergraduate veterinary clinical pharmacology students.

    Science.gov (United States)

    Doucet, Michèle; Vrins, André; Harvey, Denis

    2009-12-01

    Teaching methods that provide an opportunity for individual engagement and focussed feedback are required to create an active learning environment for case-based teaching in large groups. A prospective observational controlled study was conducted to evaluate whether the use of an audience response system (ARS) would promote an active learning environment during case-based discussions in large groups, have an impact on student motivation and improve long-term retention. Group A (N = 83) participated in large group case discussions where student participation was voluntary, while for group B (N = 86) an ARS was used. Data collection methods included student and teacher surveys, student focus group interviews, independent observations and 1-year post-course testing. Results indicated that the use of an ARS provided an active learning environment during case-based discussions in large groups by favouring engagement, observation and critical reflection and by increasing student and teacher motivation. Although final exam results were significantly improved in group B, long-term retention was not significantly different between groups. It was concluded that ARS use significantly improved the learning experience associated with case-based discussions in a large group of undergraduate students.

  19. Groundwater Flow and Thermal Modeling to Support a Preferred Conceptual Model for the Large Hydraulic Gradient North of Yucca Mountain

    International Nuclear Information System (INIS)

    McGraw, D.; Oberlander, P.

    2007-01-01

    The purpose of this study is to report on the results of a preliminary modeling framework to investigate the causes of the large hydraulic gradient north of Yucca Mountain. This study builds on the Saturated Zone Site-Scale Flow and Transport Model (referenced herein as the Site-scale model (Zyvoloski, 2004a)), which is a three-dimensional saturated zone model of the Yucca Mountain area. Groundwater flow was simulated under natural conditions. The model framework and grid design describe the geologic layering and the calibration parameters describe the hydrogeology. The Site-scale model is calibrated to hydraulic heads, fluid temperature, and groundwater flowpaths. One area of interest in the Site-scale model represents the large hydraulic gradient north of Yucca Mountain. Nearby water levels suggest over 200 meters of hydraulic head difference in less than 1,000 meters horizontal distance. Given the geologic conceptual models defined by various hydrogeologic reports (Faunt, 2000, 2001; Zyvoloski, 2004b), no definitive explanation has been found for the cause of the large hydraulic gradient. Luckey et al. (1996) present several possible explanations for the large hydraulic gradient: (1) the gradient is simply the result of flow through the upper volcanic confining unit, which is nearly 300 meters thick near the large gradient; (2) the gradient represents a semi-perched system in which flow in the upper and lower aquifers is predominantly horizontal, whereas flow in the upper confining unit would be predominantly vertical; (3) the gradient represents a drain down a buried fault from the volcanic aquifers to the lower Carbonate Aquifer; (4) the gradient represents a spillway in which a fault marks the effective northern limit of the lower volcanic aquifer; (5) the large gradient results from the presence at depth of the Eleana Formation, a part of the Paleozoic upper confining unit, which overlies the lower Carbonate Aquifer in much of the Death Valley region. The

  20. Hidden Markov models for the activity profile of terrorist groups

    OpenAIRE

    Raghavan, Vasanthan; Galstyan, Aram; Tartakovsky, Alexander G.

    2012-01-01

    The main focus of this work is on developing models for the activity profile of a terrorist group, detecting sudden spurts and downfalls in this profile, and, in general, tracking it over a period of time. Toward this goal, a $d$-state hidden Markov model (HMM) that captures the latent states underlying the dynamics of the group and thus its activity profile is developed. The simplest setting of $d=2$ corresponds to the case where the dynamics are coarsely quantized as Active and Inactive, re...
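As a concrete illustration of the d = 2 setting, the sketch below implements the standard HMM forward recursion for a two-state (Inactive/Active) chain with binary observations (attack / no attack per period). The transition and emission probabilities are invented for illustration and are not taken from the paper.

```python
def forward(obs, pi, A, B):
    """Return P(obs) for an HMM with initial dist pi, transitions A, emissions B."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        # propagate through transitions, then weight by the emission probability
        alpha = [B[s][o] * sum(alpha[r] * A[r][s] for r in range(n))
                 for s in range(n)]
    return sum(alpha)

# State 0 = Inactive, state 1 = Active (hypothetical parameters).
pi = [0.8, 0.2]
A = [[0.95, 0.05],   # Inactive tends to remain Inactive
     [0.10, 0.90]]   # Active tends to remain Active
B = [[0.98, 0.02],   # P(no attack | state), P(attack | state)
     [0.60, 0.40]]

likelihood = forward([0, 0, 1, 1, 1], pi, A, B)
```

A spurt detector would compare such likelihoods under competing state hypotheses, or run the analogous Viterbi/smoothing recursions to track the latent state over time.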

  1. Graphs of groups on surfaces interactions and models

    CERN Document Server

    White, AT

    2001-01-01

    The book, suitable as both an introductory reference and as a textbook in the rapidly growing field of topological graph theory, models both maps (as in map-coloring problems) and groups by means of graph imbeddings on surfaces. Automorphism groups of both graphs and maps are studied. In addition, connections are made to other areas of mathematics, such as hypergraphs, block designs, finite geometries, and finite fields. There are chapters on the emerging subfields of enumerative topological graph theory and random topological graph theory, as well as a chapter on the composition of English

  2. A friendly Maple module for one and two group reactor model

    International Nuclear Information System (INIS)

    Baptista, Camila O.; Pavan, Guilherme A.; Braga, Kelmo L.; Silva, Marcelo V.; Pereira, P.G.S.; Werner, Rodrigo; Antunes, Valdir; Vellozo, Sergio O.

    2015-01-01

    The well-known two-energy-group core reactor design model is revisited. A simple and friendly Maple module was built to cover the step-by-step calculations of a plate reactor in five situations: 1. one-group bare reactor, 2. two-group bare reactor, 3. one-group reflected reactor, 4. 1-1/2-group reflected reactor and 5. two-group reflected reactor. The results show the convergent path of the critical size, as it should be. (author)
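Situation 1, the one-group bare reactor, can be sketched in a few lines: criticality requires the geometric buckling of the slab, pi/a for extrapolated width a, to equal the material buckling. The nuclear constants below are illustrative placeholders, not values from the Maple module.

```python
import math

def critical_slab_width(k_inf, D, sigma_a):
    """Extrapolated critical width (cm) of a bare slab in one-group diffusion theory.

    Criticality: (pi / a)^2 = B_m^2 = (k_inf - 1) / L^2, with L^2 = D / Sigma_a.
    """
    L2 = D / sigma_a                  # diffusion area, cm^2
    Bm2 = (k_inf - 1.0) / L2          # material buckling, cm^-2
    return math.pi / math.sqrt(Bm2)

# Illustrative one-group constants (hypothetical material)
a_crit = critical_slab_width(k_inf=1.2, D=0.9, sigma_a=0.066)
```

A more reactive material (larger k_inf) yields a smaller critical size, which is the convergent-path behavior the abstract refers to as reflectors and extra groups are added.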

  3. A friendly Maple module for one and two group reactor model

    Energy Technology Data Exchange (ETDEWEB)

    Baptista, Camila O.; Pavan, Guilherme A.; Braga, Kelmo L.; Silva, Marcelo V.; Pereira, P.G.S.; Werner, Rodrigo; Antunes, Valdir; Vellozo, Sergio O., E-mail: camila.oliv.baptista@gmail.com, E-mail: pavanguilherme@gmail.com, E-mail: kelmo.lins@gmail.com, E-mail: marcelovilelasilva@gmail.com, E-mail: rodrigowerner@hotmail.com, E-mail: neutron201566@yahoo.com, E-mail: vellozo@ime.eb.br [Instituto Militar de Engenharia (IME), Rio de Janeiro, RJ (Brazil). Departamento de Engenharia Nuclear

    2015-07-01

    The well-known two-energy-group core reactor design model is revisited. A simple and friendly Maple module was built to cover the step-by-step calculations of a plate reactor in five situations: 1. one-group bare reactor, 2. two-group bare reactor, 3. one-group reflected reactor, 4. 1-1/2-group reflected reactor and 5. two-group reflected reactor. The results show the convergent path of the critical size, as it should be. (author)

  4. Loop groups, the Luttinger model, anyons, and Sutherland systems

    International Nuclear Information System (INIS)

    Langmann, E.; Carey, A.L.

    1998-01-01

    We discuss the representation theory of loop groups and examples of how it is used in physics. These examples include the construction and solution of the Luttinger model and other (1+1)-dimensional interacting quantum field theories, the construction of anyon field operators on the circle, and the 'second quantization' of the Sutherland model using anyons

  5. Modeling the Role of Networks and Individual Differences in Inter-Group Violence.

    Directory of Open Access Journals (Sweden)

    Alexander Isakov

    Full Text Available There is significant heterogeneity within and between populations in their propensity to engage in conflict. Most research has neglected the role of within-group effects in social networks in contributing to between-group violence and focused instead on the precursors and consequences of violence, or on the role of between-group ties. Here, we explore the role of individual variation and of network structure within a population in promoting and inhibiting group violence towards other populations. Motivated by ethnographic observations of collective behavior in a small-scale society, we describe a model with differentiated roles for individuals embedded within friendship networks. Using a simple model based on voting-like dynamics, we explore several strategies for influencing group-level behavior. When we consider population-level attitude changes and the introduction of control nodes separately, we find that a particularly effective control strategy relies on exploiting network degree. We also suggest refinements to our model, such as tracking fine-grained information spread dynamics, that can further enrich the use of evolutionary game theory models for sociological phenomena.
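A minimal sketch of voting-like dynamics with control nodes is shown below. The hub-and-spoke friendship graph, the binary opinions, and the update rule are deliberately simplified relative to the paper's model with differentiated roles; all of it is invented for illustration. Pinning the highest-degree node illustrates the degree-based control strategy.

```python
import random

# Small friendship network: node 0 is the hub (highest degree).
graph = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1], 3: [0, 4], 4: [0, 3]}

def run_voter(graph, control, steps=2000, seed=7):
    """Voter dynamics: a random non-control node copies a random neighbour.

    Control nodes permanently hold opinion 1; returns the final share of 1s.
    """
    rng = random.Random(seed)
    state = {n: 0 for n in graph}      # everyone starts against (opinion 0)
    for n in control:
        state[n] = 1                   # control nodes are pinned to 1
    for _ in range(steps):
        n = rng.choice(list(graph))
        if n not in control:
            state[n] = state[rng.choice(graph[n])]
    return sum(state.values()) / len(state)

share_hub = run_voter(graph, control={0})   # control the highest-degree node
```

Because the pinned hub is a neighbour of every other node, its opinion spreads quickly and the all-1 configuration is absorbing, which is the intuition behind exploiting network degree.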

  6. Fresh Frozen Plasma Resuscitation Provides Neuroprotection Compared to Normal Saline in a Large Animal Model of Traumatic Brain Injury and Polytrauma

    DEFF Research Database (Denmark)

    Imam, Ayesha; Jin, Guang; Sillesen, Martin

    2015-01-01

    Abstract We have previously shown that early treatment with fresh frozen plasma (FFP) is neuroprotective in a swine model of hemorrhagic shock (HS) and traumatic brain injury (TBI). However, it remains unknown whether this strategy would be beneficial in a more clinical polytrauma model. Yorkshire...... as well as cerebral perfusion pressures. Levels of cerebral eNOS were higher in the FFP-treated group (852.9 vs. 816.4 ng/mL; p=0.03), but no differences in brain levels of ET-1 were observed. Early administration of FFP is neuroprotective in a complex, large animal model of polytrauma, hemorrhage...

  7. Therapeutic Enactment: Integrating Individual and Group Counseling Models for Change

    Science.gov (United States)

    Westwood, Marvin J.; Keats, Patrice A.; Wilensky, Patricia

    2003-01-01

    The purpose of this article is to introduce the reader to a group-based therapy model known as therapeutic enactment. A description of this multimodal change model is provided by outlining the relevant background information, key concepts related to specific change processes, and the differences in this model compared to earlier psychodrama…

  8. Classification of finite reparametrization symmetry groups in the three-Higgs-doublet model

    International Nuclear Information System (INIS)

    Ivanov, Igor P.; Vdovin, E.

    2013-01-01

    Symmetries play a crucial role in electroweak symmetry breaking models with non-minimal Higgs content. Within each class of these models, it is desirable to know which symmetry groups can be implemented via the scalar sector. In N-Higgs-doublet models, this classification problem was solved only for N=2 doublets. Very recently, we suggested a method to classify all realizable finite symmetry groups of Higgs-family transformations in the three-Higgs-doublet model (3HDM). Here, we present this classification in all detail together with an introduction to the theory of solvable groups, which play the key role in our derivation. We also consider generalized-CP symmetries, and discuss the interplay between Higgs-family symmetries and CP-conservation. In particular, we prove that the presence of the Z_4 symmetry guarantees the explicit CP-conservation of the potential. This work completes the classification of finite reparametrization symmetry groups in 3HDM. (orig.)

  9. The three-point function as a probe of models for large-scale structure

    International Nuclear Information System (INIS)

    Frieman, J.A.; Gaztanaga, E.

    1993-01-01

    The authors analyze the consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime. Several variations of the standard Ω = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, R_p ∼ 20 h^-1 Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower, et al. The authors show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r ≳ R_p. Current observational constraints on the three-point amplitudes Q_3 and S_3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales
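For reference, the hierarchical three-point amplitude Q_3 mentioned above is conventionally defined in terms of the two- and three-point correlation functions ξ and ζ (this is the standard definition, stated here for context rather than quoted from the paper):

```latex
Q_3 = \frac{\zeta(r_{12}, r_{23}, r_{31})}
           {\xi(r_{12})\,\xi(r_{23}) + \xi(r_{23})\,\xi(r_{31}) + \xi(r_{31})\,\xi(r_{12})}
```

In the hierarchical ansatz Q_3 is roughly constant with scale, so a strong decrease of the amplitudes Q_J at large separations is the signature of scale-dependent bias that the abstract exploits as a discriminator.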

  10. Lumped hydrological models is an Occam's razor for runoff modeling in large Russian Arctic basins

    OpenAIRE

    Ayzel Georgy

    2018-01-01

    This study aims to investigate the ability of three lumped hydrological models to predict daily runoff of large-scale Arctic basins for the modern period (1979-2014) in the case of substantial data scarcity. All models were driven only by a meteorological forcing reanalysis dataset without any additional information about landscape, soil or vegetation cover properties of the studied basins. We found limitations of model parameter calibration in ungauged basins using global optimization alg...

  11. Estimation and Inference for Very Large Linear Mixed Effects Models

    OpenAIRE

    Gao, K.; Owen, A. B.

    2016-01-01

    Linear mixed models with large imbalanced crossed random effects structures pose severe computational problems for maximum likelihood estimation and for Bayesian analysis. The costs can grow as fast as $N^{3/2}$ when there are N observations. Such problems arise in any setting where the underlying factors satisfy a many to many relationship (instead of a nested one) and in electronic commerce applications, the N can be quite large. Methods that do not account for the correlation structure can...

  12. Multiplex polymerase chain reaction-based prognostic models in diffuse large B-cell lymphoma patients treated with R-CHOP

    DEFF Research Database (Denmark)

    Green, Tina M; Jensen, Andreas K; Holst, René

    2016-01-01

    We present a multiplex analysis for genes known to have prognostic value in an attempt to design a clinically useful classification model in patients with diffuse large B-cell lymphoma (DLBCL). Real-time polymerase chain reaction was used to measure transcript levels of 28 relevant genes in 194 de...... models. The best model was validated in data from an online available R-CHOP treated cohort. With progression-free survival (PFS) as primary endpoint, the best performing IPI independent model incorporated the LMO2 and HLADQA1 as well as gene interactions for GCSAMxMIB1, GCSAMxCTGF and FOXP1xPDE4B....... This model assigned 33% of patients (n = 60) to poor outcome with an estimated 3-year PFS of 40% vs. 87% for low risk (n = 61) and intermediate (n = 60) risk groups (P model incorporated LMO2 and BCL2 and assigned 33% of the patients with a 3-year PFS of 35% vs...

  13. Large deformation analysis of adhesive by Eulerian method with new material model

    International Nuclear Information System (INIS)

    Maeda, K; Nishiguchi, K; Iwamoto, T; Okazawa, S

    2010-01-01

    The material model to describe large deformation of a pressure-sensitive adhesive (PSA) is presented. The relationship between stress and strain of a PSA includes viscoelasticity and rubber-elasticity. Therefore, we propose a material model describing viscoelasticity and rubber-elasticity, and extend the presented material model to the rate form for three-dimensional finite element analysis. After proposing the material model for the PSA, we formulate the Eulerian method to simulate large deformation behavior. In the Eulerian calculation, the Piecewise Linear Interface Calculation (PLIC) method for capturing the material surface is employed. By using the PLIC method, we can impose dynamic and kinematic boundary conditions on the captured material surface. Two representative computational examples are calculated to check the validity of the present methods.

  14. The large-scale peculiar velocity field in flat models of the universe

    International Nuclear Information System (INIS)

    Vittorio, N.; Turner, M.S.

    1986-10-01

    The inflationary Universe scenario predicts a flat Universe and both adiabatic and isocurvature primordial density perturbations with the Zel'dovich spectrum. The two simplest realizations, models dominated by hot or cold dark matter, seem to be in conflict with observations. Flat models are examined with two components of mass density, where one of the components is smoothly distributed, and the large-scale (≥ 10 h^-1 Mpc) peculiar velocity field for these models is considered. For the smooth component, relativistic particles, a relic cosmological term, and light strings are considered. At present the observational situation is unsettled; but, in principle, the large-scale peculiar velocity field is a very powerful discriminator between these different models. 61 refs

  15. Particle production at large transverse momentum and hard collision models

    International Nuclear Information System (INIS)

    Ranft, G.; Ranft, J.

    1977-04-01

    The majority of the presently available experimental data is consistent with hard scattering models. Therefore the hard scattering model seems to be well established. There is good evidence for jets in large transverse momentum reactions as predicted by these models. The overall picture is however not yet well enough understood. We mention only the empirical hard scattering cross section introduced in most of the models, the lack of a deep theoretical understanding of the interplay between quark confinement and jet production, and the fact that we are not yet able to discriminate conclusively between the many proposed hard scattering models. The status of different hard collision models discussed in this paper is summarized. (author)

  16. Optimizing Prediction Using Bayesian Model Averaging: Examples Using Large-Scale Educational Assessments.

    Science.gov (United States)

    Kaplan, David; Lee, Chansoon

    2018-01-01

    This article provides a review of Bayesian model averaging as a means of optimizing the predictive performance of common statistical models applied to large-scale educational assessments. The Bayesian framework recognizes that in addition to parameter uncertainty, there is uncertainty in the choice of models themselves. A Bayesian approach to addressing the problem of model uncertainty is the method of Bayesian model averaging. Bayesian model averaging searches the space of possible models for a set of submodels that satisfy certain scientific principles and then averages the coefficients across these submodels weighted by each model's posterior model probability (PMP). Using the weighted coefficients for prediction has been shown to yield optimal predictive performance according to certain scoring rules. We demonstrate the utility of Bayesian model averaging for prediction in education research with three examples: Bayesian regression analysis, Bayesian logistic regression, and a recently developed approach for Bayesian structural equation modeling. In each case, the model-averaged estimates are shown to yield better prediction of the outcome of interest than any submodel based on predictive coverage and the log-score rule. Implications for the design of large-scale assessments when the goal is optimal prediction in a policy context are discussed.
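The averaging step itself is simple once posterior model probabilities are in hand. The sketch below combines three hypothetical submodel predictions using PMPs derived from made-up log marginal likelihoods under equal model priors; none of these numbers come from the article.

```python
import math

def posterior_model_probs(log_marginals):
    """PMPs from log marginal likelihoods, assuming equal prior model probabilities."""
    m = max(log_marginals)                      # subtract max for numerical stability
    w = [math.exp(l - m) for l in log_marginals]
    s = sum(w)
    return [x / s for x in w]

log_marginals = [-10.2, -11.0, -13.5]   # hypothetical evidence for three submodels
preds = [2.1, 2.4, 3.0]                 # each submodel's prediction for a new case

pmp = posterior_model_probs(log_marginals)
bma_pred = sum(w * p for w, p in zip(pmp, preds))   # model-averaged prediction
```

The model-averaged prediction is pulled toward the submodels with the strongest evidence, which is the mechanism behind the predictive-coverage and log-score gains reported in the abstract.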

  17. Large urban fire environment: trends and model city predictions

    International Nuclear Information System (INIS)

    Larson, D.A.; Small, R.D.

    1983-01-01

    The urban fire environment that would result from a megaton-yield nuclear weapon burst is considered. The dependence of temperatures and velocities on fire size, burning intensity, turbulence, and radiation is explored, and specific calculations for three model urban areas are presented. In all cases, high velocity fire winds are predicted. The model-city results show the influence of building density and urban sprawl on the fire environment. Additional calculations consider large-area fires with the burning intensity reduced in a blast-damaged urban center

  18. The respiratory tract deposition model proposed by the ICRP Task Group

    International Nuclear Information System (INIS)

    James, A.C.; Briant, J.K.; Stahlhofen, W.; Rudolf, G.; Gehr, P.

    1990-11-01

    The Task Group has developed a new model of the deposition of inhaled aerosols in each anatomical region of the respiratory tract. The model is used to evaluate the fraction of airborne activity that is deposited in respiratory regions having distinct retention characteristics and clearance pathways: the anterior nares, the extrathoracic airways of the naso- and oropharynx and larynx, the bronchi, the bronchioles, and the alveolated airways of the lung. Drawn from experimental data on total and regional deposition in human subjects, the model is based on extrapolation of these data by means of a detailed theoretical model of aerosol transport and deposition within the lung. The Task Group model applies to all practical conditions, and for aerosol particles and vapors from atomic size up to very coarse aerosols with an activity median aerodynamic diameter of 100 μm. The model is designed to predict regional deposition in different subjects, including adults of either sex, children of various ages, and infants, and also to account for anatomical differences among Caucasian and non-Caucasian subjects. The Task Group model represents aerosol inhalability and regional deposition in different subjects by algebraic expressions of aerosol size, breathing rates, standard lung volumes, and scaling factors for airway dimensions. 35 refs., 13 figs., 2 tabs

  19. Modeling Bottom-Up Visual Attention Using Dihedral Group D4 §

    Directory of Open Access Journals (Sweden)

    Puneet Sharma

    2016-08-01

    Full Text Available In this paper, first, we briefly describe the dihedral group D4 that serves as the basis for calculating saliency in our proposed model. Second, our saliency model makes two major changes to a recent state-of-the-art model known as group-based asymmetry. First, based on the properties of the dihedral group D4, we simplify the asymmetry calculations associated with the measurement of saliency. This results in an algorithm that reduces the number of calculations by at least half, which makes it the fastest among the six best algorithms used in this research article. Second, in order to maximize the information across different chromatic and multi-resolution features, the color image space is de-correlated. We evaluate our algorithm against 10 state-of-the-art saliency models. Our results show that by using optimal parameters for a given dataset, our proposed model can outperform the best saliency algorithm in the literature. However, as the differences among the (few) best saliency models are small, we would like to suggest that our proposed model is among the best and the fastest among the best. Finally, as a part of future work, we suggest that our proposed approach to saliency can be extended to include three-dimensional image data.
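The group at the heart of this model is small enough to enumerate directly: D4 has eight elements, the four rotations of the square each optionally composed with a reflection. The pure-Python sketch below generates the orbit of a 2D patch under D4; it illustrates the group action only and does not reproduce the paper's saliency pipeline.

```python
def rot90(m):
    """Rotate a 2D list 90 degrees clockwise."""
    return [list(row) for row in zip(*m[::-1])]

def flip_h(m):
    """Mirror a 2D list horizontally."""
    return [row[::-1] for row in m]

def d4_orbit(m):
    """All 8 images of m under D4: four rotations and their horizontal flips."""
    out = []
    r = m
    for _ in range(4):
        out.append(r)
        out.append(flip_h(r))
        r = rot90(r)
    return out

patch = [[1, 2], [3, 4]]
orbit = d4_orbit(patch)
```

A group-based asymmetry score can then be built by comparing each transformed patch with the original; a patch invariant under many of the eight transforms is highly symmetric, and the cost of such comparisons is what the paper's simplification reduces.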

  20. Large scale vibration tests on pile-group effects using blast-induced ground motion

    International Nuclear Information System (INIS)

    Katsuichirou Hijikata; Hideo Tanaka; Takayuki Hashimoto; Kazushige Fujiwara; Yuji Miyamoto; Osamu Kontani

    2005-01-01

    Extensive vibration tests have been performed on pile-supported structures at a large-scale mining site. Ground motions induced by large-scale blasting operations were used as excitation forces for the vibration tests. The main objective of this research is to investigate the dynamic behavior of pile-supported structures, in particular pile-group effects. Two test structures were constructed in an excavated 4 m deep pit. The structures themselves were exactly the same; one was supported on 25 steel piles and the other on 4 piles. The test pit was backfilled with sand of appropriate grain size distributions to obtain good compaction, especially between the 25 piles. Accelerations were measured at the structures, in the test pit and in the adjacent free field, and pile strains were measured. Dynamic modal tests of the pile-supported structures and PS measurements of the test pit were performed before and after the vibration tests to detect changes in the natural frequencies of the soil-pile-structure systems and the soil stiffness. The vibration tests were performed six times with different levels of input motions. The maximum horizontal acceleration recorded at the adjacent ground surface varied from 57 cm/s^2 to 1,683 cm/s^2 according to the distances between the test site and the blast areas. (authors)

  1. All polymer chip for amperometric studies of transmitter release from large groups of neuronal cells

    DEFF Research Database (Denmark)

    Larsen, Simon T.; Taboryski, Rafael

    2012-01-01

    We present an all-polymer electrochemical chip for simple detection of transmitter release from large groups of cultured PC12 cells. Conductive polymer PEDOT:tosylate microelectrodes were used together with constant potential amperometry to obtain easy-to-analyze oxidation signals from potassium-induced release of transmitter molecules. The nature of the resulting current peaks is discussed, and the time for restoring transmitter reservoirs is studied. The relationship between released transmitters and potassium concentration was found to fit to a sigmoidal dose–response curve. Finally, we demonstrate...

  2. Thermal conductivity of group-IV semiconductors from a kinetic-collective model.

    Science.gov (United States)

    de Tomas, C; Cantarero, A; Lopeandia, A F; Alvarez, F X

    2014-09-08

    The thermal conductivity of group-IV semiconductors (silicon, germanium, diamond and grey tin) with several isotopic compositions has been calculated from a kinetic-collective model. From this approach, significantly different to Callaway-like models in its physical interpretation, the thermal conductivity expression accounts for a transition from a kinetic (individual phonon transport) to a collective (hydrodynamic phonon transport) behaviour of the phonon field. Within the model, we confirm the theoretical proportionality between the phonon-phonon relaxation times of the group-IV semiconductors. This proportionality depends on some materials properties and it allows us to predict the thermal conductivity of the whole group of materials without the need to fit each material individually. The predictions on thermal conductivities are in good agreement with experimental data over a wide temperature range.

  3. Thermal conductivity of group-IV semiconductors from a kinetic-collective model

    Science.gov (United States)

    de Tomas, C.; Cantarero, A.; Lopeandia, A. F.; Alvarez, F. X.

    2014-01-01

    The thermal conductivity of group-IV semiconductors (silicon, germanium, diamond and grey tin) with several isotopic compositions has been calculated from a kinetic-collective model. From this approach, significantly different to Callaway-like models in its physical interpretation, the thermal conductivity expression accounts for a transition from a kinetic (individual phonon transport) to a collective (hydrodynamic phonon transport) behaviour of the phonon field. Within the model, we confirm the theoretical proportionality between the phonon–phonon relaxation times of the group-IV semiconductors. This proportionality depends on some materials properties and it allows us to predict the thermal conductivity of the whole group of materials without the need to fit each material individually. The predictions on thermal conductivities are in good agreement with experimental data over a wide temperature range. PMID:25197256

  4. An Automatic User Grouping Model for a Group Recommender System in Location-Based Social Networks

    Directory of Open Access Journals (Sweden)

    Elahe Khazaei

    2018-02-01

    Full Text Available Spatial group recommendation refers to suggesting places to a given set of users. In a group recommender system, members of a group should have similar preferences in order to increase the level of satisfaction. Location-based social networks (LBSNs provide rich content, such as user interactions and location/event descriptions, which can be leveraged for group recommendations. In this paper, an automatic user grouping model is introduced that obtains information about users and their preferences through an LBSN. The preferences of the users, proximity of the places the users have visited in terms of spatial range, users’ free days, and the social relationships among users are extracted automatically from location histories and users’ profiles in the LBSN. These factors are combined to determine the similarities among users. The users are partitioned into groups based on these similarities. Group size is the key to coordinating group members and enhancing their satisfaction. Therefore, a modified k-medoids method is developed to cluster users into groups with specific sizes. To evaluate the efficiency of the proposed method, its mean intra-cluster distance and its distribution of cluster sizes are compared to those of general clustering algorithms. The results reveal that the proposed method compares favourably with general clustering approaches, such as k-medoids and spectral clustering, in separating users into groups of a specific size with a lower mean intra-cluster distance.
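A size-constrained assignment step of the kind described can be sketched as a greedy pass over user-medoid similarities. This is an illustrative simplification, not the paper's modified k-medoids, and the similarity matrix, medoid choice and group size are all invented.

```python
def group_users(similarity, medoids, size):
    """Assign each non-medoid user to the most similar medoid with free capacity."""
    groups = {m: [m] for m in medoids}          # each medoid anchors its own group
    users = [u for u in range(len(similarity)) if u not in medoids]
    # visit the most confident (user, medoid) pairs first
    pairs = sorted(((similarity[u][m], u, m) for u in users for m in medoids),
                   reverse=True)
    assigned = set()
    for sim, u, m in pairs:
        if u not in assigned and len(groups[m]) < size:
            groups[m].append(u)
            assigned.add(u)
    return groups

# Toy 6-user similarity matrix (symmetric, 1.0 on the diagonal); users 0-1-4
# resemble each other, as do users 2-3-5.
S = [[1.0, 0.9, 0.2, 0.1, 0.8, 0.1],
     [0.9, 1.0, 0.3, 0.2, 0.7, 0.2],
     [0.2, 0.3, 1.0, 0.9, 0.1, 0.8],
     [0.1, 0.2, 0.9, 1.0, 0.2, 0.7],
     [0.8, 0.7, 0.1, 0.2, 1.0, 0.1],
     [0.1, 0.2, 0.8, 0.7, 0.1, 1.0]]

groups = group_users(S, medoids=[0, 2], size=3)
```

In the full method the similarities would themselves be composites of preference, spatial, temporal and social factors extracted from the LBSN, and the medoids would be updated iteratively rather than fixed.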

  5. Group contribution modelling for the prediction of safety-related and environmental properties

    DEFF Research Database (Denmark)

    Frutiger, Jerome; Abildskov, Jens; Sin, Gürkan

    We present a new set of property prediction models based on group contributions to predict major safety-related and environmental properties for organic compounds. The predicted list of properties includes lower and upper flammability limits, heat of combustion, auto ignition temperature, global warming potential and ozone depletion potential. Process safety studies and environmental assessments rely on accurate property data. Safety data such as flammability limits, heat of combustion or auto ignition temperature play an important role in quantifying the risk of fire and explosions, among others. Where measured values are unavailable, prediction models like group contribution (GC) models can estimate data. However, the estimation needs to be accurate, reliable and as little time-consuming as possible so that the models can be used on the fly. In this study the Marrero and Gani group contribution (MR GC) method has been used to develop the models...
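A minimal sketch of the group-contribution idea underlying such models: a property is estimated as a (possibly transformed) sum of fitted contributions over the functional groups occurring in a molecule. The group set, the contribution values, and the linear link function below are hypothetical placeholders for illustration, not Marrero–Gani parameters.

```python
def gc_estimate(groups, contributions, f=lambda s: s):
    """First-order group-contribution estimate: property = f(sum_i n_i * C_i).

    groups        : dict mapping group name -> occurrence count in the molecule
    contributions : dict mapping group name -> fitted contribution C_i
    f             : link function (identity, exp, ... depending on the property)
    """
    total = sum(n * contributions[g] for g, n in groups.items())
    return f(total)

# Hypothetical contributions for a heat-of-combustion-like property
# (illustrative numbers only, in kJ/mol):
C = {"CH3": -650.0, "CH2": -610.0, "OH": -150.0}
# Ethanol decomposed as CH3-CH2-OH:
ethanol = {"CH3": 1, "CH2": 1, "OH": 1}
print(gc_estimate(ethanol, C))  # -1410.0
```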

  6. REQUIREMENTS FOR SYSTEMS DEVELOPMENT LIFE CYCLE MODELS FOR LARGE-SCALE DEFENSE SYSTEMS

    Directory of Open Access Journals (Sweden)

    Kadir Alpaslan DEMIR

    2015-10-01

    Full Text Available Large-scale defense system projects are strategic for maintaining and increasing the national defense capability. Therefore, governments spend billions of dollars in the acquisition and development of large-scale defense systems. The scale of defense systems is always increasing and the costs to build them are skyrocketing. Today, defense systems are software intensive and they are either a system of systems or a part of one. Historically, the project performances observed in the development of these systems have been significantly poor when compared to other types of projects. It is obvious that the currently used systems development life cycle models are insufficient to address today’s challenges of building these systems. Using a systems development life cycle model that is specifically designed for large-scale defense system developments and is effective in dealing with today’s and near-future challenges will help to improve project performances. The first step in the development of a large-scale defense systems development life cycle model is the identification of requirements for such a model. This paper contributes to the body of literature in the field by providing a set of requirements for system development life cycle models for large-scale defense systems. Furthermore, a research agenda is proposed.

  7. Working Group 1: Software System Design and Implementation for Environmental Modeling

    Science.gov (United States)

    ISCMEM Working Group One presentation, with the purpose of fostering the exchange of information about environmental modeling tools, modeling frameworks, and environmental monitoring databases.

  8. Numerically modelling the large scale coronal magnetic field

    Science.gov (United States)

    Panja, Mayukh; Nandi, Dibyendu

    2016-07-01

    The solar corona spews out vast amounts of magnetized plasma into the heliosphere, which has a direct impact on the Earth's magnetosphere. It is therefore important that we develop an understanding of the dynamics of the solar corona. With present technology it has not been possible to generate 3D magnetic maps of the solar corona; this warrants the use of numerical simulations to study the coronal magnetic field. A very popular method of doing this is to extrapolate the photospheric magnetic field using NLFF or PFSS codes. However, the extrapolations at different time intervals are completely independent of each other and do not capture the temporal evolution of magnetic fields. On the other hand, full MHD simulations of the global coronal field, apart from being computationally very expensive, would be physically less transparent, owing to the large number of free parameters that are typically used in such codes. This brings us to the magneto-frictional model, which is relatively simpler and computationally more economic. We have developed a magneto-frictional model in 3D spherical polar coordinates to study the large-scale global coronal field. Here we present studies of changing connectivities between active regions in response to photospheric motions.

  9. Engaging the public with low-carbon energy technologies: Results from a Scottish large group process

    International Nuclear Information System (INIS)

    Howell, Rhys; Shackley, Simon; Mabon, Leslie; Ashworth, Peta; Jeanneret, Talia

    2014-01-01

    This paper presents the results of a large group process conducted in Edinburgh, Scotland investigating public perceptions of climate change and low-carbon energy technologies, specifically carbon dioxide capture and storage (CCS). The quantitative and qualitative results reported show that the participants were broadly supportive of efforts to reduce carbon dioxide emissions, and that there is an expressed preference for renewable energy technologies to be employed to achieve this. CCS was considered in detail during the research due to its climate mitigation potential; results show that the workshop participants were cautious about its deployment. The paper discusses a number of interrelated factors which appear to influence perceptions of CCS; factors such as the perceived costs and benefits of the technology, and people's personal values and trust in others all impacted upon participants’ attitudes towards the technology. The paper thus argues for the need to provide the public with broad-based, balanced and trustworthy information when discussing CCS, and to take seriously the full range of factors that influence public perceptions of low-carbon technologies. - Highlights: • We report the results of a Scottish large group workshop on energy technologies. • There is strong public support for renewable energy and mixed opinions towards CCS. • The workshop was successful in initiating discussion around climate change and energy technologies. • Issues of trust, uncertainty, costs, benefits, values and emotions all inform public perceptions. • Need to take seriously the full range of factors that inform perceptions

  10. Pile group program for full material modeling and progressive failure.

    Science.gov (United States)

    2008-12-01

    Strain wedge (SW) model formulation has been used, in previous work, to evaluate the response of a single pile or a group of piles (including its : pile cap) in layered soils to lateral loading. The SW model approach provides appropriate prediction f...

  11. Are fashion models a group at risk for eating disorders and substance abuse?

    Science.gov (United States)

    Santonastaso, Paolo; Mondini, Silvia; Favaro, Angela

    2002-01-01

    Few studies to date have investigated whether in fact the prevalence of eating disorders (ED) and/or use of illicit drugs is higher among models than among other groups of females. A group of 63 professional fashion models of various nationalities were studied by means of self-reported questionnaires. They were compared with a control group of 126 female subjects recruited from the general population. Fashion models weigh significantly less than controls, but only a small percentage of them uses unhealthy methods to control their weight. The current frequency of full-syndrome ED did not differ between the groups, but partial-syndrome ED were significantly more common among fashion models than among controls. Current substance use or alcohol abuse was reported by 35% of fashion models and 12% of controls. Our findings suggest that fashion models are more at risk for partial ED and use of illicit drugs than females in the general population. Copyright 2002 S. Karger AG, Basel

  12. What is special about the group of the standard model?

    Science.gov (United States)

    Nielsen, H. B.; Brene, N.

    1989-06-01

    The standard model is based on the algebra of U(1)×SU(2)×SU(3). The systematics of charges of the fundamental fermions seems to suggest the importance of a particular group having this algebra, viz. S(U(2)×U(3)). This group is distinguished from all other connected compact non-semisimple groups with dimensionality up to 12 by a characteristic property: it is very “skew”. By this we mean that the group has relatively few “generalised outer automorphisms”. One may speculate about physical reasons for this fact.
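As a point of clarification (standard group theory, not taken from the record itself): the group in question is the subgroup of the direct product with unit total determinant,

```latex
S(U(2)\times U(3)) \;=\; \{\, (g,h) \in U(2)\times U(3) \;:\; \det(g)\,\det(h) = 1 \,\},
```

whose Lie algebra is u(1) ⊕ su(2) ⊕ su(3), of dimension 1 + 3 + 8 = 12, matching the dimensionality bound quoted above.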

  13. What is special about the group of the standard model?

    International Nuclear Information System (INIS)

    Nielsen, H.B.; Brene, N.

    1989-03-01

    The standard model is based on the algebra of U(1)×SU(2)×SU(3). The systematics of charges of the fundamental fermions seems to suggest the importance of a particular group having this algebra, viz. S(U(2)×U(3)). This group is distinguished from all other connected compact non-semisimple groups with dimensionality up to 12 by a characteristic property: it is very 'skew'. By this we mean that the group has relatively few 'generalised outer automorphisms'. One may speculate about physical reasons for this fact. (orig.)

  14. Cardiac regeneration using pluripotent stem cells—Progression to large animal models

    Directory of Open Access Journals (Sweden)

    James J.H. Chong

    2014-11-01

    Full Text Available Pluripotent stem cells (PSCs) have indisputable cardiomyogenic potential and therefore have been intensively investigated as a potential cardiac regenerative therapy. Current directed differentiation protocols are able to produce high yields of cardiomyocytes from PSCs, and studies in small animal models of cardiovascular disease have proven sustained engraftment and functional efficacy. Therefore, the time is ripe for cardiac regenerative therapies using PSC derivatives to be tested in large animal models that more closely resemble the hearts of humans. In this review, we discuss the results of our recent study using human embryonic stem cell derived cardiomyocytes (hESC-CMs) in a non-human primate model of ischemic cardiac injury. Large-scale remuscularization, electromechanical coupling and short-term arrhythmias demonstrated by our hESC-CM grafts are discussed in the context of other studies using adult stem cells for cardiac regeneration.

  15. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    The fields of sensitivity and uncertainty analysis have traditionally been dominated by statistical techniques when large-scale modeling codes are being analyzed. These methods are able to estimate sensitivities, generate response surfaces, and estimate response probability distributions given the input parameter probability distributions. Because the statistical methods are computationally costly, they are usually applied only to problems with relatively small parameter sets. Deterministic methods, on the other hand, are very efficient and can handle large data sets, but generally require simpler models because of the considerable programming effort required for their implementation. The first part of this paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The second part of the paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The approach is applicable to low-level radioactive waste disposal system performance assessment.
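The idea of propagating parameter distributions through derivative information can be sketched, in its simplest first-order form, as the classical "delta method". The finite-difference gradient below is a stand-in for the automated derivatives that systems like GRESS/ADGEN generate, and the example model is invented for illustration.

```python
def propagate_uncertainty(f, x, var, h=1e-6):
    """First-order (delta-method) uncertainty propagation.

    Approximates Var[f(X)] ~= sum_i (df/dx_i)^2 * Var[x_i] for independent
    inputs, with derivatives taken by central finite differences.
    """
    grads = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        grads.append((f(xp) - f(xm)) / (2 * h))
    return sum(g * g * v for g, v in zip(grads, var))

# Example: y = 3*x0 + x1**2 around x = (1, 2), input variances (0.01, 0.04).
# Gradients are (3, 4), so Var[y] ~= 9*0.01 + 16*0.04 = 0.73.
var_y = propagate_uncertainty(lambda x: 3 * x[0] + x[1] ** 2, [1.0, 2.0], [0.01, 0.04])
```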

  16. Renormalization group analysis of a simple hierarchical fermion model

    International Nuclear Information System (INIS)

    Dorlas, T.C.

    1991-01-01

    A simple hierarchical fermion model is constructed which gives rise to an exact renormalization transformation in a 2-dimensional parameter space. The behaviour of this transformation is studied. It has two hyperbolic fixed points for which the existence of a global critical line is proven. The asymptotic behaviour of the transformation is used to prove the existence of the thermodynamic limit in a certain domain in parameter space. Also the existence of a continuum limit for these theories is investigated using information about the asymptotic renormalization behaviour. It turns out that the 'trivial' fixed point gives rise to a two-parameter family of continuum limits corresponding to that part of parameter space where the renormalization trajectories originate at this fixed point. Although the model is not very realistic, it serves as a simple example of the application of the renormalization group to proving the existence of the thermodynamic limit and the continuum limit of lattice models. Moreover, it illustrates possible complications that can arise in global renormalization group behaviour, and that might also be present in other models where no global analysis of the renormalization transformation has yet been achieved. (orig.)

  17. LDA-Based Unified Topic Modeling for Similar TV User Grouping and TV Program Recommendation.

    Science.gov (United States)

    Pyo, Shinjee; Kim, Eunhui; Kim, Munchurl

    2015-08-01

    Social TV is a social media service via TV and social networks through which TV users exchange their experiences about TV programs that they are viewing. For social TV service, two technical aspects are envisioned: grouping of similar TV users to create social TV communities and recommending TV programs based on group and personal interests for personalizing TV. In this paper, we propose a unified topic model based on grouping of similar TV users and recommending TV programs as a social TV service. The proposed unified topic model employs two latent Dirichlet allocation (LDA) models. One is a topic model of TV users, and the other is a topic model of the description words for viewed TV programs. The two LDA models are then integrated via a topic proportion parameter for TV programs, which enforces the grouping of similar TV users and associated description words for watched TV programs at the same time in a unified topic modeling framework. The unified model identifies the semantic relation between TV user groups and TV program description word groups so that more meaningful TV program recommendations can be made. The unified topic model also overcomes an item ramp-up problem such that new TV programs can be reliably recommended to TV users. Furthermore, from the topic model of TV users, TV users with similar tastes can be grouped as topics, which can then be recommended as social TV communities. To verify our proposed method of unified topic-modeling-based TV user grouping and TV program recommendation for social TV services, in our experiments, we used real TV viewing history data and electronic program guide data from a seven-month period collected by a TV poll agency. The experimental results show that the proposed unified topic model yields an average 81.4% precision for 50 topics in TV program recommendation and its performance is an average of 6.5% higher than that of the topic model of TV users only. For TV user prediction with new TV programs, the average
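As a rough sketch of how topic proportions can drive both community formation and recommendation (not the paper's unified LDA inference, which couples the two models during training): assume each user and each program already has a topic-proportion vector, group users by their dominant topic, and rank programs by similarity to each group's mean profile.

```python
def cosine(u, v):
    """Cosine similarity between two non-zero vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = (sum(a * a for a in u) ** 0.5) * (sum(b * b for b in v) ** 0.5)
    return num / den

def group_and_recommend(user_topics, program_topics, top_n=1):
    """Group users by dominant topic and rank programs per group.

    user_topics    : {user: topic-proportion vector} (e.g. from an LDA model)
    program_topics : {program: topic-proportion vector over the same topics}
    Returns {topic index: (members, top programs)}.
    """
    groups = {}
    for user, theta in user_topics.items():
        k = max(range(len(theta)), key=lambda i: theta[i])
        groups.setdefault(k, []).append(user)
    dim = len(next(iter(program_topics.values())))
    out = {}
    for k, members in groups.items():
        # Group profile: mean topic vector of the members.
        profile = [sum(user_topics[u][i] for u in members) / len(members)
                   for i in range(dim)]
        ranked = sorted(program_topics,
                        key=lambda p: -cosine(profile, program_topics[p]))
        out[k] = (members, ranked[:top_n])
    return out
```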

  18. A Renormalisation Group Method. V. A Single Renormalisation Group Step

    Science.gov (United States)

    Brydges, David C.; Slade, Gordon

    2015-05-01

    This paper is the fifth in a series devoted to the development of a rigorous renormalisation group method applicable to lattice field theories containing boson and/or fermion fields, and comprises the core of the method. In the renormalisation group method, increasingly large scales are studied in a progressive manner, with an interaction parametrised by a field polynomial which evolves with the scale under the renormalisation group map. In our context, the progressive analysis is performed via a finite-range covariance decomposition. Perturbative calculations are used to track the flow of the coupling constants of the evolving polynomial, but on their own perturbative calculations are insufficient to control error terms and to obtain mathematically rigorous results. In this paper, we define an additional non-perturbative coordinate, which together with the flow of coupling constants defines the complete evolution of the renormalisation group map. We specify conditions under which the non-perturbative coordinate is contractive under a single renormalisation group step. Our framework is essentially combinatorial, but its implementation relies on analytic results developed earlier in the series of papers. The results of this paper are applied elsewhere to analyse the critical behaviour of the 4-dimensional continuous-time weakly self-avoiding walk and of the 4-dimensional n-component |φ|⁴ model. In particular, the existence of a logarithmic correction to mean-field scaling for the susceptibility can be proved for both models, together with other facts about critical exponents and critical behaviour.

  19. Hybrid Reynolds-Averaged/Large Eddy Simulation of a Cavity Flameholder; Assessment of Modeling Sensitivities

    Science.gov (United States)

    Baurle, R. A.

    2015-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. The cases simulated corresponded to those used to examine this flowfield experimentally using particle image velocimetry. A variety of turbulence models were used for the steady-state Reynolds-averaged simulations, which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to formally assess the performance of the hybrid Reynolds-averaged/large eddy simulation modeling approach in a flowfield of interest to the scramjet research community. The numerical errors were quantified for both the steady-state and scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results showed a high degree of variability when comparing the predictions obtained from each turbulence model, with the non-linear eddy viscosity model (an explicit algebraic stress model) providing the most accurate prediction of the measured values. The hybrid Reynolds-averaged/large eddy simulation results were carefully scrutinized to ensure that even the coarsest grid had an acceptable level of resolution for large eddy simulation, and that the time-averaged statistics were acceptably accurate. The autocorrelation and its Fourier transform were the primary tools used for this assessment. The statistics extracted from the hybrid simulation strategy proved to be more accurate than the Reynolds-averaged results obtained using the linear eddy viscosity models. However, there was no predictive improvement noted over the results obtained from the explicit algebraic stress model.

  20. Large signal S-parameters: modeling and radiation effects in microwave power transistors

    International Nuclear Information System (INIS)

    Graham, E.D. Jr.; Chaffin, R.J.; Gwyn, C.W.

    1973-01-01

    Microwave power transistors are usually characterized by measuring the source and load impedances, efficiency, and power output at a specified frequency and bias condition in a tuned circuit. These measurements provide limited data for circuit design and yield essentially no information concerning broadbanding possibilities. Recently, a method using large signal S-parameters has been developed which provides a rapid and repeatable means for measuring microwave power transistor parameters. These large signal S-parameters have been successfully used to design rf power amplifiers. Attempts at modeling rf power transistors have in the past been restricted to a modified Ebers-Moll procedure with numerous adjustable model parameters. The modified Ebers-Moll model is further complicated by inclusion of package parasitics. In the present paper an exact one-dimensional device analysis code has been used to model the performance of the transistor chip. This code has been integrated into the SCEPTRE circuit analysis code such that chip, package and circuit performance can be coupled together in the analysis. Using this computational tool, rf transistor performance has been examined with particular attention given to the theoretical validity of large-signal S-parameters and the effects of nuclear radiation on device parameters. (auth)

  1. Renormalization-group flow of the effective action of cosmological large-scale structures

    CERN Document Server

    Floerchinger, Stefan

    2017-01-01

    Following an approach of Matarrese and Pietroni, we derive the functional renormalization group (RG) flow of the effective action of cosmological large-scale structures. Perturbative solutions of this RG flow equation are shown to be consistent with standard cosmological perturbation theory. Non-perturbative approximate solutions can be obtained by truncating the a priori infinite set of possible effective actions to a finite subspace. Using for the truncated effective action a form dictated by dissipative fluid dynamics, we derive RG flow equations for the scale dependence of the effective viscosity and sound velocity of non-interacting dark matter, and we solve them numerically. Physically, the effective viscosity and sound velocity account for the interactions of long-wavelength fluctuations with the spectrum of smaller-scale perturbations. We find that the RG flow exhibits an attractor behaviour in the IR that significantly reduces the dependence of the effective viscosity and sound velocity on the input ...

  2. Critical behavior in some D = 1 large-N matrix models

    International Nuclear Information System (INIS)

    Das, S.R.; Dhar, A.; Sengupta, A.M.; Wadia, D.R.

    1990-01-01

    The authors study the critical behavior in D = 1 large-N matrix models. The authors also look at the subleading terms in susceptibility in order to find out the dimensions of some of the operators in the theory

  3. The microscopic structure and group theory of the interacting boson model

    International Nuclear Information System (INIS)

    Lipkin, H.J.

    1980-01-01

    The chains of groups used in classifying states of the IBM are compared with the chains used in a composite model with j = 3/2 fermion pairs. Many similarities are found, along with differences due to Pauli principle effects in continuum fermion pairs. The classifications are shown to be characterized by several different seniority numbers, which are physically similar but formally different in the two cases because fermion pair and boson pair states used to define seniority in each model correspond to single bosons and four-fermion clusters, respectively, in the other model. The SO(6) and SO(5) groups which define boson pair seniorities in the boson sextet model are isomorphic, respectively, to SU(4) and Sp(4) which have simple physical interpretations in fermion quartet models. (orig.)

  4. Large-n limit of the Heisenberg model: The decorated lattice and the disordered chain

    International Nuclear Information System (INIS)

    Khoruzhenko, B.A.; Pastur, L.A.; Shcherbina, M.V.

    1989-01-01

    The critical temperature of the generalized spherical model (the large-component limit of the classical Heisenberg model) on a cubic lattice, in which every bond is decorated by L spins, is found. When L → ∞, the asymptotics of the critical temperature is T_c ∼ aL^(-1). The reduction of the number of spherical constraints for the model is found to be fairly large. The free energy of the one-dimensional generalized spherical model with random nearest-neighbor interaction is calculated.

  5. Large-N limit of the two-Hermitian-matrix model by the hidden BRST method

    International Nuclear Information System (INIS)

    Alfaro, J.

    1993-01-01

    This paper discusses the large-N limit of the two-Hermitian-matrix model in zero dimensions, using the hidden Becchi-Rouet-Stora-Tyutin method. A system of integral equations previously found is solved, showing that it contained the exact solution of the model in leading order of large N

  6. Importance of hemodialysis-related outcomes: comparison of ratings by a self-help group, clinicians, and health technology assessment authors with those by a large reference group of patients

    Directory of Open Access Journals (Sweden)

    Janssen IM

    2016-12-01

    Full Text Available Inger M Janssen,1 Fueloep Scheibler,2 Ansgar Gerhardus3,4 1Department of Epidemiology and International Public Health, University of Bielefeld, Bielefeld, 2Department of Non-Drug Interventions, Institute for Quality and Efficiency in Health Care, Cologne, 3Department for Health Services Research, Institute for Public Health and Nursing Research, University of Bremen, 4Health Sciences Bremen, University of Bremen, Bremen, Germany Background: The selection of important outcomes is a crucial decision for clinical research and health technology assessment (HTA), and there is ongoing debate about which stakeholders should be involved. Hemodialysis is a complex treatment for chronic kidney disease (CKD) and affects many outcomes. Apart from obvious outcomes, such as mortality, morbidity and health-related quality of life (HRQoL), others, such as those concerning daily living or health care provision, may also be important. The aim of our study was to analyze to what extent the preferences for patient-relevant outcomes differed between various stakeholders. We compared preferences of stakeholders normally or occasionally involved in outcome prioritization (patients from a self-help group, clinicians and HTA authors) with those of a large reference group of patients. Participants and methods: The reference group consisted of 4,518 CKD patients investigated previously. We additionally recruited CKD patients via a regional self-help group, nephrologists via an online search and HTA authors via an expert database or personal contacts. All groups assessed the relative importance of the 23 outcomes by means of a discrete visual analog scale. We used descriptive statistics to rank outcomes and compare the results between groups. Results: We received completed questionnaires from 49 self-help group patients, 19 nephrologists and 18 HTA authors. Only the following 3 outcomes were ranked within the top 7 outcomes by all 4 groups: safety, HRQoL and emotional state. The

  7. A semiparametric graphical modelling approach for large-scale equity selection.

    Science.gov (United States)

    Liu, Han; Mulvey, John; Zhao, Tianqi

    2016-01-01

    We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption.
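The idea of selecting stocks "as independent as possible" can be illustrated with a greedy pick on a dependence matrix. The record's actual method infers dependence through an elliptical-copula graphical model with regularized rank-based estimators; the plain correlation matrix below merely stands in for that machinery.

```python
def select_independent(corr, names, k):
    """Greedily pick k assets that are mutually as uncorrelated as possible.

    corr  : correlation (or other dependence) matrix, corr[i][j] in [-1, 1]
    names : asset labels, parallel to the rows of corr
    Starting from the first asset, each step adds the candidate whose worst
    (largest absolute) correlation with the already-chosen set is smallest.
    """
    chosen = [0]  # seed with the first asset
    while len(chosen) < k:
        best = min((i for i in range(len(names)) if i not in chosen),
                   key=lambda i: max(abs(corr[i][j]) for j in chosen))
        chosen.append(best)
    return [names[i] for i in chosen]

# Toy dependence structure: A and B move together; C and D are nearly independent.
corr = [[1.00, 0.90, 0.10, 0.20],
        [0.90, 1.00, 0.15, 0.10],
        [0.10, 0.15, 1.00, 0.05],
        [0.20, 0.10, 0.05, 1.00]]
print(select_independent(corr, ["A", "B", "C", "D"], 2))  # ['A', 'C']
```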

  8. Item Response Theory at Subject- and Group-Level. Research Report 90-1.

    Science.gov (United States)

    Tobi, Hilde

    This paper reviews the literature about item response models for the subject level and aggregated level (group level). Group-level item response models (IRMs) are used in the United States in large-scale assessment programs such as the National Assessment of Educational Progress and the California Assessment Program. In the Netherlands, these…

  9. Large-scale shifts in phytoplankton groups in the Equatorial Pacific during ENSO cycles

    Directory of Open Access Journals (Sweden)

    I. Masotti

    2011-03-01

    Full Text Available The El Niño Southern Oscillation (ENSO) drives important changes in the marine productivity of the Equatorial Pacific, in particular during major El Niño/La Niña transitions. Changes in environmental conditions associated with these climatic events also likely impact phytoplankton composition. In this work, the distribution of four major phytoplankton groups (nanoeucaryotes, Prochlorococcus, Synechococcus, and diatoms) was examined between 1996 and 2007 by applying the PHYSAT algorithm to the ocean color data archive from the Ocean Color and Temperature Sensor (OCTS) and Sea-viewing Wide Field-of-view Sensor (SeaWiFS). Coincident with the decrease in chlorophyll concentrations, a large-scale shift in the phytoplankton composition of the Equatorial Pacific, characterized by a decrease in Synechococcus dominance and an increase in nanoeucaryote dominance, was observed during the early stages of both the strong El Niño of 1997 and the moderate El Niño of 2006. A significant increase in diatom dominance was observed in the Equatorial Pacific during the 1998 La Niña and was associated with elevated marine productivity. An analysis of the environmental variables using a coupled physical-biogeochemical model (NEMO-PISCES) suggests that the decrease in Synechococcus dominance during the two El Niño events was associated with an abrupt decline in nutrient availability (−0.9 to −2.5 μM NO3 month⁻¹). Alternatively, increased nutrient availability (3 μM NO3 month⁻¹) during the 1998 La Niña resulted in an increase in diatom dominance in the Equatorial Pacific. Despite these phytoplankton community shifts, the mean composition is restored after a few months, which suggests resilience in community structure.

  10. Findings and Challenges in Fine-Resolution Large-Scale Hydrological Modeling

    Science.gov (United States)

    Her, Y. G.

    2017-12-01

    Fine-resolution large-scale (FL) modeling can provide the overall picture of the hydrological cycle and transport while taking into account unique local conditions in the simulation. It can also help develop water resources management plans consistent across spatial scales by describing the spatial consequences of decisions and hydrological events extensively. FL modeling is expected to be common in the near future as global-scale remotely sensed data are emerging, and computing resources have been advanced rapidly. There are several spatially distributed models available for hydrological analyses. Some of them rely on numerical methods such as finite difference/element methods (FDM/FEM), which require excessive computing resources (implicit scheme) to manipulate large matrices or small simulation time intervals (explicit scheme) to maintain the stability of the solution, to describe two-dimensional overland processes. Others make unrealistic assumptions such as constant overland flow velocity to reduce the computational loads of the simulation. Thus, simulation efficiency often comes at the expense of precision and reliability in FL modeling. Here, we introduce a new FL continuous hydrological model and its application to four watersheds in different landscapes and sizes from 3.5 km2 to 2,800 km2 at the spatial resolution of 30 m on an hourly basis. The model provided acceptable accuracy statistics in reproducing hydrological observations made in the watersheds. The modeling outputs including the maps of simulated travel time, runoff depth, soil water content, and groundwater recharge, were animated, visualizing the dynamics of hydrological processes occurring in the watersheds during and between storm events. Findings and challenges were discussed in the context of modeling efficiency, accuracy, and reproducibility, which we found can be improved by employing advanced computing techniques and hydrological understandings, by using remotely sensed hydrological

  11. Large-scale tropospheric transport in the Chemistry-Climate Model Initiative (CCMI) simulations

    Science.gov (United States)

    Orbe, Clara; Yang, Huang; Waugh, Darryn W.; Zeng, Guang; Morgenstern, Olaf; Kinnison, Douglas E.; Lamarque, Jean-Francois; Tilmes, Simone; Plummer, David A.; Scinocca, John F.; Josse, Beatrice; Marecal, Virginie; Jöckel, Patrick; Oman, Luke D.; Strahan, Susan E.; Deushi, Makoto; Tanaka, Taichu Y.; Yoshida, Kohei; Akiyoshi, Hideharu; Yamashita, Yousuke; Stenke, Andreas; Revell, Laura; Sukhodolov, Timofei; Rozanov, Eugene; Pitari, Giovanni; Visioni, Daniele; Stone, Kane A.; Schofield, Robyn; Banerjee, Antara

    2018-05-01

    Understanding and modeling the large-scale transport of trace gases and aerosols is important for interpreting past (and projecting future) changes in atmospheric composition. Here we show that there are large differences in the global-scale atmospheric transport properties among the models participating in the IGAC SPARC Chemistry-Climate Model Initiative (CCMI). Specifically, we find up to 40 % differences in the transport timescales connecting the Northern Hemisphere (NH) midlatitude surface to the Arctic and to Southern Hemisphere high latitudes, where the mean age ranges between 1.7 and 2.6 years. We show that these differences are related to large differences in vertical transport among the simulations, in particular to differences in parameterized convection over the oceans. While stronger convection over NH midlatitudes is associated with slower transport to the Arctic, stronger convection in the tropics and subtropics is associated with faster interhemispheric transport. We also show that the differences among simulations constrained with fields derived from the same reanalysis products are as large as (and in some cases larger than) the differences among free-running simulations, most likely due to larger differences in parameterized convection. Our results indicate that care must be taken when using simulations constrained with analyzed winds to interpret the influence of meteorology on tropospheric composition.

  12. Large-scale tropospheric transport in the Chemistry–Climate Model Initiative (CCMI) simulations

    Directory of Open Access Journals (Sweden)

    C. Orbe

    2018-05-01

    Full Text Available Understanding and modeling the large-scale transport of trace gases and aerosols is important for interpreting past (and projecting future) changes in atmospheric composition. Here we show that there are large differences in the global-scale atmospheric transport properties among the models participating in the IGAC SPARC Chemistry–Climate Model Initiative (CCMI). Specifically, we find up to 40 % differences in the transport timescales connecting the Northern Hemisphere (NH) midlatitude surface to the Arctic and to Southern Hemisphere high latitudes, where the mean age ranges between 1.7 and 2.6 years. We show that these differences are related to large differences in vertical transport among the simulations, in particular to differences in parameterized convection over the oceans. While stronger convection over NH midlatitudes is associated with slower transport to the Arctic, stronger convection in the tropics and subtropics is associated with faster interhemispheric transport. We also show that the differences among simulations constrained with fields derived from the same reanalysis products are as large as (and in some cases larger than) the differences among free-running simulations, most likely due to larger differences in parameterized convection. Our results indicate that care must be taken when using simulations constrained with analyzed winds to interpret the influence of meteorology on tropospheric composition.

  13. Investigating the LGBTQ Responsive Model for Supervision of Group Work

    Science.gov (United States)

    Luke, Melissa; Goodrich, Kristopher M.

    2013-01-01

    This article reports an investigation of the LGBTQ Responsive Model for Supervision of Group Work, a trans-theoretical supervisory framework to address the needs of lesbian, gay, bisexual, transgender, and questioning (LGBTQ) persons (Goodrich & Luke, 2011). Findings partially supported applicability of the LGBTQ Responsive Model for Supervision…

  14. Benchmarking Deep Learning Models on Large Healthcare Datasets.

    Science.gov (United States)

    Purushotham, Sanjay; Meng, Chuizheng; Che, Zhengping; Liu, Yan

    2018-06-04

    Deep learning models (aka Deep Neural Networks) have revolutionized many fields, including computer vision, natural language processing, and speech recognition, and are being increasingly used in clinical healthcare applications. However, few works exist that have benchmarked the performance of deep learning models against state-of-the-art machine learning models and prognostic scoring systems on publicly available healthcare datasets. In this paper, we present benchmarking results for several clinical prediction tasks, such as mortality prediction, length-of-stay prediction, and ICD-9 code group prediction, using deep learning models, an ensemble of machine learning models (the Super Learner algorithm), and the SAPS II and SOFA scores. We used the publicly available Medical Information Mart for Intensive Care III (MIMIC-III) (v1.4) dataset, which includes all patients admitted to an ICU at the Beth Israel Deaconess Medical Center from 2001 to 2012, for the benchmarking tasks. Our results show that deep learning models consistently outperform all the other approaches, especially when the 'raw' clinical time series data are used as input features to the models. Copyright © 2018 Elsevier Inc. All rights reserved.
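Benchmarks like the one above ultimately compare discrimination of different predictors on the same held-out patients. A minimal sketch (the data and model names are hypothetical, not from the paper): AUROC computed as the probability that a random positive case is ranked above a random negative one.

```python
# Sketch of a benchmarking comparison: AUROC via the rank-statistic definition,
# applied to two hypothetical sets of mortality-risk predictions.
def auroc(scores, labels):
    """AUROC = P(score of a random positive > score of a random negative)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 0, 1, 0, 0]            # hypothetical in-hospital mortality outcomes
deep = [0.9, 0.2, 0.8, 0.4, 0.1]    # hypothetical deep-model probabilities
score = [0.5, 0.6, 0.8, 0.4, 0.1]   # hypothetical severity-score probabilities
better = auroc(deep, labels) > auroc(score, labels)
```

In a real benchmark the same comparison would be repeated over cross-validation folds and multiple tasks, but the per-task comparison reduces to this statistic.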

  15. Transferable tight-binding model for strained group IV and III-V materials and heterostructures

    Science.gov (United States)

    Tan, Yaohua; Povolotskyi, Michael; Kubis, Tillmann; Boykin, Timothy B.; Klimeck, Gerhard

    2016-07-01

    It is critical to capture the effects of strain and material interfaces for device-level transistor modeling. We introduce a transferable sp3d5s* tight-binding model with nearest-neighbor interactions for arbitrarily strained group IV and III-V materials. The tight-binding model is parametrized with respect to hybrid functional (HSE06) calculations for a variety of strained systems. The tight-binding calculations of ultrasmall superlattices formed by group IV and group III-V materials show good agreement with the corresponding HSE06 calculations. The application of the tight-binding model to superlattices demonstrates that a transferable tight-binding model with nearest-neighbor interactions can be obtained for group IV and III-V materials.
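The idea of a nearest-neighbor tight-binding parametrization can be illustrated with a much smaller basis than the paper's sp3d5s* set. A sketch under that simplification (one s-orbital per atom on a 1-D chain; e0 and t stand in for the fitted on-site and hopping parameters):

```python
# Minimal tight-binding sketch (NOT the sp3d5s* model of the record): a
# nearest-neighbor chain, where the fitted parameters e0 (on-site energy)
# and t (hopping) fully determine the band E(k) = e0 + 2 t cos(k a).
import math

def band_energy(k, a=1.0, e0=0.0, t=-1.0):
    """Dispersion of a 1-D nearest-neighbor tight-binding chain."""
    return e0 + 2.0 * t * math.cos(k * a)

# For t < 0 the band minimum sits at k = 0 and the maximum at k = pi/a,
# giving a bandwidth of 4|t| that the fit to ab initio data would constrain.
bottom = band_energy(0.0)
top = band_energy(math.pi)
```

In the full model the same fitting logic applies, except each k-point yields a small matrix eigenproblem over the sp3d5s* orbitals rather than a scalar.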

  16. Simplest simulation model for three-dimensional xenon oscillations in large PWRs

    International Nuclear Information System (INIS)

    Shimazu, Yoichiro

    2004-01-01

    Xenon oscillations in large PWRs are well understood, and no operational problems remain. However, in order to suppress the oscillations effectively, an optimal control strategy is preferable. Generally speaking, such an optimality search based on modern control theory requires a large volume of transient core analyses. For example, three-dimensional core calculations are inevitable for the analysis of radial oscillations. From this point of view, a very simple 3-D model is proposed, based on a reactor model of only four points. Because in actual reactor operation the magnitude of xenon oscillations should be limited from the viewpoint of safety, the model further assumes that the neutron leakage is small or even constant. It can explicitly use reactor parameters such as reactivity coefficients and control rod worth directly. Although the model is simplified as described above, it can predict oscillation behavior in a very short calculation time, even on a PC, and the prediction results are good. The validity of the model in comparison with measured data and its applications are discussed. (author)

  17. Frailty Across Age Groups.

    Science.gov (United States)

    Pérez-Zepeda, M U; Ávila-Funes, J A; Gutiérrez-Robledo, L M; García-Peña, C

    2016-01-01

    The implementation of an aging biomarker into clinical practice is under debate. The Frailty Index is a model of deficit accumulation and has been shown to accurately capture frailty in older adults, thus bridging biology with clinical practice. To describe the association of socio-demographic characteristics and the Frailty Index in different age groups (from 20 to over one hundred years) in a representative sample of Mexican subjects. Cross-sectional analysis. Nationwide and population-representative survey. Adults 20 years and older interviewed during the last Mexican National Health and Nutrition Survey (2012). A 30-item Frailty Index following standard construction was developed. Multi-level regression models were performed to test the associations of the Frailty Index with multiple socio-demographic characteristics across age groups. A total of 29,504 subjects were analyzed. The 30-item Frailty Index showed the highest scores in the older age groups, especially in women. No socio-demographic variable was associated with the Frailty Index in all the studied age groups; however, employment, economic income, and smoking status were more consistently associated across age groups. To our knowledge, this is the first report describing the Frailty Index in a large representative sample of a Latin American country. Increasing age and gender were closely associated with a higher score.
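The standard deficit-accumulation construction the abstract refers to is simple enough to state as code: the index is the fraction of measured deficits an individual presents. A sketch (the item list is hypothetical; graded items would contribute fractional values rather than 0/1):

```python
# Sketch of the standard Frailty Index construction: number of deficits
# present divided by number of deficits measured, skipping missing items.
def frailty_index(deficits):
    """deficits: list of 0/1 (or fractional) item values; None = missing."""
    present = [d for d in deficits if d is not None]
    return sum(present) / len(present)

# Hypothetical 30-item screen: 6 deficits present, 22 absent, 2 items missing,
# so the denominator is the 28 items actually measured.
items = [1] * 6 + [0] * 22 + [None] * 2
fi = frailty_index(items)
```

Because the index is a proportion, scores from instruments with different item counts remain comparable, which is what allows comparison across age groups and surveys.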

  18. A Project Team Analysis Using Tuckman's Model of Small-Group Development.

    Science.gov (United States)

    Natvig, Deborah; Stark, Nancy L

    2016-12-01

    Concerns about equitable workloads for nursing faculty have been well documented, yet a standardized system for workload management does not exist. A project team was challenged to establish an academic workload management system when two dissimilar universities were consolidated. Tuckman's model of small-group development was used as the framework for the analysis of processes and effectiveness of a workload project team. Agendas, notes, and meeting minutes were used as the primary sources of information. Analysis revealed the challenges the team encountered. Utilization of a team charter was an effective tool in guiding the team to become a highly productive group. Lessons learned from the analysis are discussed. Guiding a diverse group into a highly productive team is complex. The use of Tuckman's model of small-group development provided a systematic mechanism to review and understand group processes and tasks. [J Nurs Educ. 2016;55(12):675-681.]. Copyright 2016, SLACK Incorporated.

  19. A turbulence model for large interfaces in high Reynolds two-phase CFD

    International Nuclear Information System (INIS)

    Coste, P.; Laviéville, J.

    2015-01-01

    Highlights: • Two-phase CFD commonly involves interfaces much larger than the computational cells. • A two-phase turbulence model is developed to better take them into account. • It solves k–epsilon transport equations in each phase. • The special treatments and transfer terms at large interfaces are described. • Validation cases are presented. - Abstract: A model for two-phase (six-equation) CFD modelling of turbulence is presented, for the regions of the flow where the liquid–gas interface takes place on length scales much larger than the typical computational cell size. In the other regions of the flow, the liquid or gas volume fractions range from 0 to 1. Heat and mass transfer and compressibility of the fluids are included in the system, which is used at high Reynolds numbers in large-scale industrial calculations. In this context, a model based on k and ε transport equations in each phase was chosen. The paper describes the model, with a focus on the large interfaces, which require special treatments and transfer terms between the phases, including some approaches inspired by wall functions. The validation of the model is based on high-Reynolds-number experiments with measurements of turbulent quantities: a liquid jet impinging on a free surface and an air–water stratified flow. A steam–water stratified condensing flow experiment is also used for an indirect validation in the case of heat and mass transfer.

  20. Group Approach to the Quantization of Non-Abelian Stueckelberg Models

    International Nuclear Information System (INIS)

    Aldaya, V; Lopez-Ruiz, F F; Calixto, M

    2011-01-01

    The quantum field theory of Non-Linear Sigma Models on coadjoint orbits of a semi-simple group G is formulated in the framework of a Group Approach to Quantization. In this scheme, partial-trace Lagrangians are recovered from two-cocycles defined on the infinite-dimensional group of sections of the jet-gauge group J1(G). This construction is extended to the entire physical system coupled to Yang-Mills fields, thus constituting an algebraic formulation of the Non-Abelian Stueckelberg formalism devoid of the unitarity/renormalizability obstruction that this theory finds in the standard Lagrangian formalism under canonical quantization.

  1. Group Approach to the Quantization of Non-Abelian Stueckelberg Models

    Energy Technology Data Exchange (ETDEWEB)

    Aldaya, V; Lopez-Ruiz, F F [Instituto de Astrofisica de AndalucIa (IAA-CSIC), Apartado Postal 3004, 18080 Granada (Spain); Calixto, M, E-mail: valdaya@iaa.es, E-mail: Manuel.Calixto@upct.es, E-mail: flopez@iaa.es [Departamento de Matematica Aplicada y Estadistica, Universidad Politecnica de Cartagena, Paseo Alfonso XIII 56, 30203 Cartagena (Spain)

    2011-03-01

    The quantum field theory of Non-Linear Sigma Models on coadjoint orbits of a semi-simple group G is formulated in the framework of a Group Approach to Quantization. In this scheme, partial-trace Lagrangians are recovered from two-cocycles defined on the infinite-dimensional group of sections of the jet-gauge group J1(G). This construction is extended to the entire physical system coupled to Yang-Mills fields, thus constituting an algebraic formulation of the Non-Abelian Stueckelberg formalism devoid of the unitarity/renormalizability obstruction that this theory finds in the standard Lagrangian formalism under canonical quantization.

  2. Transferable tight binding model for strained group IV and III-V heterostructures

    Science.gov (United States)

    Tan, Yaohua; Povolotskyi, Micheal; Kubis, Tillmann; Boykin, Timothy; Klimeck, Gerhard

    Modern semiconductor devices have reached critical device dimensions in the range of several nanometers. For reliable prediction of device performance, it is critical to have a numerically efficient model that is transferable to material interfaces. In this work, we present an empirical tight binding (ETB) model with transferable parameters for strained group IV and III-V semiconductors. The ETB model is numerically highly efficient, as it makes use of an orthogonal sp3d5s* basis set with nearest-neighbor inter-atomic interactions. The ETB parameters are generated from HSE06 hybrid functional calculations. Band structures of strained group IV and III-V materials obtained with the ETB model are in good agreement with corresponding HSE06 calculations. Furthermore, the ETB model is applied to strained superlattices composed of group IV and III-V elements. The ETB model turns out to be transferable to nanoscale heterostructures: the ETB band structures agree with the corresponding HSE06 results in the whole Brillouin zone, and the ETB band gaps of superlattices with common cations or common anions show discrepancies within 0.05 eV.

  3. Effects of Random Environment on a Self-Organized Critical System: Renormalization Group Analysis of a Continuous Model

    Directory of Open Access Journals (Sweden)

    Antonov N.V.

    2016-01-01

    Full Text Available We study effects of the random fluid motion on a system in a self-organized critical state. The latter is described by the continuous stochastic model proposed by Hwa and Kardar [Phys. Rev. Lett. 62: 1813 (1989)]. The advecting velocity field is Gaussian, not correlated in time, with the pair correlation function of the form ∝ δ(t − t′)/k⊥^(d−1+ξ), where k⊥ = |k⊥| and k⊥ is the component of the wave vector perpendicular to a certain preferred direction – the d-dimensional generalization of the ensemble introduced by Avellaneda and Majda [Commun. Math. Phys. 131: 381 (1990)]. Using the field theoretic renormalization group we show that, depending on the relation between the exponent ξ and the spatial dimension d, the system reveals different types of large-scale, long-time scaling behaviour, associated with the three possible fixed points of the renormalization group equations. They correspond to ordinary diffusion, to a passively advected scalar field (the nonlinearity of the Hwa–Kardar model is irrelevant), and to the "pure" Hwa–Kardar model (the advection is irrelevant). For the special case ξ = 2(4 − d)/3 both the nonlinearity and the advection are important. The corresponding critical exponents are found exactly for all these cases.

  4. Using radar altimetry to update a large-scale hydrological model of the Brahmaputra river basin

    DEFF Research Database (Denmark)

    Finsen, F.; Milzow, Christian; Smith, R.

    2014-01-01

    Measurements of river and lake water levels from space-borne radar altimeters (past missions include ERS, Envisat, Jason, Topex) are useful for calibration and validation of large-scale hydrological models in poorly gauged river basins. Altimetry data availability over the downstream reaches of the Brahmaputra is excellent (17 high-quality virtual stations from ERS-2, 6 from Topex and 10 from Envisat are available for the Brahmaputra). In this study, altimetry data are used to update a large-scale Budyko-type hydrological model of the Brahmaputra river basin in real time. Altimetry measurements improved model performance considerably. The Nash-Sutcliffe model efficiency increased from 0.77 to 0.83. Real-time river basin modelling using radar altimetry has the potential to improve the predictive capability of large-scale hydrological models elsewhere on the planet.
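The improvement reported above is measured with the Nash-Sutcliffe efficiency, which compares model error against the variance of the observations (1.0 is a perfect match, 0.0 is no better than the observed mean). A minimal implementation with illustrative numbers, not the study's data:

```python
# Nash-Sutcliffe model efficiency: 1 - (sum of squared errors) /
# (sum of squared deviations of the observations from their mean).
def nash_sutcliffe(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

# Illustrative discharge series (e.g. m3/s), not the Brahmaputra data.
obs = [10.0, 12.0, 8.0, 15.0, 11.0]
sim = [9.0, 12.5, 8.5, 14.0, 11.5]
nse = nash_sutcliffe(obs, sim)
```

An increase from 0.77 to 0.83, as in the study, means the updated model removed roughly a quarter of the remaining unexplained error variance.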

  5. Explorations in combining cognitive models of individuals and system dynamics models of groups.

    Energy Technology Data Exchange (ETDEWEB)

    Backus, George A.

    2008-07-01

    This report documents a demonstration model of interacting insurgent leadership, military leadership, government leadership, and societal dynamics under a variety of interventions. The primary focus of the work is the portrayal of a token societal model that responds to leadership activities. The model also includes a linkage between leadership and society that implicitly represents the leadership subordinates as they directly interact with the population. The societal model is meant to demonstrate the efficacy and viability of using System Dynamics (SD) methods to simulate populations that can then connect to cognitive models depicting individuals. SD models typically focus on average behavior and thus have limited applicability for describing small groups or individuals. On the other hand, cognitive models readily describe individual behavior but can become cumbersome when used to describe populations. Realistic security situations are invariably a mix of individual and population dynamics. Therefore, the ability to tie SD models to cognitive models provides a critical capability that would otherwise be unavailable.

  6. Mixed-signal instrumentation for large-signal device characterization and modelling

    NARCIS (Netherlands)

    Marchetti, M.

    2013-01-01

    This thesis concentrates on the development of advanced large-signal measurement and characterization tools to support technology development, model extraction and validation, and power amplifier (PA) designs that address the newly introduced third and fourth generation (3G and 4G) wideband

  7. Large-deflection statics analysis of active cardiac catheters through co-rotational modelling.

    Science.gov (United States)

    Peng Qi; Chen Qiu; Mehndiratta, Aadarsh; I-Ming Chen; Haoyong Yu

    2016-08-01

    This paper presents a co-rotational concept for large-deflection formulation of cardiac catheters. Using this approach, the catheter is first discretized with a number of equal-length beam elements and nodes, and the rigid body motions of an individual beam element are separated from its deformations. This makes it adequate to model arbitrarily large deflections of a catheter with linear elastic analysis at the local element level. A novel design of an active cardiac catheter, 9 Fr in diameter, is first proposed; it is based on contra-rotating double helix patterns and improves on previous prototypes. The modelling section is followed by MATLAB simulations of various deflections when different types of loads are exerted on the catheter, which proves the feasibility of the presented modelling approach. To the best of the authors' knowledge, this is the first use of this methodology for large-deflection static analysis of a catheter, and it will enable more accurate control of robot-assisted cardiac catheterization procedures. Future work will include further experimental validation.

  8. Analysis of 16S libraries of mouse gastrointestinal microflora reveals a large new group of mouse intestinal bacteria.

    Science.gov (United States)

    Salzman, Nita H; de Jong, Hendrik; Paterson, Yvonne; Harmsen, Hermie J M; Welling, Gjalt W; Bos, Nicolaas A

    2002-11-01

    Total genomic DNA from samples of intact mouse small intestine, large intestine, caecum and faeces was used as template for PCR amplification of 16S rRNA gene sequences with conserved bacterial primers. Phylogenetic analysis of the amplification products revealed 40 unique 16S rDNA sequences. Of these sequences, 25% (10/40) corresponded to described intestinal organisms of the mouse, including Lactobacillus spp., Helicobacter spp., segmented filamentous bacteria and members of the altered Schaedler flora (ASF360, ASF361, ASF502 and ASF519); 75% (30/40) represented novel sequences. A large number (11/40) of the novel sequences revealed a new operational taxonomic unit (OTU) belonging to the Cytophaga-Flavobacter-Bacteroides phylum, which the authors named 'mouse intestinal bacteria'. 16S rRNA probes were developed for this new OTU. Upon analysis of the novel sequences, eight were found to cluster within the Eubacterium rectale-Clostridium coccoides group and three clustered within the Bacteroides group. One of the novel sequences was distantly related to Verrucomicrobium spinosum and one was distantly related to Bacillus mycoides. Oligonucleotide probes specific for the 16S rRNA of these novel clones were generated. Using a combination of four previously described and four newly designed probes, approximately 80% of bacteria recovered from the murine large intestine and 71% of bacteria recovered from the murine caecum could be identified by fluorescence in situ hybridization (FISH).

  9. The Beyond the Standard Model Working Group: Summary Report

    Energy Technology Data Exchange (ETDEWEB)

    Rizzo, Thomas G.

    2002-08-08

    Various theoretical aspects of physics beyond the Standard Model at hadron colliders are discussed. Our focus will be on those issues that most immediately impact the projects pursued as part of the BSM group at this meeting.

  10. A model-based eco-routing strategy for electric vehicles in large urban networks

    OpenAIRE

    De Nunzio , Giovanni; Thibault , Laurent; Sciarretta , Antonio

    2016-01-01

    A novel eco-routing navigation strategy and an energy consumption modeling approach for electric vehicles are presented in this work. Speed fluctuations and road network infrastructure have a large impact on vehicular energy consumption. Neglecting these effects may lead to large errors in eco-routing navigation, which could trivially select the route with the lowest average speed. We propose an energy consumption model that considers both accelerations and impact of the ...
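Once per-edge energy consumption has been modeled, eco-routing reduces to a shortest-path search with energy as the edge cost, where the minimum-energy route need not be the fastest or shortest one. A sketch (the graph, node names, and energy values are hypothetical):

```python
# Eco-routing sketch: Dijkstra's algorithm over per-edge energy costs.
# graph maps node -> [(neighbor, energy_Wh), ...]; values are illustrative.
import heapq

def min_energy_route(graph, start, goal):
    best = {start: 0.0}
    queue = [(0.0, start, [start])]
    while queue:
        energy, node, path = heapq.heappop(queue)
        if node == goal:
            return energy, path
        for nxt, e in graph.get(node, []):
            cand = energy + e
            if cand < best.get(nxt, float("inf")):
                best[nxt] = cand
                heapq.heappush(queue, (cand, nxt, path + [nxt]))
    return float("inf"), []

# The fast arterial A->B->D costs more energy than the slower route A->C->D,
# so the eco-router prefers the latter.
net = {"A": [("B", 120.0), ("C", 80.0)], "B": [("D", 110.0)], "C": [("D", 90.0)]}
energy, route = min_energy_route(net, "A", "D")
```

The modeling contribution of the paper lies in how the per-edge energies are computed (accounting for accelerations and infrastructure); the routing layer itself stays a standard shortest-path problem.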

  11. Group-kinetic theory and modeling of atmospheric turbulence

    Science.gov (United States)

    Tchen, C. M.

    1989-01-01

    A group kinetic method is developed for analyzing eddy transport properties and relaxation to equilibrium. The purpose is to derive the spectral structure of turbulence in incompressible and compressible media. Of particular interest are: direct and inverse cascade, boundary layer turbulence, Rossby wave turbulence, two phase turbulence; compressible turbulence, and soliton turbulence. Soliton turbulence can be found in large scale turbulence, turbulence connected with surface gravity waves and nonlinear propagation of acoustical and optical waves. By letting the pressure gradient represent the elementary interaction among fluid elements and by raising the Navier-Stokes equation to higher dimensionality, the master equation was obtained for the description of the microdynamical state of turbulence.

  12. A Creative Therapies Model for the Group Supervision of Counsellors.

    Science.gov (United States)

    Wilkins, Paul

    1995-01-01

    Sets forth a model of group supervision, drawing on a creative therapies approach which provides an effective way of delivering process issues, conceptualization issues, and personalization issues. The model makes particular use of techniques drawn from art therapy and from psychodrama, and should be applicable to therapists of many orientations.…

  13. Effects of uncertainty in model predictions of individual tree volume on large area volume estimates

    Science.gov (United States)

    Ronald E. McRoberts; James A. Westfall

    2014-01-01

    Forest inventory estimates of tree volume for large areas are typically calculated by adding model predictions of volumes for individual trees. However, the uncertainty in the model predictions is generally ignored with the result that the precision of the large area volume estimates is overestimated. The primary study objective was to estimate the effects of model...

  14. Revisiting the merits of a mandatory large group classroom learning format: an MD-MBA perspective.

    Science.gov (United States)

    Li, Shawn X; Pinto-Powell, Roshini

    2017-01-01

    The role of classroom learning in medical education is rapidly changing. To promote active learning and reduce student stress, medical schools have adopted policies such as pass/fail curriculums and recorded lectures. These policies along with the rising importance of the USMLE (United States Medical Licensing Examination) exams have made asynchronous learning popular to the detriment of classroom learning. In contrast to this model, modern day business schools employ mandatory large group classes with assigned seating and cold-calling. Despite similar student demographics, medical and business schools have adopted vastly different approaches to the classroom. When examining the classroom dynamic at business schools with mandatory classes, it is evident that there's an abundance of engaging discourse and peer learning objectives that medical schools share. Mandatory classes leverage the network effect just like social media forums such as Facebook and Twitter. That is, the value of a classroom discussion increases when more students are present to participate. At a time when students are savvy consumers of knowledge, the classroom is competing against an explosion of study aids dedicated to USMLE preparation. Certainly, the purpose of medical school is not solely about the efficient transfer of knowledge - but to train authentic, competent, and complete physicians. To accomplish this, we must promote the inimitable and deeply personal interactions amongst faculty and students. When viewed through this lens, mandatory classes might just be a way for medical schools to leverage their competitive advantage in educating the complete physician.

  15. Power and Vision: Group-Process Models Evolving from Social-Change Movements.

    Science.gov (United States)

    Morrow, Susan L.; Hawxhurst, Donna M.

    1988-01-01

    Explores evolution of group process in social change movements, including the evolution of the new left, the cooperative movement,and the women's liberation movement. Proposes a group-process model that encourages people to share power and live their visions. (Author/NB)

  16. Perturbation theory instead of large scale shell model calculations

    International Nuclear Information System (INIS)

    Feldmeier, H.; Mankos, P.

    1977-01-01

    Results of large scale shell model calculations for (sd)-shell nuclei are compared with perturbation theory, which provides an excellent approximation when the SU(3) basis is used as a starting point. The results indicate that a perturbation theory treatment in an SU(3) basis including 2ħω excitations should be preferable to a full diagonalization within the (sd)-shell. (orig.) [de]

  17. Expected Utility and Entropy-Based Decision-Making Model for Large Consumers in the Smart Grid

    Directory of Open Access Journals (Sweden)

    Bingtuan Gao

    2015-09-01

    Full Text Available In the smart grid, large consumers can procure electricity energy from various power sources to meet their load demands. To maximize its profit, each large consumer needs to decide its energy procurement strategy under risks such as price fluctuations in the spot market and power quality issues. In this paper, an electric energy procurement decision-making model is studied for large consumers who can obtain their electric energy from the spot market, generation companies under bilateral contracts, the options market, and self-production facilities in the smart grid. Considering the effect of unqualified electric energy, the profit model of large consumers is formulated. In order to measure the risks from price fluctuations and power quality, expected utility and entropy are employed. Consequently, an expected utility and entropy decision-making model is presented, which helps large consumers to minimize their expected cost of electricity procurement while properly limiting the volatility of this cost. Finally, a case study verifies the feasibility and effectiveness of the proposed model.
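The two risk measures the abstract combines can be sketched directly: expected utility over procurement-cost scenarios, plus Shannon entropy of the scenario distribution as a dispersion measure. All numbers, and the CARA-type utility form, are assumptions for illustration, not the paper's formulation:

```python
# Sketch of the two measures combined in the decision model: expected utility
# of cost scenarios (CARA-type utility is an assumed, common choice) and the
# Shannon entropy of the scenario distribution as a volatility proxy.
import math

def expected_utility(costs, probs, risk_aversion=0.01):
    """Expected utility of uncertain procurement costs (higher is better)."""
    return sum(p * (1.0 - math.exp(risk_aversion * c)) for c, p in zip(costs, probs))

def entropy(probs):
    """Shannon entropy (nats) of the scenario probabilities."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

costs = [100.0, 120.0, 150.0]   # hypothetical procurement cost scenarios
probs = [0.5, 0.3, 0.2]
eu = expected_utility(costs, probs)
h = entropy(probs)              # bounded above by log(3) for three scenarios
```

A decision model of this type would then search over procurement mixes, trading off a better expected utility against a lower entropy (less dispersed costs).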

  18. What is special about the group of the standard model

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen, H.B.; Brene, N.

    1989-06-15

    The standard model is based on the algebra of U(1)×SU(2)×SU(3). The systematics of charges of the fundamental fermions seems to suggest the importance of a particular group having this algebra, viz. S(U(2)×U(3)). This group is distinguished from all other connected compact non-semisimple groups with dimensionality up to 12 by a characteristic property: it is very ''skew''. By this we mean that the group has relatively few ''generalised outer automorphisms''. One may speculate about physical reasons for this fact. (orig.).

  19. Demography-based adaptive network model reproduces the spatial organization of human linguistic groups

    Science.gov (United States)

    Capitán, José A.; Manrubia, Susanna

    2015-12-01

    The distribution of human linguistic groups presents a number of interesting and nontrivial patterns. The distributions of the number of speakers per language and the area each group covers follow log-normal distributions, while population and area fulfill an allometric relationship. The topology of networks of spatial contacts between different linguistic groups has been recently characterized, showing atypical properties of the degree distribution and clustering, among others. Human demography, spatial conflicts, and the construction of networks of contacts between linguistic groups are mutually dependent processes. Here we introduce an adaptive network model that takes all of them into account and successfully reproduces, using only four model parameters, not only those features of linguistic groups already described in the literature, but also correlations between demographic and topological properties uncovered in this work. Besides their relevance when modeling and understanding processes related to human biogeography, our adaptive network model admits a number of generalizations that broaden its scope and make it suitable to represent interactions between agents based on population dynamics and competition for space.
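The allometric relationship mentioned above (area scaling as a power of population) is a straight line in log-log space, so the exponent is recoverable as a least-squares slope. A sketch on synthetic data, not the study's linguistic-group data:

```python
# Sketch of an allometry check: for area ~ population^b, the exponent b is
# the least-squares slope of log(area) against log(population).
import math

def loglog_slope(xs, ys):
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    mx = sum(lx) / len(lx)
    my = sum(ly) / len(ly)
    cov = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    var = sum((a - mx) ** 2 for a in lx)
    return cov / var

# Synthetic groups obeying an exact power law area = pop^0.8 recover b = 0.8.
pops = [1e3, 1e4, 1e5, 1e6]
areas = [p ** 0.8 for p in pops]
b = loglog_slope(pops, areas)
```

The same regression applied to real speaker counts and range areas is how an allometric exponent like the one the model reproduces would be estimated.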

  20. Tracking Maneuvering Group Target with Extension Predicted and Best Model Augmentation Method Adapted

    Directory of Open Access Journals (Sweden)

    Linhai Gan

    2017-01-01

    Full Text Available The random matrix (RM) method is widely applied for group target tracking. The assumption in the conventional RM method that the group extension remains invariant is not valid, as the orientation of the group varies rapidly while it is maneuvering; thus, a new approach with predicted group extension is derived here. To match the group maneuvering, a best model augmentation (BMA) method is introduced. The existing BMA method uses a fixed basic model set, which may lead to poor performance when it cannot ensure coverage of the true motion modes. Here, a maneuvering group target tracking algorithm is proposed that exploits both the group extension prediction and the BMA adaptation. The performance of the proposed algorithm is illustrated by simulation.
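In the RM framework the group extension is a symmetric positive-definite matrix, so predicting it under a maneuver amounts to rotating the extent by the predicted heading change, X_pred = R X R^T. A schematic 2-D illustration of that idea (not the paper's actual algorithm):

```python
import numpy as np

def predict_extension(X, dtheta):
    """Rotate a 2-D extension (SPD) matrix by the predicted heading change dtheta."""
    c, s = np.cos(dtheta), np.sin(dtheta)
    R = np.array([[c, -s], [s, c]])
    return R @ X @ R.T

# Elongated group extent with its long axis along x, then a 90-degree predicted turn
X = np.diag([9.0, 1.0])
Xp = predict_extension(X, np.pi / 2)
```

After the quarter turn the long axis of the predicted extent lies along y, which is exactly the effect a fixed-extension RM filter cannot capture.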

  1. A differential-geometric approach to generalized linear models with grouped predictors

    NARCIS (Netherlands)

    Augugliaro, Luigi; Mineo, Angelo M.; Wit, Ernst C.

    We propose an extension of the differential-geometric least angle regression method to perform sparse group inference in a generalized linear model. An efficient algorithm is proposed to compute the solution curve. The proposed group differential-geometric least angle regression method has important

  2. To eat and not be eaten: modelling resources and safety in multi-species animal groups.

    Directory of Open Access Journals (Sweden)

    Umesh Srinivasan

    Full Text Available Using mixed-species bird flocks as an example, we model the payoffs for two types of species from participating in multi-species animal groups. Salliers feed on mobile prey, are good sentinels and do not affect prey capture rates of gleaners; gleaners feed on prey on substrates and can enhance the prey capture rate of salliers by flushing prey, but are poor sentinels. These functional types are known from various animal taxa that form multi-species associations. We model costs and benefits of joining groups for a wide range of group compositions under varying abundances of two types of prey: prey on substrates and mobile prey. Our model predicts that gleaners and salliers show a conflict of interest in multi-species groups, because gleaners benefit from increasing numbers of salliers in the group, whereas salliers benefit from increasing gleaner numbers. The model also predicts that the limits to size and variability in composition of multi-species groups are driven by the relative abundance of different types of prey, independent of predation pressure. Our model emphasises resources as a primary driver of temporal and spatial group dynamics, rather than reproductive activity or predation per se, which have hitherto been thought to explain patterns of multi-species group formation and cohesion. The qualitative predictions of the model are supported by empirical patterns from both terrestrial and marine multi-species groups, suggesting that similar mechanisms might underlie group dynamics in a range of taxa. The model also makes novel predictions about group dynamics that can be tested using variation across space and time.
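The predicted conflict of interest can be illustrated with toy per-capita payoffs in which a gleaner's intake grows with the number of salliers (better sentinels) while a sallier's intake grows with the number of gleaners (more flushed prey). The functional forms and constants below are hypothetical, chosen only to mirror the qualitative claim:

```python
def gleaner_payoff(n_gleaners, n_salliers, substrate_prey=10.0):
    """Toy payoff: share substrate prey with other gleaners, gain a saturating
    vigilance benefit from salliers."""
    food = substrate_prey / (1 + n_gleaners)
    safety = n_salliers / (1 + n_salliers)
    return food * (1 + safety)

def sallier_payoff(n_gleaners, n_salliers, mobile_prey=10.0):
    """Toy payoff: share mobile prey with other salliers, plus a saturating
    benefit from prey flushed by gleaners."""
    food = mobile_prey / (1 + n_salliers)
    flushed = 0.5 * n_gleaners / (1 + n_gleaners)
    return food * (1 + flushed)
```

Each guild's payoff rises with the other guild's numbers but falls with its own, which is the asymmetric dependence the abstract describes.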

  3. Modeling Hydrodynamics on the Wave Group Scale in Topographically Complex Reef Environments

    Science.gov (United States)

    Reyns, J.; Becker, J. M.; Merrifield, M. A.; Roelvink, J. A.

    2016-02-01

    The knowledge of the characteristics of waves and the associated wave-driven currents is important for sediment transport and morphodynamics, nutrient dynamics and larval dispersion within coral reef ecosystems. Reef-lined coasts differ from sandy beaches in that they have a steep offshore slope, that the non-sandy bottom topography is very rough, and that the distance between the point of maximum short wave dissipation and the actual coastline is usually large. At this short wave breakpoint, long waves are released, and these infragravity (IG) scale motions account for the bulk of the water level variance on the reef flat, the lagoon and eventually, the sandy beaches fronting the coast through run-up. These IG energy dominated water level motions are reinforced during extreme events such as cyclones or swells through larger incident band wave heights and low frequency wave resonance on the reef. Recently, a number of hydro(-morpho)dynamic models that have the capability to model these IG waves have successfully been applied to morphologically differing reef environments. One of these models is the XBeach model, which is curvilinear in nature. This poses serious problems when trying to model an entire atoll for example, as it is extremely difficult to build curvilinear grids that are optimal for the simulation of hydrodynamic processes, while maintaining the topology in the grid. One solution to remediate this problem of grid connectivity is the use of unstructured grids. We present an implementation of the wave action balance on the wave group scale with feedback to the flow momentum balance, which is the foundation of XBeach, within the framework of the unstructured Delft3D Flexible Mesh model. The model can be run in stationary as well as in instationary mode, and it can be forced by regular waves, time series or wave spectra. 
We show how the code is capable of modeling the wave generated flow at a number of topographically complex reef sites and for a number of
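The wave-group-scale forcing that releases the long waves can be caricatured in 1-D with linear wave theory: the gradient of the radiation stress of the (group-modulated, dissipating) short waves forces the infragravity motion. A highly simplified sketch, not the unstructured implementation described above:

```python
import numpy as np

def radiation_stress(E, n):
    """Linear-theory radiation stress Sxx = (2n - 1/2) E, with n = cg/c."""
    return (2 * n - 0.5) * E

def longwave_forcing(E, dx, n=1.0):
    """Forcing of the infragravity motion: Fx = -dSxx/dx (shallow water: n ~ 1)."""
    Sxx = radiation_stress(np.asarray(E, dtype=float), n)
    return -np.gradient(Sxx, dx)

# Wave-group-modulated short-wave energy decaying across a reef face
x = np.linspace(0.0, 200.0, 201)
E = 100.0 * np.exp(-x / 50.0) * (1 + 0.5 * np.sin(2 * np.pi * x / 80.0))
Fx = longwave_forcing(E, dx=1.0)
```

A purely decaying energy field yields a positive (shoreward) forcing, i.e. wave setup; the group modulation superimposes the oscillatory forcing that drives the IG band.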

  4. Virtualizing ancient Rome: 3D acquisition and modeling of a large plaster-of-Paris model of imperial Rome

    Science.gov (United States)

    Guidi, Gabriele; Frischer, Bernard; De Simone, Monica; Cioci, Andrea; Spinetti, Alessandro; Carosso, Luca; Micoli, Laura L.; Russo, Michele; Grasso, Tommaso

    2005-01-01

    Computer modeling through digital range images has been used for many applications, including 3D modeling of objects belonging to our cultural heritage. The scales involved range from small objects (e.g. pottery), to middle-sized works of art (statues, architectural decorations), up to very large structures (architectural and archaeological monuments). For any of these applications, suitable sensors and methodologies have been explored by different authors. The object to be modeled within this project is the "Plastico di Roma antica," a large plaster-of-Paris model of imperial Rome (16x17 meters) created in the last century. Its overall size therefore demands an acquisition approach typical of large structures, but it is also characterized by extremely tiny details typical of small objects (houses are a few centimeters high; their doors, windows, etc. are smaller than 1 centimeter). This paper gives an account of the procedures followed for solving this "contradiction" and describes how a huge 3D model was acquired and generated by using a special metrology Laser Radar. The procedures for reorienting the huge point clouds obtained after each acquisition phase into a single reference system, thanks to the measurement of fixed redundant references, are described. The data set was split into smaller sub-areas of 2 x 2 meters each for purposes of mesh editing. This subdivision was necessary owing to the huge number of points in each individual scan (50-60 million). The final merge of the edited parts made it possible to create a single mesh. All these processes were carried out with software specifically designed for this project, since no commercial package could be found that was suitable for managing such a large number of points. Preliminary models are presented. Finally, the significance of the project is discussed in terms of the overall project known as "Rome Reborn," of which the present acquisition is an important component.
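The tiling of the merged point cloud into 2 x 2 m sub-areas for mesh editing can be sketched as a simple spatial binning step (illustrative only; the project used dedicated software for the actual processing):

```python
from collections import defaultdict

def tile_point_cloud(points, tile_size=2.0):
    """Bin (x, y, z) points into square tiles of tile_size meters in the xy-plane."""
    tiles = defaultdict(list)
    for x, y, z in points:
        key = (int(x // tile_size), int(y // tile_size))
        tiles[key].append((x, y, z))
    return tiles

# A 16 x 17 m footprint like the "Plastico di Roma antica" would span
# roughly (16/2) * (17/2) ~ 72 such tiles when fully covered.
pts = [(0.5, 0.5, 0.1), (1.9, 0.2, 0.05), (2.1, 0.2, 0.07), (15.5, 16.5, 0.3)]
tiles = tile_point_cloud(pts)
```

Each tile can then be meshed and edited independently before the final merge, which keeps the per-tile point count manageable.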

  5. Material model for non-linear finite element analyses of large concrete structures

    NARCIS (Netherlands)

    Engen, Morten; Hendriks, M.A.N.; Øverli, Jan Arve; Åldstedt, Erik; Beushausen, H.

    2016-01-01

    A fully triaxial material model for concrete was implemented in a commercial finite element code. The only required input parameter was the cylinder compressive strength. The material model was suitable for non-linear finite element analyses of large concrete structures. The importance of including

  6. The pig as a large animal model for influenza a virus infection

    DEFF Research Database (Denmark)

    Skovgaard, Kerstin; Brogaard, Louise; Larsen, Lars Erik

    It is increasingly realized that large animal models like the pig are exceptionally human-like and serve as excellent models for disease and inflammation. Pigs are fully susceptible to human influenza and share many similarities with humans regarding lung physiology and innate immune cell...

  7. Large scale solar district heating. Evaluation, modelling and designing - Appendices

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

    The appendices present the following: A) Cad-drawing of the Marstal CSHP design. B) Key values - large-scale solar heating in Denmark. C) Monitoring - a system description. D) WMO-classification of pyranometers (solarimeters). E) The computer simulation model in TRNSYS. F) Selected papers from the author. (EHS)

  8. Large Scale Skill in Regional Climate Modeling and the Lateral Boundary Condition Scheme

    Science.gov (United States)

    Veljović, K.; Rajković, B.; Mesinger, F.

    2009-04-01

    Several points are made concerning the somewhat controversial issue of regional climate modeling: should a regional climate model (RCM) be expected to maintain the large-scale skill of the driver global model that is supplying its lateral boundary condition (LBC)? Given that this is normally desired, is it able to do so without help via the fairly popular large-scale nudging? Specifically, without such nudging, will the RCM kinetic energy necessarily decrease with time compared to that of the driver model or analysis data, as suggested by a study using the Regional Atmospheric Modeling System (RAMS)? Finally, can the lateral boundary condition scheme make a difference: is the almost universally used but somewhat costly relaxation scheme necessary for a desirable RCM performance? Experiments are made to explore these questions by running the Eta model in two versions differing in the lateral boundary scheme used. One of these schemes is the traditional relaxation scheme, and the other the Eta model scheme, in which information is used at the outermost boundary only, and not all variables are prescribed at the outflow boundary. Forecast lateral boundary conditions are used, and results are verified against the analyses. Thus, the skill of the two RCM forecasts can be and is compared not only against each other but also against that of the driver global forecast. A novel verification method is used in the manner of customary precipitation verification, in that the forecast spatial wind speed distribution is verified against analyses by calculating bias-adjusted equitable threat scores and bias scores for wind speeds greater than chosen thresholds. In this way, focusing on a high wind speed value in the upper troposphere, we suggest that verification of large-scale features can be done in a manner that may be more physically meaningful than verification via spectral decomposition, which is a standard RCM verification method. The results we have at this point are somewhat
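The threat-score verification rests on the standard 2x2 contingency counts for an exceedance event. A minimal sketch of the (unadjusted) equitable threat score, ETS = (H - H_rand)/(H + M + F - H_rand) with H_rand = (H + M)(H + F)/N, and the bias score (H + F)/(H + M); the bias adjustment used in the paper is omitted:

```python
def contingency(forecast, observed, threshold):
    """Count hits, misses, false alarms, correct negatives for an exceedance event."""
    hits = misses = false_alarms = correct_neg = 0
    for f, o in zip(forecast, observed):
        fe, oe = f > threshold, o > threshold
        if fe and oe:
            hits += 1
        elif oe:
            misses += 1
        elif fe:
            false_alarms += 1
        else:
            correct_neg += 1
    return hits, misses, false_alarms, correct_neg

def equitable_threat_score(hits, misses, false_alarms, correct_neg):
    n = hits + misses + false_alarms + correct_neg
    hits_random = (hits + misses) * (hits + false_alarms) / n
    return (hits - hits_random) / (hits + misses + false_alarms - hits_random)

def bias_score(hits, misses, false_alarms):
    return (hits + false_alarms) / (hits + misses)
```

Applied to wind speeds, `forecast` and `observed` would be gridded speed fields and `threshold` one of the chosen wind-speed thresholds.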

  9. One-velocity neutron diffusion calculations based on a two-group reactor model

    Energy Technology Data Exchange (ETDEWEB)

    Bingulac, S; Radanovic, L; Lazarevic, B; Matausek, M; Pop-Jordanov, J [Boris Kidric Institute of Nuclear Sciences, Vinca, Belgrade (Yugoslavia)

    1965-07-01

    Many processes in reactor physics are described by the energy-dependent neutron diffusion equations, which for many practical purposes can often be reduced to one-dimensional two-group equations. Though such two-group models are satisfactory from the standpoint of accuracy, they require rather extensive computations which are usually iterative and involve the use of digital computers. In many applications, however, and particularly in dynamic analyses, where the studies are performed on analogue computers, it is preferable to avoid iterative calculations. The usual practice in such situations is to resort to one-group models, which allow the solution to be expressed analytically. However, the loss in accuracy is rather great, particularly when several media of different properties are involved. This paper describes a procedure by which the solution of the two-group neutron diffusion equations can be expressed analytically in a form which, from the computational standpoint, is as simple as the one-group model, but retains the accuracy of the two-group treatment. In describing the procedure, the case of a multi-region nuclear reactor of cylindrical geometry is treated, but the method applied and the results obtained are of more general application. Another approach to the approximate solution of diffusion equations, suggested by Galanin, is applicable only in special ideal cases.
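The flavor of the two-group treatment can be illustrated with the textbook two-group multiplication factor for a bare homogeneous reactor, where the fast and thermal balances couple through downscattering (this is the standard result, not the paper's specific multi-region procedure):

```python
def k_eff_two_group(nu_sig_f1, nu_sig_f2, sig_a1, sig_a2, sig_s12, D1, D2, B2):
    """Two-group k_eff for a bare homogeneous reactor with geometric buckling B2.

    Group 1 = fast, group 2 = thermal; sig_s12 is the downscattering cross
    section. Standard textbook formula:
    k = (nuSf1*(Sa2+D2*B2) + nuSf2*Ss12) / ((Sa1+Ss12+D1*B2)*(Sa2+D2*B2))
    """
    removal1 = sig_a1 + sig_s12 + D1 * B2   # fast-group removal
    removal2 = sig_a2 + D2 * B2             # thermal-group removal
    return (nu_sig_f1 * removal2 + nu_sig_f2 * sig_s12) / (removal1 * removal2)
```

With thermal fissions only and zero buckling this reduces to the familiar resonance-free four-factor structure, and k_eff decreases monotonically with leakage (larger B2), as expected.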

  10. Model Experiments for the Determination of Airflow in Large Spaces

    DEFF Research Database (Denmark)

    Nielsen, Peter V.

    Model experiments are one of the methods used for the determination of airflow in large spaces. This paper will discuss the formation of the governing dimensionless numbers. It is shown that experiments with a reduced scale often will necessitate a fully developed turbulence level of the flow. Details of the flow from supply openings are very important for the determination of room air distribution. It is in some cases possible to make a simplified supply opening for the model experiment.

  11. Reviewing the Role of Stakeholders in Operational Research: Opportunities for Group Model Building

    NARCIS (Netherlands)

    Gooyert, V. de; Rouwette, E.A.J.A.; Kranenburg, H.L. van

    2013-01-01

    Stakeholders have always received much attention in system dynamics, especially in the group model building tradition, which emphasizes the deep involvement of a client group in building a system dynamics model. In organizations, stakeholders are gaining more and more attention by managers who try

  12. Large-scale groundwater modeling using global datasets: a test case for the Rhine-Meuse basin

    NARCIS (Netherlands)

    Sutanudjaja, E.H.; Beek, L.P.H. van; Jong, S.M. de; Geer, F.C. van; Bierkens, M.F.P.

    2011-01-01

    The current generation of large-scale hydrological models does not include a groundwater flow component. Large-scale groundwater models, involving aquifers and basins of multiple countries, are still rare mainly due to a lack of hydro-geological data which are usually only available in

  13. Large-scale groundwater modeling using global datasets: A test case for the Rhine-Meuse basin

    NARCIS (Netherlands)

    Sutanudjaja, E.H.; Beek, L.P.H. van; Jong, S.M. de; Geer, F.C. van; Bierkens, M.F.P.

    2011-01-01

    The current generation of large-scale hydrological models does not include a groundwater flow component. Large-scale groundwater models, involving aquifers and basins of multiple countries, are still rare mainly due to a lack of hydro-geological data which are usually only available in developed

  14. A modified two-fluid model for the application of two-group interfacial area transport equation

    International Nuclear Information System (INIS)

    Sun, X.; Ishii, M.; Kelly, J.

    2003-01-01

    This paper presents the modified two-fluid model that is ready to be applied in the approach of the two-group interfacial area transport equation. The two-group interfacial area transport equation was developed to provide a mechanistic constitutive relation for the interfacial area concentration in the two-fluid model. In the two-group transport equation, bubbles are categorized into two groups: spherical/distorted bubbles as Group 1 and cap/slug/churn-turbulent bubbles as Group 2. Therefore, this transport equation can be employed in flow regimes spanning from bubbly, cap bubbly, and slug to churn-turbulent flows. However, the introduction of the two groups of bubbles requires two gas velocity fields. Yet it is not desirable to solve two momentum equations for the gas phase alone. In the current modified two-fluid model, a simplified approach is proposed. The momentum equation for the averaged velocity of both Group-1 and Group-2 bubbles is retained. By doing so, the velocity difference between Group-1 and Group-2 bubbles needs to be determined. This may be done either on the basis of simplified momentum equations for both Group-1 and Group-2 bubbles or with a modified drift-flux model.
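One way to picture the drift-flux route to the group velocity difference: give each bubble group its own drift-flux closure, v_gk = C0k * j + v_gjk, and difference the two. The closure constants below are placeholders for illustration, not values from the paper:

```python
def group_velocity(j, C0, v_gj):
    """Drift-flux form: gas group velocity = C0 * j + v_gj,
    with j the mixture volumetric flux."""
    return C0 * j + v_gj

def group_velocity_difference(j, C0_1=1.2, v_gj_1=0.25, C0_2=1.1, v_gj_2=0.55):
    """Velocity difference between Group-2 (cap/slug) and Group-1
    (spherical/distorted) bubbles from separate drift-flux closures
    (placeholder constants)."""
    return group_velocity(j, C0_2, v_gj_2) - group_velocity(j, C0_1, v_gj_1)
```

Given this difference and the single mixture-gas momentum equation, both group velocities can be reconstructed without solving a second gas momentum equation, which is the simplification the abstract describes.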

  15. Numerical Modeling of Large-Scale Rocky Coastline Evolution

    Science.gov (United States)

    Limber, P.; Murray, A. B.; Littlewood, R.; Valvo, L.

    2008-12-01

    Seventy-five percent of the world's ocean coastline is rocky. On large scales (i.e. greater than a kilometer), many intertwined processes drive rocky coastline evolution, including coastal erosion and sediment transport, tectonics, antecedent topography, and variations in sea cliff lithology. In areas such as California, an additional aspect of rocky coastline evolution involves submarine canyons that cut across the continental shelf and extend into the nearshore zone. These types of canyons intercept alongshore sediment transport and flush sand to abyssal depths during periodic turbidity currents, thereby delineating coastal sediment transport pathways and affecting shoreline evolution over large spatial and time scales. How tectonic, sediment transport, and canyon processes interact with inherited topographic and lithologic settings to shape rocky coastlines remains an unanswered, and largely unexplored, question. We will present numerical model results of rocky coastline evolution that starts with an immature fractal coastline. The initial shape is modified by headland erosion, wave-driven alongshore sediment transport, and submarine canyon placement. Our previous model results have shown that, as expected, an initial sediment-free irregularly shaped rocky coastline with homogeneous lithology will undergo smoothing in response to wave attack; headlands erode and mobile sediment is swept into bays, forming isolated pocket beaches. As this diffusive process continues, pocket beaches coalesce, and a continuous sediment transport pathway results. However, when a randomly placed submarine canyon is introduced to the system as a sediment sink, the end results are wholly different: sediment cover is reduced, which in turn increases weathering and erosion rates and causes the entire shoreline to move landward more rapidly. The canyon's alongshore position also affects coastline morphology. 
When placed offshore of a headland, the submarine canyon captures local sediment
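The diffusive smoothing of an irregular coastline under wave-driven alongshore transport can be caricatured with a 1-D diffusion step on the cross-shore position of the shoreline (a schematic of the qualitative behavior only, not the authors' model, which also includes canyons and lithology):

```python
import numpy as np

def smooth_coastline(y, D=0.2, steps=200):
    """Explicit diffusion of shoreline position y(x): headlands erode, bays fill.

    D is a dimensionless diffusion number; the explicit scheme is stable
    for D <= 0.5.
    """
    y = np.asarray(y, dtype=float).copy()
    for _ in range(steps):
        y[1:-1] += D * (y[2:] - 2 * y[1:-1] + y[:-2])
    return y

# Irregular initial coastline: large-scale headlands plus small-scale roughness
x = np.linspace(0, 2 * np.pi, 101)
y0 = np.sin(3 * x) + 0.3 * np.sin(11 * x)
y1 = smooth_coastline(y0)
```

Short-wavelength irregularity decays fastest, which is the "headlands erode, pocket beaches coalesce" behavior; a submarine canyon would enter such a sketch as a local sediment sink that breaks this smoothing.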

  16. Framing Negotiation: Dynamics of Epistemological and Positional Framing in Small Groups during Scientific Modeling

    Science.gov (United States)

    Shim, Soo-Yean; Kim, Heui-Baik

    2018-01-01

    In this study, we examined students' epistemological and positional framing during small group scientific modeling to explore their context-dependent perceptions about knowledge, themselves, and others. We focused on two small groups of Korean eighth-grade students who participated in six modeling activities about excretion. The two groups were…

  17. Description of group-theoretical model of developed turbulence

    International Nuclear Information System (INIS)

    Saveliev, V L; Gorokhovski, M A

    2008-01-01

    We propose to associate the phenomenon of stationary turbulence with the special self-similar solutions of the Euler equations. These solutions represent the linear superposition of eigenfields of spatial symmetry subgroup generators and imply their dependence on time through the parameter of the symmetry transformation only. From this model, it follows that for developed turbulent process, changing the scale of averaging (filtering) of the velocity field is equivalent to composition of scaling, translation and rotation transformations. We call this property a renormalization-group invariance of filtered turbulent fields. The renormalization group invariance provides an opportunity to transform the averaged Navier-Stokes equation over a small scale (inner threshold of the turbulence) to larger scales by simple scaling. From the methodological point of view, it is significant to note that the turbulent viscosity term appeared not as a result of averaging of the nonlinear term in the Navier-Stokes equation, but from the molecular viscosity term with the help of renormalization group transformation.

  18. On the standard model group in F-theory

    International Nuclear Information System (INIS)

    Choi, Kang-Sin

    2014-01-01

    We analyze the standard model gauge group SU(3) x SU(2) x U(1) constructed in F-theory. The non-Abelian part SU(3) x SU(2) is described by a surface singularity of Kodaira type. Blow-up analysis shows that the non-Abelian part is distinguished from the naive product of SU(3) and SU(2), but that it should be a rank-three group along the chain of E_n groups, because it has a non-generic gauge symmetry enhancement structure responsible for the desirable matter curves. The Abelian part U(1) is constructed from a globally valid two-form with the desired gauge quantum numbers, using a method similar to the decomposition (factorization) method of the spectral cover. This technique makes use of an extra section in the elliptic fiber of the Calabi-Yau manifold, on which F-theory is compactified. Conventional gauge coupling unification of SU(5) is achieved, without requiring a threshold correction from the flux along the hypercharge direction. (orig.)

  19. Holographic renormalization group and cosmology in theories with quasilocalized gravity

    International Nuclear Information System (INIS)

    Csaki, Csaba; Erlich, Joshua; Hollowood, Timothy J.; Terning, John

    2001-01-01

    We study the long distance behavior of brane theories with quasilocalized gravity. The five-dimensional (5D) effective theory at large scales follows from a holographic renormalization group flow. As intuitively expected, the graviton is effectively four dimensional at intermediate scales and becomes five dimensional at large scales. However, in the holographic effective theory the essentially 4D radion dominates at long distances and gives rise to scalar antigravity. The holographic description shows that at large distances the Gregory-Rubakov-Sibiryakov (GRS) model is equivalent to the model recently proposed by Dvali, Gabadadze, and Porrati (DGP), where a tensionless brane is embedded into 5D Minkowski space, with an additional induced 4D Einstein-Hilbert term on the brane. In the holographic description the radion of the GRS model is automatically localized on the tensionless brane, and provides the ghostlike field necessary to cancel the extra graviton polarization of the DGP model. Thus, there is a holographic duality between these theories. This analysis provides physical insight into how the GRS model works at intermediate scales; in particular it sheds light on the size of the width of the graviton resonance, and also demonstrates how the holographic renormalization group can be used as a practical tool for calculations

  20. Dominance Weighted Social Choice Functions for Group Recommendations

    Directory of Open Access Journals (Sweden)

    Silvia ROSSI

    2015-12-01

    Full Text Available In travel domains, decision support systems provide support to tourists in the planning of their vacation. In particular, when the number of possible Points of Interest (POI) to visit is large, the system should help tourists by providing recommendations on the POI that could be most interesting for them. Since traveling is usually an activity that involves small groups of people, the system should take the preferences of each group member into account simultaneously. At the same time, it should also model possible intra-group relationships, which can have an impact on the group decision-making process. In this paper, we model this problem as a multi-agent aggregation of preferences by using weighted social choice functions, where such weights are automatically evaluated by analyzing the interactions of the group's members on Online Social Networks.
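A weighted social choice function can be sketched as a Borda-style aggregation in which each member's scores are multiplied by an influence weight. The weights here are given directly for illustration; in the paper they are derived from interactions on Online Social Networks:

```python
def weighted_borda(rankings, weights):
    """Aggregate best-first POI rankings with per-member weights (weighted Borda)."""
    scores = {}
    for member, ranking in rankings.items():
        w = weights[member]
        n = len(ranking)
        for pos, poi in enumerate(ranking):
            # Borda points: n-1 for first place, down to 0 for last, scaled by w
            scores[poi] = scores.get(poi, 0.0) + w * (n - 1 - pos)
    return max(scores, key=scores.get), scores

rankings = {
    "ann": ["museum", "park", "cathedral"],
    "bob": ["park", "cathedral", "museum"],
    "eve": ["cathedral", "park", "museum"],
}
weights = {"ann": 4.0, "bob": 1.0, "eve": 1.0}   # ann dominates the group
winner, scores = weighted_borda(rankings, weights)
```

With equal weights the compromise option "park" wins; with a dominant member the aggregation tilts toward that member's top choice, which is how intra-group dominance relationships can shape the recommendation.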

  1. Multivariate sparse group lasso for the multivariate multiple linear regression with an arbitrary group structure.

    Science.gov (United States)

    Li, Yanming; Nan, Bin; Zhu, Ji

    2015-06-01

    We propose a multivariate sparse group lasso variable selection and estimation method for data with high-dimensional predictors as well as high-dimensional response variables. The method is carried out through a penalized multivariate multiple linear regression model with an arbitrary group structure for the regression coefficient matrix. It suits many biology studies well in detecting associations between multiple traits and multiple predictors, with each trait and each predictor embedded in some biological functional groups such as genes, pathways or brain regions. The method is able to effectively remove unimportant groups as well as unimportant individual coefficients within important groups, particularly for large p small n problems, and is flexible in handling various complex group structures such as overlapping or nested or multilevel hierarchical structures. The method is evaluated through extensive simulations with comparisons to the conventional lasso and group lasso methods, and is applied to an eQTL association study. © 2015, The International Biometric Society.
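The key operation behind group-lasso-type estimators, including the one above, is the block soft-thresholding proximal operator, which zeroes out an entire coefficient group when its norm falls below the penalty. A minimal sketch (the sparse group lasso adds a further elementwise soft-threshold, omitted here):

```python
import numpy as np

def block_soft_threshold(beta_group, lam):
    """Proximal operator of the group-lasso penalty lam * ||beta_group||_2:
    shrink the whole group toward zero, or remove it entirely."""
    norm = np.linalg.norm(beta_group)
    if norm <= lam:
        return np.zeros_like(beta_group)
    return (1 - lam / norm) * beta_group

weak = block_soft_threshold(np.array([0.1, -0.2]), lam=0.5)   # whole group removed
strong = block_soft_threshold(np.array([3.0, 4.0]), lam=0.5)  # shrunk but kept
```

Iterating this operator group by group inside a (block) coordinate-descent loop is the standard way such penalized regressions are fitted.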

  2. Exploring the Group Prenatal Care Model: A Critical Review of the Literature

    Science.gov (United States)

    Thielen, Kathleen

    2012-01-01

    Few studies have compared perinatal outcomes between individual prenatal care and group prenatal care. A critical review of research articles that were published between 1998 and 2009 and involved participants of individual and group prenatal care was conducted. Two middle range theories, Pender’s health promotion model and Swanson’s theory of caring, were blended to enhance conceptualization of the relationship between pregnant women and the group prenatal care model. Among the 17 research studies that met inclusion criteria for this critical review, five examined gestational age and birth weight with researchers reporting longer gestations and higher birth weights in infants born to mothers participating in group prenatal care, especially in the preterm birth population. Current evidence demonstrates that nurse educators and leaders should promote group prenatal care as a potential method of improving perinatal outcomes within the pregnant population. PMID:23997549

  3. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    This paper presents a comprehensive approach to sensitivity and uncertainty analysis of large-scale computer models that is analytic (deterministic) in principle and that is firmly based on the model equations. The theory and application of two systems based upon computer calculus, GRESS and ADGEN, are discussed relative to their role in calculating model derivatives and sensitivities without a prohibitive initial manpower investment. Storage and computational requirements for these two systems are compared for a gradient-enhanced version of the PRESTO-II computer model. A Deterministic Uncertainty Analysis (DUA) method that retains the characteristics of analytically computing result uncertainties based upon parameter probability distributions is then introduced and results from recent studies are shown. 29 refs., 4 figs., 1 tab
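The deterministic uncertainty idea, propagating parameter uncertainties through model derivatives rather than by sampling, can be sketched with first-order propagation: sigma_R^2 ~ sum_i (dR/dp_i)^2 * sigma_i^2 for independent parameters. GRESS and ADGEN obtain the derivatives by computer calculus; central finite differences are used below purely for illustration:

```python
def sensitivities(model, params, h=1e-6):
    """Central finite-difference derivatives of model(params) w.r.t. each parameter."""
    derivs = []
    for i in range(len(params)):
        up = list(params); up[i] += h
        dn = list(params); dn[i] -= h
        derivs.append((model(up) - model(dn)) / (2 * h))
    return derivs

def first_order_uncertainty(derivs, sigmas):
    """First-order (deterministic) propagated standard deviation."""
    return sum((d * s) ** 2 for d, s in zip(derivs, sigmas)) ** 0.5

model = lambda p: p[0] * p[1] + p[2] ** 2     # toy model response
derivs = sensitivities(model, [2.0, 3.0, 1.0])
sigma_R = first_order_uncertainty(derivs, [0.1, 0.1, 0.1])
```

For a large code the finite-difference loop would be prohibitive, which is exactly why the paper's analytic, computer-calculus derivatives matter.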

  4. 3D continuum phonon model for group-IV 2D materials

    DEFF Research Database (Denmark)

    Willatzen, Morten; Lew Yan Voon, Lok C.; Gandi, Appala Naidu

    2017-01-01

    ... In this paper, we use the model to not only compare the phonon spectra among the group-IV materials but also to study whether these phonons differ from those of a compound material such as molybdenum disulfide. The origin of quadratic modes is clarified. Mode coupling for both graphene and silicene is obtained, contrary to previous works. Our model allows us to predict the existence of confined optical phonon modes for the group-IV materials but not for molybdenum disulfide. A comparison of the long-wavelength modes to density-functional results is included.

  5. Modeling Resource Utilization of a Large Data Acquisition System

    CERN Document Server

    AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger

    2017-01-01

    The ATLAS 'Phase-II' upgrade, scheduled to start in 2024, will significantly change the requirements under which the data-acquisition system operates. The input data rate, currently fixed around 150 GB/s, is anticipated to reach 5 TB/s. In order to deal with the challenging conditions, and exploit the capabilities of newer technologies, a number of architectural changes are under consideration. Of particular interest is a new component, known as the Storage Handler, which will provide a large buffer area decoupling real-time data taking from event filtering. Dynamic operational models of the upgraded system can be used to identify the required resources and to select optimal techniques. In order to achieve a robust and dependable model, the current data-acquisition architecture has been used as a test case. This makes it possible to verify and calibrate the model against real operation data. Such a model can then be evolved toward the future ATLAS Phase-II architecture. In this paper we introduce the current ...
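The scale of the Storage Handler buffer can be bounded with back-of-the-envelope arithmetic from the quoted input rate: decoupling data taking from event filtering for a time T at rate R requires roughly R x T of storage. Only the 5 TB/s rate comes from the text; the decoupling times and accept fraction below are hypothetical:

```python
def buffer_size_tb(rate_tb_per_s, decouple_seconds, accept_fraction=1.0):
    """Storage needed to absorb decouple_seconds of data at rate_tb_per_s,
    optionally reduced if only a fraction of the stream is buffered."""
    return rate_tb_per_s * decouple_seconds * accept_fraction

one_hour_full = buffer_size_tb(5.0, 3600)   # a full hour at 5 TB/s
```

Even one hour of full-rate decoupling is in the multi-petabyte range, which is the kind of resource question the dynamic operational models are meant to answer quantitatively.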

  6. Modelling Resource Utilization of a Large Data Acquisition System

    CERN Document Server

    Santos, Alejandro; The ATLAS collaboration

    2017-01-01

    The ATLAS 'Phase-II' upgrade, scheduled to start in 2024, will significantly change the requirements under which the data-acquisition system operates. The input data rate, currently fixed around 150 GB/s, is anticipated to reach 5 TB/s. In order to deal with the challenging conditions, and exploit the capabilities of newer technologies, a number of architectural changes are under consideration. Of particular interest is a new component, known as the Storage Handler, which will provide a large buffer area decoupling real-time data taking from event filtering. Dynamic operational models of the upgraded system can be used to identify the required resources and to select optimal techniques. In order to achieve a robust and dependable model, the current data-acquisition architecture has been used as a test case. This makes it possible to verify and calibrate the model against real operation data. Such a model can then be evolved toward the future ATLAS Phase-II architecture. In this paper we introduce the current ...

  7. Large-scale groundwater modeling using global datasets: A test case for the Rhine-Meuse basin

    NARCIS (Netherlands)

    Sutanudjaja, E.H.; Beek, L.P.H. van; Jong, S.M. de; Geer, F.C. van; Bierkens, M.F.P.

    2011-01-01

    Large-scale groundwater models involving aquifers and basins of multiple countries are still rare due to a lack of hydrogeological data which are usually only available in developed countries. In this study, we propose a novel approach to construct large-scale groundwater models by using global

  8. REIONIZATION ON LARGE SCALES. I. A PARAMETRIC MODEL CONSTRUCTED FROM RADIATION-HYDRODYNAMIC SIMULATIONS

    International Nuclear Information System (INIS)

    Battaglia, N.; Trac, H.; Cen, R.; Loeb, A.

    2013-01-01

    We present a new method for modeling inhomogeneous cosmic reionization on large scales. Utilizing high-resolution radiation-hydrodynamic simulations with 2048³ dark matter particles, 2048³ gas cells, and 17 billion adaptive rays in an L = 100 Mpc h⁻¹ box, we show that the density and reionization redshift fields are highly correlated on large scales (≳ 1 Mpc h⁻¹). This correlation can be statistically represented by a scale-dependent linear bias. We construct a parametric function for the bias, which is then used to filter any large-scale density field to derive the corresponding spatially varying reionization redshift field. The parametric model has three free parameters that can be reduced to one free parameter when we fit the two bias parameters to simulation results. We can differentiate degenerate combinations of the bias parameters by combining results for the global ionization histories and correlation length between ionized regions. Unlike previous semi-analytic models, the evolution of the reionization redshift field in our model is directly compared cell by cell against simulations and performs well in all tests. Our model maps the high-resolution, intermediate-volume radiation-hydrodynamic simulations onto lower-resolution, larger-volume N-body simulations (≳ 2 Gpc h⁻¹) in order to make mock observations and theoretical predictions.
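    As a toy illustration of the filtering step described above, the sketch below applies an assumed scale-dependent linear bias b(k) to a random density field in Fourier space to produce a spatially varying reionization-redshift field. The bias form b(k) = b0/(1 + k/k0)**alpha and all parameter values are placeholders, not the paper's fitted function.

    ```python
    import numpy as np

    # Hypothetical bias form and parameters (b0, k0, alpha): illustrative only.
    rng = np.random.default_rng(0)
    n = 64                                   # small grid for speed
    delta = rng.normal(size=(n, n, n))       # stand-in large-scale density field

    # |k| on the FFT grid (grid units)
    k = np.fft.fftfreq(n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)

    b0, k0, alpha = 1.0, 0.05, 2.0
    bias = b0 / (1.0 + kmag / k0)**alpha     # scale-dependent linear bias b(k)

    # Filter the density field: delta_z(k) = b(k) * delta(k)
    delta_z = np.real(np.fft.ifftn(bias * np.fft.fftn(delta)))

    # Map the filtered field onto a reionization-redshift field around a mean z
    z_mean, sigma_z = 8.0, 1.0
    z_reion = z_mean + sigma_z * delta_z / delta_z.std()
    ```

    The low-pass character of the bias means the derived redshift field varies only on large scales, which is the point of the parametric approach.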

  9. Application of simplified models to CO2 migration and immobilization in large-scale geological systems

    KAUST Repository

    Gasda, Sarah E.

    2012-07-01

    Long-term stabilization of injected carbon dioxide (CO₂) is an essential component of risk management for geological carbon sequestration operations. However, migration and trapping phenomena are inherently complex, involving processes that act over multiple spatial and temporal scales. One example involves centimeter-scale density instabilities in the dissolved CO₂ region leading to large-scale convective mixing that can be a significant driver for CO₂ dissolution. Another example is the potentially important effect of capillary forces, in addition to buoyancy and viscous forces, on the evolution of mobile CO₂. Local capillary effects lead to a capillary transition zone, or capillary fringe, where both fluids are present in the mobile state. This small-scale effect may have a significant impact on large-scale plume migration as well as long-term residual and dissolution trapping. Computational models that can capture both large and small-scale effects are essential to predict the role of these processes on the long-term storage security of CO₂ sequestration operations. Conventional modeling tools are unable to resolve sufficiently all of these relevant processes when modeling CO₂ migration in large-scale geological systems. Herein, we present a vertically-integrated approach to CO₂ modeling that employs upscaled representations of these subgrid processes. We apply the model to the Johansen formation, a prospective site for sequestration of Norwegian CO₂ emissions, and explore the sensitivity of CO₂ migration and trapping to subscale physics. Model results show the relative importance of different physical processes in large-scale simulations. The ability of models such as this to capture the relevant physical processes at large spatial and temporal scales is important for prediction and analysis of CO₂ storage sites. © 2012 Elsevier Ltd.

  10. Group-Wise Herding Behavior in Financial Markets: An Agent-Based Modeling Approach

    Science.gov (United States)

    Kim, Minsung; Kim, Minki

    2014-01-01

    In this paper, we shed light on the dynamic characteristics of rational group behaviors and the relationship between monetary policy and economic units in the financial market by using an agent-based model (ABM), the Hurst exponent, and the Shannon entropy. First, an agent-based model is used to analyze the characteristics of the group behaviors at different levels of irrationality. Second, the Hurst exponent is applied to analyze the characteristics of the trend-following irrationality group. Third, the Shannon entropy is used to analyze the randomness and unpredictability of group behavior. We show that in a system that focuses on macro-monetary policy, steep fluctuations occur, meaning that the medium-level irrationality group has the highest Hurst exponent and Shannon entropy among all of the groups. However, in a system that focuses on micro-monetary policy, all group behaviors follow a stable trend, and the medium irrationality group thus remains stable, too. Likewise, in a system that focuses on both micro- and macro-monetary policies, all groups tend to be stable. Consequently, we find that group behavior varies across economic units at each irrationality level for micro- and macro-monetary policy in the financial market. Together, these findings offer key insights into monetary policy. PMID:24714635
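    The two diagnostics named in the abstract can be sketched with textbook estimators: a rescaled-range (R/S) Hurst exponent and the Shannon entropy of a histogram-discretized series. These are generic implementations, not the authors' exact procedures; window sizes and bin counts are arbitrary choices.

    ```python
    import numpy as np

    def hurst_rs(x, window_sizes=(8, 16, 32, 64)):
        """Hurst exponent from the slope of log(R/S) versus log(window size)."""
        x = np.asarray(x, dtype=float)
        log_n, log_rs = [], []
        for n in window_sizes:
            rs_vals = []
            for start in range(0, len(x) - n + 1, n):
                w = x[start:start + n]
                dev = np.cumsum(w - w.mean())
                r = dev.max() - dev.min()     # range of cumulative deviations
                s = w.std()
                if s > 0:
                    rs_vals.append(r / s)
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_vals)))
        return float(np.polyfit(log_n, log_rs, 1)[0])   # slope = H

    def shannon_entropy(x, bins=16):
        """Shannon entropy (bits) of a histogram-discretized series."""
        counts, _ = np.histogram(x, bins=bins)
        p = counts[counts > 0] / counts.sum()
        return float(-(p * np.log2(p)).sum())

    rng = np.random.default_rng(1)
    white = rng.normal(size=4096)   # uncorrelated series: H should be near 0.5
    H = hurst_rs(white)
    S = shannon_entropy(white)
    ```

    A trend-following group would push H above 0.5 (persistence), while a higher entropy indicates less predictable group behavior, which is how the abstract uses these quantities.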

  11. Development of a transverse mixing model for large scale impulsion phenomenon in tight lattice

    International Nuclear Information System (INIS)

    Liu, Xiaojing; Ren, Shuo; Cheng, Xu

    2017-01-01

    Highlights: • Experiment data of Krauss are used to validate the feasibility of the CFD simulation method. • CFD simulation is performed to reproduce the large-scale impulsion phenomenon for a tight-lattice bundle. • A mixing model for the large-scale impulsion phenomenon is proposed based on fitting of the CFD results. • The newly developed mixing model has been added to the subchannel code. - Abstract: Tight lattices are widely adopted in innovative reactor fuel bundle designs since they increase the conversion ratio and improve the heat transfer between fuel bundles and coolant. It has been noticed that a large-scale impulsion of cross-velocity exists in the gap region, which plays an important role in the transverse mixing flow and heat transfer. Although many experiments and numerical simulations have been carried out to study the impulsion of velocity, a model describing the wavelength, amplitude and frequency of the mixing coefficient is still missing. This work uses the CFD method to simulate the experiment of Krauss and compares the experimental data with the simulation results in order to demonstrate the feasibility of the simulation method and turbulence model. Based on this verified method and model, several simulations are then performed with different Reynolds numbers and different pitch-to-diameter ratios. By fitting the CFD results, a mixing model for the large-scale impulsion phenomenon is proposed and adopted in the current subchannel code. Applying the new mixing model to fuel assembly analysis by subchannel calculation shows that it reduces the hot-channel factor and contributes to a uniform distribution of outlet temperature.

  12. Solving large linear systems in an implicit thermohaline ocean model

    NARCIS (Netherlands)

    de Niet, Arie Christiaan

    2007-01-01

    The climate on earth is largely determined by the global ocean circulation. Hence it is important to predict how the flow will react to perturbation by for example melting icecaps. To answer questions about the stability of the global ocean flow, a computer model has been developed that is able to

  13. Monte Carlo technique for very large Ising models

    Science.gov (United States)

    Kalle, C.; Winkelmann, V.

    1982-08-01

    Rebbi's multispin coding technique is improved and applied to the kinetic Ising model with size 600×600×600. We give the central part of our computer program (for a CDC Cyber 76), which will be helpful also in a simulation of smaller systems, and describe the other tricks necessary to go to large lattices. The magnetization M at T = 1.4 T_c is found to decay asymptotically as exp(-t/2.90) if t is measured in Monte Carlo steps per spin, and M(t = 0) = 1 initially.
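    For readers unfamiliar with the underlying kinetics, here is a plain single-spin-flip Metropolis sketch for a small 3D Ising lattice. The paper's contribution is a multispin-coding trick that packs many spins per machine word, which this illustration deliberately omits; the lattice size, sweep count, and the standard 3D value T_c ≈ 4.51 are generic choices, not taken from the paper.

    ```python
    import numpy as np

    def metropolis_sweep(spins, beta, rng):
        """One Monte Carlo step per spin: N single-spin-flip proposals."""
        n = spins.shape[0]
        for _ in range(spins.size):
            i, j, k = rng.integers(0, n, size=3)
            # sum of the six nearest neighbours with periodic boundaries
            nn = (spins[(i + 1) % n, j, k] + spins[i - 1, j, k] +
                  spins[i, (j + 1) % n, k] + spins[i, j - 1, k] +
                  spins[i, j, (k + 1) % n] + spins[i, j, k - 1])
            dE = 2.0 * spins[i, j, k] * nn            # energy cost of the flip
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i, j, k] *= -1

    rng = np.random.default_rng(2)
    n = 8                                    # tiny lattice (the paper used 600^3)
    spins = np.ones((n, n, n), dtype=int)    # ordered start, M(t = 0) = 1
    beta = 1.0 / (1.4 * 4.51)                # T = 1.4 T_c, as in the abstract
    for _ in range(10):                      # 10 Monte Carlo steps per spin
        metropolis_sweep(spins, beta, rng)
    M = abs(float(spins.mean()))             # magnetization decays above T_c
    ```

    Above T_c the ordered start relaxes toward zero magnetization, the decay the abstract quantifies; multispin coding speeds up exactly this inner loop by updating many lattice sites per word operation.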

  14. A coordination model for ultra-large scale systems of systems

    Directory of Open Access Journals (Sweden)

    Manuela L. Bujorianu

    2013-11-01

    Ultra-large multi-agent systems are becoming increasingly popular due to the rapid decline of individual production costs and the potential for speeding up the solving of complex problems. Examples include nano-robots, systems of nano-satellites for dangerous meteorite detection, and cultures of stem cells for organ regeneration or nerve repair. The topics associated with these systems are usually dealt with within the theories of intelligent swarms or biologically inspired computation systems. Stochastic models play an important role and are based on various formulations of statistical mechanics. In these cases, the main assumption is that the swarm elements have a simple behaviour and that some average properties can be deduced for the entire swarm. In contrast, complex systems in areas like aeronautics are formed by elements with sophisticated, even autonomous, behaviour. In situations like this, a new approach to swarm coordination is necessary. We present a stochastic model where the swarm elements are communicating autonomous systems, the coordination is separated from the components' autonomous activity, and the entire swarm can be abstracted away as a piecewise deterministic Markov process, which constitutes one of the most popular models in stochastic control. Keywords: ultra large multi-agent systems, system of systems, autonomous systems, stochastic hybrid systems.

  15. Incorporating social groups' responses in a descriptive model for second- and higher-order impact identification

    International Nuclear Information System (INIS)

    Sutheerawatthana, Pitch; Minato, Takayuki

    2010-01-01

    The response of a social group is a missing element in the formal impact assessment model. Previous discussion of the involvement of social groups in an intervention has mainly focused on the formation of the intervention. This article discusses the involvement of social groups in a different way. A descriptive model is proposed by incorporating a social group's response into the concept of second- and higher-order effects. The model is developed based on a cause-effect relationship through the observation of phenomena in case studies. The model clarifies the process by which social groups interact with a lower-order effect and then generate a higher-order effect in an iterative manner. This study classifies social groups' responses into three forms (opposing, modifying, and advantage-taking actions) and places them in six pathways. The model is expected to be used as an analytical tool for investigating and identifying impacts in the planning stage, and as a framework for monitoring social groups' responses during the implementation stage of a policy, plan, program, or project (PPPPs).

  16. A Multi-Resolution Spatial Model for Large Datasets Based on the Skew-t Distribution

    KAUST Repository

    Tagle, Felipe

    2017-12-06

    Large, non-Gaussian spatial datasets pose a considerable modeling challenge as the dependence structure implied by the model needs to be captured at different scales, while retaining feasible inference. Skew-normal and skew-t distributions have only recently begun to appear in the spatial statistics literature, without much consideration, however, for the ability to capture dependence at multiple resolutions and simultaneously achieve feasible inference for increasingly large datasets. This article presents the first multi-resolution spatial model inspired by the skew-t distribution, where a large-scale effect follows a multivariate normal distribution and the fine-scale effects follow multivariate skew-normal distributions. The resulting marginal distribution for each region is skew-t, thereby allowing for greater flexibility in capturing skewness and heavy tails characterizing many environmental datasets. Likelihood-based inference is performed using a Monte Carlo EM algorithm. The model is applied as a stochastic generator of daily wind speeds over Saudi Arabia.
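    A univariate sketch of the construction behind such models: a skew-normal variate is built from two Gaussians, then divided by the square root of a chi-square mixing variable so the marginal becomes skew-t. The parameter values (delta, nu) are illustrative only and not taken from the article.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    m = 200_000
    delta = 0.9           # skewness parameter in (-1, 1); assumed value
    nu = 5                # degrees of freedom of the skew-t marginal; assumed

    z0 = np.abs(rng.normal(size=m))                   # half-normal component
    z1 = rng.normal(size=m)
    skew_normal = delta * z0 + np.sqrt(1 - delta**2) * z1

    w = rng.chisquare(nu, size=m) / nu                # random scale (mixing)
    skew_t = skew_normal / np.sqrt(w)                 # heavy-tailed and skewed

    def sample_skewness(x):
        x = x - x.mean()
        return float((x**3).mean() / (x**2).mean()**1.5)
    ```

    The scale mixing fattens the tails without destroying the asymmetry, which is why the skew-t marginal suits wind-speed-like data with both skewness and extremes.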

  17. Exploiting multi-scale parallelism for large scale numerical modelling of laser wakefield accelerators

    International Nuclear Information System (INIS)

    Fonseca, R A; Vieira, J; Silva, L O; Fiuza, F; Davidson, A; Tsung, F S; Mori, W B

    2013-01-01

    A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ∼10⁶ cores and sustained performance over ∼2 PFlops is demonstrated, opening the way for large scale modelling of LWFA scenarios. (paper)

  18. Fast and accurate focusing analysis of large photon sieve using pinhole ring diffraction model.

    Science.gov (United States)

    Liu, Tao; Zhang, Xin; Wang, Lingjie; Wu, Yanxiong; Zhang, Jizhen; Qu, Hemeng

    2015-06-10

    In this paper, we developed a pinhole ring diffraction model for the focusing analysis of a large photon sieve. Instead of analyzing individual pinholes, we discuss the focusing of all of the pinholes in a single ring. An explicit equation for the diffracted field of an individual pinhole ring is proposed. We investigated the validity range of this generalized model and analytically describe the sufficient conditions for its validity. A practical example and investigation reveal the high accuracy of the pinhole ring diffraction model. This simulation method can be used for fast and accurate focusing analysis of a large photon sieve.

  19. Effective models of new physics at the Large Hadron Collider

    International Nuclear Information System (INIS)

    Llodra-Perez, J.

    2011-07-01

    With the start of the Large Hadron Collider runs in 2010, particle physicists will soon be able to better understand electroweak symmetry breaking. They might also answer many experimental and theoretical open questions raised by the Standard Model. Building on this favorable situation, we first present in this thesis a highly model-independent parametrization to characterize the effects of new physics on the production and decay mechanisms of the Higgs boson. This original tool will be easily and directly usable in the data analyses of CMS and ATLAS, the large general-purpose experiments at the LHC. It will indeed help to exclude or validate significantly some new theories beyond the Standard Model. In another, model-building approach, we consider a scenario of new physics in which the Standard Model fields can propagate in a flat six-dimensional space. The new spatial extra dimensions are compactified on a Real Projective Plane. This orbifold is the unique six-dimensional geometry which possesses chiral fermions and a natural Dark Matter candidate. The scalar photon, which is the lightest particle of the first Kaluza-Klein tier, is stabilized by a symmetry relic of the six-dimensional Lorentz invariance. Using the current constraints from cosmological observations and our first analytical calculation, we derive a characteristic mass range around a few hundred GeV for the Kaluza-Klein scalar photon. The new states of our Universal Extra-Dimension model are therefore light enough to be produced through clear signatures at the Large Hadron Collider. We thus used a more sophisticated analysis of the particle mass spectrum and couplings, including radiative corrections at one loop, to establish our first predictions and constraints on the expected LHC phenomenology. (author)

  20. Application of Logic Models in a Large Scientific Research Program

    Science.gov (United States)

    O'Keefe, Christine M.; Head, Richard J.

    2011-01-01

    It is the purpose of this article to discuss the development and application of a logic model in the context of a large scientific research program within the Commonwealth Scientific and Industrial Research Organisation (CSIRO). CSIRO is Australia's national science agency and is a publicly funded part of Australia's innovation system. It conducts…

  1. Water and salt balance modelling to predict the effects of land-use changes in forested catchments. 3. The large catchment model

    Science.gov (United States)

    Sivapalan, Murugesu; Viney, Neil R.; Jeevaraj, Charles G.

    1996-03-01

    This paper presents an application of a long-term, large catchment-scale water balance model developed to predict the effects of forest clearing in the south-west of Western Australia. The conceptual model simulates the basic daily water balance fluxes in forested catchments before and after clearing. The large catchment is divided into a number of sub-catchments (1-5 km² in area), which are taken as the fundamental building blocks of the large catchment model. The responses of the individual subcatchments to rainfall and pan evaporation are conceptualized in terms of three inter-dependent subsurface stores A, B and F, which are considered to represent the moisture states of the subcatchments. Details of the subcatchment-scale water balance model have been presented earlier in Part 1 of this series of papers. The response of any subcatchment is a function of its local moisture state, as measured by the local values of the stores. The variations of the initial values of the stores among the subcatchments are described in the large catchment model through simple, linear equations involving a number of similarity indices representing topography, mean annual rainfall and level of forest clearing. The model is applied to the Conjurunup catchment, a medium-sized (39.6 km²) catchment in the south-west of Western Australia. The catchment has been heterogeneously (in space and time) cleared for bauxite mining and subsequently rehabilitated. For this application, the catchment is divided into 11 subcatchments. The model parameters are estimated by calibration, by comparing observed and predicted runoff values over an 18-year period, for the large catchment and two of the subcatchments. Excellent fits are obtained.
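    A minimal daily bucket model in the spirit of this conceptual structure can be sketched as follows. The real model couples three stores (A, B, F) per subcatchment; this sketch uses a single store, and every coefficient and forcing value is invented for illustration.

    ```python
    # One bucket per subcatchment: rainfall in, evaporation and linear-reservoir
    # runoff out. Coefficients and forcing are illustrative assumptions.
    def simulate(rain, pan_evap, k_runoff=0.05, evap_frac=0.6, s0=100.0):
        store, runoff = s0, []
        for p, e in zip(rain, pan_evap):
            store += p                        # daily rainfall input (mm)
            aet = min(store, evap_frac * e)   # actual ET limited by storage
            store -= aet
            q = k_runoff * store              # linear-reservoir outflow
            store -= q
            runoff.append(q)
        return store, runoff

    rain = [10, 0, 5, 20, 0, 0, 3, 0]         # mm/day, illustrative forcing
    pan = [4, 5, 4, 3, 6, 6, 5, 5]            # mm/day pan evaporation
    final_store, q = simulate(rain, pan)
    ```

    Calibration in the paper's sense amounts to tuning parameters like k_runoff per subcatchment so that the summed simulated runoff matches observations.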

  2. Large-scale ligand-based predictive modelling using support vector machines.

    Science.gov (United States)

    Alvarsson, Jonathan; Lampa, Samuel; Schaal, Wesley; Andersson, Claes; Wikberg, Jarl E S; Spjuth, Ola

    2016-01-01

    The increasing size of datasets in drug discovery makes it challenging to build robust and accurate predictive models within a reasonable amount of time. In order to investigate the effect of dataset sizes on predictive performance and modelling time, ligand-based regression models were trained on open datasets of varying sizes of up to 1.2 million chemical structures. For modelling, two implementations of support vector machines (SVM) were used. Chemical structures were described by the signatures molecular descriptor. Results showed that for the larger datasets, the LIBLINEAR SVM implementation performed on par with the well-established libsvm with a radial basis function kernel, but with dramatically less time for model building even on modest computer resources. Using a non-linear kernel proved to be infeasible for large data sizes, even with substantial computational resources on a computer cluster. To deploy the resulting models, we extended the Bioclipse decision support framework to support models from LIBLINEAR and made our models of logD and solubility available from within Bioclipse.
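    The scaling argument behind the abstract's finding can be illustrated with a Pegasos-style stochastic subgradient sketch for a linear SVM: each example is touched a constant number of times at a cost linear in the feature count, so training time grows roughly linearly with dataset size. LIBLINEAR itself uses dual coordinate descent, so treat this as an illustration of why linear training scales, not of the library's algorithm.

    ```python
    import numpy as np

    def linear_svm_sgd(X, y, lam=0.01, epochs=20, seed=0):
        """Pegasos-style SGD on the hinge loss: O(n * d) work per epoch."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        t = 0
        for _ in range(epochs):
            for i in rng.permutation(n):
                t += 1
                eta = 1.0 / (lam * t)            # Pegasos step size
                margin = y[i] * (X[i] @ w)
                w *= (1.0 - eta * lam)           # shrink (L2 regularization)
                if margin < 1:                   # hinge-loss subgradient step
                    w += eta * y[i] * X[i]
        return w

    # Linearly separable toy data: class means at (+2, +2) and (-2, -2)
    rng = np.random.default_rng(4)
    n = 400
    y = np.where(np.arange(n) < n // 2, 1.0, -1.0)
    X = rng.normal(size=(n, 2)) + y[:, None] * 2.0
    w = linear_svm_sgd(X, y)
    acc = float(np.mean(np.sign(X @ w) == y))
    ```

    A kernelized SVM, by contrast, works with an n-by-n kernel matrix, which is what makes the non-linear approach infeasible at millions of structures.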

  3. Item Construction Using Reflective, Formative, or Rasch Measurement Models: Implications for Group Work

    Science.gov (United States)

    Peterson, Christina Hamme; Gischlar, Karen L.; Peterson, N. Andrew

    2017-01-01

    Measures that accurately capture the phenomenon are critical to research and practice in group work. The vast majority of group-related measures were developed using the reflective measurement model rooted in classical test theory (CTT). Depending on the construct definition and the measure's purpose, the reflective model may not always be the…

  4. ARMA modelling of neutron stochastic processes with large measurement noise

    International Nuclear Information System (INIS)

    Zavaljevski, N.; Kostic, Lj.; Pesic, M.

    1994-01-01

    An autoregressive moving average (ARMA) model of the neutron fluctuations with large measurement noise is derived from Langevin stochastic equations and validated using time series data obtained during prompt neutron decay constant measurements at the zero power reactor RB in Vinca. Model parameters are estimated using the maximum likelihood (ML) off-line algorithm and an adaptive pole estimation algorithm based on the recursive prediction error method (RPE). The results show that subcriticality can be determined from real data with high measurement noise using a much shorter statistical sample than in standard methods. (author)
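    The core difficulty the abstract refers to can be shown in a few lines: a first-order (Langevin-driven) process observed through white measurement noise is ARMA(1,1) rather than AR(1), so the naive lag-1 autocorrelation underestimates the pole, while the ratio of lag-2 to lag-1 autocovariances recovers it, because white noise only contaminates lag 0. All parameters below are illustrative, not reactor data.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    a_true = 0.9                       # discrete-time pole; illustrative value
    T = 200_000
    x = np.zeros(T)
    for t in range(1, T):              # Langevin-driven first-order process
        x[t] = a_true * x[t - 1] + rng.normal()
    y = x + 3.0 * rng.normal(size=T)   # large additive measurement noise

    def autocov(z, k):
        z = z - z.mean()
        if k == 0:
            return float(np.dot(z, z) / len(z))
        return float(np.dot(z[:-k], z[k:]) / len(z))

    a_naive = autocov(y, 1) / autocov(y, 0)   # biased low: noise inflates lag 0
    a_arma = autocov(y, 2) / autocov(y, 1)    # noise-robust pole estimate
    ```

    Full ML or RPE estimation of the ARMA model, as in the paper, generalizes this idea while also using the data far more efficiently.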

  5. Large-scale groundwater modeling using global datasets: a test case for the Rhine-Meuse basin

    Directory of Open Access Journals (Sweden)

    E. H. Sutanudjaja

    2011-09-01

    The current generation of large-scale hydrological models does not include a groundwater flow component. Large-scale groundwater models, involving aquifers and basins of multiple countries, are still rare, mainly due to a lack of hydro-geological data, which are usually only available in developed countries. In this study, we propose a novel approach to construct large-scale groundwater models by using global datasets that are readily available. As the test-bed, we use the combined Rhine-Meuse basin, which contains groundwater head data used to verify the model output. We start by building a distributed land surface model (30 arc-second resolution) to estimate groundwater recharge and river discharge. Subsequently, a MODFLOW transient groundwater model is built and forced by the recharge and surface water levels calculated by the land surface model. Results are promising despite the fact that we still use an offline procedure to couple the land surface and MODFLOW groundwater models (i.e. the simulations of the two models are performed separately). The simulated river discharges compare well with the observations. Moreover, based on our sensitivity analysis, in which we run several groundwater model scenarios with various hydro-geological parameter settings, we observe that the model can reproduce the observed groundwater head time series reasonably well. However, there are still some limitations in the current approach, specifically because the offline-coupling technique simplifies the dynamic feedbacks between surface water levels and groundwater heads, and between soil moisture states and groundwater heads. The current sensitivity analysis also ignores the uncertainty of the land surface model output. Despite these limitations, we argue that the results of the current model show promise for large-scale groundwater modelling practices, including in data-poor environments and at the global scale.
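    The offline (one-way) coupling procedure can be sketched as a loop in which a land-surface step produces recharge that forces an independent groundwater step, with no feedback to soil moisture. The toy 1D explicit finite-difference aquifer and every coefficient below are assumptions for illustration, not the study's MODFLOW configuration.

    ```python
    import numpy as np

    def land_surface_step(rain, evap, soil, cap=50.0):
        """Toy soil bucket: returns updated soil moisture and recharge."""
        soil = min(cap, soil + rain)
        soil -= min(soil, evap)            # actual evapotranspiration
        recharge = 0.1 * soil              # drainage to the groundwater store
        soil -= recharge
        return soil, recharge

    def groundwater_step(head, recharge, T=0.5, S=0.1, dx=1.0, dt=0.1):
        """Explicit finite-difference step for a 1D homogeneous aquifer."""
        lap = np.zeros_like(head)
        lap[1:-1] = head[2:] - 2.0 * head[1:-1] + head[:-2]
        return head + dt * (T * lap / dx**2 + recharge) / S

    soil, heads = 20.0, np.zeros(20)
    for day in range(30):
        soil, r = land_surface_step(rain=5.0, evap=3.0, soil=soil)
        heads = groundwater_step(heads, r)  # one-way forcing: no feedback
    ```

    An online coupling would additionally pass the simulated heads back into the soil-moisture update each step, which is exactly the dynamic feedback the paper notes is simplified away.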

  6. Analysis and Design Environment for Large Scale System Models and Collaborative Model Development, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — As NASA modeling efforts grow more complex and more distributed among many working groups, new tools and technologies are required to integrate their efforts...

  7. Large Deviations for Stochastic Models of Two-Dimensional Second Grade Fluids

    International Nuclear Information System (INIS)

    Zhai, Jianliang; Zhang, Tusheng

    2017-01-01

    In this paper, we establish a large deviation principle for stochastic models of incompressible second grade fluids. The weak convergence method introduced by Budhiraja and Dupuis (Probab Math Statist 20:39–61, 2000) plays an important role.

  8. Large Deviations for Stochastic Models of Two-Dimensional Second Grade Fluids

    Energy Technology Data Exchange (ETDEWEB)

    Zhai, Jianliang, E-mail: zhaijl@ustc.edu.cn [University of Science and Technology of China, School of Mathematical Sciences (China); Zhang, Tusheng, E-mail: Tusheng.Zhang@manchester.ac.uk [University of Manchester, School of Mathematics (United Kingdom)

    2017-06-15

    In this paper, we establish a large deviation principle for stochastic models of incompressible second grade fluids. The weak convergence method introduced by Budhiraja and Dupuis (Probab Math Statist 20:39–61, 2000) plays an important role.

  9. Regional modeling of large wildfires under current and potential future climates in Colorado and Wyoming, USA

    Science.gov (United States)

    West, Amanda; Kumar, Sunil; Jarnevich, Catherine S.

    2016-01-01

    Regional analysis of large wildfire potential given climate change scenarios is crucial to understanding areas most at risk in the future, yet wildfire models are not often developed and tested at this spatial scale. We fit three historical climate suitability models for large wildfires (i.e. ≥ 400 ha) in Colorado and Wyoming using topography and decadal climate averages corresponding to wildfire occurrence at the same temporal scale. The historical models classified points of known large wildfire occurrence with high accuracies. Using a novel approach in wildfire modeling, we applied the historical models to independent climate and wildfire datasets, and the resulting sensitivities were 0.75, 0.81, and 0.83 for Maxent, Generalized Linear, and Multivariate Adaptive Regression Splines, respectively. We projected the historical models into future climate space using data from 15 global circulation models and two representative concentration pathway scenarios. Maps from these geospatial analyses can be used to evaluate the changing spatial distribution of climate suitability of large wildfires in these states. April relative humidity was the most important covariate in all models, providing insight into the climate space of large wildfires in this region. These methods incorporate monthly and seasonal climate averages at a spatial resolution relevant to land management (i.e. 1 km²) and provide a tool that can be modified for other regions of North America, or adapted for other parts of the world.
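    Sensitivity, the validation metric quoted above, is simply the fraction of known fire points that the fitted suitability model classifies as suitable. A short sketch with synthetic scores (the score distributions and threshold are invented, not the paper's model outputs):

    ```python
    import numpy as np

    def sensitivity(scores, occurred, threshold=0.5):
        """True-positive rate of a thresholded suitability model."""
        pred = scores >= threshold
        tp = int(np.sum(pred & occurred))       # fire points flagged suitable
        fn = int(np.sum(~pred & occurred))      # fire points missed
        return tp / (tp + fn)

    rng = np.random.default_rng(6)
    occurred = rng.random(1000) < 0.3           # synthetic fire/no-fire labels
    # assume the model scores fire points higher on average (invented data)
    scores = np.where(occurred, rng.beta(4, 2, 1000), rng.beta(2, 4, 1000))
    sens = sensitivity(scores, occurred)
    ```

    Evaluating this on a dataset independent of the one used for fitting, as the authors do, is what makes the quoted sensitivities a genuine test of transferability.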

  10. Penson-Kolb-Hubbard model: a renormalisation group study

    International Nuclear Information System (INIS)

    Bhattacharyya, Bibhas; Roy, G.K.

    1995-01-01

    The Penson-Kolb-Hubbard (PKH) model in one dimension (1D) has been studied by means of a real-space renormalisation group (RG) method for the half-filled band. Different phases are identified by studying the RG-flow pattern, the energy gap and different correlation functions. The phase diagram consists of four phases: a spin density wave (SDW), a strong coupling superconducting phase (SSC), a weak coupling superconducting phase (WSC) and a nearly metallic phase. For negative values of the pair-hopping amplitude introduced in this model, the pair-pair correlation indicates a superconducting phase in which the centre of mass of the pairs moves with momentum π. (author). 7 refs., 4 figs

  11. The ICRP task group respiratory tract model - an age-dependent dosimetric model for general application

    International Nuclear Information System (INIS)

    Bailey, M.R.; Birchall, A.

    1992-01-01

    The ICRP Task Group on Human Respiratory Tract Models for Radiological Protection has developed a revised dosimetric model for the respiratory tract. Papers outlining the model and describing each of its aspects were presented at the Third International Workshop on Respiratory Tract Dosimetry (Albuquerque, 1-3 July 1990), the Proceedings of which were recently published in Radiation Protection Dosimetry Volume 38, Nos 1-3 (1991). Since the model had not changed substantially since the Workshop at Albuquerque, only a summary of the paper presented at Schloss Elmau is included in these Proceedings. (author)

  12. A phase transition between small- and large-field models of inflation

    International Nuclear Information System (INIS)

    Itzhaki, Nissan; Kovetz, Ely D

    2009-01-01

    We show that models of inflection point inflation exhibit a phase transition from a region in parameter space where they are of large-field type to a region where they are of small-field type. The phase transition is between universal behavior, with respect to the initial condition, in the large-field region and non-universal behavior in the small-field region. The order parameter is the number of e-foldings. We find integer critical exponents at the transition between the two phases.

  13. A model for amalgamation in group decision making

    Science.gov (United States)

    Cutello, Vincenzo; Montero, Javier

    1992-01-01

    In this paper we present a generalization of the model proposed by Montero, by allowing non-complete fuzzy binary relations for individuals. A degree of unsatisfaction can be defined in this case, suggesting that any democratic aggregation rule should take into account not only ethical conditions or some degree of rationality in the amalgamating procedure, but also a minimum support for the set of alternatives subject to the group analysis.

  14. Global dynamics of multi-group SEI animal disease models with indirect transmission

    International Nuclear Information System (INIS)

    Wang, Yi; Cao, Jinde

    2014-01-01

    A challenge to multi-group epidemic models in mathematical epidemiology is the exploration of global dynamics. Here we formulate multi-group SEI animal disease models with indirect transmission via contaminated water. Under biologically motivated assumptions, the basic reproduction number R₀ is derived and established as a sharp threshold that completely determines the global dynamics of the system. In particular, we prove that if R₀ < 1, the disease-free equilibrium is globally asymptotically stable, and the disease dies out; whereas if R₀ > 1, the endemic equilibrium is globally asymptotically stable and thus unique, and the disease persists in all groups. Since the weight matrix for weighted digraphs may be reducible, the aforementioned approach is not directly applicable to our model. For the proofs we utilize the classical method of Lyapunov, graph-theoretic results developed recently and a new combinatorial identity. Since the multiple transmission pathways may correspond to the real world, the obtained results are of biological significance, and possible generalizations of the model are also discussed.
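    For a concrete sense of the threshold quantity, the basic reproduction number of a multi-group compartmental model is the spectral radius of the next-generation matrix F V⁻¹, where F holds the new-infection rates between groups and V the transition (removal) rates. The 2×2 numbers below are illustrative, not taken from the paper:

    ```python
    import numpy as np

    def basic_reproduction_number(F, V):
        """R0 = spectral radius of the next-generation matrix F V^{-1}."""
        ngm = F @ np.linalg.inv(V)
        return float(max(abs(np.linalg.eigvals(ngm))))

    # Two groups: transmission rates beta[i, j] and removal rates gamma[i]
    beta = np.array([[0.30, 0.05],
                     [0.05, 0.20]])
    gamma = np.array([0.25, 0.25])
    F = beta                 # rates of new infections between groups
    V = np.diag(gamma)       # rates of transition out of the infectious state

    R0 = basic_reproduction_number(F, V)
    ```

    Here R0 exceeds 1, the regime in which the paper proves the endemic equilibrium is globally asymptotically stable; with indirect waterborne transmission the F and V blocks grow to include the pathogen compartment, but the spectral-radius recipe is unchanged.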

  15. Searches for phenomena beyond the Standard Model at the Large ...

    Indian Academy of Sciences (India)

    Supersymmetry searches at the LHC thus focus on the channel with large missing transverse momentum and jets of high transverse momentum. No excess above the expected SM background is observed and limits are set on supersymmetric models. Figures 1 and 2 show the limits from ATLAS [11] and CMS [12]. In addition to setting limits ...

  16. Review of Dynamic Modeling and Simulation of Large Scale Belt Conveyor System

    Science.gov (United States)

    He, Qing; Li, Hong

The belt conveyor is one of the most important devices for transporting bulk solid material over long distances. Dynamic analysis is key to deciding whether a design is technically sound, safe and reliable in operation, and economically feasible. Studying dynamic properties is therefore important for improving efficiency and productivity and for guaranteeing safe, reliable and stable conveyor operation. This paper discusses dynamic research on large-scale belt conveyors and its applications, analyzing the main research topics and the state of the art of belt conveyor dynamics. Future work should focus on dynamic analysis, modeling and simulation of the main components and the whole system, as well as nonlinear modeling, simulation and vibration analysis of large-scale conveyor systems.

  17. A Discrete Heterogeneous-Group Economic Growth Model with Endogenous Leisure Time

    Directory of Open Access Journals (Sweden)

    Wei-Bin Zhang

    2009-01-01

This paper proposes a one-sector multi-group growth model with endogenous labor supply in discrete time. Proposing an alternative approach to household behavior, we examine the dynamics of wealth and income distribution in a competitive economy with capital accumulation as the main engine of economic growth. We show how the human capital levels, preferences, and labor force of heterogeneous households determine national economic growth, the distribution of wealth and income, and the time allocation of the groups. By simulation we demonstrate, for instance, that in the three-group economy, when the rich group's human capital is improved, all groups benefit economically and the leisure time of every group is reduced; but when any other group's human capital is improved, that group benefits economically, the other two groups lose, and the leisure time of all groups increases.

  18. 3D continuum phonon model for group-IV 2D materials

    KAUST Repository

    Willatzen, Morten

    2017-06-30

    A general three-dimensional continuum model of phonons in two-dimensional materials is developed. Our first-principles derivation includes full consideration of the lattice anisotropy and flexural modes perpendicular to the layers and can thus be applied to any two-dimensional material. In this paper, we use the model to not only compare the phonon spectra among the group-IV materials but also to study whether these phonons differ from those of a compound material such as molybdenum disulfide. The origin of quadratic modes is clarified. Mode coupling for both graphene and silicene is obtained, contrary to previous works. Our model allows us to predict the existence of confined optical phonon modes for the group-IV materials but not for molybdenum disulfide. A comparison of the long-wavelength modes to density-functional results is included.

  19. Analysis of Feedback processes in Online Group Interaction: a methodological model

    Directory of Open Access Journals (Sweden)

    Anna Espasa

    2013-06-01

The aim of this article is to present a methodological model for analyzing students' group interaction to improve their essays in online learning environments based on asynchronous, written communication. In these environments, teacher and student scaffolds for discussion are essential to promote interaction, and feedback can be one such scaffold. Research on feedback processes has predominantly focused on feedback design rather than on how students use feedback to improve learning. The methodological model presented here fills this gap, contributing to the analysis of feedback processes while students discuss collaboratively in the specific case of writing assignments. A review of different methodological models was carried out to define a framework suited to analyzing the relationship between written, asynchronous group interaction, students' activity, and the changes incorporated into the final text. The proposed model includes the following dimensions: (1) student participation, (2) nature of student learning, and (3) quality of student learning. The main contribution of this article is to present the methodological model and to demonstrate its operativity with regard to how students incorporate such feedback into their essays.

  20. Interfacial area concentration in gas–liquid bubbly to churn flow regimes in large diameter pipes

    International Nuclear Information System (INIS)

    Shen, Xiuzhong; Hibiki, Takashi

    2015-01-01

Highlights: • A systematic method to predict interfacial area concentration (IAC) is presented. • A correlation for group 1 bubble void fraction is proposed. • Correlations of IAC and bubble diameter are developed for group 1 bubbles. • Correlations of IAC and bubble diameter are developed for group 2 bubbles. • The newly-developed two-group IAC model compares well with collected databases. - Abstract: This study surveyed existing correlations for interfacial area concentration (IAC) prediction and collected an IAC experimental database of two-phase flows taken under various flow conditions in large diameter pipes. Although some of the existing correlations were developed partly from IAC databases taken in low-void-fraction two-phase flows in large diameter pipes, no correlation can satisfactorily predict the IAC across the flows changing from bubbly, to cap bubbly, to churn flow in the collected large diameter pipe database. This study therefore presents a systematic way to predict the IAC for bubbly-to-churn flows in large diameter pipes by categorizing bubbles into two groups (group 1: spherical or distorted bubbles; group 2: cap bubbles). A correlation was developed to predict the group 1 void fraction from the void fraction for all bubbles. The group 1 bubble IAC and bubble diameter were modeled using key parameters such as the group 1 void fraction and bubble Reynolds number, based on the analysis of Hibiki and Ishii (2001, 2002) using one-dimensional bubble number density and interfacial area transport equations. The correlations of IAC and bubble diameter for group 2 cap bubbles were developed by taking into account the characteristics of the representative bubbles among the group 2 bubbles and by comparing a newly-derived drift velocity correlation for large diameter pipes with the existing drift velocity correlation of Kataoka and Ishii (1987) for large diameter pipes. The predictions from the newly

  1. Distinguishing Little-Higgs product and simple group models at the LHC and ILC

    International Nuclear Information System (INIS)

    Kilian, W.; Rainwater, D.

    2006-09-01

    We propose a means to discriminate between the two basic variants of Little Higgs models, the Product Group and Simple Group models, at the next generation of colliders. It relies on a special coupling of light pseudoscalar particles present in Little Higgs models, the pseudo-axions, to the Z and the Higgs boson, which is present only in Simple Group models. We discuss the collider phenomenology of the pseudo-axion in the presence of such a coupling at the LHC, where resonant production and decay of either the Higgs or the pseudo-axion induced by that coupling can be observed for much of parameter space. The full allowed range of parameters, including regions where the observability is limited at the LHC, is covered by a future ILC, where double scalar production would be a golden channel to look for. (orig.)

  2. Distinguishing Little-Higgs product and simple group models at the LHC and ILC

    Energy Technology Data Exchange (ETDEWEB)

    Kilian, W. [Siegen Univ. (Gesamthochschule) (Germany). Fachbereich 7 - Physik]|[Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Rainwater, D. [Rochester Univ., NY (United States). Dept. of Physics and Astronomy; Reuter, J. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2006-09-15

    We propose a means to discriminate between the two basic variants of Little Higgs models, the Product Group and Simple Group models, at the next generation of colliders. It relies on a special coupling of light pseudoscalar particles present in Little Higgs models, the pseudo-axions, to the Z and the Higgs boson, which is present only in Simple Group models. We discuss the collider phenomenology of the pseudo-axion in the presence of such a coupling at the LHC, where resonant production and decay of either the Higgs or the pseudo-axion induced by that coupling can be observed for much of parameter space. The full allowed range of parameters, including regions where the observability is limited at the LHC, is covered by a future ILC, where double scalar production would be a golden channel to look for. (orig.)

  3. Network formation under heterogeneous costs: The multiple group model

    NARCIS (Netherlands)

    Kamphorst, J.J.A.; van der Laan, G.

    2007-01-01

    It is widely recognized that the shape of networks influences both individual and aggregate behavior. This raises the question which types of networks are likely to arise. In this paper we investigate a model of network formation, where players are divided into groups and the costs of a link between

  4. Full-Scale Approximations of Spatio-Temporal Covariance Models for Large Datasets

    KAUST Repository

    Zhang, Bohai; Sang, Huiyan; Huang, Jianhua Z.

    2014-01-01

    of dataset and application of such models is not feasible for large datasets. This article extends the full-scale approximation (FSA) approach by Sang and Huang (2012) to the spatio-temporal context to reduce computational complexity. A reversible jump Markov

  5. Modeling the Hydrologic Effects of Large-Scale Green Infrastructure Projects with GIS

    Science.gov (United States)

    Bado, R. A.; Fekete, B. M.; Khanbilvardi, R.

    2015-12-01

    Impervious surfaces in urban areas generate excess runoff, which in turn causes flooding, combined sewer overflows, and degradation of adjacent surface waters. Municipal environmental protection agencies have shown a growing interest in mitigating these effects with 'green' infrastructure practices that partially restore the perviousness and water holding capacity of urban centers. Assessment of the performance of current and future green infrastructure projects is hindered by the lack of adequate hydrological modeling tools; conventional techniques fail to account for the complex flow pathways of urban environments, and detailed analyses are difficult to prepare for the very large domains in which green infrastructure projects are implemented. Currently, no standard toolset exists that can rapidly and conveniently predict runoff, consequent inundations, and sewer overflows at a city-wide scale. We demonstrate how streamlined modeling techniques can be used with open-source GIS software to efficiently model runoff in large urban catchments. Hydraulic parameters and flow paths through city blocks, roadways, and sewer drains are automatically generated from GIS layers, and ultimately urban flow simulations can be executed for a variety of rainfall conditions. With this methodology, users can understand the implications of large-scale land use changes and green/gray storm water retention systems on hydraulic loading, peak flow rates, and runoff volumes.

  6. Development of two-group interfacial area transport equation for confined flow-2. Model evaluation

    International Nuclear Information System (INIS)

    Sun, Xiaodong; Kim, Seungjin; Ishii, Mamoru; Beus, Stephen G.

    2003-01-01

    The bubble interaction mechanisms have been analytically modeled in the first paper of this series to provide mechanistic constitutive relations for the two-group interfacial area transport equation (IATE), which was proposed to dynamically solve the interfacial area concentration in the two-fluid model. This paper presents the evaluation approach and results of the two-group IATE based on available experimental data obtained in confined flow, namely, 11 data sets in or near bubbly flow and 13 sets in cap-turbulent and churn-turbulent flow. The two-group IATE is evaluated in steady state, one-dimensional form. Also, since the experiments were performed under adiabatic, air-water two-phase flow conditions, the phase change effect is omitted in the evaluation. To account for the inter-group bubble transport, the void fraction transport equation for Group-2 bubbles is also used to predict the void fraction for Group-2 bubbles. Agreement between the data and the model predictions is reasonably good and the average relative difference for the total interfacial area concentration between the 24 data sets and predictions is within 7%. The model evaluation demonstrates the capability of the two-group IATE focused on the current confined flow to predict the interfacial area concentration over a wide range of flow regimes. (author)

  7. Inviscid Wall-Modeled Large Eddy Simulations for Improved Efficiency

    Science.gov (United States)

    Aikens, Kurt; Craft, Kyle; Redman, Andrew

    2015-11-01

The accuracy of an inviscid flow assumption for wall-modeled large eddy simulations (LES) is examined because of its ability to reduce simulation costs. This assumption is not generally applicable for wall-bounded flows due to the high velocity gradients found near walls. In wall-modeled LES, however, neither the viscous near-wall region nor the viscous length scales in the outer flow are resolved. Therefore, the viscous terms in the Navier-Stokes equations have little impact on the resolved flowfield. Zero-pressure-gradient flat plate boundary layer results are presented for both viscous and inviscid simulations using a wall model developed previously. The results are very similar and compare favorably to those from another wall model methodology and to experimental data. Furthermore, the inviscid assumption reduces simulation costs by about 25% and 39% for supersonic and subsonic flows, respectively. Future research directions are discussed, as are preliminary efforts to extend the wall model to include the effects of unresolved wall roughness. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.

  8. The Comparative Study of Collaborative Learning and SDLC Model to develop IT Group Projects

    Directory of Open Access Journals (Sweden)

    Sorapak Pukdesree

    2017-11-01

The main objectives of this research were to compare the attitudes of learners using the SDLC model with collaborative learning against those using the typical SDLC model, and to develop electronic courseware as group projects. The study was quasi-experimental. The population comprised students who took the Computer Organization and Architecture course in the 2015 academic year; 38 students participated. The participants were divided, on a voluntary basis, into two groups: an experimental group of 28 students using the SDLC model with collaborative learning and a control group of 10 students using the typical SDLC model. The research instruments were an attitude questionnaire, semi-structured interviews, and a self-assessment questionnaire. The collected data were analyzed using the arithmetic mean, standard deviation, and independent-samples t-test. The questionnaire results revealed a statistically significant difference between the mean attitude scores of the experimental group and the control group at the 0.05 significance level. The interviews revealed that most learners shared the opinion that collaborative learning was very useful, rating their attitudes at the highest level compared with the previous methodology. Learners also left feedback suggesting that collaborative learning should be applied to other courses.
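The independent-samples comparison used in this study can be sketched in a few lines. The scores below are made up for illustration (the abstract does not report raw data); the sketch uses Welch's unequal-variance form of the two-sample t-test, which is a common default when group sizes differ (28 vs. 10 here).

```python
# Minimal Welch two-sample t-test sketch for comparing the attitude scores
# of an experimental and a control group. Scores are hypothetical.
import math
from statistics import mean, variance

def welch_t(a, b):
    """Return (t statistic, Welch-Satterthwaite degrees of freedom)."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)   # sample variances (n-1 denominator)
    se2 = va / na + vb / nb             # squared standard error of the difference
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

experimental = [4.2, 4.5, 4.1, 4.8, 4.6, 4.3, 4.7, 4.4]   # hypothetical scores
control      = [3.6, 3.9, 3.5, 3.8, 3.7, 3.4, 3.9, 3.6]

t, df = welch_t(experimental, control)
```

The resulting t statistic is then compared against the critical value for the computed degrees of freedom at the chosen significance level (0.05 in the study) to decide whether the group means differ.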

  9. Extended Group Contribution Model for Polyfunctional Phase Equilibria

    DEFF Research Database (Denmark)

    Abildskov, Jens

Material and energy balances and equilibrium data form the basis of most design calculations. While material and energy balances may be stated without much difficulty, the design engineer is left with a choice between a wide variety of models for describing phase equilibria in the design of physical separation processes. In a thermodynamic sense, design requires detailed knowledge of activity coefficients in the phases at equilibrium. The prediction of these quantities from a minimum of experimental data is the broad scope of this thesis. Adequate equations exist for predicting vapor-liquid equilibria from data on binary mixtures composed of structurally simple molecules with a single functional group. More complex is the situation with mixtures composed of structurally more complicated molecules, or molecules with more than one functional group. The UNIFAC method is extended to handle such mixtures.

  10. Modelling of decay heat removal using large water pools

    International Nuclear Information System (INIS)

    Munther, R.; Raussi, P.; Kalli, H.

    1992-01-01

The main task in investigating passive safety systems typical of ALWRs (Advanced Light Water Reactors) has been reviewing decay heat removal systems. The reference system for the calculations is Hitachi's SBWR concept. The calculations of energy transfer to the suppression pool were made using two different fluid mechanics codes, FIDAP and PHOENICS. FIDAP is based on the finite element method and PHOENICS uses finite differences; the codes were chosen in order to compare their modelling and calculation capabilities. Thermal stratification behaviour and natural circulation were modelled with several turbulence models, and energy transport to the suppression pool was also calculated for laminar flow conditions. These calculations required a large amount of computer resources, so the CRAY supercomputer of the state computing centre was used. The results indicated that the capabilities of these codes for modelling the turbulent flow regime are limited. Output from these codes should be considered carefully and, whenever possible, experimentally determined parameters should be used as input to enhance code reliability. (orig.). (31 refs., 21 figs., 3 tabs.)

  11. Large tan β in gauge-mediated SUSY-breaking models

    International Nuclear Information System (INIS)

    Rattazzi, R.

    1997-01-01

    We explore some topics in the phenomenology of gauge-mediated SUSY-breaking scenarios having a large hierarchy of Higgs VEVs, v U /v D = tan β>>1. Some motivation for this scenario is first presented. We then use a systematic, analytic expansion (including some threshold corrections) to calculate the μ-parameter needed for proper electroweak breaking and the radiative corrections to the B-parameter, which fortuitously cancel at leading order. If B = 0 at the messenger scale then tan β is naturally large and calculable; we calculate it. We then confront this prediction with classical and quantum vacuum stability constraints arising from the Higgs-slepton potential, and indicate the preferred values of the top quark mass and messenger scale(s). The possibility of vacuum instability in a different direction yields an upper bound on the messenger mass scale complementary to the familiar bound from gravitino relic abundance. Next, we calculate the rate for b→sγ and show the possibility of large deviations (in the direction currently favored by experiment) from standard-model and small tan β predictions. Finally, we discuss the implications of these findings and their applicability to future, broader and more detailed investigations. (orig.)

  12. How uncertainty in socio-economic variables affects large-scale transport model forecasts

    DEFF Research Database (Denmark)

    Manzo, Stefano; Nielsen, Otto Anker; Prato, Carlo Giacomo

    2015-01-01

A strategic task assigned to large-scale transport models is to forecast the demand for transport over long periods of time in order to assess transport projects. However, because they model complex systems, transport models have an inherent uncertainty that increases over time; the longer the forecast period, the less reliable the forecasted model output. Describing uncertainty propagation patterns over time is therefore important in order to provide complete information to decision makers. Among the existing literature, only a few studies analyze uncertainty propagation patterns over...
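The growth of forecast uncertainty with horizon can be illustrated with a tiny Monte Carlo sketch. The toy demand model and all numbers below are hypothetical, not from the study: a single uncertain socio-economic input (a growth rate) is sampled and propagated through a compound-growth demand forecast, and the spread of the output widens markedly at longer horizons.

```python
# Illustrative Monte Carlo propagation of input uncertainty through a toy
# transport-demand forecast. Model and parameter values are hypothetical.
import random
from statistics import stdev

random.seed(0)

def forecast(base_demand, growth, years):
    """Toy demand model: compound growth from a base-year demand level."""
    return base_demand * (1.0 + growth) ** years

# Uncertain annual growth rate: mean 2%, standard deviation 1%.
samples = [random.gauss(0.02, 0.01) for _ in range(5000)]

spread_5  = stdev(forecast(100.0, g, 5)  for g in samples)   # 5-year horizon
spread_30 = stdev(forecast(100.0, g, 30) for g in samples)   # 30-year horizon
```

The standard deviation of the 30-year forecast is several times that of the 5-year forecast, which is the propagation pattern the abstract argues decision makers need to see.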

  13. Modelling the Transfer of Radionuclides from Naturally Occurring Radioactive Material (NORM). Report of the NORM Working Group of EMRAS Theme 2

    International Nuclear Information System (INIS)

    2012-01-01

This working group was established to improve the modelling of the transfer of radionuclides from residues containing naturally occurring radioactive material (NORM) for the purposes of radiological assessment. Almost all naturally occurring materials contain radionuclides from the primordial decay chains (for example, uranium-238, uranium-235 and thorium-232 and their daughter products radium-226 and radium-228), plus some individual long-lived radionuclides such as potassium-40. Extraction and/or processing of minerals containing these materials results in waste containing such radionuclides, and the processing can often enhance the concentration of NORM in the waste compared with the original material. The extraction and processing of minerals usually involves large volumes of material, and the resulting waste is also present in large volumes which are usually left on the earth's surface. Human exposure to radionuclides from such waste piles can occur as a result of gaseous emanation from the waste (radon-222) or as a result of the leaching by rainfall of radionuclides from the waste into water courses and, possibly, food chains. There are a variety of situations involving NORM that require potential radiation doses to be assessed, including: (1) surface storage of residues from the extraction and processing of minerals; (2) remediation of NORM-containing waste piles; and (3) the use of NORM-containing waste for backfilling, building materials, road construction, etc. In all of these situations there is a need to understand the present and future behaviour of the radionuclides which may be released from NORM so that steps can be taken to ensure that humans are adequately protected from exposure to radiation. Because of the long-lived nature of many of the radionuclides, the assessments must be carried out over long times into the future.
This is the first time that the modelling of NORM-containing radionuclides has been examined in this IAEA format and the working

  14. Gaze distribution analysis and saliency prediction across age groups.

    Science.gov (United States)

    Krishna, Onkar; Helo, Andrea; Rämä, Pia; Aizawa, Kiyoharu

    2018-01-01

Knowledge of the human visual system helps to develop better computational models of visual attention. State-of-the-art models have been developed to mimic the visual attention system of young adults but largely ignore the variations that occur with age. In this paper, we investigated how visual scene processing changes with age, and we propose an age-adapted framework that helps to develop a computational model that can predict saliency across different age groups. Our analysis uncovers how the explorativeness of an observer varies with age, how well the saliency maps of one age group agree with the fixation points of observers from the same or a different age group, and how age influences the center bias tendency. We analyzed the eye movement behavior of 82 observers belonging to four age groups while they explored visual scenes. Explorativeness was quantified in terms of the entropy of a saliency map, and the area under the curve (AUC) metric was used to quantify agreement and the center bias tendency. The analysis results were used to develop age-adapted saliency models. Our results suggest that the proposed age-adapted saliency model outperforms existing saliency models in predicting the regions of interest across age groups.
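The two metrics named in this abstract are easy to sketch. The values below are illustrative (the study's maps and fixations are not reproduced): Shannon entropy of a normalized saliency map quantifies explorativeness, and a simple ROC-style AUC scores how well a saliency map separates fixated from non-fixated locations.

```python
# Sketch of the entropy and AUC metrics on hypothetical saliency values.
import math

def entropy(saliency):
    """Shannon entropy (bits) of a saliency map normalized to sum to 1."""
    total = sum(saliency)
    return -sum((v / total) * math.log2(v / total) for v in saliency if v > 0)

def auc(scores, labels):
    """Probability a fixated location (label 1) outscores a non-fixated one."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

peaked  = [0.9, 0.05, 0.03, 0.02]    # focused map -> low entropy
uniform = [0.25, 0.25, 0.25, 0.25]   # explorative map -> maximal entropy (2 bits)

sal = [0.9, 0.8, 0.3, 0.2, 0.1]      # hypothetical saliency at five locations
fix = [1,   1,   0,   0,   0]        # 1 = location fixated by observers
```

A uniform (explorative) map attains the maximal entropy for its size, while a perfectly discriminating saliency map attains an AUC of 1.0; age-group comparisons in the paper are built on exactly these kinds of scores.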

  15. Analogue scale modelling of extensional tectonic processes using a large state-of-the-art centrifuge

    Science.gov (United States)

    Park, Heon-Joon; Lee, Changyeol

    2017-04-01

Analogue scale modelling of extensional tectonic processes such as rifting and basin opening has been conducted many times. Among the controlling factors, gravitational acceleration (g) on the scale models was treated as a constant (Earth's gravity) in most analogue model studies; only a few considered larger gravitational acceleration by using a centrifuge (an apparatus generating a large centrifugal force by rotating the model at high speed). Although analogue models using a centrifuge allow large scale-down and accelerated deformation driven by density differences, such as salt diapirism, the possible model size is mostly limited to about 10 cm. A state-of-the-art centrifuge installed at the KOCED Geotechnical Centrifuge Testing Center, Korea Advanced Institute of Science and Technology (KAIST), allows a large scale-model surface area of up to 70 by 70 cm at a maximum capacity of 240 g-tons. Using this centrifuge, we will conduct analogue scale modelling of extensional tectonic processes such as the opening of back-arc basins. Acknowledgement: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (grant number 2014R1A6A3A04056405).

  16. A radiographic study of the mandibular third molar root development in different ethnic groups.

    Science.gov (United States)

    Liversidge, H M; Peariasamy, K; Folayan, M O; Adeniyi, A O; Ngom, P I; Mikami, Y; Shimada, Y; Kuroe, K; Tvete, I F; Kvaal, S I

    2017-12-01

The nature of differences in the timing of tooth formation between ethnic groups is important when estimating age. Our aim was to calculate the age of transition of mandibular third molar (M3) root stages from archived dental radiographs from sub-Saharan Africa, Malaysia, Japan and two groups from London, UK (Whites and Bangladeshi). The number of radiographs was 4555 (2028 males, 2527 females), with an age range of 10-25 years. The left M3 was staged into Moorrees stages. A probit model was fitted to calculate mean ages of transition between stages for males and females and for each ethnic group separately, and the estimated age distribution given each M3 stage was calculated. To assess differences in the timing of M3 between ethnic groups, three models were proposed: a separate model for each ethnic group, a joint model, and a third model combining some aspects across groups. Model fit was tested using the Bayesian and Akaike information criteria (BIC and AIC) and the log likelihood ratio test. Differences in mean ages of M3 root stages were found between ethnic groups; however, all groups showed large standard deviations. The AIC and log likelihood ratio test indicated that a separate model for each ethnic group was best. Small differences were also noted in the timing of M3 between males and females, with the exception of the Malaysian group. These findings suggest that the features of a reference data set (a wide age range and a uniform age distribution) and a Bayesian statistical approach are more important than population-specific convenience samples when estimating the age of an individual using M3. Some group differences were evident in M3 timing; however, these have some impact on the confidence interval of estimated age in females and little impact in males because of the large variation in age.
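The probit idea behind these age-of-transition estimates can be sketched directly: the probability that a tooth has reached a given stage by age a is modeled as the standard normal CDF of (a − μ)/σ, so the fitted μ is the mean transition age and σ drives the width of the estimated age distribution. The μ and σ values below are hypothetical, not the study's estimates.

```python
# Sketch of the probit transition model; mu/sigma values are hypothetical.
import math

def phi(x):
    """Standard normal CDF, expressed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_stage_reached(age, mu, sigma):
    """Probability the M3 has reached the stage by a given age."""
    return phi((age - mu) / sigma)

mu, sigma = 19.5, 2.3   # hypothetical mean and SD of one root-stage transition
```

At age μ the transition probability is exactly 0.5; the large standard deviations reported for every group are what widen the confidence interval of an age estimated from a single M3 stage.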

  17. Improving CASINO performance for models with large number of electrons

    International Nuclear Information System (INIS)

    Anton, L.; Alfe, D.; Hood, R.Q.; Tanqueray, D.

    2009-01-01

Quantum Monte Carlo calculations have at their core algorithms based on statistical ensembles of multidimensional random walkers, which are straightforward to use on parallel computers. Nevertheless, some computations have reached the limit of available memory for models with more than 1000 electrons because of the need to store a large amount of electronic orbital data. Besides that, for systems with a large number of electrons, it is interesting to study whether the evolution of one configuration of random walkers can be done faster in parallel. We present a comparative study of two ways to solve these problems: (1) distributed orbital data implemented with MPI or Unix inter-process communication tools; (2) second-level parallelism for configuration computation

  18. Linear velocity fields in non-Gaussian models for large-scale structure

    Science.gov (United States)

    Scherrer, Robert J.

    1992-01-01

Linear velocity fields are examined in two types of physically motivated non-Gaussian models for large-scale structure: seed models, in which the density field is a convolution of a density profile with a distribution of points, and local non-Gaussian fields, derived from a local nonlinear transformation on a Gaussian field. The distribution of a single component of the velocity is derived for seed models with randomly distributed seeds, and these results are applied to the seeded hot dark matter model and the global texture model with cold dark matter. An expression for the distribution of a single component of the velocity in arbitrary local non-Gaussian models is given, and these results are applied to such fields with chi-squared and lognormal distributions. It is shown that all seed models with randomly distributed seeds and all local non-Gaussian models have single-component velocity distributions with positive kurtosis.

  19. A quantitative genetic model of reciprocal altruism: a condition for kin or group selection to prevail.

    Science.gov (United States)

    Aoki, K

    1983-01-01

    A condition is derived for reciprocal altruism to evolve by kin or group selection. It is assumed that many additively acting genes of small effect and the environment determine the probability that an individual is a reciprocal altruist, as opposed to being unconditionally selfish. The particular form of reciprocal altruism considered is TIT FOR TAT, a strategy that involves being altruistic on the first encounter with another individual and doing whatever the other did on the previous encounter in subsequent encounters with the same individual. Encounters are restricted to individuals of the same generation belonging to the same kin or breeding group, but first encounters occur at random within that group. The number of individuals with which an individual interacts is assumed to be the same within any kin or breeding group. There are 1 + i expected encounters between two interacting individuals. On any encounter, it is assumed that an individual who behaves altruistically suffers a cost in personal fitness proportional to c while improving his partner's fitness by the same proportion of b. Then, the condition for kin or group selection to prevail is [Formula: see text] if group size is sufficiently large and the group mean and the within-group genotypic variance of the trait value (i.e., the probability of being a TIT-FOR-TAT strategist) are uncorrelated. Here, C, Vb, and Tb are the population mean, between-group variance, and between-group third central moment of the trait value and r is the correlation between the additive genotypic values of interacting kin or of individuals within the same breeding group. The right-hand side of the above inequality is monotone decreasing in C if we hold Tb/Vb constant, and kin and group selection become superfluous beyond a certain threshold value of C. The effect of finite group size is also considered in a kin-selection model. PMID:6575395

  20. Evaluation of sub grid scale and local wall models in Large-eddy simulations of separated flow

    Directory of Open Access Journals (Sweden)

    Sam Ali Al

    2015-01-01

    Full Text Available The performance of the Sub Grid Scale models is studied by simulating a separated flow over a wavy channel. The first and second order statistical moments of the resolved velocities obtained by using Large-Eddy simulations at different mesh resolutions are compared with Direct Numerical Simulations data. The effectiveness of modeling the wall stresses by using a local log-law is then tested on a relatively coarse grid. The results exhibit good agreement between highly-resolved Large Eddy Simulations and Direct Numerical Simulations data regardless of the Sub Grid Scale model used. However, the agreement is less satisfactory on the relatively coarse grid without any wall model, and the differences between Sub Grid Scale models become distinguishable. Using the local wall model recovered the basic flow topology and significantly reduced the differences between the coarse-mesh Large-Eddy Simulations and Direct Numerical Simulations data. The results show that the ability of the local wall model to predict the separation zone depends strongly on how it is implemented.
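
    As a concrete illustration of local log-law wall modeling of the kind tested above, the sketch below solves the log-law for the friction velocity at the first off-wall grid point and returns the wall shear stress used as a boundary condition. The constants (kappa = 0.41, B = 5.2) and the fixed-point scheme are common textbook choices, not necessarily those of the paper.

```python
import math

def friction_velocity(u, y, nu=1.5e-5, kappa=0.41, B=5.2, tol=1e-10):
    """Solve the log-law  u/u_tau = (1/kappa) ln(y u_tau / nu) + B
    for the friction velocity u_tau by fixed-point iteration, given the
    tangential velocity u at wall distance y."""
    u_tau = max(u * 0.05, 1e-6)  # initial guess
    for _ in range(200):
        denom = math.log(y * u_tau / nu) / kappa + B
        new = u / denom
        if abs(new - u_tau) < tol:
            return new
        u_tau = new
    return u_tau

def wall_shear_stress(u, y, rho=1.2, **kw):
    """Wall stress tau_w = rho * u_tau^2, imposed as the wall boundary
    condition on a coarse LES grid instead of resolving the inner layer."""
    ut = friction_velocity(u, y, **kw)
    return rho * ut * ut
```

    The iteration converges quickly because the log-law depends only weakly on u_tau inside the logarithm.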

  1. Automatic Generation of Connectivity for Large-Scale Neuronal Network Models through Structural Plasticity.

    Science.gov (United States)

    Diaz-Pier, Sandra; Naveau, Mikaël; Butz-Ostendorf, Markus; Morrison, Abigail

    2016-01-01

    With the emergence of new high-performance computing technology in the last decade, the simulation of large-scale neural networks able to reproduce the behavior and structure of the brain has finally become an achievable target of neuroscience. Due to the number of synaptic connections between neurons and the complexity of biological networks, most contemporary models have manually defined or static connectivity. However, it is expected that modeling the dynamic generation and deletion of the links among neurons, locally and between different regions of the brain, is crucial to unravel important mechanisms associated with learning, memory and healing. Moreover, for many neural circuits that could potentially be modeled, activity data is more readily and reliably available than connectivity data. Thus, a framework that enables networks to wire themselves on the basis of specified activity targets can be of great value in specifying network models where connectivity data is incomplete or has large error margins. To address these issues, in the present work we present an implementation of a model of structural plasticity in the neural network simulator NEST. In this model, synapses consist of two parts, a pre- and a post-synaptic element. Synapses are created and deleted during the execution of the simulation following local homeostatic rules until a mean level of electrical activity is reached in the network. We assess the scalability of the implementation in order to evaluate its potential usage in the self-generation of connectivity of large-scale networks. We show and discuss the results of simulations on simple two-population networks and more complex models of the cortical microcircuit involving 8 populations and 4 layers using the new framework.
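
    The local homeostatic rule can be sketched along the lines of the Butz & van Ooyen growth rule on which such models are based: synaptic elements grow while a neuron's activity trace is below a target level, retract above it, and free pre- and post-synaptic elements are paired into synapses. All constants and the random pairing scheme below are illustrative assumptions, not the actual NEST implementation.

```python
import random

random.seed(1)

TARGET_CA = 0.5   # homeostatic set point for the activity trace (assumed)
NU = 0.05         # element growth rate (assumed)

class Neuron:
    def __init__(self):
        self.ca = 0.0            # low-pass filtered activity trace
        self.axonal = 0.0        # free pre-synaptic elements (continuous counter)
        self.dendritic = 0.0     # free post-synaptic elements

    def update(self, rate, dt=0.1):
        # the activity trace relaxes toward the current firing rate
        self.ca += dt * (rate - self.ca)
        # homeostatic rule: grow elements below target activity, retract above
        d = NU * (1.0 - self.ca / TARGET_CA) * dt
        self.axonal = max(0.0, self.axonal + d)
        self.dendritic = max(0.0, self.dendritic + d)

def connect(neurons):
    """Pair free axonal and dendritic elements at random into new synapses,
    consuming one element of each kind per synapse."""
    pre = [i for i, n in enumerate(neurons) for _ in range(int(n.axonal))]
    post = [i for i, n in enumerate(neurons) for _ in range(int(n.dendritic))]
    random.shuffle(pre)
    random.shuffle(post)
    synapses = list(zip(pre, post))
    for i, j in synapses:
        neurons[i].axonal -= 1
        neurons[j].dendritic -= 1
    return synapses
```

    Under this rule, a quiet neuron accumulates free elements (and thus gains synapses on the next pairing step), while an over-active neuron retracts them, driving the network toward the target mean activity.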

  2. Multiconformation, Density Functional Theory-Based pKa Prediction in Application to Large, Flexible Organic Molecules with Diverse Functional Groups.

    Science.gov (United States)

    Bochevarov, Art D; Watson, Mark A; Greenwood, Jeremy R; Philipp, Dean M

    2016-12-13

    We consider the conformational flexibility of molecules and its implications for micro- and macro-pKa. The corresponding formulas are derived and discussed against the background of a comprehensive scientific and algorithmic description of the latest version of our computer program Jaguar pKa, a density functional theory-based pKa predictor, which is now capable of acting on multiple conformations explicitly. Jaguar pKa is essentially a complex computational workflow incorporating research and technologies from the fields of cheminformatics, molecular mechanics, quantum mechanics, and implicit solvation models. The workflow also makes use of automatically applied empirical corrections which account for the systematic errors resulting from the neglect of explicit solvent interactions in the algorithm's implicit solvent model. Applications of our program to large, flexible organic molecules representing several classes of functional groups are shown, with particular emphasis on drug-like molecules. It is demonstrated that a combination of aggressive conformational search and an explicit consideration of multiple conformations nearly eliminates the dependence of results on the initially chosen conformation. In certain cases this leads to unprecedented accuracy, which is sufficient for distinguishing stereoisomers that have slightly different pKa values. An application of Jaguar pKa to proton sponges, the pKa of which are strongly influenced by steric effects, showcases the advantages that pKa predictors based on quantum mechanical calculations have over similar empirical programs.
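
    The multi-conformation idea, Boltzmann-averaging each protonation state over its conformers before taking the free-energy difference, can be written compactly. The proton free-energy term dG_proton below is a placeholder lumping solvation and empirical-correction terms, not a value from Jaguar pKa.

```python
import math

RT = 0.001987204 * 298.15  # gas constant * T in kcal/mol at 298.15 K

def ensemble_free_energy(conformer_energies):
    """G_ens = -RT ln sum_i exp(-G_i / RT): free energy of a Boltzmann
    ensemble of conformers (energies in kcal/mol), computed with the
    lowest-energy conformer factored out for numerical stability."""
    g0 = min(conformer_energies)
    z = sum(math.exp(-(g - g0) / RT) for g in conformer_energies)
    return g0 - RT * math.log(z)

def macro_pka(protonated, deprotonated, dG_proton=-270.3):
    """Multi-conformation pKa: ensemble-average each protonation state,
    then pKa = dG_deprot / (RT ln 10).  dG_proton is a placeholder for
    the proton solvation free energy plus empirical corrections."""
    dG = ensemble_free_energy(deprotonated) + dG_proton - ensemble_free_energy(protonated)
    return dG / (RT * math.log(10))
```

    A simple consequence: adding a degenerate low-energy conformer to the deprotonated ensemble lowers its ensemble free energy by RT ln 2 and hence the pKa by log10(2), which single-conformation treatments miss.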

  3. Modelling of large sodium fires: A coupled experimental and calculational approach

    International Nuclear Information System (INIS)

    Astegiano, J.C.; Balard, F.; Cartier, L.; De Pascale, C.; Forestier, A.; Merigot, C.; Roubin, P.; Tenchine, D.; Bakouta, N.

    1996-01-01

    The consequences of large sodium leaks in the secondary circuit of Super-Phenix have been studied mainly with the FEUMIX code, on the basis of sodium fire experiments. This paper presents the status of the coupled AIRBUS (water experiment) FEUMIX approach, under development in order to strengthen the extrapolation made for the Super-Phenix secondary circuit calculations for large leakage flows. FEUMIX is a point code based on the concept of a global interfacial area between sodium and air. Mass and heat transfers through this global area are assumed to be similar. The global interfacial transfer coefficient Sih is therefore an important parameter of the model. Correlations for the interfacial area are extracted from a large number of sodium tests. For the studies of a hypothetical large sodium leak in the secondary circuit of Super-Phenix, flow rates of more than 1 t/s have been considered and extrapolation was made from the existing results (maximum flow rate 225 kg/s). In order to strengthen the extrapolation, a water test was considered, on the basis of a thermal hydraulic similarity. The principle is to measure the interfacial area of a hot water jet in air, then to transpose the Sih to sodium without combustion, and to use this value in FEUMIX with combustion modelling. The AIRBUS test section is a parallelepipedic gastight tank of 106 m³ (5.7 x 3.7 x 5 m), internally insulated. The water jet is injected from a heated external auxiliary tank into the cell using a pressurized air tank and a specific valve. The main measurements performed during each test are the injected flow rate, air pressure, water temperature and gas temperature. A first series of tests was performed in order to qualify the methodology: typical FCA and IGNA sodium fire tests were represented in AIRBUS, and a comparison with the FEUMIX calculation using the Sih value deduced from the water experiments shows satisfactory agreement. 
A second series of test for large flow rate, corresponding to large sodium leak in secondary circuit of Super

  4. Bayesian assessment of moving group membership: importance of models and prior knowledge

    Science.gov (United States)

    Lee, Jinhee; Song, Inseok

    2018-04-01

    Young nearby moving groups are important and useful in many fields of astronomy such as studying exoplanets, low-mass stars, and the stellar evolution of early planetary systems over tens of millions of years, which has led to intensive searches for their members. Identification of members depends sensitively on the models used; therefore, careful examination of the models is required. In this study, we investigate the effects of the models used in moving group membership calculations based on a Bayesian framework (e.g. BANYAN II), focusing on the beta-Pictoris moving group (BPMG). Three improvements for building models are suggested: (1) updating the list of accepted members by re-assessing memberships in terms of position, motion, and age, (2) investigating member distribution functions in XYZ, and (3) exploring field star distribution functions in XYZ and UVW. The effect of each change is investigated, and we suggest using all of these improvements simultaneously in future membership probability calculations. Using this improved moving group membership calculation and a careful examination of ages, 57 bona fide members of BPMG are confirmed, including 12 new members. We additionally suggest 17 highly probable members.
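
    At its core, a BANYAN-style membership probability is a two-hypothesis Bayesian comparison between a moving-group Gaussian and a field Gaussian in kinematic space. The sketch below assumes both distributions and the prior are known, and omits the measurement-error convolution and multi-group machinery of the real method.

```python
import numpy as np

def membership_probability(x, mu_mg, cov_mg, mu_field, cov_field, prior_mg=0.01):
    """Posterior probability that a star with kinematic coordinates x
    (e.g. XYZUVW) belongs to the moving group, modeling both the group
    and the field population as multivariate Gaussians."""
    def gauss(x, mu, cov):
        d = x - mu
        k = len(mu)
        norm = 1.0 / np.sqrt((2 * np.pi) ** k * np.linalg.det(cov))
        return norm * np.exp(-0.5 * d @ np.linalg.solve(cov, d))
    num = prior_mg * gauss(x, mu_mg, cov_mg)
    den = num + (1 - prior_mg) * gauss(x, mu_field, cov_field)
    return num / den
```

    The abstract's point about model sensitivity shows up directly here: the result depends on the assumed member and field distribution functions (the two Gaussians) as much as on the star's own coordinates.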

  5. An Axiomatic Analysis Approach for Large-Scale Disaster-Tolerant Systems Modeling

    Directory of Open Access Journals (Sweden)

    Theodore W. Manikas

    2011-02-01

    Full Text Available Disaster tolerance in computing and communications systems refers to the ability to maintain a degree of functionality throughout the occurrence of a disaster. We incorporate disaster tolerance within a system by simulating various threats to the system operation and identifying areas for system redesign. Unfortunately, extremely large systems are not amenable to comprehensive simulation studies due to their large computational complexity. To address this limitation, an axiomatic approach that decomposes a large-scale system into smaller subsystems is developed, allowing the subsystems to be modeled independently. This approach is demonstrated using a data communications network system example. The results indicate that the decomposition approach produces simulation responses similar to those of the full-system approach, but with greatly reduced simulation time.

  6. Flexible non-linear predictive models for large-scale wind turbine diagnostics

    DEFF Research Database (Denmark)

    Bach-Andersen, Martin; Rømer-Odgaard, Bo; Winther, Ole

    2017-01-01

    We demonstrate how flexible non-linear models can provide accurate and robust predictions on turbine component temperature sensor data using data-driven principles and only a minimum of system modeling. The merits of different model architectures are evaluated using data from a large set...... of turbines operating under diverse conditions. We then go on to test the predictive models in a diagnostic setting, where the output of the models are used to detect mechanical faults in rotor bearings. Using retrospective data from 22 actual rotor bearing failures, the fault detection performance...... of the models are quantified using a structured framework that provides the metrics required for evaluating the performance in a fleet wide monitoring setup. It is demonstrated that faults are identified with high accuracy up to 45 days before a warning from the hard-threshold warning system....
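
    The diagnostic setting described, predicting a component temperature from operating conditions on healthy data and then flagging large residuals, can be sketched with a linear stand-in for the paper's non-linear models. All data, coefficients, and thresholds below are synthetic.

```python
import numpy as np

def fit_normal_model(X, y):
    """Least-squares predictor of a component temperature from operating
    conditions, fitted on healthy-turbine data (a linear stand-in for the
    flexible non-linear models of the paper)."""
    A = np.column_stack([X, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def detect(X, y, coef, threshold):
    """Flag samples whose prediction residual exceeds the threshold."""
    A = np.column_stack([X, np.ones(len(X))])
    residual = y - A @ coef
    return residual > threshold

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))                              # e.g. ambient temp, power
y = 40 + 2 * X[:, 0] + 3 * X[:, 1] + rng.normal(0, 0.5, 500)  # healthy bearing temp
coef = fit_normal_model(X, y)

X_new = rng.normal(size=(100, 2))
y_new = 40 + 2 * X_new[:, 0] + 3 * X_new[:, 1] + rng.normal(0, 0.5, 100)
y_new[50:] += 8.0                  # simulated fault: sustained over-temperature
alarms = detect(X_new, y_new, coef, threshold=3.0)
```

    In a fleet-wide setup the threshold would be tuned on the residual distribution of healthy turbines to trade early detection against false alarms, which is what the paper's structured evaluation framework quantifies.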

  7. Many large medical groups will need to acquire new skills and tools to be ready for payment reform.

    Science.gov (United States)

    Mechanic, Robert; Zinner, Darren E

    2012-09-01

    Federal and state policy makers are now experimenting with programs that hold health systems accountable for delivering care under predetermined budgets to help control health care spending. To assess how well prepared medical groups are to participate in these arrangements, we surveyed twenty-one large, multispecialty groups. We evaluated their participation in risk contracts such as capitation and the degree of operational support associated with these arrangements. On average, about 25 percent of the surveyed groups' patient care revenue stemmed from global capitation contracts and 9 percent from partial capitation or shared risk contracts. Groups with a larger share of revenue from risk contracts were more likely than others to have salaried physicians, advanced data management capabilities, preferred relationships with efficient specialists, and formal programs to coordinate care for high-risk patients. Our findings suggest that medical groups that lack risk contracting experience may need to develop new competencies and infrastructure to successfully navigate federal payment reform programs, including information systems that track performance and support clinicians in delivering good care; physician-level reward systems that are aligned with organizational goals; sound physician leadership; and an organizational commitment to supporting performance improvement. The difficulty of implementing these changes in complex health care organizations should not be underestimated.

  8. Migdal-Kadanoff renormalization group for the Z(5) model

    International Nuclear Information System (INIS)

    Baltar, V.L.V.; Carneiro, G.M.; Pol, M.E.; Zagury, N.

    1984-01-01

    The Migdal-Kadanoff renormalization group method is used to calculate the phase diagram of the AF Z(5) model. It is found that this scheme simulates a fixed line, which is interpreted as the locus of attraction of a critical phase. This result is in reasonable agreement with the predictions of Monte Carlo simulations. (Author) [pt

  9. Model checking methodology for large systems, faults and asynchronous behaviour. SARANA 2011 work report

    International Nuclear Information System (INIS)

    Lahtinen, J.; Launiainen, T.; Heljanko, K.; Ropponen, J.

    2012-01-01

    Digital instrumentation and control (I and C) systems are challenging to verify. They enable complicated control functions, and the state spaces of the models easily become too large for comprehensive verification through traditional methods. Model checking is a formal method that can be used for system verification. A number of efficient model checking systems are available that provide analysis tools to determine automatically whether a given state machine model satisfies the desired safety properties. This report reviews the work performed in the Safety Evaluation and Reliability Analysis of Nuclear Automation (SARANA) project in 2011 regarding model checking. We have developed new, more exact modelling methods that are able to capture the behaviour of a system more realistically. In particular, we have developed more detailed fault models depicting the hardware configuration of a system, and methodology to model function-block-based systems asynchronously. In order to improve the usability of our model checking methods, we have developed an algorithm for model checking large modular systems. The algorithm can be used to verify properties of a model that could otherwise not be verified in a straightforward manner. (orig.)

  10. Model checking methodology for large systems, faults and asynchronous behaviour. SARANA 2011 work report

    Energy Technology Data Exchange (ETDEWEB)

    Lahtinen, J. [VTT Technical Research Centre of Finland, Espoo (Finland); Launiainen, T.; Heljanko, K.; Ropponen, J. [Aalto Univ., Espoo (Finland). Dept. of Information and Computer Science

    2012-07-01

    Digital instrumentation and control (I and C) systems are challenging to verify. They enable complicated control functions, and the state spaces of the models easily become too large for comprehensive verification through traditional methods. Model checking is a formal method that can be used for system verification. A number of efficient model checking systems are available that provide analysis tools to determine automatically whether a given state machine model satisfies the desired safety properties. This report reviews the work performed in the Safety Evaluation and Reliability Analysis of Nuclear Automation (SARANA) project in 2011 regarding model checking. We have developed new, more exact modelling methods that are able to capture the behaviour of a system more realistically. In particular, we have developed more detailed fault models depicting the hardware configuration of a system, and methodology to model function-block-based systems asynchronously. In order to improve the usability of our model checking methods, we have developed an algorithm for model checking large modular systems. The algorithm can be used to verify properties of a model that could otherwise not be verified in a straightforward manner. (orig.)

  11. Absorption and scattering coefficient dependence of laser-Doppler flowmetry models for large tissue volumes

    International Nuclear Information System (INIS)

    Binzoni, T; Leung, T S; Ruefenacht, D; Delpy, D T

    2006-01-01

    Based on quasi-elastic scattering theory (and random walk on a lattice approach), a model of laser-Doppler flowmetry (LDF) has been derived which can be applied to measurements in large tissue volumes (e.g. when the interoptode distance is >30 mm). The model holds for a semi-infinite medium and takes into account the transport-corrected scattering coefficient and the absorption coefficient of the tissue, and the scattering coefficient of the red blood cells. The model holds for anisotropic scattering and for multiple scattering of the photons by the moving scatterers of finite size. In particular, it has also been possible to take into account the simultaneous presence of both Brownian and pure translational movements. An analytical and simplified version of the model has also been derived and its validity investigated, for the case of measurements in human skeletal muscle tissue. It is shown that at large optode spacing it is possible to use the simplified model, taking into account only a 'mean' light pathlength, to predict the blood flow related parameters. It is also demonstrated that the 'classical' blood volume parameter, derived from LDF instruments, may not represent the actual blood volume variations when the investigated tissue volume is large. The simplified model does not need knowledge of the tissue optical parameters and thus should allow the development of very simple and cost-effective LDF hardware

  12. Oligopolistic competition in wholesale electricity markets: Large-scale simulation and policy analysis using complementarity models

    Science.gov (United States)

    Helman, E. Udi

    This dissertation conducts research into the large-scale simulation of oligopolistic competition in wholesale electricity markets. The dissertation has two parts. Part I is an examination of the structure and properties of several spatial, or network, equilibrium models of oligopolistic electricity markets formulated as mixed linear complementarity problems (LCP). Part II is a large-scale application of such models to the electricity system that encompasses most of the United States east of the Rocky Mountains, the Eastern Interconnection. Part I consists of Chapters 1 to 6. The models developed in this part continue research into mixed LCP models of oligopolistic electricity markets initiated by Hobbs [67] and subsequently developed by Metzler [87] and Metzler, Hobbs and Pang [88]. Hobbs' central contribution is a network market model with Cournot competition in generation and a price-taking spatial arbitrage firm that eliminates spatial price discrimination by the Cournot firms. In one variant, the solution to this model is shown to be equivalent to the "no arbitrage" condition in a "pool" market, in which a Regional Transmission Operator optimizes spot sales such that the congestion price between two locations is exactly equivalent to the difference in the energy prices at those locations (commonly known as locational marginal pricing). Extensions to this model are presented in Chapters 5 and 6. One of these is a market model with a profit-maximizing arbitrage firm. This model is structured as a mathematical program with equilibrium constraints (MPEC), but due to the linearity of its constraints, can be solved as a mixed LCP. Part II consists of Chapters 7 to 12. The core of these chapters is a large-scale simulation of the U.S. Eastern Interconnection applying one of the Cournot competition with arbitrage models. 
This is the first oligopolistic equilibrium market model to encompass the full Eastern Interconnection with a realistic network representation (using
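
    At the core of these market models is Cournot competition among generating firms. The sketch below computes a two-firm Cournot-Nash equilibrium by best-response iteration for a linear inverse demand curve, leaving out the network constraints, arbitrage firm, and LCP machinery of the dissertation; all parameter values are illustrative.

```python
def cournot_equilibrium(a, b, c1, c2, iters=200):
    """Two-firm Cournot game with inverse demand P = a - b*(q1 + q2) and
    constant marginal costs c1, c2.  Iterating the best responses
    q_i = max(0, (a - c_i - b*q_j) / (2b)) converges to the Nash
    equilibrium, since the best-response map is a contraction here."""
    q1 = q2 = 0.0
    for _ in range(iters):
        q1 = max(0.0, (a - c1 - b * q2) / (2 * b))
        q2 = max(0.0, (a - c2 - b * q1) / (2 * b))
    return q1, q2

# Symmetric case: the analytic equilibrium is q_i = (a - c) / (3b)
q1, q2 = cournot_equilibrium(a=100.0, b=1.0, c1=10.0, c2=10.0)
```

    The mixed LCP formulations in the dissertation generalize exactly this kind of complementarity condition (quantity is zero or marginal profit is zero) to many firms, nodes, and transmission constraints.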

  13. Induction of continuous expanding infrarenal aortic aneurysms in a large porcine animal model

    DEFF Research Database (Denmark)

    Kloster, Brian Ozeraitis; Lund, Lars; Lindholt, Jes S.

    2015-01-01

    Background: A large animal model with a continuously expanding infrarenal aortic aneurysm gives access to a more realistic AAA model with anatomy and physiology similar to humans, and thus allows for new experimental research in the natural history and treatment options of the disease. Methods: 10 pigs...

  14. Renormalization group flow of scalar models in gravity

    International Nuclear Information System (INIS)

    Guarnieri, Filippo

    2014-01-01

    In this Ph.D. thesis we study the issue of renormalizability of gravitation in the context of the renormalization group (RG), employing both perturbative and non-perturbative techniques. In particular, we focus on different gravitational models and approximations in which a central role is played by a scalar degree of freedom, since their RG flow is easier to analyze. We restrict our interest in particular to two quantum gravity approaches that have gained a lot of attention recently, namely the asymptotic safety scenario for gravity and Horava-Lifshitz quantum gravity. In the so-called asymptotic safety conjecture the high energy regime of gravity is controlled by a non-Gaussian fixed point which ensures non-perturbative renormalizability and finiteness of the correlation functions. We then investigate the existence of such a non-trivial fixed point using the functional renormalization group, a continuum version of Wilson's non-perturbative renormalization group. In particular we quantize the sole conformal degree of freedom, an approximation that has been shown to lead to a qualitatively correct picture. The question of the existence of a non-Gaussian fixed point in an infinite-dimensional parameter space, that is for a generic f(R) theory, cannot however be studied using such a conformally reduced model. Hence we study it by quantizing a dynamically equivalent scalar-tensor theory, i.e. a generic Brans-Dicke theory with ω=0 in the local potential approximation. Finally, we investigate, using a perturbative RG scheme, the asymptotic freedom of Horava-Lifshitz gravity, an approach based on the emergence of an anisotropy between space and time which lifts Newton's constant to a marginal coupling and explicitly preserves unitarity. In particular we evaluate the one-loop correction in 2+1 dimensions quantizing only the conformal degree of freedom.

  15. Fast sampling from a Hidden Markov Model posterior for large data

    DEFF Research Database (Denmark)

    Bonnevie, Rasmus; Hansen, Lars Kai

    2014-01-01

    Hidden Markov Models are of interest in a broad set of applications including modern data driven systems involving very large data sets. However, approximate inference methods based on Bayesian averaging are precluded in such applications as each sampling step requires a full sweep over the data...
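
    The exact sampler whose per-sample full sweep over the data motivates the paper is forward filtering-backward sampling (FFBS). A minimal discrete-HMM version, written without the approximations the paper develops:

```python
import numpy as np

def ffbs(obs, A, B, pi, rng):
    """Forward-filter backward-sample: draw one hidden-state path from the
    exact HMM posterior p(z_{1:T} | x_{1:T}).  A[i, j]: transition
    probabilities, B[k, x]: emission probabilities, pi: initial distribution."""
    T, K = len(obs), len(pi)
    alpha = np.zeros((T, K))
    alpha[0] = pi * B[:, obs[0]]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):                      # forward pass: full sweep over the data
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        alpha[t] /= alpha[t].sum()
    z = np.zeros(T, dtype=int)
    z[-1] = rng.choice(K, p=alpha[-1])
    for t in range(T - 2, -1, -1):             # backward sampling pass
        w = alpha[t] * A[:, z[t + 1]]
        z[t] = rng.choice(K, p=w / w.sum())
    return z
```

    Each posterior sample costs O(T K^2), i.e. a full sweep over all T observations, which is exactly what becomes prohibitive for the very large data sets the abstract targets.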

  16. A large-scale multi-species spatial depletion model for overwintering waterfowl

    NARCIS (Netherlands)

    Baveco, J.M.; Kuipers, H.; Nolet, B.A.

    2011-01-01

    In this paper, we develop a model to evaluate the capacity of accommodation areas for overwintering waterfowl, at a large spatial scale. Each day geese are distributed over roosting sites. Based on the energy minimization principle, the birds daily decide which surrounding fields to exploit within

  17. A Continental-scale River Corridor Model to Synthesize Understanding and Prioritize Management of Water Purification Functions and Ecological Services in Large Basins

    Science.gov (United States)

    Harvey, J. W.; Gomez-Velez, J. D.; Scott, D.; Boyer, E. W.; Schmadel, N. M.; Alexander, R. B.; Eng, K.; Golden, H. E.; Kettner, A.; Konrad, C. P.; Moore, R. B.; Pizzuto, J. E.; Schwarz, G. E.; Soulsby, C.

    2017-12-01

    The functional values of rivers depend on more than just wetted river channels. Instead, the river channel exchanges water and suspended materials with adjacent riparian, floodplain, hyporheic zones, and ponded waters such as lakes and reservoirs. Together these features comprise a larger functional unit known as the river corridor. The exchange of water, solutes, and sediments within the river corridor alters downstream water quality and ecological functions, but our understanding of the large-scale, cumulative impacts is inadequate and has limited advancements in sustainable management practices. A problem with traditional watershed, groundwater, and river water quality models is that none of them explicitly accounts for river corridor storage and processing, and the exchanges of water, solutes, and sediments that occur many times between the channel and off-channel environments during a river's transport to the sea. Our River Corridor Working Group at the John Wesley Powell Center is quantifying the key components of river corridor functions. Relying on foundational studies that identified floodplain, riparian, and hyporheic exchange flows and resulting enhancement of chemical reactions at river reach scales, we are assembling the datasets and building the models to upscale that understanding onto 2.6 million river reaches in the U.S. A principal goal of the River Corridor Working group is to develop a national-scale river corridor model for the conterminous U.S. that will reveal, perhaps for the first time, the relative influences of hyporheic, riparian, floodplain, and ponded waters at large spatial scales. The simple but physically-based models are predictive for changing conditions and therefore can directly address the consequences and effectiveness of management actions in sustaining valuable river corridor functions. 
This presentation features interpretation of useful river corridor connectivity metrics and ponded water influences on nutrient and sediment

  18. Small groups, large profits: Calculating interest rates in community-managed microfinance

    DEFF Research Database (Denmark)

    Rasmussen, Ole Dahl

    2012-01-01

    Savings groups are a widely used strategy for women’s economic resilience – over 80% of members worldwide are women, and in the case described here, 72.5%. In these savings groups it is common to see the interest rate on savings reported as "20-30% annually". Using panel data from 204 groups...... in Malawi, I show that the right figure is likely to be at least twice this. For these groups, the annual return is 62%. The difference comes from the sector-wide application of non-standard interest rate calculations and unrealistic assumptions about the savings profile in the groups. As a result......, it is impossible to compare returns in savings groups with returns elsewhere. Moreover, the interest on savings is incomparable to the interest rate on loans. I argue for the use of a standardized, comparable metric and suggest easy ways to implement it. Development of new tools and standards along these lines...
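
    The kind of gap the abstract describes can be reproduced with a toy savings cycle: because deposits made late in the cycle earn interest for less time, the naive metric (total interest divided by total savings) understates the annualized return. The cycle below is illustrative, not the Malawi data, and the IRR-based metric is one standard choice of comparable measure, not necessarily the one the paper proposes.

```python
def naive_rate(deposits, payout):
    """The common sector metric: total interest divided by total savings,
    ignoring when the savings were actually deposited."""
    total = sum(deposits)
    return (payout - total) / total

def effective_annual_rate(deposits, payout, periods_per_year=12):
    """Internal rate of return per deposit period, annualized: accounts for
    money deposited late in the cycle earning interest for less time.
    Solved by bisection on the per-period rate."""
    def future_value(r):
        fv = 0.0
        for d in deposits:
            fv = fv * (1 + r) + d
        return fv
    lo, hi = 0.0, 2.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if future_value(mid) < payout:
            lo = mid
        else:
            hi = mid
    return (1 + lo) ** periods_per_year - 1
```

    With twelve monthly deposits of 10 and a payout of 132, the naive metric reports 10% while the effective annual return is over 20%, roughly the factor-of-two understatement the paper documents.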

  19. Large-Signal Code TESLA: Improvements in the Implementation and in the Model

    National Research Council Canada - National Science Library

    Chernyavskiy, Igor A; Vlasov, Alexander N; Anderson, Jr., Thomas M; Cooke, Simon J; Levush, Baruch; Nguyen, Khanh T

    2006-01-01

    We describe the latest improvements made in the large-signal code TESLA, which include transformation of the code to a Fortran-90/95 version with dynamical memory allocation and extension of the model...

  20. Simple Model for Simulating Characteristics of River Flow Velocity in Large Scale

    Directory of Open Access Journals (Sweden)

    Husin Alatas

    2015-01-01

    Full Text Available We propose a simple computer-based phenomenological model to simulate the characteristics of river flow velocity at large scale. We use a Shuttle Radar Topography Mission-based digital elevation model in grid form to define the terrain of the catchment area. The model relies on the mass-momentum conservation law and a modified equation of motion of a falling body on an inclined plane. We assume an inelastic collision occurs at every junction of two river branches to describe the dynamics of the merged flow velocity.
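
    The two ingredients named in the abstract, a falling-body equation of motion along an inclined river segment and an inelastic, momentum-conserving merge at junctions, can be sketched as follows; the effective friction coefficient and the energy-balance form are illustrative assumptions, not the paper's exact equations.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def segment_velocity(v_in, drop, length, mu=0.05):
    """Velocity at the end of one river segment, modeled as a body sliding
    down an inclined plane with effective friction coefficient mu (assumed):
    energy balance v_out^2 = v_in^2 + 2g*(drop - mu*horizontal_run)."""
    run = math.sqrt(max(length ** 2 - drop ** 2, 0.0))
    v2 = v_in ** 2 + 2 * G * (drop - mu * run)
    return math.sqrt(max(v2, 0.0))

def merge(q1, v1, q2, v2):
    """Inelastic collision at a junction: merged velocity from momentum
    conservation, weighting each branch by its discharge (mass flux) q."""
    return (q1 * v1 + q2 * v2) / (q1 + q2)
```

    Chaining segment_velocity along each grid-derived reach and applying merge at every confluence reproduces the model's basic mechanics on a DEM-defined river network.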