WorldWideScience

Sample records for models large group

  1. Binary choices in small and large groups: A unified model

    Science.gov (United States)

    Bischi, Gian-Italo; Merlone, Ugo

    2010-02-01

    Two different ways to model the diffusion of alternative choices within a population of individuals in the presence of social externalities are known in the literature. While Galam's model of rumor spreading considers a majority rule for interactions in several groups, Schelling considers individuals interacting in one large group, with payoff functions that describe how collective choices influence individual preferences. We incorporate these two approaches into a unified general discrete-time dynamic model for studying individual interactions in variously sized groups. We first illustrate how the two original models can be obtained as particular cases of the more general model we propose, and then show how several other situations can be analyzed. The model goes beyond a theoretical exercise, as it allows modeling situations that are relevant in economic and social systems. We also consider other aspects, such as the propensity to switch choices and behavioral momentum, and show how they may affect the dynamics of the whole population.
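
The unified discrete-time dynamics can be illustrated with a minimal sketch; the payoff functions, the damping parameter, and the best-response update rule below are hypothetical choices for illustration, not the authors' actual specification:

```python
def step(x, payoff_a, payoff_b, switch_propensity=0.5):
    """One discrete-time update of the fraction x of agents choosing A.

    Agents drift toward the currently better-paying choice; a
    switch_propensity below 1 damps the adjustment, playing the role
    of the 'propensity to switch' discussed in the abstract.
    """
    target = 1.0 if payoff_a(x) > payoff_b(x) else 0.0
    return x + switch_propensity * (target - x)

# Hypothetical payoffs with social externalities: choice A pays more
# the more agents have already adopted it (a bandwagon effect).
payoff_a = lambda x: 0.2 + 0.8 * x
payoff_b = lambda x: 0.6 - 0.2 * x

x = 0.7
for _ in range(50):
    x = step(x, payoff_a, payoff_b)  # converges toward everyone choosing A
```

With these payoffs the unstable threshold sits at x = 0.4: populations starting above it converge to all-A, and those below it to all-B, the kind of collective bistability that Schelling-style externalities produce.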

  2. Compatibility of the large quasar groups with the concordance cosmological model

    Science.gov (United States)

    Marinello, Gabriel E.; Clowes, Roger G.; Campusano, Luis E.; Williger, Gerard M.; Söchting, Ilona K.; Graham, Matthew J.

    2016-09-01

    We study the compatibility of large quasar groups with the concordance cosmological model. Large quasar groups are very large spatial associations of quasars in the cosmic web, with sizes of 50-250 h^-1 Mpc. In particular, the largest large quasar group known, named Huge-LQG, has a longest axis of ~860 h^-1 Mpc, larger than the scale of homogeneity (~260 Mpc), which has been noted as a possible violation of the cosmological principle. Using mock catalogues constructed from the Horizon Run 2 cosmological simulation, we found that the distributions of large quasar group size, quasar member number and mean overdensity in the mocks agree with observations. The Huge-LQG is found to be a rare group, with a probability of 0.3 per cent of finding a group as large as or larger than the observed one, but an extreme value analysis shows that it is an expected maximum in the sample volume, with a probability of 19 per cent of observing a largest quasar group as large as or larger than the Huge-LQG. The Huge-LQG is expected to be the largest structure in a volume at least 5.3 ± 1 times larger than the one currently studied.

  4. Heritability of longevity in Large White and Landrace sows using continuous time and grouped data models

    Directory of Open Access Journals (Sweden)

    Sölkner Johann

    2010-05-01

    Full Text Available Abstract Background: Using conventional measurements of lifetime, it is not possible to differentiate between productive and non-productive days during a sow's lifetime, and this can lead to estimated breeding values favoring less productive animals. By rescaling the time axis from continuous to several discrete classes, grouped survival data (discrete survival time) models can be used instead. Methods: The productive life length of 12,319 Large White and 9,833 Landrace sows was analyzed with continuous-scale and grouped data models. A random effect of herd-by-year and fixed effects of the interaction between parity and relative number of piglets, age at first farrowing and annual herd size change were included in the analysis. The genetic component was estimated from sire, sire-maternal grandsire, sire-dam and animal models, and heritabilities were computed for each model type in both breeds. Results: If age at first farrowing was under 43 weeks or above 60 weeks, the risk of culling increased. An interaction between parity and relative litter size was observed, expressed by limited culling during first parity and a severe increase in the risk of culling for sows having small litters later in life. In the Landrace breed, heritabilities ranged between 0.05 and 0.08 (s.e. 0.014-0.020) for the continuous and between 0.07 and 0.11 (s.e. 0.016-0.023) for the grouped data models; in the Large White breed, they ranged between 0.08 and 0.14 (s.e. 0.012-0.026) for the continuous and between 0.08 and 0.13 (s.e. 0.012-0.025) for the grouped data models. Conclusions: Heritabilities for length of productive life were similar with continuous time and grouped data models in both breeds. Based on these results, and because grouped data models better reflect the economic needs in meat animals, we conclude that grouped data models are more appropriate in pigs.

  5. Memory Transmission in Small Groups and Large Networks: An Agent-Based Model.

    Science.gov (United States)

    Luhmann, Christian C; Rajaram, Suparna

    2015-12-01

    The spread of social influence in large social networks has long been an interest of social scientists. In the domain of memory, collaborative memory experiments have illuminated cognitive mechanisms that allow information to be transmitted between interacting individuals, but these experiments have focused on small-scale social contexts. In the current study, we took a computational approach, circumventing the practical constraints of laboratory paradigms and providing novel results at scales unreachable by laboratory methodologies. Our model embodied theoretical knowledge derived from small-group experiments and replicated foundational results regarding collaborative inhibition and memory convergence in small groups. Ultimately, we investigated large-scale, realistic social networks and found that agents are influenced by the agents with which they interact, but we also found that agents are influenced by nonneighbors (i.e., the neighbors of their neighbors). The similarity between these results and the reports of behavioral transmission in large networks offers a major theoretical insight by linking behavioral transmission to the spread of information.
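
A toy version of such an agent-based transmission model can be sketched as follows; the retrieval/encoding rule and all parameters are hypothetical stand-ins, not the authors' implementation:

```python
import random

def simulate(adjacency, items_per_agent, all_items, steps, seed=0):
    """Toy memory transmission: at each step a random agent 'retrieves'
    one remembered item and all of its neighbors encode it."""
    rng = random.Random(seed)
    memories = {a: set(rng.sample(all_items, items_per_agent))
                for a in adjacency}
    for _ in range(steps):
        speaker = rng.choice(list(adjacency))
        item = rng.choice(sorted(memories[speaker]))
        for neighbor in adjacency[speaker]:
            memories[neighbor].add(item)  # social transmission
    return memories

def overlap(memories, a, b):
    """Jaccard similarity between two agents' memories."""
    return len(memories[a] & memories[b]) / len(memories[a] | memories[b])

# A 4-agent path network: agents 0 and 2 never interact directly,
# yet items can reach 0 from 2 through their shared neighbor 1 --
# the nonneighbor influence the abstract describes.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
mems = simulate(path, items_per_agent=2, all_items=list(range(6)), steps=500)
```

Tracking `overlap` between nonadjacent agents over time is one simple way to quantify the indirect convergence effect at network scale.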

  6. The Density Matrix Renormalization Group Method and Large-Scale Nuclear Shell-Model Calculations

    CERN Document Server

    Dimitrova, S S; Pittel, S; Stoitsov, M V

    2002-01-01

    The particle-hole Density Matrix Renormalization Group (p-h DMRG) method is discussed as a possible new approach to large-scale nuclear shell-model calculations. Following a general description of the method, we apply it to a class of problems involving many identical nucleons constrained to move in a single large j-shell and to interact via a pairing plus quadrupole interaction. A single-particle term that splits the shell into degenerate doublets is included so as to accommodate the physics of a Fermi surface in the problem. We apply the p-h DMRG method to this test problem for two j values, one for which the shell model can be solved exactly and one for which the size of the Hamiltonian is much too large for exact treatment. In the former case, the method is able to reproduce the exact results for the ground state energy, the energies of low-lying excited states, and other observables with extreme precision. In the latter case, the results exhibit rapid exponential convergence, suggesting the great promi...

  7. Efficacy of single large doses of caspofungin in a neutropenic murine model against the "psilosis" group.

    Science.gov (United States)

    Berényi, Réka; Kovács, Renátó; Domán, Marianna; Gesztelyi, Rudolf; Kardos, Gábor; Juhász, Béla; Perlin, David; Majoros, László

    2014-07-01

    We compared the in vivo efficacy of a single large dose of caspofungin to that of daily smaller caspofungin doses (with the same cumulative doses) against C. albicans (echinocandin-susceptible and -resistant isolates) and the “psilosis” group in a neutropenic murine model. Seven treatment groups were formed for C. orthopsilosis, C. metapsilosis and C. albicans (no treatment; 1, 2 and 3 mg/kg caspofungin daily for five days; single 5, 10 and 15 mg/kg caspofungin doses). For C. parapsilosis there were five treatment groups (no treatment; 3 and 4 mg/kg caspofungin daily for five days; single 15 and 20 mg/kg caspofungin). Tissue burdens of C. orthopsilosis and C. parapsilosis were significantly decreased by daily 3 mg/kg doses and by 10 or 15 mg/kg single caspofungin doses. Single large doses of caspofungin were comparable or sometimes superior to the lower daily-dose regimens against the “psilosis” group, supporting further studies with this therapeutic strategy.

  8. Formation of Large-Amplitude Wave Groups in an Experimental Model Basin

    Science.gov (United States)

    2008-08-01

    [No abstract indexed for this record; the available text consists only of list-of-figures fragments from the report, covering spectral analyses of Hurricane Camille wave-group runs (Phase II) and three-dimensional wave surface plots.]

  9. A model for small-group problem-based learning in a large class facilitated by one instructor.

    Science.gov (United States)

    Nicholl, Tessa A; Lou, Kelvin

    2012-08-10

    To implement and evaluate a model for small-group problem-based learning (PBL) in a large class facilitated by 1 instructor. A PBL model that included weekly assignments, quizzes, peer feedback, and case wrap-up sessions was developed and implemented in the final year of the pharmacy program to allow 1 instructor to facilitate PBL for up to 16 student teams in a large classroom. Student and team scores on multiple-choice examinations confirmed achievement of learning objectives. Students reported on course evaluation surveys that they were able to engage in the learning process and were satisfied with the new PBL model. This model achieved a cost savings of $42,000 per term. A revised PBL model without individual group tutors allowed students to achieve the required learning outcomes in an interactive and engaging atmosphere, avoided classroom-scheduling conflicts, and produced a large cost savings for the university.

  10. Quantitative Modeling of Membrane Transport and Anisogamy by Small Groups Within a Large-Enrollment Organismal Biology Course

    Directory of Open Access Journals (Sweden)

    Eric S. Haag

    2016-12-01

    Full Text Available Quantitative modeling is not a standard part of undergraduate biology education, yet is routine in the physical sciences. Because of the obvious biophysical aspects, classes in anatomy and physiology offer an opportunity to introduce modeling approaches into the introductory curriculum. Here, we describe two in-class exercises for small groups working within a large-enrollment introductory course in organismal biology. Both build and derive biological insights from quantitative models, implemented using spreadsheets. One exercise models the evolution of anisogamy (i.e., small sperm and large eggs) from an initial state of isogamy. Groups of four students work on Excel spreadsheets (from one to four laptops per group). The other exercise uses an online simulator to generate data related to membrane transport of a solute, and a cloud-based spreadsheet to analyze them. We provide tips for implementing these exercises gleaned from two years of experience.
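
The membrane-transport exercise amounts to iterating a first-order rate law, which a spreadsheet (or a few lines of code) can reproduce; the permeability, area and volume values below are illustrative, not from the course materials:

```python
def membrane_transport(c_in, c_out, permeability, area, volume, dt, steps):
    """Discrete-time update of intracellular solute concentration under
    passive diffusion across a membrane (Fick's law):
        dC_in/dt = (P * A / V) * (C_out - C_in)
    Each iteration mirrors one spreadsheet row; parameters are
    illustrative placeholders."""
    history = [c_in]
    k = permeability * area / volume  # lumped rate constant
    for _ in range(steps):
        c_in += dt * k * (c_out - c_in)
        history.append(c_in)
    return history

# Solute equilibrating into an initially empty compartment.
h = membrane_transport(c_in=0.0, c_out=10.0, permeability=1e-3,
                       area=100.0, volume=1.0, dt=1.0, steps=200)
```

Plotting `history` against step number reproduces the exponential approach to the external concentration that students fit from the simulator data.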

  11. Renormalization Group Flow, Stability, and Bulk Viscosity in a Large N Thermal QCD Model

    CERN Document Server

    Dasgupta, Keshav; Gale, Charles; Richard, Michael

    2016-01-01

    The ultraviolet completion of a large N QCD model requires introducing new degrees of freedom at a certain scale so that the UV behavior may become asymptotically conformal, with no Landau poles and no UV divergences of Wilson loops. These UV degrees of freedom are represented by certain anti-branes arranged on the blown-up sphere of a warped resolved conifold in such a way that they are separated from the other set of branes that control the IR behavior of the theory. This separation of the branes and the anti-branes creates instability in the theory. Further complications arise from the curvature of the ambient space. We show that, despite these analytical hurdles, stability may still be achieved by switching on appropriate world-volume fluxes on the branes. The UV degrees of freedom, on the other hand, modify the RG flow in the model. We discuss this in detail by evaluating the flow from IR confining to UV conformal. Finally, we lay down a calculational scheme to study bulk viscosity which, in turn, would signal t...

  12. Distributed Model Predictive Control over Multiple Groups of Vehicles in Highway Intelligent Space for Large Scale System

    Directory of Open Access Journals (Sweden)

    Tang Xiaofeng

    2014-01-01

    Full Text Available The paper presents three time-warning distances for the safe driving of multiple groups of vehicles, treated as a large-scale system, in a highway tunnel environment, based on a distributed model predictive control approach. The system includes two parts. First, the vehicles are divided into multiple groups, and the distributed model predictive control approach is used to calculate the information framework of each group. The optimization of each group considers both its local performance and that of the neighboring subgroup, which ensures global optimization performance. Second, the three time-warning distances are derived from the basic principles of the highway intelligent space (HIS), and the information framework concept is applied to the multiple groups of vehicles. A mathematical model is built to avoid chain collisions between vehicles. The results demonstrate that the proposed highway intelligent space method can effectively ensure the driving safety of multiple groups of vehicles in fog, rain, or snow.

  13. The Renormalization Group and Its Applications to Generating Coarse-Grained Models of Large Biological Molecular Systems.

    Science.gov (United States)

    Koehl, Patrice; Poitevin, Frédéric; Navaza, Rafael; Delarue, Marc

    2017-03-14

    Understanding the dynamics of biomolecules is the key to understanding their biological activities. Computational methods ranging from all-atom molecular dynamics simulations to coarse-grained normal-mode analyses based on simplified elastic networks provide a general framework for studying these dynamics. Despite recent successes in studying very large systems with up to 100,000,000 atoms, those methods are currently limited to small- to medium-sized molecular systems due to computational limitations. One solution to circumvent these limitations is to reduce the size of the system under study. In this paper, we argue that coarse-graining, the standard approach to such size reduction, must define a hierarchy of models of decreasing sizes that are consistent with each other, i.e., such that each model contains the information on the dynamics of its predecessor. We propose a new method, Decimate, for generating such a hierarchy within the context of elastic networks for normal-mode analysis. The method is based on the concept of the renormalization group developed in statistical physics. We highlight the details of its implementation, with a special focus on its scalability to large systems of up to millions of atoms. We illustrate its application on two large systems, the capsid of a virus and the ribosome translation complex. We show that highly decimated representations of those systems, containing down to 1% of their original number of atoms, still capture their dynamics qualitatively and quantitatively. Decimate is available as an open-source resource.

  14. Teacher-student co-construction processes in biology: Strategies for developing mental models in large group discussions

    Science.gov (United States)

    Nunez Oviedo, Maria Cecilia

    The aim of this study was to describe co-construction processes in large group discussions. Co-construction, as used here, is a process by which the teacher and the students work together to construct and evaluate mental models of a target concept. Data were collected for an in-depth case study of a single teacher instructing middle school students with an innovative curriculum on human respiration. Data came from transcripts of video taped lessons, drawings, and pre- and post-test scores. Quantitative and qualitative analyses were conducted. In the quantitative analysis, differences in gains between one and two standard deviations in size were found between the pre- and post-test scores indicating that the students increased their understanding about human respiration. In the qualitative analysis, a generative exploratory method followed by a convergent coded method was conducted to examine teacher-student interaction patterns. The aim of this part was to determine how learning occurred by attempting to connect dialogue patterns with underlying cognitive processes. The main outcome of the study is a hypothesized model containing four layers of nested teaching strategies. Listed from large to small time scales these are: the Macro Cycle, the Co-construction Modes, the Micro Cycle, and the Teaching Tactics. The most intensive analysis focused on identifying and articulating the Co-construction Modes---Accretion Mode, Disconfirmation Mode, Modification Mode, Evolution Mode, and Competition Mode---and their relations to the other levels of the model. These modes can either describe the construction and evaluation of individual model elements or of entire models giving a total of ten modes. The frequency of these co-construction modes was then determined by coding, twenty-six hours of transcripts. The most frequent modes were the Accretion Mode and the Disconfirmation Mode. 
The teacher's and the students' contributions to the co-construction process were also examined.

  15. Large-group psychodynamics and massive violence

    Directory of Open Access Journals (Sweden)

    Vamik D. Volkan

    2006-06-01

    Full Text Available Beginning with Freud, psychoanalytic theories concerning large groups have mainly focused on individuals' perceptions of what their large groups psychologically mean to them. This chapter examines some aspects of large-group psychology in its own right and studies the psychodynamics of ethnic, national, religious or ideological groups, the membership of which originates in childhood. I will compare the mourning process in individuals with the mourning process in large groups to illustrate why we need to study large-group psychology as a subject in itself. As part of this discussion I will also describe signs and symptoms of large-group regression. When there is a threat against a large group's identity, massive violence may be initiated, and this violence, in turn, has an obvious impact on public health.

  16. Bridging the gap from ocean models to population dynamics of large marine predators: A model of mid-trophic functional groups

    Science.gov (United States)

    Lehodey, Patrick; Murtugudde, Raghu; Senina, Inna

    2010-01-01

    The modeling of mid-trophic organisms of the pelagic ecosystem is a critical step in linking the coupled physical-biogeochemical models to population dynamics of large pelagic predators. Here, we provide an example of a modeling approach with definitions of several pelagic mid-trophic functional groups. This application includes six different groups characterized by their vertical behavior, i.e., occurrence of diel migration between epipelagic, mesopelagic and bathypelagic layers. Parameterization of the dynamics of these components is based on a temperature-linked time development relationship. Estimated parameters of this relationship are close to those predicted by a model based on a theoretical description of the allocation of metabolic energy at the cellular level, and that predicts a species metabolic rate in terms of its body mass and temperature. Then, a simple energy transfer from primary production is used, justified by the existence of constant slopes in log-log biomass size spectrum relationships. Recruitment, ageing, mortality and passive transport with horizontal currents, taking into account vertical behavior of organisms, are modeled by a system of advection-diffusion-reaction equations. Temperature and currents averaged in each vertical layer are provided independently by an Ocean General Circulation Model and used to drive the mid-trophic level (MTL) model. Simulation outputs are presented for the tropical Pacific Ocean to illustrate how different temperature and oceanic circulation conditions result in spatial and temporal lags between regions of high primary production and regions of aggregation of mid-trophic biomass. Predicted biomasses are compared against available data. Data requirements to evaluate outputs of these types of models are discussed, as well as the prospects that they offer both for ecosystem models of lower and upper trophic levels.
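
The advection-diffusion-reaction transport described above can be sketched with a one-dimensional explicit finite-difference step; this is a hedged illustration on a periodic domain, not the discretization used by the authors:

```python
import numpy as np

def step_adr(b, u, D, source, mortality, dx, dt):
    """One explicit step of a 1-D advection-diffusion-reaction equation
        db/dt = -u db/dx + D d2b/dx2 + source - mortality * b
    for a biomass field b, using upwind advection (u > 0) on a
    periodic domain."""
    adv = -u * (b - np.roll(b, 1)) / dx                       # upwind transport
    diff = D * (np.roll(b, -1) - 2.0 * b + np.roll(b, 1)) / dx**2  # mixing
    return b + dt * (adv + diff + source - mortality * b)
```

With source and mortality set to zero the scheme conserves total biomass exactly, a useful sanity check before adding the recruitment, ageing and mortality terms of the full model.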

  17. Renormalization group formulation of large eddy simulation

    Science.gov (United States)

    Yakhot, V.; Orszag, S. A.

    1985-01-01

    Renormalization group (RNG) methods are applied to eliminate small scales and construct a subgrid scale (SSM) transport eddy model for transition phenomena. The RNG and SSM procedures are shown to provide a more accurate description of viscosity near the wall than the Smagorinsky approach, and also generate far-field turbulence viscosity values which agree well with those of previous researchers. The elimination of small scales causes the simultaneous appearance of a random force and an eddy viscosity. The RNG method permits taking these into account, along with other phenomena (such as rotation), for large-eddy simulations.

  18. Secure Group Communications for Large Dynamic Multicast Group

    Institute of Scientific and Technical Information of China (English)

    Liu Jing; Zhou Mingtian

    2003-01-01

    As the major problem in multicast security, group key management has been the focus of research, but few results are satisfactory. In this paper, the problems of group key management and access control for large dynamic multicast groups are studied, and a solution based on SubGroup Secure Controllers (SGSCs) is presented, which solves many of the problems in the IOLUS system and the WGL scheme.

  19. Discrete Group Gas-Solid Two-Phase Flow Model and Its Simulation in the Large Caliber High Speed Davis Gun

    Institute of Scientific and Technical Information of China (English)

    Shuyuan Jiang and Hao Wang

    2016-01-01

    To capture the characteristics of long tubular powder, a one-dimensional discrete group gas-solid two-phase flow model was established for a large caliber high speed Davis gun with a tubular modular charge. In this model, the tubular modules are described in a Lagrangian frame without being assumed to be a pseudo-fluid, whereas the gas field is described in an Eulerian frame. The new model was used to simulate a 480 mm Davis gun. The simulation results were compared with test results, and the model was verified to be feasible. This study provides a new method for studying the interior ballistic performance of Davis guns.

  20. Collaboration within Large Groups in the Classroom

    Science.gov (United States)

    Szewkis, Eyal; Nussbaum, Miguel; Rosen, Tal; Abalos, Jose; Denardin, Fernanda; Caballero, Daniela; Tagle, Arturo; Alcoholado, Cristian

    2011-01-01

    The purpose of this paper is to show how a large group of students can work collaboratively in a synchronous way within the classroom using the cheapest possible technological support. Making use of the features of Single Display Groupware and of Multiple Mice we propose a computer-supported collaborative learning approach for big groups within…

  1. Working group report: Physics at the Large Hadron Collider

    Indian Academy of Sciences (India)

    D K Ghosh; A Nyffeler; V Ravindran

    2011-05-01

    This is a summary of the activities of the Physics at the LHC working group in the XIth Workshop on High Energy Physics Phenomenology (WHEPP-XI) held at the Physical Research Laboratory, Ahmedabad, India in January 2010. We discuss the activities of each sub-working group on physics issues at colliders such as the Tevatron and the Large Hadron Collider (LHC). The main issues discussed involve (1) results on W mass measurement and associated QCD uncertainties, (2) an attempt to understand the asymmetry measured at the Tevatron in top quark production, and (3) the phenomenology of a warped extra-dimension model.

  2. Anomalous scaling and large-scale anisotropy in magnetohydrodynamic turbulence: two-loop renormalization-group analysis of the Kazantsev-Kraichnan kinematic model.

    Science.gov (United States)

    Antonov, N V; Gulitskiy, N M

    2012-06-01

    The field theoretic renormalization group and operator product expansion are applied to the Kazantsev-Kraichnan kinematic model for the magnetohydrodynamic turbulence. The anomalous scaling emerges as a consequence of the existence of certain composite fields ("operators") with negative dimensions. The anomalous exponents for the correlation functions of arbitrary order are calculated in the two-loop approximation (second order of the renormalization-group expansion), including the anisotropic sectors. The anomalous scaling and the hierarchy of anisotropic contributions become stronger due to those second-order contributions.

  3. Modelling group dynamic animal movement

    DEFF Research Database (Denmark)

    Langrock, Roland; Hopcraft, J. Grant C.; Blackwell, Paul G.

    2014-01-01

    Group dynamic movement is a fundamental aspect of many species' movements. The need to adequately model individuals' interactions with other group members has been recognised, particularly in order to differentiate the role of social forces in individual movement from environmental factors. However, to date, practical statistical methods which can include group dynamics in animal movement models have been lacking. We consider a flexible modelling framework that distinguishes a group-level model, describing the movement of the group's centre, and an individual-level model, such that each individual makes its movement decisions relative to the group centroid. The basic idea is framed within the flexible class of hidden Markov models, extending previous work on modelling animal movement by means of multi-state random walks. While in simulation experiments parameter estimators exhibit some bias...

  4. Research on the Improvement of the Budget Evaluation and Incentive Mechanism for Large Group Companies

    Institute of Scientific and Technical Information of China (English)

    陈晶晶; 游凌

    2012-01-01

    Based on the characteristics and budget management requirements of large group companies, this paper introduces a multi-agent horizontal constraint mechanism to improve the existing incentive-compensation model. The aim is to curb budgetary slack and establish a rational and efficient budget evaluation and incentive mechanism; the paper concludes with management suggestions for the reference of large group companies.

  5. Managing Student Behavior during Large Group Guidance: What Works Best?

    Science.gov (United States)

    Quarto, Christopher J.

    2007-01-01

    Participants provided information pertaining to managing non-task-related behavior of students during large group guidance lessons. In particular, school counselors were asked how often they provide large group guidance, the frequency with which students exhibit off-task and/or disruptive behavior during guidance lessons, and techniques they…

  6. Implementing Small-Group Activities in Large Lecture Classes

    Science.gov (United States)

    Yazedjian, Ani; Kolkhorst, Brittany Boyle

    2007-01-01

    This study examines student perceptions regarding the effectiveness of small-group work in a large lecture class. The article considers and illustrates from students' perspectives the ways in which small-group activities could enhance comprehension of course material, reduce anonymity associated with large lecture classes, and promote student…

  7. Composite model with large mixing of neutrinos

    CERN Document Server

    Haba, N

    1999-01-01

    We suggest a simple composite model that induces large flavor mixing of neutrinos in supersymmetric theory. This model has only one hyper-color in addition to the standard gauge group, which makes composite states of preons. In this model, the 10 and 1 representations in SU(5) grand unified theory are composite states and produce the mass hierarchy. This explains why large mixing is realized in the lepton sector, while small mixing is realized in the quark sector. The model can naturally solve the atmospheric neutrino problem. We can also solve the solar neutrino problem by improving the model.

  8. Modeling Interactions in Small Groups

    Science.gov (United States)

    Heise, David R.

    2013-01-01

    A new theory of interaction within small groups posits that group members initiate actions when tension mounts between the affective meanings of their situational identities and impressions produced by recent events. Actors choose partners and behaviors so as to reduce the tensions. A computer model based on this theory, incorporating reciprocal…

  9. Going Spiral? Phenomena of "Half-Knowledge" in the Experiential Large Group as Temporary Learning Community

    Science.gov (United States)

    Adlam, John

    2014-01-01

    In this paper I use group-analytic, philosophical and psycho-social lenses to explore phenomena associated with the convening of an experiential large group within a two-day conference on the theme of "knowing and not-knowing". Drawing in particular on the work of Earl Hopper, two different models of large group convening--in which the…

  10. Large Unifying Hybrid Supernetwork Model

    Institute of Scientific and Technical Information of China (English)

    LIU Qiang; FANG Jin-qing; LI Yong

    2015-01-01

    To depict multi-hybrid processes, the large unifying hybrid network model (LUHNM) has two sub-hybrid ratios in addition to dr: the deterministic hybrid ratio (fd) and the random hybrid ratio (gr).

  11. Large N Expansion. Vector Models

    CERN Document Server

    Nissimov, Emil; Pacheva, Svetlana

    2006-01-01

    Preliminary version of a contribution to the "Quantum Field Theory. Non-Perturbative QFT" topical area of "Modern Encyclopedia of Mathematical Physics" (SELECTA), eds. Aref'eva I, and Sternheimer D, Springer (2007). Consists of two parts - "main article" (Large N Expansion. Vector Models) and a "brief article" (BPHZL Renormalization).

  12. Highly Scalable Trip Grouping for Large Scale Collective Transportation Systems

    DEFF Research Database (Denmark)

    Gidofalvi, Gyozo; Pedersen, Torben Bach; Risch, Tore

    2008-01-01

    Transportation-related problems, like road congestion, parking, and pollution, are increasing in most cities. In order to reduce traffic, recent work has proposed methods for vehicle sharing, for example sharing cabs by grouping "closeby" cab requests and thus minimizing transportation cost and utilizing cab space. However, the methods published so far do not scale to large data volumes, which is necessary to facilitate large-scale collective transportation systems, e.g., ride-sharing systems for large cities. This paper presents highly scalable trip grouping algorithms, which generalize previous…
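
    The "closeby" grouping idea can be sketched with a simple spatial-hashing scheme. This is an illustrative stand-in, not the paper's algorithm: the cell size, capacity, and function names below are assumptions.

```python
# Hypothetical sketch of grid-based trip grouping: requests whose pickup
# points fall in the same spatial cell are grouped up to a cab's capacity.
from collections import defaultdict

CELL = 0.01      # grid cell size in degrees (assumed tuning knob)
CAPACITY = 4     # assumed cab capacity

def group_trips(requests):
    """requests: iterable of (id, lat, lon); returns lists of grouped ids."""
    buckets = defaultdict(list)
    for rid, lat, lon in requests:
        # Hash each pickup to its nearest grid-cell index; this keeps
        # grouping O(n), the kind of step that makes such methods scale.
        buckets[(round(lat / CELL), round(lon / CELL))].append(rid)
    groups = []
    for ids in buckets.values():
        for i in range(0, len(ids), CAPACITY):
            groups.append(ids[i:i + CAPACITY])
    return groups
```

    Requests near a cell boundary can land in different cells; the published algorithms handle such cases and the cost model far more carefully than this sketch.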

  13. MODELLING OF ONLINE GROUP DISCOUNTS

    Directory of Open Access Journals (Sweden)

    Karlo Kotarac

    2013-02-01

    Full Text Available Web pages for group discounts have become very popular in the past few years. In this paper we concentrate on group discounts for the service industry, in which the quality of the service plays an important role in retaining customers, which in turn affects business profitability. We present a model of the group discount offer from a merchant's point of view. A merchant decides on the size of the discount offered, bearing in mind the quality of the service, which is affected by the number of customers who use it. Finally, we derive the first-order optimality conditions.
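
    The first-order-condition approach can be illustrated with a toy specification; the linear demand curve and all numbers below are my own assumptions, not the paper's model.

```python
# Toy group-discount model; functional form and numbers are illustrative
# assumptions, not the paper's specification.
p, c = 10.0, 4.0      # list price and unit cost (assumed)
a, b = 20.0, 200.0    # baseline demand and demand response to discount (assumed)

def profit(d):
    """Merchant profit at discount rate d in [0, 1], with demand linear in d."""
    return (p * (1 - d) - c) * (a + b * d)

# First-order condition: dprofit/dd = -p*(a + b*d) + b*(p*(1 - d) - c) = 0,
# which gives the interior optimum d* = (b*(p - c) - a*p) / (2*b*p).
d_star = (b * (p - c) - a * p) / (2 * b * p)

# Cross-check the closed form against a coarse grid search over [0, 1].
d_grid = max((i / 1000 for i in range(1001)), key=profit)
```

    With these numbers the FOC gives d* = 0.25; in a richer model the demand term would also depend on service quality, which falls with congestion.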

  14. Verbal Synchrony and Action Dynamics in Large Groups.

    Science.gov (United States)

    von Zimmermann, Jorina; Richardson, Daniel C

    2016-01-01

    While synchronized movement has been shown to increase liking and feelings of togetherness between people, we investigated whether collective speaking in time would change the way that larger groups played a video game together. Anthropologists have speculated that the function of interpersonal coordination in dance, chants, and singing is not just to produce warm, affiliative feelings, but also to improve group action. The group that chants and dances together hunts well together. Direct evidence for this is sparse, as research so far has mainly studied pairs, the effects of coordinated physical movement, and measured cooperation and affiliative decisions. In our experiment, large groups of people were given response handsets to play a computer game together, in which only joint coordinative efforts lead to success. Before playing, the synchrony of their verbal behavior was manipulated. After the game, we measured group members' affiliation toward their group, their performance on a memory task, and the way in which they played the group action task. We found that verbal synchrony in large groups produced affiliation, enhanced memory performance, and increased group members' coordinative efforts. Our evidence suggests that the effects of synchrony are stable across modalities, can be generalized to larger groups and have consequences for action coordination.

  15. Characteristic Properties of Large Subgroups in Primary Abelian Groups

    Indian Academy of Sciences (India)

    Peter V Danchev

    2004-08-01

    Suppose G is an arbitrary additively written primary abelian group with a fixed large subgroup L. It is shown that G is (a) summable; (b) Σ-summable; (c) a Σ-group; (d) p^(ω+1)-projective only when so is L. These claims extend results of such a kind obtained by Benabdallah, Eisenstadt, Irwin and Poluianov, Acta Math. Acad. Sci. Hungaricae (1970) and Khan, Proc. Indian Acad. Sci. Sect. A (1978).

  16. Will Large DSO-Managed Group Practices Be the Predominant Setting for Oral Health Care by 2025? Two Viewpoints: Viewpoint 1: Large DSO-Managed Group Practices Will Be the Setting in Which the Majority of Oral Health Care Is Delivered by 2025 and Viewpoint 2: Increases in DSO-Managed Group Practices Will Be Offset by Models Allowing Dentists to Retain the Independence and Freedom of a Traditional Practice.

    Science.gov (United States)

    Cole, James R; Dodge, William W; Findley, John S; Young, Stephen K; Horn, Bruce D; Kalkwarf, Kenneth L; Martin, Max M; Winder, Ronald L

    2015-05-01

    This Point/Counterpoint article discusses the transformation of dental practice from the traditional solo/small-group (partnership) model of the 1900s to large Dental Support Organizations (DSO) that support affiliated dental practices by providing nonclinical functions such as, but not limited to, accounting, human resources, marketing, and legal and practice management. Many feel that DSO-managed group practices (DMGPs) with employed providers will become the setting in which the majority of oral health care will be delivered in the future. Viewpoint 1 asserts that the traditional dental practice patterns of the past are shifting as many younger dentists gravitate toward employed positions in large group practices or the public sector. Although educational debt is relevant in predicting graduates' practice choices, other variables such as gender, race, and work-life balance play critical roles as well. Societal characteristics demonstrated by aging Gen Xers and those in the Millennial generation blend seamlessly with the opportunities DMGPs offer their employees. Viewpoint 2 contends the traditional model of dental care delivery (allowing entrepreneurial practitioners to make decisions in an autonomous setting) is changing, but not to the degree nor as rapidly as Viewpoint 1 professes. Millennials entering the dental profession, with characteristics universally attributed to their generation, see value in the independence and flexibility that a traditional practice allows. Although DMGPs provide dentists one option for practice, several alternative delivery models offer current dentists and future dental school graduates many of the advantages of DMGPs while allowing them to maintain the independence and freedom a traditional practice provides.

  17. Teaching English Pronunciation to Large Groups of Students: Some Suggestions.

    Science.gov (United States)

    Makarova, Veronika

    Problems in teaching English pronunciation to large groups of university students in Japan are discussed, and some solutions are offered. Pronunciation instruction requires close individual interaction between teacher and student, difficult if not impossible to achieve in a typical Japanese university classroom. However, it is possible to get…

  18. Models of large scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Frenk, C.S. (Physics Dept., Univ. of Durham (UK))

    1991-01-01

    The ingredients required to construct models of the cosmic large scale structure are discussed. Input from particle physics leads to a considerable simplification by offering concrete proposals for the geometry of the universe, the nature of the dark matter and the primordial fluctuations that seed the growth of structure. The remaining ingredient is the physical interaction that governs dynamical evolution. Empirical evidence provided by an analysis of a redshift survey of IRAS galaxies suggests that gravity is the main agent shaping the large-scale structure. In addition, this survey implies large values of the mean cosmic density, Ω ≳ 0.5, and is consistent with a flat geometry if IRAS galaxies are somewhat more clustered than the underlying mass. Together with current limits on the density of baryons from Big Bang nucleosynthesis, this lends support to the idea of a universe dominated by non-baryonic dark matter. Results from cosmological N-body simulations evolved from a variety of initial conditions are reviewed. In particular, neutrino dominated and cold dark matter dominated universes are discussed in detail. Finally, it is shown that apparent periodicities in the redshift distributions in pencil-beam surveys arise frequently from distributions which have no intrinsic periodicity but are clustered on small scales. (orig.).

  19. Five Large Generation Groups:Competing in Capital Operation

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Since the reform of the electric power industry in 2002, the newly established five large generation groups have persisted in the development strategy of "taking electricity as the core and extending to upstream and downstream businesses." Stringent measures were taken in capital operation, and their potential has been shown through acquisition of electric power assets, investment in coal and financial resources, capital market financing, and power utility restructuring. The five groups are playing ever more important roles in mergers and acquisitions (M&A) and in capital markets.

  20. Integrating Collaborative Learning Groups in the Large Enrollment Lecture

    Science.gov (United States)

    Adams, J. P.; Brissenden, G.; Lindell Adrian, R.; Slater, T. F.

    1998-12-01

    Recent reforms for undergraduate education propose that students should work in teams to solve problems that simulate those research scientists address. In the context of an innovative large-enrollment course at Montana State University, faculty have developed a series of 15 in-class, collaborative learning group activities that provide students with realistic scenarios to investigate. Focusing on a team approach, the four principal types of activities employed are historical, conceptual, process, and open-ended activities. Examples of these activities include classifying stellar spectra, characterizing galaxies, parallax measurements, estimating stellar radii, and correlating star colors with absolute magnitudes. Summative evaluation results from a combination of attitude surveys, astronomy concept examinations, and focus group interviews strongly suggest that, overall, students are learning more astronomy, believe that the group activities are valuable, enjoy the less-lecture course format, and have significantly higher attendance rates. In addition, class observations of 48 self-formed, collaborative learning groups reveal that female students are more engaged in single-gender learning groups than in mixed gender groups.

  1. The effect of continuous grouping of pigs in large groups on stress response and haematological parameters

    DEFF Research Database (Denmark)

    Damgaard, Birthe Marie; Studnitz, Merete; Jensen, Karin Hjelholt

    2009-01-01

    The consequences of an ‘all in-all out' static group of uniform age vs. a continuously dynamic group with litter introduction and exit every third week were examined with respect to stress response and haematological parameters in large groups of 60 pigs. The experiment included a total of 480 pigs, from weaning at the age of 4 weeks to the age of 18 weeks. Limited differences were found in stress and haematological parameters between pigs in dynamic and static groups. The cortisol response to the stress test increased with the duration of the stress test in pigs from…

  2. Automated group assignment in large phylogenetic trees using GRUNT: GRouping, Ungrouping, Naming Tool

    Directory of Open Access Journals (Sweden)

    Markowitz Victor M

    2007-10-01

    Full Text Available Abstract Background Accurate taxonomy is best maintained if species are arranged as hierarchical groups in phylogenetic trees. This is especially important as trees grow larger as a consequence of a rapidly expanding sequence database. Hierarchical group names are typically manually assigned in trees, an approach that becomes unfeasible for very large topologies. Results We have developed an automated iterative procedure for delineating stable (monophyletic) hierarchical groups in large (or small) trees and naming those groups according to a set of sequentially applied rules. In addition, we have created an associated ungrouping tool for removing existing groups that do not meet user-defined criteria (such as monophyly). The procedure is implemented in a program called GRUNT (GRouping, Ungrouping, Naming Tool) and has been applied to the current release of the Greengenes (Hugenholtz) 16S rRNA gene taxonomy comprising more than 130,000 taxa. Conclusion GRUNT will facilitate researchers requiring comprehensive hierarchical grouping of large tree topologies in, for example, database curation, microarray design and pangenome assignments. The application is available at the greengenes website.
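
    A minimal sketch of the grouping-and-naming idea, with a toy naming rule (parent name plus ".k" per child clade) standing in for GRUNT's richer, sequentially applied rules; the nested-list tree encoding and function names are my assumptions.

```python
# Sketch of automated hierarchical group naming in a phylogenetic tree.
# A tree is either a leaf name (str) or a list of subtrees; every internal
# node (clade) receives a hierarchical name derived from its parent's name.
def leaves(tree):
    """All leaf names under a node, in left-to-right order."""
    if isinstance(tree, str):
        return [tree]
    return [x for child in tree for x in leaves(child)]

def name_clades(tree, prefix="G", names=None):
    """Return {clade_name: member_leaves} for every internal node."""
    if names is None:
        names = {}
    if isinstance(tree, str):          # leaf: nothing to name
        return names
    names[prefix] = leaves(tree)       # name this clade after its lineage
    for k, child in enumerate(tree, start=1):
        name_clades(child, f"{prefix}.{k}", names)
    return names
```

    For the tree [["A", "B"], ["C", ["D", "E"]]], the clade containing D and E gets the name "G.2.2"; GRUNT's real rules additionally enforce stability and monophyly before a name is kept.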

  3. VLES Modelling with the Renormalization Group

    Institute of Scientific and Technical Information of China (English)

    Chris De Langhe; Bart Merci; Koen Lodefier; Erik Dick

    2003-01-01

    In a Very-Large-Eddy Simulation (VLES), the filterwidth-wavenumber can be outside the inertial range, and simple subgrid models have to be replaced by more complicated ('RANS-like') models which can describe the transport of the biggest eddies. One could approach this by using a RANS model in these regions and modifying the lengthscale in the model for the LES regions[1~3]. The problem with these approaches is that such models are specifically calibrated for RANS computations, and therefore not suitable for describing inertial-range quantities. We investigated the construction of subgrid viscosity and transport equations without any calibrated constants; instead, these are calculated directly from the Navier-Stokes equation by means of a Renormalization Group (RG) procedure. This leads to filterwidth-dependent transport equations and an effective viscosity with the right limiting behaviour (DNS and RANS limits).

  4. Large Representation Recurrences in Large N Random Unitary Matrix Models

    CERN Document Server

    Karczmarek, Joanna L

    2011-01-01

    In a random unitary matrix model at large N, we study the properties of the expectation value of the character of the unitary matrix in the rank k symmetric tensor representation. We address the problem of whether the standard semiclassical technique for solving the model in the large N limit can be applied when the representation is very large, with k of order N. We find that the eigenvalues do indeed localize on an extremum of the effective potential; however, for finite but sufficiently large k/N, it is not possible to replace the discrete eigenvalue density with a continuous one. Nonetheless, the expectation value of the character has a well-defined large N limit, and when the discreteness of the eigenvalues is properly accounted for, it shows an intriguing approximate periodicity as a function of k/N.

  5. Structuring very large domain models

    DEFF Research Database (Denmark)

    Störrle, Harald

    2010-01-01

    View/Viewpoint approaches like IEEE 1471-2000, or Kruchten's 4+1-view model, are used to structure software architectures at a high level of granularity. While research has focused on architectural languages and on consistency between multiple views, practical questions such as the structuring a…

  6. Very Large System Dynamics Models - Lessons Learned

    Energy Technology Data Exchange (ETDEWEB)

    Jacob J. Jacobson; Leonard Malczynski

    2008-10-01

    This paper provides lessons learned from developing several large system dynamics (SD) models. System dynamics modeling practice emphasizes the need to keep models small so that they are manageable and understandable. This practice is generally reasonable and prudent; however, there are times when large SD models are necessary. This paper outlines two large SD projects that were done at two Department of Energy National Laboratories, the Idaho National Laboratory and Sandia National Laboratories. This paper summarizes the models and then discusses some of the valuable lessons learned during these two modeling efforts.

  7. Large-cell Monte Carlo renormalization group for percolation

    Science.gov (United States)

    Reynolds, Peter J.; Stanley, H. Eugene; Klein, W.

    1980-02-01

    We obtain the critical parameters for the site-percolation problem on the square lattice to a high degree of accuracy (comparable to that of series expansions) by using a Monte Carlo position-space renormalization-group procedure directly on the site-occupation probability. Our method involves calculating recursion relations using progressively larger lattice rescalings, b. We find smooth sequences for the value of the critical percolation concentration pc(b) and for the scaling powers yp(b) and yh(b). Extrapolating these sequences to the limit b → ∞ leads to quite accurate numerical predictions. Further, by considering other weight functions or "rules" which also embody the essential connectivity feature of percolation, we find that the numerical results in the infinite-cell limit are in fact "rule independent." However, the actual fashion in which this limit is approached does depend upon the rule chosen. A connection between extrapolation of our renormalization-group results and finite-size scaling is made. Furthermore, the usual finite-size scaling arguments lead to independent estimates of pc and yp. Combining both the large-cell approach and the finite-size scaling results, we obtain yp = 0.7385 ± 0.0080 and yh = 1.898 ± 0.003. Thus we find αp = -0.708 ± 0.030, βp = 0.138 (+0.006, -0.005), γp = 2.432 ± 0.035, δp = 18.6 ± 0.6, νp = 1.354 ± 0.015, and 2-ηp = 1.796 ± 0.006. The site-percolation threshold is found for the square lattice at pc = 0.5931 ± 0.0006. We note that our calculated value of νp is in considerably better agreement with the proposal of Klein et al. that νp = ln√3/ln(3/2) ≅ 1.3548 than with den Nijs' recent conjecture, which predicts νp = 4/3. However, our results cannot entirely rule out the latter possibility.
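
    The large-cell idea can be illustrated with a small Monte Carlo sketch: estimate the probability R_b(p) that a b×b cell spans left-to-right, then locate the fixed point R_b(p*) = p*. The spanning rule, sample sizes, and bisection below are simplified assumptions, far cruder than the paper's calculation.

```python
# Monte Carlo position-space RG toy: fixed point of the cell spanning
# probability approximates the site-percolation threshold pc.
import random

def spans(grid, b):
    """True if occupied sites connect the left column to the right column."""
    seen = {(r, 0) for r in range(b) if grid[r][0]}
    stack = list(seen)
    while stack:
        r, c = stack.pop()
        if c == b - 1:
            return True
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < b and 0 <= nc < b and grid[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                stack.append((nr, nc))
    return False

def R(p, b, trials=1500, rng=random.Random(0)):
    """Monte Carlo estimate of the cell spanning probability R_b(p)."""
    hits = 0
    for _ in range(trials):
        grid = [[rng.random() < p for _ in range(b)] for _ in range(b)]
        hits += spans(grid, b)
    return hits / trials

def fixed_point(b, lo=0.4, hi=0.8):
    """Bisection for the renormalization fixed point R_b(p*) = p*."""
    for _ in range(15):
        mid = (lo + hi) / 2
        if R(mid, b) < mid:
            lo = mid       # spanning too unlikely: fixed point lies above
        else:
            hi = mid
    return (lo + hi) / 2
```

    For small b the fixed point overshoots pc ≈ 0.593; the paper's accuracy comes from extrapolating the sequence p*(b) to b → ∞.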

  8. Facilitating Active Engagement of the University Student in a Large-Group Setting Using Group Work Activities

    Science.gov (United States)

    Kinsella, Gemma K.; Mahon, Catherine; Lillis, Seamus

    2017-01-01

    It is envisaged that small-group exercises as part of a large-group session would facilitate not only group work exercises (a valuable employability skill), but also peer learning. In this article, such a strategy to facilitate the active engagement of the student in a large-group setting was explored. The production of student-led resources was…

  9. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  12. Modelling and simulations of macroscopic multi-group pedestrian flow

    CERN Document Server

    Mahato, Naveen K; Tiwari, Sudarshan

    2016-01-01

    We consider a multi-group microscopic model for pedestrian flow describing the behaviour of large groups. It is based on an interacting particle system coupled to an eikonal equation. Hydrodynamic multi-group models, as well as scalar multi-group models, are derived from the underlying particle system. The eikonal equation is used to compute optimal paths for the pedestrians. Particle methods are used to solve the macroscopic equations. Numerical test cases are investigated, and the models and, in particular, the resulting evacuation times are compared for a wide range of different parameters.

  13. Large groups in the Chile-UK quasar survey

    CERN Document Server

    Newman, Peter R.; Clowes, Roger G.; Campusano, Luis E.; Graham, Matthew J.

    1997-01-01

    The Chile-UK quasar survey, a new-generation 140 deg^2 UVX survey to B = 20, is now ~25 per cent complete. The catalogue currently contains 319 quasars and 93 emission-line galaxies. Using the minimal-spanning-tree method, we have independently confirmed the ~200 h^-1 Mpc group of quasars at z ≈ 1.3 discovered by Clowes & Campusano (1991). We have discovered a new ~150 h^-1 Mpc group of 13 quasars at median z ≈ 1.51. The null hypothesis of a uniform, random distribution is rejected at a level of significance of 0.003 for both groups.
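
    A minimal sketch of the minimal-spanning-tree group finder (in 2-D for brevity; the survey works with three-dimensional positions, and the linking length here is an assumed parameter, not the survey's value): build an MST over the points, cut edges longer than the linking length, and read off the connected fragments as candidate groups.

```python
# MST-based group finding: Prim's algorithm plus an edge-length cut.
import math

def mst_edges(points):
    """Prim's algorithm; returns the tree edges as (dist, i, j) tuples."""
    n = len(points)
    in_tree, edges = {0}, []
    best = {i: (math.dist(points[0], points[i]), 0) for i in range(1, n)}
    while len(in_tree) < n:
        j = min(best, key=lambda k: best[k][0])   # closest point to the tree
        d, i = best.pop(j)
        edges.append((d, i, j))
        in_tree.add(j)
        for k in best:                            # relax remaining distances
            dk = math.dist(points[j], points[k])
            if dk < best[k][0]:
                best[k] = (dk, j)
    return edges

def groups(points, linking_length):
    """Union points joined by MST edges no longer than linking_length."""
    parent = list(range(len(points)))
    def find(x):
        while parent[x] != x:
            parent[x] = x = parent[parent[x]]     # path halving
        return x
    for d, i, j in mst_edges(points):
        if d <= linking_length:
            parent[find(i)] = find(j)
    out = {}
    for i in range(len(points)):
        out.setdefault(find(i), []).append(i)
    return list(out.values())
```

    Assessing the significance of a group found this way requires comparison against random catalogues, as the abstract's 0.003 significance level indicates.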

  14. Phylogenetic invariants for group-based models

    CERN Document Server

    Donten-Bury, Maria

    2010-01-01

    In this paper we investigate properties of algebraic varieties representing group-based phylogenetic models. We give the first example of a nonnormal general group-based model for an abelian group. Following Kaie Kubjas, we also determine some invariants of group-based models, showing that the associated varieties do not have to be deformation equivalent. We propose a method of generating many phylogenetic invariants, and in particular we show that our approach gives the whole ideal of the claw tree for the 3-Kimura model under the assumption of the conjecture of Sturmfels and Sullivant. This, combined with the results of Sturmfels and Sullivant, would make it possible to determine all phylogenetic invariants for any tree for the 3-Kimura model, and possibly for other group-based models.

  15. Large-Group Fantasies versus Single-Subject Science.

    Science.gov (United States)

    Da Prato, Robert A.

    1992-01-01

    This paper argues that judgment-based assessment of data from multiply replicated single-subject or small-N studies should replace normative-based (p < 0.05) assessment of large-N research in the clinical sciences, and asserts that inferential statistics should be abandoned as a method of evaluating clinical research data. (Author/JDD)

  16. Refactoring Process Models in Large Process Repositories.

    NARCIS (Netherlands)

    Weber, B.; Reichert, M.U.

    2008-01-01

    With the increasing adoption of process-aware information systems (PAIS), large process model repositories have emerged. Over time respective models have to be re-aligned to the real-world business processes through customization or adaptation. This bears the risk that model redundancies are introduced…

  18. Unilateral neglect and perceptual parsing: a large-group study.

    Science.gov (United States)

    Neppi-Mòdona, Marco; Savazzi, Silvia; Ricci, Raffaella; Genero, Rosanna; Berruti, Giuseppina; Pepi, Riccardo

    2002-01-01

    Array-centred and subarray-centred neglect were disambiguated in a group of 116 patients with left neglect by means of a modified version of the Albert test in which the central column of segments was deleted so as to create two separate sets of targets grouped by proximity. The results indicated that neglect was more frequent in array- than subarray-centred coordinates and that, in a minority of cases, neglect co-occurred in both coordinate-systems. The two types of neglect were functionally but not anatomically dissociated. Presence of visual field defects was not prevalent in one type of neglect with respect to the other. These data contribute further evidence to previous single-case and small-group studies by showing that neglect can occur in single or multiple reference frames simultaneously, in agreement with current neuropsychological, neurophysiological and computational concepts of space representation.

  19. Evolutionary models of in-group favoritism.

    Science.gov (United States)

    Masuda, Naoki; Fu, Feng

    2015-01-01

    In-group favoritism is the tendency for individuals to cooperate with in-group members more strongly than with out-group members. Similar concepts have been described across different domains, including in-group bias, tag-based cooperation, parochial altruism, and ethnocentrism. Both humans and other animals show this behavior. Here, we review evolutionary mechanisms for explaining this phenomenon by covering recently developed mathematical models. In fact, in-group favoritism is not easily realized on its own in theory, although it can evolve under some conditions. We also discuss the implications of these modeling results in future empirical and theoretical research.

  20. Commons Dilemma Choices in Small vs. Large Groups.

    Science.gov (United States)

    Powers, Richard B.; Boyle, William

    The purpose of the Commons Game is to teach students how social traps work; that is, that short-term individual gain tends to dominate long-term collective gain. Simulations of Commons Dilemma have grown considerably in the last decade; however, the research has used small face-to-face groups to study behavior in the Commons. To compare the…

  1. Curvature Properties of Lorentzian Manifolds with Large Isometry Groups

    Energy Technology Data Exchange (ETDEWEB)

    Batat, Wafaa [Ecole Normale Superieure de l'Enseignement Technique d'Oran, Departement de Mathematiques et Informatique (Algeria)], E-mail: wafa.batat@enset-oran.dz; Calvaruso, Giovanni, E-mail: giovanni.calvaruso@unile.it; Leo, Barbara De [University of Salento, Dipartimento di Matematica 'E. De Giorgi' (Italy)], E-mail: barbara.deleo@unile.it

    2009-08-15

    The curvature of Lorentzian manifolds (M^n, g), admitting a group of isometries of dimension at least n(n - 1)/2 + 1, is completely described. Interesting behaviours are found, in particular as concerns local symmetry, local homogeneity and conformal flatness.

  2. Conjugacy in relatively extra-large Artin groups

    Directory of Open Access Journals (Sweden)

    Arye Juhasz

    2015-09-01

    Full Text Available Let A be an Artin group with standard generators X = {x_1, …, x_n}, n ≥ 1, and defining graph Γ_A. A standard parabolic subgroup of A is a subgroup generated by a subset of X. For elements u and v of A we say (as usual) that u is conjugate to v by an element h of A if h^{-1}uh = v holds in A. Similarly, if K and L are subsets of A, then K is conjugate to L by an element h of A if h^{-1}Kh = L. In this work we consider the conjugacy of elements and standard parabolic subgroups of a certain type of Artin group. Results in this direction occur in papers by Duncan, Kazachkov, Remeslennikov, Fenn, Dale, Jun, Godelle, Gonzalez-Meneses, Wiest, Paris and Rolfsen, for example. Of particular interest are centralisers of elements and of standard parabolic subgroups, normalisers of standard parabolic subgroups, and commensurators of parabolic subgroups. In this work we consider similar problems in a new class of Artin groups introduced in the paper "On relatively extralarge Artin groups and their relative asphericity" by Juhasz, where, among other things, the word problem is solved. Intersections of parabolic subgroups and their conjugates are also considered.

  4. Getting Started: Informal Small-Group Strategies in Large Classes.

    Science.gov (United States)

    Cooper, James L.; Robinson, Pamela

    2000-01-01

    Describes a number of informal "turn-to-your-neighbor" approaches that create an active learning environment in college lecture settings. These include: launching class in discussion, breaking up the lecture for comprehension checks, closing class with small-group conversation, and debriefing exams. (DB)

  5. Comparing linear probability model coefficients across groups

    DEFF Research Database (Denmark)

    Holm, Anders; Ejrnæs, Mette; Karlson, Kristian Bernt

    2015-01-01

    This article offers a formal identification analysis of the problem in comparing coefficients from linear probability models between groups. We show that differences in coefficients from these models can result not only from genuine differences in effects, but also from differences in one or more of the following three components: outcome truncation, scale parameters and distributional shape of the predictor variable. These results point to limitations in using linear probability model coefficients for group comparisons. We also provide Monte Carlo simulations and real examples to illustrate these limitations, and we suggest a restricted approach to using linear probability model coefficients in group comparisons.
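
    The scale-parameter component can be demonstrated with a small Monte Carlo sketch in the spirit of the article's simulations (the latent-index data-generating process and all numbers are my assumptions, not the article's): two groups share the same latent effect, yet the group with the larger residual scale yields a smaller linear-probability-model slope.

```python
# Same latent beta, different residual scale -> different LPM slopes.
import random

rng = random.Random(0)
n, beta = 100_000, 1.0

def lpm_slope(sigma):
    """OLS slope of binary y on x when y = 1(beta*x + sigma*e > 0)."""
    xs = [rng.gauss(0, 1) for _ in range(n)]
    ys = [1.0 if beta * x + sigma * rng.gauss(0, 1) > 0 else 0.0 for x in xs]
    mx, my = sum(xs) / n, sum(ys) / n
    # LPM coefficient as cov(x, y) / var(x), the OLS closed form.
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    var = sum((x - mx) ** 2 for x in xs) / n
    return cov / var

b_narrow = lpm_slope(1.0)   # group 1: residual scale 1
b_wide = lpm_slope(3.0)     # group 2: same beta, residual scale 3
```

    Here b_narrow is roughly twice b_wide even though beta is identical in both groups, which is exactly why naive cross-group comparison of LPM coefficients can mislead.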

  6. Working group report: Dictionary of Large Hadron Collider signatures

    Indian Academy of Sciences (India)

    A Belyaev; I A Christidi; A De Roeck; R M Godbole; B Mellado; A Nyffeler; C Petridou; D P Roy

    2009-01-01

    We report on a plan to establish a `Dictionary of LHC Signatures', an initiative that started at the WHEPP-X workshop in Chennai, January 2008. This study aims at a strategy for distinguishing three classes of dark-matter-motivated scenarios, namely R-parity conserving supersymmetry, little Higgs models with T-parity conservation, and universal extra dimensions with KK-parity, for generic cases of their realization in a wide range of the model space. Discriminating signatures are tabulated and will need a further detailed analysis.

  7. Efficient Use of Interactive Video with Large Groups.

    Science.gov (United States)

    Jones, Brian

    1993-01-01

    Reports on ways in which interactive video-based courseware is being used with students studying for vocational qualifications at Thames Valley University (United Kingdom). Two alternative models using interactive video are described, one using multiple workstations and one using a single workstation led by a tutor. (Contains six references.) (LRW)

  8. Risk contracting and operational capabilities in large medical groups during national healthcare reform.

    Science.gov (United States)

    Mechanic, Robert E; Zinner, Darren

    2016-06-01

    Little is known about the scope of alternative payment models outside of Medicare. This study measures the full complement of public and private payment arrangements in large, multi-specialty group practices as a barometer of payment reform among advanced organizations. We collected information from 33 large, multi-specialty group practices about the proportion of their total revenue in 7 payment models, physician compensation strategies, and the implementation of selected performance management initiatives. We grouped respondents into 3 categories based on the proportion of their revenue in risk arrangements: risk-based (45%-100%), mixed (10%-35%), and fee-for-service (FFS) (0%-10%). We analyzed changes in contracting and operating characteristics between 2011 and 2013. In 2013, 68% of groups' total patient revenue was from FFS payments and 32% was from risk arrangements (unweighted average). Risk-based groups had 26% FFS revenue, whereas mixed-payment and FFS groups had 75% and 98%, respectively. Between 2011 and 2013, 9 groups increased risk contract revenue by about 15 percentage points and 22 reported few changes. Risk-based groups reported more advanced implementation of performance management strategies and were more likely to have physician financial incentives for quality and patient experience. The groups in this study are well positioned to manage risk-based contracts successfully, but less than one-third receive a majority of their revenue from risk arrangements. The experience of these relatively advanced groups suggests that expanding risk-based arrangements across the US health system will likely be slower and more challenging than many people assume.

  9. Spin Foam Models with Finite Groups

    Directory of Open Access Journals (Sweden)

    Benjamin Bahr

    2013-01-01

    Spin foam models, loop quantum gravity, and group field theory are discussed as quantum gravity candidate theories and usually involve a continuous Lie group. We advocate here to consider quantum gravity-inspired models with finite groups, firstly as a test bed for the full theory and secondly as a class of new lattice theories possibly featuring an analogue diffeomorphism symmetry. To make these notes accessible to readers outside the quantum gravity community, we provide an introduction to some essential concepts in the loop quantum gravity, spin foam, and group field theory approach and point out the many connections to lattice field theory and condensed-matter systems.

  10. Spin foam models with finite groups

    CERN Document Server

    Bahr, Benjamin; Ryan, James P

    2011-01-01

    Spin foam models, loop quantum gravity and group field theory are discussed as quantum gravity candidate theories and usually involve a continuous Lie group. We advocate here to consider quantum gravity inspired models with finite groups, firstly as a test bed for the full theory and secondly as a class of new lattice theories possibly featuring an analogue diffeomorphism symmetry. To make these notes accessible to readers outside the quantum gravity community we provide an introduction to some essential concepts in the loop quantum gravity, spin foam and group field theory approach and point out the many connections to lattice field theory and condensed matter systems.

  11. Application of renormalization group theory to the large-eddy simulation of transitional boundary layers

    Science.gov (United States)

    Piomelli, Ugo; Zang, Thomas A.; Speziale, Charles G.; Lund, Thomas S.

    1990-01-01

    An eddy viscosity model based on the renormalization group theory of Yakhot and Orszag (1986) is applied to the large-eddy simulation of transition in a flat-plate boundary layer. The simulation predicts with satisfactory accuracy the mean velocity and Reynolds stress profiles, as well as the development of the important scales of motion. The evolution of the structures characteristic of the nonlinear stages of transition is also predicted reasonably well.

  12. An experiential group model for psychotherapy supervision.

    Science.gov (United States)

    Altfeld, D A

    1999-04-01

    This article presents an experiential group model of supervision constructed for both group and individual therapy presentations, emphasizing concepts from object relations theory and group-as-a-whole dynamics. It focuses on intrapsychic, interpersonal, and systems processes, and stresses the group aspect of the supervisory process. Its central thesis is that material presented in a group supervisory setting stimulates conscious and unconscious parallel processes in group members. Through here-and-now responses, associations, and interactions among the supervisory members, countertransference issues that have eluded the presenter can make themselves known and be worked through on emotional as well as cognitive levels. Selected excerpts from supervisory sessions demonstrate various attributes and strengths of the model.

  13. Model of trust in work groups

    Directory of Open Access Journals (Sweden)

    Sidorenkov, Andrey V.

    2013-09-01

    A multi-dimensional model of trust in a small group has been developed and validated. This model includes two dimensions: trust levels (interpersonal trust, micro-group trust, group trust, trust between subgroups, and trust between subgroups and the group) and types of trust (activity-coping, information-influential and confidentially-protective trust). Each level of trust is manifested in three types, so there are fifteen varieties of trust. Two corresponding questionnaires were developed for the study. 347 persons from 32 work groups participated in the research. It was determined that in a small group there is an asymmetry of trust levels within the group. In particular, micro-group trust is manifested most strongly in comparison with other trust levels. There is also an asymmetry in the manifestation of interpersonal trust in the group structure: in informal subgroups, in comparison with the group as a whole, interpersonal confidential and performance trust are manifested most. In a small group and in informal subgroups there are relationships between trust levels which follow certain regularities.

  14. Point groups in the Vibron model

    Energy Technology Data Exchange (ETDEWEB)

    Leviatan, A.

    1989-08-01

    The question of incorporating the notion of point groups in the algebraic Vibron model for molecular rotation-vibration spectra is addressed. Boson transformations which act on intrinsic states are identified as the algebraic analog of the discrete point-group transformations. A prescription for assigning point-group labels to states of the Vibron model is obtained. In the case of nonlinear triatomic molecules, the Jacobi coordinates are found to be a convenient choice for the geometric counterparts of the algebraic shape parameters. The work focuses on rigid diatomic and triatomic molecules (linear and bent).

  15. Laboratory Modeling of Aspects of Large Fires,

    Science.gov (United States)

    1984-04-30

    Report DNA-TR-84-18, Laboratory Modeling of Aspects of Large Fires; personal authors: G.F. Carrier, F.E. Fendell, R.D. Fleeter.

  16. Large scale topic modeling made practical

    DEFF Research Database (Denmark)

    Wahlgreen, Bjarne Ørum; Hansen, Lars Kai

    2011-01-01

    Topic models are of broad interest. They can be used for query expansion and result structuring in information retrieval and as an important component in services such as recommender systems and user adaptive advertising. In large scale applications both the size of the database (number of documents) ... topics at par with a much larger case specific vocabulary.

  17. Merger acquisition or independence--small group decision--large group impact.

    Science.gov (United States)

    Erins, D C

    1995-01-01

    As this group practice looked to the future, two alternatives were considered--remain independent or merge with another entity. Remaining independent, although desirable, would be extremely difficult, so this group looked for a mutually beneficial affiliation. This case study details the beginning-to-end affiliation process--from seeking potential partners to signing the papers: the keys to success and potential dealbreakers.

  18. Large group size yields group stability in the cooperatively breeding cichlid Neolamprologus pulcher

    NARCIS (Netherlands)

    Heg, D; Bachar, Z; Taborsky, M

    2005-01-01

    Group size has been shown to positively influence survival of group members in many cooperatively breeding vertebrates, including the Lake Tanganyika cichlid Neolamprologus pulcher, suggesting Allee effects. However, long-term data are scarce to test how these survival differences translate into cha

  19. Modeling capillary forces for large displacements

    NARCIS (Netherlands)

    Mastrangeli, M.; Arutinov, G.; Smits, E.C.P.; Lambert, P.

    2014-01-01

    Originally applied to the accurate, passive positioning of submillimetric devices, recent works proved capillary self-alignment as effective also for larger components and relatively large initial offsets. In this paper, we describe an analytic quasi-static model of 1D capillary restoring forces tha

  20. Pronunciation Modeling for Large Vocabulary Speech Recognition

    Science.gov (United States)

    Kantor, Arthur

    2010-01-01

    The large pronunciation variability of words in conversational speech is one of the major causes of low accuracy in automatic speech recognition (ASR). Many pronunciation modeling approaches have been developed to address this problem. Some explicitly manipulate the pronunciation dictionary as well as the set of the units used to define the…

  1. Inverse modeling for Large-Eddy simulation

    NARCIS (Netherlands)

    Geurts, Bernardus J.

    1998-01-01

    Approximate higher order polynomial inversion of the top-hat filter is developed with which the turbulent stress tensor in Large-Eddy Simulation can be consistently represented using the filtered field. Generalized (mixed) similarity models are proposed which improved the agreement with the kinetic

  2. Recursive renormalization group theory based subgrid modeling

    Science.gov (United States)

    Zhou, YE

    1991-01-01

    Advancing the knowledge and understanding of turbulence theory is addressed. Specific problems to be addressed will include studies of subgrid models to understand the effects of unresolved small scale dynamics on the large scale motion which, if successful, might substantially reduce the number of degrees of freedom that need to be computed in turbulence simulation.

  3. Spatial occupancy models for large data sets

    Science.gov (United States)

    Johnson, Devin S.; Conn, Paul B.; Hooten, Mevin B.; Ray, Justina C.; Pond, Bruce A.

    2013-01-01

    Since its development, occupancy modeling has become a popular and useful tool for ecologists wishing to learn about the dynamics of species occurrence over time and space. Such models require presence–absence data to be collected at spatially indexed survey units. However, only recently have researchers recognized the need to correct for spatially induced overdisperison by explicitly accounting for spatial autocorrelation in occupancy probability. Previous efforts to incorporate such autocorrelation have largely focused on logit-normal formulations for occupancy, with spatial autocorrelation induced by a random effect within a hierarchical modeling framework. Although useful, computational time generally limits such an approach to relatively small data sets, and there are often problems with algorithm instability, yielding unsatisfactory results. Further, recent research has revealed a hidden form of multicollinearity in such applications, which may lead to parameter bias if not explicitly addressed. Combining several techniques, we present a unifying hierarchical spatial occupancy model specification that is particularly effective over large spatial extents. This approach employs a probit mixture framework for occupancy and can easily accommodate a reduced-dimensional spatial process to resolve issues with multicollinearity and spatial confounding while improving algorithm convergence. Using open-source software, we demonstrate this new model specification using a case study involving occupancy of caribou (Rangifer tarandus) over a set of 1080 survey units spanning a large contiguous region (108 000 km2) in northern Ontario, Canada. Overall, the combination of a more efficient specification and open-source software allows for a facile and stable implementation of spatial occupancy models for large data sets.
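The core correction that occupancy models make can be shown in a toy sketch (a basic MacKenzie-style likelihood with a grid search over hypothetical parameters, not the paper's probit spatial specification): naive occupancy from raw detections is biased low under imperfect detection, while jointly estimating occupancy and detection recovers the true rate.

```python
import math
import random

def simulate(n_sites, n_visits, psi, p, rng):
    # z ~ Bernoulli(psi) is true occupancy; detections happen only when z = 1
    return [sum(rng.random() < p for _ in range(n_visits)) if rng.random() < psi else 0
            for _ in range(n_sites)]

def log_lik(psi, p, detections, n_visits):
    ll = 0.0
    for d in detections:
        if d > 0:  # at least one detection: site certainly occupied
            ll += math.log(psi) + d * math.log(p) + (n_visits - d) * math.log(1 - p)
        else:      # never detected: occupied-but-missed, or truly empty
            ll += math.log(psi * (1 - p) ** n_visits + 1 - psi)
    return ll

rng = random.Random(1)
J = 4
dets = simulate(800, J, psi=0.6, p=0.4, rng=rng)
naive = sum(d > 0 for d in dets) / len(dets)   # ignores imperfect detection
grid = [i / 50 for i in range(1, 50)]
psi_hat, p_hat = max(((a, b) for a in grid for b in grid),
                     key=lambda ab: log_lik(ab[0], ab[1], dets, J))
```

With these settings the naive estimate sits well below the true occupancy of 0.6, while the joint maximum-likelihood estimate lands close to it.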

  4. Group Buying Schemes : A Sustainable Business Model?

    OpenAIRE

    Köpp, Sebastian; Mukhachou, Aliaksei; Schwaninger, Markus

    2013-01-01

    Die Autoren gehen der Frage nach, ob "Group Buying Schemes" wie beispielsweise von den Unternehmen Groupon und Dein Deal angeboten, ein nachhaltiges Geschäftsmodell sind. Anhand der Fallstudie Groupon wird mit einem System Dynamics Modell festgestellt, dass das Geschäftsmodell geändert werden muss, wenn die Unternehmung auf Dauer lebensfähig sein soll. The authors examine if group buying schemes are a sustainable business model. By means of the Groupon case study and using a System Dynami...

  5. Group Centric Networking: Large Scale Over the Air Testing of Group Centric Networking

    Science.gov (United States)

    2016-11-01

    performance of Group Centric Networking (GCN), a networking protocol developed for robust and scalable communications in lossy networks where users are... landscape. Due to these fundamental challenges, robustly connecting users together in military tactical networks is still an open problem. A similar problem... users that share a common set of interests communicate in a collaborative manner. This exploits the nature of the underlying network. For instance, in

  6. Computational Modeling of Large Wildfires: A Roadmap

    KAUST Repository

    Coen, Janice L.

    2010-08-01

    Wildland fire behavior, particularly that of large, uncontrolled wildfires, has not been well understood or predicted. Our methodology to simulate this phenomenon uses high-resolution dynamic models made of numerical weather prediction (NWP) models coupled to fire behavior models to simulate fire behavior. NWP models are capable of modeling very high resolution (< 100 m) atmospheric flows. The wildland fire component is based upon semi-empirical formulas for fireline rate of spread, post-frontal heat release, and a canopy fire. The fire behavior is coupled to the atmospheric model such that low level winds drive the spread of the surface fire, which in turn releases sensible heat, latent heat, and smoke fluxes into the lower atmosphere, feeding back to affect the winds directing the fire. These coupled dynamic models capture the rapid spread downwind, flank runs up canyons, bifurcations of the fire into two heads, and rough agreement in area, shape, and direction of spread at periods for which fire location data is available. Yet, intriguing computational science questions arise in applying such models in a predictive manner, including physical processes that span a vast range of scales, processes such as spotting that cannot be modeled deterministically, estimating the consequences of uncertainty, the efforts to steer simulations with field data ("data assimilation"), lingering issues with short term forecasting of weather that may show skill only on the order of a few hours, and the difficulty of gathering pertinent data for verification and initialization in a dangerous environment. © 2010 IEEE.
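The two-way coupling described above can be caricatured in a few lines (a toy illustration with invented coefficients, not the NWP-fire system itself): wind sets the rate of spread, spread sets the heat release, and heat release strengthens the local wind that directs the fire.

```python
def simulate_fire(steps, coupled=True, dt=1.0):
    wind = 3.0    # ambient low-level wind (arbitrary units)
    front = 0.0   # fire-front position
    for _ in range(steps):
        spread = 0.1 + 0.05 * wind      # semi-empirical rate of spread
        front += spread * dt            # surface fire advances with the wind
        if coupled:
            heat = 2.0 * spread         # post-frontal heat release
            wind = 3.0 + 0.3 * heat     # heat flux strengthens the local wind
    return front

coupled = simulate_fire(100)
uncoupled = simulate_fire(100, coupled=False)
# the two-way coupling pushes the front farther than the one-way run
```

Even this caricature shows why one-way (uncoupled) fire models underpredict spread once the fire is intense enough to modify its own wind field.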

  7. Affine Poisson Groups and WZW Model

    Directory of Open Access Journals (Sweden)

    Ctirad Klimcík

    2008-01-01

    We give a detailed description of a dynamical system which enjoys a Poisson-Lie symmetry with two non-isomorphic dual groups. The system is obtained by taking the q → ∞ limit of the q-deformed WZW model and the understanding of its symmetry structure results in uncovering an interesting duality of its exchange relations.

  8. An economic model of large Medicaid practices.

    Science.gov (United States)

    Cromwell, J; Mitchell, J B

    1984-06-01

    Public attention given to Medicaid "mills" prompted this more general investigation of the origins of large Medicaid practices. A dual market demand model is proposed showing how Medicaid competes with private insurers for scarce physician time. Various program parameters--fee schedules, coverage, collection costs--are analyzed along with physician preferences, specialties, and other supply-side characteristics. Maximum likelihood techniques are used to test the model. The principal finding is that in raising Medicaid fees, as many physicians opt into the program as expand their Medicaid caseloads to exceptional levels, leaving the maldistribution of patients unaffected while notably improving access. Still, the fact that Medicaid fees are lower than those of private insurers does lead to reduced access to more qualified practitioners. Where anti-Medicaid sentiment is stronger, access is also reduced and large Medicaid practices more likely to flourish.

  9. Engineering large animal models of human disease.

    Science.gov (United States)

    Whitelaw, C Bruce A; Sheets, Timothy P; Lillico, Simon G; Telugu, Bhanu P

    2016-01-01

    The recent development of gene editing tools and methodology for use in livestock enables the production of new animal disease models. These tools facilitate site-specific mutation of the genome, allowing animals carrying known human disease mutations to be produced. In this review, we describe the various gene editing tools and how they can be used for a range of large animal models of diseases. This genomic technology is in its infancy but the expectation is that through the use of gene editing tools we will see a dramatic increase in animal model resources available for both the study of human disease and the translation of this knowledge into the clinic. Comparative pathology will be central to the productive use of these animal models and the successful translation of new therapeutic strategies.

  10. A Life-Cycle Model of Human Social Groups Produces a U-Shaped Distribution in Group Size.

    Directory of Open Access Journals (Sweden)

    Gul Deniz Salali

    One of the central puzzles in the study of sociocultural evolution is how and why transitions from small-scale human groups to large-scale, hierarchically more complex ones occurred. Here we develop a spatially explicit agent-based model as a first step towards understanding the ecological dynamics of small and large-scale human groups. By analogy with the interactions between single-celled and multicellular organisms, we build a theory of group lifecycles as an emergent property of single cell demographic and expansion behaviours. We find that once the transition from small-scale to large-scale groups occurs, a few large-scale groups continue expanding while small-scale groups gradually become scarcer, and large-scale groups become larger in size and fewer in number over time. Demographic and expansion behaviours of groups are largely influenced by the distribution and availability of resources. Our results conform to a pattern of human political change in which religions and nation states come to be represented by a few large units and many smaller ones. Future enhancements of the model should include decision-making rules and probabilities of fragmentation for large-scale societies. We suggest that the synthesis of population ecology and social evolution will generate increasingly plausible models of human group dynamics.
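The cell-division analogy can be sketched in a drastically simplified, non-spatial form (invented parameters, purely illustrative of the lifecycle idea, not the paper's model): groups grow with the resources they capture and split in two once they pass a size threshold, so a handful of founders seeds many descendant groups.

```python
import random

def step(groups, rng, growth=1.3, fission_size=100.0):
    # each group grows with the resources it captures; past the size
    # threshold it splits in two, by analogy with cell division
    new = []
    for size in groups:
        size *= rng.uniform(1.0, growth)
        if size >= fission_size:
            new.extend([size / 2, size / 2])
        else:
            new.append(size)
    return new

rng = random.Random(7)
groups = [10.0] * 5   # five small founder groups
for _ in range(50):
    groups = step(groups, rng)
# many groups now exist, none above the fission threshold
```

A spatially explicit version with resource maps and fragmentation probabilities, as the abstract suggests, would be the natural next step beyond this sketch.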

  11. Large-scale multimedia modeling applications

    Energy Technology Data Exchange (ETDEWEB)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications.

  12. Large genetic animal models of Huntington's Disease.

    Science.gov (United States)

    Morton, A Jennifer; Howland, David S

    2013-01-01

    The dominant nature of the Huntington's disease gene mutation has allowed genetic models to be developed in multiple species, with the mutation causing an abnormal neurological phenotype in all animals in which it is expressed. Many different rodent models have been generated. The most widely used of these, the transgenic R6/2 mouse, carries the mutation in a fragment of the human huntingtin gene and has a rapidly progressive and fatal neurological phenotype with many relevant pathological changes. Nevertheless, their rapid decline has been frequently questioned in the context of a disease that takes years to manifest in humans, and strenuous efforts have been made to make rodent models that are genetically more 'relevant' to the human condition, including full length huntingtin gene transgenic and knock-in mice. While there is no doubt that we have learned, and continue to learn much from rodent models, their usefulness is limited by two species constraints. First, the brains of rodents differ significantly from humans in both their small size and their neuroanatomical organization. Second, rodents have much shorter lifespans than humans. Here, we review new approaches taken to these challenges in the development of models of Huntington's disease in large-brained, long-lived animals. We discuss the need for such models, and how they might be used to fill specific niches in preclinical Huntington's disease research, particularly in testing gene-based therapeutics. We discuss the advantages and disadvantages of animals in which the prodromal period of disease extends over a long time span. We suggest that there is considerable 'value added' for large animal models in preclinical Huntington's disease research.

  13. Identifiability of large phylogenetic mixture models.

    Science.gov (United States)

    Rhodes, John A; Sullivant, Seth

    2012-01-01

    Phylogenetic mixture models are statistical models of character evolution allowing for heterogeneity. Each of the classes in some unknown partition of the characters may evolve by different processes, or even along different trees. Such models are of increasing interest for data analysis, as they can capture the variety of evolutionary processes that may be occurring across long sequences of DNA or proteins. The fundamental question of whether parameters of such a model are identifiable is difficult to address, due to the complexity of the parameterization. Identifiability is, however, essential to their use for statistical inference. We analyze mixture models on large trees, with many mixture components, showing that both numerical and tree parameters are indeed identifiable in these models when all trees are the same. This provides a theoretical justification for some current empirical studies, and indicates that extensions to even more mixture components should be theoretically well behaved. We also extend our results to certain mixtures on different trees, using the same algebraic techniques.

  14. Medical students perceive better group learning processes when large classes are made to seem small.

    Directory of Open Access Journals (Sweden)

    Juliette Hommes

    OBJECTIVE: Medical schools struggle with large classes, which might interfere with the effectiveness of learning within small groups because students are unfamiliar with their fellow students. The aim of this study was to assess the effects of making a large class seem small on the students' collaborative learning processes. DESIGN: A randomised controlled intervention study was undertaken to make a large class seem small, without the need to reduce the number of students enrolling in the medical programme. The class was divided into subsets: two small subsets (n=50) as the intervention groups; a control group (n=102) was mixed with the remaining students (the non-randomised group, n∼100) to create one large subset. SETTING: The undergraduate curriculum of the Maastricht Medical School, applying the Problem-Based Learning principles. In this learning context, students learn mainly in tutorial groups, composed randomly from a large class every 6-10 weeks. INTERVENTION: The formal group learning activities were organised within the subsets. Students from the intervention groups met frequently within the formal groups, in contrast to the students from the large subset, who hardly ever enrolled with the same students in formal activities. MAIN OUTCOME MEASURES: Three outcome measures assessed students' group learning processes over time: learning within formally organised small groups, learning with other students in the informal context and perceptions of the intervention. RESULTS: Formal group learning processes were perceived more positively in the intervention groups from the second study year on, with a mean increase of β=0.48. Informal group learning activities occurred almost exclusively within the subsets as defined by the intervention from the first week of the medical curriculum (E-I indexes > -0.69). Interviews tapped mainly positive effects and negligible negative side effects of the intervention. CONCLUSION: Better group learning processes can be
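The E-I index reported in the results is Krackhardt and Stern's measure of group embeddedness: (external ties - internal ties) divided by all ties, ranging from -1 (all ties within one's own subset) to +1 (all ties outside it). A minimal sketch with hypothetical names:

```python
def ei_index(edges, group):
    # Krackhardt & Stern: (external - internal) / (external + internal),
    # -1 = all ties inside one's own subset, +1 = all ties outside it
    external = sum(1 for a, b in edges if group[a] != group[b])
    internal = len(edges) - external
    return (external - internal) / (external + internal)

group = {"ann": 1, "bob": 1, "cat": 2, "dan": 2}   # hypothetical subset labels
edges = [("ann", "bob"), ("ann", "cat"), ("cat", "dan"), ("bob", "dan")]
# two internal and two external ties -> E-I index of 0.0
```

Strongly negative values, as in the study's informal-learning result, mean that students' ties stay almost entirely inside their own subset.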

  15. Large animal models for stem cell therapy.

    Science.gov (United States)

    Harding, John; Roberts, R Michael; Mirochnitchenko, Oleg

    2013-03-28

    The field of regenerative medicine is approaching translation to clinical practice, and significant safety concerns and knowledge gaps have become clear as clinical practitioners are considering the potential risks and benefits of cell-based therapy. It is necessary to understand the full spectrum of stem cell actions and preclinical evidence for safety and therapeutic efficacy. The role of animal models for gaining this information has increased substantially. There is an urgent need for novel animal models to expand the range of current studies, most of which have been conducted in rodents. Extant models are providing important information but have limitations for a variety of disease categories and can have different size and physiology relative to humans. These differences can preclude the ability to reproduce the results of animal-based preclinical studies in human trials. Larger animal species, such as rabbits, dogs, pigs, sheep, goats, and non-human primates, are better predictors of responses in humans than are rodents, but in each case it will be necessary to choose the best model for a specific application. There is a wide spectrum of potential stem cell-based products that can be used for regenerative medicine, including embryonic and induced pluripotent stem cells, somatic stem cells, and differentiated cellular progeny. The state of knowledge and availability of these cells from large animals vary among species. In most cases, significant effort is required for establishing and characterizing cell lines, comparing behavior to human analogs, and testing potential applications. Stem cell-based therapies present significant safety challenges, which cannot be addressed by traditional procedures and require the development of new protocols and test systems, for which the rigorous use of larger animal species more closely resembling human behavior will be required. In this article, we discuss the current status and challenges of and several major directions

  16. Calling all stakeholders: group-level assessment (GLA)-a qualitative and participatory method for large groups.

    Science.gov (United States)

    Vaughn, Lisa M; Lohmueller, MaryAnn

    2014-08-01

    Group-level assessment (GLA) is a qualitative and participatory large group method in which timely and valid data are collaboratively generated and interactively evaluated with relevant stakeholders leading to the development of participant-driven data and relevant action plans. This method is useful across a wide range of evaluation purposes in many environments. GLA involves bringing a large group of participants together to build a common database through the co-identification of relevant needs, judgments, and priorities. The GLA process proceeds through the following seven steps: climate setting, generating, appreciating, reflecting, understanding, selecting, and action. This article describes the methodological development and process of conducting a GLA and its various applications across the evaluation spectrum. We highlight several exemplars where GLA was used in order to demonstrate the particular nuances of working with different sizes and types of groups and to elaborate on our learnings from the wide applicability of the method.

  17. Chemical Evolution models of Local Group galaxies

    CERN Document Server

    Tosi, M P

    2003-01-01

    Status quo and perspectives of standard chemical evolution models of Local Group galaxies are summarized, discussing what we have learnt from them, what we know we have not learnt yet, and what I think we will learn in the near future. It is described how Galactic chemical evolution models have helped to show that: i) stringent constraints on primordial nucleosynthesis can be derived from the observed Galactic abundances of the light elements, ii) the Milky Way has been accreting external gas from early epochs to the present time, iii) the vast majority of Galactic halo stars have formed quite rapidly at early epochs. Chemical evolution models for the closest dwarf galaxies, although still uncertain so far, are expected to become extremely reliable in the near future, thanks to the quality of new-generation photometric and spectroscopic data which are currently being acquired.

  18. Development of large Area Covering Height Model

    Science.gov (United States)

    Jacobsen, K.

    2014-04-01

    Height information is a basic part of topographic mapping. Only in special areas frequent update of height models is required, usually the update cycle is quite lower as for horizontal map information. Some height models are available free of charge in the internet; for commercial height models a fee has to be paid. Mostly digital surface models (DSM) with the height of the visible surface are given and not the bare ground height, as required for standard mapping. Nevertheless by filtering of DSM, digital terrain models (DTM) with the height of the bare ground can be generated with the exception of dense forest areas where no height of the bare ground is available. These height models may be better as the DTM of some survey administrations. In addition several DTM from national survey administrations are classified, so as alternative the commercial or free of charge available information from internet can be used. The widely used SRTM DSM is available also as ACE-2 GDEM corrected by altimeter data for systematic height errors caused by vegetation and orientation errors. But the ACE-2 GDEM did not respect neighbourhood information. With the worldwide covering TanDEM-X height model, distributed starting 2014 by Airbus Defence and Space (former ASTRIUM) as WorldDEM, higher level of details and accuracy is reached as with other large area covering height models. At first the raw-version of WorldDEM will be available, followed by an edited version and finally as WorldDEM-DTM a height model of the bare ground. With 12 m spacing and a relative standard deviation of 1.2 m within an area of 1° x 1° an accuracy and resolution level is reached, satisfying also for larger map scales. For limited areas with the HDEM also a height model with 6 m spacing and a relative vertical accuracy of 0.5 m can be generated on demand. By bathymetric LiDAR and stereo images also the height of the sea floor can be determined if the water has satisfying transparency. 
Another method of getting
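The DSM-to-DTM filtering mentioned above can be sketched with a crude local-minimum filter; the window size and height threshold are illustrative assumptions, not the filtering actually used for WorldDEM-DTM:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dsm_to_dtm(dsm, window=15, height_threshold=2.0):
    """Crude DSM -> DTM filter: a cell is treated as ground if it is close
    to the local minimum; off-terrain cells (buildings, trees) are replaced
    by that local minimum. Window and threshold are scene-dependent."""
    local_min = minimum_filter(dsm, size=window)
    off_terrain = (dsm - local_min) > height_threshold
    return np.where(off_terrain, local_min, dsm)

# Synthetic example: flat terrain at 100 m with a 10 m "building"
dsm = np.full((50, 50), 100.0)
dsm[20:25, 20:25] += 10.0
dtm = dsm_to_dtm(dsm)
print(dtm.max())  # building removed
```

Real filters additionally handle slopes and large flat roofs; this sketch only conveys the idea of separating the visible surface from the bare ground.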

  19. Computational social dynamic modeling of group recruitment.

    Energy Technology Data Exchange (ETDEWEB)

    Berry, Nina M.; Lee, Marinna; Pickett, Marc; Turnley, Jessica Glicken (Sandia National Laboratories, Albuquerque, NM); Smrcka, Julianne D. (Sandia National Laboratories, Albuquerque, NM); Ko, Teresa H.; Moy, Timothy David (Sandia National Laboratories, Albuquerque, NM); Wu, Benjamin C.

    2004-01-01

    The Seldon software toolkit combines concepts from agent-based modeling and social science to create a computational social dynamic model for group recruitment. The underlying recruitment model is based on a unique three-level hybrid agent-based architecture that contains simple agents (level one), abstract agents (level two), and cognitive agents (level three). The uniqueness of this architecture begins with abstract agents, which permit the model to include social concepts (gang) or institutional concepts (school) in a typical software simulation environment. The future addition of cognitive agents to the recruitment model will provide a unique entity that does not exist in any agent-based modeling toolkit to date. We use social networks to provide an integrated mesh within and between the different levels. This Java-based toolkit is used to analyze different social concepts based on initialization input from the user. The input alters a set of parameters used to influence the values associated with the simple agents, the abstract agents, and the interactions (simple agent-simple agent or simple agent-abstract agent) between these entities. The results of the phase-1 Seldon toolkit provide insight into how certain social concepts apply to different scenario development for inner-city gang recruitment.
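The two lower levels of the architecture lend themselves to a compact sketch. The classes, the recruitment rule, and all parameter values below are illustrative assumptions, not the published Seldon design:

```python
import random

class SimpleAgent:               # level one: individuals
    def __init__(self, uid, susceptibility):
        self.uid = uid
        self.susceptibility = susceptibility
        self.member_of = None

class AbstractAgent:             # level two: a social concept such as a gang
    def __init__(self, name, pull):
        self.name, self.pull, self.members = name, pull, []

    def try_recruit(self, agent, network, rng):
        # Recruitment pressure grows with the share of the agent's social
        # contacts who already belong (an illustrative rule, not Seldon's).
        contacts = network.get(agent.uid, [])
        peer_share = sum(1 for c in contacts if c.member_of is self) / max(len(contacts), 1)
        p = min(1.0, self.pull * (agent.susceptibility + peer_share))
        if agent.member_of is None and rng.random() < p:
            self.members.append(agent)
            agent.member_of = self

rng = random.Random(42)
agents = [SimpleAgent(i, rng.uniform(0.0, 0.5)) for i in range(100)]
network = {a.uid: rng.sample(agents, 5) for a in agents}   # social mesh
gang = AbstractAgent("gang", pull=0.4)
for _ in range(20):                                        # 20 time steps
    for a in agents:
        gang.try_recruit(a, network, rng)
print(len(gang.members), "of", len(agents), "agents recruited")
```

The social-network dictionary is the "integrated mesh" linking the two levels; a cognitive level-three agent would replace the fixed probability rule with explicit reasoning.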

  20. Geometry of Dynamic Large Networks: A Scaling and Renormalization Group Approach

    Science.gov (United States)

    2013-12-11

    Final performance report (11 December 2013) by Iraj Saniee, Lucent Technologies Inc. Grant title: Geometry of Dynamic Large Networks: A Scaling and Renormalization Group Approach. From the surviving report text: the test itself may be scaled to much larger graphs than those examined, via renormalization group methodology.

  1. Large Group Exposure Treatment: a Feasibility Study in Highly Spider Fearful Individuals

    Directory of Open Access Journals (Sweden)

    Andre Wannemueller

    2016-08-01

    Full Text Available A large-group one-session exposure treatment (LG-OST) based on indirect modeled exposure strategies was carried out to investigate its feasibility and effectiveness in a sample of highly spider-fearful individuals (N = 78). The stability of LG-OST effects was assessed at 8-month follow-up (FU). Furthermore, a second sample (N = 30) of highly spider-fearful individuals was treated in a standard, single-person one-session treatment (SP-OST) design to compare LG-OST effects against a standard spider-fear treatment. Participants' fear of spiders was assessed by multiple questionnaires and by a behavioral approach test (BAT). Fear was assessed before and after the respective intervention and, for LG-OST, at 8-month follow-up. Regarding subjective spider-fear measures, LG-OST mainly showed medium to large effect sizes, ranging from Cohen's d = .69 to d = 1.21, except for one small effect of d = .25. After LG-OST, participants approached the spider more closely at post-treatment (d = 1.18). LG-OST effects remained stable over the 8-month FU interval. However, SP-OST effects proved superior on most measures. The LG-OST protocol provided evidence of feasibility and efficiency, with effects comparable to those of indirect modeled exposure strategies carried out in single-person settings. LG-OST may represent a useful tool in future phobia treatment, especially if it can match the effects of single-setting OST, e.g., by including more direct exposure elements in future large-group protocols.

  2. A New Method for Grey Forecasting Model Group

    Institute of Scientific and Technical Information of China (English)

    李峰; 王仲东; 宋中民

    2002-01-01

    In order to describe the characteristics of some systems, such as processes of economic and product forecasting, a lot of discrete data may be used. Although the data are discrete, the underlying law can be found by suitable methods. For a series whose dispersion is large and whose overall tendency is ascending, a new method for a grey forecasting model group is given by grey system theory. The method first transforms the original data, chooses some clique values, and divides the original data into groups by the different clique values; then it establishes a non-equigap GM(1, 1) model for each group and searches the forecasting area of the original data from the solutions of the models. At the end of the paper, a result on the reliability of the forecasting values is obtained. It is shown that the method is feasible.
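The core GM(1, 1) step that the model group builds on can be sketched as follows (an equigap GM(1, 1) for simplicity; the clique-value grouping and the non-equigap extension of the paper are not reproduced):

```python
import numpy as np

def gm11_forecast(x0, n_ahead=2):
    """Equigap GM(1,1) sketch: accumulate the series (AGO), fit the grey
    differential equation by least squares, forecast, then difference
    back (inverse AGO) to recover forecasts of the original series."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                          # accumulated series (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])               # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])  # inverse AGO
    return x0_hat[len(x0):]

# A roughly exponential, ascending series
f = gm11_forecast([10.0, 12.1, 14.6, 17.7, 21.4])
print(f)
```

The grouped method of the paper would fit one such model per clique-value group and combine the per-group forecasts into a forecasting area.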

  3. Intelligent negotiation model for ubiquitous group decision scenarios

    Institute of Scientific and Technical Information of China (English)

    Joo CARNEIRO; Diogo MARTINHO; Goreti MARREIROS; Paulo NOVAIS

    2016-01-01

    Supporting group decision-making in ubiquitous contexts is a complex task that must deal with a large amount of factors to succeed. Here we propose an approach for an intelligent negotiation model to support the group decision-making process specifically designed for ubiquitous contexts. Our approach can be used by researchers that intend to include arguments, complex algorithms, and agents’ modeling in a negotiation model. It uses a social networking logic due to the type of communication employed by the agents and it intends to support the ubiquitous group decision-making process in a similar way to the real process, which simultaneously preserves the amount and quality of intelligence generated in face-to-face meetings. We propose a new look into this problem by considering and defining strategies to deal with important points such as the type of attributes in the multi- criterion problems, agents’ reasoning, and intelligent dialogues.

  4. Stochastic group selection model for the evolution of altruism

    CERN Document Server

    Silva, A T C; Silva, Ana T. C.

    1999-01-01

    We study numerically and analytically a stochastic group selection model in which a population of asexually reproducing individuals, each of which can be either altruist or non-altruist, is subdivided into $M$ reproductively isolated groups (demes) of size $N$. The cost associated with being altruistic is modelled by assigning the fitness $1- \\tau$, with $\\tau \\in [0,1]$, to the altruists and the fitness 1 to the non-altruists. In the case that the altruistic disadvantage $\\tau$ is not too large, we show that the finite $M$ fluctuations are small and practically do not alter the deterministic results obtained for $M \\to \\infty$. However, for large $\\tau$ these fluctuations greatly increase the instability of the altruistic demes to mutations. These results may be relevant to the dynamics of parasite-host systems and, in particular, to explain the importance of mutation in the evolution of parasite virulence.
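The deme structure can be illustrated with a toy simulation: within-deme sampling follows the fitness assignment above (altruists $1-\tau$, non-altruists 1), while the deme-level competition rule is a simplifying assumption, not the paper's exact dynamics:

```python
import random

def step(demes, tau, rng):
    """One generation: within-deme selection, then toy deme competition."""
    new = []
    for deme in demes:
        # altruists (1) have fitness 1 - tau, non-altruists (0) fitness 1
        weights = [1.0 - tau if a else 1.0 for a in deme]
        new.append(rng.choices(deme, weights=weights, k=len(deme)))
    # Deme-level competition (an illustrative rule): one deme reproduces
    # with probability proportional to its altruist fraction and replaces
    # a uniformly chosen deme.
    fracs = [sum(d) / len(d) for d in new]
    if sum(fracs) > 0:
        winner = rng.choices(range(len(new)), weights=fracs, k=1)[0]
        new[rng.randrange(len(new))] = list(new[winner])
    return new

rng = random.Random(1)
M, N, tau = 20, 10, 0.05          # M demes of size N, altruist cost tau
demes = [[1] * N if i < M // 2 else [0] * N for i in range(M)]
for _ in range(200):
    demes = step(demes, tau, rng)
frac = sum(sum(d) for d in demes) / (M * N)
print("final altruist fraction:", frac)
```

With finite M the altruist fraction fluctuates between runs, which is precisely the stochastic effect the paper quantifies.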

  5. Modeling of large area hot embossing

    CERN Document Server

    Worgull, M; Marcotte, J -P; Hétu, J -F; Heckele, M

    2008-01-01

    Today, hot embossing and injection molding belong to the established plastic molding processes in microengineering. Based on experimental findings, a variety of microstructures have been replicated so far using these processes. However, with increasing requirements regarding the embossing surface and the simultaneous decrease of structure sizes down into the nanorange, increasing know-how is needed to adapt hot embossing to industrial standards. To reach this objective, a German-Canadian cooperation project has been launched to study hot embossing both theoretically, by process simulation, and experimentally. The present publication reports the first results of the simulation: the modeling and simulation of large-area replication based on an eight-inch microstructured mold.

  6. Novel web service selection model based on discrete group search.

    Science.gov (United States)

    Zhai, Jie; Shao, Zhiqing; Guo, Yi; Zhang, Haiteng

    2014-01-01

    In our earlier work, we presented a novel formal method for the semiautomatic verification of specifications and for describing web service composition components by using abstract concepts. After verification, the instantiations of components were selected to satisfy the complex service performance constraints. However, selecting an optimal instantiation, which comprises different candidate services for each generic service, from a large number of instantiations is difficult. Therefore, we present a new evolutionary approach based on the discrete group search service (D-GSS) model. With regard to obtaining the optimal multiconstraint instantiation of the complex component, the D-GSS model has competitive performance compared with other service selection models in terms of accuracy, efficiency, and ability to solve high-dimensional service composition component problems. We propose the cost function and the discrete group search optimizer (D-GSO) algorithm and study the convergence of the D-GSS model through verification and test cases.
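The selection problem can be illustrated with a toy discrete group-search optimizer. The producer/scrounger/ranger roles are the generic group-search metaphor; the cost function and all parameters below are assumptions, since the published D-GSO details are not reproduced here:

```python
import random

def toy_dgso(costs, iters=200, group=20, seed=0):
    """Toy discrete group search (not the published D-GSO): choose one
    candidate per generic service to minimize summed cost, where
    costs[i][j] is the cost of candidate j for generic service i."""
    rng = random.Random(seed)
    n = len(costs)
    cost = lambda sol: sum(costs[i][sol[i]] for i in range(n))
    members = [[rng.randrange(len(costs[i])) for i in range(n)]
               for _ in range(group)]
    best = min(members, key=cost)
    for _ in range(iters):
        for k, sol in enumerate(members):
            if k == 0:                      # producer: local move around best
                cand = list(best)
                i = rng.randrange(n)
                cand[i] = rng.randrange(len(costs[i]))
            elif k < group - 3:             # scroungers: drift toward best
                cand = [b if rng.random() < 0.7 else s
                        for b, s in zip(best, sol)]
            else:                           # rangers: random restart
                cand = [rng.randrange(len(costs[i])) for i in range(n)]
            members[k] = cand
            if cost(cand) < cost(best):
                best = cand
    return best, cost(best)

costs = [[4, 2, 7], [1, 9, 3], [6, 5, 2]]
best, c = toy_dgso(costs)       # exhaustive optimum here is [1, 0, 2], cost 5
print(best, c)
```

A real service-selection cost would aggregate QoS attributes (response time, availability, price) under the composition's constraints rather than a single number per candidate.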

  7. Statistical Modeling of Large-Scale Scientific Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Eliassi-Rad, T; Baldwin, C; Abdulla, G; Critchlow, T

    2003-11-15

    With the advent of massively parallel computer systems, scientists are now able to simulate complex phenomena (e.g., explosions of stars). Such scientific simulations typically generate large-scale data sets over the spatio-temporal space. Unfortunately, the sheer sizes of the generated data sets make efficient exploration of them impossible. Constructing queriable statistical models is an essential step in helping scientists glean new insight from their computer simulations. We define queriable statistical models to be descriptive statistics that (1) summarize and describe the data within a user-defined modeling error, and (2) are able to answer complex range-based queries over the spatio-temporal dimensions. In this chapter, we describe systems that build queriable statistical models for large-scale scientific simulation data sets. In particular, we present our Ad-hoc Queries for Simulation (AQSim) infrastructure, which reduces the data storage requirements and query access times by (1) creating and storing queriable statistical models of the data at multiple resolutions, and (2) evaluating queries on these models of the data instead of the entire data set. Within AQSim, we focus on three simple but effective statistical modeling techniques. AQSim's first modeling technique (called the univariate mean modeler) computes the "true" (unbiased) mean of systematic partitions of the data. AQSim's second statistical modeling technique (called the univariate goodness-of-fit modeler) uses the Anderson-Darling goodness-of-fit method on systematic partitions of the data. Finally, AQSim's third statistical modeling technique (called the multivariate clusterer) utilizes the cosine similarity measure to cluster the data into similar groups. Our experimental evaluations on several scientific simulation data sets illustrate the value of using these statistical models on large-scale simulation data sets.
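Two of the three modelers can be sketched briefly; the partitioning scheme and the greedy clustering rule below are illustrative assumptions, not AQSim's exact algorithms:

```python
import numpy as np

def mean_model(data, parts):
    """Univariate 'mean modeler' sketch: unbiased mean per partition."""
    return [float(np.mean(p)) for p in np.array_split(data, parts)]

def cosine_cluster(X, threshold=0.95):
    """Greedy cosine-similarity clustering sketch (a stand-in for the
    multivariate clusterer): each row joins the first cluster whose
    center it matches above the threshold, else starts a new cluster."""
    norms = X / np.linalg.norm(X, axis=1, keepdims=True)
    centers, labels = [], []
    for row in norms:
        sims = [float(row @ c) for c in centers]
        if sims and max(sims) >= threshold:
            labels.append(int(np.argmax(sims)))
        else:
            centers.append(row)
            labels.append(len(centers) - 1)
    return labels

data = np.linspace(0.0, 1.0, 8)
means = mean_model(data, 2)          # means of the two halves
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
labels = cosine_cluster(X)
print(means, labels)
```

Queries would then be answered from `means` (or cluster summaries) at the chosen resolution instead of scanning the raw simulation output.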

  8. Problem-based learning: facilitating multiple small teams in a large group setting.

    Science.gov (United States)

    Hyams, Jennifer H; Raidal, Sharanne L

    2013-01-01

    Problem-based learning (PBL) is often described as resource demanding due to the high staff-to-student ratio required in a traditional PBL tutorial class where there is commonly one facilitator to every 5-16 students. The veterinary science program at Charles Sturt University, Australia, has developed a method of group facilitation which readily allows one or two staff members to facilitate up to 30 students at any one time while maintaining the benefits of a small PBL team of six students. Multi-team facilitation affords obvious financial and logistic advantages, but there are also important pedagogical benefits derived from uniform facilitation across multiple groups, enhanced discussion and debate between groups, and the development of self-facilitation skills in students. There are few disadvantages to the roaming facilitator model, provided that several requirements are addressed. These requirements include a suitable venue, large whiteboards, a structured approach to support student engagement with each disclosure, a detailed facilitator guide, and an open, collaborative, and communicative environment.

  9. Configured-groups hypothesis: fast comparison of exact large quantities without counting.

    Science.gov (United States)

    Miravete, Sébastien; Tricot, André; Kalyuga, Slava; Amadieu, Franck

    2017-07-17

    Our innate number sense cannot distinguish between two large exact numbers of objects (e.g., 45 dots vs 46). Configured groups (e.g., 10 blocks, 20 frames) are traditionally used in schools to represent large numbers. Previous studies suggest that these external representations make it easier to use symbolic strategies such as counting ten by ten, enabling humans to differentiate two large numbers exactly. The main hypothesis of this work is that configured groups also allow for differentiation of large exact numbers even when symbolic strategies become ineffective. In experiment 1, children in grade 3 were asked to compare two large collections of objects for 5 s. When the objects were organized in configured groups, the success rate was over .90. Without this configured grouping, the children were unable to make a successful comparison. Experiments 2 and 3 controlled for strategies based on non-numerical parameters (the area delimited by the dots, the summed area of the dots, etc.) and for the use of symbolic strategies. These results suggest that configured grouping enables humans to distinguish between two large exact numbers of objects even when the innate number sense and symbolic strategies are ineffective. These results are consistent with what we call "the configured group hypothesis": configured groups play a fundamental role in the acquisition of exact numerical abilities.

  10. Large Scale, High Resolution, Mantle Dynamics Modeling

    Science.gov (United States)

    Geenen, T.; Berg, A. V.; Spakman, W.

    2007-12-01

    To model the geodynamic evolution of plate convergence, subduction and collision, and to allow for a connection to various types of observational data (geophysical, geodetical and geological), we developed a 4D (space-time) numerical mantle convection code. The model is based on a spherical 3D Eulerian FEM model with quadratic elements, on top of which we constructed a 3D Lagrangian particle-in-cell (PIC) method. We use the PIC method to transport material properties and to incorporate a viscoelastic rheology. Since capturing the small-scale processes associated with localization phenomena requires high resolution, we spent considerable effort implementing solvers suitable for models with over 100 million degrees of freedom. We implemented additive Schwarz type ILU-based methods in combination with a Krylov solver, GMRES. However, we found that for problems with over 500 thousand degrees of freedom the convergence of the solver degraded severely. This observation is known from the literature [Saad, 2003] and results from the local character of the ILU preconditioner, which yields a poor approximation of the inverse of A for large A. The size of A for which ILU is no longer usable depends on the condition of A and on the amount of fill-in allowed for the ILU preconditioner. We found that for our problems with over 5×10^5 degrees of freedom convergence became too slow to solve the system within an acceptable amount of walltime (one minute), even when allowing for a considerable amount of fill-in. We also implemented MUMPS and found good scaling results for problems up to 10^7 degrees of freedom on up to 32 CPUs. For problems with over 100 million degrees of freedom we implemented algebraic multigrid (AMG) methods from the ML library [Sala, 2006]. Since multigrid methods are most effective for single-parameter problems, we rebuilt our model to use the SIMPLE method in the Stokes solver [Patankar, 1980]. We present scaling results from these solvers for 3D
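The ILU-preconditioned GMRES setup described above can be sketched with SciPy on a small stand-in system (a 2-D Poisson matrix rather than a mantle FEM operator; the drop tolerance, fill factor, and restart length are illustrative):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 2-D Poisson matrix as a stand-in for a (much larger) FEM system
n = 50
I = sp.identity(n)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()
b = np.ones(A.shape[0])

# Incomplete LU factorization wrapped as a preconditioner for GMRES
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, ilu.solve)

x, info = spla.gmres(A, b, M=M, restart=50, maxiter=200)
print(info, np.linalg.norm(A @ x - b))   # info == 0 means converged
```

At this size ILU works well; the abstract's point is that the quality of such a local preconditioner degrades as the system grows, which is what motivated the switch to direct (MUMPS) and multigrid (AMG) solvers.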

  11. Exposing earth surface process model simulations to a large audience

    Science.gov (United States)

    Overeem, I.; Kettner, A. J.; Borkowski, L.; Russell, E. L.; Peddicord, H.

    2015-12-01

    The Community Surface Dynamics Modeling System (CSDMS) represents a diverse group of >1300 scientists who develop and apply numerical models to better understand the Earth's surface. CSDMS has a mandate to make the public more aware of model capabilities and therefore started sharing state-of-the-art surface process modeling results with large audiences. One platform to reach audiences outside the science community is through museum displays on 'Science on a Sphere' (SOS). Developed by NOAA, SOS is a giant globe, linked with computers and multiple projectors, that can display data and animations on a sphere. CSDMS has developed and contributed model simulation datasets for the SOS system since 2014, including hydrological processes, coastal processes, and human interactions with the environment. Model simulations of a hydrological and sediment transport model (WBM-SED) illustrate global river discharge patterns. WAVEWATCH III simulations have been specifically processed to show the impacts of hurricanes on ocean waves, with a focus on Hurricane Katrina and Superstorm Sandy. A large world dataset of dams built over the last two centuries gives an impression of the profound influence of humans on water management. Given the exposure of SOS, CSDMS aims to contribute at least two model datasets a year, and will soon provide displays of global river sediment fluxes and changes of the sea-ice-free season along the Arctic coast. Over 100 facilities worldwide show these numerical model displays to an estimated 33 million people every year. Dataset storyboards and teacher follow-up materials associated with the simulations are developed to address Common Core science K-12 standards. CSDMS dataset documentation aims to make people aware of the fact that they are looking at numerical model results, that the underlying models have inherent assumptions and simplifications, and that limitations are known. CSDMS contributions aim to familiarize large audiences with the use of numerical

  12. Modeling of Internet Influence on Group Emotion

    Science.gov (United States)

    Czaplicka, Agnieszka; Hołyst, Janusz A.

    Long-range interactions are introduced to a two-dimensional model of agents with time-dependent internal variables ei = 0, ±1 corresponding to valencies of agent emotions. Effects of spontaneous emotion emergence and emotional relaxation processes are taken into account. The valence of agent i depends on the valencies of its four nearest neighbors, but it is also influenced by long-range interactions corresponding to social relations developed, for example, through Internet contacts with a randomly chosen community. Two types of such interactions are considered. In the first model the community's emotional influence depends only on the sign of its temporary emotion. When the coupling parameter approaches a critical value a phase transition takes place, and as a result, for larger coupling constants the mean group emotion of all agents is nonzero over long time periods. In the second model the community influence is proportional to the magnitude of the community's average emotion. The ordered emotional phase was observed here for a narrow set of system parameters.
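A toy version of the first model (sign-based long-range influence) might look as follows; the update probabilities and the reduction of the "community" to a single randomly chosen distant agent are simplifying assumptions, not the published dynamics:

```python
import random

def simulate(L=20, steps=2000, p_spont=0.05, p_relax=0.2,
             coupling=0.5, seed=0):
    """Lattice of agents with valence e in {-1, 0, +1}: agents relax to 0,
    are spontaneously aroused, or adopt the sign of the local field from
    four nearest neighbors plus a long-range 'Internet' contact."""
    rng = random.Random(seed)
    e = [[0] * L for _ in range(L)]
    for _ in range(steps):
        i, j = rng.randrange(L), rng.randrange(L)
        if e[i][j] != 0 and rng.random() < p_relax:
            e[i][j] = 0                       # emotional relaxation
        elif rng.random() < p_spont:
            e[i][j] = rng.choice([-1, 1])     # spontaneous emotion
        else:
            nbrs = (e[(i - 1) % L][j] + e[(i + 1) % L][j] +
                    e[i][(j - 1) % L] + e[i][(j + 1) % L])
            ci, cj = rng.randrange(L), rng.randrange(L)   # long-range contact
            sign = (e[ci][cj] > 0) - (e[ci][cj] < 0)      # sign-only influence
            field = nbrs + coupling * sign
            if field != 0:
                e[i][j] = 1 if field > 0 else -1
    return sum(sum(row) for row in e) / (L * L)   # mean group emotion

m = simulate()
print("mean group emotion:", m)
```

Sweeping `coupling` and averaging the mean emotion over many runs is the kind of experiment that would reveal the ordered phase described above.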

  13. Fuzzy classification of phantom parent groups in an animal model

    Directory of Open Access Journals (Sweden)

    Fikse Freddy

    2009-09-01

    Full Text Available Abstract. Background: Genetic evaluation models often include genetic groups to account for the unequal genetic level of animals with unknown parentage. The definition of phantom parent groups usually includes a time component (e.g. years). Combining several time periods to ensure sufficiently large groups may create problems, since all phantom parents in a group are considered contemporaries. Methods: To avoid the downside of such distinct classification, a fuzzy logic approach is suggested. A phantom parent can be assigned to several genetic groups, with proportions between zero and one that sum to one. Rules are presented for assigning coefficients to the inverse of the relationship matrix for fuzzy-classified genetic groups. This approach is illustrated with simulated data from ten generations of mass selection. Observations and pedigree records were randomly deleted. Phantom parent groups were defined on the basis of gender and generation number. In one scenario, uncertainty about the generation of birth was simulated for some animals with unknown parents. In the distinct classification, one of the two possible generations of birth was randomly chosen to assign phantom parents to genetic groups for animals with simulated uncertainty, whereas in the fuzzy classification the phantom parents were assigned to both possible genetic groups. Results: The empirical prediction error variance (PEV) was somewhat lower for fuzzy-classified genetic groups. The ranking of animals with unknown parents was more correct and less variable across replicates than with distinct genetic groups. In another scenario, each phantom parent was assigned to three groups, one pertaining to its gender and two pertaining to the first and last generation, with proportions depending on the (true) generation of birth. Due to the lower number of groups, the empirical PEV of breeding values was smaller when genetic groups were fuzzy-classified. Conclusion: Fuzzy
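The fractional assignment idea can be sketched in a few lines. The linear interpolation between first- and last-generation groups mirrors the second scenario above; the actual coefficients entering the inverse of the relationship matrix follow the paper's rules, which are not reproduced here:

```python
def fuzzy_generation_weights(birth_gen, first_gen, last_gen):
    """Illustrative fuzzy classification: split a phantom parent between
    the first- and last-generation groups with proportions that
    interpolate linearly with its generation of birth and sum to one."""
    w_last = (birth_gen - first_gen) / (last_gen - first_gen)
    return {f"gen:{first_gen}": 1.0 - w_last, f"gen:{last_gen}": w_last}

w = fuzzy_generation_weights(birth_gen=3, first_gen=1, last_gen=9)
print(w)   # {'gen:1': 0.75, 'gen:9': 0.25}
```

Under uncertainty about the birth generation, the same mechanism spreads the weight over both candidate groups instead of forcing a random distinct choice.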

  14. An overview of comparative modelling and resources dedicated to large-scale modelling of genome sequences.

    Science.gov (United States)

    Lam, Su Datt; Das, Sayoni; Sillitoe, Ian; Orengo, Christine

    2017-08-01

    Computational modelling of proteins has been a major catalyst in structural biology. Bioinformatics groups have exploited the repositories of known structures to predict high-quality structural models with high efficiency at low cost. This article provides an overview of comparative modelling, reviews recent developments and describes resources dedicated to large-scale comparative modelling of genome sequences. The value of subclustering protein domain superfamilies to guide the template-selection process is investigated. Some recent cases in which structural modelling has aided experimental work to determine very large macromolecular complexes are also cited.

  15. A tree-based model for homogeneous groupings of multinomials.

    Science.gov (United States)

    Yang, Tae Young

    2005-11-30

    The motivation of this paper is to provide a tree-based method for grouping multinomial data according to their classification probability vectors. We produce an initial tree by binary recursive partitioning, whereby multinomials are successively split into two subsets with the splits determined by maximizing the likelihood function. If the number of multinomials k is too large, we propose to order the multinomials and then build the initial tree based on a dramatically smaller number, k-1, of possible splits. The tree is then pruned from the bottom up. The pruning process involves a sequence of hypothesis tests of a single homogeneous group against the alternative that there are two distinct, internally homogeneous groups. As pruning criteria, the Bayesian information criterion and the Wilcoxon rank-sum test are proposed. The tree-based model is illustrated on genetic sequence data. Homogeneous groupings of genetic sequences present new opportunities to understand and align these sequences.
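The pruning test of one homogeneous group against two internally homogeneous groups can be sketched with the BIC criterion; constant multinomial coefficients are dropped since they cancel in the comparison, and the penalty of k-1 extra free parameters per additional group is a standard BIC accounting, not necessarily the paper's exact formulation:

```python
import math

def loglik(counts_list):
    """Maximized log-likelihood (up to a constant) of pooling several
    multinomial count vectors into one homogeneous group."""
    pooled = [sum(c) for c in zip(*counts_list)]
    total = sum(pooled)
    return sum(n * math.log(n / total) for n in pooled if n > 0)

def bic_prefers_split(group_a, group_b):
    """Compare one homogeneous group vs. two internally homogeneous
    groups; the split buys k-1 extra free parameters."""
    k = len(group_a[0])                      # number of multinomial cells
    n_obs = sum(sum(c) for c in group_a + group_b)
    ll_one = loglik(group_a + group_b)
    ll_two = loglik(group_a) + loglik(group_b)
    penalty = (k - 1) * math.log(n_obs)
    return 2 * (ll_two - ll_one) > penalty

similar = [[50, 50], [48, 52]]               # near-identical probability vectors
different = [[90, 10], [88, 12]]
print(bic_prefers_split(similar, different))           # split is supported
print(bic_prefers_split([similar[0]], [similar[1]]))   # split is rejected
```

In the tree, a node whose split is rejected by this test is pruned, merging its two children back into a single homogeneous group.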

  16. On several families of elliptic curves with arbitrary large Selmer groups

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    In this paper, we calculate the φ-Selmer groups S^(φ)(E/Q) and S^(φ̂)(Ê/Q) of the elliptic curves y² = x(x + εpD)(x + εqD) via the descent method. In particular, we show that the Selmer groups of several families of such elliptic curves can be arbitrarily large.

  17. Large and small sets with respect to homomorphisms and products of groups

    Directory of Open Access Journals (Sweden)

    Riccardo Gusso

    2002-10-01

    Full Text Available We study the behaviour of large, small and medium subsets with respect to homomorphisms and products of groups. We then introduce the definition of a P-small set in abelian groups and investigate the relations between this kind of smallness and the previous one, giving some examples that distinguish them.

  18. Nurture Groups: A Large-Scale, Controlled Study of Effects on Development and Academic Attainment

    Science.gov (United States)

    Reynolds, Sue; MacKay, Tommy; Kearney, Maura

    2009-01-01

    Nurture groups have contributed to inclusive practices in primary schools in the UK for some time now and have frequently been the subject of articles in this journal. This large-scale, controlled study of nurture groups across 32 schools in the City of Glasgow provides further evidence for their effectiveness in addressing the emotional…

  19. Large-N Analysis of Three Dimensional Nonlinear Sigma Models

    CERN Document Server

    Higashijima, K; Tsuzuki, M; Higashijima, Kiyoshi; Itou, Etsuko; Tsuzuki, Makoto

    2005-01-01

    A non-perturbative renormalization group approach suggests that a large class of nonlinear sigma models are renormalizable in three-dimensional space-time, while they are non-renormalizable in perturbation theory. ${\cal N}=2$ supersymmetric nonlinear sigma models whose target spaces are Einstein-K\"{a}hler manifolds with positive scalar curvature belong to this class. Hermitian symmetric spaces, being homogeneous, are especially simple examples of these manifolds. To find independent evidence for the nonperturbative renormalizability of these models, the large N method, another nonperturbative method, is applied to 3-dimensional ${\cal N}=2$ supersymmetric nonlinear sigma models on the target spaces $CP^{N-1}=SU(N)/[SU(N-1)\times U(1)]$ and $Q^{N-2}=SO(N)/[SO(N-2)\times SO(2)]$, two typical examples of Hermitian symmetric spaces. We find that the $\beta$ functions in these models agree with the results of the nonperturbative renormalization group approach in the next-to-leading order of the 1/N expansion, and have n...

  20. Renormalisation group improved leptogenesis in family symmetry models

    Energy Technology Data Exchange (ETDEWEB)

    Cooper, Iain K., E-mail: ikc1g08@soton.ac.uk [School of Physics and Astronomy, University of Southampton, Southampton, SO17 1BJ (United Kingdom); King, Stephen F., E-mail: king@soton.ac.uk [School of Physics and Astronomy, University of Southampton, Southampton, SO17 1BJ (United Kingdom); Luhn, Christoph, E-mail: christoph.luhn@durham.ac.uk [School of Physics and Astronomy, University of Southampton, Southampton, SO17 1BJ (United Kingdom); Institute for Particle Physics Phenomenology, University of Durham, Durham, DH1 3LE (United Kingdom)

    2012-06-11

    We study renormalisation group (RG) corrections relevant for leptogenesis in the case of family symmetry models such as the Altarelli-Feruglio A_4 model of tri-bimaximal lepton mixing or its extension to tri-maximal mixing. Such corrections are particularly relevant since in large classes of family symmetry models, to leading order, the CP violating parameters of leptogenesis would be identically zero at the family symmetry breaking scale, due to the form dominance property. We find that RG corrections violate form dominance and enable such models to yield viable leptogenesis at the scale of right-handed neutrino masses. More generally, the results of this paper show that RG corrections to leptogenesis cannot be ignored for any family symmetry model involving sizeable neutrino and tau Yukawa couplings.

  1. Promoting oral interaction in large groups through task-based learning

    OpenAIRE

    Forero Rocha, Yolima

    2009-01-01

    This research project attempts to show how a group of five teachers used task-based learning with a group of 50 seventh graders to improve oral interaction. The students belonged to Isabel II School. They took an active part in the implementation of tasks and were asked to answer two questionnaires. Some English classes were observed and recorded; finally, the students took an evaluation to test their improvement. Key words: task-based learning, oral interaction, large groups, hig...

  2. Corn Heterotic Group and Model in Heilongjiang of China

    Institute of Scientific and Technical Information of China (English)

    JIN Yi; DONG Ling; YU Tianjiang; LI Yan; GUO Ran

    2009-01-01

    The concept and research achievements of heterotic groups and models in corn are introduced briefly. The results showed that domestic corn germplasm can be divided into three main heterotic groups and two main heterotic models. Research on corn germplasm in Heilongjiang Province can be summarized as three main heterotic groups and three main heterotic models. Some new opinions about corn heterotic groups and models in Heilongjiang Province are proposed, such as the Northeast group and the Northeast × Lancaster model.

  3. Risk Factors, Coronary Severity, Outcome and ABO Blood Group: A Large Chinese Han Cohort Study.

    Science.gov (United States)

    Zhang, Yan; Li, Sha; Zhu, Cheng-Gang; Guo, Yuan-Lin; Wu, Na-Qiong; Xu, Rui-Xia; Dong, Qian; Liu, Geng; Li, Jian-Jun

    2015-10-01

    The ABO blood type locus has been reported to show ethnic differences and to be a pivotal genetic determinant of cardiovascular risk, whereas few prospective data regarding its impact on cardiovascular outcomes are available in a large cohort of patients with angiography-proven coronary artery disease, especially from the Chinese population. The objective of this study was to assess the prognostic role of blood type for future cardiovascular events (CVEs) in Chinese Han patients undergoing coronary angiography. The population of this prospective cohort study consisted of 3823 eligible patients, followed annually to capture all CVEs. Baseline characteristics and ABO blood type were obtained. Cox proportional hazards models were used to evaluate the risk of ABO blood type for CVEs. New CVEs occurred in 348 patients [263 (10.3%) non-O and 85 (7.8%) O] during a median follow-up of 24.6 months. Significantly, the non-O blood group was related to the presence and severity of coronary atherosclerosis and to several risk factors, including inflammatory markers. The log-rank test revealed a significant difference between the non-O and O blood groups in event-free survival analysis (P = 0.026). In particular, the Cox proportional hazards models revealed that non-O blood type was associated with increased CVE risk [hazard ratio (95% confidence interval) 1.320 (1.033-1.685)], even after adjusting for potential confounders [adjusted hazard ratio (95% confidence interval) non-O: 1.289 (1.003-1.656); A: 1.083 (0.797-1.472); B: 1.481 (1.122-1.955); AB: 1.249 (0.852-1.831), respectively]. Non-O blood type is associated with future CVEs in Chinese Han patients undergoing coronary angiography.
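The Cox model behind these hazard ratios can be illustrated with a one-covariate fit on synthetic data (a 0/1 "non-O carrier" indicator; the cohort below is simulated, not the study's, and the fit uses the Breslow partial likelihood rather than the study's full multivariable model):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def cox_hazard_ratio(times, events, x):
    """One-covariate Cox model: maximize the Breslow partial likelihood
    over beta and return the hazard ratio exp(beta)."""
    order = np.argsort(times)
    t = np.asarray(times)[order]
    d = np.asarray(events)[order]
    xv = np.asarray(x)[order]

    def neg_partial_loglik(beta):
        risk = np.exp(beta * xv)
        # risk set of subject i = all subjects with event/censor time >= t[i]
        risk_set_sums = np.cumsum(risk[::-1])[::-1]
        return -np.sum(d * (beta * xv - np.log(risk_set_sums)))

    res = minimize_scalar(neg_partial_loglik, bounds=(-5, 5), method="bounded")
    return float(np.exp(res.x))

# Synthetic cohort of 1000: carriers (x=1) fail twice as fast (true HR = 2)
rng = np.random.default_rng(0)
x = np.repeat([0, 1], 500)
t_event = rng.exponential(1.0 / np.where(x == 1, 2.0, 1.0))
t_censor = rng.exponential(2.0, size=1000)
times = np.minimum(t_event, t_censor)
events = (t_event <= t_censor).astype(int)
hr = cox_hazard_ratio(times, events, x)
print("estimated hazard ratio:", hr)
```

With 1000 subjects the estimate lands close to the true value of 2; the study's adjusted ratios additionally condition on the listed confounders.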

  4. Flexible modeling frameworks to replace small ensembles of hydrological models and move toward large ensembles?

    Science.gov (United States)

    Addor, Nans; Clark, Martyn P.; Mizukami, Naoki

    2017-04-01

Climate change impacts on hydrological processes are typically assessed using small ensembles of hydrological models. That is, a handful of hydrological models are typically driven by a larger number of climate models. Such a setup has several limitations. Because the number of hydrological models is small, only a small proportion of the model space is sampled, likely leading to an underestimation of the uncertainties in the projections. Further, sampling is arbitrary: although hydrological models should be selected to provide a representative sample of existing models (in terms of complexity and governing hypotheses), they are instead usually selected based on legacy reasons. Furthermore, running several hydrological models currently constitutes a practical challenge because each model must be set up and calibrated individually. Finally, and probably most importantly, the differences between the projected impacts cannot be directly related to differences between hydrological models, because the models differ in almost every possible aspect. We are hence in a situation in which different hydrological models deliver different projections, but for reasons that are mostly unclear, and in which the uncertainty in the projections is probably underestimated. To overcome these limitations, we are experimenting with the flexible modeling framework FUSE (Framework for Understanding Structural Errors). FUSE enables conceptual models to be constructed piece by piece (in a "pick and mix" approach), so it can be used to generate a large number of models that mimic existing models and/or models that differ from one another in a single targeted respect (e.g., how baseflow is generated). FUSE hence allows for controlled modeling experiments, and for a more systematic and exhaustive sampling of the model space.
Here we explore climate change impacts over the contiguous USA on a 12km grid using two groups of three models: the first group involves the commonly used models VIC, PRMS and HEC
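The "pick and mix" idea can be sketched as enumerating one interchangeable hypothesis per process; the process and component names below are hypothetical illustrations, not FUSE's actual option set or API:

```python
# Illustrative sketch of a FUSE-style "pick and mix" model builder:
# each process has interchangeable hypotheses, and a model structure
# is one choice per process. All names here are hypothetical.
from itertools import product

OPTIONS = {
    "upper_soil":  ["single_state", "tension_free"],
    "baseflow":    ["linear_reservoir", "nonlinear_reservoir", "topmodel_like"],
    "percolation": ["field_capacity", "lower_zone_demand"],
}

def build_models(options):
    """Enumerate every model structure as one choice per process."""
    names, values = zip(*options.items())
    return [dict(zip(names, combo)) for combo in product(*values)]

models = build_models(OPTIONS)
print(len(models))  # 2 * 3 * 2 = 12 distinct model structures

# A controlled experiment varies only the baseflow hypothesis while
# holding the other choices fixed:
pair = [m for m in models
        if m["upper_soil"] == "single_state"
        and m["percolation"] == "field_capacity"]
print(len(pair))  # 3 models differing in baseflow only
```

Systematic enumeration like this is what turns an arbitrary handful of legacy models into a designed sample of the model space.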

  5. An Audit of the Effectiveness of Large Group Neurology Tutorials for Irish Undergraduate Medical Students

    LENUS (Irish Health Repository)

    Kearney, H

    2016-07-01

The aim of this audit was to determine the effectiveness of large group tutorials for teaching neurology to medical students. Students were asked to complete a questionnaire rating their confidence, on a ten-point Likert scale, in a number of domains from the undergraduate education guidelines of the Association of British Neurologists (ABN). We then arranged a series of interactive large group tutorials for the class and repeated the questionnaire one month after teaching. In the three core neurological domains of history taking, examination, and differential diagnosis, none of the students rated their confidence as nine or ten out of ten prior to teaching. This increased to 6% for history taking, 12% for examination, and 25% for differential diagnosis after eight weeks of tutorials. This audit demonstrates that in our centre, large group tutorials were an effective means of teaching, as measured against the ABN guidelines in undergraduate neurology.

  6. Diversity Competent Group Work Supervision: An Application of the Supervision of Group Work Model (SGW)

    Science.gov (United States)

    Okech, Jane E. Atieno; Rubel, Deborah

    2007-01-01

    This article emphasizes the need for concrete descriptions of supervision to promote diversity-competent group work and presents an application of the supervision of group work model (SGW) to this end. The SGW, a supervision model adapted from the discrimination model, is uniquely suited for promoting diversity competence in group work, since it…

  8. From evolution to revolution: understanding mutability in large and disruptive human groups

    Science.gov (United States)

    Whitaker, Roger M.; Felmlee, Diane; Verma, Dinesh C.; Preece, Alun; Williams, Grace-Rose

    2017-05-01

Over the last 70 years there has been a major shift in the threats to global peace. While the 1950s and 1960s were characterised by the cold war and the arms race, many security threats are now characterised by group behaviours that are disruptive, subversive or extreme. In many cases such groups are loosely and chaotically organised, but their ideals are sociologically and psychologically embedded in group members to the extent that the group represents a major threat. As a result, insights into how human groups form, emerge and change are critical, but surprisingly few insights into the mutability of human groups exist. In this paper we argue that important clues to understanding the mutability of groups come from examining the evolutionary origins of human behaviour. In particular, groups have been instrumental in human evolution, used as a basis to derive survival advantage, leaving all humans with a basic disposition to navigate the world through social networking and managing their presence in a group. From this analysis we present five critical features of social groups that govern mutability, relating to social norms, individual standing, status rivalry, in-group bias and cooperation. We argue that understanding how these five dimensions interact and evolve can provide new insights into group mutation and evolution. Importantly, these features lend themselves to digital modeling. Therefore, computational simulation can support generative exploration of groups and the discovery of latent factors, relevant to both internal and external group modelling. Finally we consider the role of online social media in relation to understanding the mutability of groups. This can play an active role in supporting collective behaviour, and analysis of social media in the context of the five dimensions of group mutability provides a fresh basis to interpret the forces affecting groups.
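As a toy illustration of how such features lend themselves to digital modeling, the sketch below lets agents conform to their own group's mean opinion (social norms) while an in-group bias pushes them away from the out-group; the update rule and all parameters are hypothetical assumptions, not taken from the paper:

```python
# Minimal generative sketch for two of the five dimensions named above
# (social norms and in-group bias). Deterministic and unbounded: it
# only illustrates the qualitative dynamics, not a calibrated model.

def step(groups, conform=0.3, bias=0.1):
    """One update: each agent drifts toward its own group's mean and
    away from the other group's mean."""
    means = {g: sum(v) / len(v) for g, v in groups.items()}
    out = {}
    for g, vals in groups.items():
        other = [m for k, m in means.items() if k != g][0]
        out[g] = [x + conform * (means[g] - x) - bias * (other - x)
                  for x in vals]
    return out

groups = {"A": [0.2, 0.3, 0.4], "B": [0.6, 0.7, 0.8]}
for _ in range(20):
    groups = step(groups)

mean = lambda v: sum(v) / len(v)
# Norm conformity shrinks within-group spread while in-group bias
# drives the two group means apart (polarization).
print(mean(groups["A"]) < 0.2, mean(groups["B"]) > 0.8)  # True True
```

Even this two-rule sketch shows how latent outcomes (polarization, internal homogenisation) emerge from simple interacting dispositions, which is the kind of generative exploration the authors advocate.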

  9. Modeling Group Rapports through Tourist School Activities

    OpenAIRE

    Elena Moldovan; Răzvan Sandu ENOIU; Adriana LEIBOVICI

    2012-01-01

    The objective of the research was the evaluation of the developing social climate by determining group cohesion and affective and sympathetic inter personal relationships between the components of the experimental group bent to the tourist program done by the researcher and between the ones of the witness group that has done extracurricular tourist activities after the traditional program, in its free time and during holidays.

  10. Modeling Group Rapports through Tourist School Activities

    Directory of Open Access Journals (Sweden)

    Elena Moldovan

    2012-12-01

    Full Text Available The objective of the research was the evaluation of the developing social climate by determining group cohesion and affective and sympathetic inter personal relationships between the components of the experimental group bent to the tourist program done by the researcher and between the ones of the witness group that has done extracurricular tourist activities after the traditional program, in its free time and during holidays.

  11. Slow light with large group index - bandwidth product in lattice-shifted photonic crystal waveguides

    Science.gov (United States)

    Tang, Jian; Li, Wenhui; Wu, Jun; Xu, Zhonghui

    2016-10-01

This study presents a systematic optimization procedure to generate slow light with large group index, wide bandwidth, and low dispersion in a lattice-shifted photonic crystal waveguide. The waveguide is based on a triangular-lattice photonic crystal modified by selectively altering the locations of the holes adjacent to the line defect. Under a constant group index criterion of ±10% variation, when the group index is nearly constant at 24, 33, 46, 57, and 66, the corresponding flat-band bandwidths reach 24.2, 17.6, 12.8, 10.1, and 8.6 nm around 1550 nm, respectively. A nearly constant group index-bandwidth product (GBP) of 0.37 is achieved for all cases. Low-dispersion slow light propagation is confirmed by studying the relative temporal pulse-width spreading with the 2-D finite-difference time-domain method.
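The reported GBP of roughly 0.37 can be checked directly from the group indices and bandwidths above, using GBP = n_g * (delta_lambda / lambda_0) with lambda_0 = 1550 nm:

```python
# Arithmetic check of the group index-bandwidth product reported above:
# GBP = n_g * (delta_lambda / lambda_0), lambda_0 = 1550 nm.
cases = [(24, 24.2), (33, 17.6), (46, 12.8), (57, 10.1), (66, 8.6)]
for n_g, bw_nm in cases:
    gbp = n_g * bw_nm / 1550.0
    print(n_g, round(gbp, 3))  # each case lands near 0.37
```

All five (group index, bandwidth) pairs give GBP between about 0.366 and 0.380, consistent with the "nearly constant 0.37" claim.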

  12. Implementing high-fidelity simulations with large groups of nursing students.

    Science.gov (United States)

    Hooper, Barbara; Shaw, Luanne; Zamzam, Rebekah

    2015-01-01

Nurse educators are increasing their use of simulation as a teaching strategy. Simulations are typically conducted with a small group of students. This article describes the process for implementing 6 high-fidelity simulations with a large group of undergraduate nursing students. The goal was to evaluate whether student knowledge, as measured by post-simulation quiz scores, increased when only a few individuals actively participated in the simulation while the other students observed.

  13. Mother Moose: Generating Extra Dimensions from Simple Groups at Large N

    CERN Document Server

    Rothstein, Ira; Skiba, Witold

    2002-01-01

    We show that there exists a correspondence between four dimensional gauge theories with simple groups and higher dimensional gauge theories at large N. As an example, we show that a four dimensional N = 2 supersymmetric SU(N) gauge theory, on the Higgs branch, has the same correlators as a five dimensional SU(N) gauge theory in the limit of large N, provided the couplings are appropriately rescaled. We show that our results can be applied to the AdS/CFT correspondence to derive correlators of five or more dimensional gauge theories from solutions of five dimensional supergravity in the large 't Hooft coupling limit.

  14. Time Distributions of Large and Small Sunspot Groups Over Four Solar Cycles

    CERN Document Server

    Kilcik, A; Abramenko, V; Goode, P R; Ozguc, A; Rozelot, J P; Cao, W; DOI: 10.1088/0004-637X/731/1/30

    2011-01-01

    Here we analyze solar activity by focusing on time variations of the number of sunspot groups (SGs) as a function of their modified Zurich class. We analyzed data for solar cycles 20-23 by using Rome (cycles 20-21) and Learmonth Solar Observatory (cycles 22-23) SG numbers. All SGs recorded during these time intervals were separated into two groups. The first group includes small SGs (A, B, C, H, and J classes by Zurich classification) and the second group consists of large SGs (D, E, F, and G classes). We then calculated small and large SG numbers from their daily mean numbers as observed on the solar disk during a given month. We report that the time variations of small and large SG numbers are asymmetric except for solar cycle 22. In general, large SG numbers appear to reach their maximum in the middle of the solar cycle (phase 0.45-0.5), while the international sunspot numbers and the small SG numbers generally peak much earlier (solar cycle phase 0.29-0.35). Moreover, the 10.7 cm solar radio flux, the facul...
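The small/large split by modified Zurich class can be sketched as a simple classification step (the helper below is illustrative, not the authors' code):

```python
# Splitting sunspot groups by modified Zurich class, as in the study:
# small = {A, B, C, H, J}, large = {D, E, F, G}.
SMALL = set("ABCHJ")
LARGE = set("DEFG")

def split_daily_counts(classes):
    """Count small and large sunspot groups from one day's class letters."""
    small = sum(1 for c in classes if c.upper() in SMALL)
    large = sum(1 for c in classes if c.upper() in LARGE)
    return small, large

# Hypothetical day of observations:
print(split_daily_counts(["B", "C", "D", "F", "H", "E"]))  # (3, 3)
```

Monthly means of such daily counts give the two time series whose asymmetric peaks the study compares.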

  15. Learning through Discussions: Comparing the Benefits of Small-Group and Large-Class Settings

    Science.gov (United States)

    Pollock, Philip H.; Hamann, Kerstin; Wilson, Bruce M.

    2011-01-01

    The literature on teaching and learning heralds the benefits of discussion for student learning outcomes, especially its ability to improve students' critical thinking skills. Yet, few studies compare the effects of different types of face-to-face discussions on learners. Using student surveys, we analyze the benefits of small-group and large-class…

  16. The Effectiveness of Lecture-Integrated, Web-Supported Case Studies in Large Group Teaching

    Science.gov (United States)

    Azzawi, May; Dawson, Maureen M.

    2007-01-01

    The effectiveness of lecture-integrated and web-supported case studies in supporting a large and academically diverse group of undergraduate students was evaluated in the present study. Case studies and resource (web)-based learning were incorporated as two complementary interactive learning strategies into the traditional curriculum. A truncated…

  17. Using Professional Teaching Assistants to Support Large Group Business Communication Classes

    Science.gov (United States)

    Rieber, Lloyd J.

    2004-01-01

    In this article, the author reports on the use of classroom instructors and full-time professional teaching assistants called "course tutors" for teaching business communication to large groups at an undergraduate university. The author explains how to recruit course tutors, what course tutors do in the classroom, and the advantages and…

  18. Taking Ownership of Learning in a Large Class: Group Projects and a Mini-Conference

    Science.gov (United States)

    Borda, Emily J.; Kriz, George S.; Popejoy, Kate L.; Dickinson, Alison K.; Olson, Amy L.

    2009-01-01

    Helping students take ownership of their learning is often a challenge in a large lecture course. In this article, the authors describe a nature of science-oriented group project in a chemistry course in which students gave presentations in concurrent conference sessions as well as its impact on student learning as evidenced through multiple data…

  20. The MIIT Again Urged to Set Up Large Rare Earth Groups

    Institute of Scientific and Technical Information of China (English)

    2014-01-01

    According to the "Economic Operation of the Rare Earth Industry in 2013" public notice published by the Ministry of Industry and Information Technology, speeding up the establishment of large rare earth enterprise groups was placed at a conspicuous position. Recently, the Department of Raw Materials, Ministry of Industry and Information

  1. Gateway to Success for At-Risk Students in a Large-Group Introductory Chemistry Class

    Science.gov (United States)

    Mason, Diana; Verdel, Ellen

    2001-02-01

    Thirty-six entering freshmen of a designated "at-risk" population were divided between two classes of introductory chemistry for nonscience majors. Seventeen students were placed in a typical large-group lecture (n = 210) class, and the remaining 19 were placed in a special class with enrollment restricted to the selected, at-risk students. To assure equity and control for consistency, both lecture classes met at the same time. The two female instructors gave the same lectures to each class. At-risk students from both sections were required to attend supplemental instruction sections led by the same teaching assistant, and the same instructor graded all assignments. Students of both classes were encouraged to form informal collaborative groups to complete 10 in-class problem sets, which were graded on an individual basis. The at-risk students from the large-group lecture (average = 75.5) outperformed students enrolled in the special class (average = 70.6). All targeted students were successful in the large-group lecture class; however, three at-risk students in the smaller class were unsuccessful. Scaffolding from a more diverse population in the larger group may be one reason for this unexpected outcome. In the larger lecture class, students rated the collaborative assignments as being most beneficial to their success.

  2. Assessing Activity and Location of Individual Laying Hens in Large Groups Using Modern Technology

    Science.gov (United States)

    Siegford, Janice M.; Berezowski, John; Biswas, Subir K.; Daigle, Courtney L.; Gebhardt-Henrich, Sabine G.; Hernandez, Carlos E.; Thurner, Stefan; Toscano, Michael J.

    2016-01-01

    Simple Summary Tracking of individual animals within large groups is increasingly possible, offering an exciting opportunity to researchers. Whereas previously only relatively indistinguishable groups of individual animals could be observed and combined into pen level data, we can now focus on individual actors and track their activities across time and space with minimal intervention and disturbance. We describe several tracking systems that are currently in use for laying hens and review each, highlighting their strengths and weaknesses, the environments or conditions for which they may be most suited, and issues relevant to fitting the best technology to the intended purpose. Abstract Tracking individual animals within large groups is increasingly possible, offering an exciting opportunity to researchers. Whereas previously only relatively indistinguishable groups of individual animals could be observed and combined into pen level data, we can now focus on individual actors within these large groups and track their activities across time and space with minimal intervention and disturbance. The development is particularly relevant to the poultry industry as, due to a shift away from battery cages, flock sizes are increasingly becoming larger and environments more complex. Many efforts have been made to track individual bird behavior and activity in large groups using a variety of methodologies with variable success. Of the technologies in use, each has associated benefits and detriments, which can make the approach more or less suitable for certain environments and experiments. Within this article, we have divided several tracking systems that are currently available into two major categories (radio frequency identification and radio signal strength) and review the strengths and weaknesses of each, as well as environments or conditions for which they may be most suitable. We also describe related topics including types of analysis for the data and concerns

  3. Creating a library holding group: an approach to large system integration.

    Science.gov (United States)

    Huffman, Isaac R; Martin, Heather J; Delawaska-Elliott, Basia

    2016-10-01

    Faced with resource constraints, many hospital libraries have considered joint operations. This case study describes how Providence Health & Services created a single group to provide library services. Using a holding group model, staff worked to unify more than 6,100 nonlibrary subscriptions and 14 internal library sites. Our library services grew by unifying 2,138 nonlibrary subscriptions and 11 library sites and hiring more library staff. We expanded access to 26,018 more patrons. A model with built-in flexibility allowed successful library expansion. Although challenges remain, this success points to a viable model of unified operations.

  4. Large-scale shifts in phytoplankton groups in the Equatorial Pacific during ENSO cycles

    Directory of Open Access Journals (Sweden)

    I. Masotti

    2010-04-01

    Full Text Available The El Niño Southern Oscillation (ENSO) drives important changes in the marine productivity of the Equatorial Pacific, in particular during major El Niño/La Niña transitions. Changes in environmental conditions associated with these climatic events also likely impact phytoplankton composition. In this work, the distribution of four major phytoplankton groups (nanoeucaryotes, Prochlorococcus, Synechococcus, and diatoms) was examined between 1996 and 2007 by applying the PHYSAT algorithm to the ocean color data archive from the Ocean Color and Temperature Sensor (OCTS) and Sea-viewing Wide Field-of-view Sensor (SeaWiFS). Coincident with the decrease in chlorophyll concentrations, a large-scale shift in the phytoplankton composition of the Equatorial Pacific, characterized by a decrease in Synechococcus dominance and an increase in nanoeucaryote dominance, was observed during the early stages of both the strong El Niño of 1997 and the moderate El Niño of 2006. A significant increase in diatom dominance was observed in the Equatorial Pacific during the 1998 La Niña and was associated with elevated marine productivity. An analysis of the environmental variables using a coupled physical-biogeochemical model (NEMO-PISCES) suggests that the decrease in Synechococcus dominance during the two El Niño events was associated with an abrupt decline in nutrient availability (−0.9 to −2.5 μM NO3 month⁻¹). Alternatively, increased nutrient availability (3 μM NO3 month⁻¹) during the 1998 La Niña resulted in an increase in diatom dominance in the Equatorial Pacific. Despite these phytoplankton community shifts, the mean composition is restored after a few months, which suggests resilience in community structure. Such rapid changes to the composition of phytoplankton groups should be considered in future modeling approaches to represent variability of the marine productivity in the Equatorial Pacific and to quantify its

  5. Group Centric Information Sharing Using Hierarchical Models

    Science.gov (United States)

    2011-01-01

    [Front-matter fragments only; recoverable citation: Ravi Sandhu, Ram Krishnan, Jianwei Niu and William Winsborough, Group… (CollaborateCom), Crystal City, Virginia, November 11-14, 2009, pages 1-10.]

  6. Group Modeling in Social Learning Environments

    Science.gov (United States)

    Stankov, Slavomir; Glavinic, Vlado; Krpan, Divna

    2012-01-01

    Students' collaboration while learning could provide better learning environments. Collaboration assumes social interactions which occur in student groups. Social theories emphasize positive influence of such interactions on learning. In order to create an appropriate learning environment that enables social interactions, it is important to…

  7. Advances in large-scale crop modeling

    Science.gov (United States)

    Scholze, Marko; Bondeau, Alberte; Ewert, Frank; Kucharik, Chris; Priess, Jörg; Smith, Pascalle

    Intensified human activity and a growing population have changed the climate and the land biosphere. One of the most widely recognized human perturbations is the emission of carbon dioxide (CO2) by fossil fuel burning and land-use change. As the terrestrial biosphere is an active player in the global carbon cycle, changes in land use feed back to the climate of the Earth through regulation of the content of atmospheric CO2, the most important greenhouse gas, and through changing albedo (e.g., energy partitioning). Recently, the climate modeling community has started to develop more complex Earth system models that include marine and terrestrial biogeochemical processes in addition to the representation of atmospheric and oceanic circulation. However, most terrestrial biosphere models simulate only natural, or so-called potential, vegetation and do not account for managed ecosystems such as croplands and pastures, which make up nearly one-third of the Earth's land surface.

  8. Constraining models with a large scalar multiplet

    CERN Document Server

    Earl, Kevin; Logan, Heather E; Pilkington, Terry

    2013-01-01

    Models in which the Higgs sector is extended by a single electroweak scalar multiplet X can possess an accidental global U(1) symmetry at the renormalizable level if X has isospin T greater than or equal to 2. We show that all such U(1)-symmetric models are excluded by the interplay of the cosmological relic density of the lightest (neutral) component of X and its direct detection cross section via Z exchange. The sole exception is the T=2 multiplet, whose lightest member decays on a few-day to few-year timescale via a Planck-suppressed dimension-5 operator.

  9. Transforming a large-class lecture course to a smaller-group interactive course.

    Science.gov (United States)

    Persky, Adam M; Pollack, Gary M

    2010-11-10

    To transition a large pharmacokinetics course that was delivered using a traditional lecture format into a smaller-group course with a discussion format. An e-book and Web-based multimedia learning modules were utilized to facilitate students' independent learning, which allowed the number of classes they were required to attend to be reduced from 3 to 1 per week. Students were assigned randomly to 1 of 3 weekly class sessions. The majority of lecture time was replaced with active-learning activities, including discussion, problem solving, and case studies, to encourage higher-order learning. Changes in course delivery were assessed over a 4-year period by comparing students' grades and satisfaction ratings on course evaluations. Although student satisfaction with the course did not improve significantly, students preferred the smaller-group setting to a large lecture-based class. The resources and activities designed to shift responsibility for learning to the students did not affect examination grades, even though a larger portion of examination questions focused on higher orders of learning (e.g., application) in the smaller-group format. Transitioning to a smaller-group discussion format is possible in a pharmacokinetics course by increasing student accountability for acquiring factual content outside of the classroom. Students favored the smaller-class format over a large lecture-based class.

  10. Assessing Activity and Location of Individual Laying Hens in Large Groups Using Modern Technology.

    Science.gov (United States)

    Siegford, Janice M; Berezowski, John; Biswas, Subir K; Daigle, Courtney L; Gebhardt-Henrich, Sabine G; Hernandez, Carlos E; Thurner, Stefan; Toscano, Michael J

    2016-02-02

    Tracking individual animals within large groups is increasingly possible, offering an exciting opportunity to researchers. Whereas previously only relatively indistinguishable groups of individual animals could be observed and combined into pen level data, we can now focus on individual actors within these large groups and track their activities across time and space with minimal intervention and disturbance. The development is particularly relevant to the poultry industry as, due to a shift away from battery cages, flock sizes are increasingly becoming larger and environments more complex. Many efforts have been made to track individual bird behavior and activity in large groups using a variety of methodologies with variable success. Of the technologies in use, each has associated benefits and detriments, which can make the approach more or less suitable for certain environments and experiments. Within this article, we have divided several tracking systems that are currently available into two major categories (radio frequency identification and radio signal strength) and review the strengths and weaknesses of each, as well as environments or conditions for which they may be most suitable. We also describe related topics including types of analysis for the data and concerns with selecting focal birds.

  11. Assessing Activity and Location of Individual Laying Hens in Large Groups Using Modern Technology

    Directory of Open Access Journals (Sweden)

    Janice M. Siegford

    2016-02-01

    Full Text Available Tracking individual animals within large groups is increasingly possible, offering an exciting opportunity to researchers. Whereas previously only relatively indistinguishable groups of individual animals could be observed and combined into pen level data, we can now focus on individual actors within these large groups and track their activities across time and space with minimal intervention and disturbance. The development is particularly relevant to the poultry industry as, due to a shift away from battery cages, flock sizes are increasingly becoming larger and environments more complex. Many efforts have been made to track individual bird behavior and activity in large groups using a variety of methodologies with variable success. Of the technologies in use, each has associated benefits and detriments, which can make the approach more or less suitable for certain environments and experiments. Within this article, we have divided several tracking systems that are currently available into two major categories (radio frequency identification and radio signal strength) and review the strengths and weaknesses of each, as well as environments or conditions for which they may be most suitable. We also describe related topics including types of analysis for the data and concerns with selecting focal birds.

  12. COST MODEL FOR LARGE URBAN SCHOOLS.

    Science.gov (United States)

    O'BRIEN, RICHARD J.

    This document contains a cost submodel of an urban educational system. This model requires that pupil population and proposed school building are known. The cost elements are: (1) construction costs of new plants, (2) acquisition and development costs of building sites, (3) current operating expenses of the proposed school, (4) pupil…
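The listed cost elements suggest a simple additive structure; a minimal sketch with hypothetical placeholder figures (none taken from the document):

```python
# Additive cost submodel sketch: total cost as the sum of the listed
# elements. All dollar figures below are hypothetical placeholders.
def school_cost(construction, site_acquisition, operating, *other):
    """Sum the cost elements of a proposed school."""
    return construction + site_acquisition + operating + sum(other)

total = school_cost(
    construction=12_000_000,     # (1) new plant construction
    site_acquisition=1_500_000,  # (2) site acquisition and development
    operating=3_200_000,         # (3) current operating expenses
)
print(total)  # 16700000
```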

  13. Beyond the Standard Model: Working group report

    Indian Academy of Sciences (India)

    Gautam Bhattacharyya; Amitava Raychaudhuri

    2000-07-01

    This report summarises the work done in the ‘Beyond the Standard Model’ working group of the Sixth Workshop on High Energy Physics Phenomenology (WHEPP-6) held at the Institute of Mathematical Sciences, Chennai, Jan 3–15, 2000. The participants in this working group were: R Adhikari, B Ananthanarayan, K P S Balaji, Gour Bhattacharya, Gautam Bhattacharyya, Chao-Hsi Chang (Zhang), D Choudhury, Amitava Datta, Anindya Datta, Asesh K Datta, A Dighe, N Gaur, D Ghosh, A Goyal, K Kar, S F King, Anirban Kundu, U Mahanta, R N Mohapatra, B Mukhopadhyaya, S Pakvasa, P N Pandita, M K Parida, P Poulose, G Raffelt, G Rajasekaran, S Rakshit, Asim K Ray, A Raychaudhuri, S Raychaudhuri, D P Roy, P Roy, S Roy, K Sridhar and S Vempati.

  14. A Semiautomated Assignment Protocol for Methyl Group Side Chains in Large Proteins.

    Science.gov (United States)

    Kim, Jonggul; Wang, Yingjie; Li, Geoffrey; Veglia, Gianluigi

    2016-01-01

    The development of biosynthetic strategies for specific labeling of side-chain methyl groups has allowed structural and dynamic characterization of very large proteins and protein complexes. However, the assignment of the methyl-group resonances remains an Achilles' heel for NMR, as the experiments designed to correlate side chains to the protein backbone become rather insensitive as transverse relaxation rates increase. In this chapter, we outline a semiempirical approach to assign the resonances of methyl-group side chains in large proteins. This method requires a crystal structure or an NMR ensemble of conformers as an input, together with NMR data sets such as nuclear Overhauser effects (NOEs) and paramagnetic relaxation enhancements (PREs), to be implemented in a computational protocol that provides a probabilistic assignment of methyl-group resonances. As an example, we report the protocol used in our laboratory to assign the side chains of the 42-kDa catalytic subunit of the cAMP-dependent protein kinase A. Although we emphasize the labeling of isoleucine, leucine, and valine residues, this method is applicable to other methyl-group side chains such as those of alanine, methionine, and threonine, as well as reductively methylated cysteine side chains.

  15. Modeling and Control of Large Flexible Structures.

    Science.gov (United States)

    1984-07-31

[Scanned-report abstract; only fragments are legible.] Topics include systems with hybrid (lumped and distributed) structure and the development of stabilizing control strategies for nonlinear distributed models. Part I: Wiener-Hopf Methods for Design of Stabilizing Control Systems.

  16. Modeling Social Influence in Large Populations

    Science.gov (United States)

    2010-07-13

[Scanned-report abstract; only fragments are legible.] Cited works include a feature-selection article in The Journal of Machine Learning Research, vol. 3, 2003, pp. 1157–1182, and I. Ajzen, “The theory of planned behavior,” Organizational Behavior and Human Decision Processes. The legible theory fragment reads: human collectivities are composed of individuals with different meaningful identities; models intrinsically represent a window of time (e.g., before, after, or during a simulation event) and are constructed via a theory-to-model translation.

  17. Large-scale shifts in phytoplankton groups in the Equatorial Pacific during ENSO cycles

    Directory of Open Access Journals (Sweden)

    I. Masotti

    2011-03-01

Full Text Available The El Niño Southern Oscillation (ENSO) drives important changes in the marine productivity of the Equatorial Pacific, in particular during major El Niño/La Niña transitions. Changes in environmental conditions associated with these climatic events also likely impact phytoplankton composition. In this work, the distribution of four major phytoplankton groups (nanoeucaryotes, Prochlorococcus, Synechococcus, and diatoms) was examined between 1996 and 2007 by applying the PHYSAT algorithm to the ocean color data archive from the Ocean Color and Temperature Sensor (OCTS) and the Sea-viewing Wide Field-of-view Sensor (SeaWiFS). Coincident with the decrease in chlorophyll concentrations, a large-scale shift in the phytoplankton composition of the Equatorial Pacific, characterized by a decrease in Synechococcus dominance and an increase in nanoeucaryote dominance, was observed during the early stages of both the strong El Niño of 1997 and the moderate El Niño of 2006. A significant increase in diatom dominance was observed in the Equatorial Pacific during the 1998 La Niña and was associated with elevated marine productivity. An analysis of the environmental variables using a coupled physical-biogeochemical model (NEMO-PISCES) suggests that the decrease in Synechococcus dominance during the two El Niño events was associated with an abrupt decline in nutrient availability (−0.9 to −2.5 μM NO3 month−1). Conversely, increased nutrient availability (3 μM NO3 month−1) during the 1998 La Niña resulted in increased diatom dominance in the Equatorial Pacific. Despite these phytoplankton community shifts, the mean composition was restored after a few months, which suggests resilience in community structure.

  18. Under the Cobblestones: Politics and Possibilities of the Art Therapy Large Group.

    OpenAIRE

    Jones, Kevin; Skaife, Sally

    2009-01-01

    This paper discusses the politics and possibilities of linking the personal and political with therapeutic and social transformation through a teaching method provided on the art therapy training at Goldsmiths, the art therapy large group (ATLG). Three key ideas of May 68 are related to the ATLG and their relevance to other psychotherapies and psychotherapy trainings is considered. These are: the importance of the ‘capitalist’ university as an essential terrain in the struggle for social chan...

  19. An investigation into the factors that encourage learner participation in a large group medical classroom

    Directory of Open Access Journals (Sweden)

    Moffett J

    2014-03-01

Full Text Available Jennifer Moffett, John Berezowski, Dustine Spencer, Shari Lanning; Ross University School of Veterinary Medicine, West Farm, St Kitts, West Indies. Background: Effective lectures often incorporate activities that encourage learner participation. A challenge for educators is how to facilitate this in the large group lecture setting. This study investigates the individual student characteristics involved in encouraging (or dissuading) learners to interact, ask questions, and make comments in class. Methods: Students enrolled in a Doctor of Veterinary Medicine program at Ross University School of Veterinary Medicine, St Kitts, were invited to complete a questionnaire canvassing their participation in the large group classroom. Data from the questionnaire were analyzed using Excel (Microsoft, Redmond, WA, USA) and the R software environment (http://www.r-project.org/). Results: One hundred and ninety-two students completed the questionnaire (response rate, 85.7%). The results showed statistically significant differences between male and female students when asked to self-report their level of participation (P=0.011) and their confidence to participate (P<0.001) in class. No statistically significant difference was identified between different age groups of students (P=0.594). Student responses reflected that an "aversion to public speaking" acted as the main deterrent to participating during a lecture. Female participants were 3.56 times more likely to report a fear of public speaking than male participants (odds ratio 3.56, 95% confidence interval 1.28–12.33, P=0.01). Students also reported "smaller sizes of class and small group activities" and "other students participating" as factors that made it easier for them to participate during a lecture. Conclusion: In this study, sex likely played a role in learner participation in the large group veterinary classroom. Male students were more likely to participate in class and reported feeling more confident to

  20. An investigation into the factors that encourage learner participation in a large group medical classroom.

    Science.gov (United States)

    Moffett, Jennifer; Berezowski, John; Spencer, Dustine; Lanning, Shari

    2014-01-01

Effective lectures often incorporate activities that encourage learner participation. A challenge for educators is how to facilitate this in the large group lecture setting. This study investigates the individual student characteristics involved in encouraging (or dissuading) learners to interact, ask questions, and make comments in class. Students enrolled in a Doctor of Veterinary Medicine program at Ross University School of Veterinary Medicine, St Kitts, were invited to complete a questionnaire canvassing their participation in the large group classroom. Data from the questionnaire were analyzed using Excel (Microsoft, Redmond, WA, USA) and the R software environment (http://www.r-project.org/). One hundred and ninety-two students completed the questionnaire (response rate, 85.7%). The results showed statistically significant differences between male and female students when asked to self-report their level of participation (P=0.011) and their confidence to participate (P<0.001) in class. No statistically significant difference was identified between different age groups of students (P=0.594). Student responses reflected that an "aversion to public speaking" acted as the main deterrent to participating during a lecture. Female participants were 3.56 times more likely to report a fear of public speaking than male participants (odds ratio 3.56, 95% confidence interval 1.28-12.33, P=0.01). Students also reported "smaller sizes of class and small group activities" and "other students participating" as factors that made it easier for them to participate during a lecture. In this study, sex likely played a role in learner participation in the large group veterinary classroom. Male students were more likely to participate in class and reported feeling more confident to participate than female students. Female students in this study commonly identified aversion to public speaking as a factor which held them back from participating in the large group lecture setting. These are important findings for veterinary and medical educators aiming to improve learner participation in the classroom. Potential ways of addressing this challenge include
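The sex-difference result above is reported as an odds ratio with a Wald 95% confidence interval. A minimal sketch of how such a figure is computed from a 2×2 contingency table; the cell counts below are hypothetical, since the abstract does not give the raw numbers:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    rows = group (e.g. female/male), cols = outcome (fear / no fear).
    CI uses log(OR) +/- z * SE, SE = sqrt(1/a + 1/b + 1/c + 1/d)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, for illustration only.
or_, lo, hi = odds_ratio_ci(40, 40, 5, 20)
```

A wide interval such as the paper's 1.28–12.33 is typical when one cell of the table is small, as the standard error of log(OR) is dominated by the reciprocal of the smallest count.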

  1. Influence of Deterministic Attachments for Large Unifying Hybrid Network Model

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

The large unifying hybrid network model (LUHPM) introduces the deterministic mixing ratio fd on the basis of the harmonious unification hybrid preferential model, to describe the influence of deterministic attachment on the network topology characteristics,

  2. Group Contribution Based Process Flowsheet Synthesis, Design and Modelling

    DEFF Research Database (Denmark)

    Gani, Rafiqul; d'Anterroches, Loïc

    2004-01-01

This paper presents a process-group-contribution method to model, simulate and synthesize a flowsheet. The process-group based representation of a flowsheet together with a process "property" model are presented. The process-group based synthesis method is developed on the basis of the computer...

  3. Modelling large scale human activity in San Francisco

    Science.gov (United States)

    Gonzalez, Marta

    2010-03-01

Diverse groups of people with a wide variety of schedules, activities and travel needs compose our cities nowadays. This represents a big challenge for modeling travel behaviors in urban environments; such models are of crucial interest for a wide variety of applications such as traffic forecasting, the spreading of viruses, or measuring human exposure to air pollutants. The traditional means of obtaining knowledge about travel behavior is limited to surveys on travel journeys. The information obtained is based on questionnaires that are usually costly to implement, have intrinsic limitations in covering large numbers of individuals, and pose some problems of reliability. Using mobile phone data, we explore the basic characteristics of a model of human travel: the distribution of agents is proportional to the population density of a given region, and each agent has a characteristic trajectory size containing information on the frequency of visits to different locations. Additionally, we use a complementary data set from smart subway fare cards, offering information about the exact time at which each passenger enters or exits a subway station, along with the station's coordinates. This allows us to uncover the temporal aspects of mobility. Since we have the actual time and place of each individual's origin and destination, we can understand the temporal patterns at each visited location in further detail. Integrating the two described data sets, we provide a dynamical model of human travels that incorporates the different aspects observed empirically.

  4. A Selective Review of Group Selection in High Dimensional Models

    CERN Document Server

    Huang, Jian; Ma, Shuangge

    2012-01-01

    Grouping structures arise naturally in many statistical modeling problems. Several methods have been proposed for variable selection that respect grouping structure in variables. Examples include the group LASSO and several concave group selection methods. In this article, we give a selective review of group selection concerning methodological developments, theoretical properties, and computational algorithms. We pay particular attention to group selection methods involving concave penalties. We address both group selection and bi-level selection methods. We describe several applications of these methods in nonparametric additive models, semiparametric regression, seemingly unrelated regressions, genomic data analysis and genome wide association studies. We also highlight some issues that require further study.
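The group LASSO reviewed above selects or drops whole groups of coefficients at once; its computational core is a groupwise soft-thresholding (proximal) step. A minimal numpy sketch of that operator, illustrative only and not from the article:

```python
import numpy as np

def group_soft_threshold(beta, groups, lam):
    """Proximal operator of the group-LASSO penalty lam * sum_g ||beta_g||_2:
    each group's coefficient block is shrunk toward zero as a unit."""
    out = np.zeros_like(beta, dtype=float)
    for g in groups:                      # g is an index array for one group
        norm = np.linalg.norm(beta[g])
        if norm > lam:                    # shrink the whole block
            out[g] = (1 - lam / norm) * beta[g]
        # else: the entire group is set to zero (group-level selection)
    return out

beta = np.array([3.0, 4.0, 0.1, 0.1])
groups = [np.array([0, 1]), np.array([2, 3])]
shrunk = group_soft_threshold(beta, groups, lam=1.0)
```

Here the first group survives with its norm reduced by lam, while the second group, whose norm falls below the threshold, is eliminated entirely; this all-or-nothing behavior at the group level is what distinguishes group selection from coordinatewise LASSO shrinkage.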

  5. Symmetries of preon interactions modeled as a finite group

    Science.gov (United States)

    Bellinger, James N.

    1997-07-01

    I model preon interactions as a finite group. Treating the elements of the group as the bases of a vector space, I examine those linear mappings under which the transformed bases may be treated as members of a group isomorphic to the original. In some cases these mappings are continuous Lie groups.

  6. Symmetries of preon interactions modeled as a finite group

    Energy Technology Data Exchange (ETDEWEB)

    Bellinger, J.N. [University of Wisconsin at Madison, Madison, Wisconsin 53706 (United States)

    1997-07-01

    I model preon interactions as a finite group. Treating the elements of the group as the bases of a vector space, I examine those linear mappings under which the transformed bases may be treated as members of a group isomorphic to the original. In some cases these mappings are continuous Lie groups. {copyright} {ital 1997 American Institute of Physics.}

  7. Evidence for the alignment of quasar radio polarizations with large quasar group axes

    Science.gov (United States)

    Pelgrims, V.; Hutsemékers, D.

    2016-05-01

    Recently, evidence has been presented for the polarization vectors from quasars to preferentially align with the axes of the large quasar groups (LQG) to which they belong. This report was based on observations made at optical wavelengths for two LQGs at redshift ~1.3. The correlation suggests that the spin axes of quasars preferentially align with their surrounding large-scale structure that is assumed to be traced by the LQGs. Here, we consider a large sample of LQGs built from the Sloan Digital Sky Survey DR7 quasar catalogue in the redshift range 1.0-1.8. For quasars embedded in this sample, we collected radio polarization measurements with the goal to study possible correlations between quasar polarization vectors and the major axis of their host LQGs. Assuming the radio polarization vector is perpendicular to the quasar spin axis, we found that the quasar spin axis is preferentially parallel to the LQG major axis inside LQGs that have at least 20 members. This result independently supports the observations at optical wavelengths. We additionally found that when the richness of an LQG decreases, the quasar spin axis becomes preferentially perpendicular to the LQG major axis and that no correlation is detected for quasar groups with fewer than 10 members.

  8. Long-Term Calculations with Large Air Pollution Models

    DEFF Research Database (Denmark)

    1999-01-01

Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  9. Long-Term Calculations with Large Air Pollution Models

    DEFF Research Database (Denmark)

    1999-01-01

Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  10. Assessing learning progress and quality of teaching in large groups of students.

    Science.gov (United States)

    Reumann, Matthias; Mohr, Matthias; Diez, Anke; Dössel, Olaf

    2008-01-01

The classic tools for assessing learning progress are written tests and assignments. In large groups of students the workload often does not allow in-depth evaluation during the course. Thus our aim was to modify the course to include active learning methods and student-centered teaching. We changed the course structure only slightly and established new assessment methods such as minute papers, short tests, mini-projects and a group project at the end of the semester. The focus was to monitor learning progress during the course so that problematic issues could be addressed immediately. The year before the changes, 26.76% of the class failed the course with a grade average of 3.66 (the pass grade is 4.0, corresponding to 30% of achievable marks). After introducing student-centered teaching, only 14% of students failed the course and the average grade was 3.01. Grades were also distributed more evenly, with more students achieving better results. We have shown that student-centered and active learning is possible even in large groups with more than 100 participants. Although it requires a great work overhead on the part of the teaching staff, the quality of teaching and the motivation of the students are increased, leading to a better learning environment.

  11. Sexuality and the Elderly: A Group Counseling Model.

    Science.gov (United States)

    Capuzzi, Dave; Gossman, Larry

    1982-01-01

    Describes a 10-session group counseling model to facilitate awareness of sexuality and the legitimacy of its expression for older adults. Considers member selection, session length and setting, and group leadership. (Author/MCF)

  12. Using Agent Base Models to Optimize Large Scale Network for Large System Inventories

    Science.gov (United States)

    Shameldin, Ramez Ahmed; Bowling, Shannon R.

    2010-01-01

The aim of this paper is to use Agent Based Models (ABM) to optimize large scale network handling capabilities for large system inventories and to implement strategies for the purpose of reducing capital expenses. The models used in this paper employ computational algorithms or procedure implementations developed in Matlab to simulate agent-based models in a principal programming language and mathematical theory, running on clusters; these clusters act as a high-performance computing resource to execute the program in parallel. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.

  13. Modeling Large sound sources in a room acoustical calculation program

    DEFF Research Database (Denmark)

    Christensen, Claus Lynge

    1999-01-01

A room acoustical model capable of modelling point, line and surface sources is presented. Line and surface sources are modelled using a special ray-tracing algorithm detecting the radiation pattern of the surfaces in the room. Point sources are modelled using a hybrid calculation method combining this ray-tracing method with Image source modelling. With these three source types, it is possible to model large and complex sound sources in workrooms.

  14. Modeling Large sound sources in a room acoustical calculation program

    DEFF Research Database (Denmark)

    Christensen, Claus Lynge

    1999-01-01

A room acoustical model capable of modelling point, line and surface sources is presented. Line and surface sources are modelled using a special ray-tracing algorithm detecting the radiation pattern of the surfaces in the room. Point sources are modelled using a hybrid calculation method combining this ray-tracing method with Image source modelling. With these three source types, it is possible to model large and complex sound sources in workrooms.

  15. Evaluation of receptivity of the medical students in a lecture of a large group

    Directory of Open Access Journals (Sweden)

Vidyarthi Surendra K, Nayak Roopa P, Gupta Sandeep K

    2014-04-01

Full Text Available Background: Lecturing is a widely used teaching method in higher education. Instructors of large classes may have no option but to deliver lectures to convey information to large groups of students. Aims and Objectives: The present study was to evaluate the effectiveness/receptivity of interactive lecturing in a large group of second-year MBBS students. Material and Methods: The present study was conducted in the well-equipped lecture theater of Dhanalakshmi Srinivasan Medical College and Hospital (DSMCH), Tamil Nadu. A fully prepared interactive lecture on a specific topic was delivered using a PowerPoint presentation for second-year MBBS students. Before starting the lecture, the instructor distributed a 10-item multiple-choice questionnaire to be attempted within 10 minutes. After 30 minutes of lecturing, the instructor again distributed the same 10 multiple-choice questions to be attempted in 10 minutes. The topic was never disclosed to the students before the lecture. Statistics: We analyzed the pre-lecture and post-lecture questions of each student by applying the paired t-test formula using www.openepi.com version 3.01 online/offline software and Microsoft Excel (Windows 2010). Results: For the 111 students (31 male, 80 female) of average age 18.58 years, the baseline (pre-lecture) receptivity mean % was 30.99 ± 14.64 and the post-lecture receptivity mean % increased to 53.51 ± 19.52. Only 12 of the 111 students had a post-lecture receptivity value (mean % 25.8 ± 10.84) lower than their baseline value (mean % 45 ± 9.05), and this reduction of receptivity was more towards the negative side. Conclusion: In an interactive lecture session with a PowerPoint presentation, students/learners can learn even in large-class environments, but it should be active-learner centered.
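The pre/post comparison in this study rests on the paired t-test, whose statistic is the mean of the per-student differences divided by its standard error. A self-contained sketch on hypothetical scores (the study's raw per-student data are not given in the abstract):

```python
import math

def paired_t(pre, post):
    """Paired t statistic: t = mean(d) / (sd(d) / sqrt(n)), where d = post - pre."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance of differences
    return mean / math.sqrt(var / n)

# Hypothetical pre/post receptivity scores (%) for six students.
pre = [30, 25, 40, 35, 20, 30]
post = [55, 50, 60, 45, 40, 50]
t = paired_t(pre, post)
```

The test is "paired" because each student serves as their own control: only the within-student change enters the statistic, which removes between-student variability from the denominator.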

  16. On the "Dependence" of "Independent" Group EEG Sources; an EEG Study on Two Large Databases.

    OpenAIRE

    Congedo, Marco; John, Roy; RIDDER, Dirk De; Prichep, Leslie; Isenhart, Robert

    2010-01-01

International audience; The aim of this work is to study the coherence profile (dependence) of robust eyes-closed resting EEG sources isolated by group blind source separation (gBSS). We employ a test-retest strategy using two large sample normative databases (N = 57 and 84). Using a BSS method in the complex Fourier domain, we show that we can rigorously study the out-of-phase dependence of the extracted components, albeit they are extracted so as to be in-phase independent (by BSS definiti...

  17. All polymer chip for amperometric studies of transmitter release from large groups of neuronal cells

    DEFF Research Database (Denmark)

    Larsen, Simon T.; Taboryski, Rafael

    2012-01-01

We present an all polymer electrochemical chip for simple detection of transmitter release from large groups of cultured PC 12 cells. Conductive polymer PEDOT:tosylate microelectrodes were used together with constant potential amperometry to obtain easy-to-analyze oxidation signals from potassium-induced release of transmitter molecules. The nature of the resulting current peaks is discussed, and the time for restoring transmitter reservoirs is studied. The relationship between released transmitters and potassium concentration was found to fit to a sigmoidal dose–response curve. Finally, we demonstrate

  18. Revisiting Executive Pay in Family-Controlled Firms: Family Premium in Large Business Groups

    OpenAIRE

    Cheong, Juyoung; Kim, Woochan

    2014-01-01

According to the prior literature, family executives of family-controlled firms receive lower compensation than non-family executives. One of the key driving forces behind this is the existence of family members who are not involved in management, but who own a significant fraction of shares and closely monitor and/or discipline those involved in management. In this paper, we show that this assumption falls apart if the family-controlled firm is part of a large business group, where most of the family ...

  19. Group Clustering Mechanism for P2P Large Scale Data Sharing Collaboration

    Institute of Scientific and Technical Information of China (English)

DENG Qianni; LU Xinda; CHEN Li

    2005-01-01

Research shows that P2P scientific collaboration networks will exhibit small-world topology, as do a large number of social networks for which the same pattern has been documented. In this paper we propose a topology building protocol to benefit from the small world feature. We find that the idea of Freenet resembles the dynamic pattern of social interactions in scientific data sharing, and the small world characteristic of Freenet is propitious to improving file locating performance in scientific data sharing. But the LRU (Least recently used) datastore cache replacement scheme of Freenet is not suitable for scientific data sharing networks. Based on the group locality of scientific collaboration, we propose an enhanced group clustering cache replacement scheme. Simulation shows that this scheme improves the request hit ratio dramatically while keeping the small average hops per successful request comparable to LRU.
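The LRU baseline that the authors replace can be sketched in a few lines with an ordered dictionary; their group-clustering variant would change only the eviction rule at the end of `put`. A minimal illustration, not the paper's implementation:

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used cache: on overflow, evict the entry
    that has gone longest without being read or written."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()   # insertion order tracks recency

    def get(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)  # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")        # "a" becomes most recent
cache.put("c", 3)     # capacity exceeded: evicts "b"
```

A group-aware scheme would replace the `popitem(last=False)` line with a rule that prefers evicting items outside the requesting group's cluster, which is the gist of the enhancement the abstract describes.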

  20. Multiple Group Analysis in Multilevel Structural Equation Model Across Level 1 Groups.

    Science.gov (United States)

    Ryu, Ehri

    2015-01-01

    This article introduces and evaluates a procedure for conducting multiple group analysis in multilevel structural equation model across Level 1 groups (MG1-MSEM; Ryu, 2014). When group membership is at Level 1, multiple group analysis raises two issues that cannot be solved by a simple extension of the standard multiple group analysis in single-level structural equation model. First, the Level 2 data are not independent between Level 1 groups. Second, the standard procedure fails to take into account the dependency between members of different Level 1 groups within the same cluster. The MG1-MSEM approach provides solutions to these problems. In MG1-MSEM, the Level 1 mean structure is necessary to represent the differences between Level 1 groups within clusters. The Level 2 model is the same regardless of Level 1 group membership. A simulation study examined the performance of MUML (Muthén's maximum likelihood) estimation in MG1-MSEM. The MG1-MSEM approach is illustrated for both a multilevel path model and a multilevel factor model using empirical data sets.

  1. WORK GROUP DEVELOPMENT MODELS – THE EVOLUTION FROM SIMPLE GROUP TO EFFECTIVE TEAM

    Directory of Open Access Journals (Sweden)

    Raluca ZOLTAN

    2016-02-01

Full Text Available Currently, work teams are increasingly studied by virtue of the advantages they have compared to work groups. But a true team does not appear overnight; it must complete several steps to overcome the initial stage of its existence as a group. The question that arises is at what point a simple group turns into an effective team. Even though the development process of a group into a team is not a linear one, the models found in the literature provide a rich framework for analyzing and identifying the features which a group acquires over time until it becomes a team in the true sense of the word. Thus, in this article we propose an analysis of the main models of group development in order to point out, even in a relative manner, the stage at which the simple work group becomes an effective work team.

  2. Group impressions as dynamic configurations: the tensor product model of group impression formation and change.

    Science.gov (United States)

    Kashima, Y; Woolcock, J; Kashima, E S

    2000-10-01

    Group impressions are dynamic configurations. The tensor product model (TPM), a connectionist model of memory and learning, is used to describe the process of group impression formation and change, emphasizing the structured and contextualized nature of group impressions and the dynamic evolution of group impressions over time. TPM is first shown to be consistent with algebraic models of social judgment (the weighted averaging model; N. Anderson, 1981) and exemplar-based social category learning (the context model; E. R. Smith & M. A. Zárate, 1992), providing a theoretical reduction of the algebraic models to the present connectionist framework. TPM is then shown to describe a common process that underlies both formation and change of group impressions despite the often-made assumption that they constitute different psychological processes. In particular, various time-dependent properties of both group impression formation (e.g., time variability, response dependency, and order effects in impression judgments) and change (e.g., stereotype change and group accentuation) are explained, demonstrating a hidden unity beneath the diverse array of empirical findings. Implications of the model for conceptualizing stereotype formation and change are discussed.
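The tensor product memory at the core of TPM can be illustrated in a few lines: associations are stored as a sum of outer products of cue and attribute vectors, and retrieval is a matrix-vector product. A minimal sketch with orthonormal cues (illustrative only, not the authors' simulation code):

```python
import numpy as np

# Two "exemplars": orthonormal cue vectors paired with attribute patterns.
cue1, cue2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
attr1 = np.array([0.9, 0.1, 0.0])
attr2 = np.array([0.0, 0.2, 0.8])

# Store both associations as a sum of outer products (the tensor product memory).
M = np.outer(attr1, cue1) + np.outer(attr2, cue2)

# Retrieval: probing the memory with a cue reconstructs its attribute pattern.
recalled = M @ cue1
```

With orthonormal cues the recall is exact; with correlated cues the retrieved pattern blends the stored attributes, which is the kind of context-dependent, configural behavior the model exploits to describe impression formation and change.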

  3. Large-scale parallel configuration interaction. II. Two- and four-component double-group general active space implementation with application to BiH

    DEFF Research Database (Denmark)

    Knecht, Stefan; Jensen, Hans Jørgen Aagaard; Fleig, Timo

    2010-01-01

We present a parallel implementation of a large-scale relativistic double-group configuration interaction (CI) program. It is applicable with a large variety of two- and four-component Hamiltonians. The parallel algorithm is based on a distributed data model in combination with a static load balanci

  4. Comparison Between Overtopping Discharge in Small and Large Scale Models

    DEFF Research Database (Denmark)

    Helgason, Einar; Burcharth, Hans F.

    2006-01-01

Small and large scale model tests show no clear evidence of scale effects for overtopping above a threshold value. In the large scale model no overtopping was measured for wave heights below Hs = 0.5 m, as the water sank into the voids between the stones on the crest. For low overtopping, scale effects are present, as the small-scale model underpredicts the overtopping discharge.

  5. Frequency and phase synchronization in large groups: Low dimensional description of synchronized clapping, firefly flashing, and cricket chirping

    Science.gov (United States)

    Ott, Edward; Antonsen, Thomas M.

    2017-05-01

    A common observation is that large groups of oscillatory biological units often have the ability to synchronize. A paradigmatic model of such behavior is provided by the Kuramoto model, which achieves synchronization through coupling of the phase dynamics of individual oscillators, while each oscillator maintains a different constant inherent natural frequency. Here we consider the biologically likely possibility that the oscillatory units may be capable of enhancing their synchronization ability by adaptive frequency dynamics. We propose a simple augmentation of the Kuramoto model which does this. We also show that, by the use of a previously developed technique [Ott and Antonsen, Chaos 18, 037113 (2008)], it is possible to reduce the resulting dynamics to a lower dimensional system for the macroscopic evolution of the oscillator ensemble. By employing this reduction, we investigate the dynamics of our system, finding a characteristic hysteretic behavior and enhancement of the quality of the achieved synchronization.
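The Kuramoto dynamics described above can be simulated directly in mean-field form, with the complex order parameter r measuring synchronization. The sketch below adds a simple frequency-adaptation term that relaxes each natural frequency toward the ensemble mean; this adaptation law is a hypothetical illustration of the idea, not the specific augmentation proposed in the paper:

```python
import numpy as np

def simulate(N=50, K=2.0, eps=0.1, dt=0.01, steps=4000, seed=0):
    """Euler-integrate Kuramoto phases, dtheta_i/dt = omega_i
    + (K/N) * sum_j sin(theta_j - theta_i), written in mean-field form,
    plus a hypothetical adaptation pulling omega_i toward the mean."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, N)
    omega = rng.uniform(-0.5, 0.5, N)          # inherent natural frequencies
    for _ in range(steps):
        z = np.mean(np.exp(1j * theta))        # complex order parameter r*e^(i*psi)
        theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
        omega += dt * eps * (omega.mean() - omega)  # hypothetical adaptation term
    return np.abs(np.mean(np.exp(1j * theta))) # final coherence r in [0, 1]

r = simulate()
```

With coupling well above the critical value and the frequency spread shrinking under adaptation, the population locks and r approaches 1, which is the synchronization-enhancement effect the abstract describes qualitatively.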

  6. Automation practices in large molecule bioanalysis: recommendations from group L5 of the global bioanalytical consortium.

    Science.gov (United States)

    Ahene, Ago; Calonder, Claudio; Davis, Scott; Kowalchick, Joseph; Nakamura, Takahiro; Nouri, Parya; Vostiar, Igor; Wang, Yang; Wang, Jin

    2014-01-01

    In recent years, the use of automated sample handling instrumentation has come to the forefront of bioanalytical analysis in order to ensure greater assay consistency and throughput. Since robotic systems are becoming part of everyday analytical procedures, the need for consistent guidance across the pharmaceutical industry has become increasingly important. Pre-existing regulations do not go into sufficient detail in regard to how to handle the use of robotic systems for use with analytical methods, especially large molecule bioanalysis. As a result, Global Bioanalytical Consortium (GBC) Group L5 has put forth specific recommendations for the validation, qualification, and use of robotic systems as part of large molecule bioanalytical analyses in the present white paper. The guidelines presented can be followed to ensure that there is a consistent, transparent methodology that will ensure that robotic systems can be effectively used and documented in a regulated bioanalytical laboratory setting. This will allow for consistent use of robotic sample handling instrumentation as part of large molecule bioanalysis across the globe.

  7. Modelling Large sound sources in a room acoustical calculation program

    DEFF Research Database (Denmark)

    Christensen, Claus Lynge

    1999-01-01

    A room acoustical model capable of modelling point, line and surface sources is presented. Line and surface sources are modelled using a special ray-tracing algorithm detecting the radiation pattern of the surfaces in the room. Point sources are modelled using a hybrid calculation method combining this ray-tracing method with image source modelling. With these three source types, it is possible to model large and complex sound sources in workrooms.

  9. Large scale stochastic spatio-temporal modelling with PCRaster

    Science.gov (United States)

    Karssenberg, Derek; Drost, Niels; Schmitz, Oliver; de Jong, Kor; Bierkens, Marc F. P.

    2013-04-01

    PCRaster is a software framework for building spatio-temporal models of land surface processes (http://www.pcraster.eu). Building blocks of models are spatial operations on raster maps, including a large suite of operations for water and sediment routing. These operations are available to model builders as Python functions. The software comes with Python framework classes providing control flow for spatio-temporal modelling, Monte Carlo simulation, and data assimilation (Ensemble Kalman Filter and Particle Filter). Models are built by combining the spatial operations in these framework classes. This approach enables modellers without specialist programming experience to construct large, rather complicated models, as many technical details of modelling (e.g., data storage, solving spatial operations, data assimilation algorithms) are taken care of by the PCRaster toolbox. Exploratory modelling is supported by routines for prompt, interactive visualisation of stochastic spatio-temporal data generated by the models. The high computational requirements for stochastic spatio-temporal modelling, and an increasing demand to run models over large areas at high resolution, e.g. in global hydrological modelling, require an optimal use of available, heterogeneous computing resources by the modelling framework. Current work in the context of the eWaterCycle project is on a parallel implementation of the modelling engine, capable of running on a high-performance computing infrastructure such as clusters and supercomputers. Model runs will be distributed over multiple compute nodes and multiple processors (GPUs and CPUs). Parallelization will be done by parallel execution of Monte Carlo realizations and sub regions of the modelling domain. In our approach we use multiple levels of parallelism, improving scalability considerably. On the node level we will use OpenCL, the industry standard for low-level high performance computing kernels. To combine multiple nodes we will use
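The Monte Carlo pattern the framework automates can be illustrated with a schematic stand-in. This is plain Python, not PCRaster's actual API; the toy storage model, its parameters, and the function names are all invented for illustration (a real PCRaster model operates on raster maps, not a single scalar).

```python
import random
import statistics

def dynamic_model(rainfall_mean, steps, rng):
    """One stochastic realisation of a toy linear-reservoir model.
    Purely illustrative: stands in for a PCRaster dynamic model."""
    storage = 0.0
    for _ in range(steps):
        rainfall = max(0.0, rng.gauss(rainfall_mean, 2.0))  # stochastic forcing
        storage += rainfall - 0.1 * storage                 # outflow proportional to storage
    return storage

def monte_carlo(n_realisations, seed=0):
    """Run independent realisations; PCRaster's Monte Carlo framework
    automates exactly this loop (plus storage and visualisation)."""
    rng = random.Random(seed)
    return [dynamic_model(5.0, steps=100, rng=rng) for _ in range(n_realisations)]

ensemble = monte_carlo(200)
print(statistics.mean(ensemble), statistics.stdev(ensemble))
```

The point of the framework, as the abstract notes, is that realisations like these are embarrassingly parallel, so they can be distributed across nodes and processors without changing the model code.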

  10. Large scale stochastic spatio-temporal modelling with PCRaster

    NARCIS (Netherlands)

    Karssenberg, D.J.; Drost, N.; Schmitz, O.; Jong, K. de; Bierkens, M.F.P.

    2013-01-01

    PCRaster is a software framework for building spatio-temporal models of land surface processes (http://www.pcraster.eu). Building blocks of models are spatial operations on raster maps, including a large suite of operations for water and sediment routing. These operations are available to model

  12. An accurate and simple large signal model of HEMT

    DEFF Research Database (Denmark)

    Liu, Qing

    1989-01-01

    A large-signal model of discrete HEMTs (high-electron-mobility transistors) has been developed. It is simple and suitable for SPICE simulation of hybrid digital ICs. The model parameters are extracted by using computer programs and data provided by the manufacturer. Based on this model, a hybrid...

  13. Aero-acoustic modeling using large eddy simulation

    DEFF Research Database (Denmark)

    Shen, Wen Zhong; Sørensen, Jens Nørkær

    2007-01-01

    The splitting technique for aero-acoustic computations is extended to simulate three-dimensional flow and acoustic waves from airfoils. The aero-acoustic model is coupled to a sub-grid-scale turbulence model for Large-Eddy Simulations. In the first test case, the model is applied to compute laminar...

  14. Large Core Code Evaluation Working Group Benchmark Problem Four: neutronics and burnup analysis of a large heterogeneous fast reactor. Part 1. Analysis of benchmark results. [LMFBR

    Energy Technology Data Exchange (ETDEWEB)

    Cowan, C.L.; Protsik, R.; Lewellen, J.W. (eds.)

    1984-01-01

    The Large Core Code Evaluation Working Group Benchmark Problem Four was specified to provide a stringent test of the current methods used in the nuclear design and analysis process. The benchmark specifications provided a base for performing detailed burnup calculations over the first two irradiation cycles of a large heterogeneous fast reactor. Particular emphasis was placed on the techniques for modeling the three-dimensional benchmark geometry, and sensitivity studies were carried out to determine the sensitivity of the performance parameters to changes in the neutronics and burnup specifications. The results of the Benchmark Four calculations indicated that a linked RZ-XY (Hex) two-dimensional representation of the benchmark model geometry can be used to predict mass balance data, power distributions, regionwise fuel exposure data and burnup reactivities with good accuracy when compared with the results of direct three-dimensional computations. Most of the small differences in the results of the benchmark analyses by the different participants were attributed to ambiguities in carrying out the regionwise flux renormalization calculations throughout the burnup step.

  15. Intelligence and Personal Influence in Groups: Four Nonlinear Models.

    Science.gov (United States)

    Simonton, Dean Keith

    1985-01-01

    Four models are developed to provide a conceptual basis for a curvilinear relation between intelligence and an individual's influence over group members. The models deal with influence and percentile placement in intelligence, comprehension by potential followers, vulnerability to rival intellects, and correlation between mean group IQ and the…

  16. The LGBTQ Responsive Model for Supervision of Group Work

    Science.gov (United States)

    Goodrich, Kristopher M.; Luke, Melissa

    2011-01-01

    Although supervision of group work has been linked to the development of multicultural and social justice competencies, there are no models for supervision of group work specifically designed to address the needs of lesbian, gay, bisexual, transgender, and questioning (LGBTQ) persons. This manuscript presents the LGBTQ Responsive Model for…

  18. The Punctuated-Tuckman: Towards a New Group Development Model

    Science.gov (United States)

    Hurt, Andrew C.; Trombley, Sarah M.

    2007-01-01

    Two commonly accepted theories of group development are the Tuckman model (Tuckman & Jensen, 1977) and the Punctuated-Equilibrium model (Gersick, 1988). Critiques of both are that they assume linear development and that they fail to account for outside influences. In contrast, Tubbs (2004) suggests that group development should be viewed from a…

  19. Working group report: Flavor physics and model building

    Indian Academy of Sciences (India)

    M K Parida; Nita Sinha; B Adhikary; B Allanach; A Alok; K S Babu; B Brahmachari; D Choudhury; E J Chun; P K Das; A Ghosal; D Hitlin; W S Hou; S Kumar; H N Li; E Ma; S K Majee; G Majumdar; B Mishra; G Mohanty; S Nandi; H Pas; M K Parida; S D Rindani; J P Saha; N Sahu; Y Sakai; S Sen; C Sharma; C D Sharma; S Shalgar; N N Singh; S Uma Sankar; N Sinha; R Sinha; F Simonetto; R Srikanth; R Vaidya

    2006-11-01

    This is the report of the flavor physics and model building working group at WHEPP-9. While activities in flavor physics have been mainly focused on B-physics, those in model building have been primarily devoted to neutrino physics. We present a summary of the working group discussions carried out during the workshop in the above fields, and also briefly review the progress made in some projects subsequently.

  20. Investigating Facebook Groups through a Random Graph Model

    OpenAIRE

    Dinithi Pallegedara; Lei Pan

    2014-01-01

    Facebook disseminates messages for billions of users every day. Though log files are stored on central servers, law enforcement agencies outside the U.S. cannot easily acquire them from Facebook. This work models Facebook user groups using a random graph model. Our aim is to help detectives quickly estimate the size of a Facebook group with which a suspect is involved. We estimate this group size according to the number of immediate friends and the number of ext...
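The estimation idea — inferring a group's size from a member's visible friend count under a random-graph assumption — can be sketched with an Erdős-Rényi toy model. The edge probability p and every name below are assumptions for illustration, not the paper's actual method.

```python
import random

def sample_degree(n, p, seed=0):
    """Number of immediate friends of one member in an Erdos-Renyi
    G(n, p) group (a toy stand-in for the paper's random graph model)."""
    rng = random.Random(seed)
    return sum(1 for _ in range(n - 1) if rng.random() < p)

def estimate_group_size(immediate_friends, p):
    """Invert E[degree] = p * (n - 1) to estimate the group size n."""
    return 1 + immediate_friends / p

observed = sample_degree(200, 0.1, seed=42)   # pretend this degree came from a profile
print(observed, estimate_group_size(observed, 0.1))
```

With the true n = 200 and p = 0.1, a single observed degree yields an estimate scattered around 200, which is the kind of quick size estimate the abstract has in mind.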

  1. Large field excursions from a few site relaxion model

    Science.gov (United States)

    Fonseca, N.; de Lima, L.; Machado, C. S.; Matheus, R. D.

    2016-07-01

    Relaxion models are an interesting new avenue to explain the radiative stability of the Standard Model scalar sector. They require very large field excursions, which are difficult to generate in a consistent UV completion and to reconcile with the compact field space of the relaxion. We propose an N-site model which naturally generates the large decay constant needed to address these issues. Our model offers distinct advantages with respect to previous proposals: the construction involves non-Abelian fields, allowing for controlled high-energy behavior and more model building possibilities, both in particle physics and inflationary models, and also admits a continuum limit when the number of sites is large, which may be interpreted as a warped extra dimension.

  2. Dynamics of two-group conflicts: A statistical physics model

    Science.gov (United States)

    Diep, H. T.; Kaufman, Miron; Kaufman, Sanda

    2017-03-01

    We propose a "social physics" model for two-group conflict. We consider two disputing groups. Each individual i in each of the two groups has a preference s_i regarding the way in which the conflict should be resolved. The individual preferences span a range between +M (prone to protracted conflict) and -M (prone to settle the conflict). The noise in this system is quantified by a "social temperature". Individuals interact within their group and with individuals of the other group. A pair of individuals (i, j) within a group contributes -s_i*s_j to the energy. The inter-group energy of individual i is taken to be proportional to the product of s_i and the mean value of the preferences of the other group's members. We consider an equivalent-neighbor Erdős-Rényi network where everyone interacts with everyone. We present some examples of conflicts that may be described with this model.
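The Hamiltonian described in the abstract is concrete enough to code up. Below is a minimal sketch: integer-valued preferences, the in-group pair term -s_i*s_j, an inter-group term coupling each individual to the other group's mean preference, and Metropolis sampling at a "social temperature". The coupling value j_inter (including its sign) and all parameter choices are assumptions for illustration, not taken from the paper.

```python
import math
import random

def total_energy(s1, s2, j_inter=-0.5):
    """Energy of a two-group preference model (sketch of the published
    Hamiltonian; the inter-group coupling j_inter is an assumed value)."""
    e = 0.0
    for group in (s1, s2):
        for i in range(len(group)):
            for j in range(i + 1, len(group)):
                e -= group[i] * group[j]          # in-group pair term -s_i * s_j
    # each individual couples to the mean preference of the other group
    e += j_inter * (sum(s1) * (sum(s2) / len(s2)) + sum(s2) * (sum(s1) / len(s1)))
    return e

def metropolis(s1, s2, temperature=0.1, steps=400, m=3, seed=1):
    """Metropolis sampling at a given 'social temperature' (in place)."""
    rng = random.Random(seed)
    for _ in range(steps):
        group = s1 if rng.random() < 0.5 else s2
        i = rng.randrange(len(group))
        old, e_old = group[i], total_energy(s1, s2)
        group[i] = rng.randint(-m, m)             # propose a new preference in [-M, M]
        delta = total_energy(s1, s2) - e_old
        if delta > 0 and rng.random() >= math.exp(-delta / temperature):
            group[i] = old                        # reject the uphill move
    return s1, s2
```

At low social temperature this sampler drives each group toward internal alignment, while the sign of j_inter decides whether the two groups settle on the same or opposing preferences.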

  3. Renormalization-group flow of the effective action of cosmological large-scale structures

    CERN Document Server

    Floerchinger, Stefan

    2017-01-01

    Following an approach of Matarrese and Pietroni, we derive the functional renormalization group (RG) flow of the effective action of cosmological large-scale structures. Perturbative solutions of this RG flow equation are shown to be consistent with standard cosmological perturbation theory. Non-perturbative approximate solutions can be obtained by truncating the a priori infinite set of possible effective actions to a finite subspace. Using for the truncated effective action a form dictated by dissipative fluid dynamics, we derive RG flow equations for the scale dependence of the effective viscosity and sound velocity of non-interacting dark matter, and we solve them numerically. Physically, the effective viscosity and sound velocity account for the interactions of long-wavelength fluctuations with the spectrum of smaller-scale perturbations. We find that the RG flow exhibits an attractor behaviour in the IR that significantly reduces the dependence of the effective viscosity and sound velocity on the input ...

  4. Public perceptions of low carbon energy technologies. Results from a Dutch large group workshop

    Energy Technology Data Exchange (ETDEWEB)

    Brunsting, S.; Van Bree, B.; Feenstra, C.F.J.; Hekkenberg, M. [ECN Policy Studies, Petten (Netherlands)

    2011-06-15

    This report describes the outcomes of a large group workshop held in Utrecht, the Netherlands, on 21 May 2011. The workshop aims to learn about Dutch citizens' perspectives on climate change and low emission energy technologies, and how these perspectives may change after receiving and discussing objective information. This report presents participants' environmental profiles, stated beliefs, knowledge and attitudes, support for different energy technologies, and environmental behaviours and intentions, derived from questionnaire answers and observations during the day. The report also presents changes observed in the above over the course of the workshop. Whereas the report provides some conclusions and inferences throughout its sections, its focus is on presenting the observations. No overall conclusions are drawn.

  5. Next Generation Very Large Array Memo No. 6, Science Working Group 1: The Cradle of Life

    CERN Document Server

    Isella, Andrea; Moullet, Arielle; Galván-Madrid, Roberto; Johnstone, Doug; Ricci, Luca; Tobin, John; Testi, Leonardo; Beltran, Maite; Lazio, Joseph; Siemion, Andrew; Liu, Hauyu Baobab; Du, Fujun; Öberg, Karin I; Bergin, Ted; Caselli, Paola; Bourke, Tyler; Carilli, Chris; Perez, Laura; Butler, Bryan; de Pater, Imke; Qi, Chunhua; Hofstadter, Mark; Moreno, Raphael; Alexander, David; Williams, Jonathan; Goldsmith, Paul; Wyatt, Mark; Loinard, Laurent; Di Francesco, James; Wilner, David; Schilke, Peter; Ginsburg, Adam; Sánchez-Monge, Álvaro; Zhang, Qizhou; Beuther, Henrik

    2015-01-01

    This paper discusses compelling science cases for a future long-baseline interferometer operating at millimeter and centimeter wavelengths, like the proposed Next Generation Very Large Array (ngVLA). We report on the activities of the Cradle of Life science working group, which focused on the formation of low- and high-mass stars, the formation of planets and the evolution of protoplanetary disks, the physical and compositional study of Solar System bodies, and the possible detection of radio signals from extraterrestrial civilizations. We propose 19 scientific projects based on the current specification of the ngVLA. Five of them are highlighted as possible Key Science Projects: (1) Resolving the density structure and dynamics of the youngest HII regions and high-mass protostellar jets, (2) Unveiling binary/multiple protostars at higher resolution, (3) Mapping planet formation regions in nearby disks on scales down to 1 AU, (4) Studying the formation of complex molecules, and (5) Deep atmospheric mapping of giant...

  6. Studies of dental anomalies in a large group of school children.

    Science.gov (United States)

    Küchler, Erika C; Risso, Patrícia A; Costa, Marcelo de Castro; Modesto, Adriana; Vieira, Alexandre R

    2008-10-01

    The identification of specific patterns of dental anomalies would allow testing the hypothesis that certain genetic and environmental factors contribute to distinct dental anomaly subphenotypes. A sexual dimorphism in tooth agenesis and its association with other dental anomalies has been suggested. The aim of this study was to investigate a large group of children to define dental anomaly subphenotypes that may aid future genetic studies. Orthopantomograms of 1198 subjects were examined, of which 1167 were used in this study. The frequency of tooth agenesis in the studied population was 4.8%. Male:female ratios varied from 2:1 for agenesis of the upper lateral incisors to 0.5:1 for premolar agenesis. The risk of infra-occlusion of primary molars and double formation of primary incisors was increased in individuals with tooth agenesis.

  7. Group size, grooming and fission in primates: a modeling approach based on group structure.

    Science.gov (United States)

    Sueur, Cédric; Deneubourg, Jean-Louis; Petit, Odile; Couzin, Iain D

    2011-03-21

    In social animals, fission is a common mode of group proliferation and dispersion and may be affected by genetic or other social factors. Sociality implies preserving relationships between group members. An increase in group size and/or in within-group competition for food can decrease certain social interactions between members, and the group may split irreversibly as a consequence. An individual may try to maintain bonds with as many group members as possible in order to preserve group cohesion, i.e. proximity and stable relationships. However, this strategy takes time, and time is often limited. In addition, previous studies have shown that, whatever the group size, an individual interacts with only certain grooming partners. Here, we develop a computational model to assess how the dynamics of group cohesion are related to group size and to the structure of grooming relationships. Group sizes after simulated fission are compared to the observed sizes of 40 groups of primates. Results showed that the relationship between grooming time and group size depends on how each individual allocates grooming time to its social partners, i.e. grooming a small number of preferred partners or grooming all partners equally or unequally. The number of partners seemed to be more important for group cohesion than grooming time itself. This structural constraint has important consequences for group sociality, as it allows for competition for grooming partners and attraction to high-ranking individuals, as found in primate groups. It could, however, also have implications when considering the cognitive capacities of primates.

  8. Long-term resource variation and group size: A large-sample field test of the Resource Dispersion Hypothesis

    Directory of Open Access Journals (Sweden)

    Morecroft Michael D

    2001-07-01

    Background: The Resource Dispersion Hypothesis (RDH) proposes a mechanism for the passive formation of social groups where resources are dispersed, even in the absence of any benefits of group living per se. Despite supportive modelling, it lacks empirical testing. The RDH predicts that, rather than Territory Size (TS) increasing monotonically with Group Size (GS) to account for increasing metabolic needs, TS is constrained by the dispersion of resource patches, whereas GS is independently limited by their richness. We conducted multiple-year tests of these predictions using data from the long-term study of badgers (Meles meles) in Wytham Woods, England. The study has long failed to identify direct benefits from group living and, consequently, alternative explanations for the badgers' large group sizes have been sought. Results: TS was not consistently related to resource dispersion, nor was GS consistently related to resource richness. Results differed according to data groupings and whether territories were mapped using minimum convex polygons or traditional methods. Habitats differed significantly in resource availability, but there was also evidence that food resources may be spatially aggregated within habitat types as well as between them. Conclusions: This is, we believe, the largest test of the RDH to date, and it builds on the long-term project that initiated part of the thinking behind the hypothesis. Support for the predictions was mixed and depended on year and the method used to map territory borders. We suggest that within-habitat patchiness, as well as model assumptions, should be further investigated for improved tests of the RDH in the future.

  9. Large field inflation models from higher-dimensional gauge theories

    Science.gov (United States)

    Furuuchi, Kazuyuki; Koyama, Yoji

    2015-02-01

    Motivated by the recent detection of B-mode polarization of the CMB by BICEP2, which is possibly of primordial origin, we study large field inflation models which can be obtained from higher-dimensional gauge theories. The constraints from CMB observations on the gauge theory parameters are given, and their naturalness is discussed. Among the models analyzed, Dante's Inferno model turns out to be the most preferred model in this framework.

  10. Large field inflation models from higher-dimensional gauge theories

    Energy Technology Data Exchange (ETDEWEB)

    Furuuchi, Kazuyuki [Manipal Centre for Natural Sciences, Manipal University, Manipal, Karnataka 576104 (India); Koyama, Yoji [Department of Physics, National Tsing-Hua University, Hsinchu 30013, Taiwan R.O.C. (China)

    2015-02-23

    Motivated by the recent detection of B-mode polarization of the CMB by BICEP2, which is possibly of primordial origin, we study large field inflation models which can be obtained from higher-dimensional gauge theories. The constraints from CMB observations on the gauge theory parameters are given, and their naturalness is discussed. Among the models analyzed, Dante’s Inferno model turns out to be the most preferred model in this framework.

  11. How the group affects the mind : A cognitive model of idea generation in groups

    NARCIS (Netherlands)

    Nijstad, Bernard A.; Stroebe, Wolfgang

    2006-01-01

    A model called search for ideas in associative memory (SIAM) is proposed to account for various research findings in the area of group idea generation. The model assumes that idea generation is a repeated search for ideas in associative memory, which proceeds in 2 stages (knowledge activation and id

  12. Architecture Models and Data Flows in Local and Group Datawarehouses

    Science.gov (United States)

    Bogza, R. M.; Zaharie, Dorin; Avasilcai, Silvia; Bacali, Laura

    Architecture models and possible data flows for local and group data warehouses are presented, together with some data processing models. The architecture models consist of several layers and the data flows between them. The chosen architecture of a data warehouse depends on the type and volume of the source data, and influences the analysis, data mining and reporting performed on the data in the DWH.

  13. Therapeutic Enactment: Integrating Individual and Group Counseling Models for Change

    Science.gov (United States)

    Westwood, Marvin J.; Keats, Patrice A.; Wilensky, Patricia

    2003-01-01

    The purpose of this article is to introduce the reader to a group-based therapy model known as therapeutic enactment. A description of this multimodal change model is provided by outlining the relevant background information, key concepts related to specific change processes, and the differences in this model compared to earlier psychodrama…

  14. Dynamics of group knowledge production in facilitated modelling workshops

    DEFF Research Database (Denmark)

    Tavella, Elena; Franco, L. Alberto

    2015-01-01

    The term ‘facilitated modelling’ is used in the literature to characterise an approach to structuring problems, developing options and evaluating decisions by groups working in a model-supported workshop environment, assisted by a facilitator. The approach involves an interactive process by which models are jointly developed with group members interacting face-to-face, with or without computer support. The models produced are used to inform negotiations about the nature of the issues faced by the group, and how to address them. While the facilitated modelling literature is impressive... the form of three distinct group knowledge production patterns: generative, collaborative and assertive. Further, each pattern is characterised by a particular mix of communicative behaviours and model-supported interactions that has implications for the creation of new knowledge within the workshop. Our...

  15. Two Models for Semi-Supervised Terrorist Group Detection

    Science.gov (United States)

    Ozgul, Fatih; Erdem, Zeki; Bowerman, Chris

    Since discovering the organization structure of offender groups leads investigations to terrorist cells or organized crime groups, detecting covert networks from crime data is important to crime investigation. Two models, GDM and OGDM, both based on another representation model (OGRM), are developed and tested on nine terrorist groups. GDM, which depends mainly on police arrest data and “caught together” information, and OGDM, which uses feature matching on year-wise offender components from arrest and demographics data, both performed well on terrorist groups, but OGDM produced high precision with low recall values. OGDM uses a terror crime modus operandi ontology, which enables the matching of similar crimes.

  16. Large N Scalars: From Glueballs to Dynamical Higgs Models

    CERN Document Server

    Sannino, Francesco

    2015-01-01

    We construct effective Lagrangians, and corresponding counting schemes, valid to describe the dynamics of the lowest lying large N stable massive composite state emerging in strongly coupled theories. The large N counting rules can now be employed when computing quantum corrections via an effective Lagrangian description. The framework allows for systematic investigations of composite dynamics of non-Goldstone nature. Relevant examples are the lightest glueball states emerging in any Yang-Mills theory. We further apply the effective approach and associated counting scheme to composite models at the electroweak scale. To illustrate the formalism we consider the possibility that the Higgs emerges as: the lightest glueball of a new composite theory; the large N scalar meson in models of dynamical electroweak symmetry breaking; the large N pseudodilaton useful also for models of near-conformal dynamics. For each of these realisations we determine the leading N corrections to the electroweak precision parameters. ...

  17. Quantifying fish escape behaviour through large mesh panels in trawls based on catch comparison data – model development and a case study from Skagerrak. In: ICES (2012) Report of the ICES-FAO Working Group on Fishing Gear Technology and Fish Behaviour (WGFTFB), 23-27 April 2012, Lorient, France

    DEFF Research Database (Denmark)

    Krag, Ludvig Ahm; Herrmann, Bent; Karlsen, Junita

    Based on catch comparison data, it is demonstrated how detailed, quantitative information about species-specific and size-dependent escape behaviour in relation to a large mesh panel can be extracted. A new analytical model is developed, applied, and compared to the traditional modelling approach...

  18. Finite Mixture Multilevel Multidimensional Ordinal IRT Models for Large Scale Cross-Cultural Research

    NARCIS (Netherlands)

    M.G. de Jong (Martijn); J-B.E.M. Steenkamp (Jan-Benedict)

    2009-01-01

    We present a class of finite mixture multilevel multidimensional ordinal IRT models for large-scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups

  19. Modelling and measurements of wakes in large wind farms

    DEFF Research Database (Denmark)

    Barthelmie, Rebecca Jane; Rathmann, Ole; Frandsen, Sten Tronæs

    2007-01-01

    The paper presents research conducted in the Flow workpackage of the EU funded UPWIND project which focuses on improving models of flow within and downwind of large wind farms in complex terrain and offshore. The main activity is modelling the behaviour of wind turbine wakes in order to improve...

  1. Multiple-membership multiple-classification models for social network and group dependences

    OpenAIRE

    Tranmer, Mark; Steel, David; Browne, William J

    2014-01-01

    The social network literature on network dependences has largely ignored other sources of dependence, such as the school that a student attends, or the area in which an individual lives. The multilevel modelling literature on school and area dependences has, in turn, largely ignored social networks. To bridge this divide, a multiple-membership multiple-classification modelling approach for jointly investigating social network and group dependences is presented. This allows social network and ...

  2. The Beyond the standard model working group: Summary report

    Energy Technology Data Exchange (ETDEWEB)

    G. Azuelos et al.

    2004-03-18

    In this working group we have investigated a number of aspects of searches for new physics beyond the Standard Model (SM) at the running or planned TeV-scale colliders. For the most part, we have considered hadron colliders, as they will define particle physics at the energy frontier for the next ten years at least. The variety of models for Beyond the Standard Model (BSM) physics has grown immensely. It is clear that only future experiments can provide the needed direction to clarify the correct theory. Thus, our focus has been on exploring the extent to which hadron colliders can discover and study BSM physics in various models. We have placed special emphasis on scenarios in which the new signal might be difficult to find or of a very unexpected nature. For example, in the context of supersymmetry (SUSY), we have considered: how to make fully precise predictions for the Higgs bosons as well as the superparticles of the Minimal Supersymmetric Standard Model (MSSM) (parts III and IV); MSSM scenarios in which most or all SUSY particles have rather large masses (parts V and VI); the ability to sort out the many parameters of the MSSM using a variety of signals and study channels (part VII); whether the no-lose theorem for MSSM Higgs discovery can be extended to the next-to-minimal Supersymmetric Standard Model (NMSSM), in which an additional singlet superfield is added to the minimal collection of superfields, potentially providing a natural explanation of the electroweak value of the parameter μ (part VIII); sorting out the effects of CP violation using Higgs plus squark associated production (part IX); the impact of lepton flavor violation of various kinds (part X); experimental possibilities for the gravitino and its sgoldstino partner (part XI); what the implications for SUSY would be if the NuTeV signal for di-muon events were interpreted as a sign of R-parity violation (part XII). Our other main focus was on the phenomenological implications of extra

  3. The Beyond the standard model working group: Summary report

    Energy Technology Data Exchange (ETDEWEB)

    G. Azuelos et al.

    2004-03-18

    In this working group we have investigated a number of aspects of searches for new physics beyond the Standard Model (SM) at the running or planned TeV-scale colliders. For the most part, we have considered hadron colliders, as they will define particle physics at the energy frontier for the next ten years at least. The variety of models for Beyond the Standard Model (BSM) physics has grown immensely. It is clear that only future experiments can provide the needed direction to clarify the correct theory. Thus, our focus has been on exploring the extent to which hadron colliders can discover and study BSM physics in various models. We have placed special emphasis on scenarios in which the new signal might be difficult to find or of a very unexpected nature. For example, in the context of supersymmetry (SUSY), we have considered: how to make fully precise predictions for the Higgs bosons as well as the superparticles of the Minimal Supersymmetric Standard Model (MSSM) (parts III and IV); MSSM scenarios in which most or all SUSY particles have rather large masses (parts V and VI); the ability to sort out the many parameters of the MSSM using a variety of signals and study channels (part VII); whether the no-lose theorem for MSSM Higgs discovery can be extended to the next-to-minimal Supersymmetric Standard Model (NMSSM) in which an additional singlet superfield is added to the minimal collection of superfields, potentially providing a natural explanation of the electroweak value of the parameter {micro} (part VIII); sorting out the effects of CP violation using Higgs plus squark associate production (part IX); the impact of lepton flavor violation of various kinds (part X); experimental possibilities for the gravitino and its sgoldstino partner (part XI); what the implications for SUSY would be if the NuTeV signal for di-muon events were interpreted as a sign of R-parity violation (part XII). Our other main focus was on the phenomenological implications of extra

  4. Dynamical real space renormalization group applied to sandpile models.

    Science.gov (United States)

    Ivashkevich, E V; Povolotsky, A M; Vespignani, A; Zapperi, S

    1999-08-01

    A general framework for the renormalization group analysis of self-organized critical sandpile models is formulated. The usual real space renormalization scheme for lattice models, when applied to nonequilibrium dynamical models, must be supplemented by feedback relations coming from the stationarity conditions. On the basis of these ideas the dynamically driven renormalization group is applied to describe the boundary and bulk critical behavior of sandpile models. A detailed description of the branching nature of sandpile avalanches is given in terms of the generating functions of the underlying branching process.

  5. Nonlinear Reynolds stress models and the renormalization group

    Science.gov (United States)

    Rubinstein, Robert; Barton, J. Michael

    1990-01-01

    The renormalization group is applied to derive a nonlinear algebraic Reynolds stress model of anisotropic turbulence in which the Reynolds stresses are quadratic functions of the mean velocity gradients. The model results from a perturbation expansion that is truncated systematically at second order, with subsequent terms contributing no further information. The resulting turbulence model applies to both low and high Reynolds number flows without requiring wall functions or ad hoc modifications of the equations. All constants are derived from the renormalization group procedure; no adjustable constants arise. The model permits inequality of the Reynolds normal stresses, a necessary condition for calculating turbulence-driven secondary flows in noncircular ducts.

  6. Large scale experiments as a tool for numerical model development

    DEFF Research Database (Denmark)

    Kirkegaard, Jens; Hansen, Erik Asp; Fuchs, Jesper;

    2003-01-01

    Experimental modelling is an important tool for the study of hydrodynamic phenomena. The applicability of experiments can be expanded by the use of numerical models, and experiments are important for documentation of the validity of numerical tools. In other cases numerical tools can be applied for improvement of the reliability of physical model results. This paper demonstrates by examples that numerical modelling benefits in various ways from experimental studies (in large and small laboratory facilities). The examples range from very general hydrodynamic descriptions of wave phenomena to specific hydrodynamic interaction with structures. The examples also show that numerical model development benefits from international co-operation and sharing of high quality results.

  7. LARGE SIGNAL DISCRETE-TIME MODEL FOR PARALLELED BUCK CONVERTERS

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Because a number of switch combinations are involved in the operation of a multi-converter system, conventional methods for obtaining a discrete-time large-signal model of these converter systems result in a very complex solution. A simple sampled-data technique for modeling a distributed dc-dc PWM converters system (DCS) is proposed. The resulting model is nonlinear and can be linearized for the analysis and design of a DCS. These models are also suitable for fast simulation of such networks. As the inputs and outputs of dc-dc converters are slowly varying, a suitable model for the DCS is obtained in terms of a finite-order input/output approximation.
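
    The sampled-data idea can be illustrated with a toy cycle-by-cycle map for a single ideal buck converter: integrate the on- and off-interval dynamics over each switching period and sample the state once per cycle. The component values and the simple sub-stepped Euler integrator below are illustrative assumptions, not the paper's model:

```python
import numpy as np

# Cycle-by-cycle sampled-data map for one ideal buck converter (CCM).
# All component values are illustrative assumptions.
L, C, R = 100e-6, 470e-6, 5.0      # inductor (H), capacitor (F), load (ohm)
Vin, duty, fsw = 12.0, 0.5, 50e3   # input voltage, duty cycle, switching frequency

def step_interval(x, A, b, dt, n=100):
    """Integrate x' = A x + b over dt by simple Euler sub-stepping."""
    h = dt / n
    for _ in range(n):
        x = x + h * (A @ x + b)
    return x

# State x = [iL, vC]; the A matrix is the same in both switch positions,
# only the input term changes (switch on: source applied; off: freewheeling).
A = np.array([[0.0, -1.0 / L],
              [1.0 / C, -1.0 / (R * C)]])
b_on, b_off = np.array([Vin / L, 0.0]), np.array([0.0, 0.0])

T = 1.0 / fsw
x = np.zeros(2)
for _ in range(1500):                               # iterate the cycle map to steady state
    x = step_interval(x, A, b_on, duty * T)         # on-interval sub-map
    x = step_interval(x, A, b_off, (1 - duty) * T)  # off-interval sub-map
print(x[1])   # sampled output voltage, ≈ duty * Vin = 6 V for the ideal converter
```

    Sampling once per switching period is what makes the map a discrete-time large-signal model: the per-cycle state update is nonlinear in the duty cycle but can be linearized around an operating point for controller design.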

  8. Abacus models for parabolic quotients of affine Weyl groups

    CERN Document Server

    Hanusa, Christopher R H

    2011-01-01

    We introduce abacus diagrams that describe minimal length coset representatives in affine Weyl groups of types B, C, and D. These abacus diagrams use a realization of the affine Weyl group of type C due to Eriksson to generalize a construction of James for the symmetric group. We also describe several combinatorial models for these parabolic quotients that generalize classical results in affine type A related to core partitions.

  9. Beyond standard model report of working group II

    CERN Document Server

    Joshipura, A S; Joshipura, Anjan S; Roy, Probir

    1995-01-01

    Working group II at WHEPP3 concentrated on issues related to the supersymmetric standard model as well as SUSY GUTS and neutrino properties. The projects identified by various working groups as well as progress made in them since WHEPP3 are briefly reviewed.

  10. Renormalization-group flow of the effective action of cosmological large-scale structures

    Science.gov (United States)

    Floerchinger, Stefan; Garny, Mathias; Tetradis, Nikolaos; Wiedemann, Urs Achim

    2017-01-01

    Following an approach of Matarrese and Pietroni, we derive the functional renormalization group (RG) flow of the effective action of cosmological large-scale structures. Perturbative solutions of this RG flow equation are shown to be consistent with standard cosmological perturbation theory. Non-perturbative approximate solutions can be obtained by truncating the a priori infinite set of possible effective actions to a finite subspace. Using for the truncated effective action a form dictated by dissipative fluid dynamics, we derive RG flow equations for the scale dependence of the effective viscosity and sound velocity of non-interacting dark matter, and we solve them numerically. Physically, the effective viscosity and sound velocity account for the interactions of long-wavelength fluctuations with the spectrum of smaller-scale perturbations. We find that the RG flow exhibits an attractor behaviour in the IR that significantly reduces the dependence of the effective viscosity and sound velocity on the input values at the UV scale. This allows for a self-contained computation of matter and velocity power spectra for which the sensitivity to UV modes is under control.

  11. Veal calves’ clinical/health status in large groups fed with automatic feeding devices

    Directory of Open Access Journals (Sweden)

    Giulio Cozzi

    2010-01-01

    The aim of the current study was to evaluate the clinical/health status of veal calves on three Italian farms that adopt large-group housing and automatic feeding stations. Visits were scheduled in three phases of the rearing cycle (early, middle, and end). Results showed a high incidence of coughing, skin infection and bloated rumen, particularly in the middle phase, while cross-sucking signs were present at the early stage, when calves’ nibbling proclivity is still high. Throughout the rearing cycle, the frequency of bursitis increased, reaching 53% of calves at the end. The percentage of calves with a poorer body condition than the mid-range of the batch rose gradually as well, likely due to the unbalanced teat-to-calf ratio, which increases competition for feed and reduces the milk intake of low-ranking animals. The marked growth differences among pen-mates and a mortality rate close to 7% suggest that the lower labour demand of automatic milk-feeding devices does not compensate for these drawbacks, so their sustainability at the present status is doubtful both for the veal calves’ welfare and for farm incomes.

  12. Toy Model for Large Non-Symmetric Random Matrices

    CERN Document Server

    Snarska, Małgorzata

    2010-01-01

    Non-symmetric rectangular correlation matrices occur in many problems in economics. We test the method of extracting statistically meaningful correlations between input and output variables of large dimensionality and build a toy model for artificially included correlations in large random time series. The results are then applied to the analysis of Polish macroeconomic data and can be used as an alternative to the classical cointegration approach.

  13. Satellite image collection modeling for large area hazard emergency response

    Science.gov (United States)

    Liu, Shufan; Hodgson, Michael E.

    2016-08-01

    Timely collection of critical hazard information is the key to intelligent and effective hazard emergency response decisions. Satellite remote sensing imagery provides an effective way to collect critical information. Natural hazards, however, often have large impact areas - larger than a single satellite scene. Additionally, the hazard impact area may be discontinuous, particularly in flooding or tornado hazard events. In this paper, a spatial optimization model is proposed to solve the large-area satellite image acquisition planning problem in the context of hazard emergency response. In the model, a large hazard impact area is represented as multiple polygons, and image collection priorities for different portions of the impact area are addressed. The optimization problem is solved with an exact algorithm. Application results demonstrate that the proposed method can address the satellite image acquisition planning problem. A spatial decision support system supporting the optimization model was developed. Several examples of image acquisition problems are used to demonstrate the complexity of the problem and derive optimized solutions.
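
    As a sketch of the planning problem (using a greedy heuristic rather than the paper's exact algorithm), one can represent the discontinuous impact area as weighted cells and pick, within a tasking budget, the scenes that add the most uncovered priority. All cell, scene and priority values below are hypothetical:

```python
# Toy stand-in for prioritized image acquisition planning: weighted cover
# of impact-area cells by candidate scenes, solved greedily.
impact = {            # cell id -> collection priority weight (hypothetical)
    'a': 3.0, 'b': 3.0, 'c': 1.0, 'd': 1.0, 'e': 2.0,
}
scenes = {            # candidate acquisitions -> cells each would image
    's1': {'a', 'b'},
    's2': {'b', 'c', 'd'},
    's3': {'c', 'd', 'e'},
}
budget = 2            # number of scenes we can task

covered, plan = set(), []
for _ in range(budget):
    # pick the scene adding the most not-yet-covered priority weight
    best = max(scenes, key=lambda s: sum(impact[c] for c in scenes[s] - covered))
    plan.append(best)
    covered |= scenes[best]
print(plan)  # -> ['s1', 's3']
```

    The greedy choice here takes 's1' (weight 6) first and then 's3' (residual weight 4); an exact formulation, as in the paper, would instead solve the selection as an integer program.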

  14. Modelling and transient stability of large wind farms

    DEFF Research Database (Denmark)

    Akhmatov, Vladislav; Knudsen, Hans; Nielsen, Arne Hejde

    2003-01-01

    The paper deals with modelling and short-term voltage stability considerations of large wind farms. A physical model of a large offshore wind farm consisting of a large number of windmills is implemented in the dynamic simulation tool PSS/E. Each windmill in the wind farm is represented by a physical model of grid-connected windmills. The windmill generators are conventional induction generators and the wind farm is ac-connected to the power system. Improvements of short-term voltage stability in case of failure events in the external power system are treated with use of conventional generator ... of dynamic reactive compensation demands. In case of blade angle control applied at failure events, dynamic reactive compensation is not necessary for maintaining the voltage stability.

  15. Dualities in 3D large N vector models

    Science.gov (United States)

    Muteeb, Nouman; Zayas, Leopoldo A. Pando; Quevedo, Fernando

    2016-05-01

    Using an explicit path integral approach we derive non-abelian bosonization and duality of 3D systems in the large N limit. We first consider a fermionic U(N) vector model coupled to level k Chern-Simons theory; following standard techniques, we gauge the original global symmetry and impose the corresponding field strength F_{μν} to vanish by introducing a Lagrange multiplier Λ. Exchanging the order of integrations we obtain the bosonized theory with Λ as the propagating field, using the large N rather than the previously used large mass limit. Next we follow the same procedure to dualize the scalar U(N) vector model coupled to Chern-Simons theory and find its corresponding dual theory. Finally, we compare the partition functions of the two resulting theories and find that they agree in the large N limit, including a level/rank duality. This provides constructive evidence for previous proposals on level/rank duality of 3D vector models in the large N limit. We also present a partial analysis at subleading order in large N and find that the duality does not generically hold at this level.

  16. Dualities in 3D large N vector models

    Energy Technology Data Exchange (ETDEWEB)

    Muteeb, Nouman [The Abdus Salam International Centre for Theoretical Physics, ICTP,Strada Costiera 11, 34014 Trieste (Italy); SISSA,Via Bonomea 265, 34136 Trieste (Italy); Zayas, Leopoldo A. Pando [The Abdus Salam International Centre for Theoretical Physics, ICTP,Strada Costiera 11, 34014 Trieste (Italy); Michigan Center for Theoretical Physics, Department of Physics,University of Michigan, Ann Arbor, MI 48109 (United States); Quevedo, Fernando [The Abdus Salam International Centre for Theoretical Physics, ICTP,Strada Costiera 11, 34014 Trieste (Italy); DAMTP, CMS, University of Cambridge,Wilberforce Road, Cambridge, CB3 0WA (United Kingdom)

    2016-05-09

    Using an explicit path integral approach we derive non-abelian bosonization and duality of 3D systems in the large N limit. We first consider a fermionic U(N) vector model coupled to level k Chern-Simons theory; following standard techniques, we gauge the original global symmetry and impose the corresponding field strength F{sub μν} to vanish by introducing a Lagrange multiplier Λ. Exchanging the order of integrations we obtain the bosonized theory with Λ as the propagating field, using the large N rather than the previously used large mass limit. Next we follow the same procedure to dualize the scalar U(N) vector model coupled to Chern-Simons theory and find its corresponding dual theory. Finally, we compare the partition functions of the two resulting theories and find that they agree in the large N limit, including a level/rank duality. This provides constructive evidence for previous proposals on level/rank duality of 3D vector models in the large N limit. We also present a partial analysis at subleading order in large N and find that the duality does not generically hold at this level.

  17. Automorphisms and Generalized Involution Models of Finite Complex Reflection Groups

    CERN Document Server

    Marberg, Eric

    2010-01-01

    We prove that a finite complex reflection group has a generalized involution model, as defined by Bump and Ginzburg, if and only if each of its irreducible factors is either $G(r,p,n)$ with $\gcd(p,n)=1$; $G(r,p,2)$ with $r/p$ odd; or $G_{23}$, the Coxeter group of type $H_3$. We additionally provide explicit formulas for all automorphisms of $G(r,p,n)$, and construct new Gelfand models for the groups $G(r,p,n)$ with $\gcd(p,n)=1$.

  18. Deciphering the Crowd: Modeling and Identification of Pedestrian Group Motion

    Directory of Open Access Journals (Sweden)

    Norihiro Hagita

    2013-01-01

    Associating attributes to pedestrians in a crowd is relevant for various areas like surveillance, customer profiling and service providing. The attributes of interest greatly depend on the application domain and might involve social relations such as friends or family, as well as the hierarchy of the group, including the leader or subordinates. Nevertheless, the complex social setting inherently complicates this task. We attack this problem by exploiting the small group structures in the crowd. The relations among individuals and their peers within a social group are reliable indicators of social attributes. To that end, this paper identifies social groups based on explicit motion models integrated through a hypothesis testing scheme. We develop two models relating positional and directional relations. A pair of pedestrians is identified as belonging to the same group or not by utilizing the two models in parallel, which defines a compound hypothesis testing scheme. By testing the proposed approach on three datasets with different environmental properties and group characteristics, it is demonstrated that we achieve an identification accuracy of 87% to 99%. The contribution of this study lies in its definition of positional and directional relation models, its description of compound evaluations, and the resolution of ambiguities with our proposed uncertainty measure based on the local and global indicators of group relation.
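
    A minimal sketch of such a compound pairwise test, assuming a simple positional criterion (small, stable inter-personal distance) and a directional criterion (aligned headings) with made-up thresholds rather than the paper's calibrated models:

```python
import math

def heading(tr):
    """Overall heading of a trajectory (list of (x, y) points)."""
    (x0, y0), (x1, y1) = tr[0], tr[-1]
    return math.atan2(y1 - y0, x1 - x0)

def same_group(tr_a, tr_b, d_max=1.5, d_var_max=0.1, ang_max=0.3):
    """Compound test: positional AND directional relation must both hold.
    Thresholds (metres, radians) are illustrative assumptions; heading
    differences are compared naively, without angle wrap-around handling."""
    dists = [math.dist(p, q) for p, q in zip(tr_a, tr_b)]
    mean_d = sum(dists) / len(dists)
    var_d = sum((d - mean_d) ** 2 for d in dists) / len(dists)
    positional = mean_d < d_max and var_d < d_var_max           # close and stable
    directional = abs(heading(tr_a) - heading(tr_b)) < ang_max  # aligned motion
    return positional and directional

# Two pedestrians walking side by side vs. one crossing the other's path
walk_a = [(t * 0.5, 0.0) for t in range(10)]
walk_b = [(t * 0.5, 1.0) for t in range(10)]
cross  = [(2.0, t * 0.5 - 2.0) for t in range(10)]
print(same_group(walk_a, walk_b), same_group(walk_a, cross))  # True False
```

    Running the two models in parallel like this is what makes the hypothesis compound: a pair is accepted as a group only when both relation models agree.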

  19. Large animal and primate models of spinal cord injury for the testing of novel therapies.

    Science.gov (United States)

    Kwon, Brian K; Streijger, Femke; Hill, Caitlin E; Anderson, Aileen J; Bacon, Mark; Beattie, Michael S; Blesch, Armin; Bradbury, Elizabeth J; Brown, Arthur; Bresnahan, Jacqueline C; Case, Casey C; Colburn, Raymond W; David, Samuel; Fawcett, James W; Ferguson, Adam R; Fischer, Itzhak; Floyd, Candace L; Gensel, John C; Houle, John D; Jakeman, Lyn B; Jeffery, Nick D; Jones, Linda Ann Truett; Kleitman, Naomi; Kocsis, Jeffery; Lu, Paul; Magnuson, David S K; Marsala, Martin; Moore, Simon W; Mothe, Andrea J; Oudega, Martin; Plant, Giles W; Rabchevsky, Alexander Sasha; Schwab, Jan M; Silver, Jerry; Steward, Oswald; Xu, Xiao-Ming; Guest, James D; Tetzlaff, Wolfram

    2015-07-01

    Large animal and primate models of spinal cord injury (SCI) are being increasingly utilized for the testing of novel therapies. While these represent intermediary animal species between rodents and humans and offer the opportunity to pose unique research questions prior to clinical trials, the role that such large animal and primate models should play in the translational pipeline is unclear. In this initiative we engaged members of the SCI research community in a questionnaire and round-table focus group discussion around the use of such models. Forty-one SCI researchers from academia, industry, and granting agencies were asked to complete a questionnaire about their opinion regarding the use of large animal and primate models in the context of testing novel therapeutics. The questions centered around how large animal and primate models of SCI would be best utilized in the spectrum of preclinical testing, and how much testing in rodent models was warranted before employing these models. Further questions were posed at a focus group meeting attended by the respondents. The group generally felt that large animal and primate models of SCI serve a potentially useful role in the translational pipeline for novel therapies, and that the rational use of these models would depend on the type of therapy and specific research question being addressed. While testing within these models should not be mandatory, the detection of beneficial effects using these models lends additional support for translating a therapy to humans. These models provide an opportunity to evaluate and refine surgical procedures prior to use in humans, and to assess safety and bio-distribution in a spinal cord more similar in size and anatomy to that of humans. Our results reveal that while many feel that these models are valuable in the testing of novel therapies, important questions remain unanswered about how they should be used and how data derived from them should be interpreted.

  20. Investigating the LGBTQ Responsive Model for Supervision of Group Work

    Science.gov (United States)

    Luke, Melissa; Goodrich, Kristopher M.

    2013-01-01

    This article reports an investigation of the LGBTQ Responsive Model for Supervision of Group Work, a trans-theoretical supervisory framework to address the needs of lesbian, gay, bisexual, transgender, and questioning (LGBTQ) persons (Goodrich & Luke, 2011). Findings partially supported applicability of the LGBTQ Responsive Model for Supervision…

  2. Explaining Cooperation in Groups: Testing Models of Reciprocity and Learning

    Science.gov (United States)

    Biele, Guido; Rieskamp, Jorg; Czienskowski, Uwe

    2008-01-01

    What are the cognitive processes underlying cooperation in groups? This question is addressed by examining how well a reciprocity model, two learning models, and social value orientation can predict cooperation in two iterated n-person social dilemmas with continuous contributions. In the first of these dilemmas, the public goods game,…

  3. A Creative Therapies Model for the Group Supervision of Counsellors.

    Science.gov (United States)

    Wilkins, Paul

    1995-01-01

    Sets forth a model of group supervision, drawing on a creative therapies approach which provides an effective way of delivering process issues, conceptualization issues, and personalization issues. The model makes particular use of techniques drawn from art therapy and from psychodrama, and should be applicable to therapists of many orientations.…

  4. Functional renormalization group approach to the Kraichnan model.

    Science.gov (United States)

    Pagani, Carlo

    2015-09-01

    We study the anomalous scaling of the structure functions of a scalar field advected by a random Gaussian velocity field, the Kraichnan model, by means of functional renormalization group techniques. We analyze the symmetries of the model and derive the leading correction to the structure functions considering the renormalization of composite operators and applying the operator product expansion.

  5. ALTRUISM, EGOISM AND GROUP COHESION IN A LOCAL INTERACTION MODEL

    OpenAIRE

    José A. García Martínez

    2004-01-01

    In this paper we have introduced and parameterized the concept of "group cohesion" in a model of local interaction with a population divided into groups. This allows us to control the level of "isolation" of these groups: we thus analyze whether the degree of group cohesion is relevant to achieving efficient behaviour and which level would be best for this purpose. We are interested in situations where there is a trade-off between efficiency and individual incentives. This trade off is st...

  6. Group Lasso for high dimensional sparse quantile regression models

    CERN Document Server

    Kato, Kengo

    2011-01-01

    This paper studies the statistical properties of the group Lasso estimator for high dimensional sparse quantile regression models where the number of explanatory variables (or the number of groups of explanatory variables) is possibly much larger than the sample size while the number of variables in "active" groups is sufficiently small. We establish a non-asymptotic bound on the $\\ell_{2}$-estimation error of the estimator. This bound explains situations under which the group Lasso estimator is potentially superior/inferior to the $\\ell_{1}$-penalized quantile regression estimator in terms of the estimation error. We also propose a data-dependent choice of the tuning parameter to make the method more practical, by extending the original proposal of Belloni and Chernozhukov (2011) for the $\\ell_{1}$-penalized quantile regression estimator. As an application, we analyze high dimensional additive quantile regression models. We show that under a set of primitive regularity conditions, the group Lasso estimator c...
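
    The group-wise sparsity that drives these results can be illustrated by the proximal (block soft-thresholding) step of the group-Lasso penalty. The quantile check loss of the actual estimator is omitted here, and the coefficient groups and penalty level below are illustrative:

```python
import numpy as np

def group_soft_threshold(beta, groups, lam):
    """Proximal step of the group-Lasso penalty: shrink each coefficient
    group toward zero by its l2 norm (block soft-thresholding). A group
    whose norm falls below lam is zeroed out entirely, which is how the
    penalty selects whole groups of variables."""
    out = np.zeros_like(beta)
    for g in groups:
        norm = np.linalg.norm(beta[g])
        if norm > lam:
            out[g] = (1 - lam / norm) * beta[g]   # shrink the whole group
        # else: the entire group is set exactly to zero
    return out

beta = np.array([3.0, 4.0, 0.1, 0.1])
groups = [[0, 1], [2, 3]]
print(group_soft_threshold(beta, groups, lam=1.0))  # -> [2.4 3.2 0.  0. ]
```

    The first group (norm 5) survives and is shrunk, while the second (norm ≈ 0.14) is removed exactly; this all-or-nothing behaviour at the group level is what distinguishes the group Lasso from the plain $\ell_{1}$ penalty.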

  7. Modeling Study of Planar Flexible Manipulator Undergoing Large Deformation

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The planar flexible manipulator undergoing large deformation is investigated by using the finite element method (FEM). Three kinds of reference frames are employed to describe the deformation of an arbitrary point in the flexible manipulator: the global frame, the body-fixed frame and the co-rotational frame. The rigid-flexible coupling dynamic equation of the planar flexible manipulator is derived using Hamilton's principle. Numerical simulations are carried out at the end of this paper to demonstrate the effectiveness of the proposed model. The simulation results indicate that the proposed model is efficient not only for small deformation but also for large deformation.

  8. SPARC Groups: A Model for Incorporating Spiritual Psychoeducation into Group Work

    Science.gov (United States)

    Christmas, Christopher; Van Horn, Stacy M.

    2012-01-01

    The use of spirituality as a resource for clients within the counseling field is growing; however, the primary focus has been on individual therapy. The purpose of this article is to provide counseling practitioners, administrators, and researchers with an approach for incorporating spiritual psychoeducation into group work. The proposed model can…

  9. Cinlar Subgrid Scale Model for Large Eddy Simulation

    CERN Document Server

    Kara, Rukiye

    2016-01-01

    We construct a new subgrid scale (SGS) stress model for representing the small scale effects in large eddy simulation (LES) of incompressible flows. We use the covariance tensor for representing the Reynolds stress and include Clark's model for the cross stress. The Reynolds stress is obtained analytically from Cinlar random velocity field, which is based on vortex structures observed in the ocean at the subgrid scale. The validity of the model is tested with turbulent channel flow computed in OpenFOAM. It is compared with the most frequently used Smagorinsky and one-equation eddy SGS models through DNS data.

  10. A model of interaction between anticorruption authority and corruption groups

    Energy Technology Data Exchange (ETDEWEB)

    Neverova, Elena G.; Malafeyef, Oleg A. [Saint-Petersburg State University, Saint-Petersburg, Russia, 35, Universitetskii prospekt, Petrodvorets, 198504 Email:elenaneverowa@gmail.com, malafeyevoa@mail.ru (Russian Federation)

    2015-03-10

    The paper provides a model of interaction between an anticorruption unit and corruption groups. The main policy functions of the anticorruption unit involve reducing corrupt practices in some entities through an optimal approach to resource allocation and effective anticorruption policy. We develop a model based on a Markov decision-making process and use Howard’s policy-improvement algorithm to solve for an optimal decision strategy. We examine the assumption that corruption groups retaliate against the anticorruption authority to protect themselves. This model was implemented as a stochastic game.
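
    A toy version of this setup, assuming a hypothetical two-state corruption-level MDP with illustrative rewards and transition probabilities (not the paper's calibration), shows Howard's algorithm alternating exact policy evaluation with greedy policy improvement:

```python
import numpy as np

# Tiny hypothetical MDP: states = corruption level {0: low, 1: high},
# actions = {0: light enforcement, 1: heavy enforcement}.
# P[a][s][s'] are transition probabilities; R[a][s] are expected rewards
# (corruption damage plus enforcement expense). All numbers are illustrative.
P = np.array([[[0.7, 0.3],    # light, from low
               [0.2, 0.8]],   # light, from high
              [[0.9, 0.1],    # heavy, from low
               [0.6, 0.4]]])  # heavy, from high
R = np.array([[1.0, -2.0],    # light: cheap, but corruption spreads
              [0.5, -1.0]])   # heavy: costly, but suppresses corruption
gamma = 0.9

policy = np.zeros(2, dtype=int)          # start with light enforcement everywhere
while True:
    # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly
    P_pi = P[policy, np.arange(2)]
    r_pi = R[policy, np.arange(2)]
    v = np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)
    # Policy improvement (Howard step): greedy one-step lookahead
    q = R + gamma * P @ v
    new_policy = q.argmax(axis=0)
    if np.array_equal(new_policy, policy):
        break                            # policy is stable, hence optimal
    policy = new_policy
print(policy)  # -> [0 1]: light enforcement at low corruption, heavy at high
```

    With these numbers the algorithm converges in a few iterations to a threshold policy: allocate heavy enforcement resources only where corruption is already high.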

  11. Including investment risk in large-scale power market models

    DEFF Research Database (Denmark)

    Lemming, Jørgen Kjærgaard; Meibom, P.

    2003-01-01

    Long-term energy market models can be used to examine investments in production technologies; however, with market liberalisation it is crucial that such models include investment risks and investor behaviour. This paper analyses how the effect of investment risk on production technology selection can be included in large-scale partial equilibrium models of the power market. The analyses are divided into a part about risk measures appropriate for power market investors and a more technical part about the combination of a risk-adjustment model and a partial-equilibrium model. To illustrate the analyses quantitatively, a framework based on an iterative interaction between the equilibrium model and a separate risk-adjustment module was constructed. To illustrate the features of the proposed modelling approach we examined how uncertainty in demand and variable costs affects the optimal choice...

  12. Renormalization-group calculation of excitation properties for impurity models

    Science.gov (United States)

    Yoshida, M.; Whitaker, M. A.; Oliveira, L. N.

    1990-05-01

    The renormalization-group method developed by Wilson to calculate thermodynamical properties of dilute magnetic alloys is generalized to allow the calculation of dynamical properties of many-body impurity Hamiltonians. As a simple illustration, the impurity spectral density for the resonant-level model (i.e., the U=0 Anderson model) is computed. As a second illustration, for the same model, the longitudinal relaxation rate for a nuclear spin coupled to the impurity is calculated as a function of temperature.

  13. Bayesian model reduction and empirical Bayes for group (DCM) studies.

    Science.gov (United States)

    Friston, Karl J; Litvak, Vladimir; Oswal, Ashwini; Razi, Adeel; Stephan, Klaas E; van Wijk, Bernadette C M; Ziegler, Gabriel; Zeidman, Peter

    2016-03-01

    This technical note describes some Bayesian procedures for the analysis of group studies that use nonlinear models at the first (within-subject) level - e.g., dynamic causal models - and linear models at subsequent (between-subject) levels. Its focus is on using Bayesian model reduction to finesse the inversion of multiple models of a single dataset or a single (hierarchical or empirical Bayes) model of multiple datasets. These applications of Bayesian model reduction allow one to consider parametric random effects and make inferences about group effects very efficiently (in a few seconds). We provide the relatively straightforward theoretical background to these procedures and illustrate their application using a worked example. This example uses a simulated mismatch negativity study of schizophrenia. We illustrate the robustness of Bayesian model reduction to violations of the (commonly used) Laplace assumption in dynamic causal modelling and show how its recursive application can facilitate both classical and Bayesian inference about group differences. Finally, we consider the application of these empirical Bayesian procedures to classification and prediction.

  14. Solitonic Models Based on Quantum Groups and the Standard Model

    CERN Document Server

    Finkelstein, Robert J

    2010-01-01

    The idea that the elementary particles might have the symmetry of knots has had a long history. In any current formulation of this idea, however, the knot must be quantized. The present review is a summary of a small set of papers that began as an attempt to correlate the properties of quantized knots with the empirical properties of the elementary particles. As the ideas behind these papers have developed over a number of years the model has evolved, and this review is intended to present the model in its current form. The original picture of an elementary fermion as a solitonic knot of field, described by the trefoil representation of SUq(2), has expanded into its current form in which a knotted field is complementary to a composite structure composed of three or more preons that in turn are described by the fundamental representation of SLq(2). These complementary descriptions may be interpreted as describing single composite particles composed of three or more preons bound by a knotted field.

  15. Model Experiments for the Determination of Airflow in Large Spaces

    DEFF Research Database (Denmark)

    Nielsen, Peter V.

    Model experiments are one of the methods used for the determination of airflow in large spaces. This paper will discuss the formation of the governing dimensionless numbers. It is shown that experiments with a reduced scale will often necessitate a fully developed turbulence level of the flow. Details of the flow from supply openings are very important for the determination of room air distribution. It is in some cases possible to make a simplified supply opening for the model experiment.

  16. New Pathways between Group Theory and Model Theory

    CERN Document Server

    Fuchs, László; Goldsmith, Brendan; Strüngmann, Lutz

    2017-01-01

    This volume focuses on group theory and model theory with a particular emphasis on the interplay of the two areas. The survey papers provide an overview of the developments across group, module, and model theory while the research papers present the most recent study in those same areas. With introductory sections that make the topics easily accessible to students, the papers in this volume will appeal to beginning graduate students and experienced researchers alike. As a whole, this book offers a cross-section view of the areas in group, module, and model theory, covering topics such as DP-minimal groups, Abelian groups, countable 1-transitive trees, and module approximations. The papers in this book are the proceedings of the conference “New Pathways between Group Theory and Model Theory,” which took place February 1-4, 2016, in Mülheim an der Ruhr, Germany, in honor of the editors’ colleague Rüdiger Göbel. This publication is dedicated to Professor Göbel, who passed away in 2014. He was one of th...

  17. A first large-scale flood inundation forecasting model

    Energy Technology Data Exchange (ETDEWEB)

    Schumann, Guy J-P; Neal, Jeffrey C.; Voisin, Nathalie; Andreadis, Konstantinos M.; Pappenberger, Florian; Phanthuwongpakdee, Kay; Hall, Amanda C.; Bates, Paul D.

    2013-11-04

    At present, continental- to global-scale flood forecasting focuses on predicting discharge at a point, with little attention to the detail and accuracy of local-scale inundation predictions. Yet inundation is actually the variable of interest, and all flood impacts are inherently local in nature. This paper proposes a first large-scale flood inundation ensemble forecasting model that uses the best available data and modeling approaches in data-scarce areas and at continental scales. The model was built for the Lower Zambezi River in southeast Africa to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. The inundation model domain has a surface area of approximately 170,000 km2. ECMWF meteorological data were used to force the VIC (Variable Infiltration Capacity) macro-scale hydrological model, which simulated and routed daily flows to the input boundary locations of the 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of many river channels that play a key role in flood wave propagation. We therefore employed a novel sub-grid channel scheme to describe the river network in detail whilst at the same time representing the floodplain at an appropriate and efficient scale. The modeling system was first calibrated using water levels on the main channel from the ICESat (Ice, Cloud, and land Elevation Satellite) laser altimeter and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of about 1 km (one model resolution) of the observed flood edge of the event. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2. However, initial model test runs in forecast mode

  18. Exploring medical student learning in the large group teaching environment: examining current practice to inform curricular development

    OpenAIRE

    Luscombe, Ciara; Montgomery, Julia

    2016-01-01

    Background Lectures continue to be an efficient and standardised way to deliver information to large groups of students. It has been well documented that students prefer interactive lectures, based on active learning principles, to didactic teaching in the large group setting. Despite this, it is often the case that many students do not engage with active learning tasks and attempts at interaction. By exploring student experiences, expectations and how they use lectures in their learning we w

  19. Application of Logic Models in a Large Scientific Research Program

    Science.gov (United States)

    O'Keefe, Christine M.; Head, Richard J.

    2011-01-01

    It is the purpose of this article to discuss the development and application of a logic model in the context of a large scientific research program within the Commonwealth Scientific and Industrial Research Organisation (CSIRO). CSIRO is Australia's national science agency and is a publicly funded part of Australia's innovation system. It conducts…


  1. Large scale semantic 3D modeling of the urban landscape

    NARCIS (Netherlands)

    I. Esteban Lopez

    2012-01-01

    Modeling and understanding large urban areas is becoming an important topic in a world where everything is being digitized. A semantic and accurate 3D representation of a city can be used in many applications such as event and security planning and management, assisted navigation, autonomous operatio

  2. Large scale solar district heating. Evaluation, modelling and designing - Appendices

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

    The appendices present the following: A) Cad-drawing of the Marstal CSHP design. B) Key values - large-scale solar heating in Denmark. C) Monitoring - a system description. D) WMO-classification of pyranometers (solarimeters). E) The computer simulation model in TRNSYS. F) Selected papers from the author. (EHS)

  3. Distribution of ABO blood groups and rhesus factor in a Large Scale ...

    African Journals Online (AJOL)

    J. Torabizade maatoghi

    2015-08-20

    Aug 20, 2015 ... Iranian Blood Transfusion Organization Research Center, Iran; Ahvaz Jundishapur ... Knowing the distribution of blood groups in different blood collection ... groups is of utmost importance due to the location of the province.

  4. Group Offending in Mass Atrocities: Proposing a Group Violence Strategies Model for International Crimes

    Directory of Open Access Journals (Sweden)

    Regina Elisabeth Rauxloh

    2016-12-01

    Most research on mass atrocities, especially genocide, is conducted at the macro level, exploring how mass violence is instigated, planned and orchestrated at the level of the state. This paper, on the other hand, suggests that more research on the individual perpetrator is needed to complement the understanding of mass atrocities. The author therefore develops a new model, the group violence strategies model. This model combines various traditional criminological models of group offending and proposes a three-stage analysis, looking at the individual aggressor, the actions within the offender group, and the actions between offender group and victim group, in order to better understand the phenomenon of ordinary people committing unspeakable crimes. Download this paper from SSRN: https://ssrn.com/abstract=2875712

  5. Order reduction of large-scale linear oscillatory system models

    Energy Technology Data Exchange (ETDEWEB)

    Trudnowski, D.J. (Pacific Northwest Lab., Richland, WA (United States))

    1994-02-01

    Eigenanalysis and signal analysis techniques of deriving representations of power system oscillatory dynamics result in very high-order linear models. In order to apply many modern control design methods, the models must be reduced to a more manageable order while preserving essential characteristics. Presented in this paper is a model reduction method well suited for large-scale power systems. The method searches for the optimal subset of the high-order model that best represents the system. An Akaike information criterion is used to define the optimal reduced model. The method is first presented, and then examples of applying it to Prony analysis and eigenanalysis models of power systems are given.
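As a toy stand-in for the paper's procedure, the snippet below scores reduced models of increasing order with an Akaike information criterion. The mode bank, "true" system and noise level are invented for illustration; the point is only the AIC bookkeeping, not the Prony/eigenanalysis machinery itself.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 400)

# Invented "high-order model": a bank of damped oscillatory modes.
modes = [np.exp(-d * t) * np.cos(w * t)
         for d, w in [(0.1, 1.0), (0.3, 2.5), (0.05, 4.0), (0.5, 6.0), (0.2, 8.0)]]
A = np.column_stack(modes)

# The "true" system uses only the first two modes; measurements add noise.
y = 1.5 * modes[0] - 0.8 * modes[1] + 0.05 * rng.standard_normal(t.size)

def aic(k):
    """AIC of the reduced model that keeps the first k modes."""
    coef, *_ = np.linalg.lstsq(A[:, :k], y, rcond=None)
    rss = np.sum((y - A[:, :k] @ coef) ** 2)
    return t.size * np.log(rss / t.size) + 2 * k

best_k = min(range(1, 6), key=aic)   # order with minimum AIC
```

The criterion trades residual fit against the number of retained modes, so adding modes beyond the two that actually carry the dynamics is penalised.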

  6. A numerical shoreline model for shorelines with large curvature

    DEFF Research Database (Denmark)

    Kærgaard, Kasper Hauberg; Fredsøe, Jørgen

    2013-01-01

    This paper presents a new numerical model for shoreline change which can be used to model the evolution of shorelines with large curvature. The model is based on a one-line formulation in terms of coordinates which follow the shape of the shoreline, instead of the more common approach where the two orthogonal horizontal directions are used. The volume error in the sediment continuity equation which is thereby introduced is removed through an iterative procedure. The model treats the shoreline changes by computing the sediment transport in a 2D coastal area model, and then integrating the sediment transport field across the coastal profile to obtain the longshore sediment transport variation along the shoreline. The model is used to compute the evolution of a shoreline with a 90° change in shoreline orientation; due to this drastic change in orientation a migrating shoreline spit develops…
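For contrast with the curvilinear formulation described above, the classical fixed-grid one-line model takes only a few lines. All parameter values and the transport law below are illustrative; the paper's shoreline-following coordinates and 2D transport computation are not reproduced here.

```python
import numpy as np

# Classical one-line shoreline model on a fixed x-grid (small curvature).
nx, dx, dt, D = 100, 50.0, 3600.0, 8.0            # cells, m, s, active depth (m)
x = np.arange(nx) * dx
y = 20.0 * np.exp(-((x - 2500.0) / 400.0) ** 2)   # initial shoreline bump (m)
area0 = np.trapz(y, x)                            # sand volume proxy

def longshore_q(y, q0=0.05):
    """Toy transport law: Q proportional to the local shoreline angle."""
    return -q0 * np.gradient(y, dx)               # m^3/s

for _ in range(1000):
    q = longshore_q(y)
    y -= dt / D * np.gradient(q, dx)   # continuity: dy/dt = -(1/D) dQ/dx

# The bump diffuses: its peak drops while the sand volume is conserved.
```

With this transport law the continuity equation reduces to a diffusion equation for the shoreline position, which is why a fixed-grid one-line model cannot represent a spit wrapping through 90° of orientation change, the case the paper addresses.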

  7. Small groups, large profits: Calculating interest rates in community-managed microfinance

    DEFF Research Database (Denmark)

    Rasmussen, Ole Dahl

    2012-01-01

    Savings groups are a widely used strategy for women’s economic resilience – over 80% of members worldwide are women, and in the case described here, 72.5%. In these savings groups it is common to see the interest rate on savings reported as "20-30% annually". Using panel data from 204 groups in M...

  8. Group Elevator Peak Scheduling Based on Robust Optimization Model

    Directory of Open Access Journals (Sweden)

    ZHANG, J.

    2013-08-01

    Scheduling of an Elevator Group Control System (EGCS) is a typical combinatorial optimization problem. Uncertain group scheduling under peak traffic flows has recently become a research focus and difficulty. Robust Optimization (RO) is a novel and effective way to deal with uncertain scheduling problems. In this paper, a peak scheduling method based on an RO model for multi-elevator systems is proposed. The method is immune to the uncertainty of peak traffic flows; optimal scheduling is realized without exact numbers of waiting passengers on each calling floor. Specifically, an energy-saving-oriented multi-objective scheduling price is proposed, and an RO uncertain peak scheduling model is built to minimize this price. Because the RO uncertain model cannot be solved directly, it is transformed into a certain model via elevator-scheduling robust counterparts. Because the solution space of elevator scheduling is enormous, an ant colony algorithm is proposed to solve the certain model in short time. Based on the algorithm, optimal scheduling solutions are found quickly, and group elevators are scheduled according to the solutions. Simulation results show the method effectively improves scheduling performance in the peak pattern, realizing efficient operation of group elevators.

  9. Extended Group Contribution Model for Polyfunctional Phase Equilibria

    DEFF Research Database (Denmark)

    Abildskov, Jens

    …liquid equilibria from data on binary mixtures, composed of structurally simple molecules with a single functional group. More complex is the situation with mixtures composed of structurally more complicated molecules or molecules with more than one functional group. The UNIFAC method is extended to handle polyfunctional group situations, based on additional information on molecular structure. The extension involves the addition of second-order correction terms to the existing equation. In this way the current first-order formulation is retained. The second-order concept is developed for mixture properties based… In chapter 4 parameters are estimated for the first-order UNIFAC model, based on which parameters are estimated for one of the second-order models described in chapter 3. The parameter estimation is based on measured binary data on around 4000 systems, covering 11 C-, H- and O-containing functional groups…

  10. Clustering Analysis of Black-start Decision-making with a Large Group of Decision-makers

    Institute of Scientific and Technical Information of China (English)

    2012-01-01

    The optimization of black-start decision-making plays an important role in the rapid restoration of a power system after a major failure/outage. With the introduction of the concept of smart grids and the development of real-time communication networks, the black-start decision-makers are no longer limited to only one or a few power system experts such as dispatchers, but rather a large group of professional people in practice. The overall behaviors of a large group of decision-makers/experts are more complicated and unpredictable. However, the existing methods for black-start decision-making cannot handle situations with a large group of decision-makers. Given this background, a clustering algorithm is presented to optimize the black-start decision-making problem with a large group of decision-makers. Group decision-making preferences are obtained by clustering analysis, and the final black-start decision-making results are achieved by combining the weights of black-start indexes and the preferences of the decision-making group. The effectiveness of the proposed method is validated by a practical case. This work extends the black-start decision-making problem to situations with a large group of decision-makers.
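A minimal sketch of the pipeline described above: cluster hypothetical expert preference vectors, weight the cluster centres by cluster size, and combine them with (equally hypothetical) black-start index weights. The clustering method, all scores and all weights are ours for illustration, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 60 experts score 4 black-start schemes in [0, 1],
# drawn from three opinion "camps" (the camp means are invented).
prefs = np.vstack([rng.normal(m, 0.05, size=(20, 4))
                   for m in ([0.9, 0.2, 0.5, 0.4],
                             [0.3, 0.8, 0.6, 0.2],
                             [0.5, 0.4, 0.9, 0.3])]).clip(0.0, 1.0)

def kmeans(X, k, iters=50):
    """Plain k-means; an empty cluster keeps its previous centre."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

labels, centers = kmeans(prefs, 3)
weights = np.bincount(labels, minlength=3) / len(prefs)  # cluster sizes
group_pref = weights @ centers               # size-weighted group preference
index_w = np.array([0.4, 0.2, 0.3, 0.1])     # hypothetical index weights
best_scheme = int(np.argmax(group_pref * index_w))
```

Size-weighting the cluster centres lets a few thousand individual opinions collapse to a handful of representative preference vectors before they are combined with the engineering indexes.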

  11. Thermodynamic Modeling of Organic-Inorganic Aerosols with the Group-Contribution Model AIOMFAC

    Science.gov (United States)

    Zuend, A.; Marcolli, C.; Luo, B. P.; Peter, T.

    2009-04-01

    Liquid aerosol particles are - from a physicochemical viewpoint - mixtures of inorganic salts, acids, water and a large variety of organic compounds (Rogge et al., 1993; Zhang et al., 2007). Molecular interactions between these aerosol components lead to deviations from ideal thermodynamic behavior. Strong non-ideality between organics and dissolved ions may influence the aerosol phases at equilibrium by means of liquid-liquid phase separations into a mainly polar (aqueous) and a less polar (organic) phase. A number of activity models exist to successfully describe the thermodynamic equilibrium of aqueous electrolyte solutions. However, the large number of different, often multi-functional, organic compounds in mixed organic-inorganic particles is a challenging problem for the development of thermodynamic models. The group-contribution concept, as introduced in the UNIFAC model by Fredenslund et al. (1975), is a practical method to handle this difficulty and to add a certain predictability for unknown organic substances. We present the group-contribution model AIOMFAC (Aerosol Inorganic-Organic Mixtures Functional groups Activity Coefficients), which explicitly accounts for molecular interactions between solution constituents, both organic and inorganic, to calculate activities, chemical potentials and the total Gibbs energy of mixed systems (Zuend et al., 2008). This model enables the computation of vapor-liquid (VLE), liquid-liquid (LLE) and solid-liquid (SLE) equilibria within one framework. Focusing on atmospheric applications we considered eight different cations, five anions and a wide range of alcohols/polyols as organic compounds. With AIOMFAC, the activities of the components within an aqueous electrolyte solution are very well represented up to high ionic strength. We show that the semi-empirical middle-range parametrization of direct organic-inorganic interactions in alcohol-water-salt solutions enables accurate computations of vapor-liquid and liquid

  12. Discrete time duration models with group-level heterogeneity

    DEFF Research Database (Denmark)

    Frederiksen, Anders; Honoré, Bo; Hu, Loujia

    2007-01-01

    Dynamic discrete choice panel data models have received a great deal of attention. In those models, the dynamics is usually handled by including the lagged outcome as an explanatory variable. In this paper we consider an alternative model in which the dynamics is handled by using the duration in the current state as a covariate. We propose estimators that allow for group-specific effects in parametric and semiparametric versions of the model. The proposed method is illustrated by an empirical analysis of job durations allowing for firm-level effects.
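The model class is easy to state concretely. A toy discrete-time hazard with duration-in-state as a covariate and a group (firm) intercept might look like the following; all parameter values are invented for illustration and are not the paper's estimates.

```python
import numpy as np

def hazard(t, group_effect, a=-2.0, b=0.15):
    """Exit probability in period t, given survival to t (logit link).
    Duration dependence enters through t itself; group_effect is a
    firm-level intercept shift. Parameter values are illustrative."""
    return 1.0 / (1.0 + np.exp(-(a + b * t + group_effect)))

def survival(T, group_effect):
    """P(duration > T) = product over t = 1..T of (1 - hazard(t))."""
    t = np.arange(1, T + 1)
    return float(np.prod(1.0 - hazard(t, group_effect)))

low, high = survival(12, 0.0), survival(12, 0.5)
# with b > 0 the hazard rises with time in state (positive duration
# dependence), and a higher-exit firm effect lowers survival: high < low
```

Estimation then amounts to maximising the product of these period-by-period exit/stay probabilities over the panel, with the group effects treated parametrically or semiparametrically as in the paper.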

  13. Tensor renormalization group analysis of CP(N-1) model

    CERN Document Server

    Kawauchi, Hikaru

    2016-01-01

    We apply the higher order tensor renormalization group to lattice CP($N-1$) model in two dimensions. A tensor network representation of the CP($N-1$) model in the presence of the $\theta$-term is derived. We confirm that the numerical results of the CP(1) model without the $\theta$-term using this method are consistent with that of the O(3) model which is analyzed by the same method in the region $\beta \gg 1$ and that obtained by Monte Carlo simulation in a wider range of $\beta$. The numerical computation including the $\theta$-term is left for future challenges.

  14. Tensor renormalization group analysis of CP(N-1) model

    Science.gov (United States)

    Kawauchi, Hikaru; Takeda, Shinji

    2016-06-01

    We apply the higher-order tensor renormalization group to the lattice CP(N-1) model in two dimensions. A tensor network representation of the CP(N-1) model in the presence of the θ term is derived. We confirm that the numerical results of the CP(1) model without the θ term using this method are consistent with that of the O(3) model which is analyzed by the same method in the region β ≫ 1 and that obtained by the Monte Carlo simulation in a wider range of β. The numerical computation including the θ term is left for future challenges.
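The 2D HOTRG used for CP(N-1) is involved, but the underlying renormalization idea can be illustrated in one dimension, where blocking the Ising transfer matrix is exact and converges to the known free energy. This is our own didactic toy, not the paper's algorithm.

```python
import numpy as np

beta = 0.7
# Transfer matrix of the 1-D Ising model at zero field.
T = np.array([[np.exp(beta), np.exp(-beta)],
              [np.exp(-beta), np.exp(beta)]])

f = 0.0                      # accumulates log Z per site
for k in range(1, 31):       # each squaring doubles the chain length
    T = T @ T                # coarse-grain: two sites -> one block
    norm = T.max()
    T /= norm                # rescale to keep entries O(1)
    f += np.log(norm) / 2**k

exact = np.log(2.0 * np.cosh(beta))   # closed-form 1-D result
```

Squaring-with-rescaling is the simplest instance of the tensor-RG pattern: contract, truncate/normalize, and track the discarded scale factors, whose weighted sum converges to the free energy density.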

  15. Appreciative Socialization Group. A Model of Personal Development

    Directory of Open Access Journals (Sweden)

    Simona PONEA

    2010-12-01

    In this article we present a new form of group, which we consider important for the process of personal development. Groups are a gathering of people united by a common purpose. We believe that through their group, members can develop new skills and can also achieve change in the direction they want. Socialization is the process that we share with others, by communicating and by holding close views on different things in life. Appreciative socialization involves placing the emphasis on those elements that have value to us, which are positive. We consider the appreciative socialization group a model of good practice that aims at development among group members and strengthens the empowerment process.

  16. From Mindless Masses to Small Groups: Conceptualizing Collective Behavior in Crowd Modeling.

    Science.gov (United States)

    Templeton, Anne; Drury, John; Philippides, Andrew

    2015-09-01

    Computer simulations are increasingly used to monitor and predict behavior at large crowd events, such as mass gatherings, festivals and evacuations. We critically examine the crowd modeling literature and call for future simulations of crowd behavior to be based more closely on findings from current social psychological research. A systematic review was conducted on the crowd modeling literature (N = 140 articles) to identify the assumptions about crowd behavior that modelers use in their simulations. Articles were coded according to the way in which crowd structure was modeled. It was found that 2 broad types are used: mass approaches and small group approaches. However, neither the mass nor the small group approaches can accurately simulate the large collective behavior that has been found in extensive empirical research on crowd events. We argue that to model crowd behavior realistically, simulations must use methods which allow crowd members to identify with each other, as suggested by self-categorization theory.

  17. Large Scale Simulations of the Kinetic Ising Model

    Science.gov (United States)

    Münkel, Christian

    We present Monte Carlo simulation results for the dynamical critical exponent z of the two- and three-dimensional kinetic Ising model. The z-values were calculated from the magnetization relaxation from an ordered state into the equilibrium state at Tc for very large systems with up to (169984)^2 and (3072)^3 spins. To our knowledge, these are the largest Ising systems simulated to date. We also report the successful simulation of very large lattices on a massively parallel MIMD computer with high speedups of approximately 1000 and an efficiency of about 0.93.
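A scaled-down sketch of the measurement: Metropolis relaxation of a small 2D Ising lattice from the ordered state at Tc, recording |m(t)|. On a lattice this small one cannot extract z; the sketch only illustrates the procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
L = 32
T_c = 2.0 / np.log(1.0 + np.sqrt(2.0))      # Onsager's critical temperature
beta = 1.0 / T_c
spins = np.ones((L, L), dtype=int)          # ordered start: m(0) = 1

def sweep(s):
    """One Metropolis sweep of random single-spin updates at T_c."""
    for _ in range(s.size):
        i, j = rng.integers(L, size=2)
        nb = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
        dE = 2 * s[i, j] * nb               # energy cost of flipping s[i, j]
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] = -s[i, j]

m = [1.0]
for _ in range(50):
    sweep(spins)
    m.append(abs(spins.mean()))
# On much larger lattices, fitting the power-law decay of m(t) at T_c
# gives the dynamical critical exponent z studied in the paper.
```

The need for lattices as large as (169984)^2 comes precisely from this fit: finite-size effects cut off the power-law decay long before the asymptotic regime on small systems.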

  18. Small groups, large profits: Calculating interest rates in community-managed microfinance

    DEFF Research Database (Denmark)

    Rasmussen, Ole Dahl

    2012-01-01

    Savings groups are a widely used strategy for women’s economic resilience – over 80% of members worldwide are women, and in the case described here, 72.5%. In these savings groups it is common to see the interest rate on savings reported as "20-30% annually". Using panel data from 204 groups in Malawi, I show that the right figure is likely to be at least twice this. For these groups, the annual return is 62%. The difference comes from sector-wide application of a non-standard interest rate calculation and unrealistic assumptions about the savings profile in the groups. As a result, it is impossible to compare returns in savings groups with returns elsewhere. Moreover, the interest on savings is incomparable to the interest rate on loans. I argue for the use of a standardized comparable metric and suggest easy ways to implement it. Developments of new tools and standards along these lines…
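The arithmetic behind the claim is easy to reproduce. With monthly deposits and a lump-sum share-out, dividing total interest by total deposits understates the annualized return, because most deposits are outstanding for far less than a year. The cash flows and rates below are hypothetical, not the Malawi data; an internal-rate-of-return calculation shows the roughly two-fold gap.

```python
import numpy as np

deposit, months = 100.0, 12
payout = deposit * months * 1.20       # share-out: deposits plus 20% "interest"

# Common sector convention: interest / total deposits -> reported "20% annually".
naive = payout / (deposit * months) - 1.0

def fv(r):
    """Value at month 12 of the 12 monthly deposits at monthly rate r."""
    t = np.arange(months)              # deposit t is held for (months - t) periods
    return float(np.sum(deposit * (1.0 + r) ** (months - t)))

# Bisection for the monthly internal rate of return matching the payout.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if fv(mid) < payout else (lo, mid)

annualized = (1.0 + mid) ** 12 - 1.0   # roughly double the naive figure
```

The gap arises because the average deposit is outstanding for only about half the cycle, which is the kind of savings-profile assumption the paper argues must be standardized before returns can be compared.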

  19. GBP-WAHSN: A Group-Based Protocol for Large Wireless Ad Hoc and Sensor Networks

    Institute of Scientific and Technical Information of China (English)

    Jaime Lloret; Miguel Garcia; Jesus Tomás; Fernando Boronat

    2008-01-01

    Grouping nodes gives better performance to the whole network by diminishing the average network delay and avoiding unnecessary message forwarding and additional overhead. Many routing protocols for ad hoc and sensor networks have been designed, but none of them is based on groups. In this paper, we start by defining group-based topologies, and then show how some wireless ad hoc sensor network (WAHSN) routing protocols perform when the nodes are arranged in groups. In our proposal, connections between groups are established as a function of the proximity of the nodes and the neighbor's available capacity (based on the node's energy). We describe the proposed architecture, the messages that are needed for its proper operation, and its mathematical description. We have also simulated how much time is needed to propagate information between groups. Finally, we show a comparison with other architectures.
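The link-selection idea in the abstract (proximity plus neighbour energy) can be illustrated with a toy scoring function; the functional form and all numbers are ours, not the protocol's actual metric.

```python
# Toy version of group link selection: prefer the neighbouring group whose
# gateway node is close and has energy to spare (weighting is illustrative).
def link_score(distance_m, energy_frac, alpha=1.0):
    return energy_frac / (1.0 + alpha * distance_m / 100.0)

# Candidate neighbour groups: (gateway distance in m, residual energy fraction).
neighbours = {"G1": (40.0, 0.90), "G2": (15.0, 0.20), "G3": (80.0, 0.95)}
best = max(neighbours, key=lambda g: link_score(*neighbours[g]))
```

Trading distance against residual energy in this way avoids always routing through the nearest gateway and draining its battery first.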

  20. Models for blisks with large blends and small mistuning

    Science.gov (United States)

    Tang, Weihan; Epureanu, Bogdan I.; Filippi, Sergio

    2017-03-01

    Small deviations of the structural properties of individual sectors of blisks, referred to as mistuning, can lead to localization of vibration energy and drastically increased forced responses. Similar phenomena are observed in blisks with large damages or repair blends. Such deviations are best studied statistically because they are random. In the absence of cyclic symmetry, the computational cost to predict the vibration behavior of blisks becomes prohibitively high. That has led to the development of various reduced-order models (ROMs). Existing approaches are either for small mistuning, or are computationally expensive and thus not effective for statistical analysis. This paper discusses a reduced-order modeling method for blisks with both large and small mistuning, which requires low computational effort. This method utilizes the pristine, rogue and interface modal expansion (PRIME) method to model large blends. PRIME uses only sector-level cyclic modes strategically combined together to create a reduction basis which yields ROMs that efficiently and accurately model large mistuning. To model small mistuning, nodal energy weighted transformation (NEWT) is integrated with PRIME, resulting in N-PRIME, which requires only sector-level calculations to create a ROM which captures both small and large mistuning with minimized computational effort. The combined effects of large blends and small mistuning are studied using N-PRIME for a dual flow path system and for a conventional blisk. The accuracy of the N-PRIME method is validated against full-order finite element analyses for both natural and forced response computations, including displacement amplitudes and surface stresses. Results reveal that N-PRIME is capable of accurately predicting the dynamics of a blisk with severely large mistuning, along with small random mistuning throughout each sector. Also, N-PRIME can accurately capture modes with highly localized motions. A statistical analysis is performed to

  1. Comparative Testing of Hemostatic Dressings in a Large Animal Model (Sus scrofa) with Severe Hepatic Injuries

    Science.gov (United States)

    2013-12-02

    Comparative testing of hemostatic dressings in a large animal model (Sus scrofa) with severe hepatic injuries. Principal Investigator (PI) / Training Coordinator (TC): Capt... Animal usage to date: Sus scrofa, 36 approved, 18 used. Note: many fewer animals than approved were used because one of the original treatment groups (Lypressin-soaked gauze

  2. A large deformation viscoelastic model for double-network hydrogels

    Science.gov (United States)

    Mao, Yunwei; Lin, Shaoting; Zhao, Xuanhe; Anand, Lallit

    2017-03-01

    We present a large deformation viscoelasticity model for recently synthesized double network hydrogels which consist of a covalently-crosslinked polyacrylamide network with long chains, and an ionically-crosslinked alginate network with short chains. Such double-network gels are highly stretchable and at the same time tough, because when stretched the crosslinks in the ionically-crosslinked alginate network rupture which results in distributed internal microdamage which dissipates a substantial amount of energy, while the configurational entropy of the covalently-crosslinked polyacrylamide network allows the gel to return to its original configuration after deformation. In addition to the large hysteresis during loading and unloading, these double network hydrogels also exhibit a substantial rate-sensitive response during loading, but exhibit almost no rate-sensitivity during unloading. These features of large hysteresis and asymmetric rate-sensitivity are quite different from the response of conventional hydrogels. We limit our attention to modeling the complex viscoelastic response of such hydrogels under isothermal conditions. Our model is restricted in the sense that we have limited our attention to conditions under which one might neglect any diffusion of the water in the hydrogel - as might occur when the gel has a uniform initial value of the concentration of water, and the mobility of the water molecules in the gel is low relative to the time scale of the mechanical deformation. We also do not attempt to model the final fracture of such double-network hydrogels.

  3. Large Animal Models for Foamy Virus Vector Gene Therapy

    Directory of Open Access Journals (Sweden)

    Peter A. Horn

    2012-12-01

    Foamy virus (FV) vectors have shown great promise for hematopoietic stem cell (HSC) gene therapy. Their ability to efficiently deliver transgenes to multi-lineage long-term repopulating cells in large animal models suggests they will be effective for several human hematopoietic diseases. Here, we review FV vector studies in large animal models, including the use of FV vectors with the mutant O6-methylguanine-DNA methyltransferase, MGMTP140K, to increase the number of genetically modified cells after transplantation. In these studies, FV vectors have mediated efficient gene transfer to polyclonal repopulating cells using short ex vivo transduction protocols designed to minimize the negative effects of ex vivo culture on stem cell engraftment. In this regard, FV vectors appear superior to gammaretroviral vectors, which require longer ex vivo culture to effect efficient transduction. FV vectors have also compared favorably with lentiviral vectors when directly compared in the dog model. FV vectors have corrected leukocyte adhesion deficiency and pyruvate kinase deficiency in the dog large animal model. FV vectors also appear safer than gammaretroviral vectors based on a reduced frequency of integrants near promoters and also near proto-oncogenes in canine repopulating cells. Together, these studies suggest that FV vectors should be highly effective for several human hematopoietic diseases, including those that will require relatively high percentages of gene-modified cells to achieve clinical benefit.

  4. Observational templates of star cluster disruption. The stellar group NGC 1901 in front of the Large Magellanic Cloud

    CERN Document Server

    Carraro, Giovanni; de la Fuente Marcos, Raúl; Villanova, Sandro; Moni Bidin, Christian; de la Fuente Marcos, Carlos; Baumgardt, Holger; Solivella, Gladys

    2007-01-01

    Observations indicate that present-day star formation in the Milky Way disk takes place in stellar ensembles or clusters rather than in isolation. Bound, long-lived stellar groups are known as open clusters. They gradually lose stars and in their final evolutionary stages they are severely disrupted, leaving an open cluster remnant made of a few stars. In this paper, we study in detail the stellar content and kinematics of the poorly populated star cluster NGC 1901. This object appears projected against the Large Magellanic Cloud. The aim of the present work is to derive the current evolutionary status, binary fraction, age and mass of this stellar group. These are fundamental quantities to compare with those from N-body models in order to study the more general topic of star cluster evolution and dissolution. The analysis is performed using wide-field photometry in the UBVI passbands, proper motions from the UCAC2 catalog, and 3 epochs of high-resolution spectroscopy, as well as results from extensive N-body c...

  5. Enhanced four-wave-mixing effects by large group indices of one-dimensional silicon photonic crystal waveguides.

    Science.gov (United States)

    Kim, Dong Wook; Kim, Seung Hwan; Lee, Seoung Hun; Jong, Heung Sun; Lee, Jong-Moo; Lee, El-Hang; Kim, Kyong Hon

    2013-12-02

Enhanced four-wave-mixing (FWM) effects have been observed with the help of large group indices near the band edges in one-dimensional (1-D) silicon photonic crystal waveguides (Si PhCWs). A significant increase of the FWM conversion efficiency of about 17 dB was measured near the transmission band edge of the 1-D PhCW through an approximately three-fold increase of the group index, from 8 to 24, with respect to the central transmission band region, despite a large group-velocity dispersion. Numerical analyses based on the coupled-mode equations for the degenerate FWM process describe the experimentally measured results well. Our results indicate that 1-D PhCWs are good candidates for large group-index-enhanced nonlinearity devices even without any special dispersion engineering.
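For context, a textbook slow-light estimate (an assumption for illustration, not a calculation from this record): if the effective nonlinearity scales as gamma_eff ∝ n_g^2, then the FWM conversion efficiency at fixed power and length scales as n_g^4, so the reported rise of the group index from 8 to 24 would suggest an enhancement of the same order as the measured 17 dB:

```python
import math

def fwm_enhancement_db(ng_ratio: float) -> float:
    """Slow-light FWM efficiency enhancement in dB under the idealized
    eta ~ n_g**4 scaling (gamma_eff ~ n_g**2, efficiency ~ gamma_eff**2).
    This scaling law is a common assumption, not taken from the record."""
    return 10 * math.log10(ng_ratio ** 4)

# Group index rises from 8 to 24 near the band edge (ratio 3).
print(round(fwm_enhancement_db(24 / 8), 1))  # ≈19.1 dB, vs ~17 dB measured
```

The shortfall of the measured 17 dB relative to this ideal estimate is plausibly due to dispersion and loss near the band edge, which the coupled-mode analysis in the record accounts for.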

  6. Large eddy simulation modelling of combustion for propulsion applications.

    Science.gov (United States)

    Fureby, C

    2009-07-28

Predictive modelling of turbulent combustion is important for the development of air-breathing engines, internal combustion engines, furnaces and power generation. Significant advances in modelling non-reactive turbulent flows are now possible with the development of large eddy simulation (LES), in which the large energetic scales of the flow are resolved on the grid while the effects of the small scales are modelled. Here, we discuss the use of combustion LES in predictive modelling of propulsion applications such as gas turbine, ramjet and scramjet engines. The LES models used are described in some detail and are validated against laboratory data, of which results from two cases are presented. These validated LES models are then applied to an annular multi-burner gas turbine combustor and a simplified scramjet combustor, for which some additional experimental data are available. For these cases, good agreement with the available reference data is obtained, and the LES predictions are used to elucidate the flow physics in such devices to further enhance our knowledge of these propulsion systems. Particular attention is focused on the influence of the combustion chemistry, turbulence-chemistry interaction, self-ignition, flame holding, burner-to-burner interactions and combustion oscillations.

  7. Dimensional reduction of Markov state models from renormalization group theory

    Science.gov (United States)

    Orioli, S.; Faccioli, P.

    2016-09-01

Renormalization Group (RG) theory provides the theoretical framework to define rigorous effective theories, i.e., systematic low-resolution approximations of arbitrary microscopic models. Markov state models are shown to be rigorous effective theories for Molecular Dynamics (MD). Based on this fact, we use real space RG to vary the resolution of the stochastic model and define an algorithm for clustering microstates into macrostates. The result is a lower dimensional stochastic model which, by construction, provides the optimal coarse-grained Markovian representation of the system's relaxation kinetics. To illustrate and validate our theory, we analyze a number of test systems of increasing complexity, ranging from synthetic toy models to two realistic applications, built from all-atom MD simulations. The computational cost of computing the low-dimensional model remains affordable on a desktop computer even for thousands of microstates.
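To illustrate the general idea of reducing a fine-grained Markov state model to fewer macrostates, the sketch below aggregates a microstate transition matrix using stationary-distribution weights. This is a standard lumping construction assumed here for illustration; it is not the authors' RG algorithm:

```python
import numpy as np

def lump_transition_matrix(T, labels):
    """Coarse-grain a microstate transition matrix T into macrostates.

    labels[i] gives the macrostate of microstate i. Rows are combined
    weighted by the stationary distribution, so the lumped matrix stays
    row-stochastic (a standard construction, not the paper's RG scheme).
    """
    T = np.asarray(T, dtype=float)
    # Stationary distribution: left eigenvector of T for eigenvalue 1.
    w, v = np.linalg.eig(T.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    pi = pi / pi.sum()

    macro = sorted(set(labels))
    n = len(macro)
    T_macro = np.zeros((n, n))
    for a, A in enumerate(macro):
        idx_A = [i for i, l in enumerate(labels) if l == A]
        wA = pi[idx_A] / pi[idx_A].sum()
        for b, B in enumerate(macro):
            idx_B = [j for j, l in enumerate(labels) if l == B]
            T_macro[a, b] = wA @ T[np.ix_(idx_A, idx_B)].sum(axis=1)
    return T_macro

# Four microstates forming two metastable pairs -> 2x2 macrostate model.
T = np.array([[0.9, 0.08, 0.01, 0.01],
              [0.08, 0.9, 0.01, 0.01],
              [0.01, 0.01, 0.9, 0.08],
              [0.01, 0.01, 0.08, 0.9]])
Tm = lump_transition_matrix(T, [0, 0, 1, 1])
print(Tm.sum(axis=1))  # rows still sum to 1
```

The lumped model preserves the slow inter-pair kinetics while discarding the fast intra-pair relaxation, which is the qualitative goal of any such coarse-graining.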

  9. A simplified experimental model of large-for-size liver transplantation in pigs

    Directory of Open Access Journals (Sweden)

    Antonio Jose Goncalves Leal

    2013-01-01

OBJECTIVE: The ideal ratio between liver graft mass and recipient body weight for liver transplantation in small infants is unknown; however, if this ratio is over 4%, a condition called large-for-size may occur. Experimental models of large-for-size liver transplants have not been described in the literature. In addition, orthotopic liver transplantation is marked by high morbidity and mortality rates in animals due to the clamping of the venous splanchnic system. Therefore, the objective of this study was to create a porcine model of large-for-size liver transplantation with clamping of the supraceliac aorta during the anhepatic phase as an alternative to venovenous bypass. METHOD: Fourteen pigs underwent liver transplantation with whole-liver grafts without venovenous bypass and were divided into two experimental groups: the control group, in which the weights of the donors were similar to the weights of the recipients; and the large-for-size group, in which the weights of the donors were nearly 2 times the weights of the recipients. Hemodynamic data, the results of serum biochemical analyses and histological examination of the transplanted livers were collected. RESULTS: The mortality rate in both groups was 16.5% (1/7). The animals in the large-for-size group had increased serum levels of potassium, sodium, aspartate aminotransferase and alanine aminotransferase after graft reperfusion. The histological analyses revealed that there were no significant differences between the groups. CONCLUSION: This transplant method is a feasible experimental model of large-for-size liver transplantation.

  10. Computer-aided polymer design using group contribution plus property models

    DEFF Research Database (Denmark)

    Satyanarayana, Kavitha Chelakara; Abildskov, Jens; Gani, Rafiqul

    2009-01-01

Polymer repeat unit property prediction models are required to calculate the properties of the generated repeat units. A systematic framework incorporating recently developed group contribution plus (GC(+)) models and an extended CAMD technique to include design of polymer repeat units is highlighted in this paper. The advantage of a GC(+) model in CAMD applications is that a very large number of polymer structures can be considered even though some of the group parameters may not be available. A number of case studies involving different polymer design problems have been solved through the developed...

  11. An Agent-Based Model of Status Construction in Task Focused Groups

    NARCIS (Netherlands)

    Grow, André; Flache, Andreas; Wittek, Rafael

    2015-01-01

    Status beliefs link social distinctions, such as gender and race, to assumptions about competence and social worth. Recent modeling work in status construction theory suggests that interactions in small, task focused groups can lead to the spontaneous emergence and diffusion of such beliefs in large

  12. Modelling animal group fission using social network dynamics.

    Science.gov (United States)

    Sueur, Cédric; Maire, Anaïs

    2014-01-01

    Group life involves both advantages and disadvantages, meaning that individuals have to compromise between their nutritional needs and their social links. When a compromise is impossible, the group splits in order to reduce conflict of interests and favour positive social interactions between its members. In this study we built a dynamic model of social networks to represent a succession of temporary fissions involving a change in social relations that could potentially lead to irreversible group fission (i.e. no more group fusion). This is the first study that assesses how a social network changes according to group fission-fusion dynamics. We built a model that was based on different parameters: the group size, the influence of nutritional needs compared to social needs, and the changes in the social network after a temporary fission. The results obtained from this theoretical data indicate how the percentage of social relation transfer, the number of individuals and the relative importance of nutritional requirements and social links influence the average number of days before irreversible fission occurs. The greater the nutritional needs and the higher the transfer of social relations during temporary fission, the fewer days will be observed before an irreversible fission. It is crucial to bridge the gap between the individual and the population level if we hope to understand how simple, local interactions may drive ecological systems.
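The abstract does not give the model's exact update rules, so the following toy sketch is an illustration only (all rules and parameter values are assumptions). It reproduces the qualitative finding that stronger nutritional needs, via more frequent temporary fissions eroding cross-subgroup ties, shorten the time to irreversible fission:

```python
import numpy as np

def days_to_fission(n=20, transfer=0.1, need_ratio=0.5,
                    threshold=0.2, seed=0, max_days=10_000):
    """Toy fission model (illustrative only, not the paper's exact rules):
    each day a temporary split erodes between-subgroup tie strength by
    `transfer`, scaled by how much nutrition outweighs social needs;
    irreversible fission is declared when mean between-subgroup tie
    strength falls below `threshold`."""
    rng = np.random.default_rng(seed)
    # Symmetric association matrix; ties start moderately strong.
    W = rng.uniform(0.5, 1.0, size=(n, n))
    W = (W + W.T) / 2
    half = n // 2
    for day in range(1, max_days + 1):
        # Temporary fission into two foraging subgroups erodes cross-ties.
        W[:half, half:] *= (1 - transfer * need_ratio)
        W[half:, :half] = W[:half, half:].T
        if W[:half, half:].mean() < threshold:
            return day  # irreversible fission
    return max_days

# Higher relative nutritional needs -> fewer days before irreversible fission.
print(days_to_fission(need_ratio=0.9), days_to_fission(need_ratio=0.3))
```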

  13. High-performance ab initio density matrix renormalization group method: Applicability to large-scale multireference problems for metal compounds

    Science.gov (United States)

    Kurashige, Yuki; Yanai, Takeshi

    2009-06-01

This article presents an efficient and parallelized implementation of the density matrix renormalization group (DMRG) algorithm for quantum chemistry calculations. The DMRG method as a large-scale multireference electronic structure model is by nature particularly efficient for one-dimensionally correlated systems, while the present development is oriented toward applications for polynuclear transition metal compounds, in which the macroscopic one-dimensional structure of electron correlation is absent. A straightforward extension of the DMRG algorithm is proposed with further improvements and aggressive optimizations to allow its application with large multireference active space, which is often demanded for metal compound calculations. Special efficiency is achieved by making better use of sparsity and symmetry in the operator and wave function representations. By accomplishing computationally intensive DMRG calculations, the authors have found that a large number of renormalized basis states are required to represent high entanglement of the electron correlation for metal compound applications, and it is crucial to adopt auxiliary perturbative correction to the projected density matrix during the DMRG sweep optimization in order to attain proper convergence to the solution. Potential energy curve calculations for the Cr2 molecule near the known equilibrium precisely predicted the full configuration interaction energies with a correlation space of 24 electrons in 30 orbitals [denoted by (24e,30o)]. The energies are demonstrated to be accurate to 0.6 mEh (the error from the extrapolated best value) when as many as 10 000 renormalized basis states are employed for the left and right DMRG block representations. The relative energy curves for [Cu2O2]2+ along the isomerization coordinate were obtained from DMRG and other correlated calculations, for which a fairly large orbital space (32e,62o) is modeled as a full correlation space. The DMRG prediction nearly overlaps...

  14. Hidden Markov Models for the Activity Profile of Terrorist Groups

    CERN Document Server

    Raghavan, Vasanthan; Tartakovsky, Alexander G

    2012-01-01

    The main focus of this work is on developing models for the activity profile of a terrorist group, detecting sudden spurts and downfalls in this profile, and in general, tracking it over a period of time. Toward this goal, a d-state hidden Markov model (HMM) that captures the latent states underlying the dynamics of the group and thus its activity profile is developed. The simplest setting of d = 2 corresponds to the case where the dynamics are coarsely quantized as Active and Inactive, respectively. Two strategies for spurt detection and tracking are developed here: a model-independent strategy that uses the exponential weighted moving-average (EWMA) filter to track the strength of the group as measured by the number of attacks perpetrated by it, and a state estimation strategy that exploits the underlying HMM structure. The EWMA strategy is robust to modeling uncertainties and errors, and tracks persistent changes (changes that last for a sufficiently long duration) in the strength of the group. On the othe...
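The model-independent EWMA strategy mentioned above is the standard recursion s_t = alpha*x_t + (1 - alpha)*s_{t-1}. The sketch below (with made-up monthly attack counts and an illustrative alpha, not data from the paper) shows how it tracks a persistent spurt while damping a one-off spike:

```python
def ewma_track(counts, alpha=0.3):
    """Model-independent EWMA tracker of a group's attack intensity:
    s_t = alpha * x_t + (1 - alpha) * s_{t-1}. A standard EWMA filter;
    alpha and the data below are illustrative, not from the paper."""
    s = counts[0]
    track = [s]
    for x in counts[1:]:
        s = alpha * x + (1 - alpha) * s
        track.append(s)
    return track

# A one-off spike (month 5) barely moves the EWMA, while the persistent
# rise from month 8 onward pulls it up, signalling a genuine spurt.
counts = [2, 1, 2, 2, 12, 2, 2, 8, 9, 10, 9, 11]
print([round(s, 2) for s in ewma_track(counts)])
```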

  15. Affine group formulation of the Standard Model coupled to gravity

    CERN Document Server

    Chou, Ching-Yi; Soo, Chopin

    2013-01-01

    Using the affine group formalism, we perform a nonperturbative quantization leading to the construction of elements of a physical Hilbert space for full, Lorentzian quantum gravity coupled to the Standard Model in four spacetime dimensions. This paper constitutes a first step toward understanding the phenomenology of quantum gravitational effects stemming from a consistent treatment of minimal couplings to matter.

  16. Wave groups in uni-directional surface-wave models

    NARCIS (Netherlands)

    Groesen, van E.

    1998-01-01

    Uni-directional wave models are used to study wave groups that appear in wave tanks of hydrodynamic laboratories; characteristic for waves in such tanks is that the wave length is rather small, comparable to the depth of the layer. In second-order theory, the resulting Nonlinear Schrödinger (NLS) eq

  17. On the renormalization group transformation for scalar hierarchical models

    Energy Technology Data Exchange (ETDEWEB)

    Koch, H. (Texas Univ., Austin (USA). Dept. of Mathematics); Wittwer, P. (Geneva Univ. (Switzerland). Dept. de Physique Theorique)

    1991-06-01

    We give a new proof for the existence of a non-Gaussian hierarchical renormalization group fixed point, using what could be called a beta-function for this problem. We also discuss the asymptotic behavior of this fixed point, and the connection between the hierarchical models of Dyson and Gallavotti. (orig.).

  18. Duct thermal performance models for large commercial buildings

    Energy Technology Data Exchange (ETDEWEB)

    Wray, Craig P.

    2003-10-01

Despite the potential for significant energy savings by reducing duct leakage or other thermal losses from duct systems in large commercial buildings, California Title 24 has no provisions to credit energy-efficient duct systems in these buildings. A substantial reason is the lack of readily available simulation tools to demonstrate the energy-saving benefits associated with efficient duct systems in large commercial buildings. The overall goal of the Efficient Distribution Systems (EDS) project within the PIER High Performance Commercial Building Systems Program is to bridge the gaps in current duct thermal performance modeling capabilities, and to expand our understanding of duct thermal performance in California large commercial buildings. As steps toward this goal, our strategy in the EDS project involves two parts: (1) developing a whole-building energy simulation approach for analyzing duct thermal performance in large commercial buildings, and (2) using the tool to identify the energy impacts of duct leakage in California large commercial buildings, in support of future recommendations to address duct performance in the Title 24 Energy Efficiency Standards for Nonresidential Buildings. The specific technical objectives for the EDS project were to: (1) Identify a near-term whole-building energy simulation approach that can be used in the impacts analysis task of this project (see Objective 3), with little or no modification. A secondary objective is to recommend how to proceed with long-term development of an improved compliance tool for Title 24 that addresses duct thermal performance. (2) Develop an Alternative Calculation Method (ACM) change proposal to include a new metric for thermal distribution system efficiency in the reporting requirements for the 2005 Title 24 Standards. The metric will facilitate future comparisons of different system types using a common 'yardstick'. (3) Using the selected near-term simulation approach...

  19. Research on Effectiveness Modeling of the Online Chat Group

    Directory of Open Access Journals (Sweden)

    Hua-Fei Zhang

    2013-01-01

The online chat group is a small-scale multiuser social networking platform in which users participate in discussions and send and receive information. Online chat group service providers are concerned with the number of active members, because more active members mean more advertising revenue. For group owners and members, the efficiency of information acquisition is the main concern, so it is valuable to model the factors that affect these two indicators. This paper derives mathematical models of the number of active members and the efficiency of information acquisition and then conducts numerical experiments. The experimental results provide evidence about how to improve both the number of active members and the efficiency of information acquisition.

  20. The Cognitive Complexity in Modelling the Group Decision Process

    Directory of Open Access Journals (Sweden)

    Barna Iantovics

    2010-06-01

The paper investigates, for some basic contextual factors (such as the problem complexity, the users' creativity and the problem space complexity), the cognitive complexity associated with modelling the group decision processes (GDP) in e-meetings. The analysis is done by conducting a socio-simulation experiment for an envisioned collaborative software tool that acts as a stigmergic environment for modelling the GDP. The simulation results reveal some interesting design guidelines for engineering contextual functionalities that minimize the cognitive complexity associated with modelling the GDP.

  1. Soil carbon management in large-scale Earth system modelling

    DEFF Research Database (Denmark)

    Olin, S.; Lindeskog, M.; Pugh, T. A. M.;

    2015-01-01

Croplands are vital ecosystems for human well-being and provide important ecosystem services such as crop yields, retention of nitrogen and carbon storage. On large (regional to global) scales, assessment of how these different services will vary in space and time, especially in response to ... modelling C–N interactions in agricultural ecosystems under future environmental change and the effects these have on terrestrial biogeochemical cycles.

  2. Considerations in Scale-Modeling of Large Urban Fires

    Science.gov (United States)

    1984-11-15

... is inconsequential and that all molecular transport processes are unimportant). The nondimensional parameters to be preserved between the model and ... fuel bed. Parker, Corlett and B. T. Lee [13] also come to a similar conclusion based on the following two points. First, in large fires, molecular ... to USDA Forest Service, prepared by Instituto Nacional de Tecnica Aeroespacial, Madrid, Spain (May 1967). 57. S.L. Lee and G.M. Hellman

  3. One-dimensional adhesion model for large scale structures

    Directory of Open Access Journals (Sweden)

    Kayyunnapara Thomas Joseph

    2010-05-01

We discuss initial value problems and initial boundary value problems for some systems of partial differential equations appearing in the modelling of large-scale structure formation in the universe. We restrict the initial data to be bounded measurable and locally bounded variation functions, and use the Volpert product to justify the products which appear in the equations. For more general initial data in the class of generalized functions of Colombeau, we construct the solution in the sense of association.
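In one space dimension the adhesion model is closely related to the inviscid Burgers equation u_t + (u^2/2)_x = 0. The first-order Godunov sketch below (an illustration of the shock formation such systems produce, not the paper's Volpert-product construction, which handles far rougher data) evolves a step profile whose shock travels at speed 1/2:

```python
import numpy as np

def godunov_flux(ul, ur):
    """Exact Godunov flux for f(u) = u^2/2 (Burgers)."""
    fl, fr = 0.5 * ul**2, 0.5 * ur**2
    shock = ul > ur
    s = 0.5 * (ul + ur)                      # shock speed
    flux_shock = np.where(s > 0, fl, fr)
    # Rarefaction: f(ul) if ul > 0, f(ur) if ur < 0, else 0 (sonic point).
    flux_rare = np.where(ul > 0, fl, np.where(ur < 0, fr, 0.0))
    return np.where(shock, flux_shock, flux_rare)

def burgers_solve(u0, dx, t_end, cfl=0.45):
    """March u_t + (u^2/2)_x = 0 with a first-order Godunov scheme
    and outflow boundaries (a toy sketch of 1-D adhesion dynamics)."""
    u = np.array(u0, dtype=float)
    t = 0.0
    while t < t_end:
        dt = min(cfl * dx / max(np.abs(u).max(), 1e-12), t_end - t)
        up = np.concatenate([u[:1], u, u[-1:]])   # ghost cells
        F = godunov_flux(up[:-1], up[1:])
        u -= dt / dx * (F[1:] - F[:-1])
        t += dt
    return u

# A decreasing step steepens into a shock moving right at speed 1/2.
x = np.linspace(-2, 2, 200)
u = burgers_solve(np.where(x < 0, 1.0, 0.0), x[1] - x[0], t_end=1.0)
print(round(u.max(), 2), round(u.min(), 2))
```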

  4. From Large to Small Scales: Global Models of the ISM

    CERN Document Server

    D'Avillez, M A

    2004-01-01

We review large-scale modelling of the ISM, with emphasis on the importance of including the disk-halo-disk duty cycle and of using dynamical refinement of the grid (in regions where steep variations of density and pressure occur) for realistic modelling of the ISM. We also discuss the necessity of convergence of the simulation results by comparing 0.625, 1.25 and 2.5 pc resolution simulations, and show that a minimum grid resolution of 1.25 pc is required for quantitatively reliable results, as there is rapid convergence for $\Delta x \leq 1.1$ pc.

  5. Supervision in Factor Models Using a Large Number of Predictors

    DEFF Research Database (Denmark)

    Boldrini, Lorenzo; Hillebrand, Eric Tobias

In this paper we investigate the forecasting performance of a particular factor model (FM) in which the factors are extracted from a large number of predictors. We use a semi-parametric state-space representation of the FM in which the forecast objective, as well as the factors, is included in the state vector. The factors are informed of the forecast target (supervised) through the state equation dynamics. We propose a way to assess the contribution of the forecast objective on the extracted factors that exploits the Kalman filter recursions. We forecast one target at a time based on ... e.g. a standard dynamic factor model with separate forecast and state equations.
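The Kalman filter recursions referred to above are the standard ones. The sketch below sets up a generic one-factor state-space model and produces one-step-ahead forecasts; the layout and all numbers are illustrative assumptions, not the paper's supervised-FM specification:

```python
import numpy as np

def kalman_filter(y, Z, T, H, Q, a0, P0):
    """Standard Kalman filter for y_t = Z a_t + e_t, a_{t+1} = T a_t + u_t,
    e_t ~ N(0, H), u_t ~ N(0, Q). Returns one-step-ahead observation
    forecasts. (Generic textbook recursions; the paper's supervised FM
    additionally places the forecast target itself in the state vector.)"""
    a, P = a0.copy(), P0.copy()
    preds = []
    for yt in y:
        # Prediction error and its variance.
        v = yt - Z @ a
        F = Z @ P @ Z.T + H
        # Update (filtering) step.
        K = P @ Z.T @ np.linalg.inv(F)
        a = a + K @ v
        P = P - K @ Z @ P
        # Transition to t+1.
        a = T @ a
        P = T @ P @ T.T + Q
        preds.append(Z @ a)
    return np.array(preds)

# One latent AR(1) factor driving two observed predictors.
rng = np.random.default_rng(1)
n = 200
f = np.zeros(n)
for t in range(1, n):
    f[t] = 0.8 * f[t - 1] + rng.normal(scale=0.5)
y = np.column_stack([f, 0.5 * f]) + rng.normal(scale=0.1, size=(n, 2))

Z = np.array([[1.0], [0.5]])
preds = kalman_filter(y, Z, T=np.array([[0.8]]), H=0.01 * np.eye(2),
                      Q=np.array([[0.25]]), a0=np.zeros(1), P0=np.eye(1))
rmse = np.sqrt(np.mean((preds[:-1] - y[1:]) ** 2))
print(round(float(rmse), 3))
```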

  6. Non-Standard Models, Solar Neutrinos, and Large \\theta_{13}

    CERN Document Server

    Bonventre, R; Klein, J R; Gann, G D Orebi; Seibert, S; Wasalski, O

    2013-01-01

    Solar neutrino experiments have yet to see directly the transition region between matter-enhanced and vacuum oscillations. The transition region is particularly sensitive to models of non-standard neutrino interactions and propagation. We examine several such non-standard models, which predict a lower-energy transition region and a flatter survival probability for the ^{8}B solar neutrinos than the standard large-mixing angle (LMA) model. We find that while some of the non-standard models provide a better fit to the solar neutrino data set, the large measured value of \\theta_{13} and the size of the experimental uncertainties lead to a low statistical significance for these fits. We have also examined whether simple changes to the solar density profile can lead to a flatter ^{8}B survival probability than the LMA prediction, but find that this is not the case for reasonable changes. We conclude that the data in this critical region is still too poor to determine whether any of these models, or LMA, is the bes...

  7. Introduction to the IWA task group on biofilm modeling.

    Science.gov (United States)

    Noguera, D R; Morgenroth, E

    2004-01-01

    An International Water Association (IWA) Task Group on Biofilm Modeling was created with the purpose of comparatively evaluating different biofilm modeling approaches. The task group developed three benchmark problems for this comparison, and used a diversity of modeling techniques that included analytical, pseudo-analytical, and numerical solutions to the biofilm problems. Models in one, two, and three dimensional domains were also compared. The first benchmark problem (BM1) described a monospecies biofilm growing in a completely mixed reactor environment and had the purpose of comparing the ability of the models to predict substrate fluxes and concentrations for a biofilm system of fixed total biomass and fixed biomass density. The second problem (BM2) represented a situation in which substrate mass transport by convection was influenced by the hydrodynamic conditions of the liquid in contact with the biofilm. The third problem (BM3) was designed to compare the ability of the models to simulate multispecies and multisubstrate biofilms. These three benchmark problems allowed identification of the specific advantages and disadvantages of each modeling approach. A detailed presentation of the comparative analyses for each problem is provided elsewhere in these proceedings.
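As a minimal illustration of the kind of monospecies problem posed by BM1, a flat biofilm with first-order substrate uptake can be solved by finite differences and checked against the analytical flux J = sqrt(D k) S_s tanh(L sqrt(k/D)). All parameter values below are hypothetical, not the benchmark's:

```python
import numpy as np

def biofilm_flux(D, k, Ss, L, n=200):
    """Steady-state substrate flux into a flat biofilm with first-order
    uptake: D S'' = k S, with S'(0) = 0 at the substratum and S(L) = Ss
    at the biofilm surface. Solved by finite differences; a toy sketch,
    not one of the IWA benchmark formulations."""
    dx = L / n
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    A[0, 0], A[0, 1] = -1.0, 1.0            # no-flux condition S'(0) = 0
    for i in range(1, n):                    # interior diffusion-reaction
        A[i, i - 1] = A[i, i + 1] = D / dx**2
        A[i, i] = -2 * D / dx**2 - k
    A[n, n] = 1.0
    b[n] = Ss                                # fixed surface concentration
    S = np.linalg.solve(A, b)
    return D * (S[n] - S[n - 1]) / dx        # flux into the biofilm

D, k, Ss, L = 1e-9, 0.05, 10.0, 3e-4         # illustrative SI-style values
J = biofilm_flux(D, k, Ss, L)
J_exact = np.sqrt(D * k) * Ss * np.tanh(L * np.sqrt(k / D))
print(round(J * 1e6, 1), round(J_exact * 1e6, 1))
```

Agreement between the numerical and analytical fluxes is the kind of cross-check the benchmark comparisons relied on, here in the simplest possible setting.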

  8. Real space renormalization group theory of disordered models of glasses.

    Science.gov (United States)

    Angelini, Maria Chiara; Biroli, Giulio

    2017-03-28

    We develop a real space renormalization group analysis of disordered models of glasses, in particular of the spin models at the origin of the random first-order transition theory. We find three fixed points, respectively, associated with the liquid state, with the critical behavior, and with the glass state. The latter two are zero-temperature ones; this provides a natural explanation of the growth of effective activation energy scale and the concomitant huge increase of relaxation time approaching the glass transition. The lower critical dimension depends on the nature of the interacting degrees of freedom and is higher than three for all models. This does not prevent 3D systems from being glassy. Indeed, we find that their renormalization group flow is affected by the fixed points existing in higher dimension and in consequence is nontrivial. Within our theoretical framework, the glass transition results in an avoided phase transition.

  9. Development and In silico Evaluation of Large-Scale Metabolite Identification Methods using Functional Group Detection for Metabolomics

    Directory of Open Access Journals (Sweden)

    Joshua M Mitchell

    2014-07-01

Large-scale identification of metabolites is key to elucidating and modeling metabolism at the systems level. Advances in metabolomics technologies, particularly ultra-high resolution mass spectrometry, enable comprehensive and rapid analysis of metabolites. However, a significant barrier to meaningful data interpretation is the identification of a wide range of metabolites, including unknowns, and the determination of their role(s) in various metabolic networks. Chemoselective (CS) probes to tag metabolite functional groups, combined with high mass accuracy, provide additional structural constraints for metabolite identification and quantification. We have developed a novel algorithm, Chemically Aware Substructure Search (CASS), that efficiently detects functional groups within existing metabolite databases, allowing for combined molecular formula and functional group (from CS tagging) queries to aid in metabolite identification without a priori knowledge. Analysis of the isomeric compounds in both the Human Metabolome Database (HMDB) and KEGG Ligand demonstrated a high percentage of isomeric molecular formulae (43% and 28%, respectively), indicating the necessity for techniques such as CS-tagging. Furthermore, these two databases have only moderate overlap in molecular formulae. Thus, it is prudent to use multiple databases in metabolite assignment, since each major metabolite database represents different portions of metabolism within the biosphere. In silico analysis of various CS-tagging strategies under different conditions for adduct formation demonstrates that combined FT-MS derived molecular formulae and CS-tagging can uniquely identify up to 71% of KEGG and 37% of the combined KEGG/HMDB database, versus 41% and 17% respectively without adduct formation. This difference between database isomer disambiguation highlights the strength of CS-tagging for non-lipid metabolite identification. However, unique identification of complex lipids still needs
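The isomer-ambiguity statistic quoted above can be illustrated with a toy computation. The compound list is invented for illustration; CASS itself performs chemically aware substructure search over full databases, which is not shown here:

```python
from collections import defaultdict

def isomeric_fraction(compounds):
    """Fraction of distinct molecular formulae shared by more than one
    compound, i.e. formulae that an exact mass measurement alone cannot
    disambiguate. (Toy data; HMDB/KEGG show ~43%/28% per the abstract.)"""
    by_formula = defaultdict(list)
    for name, formula in compounds:
        by_formula[formula].append(name)
    ambiguous = [f for f, names in by_formula.items() if len(names) > 1]
    return len(ambiguous) / len(by_formula)

# Glucose/fructose/galactose share C6H12O6; leucine/isoleucine share
# C6H13NO2 -- molecular formula alone cannot tell these apart, which is
# where functional-group (CS-tagging) constraints help.
compounds = [("glucose", "C6H12O6"), ("fructose", "C6H12O6"),
             ("galactose", "C6H12O6"), ("leucine", "C6H13NO2"),
             ("isoleucine", "C6H13NO2"), ("citrate", "C6H8O7")]
print(isomeric_fraction(compounds))  # 2 of 3 distinct formulae ambiguous
```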

  10. A family of dynamic models for large-eddy simulation

    Science.gov (United States)

    Carati, D.; Jansen, K.; Lund, T.

    1995-01-01

    Since its first application, the dynamic procedure has been recognized as an effective means to compute rather than prescribe the unknown coefficients that appear in a subgrid-scale model for Large-Eddy Simulation (LES). The dynamic procedure is usually used to determine the nondimensional coefficient in the Smagorinsky (1963) model. In reality the procedure is quite general and it is not limited to the Smagorinsky model by any theoretical or practical constraints. The purpose of this note is to consider a generalized family of dynamic eddy viscosity models that do not necessarily rely on the local equilibrium assumption built into the Smagorinsky model. By invoking an inertial range assumption, it will be shown that the coefficients in the new models need not be nondimensional. This additional degree of freedom allows the use of models that are scaled on traditionally unknown quantities such as the dissipation rate. In certain cases, the dynamic models with dimensional coefficients are simpler to implement, and allow for a 30% reduction in the number of required filtering operations.

  11. Multiresolution comparison of precipitation datasets for large-scale models

    Science.gov (United States)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

Gridded precipitation datasets are crucial for driving the large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparing gridded precipitation products against ground observations provides another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin plate spline smoothing algorithm (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA offers appealing spatial coherence. In addition to the product comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.

  12. Coagulation-Fragmentation Model for Animal Group-Size Statistics

    Science.gov (United States)

    Degond, Pierre; Liu, Jian-Guo; Pego, Robert L.

    2017-04-01

    We study coagulation-fragmentation equations inspired by a simple model proposed in fisheries science to explain data for the size distribution of schools of pelagic fish. Although the equations lack detailed balance and admit no H-theorem, we are able to develop a rather complete description of equilibrium profiles and large-time behavior, based on recent developments in complex function theory for Bernstein and Pick functions. In the large-population continuum limit, a scaling-invariant regime is reached in which all equilibria are determined by a single scaling profile. This universal profile exhibits power-law behavior crossing over from exponent -2/3 for small size to -3/2 for large size, with an exponential cutoff.

  14. Plant functional group composition and large-scale species richness in European agricultural landscapes

    NARCIS (Netherlands)

    Liira, J.; Schmidt, T.; Aavik, T.; Arens, P.F.P.; Augenstein, I.; Bailey, D.; Billeter, R.; Bukacek, R.; Burel, F.; Blust, de G.; Cock, de R.; Dirksen, J.; Edwards, P.J.; Hamersky, R.; Herzog, F.; Klotz, S.; Kuhn, I.; Coeur, Le D.; Miklova, P.; Roubalova, M.; Schweiger, O.; Smulders, M.J.M.; Wingerden, van W.K.R.E.; Bugter, R.J.F.; Zobel, M.

    2008-01-01

    Question: Which are the plant functional groups responding most clearly to agricultural disturbances? What are the relative roles of habitat availability, landscape configuration and agricultural land use intensity in affecting the functional composition and diversity of vascular plants in agricultural landscapes?

  15. North-south asymmetry in small and large sunspot group activity and violation of even-odd solar cycle rule

    Science.gov (United States)

    Javaraiah, J.

    2016-07-01

    According to the Gnevyshev-Ohl (G-O) rule, an odd-numbered cycle is stronger than its preceding even-numbered cycle. In modern times the cycle pair (22, 23) violated this rule. Using the combined Greenwich Photoheliographic Results (GPR) and Solar Optical Observing Network (SOON) sunspot group data for the period 1874-2015, and the Debrecen Photoheliographic Data (DPD) on sunspot groups for the period 1974-2015, we find that the cycle pair (22, 23) violated the G-O rule because, besides a large deficiency of small sunspot groups in both the northern and southern hemispheres during cycle 23, there was a large abundance of small sunspot groups in the southern hemisphere during cycle 22. For large and small sunspot groups the cycle pair (22, 23) violated the G-O rule in the northern and southern hemispheres, respectively, suggesting that the north-south asymmetry in solar activity contributes significantly to the violation of the G-O rule. The amplitude of solar cycle 24 is smaller than that of solar cycle 23. However, the coronal mass ejection (CME) rates in the rising phases of cycles 23 and 24 are almost the same (even slightly larger in cycle 24). From both the SOON and the DPD sunspot group data we also find that, on average, the ratio of the number (counts) of large sunspot groups to the number of small sunspot groups is larger in the rising phase of cycle 24 than in the corresponding phase of cycle 23. We suggest this could be a potential reason for the aforesaid discrepancy in the CME rates during the rising phases of cycles 23 and 24. These results have significant implications for the solar cycle mechanism.

  16. Work group diversity and group performance: an integrative model and research agenda.

    Science.gov (United States)

    van Knippenberg, Daan; De Dreu, Carsten K W; Homan, Astrid C

    2004-12-01

    Research on the relationship between work group diversity and performance has yielded inconsistent results. To address this problem, the authors propose the categorization-elaboration model (CEM), which reconceptualizes and integrates information/decision making and social categorization perspectives on work-group diversity and performance. The CEM incorporates mediator and moderator variables that typically have been ignored in diversity research and incorporates the view that information/decision making and social categorization processes interact such that intergroup biases flowing from social categorization disrupt the elaboration (in-depth processing) of task-relevant information and perspectives. In addition, the authors propose that attempts to link the positive and negative effects of diversity to specific types of diversity should be abandoned in favor of the assumption that all dimensions of diversity may have positive as well as negative effects. The ways in which these propositions may set the agenda for future research in diversity are discussed.

  17. Morphodynamic modeling of an embayed beach under wave group forcing

    Science.gov (United States)

    Reniers, A. J. H. M.; Roelvink, J. A.; Thornton, E. B.

    2004-01-01

    The morphodynamic response of the nearshore zone of an embayed beach induced by wave groups is examined with a numerical model. The model utilizes the nonlinear shallow water equations to phase resolve the mean and infragravity motions in combination with an advection-diffusion equation for the sediment transport. The sediment transport associated with the short-wave asymmetry is accounted for by means of a time-integrated contribution of the wave nonlinearity using stream function theory. The two-dimensional (2-D) computations consider wave group energy made up of directionally spread, short waves with a zero mean approach angle with respect to the shore normal, incident on an initially alongshore uniform barred beach. Prior to the 2-D computations, the model is calibrated with prototype flume measurements of waves, currents, and bed level changes during erosive and accretive conditions. The most prominent feature of the 2-D model computations is the development of an alongshore quasi-periodic bathymetry of shoals cut by rip channels. Without directional spreading, the smallest alongshore separation of the rip channels is obtained, and the beach response is self-organizing in nature. Introducing a small amount of directional spreading (less than 2°) results in a strong increase in the alongshore length scales as the beach response changes from self-organizing to being quasi-forced. A further increase in directional spreading leads again to smaller length scales. The hypothesized correlation between the observed rip spacing and wave group forced edge waves over the initially alongshore uniform bathymetry is not found. However, there is a correlation between the alongshore length scales of the wave group-induced quasi-steady flow circulations and the eventual alongshore spacing of the rip channels. 
This suggests that the scouring associated with the quasi-steady flow induced by the initial wave groups triggers the development of rip channels via a positive feedback.

  18. Simulation of large-scale rule-based models

    Energy Technology Data Exchange (ETDEWEB)

    Hlavacek, William S [Los Alamos National Laboratory; Monine, Michael I [Los Alamos National Laboratory; Colvin, Joshua [NON LANL; Faeder, James [NON LANL

    2008-01-01

    Interactions of molecules, such as signaling proteins, with multiple binding sites and/or multiple sites of post-translational covalent modification can be modeled using reaction rules. Rules comprehensively, but implicitly, define the individual chemical species and reactions that molecular interactions can potentially generate. Although rules can be automatically processed to define a biochemical reaction network, the network implied by a set of rules is often too large to generate completely or to simulate using conventional procedures. To address this problem, we present DYNSTOC, a general-purpose tool for simulating rule-based models. DYNSTOC implements a null-event algorithm for simulating chemical reactions in a homogeneous reaction compartment. The simulation method does not require that a reaction network be specified explicitly in advance, but rather takes advantage of the availability of the reaction rules in a rule-based specification of a network to determine whether a randomly selected set of molecular components participates in a reaction during a time step. DYNSTOC reads reaction rules written in the BioNetGen language (BNGL), which is useful for modeling protein-protein interactions involved in signal transduction. The method of DYNSTOC is closely related to that of STOCHSIM. DYNSTOC differs from STOCHSIM by allowing for model specification in terms of BNGL, which extends the range of protein complexes that can be considered in a model. DYNSTOC enables the simulation of rule-based models that cannot be simulated by conventional methods. We demonstrate the ability of DYNSTOC to simulate models accounting for multisite phosphorylation and multivalent binding processes that are characterized by large numbers of reactions. DYNSTOC is free for non-commercial use. The C source code, supporting documentation, and example input files are available at .
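The null-event idea can be sketched in a few lines. This is a generic illustration of the algorithm family (pick molecules at random each step and evaluate a rule only for the selected set, counting non-reactive picks as null events), not DYNSTOC's code; the species A, B, C, the single rule A + B -> C, and the reaction probability are invented.

```python
import random

def null_event_sim(na, nb, p_react, steps, seed=1):
    # Toy null-event stochastic simulation: each step samples two molecules
    # and evaluates the rule A + B -> C only for that pair; any other pick,
    # or a failed probability draw, is a null event.
    rng = random.Random(seed)
    pool = ['A'] * na + ['B'] * nb  # explicit molecule list; 'C' is inert
    for _ in range(steps):
        i, j = rng.sample(range(len(pool)), 2)
        if {pool[i], pool[j]} == {'A', 'B'} and rng.random() < p_react:
            # Fire the rule: replace the A,B pair by a single product C.
            for k in sorted((i, j), reverse=True):
                pool.pop(k)
            pool.append('C')
    return pool.count('A'), pool.count('B'), pool.count('C')
```

Because the rule consumes one A and one B per firing, the counts obey the obvious conservation laws, which makes the sketch easy to sanity-check.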

  19. Graphs of groups on surfaces interactions and models

    CERN Document Server

    White, AT

    2001-01-01

    The book, suitable both as an introductory reference and as a textbook in the rapidly growing field of topological graph theory, models both maps (as in map-coloring problems) and groups by means of graph imbeddings on surfaces. Automorphism groups of both graphs and maps are studied. In addition, connections are made to other areas of mathematics, such as hypergraphs, block designs, finite geometries, and finite fields. There are chapters on the emerging subfields of enumerative topological graph theory and random topological graph theory, as well as a chapter on the composition of English

  20. ARCHITECTURAL LARGE CONSTRUCTED ENVIRONMENT. MODELING AND INTERACTION USING DYNAMIC SIMULATIONS

    Directory of Open Access Journals (Sweden)

    P. Fiamma

    2012-09-01

    How can simulations derived from a large constructed-environment data model be used in architectural design? The topic concerns the phase that usually follows data acquisition: the construction of the model and, especially, the later stage in which designers must interact with the simulation in order to develop and verify their ideas. In this case study, the concept of interaction includes the concept of real-time "flows". The work develops content and results that contribute to the broad current debate on the connection between "architecture" and "movement". The focus of the work is to realize a collaborative and participative virtual environment in which the different specialist actors, the client and the final users can share knowledge, targets and constraints to better achieve the intended result. The goal was to use a dynamic micro-simulation digital resource that allows all the actors to explore the model in a powerful and realistic way and to have a new type of interaction with a complex architectural scenario. On the one hand, the work represents a base of knowledge that can be extended more and more; on the other hand, it represents an attempt to understand large constructed-architecture simulation as a way of being in time and space. Both the architectural design and, afterwards, the built architectural fact take place in a sort of "Spatial Analysis System". The way is open to offer this "system" knowledge and theories that can support architectural design work at every application and scale. Architecture is thus a spatial configuration that can itself be reconfigured through design.

  1. Small Group Collaboration in the Large Lecture Setting: Collaborative Process, Pedagogical Paradigms, and Institutional Constraints.

    Science.gov (United States)

    Michalchik, Vera; Schaeffer, Evonne; Tovar, Lawrence; Steinbeck, Reinhold; Bhargava, Tina; Kerns, Charles; Engel, Claudia; Levtov, Ruti

    This paper focuses on some of the key issues involved in implementing a collaborative design project in the setting of the large undergraduate lecture course at a major research university, offering a preliminary analysis of the assignment mainly as a function of how students managed and interpreted it. The collaborative design project was…

  2. New Principles of Coordination in Large-scale Micro- and Molecular-Robotic Groups

    CERN Document Server

    Kornienko, S

    2011-01-01

    Micro- and molecular-robotic systems act as large-scale swarms. Capabilities of sensing, communication and information processing are very limited on these scales. This short position paper describes a swarm-based minimalistic approach, which can be applied for coordinating collective behavior in such systems.

  3. Encourage Learners in the Large Class to Speak English in Group Work

    Science.gov (United States)

    Meng, Fanshao

    2009-01-01

    Large-class English teaching is an inexorable trend in many Chinese universities and colleges, and it has led to a strange and serious phenomenon in which most students' English is ironically but vividly described as "dumb English". Therefore, cultivating students' communicative skills and developing their language competence has become a…

  4. Using Facebook Groups to Encourage Science Discussions in a Large-Enrollment Biology Class

    Science.gov (United States)

    Pai, Aditi; McGinnis, Gene; Bryant, Dana; Cole, Megan; Kovacs, Jennifer; Stovall, Kyndra; Lee, Mark

    2017-01-01

    This case study reports the instructional development, impact, and lessons learned regarding the use of Facebook as an educational tool within a large enrollment Biology class at Spelman College (Atlanta, GA). We describe the use of this social networking site to (a) engage students in active scientific discussions, (b) build community within the…

  5. Challenges of Modeling Flood Risk at Large Scales

    Science.gov (United States)

    Guin, J.; Simic, M.; Rowe, J.

    2009-04-01

    Flood risk management is a major concern for many nations and for the insurance sector in places where this peril is insured. A prerequisite for risk management, whether in the public sector or the private sector, is an accurate estimation of the risk. Mitigation measures and traditional flood management techniques are most successful when the problem is viewed at a large regional scale, such that all inter-dependencies in a river network are well understood. From an insurance perspective, the jury is still out on whether flood is an insurable peril. However, with advances in modeling techniques and computer power it is possible to develop models that allow proper risk quantification at the scale suitable for a viable insurance market for the flood peril. To serve the insurance market, a model has to be event-simulation based and has to provide financial risk estimates that form the basis for risk pricing, risk transfer and risk management at all levels of the insurance industry. In short, for a collection of properties, henceforth referred to as a portfolio, the critical output of the model is an annual probability distribution of economic losses from a single flood occurrence (flood event) or from an aggregation of all events in any given year. In this paper, the challenges of developing such a model are discussed in the context of Great Britain, for which a model has been developed. The model comprises several physically motivated components so that the primary attributes of the phenomenon are accounted for. The first component, the rainfall generator, simulates a continuous series of rainfall events in space and time over thousands of years, which are physically realistic while maintaining the statistical properties of rainfall at all locations over the model domain. A physically based runoff generation module feeds all the rivers in Great Britain, whose total length of stream links amounts to about 60,000 km. A dynamical flow routing
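The kind of output described, an annual probability distribution of portfolio losses built from simulated events, can be sketched generically. All distributions and parameters below (Poisson event counts, lognormal per-event losses) are invented placeholders, not components of the Great Britain model.

```python
import random, math

def simulate_annual_losses(n_years, event_rate, seed=7):
    # Toy event-simulation loss model: the number of flood events per year is
    # Poisson(event_rate), drawn by inversion, and each event's portfolio loss
    # is lognormal; returns the per-year aggregate losses.
    rng = random.Random(seed)
    years = []
    for _ in range(n_years):
        n, p = 0, math.exp(-event_rate)   # Poisson draw via CDF inversion
        cum, target = p, rng.random()
        while target > cum:
            n += 1
            p *= event_rate / n
            cum += p
        years.append(sum(rng.lognormvariate(0.0, 1.0) for _ in range(n)))
    return years

def exceedance_prob(losses, threshold):
    # Empirical annual exceedance probability at a given loss threshold.
    return sum(l > threshold for l in losses) / len(losses)
```

From the simulated years one can read off an exceedance curve, the quantity insurers use for pricing and risk transfer.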

  6. [Influence of spv plasmid genes group in Salmonella Enteritidis virulence for chickens. I. Occurrence of spv plasmid genes group in Salmonella Enteritidis large virulence plasmid].

    Science.gov (United States)

    Madajczak, Grzegorz; Binek, Marian

    2005-01-01

    Many Salmonella Enteritidis virulence factors are encoded by genes localized on plasmids, especially the large virulence plasmid, where, in a highly conserved fragment, they form the spv plasmid gene group. The aims of this study were to evaluate the occurrence and composition of the spv genes among Salmonella Enteritidis strains causing infection in chickens. The study examined 107 isolates, in each of which a large virulence plasmid of 59 kbp was detected. Specific nucleotide sequences of the spv genes (spvRABCD) were detected in 47.7% of isolates. In the remaining bacteria the spv genes occurred variably. Most often the outermost genes of the spv group, spvR and spvD, were absent, which may indicate that the factors they encode are not essential for the survival and expressed virulence of Salmonella Enteritidis.

  7. Large-scale Modeling of Inundation in the Amazon Basin

    Science.gov (United States)

    Luo, X.; Li, H. Y.; Getirana, A.; Leung, L. R.; Tesfa, T. K.

    2015-12-01

    Flood events have impacts on the exchange of energy, water and trace gases between land and atmosphere, hence potentially affecting the climate. The Amazon River basin is the world's largest river basin, and seasonal floods occur there each year. Because the basin is characterized by flat gradients, backwater effects are evident in the river dynamics. This factor, together with large uncertainties in river hydraulic geometry, surface topography and other datasets, contributes to the difficulty of simulating flooding processes over this basin. We have developed a large-scale inundation scheme in the framework of the Model for Scale Adaptive River Transport (MOSART) river routing model. Both the kinematic wave and the diffusion wave routing methods are implemented in the model. A new process-based algorithm is designed to represent river channel - floodplain interactions. Uncertainties in the input datasets are partly addressed through model calibration. We will present comparisons of simulated results against satellite and in situ observations, together with analysis to understand the factors that influence inundation processes in the Amazon Basin.
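The kinematic-wave routing option mentioned above can be illustrated with a minimal explicit upwind scheme for a single channel, advecting discharge at a constant wave celerity. This is a generic sketch of the routing family, not MOSART code; the wave speed, grid spacing and boundary inflow are arbitrary.

```python
import numpy as np

def kinematic_wave(q0, inflow, c, dx, dt, steps):
    # Explicit upwind kinematic-wave routing on a 1-D channel:
    #   dQ/dt + c dQ/dx = 0, with a fixed upstream boundary discharge.
    assert c * dt / dx <= 1.0, "CFL condition for the explicit scheme"
    q = np.array(q0, dtype=float)
    nu = c * dt / dx
    for _ in range(steps):
        qc = q.copy()
        q[0] = qc[0] - nu * (qc[0] - inflow)   # upstream boundary cell
        q[1:] = qc[1:] - nu * (qc[1:] - qc[:-1])
    return q
```

Run long enough with a constant upstream inflow, the channel relaxes to a uniform steady state equal to that inflow, a quick consistency check on the scheme.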

  8. Phase diagram and criticality of the random anisotropy model in the large-N limit

    Science.gov (United States)

    Mouhanna, Dominique; Tarjus, Gilles

    2016-12-01

    We revisit the thermodynamic behavior of the random-anisotropy O(N ) model by investigating its large-N limit. We focus on the system at zero temperature where the mean-field-like artifacts of the large-N limit are less severe. We analyze the connection between the description in terms of self-consistent Schwinger-Dyson equations and the functional renormalization group. We provide a unified description of the phase diagram and critical behavior of the model and clarify the nature of the possible "glassy" phases. Finally we discuss the implications of our findings for the finite-N and finite-temperature systems.

  9. Analytical modeling of large-angle CMBR anisotropies from textures

    CERN Document Server

    Magueijo, J

    1995-01-01

    We propose an analytic method for predicting the large-angle CMBR temperature fluctuations induced by model textures. The model makes use of only a small number of phenomenological parameters which ought to be measured from simple simulations. We derive semi-analytically the C_l spectrum for 2 ≤ l ≤ 30 together with its associated non-Gaussian cosmic variance error bars. A slightly tilted spectrum with an extra suppression at low l is found, and we investigate the dependence of the tilt on the parameters of the model. We also produce a prediction for the two-point correlation function. We find a high level of cosmic confusion between texture scenarios and standard inflationary theories in any of these quantities. However, we discover that a distinctive non-Gaussian signal ought to be expected at low l, reflecting the prominent effect of the last texture in these multipoles.
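The two-point correlation function mentioned at the end follows from any C_l spectrum via the standard Legendre sum C(θ) = Σ_l (2l+1)/(4π) C_l P_l(cos θ), which is easy to evaluate directly (the relation is generic, not specific to textures):

```python
import numpy as np

def correlation_from_cl(cl, theta):
    # Angular two-point correlation from a power spectrum:
    #   C(theta) = sum_l (2l+1)/(4*pi) * C_l * P_l(cos(theta)),
    # where cl[l] holds C_l starting at l = 0.
    ell = np.arange(len(cl))
    coeffs = (2 * ell + 1) / (4 * np.pi) * np.asarray(cl, dtype=float)
    return np.polynomial.legendre.legval(np.cos(theta), coeffs)
```

A flat monopole-only spectrum with C_0 = 4π gives C(θ) = 1 at every angle, and C(0) reduces to the usual Σ (2l+1) C_l / 4π.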

  10. Modeling The Large Scale Bias of Neutral Hydrogen

    CERN Document Server

    Marin, Felipe; Seo, Hee-Jong; Vallinotto, Alberto

    2009-01-01

    We present analytical estimates of the large scale bias of neutral Hydrogen (HI) based on the Halo Occupation Distribution formalism. We use a simple, non-parametric model which monotonically relates the total mass of a halo with its HI mass at zero redshift; for earlier times we assume limiting models for the HI density parameter evolution, consistent with the data presently available, as well as two main scenarios for the evolution of our HI mass - Halo mass relation. We find that both the linear and the first non-linear bias terms exhibit a remarkable evolution with redshift, regardless of the specific limiting model assumed for the HI evolution. These analytical predictions are then shown to be consistent with measurements performed on the Millennium Simulation. Additionally, we show that this strong bias evolution does not sensibly affect the measurement of the HI Power Spectrum.
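The mass-weighted bias implied by a monotonic HI mass - halo mass relation can be sketched numerically as b_HI = ∫ n(M) b(M) M_HI(M) dM / ∫ n(M) M_HI(M) dM. The functional forms below are toy assumptions for illustration, not the paper's calibrated ingredients.

```python
import numpy as np

def trap(f, x):
    # Simple trapezoidal quadrature (avoids version-specific numpy helpers).
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def hi_bias(mhi_of_m, n_of_m, b_of_m, m_min=1e9, m_max=1e15, n_grid=4000):
    # HI-weighted halo bias: b_HI = int n(M) b(M) M_HI(M) dM / int n(M) M_HI(M) dM.
    m = np.logspace(np.log10(m_min), np.log10(m_max), n_grid)
    w = n_of_m(m) * mhi_of_m(m)
    return trap(w * b_of_m(m), m) / trap(w, m)

# Toy ingredients -- assumed forms, not the paper's calibrated inputs.
n_toy = lambda m: m ** -2.0 * np.exp(-m / 1e14)    # schematic halo mass function
b_toy = lambda m: 0.7 + 0.3 * (m / 1e12) ** 0.35   # halo bias rising with mass
mhi_toy = lambda m: (m / 1e10) ** 0.6              # monotonic HI mass - halo mass relation

bias = hi_bias(mhi_toy, n_toy, b_toy)
```

Because the weight is positive and b(M) increases monotonically, the resulting b_HI always lies between the bias values at the integration limits.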

  11. Tensor renormalization group methods for spin and gauge models

    Science.gov (United States)

    Zou, Haiyuan

    The analysis of the error of perturbative series by comparing it to the exact solution is an important tool for understanding the non-perturbative physics of statistical models. For some toy models, a new method can be used to calculate higher-order weak coupling expansions, and modified perturbation theory can be constructed. However, it is nontrivial to generalize the new method to understand the critical behavior of high-dimensional spin and gauge models. Indeed, it is a major challenge in both high energy physics and condensed matter physics to develop accurate and efficient numerical algorithms to solve these problems. In this thesis, one systematic approach, the tensor renormalization group method, is discussed. The applications of the method to several spin and gauge models on a lattice are investigated. Theoretically, the new method allows one to write an exact representation of the partition function of models with local interactions, e.g. O(N) models, Z2 gauge models and U(1) gauge models. Practically, by using controllable approximations, results in both finite volume and the thermodynamic limit can be obtained. Another advantage of the new method is that it is insensitive to sign problems for models with complex couplings and chemical potentials. Through the new approach, the Fisher zeros of the 2D O(2) model in the complex coupling plane can be calculated, and the finite-size scaling of the results agrees well with the Kosterlitz-Thouless assumption. Applying the method to the O(2) model with a chemical potential, a new phase diagram of the model can be obtained. The structure of the tensor language may provide a new tool for understanding phase transition properties in general.
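The claim that the tensor language gives an exact representation of the partition function can be verified on a tiny example: the standard Ising site tensor, contracted on a 2×2 torus, reproduces brute-force enumeration exactly. This shows only the exact-representation step, not TRG's SVD coarse-graining.

```python
import numpy as np
from itertools import product

def ising_site_tensor(beta):
    # Local tensor for the 2D Ising model: factor the bond Boltzmann matrix
    # B[s,s'] = exp(beta*s*s') as B = A A^T, then attach one A per leg.
    bond = np.array([[np.exp(beta), np.exp(-beta)],
                     [np.exp(-beta), np.exp(beta)]])
    lam, u = np.linalg.eigh(bond)                 # eigenvalues 2sinh, 2cosh >= 0
    a = u @ np.diag(np.sqrt(np.clip(lam, 0.0, None)))
    return np.einsum('si,sj,sk,sl->ijkl', a, a, a, a)  # legs: left,right,up,down

def z_tensor_2x2(beta):
    # 2x2 torus: every neighbouring pair is connected by two bonds.
    t = ising_site_tensor(beta)
    return np.einsum('pqst,qpvw,xyts,yxwv->', t, t, t, t)

def z_brute_2x2(beta):
    # Direct sum over the 16 spin configurations; doubled bonds give factor 2.
    z = 0.0
    for sa, sb, sc, sd in product((-1, 1), repeat=4):
        z += np.exp(2 * beta * (sa * sb + sc * sd + sa * sc + sb * sd))
    return z
```

Contracting a shared leg of two site tensors resums exactly one bond Boltzmann factor, which is why the network contraction equals the configuration sum.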

  12. Acquisition Integration Models: How Large Companies Successfully Integrate Startups

    Directory of Open Access Journals (Sweden)

    Peter Carbone

    2011-10-01

    Mergers and acquisitions (M&A) have been popular means for many companies to address the increasing pace and level of competition that they face. Large companies have pursued acquisitions to more quickly access technology, markets, and customers, and this approach has always been a viable exit strategy for startups. However, not all deals deliver the anticipated benefits, in large part due to poor integration of the acquired assets into the acquiring company. Integration can greatly impact the success of the acquisition and, indeed, the combined company’s overall market success. In this article, I explore the implementation of several integration models that have been put into place by a large company and extract principles that may assist negotiating parties with maximizing success. This perspective may also be of interest to smaller companies as they explore exit options while trying to ensure continued market success after acquisition. I assert that business success with acquisitions is dependent on an appropriate integration model, but that asset integration is not formulaic. Any integration effort must consider the specific market context and personnel involved.

  13. Dynamical modeling and analysis of large cellular regulatory networks

    Science.gov (United States)

    Bérenguier, D.; Chaouiya, C.; Monteiro, P. T.; Naldi, A.; Remy, E.; Thieffry, D.; Tichit, L.

    2013-06-01

    The dynamical analysis of large biological regulatory networks requires the development of scalable methods for mathematical modeling. Following the approach initially introduced by Thomas, we formalize the interactions between the components of a network in terms of discrete variables, functions, and parameters. Model simulations result in directed graphs, called state transition graphs. We are particularly interested in reachability properties and asymptotic behaviors, which correspond to terminal strongly connected components (or "attractors") in the state transition graph. A well-known problem is the exponential increase of the size of state transition graphs with the number of network components, in particular when using the biologically realistic asynchronous updating assumption. To address this problem, we have developed several complementary methods enabling the analysis of the behavior of large and complex logical models: (i) the definition of transition priority classes to simplify the dynamics; (ii) a model reduction method preserving essential dynamical properties, (iii) a novel algorithm to compact state transition graphs and directly generate compressed representations, emphasizing relevant transient and asymptotic dynamical properties. The power of an approach combining these different methods is demonstrated by applying them to a recent multilevel logical model for the network controlling CD4+ T helper cell response to antigen presentation and to a dozen cytokines. This model accounts for the differentiation of canonical Th1 and Th2 lymphocytes, as well as of inflammatory Th17 and regulatory T cells, along with many hybrid subtypes. All these methods have been implemented into the software GINsim, which enables the definition, the analysis, and the simulation of logical regulatory graphs.
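The asynchronous state transition graph and its terminal SCCs ("attractors") can be computed directly for a toy network. This is a brute-force sketch adequate only for tiny models (GINsim's methods are far more scalable), and the toggle-switch rules are invented for illustration.

```python
from itertools import product

def async_stg(rules):
    # Asynchronous state transition graph of a Boolean network: each edge
    # updates exactly one component. rules[i] maps a state to the new value
    # of component i.
    n = len(rules)
    stg = {}
    for state in product((0, 1), repeat=n):
        stg[state] = [state[:i] + (f(state),) + state[i + 1:]
                      for i, f in enumerate(rules) if f(state) != state[i]]
    return stg

def attractors(stg):
    # Attractors = terminal SCCs, via brute-force reachability.
    def reach(s):
        seen, stack = {s}, [s]
        while stack:
            for t in stg[stack.pop()]:
                if t not in seen:
                    seen.add(t)
                    stack.append(t)
        return frozenset(seen)
    r = {s: reach(s) for s in stg}
    # s lies in a terminal SCC iff every state reachable from s reaches s back.
    return {r[s] for s in stg if all(s in r[t] for t in r[s])}

# Toy toggle switch: two mutually inhibiting genes.
toggle = [lambda s: 1 - s[1], lambda s: 1 - s[0]]
attrs = attractors(async_stg(toggle))
```

For the toggle switch the procedure recovers the two expected stable states, (1,0) and (0,1), each a singleton attractor.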

  14. Real-space renormalization group approach to the Anderson model

    Science.gov (United States)

    Campbell, Eamonn

    Many of the most interesting electronic behaviours currently being studied are associated with strong correlations. In addition, many of these materials are disordered, either intrinsically or due to doping. Solving interacting systems exactly is extremely computationally expensive, and approximate techniques developed for strongly correlated systems are not easily adapted to include disorder. Because the Anderson model is a non-interacting disordered model, it makes sense to consider it as a first step in developing an approximate method of solution for the interacting and disordered Anderson-Hubbard model. Our renormalization group (RG) approach is modeled on that proposed by Johri and Bhatt [23]. We found an error in their work which we have corrected in our procedure. After testing the execution of the RG, we benchmarked the density of states and inverse participation ratio results against exact diagonalization. Our approach is significantly faster than exact diagonalization and is most accurate in the limit of strong disorder.
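The exact-diagonalization benchmark quantities are straightforward to produce for the 1-D Anderson model; the inverse participation ratio (IPR) of each eigenstate grows as disorder localizes the states. This is a generic benchmark sketch (with arbitrary sizes and disorder strengths), not the RG procedure of the thesis.

```python
import numpy as np

def anderson_ipr(n, w, seed=0):
    # 1-D Anderson model: H = sum_i eps_i |i><i| - sum_<ij> (|i><j| + h.c.),
    # with on-site energies eps_i uniform in [-W/2, W/2]. Returns the IPR
    # sum_i |psi_i|^4 of every eigenstate (~1/N if extended, O(1) if localized).
    rng = np.random.default_rng(seed)
    h = np.diag(rng.uniform(-w / 2, w / 2, n))
    h += np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
    _, vecs = np.linalg.eigh(h)
    return np.sum(vecs ** 4, axis=0)

ipr_weak = anderson_ipr(400, 0.5).mean()   # weak disorder: fairly extended states
ipr_strong = anderson_ipr(400, 8.0).mean() # strong disorder: strongly localized
```

Comparing the two disorder strengths shows the localization trend the thesis benchmarks its RG against.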

  15. Comparison of 12-step groups to mutual help alternatives for AUD in a large, national study: Differences in membership characteristics and group participation, cohesion, and satisfaction.

    Science.gov (United States)

    Zemore, Sarah E; Kaskutas, Lee Ann; Mericle, Amy; Hemberg, Jordana

    2017-02-01

    Many studies suggest that participation in 12-step groups contributes to better recovery outcomes, but people often object to such groups and most do not sustain regular involvement. Yet, research on alternatives to 12-step groups is very sparse. The present study aimed to extend the knowledge base on mutual help group alternatives for those with an alcohol use disorder (AUD), sampling from large, active, abstinence-focused groups including Women for Sobriety (WFS), LifeRing, and SMART Recovery (SMART). This paper presents a cross-sectional analysis of this longitudinal study, using baseline data to describe the profile and participation characteristics of attendees of these groups in comparison to 12-step members. Data from participants 18 and over with a lifetime AUD (N=651) were collected using Web-based surveys. Members of alternative 12-step groups were recruited in collaboration with group directors, who helped publicize the study by emailing meeting conveners and attendees and posting announcements on social media. A comparison group of current (past-30-day) 12-step attendees was recruited from an online meeting hub for recovering persons. Interested parties were directed to a Webpage where they were screened, and eligible participants completed an online survey assessing demographic and clinical variables; in-person and online mutual help involvement; and group satisfaction and cohesion. Analyses involved comparing those identifying WFS, SMART, and LifeRing as their primary group to 12-step members on the above characteristics. Compared to 12-step members, members of the mutual help alternatives were less religious and generally higher on education and income. WFS and LifeRing members were also older, more likely to be married, and lower on lifetime drug and psychiatric severity; meanwhile, LifeRing and SMART members were less likely to endorse the most stringent abstinence goal. 
Finally, despite lower levels of in-person meeting attendance, members of all

  16. Correlates of sedentary time in different age groups: results from a large cross sectional Dutch survey

    NARCIS (Netherlands)

    Bernaards, C.; Hildebrandt, V.H.; Hendriksen, I.J.

    2016-01-01

    Background. Evidence shows that prolonged sitting is associated with an increased risk of mortality, independent of physical activity (PA). The aim of the study was to identify correlates of sedentary time (ST) in different age groups and day types (i.e. school-/work day versus non-school-/non-work

  17. Research and Teaching: Aligning Assessment to Instruction--Collaborative Group Testing in Large- Enrollment Science Classes

    Science.gov (United States)

    Siegel, Marcelle; Roberts, Tina M.; Freyermuth, Sharyn K.; Witzig, Stephen B.; Izci, Kemal

    2015-01-01

    The authors describe a collaborative group-testing strategy implemented and studied in undergraduate science classes. This project investigated how the assessment strategy relates to student performance and perceptions about collaboration and focused on two sections of an undergraduate biotechnology course taught in separate semesters.

  18. Large-Scale Tests of the DGP Model

    CERN Document Server

    Song, Y S; Hu, W; Song, Yong-Seon; Sawicki, Ignacy; Hu, Wayne

    2006-01-01

    The self-accelerating braneworld model (DGP) can be tested from measurements of the expansion history of the universe and the formation of structure. Current constraints on the expansion history from supernova luminosity distances, the CMB, and the Hubble constant exclude the simplest flat DGP model at about 3σ. The best-fit open DGP model is, however, only a marginally poorer fit to the data than flat LCDM. Its substantially different expansion history raises structure formation challenges for the model. A dark-energy model with the same expansion history would predict a highly significant discrepancy with the baryon oscillation measurement due to the high Hubble constant required, and a large enhancement of CMB anisotropies at the lowest multipoles due to the ISW effect. For the DGP model to satisfy these constraints, new gravitational phenomena would have to appear at the non-linear and cross-over scales, respectively. A prediction of the DGP expansion history in a region where the phenomenology is well unde...
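The flat self-accelerating DGP expansion history referred to here has a closed form that is easy to compare with flat LCDM: H/H0 = √Ω_rc + √(Ω_rc + Ω_m(1+z)³) with Ω_rc = (1-Ω_m)²/4 fixed by flatness. These are standard relations; the value Ω_m = 0.3 in the check below is chosen arbitrarily.

```python
import numpy as np

def hubble_flat_dgp(z, om):
    # Flat self-accelerating DGP branch:
    #   H/H0 = sqrt(Omega_rc) + sqrt(Omega_rc + Omega_m*(1+z)**3),
    # with Omega_rc = (1 - Omega_m)**2 / 4 fixed by flatness (so H(0) = H0).
    orc = (1.0 - om) ** 2 / 4.0
    return np.sqrt(orc) + np.sqrt(orc + om * (1.0 + z) ** 3)

def hubble_flat_lcdm(z, om):
    # Flat LCDM with the same matter density, for comparison.
    return np.sqrt(om * (1.0 + z) ** 3 + 1.0 - om)
```

At the same Ω_m the DGP curve sits above LCDM at z > 0, the "substantially different expansion history" the abstract refers to.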

  19. Ensemble renormalization group for the random-field hierarchical model.

    Science.gov (United States)

    Decelle, Aurélien; Parisi, Giorgio; Rocchi, Jacopo

    2014-03-01

    The renormalization group (RG) methods are still far from being completely understood in quenched disordered systems. In order to gain insight into the nature of the phase transition of these systems, it is common to investigate simple models. In this work we study a real-space RG transformation on the Dyson hierarchical lattice with a random field, which leads to a reconstruction of the RG flow and to an evaluation of the critical exponents of the model at T=0. We show that this method gives very accurate estimations of the critical exponents by comparing our results with those obtained by some of us using an independent method.

  20. Improved engine wall models for Large Eddy Simulation (LES)

    Science.gov (United States)

    Plengsaard, Chalearmpol

    Improved wall models for Large Eddy Simulation (LES) are presented in this research. The classical Werner-Wengle (WW) wall shear stress model is used along with near-wall sub-grid scale viscosity. A sub-grid scale turbulent kinetic energy is employed in a model for the eddy viscosity. To gain better heat flux results, a modified classical variable-density wall heat transfer model is also used. Because no experimental wall shear stress results are available in engines, the fully turbulent developed flow in a square duct is chosen to validate the new wall models. The model constants in the new wall models are set to 0.01 and 0.8, respectively and are kept constant throughout the investigation. The resulting time- and spatially-averaged velocity and temperature wall functions from the new wall models match well with the law-of-the-wall experimental data at Re = 50,000. In order to study the effect of hot air impinging walls, jet impingement on a flat plate is also tested with the new wall models. The jet Reynolds number is equal to 21,000 and a fixed jet-to-plate spacing of H/D = 2.0. As predicted by the new wall models, the time-averaged skin friction coefficient agrees well with experimental data, while the computed Nusselt number agrees fairly well when r/D > 2.0. Additionally, the model is validated using experimental data from a Caterpillar engine operated with conventional diesel combustion. Sixteen different operating engine conditions are simulated. The majority of the predicted heat flux results from each thermocouple location follow similar trends when compared with experimental data. The magnitude of peak heat fluxes as predicted by the new wall models is in the range of typical measured values in diesel combustion, while most heat flux results from previous LES wall models are over-predicted. The new wall models generate more accurate predictions and agree better with experimental data.

  1. Quasi Hopf algebras, group cohomology and orbifold models

    Science.gov (United States)

    Dijkgraaf, R.; Pasquier, V.; Roche, P.

    1991-01-01

    We construct non-trivial quasi Hopf algebras associated to any finite group G and any element of H3(G, U(1)). We analyze in detail the set of representations of these algebras and show that we recover the main interesting data attached to particular orbifolds of Rational Conformal Field Theory, or equivalently to the topological field theories studied by R. Dijkgraaf and E. Witten. This leads us to the construction of the R-matrix structure in non-abelian RCFT orbifold models.

  2. Applying OWA operator to model group behaviors in uncertain QFD

    OpenAIRE

    2013-01-01

    Deriving the priority order of design requirements (DRs) from customer requirements (CRs) is a crucial step in quality function deployment (QFD). However, it is not straightforward to prioritize DRs due to two types of uncertainties: human subjective perception and user variability. This paper proposes an OWA-based group decision-making approach to uncertain QFD, with an application to a flexible manufacturing system design. The proposed model performs computations solely based on the orde...
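
The OWA (Ordered Weighted Averaging) operator itself is standard: the weights are applied to the arguments after sorting, not to fixed positions. A minimal sketch of the bare operator (the paper's QFD-specific weighting scheme is not reproduced here):

```python
def owa(values, weights):
    """Yager's OWA operator: apply weights to the values sorted in
    descending order, so weights attach to rank positions, not sources."""
    if len(values) != len(weights):
        raise ValueError("values and weights must have equal length")
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

# Three expert ratings of one design requirement (illustrative numbers)
ratings = [0.6, 0.9, 0.3]
# Weights biased toward the larger ratings ("optimistic" aggregation)
print(owa(ratings, [0.5, 0.3, 0.2]))  # 0.5*0.9 + 0.3*0.6 + 0.2*0.3 = 0.69 (up to rounding)
```

With weights [1, 0, 0] the operator reduces to max, with [0, ..., 0, 1] to min, and with uniform weights to the plain mean, which is what makes it useful for modeling group attitudes.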

  3. Improving CASINO performance for models with large number of electrons

    Energy Technology Data Exchange (ETDEWEB)

    Anton, L; Alfe, D; Hood, R Q; Tanqueray, D

    2009-05-13

    Quantum Monte Carlo calculations have at their core algorithms based on statistical ensembles of multidimensional random walkers, which are straightforward to use on parallel computers. Nevertheless, some computations have reached the limit of the memory resources for models with more than 1000 electrons because of the need to store a large amount of orbital-related data. Besides that, for systems with a large number of electrons, it is interesting to study whether the evolution of one configuration of random walkers can be done faster in parallel. We present a comparative study of two ways to solve these problems: (1) distributed orbital data done with MPI or Unix inter-process communication tools, (2) second-level parallelism for configuration computation.

  4. Experimental study on load bearing behavior of large-scaled caps with pile groups

    Institute of Scientific and Technical Information of China (English)

    Guo Chao; Lu Bo; Gong Weiming; Qiu Hongxing

    2009-01-01

    The objective of this investigation was to study the behavior of deep pile caps and their ultimate load-carrying capacity. Four 1/10-scaled models of nine-pile caps were cast and tested under vertical loads to failure. The failure shapes of the pile caps, the correlation between load and displacement, and the internal stresses were analyzed systematically. The results demonstrated that the failures of all four models resulted from punching shear; the internal flow of forces in nine-pile caps can be approximated by a "strut-and-tie" model. Furthermore, the failure loads of these specimens were predicted by some of the present design methods and the calculated results were compared with the experimental loads. The comparative results also indicated that the "strut-and-tie" model is a more reasonable design method for deep pile cap design.

  5. Leukocyte deformability: finite element modeling of large viscoelastic deformation.

    Science.gov (United States)

    Dong, C; Skalak, R

    1992-09-21

    An axisymmetric deformation of a viscoelastic sphere bounded by a prestressed elastic thin shell in response to external pressure is studied by a finite element method. The research is motivated by the need for understanding the passive behavior of human leukocytes (white blood cells) and interpreting extensive experimental data in terms of the mechanical properties. The cell at rest is modeled as a sphere consisting of a cortical prestressed shell with an incompressible Maxwell fluid interior. A large-strain deformation theory is developed based on the proposed model. General non-linear, large-strain constitutive relations for the cortical shell are derived by neglecting the bending stiffness. A representation of the constitutive equations in the form of an integral of strain history for the incompressible Maxwell interior is used in the formulation of the numerical scheme. A finite element program is developed, in which a sliding boundary condition is imposed on all contact surfaces. The mathematical model developed is applied to evaluate experimental data of pipette tests and observations of blood flow.
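
The "integral of strain history" for an incompressible Maxwell fluid has a standard fading-memory form (written here in our notation, not necessarily the paper's): the deviatoric stress is an exponentially weighted history of the strain rate,

```latex
\boldsymbol{\sigma}'(t) \;=\; \int_{-\infty}^{t} \frac{\eta}{\lambda}\,
e^{-(t-t')/\lambda}\,\dot{\boldsymbol{\gamma}}(t')\,\mathrm{d}t' ,
```

which is equivalent to the differential form $\boldsymbol{\sigma}' + \lambda\,\dot{\boldsymbol{\sigma}}' = \eta\,\dot{\boldsymbol{\gamma}}$, with relaxation time $\lambda$ and viscosity $\eta$. The integral form is what makes the stress at each time step computable from the stored deformation history in a finite element scheme.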

  6. Large animal models for vaccine development and testing.

    Science.gov (United States)

    Gerdts, Volker; Wilson, Heather L; Meurens, Francois; van Drunen Littel-van den Hurk, Sylvia; Wilson, Don; Walker, Stewart; Wheler, Colette; Townsend, Hugh; Potter, Andrew A

    2015-01-01

    The development of human vaccines continues to rely on the use of animals for research. Regulatory authorities require novel vaccine candidates to undergo preclinical assessment in animal models before being permitted to enter the clinical phase in human subjects. Substantial progress has been made in recent years in reducing and replacing the number of animals used for preclinical vaccine research through the use of bioinformatics and computational biology to design new vaccine candidates. However, the ultimate goal of a new vaccine is to instruct the immune system to elicit an effective immune response against the pathogen of interest, and no alternatives to live animal use currently exist for evaluation of this response. Studies identifying the mechanisms of immune protection; determining the optimal route and formulation of vaccines; establishing the duration and onset of immunity, as well as the safety and efficacy of new vaccines, must be performed in a living system. Importantly, no single animal model provides all the information required for advancing a new vaccine through the preclinical stage, and research over the last two decades has highlighted that large animals more accurately predict vaccine outcome in humans than do other models. Here we review the advantages and disadvantages of large animal models for human vaccine development and demonstrate that much of the success in bringing a new vaccine to market depends on choosing the most appropriate animal model for preclinical testing. © The Author 2015. Published by Oxford University Press on behalf of the Institute for Laboratory Animal Research. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  7. Large eddy simulation subgrid model for soot prediction

    Science.gov (United States)

    El-Asrag, Hossam Abd El-Raouf Mostafa

    Soot prediction in realistic systems is one of the most challenging problems in theoretical and applied combustion. Soot formation as a chemical process is very complicated and not fully understood. The major difficulty stems from the chemical complexity of the soot formation process as well as its strong coupling with the other thermochemical and fluid processes that occur simultaneously. Soot is a major byproduct of incomplete combustion, having a strong impact on the environment as well as on combustion efficiency. Therefore, innovative methods are needed to predict soot in realistic configurations in an accurate and yet computationally efficient way. In the current study, a new soot formation subgrid model is developed and reported here. The new model is designed to be used within the context of the Large Eddy Simulation (LES) framework, combined with Linear Eddy Mixing (LEM) as a subgrid combustion model. The final model can be applied equally to premixed and non-premixed flames over any required geometry and flow conditions in the free, the transition, and the continuum regimes. The soot dynamics is predicted using a Method of Moments approach with Lagrangian Interpolative Closure (MOMIC) for the fractional moments. Since no prior knowledge of the particle distribution is required, the model is generally applicable. The current model accounts for the basic soot transport phenomena, such as transport by molecular diffusion and thermophoretic forces. The model is first validated against experimental results for non-sooting swirling non-premixed and partially premixed flames. Next, a set of canonical premixed sooting flames are simulated, where the effects of turbulence, binary diffusivity and C/O ratio on soot formation are studied. Finally, the model is validated against a non-premixed jet sooting flame. The effect of the flame structure on the different soot formation stages as well as the particle size distribution is described. Good results are predicted with

  8. Scale invariant behavior in a large N matrix model

    CERN Document Server

    Narayanan, Rajamani

    2016-01-01

    Eigenvalue distributions of properly regularized Wilson loop operators are used to study the transition from ultra-violet (UV) behavior to infra-red (IR) behavior in gauge theories coupled to matter that potentially have an IR fixed point (FP). We numerically demonstrate emergence of scale invariance in a matrix model that describes $SU(N)$ gauge theory coupled to two flavors of massless adjoint fermions in the large $N$ limit. The eigenvalue distribution of Wilson loops of varying sizes cannot be described by a universal lattice beta-function connecting the UV to the IR.

  9. Modeling skin effect in large magnetized iron detectors

    CERN Document Server

    Incurvati, M

    2003-01-01

    The experimental problem of the calibration of the magnetic field in large iron detectors is discussed. Emphasis is laid on techniques based on ballistic measurements, such as those employed by MINOS or OPERA. In particular, we provide analytical formulas to model the behavior of the apparatus in the transient regime, taking into account eddy-current effects and the finite penetration velocity of the driving fields. These formulas substantially ease the design of the calibration apparatus. Results are compared with experimental data coming from a prototype of the OPERA spectrometer.
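
The finite penetration velocity of the driving field is set by the classical skin effect. A minimal sketch of the skin-depth scale involved (the material constants below are illustrative order-of-magnitude assumptions, not the MINOS/OPERA iron parameters):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability [H/m]

def skin_depth(f_hz, sigma, mu_r):
    """Classical skin depth: delta = sqrt(2 / (mu * sigma * omega)),
    the depth at which a sinusoidal field decays by 1/e in a conductor."""
    omega = 2.0 * math.pi * f_hz
    return math.sqrt(2.0 / (mu_r * MU0 * sigma * omega))

# Illustrative values for magnetized iron (assumed, order of magnitude only)
sigma_fe, mu_r_fe = 1.0e7, 1000.0
for f in (0.1, 1.0, 10.0):
    print(f"{f:5.1f} Hz -> delta = {skin_depth(f, sigma_fe, mu_r_fe) * 1e3:.1f} mm")
```

The millimetre-scale depths at even sub-hertz frequencies illustrate why the transient (ballistic) regime, rather than a steady AC response, is the natural setting for calibrating thick iron yokes.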

  10. Design and modelling of innovative machinery systems for large ships

    DEFF Research Database (Denmark)

    Larsen, Ulrik

    Eighty percent of the growing global merchandise trade is transported by sea. The shipping industry is required to reduce the pollution and increase the energy efficiency of ships in the near future. There is a relatively large potential for approaching these requirements by implementing waste heat...... parameters for marine WHR. Using this mentioned methodology, regression models are derived for the prediction of the maximum obtainable thermal efficiency of ORCs. A unique configuration of the Kalina cycle, the Split-cycle, is analysed to evaluate the fullest potential of the Kalina cycle for the purpose...

  11. Modeling of large-scale oxy-fuel combustion processes

    DEFF Research Database (Denmark)

    Yin, Chungen

    2012-01-01

    , among which radiative heat transfer under oxy-fuel conditions is one of the fundamental issues. This paper demonstrates the nongray-gas effects in modeling of large-scale oxy-fuel combustion processes. Oxy-fuel combustion of natural gas in a 609MW utility boiler is numerically studied, in which....... The simulation results show that the gray and non-gray calculations of the same oxy-fuel WSGGM make distinctly different predictions in the wall radiative heat transfer, incident radiative flux, radiative source, gas temperature and species profiles. In relative to the non-gray implementation, the gray...

  12. A Model for Predicting Thermomechanical Response of Large Space Structures.

    Science.gov (United States)

    1984-06-01

    …[94] for predicting the buckling loads associated with general instability of beam-like lattice trusses. Bazant and Christensen [95] present a…

  13. Resin infusion of large composite structures modeling and manufacturing process

    Energy Technology Data Exchange (ETDEWEB)

    Loos, A.C. [Michigan State Univ., Dept. of Mechanical Engineering, East Lansing, MI (United States)

    2006-07-01

    The resin infusion processes resin transfer molding (RTM), resin film infusion (RFI) and vacuum assisted resin transfer molding (VARTM) are cost-effective techniques for the fabrication of complex-shaped composite structures. The dry fibrous preform is placed in the mold, consolidated, resin impregnated and cured in a single-step process. The fibrous preforms are often constructed near net shape using highly automated textile processes such as knitting, weaving and braiding. In this paper, the infusion processes RTM, RFI and VARTM are discussed along with the advantages of each technique compared with traditional composite fabrication methods such as prepreg tape lay-up and autoclave cure. The large number of processing variables and the complex material behavior during infiltration and cure make experimental optimization of the infusion processes costly and inefficient. Numerical models have been developed which can be used to simulate the resin infusion processes. The model formulation and solution procedures for the VARTM process are presented. A VARTM process simulation of a carbon fiber preform is presented to demonstrate the type of information that can be generated by the model and to compare the model predictions with experimental measurements. Overall, the predicted flow front positions, resin pressures and preform thicknesses agree well with the measured values. The results of the simulation show the potential cost and performance benefits that can be realized by using a simulation model as part of the development process. (au)

  14. Do land parameters matter in large-scale hydrological modelling?

    Science.gov (United States)

    Gudmundsson, Lukas; Seneviratne, Sonia I.

    2013-04-01

    Many of the most pressing issues in large-scale hydrology are concerned with predicting hydrological variability at ungauged locations. However, current-generation hydrological and land surface models that are used for their estimation suffer from large uncertainties. These models rely on mathematical approximations of the physical system as well as on mapped values of land parameters (e.g. topography, soil types, land cover) to predict hydrological variables (e.g. evapotranspiration, soil moisture, stream flow) as a function of atmospheric forcing (e.g. precipitation, temperature, humidity). Despite considerable progress in recent years, it remains unclear whether better estimates of land parameters can improve predictions - or - if a refinement of model physics is necessary. To approach this question we suggest scrutinizing our perception of hydrological systems by confronting it with the radical assumption that hydrological variability at any location in space depends on past and present atmospheric forcing only, and not on location-specific land parameters. This so-called "Constant Land Parameter Hypothesis (CLPH)" assumes that variables like runoff can be predicted without taking location-specific factors such as topography or soil types into account. We demonstrate, using a modern statistical tool, that monthly runoff in Europe can be skilfully estimated using atmospheric forcing alone, without accounting for locally varying land parameters. The resulting runoff estimates are used to benchmark state-of-the-art process models. These are found to have inferior performance, despite their explicit process representation, which accounts for locally varying land parameters. This suggests that progress in the theory of hydrological systems is likely to yield larger improvements in model performance than more precise land parameter estimates. 
The results also question the current modelling paradigm that is dominated by the attempt to account for locally varying land

  15. Next Generation Very Large Array Memo No. 5: Science Working Groups -- Project Overview

    CERN Document Server

    Carilli, C L; Ott, J; Beasley, A; Isella, A; Murphy, E; Leroy, A; Casey, C; Moullet, A; Lacy, M; Hodge, J; Bower, G; Demorest, P; Hull, C; Hughes, M; di Francesco, J; Narayanan, D; Kent, B; Clark, B; Butler, B

    2015-01-01

    We summarize the design, capabilities, and some of the priority science goals of a next generation Very Large Array (ngVLA). The ngVLA is an interferometric array with 10x larger effective collecting area and 10x higher spatial resolution than the current VLA and the Atacama Large Millimeter Array (ALMA), optimized for operation in the wavelength range 0.3cm to 3cm. The ngVLA opens a new window on the Universe through ultra-sensitive imaging of thermal line and continuum emission down to milliarcsecond resolution, as well as unprecedented broad band continuum polarimetric imaging of non-thermal processes. The continuum resolution will reach 9mas at 1cm, with a brightness temperature sensitivity of 6K in 1 hour. For spectral lines, the array at 1" resolution will reach 0.3K surface brightness sensitivity at 1cm and 10 km/s spectral resolution in 1 hour. These capabilities are the only means with which to answer a broad range of critical scientific questions in modern astronomy, including direct imaging of plane...

  16. Examining the Content of Head Start Teachers' Literacy Instruction within Two Activity Contexts during Large-Group Circle Time

    Science.gov (United States)

    Zhang, Chenyi; Diamond, Karen E.; Powell, Douglas R.

    2015-01-01

    Large-group circle time is an important component of many preschool classrooms' daily schedules. This study scrutinized the teaching content of Head Start teachers' literacy instruction (i.e., the types of literacy concept embedded within the instruction, lexical characteristics of teachers' talk, and elaborations on literacy knowledge) in two…

  17. Poverty, Relationship Conflict, and the Regulation of Cortisol in Small and Large Group Contexts at Child Care

    Science.gov (United States)

    Rappolt-Schlichtmann, Gabrielle; Willett, John B.; Ayoub, Catherine C.; Lindsley, Robert; Hulette, Annmarie C.; Fischer, Kurt W.

    2009-01-01

    The purpose of this research is to explore the dynamics of cortisol regulation in the context of center-based child care by examining the impact of social context (large classroom vs. small group) and relationship quality with caregivers (conflict with mothers and teachers). We extend the research on children's physiologic stress system…

  18. Model parameters for representative wetland plant functional groups

    Science.gov (United States)

    Williams, Amber S.; Kiniry, James R.; Mushet, David M.; Smith, Loren M.; McMurry, Scott T.; Attebury, Kelly; Lang, Megan; McCarty, Gregory W.; Shaffer, Jill A.; Effland, William R.; Johnson, Mari-Vaughn V.

    2017-01-01

    Wetlands provide a wide variety of ecosystem services including water quality remediation, biodiversity refugia, groundwater recharge, and floodwater storage. Realistic estimation of ecosystem service benefits associated with wetlands requires reasonable simulation of the hydrology of each site and realistic simulation of the upland and wetland plant growth cycles. Objectives of this study were to quantify leaf area index (LAI), light extinction coefficient (k), and plant nitrogen (N), phosphorus (P), and potassium (K) concentrations in natural stands of representative plant species for some major plant functional groups in the United States. Functional groups in this study were based on these parameters and plant growth types to enable process-based modeling. We collected data at four locations representing some of the main wetland regions of the United States. At each site, we collected on-the-ground measurements of fraction of light intercepted, LAI, and dry matter within the 2013–2015 growing seasons. Maximum LAI and k variables showed noticeable variations among sites and years, while overall averages and functional group averages give useful estimates for multisite simulation modeling. Variation within each species gives an indication of what can be expected in such natural ecosystems. For P and K, the concentrations from highest to lowest were spikerush (Eleocharis macrostachya), reed canary grass (Phalaris arundinacea), smartweed (Polygonum spp.), cattail (Typha spp.), and hardstem bulrush (Schoenoplectus acutus). Spikerush had the highest N concentration, followed by smartweed, bulrush, reed canary grass, and then cattail. These parameters will be useful for the actual wetland species measured and for the wetland plant functional groups they represent. These parameters and the associated process-based models offer promise as valuable tools for evaluating environmental benefits of wetlands and for evaluating impacts of various agronomic practices in
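
The measured quantities above (fraction of light intercepted, LAI, and the light extinction coefficient k) are linked by the Beer-Lambert canopy light-extinction relation commonly used in this kind of field work; a minimal sketch under that assumption:

```python
import math

def fraction_intercepted(k, lai):
    """Beer-Lambert canopy light interception: f = 1 - exp(-k * LAI)."""
    return 1.0 - math.exp(-k * lai)

def extinction_coefficient(f, lai):
    """Invert Beer-Lambert: estimate k from measured light interception
    and measured leaf area index."""
    return -math.log(1.0 - f) / lai

# Round trip with assumed illustrative values k = 0.5, LAI = 3.0
f = fraction_intercepted(0.5, 3.0)
print(f, extinction_coefficient(f, 3.0))
```

This inversion is how on-the-ground measurements of intercepted light and LAI yield per-species k values suitable for process-based simulation models.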

  19. Large-Signal DG-MOSFET Modelling for RFID Rectification

    Directory of Open Access Journals (Sweden)

    R. Rodríguez

    2016-01-01

    Full Text Available This paper analyses the capability of undoped DG-MOSFETs for the operation of rectifiers for RFIDs and Wireless Power Transmission (WPT) at microwave frequencies. For this purpose, a large-signal compact model has been developed and implemented in Verilog-A. The model has been numerically validated with a device simulator (Sentaurus). It is found that the number of stages needed to achieve the optimal rectifier performance is lower than that required with conventional MOSFETs. In addition, the DC output voltage could be increased with the use of appropriate mid-gap metals for the gate, such as TiN. A minor impact of short-channel effects (SCEs) on rectification is also pointed out.

  20. Lattice Boltzmann Large Eddy Simulation Model of MHD

    CERN Document Server

    Flint, Christopher

    2016-01-01

    The work of Ansumali et al. is extended to two-dimensional magnetohydrodynamic (MHD) turbulence, in which energy is cascaded to small spatial scales and thus requires subgrid modeling. Applying large eddy simulation (LES) modeling to the macroscopic fluid equations results in the need to apply ad hoc closure schemes. Instead, LES is applied to a suitable mesoscopic lattice Boltzmann representation from which one can recover the MHD equations in the long-wavelength, long-time-scale Chapman-Enskog limit (i.e., the Knudsen limit). Thus, first performing filter-width expansions on the lattice Boltzmann equations, followed by the standard small-Knudsen expansion on the filtered lattice Boltzmann system, results in a closed set of MHD turbulence equations, provided we enforce the physical constraint that the subgrid effects first enter the dynamics at the transport time scales. In particular, a multi-time relaxation collision operator is considered for the density distribution function and a single rel...

  1. Large N classical dynamics of holographic matrix models

    CERN Document Server

    Asplund, Curtis T; Dzienkowski, Eric

    2014-01-01

    Using a numerical simulation of the classical dynamics of the plane-wave and flat space matrix models of M-theory, we study the thermalization, equilibrium thermodynamics and fluctuations of these models as we vary the temperature and the size of the matrices, N. We present our numerical implementation in detail and several checks of its precision and consistency. We show evidence for thermalization by matching the time-averaged distributions of the matrix eigenvalues to the distributions of the appropriate Traceless Gaussian Unitary Ensemble of random matrices. We study the autocorrelations and power spectra for various fluctuating observables and observe evidence of the expected chaotic dynamics as well as a hydrodynamic type limit at large N, including near-equilibrium dissipation processes. These configurations are holographically dual to black holes in the dual string theory or M-theory and we discuss how our results could be related to the corresponding supergravity black hole solutions.
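
The thermalization check described above, matching eigenvalue statistics against the Traceless Gaussian Unitary Ensemble, can be sketched numerically; a minimal illustration (the normalization conventions are our assumption, not those of the paper):

```python
import numpy as np

def traceless_gue(n, rng):
    """Draw an n x n Hermitian matrix from the GUE, then project out
    the trace to land in the traceless ensemble."""
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    h = (a + a.conj().T) / 2.0          # Hermitian part
    h -= (np.trace(h) / n) * np.eye(n)  # remove the trace mode
    return h

rng = np.random.default_rng(0)
n = 200
evals = np.linalg.eigvalsh(traceless_gue(n, rng))
# With this normalization, Wigner's semicircle law confines the spectrum
# (approximately) to [-2*sqrt(n), +2*sqrt(n)] at large n.
print(evals.min() / np.sqrt(n), evals.max() / np.sqrt(n))
```

Comparing the time-averaged eigenvalue histogram of a simulated matrix configuration against this ensemble density is the kind of test the paper uses as evidence of thermalization.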

  2. Inducing transparency with large magnetic response and group indices by hybrid dielectric metamaterials.

    Science.gov (United States)

    Chen, Cheng-Kuang; Lai, Yueh-Chun; Yang, Yu-Hang; Chen, Chia-Yun; Yen, Ta-Jen

    2012-03-26

    We present metamaterial-induced transparency (MIT) phenomena with enhanced magnetic fields in hybrid dielectric metamaterials. Using two hybrid structures of identical-dielectric-constant resonators (IDRs) and distinct-dielectric-constant resonators (DDRs), we demonstrate a larger group index (ng ≈ 354) and a better bandwidth-delay product (BDP ≈ 0.9) than metallic-type metamaterials. The key to enabling these properties is to excite either the trapped-mode or the suppressed-mode resonances, which can be managed by controlling the contrast of dielectric constants between the dielectric resonators in the hybrid metamaterials.

  3. Using Flume Experiments to Model Large Woody Debris Transport Dynamics

    Science.gov (United States)

    Braudrick, C. A.; Grant, G. E.

    2001-05-01

    In the last decade there has been increasing interest in quantifying the transport dynamics of large woody debris in a variety of stream types. We used flume experiments to test theoretical models of wood entrainment, transport, and deposition in streams, because wood moves infrequently and during high flows, when direct measurement and observation can be difficult and dangerous. Flume experiments provide an excellent setting to study wood dynamics because channel types, flow, log size, and other parameters can be varied relatively easily and extensive data can be collected over a short time period. Our flume experiments verified theoretical model predictions that piece movement depends on the diameter of the log and its orientation in large rivers (where piece length is less than channel width). Piece length, often reported as the most important factor in determining piece movement in field studies, was not a factor in these simulated large channels. This is likely due to the importance of banks and vegetation in inhibiting log movement in the field, particularly for pieces longer than channel width. Logs are often at least partially lodged on the banks, sometimes upstream of vegetation or other logs, which anchors the piece and increases the force required for entrainment. Rootwads also increased the flow depth required to move individual logs: by raising logs off the channel bed, rootwads decrease the buoyant and drag forces acting on the log. We also developed a theoretical model of wood transport and deposition based upon the ratios of piece length to channel width, piece length to the radius of curvature of the channel, and piece diameter to water depth. In these experiments we noted that individual logs tend to move down the channel parallel to the channel margin, and deposit on the outside of bends, heads of shallow and exposed bars, and bar crossovers. 
Our theoretical model was not borne out by the experiments, likely because there were few potential

  4. The Economic Consequences of a Large EMU Results of Macroeconomic Model Simulations

    Directory of Open Access Journals (Sweden)

    Fritz Breuss

    1997-05-01

    Full Text Available Recent economic forecasts increase the probability that, firstly, the EMU can start as planned on January 1, 1999 and, secondly, that it will start with a large group of countries. The economic implications of the artificial unification of "hard-currency" and "soft-currency" countries are analysed by means of macroeconomic model simulations. The results of a large "non-optimal" EMU are as expected. On the one hand, there are positive income effects for all countries, although unevenly distributed over the participants; on the other hand, the internal (inflation) and external (value of the Euro vis-à-vis the Dollar) stability are at risk. The "hard-currency" group will be the major winner (in terms of real GDP and employment), whereas the "soft-currency" group has to carry the adjustment costs of moving to a regime of fixed exchange rates (Euro), which results in slower growth, a decline in employment and a deterioration of their budgetary position. The necessary convergence of prices and interest rates leads to an increase (decrease) of inflation and interest rates in the "hard-currency" countries ("soft-currency" countries). If the EMU starts with a large group, there will be a tendency to devalue the Euro against the Dollar. As a consequence of the uneven economic performance of a large (non-optimal) EMU, I would suggest starting the EMU with a core group of "hard-currency" countries. After this mini-EMU succeeds, the other Member States could join the EMU.

  5. The Importance of Computational Modeling of Large Pumping Stations

    Directory of Open Access Journals (Sweden)

    S. M. Bozh'eva

    2015-01-01

    Full Text Available The article presents the main design and structural principles of pumping stations. It specifies the basic requirements for favorable hydraulic operating conditions of the pumping units. The article also describes design cases in which computational modeling is necessary to analyse the operation of a pumping station and ensure its reliability. A specific example of a large pumping station with submersible pumps illustrates the process of computational modeling of its operation. The object of simulation was an underground pumping station with a diameter of 26 m and a depth of 13 m, divided into two independent branches and equipped with 8 submersible pumps. The objective of this work was to evaluate the effectiveness of the design solution by CFD methods, to analyze the design of the inlet chamber, and to identify possible difficulties with the operation of the facility. The structure of the considered pumping station and the applied computational models of the physical processes are described in detail. The article gives a detailed formulation of the simulation task and the methods of solving it, and presents the initial and boundary conditions. It describes the basic operating modes of the pumping station. The obtained results are presented as flow patterns for each operating mode, with detailed explanations. The data obtained from CFD confirm the correctness of the general design solutions of the project. Submersible pump operation at the minimum water level was verified, the absence of vortex formation was confirmed, and measures to improve the operating conditions of the facility were proposed. Stagnant zones requiring a separate cleaning schedule were identified in the inlet chamber, and a measure against floating debris and foam was proposed. 
This justifies the use of computational modeling (CFD) for verifying and adjusting designs of large pumping stations as a much more precise tool that takes into account

  6. North-south asymmetry in small and large sunspot group activity and violation of even-odd solar cycle rule

    CERN Document Server

    Javaraiah, J

    2016-01-01

    According to the Gnevyshev-Ohl (G-O) rule, an odd-numbered cycle is stronger than its preceding even-numbered cycle. In modern times the cycle pair (22, 23) violated this rule. Using the combined Greenwich Photoheliographic Results (GPR) and Solar Optical Observing Network (SOON) sunspot group data for the period 1874-2015, and the Debrecen Photoheliographic Data (DPD) of sunspot groups for the period 1974-2015, we find that the cycle pair (22, 23) violated the G-O rule because, besides a large deficiency of small sunspot groups in both the northern and the southern hemispheres during cycle 23, there was a large abundance of small sunspot groups in the southern hemisphere during cycle 22. In the case of large and small sunspot groups, the cycle pair (22, 23) violated the G-O rule in the northern and southern hemispheres, respectively, suggesting that the north-south asymmetry in solar activity makes a significant contribution to the violation of the G-O rule. The amplitude of solar cycle 24 is smaller than that...

  7. Promoting Oral Interaction in Large Groups through Task-Based Learning

    Directory of Open Access Journals (Sweden)

    Yolima Forero Rocha

    2005-10-01

    Full Text Available This research project attempts to show the way a group of five teachers used task-based learning with a group of 50 seventh graders to improve oral interaction. The students belonged to Isabel II School. They took an active part in the implementation of the tasks and were asked to answer two questionnaires. Some English classes were observed and video-recorded; finally, the students took an evaluation to test their improvement.

  8. Aeroservoelastic model based active control for large civil aircraft

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    A modeling and control approach for an advanced-configuration large civil aircraft with aeroservoelasticity, via the LQG method and control allocation, is presented. Mathematical models and implementation issues for the multi-input/multi-output (MIMO) aeroservoelastic system simulation, developed for a flexible wing with multiple control surfaces, are described. A fuzzy-logic-based optimization approach is employed to solve the constrained control allocation problem by intelligently adjusting the components of the output vector and autonomously finding a proper vector in the attainable moment set (AMS). The basic idea is to minimize the L2 norm of the error between the desired moment and the achievable moment, using the design freedom provided by redundantly allocated actuators and control surfaces. Considering the constraints on the control surfaces, and in order to obtain acceptable aircraft performance in terms of stability and maneuverability, the fuzzy weights are updated by a learning algorithm, which makes the closed-loop system self-adaptive. Finally, a flight-control design example for the advanced civil aircraft is discussed as a demonstration. The studies performed show that the advanced-configuration large civil aircraft has good performance with the control law designed via the proposed approach. Gust alleviation and flutter suppression are achieved through the synergetic effects of the elevator, ailerons, equivalent rudders and flaps. The results show good closed-loop performance and satisfy the control-surface constraints.
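The allocation step described above (minimizing the L2 norm of the moment error subject to surface deflection limits) can be sketched without the fuzzy-weight machinery. The following is a minimal illustration, not the authors' method: a projected-gradient solver for min ||Bu − m_des||2 with box constraints, where the function name `allocate`, the effectiveness matrix `B`, the desired moment and the bounds are all invented for the example.

```python
import numpy as np

def allocate(B, m_des, lower, upper, iters=500, lr=None):
    """Minimize ||B u - m_des||_2 subject to lower <= u <= upper by
    projected gradient descent (a simple stand-in for the fuzzy-weighted
    allocator described in the abstract)."""
    B, m_des = np.asarray(B, float), np.asarray(m_des, float)
    u = np.zeros(B.shape[1])
    if lr is None:
        # step size 1/L, with L the Lipschitz constant of the gradient
        lr = 1.0 / np.linalg.norm(B, 2) ** 2
    for _ in range(iters):
        grad = B.T @ (B @ u - m_des)          # gradient of the quadratic cost
        u = np.clip(u - lr * grad, lower, upper)  # project onto the box
    return u

# two moments, three redundant surfaces, deflections limited to +/-0.5
B = np.array([[1.0, 1.0, 0.5],
              [0.0, 1.0, -1.0]])
u = allocate(B, [1.0, 0.2], -0.5, 0.5)
```

When the demanded moment lies inside the attainable moment set the residual vanishes; otherwise the solver returns the closest achievable moment, which is the behavior the abstract's L2-norm criterion asks for.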

  9. Modelling large-scale halo bias using the bispectrum

    CERN Document Server

    Pollack, Jennifer E; Porciani, Cristiano

    2011-01-01

    We study the relation between the halo and matter density fields -- commonly termed bias -- in the LCDM framework. In particular, we examine the local model of biasing at quadratic order in matter density. This model is characterized by parameters b_1 and b_2. Using an ensemble of N-body simulations, we apply several statistical methods to estimate the parameters. We measure halo and matter fluctuations smoothed on various scales and find that the parameters vary with smoothing scale. We argue that, for real-space measurements, owing to the mixing of wavemodes, no scale can be found for which the parameters are independent of smoothing. However, this is not the case in Fourier space. We measure halo power spectra and construct estimates for an effective large-scale bias. We measure the configuration dependence of the halo bispectra B_hhh and reduced bispectra Q_hhh for very large-scale k-space triangles. From this we constrain b_1 and b_2. Using the lowest-order perturbation theory, we find that for B_hhh the...
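The local quadratic biasing model examined here, delta_h = b_1 delta + (b_2/2) delta^2, can be illustrated with a toy least-squares fit on synthetic fields. All numbers below (field variance, noise level, true parameters) are invented for the sketch; the paper itself constrains b_1 and b_2 from power spectra and bispectra of N-body simulations, not this way.

```python
import numpy as np

rng = np.random.default_rng(0)
delta = rng.normal(0.0, 0.3, size=10_000)   # smoothed matter overdensity (synthetic)
b1_true, b2_true = 1.5, -0.4
# halo overdensity from the local quadratic bias model, plus scatter
delta_h = (b1_true * delta + 0.5 * b2_true * delta**2
           + rng.normal(0.0, 0.01, size=delta.size))

# least-squares fit of the model delta_h = b1*delta + (b2/2)*delta^2
A = np.column_stack([delta, 0.5 * delta**2])
(b1_fit, b2_fit), *_ = np.linalg.lstsq(A, delta_h, rcond=None)
```

With low scatter the fit recovers the input parameters closely; in the real-space measurements the abstract describes, the recovered values would instead drift with the smoothing scale.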

  10. Research on Large Energy Power Enterprise Group total compensation conglomerates distribution systems Construction

    Directory of Open Access Journals (Sweden)

    Cheng Jia-xu

    2016-01-01

    Full Text Available In order to solve problems in W company's salary distribution, such as unclear work orientation, insufficient incentives, and improperly set targets, this paper focuses on the characteristics of the enterprise, the company's management and control model, and other aspects in an in-depth study. On the basis of the identified control mode, the pay distribution structure is determined and appropriate indicators are set. These are linked to further clarify the orientation of the distribution, while properly designed target values and incentive mechanisms ensure that the salary distribution model follows the relevant principles, promoting economic performance and the company's strategic objectives.

  11. Modeling Resource Utilization of a Large Data Acquisition System

    CERN Document Server

    Santos, Alejandro; The ATLAS collaboration

    2017-01-01

    The ATLAS 'Phase-II' upgrade, scheduled to start in 2024, will significantly change the requirements under which the data-acquisition system operates. The input data rate, currently fixed around 150 GB/s, is anticipated to reach 5 TB/s. In order to deal with the challenging conditions, and exploit the capabilities of newer technologies, a number of architectural changes are under consideration. Of particular interest is a new component, known as the Storage Handler, which will provide a large buffer area decoupling real-time data taking from event filtering. Dynamic operational models of the upgraded system can be used to identify the required resources and to select optimal techniques. In order to achieve a robust and dependable model, the current data-acquisition architecture has been used as a test case. This makes it possible to verify and calibrate the model against real operation data. Such a model can then be evolved toward the future ATLAS Phase-II architecture. In this paper we introduce the current ...


  13. Modelling large-scale evacuation of music festivals

    Directory of Open Access Journals (Sweden)

    E. Ronchi

    2016-05-01

    Full Text Available This paper explores the use of multi-agent continuous evacuation modelling for representing large-scale evacuation scenarios at music festivals. A 65,000-person-capacity music festival area was simulated using the model Pathfinder. Three evacuation scenarios were developed in order to explore the capabilities of evacuation modelling during such incidents, namely (1) a preventive evacuation of a section of the festival area containing approximately 15,000 people due to a fire breaking out on a ship, (2) an escalating scenario involving the total evacuation of the entire festival area (65,000 people) due to a bomb threat, and (3) a cascading scenario involving the total evacuation of the entire festival area (65,000 people) due to the threat of an explosion caused by a ship engine overheating. This study suggests that the analysis of the people-evacuation time curves produced by evacuation models, coupled with a visual analysis of the simulated evacuation scenarios, allows for the identification of the main factors affecting the evacuation process (e.g., delay times, overcrowding at exits in relation to exit widths, etc.) and potential measures that could improve safety.
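The relation between evacuation time, crowd size and exit width that these scenarios probe can be approximated by a standard hand calculation (the so-called hydraulic model). The sketch below uses illustrative assumed values for pre-movement time, travel distance, walking speed and specific flow; it is not how Pathfinder's agent-based model works.

```python
def evacuation_time(n_people, exit_width_m, pre_movement_s=60.0,
                    travel_dist_m=200.0, walk_speed=1.2, specific_flow=1.33):
    """Hand-calculation estimate of total evacuation time in seconds,
    assuming movement is limited by either walking time or exit-flow
    capacity (specific_flow in persons per metre of exit width per second;
    all defaults are illustrative textbook-style values)."""
    travel = travel_dist_m / walk_speed              # time to reach the exits
    queueing = n_people / (specific_flow * exit_width_m)  # time to pass them
    return pre_movement_s + max(travel, queueing)

# the 15,000-person sectional evacuation through 10 m of total exit width
t = evacuation_time(15_000, 10.0)
```

Even this crude estimate reproduces the abstract's qualitative point: once the crowd is large, total time is dominated by exit width, not walking distance.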

  14. GROUP GUIDANCE SERVICES MANAGEMENT OF BEHAVIORAL TECHNIC HOMEWORK MODEL

    Directory of Open Access Journals (Sweden)

    Juhri A M.

    2013-09-01

    Full Text Available Abstract: This paper describes the management of group guidance services using the behavioral homework-technique model. The ideas outlined here are intended to give counselors insight into managing the implementation of group counseling services effectively, and to serve as a theoretical reference on matters relating to the management of group guidance services, both for counselors and for students who need guidance services as they face various problems, difficulties and obstacles in their learning achievement. In general, this study aims to support the development of social skills for students, especially the ability to communicate with the other participants of the service (students). More specifically, it aims to encourage the development of feelings, thoughts, perceptions, insights and attitudes that support more creative and effective behavior, improving both verbal and non-verbal communication skills of students. Keyword: counselor, counseling, group, student

  15. Algebraic Properties of Curvature Operators in Lorentzian Manifolds with Large Isometry Groups

    Directory of Open Access Journals (Sweden)

    Giovanni Calvaruso

    2010-01-01

    Full Text Available Together with spaces of constant sectional curvature and products of a real line with a manifold of constant curvature, the so-called Egorov spaces and ε-spaces exhaust the class of n-dimensional Lorentzian manifolds admitting a group of isometries of dimension at least ½n(n−1)+1, for almost all values of n [Patrangenaru V., Geom. Dedicata 102 (2003), 25-33]. We prove that the curvature tensors of these spaces satisfy several interesting algebraic properties. In particular, we show that Egorov spaces are Ivanov-Petrova manifolds, curvature-Ricci commuting (indeed, semi-symmetric) and P-spaces, and that ε-spaces are Ivanov-Petrova and curvature-curvature commuting manifolds.

  16. Large-scale climate variation modifies the winter grouping behavior of endangered Indiana bats

    Science.gov (United States)

    Thogmartin, Wayne E.; McKann, Patrick C.

    2014-01-01

    Power laws describe the functional relationship between 2 quantities, such as the frequency of a group varying as a power of group size. We examined whether the annual size of well-surveyed wintering populations of endangered Indiana bats (Myotis sodalis) followed a power law, and then leveraged this relationship to predict whether the aggregation of Indiana bats in winter was influenced by global climate processes. We determined that Indiana bat wintering populations were distributed according to a power law (mean scaling coefficient α = −0.44 [95% confidence interval {95% CI} = −0.61, −0.28]). The antilog of these annual scaling coefficients ranged between 0.67 and 0.81, coincident with the three-fourths power found in many other biological phenomena. We associated temporal patterns in the annual (1983–2011) scaling coefficient with the North Atlantic Oscillation (NAO) index in August (β_NAO,August = −0.017 [90% CI = −0.032, −0.002]), when Indiana bats are deciding when and where to hibernate. After accounting for the strong effect of philopatry to habitual wintering locations, Indiana bats aggregated in larger wintering populations during periods of severe winter and in smaller populations in milder winters. The association with August values of the NAO indicates that bats anticipate future winter weather conditions when deciding where to roost, a heretofore unrecognized role for prehibernation swarming behavior. Future research is needed to understand whether the three-fourths–scaling patterns we observed are related to scaling in metabolism.
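Fitting a power-law exponent to a sample of group sizes, as done for the wintering populations here, is commonly performed by maximum likelihood rather than log-log regression. The sketch below uses invented synthetic data and the continuous-Pareto parameterization p(x) ∝ x^(−a), which differs from the paper's α; the function name and cutoff are assumptions for the example.

```python
import numpy as np

def fit_powerlaw_exponent(x, xmin):
    """Maximum-likelihood estimate of 'a' for p(x) ~ x^(-a), x >= xmin
    (continuous Pareto form; a simple stand-in for the scaling fit
    described in the abstract)."""
    x = np.asarray(x, float)
    x = x[x >= xmin]
    return 1.0 + x.size / np.sum(np.log(x / xmin))

# synthetic check: draw from p(x) ~ x^(-2.5) by inverse-transform sampling
rng = np.random.default_rng(1)
a_true, xmin = 2.5, 1.0
u = rng.uniform(size=50_000)
samples = xmin * (1.0 - u) ** (-1.0 / (a_true - 1.0))
a_hat = fit_powerlaw_exponent(samples, xmin)
```

The MLE avoids the well-known biases of fitting a straight line through a log-log histogram, which matters when, as here, year-to-year changes in the exponent are the signal of interest.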

  17. Renormalization group approach to causal bulk viscous cosmological models

    Energy Technology Data Exchange (ETDEWEB)

    Belinchon, J A [Grupo Inter-Universitario de Analisis Dimensional, Dept. Fisica ETS Arquitectura UPM, Av. Juan de Herrera 4, Madrid (Spain); Harko, T [Department of Physics, University of Hong Kong, Pokfulam Road, Hong Kong (China); Mak, M K [Department of Physics, Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong (China)

    2002-06-07

    The renormalization group method is applied to the study of homogeneous and flat Friedmann-Robertson-Walker type universes, filled with a causal bulk viscous cosmological fluid. The starting point of the study is the consideration of the scaling properties of the gravitational field equations, the causal evolution equation of the bulk viscous pressure and the equations of state. The requirement of scale invariance imposes strong constraints on the temporal evolution of the bulk viscosity coefficient, temperature and relaxation time, thus leading to the possibility of obtaining the bulk viscosity coefficient-energy density dependence. For a cosmological model with bulk viscosity coefficient proportional to the Hubble parameter, we perform the analysis of the renormalization group flow around the scale-invariant fixed point, thereby obtaining the long-time behaviour of the scale factor.

  18. Evaluation of the perceptual grouping parameter in the CTVA model

    Directory of Open Access Journals (Sweden)

    Manuel Cortijo

    2005-01-01

    Full Text Available The CODE Theory of Visual Attention (CTVA) is a mathematical model explaining the effects of grouping by proximity and distance upon reaction times and accuracy of response with regard to elements in the visual display. The predictions of the theory agree quite acceptably, in one and two dimensions (CTVA-2D), with the experimental results (reaction times and accuracy of response). The difference between reaction times for the compatible and incompatible responses, known as the response-compatibility effect, is also acceptably predicted, except at small distances and high numbers of distractors. Further results using the same paradigm at even smaller distances have now been obtained, showing greater discrepancies. We have therefore introduced a method to evaluate the strength of sensory evidence (the eta parameter), which takes grouping by similarity into account and minimizes these discrepancies.

  19. Clustering of Local Group distances: publication bias or correlated measurements? I. The Large Magellanic Cloud

    CERN Document Server

    de Grijs, Richard; Bono, Giuseppe

    2014-01-01

    The distance to the Large Magellanic Cloud (LMC) represents a key local rung of the extragalactic distance ladder. Yet, the galaxy's distance modulus has long been an issue of contention, in particular in view of claims that most newly determined distance moduli cluster tightly - and with a small spread - around the "canonical" distance modulus, (m-M)_0 = 18.50 mag. We compiled 233 separate LMC distance determinations published between 1990 and 2013. Our analysis of the individual distance moduli, as well as of their two-year means and standard deviations resulting from this largest data set of LMC distance moduli available to date, focuses specifically on Cepheid and RR Lyrae variable-star tracer populations, as well as on distance estimates based on features in the observational Hertzsprung-Russell diagram. We conclude that strong publication bias is unlikely to have been the main driver of the majority of published LMC distance moduli. However, for a given distance tracer, the body of publications leading ...

  20. Large N$_{c}$ universality of the baryon Isgur-Wise form factor: the group theoretical approach

    CERN Document Server

    Chow, C K

    1996-01-01

    In a previous article, it was proved in the framework of the chiral soliton model that the same Isgur-Wise form factor describes the semileptonic Λ_b → Λ_c and Σ^{(*)}_b → Σ^{(*)}_c decays in the large N_c limit. It is shown here that this result is in fact independent of the chiral soliton model and is solely a consequence of the spin-flavor SU(4) symmetry which arises in the baryon sector in the large N_c limit.

  1. Trials of large group teaching in Malaysian private universities: a cross sectional study of teaching medicine and other disciplines

    Directory of Open Access Journals (Sweden)

    Too LaySan

    2011-09-01

    Full Text Available Abstract Background This is a pilot cross-sectional study, using both quantitative and qualitative approaches, of tutors teaching large classes in private universities in the Klang Valley (comprising Kuala Lumpur, its suburbs, adjoining towns in the State of Selangor and the State of Negeri Sembilan), Malaysia. The general aim of this study is to determine the difficulties faced by tutors when teaching large groups of students and to outline appropriate recommendations for overcoming them. Findings Thirty-two academics from six private universities, from faculties such as Medical Sciences, Business, Information Technology, and Engineering, participated in this study. SPSS software was used to analyse the data. The results in general indicate that the conventional instructor-student approach has its shortcomings and requires changes. Interestingly, tutors from Medicine and IT less often faced difficulties and had positive experiences in teaching large groups of students. Conclusion Several suggestions were proposed to overcome these difficulties, ranging from breaking into smaller classes to adopting innovative teaching and using interactive learning methods that incorporate interactive assessment and creative technology to enhance student learning. Furthermore, the study provides insights on the trials of large-group teaching, clearly identified to help tutors realise their impact on teaching. The suggestions to overcome these difficulties and to maximize student learning can serve as a guideline for tutors who face these challenges.

  2. Quasi Hopf algebras, group cohomology and orbifold models

    Energy Technology Data Exchange (ETDEWEB)

    Dijkgraaf, R. (Princeton Univ., NJ (USA). Joseph Henry Labs.); Pasquier, V. (CEA Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France). Inst. de Recherche Fondamentale (IRF)); Roche, P. (Ecole Polytechnique, 91 - Palaiseau (France). Centre de Physique Theorique)

    1991-01-01

    We construct non-trivial quasi Hopf algebras associated to any finite group G and any element of H^3(G, U(1)). We analyze in detail the set of representations of these algebras and show that we recover the main interesting data attached to particular orbifolds of Rational Conformal Field Theory, or equivalently to the topological field theories studied by R. Dijkgraaf and E. Witten. This leads us to the construction of the R-matrix structure in non-abelian RCFT orbifold models. (orig.)

  3. Effects of ischemic preconditioning in a pig model of large-for-size liver transplantation

    Science.gov (United States)

    Leal, Antonio José Gonçalves; Tannuri, Ana Cristina Aoun; Belon, Alessandro Rodrigo; Guimarães, Raimundo Renato Nunes; Coelho, Maria Cecília Mendonça; de Oliveira Gonçalves, Josiane; Serafini, Suellen; de Melo, Evandro Sobroza; Tannuri, Uenis

    2015-01-01

    OBJECTIVE: In most cases of pediatric liver transplantation, the clinical scenario of large-for-size transplants can lead to hepatic dysfunction and a decreased blood supply to the liver graft. The objective of the present experimental investigation was to evaluate the effects of ischemic preconditioning on this clinical entity. METHODS: Eighteen pigs were divided into three groups and underwent liver transplantation: a control group, in which the weights of the donors were similar to those of the recipients, a large-for-size group, and a large-for-size + ischemic preconditioning group. Blood samples were collected from the recipients to evaluate the pH and the sodium, potassium, aspartate aminotransferase and alanine aminotransferase levels. In addition, hepatic tissue was sampled from the recipients for histological evaluation, immunohistochemical analyses to detect hepatocyte apoptosis and proliferation and molecular analyses to evaluate the gene expression of Bax (pro-apoptotic), Bcl-XL (anti-apoptotic), c-Fos and c-Jun (immediate-early genes), ischemia-reperfusion-related inflammatory cytokines (IL-1, TNF-alpha and IL-6, which is also a stimulator of hepatocyte regeneration), intracellular adhesion molecule, endothelial nitric oxide synthase (a mediator of the protective effect of ischemic preconditioning) and TGF-beta (a pro-fibrogenic cytokine). RESULTS: All animals developed acidosis. At 1 hour and 3 hours after reperfusion, the animals in the large-for-size and large-for-size + ischemic preconditioning groups had decreased serum levels of Na and increased serum levels of K and aspartate aminotransferase compared with the control group. The molecular analysis revealed higher expression of the Bax, TNF-alpha, I-CAM and TGF-beta genes in the large-for-size group compared with the control and large-for-size + ischemic preconditioning groups. Ischemic preconditioning was responsible for an increase in c-Fos, IL-1, IL-6 and e-NOS gene expression. CONCLUSION


  5. Large scale inference in the Infinite Relational Model: Gibbs sampling is not enough

    DEFF Research Database (Denmark)

    Albers, Kristoffer Jon; Moth, Andreas Leon Aagard; Mørup, Morten

    2013-01-01

    The stochastic block-model and its non-parametric extension, the Infinite Relational Model (IRM), have become key tools for discovering group structure in complex networks. Identifying these groups is a combinatorial inference problem which is usually solved by Gibbs sampling. However, whether Gibbs sampling suffices and can be scaled to the modeling of large-scale real-world complex networks has not been examined sufficiently. In this paper we evaluate the performance and mixing ability of Gibbs sampling in the Infinite Relational Model (IRM) by implementing a high-performance Gibbs sampler. We find that Gibbs sampling can be computationally scaled to handle millions of nodes and billions of links. Investigating the behavior of the Gibbs sampler for different network sizes, we find that the mixing ability decreases drastically with the network size, clearly indicating a need...
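The combinatorial inference step benchmarked here can be illustrated with the simplest finite relative of the IRM: a K-block stochastic block model with Beta-Bernoulli link probabilities, sampled by a brute-force collapsed Gibbs sweep. Everything below is an illustrative assumption, not the paper's implementation; in particular, K is fixed in advance, whereas the IRM lets the number of blocks grow under a Chinese-restaurant-process prior.

```python
import numpy as np
from math import lgamma

def betaln(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def log_marginal(X, z, K, a=1.0, b=1.0):
    """Log marginal likelihood of an undirected adjacency matrix X given
    block labels z, with Beta(a, b) priors on block-pair link probabilities
    (no self-loops; link probabilities integrated out analytically)."""
    n = len(z)
    lp = 0.0
    for k in range(K):
        for l in range(k, K):
            ones = zeros = 0
            for i in range(n):
                for j in range(i + 1, n):
                    if {z[i], z[j]} == {k, l}:   # pair (i, j) sits in block pair (k, l)
                        if X[i, j]:
                            ones += 1
                        else:
                            zeros += 1
            lp += betaln(a + ones, b + zeros) - betaln(a, b)
    return lp

def gibbs_sweep(X, z, K, rng, alpha=1.0):
    """One collapsed Gibbs sweep: resample each node's block label from its
    full conditional (brute-force recomputation, for clarity only)."""
    n = len(z)
    for i in range(n):
        logp = np.empty(K)
        for k in range(K):
            z[i] = k
            # Dirichlet(alpha) prior count term + marginal data likelihood
            logp[k] = np.log(alpha + np.sum(np.delete(z, i) == k)) \
                      + log_marginal(X, z, K)
        p = np.exp(logp - logp.max())
        z[i] = rng.choice(K, p=p / p.sum())
    return z

# planted two-block network: dense within blocks, empty between them
n = 12
X = np.zeros((n, n), dtype=int)
X[:6, :6] = 1
X[6:, 6:] = 1
np.fill_diagonal(X, 0)
rng = np.random.default_rng(0)
z = np.array([0] * 6 + [1] * 6)
z = gibbs_sweep(X, z, K=2, rng=rng)   # the planted partition is stable
```

A practical sampler updates the per-block-pair link counts incrementally instead of recomputing the marginal likelihood from scratch for every candidate label; that difference is what makes the million-node scaling reported in the paper possible.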

  6. Improving large-scale groundwater models by considering fossil gradients

    Science.gov (United States)

    Schulz, Stephan; Walther, Marc; Michelsen, Nils; Rausch, Randolf; Dirks, Heiko; Al-Saud, Mohammed; Merz, Ralf; Kolditz, Olaf; Schüth, Christoph

    2017-05-01

    Due to the limited availability of surface water, many arid to semi-arid countries rely on their groundwater resources. Despite the quasi-absence of present-day replenishment, some of these groundwater bodies contain large amounts of water, which was recharged during pluvial periods of the Late Pleistocene to Early Holocene. These mostly fossil, non-renewable resources require different management schemes compared to those usually applied in renewable systems. Fossil groundwater is a finite resource and its withdrawal implies mining of aquifer storage reserves. Although they receive almost no recharge, some of these aquifers show notable hydraulic gradients and a flow towards their discharge areas, even without pumping. As a result, these systems have more discharge than recharge and hence are not in steady state, which makes their modelling, and in particular their calibration, very challenging. In this study, we introduce a new calibration approach composed of four steps: (i) estimating the fossil discharge component, (ii) determining the origin of fossil discharge, (iii) fitting the hydraulic conductivity with a pseudo steady-state model, and (iv) fitting the storage capacity with a transient model by reconstructing head drawdown induced by pumping activities. Finally, we test the relevance of our approach and evaluate the effect of considering or ignoring fossil gradients on aquifer parameterization for the Upper Mega Aquifer (UMA) on the Arabian Peninsula.

  7. The Beyond the Standard Model Working Group Summary Report

    CERN Document Server

    Azuelos, Georges; Hewett, J L; Landsberg, G L; Matchev, K; Paige, Frank E; Rizzo, T; Rurua, L; Abdullin, S; Albert, A; Allanach, Benjamin C; Blazek, T; Cavalli, D; Charles, F; Cheung, K; Dedes, A; Dimopoulos, Savas K; Dreiner, H; Ellwanger, Ulrich; Gorbunov, D S; Heinemeyer, S; Hinchliffe, Ian; Hugonie, C; Moretti, S; Polesello, G; Przysiezniak, H; Richardson, Peter; Vacavant, L; Weiglein, Georg

    2002-01-01

    Report of the "Beyond the Standard Model" working group for the Workshop `Physics at TeV Colliders', Les Houches, France, 21 May - 1 June 2001. It consists of 18 separate parts: 1. Preface; 2. Theoretical Discussion; 3. Numerical Calculation of the mSUGRA and Higgs Spectrum; 4. Theoretical Uncertainties in Sparticle Mass Predictions; 5. High Mass Supersymmetry with High Energy Hadron Colliders; 6. SUSY with Heavy Scalars at LHC; 7. Inclusive Study of MSSM in CMS; 8. Establishing a No-Lose Theorem for NMSSM Higgs Boson Discovery at the LHC; 9. Effects of Supersymmetric Phases on Higgs Production in Association with Squark Pairs in the Minimal Supersymmetric Standard Model; 10. Study of the Lepton Flavour Violating Decays of Charged Fermions in SUSY GUTs; 11. Interactions of the Goldstino Supermultiplet with Standard Model Fields; 12. Attempts at Explaining the NuTeV Observation of Di-Muon Events; 13. Kaluza-Klein States of the Standard Model Gauge Bosons: Constraints From High Energy Experiments; 14. Kaluza-Kl...

  8. One decade of the Data Fusion Information Group (DFIG) model

    Science.gov (United States)

    Blasch, Erik

    2015-05-01

    The revision of the Joint Directors of the Laboratories (JDL) Information Fusion model in 2004 discussed information processing, incorporated the analyst, and was coined the Data Fusion Information Group (DFIG) model. Since that time, developments in information technology (e.g., cloud computing, applications, and multimedia) have altered the role of the analyst. Data production has outpaced the analyst; however, the analyst still has the role of data refinement and information reporting. In this paper, we highlight three examples addressed by the DFIG model. The first is the role of the analyst in providing semantic queries (through an ontology) so that the vast amounts of data available can be indexed, accessed, retrieved, and processed. The second is reporting, which requires the analyst to collect the data into a condensed and meaningful form through information management. The last is the interpretation of the resolved information, which must include contextual information not inherent in the data itself. Through a literature review, the DFIG developments of the last decade demonstrate the usability of the DFIG model in bringing together the user (analyst or operator) and the machine (information fusion or manager) in a systems design.

  9. Renormalization group approach to a p-wave superconducting model

    Energy Technology Data Exchange (ETDEWEB)

    Continentino, Mucio A.; Deus, Fernanda [Centro Brasileiro de Pesquisas Fisicas, Rua Dr. Xavier Sigaud, 150, Urca 22290-180, Rio de Janeiro, RJ (Brazil); Caldas, Heron [Departamento de Ciências Naturais, Universidade Federal de São João Del Rei, 36301-000, São João Del Rei, MG (Brazil)

    2014-04-01

We present in this work an exact renormalization group (RG) treatment of a one-dimensional p-wave superconductor. The model, proposed by Kitaev, consists of a chain of spinless fermions with a p-wave gap. It is a paradigmatic model of great current interest, since it presents a weak-pairing superconducting phase with Majorana fermions at the ends of the chain, which are predicted to be useful for quantum computation. The RG treatment allows us to obtain the phase diagram of the model and to study the quantum phase transition from the weak- to the strong-pairing phase. It yields the attractors of these phases and the critical exponents of the weak-to-strong-pairing transition. We show that the weak-pairing phase of the model is governed by a chaotic attractor, which is non-trivial in both its topological and RG properties. In the strong-pairing phase the RG flow is towards a conventional strong-coupling fixed point. Finally, we propose an alternative way of obtaining p-wave superconductivity in a one-dimensional system without spin–orbit interaction.

  10. Full reduction of large finite random Ising systems by real space renormalization group.

    Science.gov (United States)

    Efrat, Avishay; Schwartz, Moshe

    2003-08-01

We describe how to evaluate approximately various physically interesting quantities in random Ising systems by direct renormalization of a finite system. The renormalization procedure is used to reduce the number of degrees of freedom to a number that is small enough to enable direct summation over the surviving spins. This procedure can be used to obtain averages of functions of the surviving spins. We show how to evaluate averages that involve spins that do not survive the renormalization procedure. We show, for the random field Ising model, how to obtain Gamma(r) = <s(0)s(r)> - <s(0)><s(r)>, the "connected" correlation function, and S(r) = <s(0)><s(r)>, the "disconnected" correlation function (angle brackets denote thermal averages; both functions are averaged over the random-field realizations). Consequently, we show how to obtain the average susceptibility and the average energy. For an Ising system with random bonds and random fields, we show how to obtain the average specific heat. We conclude by presenting our numerical results for the average susceptibility and the function Gamma(r) along one of the principal axes. (In this work, the full three-dimensional (3D) correlation is calculated, and not just parameters such as nu or eta.) The results for the average susceptibility are used to extract the critical temperature and critical exponents of the 3D random field Ising system.
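
These quantities can be checked against exact enumeration on a system small enough that no renormalization is needed. The sketch below is an illustration, not the authors' renormalization method: it computes the disorder-averaged connected and disconnected correlations of a short 1D random-field Ising chain by direct summation over all states, with arbitrary illustrative choices for the chain length, temperature, and field strength.

```python
import itertools
import numpy as np

def rfim_correlations(J=1.0, h_sigma=0.5, n=8, T=2.0, n_disorder=200, seed=0):
    """Exact enumeration of a small 1D random-field Ising chain.

    Returns the disorder averages of Gamma(r) = <s(0)s(r)> - <s(0)><s(r)>
    (connected) and S(r) = <s(0)><s(r)> (disconnected), where <...> is the
    thermal (Boltzmann) average for one random-field realization.
    """
    rng = np.random.default_rng(seed)
    spins = np.array(list(itertools.product([-1, 1], repeat=n)))  # all 2^n states
    beta = 1.0 / T
    gamma = np.zeros(n)
    s_dis = np.zeros(n)
    for _ in range(n_disorder):
        h = rng.normal(0.0, h_sigma, size=n)                # quenched random fields
        e = -J * np.sum(spins[:, :-1] * spins[:, 1:], axis=1) - spins @ h
        w = np.exp(-beta * (e - e.min()))
        w /= w.sum()                                        # Boltzmann weights
        m = w @ spins                                       # thermal <s_i>
        for r in range(n):
            ss = w @ (spins[:, 0] * spins[:, r])            # thermal <s_0 s_r>
            gamma[r] += ss - m[0] * m[r]
            s_dis[r] += m[0] * m[r]
    return gamma / n_disorder, s_dis / n_disorder

gamma, s_dis = rfim_correlations()
print(gamma[:4], s_dis[:4])
```

Since <s(0)s(0)> = 1 in every realization, the two functions satisfy Gamma(0) + S(0) = 1, which serves as a sanity check on the enumeration.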

  11. Large group intervention for military reintegration: peer support & Yellow Ribbon enhancements.

    Science.gov (United States)

    Castellano, Cherie; Everly, George S

    2010-01-01

University Behavioral HealthCare, University of Medicine and Dentistry of New Jersey, in partnership with the New Jersey Department of Military & Veterans Affairs, established the "New Jersey Veterans Helpline" in 2005, modeled after the "Cop 2 Cop Helpline," to assist veterans and their families within the state. The events of September 11, 2001, demanded an unprecedented response to the behavioral health care needs of first responders in New Jersey and highlighted similar needs among the military population. Although the New Jersey Veterans Helpline was initiated as a peer-based helpline, the need for pre- and post-deployment support quickly emerged. This paper describes the application of the Cop 2 Cop intervention developed with the Port Authority Police Department (PAPD), the "Acute Stress Management Reentry Program." This program was adapted and combined with Yellow Ribbon Guideline enhancements to create a "60 Day Resiliency & Reintegration Program," delivered by the New Jersey Veterans program to over 2,400 soldiers returning from war.

  12. The unified minimal supersymmetric model with large Yukawa couplings

    CERN Document Server

    Rattazzi, Riccardo

    1996-01-01

    The consequences of assuming the third-generation Yukawa couplings are all large and comparable are studied in the context of the minimal supersymmetric extension of the standard model. General aspects of the RG evolution of the parameters, theoretical constraints needed to ensure proper electroweak symmetry breaking, and experimental and cosmological bounds on low-energy parameters are presented. We also present complete and exact semi-analytic solutions to the 1-loop RG equations. Focusing on SU(5) or SO(10) unification, we analyze the relationship between the top and bottom masses and the superspectrum, and the phenomenological implications of the GUT conditions on scalar masses. Future experimental measurements of the superspectrum and of the strong coupling will distinguish between various GUT-scale scenarios. And if present experimental knowledge is to be accounted for most naturally, a particular set of predictions is singled out.

  13. Soil carbon management in large-scale Earth system modelling

    DEFF Research Database (Denmark)

    Olin, S.; Lindeskog, M.; Pugh, T. A. M.

    2015-01-01

Croplands are vital ecosystems for human well-being and provide important ecosystem services such as crop yields, retention of nitrogen and carbon storage. On large (regional to global) scales, assessments of how these different services will vary in space and time, especially in response to cropland management, are scarce. We explore cropland management alternatives and the effect these can have on future C and N pools and fluxes using the land-use-enabled dynamic vegetation model LPJ-GUESS (Lund–Potsdam–Jena General Ecosystem Simulator). Simulated crop production, cropland carbon storage … Our results show that the potential for carbon sequestration due to typical cropland management practices such as no-till management and cover crops proposed in previous studies is not realised, globally or over larger climatic regions. Our results highlight important considerations to be made when …

  14. A stochastic large deformation model for computational anatomy

    DEFF Research Database (Denmark)

    Arnaudon, Alexis; Holm, Darryl D.; Pai, Akshay Sadananda Uppinakudru

    2017-01-01

In the study of shapes of human organs using computational anatomy, variations are found to arise from inter-subject anatomical differences, disease-specific effects, and measurement noise. This paper introduces a stochastic model for incorporating random variations into the Large Deformation Diffeomorphic Metric Mapping (LDDMM) framework. By accounting for randomness in a particular setup which is crafted to fit the geometrical properties of LDDMM, we formulate the template estimation problem for landmarks with noise and give two methods for efficiently estimating the parameters of the noise fields from a prescribed data set. One method directly approximates the time evolution of the variance of each landmark by a finite set of differential equations, and the other is based on an Expectation-Maximisation algorithm. In the second method, the evaluation of the data likelihood is achieved without …

  15. Large geospatial images discovery: metadata model and technological framework

    Directory of Open Access Journals (Sweden)

    Lukáš Brůha

    2015-12-01

The advancements in geospatial web technology have triggered efforts for the disclosure of valuable resources in historical collections. This paper focuses on the role of spatial data infrastructures (SDI) in such efforts. The work describes the interplay between SDI technologies and potential use cases in libraries, such as cartographic heritage. A metadata model is introduced to link up the sources from these two distinct fields. To enhance the data search capabilities, the work focuses on the representation of the content-based metadata of raster images, which is the crucial prerequisite to targeting the search more effectively. The architecture of the prototype system for automatic raster data processing, storage, analysis and distribution is introduced. The architecture responds to the characteristics of the input datasets, namely the continuous flow of very large raster data and related metadata. The proposed solutions are illustrated in a case study of cartometric analysis of digitised early maps and related metadata encoding.

  16. Computational modeling of single particle scattering over large distances

    Science.gov (United States)

    Rapp, Rebecca; Plumley, Rajan; McCracken, Michael

    2016-09-01

We present a Monte Carlo simulation of the propagation of a single particle through a large three-dimensional volume under the influence of individual scattering events. In such systems, short paths can be quickly and accurately simulated using random walks defined by individual scattering parameters, but the simulation time greatly increases as the size of the space grows. We present a method for reducing the overall simulation time by restricting the simulation to a cube of unit length; each 'cell' is characterized by a set of parameters which dictate the distributions of allowable step lengths and polar scattering angles. We model propagation over large distances by constructing a lattice of cells with physical parameters that depend on position, such that the full set represents the entire volume available to the particle. With these, we propose the use of Markov chains to determine a probable path for the particle, thereby removing the need to simulate every step in the particle's path. For a single particle with constant velocity, we can use the step statistics to determine the travel time of the particle. We investigate the effect of scattering parameters, such as the average step distance and the possible scattering angles, on the probabilities associated with a cell.
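
A minimal version of the per-cell step simulation can be sketched as follows. All parameters, the isotropic-rescattering assumption, and the unit-cube geometry are illustrative choices, not values from the paper: a particle enters the cell through one face, takes exponentially distributed steps with a random new direction after each scattering event, and its travel time is accumulated from the step statistics until it exits.

```python
import numpy as np

def simulate_cell(n_particles=2000, mean_step=0.1, speed=1.0, seed=1):
    """Propagate particles through a unit-cube 'cell' by individual
    scattering events and return the mean time to exit the cell.

    mean_step (the mean free path) and the isotropic scattering
    assumption are illustrative, not parameters from the paper.
    """
    rng = np.random.default_rng(seed)
    times = []
    for _ in range(n_particles):
        pos = np.array([0.5, 0.5, 0.0])          # enter through the bottom face
        direction = np.array([0.0, 0.0, 1.0])    # initial direction: inward
        t = 0.0
        while np.all((pos >= 0.0) & (pos <= 1.0)):
            step = rng.exponential(mean_step)    # distance to next scattering
            pos = pos + step * direction
            t += step / speed                    # constant-velocity travel time
            v = rng.normal(size=3)               # isotropic rescatter:
            direction = v / np.linalg.norm(v)    # uniform direction on the sphere
        times.append(t)
    return float(np.mean(times))

print(simulate_cell())
```

Collecting the exit face alongside the exit time for many such runs would give the per-cell transition probabilities needed to drive the Markov chain over the lattice of cells.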

  17. Large scale solar district heating. Evaluation, modelling and designing

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build a simulation tool, and to demonstrate the application of the tool in design studies and a local energy-planning case. The evaluation of the central solar heating technology is based on measurements at the case plant in Marstal, Denmark, and on published and unpublished data for other, mainly Danish, CSDHP plants. Evaluations of the thermal, economic and environmental performance are reported, based on the experience of the last decade. The measurements from the Marstal case are analysed, experience extracted and minor improvements to the plant design proposed. For the detailed design and energy planning of CSDHPs, a computer simulation model is developed and validated against the measurements from the Marstal case. The final model is then generalised to a 'generic' model for CSDHPs in general. The meteorological reference data set, the Danish Reference Year, is applied to find the mean performance of the plant designs. To find the expected variability of the thermal performance of such plants, a method is proposed in which data from a year with poor solar irradiation and a year with strong solar irradiation are applied. Equipped with the simulation tool, design studies are carried out, ranging from parameter analyses, through energy planning for a new settlement, to a proposal for combining flat-plate solar collectors with high-performance solar collectors, exemplified by a trough solar collector. The methodology of utilising computer simulation proved to be a cheap and relevant tool in the design of future solar heating plants. The thesis also exposed the need to develop computer models for the more advanced solar collector designs, and especially for the control operation of CSDHPs. In the final chapter the CSDHP technology is put into perspective with respect to other possible technologies, to assess the relevance of the application.

  18. Renormalization group flow of scalar models in gravity

    Energy Technology Data Exchange (ETDEWEB)

    Guarnieri, Filippo

    2014-04-08

In this Ph.D. thesis we study the issue of the renormalizability of gravitation in the context of the renormalization group (RG), employing both perturbative and non-perturbative techniques. In particular, we focus on different gravitational models and approximations in which a central role is played by a scalar degree of freedom, since their RG flow is easier to analyze. We restrict our interest to two quantum gravity approaches that have gained a lot of attention recently, namely the asymptotic safety scenario for gravity and Horava-Lifshitz quantum gravity. In the so-called asymptotic safety conjecture, the high-energy regime of gravity is controlled by a non-Gaussian fixed point which ensures non-perturbative renormalizability and finiteness of the correlation functions. We investigate the existence of such a non-trivial fixed point using the functional renormalization group, a continuum version of Wilson's non-perturbative renormalization group. In particular we quantize the sole conformal degree of freedom, an approximation that has been shown to lead to a qualitatively correct picture. The question of the existence of a non-Gaussian fixed point in an infinite-dimensional parameter space, that is, for a generic f(R) theory, cannot however be studied using such a conformally reduced model. Hence we study it by quantizing a dynamically equivalent scalar-tensor theory, i.e. a generic Brans-Dicke theory with ω=0, in the local potential approximation. Finally, using a perturbative RG scheme, we investigate the asymptotic freedom of Horava-Lifshitz gravity, an approach based on the emergence of an anisotropy between space and time which lifts Newton's constant to a marginal coupling and explicitly preserves unitarity. In particular we evaluate the one-loop correction in 2+1 dimensions, quantizing only the conformal degree of freedom.

  19. Multistability in Large Scale Models of Brain Activity.

    Directory of Open Access Journals (Sweden)

    Mathieu Golos

    2015-12-01

Noise-driven exploration of a brain network's dynamic repertoire has been hypothesized to be causally involved in cognitive function, aging and neurodegeneration. The dynamic repertoire crucially depends on the network's capacity to store patterns, as well as on their stability. Here we systematically explore the capacity of networks derived from human connectomes to store attractor states, as well as various network mechanisms to control the brain's dynamic repertoire. Using a deterministic graded-response Hopfield model with connectome-based interactions, we reconstruct the system's attractor space through a uniform sampling of the initial conditions. Large fixed-point attractor sets are obtained in the low-temperature condition, with a larger number of attractors than previously reported. Different variants of the initial model, including (i) a uniform activation threshold or (ii) a global negative feedback, produce a similarly robust multistability in a limited parameter range. A numerical analysis of the distribution of the attractors identifies spatially segregated components, with a centro-medial core and several well-delineated regional patches. These different modes share similarity with the fMRI independent components observed in the "resting state" condition. We demonstrate non-stationary behavior in noise-driven generalizations of the models, with different meta-stable attractors visited along the same time course. Only the model with a global dynamic density control displays robust and long-lasting non-stationarity, with no tendency toward either overactivity or extinction. The best fit with empirical signals is observed at the edge of multistability, a parameter region that also corresponds to the highest entropy of the attractors.
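
The attractor-space reconstruction described above, uniform sampling of initial conditions followed by deterministic relaxation to a fixed point, can be sketched in a few lines. Here a random symmetric coupling matrix stands in for the connectome-based weights, and the damped update rule and all parameter values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def count_attractors(n=12, n_samples=200, beta=5.0, alpha=0.5, seed=4):
    """Uniformly sample initial conditions of a deterministic
    graded-response Hopfield network and count distinct fixed points
    of x = tanh(beta * W x), reached by damped synchronous iteration.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n, n)) / np.sqrt(n)   # stand-in for connectome weights
    W = (W + W.T) / 2.0                        # symmetric couplings
    np.fill_diagonal(W, 0.0)
    attractors = set()
    for _ in range(n_samples):
        x = rng.uniform(-1.0, 1.0, size=n)     # uniform initial condition
        for _ in range(2000):
            # damping (alpha < 1) suppresses the 2-cycles that plain
            # synchronous updates can produce; fixed points are unchanged
            x_new = (1 - alpha) * x + alpha * np.tanh(beta * (W @ x))
            converged = np.max(np.abs(x_new - x)) < 1e-9
            x = x_new
            if converged:
                break
        attractors.add(tuple(np.round(x, 4)))  # group converged states
    return len(attractors)

print(count_attractors())
```

Replacing the random matrix with actual connectome weights and lowering the "temperature" (raising beta) reproduces the regime in which large fixed-point attractor sets appear.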

  20. Mathematical modeling of local perfusion in large distensible microvascular networks

    Science.gov (United States)

    Causin, Paola; Malgaroli, Francesca

    2017-08-01

Microvessels, blood vessels with diameters below 200 microns, form large, intricate networks organized into arterioles, capillaries and venules. In these networks, the distribution of flow and pressure drop is a highly interlaced function of single-vessel resistances and mutual vessel interactions. In this paper we propose a mathematical and computational model to study the behavior of microcirculatory networks subjected to different conditions. The network geometry is composed of a graph of connected straight cylinders, each one representing a vessel. The blood flow and pressure drop across a single vessel, further split into smaller elements, are related through a generalized Ohm's law featuring a conductivity parameter, a function of the vessel cross-sectional area and geometry, which deform under pressure loads. Membrane theory is used to describe the deformation of vessel lumina, tailored to the structure of thick-walled arterioles and thin-walled venules. In addition, since venules can experience negative transmural pressures, a buckling model is also included to represent vessel collapse. The complete model, including arterioles, capillaries and venules, is a nonlinear system of PDEs, which is approached numerically by finite element discretization and linearization techniques. We use the model to simulate flow in the microcirculation of the human eye retina, a terminal system with a single inlet and outlet. After a phase of validation against experimental measurements, we simulate the network response to different interstitial pressure values, both for global and for localized variations of the interstitial pressure. In both cases, significant redistributions of the blood flow in the network arise, highlighting the importance of considering single-vessel behavior along with vessel position and connectivity in the network.
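
In the rigid-vessel, linear special case, the generalized Ohm's law Q = G (p_i - p_j) combined with flow conservation at the nodes reduces to a weighted graph-Laplacian system. The sketch below is illustrative only — the paper's model is nonlinear, with pressure-dependent conductivities and buckling — and solves such a network with fixed inlet and outlet pressures; the graph and all numbers are invented.

```python
import numpy as np

def network_flow(edges, conductances, p_in, p_out, n_nodes):
    """Solve steady flow on a rigid vessel network.

    Each edge (i, j) with conductance g carries flow Q = g * (p_i - p_j);
    flow conservation at interior nodes gives a weighted graph-Laplacian
    system, with Dirichlet pressures imposed at inlet (node 0) and
    outlet (node n_nodes - 1).
    """
    L = np.zeros((n_nodes, n_nodes))              # weighted graph Laplacian
    for (i, j), g in zip(edges, conductances):
        L[i, i] += g; L[j, j] += g
        L[i, j] -= g; L[j, i] -= g
    b = np.zeros(n_nodes)
    for node, p in ((0, p_in), (n_nodes - 1, p_out)):
        L[node, :] = 0.0                          # impose Dirichlet pressure
        L[node, node] = 1.0
        b[node] = p
    pressures = np.linalg.solve(L, b)
    flows = [g * (pressures[i] - pressures[j])
             for (i, j), g in zip(edges, conductances)]
    return pressures, flows

# toy network: an arteriole feeding two parallel capillaries that rejoin in a venule
edges = [(0, 1), (1, 2), (1, 3), (2, 4), (3, 4)]
p, q = network_flow(edges, [4.0, 1.0, 1.0, 1.0, 1.0],
                    p_in=60.0, p_out=15.0, n_nodes=5)
print(p)
```

Making the conductances functions of the computed transmural pressure and iterating this linear solve to a fixed point would be one simple way to mimic the linearization strategy mentioned in the abstract.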

  1. A small group learning model for evidence-based medicine

    Directory of Open Access Journals (Sweden)

    Al Achkar M

    2016-10-01

Morhaf Al Achkar, M Kelly Davies; Department of Family Medicine, Indiana University School of Medicine, Indianapolis, IN, USA. Background: Evidence-based medicine (EBM) skills are invaluable tools for residents and practicing physicians. The purpose of this study is to evaluate the effectiveness of small-group learning models in teaching fundamental EBM skills. Methods: The intervention consisted of an EBM bootcamp divided into four 2-hour sessions across 4-week rotations. Residents worked in small groups of three to four to explore the fundamentals of EBM through interactive dialogue and mock clinical scenario practice. The intervention's effectiveness was evaluated using pre- and post-assessments. Results: A total of 40 (93.0%) residents out of a potential 43 participated in the EBM bootcamps across the 3 years. There was a significant improvement of 3.28 points in self-assessed EBM skills, from an average of 9.66 to 12.945 out of a maximum score of 15 (P=0.000). There was a significant improvement of 1.68 points on the EBM skills test, from an average of 6.02 to 7.71 out of a maximum score of 9 (P=0.00). All residents (100%) agreed or strongly agreed that EBM is important for a physician's clinical practice; this view did not change after the training. Conclusion: A brief small-group interactive workshop in basic EBM skills at the start of residency was effective in developing fundamental EBM skills. Keywords: evidence-based medicine, resident training, small group

  2. Large Sanjiang basin groups outside of the Songliao Basin Meso-Senozoic Tectonic-sediment evolution and hydrocarbon accumulation

    Science.gov (United States)

    Zheng, M.; Wu, X.

    2015-12-01

Basic geological problems remain the bottleneck of exploration work in the larger Sanjiang basin groups; in general terms, they involve two aspects, the prototype basins and the basin-forming mechanism. In this paper, field geological surveys, well-logging data analysis and seismic data interpretation are used to discuss the prototype basins and basin-forming mechanisms of the large Sanjiang basin groups. The main conclusions are as follows: 1. Group-level formations in the Sanjiang region can be fully correlated. 2. Tensional faults, compressional faults, shear structures and composite fractures combining these types are developed in the study area; by orientation they can be divided into SN-, EW-, NNE-, NEE-, NNW- and NWW-trending fracture groups. 3. The large Sanjiang basin has two main directions of tectonic evolution, SN and EW; the Cenozoic basins of the Sanjiang region formed at the junction of the two tectonic domains of the Paleo-Asian Ocean and the Pacific. 4. The large Sanjiang basin experienced a late Mesozoic tectonic evolution of two stages and nine episodes. The first stage, basement development, comprises ① the Mesozoic before the Jurassic and ② the Early Jurassic. The second stage, cover development, comprises ③ a Late Jurassic compressional depression stage; ④ an Early Cretaceous rifting stage; ⑤ a mid-Early Cretaceous depression stage; ⑥ a late Early Cretaceous extensional rifting stage; ⑦ a Late Cretaceous compressional inversion stage; ⑧ the Paleogene-Neogene; and ⑨ the recent Ji Baoquan sedimentary episode. 5. The large Sanjiang basin group is in fact a residual basin structure, and can be divided into residual-superimposed (Founder, Tangyuan depression, Hulin Basin), residual-inherited (Sanjiang basin) and residual-reformed (Jixi, Boli, Hegang basin) types. There are two developed depressions, and the mechanism …

  3. Exploring the Impact of Students' Learning Approach on Collaborative Group Modeling of Blood Circulation

    Science.gov (United States)

    Lee, Shinyoung; Kang, Eunhee; Kim, Heui-Baik

    2015-01-01

    This study aimed to explore the effect on group dynamics of statements associated with deep learning approaches (DLA) and their contribution to cognitive collaboration and model development during group modeling of blood circulation. A group was selected for an in-depth analysis of collaborative group modeling. This group constructed a model in a…

  4. The group-as-a-whole-object relations model of group psychotherapy.

    Science.gov (United States)

    Rosen, D; Stukenberg, K W; Saeks, S

    2001-01-01

    The authors review the theoretical basis of group psychotherapy performed at The Menninger Clinic and demonstrate how the theory has been put into practice on two different types of inpatient units. The fundamental elements of the theory and practice used can be traced to object relations theory as originally proposed by Melanie Klein. Her work with individuals was directly applied to working with groups by Ezriel and Bion, who focused on interpreting group tension. More modern approaches have reintegrated working with individual concerns while also attending to the group-as-a-whole. Historically, these principles have been applied to long-term group treatment. The authors apply the concepts from the group-as-a-whole literature to short- and medium-length inpatient groups with open membership. They offer clinical examples of the application of these principles in short-term inpatient settings in groups with open membership.

  5. Induction of continuous expanding infrarenal aortic aneurysms in a large porcine animal model

    DEFF Research Database (Denmark)

    Kloster, Brian Ozeraitis; Lund, Lars; Lindholt, Jes S.

    2015-01-01

BACKGROUND: A large animal model with a continuously expanding infrarenal aortic aneurysm gives access to a more realistic AAA model with anatomy and physiology similar to humans, and thus allows new experimental research into the natural history and treatment options of the disease. METHODS: 10 … ultrasound; thereafter the pigs were euthanized for inspection and AAA wall sampling for histological analysis. RESULTS: In group A, all pigs developed continuously expanding AAAs, with a mean increase in AP diameter to 16.26 ± 0.93 mm, equivalent to a 57% increase. In group B the AP diameters increased to 11 … in group A. The most frequent complication was a neurological deficit in the lower limbs. CONCLUSION: In pigs it is possible to induce continuously expanding AAAs based upon proteolytic degradation and pathological flow, resembling the real-life dynamics of human aneurysms. Because the lumbars are preserved …

  6. A Low-Dimensional Model for the Maximal Amplification Factor of Bichromatic Wave Groups

    Directory of Open Access Journals (Sweden)

    W. N. Tan

    2003-11-01

We consider a low-dimensional model derived from the nonlinear Schrödinger equation that describes the evolution of a special class of surface gravity wave groups, namely bichromatic waves. The model takes only two modes into account, the primary mode and the third-order mode, which is known to be most relevant for bichromatic waves with a small frequency difference. Given an initial condition, an analytical expression for the maximal amplitude of the evolution of this initial wave group according to the model can be readily obtained. The aim of this investigation is to predict the amplification factor, defined as the quotient of the maximal amplitude over all time and space and the initial maximal amplitude. Although this is a problem of general interest, as a case study, initial conditions in the form of a bichromatic wave group are taken. Using the low-dimensional model, it is found that the least upper bound of the maximal amplification factor for this bichromatic wave group is √2. To validate the analytical results of this model, a numerical simulation of the full model is also performed. As can be expected, good agreement is observed between analytical and numerical solutions for a certain range of parameters: when the initial amplitude is not too large, or when the frequency difference is not too small. The results are relevant to, and motivated by, the generation of waves in hydrodynamic laboratories.

  7. Delivering a very brief psychoeducational program to cancer patients and family members in a large group format.

    Science.gov (United States)

    Cunningham, A J; Edmonds, C V; Williams, D

    1999-01-01

    It is well established that brief psychoeducational programs for cancer patients will significantly improve mean quality of life. As this kind of adjunctive treatment becomes integrated into general cancer management, it will be necessary to devise cost-effective and efficacious programs that can be offered to relatively large numbers of patients. We have developed a very brief 4-session program that provides this service to 40-80 patients and family members per month (and seems capable of serving much larger numbers, depending on the capacity of the facility in which they assemble). Patients meet in a hospital auditorium for a large group, lecture-style program that offers training in basic coping skills: stress management, relaxation training, thought monitoring and changing, mental imagery and goal setting. Over the first year we have treated 363 patients and 150 family members. Improvements were assessed by changes in the POMS-Short Form, and both patients and family members were found to improve significantly over the course of the program. While this is not a randomized comparison, it suggests that the benefits gained from a large group in a classroom are not substantially less than the improvements that have been documented in the usual small group format, where more interactive discussions are possible.

  8. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Baldwin, C; Abdulla, G; Critchlow, T

    2003-01-31

This paper discusses using the wavelet modeling technique as a mechanism for querying large-scale spatio-temporal scientific simulation data. Wavelets have been used successfully in time series analysis and in answering surprise and trend queries. Our approach, however, is driven by the need for compression, which is necessary for viable throughput given the size of the targeted data, along with the end-user requirements of the discovery process. Our users would like to run fast queries to check the validity of the simulation algorithms used. In some cases users are willing to accept approximate results if the answer comes back within a reasonable time. In other cases they might want to identify a certain phenomenon and track it over time. We face a unique problem because of the data set sizes: it may take months to generate one set of the targeted data, and because of its sheer size, the data cannot be stored on disk for long and thus needs to be analyzed immediately before it is sent to tape. We integrated wavelets within AQSIM, a system that we are developing to support exploration and analysis of tera-scale data sets. We discuss the way we utilized wavelet decomposition in our domain to facilitate compression and to answer a specific class of queries that is harder to answer with any other modeling technique. We also discuss some of the shortcomings of our implementation and how to address them.
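
The compress-then-query idea can be illustrated with a single-level Haar transform: threshold the detail coefficients, keep only the largest few, and accept a small reconstruction error in exchange for a much sparser representation. This is a generic sketch, not AQSIM's actual wavelet pipeline, and the signal and `keep` fraction are invented.

```python
import numpy as np

def haar_forward(x):
    """One level of the orthonormal Haar wavelet transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def haar_inverse(a, d):
    """Exact inverse of haar_forward."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def compress(x, keep=0.1):
    """Zero all but the largest `keep` fraction of detail coefficients,
    then reconstruct: a lossy but compact approximation of x."""
    a, d = haar_forward(x)
    cutoff = np.quantile(np.abs(d), 1.0 - keep)
    d_sparse = np.where(np.abs(d) >= cutoff, d, 0.0)
    return haar_inverse(a, d_sparse)

t = np.linspace(0.0, 1.0, 1024)
signal = np.sin(2 * np.pi * 5 * t) \
    + 0.01 * np.random.default_rng(2).normal(size=t.size)
approx = compress(signal, keep=0.05)
err = float(np.max(np.abs(approx - signal)))
print(err)
```

For a smooth signal with small noise, the discarded detail coefficients are tiny, so approximate queries (trends, extrema) can be answered from the sparse representation with little loss.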

  9. EXO-ZODI MODELING FOR THE LARGE BINOCULAR TELESCOPE INTERFEROMETER

    Energy Technology Data Exchange (ETDEWEB)

    Kennedy, Grant M.; Wyatt, Mark C.; Panić, Olja; Shannon, Andrew [Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom); Bailey, Vanessa; Defrère, Denis; Hinz, Philip M.; Rieke, George H.; Skemer, Andrew J.; Su, Katherine Y. L. [Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721 (United States); Bryden, Geoffrey; Mennesson, Bertrand; Morales, Farisa; Serabyn, Eugene [Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109 (United States); Danchi, William C.; Roberge, Aki; Stapelfeldt, Karl R. [NASA Goddard Space Flight Center, Exoplanets and Stellar Astrophysics, Code 667, Greenbelt, MD 20771 (United States); Haniff, Chris [Cavendish Laboratory, University of Cambridge, JJ Thomson Avenue, Cambridge CB3 0HE (United Kingdom); Lebreton, Jérémy [Infrared Processing and Analysis Center, MS 100-22, California Institute of Technology, 770 South Wilson Avenue, Pasadena, CA 91125 (United States); Millan-Gabet, Rafael [NASA Exoplanet Science Institute, California Institute of Technology, 770 South Wilson Avenue, Pasadena, CA 91125 (United States); and others

    2015-02-01

Habitable zone dust levels are a key unknown that must be understood to ensure the success of future space missions to image Earth analogs around nearby stars. Current detection limits are several orders of magnitude above the level of the solar system's zodiacal cloud, so characterization of the brightness distribution of exo-zodi down to much fainter levels is needed. To this end, the Large Binocular Telescope Interferometer (LBTI) will detect thermal emission from habitable zone exo-zodi a few times brighter than solar system levels. Here we present a modeling framework for interpreting LBTI observations, which yields dust levels from detections and upper limits that are then converted into predictions and upper limits for the scattered light surface brightness. We apply this model to the HOSTS survey sample of nearby stars; assuming a null depth uncertainty of 10⁻⁴, the LBTI will be sensitive to dust a few times above the solar system level around Sun-like stars, and to even lower dust levels for more massive stars.

  10. An Overview of Westinghouse Realistic Large Break LOCA Evaluation Model

    Directory of Open Access Journals (Sweden)

    Cesare Frepoli

    2008-01-01

    Full Text Available Since the 1988 amendment of the 10 CFR 50.46 rule, Westinghouse has been developing and applying realistic or best-estimate methods to perform LOCA safety analyses. A realistic analysis requires the execution of various realistic LOCA transient simulations where the effect of both model and input uncertainties are ranged and propagated throughout the transients. The outcome is typically a range of results with associated probabilities. The thermal/hydraulic code is the engine of the methodology, but a procedure is developed to assess the code and determine its biases and uncertainties. In addition, inputs to the simulation are also affected by uncertainty, and these uncertainties are incorporated into the process. Several approaches have been proposed and applied in the industry in the framework of best-estimate methods. Most of the implementations, including Westinghouse's, follow the Code Scaling, Applicability and Uncertainty (CSAU) methodology. The Westinghouse methodology is based on the use of the WCOBRA/TRAC thermal-hydraulic code. The paper starts with an overview of the regulations and their interpretation in the context of realistic analysis. The CSAU roadmap is reviewed in the context of its implementation in the Westinghouse evaluation model. An overview of the code (WCOBRA/TRAC) and the methodology is provided. Finally, the recent evolution to nonparametric statistics in the current edition of the W methodology is discussed. Sample results of a typical large break LOCA analysis for a PWR are provided.
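
    The nonparametric statistics mentioned above are commonly implemented via Wilks' order-statistic formula (a standard tool in best-estimate LOCA methods; the sketch below is an illustration, not necessarily the Westinghouse implementation). For a first-order, one-sided tolerance bound, the smallest number of code runs n whose maximum covers the 95th percentile with 95% confidence satisfies 1 − 0.95ⁿ ≥ 0.95:

    ```python
    def wilks_sample_size(coverage=0.95, confidence=0.95):
        """Smallest n such that the largest of n independent runs bounds the
        `coverage` quantile with probability `confidence` (first-order,
        one-sided Wilks criterion: 1 - coverage**n >= confidence)."""
        n = 1
        while 1 - coverage ** n < confidence:
            n += 1
        return n

    print(wilks_sample_size())  # classic 95/95 first-order result: 59
    ```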

  11. A maximum entropy model for opinions in social groups

    Science.gov (United States)

    Davis, Sergio; Navarrete, Yasmín; Gutiérrez, Gonzalo

    2014-04-01

    We study how the opinions of a group of individuals determine their spatial distribution and connectivity, through an agent-based model. The interaction between agents is described by a Hamiltonian in which agents are allowed to move freely without an underlying lattice (the average network topology connecting them is determined from the parameters). This kind of model was derived using maximum entropy statistical inference under fixed expectation values of certain probabilities that (we propose) are relevant to social organization. Control parameters emerge as Lagrange multipliers of the maximum entropy problem, and they can be associated with the level of consequence between the personal beliefs and external opinions, and the tendency to socialize with peers of similar or opposing views. These parameters define a phase diagram for the social system, which we studied using Monte Carlo Metropolis simulations. Our model presents both first- and second-order phase transitions, depending on the ratio between the internal consequence and the interaction with others. We have found a critical value for the level of internal consequence, below which the personal beliefs of the agents seem to be irrelevant.
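
    The Metropolis sampling step can be illustrated with a deliberately simplified toy Hamiltonian (this is a sketch, not the paper's model: here agents are fully connected with fixed ±1 opinions and beliefs, with hypothetical coupling J and belief strength h):

    ```python
    import math
    import random

    def metropolis_opinions(n=50, J=1.0, h=0.5, beta=2.0, steps=20000, seed=1):
        """Metropolis sampling of a toy opinion Hamiltonian
        H = -(J/n) * sum_{i<j} s_i s_j - h * sum_i s_i b_i,
        where s_i = +/-1 is the expressed opinion and b_i a fixed personal
        belief (a hypothetical stand-in for 'internal consequence')."""
        rng = random.Random(seed)
        s = [rng.choice((-1, 1)) for _ in range(n)]
        b = [rng.choice((-1, 1)) for _ in range(n)]
        m = sum(s)  # running sum of opinions
        for _ in range(steps):
            i = rng.randrange(n)
            # energy change from flipping s[i]
            dE = 2 * s[i] * (J / n * (m - s[i]) + h * b[i])
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                s[i] = -s[i]
                m += 2 * s[i]
        return m / n  # mean expressed opinion ("magnetization")
    ```

    Sweeping beta, J and h and recording this order parameter is one way such phase diagrams are mapped out numerically.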

  12. The positive group affect spiral : a dynamic model of the emergence of positive affective similarity in work groups

    NARCIS (Netherlands)

    Walter, F.; Bruch, H.

    2008-01-01

    This conceptual paper seeks to clarify the process of the emergence of positive collective affect. Specifically, it develops a dynamic model of the emergence of positive affective similarity in work groups. It is suggested that positive group affective similarity and within-group relationship quality …

  14. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Baldwin, C; Abdulla, G; Critchlow, T

    2002-02-25

    Data produced by large scale scientific simulations, experiments, and observations can easily reach terabytes in size. The ability to examine data sets of this magnitude, even in moderate detail, is problematic at best. Generally this scientific data consists of multivariate field quantities with complex inter-variable correlations and spatial-temporal structure. To provide scientists and engineers with the ability to explore and analyze such data sets we are using a twofold approach. First, we model the data with the objective of creating a compressed yet manageable representation. Second, with that compressed representation, we provide the user with the ability to query the resulting approximation to obtain approximate yet sufficient answers; a process called ad hoc querying. This paper is concerned with a wavelet modeling technique that seeks to capture the important physical characteristics of the target scientific data. Our approach is driven by the compression, which is necessary for viable throughput, along with the end user requirements from the discovery process. Our work contrasts with existing research, which applies wavelets to range querying, change detection, and clustering problems by working directly with a decomposition of the data. The difference in these procedures is due primarily to the nature of the data and the requirements of the scientists and engineers. Our approach directly uses the wavelet coefficients of the data to compress as well as query. We will provide some background on the problem, describe how the wavelet decomposition is used to facilitate data compression and how queries are posed on the resulting compressed model. Results of this process will be shown for several problems of interest and we will end with some observations and conclusions about this research.
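
    The compress-then-query idea can be sketched with a toy Haar decomposition (a simplification of the paper's technique, which targets multivariate field data; names and thresholds here are illustrative): decompose, zero out small detail coefficients, then answer an approximate range query from the reconstruction.

    ```python
    def haar_decompose(data):
        """Full Haar wavelet decomposition of a length-2^k sequence."""
        coeffs = []
        a = list(data)
        while len(a) > 1:
            avg = [(a[i] + a[i + 1]) / 2 for i in range(0, len(a), 2)]
            det = [(a[i] - a[i + 1]) / 2 for i in range(0, len(a), 2)]
            coeffs.append(det)  # finest level first
            a = avg
        return a[0], coeffs  # overall mean + per-level detail coefficients

    def haar_reconstruct(mean, coeffs):
        a = [mean]
        for det in reversed(coeffs):
            nxt = []
            for ai, di in zip(a, det):
                nxt.extend((ai + di, ai - di))
            a = nxt
        return a

    def compress(data, threshold):
        """Zero out small detail coefficients -- the lossy step."""
        mean, coeffs = haar_decompose(data)
        kept = [[d if abs(d) >= threshold else 0.0 for d in det] for det in coeffs]
        return mean, kept

    # Approximate range-average query answered from the compressed model.
    data = [4.0, 4.1, 3.9, 4.0, 9.0, 9.2, 8.8, 9.0]
    mean, kept = compress(data, threshold=0.2)
    approx = haar_reconstruct(mean, kept)
    print(sum(approx[4:8]) / 4)  # ~9.0, close to the true second-half average
    ```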

  16. Sensitivity in forward modeled hyperspectral reflectance due to phytoplankton groups

    Science.gov (United States)

    Manzo, Ciro; Bassani, Cristiana; Pinardi, Monica; Giardino, Claudia; Bresciani, Mariano

    2016-04-01

    Phytoplankton is an integral part of the ecosystem, affecting trophic dynamics, nutrient cycling, habitat condition, and fisheries resources. The types of phytoplankton and their concentrations are used to describe the status of the water and the processes within it. This study investigates bio-optical modeling of phytoplankton functional types (PFT) in terms of pigment composition, demonstrating the capability of remote sensing to recognize freshwater phytoplankton. In particular, a sensitivity analysis of simulated hyperspectral water reflectance (with band settings of HICO, APEX, EnMAP, PRISMA and Sentinel-3) of the productive eutrophic waters of the Mantua lakes (Italy) environment is presented. The bio-optical model adopted for simulating the hyperspectral water reflectance takes into account the reflectance dependency on the geometric conditions of the light field, on inherent optical properties (backscattering and absorption coefficients) and on concentrations of water quality parameters (WQPs). The model works in the 400-750 nm wavelength range, while the model parametrization is based on a comprehensive dataset of WQP concentrations and specific inherent optical properties of the study area, collected in field surveys carried out from May to September of 2011 and 2014. The following phytoplankton groups, with their specific absorption coefficients, a*Φi(λ), were used during the simulation: Chlorophyta, Cyanobacteria with phycocyanin, Cyanobacteria and Cryptophytes with phycoerythrin, Diatoms with carotenoids and mixed phytoplankton. The phytoplankton absorption coefficient aΦ(λ) is modelled by multiplying the weighted sum of the PFTs, Σ pi a*Φi(λ), with the chlorophyll-a concentration (Chl-a). To highlight the variability of water reflectance due to variation of phytoplankton pigments, the sensitivity analysis was performed by keeping the WQPs constant (i.e., Chl-a = 80 mg/l, total suspended matter = 12.58 g/l and yellow substances = 0.27 m⁻¹). The sensitivity analysis was …
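
    The weighting scheme aΦ(λ) = Chl-a · Σ pi a*Φi(λ) is straightforward to compute once per-group specific absorption spectra are tabulated. A minimal sketch (the group names follow the abstract, but the spectral values and wavelength grid below are purely illustrative, not measured coefficients):

    ```python
    def phytoplankton_absorption(chl_a, fractions, specific_abs):
        """a_phi(lam) = Chl-a * sum_i p_i * a*_i(lam) at each wavelength.
        `fractions` maps group -> weight p_i (must sum to 1);
        `specific_abs` maps group -> list of a*_i on a common wavelength grid."""
        assert abs(sum(fractions.values()) - 1.0) < 1e-9
        n = len(next(iter(specific_abs.values())))
        return [chl_a * sum(fractions[g] * specific_abs[g][k] for g in fractions)
                for k in range(n)]

    # Hypothetical two-group example at three wavelengths (values illustrative).
    a_star = {"Chlorophyta":   [0.03, 0.01, 0.02],
              "Cyanobacteria": [0.02, 0.04, 0.01]}
    p = {"Chlorophyta": 0.75, "Cyanobacteria": 0.25}
    print(phytoplankton_absorption(80.0, p, a_star))
    ```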

  17. Working Group Reports: Working Group 1 - Software Systems Design and Implementation for Environmental Modeling

    Science.gov (United States)

    The purpose of the Interagency Steering Committee on Multimedia Environmental Modeling (ISCMEM) is to foster the exchange of information about environmental modeling tools, modeling frameworks, and environmental monitoring databases that are all in the public domain. It is compos...

  18. A Mathematical Images Group Model to Estimate the Sound Level in a Close-Fitting Enclosure

    Directory of Open Access Journals (Sweden)

    Michael J. Panza

    2014-01-01

    Full Text Available This paper describes a special mathematical images model to determine the sound level inside a close-fitting sound enclosure. Such an enclosure is defined as the internal air volume bounded by a machine vibration noise source at one wall and a parallel reflecting wall located very close to it, which acts as the outside radiating wall of the enclosure. Four smaller surfaces define a parallelepiped for the volume. The main reverberation group is between the two large parallel planes. Viewed as a discrete line-type source, the main group is extended by additional discrete line-type source image groups due to reflections from the four smaller surfaces. The images group approach provides a convergent solution for the case where hard reflective surfaces are modeled with absorption coefficients equal to zero. Numerical examples are used to calculate the sound pressure level incident on the outside wall and the effect of adding high absorption to the front wall. This is compared to the result from the general large-room diffuse reverberant field enclosure formula for several hard wall absorption coefficients and distances between the machine and the front wall. The images group method is shown to have low sensitivity to the hard wall absorption coefficient value, and presents a method in which zero sound absorption can be used for hard surfaces, rather than an initial estimate or measurement of hard surface sound absorption, to predict the internal sound levels and the effect of adding absorption.
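
    The convergence property for perfectly hard walls can be seen in a stripped-down 1-D image-source series (a sketch of the general idea, not the paper's full model: a source at distance d inside a gap of width L between two parallel planes, with each image's energy falling off as 1/r² and roughly k reflections attenuating the k-th image pair by (1 − α)ᵏ):

    ```python
    import math

    def image_series_energy(d, L, alpha, n_terms=500):
        """Summed squared-pressure contributions (~1/r**2 per image) at a
        receiver on one wall. Because 1/r**2 decays, the series converges even
        for alpha = 0 (perfectly hard walls) -- the property noted above."""
        total = 1.0 / d ** 2  # direct path, no reflections
        for k in range(1, n_terms):
            # k-th image pair at distances roughly 2*k*L -/+ d, ~k reflections
            total += (1 - alpha) ** k * (1.0 / (2 * k * L - d) ** 2
                                         + 1.0 / (2 * k * L + d) ** 2)
        return total

    hard = image_series_energy(0.05, 0.1, 0.0)   # zero-absorption hard walls
    soft = image_series_energy(0.05, 0.1, 0.7)   # absorptive front wall
    print(round(10 * math.log10(hard / soft), 2), "dB reduction from absorption")
    ```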

  19. Entropy of Operator-valued Random Variables A Variational Principle for Large N Matrix Models

    CERN Document Server

    Akant, L; Rajeev, S G

    2002-01-01

    We show that, in 't Hooft's large N limit, matrix models can be formulated as a classical theory whose equations of motion are the factorized Schwinger-Dyson equations. We discover an action principle for this classical theory. This action contains a universal term describing the entropy of the non-commutative probability distributions. We show that this entropy is a nontrivial 1-cocycle of the non-commutative analogue of the diffeomorphism group and derive an explicit formula for it. The action principle allows us to solve matrix models using novel variational approximation methods; in the simple cases where comparisons with other methods are possible, we get reasonable agreement.

  20. Analysis and Design Environment for Large Scale System Models and Collaborative Model Development Project

    Data.gov (United States)

    National Aeronautics and Space Administration — As NASA modeling efforts grow more complex and more distributed among many working groups, new tools and technologies are required to integrate their efforts...

  1. Multi-scale inference of interaction rules in animal groups using Bayesian model selection.

    Directory of Open Access Journals (Sweden)

    Richard P Mann

    2012-01-01

    Full Text Available Inference of interaction rules of animals moving in groups usually relies on an analysis of large scale system behaviour. Models are tuned through repeated simulation until they match the observed behaviour. More recent work has used the fine scale motions of animals to validate and fit the rules of interaction of animals in groups. Here, we use a Bayesian methodology to compare a variety of models to the collective motion of glass prawns (Paratya australiensis). We show that these exhibit a stereotypical 'phase transition', whereby an increase in density leads to the onset of collective motion in one direction. We fit models to this data, which range from: a mean-field model where all prawns interact globally; to a spatial Markovian model where prawns are self-propelled particles influenced only by the current positions and directions of their neighbours; up to non-Markovian models where prawns have 'memory' of previous interactions, integrating their experiences over time when deciding to change behaviour. We show that the mean-field model fits the large scale behaviour of the system, but does not capture fine scale rules of interaction, which are primarily mediated by physical contact. Conversely, the Markovian self-propelled particle model captures the fine scale rules of interaction but fails to reproduce global dynamics. The most sophisticated model, the non-Markovian model, provides a good match to the data at both the fine scale and in terms of reproducing global dynamics. We conclude that prawns' movements are influenced by not just the current direction of nearby conspecifics, but also those encountered in the recent past. Given the simplicity of prawns as a study system our research suggests that self-propelled particle models of collective motion should, if they are to be realistic at multiple biological scales, include memory of previous interactions and other non-Markovian effects.

  2. Multi-scale inference of interaction rules in animal groups using Bayesian model selection.

    Science.gov (United States)

    Mann, Richard P; Perna, Andrea; Strömbom, Daniel; Garnett, Roman; Herbert-Read, James E; Sumpter, David J T; Ward, Ashley J W

    2012-01-01

    Inference of interaction rules of animals moving in groups usually relies on an analysis of large scale system behaviour. Models are tuned through repeated simulation until they match the observed behaviour. More recent work has used the fine scale motions of animals to validate and fit the rules of interaction of animals in groups. Here, we use a Bayesian methodology to compare a variety of models to the collective motion of glass prawns (Paratya australiensis). We show that these exhibit a stereotypical 'phase transition', whereby an increase in density leads to the onset of collective motion in one direction. We fit models to this data, which range from: a mean-field model where all prawns interact globally; to a spatial Markovian model where prawns are self-propelled particles influenced only by the current positions and directions of their neighbours; up to non-Markovian models where prawns have 'memory' of previous interactions, integrating their experiences over time when deciding to change behaviour. We show that the mean-field model fits the large scale behaviour of the system, but does not capture fine scale rules of interaction, which are primarily mediated by physical contact. Conversely, the Markovian self-propelled particle model captures the fine scale rules of interaction but fails to reproduce global dynamics. The most sophisticated model, the non-Markovian model, provides a good match to the data at both the fine scale and in terms of reproducing global dynamics. We conclude that prawns' movements are influenced by not just the current direction of nearby conspecifics, but also those encountered in the recent past. Given the simplicity of prawns as a study system our research suggests that self-propelled particle models of collective motion should, if they are to be realistic at multiple biological scales, include memory of previous interactions and other non-Markovian effects.
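
    The model-comparison step can be illustrated with a toy example. The study above uses full Bayesian evidence; the sketch below substitutes the Bayesian Information Criterion (BIC), a common asymptotic approximation to the marginal likelihood, applied to hypothetical residuals from a simpler and a more complex model:

    ```python
    import math

    def bic(log_likelihood, n_params, n_obs):
        """Bayesian Information Criterion (lower is better): an asymptotic
        stand-in for the marginal likelihood used in Bayesian model selection."""
        return -2.0 * log_likelihood + n_params * math.log(n_obs)

    def gaussian_loglik(residuals, sigma):
        n = len(residuals)
        return (-0.5 * n * math.log(2 * math.pi * sigma ** 2)
                - sum(r * r for r in residuals) / (2 * sigma ** 2))

    # Toy comparison: a 1-parameter "mean-field" model vs. a 3-parameter
    # "memory" model fit to the same hypothetical data. Unless the extra
    # parameters buy enough likelihood, BIC prefers the simpler model.
    res_simple = [0.5, -0.4, 0.3, -0.6, 0.2, -0.1, 0.4, -0.3]
    res_complex = [0.45, -0.38, 0.28, -0.55, 0.2, -0.1, 0.38, -0.3]
    b1 = bic(gaussian_loglik(res_simple, 1.0), 1, 8)
    b2 = bic(gaussian_loglik(res_complex, 1.0), 3, 8)
    print(b1 < b2)  # True: the small fit gain doesn't justify 2 extra params
    ```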

  3. Multicriteria decision group model for the selection of suppliers

    Directory of Open Access Journals (Sweden)

    Luciana Hazin Alencar

    2008-08-01

    Full Text Available Several authors have been studying group decision making over the years, which indicates how relevant it is. This paper presents a multicriteria group decision model based on the ELECTRE IV and VIP Analysis methods, for cases in which there is great divergence among the decision makers. The model includes two stages. In the first, the ELECTRE IV method is applied and a collective criteria ranking is obtained. In the second, using this criteria ranking, VIP Analysis is applied and the alternatives are selected. To illustrate the model, a numerical application in the context of the selection of suppliers in project management is used. The suppliers that form part of the project team have a crucial role in project management. They are involved in a network of connected activities that can jeopardize the success of the project if they are not undertaken in an appropriate way. The question tackled is how to select service suppliers for a project on behalf of an enterprise, in a way that addresses the multiple objectives of the decision makers.
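
    The first stage, aggregating divergent decision makers' views into one collective criteria ranking, can be sketched with a plain Borda-style mean-rank aggregation (a deliberately simple stand-in to illustrate the idea; it is not ELECTRE IV, which builds an outranking relation from pairwise concordance; the criteria names are hypothetical):

    ```python
    def collective_ranking(rankings):
        """Aggregate several decision makers' criteria rankings (best first)
        into one collective order by total rank position (lower = better)."""
        scores = {}
        for order in rankings:
            for pos, criterion in enumerate(order):
                scores[criterion] = scores.get(criterion, 0.0) + pos
        return sorted(scores, key=lambda c: scores[c])

    dm_rankings = [["cost", "quality", "delivery"],
                   ["quality", "cost", "delivery"],
                   ["cost", "delivery", "quality"]]
    print(collective_ranking(dm_rankings))  # ['cost', 'quality', 'delivery']
    ```

    The resulting order would then feed a second-stage selection method, playing the role that VIP Analysis plays in the model above.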

  4. Progress and Current Challenges in Modeling Large RNAs.

    Science.gov (United States)

    Somarowthu, Srinivas

    2016-02-27

    Recent breakthroughs in next-generation sequencing technologies have led to the discovery of several classes of non-coding RNAs (ncRNAs). It is now apparent that RNA molecules are not only just carriers of genetic information but also key players in many cellular processes. While there has been a rapid increase in the number of ncRNA sequences deposited in various databases over the past decade, the biological functions of these ncRNAs are largely not well understood. Similar to proteins, RNA molecules carry out a function by forming specific three-dimensional structures. Understanding the function of a particular RNA therefore requires a detailed knowledge of its structure. However, determining experimental structures of RNA is extremely challenging. In fact, RNA-only structures represent just 1% of the total structures deposited in the PDB. Thus, computational methods that predict three-dimensional RNA structures are in high demand. Computational models can provide valuable insights into structure-function relationships in ncRNAs and can aid in the development of functional hypotheses and experimental designs. In recent years, a set of diverse RNA structure prediction tools have become available, which differ in computational time, input data and accuracy. This review discusses the recent progress and challenges in RNA structure prediction methods.

  5. Direct observational evidence for a large transient galaxy population in groups at 0.85

    CERN Document Server

    Balogh, Michael L; Wilman, David J; Finoguenov, Alexis; Parker, Laura C; Connelly, Jennifer L; Mulchaey, John S; Bower, Richard G; Tanaka, Masayuki; Giodini, Stefania

    2010-01-01

    (abridged) We introduce our survey of galaxy groups at 0.8515 members. The dynamical mass estimates are in good agreement with the masses estimated from the X-ray luminosity, with most of the groups having 131E10.1 Msun, and for blue galaxies we sample masses as low as Mstar=1E8.8 Msun. Like lower-redshift groups, these systems are dominated by red galaxies, at all stellar masses Mstar>1E10.1 Msun. Few group galaxies inhabit the ``blue cloud'' that dominates the surrounding field; instead, we find a large and possibly distinct population of galaxies with intermediate colours. The ``green valley'' that exists at low redshift is instead well-populated in these groups, containing ~30 per cent of galaxies. These do not appear to be exceptionally dusty galaxies, and about half show prominent Balmer-absorption lines. Furthermore, their HST morphologies appear to be intermediate between those of red-sequence and blue-cloud galaxies of the same stellar mass. We postulate that these are a transient population, migrat...

  6. Barnes maze testing strategies with small and large rodent models.

    Science.gov (United States)

    Rosenfeld, Cheryl S; Ferguson, Sherry A

    2014-02-26

    Spatial learning and memory of laboratory rodents is often assessed via navigational ability in mazes, most popular of which are the water and dry-land (Barnes) mazes. Improved performance over sessions or trials is thought to reflect learning and memory of the escape cage/platform location. Considered less stressful than water mazes, the Barnes maze is a relatively simple design of a circular platform top with several holes equally spaced around the perimeter edge. All but one of the holes are false-bottomed or blind-ending, while one leads to an escape cage. Mildly aversive stimuli (e.g. bright overhead lights) provide motivation to locate the escape cage. Latency to locate the escape cage can be measured during the session; however, additional endpoints typically require video recording. From those video recordings, use of automated tracking software can generate a variety of endpoints that are similar to those produced in water mazes (e.g. distance traveled, velocity/speed, time spent in the correct quadrant, time spent moving/resting, and confirmation of latency). Type of search strategy (i.e. random, serial, or direct) can be categorized as well. Barnes maze construction and testing methodologies can differ for small rodents, such as mice, and large rodents, such as rats. For example, while extra-maze cues are effective for rats, smaller wild rodents may require intra-maze cues with a visual barrier around the maze. Appropriate stimuli must be identified which motivate the rodent to locate the escape cage. Both Barnes and water mazes can be time consuming as 4-7 test trials are typically required to detect improved learning and memory performance (e.g. shorter latencies or path lengths to locate the escape platform or cage) and/or differences between experimental groups. 
Even so, the Barnes maze is a widely employed behavioral assessment measuring spatial navigational abilities and their potential disruption by genetic, neurobehavioral manipulations, or drug

  7. Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam’s Window*

    Science.gov (United States)

    Onorante, Luca; Raftery, Adrian E.

    2015-01-01

    Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam’s window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods. PMID:26917859
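
    The Occam's window rule itself is simple to state: discard models whose posterior probability falls below a fixed fraction of the best model's, then renormalize over the survivors. A minimal static sketch (the dynamic variant in the paper re-optimizes the retained model set at each point in time; the probabilities and cutoff below are illustrative):

    ```python
    def occams_window(model_probs, c=20.0):
        """Keep only models whose posterior probability is within a factor
        `c` of the best model's, then renormalize -- the Occam's window rule."""
        best = max(model_probs.values())
        kept = {m: p for m, p in model_probs.items() if p >= best / c}
        total = sum(kept.values())
        return {m: p / total for m, p in kept.items()}

    probs = {"M1": 0.50, "M2": 0.30, "M3": 0.15, "M4": 0.01}
    print(sorted(occams_window(probs)))  # ['M1', 'M2', 'M3']: M4 falls outside
    ```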

  8. SEMICONDUCTOR DEVICES MEXTRAM model based SiGe HBT large-signal modeling

    Science.gov (United States)

    Bo, Han; Shoulin, Li; Jiali, Cheng; Qiuyan, Yin; Jianjun, Gao

    2010-10-01

    An improved large-signal equivalent-circuit model for SiGe HBTs based on the MEXTRAM model (level 504.5) is proposed. The proposed model takes into account the soft knee effect. The model keeps the main features of the MEXTRAM model even though some simplifications have been made in the equivalent circuit topology. This model is validated in DC and AC analyses for SiGe HBTs fabricated with 0.35-μm BiCMOS technology, 1 × 8 μm² emitter area. Good agreement is achieved between the measured and modeled results for DC and S-parameters (from 50 MHz to 20 GHz), which shows that the proposed model is accurate and reliable. The model has been implemented in Verilog-A using the ADS circuit simulator.

  9. METHODOLOGY AND CALCULATIONS FOR THE ASSIGNMENT OF WASTE GROUPS FOR THE LARGE UNDERGROUND WASTE STORAGE TANKS AT THE HANFORD SITE

    Energy Technology Data Exchange (ETDEWEB)

    WEBER RA

    2009-01-16

    The Hanford Site contains 177 large underground radioactive waste storage tanks (28 double-shell tanks and 149 single-shell tanks). These tanks are categorized into one of three waste groups (A, B, and C) based on their waste and tank characteristics. These waste group assignments reflect a tank's propensity to retain a significant volume of flammable gases and the potential of the waste to release retained gas by a buoyant displacement gas release event. Assignments of waste groups to the 177 double-shell tanks and single-shell tanks, as reported in this document, are based on a Monte Carlo analysis of three criteria. The first criterion is the headspace flammable gas concentration following release of retained gas. This criterion determines whether the tank contains sufficient retained gas such that the well-mixed headspace flammable gas concentration would reach 100% of the lower flammability limit if the entire tank's retained gas were released. If the volume of retained gas is not sufficient to reach 100% of the lower flammability limit, then flammable conditions cannot be reached and the tank is classified as a waste group C tank independent of the method the gas is released. The second criterion is the energy ratio and considers whether there is sufficient supernatant on top of the saturated solids such that gas-bearing solids have the potential energy required to break up the material and release gas. Tanks that are not waste group C tanks and that have an energy ratio < 3.0 do not have sufficient potential energy to break up material and release gas and are assigned to waste group B. These tanks are considered to represent a potential induced flammable gas release hazard, but no spontaneous buoyant displacement flammable gas release hazard. Tanks that are not waste group C tanks and have an energy ratio ≥ 3.0, but that pass the third criterion (buoyancy ratio < 1.0, see below) are also assigned to waste group B. Even though the designation as
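
    The three-criteria logic described above can be sketched as a deterministic decision function (a simplification: the actual methodology propagates input uncertainties through these criteria with a Monte Carlo analysis; the thresholds follow the abstract):

    ```python
    def waste_group(lfl_fraction, energy_ratio, buoyancy_ratio):
        """Assign a flammable-gas waste group per the three criteria:
        C if the retained gas cannot bring the well-mixed headspace to
        100% of the lower flammability limit; otherwise B if the energy
        ratio is < 3.0 or the buoyancy ratio is < 1.0; otherwise A."""
        if lfl_fraction < 1.0:
            return "C"
        if energy_ratio < 3.0 or buoyancy_ratio < 1.0:
            return "B"
        return "A"

    print(waste_group(0.4, 5.0, 2.0),   # not enough retained gas -> C
          waste_group(1.2, 2.0, 2.0),   # insufficient potential energy -> B
          waste_group(1.2, 5.0, 0.5),   # buoyancy ratio < 1.0 -> B
          waste_group(1.2, 5.0, 2.0))   # all three criteria met -> A
    ```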

  10. METHODOLOGY AND CALCULATIONS FOR THE ASSIGNMENT OF WASTE GROUPS FOR THE LARGE UNDERGROUND WASTE STORAGE TANKS AT THE HANFORD SITE

    Energy Technology Data Exchange (ETDEWEB)

    FOWLER KD

    2007-12-27

    This document categorizes each of the large waste storage tanks into one of several categories based on each tank's waste characteristics. These waste group assignments reflect a tank's propensity to retain a significant volume of flammable gases and the potential of the waste to release retained gas by a buoyant displacement event. Revision 7 is the annual update of the calculations of the flammable gas Waste Groups for DSTs and SSTs. The Hanford Site contains 177 large underground radioactive waste storage tanks (28 double-shell tanks and 149 single-shell tanks). These tanks are categorized into one of three waste groups (A, B, and C) based on their waste and tank characteristics. These waste group assignments reflect a tank's propensity to retain a significant volume of flammable gases and the potential of the waste to release retained gas by a buoyant displacement gas release event. Assignments of waste groups to the 177 double-shell tanks and single-shell tanks, as reported in this document, are based on a Monte Carlo analysis of three criteria. The first criterion is the headspace flammable gas concentration following release of retained gas. This criterion determines whether the tank contains sufficient retained gas such that the well-mixed headspace flammable gas concentration would reach 100% of the lower flammability limit if the entire tank's retained gas were released. If the volume of retained gas is not sufficient to reach 100% of the lower flammability limit, then flammable conditions cannot be reached and the tank is classified as a waste group C tank independent of the method the gas is released. The second criterion is the energy ratio and considers whether there is sufficient supernatant on top of the saturated solids such that gas-bearing solids have the potential energy required to break up the material and release gas. Tanks that are not waste group C tanks and that have an energy ratio < 3.0 do not have sufficient

  11. Group Contribution Based Process Flowsheet Synthesis, Design and Modelling

    DEFF Research Database (Denmark)

    d'Anterroches, Loïc; Gani, Rafiqul

    2005-01-01

    In a group contribution method for pure component property prediction, a molecule is described as a set of groups linked together to form a molecular structure. In the same way, for flowsheet "property" prediction, a flowsheet can be described as a set of process-groups linked together to represent … provides a contribution to the "property" of the flowsheet, which can be performance in terms of energy consumption, thereby allowing a flowsheet "property" to be calculated, once it is described by the groups. Another feature of this approach is that the process-group attachments provide automatically … the flowsheet structure. Just as a functional group is a collection of atoms, a process-group is a collection of operations forming an "unit" operation or a set of "unit" operations. The link between the process-groups are the streams, similar to the bonds that are attachments to atoms/groups. Each process-group …

  12. Multiobjective adaptive surrogate modeling-based optimization for parameter estimation of large, complex geophysical models

    Science.gov (United States)

    Gong, Wei; Duan, Qingyun; Li, Jianduo; Wang, Chen; Di, Zhenhua; Ye, Aizhong; Miao, Chiyuan; Dai, Yongjiu

    2016-03-01

    Parameter specification is an important source of uncertainty in large, complex geophysical models. These models generally have multiple model outputs that require multiobjective optimization algorithms. Although such algorithms have long been available, they usually require a large number of model runs and are therefore computationally expensive for large, complex dynamic models. In this paper, a multiobjective adaptive surrogate modeling-based optimization (MO-ASMO) algorithm is introduced that aims to reduce computational cost while maintaining optimization effectiveness. Geophysical dynamic models usually have a prior parameterization scheme derived from the physical processes involved, and our goal is to improve all of the objectives by parameter calibration. In this study, we developed a method for directing the search processes toward the region that can improve all of the objectives simultaneously. We tested the MO-ASMO algorithm against NSGA-II and SUMO with 13 test functions and a land surface model - the Common Land Model (CoLM). The results demonstrated the effectiveness and efficiency of MO-ASMO.
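
    Surrogate-assisted multiobjective loops of this kind follow a common pattern: evaluate the expensive model at a few points, fit a cheap surrogate, use the surrogate to pick a promising new point, then evaluate the true model there and refit. A self-contained sketch of that generic loop (the inverse-distance surrogate and the scalarization are stand-ins chosen for brevity; this is not the authors' MO-ASMO implementation):

```python
import numpy as np

def mo_asmo_sketch(f, bounds, n_init=20, n_iter=10, seed=0):
    """Illustrative surrogate-assisted multiobjective loop.

    f: expensive model mapping a parameter vector to a vector of objectives
       (all minimized); bounds: list of (low, high) per parameter.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = lo + (hi - lo) * rng.random((n_init, len(bounds)))
    Y = np.array([f(x) for x in X])
    for _ in range(n_iter):
        # Cheap surrogate: inverse-distance-weighted prediction of each objective.
        cand = lo + (hi - lo) * rng.random((500, len(bounds)))
        d = np.linalg.norm(cand[:, None, :] - X[None, :, :], axis=2) + 1e-12
        w = 1.0 / d**2
        Yhat = (w @ Y) / w.sum(axis=1, keepdims=True)
        # Scalarize normalized predictions so the new point tries to improve
        # all objectives at once, then evaluate the expensive model there.
        norm = (Yhat - Yhat.min(axis=0)) / (np.ptp(Yhat, axis=0) + 1e-12)
        x_new = cand[norm.sum(axis=1).argmin()]
        X = np.vstack([X, x_new])
        Y = np.vstack([Y, f(x_new)])
    # Return the non-dominated (Pareto) subset of all evaluated points.
    keep = [i for i in range(len(Y))
            if not any(np.all(Y[j] <= Y[i]) and np.any(Y[j] < Y[i])
                       for j in range(len(Y)))]
    return X[keep], Y[keep]
```

The expensive-model budget is n_init + n_iter evaluations; everything else touches only the surrogate.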

  13. Testing Group Mean Differences of Latent Variables in Multilevel Data Using Multiple-Group Multilevel CFA and Multilevel MIMIC Modeling.

    Science.gov (United States)

    Kim, Eun Sook; Cao, Chunhua

    2015-01-01

    Considering that group comparisons are common in social science, we examined two latent group mean testing methods when groups of interest were either at the between or within level of multilevel data: multiple-group multilevel confirmatory factor analysis (MG ML CFA) and multilevel multiple-indicators multiple-causes modeling (ML MIMIC). The performance of these methods was investigated through three Monte Carlo studies. In Studies 1 and 2, either factor variances or residual variances were manipulated to be heterogeneous between groups. In Study 3, which focused on within-level multiple-group analysis, six different model specifications were considered depending on how to model the intra-class group correlation (i.e., correlation between random effect factors for groups within a cluster). The results of the simulations generally supported the adequacy of MG ML CFA and ML MIMIC for multiple-group analysis with multilevel data. The two methods did not show any notable difference in latent group mean testing across the three studies. Finally, a demonstration with real data and guidelines for selecting an appropriate approach to multilevel multiple-group analysis are provided.

  14. IBAR: Interacting boson model calculations for large system sizes

    Science.gov (United States)

    Casperson, R. J.

    2012-04-01

    Scaling the system size of the interacting boson model-1 (IBM-1) into the realm of hundreds of bosons has many interesting applications in the field of nuclear structure, most notably quantum phase transitions in nuclei. We introduce IBAR, a new software package for calculating the eigenvalues and eigenvectors of the IBM-1 Hamiltonian, for large numbers of bosons. Energies and wavefunctions of the nuclear states, as well as transition strengths between them, are calculated using these values. Numerical errors in the recursive calculation of reduced matrix elements of the d-boson creation operator are reduced by using an arbitrary precision mathematical library. This software has been tested for up to 1000 bosons using comparisons to analytic expressions. Comparisons have also been made to the code PHINT for smaller system sizes. Catalogue identifier: AELI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AELI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License version 3 No. of lines in distributed program, including test data, etc.: 28 734 No. of bytes in distributed program, including test data, etc.: 4 104 467 Distribution format: tar.gz Programming language: C++ Computer: Any computer system with a C++ compiler Operating system: Tested under Linux RAM: 150 MB for 1000 boson calculations with angular momenta of up to L=4 Classification: 17.18, 17.20 External routines: ARPACK (http://www.caam.rice.edu/software/ARPACK/) Nature of problem: Construction and diagonalization of large Hamiltonian matrices, using reduced matrix elements of the d-boson creation operator. Solution method: Reduced matrix elements of the d-boson creation operator have been stored in data files at machine precision, after being recursively calculated with higher than machine precision. 
The Hamiltonian matrix is calculated and diagonalized, and the requested transition strengths are calculated.
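
    The precision trick described here, carrying an error-amplifying recursion at higher than machine precision and storing only the final values at machine precision, can be illustrated with a textbook recurrence (a generic example, not IBAR's d-boson matrix-element recursion): I_n = 1/n - 5*I_{n-1} with I_0 = ln(6/5) multiplies any rounding error in I_0 by 5 at every step.

```python
import math
from decimal import Decimal, getcontext

def unstable_recursion_float(n):
    # I_k = integral_0^1 x^k/(x+5) dx obeys I_k = 1/k - 5*I_{k-1};
    # forward recursion multiplies the initial rounding error by 5 per step.
    I = math.log(1.2)  # I_0 = ln(6/5)
    for k in range(1, n + 1):
        I = 1.0 / k - 5.0 * I
    return I

def unstable_recursion_decimal(n, digits=60):
    # Same recursion carried at 60 significant digits, stored back as float.
    getcontext().prec = digits
    I = (Decimal(6) / Decimal(5)).ln()
    for k in range(1, n + 1):
        I = 1 / Decimal(k) - 5 * I
    return float(I)  # result stored at machine precision

# The exact value satisfies 1/(6*(n+1)) < I_n < 1/(5*(n+1)).
print(unstable_recursion_decimal(30))  # stays inside those bounds
print(unstable_recursion_float(30))    # far outside: the error dominates
```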

  15. Construction and analysis of tree models for chromosomal classification of diffuse large B-cell lymphomas

    Institute of Scientific and Technical Information of China (English)

    Hui-Yong Jiang; Zhong-Xi Huang; Xue-Feng Zhang; Richard Desper; Tong Zhao

    2007-01-01

    AIM: To construct tree models for classification of diffuse large B-cell lymphomas (DLBCL) by chromosome copy numbers, to compare them with cDNA microarray classification, and to explore models of multi-gene, multi-step and multi-pathway processes of DLBCL tumorigenesis. METHODS: Maximum-weight branching and distance based models were constructed based on the comparative genomic hybridization (CGH) data of 123 DLBCL samples using the established methods and software of Desper et al. A maximum likelihood tree model was also used to analyze the data. By comparing with the results reported in literature, values of tree models in the classification of DLBCL were elucidated. RESULTS: Both the branching and the distance-based trees classified DLBCL into three groups. We combined the classification methods of the two models and classified DLBCL into three categories according to their characteristics. The first group was marked by +Xq, +Xp, -17p and +13q; the second group by +3q, +18q and +18p; and the third group was marked by -6q and +6p. This chromosomal classification was consistent with cDNA classification. It indicated that -6q and +3q were two main events in the tumorigenesis of lymphoma. CONCLUSION: Tree models of lymphoma established from CGH data can be used in the classification of DLBCL. These models can suggest multi-gene, multi-step and multi-pathway processes of tumorigenesis. Two pathways, -6q preceding +6q and +3q preceding +18q, may be important in understanding tumorigenesis of DLBCL. The pathway, -6q preceding +6q, may have a close relationship with the tumorigenesis of non-GCB DLBCL.

  16. Bayesian latent feature modeling for modeling bipartite networks with overlapping groups

    DEFF Research Database (Denmark)

    Jørgensen, Philip H.; Mørup, Morten; Schmidt, Mikkel Nørgaard;

    2016-01-01

    Bi-partite networks are commonly modelled using latent class or latent feature models. Whereas the existing latent class models admit marginalization of parameters specifying the strength of interaction between groups, existing latent feature models do not admit analytical marginalization...... of the parameters accounting for the interaction strength within the feature representation. We propose a new binary latent feature model that admits analytical marginalization of interaction strengths such that model inference reduces to assigning nodes to latent features. We propose a constraint inspired...... to the infinite relational model and the infinite Bernoulli mixture model. We find that the model provides a new latent feature representation of structure while in link-prediction performing close to existing models. Our current extension of the notion of communities and collapsed inference to binary latent...

  17. The Economic Consequences of a Large EMU Results of Macroeconomic Model Simulations

    Directory of Open Access Journals (Sweden)

    Fritz Breuss

    1997-05-01

    Recent economic forecasts increase the probability that, firstly, the EMU can start as planned on January 1, 1999 and, secondly, that it will start with a large group of countries. The economic implications of the artificial unification of "hard-currency" and "soft-currency" countries are analysed by means of macroeconomic model simulations. The results of a large "non-optimal" EMU are as expected. On the one hand, there are positive income effects for all countries, although unevenly distributed over the participants; on the other hand, the internal (inflation) and external (value of the Euro vis-à-vis the Dollar) stability are at risk. The "hard-currency" group will be the major winner (in terms of real GDP and employment), whereas the "soft-currency" group has to carry the adjustment costs of a regime of fixed exchange rates (Euro), which results in slower growth, a decline in employment and a deterioration of their budgetary position. The necessary convergence of prices and interest rates leads to an increase (decrease) of inflation and interest rates in the "hard-currency" ("soft-currency") countries. If the EMU starts with a large group, there will be a tendency for the Euro to devalue against the Dollar. As a consequence of the uneven economic performance of a large (non-optimal) EMU, I would suggest starting the EMU with a core group of "hard-currency" countries. After this mini-EMU has succeeded, the other Member States could join.

  19. Using the IGCRA (individual, group, classroom reflective action) technique to enhance teaching and learning in large accountancy classes

    Directory of Open Access Journals (Sweden)

    Cristina Poyatos

    2011-02-01

    First year accounting has generally been perceived as one of the more challenging first year business courses for university students. Various Classroom Assessment Techniques (CATs) have been proposed to enrich and enhance student learning, with these studies generally positioning students as learners alone. This paper uses an educational case study approach to examine the implementation of the IGCRA (individual, group, classroom reflective action) technique, a Classroom Assessment Technique, on first year accounting students' learning performance. Building on theoretical frameworks in the areas of cognitive learning, social development, and dialogical learning, the technique uses reports to promote reflection on both learning and teaching. IGCRA was found to promote feedback on the effectiveness of student learning as well as teacher satisfaction. Moreover, the results indicated that formative feedback can help improve the learning and the learning environment for a large group of first year accounting students. Clear guidelines for its implementation are provided in the paper.

  20. Guidelines for a priori grouping of species in hierarchical community models

    Science.gov (United States)

    Pacifici, Krishna; Zipkin, Elise; Collazo, Jaime; Irizarry, Julissa I.; DeWan, Amielle A.

    2014-01-01

    Recent methodological advances permit the estimation of species richness and occurrences for rare species by linking species-level occurrence models at the community level. The value of such methods is underscored by the ability to examine the influence of landscape heterogeneity on species assemblages at large spatial scales. A salient advantage of community-level approaches is that parameter estimates for data-poor species are more precise as the estimation process borrows from data-rich species. However, this analytical benefit raises a question about the degree to which inferences are dependent on the implicit assumption of relatedness among species. Here, we assess the sensitivity of community/group-level metrics, and individual-level species inferences given various classification schemes for grouping species assemblages using multispecies occurrence models. We explore the implications of these groupings on parameter estimates for avian communities in two ecosystems: tropical forests in Puerto Rico and temperate forests in northeastern United States. We report on the classification performance and extent of variability in occurrence probabilities and species richness estimates that can be observed depending on the classification scheme used. We found estimates of species richness to be most precise and to have the best predictive performance when all of the data were grouped at a single community level. Community/group-level parameters appear to be heavily influenced by the grouping criteria, but were not driven strictly by total number of detections for species. We found different grouping schemes can provide an opportunity to identify unique assemblage responses that would not have been found if all of the species were analyzed together. 
We suggest three guidelines: (1) classification schemes should be determined based on study objectives; (2) model selection should be used to quantitatively compare different classification approaches; and (3) sensitivity

  1. Modelling the exposure of wildlife to radiation: key findings and activities of IAEA working groups

    Energy Technology Data Exchange (ETDEWEB)

    Beresford, Nicholas A. [NERC Centre for Ecology and Hydrology, Lancaster Environment Center, Library Av., Bailrigg, Lancaster, LA1 4AP (United Kingdom); School of Environment and Life Sciences, University of Salford, Manchester, M4 4WT (United Kingdom); Vives i Batlle, Jordi; Vandenhove, Hildegarde [Belgian Nuclear Research Centre, Belgian Nuclear Research Centre, Boeretang 200, 2400 Mol (Belgium); Beaugelin-Seiller, Karine [Institut de Radioprotection et de Surete Nucleaire (IRSN), PRP-ENV, SERIS, LM2E, Cadarache (France); Johansen, Mathew P. [ANSTO Australian Nuclear Science and Technology Organisation, New Illawarra Rd, Menai, NSW (Australia); Goulet, Richard [Canadian Nuclear Safety Commission, Environmental Risk Assessment Division, 280 Slater, Ottawa, K1A0H3 (Canada); Wood, Michael D. [School of Environment and Life Sciences, University of Salford, Manchester, M4 4WT (United Kingdom); Ruedig, Elizabeth [Department of Environmental and Radiological Health Sciences, Colorado State University, Fort Collins (United States); Stark, Karolina; Bradshaw, Clare [Department of Ecology, Environment and Plant Sciences, Stockholm University, SE-10691 (Sweden); Andersson, Pal [Swedish Radiation Safety Authority, SE-171 16, Stockholm (Sweden); Copplestone, David [Biological and Environmental Sciences, University of Stirling, Stirling, FK9 4LA (United Kingdom); Yankovich, Tamara L.; Fesenko, Sergey [International Atomic Energy Agency, Vienna International Centre, 1400, Vienna (Austria)

    2014-07-01

    In total, participants from 14 countries, representing 19 organisations, actively participated in the model application/inter-comparison activities of the IAEA's EMRAS II programme Biota Modelling Group. A range of models/approaches were used by participants (e.g. the ERICA Tool, RESRAD-BIOTA, the ICRP Framework). The agreed objectives of the group were: 'To improve Member States' capabilities for protection of the environment by comparing and validating models being used, or developed, for biota dose assessment (that may be used) as part of the regulatory process of licensing and compliance monitoring of authorised releases of radionuclides.' The activities of the group, the findings of which will be described, included: - An assessment of the predicted unweighted absorbed dose rates for 74 radionuclides estimated by 10 approaches for five of the ICRP's Reference Animal and Plant geometries, assuming 1 Bq per unit organism or media. - Modelling the effect of heterogeneous distributions of radionuclides in sediment profiles on the estimated exposure of organisms. - Model prediction - field data comparisons for freshwater ecosystems in a uranium mining area and a number of wetland environments. - An evaluation of the application of available models to a scenario considering radioactive waste buried in shallow trenches. - Estimating the contribution of ²³⁵U to dose rates in freshwater environments. - Evaluation of the factors contributing to variation in modelling results. The work of the group continues within the framework of the IAEA's MODARIA programme, which was initiated in 2012. The work plan of the MODARIA working group has largely been defined by the findings of the previous EMRAS programme. On-going activities of the working group, which will be described, include the development of a database of dynamic parameters for wildlife dose assessment and exercises involving modelling the exposure of organisms in the marine coastal

  2. Some combinatorial models for reduced expressions in Coxeter groups

    CERN Document Server

    Denoncourt, Hugh

    2011-01-01

    Stanley's formula for the number of reduced expressions of a permutation regarded as a Coxeter group element raises the question of how to enumerate the reduced expressions of an arbitrary Coxeter group element. We provide a framework for answering this question by constructing combinatorial objects that represent the inversion set and the reduced expressions for an arbitrary Coxeter group element. The framework also provides a formula for the length of an element formed by deleting a generator from a Coxeter group element. Fan and Hagiwara, et al. showed that for certain Coxeter groups, the short-braid avoiding elements characterize those elements that give reduced expressions when any generator is deleted from a reduced expression. We provide a characterization that holds in all Coxeter groups. Lastly, we give applications to the freely braided elements introduced by Green and Losonczy, generalizing some of their results that hold in simply-laced Coxeter groups to the arbitrary Coxeter group setting.
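
    In type A the enumeration can be made concrete: every reduced word of a permutation w ends with a generator s_i at a descent of w, so the reduced words of w are exactly those of the shorter element w·s_i with i appended. A short recursive sketch of this standard fact (my own illustration, not the combinatorial framework of the paper):

```python
def reduced_words(perm):
    """All reduced expressions for a permutation given in one-line notation
    (a tuple of 0-indexed values). Generators are labeled 1..n-1, with s_i
    swapping positions i and i+1.
    """
    if all(perm[i] < perm[i + 1] for i in range(len(perm) - 1)):
        return [[]]  # the identity has only the empty reduced word
    words = []
    for i in range(len(perm) - 1):
        if perm[i] > perm[i + 1]:  # descent: perm * s_{i+1} is strictly shorter
            shorter = perm[:i] + (perm[i + 1], perm[i]) + perm[i + 2:]
            words += [w + [i + 1] for w in reduced_words(shorter)]
    return words

# The longest element of S_3 has exactly two reduced words:
print(reduced_words((2, 1, 0)))  # [[1, 2, 1], [2, 1, 2]]
```

For the longest element of S_4 the routine returns 16 words, matching Stanley's count via standard Young tableaux of staircase shape.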

  3. Affine group formulation of the Standard Model coupled to gravity

    Energy Technology Data Exchange (ETDEWEB)

    Chou, Ching-Yi, E-mail: l2897107@mail.ncku.edu.tw [Department of Physics, National Cheng Kung University, Taiwan (China); Ita, Eyo, E-mail: ita@usna.edu [Department of Physics, US Naval Academy, Annapolis, MD (United States); Soo, Chopin, E-mail: cpsoo@mail.ncku.edu.tw [Department of Physics, National Cheng Kung University, Taiwan (China)

    2014-04-15

    In this work we apply the affine group formalism for four dimensional gravity of Lorentzian signature, which is based on Klauder's affine algebraic program, to the formulation of the Hamiltonian constraint of the interaction of matter and all forces, including gravity with non-vanishing cosmological constant Λ, as an affine Lie algebra. We use the Hermitian action of fermions coupled to gravitation and Yang–Mills theory to find the density weight one fermionic super-Hamiltonian constraint. This term, combined with the Yang–Mills and Higgs energy densities, is composed with York's integrated time functional. The result, when combined with the imaginary part of the Chern–Simons functional Q, forms the affine commutation relation with the volume element V(x). Affine algebraic quantization of gravitation and matter on equal footing implies a fundamental uncertainty relation which is predicated upon a non-vanishing cosmological constant. -- Highlights: •Wheeler–DeWitt equation (WDW) quantized as affine algebra, realizing Klauder's program. •WDW formulated for interaction of matter and all forces, including gravity, as affine algebra. •WDW features Hermitian generators in spite of fermionic content: Standard Model addressed. •Constructed a family of physical states for the full, coupled theory via affine coherent states. •Fundamental uncertainty relation, predicated on non-vanishing cosmological constant.

  4. On Range Searching in the Group Model and Combinatorial Discrepancy

    DEFF Research Database (Denmark)

    Larsen, Kasper Green

    2011-01-01

    In this paper we establish an intimate connection between dynamic range searching in the group model and combinatorial discrepancy. Our result states that, for a broad class of range searching data structures (including all known upper bounds), it must hold that $t_u t_q = \Omega(\mathrm{disc}^2/\lg n)$ where...... $t_u$ is the worst case update time, $t_q$ the worst case query time and $\mathrm{disc}$ is the combinatorial discrepancy of the range searching problem in question. This relation immediately implies a whole range of exceptionally high and near-tight lower bounds for all of the basic range searching problems....... We list a few of them in the following: For halfspace range searching in $d$-dimensional space, we get a lower bound of $t_u t_q = \Omega(n^{1-1/d}/\lg n)$. This comes within a $\lg n \lg \lg n$ factor of the best known upper bound. For orthogonal range searching in $d...

  5. ON range searching in the group model and combinatorial discrepancy

    DEFF Research Database (Denmark)

    Larsen, Kasper Green

    2014-01-01

    In this paper we establish an intimate connection between dynamic range searching in the group model and combinatorial discrepancy. Our result states that, for a broad class of range searching data structures (including all known upper bounds), it must hold that $t_u t_q = \Omega(\mathrm{disc}^2......)$, where $t_u$ is the worst case update time, $t_q$ is the worst case query time, and disc is the combinatorial discrepancy of the range searching problem in question. This relation immediately implies a whole range of exceptionally high and near-tight lower bounds for all of the basic range searching...... problems. We list a few of them in the following: (1) For $d$-dimensional halfspace range searching, we get a lower bound of $t_u t_q = \Omega(n^{1-1/d})$. This comes within an $\lg \lg n$ factor of the best known upper bound. (2) For orthogonal range searching, we get a lower bound of $t_u t...

  6. Spreadsheets Grow Up: Three Spreadsheet Engineering Methodologies for Large Financial Planning Models

    CERN Document Server

    Grossman, Thomas A

    2010-01-01

    Many large financial planning models are written in a spreadsheet programming language (usually Microsoft Excel) and deployed as a spreadsheet application. Three groups, FAST Alliance, Operis Group, and BPM Analytics (under the name "Spreadsheet Standards Review Board") have independently promulgated standardized processes for efficiently building such models. These spreadsheet engineering methodologies provide detailed guidance on design, construction process, and quality control. We summarize and compare these methodologies. They share many design practices, and standardized, mechanistic procedures to construct spreadsheets. We learned that a written book or standards document is by itself insufficient to understand a methodology. These methodologies represent a professionalization of spreadsheet programming, and can provide a means to debug a spreadsheet that contains errors. We find credible the assertion that these spreadsheet engineering methodologies provide enhanced productivity, accuracy and maintain...

  7. Formal language models for finding groups of experts

    NARCIS (Netherlands)

    S. Liang; M. de Rijke

    2016-01-01

    The task of finding groups or teams has recently received increased attention, as a natural and challenging extension of search tasks aimed at retrieving individual entities. We introduce a new group finding task: given a query topic, we try to find knowledgeable groups that have expertise on that t

  8. A Model Psychoeducational Group for Survivors of Organizational Downsizing.

    Science.gov (United States)

    Foley, Pamela F.; Smith, John E.

    1999-01-01

    Describes a one-day psychoeducational group for survivors of a recent organizational downsizing. Principal goal of the group is to prevent "Layoff Survivor Syndrome" through instruction and group exercises designed to normalize common responses and increase awareness of positive coping strategies. Provides descriptions of group…

  9. Reduction of large-scale numerical ground water flow models

    NARCIS (Netherlands)

    Vermeulen, P.T.M.; Heemink, A.W.; Testroet, C.B.M.

    2002-01-01

    Numerical models are often used for simulating ground water flow. Written in state space form, the dimension of these models is of the order of the number of model cells and can be very high (> million). As a result, these models are computationally very demanding, especially if many different scena

  10. Parallel runs of a large air pollution model on a grid of Sun computers

    DEFF Research Database (Denmark)

    Alexandrov, V.N.; Owczarz, W.; Thomsen, Per Grove

    2004-01-01

    Large-scale air pollution models can successfully be used in different environmental studies. These models are described mathematically by systems of partial differential equations. Splitting procedures followed by discretization of the spatial derivatives leads to several large systems of ordin...

  11. Validating the Runoff from the PRECIS Model Using a Large-Scale Routing Model

    Institute of Scientific and Technical Information of China (English)

    CAO Lijuan; DONG Wenjie; XU Yinlong; ZHANG Yong; Michael SPARROW

    2007-01-01

    The streamflow over the Yellow River basin is simulated using the PRECIS (Providing REgional Climates for Impacts Studies) regional climate model driven by 15-year (1979-1993) ECMWF reanalysis data as the initial and lateral boundary conditions and an off-line large-scale routing model (LRM). The LRM uses physical catchment and river channel information and allows streamflow to be predicted for large continental rivers with a 1° × 1° spatial resolution. The results show that the PRECIS model can reproduce the general southeast to northwest gradient distribution of the precipitation over the Yellow River basin. The PRECIS-LRM model combination has the capability to simulate the seasonal and annual streamflow over the Yellow River basin. The simulated streamflow is generally coincident with the naturalized streamflow both in timing and in magnitude.

  12. Advances and visions in large-scale hydrological modelling: findings from the 11th workshop on large-scale hydrological modelling

    NARCIS (Netherlands)

    Döll, P.; Berkhoff, K.; Bormann, H.; Fohrer, N.; Gerten, D.; Hagemann, S.; Krol, Martinus S.

    2008-01-01

    Large-scale hydrological modelling has become increasingly wide-spread during the last decade. An annual workshop series on large-scale hydrological modelling has provided, since 1997, a forum to the German-speaking community for discussing recent developments and achievements in this research area.

  13. Large animal models of rare genetic disorders: sheep as phenotypically relevant models of human genetic disease.

    Science.gov (United States)

    Pinnapureddy, Ashish R; Stayner, Cherie; McEwan, John; Baddeley, Olivia; Forman, John; Eccles, Michael R

    2015-09-02

    Animals that accurately model human disease are invaluable in medical research, allowing a critical understanding of disease mechanisms, and the opportunity to evaluate the effect of therapeutic compounds in pre-clinical studies. Many types of animal models are used world-wide, with the most common being small laboratory animals, such as mice. However, rodents often do not faithfully replicate human disease, despite their predominant use in research. This discordancy is due in part to physiological differences, such as body size and longevity. In contrast, large animal models, including sheep, provide an alternative to mice for biomedical research due to their greater physiological parallels with humans. Completion of the full genome sequences of many species, and the advent of Next Generation Sequencing (NGS) technologies, means it is now feasible to screen large populations of domesticated animals for genetic variants that resemble human genetic diseases, and generate models that more accurately model rare human pathologies. In this review, we discuss the notion of using sheep as large animal models, and their advantages in modelling human genetic disease. We exemplify several existing naturally occurring ovine variants in genes that are orthologous to human disease genes, such as the Cln6 sheep model for Batten disease. These, and other sheep models, have contributed significantly to our understanding of the relevant human disease process, in addition to providing opportunities to trial new therapies in animals with similar body and organ size to humans. Therefore sheep are a significant species with respect to the modelling of rare genetic human disease, which we summarize in this review.

  14. Modeling a Dry Etch Process for Large-Area Devices

    Energy Technology Data Exchange (ETDEWEB)

    Buss, R.J.; Hebner, G.A.; Ruby, D.S.; Yang, P.

    1999-07-28

    There has been considerable interest in developing dry processes which can effectively replace wet processing in the manufacture of large area photovoltaic devices. Environmental and health issues are a driver for this activity because wet processes generally increase worker exposure to toxic and hazardous chemicals and generate large volumes of liquid hazardous waste. Our work has been directed toward improving the performance of screen-printed solar cells while using plasma processing to reduce hazardous chemical usage.

  15. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    NARCIS (Netherlands)

    Loon, van A.F.; Huijgevoort, van M.H.J.; Lanen, van H.A.J.

    2012-01-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is how well large-scale models simulate the propagation from meteorological to hydrological

  16. Triviality of hierarchical O(N) spin model in four dimensions with large N

    CERN Document Server

    Watanabe, H

    2003-01-01

    The renormalization group transformation for the hierarchical O(N) spin model in four dimensions is studied by means of characteristic functions of single-site measures, and convergence of the critical trajectory to the Gaussian fixed point is shown for sufficiently large N. In the strong coupling regime, the trajectory is controlled with the help of the exactly solved O(∞) trajectory, while, in the weak coupling regime, convergence to the Gaussian fixed point is shown by power decay of the effective coupling constant.

  17. Long-Run Properties of Large-Scale Macroeconometric Models

    OpenAIRE

    Kenneth F. Wallis; John D. Whitley

    1987-01-01

    We consider alternative approaches to the evaluation of the long-run properties of dynamic nonlinear macroeconometric models, namely dynamic simulation over an extended database, or the construction and direct solution of the steady-state version of the model. An application to a small model of the UK economy is presented. The model is found to be unstable, but a stable form can be produced by simple alterations to the structure.

  18. A SUSY SO(10) model with large tan$\\beta$

    CERN Document Server

    Lazarides, G

    1994-01-01

    We construct a supersymmetric SO(10) model with the asymptotic relation tan\\beta \\simeq m_t/m_b automatically arising from its structure. The model retains the significant Minimal Supersymmetric Standard Model predictions for sin^2 \\theta_w and \\alpha_s and contains an automatic Z_2 matter parity. Proton decay through d=5 operators is sufficiently suppressed. It is remarkable that no global symmetries need to be imposed on the model.

  19. Using Breakout Groups as an Active Learning Technique in a Large Undergraduate Nutrition Classroom at the University of Guelph

    Directory of Open Access Journals (Sweden)

    Genevieve Newton

    2012-12-01

    Breakout groups have been widely used under many different conditions, but the lack of published information related to their use in undergraduate settings highlights the need for research related to their use in this context. This paper describes a study investigating the use of breakout groups in undergraduate education, specifically in teaching a large 4th-year undergraduate Nutrition class in a physically constrained lecture space. In total, 220 students completed a midterm survey and 229 completed a final survey designed to measure student satisfaction. Survey results were further analyzed to measure relationships between student perception of breakout group effectiveness and (1) gender and (2) cumulative GPA. Results of both surveys revealed that over 85% of students either agreed or strongly agreed that using breakout groups enhanced their learning experience, with females showing a significantly greater level of satisfaction and a higher final course grade than males. Although not stratified by gender, a consistent finding between surveys was a lower perception of breakout group effectiveness among students with a cumulative GPA above 90%. The majority of respondents felt that despite the awkward room space, the breakout groups were easy to create and participate in, which suggests that breakout groups can be successfully used in a large undergraduate classroom despite physical constraints. The findings of this work are relevant given the applicability of breakout groups to a wide range of disciplines and the relative ease of integration into a traditional lecture format.

  20. Large scale modelling of catastrophic floods in Italy

    Science.gov (United States)

    Azemar, Frédéric; Nicótina, Ludovico; Sassi, Maximiliano; Savina, Maurizio; Hilberts, Arno

    2017-04-01

    The RMS European Flood HD model® is a suite of country scale flood catastrophe models covering 13 countries throughout continental Europe and the UK. The models are developed with the goal of supporting risk assessment analyses for the insurance industry. Within this framework RMS is developing a hydrologic and inundation model for Italy. The model aims at reproducing the hydrologic and hydraulic properties across the domain through a modeling chain. A semi-distributed hydrologic model that allows capturing the spatial variability of the runoff formation processes is coupled with a one-dimensional river routing algorithm and a two-dimensional (depth averaged) inundation model. This model setup allows capturing the flood risk from both pluvial (overland flow) and fluvial flooding. Here we describe the calibration and validation methodologies for this modelling suite applied to the Italian river basins. The variability that characterizes the domain (in terms of meteorology, topography and hydrologic regimes) requires a modeling approach able to represent a broad range of meteo-hydrologic regimes. The calibration of the rainfall-runoff and river routing models is performed by means of a genetic algorithm that identifies the set of best performing parameters within the search space over the last 50 years. We first establish the quality of the calibration parameters on the full hydrologic balance and on individual discharge peaks by comparing extreme statistics to observations over the calibration period on several stations. The model is then used to analyze the major floods in the country; we discuss the different meteorological setup leading to the historical events and the physical mechanisms that induced these floods. We can thus assess the performance of RMS' hydrological model in view of the physical mechanisms leading to flood and highlight the main controls on flood risk modelling throughout the country. The model's ability to accurately simulate antecedent

  1. Minimal axiom group of similarity-based rough set model

    Institute of Scientific and Technical Information of China (English)

    DAI Jian-hua; PAN Yun-he

    2006-01-01

    Rough set axiomatization is one aspect of rough set study that characterizes rough set theory using dependable and minimal axiom groups. Thus, rough set theory can be studied by logic and axiom system methods. The classical rough set theory is based on an equivalence relation, but rough set theory based on a similarity relation has wide applications in the real world. To characterize similarity-based rough set theory, an axiom group named S, consisting of 3 axioms, is proposed. The reliability of the axiom group, which shows that characterizing rough set theory based on a similarity relation is rational, is proved. Simultaneously, the minimization of the axiom group, which requires that each axiom is an equation and independent, is proved. The axiom group is helpful for researching rough set theory by logic and axiom system methods.
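    Under a similarity (tolerance) relation of the kind the abstract discusses, the lower and upper approximations of a set can be computed directly. The following is a minimal Python sketch under assumed definitions (the toy universe and the `similar` predicate are invented for illustration; this is not the axiom group S itself):

    ```python
    # Lower/upper approximations in a similarity-based rough set model.
    # The similarity relation is a tolerance relation: reflexive and
    # symmetric, but (unlike an equivalence relation) not necessarily
    # transitive. Hypothetical illustration only.

    def similarity_classes(universe, similar):
        """Map each element to the set of elements similar to it."""
        return {x: {y for y in universe if similar(x, y)} for x in universe}

    def lower_approximation(universe, target, similar):
        """Elements whose entire similarity class lies inside the target set."""
        cls = similarity_classes(universe, similar)
        return {x for x in universe if cls[x] <= target}

    def upper_approximation(universe, target, similar):
        """Elements whose similarity class intersects the target set."""
        cls = similarity_classes(universe, similar)
        return {x for x in universe if cls[x] & target}

    # Toy universe: integers 0..5, "similar" if they differ by at most 1.
    U = set(range(6))
    X = {2, 3, 4}
    sim = lambda a, b: abs(a - b) <= 1
    lower = lower_approximation(U, X, sim)   # {3}
    upper = upper_approximation(U, X, sim)   # {1, 2, 3, 4, 5}
    ```

    The inclusion lower ⊆ X ⊆ upper always holds, since the relation is reflexive.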

  2. Lattice Boltzmann modeling of multiphase flows at large density ratio with an improved pseudopotential model

    CERN Document Server

    Li, Q; Li, X J

    2012-01-01

    Owing to its conceptual simplicity and computational efficiency, the pseudopotential multiphase lattice Boltzmann (LB) model has attracted significant attention since its emergence. In this work, we aim to extend the pseudopotential LB model to the simulations of multiphase flows at large density ratio and relatively high Reynolds number. First, based on our recent work [Li et al., Phys. Rev. E. 86, 016709 (2012)], an improved forcing scheme is proposed for the multiple-relaxation-time (MRT) pseudopotential LB model in order to achieve thermodynamic consistency and large density ratio in the model. Next, through investigating the effects of the parameter a in the Carnahan-Starling equation of state, we find that, as compared with a = 1, a = 0.25 is capable of greatly reducing the magnitude of the spurious currents at large density ratio. Furthermore, it is found that a lower liquid viscosity can be gained in the pseudopotential LB model by increasing the kinematic viscosity ratio between the vapor and liquid ...
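    The Carnahan-Starling equation of state the abstract investigates can be sketched directly. The form below follows the common pairing with pseudopotential LB models, p = ρRT(1 + η + η² − η³)/(1 − η)³ − aρ² with packing fraction η = bρ/4; the parameter values (a, b, R, T) are illustrative assumptions, not the paper's settings:

    ```python
    # Carnahan-Starling equation of state, as commonly used with
    # pseudopotential lattice Boltzmann models. Parameter values are
    # illustrative only.

    def carnahan_starling_pressure(rho, T, a=1.0, b=4.0, R=1.0):
        eta = b * rho / 4.0   # packing fraction
        repulsive = rho * R * T * (1 + eta + eta**2 - eta**3) / (1 - eta)**3
        return repulsive - a * rho**2

    # Below the critical temperature the isotherm develops a van der Waals
    # loop (pressure decreases with density over an interval), which is what
    # permits liquid-vapor coexistence at a large density ratio.
    p_low = carnahan_starling_pressure(0.1, T=0.07)
    p_mid = carnahan_starling_pressure(0.2, T=0.07)
    p_high = carnahan_starling_pressure(0.35, T=0.07)
    ```

    Reducing a rescales the attractive term, which is the knob the abstract reports as controlling the magnitude of spurious currents.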

  3. Improvement of Large Animal Model for Studying Osteoporosis

    Directory of Open Access Journals (Sweden)

    Kiełbowicz Zdzisław

    2015-04-01

    The aim of the study was to determine the impact of steroidal medications on the structure and mechanical properties of the supporting tissues of sheep under experimentally-induced osteoporosis. A total of 21 sheep were used, divided into three groups: a negative control (KN) (n = 3), a positive control (KP) (n = 3) with ovariectomy, and a steroidal group (KS) (n = 15) with ovariectomy and glucocorticosteroids. All animals were kept on a low-protein and low-mineral diet and had limited physical activity and access to sunlight. Quantitative computed tomography was the examination method. The declines in the examined parameter values in the KS group were more than three times higher than in the KN group. The study suggests that glucocorticosteroid therapy accelerates and intensifies the processes taking place in the course of osteoporosis. The combination of glucocorticosteroids with ovariectomy, a restrictive diet, limited physical activity, and no access to sunlight leads to a decrease in radiological bone density.

  4. Pion-Nucleon Scattering in a Large-N Sigma Model

    CERN Document Server

    Mattis, M P; MATTIS, Michael P.; SILBAR, Richard R.

    1995-01-01

    We review the large-N_c approach to meson-baryon scattering, including recent interesting developments. We then study pion-nucleon scattering in a particular variant of the linear sigma-model, in which the couplings of the sigma and pi mesons to the nucleon are echoed by couplings to the entire tower of I=J baryons (including the Delta) as dictated by large-N_c group theory. We sum the complete set of multi-loop meson-exchange \\pi N --> \\pi N and \\pi N --> \\sigma N Feynman diagrams, to leading order in 1/N_c. The key idea, reviewed in detail, is that large-N_c allows the approximation of LOOP graphs by TREE graphs, so long as the loops contain at least one baryon leg; trees, in turn, can be summed by solving classical equations of motion. We exhibit the resulting partial-wave S-matrix and the rich nucleon and Delta resonance spectrum of this simple model, comparing not only to experiment but also to pion-nucleon scattering in the Skyrme model. The moral is that much of the detailed structure of the meson-bary...

  5. Renormalization-group theory for the eddy viscosity in subgrid modeling

    Science.gov (United States)

    Zhou, YE; Vahala, George; Hossain, Murshed

    1988-01-01

    Renormalization-group theory is applied to incompressible three-dimensional Navier-Stokes turbulence so as to eliminate unresolvable small scales. The renormalized Navier-Stokes equation now includes a triple nonlinearity with the eddy viscosity exhibiting a mild cusp behavior, in qualitative agreement with the test-field model results of Kraichnan. For the cusp behavior to arise, not only is the triple nonlinearity necessary but the effects of pressure must be incorporated in the triple term. The renormalized eddy viscosity will not exhibit a cusp behavior if it is assumed that a spectral gap exists between the large and small scales.

  6. Advances and visions in large-scale hydrological modelling: findings from the 11th Workshop on Large-Scale Hydrological Modelling

    Directory of Open Access Journals (Sweden)

    P. Döll

    2008-10-01

    Large-scale hydrological modelling has become increasingly widespread during the last decade. An annual workshop series on large-scale hydrological modelling has provided, since 1997, a forum to the German-speaking community for discussing recent developments and achievements in this research area. In this paper we present the findings from the 2007 workshop, which focused on advances and visions in large-scale hydrological modelling. We identify the state of the art, difficulties and research perspectives with respect to the themes "sensitivity of model results", "integrated modelling" and "coupling of processes in hydrosphere, atmosphere and biosphere". Some achievements in large-scale hydrological modelling during the last ten years are presented together with a selection of remaining challenges for the future.

  7. Modeling Temporal Behavior in Large Networks: A Dynamic Mixed-Membership Model

    Energy Technology Data Exchange (ETDEWEB)

    Rossi, R; Gallagher, B; Neville, J; Henderson, K

    2011-11-11

    Given a large time-evolving network, how can we model and characterize the temporal behaviors of individual nodes (and network states)? How can we model the behavioral transition patterns of nodes? We propose a temporal behavior model that captures the 'roles' of nodes in the graph and how they evolve over time. The proposed dynamic behavioral mixed-membership model (DBMM) is scalable, fully automatic (no user-defined parameters), non-parametric/data-driven (no specific functional form or parameterization), interpretable (identifies explainable patterns), and flexible (applicable to dynamic and streaming networks). Moreover, the interpretable behavioral roles are generalizable and computationally efficient, and the model natively supports attributes. We applied our model for (a) identifying patterns and trends of nodes and network states based on temporal behavior, (b) predicting future structural changes, and (c) detecting unusual temporal behavior transitions. We use eight large real-world datasets from different time-evolving settings (dynamic and streaming). In particular, we model the evolving mixed-memberships and the corresponding behavioral transitions of Twitter, Facebook, IP-Traces, Email (University), Internet AS, Enron, Reality, and IMDB. The experiments demonstrate the scalability, flexibility, and effectiveness of our model for identifying interesting patterns, detecting unusual structural transitions, and predicting the future structural changes of the network and individual nodes.
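    The role-transition idea behind a mixed-membership model of this kind can be illustrated with a simple least-squares stand-in: given node-by-role membership matrices at two consecutive snapshots (rows sum to one), fit a row-stochastic role-transition matrix. Everything below (the estimator, the sizes, the data) is an illustrative assumption, not the DBMM algorithm itself:

    ```python
    import numpy as np

    # Sketch of a role-transition estimation step: fit T with
    # M_{t+1} ~ M_t @ T, then project T onto row-stochastic matrices.

    def estimate_role_transitions(M_t, M_next):
        T, *_ = np.linalg.lstsq(M_t, M_next, rcond=None)
        T = np.clip(T, 0.0, None)             # transitions are non-negative
        T /= T.sum(axis=1, keepdims=True)     # each row is a distribution
        return T

    rng = np.random.default_rng(0)
    M_t = rng.dirichlet(np.ones(3), size=50)  # 50 nodes, 3 roles
    T_true = np.array([[0.8, 0.1, 0.1],
                       [0.2, 0.7, 0.1],
                       [0.1, 0.2, 0.7]])
    M_next = M_t @ T_true                      # noiseless synthetic snapshot
    T_hat = estimate_role_transitions(M_t, M_next)
    ```

    On noiseless data the fit recovers the planted transition matrix; with real snapshots one would interpret rows of T as "how nodes in role i redistribute over roles at the next time step".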

  8. Quantum groups as generalized gauge symmetries in WZNW models. Part II. The quantized model

    Science.gov (United States)

    Hadjiivanov, L.; Furlan, P.

    2017-07-01

    This is the second part of a paper dealing with the "internal" (gauge) symmetry of the Wess-Zumino-Novikov-Witten (WZNW) model on a compact Lie group G. It contains a systematic exposition, for G = SU( n), of the canonical quantization based on the study of the classical model (performed in the first part) following the quantum group symmetric approach first advocated by L.D. Faddeev and collaborators. The internal symmetry of the quantized model is carried by the chiral WZNW zero modes satisfying quadratic exchange relations and an n-linear determinant condition. For generic values of the deformation parameter the Fock representation of the zero modes' algebra gives rise to a model space of U q ( sl( n)). The relevant root of unity case is studied in detail for n = 2 when a "restricted" (finite dimensional) quotient quantum group is shown to appear in a natural way. The module structure of the zero modes' Fock space provides a specific duality with the solutions of the Knizhnik-Zamolodchikov equation for the four point functions of primary fields suggesting the existence of an extended state space of logarithmic CFT type. Combining left and right zero modes (i.e., returning to the 2 D model), the rational CFT structure shows up in a setting reminiscent to covariant quantization of gauge theories in which the restricted quantum group plays the role of a generalized gauge symmetry.

  9. Light dilaton in the large N tricritical O (N ) model

    Science.gov (United States)

    Omid, Hamid; Semenoff, Gordon W.; Wijewardhana, L. C. R.

    2016-12-01

    The leading order of the large N limit of the O (N ) symmetric phi-six theory in three dimensions has a phase which exhibits spontaneous breaking of scale symmetry accompanied by a massless dilaton which is a Goldstone boson. At the next-to-leading order in large N , the phi-six coupling has a beta function of order 1 /N and it is expected that the dilaton acquires a small mass, proportional to the beta function and the condensate. In this article, we show that this "light dilaton" is actually a tachyon. This indicates an instability of the phase of the theory with spontaneously broken approximate scale invariance.

  10. The standard model Higgs search at the large hadron collider

    Indian Academy of Sciences (India)

    Satyaki Bhattacharya; on behalf of the CMS and the ATLAS Collaborations

    2007-11-01

    The experiments at the large hadron collider (LHC) will probe for the Higgs boson in the mass range between the lower bound on the Higgs mass set by the experiments at the large electron positron collider (LEP) and the unitarity bound (∼ 1 TeV). Strategies are being developed to look for signatures of the Higgs boson and measure its properties. In this paper, results from full detector simulation-based studies on Higgs discovery from both the ATLAS and CMS experiments at the LHC are presented. Results of simulation studies on Higgs coupling measurements at the LHC are also discussed.

  11. Facilitated spin models in one dimension: a real-space renormalization group study.

    Science.gov (United States)

    Whitelam, Stephen; Garrahan, Juan P

    2004-10-01

    We use a real-space renormalization group (RSRG) to study the low-temperature dynamics of kinetically constrained Ising chains (KCICs). We consider the cases of the Fredrickson-Andersen (FA) model, the East model, and the partially asymmetric KCIC. We show that the RSRG allows one to obtain in a unified manner the dynamical properties of these models near their zero-temperature critical points. These properties include the dynamic exponent, the growth of dynamical length scales, and the behavior of the excitation density near criticality. For the partially asymmetric chain, the RG predicts a crossover, on sufficiently large length and time scales, from East-like to FA-like behavior. Our results agree with the known results for KCICs obtained by other methods.
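    The kinetically constrained chains the abstract studies are easy to simulate directly, which makes the constraint concrete. Below is a minimal heat-bath Monte Carlo sketch of the East model (a site may be updated only when its left neighbour is excited); it is an illustrative toy under assumed conventions, not the RSRG calculation of the paper:

    ```python
    import math
    import random

    # Heat-bath dynamics of the 1D East model. At temperature T the
    # equilibrium excitation density is c = 1/(1 + exp(1/T)); the kinetic
    # constraint changes the dynamics, not the equilibrium state.

    def east_model_sweeps(n_sites, temperature, n_sweeps, seed=0):
        rng = random.Random(seed)
        c = 1.0 / (1.0 + math.exp(1.0 / temperature))  # equilibrium density
        spins = [1 if rng.random() < c else 0 for _ in range(n_sites)]
        spins[0] = 1   # pin one excitation so the dynamics never fully freeze
        for _ in range(n_sweeps * n_sites):
            i = rng.randrange(1, n_sites)
            if spins[i - 1] == 1:                        # kinetic constraint
                spins[i] = 1 if rng.random() < c else 0  # heat-bath update
        return spins

    chain = east_model_sweeps(n_sites=200, temperature=1.0, n_sweeps=200)
    density = sum(chain) / len(chain)
    ```

    Lowering the temperature drives c toward zero, which is the regime where the RSRG analysis of dynamical length scales applies.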

  12. A Categorical Model for the Virtual Braid Group

    OpenAIRE

    Kauffman, Louis H.; Lambropoulou, Sofia

    2011-01-01

    This paper gives a new interpretation of the virtual braid group in terms of a strict monoidal category SC that is freely generated by one object and three morphisms, two of the morphisms corresponding to basic pure virtual braids and one morphism corresponding to a transposition in the symmetric group. The key to this approach is to take pure virtual braids as primary. The generators of the pure virtual braid group are abstract solutions to the algebraic Yang-Baxter equation. This point of v...

  13. Modelling the spreading of large-scale wildland fires

    CERN Document Server

    Drissi, Mohamed

    2014-01-01

    The objective of the present study is twofold. First, the last developments and validation results of a hybrid model designed to simulate fire patterns in heterogeneous landscapes are presented. The model combines the features of a stochastic small-world network model with those of a deterministic semi-physical model of the interaction between burning and non-burning cells that strongly depends on local conditions of wind, topography, and vegetation. Radiation and convection from the flaming zone, and radiative heat loss to the ambient are considered in the preheating process of unburned cells. Second, the model is applied to an Australian grassland fire experiment as well as to a real fire that took place in Corsica in 2009. Predictions compare favorably to experiments in terms of rate of spread, area and shape of the burn. Finally, the sensitivity of the model outcomes (here the rate of spread) to six input parameters is studied using a two-level full factorial design.
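    The interaction between burning and non-burning cells described above can be caricatured with a simple cellular automaton. The sketch below is a toy stand-in under invented assumptions (a uniform spread probability instead of the model's wind/topography/vegetation-dependent physics and small-world links):

    ```python
    import random

    # Toy cellular-automaton fire spread: 0 = unburned, 1 = burning,
    # 2 = burned out. A burning cell ignites each unburned 4-neighbour
    # with probability p_spread, then burns out after one step.

    def spread_fire(grid, p_spread, n_steps, seed=0):
        rng = random.Random(seed)
        rows, cols = len(grid), len(grid[0])
        for _ in range(n_steps):
            new = [row[:] for row in grid]
            for r in range(rows):
                for c in range(cols):
                    if grid[r][c] == 1:
                        new[r][c] = 2  # burns out after one step
                        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            rr, cc = r + dr, c + dc
                            if (0 <= rr < rows and 0 <= cc < cols
                                    and grid[rr][cc] == 0
                                    and rng.random() < p_spread):
                                new[rr][cc] = 1
            grid = new
        return grid

    ignited = [[0] * 9 for _ in range(9)]
    ignited[4][4] = 1                       # single ignition point
    burn = spread_fire(ignited, p_spread=0.7, n_steps=6)
    ```

    Rate of spread and burn shape, the quantities the study validates against experiments, fall out of counting state changes per step in such a grid.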

  14. Prognostic value of ABO blood group in patients with renal cell carcinoma: single-institution results from a large cohort.

    Science.gov (United States)

    Lee, Chunwoo; You, Dalsan; Sohn, Mooyoung; Jeong, In Gab; Song, Cheryn; Kwon, Taekmin; Hong, Bumsik; Hong, Jun Hyuk; Ahn, Hanjong; Kim, Choung-Soo

    2015-08-01

    To evaluate the association between ABO blood group and prognosis in patients with renal cell carcinoma (RCC) undergoing surgery. A review of the nephrectomy database of the Asan Medical Center identified 3,172 consecutive patients who underwent nephrectomy for RCC between 1997 and 2012. Patients were followed up for a median 60.2 months (interquartile range 33-102 months). Recurrence-free (RFS), cancer-specific (CSS), and overall survival (OS) were calculated by the Kaplan-Meier method and compared using the log-rank test. A Cox proportional hazards regression model was used to estimate the prognostic significance of each variable. Of these 3,172 patients, 915 (28.8 %), 1,057 (33.7 %), 860 (26.7 %) and 340 (10.8 %) were blood types O, A, B, and AB, respectively. ABO blood group was not associated with age, sex, operation method, American Society of Anesthesiologists physical status classification, histologic subtype, or pathological TNM stage. The 5-year OS rates in patients with blood types O, A, B, and AB were 86.0, 86.8, 86.6, and 88.6 %, respectively, and the 10-year OS rates were 78.7, 78.6, 79.1, and 76.9 %, respectively (P = 0.990). ABO blood group was not significantly associated with RFS (P = 0.921) or CSS (P = 0.808). Univariable and multivariable analyses showed that ABO blood group was not a significant prognostic factor of RFS, CSS, or OS. Our study found that ABO blood group is not associated with survival outcomes and is not a prognostic factor in patients who underwent surgery for RCC.
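    The Kaplan-Meier method used for the RFS/CSS/OS comparisons above can be written out in a few lines. This is a generic pure-Python sketch with made-up follow-up data, not the study's analysis:

    ```python
    # Kaplan-Meier estimator: at each event time t, survival is multiplied
    # by (1 - deaths_at_t / number_at_risk_just_before_t). Subjects censored
    # at t count as at risk at t.

    def kaplan_meier(times, events):
        """times: follow-up times; events: 1 = event occurred, 0 = censored.
        Returns [(time, survival probability)] at each event time."""
        at_risk = len(times)
        order = sorted(zip(times, events))   # process subjects in time order
        survival, curve = 1.0, []
        i = 0
        while i < len(order):
            t = order[i][0]
            deaths = censored = 0
            while i < len(order) and order[i][0] == t:
                if order[i][1] == 1:
                    deaths += 1
                else:
                    censored += 1
                i += 1
            if deaths:
                survival *= 1 - deaths / at_risk
                curve.append((t, survival))
            at_risk -= deaths + censored
        return curve

    # Hypothetical cohort: event at 5, event + censoring at 8, event at 12.
    km = kaplan_meier([5, 8, 8, 12, 15, 20], [1, 1, 0, 1, 0, 0])
    ```

    The log-rank test then compares such curves between groups (here, blood types) by pooling expected event counts at each event time.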

  15. Experimental Investigation of Very Large Model Wind Turbine Arrays

    Science.gov (United States)

    Charmanski, Kyle; Wosnik, Martin

    2013-11-01

    The decrease in energy yield in large wind farms (array losses) and associated revenue losses can be significant. When arrays are sufficiently large they can reach what is known as a fully developed wind turbine array boundary layer, or fully developed wind farm condition. This occurs when the turbulence statistics and the structure of the turbulence, within and above a wind farm, as well as the performance of the turbines, remain the same from one row to the next. The study of this condition and how it is affected by parameters such as turbine spacing, power extraction, tip speed ratio, etc. is important for the optimization of large wind farms. An experimental investigation of the fully developed wind farm condition was conducted using a large array of porous disks (upstream) and realistically scaled 3-bladed wind turbines with a diameter of 0.25 m. The turbines and porous disks were placed inside a naturally grown turbulent boundary layer in the 6 m × 2.5 m × 72 m test section of the UNH Flow Physics Facility, which can achieve test section velocities of up to 14 m/s and Reynolds numbers δ+ = δu_τ/ν ~ 20,000. Power, rate of rotation and rotor thrust were measured for select turbines, and hot-wire anemometry was used for flow measurements.

  16. On-line core monitoring system based on buckling corrected modified one group model

    Energy Technology Data Exchange (ETDEWEB)

    Freire, Fernando S., E-mail: freire@eletronuclear.gov.br [ELETROBRAS Eletronuclear Gerencia de Combustivel Nuclear, Rio de Janeiro, RJ (Brazil)

    2011-07-01

    Nuclear power reactors require core monitoring during plant operation. To provide safe, clean, and reliable power, core conditions must be continuously evaluated. Currently, the reactor core monitoring process is carried out by nuclear code systems that, together with data from plant instrumentation such as thermocouples, ex-core detectors, and fixed or moveable in-core detectors, can easily predict and monitor a variety of plant conditions. Typically, standard nodal methods can be found at the heart of such nuclear monitoring code systems. However, standard nodal methods require large computer running times when compared with standard coarse-mesh finite difference schemes. Unfortunately, classic finite-difference models require a fine-mesh reactor core representation. To overcome this limitation, the classic modified one-group model can be used to account for the main core neutronic behavior. In this model, a coarse-mesh core representation can be easily evaluated with a crude treatment of thermal neutron leakage. In this work, an improvement to the classic modified one-group model based on a buckling thermal correction was used to obtain a fast, accurate, and reliable core monitoring methodology for future applications, providing a powerful tool for the core monitoring process. (author)
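    In the modified one-group picture, leakage enters through a buckling term, giving k_eff = k_inf / (1 + M²B²) with migration area M² and geometric buckling B². A numeric sketch with invented reactor dimensions and parameters (illustrative only, not the paper's buckling correction):

    ```python
    import math

    # Modified one-group criticality: k_eff = k_inf / (1 + M^2 * B^2).
    # All numbers below are made up for illustration.

    def k_effective(k_inf, migration_area, buckling):
        """Criticality with a buckling-based leakage correction."""
        return k_inf / (1.0 + migration_area * buckling)

    def geometric_buckling_cylinder(height_cm, radius_cm):
        """Geometric buckling B^2 for a bare finite cylinder:
        (pi/H)^2 + (2.405/R)^2, 2.405 being the first zero of J0."""
        return (math.pi / height_cm) ** 2 + (2.405 / radius_cm) ** 2

    B2 = geometric_buckling_cylinder(height_cm=366.0, radius_cm=168.0)
    keff = k_effective(k_inf=1.05, migration_area=60.0, buckling=B2)
    ```

    The buckling thermal correction discussed above amounts to adjusting this leakage term so that a coarse-mesh solution tracks the thermal-neutron behavior a fine-mesh model would capture.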

  17. Comparing large-scale computational approaches to epidemic modeling: Agent-based versus structured metapopulation models

    Directory of Open Access Journals (Sweden)

    Merler Stefano

    2010-06-01

    Background: In recent years, large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increased frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading pattern found by different modeling approaches may differ and depend on the different approximations and assumptions used. Methods: We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based models are synchronized in their initial conditions by using the same disease parameterization and by defining the same importation of infected cases from international travels. Results: The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The relative difference of the epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the structure of the intra-population contact pattern of the approaches.
The age
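    The metapopulation approach compared above can be caricatured in a few lines: subpopulations with internal SIR dynamics, coupled by the movement of infectives. Everything below (populations, rates, travel fractions, ring topology) is invented for illustration and is far simpler than GLEaM:

    ```python
    import random

    # Stochastic metapopulation SIR sketch: binomial-chain epidemic step
    # inside each subpopulation, then a travel step moving a fraction of
    # infectives to the next subpopulation on a ring.

    def step(pops, beta, gamma, travel, rng):
        for p in pops:
            n = p["S"] + p["I"] + p["R"]
            p_inf = 1 - (1 - beta / n) ** p["I"] if p["I"] else 0.0
            new_inf = sum(1 for _ in range(p["S"]) if rng.random() < p_inf)
            new_rec = sum(1 for _ in range(p["I"]) if rng.random() < gamma)
            p["S"] -= new_inf
            p["I"] += new_inf - new_rec
            p["R"] += new_rec
        moves = [int(p["I"] * travel) for p in pops]   # travel step
        for i, m in enumerate(moves):
            pops[i]["I"] -= m
            pops[(i + 1) % len(pops)]["I"] += m

    rng = random.Random(1)
    pops = [{"S": 1000, "I": 0, "R": 0} for _ in range(3)]
    pops[0]["I"] = 10                     # seed the outbreak in one patch
    for _ in range(100):
        step(pops, beta=0.4, gamma=0.2, travel=0.05, rng=rng)
    ```

    An agent-based model would instead represent each individual and their contacts explicitly; the comparison in the paper is precisely about how much such structural detail changes the predicted geotemporal pattern.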

  18. Running Large-Scale Air Pollution Models on Parallel Computers

    DEFF Research Database (Denmark)

    Georgiev, K.; Zlatev, Z.

    2000-01-01

    Proceedings of the 23rd NATO/CCMS International Technical Meeting on Air Pollution Modeling and Its Application, held 28 September - 2 October 1998, in Varna, Bulgaria.

  19. A Large Scale, High Resolution Agent-Based Insurgency Model

    Science.gov (United States)

    2013-09-30

    HSCB models can be employed for simulating mission scenarios, determining optimal strategies for disrupting terrorist networks, or training and ...

  20. Misspecified poisson regression models for large-scale registry data

    DEFF Research Database (Denmark)

    Grøn, Randi; Gerds, Thomas A.; Andersen, Per K.

    2016-01-01

    working models that are then likely misspecified. To support and improve conclusions drawn from such models, we discuss methods for sensitivity analysis, for estimation of average exposure effects using aggregated data, and a semi-parametric bootstrap method to obtain robust standard errors. The methods...
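    Robust (sandwich) standard errors are the standard tool for supporting conclusions from a possibly misspecified Poisson working model. The sketch below fits a log-linear Poisson model by IRLS and compares model-based with sandwich standard errors; it is a generic NumPy illustration on synthetic data, not the paper's estimator:

    ```python
    import numpy as np

    # Poisson regression by IRLS, with model-based and robust (sandwich)
    # standard errors: bread = inverse Fisher information, meat = empirical
    # variance of the score.

    def poisson_fit_robust(X, y, n_iter=50):
        beta = np.zeros(X.shape[1])
        for _ in range(n_iter):
            mu = np.exp(X @ beta)
            W = mu                           # Poisson working weight = mean
            z = X @ beta + (y - mu) / mu     # working response
            XtW = X.T * W                    # X^T diag(W)
            beta = np.linalg.solve(XtW @ X, XtW @ z)
        mu = np.exp(X @ beta)
        bread = np.linalg.inv((X.T * mu) @ X)
        meat = (X.T * (y - mu) ** 2) @ X
        se_model = np.sqrt(np.diag(bread))
        se_robust = np.sqrt(np.diag(bread @ meat @ bread))
        return beta, se_model, se_robust

    rng = np.random.default_rng(42)
    x = rng.normal(size=500)
    X = np.column_stack([np.ones(500), x])
    y = rng.poisson(np.exp(0.5 + 0.3 * x))   # correctly specified here
    beta, se_m, se_r = poisson_fit_robust(X, y)
    ```

    Under misspecification (e.g. overdispersed registry counts) the two standard errors diverge, and the sandwich version remains valid for the working-model coefficients.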

  1. Modelling expected train passenger delays on large scale railway networks

    DEFF Research Database (Denmark)

    Landex, Alex; Nielsen, Otto Anker

    2006-01-01

    Forecasts of regularity for railway systems have traditionally – if at all – been computed for trains, not for passengers. Relatively recently it has become possible to model and evaluate the actual passenger delays by a passenger regularity model for the operation already carried out. First...

  2. METHODOLOGY & CALCULATIONS FOR THE ASSIGNMENT OF WASTE GROUPS FOR THE LARGE UNDERGROUND WASTE STORAGE TANKS AT THE HANFORD SITE

    Energy Technology Data Exchange (ETDEWEB)

    BARKER, S.A.

    2006-07-27

    Waste stored within tank farm double-shell tanks (DST) and single-shell tanks (SST) generates flammable gas (principally hydrogen) to varying degrees depending on the type, amount, geometry, and condition of the waste. The waste generates hydrogen through the radiolysis of water and organic compounds, thermolytic decomposition of organic compounds, and corrosion of a tank's carbon steel walls. Radiolysis and thermolytic decomposition also generate ammonia. Nonflammable gases, which act as diluents (such as nitrous oxide), are also produced. Additional flammable gases (e.g., methane) are generated by chemical reactions between various degradation products of organic chemicals present in the tanks. Volatile and semi-volatile organic chemicals in tanks also produce organic vapors. The generated gases in tank waste are either released continuously to the tank headspace or are retained in the waste matrix. Retained gas may be released in a spontaneous or induced gas release event (GRE) that can significantly increase the flammable gas concentration in the tank headspace, as described in RPP-7771. The document categorizes each of the large waste storage tanks into one of several categories based on each tank's waste characteristics. These waste group assignments reflect a tank's propensity to retain a significant volume of flammable gases and the potential of the waste to release retained gas by a buoyant displacement event. Revision 5 is the annual update of the methodology and calculations of the flammable gas waste groups for DSTs and SSTs.

  3. EVALUATION OF LECTURE AS A LARGE GROUP TEACHING METHOD IN UNDERGRADUATE MEDICAL CURRICULUM: STUDENT’S PERSPECTIVE

    Directory of Open Access Journals (Sweden)

    Gitanjali Kailas

    2014-09-01

    To evaluate the lecture as a large-group teaching method from the students' perspective. METHODS: The present study was undertaken in the Department of Microbiology, KIMS, Amalapuram. A total of 60 second-year MBBS students were taken as study subjects. A questionnaire was designed, and students were asked to fill it in and give suggestions as part of feedback about the lectures conducted in the Department of Microbiology. RESULTS: A total of 83.4% of students found the chalkboard method combined with PowerPoint presentation to be the best way of delivering a lecture. Nearly 56.6% of students opined that the ideal duration for a class should be 40-50 minutes. Long lecture duration was a major disadvantage according to 66.6% of students. 90% of students felt that some part of the lecture period should be reserved for an interactive session. The majority of students also preferred classes based on e-learning. 70% of students felt that tutorials or seminars are needed along with theory classes for better understanding of the subject. CONCLUSION: Lectures should be efficiently delivered by the instructor, giving a conceptual understanding of the subject instead of merely reading the content. Lectures should be supplemented with tutorials and group discussion to improve learning. Class duration should be restricted to 40-50 minutes, as a traditionally long class makes it difficult to hold the students' attention for the entire period. Brief interaction with students will promote active learning. E-learning should be encouraged.

  4. Prediction of Group Delay Distribution Around Receiving Point Using Modified IRI Model and IGRF Model

    Institute of Scientific and Technical Information of China (English)

    YAN Zhaowen; WANG Gang; LI Weimin; YU Dapeng; Toyobur RAHMAN

    2011-01-01

    The international reference ionosphere (IRI) model is a generally accepted standard ionosphere model. It describes the ionosphere environment in the quiet state and predicts the ionosphere parameters within a certain precision. In this paper, we have made a breakthrough in the application of the IRI model by modifying the model for regions of China. The main objectives of this modification are to construct the ionosphere parameters foF2 and M(3000)F2 using the Chinese reference ionosphere (CRI) coefficients, appropriately increase the hmE and hmF2 heights, reduce the thickness of the F layer, validate the parameters against measured values, and solve the electron concentration distribution with a quasi-parabolic segment (QPS). In this paper, a 3D ray tracing algorithm is constructed based on the modified IRI model and the international geomagnetic reference field (IGRF) model. In short-wave propagation, it can be used to predict the electromagnetic parameters at the receiving point, such as the receiving area, maximum usable frequency (MUF), and the distribution of the group delay, which can help to determine the suitability of the communication. As an example, we estimate the group delay distributions around Changchun in the detection from Qingdao to Changchun using the modified IRI model and the IGRF model, providing technical support for short-wave communication between the two cities.
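The link-planning quantities named in the abstract (MUF and group delay) can be illustrated with textbook single-hop, flat-mirror approximations. This is only a hedged sketch: the function names, the 1000 km Qingdao–Changchun range, the 300 km reflection height, and the 8 MHz foF2 are illustrative assumptions, and the paper's 3D ray tracer through the modified IRI/IGRF models is far more detailed than the secant law used here.

```python
import math

def secant_law_muf(foF2_mhz, ground_range_km, h_reflect_km):
    """MUF ~ foF2 * sec(incidence angle) for a single-hop mirror reflection."""
    theta = math.atan((ground_range_km / 2) / h_reflect_km)
    return foF2_mhz / math.cos(theta)

def group_delay_ms(ground_range_km, h_reflect_km):
    """Group delay of the up-and-down mirror path at the speed of light."""
    c_km_per_ms = 299792.458 / 1000.0  # km per millisecond
    path = 2 * math.hypot(ground_range_km / 2, h_reflect_km)
    return path / c_km_per_ms

# Qingdao-Changchun is roughly 1000 km; assume reflection at 300 km altitude.
print(round(secant_law_muf(8.0, 1000.0, 300.0), 1))   # → 15.5 (MHz)
print(round(group_delay_ms(1000.0, 300.0), 2))        # → 3.89 (ms)
```

Real predictions must account for ionospheric refraction and the curved ray paths that the 3D ray tracing algorithm computes, so these mirror-model numbers are only order-of-magnitude checks.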

  5. Traffic assignment models in large-scale applications

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær

    on individual behaviour in the model specification, (ii) proposing a method to use disaggregate Revealed Preference (RP) data to estimate utility functions and provide evidence on the value of congestion and the value of reliability, (iii) providing a method to account for individual mis...... of observations of actual behaviour to obtain estimates of the (monetary) value of different travel time components, thereby increasing the behavioural realism of large-scale models. The generation of choice sets is a vital component in route choice models. This is, however, not a straightforward task in real...... non-universal choice sets and (ii) flow distribution according to random utility maximisation theory. One model allows distinction between used and unused routes based on the distribution of the random error terms, while the other model allows this distinction by posing restrictions on the costs...

  6. A Habitat Model for Fish Communities in Large Streams and Small Rivers

    Directory of Open Access Journals (Sweden)

    Mark B. Bain

    2012-01-01

    Full Text Available Habitat has become one of the fundamentals for managing the environment. We report on a synthesis of 30 habitat models for fish species that inhabit large streams and small rivers. Our protocol for integrating many species-level habitat models was to form a robust, general model that reflected the most common characteristics of the reviewed models. Eleven habitat variables were most commonly used in the habitat models, and they were grouped by water quality, reproduction, and food and cover. The developed relations define acceptable and optimal conditions for each habitat variable. The water quality variables were mid-summer water temperature, dissolved oxygen, pH, and turbidity. Other structural habitat variables were riffle and pool velocity, riffle depth, and the percent of the stream area with cover and pools. We conclude that it is feasible to consolidate species-level habitat models for fish that inhabit the same waterway type. Given the similarity among species models, our specification set will closely approximate the needs and optimal conditions of many species. These eleven variables can serve as design specifications for rehabilitating streams and small rivers in human-dominated settings.
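The "acceptable and optimal conditions" structure described above can be encoded as a simple suitability curve per variable, combined across variables. The sketch below is a hypothetical illustration of that structure: the trapezoidal curve shape, the geometric-mean combination, and all numeric ranges are assumptions for demonstration, not the paper's calibrated specifications.

```python
def suitability(x, acc_lo, opt_lo, opt_hi, acc_hi):
    """Score 0 outside the acceptable range, 1 inside the optimal range,
    and ramp linearly in between (a trapezoidal suitability curve)."""
    if x <= acc_lo or x >= acc_hi:
        return 0.0
    if opt_lo <= x <= opt_hi:
        return 1.0
    if x < opt_lo:
        return (x - acc_lo) / (opt_lo - acc_lo)
    return (acc_hi - x) / (acc_hi - opt_hi)

def habitat_score(values, curves):
    """Geometric mean of per-variable suitabilities (0 if any variable fails)."""
    scores = [suitability(values[k], *curves[k]) for k in curves]
    prod = 1.0
    for s in scores:
        prod *= s
    return prod ** (1.0 / len(scores))

# Illustrative ranges: (acceptable_low, optimal_low, optimal_high, acceptable_high)
curves = {
    "temp_C": (10, 18, 24, 30),    # mid-summer water temperature
    "do_mgL": (4, 6, 12, 15),      # dissolved oxygen
    "pH": (6.0, 6.5, 8.5, 9.0),
}
values = {"temp_C": 27, "do_mgL": 8, "pH": 7.2}
print(round(habitat_score(values, curves), 3))  # → 0.794
```

Here temperature falls on the upper ramp (score 0.5) while the other two variables are optimal, so the combined score is 0.5^(1/3) ≈ 0.794.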

  7. Renormalization group analysis of reduced magnetohydrodynamics with application to subgrid modeling

    Science.gov (United States)

    Longcope, D. W.; Sudan, R. N.

    1991-01-01

    The technique for obtaining a subgrid model for Navier-Stokes turbulence, based on renormalization group analysis (RNG), is extended to the reduced magnetohydrodynamic (RMHD) equations. It is shown that a RNG treatment of the Alfvén turbulence supported by the RMHD equations leads to effective values of the viscosity and resistivity at large scales, k → 0, dependent on the amplitude of turbulence. The effective viscosity and resistivity become independent of the molecular quantities when the RNG analysis is augmented by the Kolmogorov argument for energy cascade. A self-contained system of equations is derived for the range of scales, k = 0 to K, where K = π/Δ is the maximum wave number for a grid size Δ. Differential operators, whose coefficients depend upon the amplitudes of the large-scale quantities, represent in this system the resistive and viscous dissipation.

  8. Large deviations for Gaussian queues modelling communication networks

    CERN Document Server

    Mandjes, Michel

    2007-01-01

    Michel Mandjes, Centre for Mathematics and Computer Science (CWI) Amsterdam, The Netherlands, and Professor, Faculty of Engineering, University of Twente. At CWI Mandjes is a senior researcher and Director of the Advanced Communications Network group. He has published over 60 papers on queuing theory, networks, scheduling, and pricing of networks.

  9. Multilevel method for modeling large-scale networks.

    Energy Technology Data Exchange (ETDEWEB)

    Safro, I. M. (Mathematics and Computer Science)

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of real networks, generating artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to understanding network modeling, investigating its structure and properties, and generating artificial networks. Most modern network generation methods are based either on various random graph models (reinforced by a set of properties such as a power law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of the network hierarchy but with the same finest elements of the network. However, in many cases the methods that include randomization and replication elements on the finest relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability of algorithms on real (artificial) data that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, the randomization and satisfying of some attribute at the same time can abolish those topological attributes that have been undefined or hidden from...
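Kronecker product modeling, one of the replication-based generators the abstract mentions, can be sketched in a few lines: a small initiator matrix of edge probabilities is Kronecker-powered up to the target size, and edges are then sampled independently (the randomization step). The 2×2 initiator values below are illustrative assumptions, not fitted parameters.

```python
import numpy as np

def kronecker_probability_matrix(initiator, k):
    """Kronecker-power the initiator k times to get an edge-probability matrix."""
    P = initiator
    for _ in range(k - 1):
        P = np.kron(P, initiator)
    return P

def sample_graph(P, rng):
    """Sample an adjacency matrix edge by edge (the randomization step)."""
    return (rng.random(P.shape) < P).astype(int)

initiator = np.array([[0.9, 0.5],
                      [0.5, 0.1]])
P = kronecker_probability_matrix(initiator, 3)   # 8x8 edge probabilities
A = sample_graph(P, np.random.default_rng(0))
print(P.shape)   # → (8, 8)
```

The self-similar structure of P is exactly the kind of fine-scale replication that, as the abstract argues, can conflict with the coarse-scale geometry of a real network.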

  10. A Categorical Model for the Virtual Braid Group

    CERN Document Server

    Kauffman, Louis H

    2011-01-01

    This paper gives a new interpretation of the virtual braid group in terms of a tensor category with generating diagrams that are abstract strings or connections between pairs of strands in an identity braid, and elements corresponding to virtual crossings that generate the symmetric group. The point of this categorical formulation of the virtual braid groups is that we see how these groups form a natural extension of the symmetric groups by formal elements that satisfy the algebraic Yang-Baxter equation. The category we describe is a natural structure for an algebraist interested in exploring formal properties of the algebraic Yang-Baxter equation, and it is directly related to more topological points of view about virtual links and virtual braids. We discuss a generalization of the virtual braiding formalism to braided tensor categories that can be used for obtaining invariants of knots and links via Hopf algebras. The invariants we obtain are invariants of rotational virtual knots and links, where the term r...

  11. A model for recovery kinetics of aluminum after large strain

    DEFF Research Database (Denmark)

    Yu, Tianbo; Hansen, Niels

    2012-01-01

    A model is suggested to analyze recovery kinetics of heavily deformed aluminum. The model is based on the hardness of isothermal annealed samples before recrystallization takes place, and it can be extrapolated to longer annealing times to factor out the recrystallization component of the hardness...... for conditions where recovery and recrystallization overlap. The model is applied to the isothermal recovery at temperatures between 140 and 220°C of commercial purity aluminum deformed to true strain 5.5. EBSD measurements have been carried out to detect the onset of discontinuous recrystallization. Furthermore...

  12. Description of the East Brazil Large Marine Ecosystem using a trophic model

    Directory of Open Access Journals (Sweden)

    Kátia M.F. Freire

    2008-09-01

    Full Text Available The objective of this study was to describe the marine ecosystem off northeastern Brazil. A trophic model was constructed for the 1970s using Ecopath with Ecosim. The impact of most of the forty-one functional groups was modest, probably due to the highly reticulated diet matrix. However, seagrass and macroalgae exerted a strong positive impact on manatee and herbivorous reef fishes, respectively. A high negative impact of omnivorous reef fishes on spiny lobsters and of sharks on swordfish was observed. Spiny lobsters and swordfish had the largest biomass changes for the simulation period (1978-2000); tunas, other large pelagics and sharks showed intermediate rates of biomass decline; and a slight increase in biomass was observed for toothed cetaceans, large carnivorous reef fishes, and dolphinfish. Recycling was an important feature of this ecosystem, with low phytoplankton-originated primary production. The mean transfer efficiency between trophic levels was 11.4%. The gross efficiency of the fisheries was very low (0.00002), probably due to the low exploitation rate of most of the resources in the 1970s. Basic local information was missing for many groups. When information gaps are filled, this model may serve more credibly for the exploration of fishing policies for this area within an ecosystem approach.
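The two efficiency figures quoted above are simple ratios, and a back-of-envelope check makes their meaning concrete. In this hedged sketch the catch and primary-production numbers are purely illustrative (chosen to reproduce the reported 0.00002 ratio); only the 11.4% transfer efficiency and the 0.00002 gross efficiency come from the abstract.

```python
def gross_efficiency(total_catch, primary_production):
    """Gross efficiency of a fishery: total catch / total primary production."""
    return total_catch / primary_production

# Illustrative: a catch of 0.02 t/km^2/yr against 1000 t/km^2/yr of primary
# production reproduces the reported gross efficiency of 0.00002.
print(gross_efficiency(0.02, 1000.0))  # → 2e-05

# With a mean transfer efficiency of 11.4%, production available at trophic
# level n relative to level 1 falls off geometrically as 0.114^(n-1).
te = 0.114
for n in range(1, 5):
    print(n, round(te ** (n - 1), 6))
```

The geometric fall-off shows why top predators such as sharks and swordfish represent a tiny fraction of the system's production, and why recycling matters in a system with low phytoplankton-originated primary production.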

  13. Calculating the renormalisation group equations of a SUSY model with Susyno

    CERN Document Server

    Fonseca, Renato M

    2011-01-01

    Susyno is a Mathematica package dedicated to the computation of the 2-loop renormalisation group equations of a supersymmetric model based on any gauge group (the only exception being multiple U(1) groups) and for any field content.

  14. Working Group 1: Software System Design and Implementation for Environmental Modeling

    Science.gov (United States)

    ISCMEM Working Group One presentation, with the purpose of fostering the exchange of information about environmental modeling tools, modeling frameworks, and environmental monitoring databases.

  15. Hierarchical generalized linear models for multiple groups of rare and common variants: jointly estimating group and individual-variant effects.

    Directory of Open Access Journals (Sweden)

    Nengjun Yi

    2011-12-01

    Full Text Available Complex diseases and traits are likely influenced by many common and rare genetic variants and environmental factors. Detecting disease susceptibility variants is a challenging task, especially when their frequencies are low and/or their effects are small or moderate. We propose here a comprehensive hierarchical generalized linear model framework for simultaneously analyzing multiple groups of rare and common variants and relevant covariates. The proposed hierarchical generalized linear models introduce a group effect and a genetic score (i.e., a linear combination of main-effect predictors for genetic variants) for each group of variants, and jointly they estimate the group effects and the weights of the genetic scores. This framework includes various previous methods as special cases, and it can effectively deal with both risk and protective variants in a group and can simultaneously estimate the cumulative contribution of multiple variants and their relative importance. Our computational strategy is based on extending the standard procedure for fitting generalized linear models in the statistical software R to the proposed hierarchical models, leading to the development of stable and flexible tools. The methods are illustrated with sequence data in gene ANGPTL4 from the Dallas Heart Study. The performance of the proposed procedures is further assessed via simulation studies. The methods are implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/).
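The model structure described above — a genetic score per variant group feeding a group-level effect — can be sketched on simulated data. This is a hedged illustration of the model's shape only: BhGLM estimates the score weights and group effects jointly with shrinkage priors in R, whereas the sketch below fixes the weights (burden-style equal weights, an assumption) and fits only the group effects by plain IRLS for logistic regression.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
X_common = rng.binomial(2, 0.3, size=(n, 4))    # common-variant genotypes
X_rare = rng.binomial(2, 0.01, size=(n, 20))    # rare-variant genotypes

# Genetic score per group: s_g = X_g @ w_g, with fixed illustrative weights.
w_common = np.ones(4)
w_rare = np.ones(20)
S = np.column_stack([X_common @ w_common, X_rare @ w_rare])

# Simulate a binary phenotype with group-level effects beta = (0.4, 0.8).
true_beta = np.array([0.4, 0.8])
p = 1 / (1 + np.exp(-(S - S.mean(0)) @ true_beta))
y = rng.binomial(1, p)

# IRLS (Newton's method) for logistic regression on intercept + group scores.
Z = np.column_stack([np.ones(n), S])
beta = np.zeros(3)
for _ in range(25):
    mu = 1 / (1 + np.exp(-Z @ beta))
    W = mu * (1 - mu)
    beta += np.linalg.solve((Z * W[:, None]).T @ Z, Z.T @ (y - mu))

print(np.round(beta[1:], 2))   # estimated group-level effects
```

Because both risk weights in each score are fixed to +1 here, this reduces to a burden-style test; the hierarchical model's advantage is precisely that it learns signed weights, letting risk and protective variants coexist within a group.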

  16. Hierarchical Generalized Linear Models for Multiple Groups of Rare and Common Variants: Jointly Estimating Group and Individual-Variant Effects

    Science.gov (United States)

    Yi, Nengjun; Liu, Nianjun; Zhi, Degui; Li, Jun

    2011-01-01

    Complex diseases and traits are likely influenced by many common and rare genetic variants and environmental factors. Detecting disease susceptibility variants is a challenging task, especially when their frequencies are low and/or their effects are small or moderate. We propose here a comprehensive hierarchical generalized linear model framework for simultaneously analyzing multiple groups of rare and common variants and relevant covariates. The proposed hierarchical generalized linear models introduce a group effect and a genetic score (i.e., a linear combination of main-effect predictors for genetic variants) for each group of variants, and jointly they estimate the group effects and the weights of the genetic scores. This framework includes various previous methods as special cases, and it can effectively deal with both risk and protective variants in a group and can simultaneously estimate the cumulative contribution of multiple variants and their relative importance. Our computational strategy is based on extending the standard procedure for fitting generalized linear models in the statistical software R to the proposed hierarchical models, leading to the development of stable and flexible tools. The methods are illustrated with sequence data in gene ANGPTL4 from the Dallas Heart Study. The performance of the proposed procedures is further assessed via simulation studies. The methods are implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). PMID:22144906

  17. A Large Deformation Model for the Elastic Moduli of Two-dimensional Cellular Materials

    Institute of Scientific and Technical Information of China (English)

    HU Guoming; WAN Hui; ZHANG Youlin; BAO Wujun

    2006-01-01

    We developed a large deformation model for predicting the elastic moduli of two-dimensional cellular materials. This large deformation model was based on the large deflection of the inclined members of the cells of cellular materials. The deflection of the inclined member, the strain of the representative structure and the elastic moduli of two-dimensional cellular materials were expressed using incomplete elliptic integrals. The experimental results show that these elastic moduli are no longer constant at large deformation, but vary significantly with the strain. A comparison was made between this large deformation model and the small deformation model proposed by Gibson and Ashby.
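The small-deformation benchmark attributed to Gibson and Ashby treats each inclined cell wall as a linearly deflecting beam, giving an effective modulus that scales with the cube of the wall slenderness t/l. The sketch below uses the standard Gibson-Ashby expression for in-plane loading as a point of comparison; the paper's large-deformation model, which replaces the linear deflection with incomplete elliptic integrals, is not reproduced here, and the specific geometry values are illustrative.

```python
import math

def gibson_ashby_modulus(Es, t, l, h, theta_deg):
    """Small-strain effective Young's modulus of a 2D honeycomb
    (cell-wall modulus Es, wall thickness t, inclined wall length l,
    vertical wall length h, wall inclination theta)."""
    th = math.radians(theta_deg)
    return Es * (t / l) ** 3 * math.cos(th) / ((h / l + math.sin(th)) * math.sin(th) ** 2)

# Regular hexagonal cells (h = l, theta = 30 deg): E*/Es ≈ 2.3 (t/l)^3.
ratio = gibson_ashby_modulus(1.0, 0.1, 1.0, 1.0, 30.0) / 0.1 ** 3
print(round(ratio, 2))  # → 2.31
```

Since this expression is strain-independent, it cannot capture the experimental observation above that the moduli vary significantly with strain at large deformation, which is what motivates the elliptic-integral formulation.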

  18. A Binaural Grouping Model for Predicting Speech Intelligibility in Multitalker Environments

    Directory of Open Access Journals (Sweden)

    Jing Mi

    2016-09-01

    Full Text Available Spatially separating speech maskers from target speech often leads to a large intelligibility improvement. Modeling this phenomenon has long been of interest to binaural-hearing researchers for uncovering brain mechanisms and for improving signal-processing algorithms in hearing-assistive devices. Much of the previous binaural modeling work focused on the unmasking enabled by binaural cues at the periphery, and little quantitative modeling has been directed toward the grouping or source-separation benefits of binaural processing. In this article, we propose a binaural model that focuses on grouping, specifically on the selection of time-frequency units that are dominated by signals from the direction of the target. The proposed model uses Equalization-Cancellation (EC) processing with a binary decision rule to estimate a time-frequency binary mask. EC processing is carried out to cancel the target signal and the energy change between the EC input and output is used as a feature that reflects target dominance in each time-frequency unit. The processing in the proposed model requires little computational resources and is straightforward to implement. In combination with the Coherence-based Speech Intelligibility Index, the model is applied to predict the speech intelligibility data measured by Marrone et al. The predicted speech reception threshold matches the pattern of the measured data well, even though the predicted intelligibility improvements relative to the colocated condition are larger than some of the measured data, which may reflect the lack of internal noise in this initial version of the model.

  19. A Binaural Grouping Model for Predicting Speech Intelligibility in Multitalker Environments.

    Science.gov (United States)

    Mi, Jing; Colburn, H Steven

    2016-10-03

    Spatially separating speech maskers from target speech often leads to a large intelligibility improvement. Modeling this phenomenon has long been of interest to binaural-hearing researchers for uncovering brain mechanisms and for improving signal-processing algorithms in hearing-assistive devices. Much of the previous binaural modeling work focused on the unmasking enabled by binaural cues at the periphery, and little quantitative modeling has been directed toward the grouping or source-separation benefits of binaural processing. In this article, we propose a binaural model that focuses on grouping, specifically on the selection of time-frequency units that are dominated by signals from the direction of the target. The proposed model uses Equalization-Cancellation (EC) processing with a binary decision rule to estimate a time-frequency binary mask. EC processing is carried out to cancel the target signal and the energy change between the EC input and output is used as a feature that reflects target dominance in each time-frequency unit. The processing in the proposed model requires little computational resources and is straightforward to implement. In combination with the Coherence-based Speech Intelligibility Index, the model is applied to predict the speech intelligibility data measured by Marrone et al. The predicted speech reception threshold matches the pattern of the measured data well, even though the predicted intelligibility improvements relative to the colocated condition are larger than some of the measured data, which may reflect the lack of internal noise in this initial version of the model.
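The mask-estimation step described in records 18 and 19 can be sketched schematically: cancel the target direction in each time-frequency unit and mark units where cancellation removes most of the energy as target-dominated. This is a hedged toy version under strong assumptions — the signals, the bare STFT front end, the zero-ITD target (so cancellation is a plain left-minus-right subtraction), and the 0.9 threshold are all illustrative; the actual model operates on an auditory filterbank with full ITD/ILD equalization.

```python
import numpy as np

def stft_frames(x, win=256, hop=128):
    """Simple framed FFT spectrogram (Hann window, no padding)."""
    n = 1 + (len(x) - win) // hop
    frames = np.stack([x[i * hop:i * hop + win] for i in range(n)])
    return np.fft.rfft(frames * np.hanning(win), axis=1)

rng = np.random.default_rng(0)
fs = 8000
t = np.arange(fs) / fs
target = np.sin(2 * np.pi * 440 * t)        # target: identical in both ears
left = target + rng.standard_normal(fs) * 0.5   # maskers: independent noise
right = target + rng.standard_normal(fs) * 0.5  # in each ear

L, R = stft_frames(left), stft_frames(right)
residual = L - R                # EC cancellation for a zero-ITD target
energy_drop = 1 - np.abs(residual) ** 2 / (np.abs(L) ** 2 + np.abs(R) ** 2 + 1e-12)
mask = (energy_drop > 0.9).astype(int)   # 1 = target-dominated T-F unit
print(mask.shape)
```

Because the target is identical in the two ears, the subtraction removes it exactly, so the residual energy in target-dominated units (here, near the 440 Hz bin) is small and the binary decision rule flags them; the mask would then gate the units passed to the intelligibility index.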

  20. Exploring model based engineering for large telescopes: getting started with descriptive models

    Science.gov (United States)

    Karban, R.; Zamparelli, M.; Bauvir, B.; Koehler, B.; Noethe, L.; Balestra, A.

    2008-07-01

    Large telescopes pose a continuous challenge to systems engineering due to their complexity in terms of requirements, operational modes, long duty lifetime, interfaces and number of components. A multitude of decisions must be taken throughout the life cycle of a new system, and a prime means of coping with complexity and uncertainty is using models as one decision aid. The potential of descriptive models based on the OMG Systems Modeling Language (OMG SysML™) is examined in different areas: building a comprehensive model serves as the basis for subsequent activities of soliciting and review for requirements, analysis and design alike. Furthermore, a model is an effective communication instrument against misinterpretation pitfalls which are typical of cross disciplinary activities when using natural language only or free-format diagrams. Modeling the essential characteristics of the system, like interfaces, system structure and its behavior, are important system level issues which are addressed. Also shown is how to use a model as an analysis tool to describe the relationships among disturbances, opto-mechanical effects and control decisions and to refine the control use cases. Considerations on the scalability of the model structure and organization, its impact on the development process, the relation to document-centric structures, style and usage guidelines and the required tool chain are presented.